* [dpdk-dev] [PATCH v2 10/11] cryptodev: add mempool pointer in queue pair setup
From: Pablo de Lara @ 2017-06-30 17:09 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal,
hemant.agrawal, fiona.trahe, john.griffin, deepak.k.jain
Cc: dev, Pablo de Lara
The session mempool pointer is needed in each queue pair
when session-less operations are being handled.
Therefore, the API is changed so that
rte_cryptodev_queue_pair_setup() accepts this parameter,
as the session mempool is now created outside the
device configuration function, similar to the way
ethdev takes a mempool per Rx queue.
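For illustration, a minimal sketch of the resulting call sequence
(the device id, pool name and sizes below are hypothetical, and
error handling is omitted):

	uint8_t dev_id = 0;			/* hypothetical device */
	struct rte_cryptodev_config conf = {
		.nb_queue_pairs = 1,
		.socket_id = SOCKET_ID_ANY,
	};
	struct rte_cryptodev_qp_conf qp_conf = { .nb_descriptors = 2048 };

	/* The application creates the session mempool itself... */
	struct rte_mempool *sess_mp = rte_mempool_create("sess_mp", 2048,
			rte_cryptodev_get_private_session_size(dev_id),
			64, 0, NULL, NULL, NULL, NULL, SOCKET_ID_ANY, 0);

	/* ...configures the device without it... */
	rte_cryptodev_configure(dev_id, &conf);

	/* ...and passes it explicitly to each queue pair. */
	rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf,
			SOCKET_ID_ANY, sess_mp);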
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/main.c | 5 +--
doc/guides/rel_notes/release_17_08.rst | 2 +-
drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c | 4 +-
drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c | 4 +-
drivers/crypto/armv8/rte_armv8_pmd_ops.c | 4 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 3 +-
drivers/crypto/kasumi/rte_kasumi_pmd_ops.c | 4 +-
drivers/crypto/null/null_crypto_pmd_ops.c | 4 +-
drivers/crypto/openssl/rte_openssl_pmd_ops.c | 4 +-
drivers/crypto/qat/qat_crypto.h | 3 +-
drivers/crypto/qat/qat_qp.c | 2 +-
drivers/crypto/scheduler/scheduler_pmd_ops.c | 13 ++++--
drivers/crypto/snow3g/rte_snow3g_pmd_ops.c | 4 +-
drivers/crypto/zuc/rte_zuc_pmd_ops.c | 4 +-
lib/librte_cryptodev/rte_cryptodev.c | 11 +++--
lib/librte_cryptodev/rte_cryptodev.h | 9 ++--
lib/librte_cryptodev/rte_cryptodev_pmd.h | 3 +-
test/test/test_cryptodev.c | 58 +++++++++++++++-----------
test/test/test_cryptodev_perf.c | 5 ++-
19 files changed, 81 insertions(+), 65 deletions(-)
diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c
index 9f5c6be..0cf4618 100644
--- a/app/test-crypto-perf/main.c
+++ b/app/test-crypto-perf/main.c
@@ -108,8 +108,7 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs)
};
- ret = rte_cryptodev_configure(enabled_cdevs[cdev_id], &conf,
- cperf_mempool);
+ ret = rte_cryptodev_configure(enabled_cdevs[cdev_id], &conf);
if (ret < 0) {
printf("Failed to configure cryptodev %u",
enabled_cdevs[cdev_id]);
@@ -117,7 +116,7 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs)
}
ret = rte_cryptodev_queue_pair_setup(enabled_cdevs[cdev_id], 0,
- &qp_conf, SOCKET_ID_ANY);
+ &qp_conf, SOCKET_ID_ANY, cperf_mempool);
if (ret < 0) {
printf("Failed to setup queue pair %u on "
"cryptodev %u", 0, cdev_id);
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index ad1f269..923cc3e 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -167,7 +167,7 @@ API Changes
* Mempool pointer ``mp`` has been removed from ``rte_cryptodev_sym_session`` structure.
* Replaced the ``private`` marker in ``rte_cryptodev_sym_session`` with
``sess_private_data``, an array of pointers to per-driver session private data
-
+ * Added new parameter ``session_pool`` to ``rte_cryptodev_queue_pair_setup()``.
ABI Changes
-----------
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
index 0b063fd..d47a041 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
@@ -244,7 +244,7 @@ aesni_gcm_pmd_qp_create_processed_pkts_ring(struct aesni_gcm_qp *qp,
static int
aesni_gcm_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
const struct rte_cryptodev_qp_conf *qp_conf,
- int socket_id)
+ int socket_id, struct rte_mempool *session_pool)
{
struct aesni_gcm_qp *qp = NULL;
@@ -269,7 +269,7 @@ aesni_gcm_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
if (qp->processed_pkts == NULL)
goto qp_setup_cleanup;
- qp->sess_mp = dev->data->session_pool;
+ qp->sess_mp = session_pool;
memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
index fda8296..1216e56 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -397,7 +397,7 @@ aesni_mb_pmd_qp_create_processed_ops_ring(struct aesni_mb_qp *qp,
static int
aesni_mb_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
const struct rte_cryptodev_qp_conf *qp_conf,
- int socket_id)
+ int socket_id, struct rte_mempool *session_pool)
{
struct aesni_mb_qp *qp = NULL;
struct aesni_mb_private *internals = dev->data->dev_private;
@@ -426,7 +426,7 @@ aesni_mb_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
if (qp->ingress_queue == NULL)
goto qp_setup_cleanup;
- qp->sess_mp = dev->data->session_pool;
+ qp->sess_mp = session_pool;
memset(&qp->stats, 0, sizeof(qp->stats));
diff --git a/drivers/crypto/armv8/rte_armv8_pmd_ops.c b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
index 3db420b..735116f 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd_ops.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
@@ -247,7 +247,7 @@ armv8_crypto_pmd_qp_create_processed_ops_ring(struct armv8_crypto_qp *qp,
static int
armv8_crypto_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
const struct rte_cryptodev_qp_conf *qp_conf,
- int socket_id)
+ int socket_id, struct rte_mempool *session_pool)
{
struct armv8_crypto_qp *qp = NULL;
@@ -272,7 +272,7 @@ armv8_crypto_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
if (qp->processed_ops == NULL)
goto qp_setup_cleanup;
- qp->sess_mp = dev->data->session_pool;
+ qp->sess_mp = session_pool;
memset(&qp->stats, 0, sizeof(qp->stats));
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index c71b0c7..0382dc0 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -660,7 +660,8 @@ dpaa2_sec_queue_pair_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
static int
dpaa2_sec_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
__rte_unused const struct rte_cryptodev_qp_conf *qp_conf,
- __rte_unused int socket_id)
+ __rte_unused int socket_id,
+ __rte_unused struct rte_mempool *session_pool)
{
struct dpaa2_sec_dev_private *priv = dev->data->dev_private;
struct dpaa2_sec_qp *qp;
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
index d590b91..2805e6f 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
@@ -223,7 +223,7 @@ kasumi_pmd_qp_create_processed_ops_ring(struct kasumi_qp *qp,
static int
kasumi_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
const struct rte_cryptodev_qp_conf *qp_conf,
- int socket_id)
+ int socket_id, struct rte_mempool *session_pool)
{
struct kasumi_qp *qp = NULL;
@@ -248,7 +248,7 @@ kasumi_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
if (qp->processed_ops == NULL)
goto qp_setup_cleanup;
- qp->sess_mp = dev->data->session_pool;
+ qp->sess_mp = session_pool;
memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
diff --git a/drivers/crypto/null/null_crypto_pmd_ops.c b/drivers/crypto/null/null_crypto_pmd_ops.c
index 04147fe..7d0ac2d 100644
--- a/drivers/crypto/null/null_crypto_pmd_ops.c
+++ b/drivers/crypto/null/null_crypto_pmd_ops.c
@@ -215,7 +215,7 @@ null_crypto_pmd_qp_create_processed_pkts_ring(struct null_crypto_qp *qp,
static int
null_crypto_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
const struct rte_cryptodev_qp_conf *qp_conf,
- int socket_id)
+ int socket_id, struct rte_mempool *session_pool)
{
struct null_crypto_private *internals = dev->data->dev_private;
struct null_crypto_qp *qp;
@@ -258,7 +258,7 @@ null_crypto_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
goto qp_setup_cleanup;
}
- qp->sess_mp = dev->data->session_pool;
+ qp->sess_mp = session_pool;
memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index 005855b..d8eb335 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -602,7 +602,7 @@ openssl_pmd_qp_create_processed_ops_ring(struct openssl_qp *qp,
static int
openssl_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
const struct rte_cryptodev_qp_conf *qp_conf,
- int socket_id)
+ int socket_id, struct rte_mempool *session_pool)
{
struct openssl_qp *qp = NULL;
@@ -627,7 +627,7 @@ openssl_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
if (qp->processed_ops == NULL)
goto qp_setup_cleanup;
- qp->sess_mp = dev->data->session_pool;
+ qp->sess_mp = session_pool;
memset(&qp->stats, 0, sizeof(qp->stats));
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
index d5466c2..65af962 100644
--- a/drivers/crypto/qat/qat_crypto.h
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -103,7 +103,8 @@ void qat_crypto_sym_stats_get(struct rte_cryptodev *dev,
void qat_crypto_sym_stats_reset(struct rte_cryptodev *dev);
int qat_crypto_sym_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
- const struct rte_cryptodev_qp_conf *rx_conf, int socket_id);
+ const struct rte_cryptodev_qp_conf *rx_conf, int socket_id,
+ struct rte_mempool *session_pool);
int qat_crypto_sym_qp_release(struct rte_cryptodev *dev,
uint16_t queue_pair_id);
diff --git a/drivers/crypto/qat/qat_qp.c b/drivers/crypto/qat/qat_qp.c
index 3921c2e..2b2ab42 100644
--- a/drivers/crypto/qat/qat_qp.c
+++ b/drivers/crypto/qat/qat_qp.c
@@ -134,7 +134,7 @@ queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
int qat_crypto_sym_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
const struct rte_cryptodev_qp_conf *qp_conf,
- int socket_id)
+ int socket_id, struct rte_mempool *session_pool __rte_unused)
{
struct qat_qp *qp;
struct rte_pci_device *pci_dev;
diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
index 6e37efc..b434244 100644
--- a/drivers/crypto/scheduler/scheduler_pmd_ops.c
+++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
@@ -101,8 +101,7 @@ scheduler_pmd_config(struct rte_cryptodev *dev,
for (i = 0; i < sched_ctx->nb_slaves; i++) {
uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
- ret = rte_cryptodev_configure(slave_dev_id, config,
- dev->data->session_pool);
+ ret = rte_cryptodev_configure(slave_dev_id, config);
if (ret < 0)
break;
}
@@ -400,7 +399,8 @@ scheduler_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
/** Setup a queue pair */
static int
scheduler_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
- const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+ const struct rte_cryptodev_qp_conf *qp_conf, int socket_id,
+ struct rte_mempool *session_pool)
{
struct scheduler_ctx *sched_ctx = dev->data->dev_private;
struct scheduler_qp_ctx *qp_ctx;
@@ -422,8 +422,13 @@ scheduler_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
for (i = 0; i < sched_ctx->nb_slaves; i++) {
uint8_t slave_id = sched_ctx->slaves[i].dev_id;
+ /*
+ * All slaves will share the same session mempool
+ * for session-less operations, so the objects
+ * must be big enough for all the drivers used.
+ */
ret = rte_cryptodev_queue_pair_setup(slave_id, qp_id,
- qp_conf, socket_id);
+ qp_conf, socket_id, session_pool);
if (ret < 0)
return ret;
}
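To satisfy the sizing constraint noted in the comment above, the
shared pool's element size must cover the largest session among the
slaves. A hedged sketch of that computation, mirroring what
app/test-crypto-perf does elsewhere in this series (variable names
are illustrative):

	uint32_t i, sess_size, max_sess_size = 0;

	for (i = 0; i < sched_ctx->nb_slaves; i++) {
		sess_size = rte_cryptodev_get_private_session_size(
				sched_ctx->slaves[i].dev_id);
		if (sess_size > max_sess_size)
			max_sess_size = sess_size;
	}
	/* max_sess_size then becomes the element size of the session
	 * mempool passed to rte_cryptodev_queue_pair_setup(). */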
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
index 1967409..04151bc 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
@@ -220,7 +220,7 @@ snow3g_pmd_qp_create_processed_ops_ring(struct snow3g_qp *qp,
static int
snow3g_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
const struct rte_cryptodev_qp_conf *qp_conf,
- int socket_id)
+ int socket_id, struct rte_mempool *session_pool)
{
struct snow3g_qp *qp = NULL;
@@ -245,7 +245,7 @@ snow3g_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
if (qp->processed_ops == NULL)
goto qp_setup_cleanup;
- qp->sess_mp = dev->data->session_pool;
+ qp->sess_mp = session_pool;
memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
diff --git a/drivers/crypto/zuc/rte_zuc_pmd_ops.c b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
index cdc783f..7cf6542 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd_ops.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
@@ -220,7 +220,7 @@ zuc_pmd_qp_create_processed_ops_ring(struct zuc_qp *qp,
static int
zuc_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
const struct rte_cryptodev_qp_conf *qp_conf,
- int socket_id)
+ int socket_id, struct rte_mempool *session_pool)
{
struct zuc_qp *qp = NULL;
@@ -245,7 +245,7 @@ zuc_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
if (qp->processed_ops == NULL)
goto qp_setup_cleanup;
- qp->sess_mp = dev->data->session_pool;
+ qp->sess_mp = session_pool;
memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index f5a72de..7e3b202 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -686,8 +686,7 @@ rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id)
}
int
-rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config,
- struct rte_mempool *session_pool)
+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
{
struct rte_cryptodev *dev;
int diag;
@@ -707,8 +706,6 @@ rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config,
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP);
- dev->data->session_pool = session_pool;
-
/* Setup new number of queue pairs and reconfigure device. */
diag = rte_cryptodev_queue_pairs_config(dev, config->nb_queue_pairs,
config->socket_id);
@@ -820,7 +817,9 @@ rte_cryptodev_close(uint8_t dev_id)
int
rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
- const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+ const struct rte_cryptodev_qp_conf *qp_conf, int socket_id,
+ struct rte_mempool *session_pool)
+
{
struct rte_cryptodev *dev;
@@ -844,7 +843,7 @@ rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_setup, -ENOTSUP);
return (*dev->dev_ops->queue_pair_setup)(dev, queue_pair_id, qp_conf,
- socket_id);
+ socket_id, session_pool);
}
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 2204982..aa68095 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -439,15 +439,13 @@ struct rte_cryptodev_config {
*
* @param dev_id The identifier of the device to configure.
* @param config The crypto device configuration structure.
- * @param session_pool Pointer to device session mempool
*
* @return
* - 0: Success, device configured.
* - <0: Error code returned by the driver configuration function.
*/
extern int
-rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config,
- struct rte_mempool *session_pool);
+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config);
/**
* Start an device.
@@ -506,6 +504,8 @@ rte_cryptodev_close(uint8_t dev_id);
* *SOCKET_ID_ANY* if there is no NUMA constraint
* for the DMA memory allocated for the receive
* queue pair.
+ * @param session_pool Pointer to device session mempool, used
+ * for session-less operations.
*
* @return
* - 0: Success, queue pair correctly set up.
@@ -513,7 +513,8 @@ rte_cryptodev_close(uint8_t dev_id);
*/
extern int
rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
- const struct rte_cryptodev_qp_conf *qp_conf, int socket_id);
+ const struct rte_cryptodev_qp_conf *qp_conf, int socket_id,
+ struct rte_mempool *session_pool);
/**
* Start a specified queue pair of a device. It is used
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
index 959a8ae..886aa23 100644
--- a/lib/librte_cryptodev/rte_cryptodev_pmd.h
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -207,12 +207,13 @@ typedef int (*cryptodev_queue_pair_stop_t)(struct rte_cryptodev *dev,
* @param qp_id Queue Pair Index
* @param qp_conf Queue configuration structure
* @param socket_id Socket Index
+ * @param session_pool Pointer to device session mempool
*
* @return Returns 0 on success.
*/
typedef int (*cryptodev_queue_pair_setup_t)(struct rte_cryptodev *dev,
uint16_t qp_id, const struct rte_cryptodev_qp_conf *qp_conf,
- int socket_id);
+ int socket_id, struct rte_mempool *session_pool);
/**
* Release memory resources allocated by given queue pair.
diff --git a/test/test/test_cryptodev.c b/test/test/test_cryptodev.c
index 9d08eb8..bf117b3 100644
--- a/test/test/test_cryptodev.c
+++ b/test/test/test_cryptodev.c
@@ -401,7 +401,7 @@ testsuite_setup(void)
"session mempool allocation failed");
TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
- &ts_params->conf, ts_params->session_mpool),
+ &ts_params->conf),
"Failed to configure cryptodev %u with %u qps",
dev_id, ts_params->conf.nb_queue_pairs);
@@ -410,7 +410,8 @@ testsuite_setup(void)
for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) {
TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
dev_id, qp_id, &ts_params->qp_conf,
- rte_cryptodev_socket_id(dev_id)),
+ rte_cryptodev_socket_id(dev_id),
+ ts_params->session_mpool),
"Failed to setup queue pair %u on cryptodev %u",
qp_id, dev_id);
}
@@ -455,7 +456,7 @@ ut_setup(void)
ts_params->conf.socket_id = SOCKET_ID_ANY;
TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
- &ts_params->conf, ts_params->session_mpool),
+ &ts_params->conf),
"Failed to configure cryptodev %u",
ts_params->valid_devs[0]);
@@ -463,7 +464,8 @@ ut_setup(void)
TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
ts_params->valid_devs[0], qp_id,
&ts_params->qp_conf,
- rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+ rte_cryptodev_socket_id(ts_params->valid_devs[0]),
+ ts_params->session_mpool),
"Failed to setup queue pair %u on cryptodev %u",
qp_id, ts_params->valid_devs[0]);
}
@@ -537,23 +539,20 @@ test_device_configure_invalid_dev_id(void)
/* Stop the device in case it's started so it can be configured */
rte_cryptodev_stop(ts_params->valid_devs[dev_id]);
- TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id, &ts_params->conf,
- ts_params->session_mpool),
+ TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id, &ts_params->conf),
"Failed test for rte_cryptodev_configure: "
"invalid dev_num %u", dev_id);
/* invalid dev_id values */
dev_id = num_devs;
- TEST_ASSERT_FAIL(rte_cryptodev_configure(dev_id, &ts_params->conf,
- ts_params->session_mpool),
+ TEST_ASSERT_FAIL(rte_cryptodev_configure(dev_id, &ts_params->conf),
"Failed test for rte_cryptodev_configure: "
"invalid dev_num %u", dev_id);
dev_id = 0xff;
- TEST_ASSERT_FAIL(rte_cryptodev_configure(dev_id, &ts_params->conf,
- ts_params->session_mpool),
+ TEST_ASSERT_FAIL(rte_cryptodev_configure(dev_id, &ts_params->conf),
"Failed test for rte_cryptodev_configure:"
"invalid dev_num %u", dev_id);
@@ -573,7 +572,7 @@ test_device_configure_invalid_queue_pair_ids(void)
ts_params->conf.nb_queue_pairs = 1;
TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
- &ts_params->conf, ts_params->session_mpool),
+ &ts_params->conf),
"Failed to configure cryptodev: dev_id %u, qp_id %u",
ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
@@ -582,7 +581,7 @@ test_device_configure_invalid_queue_pair_ids(void)
ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE;
TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
- &ts_params->conf, ts_params->session_mpool),
+ &ts_params->conf),
"Failed to configure cryptodev: dev_id %u, qp_id %u",
ts_params->valid_devs[0],
ts_params->conf.nb_queue_pairs);
@@ -592,7 +591,7 @@ test_device_configure_invalid_queue_pair_ids(void)
ts_params->conf.nb_queue_pairs = 0;
TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0],
- &ts_params->conf, ts_params->session_mpool),
+ &ts_params->conf),
"Failed test for rte_cryptodev_configure, dev_id %u,"
" invalid qps: %u",
ts_params->valid_devs[0],
@@ -603,7 +602,7 @@ test_device_configure_invalid_queue_pair_ids(void)
ts_params->conf.nb_queue_pairs = UINT16_MAX;
TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0],
- &ts_params->conf, ts_params->session_mpool),
+ &ts_params->conf),
"Failed test for rte_cryptodev_configure, dev_id %u,"
" invalid qps: %u",
ts_params->valid_devs[0],
@@ -614,7 +613,7 @@ test_device_configure_invalid_queue_pair_ids(void)
ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE + 1;
TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0],
- &ts_params->conf, ts_params->session_mpool),
+ &ts_params->conf),
"Failed test for rte_cryptodev_configure, dev_id %u,"
" invalid qps: %u",
ts_params->valid_devs[0],
@@ -644,7 +643,7 @@ test_queue_pair_descriptor_setup(void)
rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
- &ts_params->conf, ts_params->session_mpool),
+ &ts_params->conf),
"Failed to configure cryptodev %u",
ts_params->valid_devs[0]);
@@ -658,7 +657,8 @@ test_queue_pair_descriptor_setup(void)
TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
ts_params->valid_devs[0], qp_id, &qp_conf,
rte_cryptodev_socket_id(
- ts_params->valid_devs[0])),
+ ts_params->valid_devs[0]),
+ ts_params->session_mpool),
"Failed test for "
"rte_cryptodev_queue_pair_setup: num_inflights "
"%u on qp %u on cryptodev %u",
@@ -672,7 +672,8 @@ test_queue_pair_descriptor_setup(void)
TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
ts_params->valid_devs[0], qp_id, &qp_conf,
rte_cryptodev_socket_id(
- ts_params->valid_devs[0])),
+ ts_params->valid_devs[0]),
+ ts_params->session_mpool),
"Failed test for"
" rte_cryptodev_queue_pair_setup: num_inflights"
" %u on qp %u on cryptodev %u",
@@ -686,7 +687,8 @@ test_queue_pair_descriptor_setup(void)
TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
ts_params->valid_devs[0], qp_id, &qp_conf,
rte_cryptodev_socket_id(
- ts_params->valid_devs[0])),
+ ts_params->valid_devs[0]),
+ ts_params->session_mpool),
"Failed test for "
"rte_cryptodev_queue_pair_setup: num_inflights"
" %u on qp %u on cryptodev %u",
@@ -701,7 +703,8 @@ test_queue_pair_descriptor_setup(void)
TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
ts_params->valid_devs[0], qp_id, &qp_conf,
rte_cryptodev_socket_id(
- ts_params->valid_devs[0])),
+ ts_params->valid_devs[0]),
+ ts_params->session_mpool),
"Unexpectedly passed test for "
"rte_cryptodev_queue_pair_setup:"
"num_inflights %u on qp %u on cryptodev %u",
@@ -716,7 +719,8 @@ test_queue_pair_descriptor_setup(void)
TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
ts_params->valid_devs[0], qp_id, &qp_conf,
rte_cryptodev_socket_id(
- ts_params->valid_devs[0])),
+ ts_params->valid_devs[0]),
+ ts_params->session_mpool),
"Unexpectedly passed test for "
"rte_cryptodev_queue_pair_setup:"
"num_inflights %u on qp %u on cryptodev %u",
@@ -730,7 +734,8 @@ test_queue_pair_descriptor_setup(void)
TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
ts_params->valid_devs[0], qp_id, &qp_conf,
rte_cryptodev_socket_id(
- ts_params->valid_devs[0])),
+ ts_params->valid_devs[0]),
+ ts_params->session_mpool),
"Failed test for"
" rte_cryptodev_queue_pair_setup:"
"num_inflights %u on qp %u on cryptodev %u",
@@ -745,7 +750,8 @@ test_queue_pair_descriptor_setup(void)
TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
ts_params->valid_devs[0], qp_id, &qp_conf,
rte_cryptodev_socket_id(
- ts_params->valid_devs[0])),
+ ts_params->valid_devs[0]),
+ ts_params->session_mpool),
"Unexpectedly passed test for "
"rte_cryptodev_queue_pair_setup:"
"num_inflights %u on qp %u on cryptodev %u",
@@ -761,7 +767,8 @@ test_queue_pair_descriptor_setup(void)
TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
ts_params->valid_devs[0],
qp_id, &qp_conf,
- rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+ rte_cryptodev_socket_id(ts_params->valid_devs[0]),
+ ts_params->session_mpool),
"Failed test for rte_cryptodev_queue_pair_setup:"
"invalid qp %u on cryptodev %u",
qp_id, ts_params->valid_devs[0]);
@@ -771,7 +778,8 @@ test_queue_pair_descriptor_setup(void)
TEST_ASSERT_FAIL(rte_cryptodev_queue_pair_setup(
ts_params->valid_devs[0],
qp_id, &qp_conf,
- rte_cryptodev_socket_id(ts_params->valid_devs[0])),
+ rte_cryptodev_socket_id(ts_params->valid_devs[0]),
+ ts_params->session_mpool),
"Failed test for rte_cryptodev_queue_pair_setup:"
"invalid qp %u on cryptodev %u",
qp_id, ts_params->valid_devs[0]);
diff --git a/test/test/test_cryptodev_perf.c b/test/test/test_cryptodev_perf.c
index 68c5fdd..8a40381 100644
--- a/test/test/test_cryptodev_perf.c
+++ b/test/test/test_cryptodev_perf.c
@@ -413,7 +413,7 @@ testsuite_setup(void)
"session mempool allocation failed");
TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->dev_id,
- &ts_params->conf, ts_params->sess_mp),
+ &ts_params->conf),
"Failed to configure cryptodev %u",
ts_params->dev_id);
@@ -423,7 +423,8 @@ testsuite_setup(void)
TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
ts_params->dev_id, qp_id,
&ts_params->qp_conf,
- rte_cryptodev_socket_id(ts_params->dev_id)),
+ rte_cryptodev_socket_id(ts_params->dev_id),
+ ts_params->sess_mp),
"Failed to setup queue pair %u on cryptodev %u",
qp_id, ts_params->dev_id);
}
--
2.9.4
* [dpdk-dev] [PATCH v2 09/11] cryptodev: support device independent sessions
From: Pablo de Lara @ 2017-06-30 17:09 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal,
hemant.agrawal, fiona.trahe, john.griffin, deepak.k.jain
Cc: dev, Slawomir Mrozowicz, Pablo de Lara
From: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Change the crypto device's session management to make it device
independent and to simplify the architecture when a session is
intended to be used on more than one device.
Session private data is made agnostic to the underlying device
by adding an indirection, keyed by the crypto driver identifier,
into the session's private data.
A single session can therefore hold private data for multiple
device types.
A new function, rte_cryptodev_sym_session_init(), has been added
to initialize the per-driver private data within a single session.
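As an illustration, the usage pattern enabled by this change is
roughly the following (the device ids, mempool, xform and op are
placeholders, and error handling is omitted):

	struct rte_crypto_sym_xform xform;	/* filled in by the caller */

	/* One session object, allocated from an application mempool... */
	struct rte_cryptodev_sym_session *sess =
			rte_cryptodev_sym_session_create(sess_mp);

	/* ...initialized once per driver that will use it... */
	rte_cryptodev_sym_session_init(dev_id_a, sess, &xform, sess_mp);
	rte_cryptodev_sym_session_init(dev_id_b, sess, &xform, sess_mp);

	/* ...and attached to ops enqueued on either device. */
	rte_crypto_op_attach_sym_session(op, sess);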
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf.h | 5 +-
app/test-crypto-perf/cperf_ops.c | 30 +-
app/test-crypto-perf/cperf_ops.h | 1 +
app/test-crypto-perf/cperf_test_latency.c | 7 +-
app/test-crypto-perf/cperf_test_latency.h | 5 +-
app/test-crypto-perf/cperf_test_throughput.c | 7 +-
app/test-crypto-perf/cperf_test_throughput.h | 5 +-
app/test-crypto-perf/cperf_test_verify.c | 7 +-
app/test-crypto-perf/cperf_test_verify.h | 5 +-
app/test-crypto-perf/main.c | 50 +--
doc/guides/rel_notes/release_17_08.rst | 6 +-
drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 54 ++--
drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c | 28 +-
drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c | 29 +-
drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c | 27 +-
drivers/crypto/armv8/rte_armv8_pmd.c | 11 +-
drivers/crypto/armv8/rte_armv8_pmd_ops.c | 31 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 53 +++-
drivers/crypto/kasumi/rte_kasumi_pmd.c | 16 +-
drivers/crypto/kasumi/rte_kasumi_pmd_ops.c | 28 +-
drivers/crypto/null/null_crypto_pmd.c | 14 +-
drivers/crypto/null/null_crypto_pmd_ops.c | 31 +-
drivers/crypto/openssl/rte_openssl_pmd.c | 10 +-
drivers/crypto/openssl/rte_openssl_pmd_ops.c | 28 +-
drivers/crypto/qat/qat_crypto.c | 68 ++--
drivers/crypto/qat/qat_crypto.h | 12 +-
drivers/crypto/qat/rte_qat_cryptodev.c | 1 -
drivers/crypto/scheduler/scheduler_failover.c | 45 +--
.../crypto/scheduler/scheduler_pkt_size_distr.c | 18 --
drivers/crypto/scheduler/scheduler_pmd_ops.c | 87 +++---
drivers/crypto/scheduler/scheduler_pmd_private.h | 4 -
drivers/crypto/scheduler/scheduler_roundrobin.c | 41 ---
drivers/crypto/snow3g/rte_snow3g_pmd.c | 16 +-
drivers/crypto/snow3g/rte_snow3g_pmd_ops.c | 28 +-
drivers/crypto/zuc/rte_zuc_pmd.c | 15 +-
drivers/crypto/zuc/rte_zuc_pmd_ops.c | 28 +-
examples/ipsec-secgw/ipsec-secgw.c | 41 ++-
examples/ipsec-secgw/ipsec.c | 7 +-
examples/ipsec-secgw/ipsec.h | 2 +
examples/l2fwd-crypto/main.c | 74 +++--
lib/librte_cryptodev/rte_cryptodev.c | 128 ++++----
lib/librte_cryptodev/rte_cryptodev.h | 65 ++--
lib/librte_cryptodev/rte_cryptodev_pmd.h | 24 +-
lib/librte_cryptodev/rte_cryptodev_version.map | 2 +
test/test/test_cryptodev.c | 346 ++++++++++++++-------
test/test/test_cryptodev_blockcipher.c | 15 +-
test/test/test_cryptodev_blockcipher.h | 1 +
test/test/test_cryptodev_perf.c | 187 +++++++----
48 files changed, 1071 insertions(+), 672 deletions(-)
diff --git a/app/test-crypto-perf/cperf.h b/app/test-crypto-perf/cperf.h
index 293ba94..c9f7f81 100644
--- a/app/test-crypto-perf/cperf.h
+++ b/app/test-crypto-perf/cperf.h
@@ -41,7 +41,10 @@ struct cperf_options;
struct cperf_test_vector;
struct cperf_op_fns;
-typedef void *(*cperf_constructor_t)(uint8_t dev_id, uint16_t qp_id,
+typedef void *(*cperf_constructor_t)(
+ struct rte_mempool *sess_mp,
+ uint8_t dev_id,
+ uint16_t qp_id,
const struct cperf_options *options,
const struct cperf_test_vector *t_vec,
const struct cperf_op_fns *op_fns);
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index c2c3db5..23bdd70 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -334,14 +334,16 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
}
static struct rte_cryptodev_sym_session *
-cperf_create_session(uint8_t dev_id,
+cperf_create_session(struct rte_mempool *sess_mp,
+ uint8_t dev_id,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector)
{
struct rte_crypto_sym_xform cipher_xform;
struct rte_crypto_sym_xform auth_xform;
- struct rte_cryptodev_sym_session *sess = NULL;
+ struct rte_cryptodev_sym_session *sess;
+ sess = rte_cryptodev_sym_session_create(sess_mp);
/*
* cipher only
*/
@@ -362,7 +364,8 @@ cperf_create_session(uint8_t dev_id,
cipher_xform.cipher.key.length = 0;
}
/* create crypto session */
- sess = rte_cryptodev_sym_session_create(dev_id, &cipher_xform);
+ rte_cryptodev_sym_session_init(dev_id, sess, &cipher_xform,
+ sess_mp);
/*
* auth only
*/
@@ -388,7 +391,8 @@ cperf_create_session(uint8_t dev_id,
auth_xform.auth.key.data = NULL;
}
/* create crypto session */
- sess = rte_cryptodev_sym_session_create(dev_id, &auth_xform);
+ rte_cryptodev_sym_session_init(dev_id, sess, &auth_xform,
+ sess_mp);
/*
* cipher and auth
*/
@@ -452,29 +456,31 @@ cperf_create_session(uint8_t dev_id,
RTE_CRYPTO_CIPHER_OP_ENCRYPT) {
cipher_xform.next = &auth_xform;
/* create crypto session */
- sess = rte_cryptodev_sym_session_create(dev_id,
- &cipher_xform);
+ rte_cryptodev_sym_session_init(dev_id,
+ sess, &cipher_xform, sess_mp);
+
} else { /* decrypt */
auth_xform.next = &cipher_xform;
/* create crypto session */
- sess = rte_cryptodev_sym_session_create(dev_id,
- &auth_xform);
+ rte_cryptodev_sym_session_init(dev_id,
+ sess, &auth_xform, sess_mp);
}
} else { /* create crypto session for other */
/* cipher then auth */
if (options->op_type == CPERF_CIPHER_THEN_AUTH) {
cipher_xform.next = &auth_xform;
/* create crypto session */
- sess = rte_cryptodev_sym_session_create(dev_id,
- &cipher_xform);
+ rte_cryptodev_sym_session_init(dev_id,
+ sess, &cipher_xform, sess_mp);
} else { /* auth then cipher */
auth_xform.next = &cipher_xform;
/* create crypto session */
- sess = rte_cryptodev_sym_session_create(dev_id,
- &auth_xform);
+ rte_cryptodev_sym_session_init(dev_id,
+ sess, &auth_xform, sess_mp);
}
}
}
+
return sess;
}
diff --git a/app/test-crypto-perf/cperf_ops.h b/app/test-crypto-perf/cperf_ops.h
index 1b748da..36daf9d 100644
--- a/app/test-crypto-perf/cperf_ops.h
+++ b/app/test-crypto-perf/cperf_ops.h
@@ -41,6 +41,7 @@
typedef struct rte_cryptodev_sym_session *(*cperf_sessions_create_t)(
+ struct rte_mempool *sess_mp,
uint8_t dev_id, const struct cperf_options *options,
const struct cperf_test_vector *test_vector);
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index e61ac97..546a960 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -76,7 +76,7 @@ cperf_latency_test_free(struct cperf_latency_ctx *ctx, uint32_t mbuf_nb)
if (ctx) {
if (ctx->sess)
- rte_cryptodev_sym_session_free(ctx->dev_id, ctx->sess);
+ rte_cryptodev_sym_session_free(ctx->sess);
if (ctx->mbufs_in) {
for (i = 0; i < mbuf_nb; i++)
@@ -187,7 +187,8 @@ cperf_mbuf_create(struct rte_mempool *mempool,
}
void *
-cperf_latency_test_constructor(uint8_t dev_id, uint16_t qp_id,
+cperf_latency_test_constructor(struct rte_mempool *sess_mp,
+ uint8_t dev_id, uint16_t qp_id,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
const struct cperf_op_fns *op_fns)
@@ -207,7 +208,7 @@ cperf_latency_test_constructor(uint8_t dev_id, uint16_t qp_id,
ctx->options = options;
ctx->test_vector = test_vector;
- ctx->sess = op_fns->sess_create(dev_id, options, test_vector);
+ ctx->sess = op_fns->sess_create(sess_mp, dev_id, options, test_vector);
if (ctx->sess == NULL)
goto err;
diff --git a/app/test-crypto-perf/cperf_test_latency.h b/app/test-crypto-perf/cperf_test_latency.h
index 6a2cf61..1bbedb4 100644
--- a/app/test-crypto-perf/cperf_test_latency.h
+++ b/app/test-crypto-perf/cperf_test_latency.h
@@ -43,7 +43,10 @@
#include "cperf_test_vectors.h"
void *
-cperf_latency_test_constructor(uint8_t dev_id, uint16_t qp_id,
+cperf_latency_test_constructor(
+ struct rte_mempool *sess_mp,
+ uint8_t dev_id,
+ uint16_t qp_id,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
const struct cperf_op_fns *ops_fn);
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 61b27ea..df85c67 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -65,7 +65,7 @@ cperf_throughput_test_free(struct cperf_throughput_ctx *ctx, uint32_t mbuf_nb)
if (ctx) {
if (ctx->sess)
- rte_cryptodev_sym_session_free(ctx->dev_id, ctx->sess);
+ rte_cryptodev_sym_session_free(ctx->sess);
if (ctx->mbufs_in) {
for (i = 0; i < mbuf_nb; i++)
@@ -175,7 +175,8 @@ cperf_mbuf_create(struct rte_mempool *mempool,
}
void *
-cperf_throughput_test_constructor(uint8_t dev_id, uint16_t qp_id,
+cperf_throughput_test_constructor(struct rte_mempool *sess_mp,
+ uint8_t dev_id, uint16_t qp_id,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
const struct cperf_op_fns *op_fns)
@@ -195,7 +196,7 @@ cperf_throughput_test_constructor(uint8_t dev_id, uint16_t qp_id,
ctx->options = options;
ctx->test_vector = test_vector;
- ctx->sess = op_fns->sess_create(dev_id, options, test_vector);
+ ctx->sess = op_fns->sess_create(sess_mp, dev_id, options, test_vector);
if (ctx->sess == NULL)
goto err;
diff --git a/app/test-crypto-perf/cperf_test_throughput.h b/app/test-crypto-perf/cperf_test_throughput.h
index f1b5766..987d0c3 100644
--- a/app/test-crypto-perf/cperf_test_throughput.h
+++ b/app/test-crypto-perf/cperf_test_throughput.h
@@ -44,7 +44,10 @@
void *
-cperf_throughput_test_constructor(uint8_t dev_id, uint16_t qp_id,
+cperf_throughput_test_constructor(
+ struct rte_mempool *sess_mp,
+ uint8_t dev_id,
+ uint16_t qp_id,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
const struct cperf_op_fns *ops_fn);
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index 454221e..3c2700f 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -69,7 +69,7 @@ cperf_verify_test_free(struct cperf_verify_ctx *ctx, uint32_t mbuf_nb)
if (ctx) {
if (ctx->sess)
- rte_cryptodev_sym_session_free(ctx->dev_id, ctx->sess);
+ rte_cryptodev_sym_session_free(ctx->sess);
if (ctx->mbufs_in) {
for (i = 0; i < mbuf_nb; i++)
@@ -179,7 +179,8 @@ cperf_mbuf_create(struct rte_mempool *mempool,
}
void *
-cperf_verify_test_constructor(uint8_t dev_id, uint16_t qp_id,
+cperf_verify_test_constructor(struct rte_mempool *sess_mp,
+ uint8_t dev_id, uint16_t qp_id,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
const struct cperf_op_fns *op_fns)
@@ -199,7 +200,7 @@ cperf_verify_test_constructor(uint8_t dev_id, uint16_t qp_id,
ctx->options = options;
ctx->test_vector = test_vector;
- ctx->sess = op_fns->sess_create(dev_id, options, test_vector);
+ ctx->sess = op_fns->sess_create(sess_mp, dev_id, options, test_vector);
if (ctx->sess == NULL)
goto err;
diff --git a/app/test-crypto-perf/cperf_test_verify.h b/app/test-crypto-perf/cperf_test_verify.h
index 3fa78ee..e67b48d 100644
--- a/app/test-crypto-perf/cperf_test_verify.h
+++ b/app/test-crypto-perf/cperf_test_verify.h
@@ -44,7 +44,10 @@
void *
-cperf_verify_test_constructor(uint8_t dev_id, uint16_t qp_id,
+cperf_verify_test_constructor(
+ struct rte_mempool *sess_mp,
+ uint8_t dev_id,
+ uint16_t qp_id,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
const struct cperf_op_fns *ops_fn);
diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c
index 9a22925..9f5c6be 100644
--- a/app/test-crypto-perf/main.c
+++ b/app/test-crypto-perf/main.c
@@ -46,6 +46,8 @@ const struct cperf_test cperf_testmap[] = {
}
};
+static struct rte_mempool *cperf_mempool;
+
static int
cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs)
{
@@ -69,6 +71,30 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs)
return -EINVAL;
}
+ /* Create a mempool shared by all the devices */
+ uint32_t max_sess_size = 0, sess_size;
+
+ for (cdev_id = 0; cdev_id < enabled_cdev_count &&
+ cdev_id < RTE_CRYPTO_MAX_DEVS; cdev_id++) {
+ uint8_t enabled_id = enabled_cdevs[cdev_id];
+ sess_size = rte_cryptodev_get_private_session_size(enabled_id);
+ if (sess_size > max_sess_size)
+ max_sess_size = sess_size;
+ }
+
+ cperf_mempool = rte_mempool_create("sess_mp",
+ NUM_SESSIONS,
+ max_sess_size,
+ SESS_MEMPOOL_CACHE_SIZE,
+ 0, NULL, NULL, NULL,
+ NULL, SOCKET_ID_ANY,
+ 0);
+
+ if (cperf_mempool == NULL) {
+ printf("Failed to create device session mempool\n");
+ return -ENOMEM;
+ }
+
for (cdev_id = 0; cdev_id < enabled_cdev_count &&
cdev_id < RTE_CRYPTO_MAX_DEVS; cdev_id++) {
@@ -81,28 +107,9 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs)
.nb_descriptors = 2048
};
- unsigned int session_size = sizeof(struct rte_cryptodev_sym_session) +
- rte_cryptodev_get_private_session_size(enabled_cdevs[cdev_id]);
-
- char mp_name[RTE_MEMPOOL_NAMESIZE];
- struct rte_mempool *sess_mp;
-
- snprintf(mp_name, sizeof(mp_name), "sess_mp_%u", cdev_id);
- sess_mp = rte_mempool_create(mp_name,
- NUM_SESSIONS,
- session_size,
- SESS_MEMPOOL_CACHE_SIZE,
- 0, NULL, NULL, NULL,
- NULL, SOCKET_ID_ANY,
- 0);
-
- if (sess_mp == NULL) {
- printf("Failed to create device session mempool\n");
- return -ENOMEM;
- }
ret = rte_cryptodev_configure(enabled_cdevs[cdev_id], &conf,
- sess_mp);
+ cperf_mempool);
if (ret < 0) {
printf("Failed to configure cryptodev %u",
enabled_cdevs[cdev_id]);
@@ -388,7 +395,8 @@ main(int argc, char **argv)
cdev_id = enabled_cdevs[i];
- ctx[cdev_id] = cperf_testmap[opts.test].constructor(cdev_id, 0,
+ ctx[cdev_id] = cperf_testmap[opts.test].constructor(
+ cperf_mempool, cdev_id, 0,
&opts, t_vec, &op_fns);
if (ctx[cdev_id] == NULL) {
RTE_LOG(ERR, USER1, "Test run constructor failed\n");
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index 6215584..ad1f269 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -77,7 +77,9 @@ New Features
* **Updated cryptodev library.**
- Added helper functions for crypto device driver identification.
+ * Added helper functions for crypto device driver identification.
+ * Added support for multi-device sessions, so a single session can be
+ used in multiple drivers.
Resolved Issues
@@ -163,6 +165,8 @@ API Changes
* ``dev_id`` field has been removed from ``rte_cryptodev_sym_session`` structure.
* ``driver_id`` field has been removed from ``rte_cryptodev_sym_session`` structure.
* Mempool pointer ``mp`` has been removed from ``rte_cryptodev_sym_session`` structure.
+ * Replaced the ``private`` marker in ``rte_cryptodev_sym_session`` with
+ ``sess_private_data``, an array of pointers to per-driver session private data
ABI Changes
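The ``sess_private_data`` array mentioned in the release note above is
the core of the indirection: the session header is reduced to an array
of per-driver pointers, indexed by driver id. A sketch of the layout
and of the accessor used by the PMDs below, assuming it matches this
series' rte_cryptodev headers:

	struct rte_cryptodev_sym_session {
		__extension__ void *sess_private_data[0];
		/**< Variable-size array of pointers to per-driver
		 * session private data, indexed by crypto driver id. */
	};

	static inline void *
	get_session_private_data(const struct rte_cryptodev_sym_session *sess,
			uint8_t driver_id)
	{
		return sess->sess_private_data[driver_id];
	}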
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 2774b4e..2438b55 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -141,27 +141,35 @@ aesni_gcm_set_session_parameters(struct aesni_gcm_session *sess,
/** Get gcm session */
static struct aesni_gcm_session *
-aesni_gcm_get_session(struct aesni_gcm_qp *qp, struct rte_crypto_sym_op *op)
+aesni_gcm_get_session(struct aesni_gcm_qp *qp, struct rte_crypto_op *op)
{
struct aesni_gcm_session *sess = NULL;
- if (op->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- sess = (struct aesni_gcm_session *)op->session->_private;
- } else {
- void *_sess;
-
- if (rte_mempool_get(qp->sess_mp, &_sess))
- return sess;
+ if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+ if (likely(op->sym->session != NULL)) {
+ sess = (struct aesni_gcm_session *)
+ get_session_private_data(
+ op->sym->session,
+ cryptodev_driver_id);
+ }
+ } else {
+ /* provide internal session */
+ void *_sess = NULL;
- sess = (struct aesni_gcm_session *)
- ((struct rte_cryptodev_sym_session *)_sess)->_private;
+ if (!rte_mempool_get(qp->sess_mp, (void **)&_sess)) {
+ sess = (struct aesni_gcm_session *)_sess;
- if (unlikely(aesni_gcm_set_session_parameters(sess,
- op->xform) != 0)) {
- rte_mempool_put(qp->sess_mp, _sess);
- sess = NULL;
+ if (unlikely(aesni_gcm_set_session_parameters(
+ sess, op->sym->xform) != 0)) {
+ rte_mempool_put(qp->sess_mp, _sess);
+ sess = NULL;
+ }
}
}
+
+ if (sess == NULL)
+ op->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
+
return sess;
}
@@ -323,17 +331,16 @@ process_gcm_crypto_op(struct rte_crypto_sym_op *op,
* - Returns NULL on invalid job
*/
static void
-post_process_gcm_crypto_op(struct rte_crypto_op *op)
+post_process_gcm_crypto_op(struct rte_crypto_op *op,
+ struct aesni_gcm_session *sess)
{
struct rte_mbuf *m = op->sym->m_dst ? op->sym->m_dst : op->sym->m_src;
- struct aesni_gcm_session *session =
- (struct aesni_gcm_session *)op->sym->session->_private;
-
op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
/* Verify digest if required */
- if (session->op == AESNI_GCM_OP_AUTHENTICATED_DECRYPTION) {
+ if (sess->op == AESNI_GCM_OP_AUTHENTICATED_DECRYPTION) {
uint8_t *tag = rte_pktmbuf_mtod_offset(m, uint8_t *,
m->data_len - op->sym->auth.digest.length);
@@ -365,12 +372,13 @@ post_process_gcm_crypto_op(struct rte_crypto_op *op)
*/
static void
handle_completed_gcm_crypto_op(struct aesni_gcm_qp *qp,
- struct rte_crypto_op *op)
+ struct rte_crypto_op *op, struct aesni_gcm_session *sess)
{
- post_process_gcm_crypto_op(op);
+ post_process_gcm_crypto_op(op, sess);
/* Free session if a session-less crypto op */
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+ memset(op->sym->session, 0, sizeof(struct aesni_gcm_session));
rte_mempool_put(qp->sess_mp, op->sym->session);
op->sym->session = NULL;
}
@@ -391,7 +399,7 @@ aesni_gcm_pmd_dequeue_burst(void *queue_pair,
for (i = 0; i < nb_dequeued; i++) {
- sess = aesni_gcm_get_session(qp, ops[i]->sym);
+ sess = aesni_gcm_get_session(qp, ops[i]);
if (unlikely(sess == NULL)) {
ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
qp->qp_stats.dequeue_err_count++;
@@ -405,7 +413,7 @@ aesni_gcm_pmd_dequeue_burst(void *queue_pair,
break;
}
- handle_completed_gcm_crypto_op(qp, ops[i]);
+ handle_completed_gcm_crypto_op(qp, ops[i], sess);
}
qp->qp_stats.dequeued_count += i;
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
index 721dbda..0b063fd 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
@@ -313,21 +313,37 @@ aesni_gcm_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
}
/** Configure a aesni gcm session from a crypto xform chain */
-static void *
+static int
aesni_gcm_pmd_session_configure(struct rte_cryptodev *dev __rte_unused,
- struct rte_crypto_sym_xform *xform, void *sess)
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_mempool *mempool)
{
+ void *sess_private_data;
+
if (unlikely(sess == NULL)) {
GCM_LOG_ERR("invalid session struct");
- return NULL;
+ return -1;
+ }
+
+ if (rte_mempool_get(mempool, &sess_private_data)) {
+ CDEV_LOG_ERR(
+ "Couldn't get object from session mempool");
+ return -1;
}
- if (aesni_gcm_set_session_parameters(sess, xform) != 0) {
+ if (aesni_gcm_set_session_parameters(sess_private_data, xform) != 0) {
GCM_LOG_ERR("failed configure session parameters");
- return NULL;
+
+ /* Return session to mempool */
+ rte_mempool_put(mempool, sess_private_data);
+ return -1;
}
- return sess;
+ set_session_private_data(sess, dev->driver_id,
+ sess_private_data);
+
+ return 0;
}
/** Clear the memory of session so it doesn't leave key material behind */
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index ec348ab..c8ec112 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -348,22 +348,33 @@ get_session(struct aesni_mb_qp *qp, struct rte_crypto_op *op)
struct aesni_mb_session *sess = NULL;
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- sess = (struct aesni_mb_session *)op->sym->session->_private;
- } else {
+ if (likely(op->sym->session != NULL))
+ sess = (struct aesni_mb_session *)
+ get_session_private_data(
+ op->sym->session,
+ cryptodev_driver_id);
+ } else {
void *_sess = NULL;
+ void *_sess_private_data = NULL;
if (rte_mempool_get(qp->sess_mp, (void **)&_sess))
return NULL;
- sess = (struct aesni_mb_session *)
- ((struct rte_cryptodev_sym_session *)_sess)->_private;
+ if (rte_mempool_get(qp->sess_mp, (void **)&_sess_private_data))
+ return NULL;
+
+ sess = (struct aesni_mb_session *)_sess_private_data;
if (unlikely(aesni_mb_set_session_parameters(qp->op_fns,
sess, op->sym->xform) != 0)) {
rte_mempool_put(qp->sess_mp, _sess);
+ rte_mempool_put(qp->sess_mp, _sess_private_data);
sess = NULL;
}
op->sym->session = (struct rte_cryptodev_sym_session *)_sess;
+ set_session_private_data(op->sym->session, cryptodev_driver_id,
+ _sess_private_data);
+
}
return sess;
@@ -512,7 +523,7 @@ verify_digest(JOB_AES_HMAC *job, struct rte_crypto_op *op) {
* - Returns NULL on invalid job
*/
static inline struct rte_crypto_op *
-post_process_mb_job(struct aesni_mb_qp *qp, JOB_AES_HMAC *job)
+post_process_mb_job(JOB_AES_HMAC *job)
{
struct rte_crypto_op *op = (struct rte_crypto_op *)job->user_data;
@@ -525,7 +536,9 @@ post_process_mb_job(struct aesni_mb_qp *qp, JOB_AES_HMAC *job)
if (job->hash_alg != NULL_HASH) {
sess = (struct aesni_mb_session *)
- op->sym->session->_private;
+ get_session_private_data(
+ op->sym->session,
+ cryptodev_driver_id);
if (sess->auth.operation ==
RTE_CRYPTO_AUTH_OP_VERIFY)
@@ -539,7 +552,7 @@ post_process_mb_job(struct aesni_mb_qp *qp, JOB_AES_HMAC *job)
/* Free session if a session-less crypto op */
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
- rte_mempool_put(qp->sess_mp, op->sym->session);
+ rte_cryptodev_sym_session_free(op->sym->session);
op->sym->session = NULL;
}
@@ -564,7 +577,7 @@ handle_completed_jobs(struct aesni_mb_qp *qp, JOB_AES_HMAC *job,
unsigned processed_jobs = 0;
while (job != NULL && processed_jobs < nb_ops) {
- op = post_process_mb_job(qp, job);
+ op = post_process_mb_job(job);
if (op) {
ops[processed_jobs++] = op;
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
index 3a2683b..fda8296 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -472,24 +472,39 @@ aesni_mb_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
}
/** Configure a aesni multi-buffer session from a crypto xform chain */
-static void *
+static int
aesni_mb_pmd_session_configure(struct rte_cryptodev *dev,
- struct rte_crypto_sym_xform *xform, void *sess)
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_mempool *mempool)
{
+ void *sess_private_data;
struct aesni_mb_private *internals = dev->data->dev_private;
if (unlikely(sess == NULL)) {
MB_LOG_ERR("invalid session struct");
- return NULL;
+ return -1;
+ }
+
+ if (rte_mempool_get(mempool, &sess_private_data)) {
+ CDEV_LOG_ERR(
+ "Couldn't get object from session mempool");
+ return -1;
}
if (aesni_mb_set_session_parameters(&job_ops[internals->vector_mode],
- sess, xform) != 0) {
+ sess_private_data, xform) != 0) {
MB_LOG_ERR("failed configure session parameters");
- return NULL;
+
+ /* Return session to mempool */
+ rte_mempool_put(mempool, sess_private_data);
+ return -1;
}
- return sess;
+ set_session_private_data(sess, dev->driver_id,
+ sess_private_data);
+
+ return 0;
}
/** Clear the memory of session so it doesn't leave key material behind */
diff --git a/drivers/crypto/armv8/rte_armv8_pmd.c b/drivers/crypto/armv8/rte_armv8_pmd.c
index 1ddf6a2..775280a 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd.c
@@ -549,18 +549,17 @@ get_session(struct armv8_crypto_qp *qp, struct rte_crypto_op *op)
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
/* get existing session */
- if (likely(op->sym->session != NULL)) {
+ if (likely(op->sym->session != NULL))
sess = (struct armv8_crypto_session *)
- op->sym->session->_private;
- }
+ get_session_private_data(
+ op->sym->session,
+ cryptodev_driver_id);
} else {
/* provide internal session */
void *_sess = NULL;
if (!rte_mempool_get(qp->sess_mp, (void **)&_sess)) {
- sess = (struct armv8_crypto_session *)
- ((struct rte_cryptodev_sym_session *)_sess)
- ->_private;
+ sess = (struct armv8_crypto_session *)_sess;
if (unlikely(armv8_crypto_set_session_parameters(
sess, op->sym->xform) != 0)) {
diff --git a/drivers/crypto/armv8/rte_armv8_pmd_ops.c b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
index 2911417..3db420b 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd_ops.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
@@ -316,22 +316,37 @@ armv8_crypto_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
}
/** Configure the session from a crypto xform chain */
-static void *
-armv8_crypto_pmd_session_configure(struct rte_cryptodev *dev __rte_unused,
- struct rte_crypto_sym_xform *xform, void *sess)
+static int
+armv8_crypto_pmd_session_configure(struct rte_cryptodev *dev,
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_mempool *mempool)
{
+ void *sess_private_data;
+
if (unlikely(sess == NULL)) {
ARMV8_CRYPTO_LOG_ERR("invalid session struct");
- return NULL;
+ return -1;
+ }
+
+ if (rte_mempool_get(mempool, &sess_private_data)) {
+ CDEV_LOG_ERR(
+ "Couldn't get object from session mempool");
+ return -1;
}
- if (armv8_crypto_set_session_parameters(
- sess, xform) != 0) {
+ if (armv8_crypto_set_session_parameters(sess_private_data, xform) != 0) {
ARMV8_CRYPTO_LOG_ERR("failed configure session parameters");
- return NULL;
+
+ /* Return session to mempool */
+ rte_mempool_put(mempool, sess_private_data);
+ return -1;
}
- return sess;
+ set_session_private_data(sess, dev->driver_id,
+ sess_private_data);
+
+ return 0;
}
/** Clear the memory of session so it doesn't leave key material behind */
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 70ad07a..c71b0c7 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -465,7 +465,9 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
/*Clear the unused FD fields before sending*/
memset(&fd_arr[loop], 0, sizeof(struct qbman_fd));
sess = (dpaa2_sec_session *)
- (*ops)->sym->session->_private;
+ get_session_private_data(
+ (*ops)->sym->session,
+ cryptodev_driver_id);
mb_pool = (*ops)->sym->m_src->pool;
bpid = mempool_to_bpid(mb_pool);
ret = build_sec_fd(sess, *ops, &fd_arr[loop], bpid);
@@ -749,13 +751,6 @@ dpaa2_sec_session_get_size(struct rte_cryptodev *dev __rte_unused)
return sizeof(dpaa2_sec_session);
}
-static void
-dpaa2_sec_session_initialize(struct rte_mempool *mp __rte_unused,
- void *sess __rte_unused)
-{
- PMD_INIT_FUNC_TRACE();
-}
-
static int
dpaa2_sec_cipher_init(struct rte_cryptodev *dev,
struct rte_crypto_sym_xform *xform,
@@ -1202,18 +1197,14 @@ dpaa2_sec_aead_init(struct rte_cryptodev *dev,
return -1;
}
-static void *
-dpaa2_sec_session_configure(struct rte_cryptodev *dev,
+static int
+dpaa2_sec_set_session_parameters(struct rte_cryptodev *dev,
struct rte_crypto_sym_xform *xform, void *sess)
{
dpaa2_sec_session *session = sess;
PMD_INIT_FUNC_TRACE();
- if (unlikely(sess == NULL)) {
- RTE_LOG(ERR, PMD, "invalid session struct");
- return NULL;
- }
/* Cipher Only */
if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER && xform->next == NULL) {
session->ctxt_type = DPAA2_SEC_CIPHER;
@@ -1238,10 +1229,39 @@ dpaa2_sec_session_configure(struct rte_cryptodev *dev,
dpaa2_sec_aead_init(dev, xform, session);
} else {
RTE_LOG(ERR, PMD, "Invalid crypto type");
- return NULL;
+ return -1;
}
- return session;
+ return 0;
+}
+
+static int
+dpaa2_sec_session_configure(struct rte_cryptodev *dev,
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_mempool *mempool)
+{
+ void *sess_private_data;
+
+ if (rte_mempool_get(mempool, &sess_private_data)) {
+ CDEV_LOG_ERR(
+ "Couldn't get object from session mempool");
+ return -1;
+ }
+
+ if (dpaa2_sec_set_session_parameters(dev, xform, sess_private_data) != 0) {
+ PMD_DRV_LOG(ERR, "DPAA2 PMD: failed to configure "
+ "session parameters");
+
+ /* Return session to mempool */
+ rte_mempool_put(mempool, sess_private_data);
+ return -1;
+ }
+
+ set_session_private_data(sess, dev->driver_id,
+ sess_private_data);
+
+ return 0;
}
/** Clear the memory of session so it doesn't leave key material behind */
@@ -1477,7 +1497,6 @@ static struct rte_cryptodev_ops crypto_ops = {
.queue_pair_stop = dpaa2_sec_queue_pair_stop,
.queue_pair_count = dpaa2_sec_queue_pair_count,
.session_get_size = dpaa2_sec_session_get_size,
- .session_initialize = dpaa2_sec_session_initialize,
.session_configure = dpaa2_sec_session_configure,
.session_clear = dpaa2_sec_session_clear,
};
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd.c b/drivers/crypto/kasumi/rte_kasumi_pmd.c
index 67f0b06..a114542 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd.c
@@ -143,17 +143,21 @@ kasumi_set_session_parameters(struct kasumi_session *sess,
static struct kasumi_session *
kasumi_get_session(struct kasumi_qp *qp, struct rte_crypto_op *op)
{
- struct kasumi_session *sess;
+ struct kasumi_session *sess = NULL;
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- sess = (struct kasumi_session *)op->sym->session->_private;
- } else {
- struct rte_cryptodev_sym_session *c_sess = NULL;
+ if (likely(op->sym->session != NULL))
+ sess = (struct kasumi_session *)
+ get_session_private_data(
+ op->sym->session,
+ cryptodev_driver_id);
+ } else {
+ void *c_sess = NULL;
if (rte_mempool_get(qp->sess_mp, (void **)&c_sess))
return NULL;
- sess = (struct kasumi_session *)c_sess->_private;
+ sess = (struct kasumi_session *)c_sess;
if (unlikely(kasumi_set_session_parameters(sess,
op->sym->xform) != 0))
@@ -297,7 +301,7 @@ process_kasumi_hash_op(struct rte_crypto_op **ops,
/* Trim area used for digest from mbuf. */
rte_pktmbuf_trim(ops[i]->sym->m_src,
ops[i]->sym->auth.digest.length);
- } else {
+ } else {
dst = ops[i]->sym->auth.digest.data;
sso_kasumi_f9_1_buffer_user(&session->pKeySched_hash,
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
index 343c9b3..d590b91 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
@@ -291,21 +291,37 @@ kasumi_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
}
/** Configure a KASUMI session from a crypto xform chain */
-static void *
+static int
kasumi_pmd_session_configure(struct rte_cryptodev *dev __rte_unused,
- struct rte_crypto_sym_xform *xform, void *sess)
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_mempool *mempool)
{
+ void *sess_private_data;
+
if (unlikely(sess == NULL)) {
KASUMI_LOG_ERR("invalid session struct");
- return NULL;
+ return -1;
+ }
+
+ if (rte_mempool_get(mempool, &sess_private_data)) {
+ CDEV_LOG_ERR(
+ "Couldn't get object from session mempool");
+ return -1;
}
- if (kasumi_set_session_parameters(sess, xform) != 0) {
+ if (kasumi_set_session_parameters(sess_private_data, xform) != 0) {
KASUMI_LOG_ERR("failed configure session parameters");
- return NULL;
+
+ /* Return session to mempool */
+ rte_mempool_put(mempool, sess_private_data);
+ return -1;
}
- return sess;
+ set_session_private_data(sess, dev->driver_id,
+ sess_private_data);
+
+ return 0;
}
/** Clear the memory of session so it doesn't leave key material behind */
diff --git a/drivers/crypto/null/null_crypto_pmd.c b/drivers/crypto/null/null_crypto_pmd.c
index 9323874..448bbcf 100644
--- a/drivers/crypto/null/null_crypto_pmd.c
+++ b/drivers/crypto/null/null_crypto_pmd.c
@@ -94,20 +94,20 @@ process_op(const struct null_crypto_qp *qp, struct rte_crypto_op *op,
static struct null_crypto_session *
get_session(struct null_crypto_qp *qp, struct rte_crypto_sym_op *op)
{
- struct null_crypto_session *sess;
+ struct null_crypto_session *sess = NULL;
if (op->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->session == NULL))
- return NULL;
-
- sess = (struct null_crypto_session *)op->session->_private;
- } else {
+ if (likely(op->session != NULL))
+ sess = (struct null_crypto_session *)
+ get_session_private_data(
+ op->session, cryptodev_driver_id);
+ } else {
struct rte_cryptodev_sym_session *c_sess = NULL;
if (rte_mempool_get(qp->sess_mp, (void **)&c_sess))
return NULL;
- sess = (struct null_crypto_session *)c_sess->_private;
+ sess = (struct null_crypto_session *)c_sess;
if (null_crypto_set_session_parameters(sess, op->xform) != 0)
return NULL;
diff --git a/drivers/crypto/null/null_crypto_pmd_ops.c b/drivers/crypto/null/null_crypto_pmd_ops.c
index a7c891e..04147fe 100644
--- a/drivers/crypto/null/null_crypto_pmd_ops.c
+++ b/drivers/crypto/null/null_crypto_pmd_ops.c
@@ -302,24 +302,37 @@ null_crypto_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
}
/** Configure a null crypto session from a crypto xform chain */
-static void *
+static int
null_crypto_pmd_session_configure(struct rte_cryptodev *dev __rte_unused,
- struct rte_crypto_sym_xform *xform, void *sess)
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_mempool *mp)
{
- int retval;
+ void *sess_private_data;
if (unlikely(sess == NULL)) {
NULL_CRYPTO_LOG_ERR("invalid session struct");
- return NULL;
+ return -1;
+ }
+
+ if (rte_mempool_get(mp, &sess_private_data)) {
+ CDEV_LOG_ERR(
+ "Couldn't get object from session mempool");
+ return -1;
}
- retval = null_crypto_set_session_parameters(
- (struct null_crypto_session *)sess, xform);
- if (retval != 0) {
+
+ if (null_crypto_set_session_parameters(sess_private_data, xform) != 0) {
NULL_CRYPTO_LOG_ERR("failed configure session parameters");
- return NULL;
+
+ /* Return session to mempool */
+ rte_mempool_put(mp, sess_private_data);
+ return -1;
}
- return sess;
+ set_session_private_data(sess, dev->driver_id,
+ sess_private_data);
+
+ return 0;
}
/** Clear the memory of session so it doesn't leave key material behind */
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 3232455..d34cf16 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -452,15 +452,15 @@ get_session(struct openssl_qp *qp, struct rte_crypto_op *op)
/* get existing session */
if (likely(op->sym->session != NULL))
sess = (struct openssl_session *)
- op->sym->session->_private;
- } else {
+ get_session_private_data(
+ op->sym->session,
+ cryptodev_driver_id);
+ } else {
/* provide internal session */
void *_sess = NULL;
if (!rte_mempool_get(qp->sess_mp, (void **)&_sess)) {
- sess = (struct openssl_session *)
- ((struct rte_cryptodev_sym_session *)_sess)
- ->_private;
+ sess = (struct openssl_session *)_sess;
if (unlikely(openssl_set_session_parameters(
sess, op->sym->xform) != 0)) {
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index f65de53..005855b 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -671,22 +671,38 @@ openssl_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
}
/** Configure the session from a crypto xform chain */
-static void *
+static int
openssl_pmd_session_configure(struct rte_cryptodev *dev __rte_unused,
- struct rte_crypto_sym_xform *xform, void *sess)
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_mempool *mempool)
{
+ void *sess_private_data;
+
if (unlikely(sess == NULL)) {
OPENSSL_LOG_ERR("invalid session struct");
- return NULL;
+ return -1;
+ }
+
+ if (rte_mempool_get(mempool, &sess_private_data)) {
+ CDEV_LOG_ERR(
+ "Couldn't get object from session mempool");
+ return -1;
}
if (openssl_set_session_parameters(
- sess, xform) != 0) {
+ sess_private_data, xform) != 0) {
OPENSSL_LOG_ERR("failed configure session parameters");
- return NULL;
+
+ /* Return session to mempool */
+ rte_mempool_put(mempool, sess_private_data);
+ return -1;
}
- return sess;
+ set_session_private_data(sess, dev->driver_id,
+ sess_private_data);
+
+ return 0;
}
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
index 13bd0b5..6e028bb 100644
--- a/drivers/crypto/qat/qat_crypto.c
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -220,7 +220,6 @@ void qat_crypto_sym_clear_session(struct rte_cryptodev *dev,
void *session)
{
struct qat_session *sess = session;
- phys_addr_t cd_paddr;
PMD_INIT_FUNC_TRACE();
if (sess) {
@@ -228,9 +227,7 @@ void qat_crypto_sym_clear_session(struct rte_cryptodev *dev,
bpi_cipher_ctx_free(sess->bpi_ctx);
sess->bpi_ctx = NULL;
}
- cd_paddr = sess->cd_paddr;
memset(sess, 0, qat_crypto_sym_get_session_private_size(dev));
- sess->cd_paddr = cd_paddr;
} else
PMD_DRV_LOG(ERR, "NULL session");
}
@@ -448,9 +445,37 @@ qat_crypto_sym_configure_session_cipher(struct rte_cryptodev *dev,
return NULL;
}
-
-void *
+int
qat_crypto_sym_configure_session(struct rte_cryptodev *dev,
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_mempool *mempool)
+{
+ void *sess_private_data;
+
+ if (rte_mempool_get(mempool, &sess_private_data)) {
+ CDEV_LOG_ERR(
+ "Couldn't get object from session mempool");
+ return -1;
+ }
+
+ if (qat_crypto_set_session_parameters(dev, xform, sess_private_data) != 0) {
+ PMD_DRV_LOG(ERR, "Crypto QAT PMD: failed to configure "
+ "session parameters");
+
+ /* Return session to mempool */
+ rte_mempool_put(mempool, sess_private_data);
+ return -1;
+ }
+
+ set_session_private_data(sess, dev->driver_id,
+ sess_private_data);
+
+ return 0;
+}
+
+int
+qat_crypto_set_session_parameters(struct rte_cryptodev *dev,
struct rte_crypto_sym_xform *xform, void *session_private)
{
struct qat_session *session = session_private;
@@ -458,6 +483,10 @@ qat_crypto_sym_configure_session(struct rte_cryptodev *dev,
int qat_cmd_id;
PMD_INIT_FUNC_TRACE();
+ session->cd_paddr = rte_mempool_virt2phy(NULL, session) +
+ offsetof(struct qat_session, cd);
+
/* Get requested QAT command id */
qat_cmd_id = qat_get_cmd_id(xform);
if (qat_cmd_id < 0 || qat_cmd_id >= ICP_QAT_FW_LA_CMD_DELIMITER) {
@@ -498,10 +527,10 @@ qat_crypto_sym_configure_session(struct rte_cryptodev *dev,
goto error_out;
}
- return session;
+ return 0;
error_out:
- return NULL;
+ return -1;
}
struct qat_session *
@@ -809,7 +838,10 @@ qat_pmd_dequeue_op_burst(void *qp, struct rte_crypto_op **ops,
rx_op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
} else {
struct qat_session *sess = (struct qat_session *)
- (rx_op->sym->session->_private);
+ get_session_private_data(
+ rx_op->sym->session,
+ cryptodev_qat_driver_id);
+
if (sess->bpi_ctx)
qat_bpicipher_postprocess(sess, rx_op);
rx_op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
@@ -914,7 +946,14 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
return -EINVAL;
}
- ctx = (struct qat_session *)op->sym->session->_private;
+ ctx = (struct qat_session *)get_session_private_data(
+ op->sym->session, cryptodev_qat_driver_id);
+
+ if (unlikely(ctx == NULL)) {
+ PMD_DRV_LOG(ERR, "Session was not created for this device");
+ return -EINVAL;
+ }
+
qat_req = (struct icp_qat_fw_la_bulk_req *)out_msg;
rte_mov128((uint8_t *)qat_req, (const uint8_t *)&(ctx->fw_req));
qat_req->comn_mid.opaque_data = (uint64_t)(uintptr_t)op;
@@ -1192,17 +1231,6 @@ static inline uint32_t adf_modulo(uint32_t data, uint32_t shift)
return data - mult;
}
-void qat_crypto_sym_session_init(struct rte_mempool *mp, void *sym_sess)
-{
- struct rte_cryptodev_sym_session *sess = sym_sess;
- struct qat_session *s = (void *)sess->_private;
-
- PMD_INIT_FUNC_TRACE();
- s->cd_paddr = rte_mempool_virt2phy(mp, sess) +
- offsetof(struct qat_session, cd) +
- offsetof(struct rte_cryptodev_sym_session, _private);
-}
-
int qat_dev_config(__rte_unused struct rte_cryptodev *dev,
__rte_unused struct rte_cryptodev_config *config)
{
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
index ed27e3b..d5466c2 100644
--- a/drivers/crypto/qat/qat_crypto.h
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -114,11 +114,15 @@ qat_pmd_session_mempool_create(struct rte_cryptodev *dev,
extern unsigned
qat_crypto_sym_get_session_private_size(struct rte_cryptodev *dev);
-extern void
-qat_crypto_sym_session_init(struct rte_mempool *mempool, void *priv_sess);
-
-extern void *
+extern int
qat_crypto_sym_configure_session(struct rte_cryptodev *dev,
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_mempool *mempool);
+
+int
+qat_crypto_set_session_parameters(struct rte_cryptodev *dev,
struct rte_crypto_sym_xform *xform, void *session_private);
struct qat_session *
diff --git a/drivers/crypto/qat/rte_qat_cryptodev.c b/drivers/crypto/qat/rte_qat_cryptodev.c
index 1c5ff77..9a710e6 100644
--- a/drivers/crypto/qat/rte_qat_cryptodev.c
+++ b/drivers/crypto/qat/rte_qat_cryptodev.c
@@ -73,7 +73,6 @@ static struct rte_cryptodev_ops crypto_qat_ops = {
/* Crypto related operations */
.session_get_size = qat_crypto_sym_get_session_private_size,
.session_configure = qat_crypto_sym_configure_session,
- .session_initialize = qat_crypto_sym_session_init,
.session_clear = qat_crypto_sym_clear_session
};
diff --git a/drivers/crypto/scheduler/scheduler_failover.c b/drivers/crypto/scheduler/scheduler_failover.c
index 162a29b..2aa13f8 100644
--- a/drivers/crypto/scheduler/scheduler_failover.c
+++ b/drivers/crypto/scheduler/scheduler_failover.c
@@ -49,57 +49,18 @@ struct fo_scheduler_qp_ctx {
};
static __rte_always_inline uint16_t
-failover_slave_enqueue(struct scheduler_slave *slave, uint8_t slave_idx,
+failover_slave_enqueue(struct scheduler_slave *slave,
struct rte_crypto_op **ops, uint16_t nb_ops)
{
uint16_t i, processed_ops;
- struct rte_cryptodev_sym_session *sessions[nb_ops];
- struct scheduler_session *sess0, *sess1, *sess2, *sess3;
for (i = 0; i < nb_ops && i < 4; i++)
rte_prefetch0(ops[i]->sym->session);
- for (i = 0; (i < (nb_ops - 8)) && (nb_ops > 8); i += 4) {
- rte_prefetch0(ops[i + 4]->sym->session);
- rte_prefetch0(ops[i + 5]->sym->session);
- rte_prefetch0(ops[i + 6]->sym->session);
- rte_prefetch0(ops[i + 7]->sym->session);
-
- sess0 = (struct scheduler_session *)
- ops[i]->sym->session->_private;
- sess1 = (struct scheduler_session *)
- ops[i+1]->sym->session->_private;
- sess2 = (struct scheduler_session *)
- ops[i+2]->sym->session->_private;
- sess3 = (struct scheduler_session *)
- ops[i+3]->sym->session->_private;
-
- sessions[i] = ops[i]->sym->session;
- sessions[i + 1] = ops[i + 1]->sym->session;
- sessions[i + 2] = ops[i + 2]->sym->session;
- sessions[i + 3] = ops[i + 3]->sym->session;
-
- ops[i]->sym->session = sess0->sessions[slave_idx];
- ops[i + 1]->sym->session = sess1->sessions[slave_idx];
- ops[i + 2]->sym->session = sess2->sessions[slave_idx];
- ops[i + 3]->sym->session = sess3->sessions[slave_idx];
- }
-
- for (; i < nb_ops; i++) {
- sess0 = (struct scheduler_session *)
- ops[i]->sym->session->_private;
- sessions[i] = ops[i]->sym->session;
- ops[i]->sym->session = sess0->sessions[slave_idx];
- }
-
processed_ops = rte_cryptodev_enqueue_burst(slave->dev_id,
slave->qp_id, ops, nb_ops);
slave->nb_inflight_cops += processed_ops;
- if (unlikely(processed_ops < nb_ops))
- for (i = processed_ops; i < nb_ops; i++)
- ops[i]->sym->session = sessions[i];
-
return processed_ops;
}
@@ -114,11 +75,11 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
return 0;
enqueued_ops = failover_slave_enqueue(&qp_ctx->primary_slave,
- PRIMARY_SLAVE_IDX, ops, nb_ops);
+ ops, nb_ops);
if (enqueued_ops < nb_ops)
enqueued_ops += failover_slave_enqueue(&qp_ctx->secondary_slave,
- SECONDARY_SLAVE_IDX, &ops[enqueued_ops],
+ &ops[enqueued_ops],
nb_ops - enqueued_ops);
return enqueued_ops;
diff --git a/drivers/crypto/scheduler/scheduler_pkt_size_distr.c b/drivers/crypto/scheduler/scheduler_pkt_size_distr.c
index 6b628df..1dd1bc3 100644
--- a/drivers/crypto/scheduler/scheduler_pkt_size_distr.c
+++ b/drivers/crypto/scheduler/scheduler_pkt_size_distr.c
@@ -67,7 +67,6 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
struct scheduler_qp_ctx *qp_ctx = qp;
struct psd_scheduler_qp_ctx *psd_qp_ctx = qp_ctx->private_qp_ctx;
struct rte_crypto_op *sched_ops[NB_PKT_SIZE_SLAVES][nb_ops];
- struct scheduler_session *sess;
uint32_t in_flight_ops[NB_PKT_SIZE_SLAVES] = {
psd_qp_ctx->primary_slave.nb_inflight_cops,
psd_qp_ctx->secondary_slave.nb_inflight_cops
@@ -97,8 +96,6 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
rte_prefetch0(ops[i + 7]->sym);
rte_prefetch0(ops[i + 7]->sym->session);
- sess = (struct scheduler_session *)
- ops[i]->sym->session->_private;
/* job_len is initialized as cipher data length, once
* it is 0, equals to auth data length
*/
@@ -118,11 +115,8 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
}
sched_ops[p_enq_op->slave_idx][p_enq_op->pos] = ops[i];
- ops[i]->sym->session = sess->sessions[p_enq_op->slave_idx];
p_enq_op->pos++;
- sess = (struct scheduler_session *)
- ops[i+1]->sym->session->_private;
job_len = ops[i+1]->sym->cipher.data.length;
job_len += (ops[i+1]->sym->cipher.data.length == 0) *
ops[i+1]->sym->auth.data.length;
@@ -135,11 +129,8 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
}
sched_ops[p_enq_op->slave_idx][p_enq_op->pos] = ops[i+1];
- ops[i+1]->sym->session = sess->sessions[p_enq_op->slave_idx];
p_enq_op->pos++;
- sess = (struct scheduler_session *)
- ops[i+2]->sym->session->_private;
job_len = ops[i+2]->sym->cipher.data.length;
job_len += (ops[i+2]->sym->cipher.data.length == 0) *
ops[i+2]->sym->auth.data.length;
@@ -152,12 +143,8 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
}
sched_ops[p_enq_op->slave_idx][p_enq_op->pos] = ops[i+2];
- ops[i+2]->sym->session = sess->sessions[p_enq_op->slave_idx];
p_enq_op->pos++;
- sess = (struct scheduler_session *)
- ops[i+3]->sym->session->_private;
-
job_len = ops[i+3]->sym->cipher.data.length;
job_len += (ops[i+3]->sym->cipher.data.length == 0) *
ops[i+3]->sym->auth.data.length;
@@ -170,14 +157,10 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
}
sched_ops[p_enq_op->slave_idx][p_enq_op->pos] = ops[i+3];
- ops[i+3]->sym->session = sess->sessions[p_enq_op->slave_idx];
p_enq_op->pos++;
}
for (; i < nb_ops; i++) {
- sess = (struct scheduler_session *)
- ops[i]->sym->session->_private;
-
job_len = ops[i]->sym->cipher.data.length;
job_len += (ops[i]->sym->cipher.data.length == 0) *
ops[i]->sym->auth.data.length;
@@ -190,7 +173,6 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
}
sched_ops[p_enq_op->slave_idx][p_enq_op->pos] = ops[i];
- ops[i]->sym->session = sess->sessions[p_enq_op->slave_idx];
p_enq_op->pos++;
}
diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
index b9d8973..6e37efc 100644
--- a/drivers/crypto/scheduler/scheduler_pmd_ops.c
+++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
@@ -85,8 +85,10 @@ scheduler_attach_init_slave(struct rte_cryptodev *dev)
/** Configure device */
static int
scheduler_pmd_config(struct rte_cryptodev *dev,
- struct rte_cryptodev_config *config __rte_unused)
+ struct rte_cryptodev_config *config)
{
+ struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+ uint32_t i;
int ret;
/* although scheduler_attach_init_slave presents multiple times,
@@ -96,6 +98,15 @@ scheduler_pmd_config(struct rte_cryptodev *dev,
if (ret < 0)
return ret;
+ for (i = 0; i < sched_ctx->nb_slaves; i++) {
+ uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+
+ ret = rte_cryptodev_configure(slave_dev_id, config,
+ dev->data->session_pool);
+ if (ret < 0)
+ break;
+ }
+
return ret;
}
@@ -474,67 +485,43 @@ scheduler_pmd_qp_count(struct rte_cryptodev *dev)
static uint32_t
scheduler_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
{
- return sizeof(struct scheduler_session);
-}
-
-static int
-config_slave_sess(struct scheduler_ctx *sched_ctx,
- struct rte_crypto_sym_xform *xform,
- struct scheduler_session *sess,
- uint32_t create)
-{
- uint32_t i;
+ struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+ uint8_t i = 0;
+ uint32_t max_priv_sess_size = 0;
+ /* Find the maximum private session size among all slaves */
for (i = 0; i < sched_ctx->nb_slaves; i++) {
- struct scheduler_slave *slave = &sched_ctx->slaves[i];
+ uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
+ struct rte_cryptodev *dev = &rte_cryptodevs[slave_dev_id];
+ uint32_t priv_sess_size = (*dev->dev_ops->session_get_size)(dev);
- if (sess->sessions[i]) {
- if (create)
- continue;
- /* !create */
- sess->sessions[i] = rte_cryptodev_sym_session_free(
- slave->dev_id, sess->sessions[i]);
- } else {
- if (!create)
- continue;
- /* create */
- sess->sessions[i] =
- rte_cryptodev_sym_session_create(
- slave->dev_id, xform);
- if (!sess->sessions[i]) {
- config_slave_sess(sched_ctx, NULL, sess, 0);
- return -1;
- }
- }
+ if (max_priv_sess_size < priv_sess_size)
+ max_priv_sess_size = priv_sess_size;
}
- return 0;
-}
-
-/** Clear the memory of session so it doesn't leave key material behind */
-static void
-scheduler_pmd_session_clear(struct rte_cryptodev *dev,
- void *sess)
-{
- struct scheduler_ctx *sched_ctx = dev->data->dev_private;
-
- config_slave_sess(sched_ctx, NULL, sess, 0);
-
- memset(sess, 0, sizeof(struct scheduler_session));
+ return max_priv_sess_size;
}
-static void *
+static int
scheduler_pmd_session_configure(struct rte_cryptodev *dev,
- struct rte_crypto_sym_xform *xform, void *sess)
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_mempool *mempool)
{
struct scheduler_ctx *sched_ctx = dev->data->dev_private;
+ uint32_t i;
+
+ for (i = 0; i < sched_ctx->nb_slaves; i++) {
+ struct scheduler_slave *slave = &sched_ctx->slaves[i];
- if (config_slave_sess(sched_ctx, xform, sess, 1) < 0) {
- CS_LOG_ERR("unabled to config sym session");
- return NULL;
+ if (rte_cryptodev_sym_session_init(slave->dev_id, sess,
+ xform, mempool) < 0) {
+ CS_LOG_ERR("unable to configure sym session");
+ return -1;
+ }
}
- return sess;
+ return 0;
}
struct rte_cryptodev_ops scheduler_pmd_ops = {
@@ -556,7 +543,7 @@ struct rte_cryptodev_ops scheduler_pmd_ops = {
.session_get_size = scheduler_pmd_session_get_size,
.session_configure = scheduler_pmd_session_configure,
- .session_clear = scheduler_pmd_session_clear,
+ .session_clear = NULL,
};
struct rte_cryptodev_ops *rte_crypto_scheduler_pmd_ops = &scheduler_pmd_ops;
diff --git a/drivers/crypto/scheduler/scheduler_pmd_private.h b/drivers/crypto/scheduler/scheduler_pmd_private.h
index e313a89..639d75a 100644
--- a/drivers/crypto/scheduler/scheduler_pmd_private.h
+++ b/drivers/crypto/scheduler/scheduler_pmd_private.h
@@ -103,10 +103,6 @@ struct scheduler_qp_ctx {
uint32_t seqn;
} __rte_cache_aligned;
-struct scheduler_session {
- struct rte_cryptodev_sym_session *sessions[
- RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES];
-};
extern uint8_t cryptodev_driver_id;
diff --git a/drivers/crypto/scheduler/scheduler_roundrobin.c b/drivers/crypto/scheduler/scheduler_roundrobin.c
index 0116276..4a84728 100644
--- a/drivers/crypto/scheduler/scheduler_roundrobin.c
+++ b/drivers/crypto/scheduler/scheduler_roundrobin.c
@@ -52,8 +52,6 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
uint32_t slave_idx = rr_qp_ctx->last_enq_slave_idx;
struct scheduler_slave *slave = &rr_qp_ctx->slaves[slave_idx];
uint16_t i, processed_ops;
- struct rte_cryptodev_sym_session *sessions[nb_ops];
- struct scheduler_session *sess0, *sess1, *sess2, *sess3;
if (unlikely(nb_ops == 0))
return 0;
@@ -61,39 +59,6 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
for (i = 0; i < nb_ops && i < 4; i++)
rte_prefetch0(ops[i]->sym->session);
- for (i = 0; (i < (nb_ops - 8)) && (nb_ops > 8); i += 4) {
- sess0 = (struct scheduler_session *)
- ops[i]->sym->session->_private;
- sess1 = (struct scheduler_session *)
- ops[i+1]->sym->session->_private;
- sess2 = (struct scheduler_session *)
- ops[i+2]->sym->session->_private;
- sess3 = (struct scheduler_session *)
- ops[i+3]->sym->session->_private;
-
- sessions[i] = ops[i]->sym->session;
- sessions[i + 1] = ops[i + 1]->sym->session;
- sessions[i + 2] = ops[i + 2]->sym->session;
- sessions[i + 3] = ops[i + 3]->sym->session;
-
- ops[i]->sym->session = sess0->sessions[slave_idx];
- ops[i + 1]->sym->session = sess1->sessions[slave_idx];
- ops[i + 2]->sym->session = sess2->sessions[slave_idx];
- ops[i + 3]->sym->session = sess3->sessions[slave_idx];
-
- rte_prefetch0(ops[i + 4]->sym->session);
- rte_prefetch0(ops[i + 5]->sym->session);
- rte_prefetch0(ops[i + 6]->sym->session);
- rte_prefetch0(ops[i + 7]->sym->session);
- }
-
- for (; i < nb_ops; i++) {
- sess0 = (struct scheduler_session *)
- ops[i]->sym->session->_private;
- sessions[i] = ops[i]->sym->session;
- ops[i]->sym->session = sess0->sessions[slave_idx];
- }
-
processed_ops = rte_cryptodev_enqueue_burst(slave->dev_id,
slave->qp_id, ops, nb_ops);
@@ -102,12 +67,6 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
rr_qp_ctx->last_enq_slave_idx += 1;
rr_qp_ctx->last_enq_slave_idx %= rr_qp_ctx->nb_slaves;
- /* recover session if enqueue is failed */
- if (unlikely(processed_ops < nb_ops)) {
- for (i = processed_ops; i < nb_ops; i++)
- ops[i]->sym->session = sessions[i];
- }
-
return processed_ops;
}
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd.c b/drivers/crypto/snow3g/rte_snow3g_pmd.c
index 677849d..e1771bb 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd.c
@@ -143,17 +143,21 @@ snow3g_set_session_parameters(struct snow3g_session *sess,
static struct snow3g_session *
snow3g_get_session(struct snow3g_qp *qp, struct rte_crypto_op *op)
{
- struct snow3g_session *sess;
+ struct snow3g_session *sess = NULL;
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- sess = (struct snow3g_session *)op->sym->session->_private;
- } else {
- struct rte_cryptodev_sym_session *c_sess = NULL;
+ if (likely(op->sym->session != NULL))
+ sess = (struct snow3g_session *)
+ get_session_private_data(
+ op->sym->session,
+ cryptodev_driver_id);
+ } else {
+ void *c_sess = NULL;
if (rte_mempool_get(qp->sess_mp, (void **)&c_sess))
return NULL;
- sess = (struct snow3g_session *)c_sess->_private;
+ sess = (struct snow3g_session *)c_sess;
if (unlikely(snow3g_set_session_parameters(sess,
op->sym->xform) != 0))
@@ -286,7 +290,7 @@ process_snow3g_hash_op(struct rte_crypto_op **ops,
/* Trim area used for digest from mbuf. */
rte_pktmbuf_trim(ops[i]->sym->m_src,
ops[i]->sym->auth.digest.length);
- } else {
+ } else {
dst = ops[i]->sym->auth.digest.data;
sso_snow3g_f9_1_buffer(&session->pKeySched_hash,
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
index 26cc3e9..1967409 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
@@ -289,21 +289,37 @@ snow3g_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
}
/** Configure a SNOW 3G session from a crypto xform chain */
-static void *
+static int
snow3g_pmd_session_configure(struct rte_cryptodev *dev __rte_unused,
- struct rte_crypto_sym_xform *xform, void *sess)
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_mempool *mempool)
{
+ void *sess_private_data;
+
if (unlikely(sess == NULL)) {
SNOW3G_LOG_ERR("invalid session struct");
- return NULL;
+ return -1;
+ }
+
+ if (rte_mempool_get(mempool, &sess_private_data)) {
+ CDEV_LOG_ERR(
+ "Couldn't get object from session mempool");
+ return -1;
}
- if (snow3g_set_session_parameters(sess, xform) != 0) {
+ if (snow3g_set_session_parameters(sess_private_data, xform) != 0) {
SNOW3G_LOG_ERR("failed configure session parameters");
- return NULL;
+
+ /* Return session to mempool */
+ rte_mempool_put(mempool, sess_private_data);
+ return -1;
}
- return sess;
+ set_session_private_data(sess, dev->driver_id,
+ sess_private_data);
+
+ return 0;
}
/** Clear the memory of session so it doesn't leave key material behind */
diff --git a/drivers/crypto/zuc/rte_zuc_pmd.c b/drivers/crypto/zuc/rte_zuc_pmd.c
index 385e4e5..af90f90 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd.c
@@ -142,17 +142,20 @@ zuc_set_session_parameters(struct zuc_session *sess,
static struct zuc_session *
zuc_get_session(struct zuc_qp *qp, struct rte_crypto_op *op)
{
- struct zuc_session *sess;
+ struct zuc_session *sess = NULL;
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- sess = (struct zuc_session *)op->sym->session->_private;
- } else {
- struct rte_cryptodev_sym_session *c_sess = NULL;
+ if (likely(op->sym->session != NULL))
+ sess = (struct zuc_session *)get_session_private_data(
+ op->sym->session,
+ cryptodev_driver_id);
+ } else {
+ void *c_sess = NULL;
if (rte_mempool_get(qp->sess_mp, (void **)&c_sess))
return NULL;
- sess = (struct zuc_session *)c_sess->_private;
+ sess = (struct zuc_session *)c_sess;
if (unlikely(zuc_set_session_parameters(sess,
op->sym->xform) != 0))
@@ -277,7 +280,7 @@ process_zuc_hash_op(struct rte_crypto_op **ops,
/* Trim area used for digest from mbuf. */
rte_pktmbuf_trim(ops[i]->sym->m_src,
ops[i]->sym->auth.digest.length);
- } else {
+ } else {
dst = (uint32_t *)ops[i]->sym->auth.digest.data;
sso_zuc_eia3_1_buffer(session->pKey_hash,
diff --git a/drivers/crypto/zuc/rte_zuc_pmd_ops.c b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
index 645b80c..cdc783f 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd_ops.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
@@ -289,21 +289,37 @@ zuc_pmd_session_get_size(struct rte_cryptodev *dev __rte_unused)
}
/** Configure a ZUC session from a crypto xform chain */
-static void *
+static int
zuc_pmd_session_configure(struct rte_cryptodev *dev __rte_unused,
- struct rte_crypto_sym_xform *xform, void *sess)
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_mempool *mempool)
{
+ void *sess_private_data;
+
if (unlikely(sess == NULL)) {
ZUC_LOG_ERR("invalid session struct");
- return NULL;
+ return -1;
+ }
+
+ if (rte_mempool_get(mempool, &sess_private_data)) {
+ CDEV_LOG_ERR(
+ "Couldn't get object from session mempool");
+ return -1;
}
- if (zuc_set_session_parameters(sess, xform) != 0) {
+ if (zuc_set_session_parameters(sess_private_data, xform) != 0) {
ZUC_LOG_ERR("failed configure session parameters");
- return NULL;
+
+ /* Return session to mempool */
+ rte_mempool_put(mempool, sess_private_data);
+ return -1;
}
- return sess;
+ set_session_private_data(sess, dev->driver_id,
+ sess_private_data);
+
+ return 0;
}
/** Clear the memory of session so it doesn't leave key material behind */
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index a2286fd..708eadd 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -710,10 +710,12 @@ main_loop(__attribute__((unused)) void *dummy)
qconf->inbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_in;
qconf->inbound.sa_ctx = socket_ctx[socket_id].sa_in;
qconf->inbound.cdev_map = cdev_map_in;
+ qconf->inbound.session_pool = socket_ctx[socket_id].session_pool;
qconf->outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out;
qconf->outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out;
qconf->outbound.sa_ctx = socket_ctx[socket_id].sa_out;
qconf->outbound.cdev_map = cdev_map_out;
+ qconf->outbound.session_pool = socket_ctx[socket_id].session_pool;
if (qconf->nb_rx_queue == 0) {
RTE_LOG(INFO, IPSEC, "lcore %u has nothing to do\n", lcore_id);
@@ -1238,6 +1240,13 @@ cryptodevs_init(void)
printf("lcore/cryptodev/qp mappings:\n");
+ uint32_t max_sess_sz = 0, sess_sz;
+ for (cdev_id = 0; cdev_id < rte_cryptodev_count(); cdev_id++) {
+ sess_sz = rte_cryptodev_get_private_session_size(cdev_id);
+ if (sess_sz > max_sess_sz)
+ max_sess_sz = sess_sz;
+ }
+
idx = 0;
/* Start from last cdev id to give HW priority */
for (cdev_id = rte_cryptodev_count() - 1; cdev_id >= 0; cdev_id--) {
@@ -1267,27 +1276,31 @@ cryptodevs_init(void)
dev_conf.socket_id = rte_cryptodev_socket_id(cdev_id);
dev_conf.nb_queue_pairs = qp;
- char mp_name[RTE_MEMPOOL_NAMESIZE];
- struct rte_mempool *sess_mp;
-
- unsigned int session_size = sizeof(struct rte_cryptodev_sym_session) +
- rte_cryptodev_get_private_session_size(cdev_id);
+ if (!socket_ctx[dev_conf.socket_id].session_pool) {
+ char mp_name[RTE_MEMPOOL_NAMESIZE];
+ struct rte_mempool *sess_mp;
- snprintf(mp_name, sizeof(mp_name), "sess_mp_%u", cdev_id);
- sess_mp = rte_mempool_create(mp_name,
+ snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
+ "sess_mp_%u", dev_conf.socket_id);
+ sess_mp = rte_mempool_create(mp_name,
CDEV_MP_NB_OBJS,
- session_size,
+ max_sess_sz,
CDEV_MP_CACHE_SZ,
0, NULL, NULL, NULL,
NULL, dev_conf.socket_id,
0);
-
- if (sess_mp == NULL) {
- printf("Failed to create device session mempool\n");
- return -ENOMEM;
+ if (sess_mp == NULL)
+ rte_exit(EXIT_FAILURE,
+ "Cannot create session pool on socket %d\n",
+ dev_conf.socket_id);
+ else
+ printf("Allocated session pool on socket %d\n",
+ dev_conf.socket_id);
+ socket_ctx[dev_conf.socket_id].session_pool = sess_mp;
}
- if (rte_cryptodev_configure(cdev_id, &dev_conf, sess_mp))
+ if (rte_cryptodev_configure(cdev_id, &dev_conf,
+ socket_ctx[dev_conf.socket_id].session_pool))
rte_panic("Failed to initialize cryptodev %u\n",
cdev_id);
@@ -1451,7 +1464,7 @@ main(int32_t argc, char **argv)
nb_lcores = rte_lcore_count();
- /* Replicate each contex per socket */
+ /* Replicate each context per socket */
for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
if (rte_lcore_is_enabled(lcore_id) == 0)
continue;
diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index 048e441..c927344 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -45,7 +45,7 @@
#include "esp.h"
static inline int
-create_session(struct ipsec_ctx *ipsec_ctx __rte_unused, struct ipsec_sa *sa)
+create_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa)
{
struct rte_cryptodev_info cdev_info;
unsigned long cdev_id_qp = 0;
@@ -72,7 +72,10 @@ create_session(struct ipsec_ctx *ipsec_ctx __rte_unused, struct ipsec_sa *sa)
ipsec_ctx->tbl[cdev_id_qp].qp);
sa->crypto_session = rte_cryptodev_sym_session_create(
- ipsec_ctx->tbl[cdev_id_qp].id, sa->xforms);
+ ipsec_ctx->session_pool);
+ rte_cryptodev_sym_session_init(ipsec_ctx->tbl[cdev_id_qp].id,
+ sa->crypto_session, sa->xforms,
+ ipsec_ctx->session_pool);
rte_cryptodev_info_get(ipsec_ctx->tbl[cdev_id_qp].id, &cdev_info);
if (cdev_info.sym.max_nb_sessions_per_qp > 0) {
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index 1d63161..593b0a2 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -139,6 +139,7 @@ struct ipsec_ctx {
uint16_t nb_qps;
uint16_t last_qp;
struct cdev_qp tbl[MAX_QP_PER_LCORE];
+ struct rte_mempool *session_pool;
};
struct cdev_key {
@@ -157,6 +158,7 @@ struct socket_ctx {
struct rt_ctx *rt_ip4;
struct rt_ctx *rt_ip6;
struct rte_mempool *mbuf_pool;
+ struct rte_mempool *session_pool;
};
struct cnt_blk {
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index c3c6f45..20f28d5 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -88,9 +88,8 @@ enum cdev_type {
#define MAX_KEY_SIZE 128
#define MAX_PKT_BURST 32
#define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
-
-#define NUM_SESSIONS 2048
-#define SESS_MEMPOOL_CACHE_SIZE 64
+#define MAX_SESSIONS 32
+#define SESSION_POOL_CACHE_SIZE 0
/*
* Configurable number of RX/TX ring descriptors
@@ -226,6 +225,7 @@ static const struct rte_eth_conf port_conf = {
struct rte_mempool *l2fwd_pktmbuf_pool;
struct rte_mempool *l2fwd_crypto_op_pool;
+struct rte_mempool *l2fwd_session_pool;
/* Per-port statistics struct */
struct l2fwd_port_statistics {
@@ -589,9 +589,9 @@ generate_random_key(uint8_t *key, unsigned length)
rte_exit(EXIT_FAILURE, "Failed to generate random key\n");
}
-static struct rte_cryptodev_sym_session *
+static int
initialize_crypto_session(struct l2fwd_crypto_options *options,
- uint8_t cdev_id)
+ uint8_t cdev_id, struct rte_cryptodev_sym_session *sess)
{
struct rte_crypto_sym_xform *first_xform;
@@ -608,12 +608,14 @@ initialize_crypto_session(struct l2fwd_crypto_options *options,
}
/* Setup Cipher Parameters */
- return rte_cryptodev_sym_session_create(cdev_id, first_xform);
+ return rte_cryptodev_sym_session_init(cdev_id, sess, first_xform,
+ l2fwd_session_pool);
}
static void
l2fwd_crypto_options_print(struct l2fwd_crypto_options *options);
+
/* main processing loop */
static void
l2fwd_main_loop(struct l2fwd_crypto_options *options)
@@ -629,6 +631,7 @@ l2fwd_main_loop(struct l2fwd_crypto_options *options)
US_PER_S * BURST_TX_DRAIN_US;
struct l2fwd_crypto_params *cparams;
struct l2fwd_crypto_params port_cparams[qconf->nb_crypto_devs];
+ struct rte_cryptodev_sym_session *session;
if (qconf->nb_rx_ports == 0) {
RTE_LOG(INFO, L2FWD, "lcore %u has nothing to do\n", lcore_id);
@@ -644,6 +647,8 @@ l2fwd_main_loop(struct l2fwd_crypto_options *options)
portid);
}
+ session = rte_cryptodev_sym_session_create(l2fwd_session_pool);
+
for (i = 0; i < qconf->nb_crypto_devs; i++) {
port_cparams[i].do_cipher = 0;
port_cparams[i].do_hash = 0;
@@ -701,11 +706,12 @@ l2fwd_main_loop(struct l2fwd_crypto_options *options)
port_cparams[i].cipher_algo = options->cipher_xform.cipher.algo;
}
- port_cparams[i].session = initialize_crypto_session(options,
- port_cparams[i].dev_id);
+ port_cparams[i].session = session;
- if (port_cparams[i].session == NULL)
+ if (initialize_crypto_session(options, port_cparams[i].dev_id,
+ port_cparams[i].session) < 0)
return;
+
RTE_LOG(INFO, L2FWD, " -- lcoreid=%u cryptoid=%u\n", lcore_id,
port_cparams[i].dev_id);
}
@@ -1554,6 +1560,8 @@ initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports,
enum rte_crypto_cipher_algorithm cap_cipher_algo;
enum rte_crypto_cipher_algorithm opt_cipher_algo;
int retval;
+ uint32_t max_sess_sz = 0, sess_sz;
+ char mp_name[RTE_MEMPOOL_NAMESIZE];
cdev_count = rte_cryptodev_count();
if (cdev_count == 0) {
@@ -1561,6 +1569,29 @@ initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports,
return -1;
}
+ for (cdev_id = 0; cdev_id < cdev_count; cdev_id++) {
+ sess_sz = rte_cryptodev_get_private_session_size(cdev_id);
+ if (sess_sz > max_sess_sz)
+ max_sess_sz = sess_sz;
+ }
+
+ snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "sess_mp_%u", rte_socket_id());
+ l2fwd_session_pool = rte_mempool_create(mp_name,
+ MAX_SESSIONS * cdev_count,
+ max_sess_sz,
+ SESSION_POOL_CACHE_SIZE,
+ 0, NULL, NULL, NULL,
+ NULL, rte_socket_id(),
+ 0);
+
+ if (l2fwd_session_pool == NULL) {
+ printf("Failed to create session mempool on socket %u\n",
+ rte_socket_id());
+ return -ENOMEM;
+ }
+
+ printf("Allocated session mempool on socket %u\n", rte_socket_id());
+
for (cdev_id = 0; cdev_id < cdev_count && enabled_cdev_count < nb_ports;
cdev_id++) {
struct rte_cryptodev_qp_conf qp_conf;
@@ -1796,27 +1827,8 @@ initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports,
cap->sym.auth.digest_size.min;
}
- unsigned int session_size = sizeof(struct rte_cryptodev_sym_session) +
- rte_cryptodev_get_private_session_size(enabled_cdevs[cdev_id]);
-
- char mp_name[RTE_MEMPOOL_NAMESIZE];
- struct rte_mempool *sess_mp;
-
- snprintf(mp_name, sizeof(mp_name), "sess_mp_%u", cdev_id);
- sess_mp = rte_mempool_create(mp_name,
- NUM_SESSIONS,
- session_size,
- SESS_MEMPOOL_CACHE_SIZE,
- 0, NULL, NULL, NULL,
- NULL, SOCKET_ID_ANY,
- 0);
-
- if (sess_mp == NULL) {
- printf("Failed to create device session mempool\n");
- return -ENOMEM;
- }
-
- retval = rte_cryptodev_configure(cdev_id, &conf, sess_mp);
+ retval = rte_cryptodev_configure(cdev_id, &conf,
+ l2fwd_session_pool);
if (retval < 0) {
printf("Failed to configure cryptodev %u", cdev_id);
return -1;
@@ -1825,7 +1837,7 @@ initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports,
qp_conf.nb_descriptors = 2048;
retval = rte_cryptodev_queue_pair_setup(cdev_id, 0, &qp_conf,
- SOCKET_ID_ANY);
+ rte_socket_id());
if (retval < 0) {
printf("Failed to setup queue pair %u on cryptodev %u",
0, cdev_id);
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index cc42c25..f5a72de 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -69,6 +69,8 @@
#include "rte_cryptodev.h"
#include "rte_cryptodev_pmd.h"
+static uint8_t nb_drivers;
+
struct rte_cryptodev rte_crypto_devices[RTE_CRYPTO_MAX_DEVS];
struct rte_cryptodev *rte_cryptodevs = &rte_crypto_devices[0];
@@ -1018,54 +1020,47 @@ rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
rte_spinlock_unlock(&rte_cryptodev_cb_lock);
}
-
-static void
-rte_cryptodev_sym_session_init(struct rte_mempool *mp,
- struct rte_cryptodev *dev,
- struct rte_cryptodev_sym_session *sess)
+int
+rte_cryptodev_sym_session_init(uint8_t dev_id,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_crypto_sym_xform *xforms,
+ struct rte_mempool *mp)
{
- memset(sess, 0, mp->elt_size);
+ struct rte_cryptodev *dev;
+ uint8_t index;
- if (dev->dev_ops->session_initialize)
- (*dev->dev_ops->session_initialize)(mp, sess);
-}
+ dev = rte_cryptodev_pmd_get_dev(dev_id);
+ if (sess == NULL || xforms == NULL || dev == NULL)
+ return -1;
-struct rte_cryptodev_sym_session *
-rte_cryptodev_sym_session_create(uint8_t dev_id,
- struct rte_crypto_sym_xform *xform)
-{
- struct rte_cryptodev *dev;
- struct rte_cryptodev_sym_session *sess;
- void *_sess;
+ index = dev->driver_id;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
- CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
- return NULL;
+ if (sess->sess_private_data[index] == NULL) {
+ if (dev->dev_ops->session_configure(dev, xforms, sess, mp) < 0) {
+ CDEV_LOG_ERR(
+ "dev_id %d failed to configure session details",
+ dev_id);
+ return -1;
+ }
}
- dev = &rte_crypto_devices[dev_id];
+ return 0;
+}
+
+struct rte_cryptodev_sym_session *
+rte_cryptodev_sym_session_create(struct rte_mempool *mp)
+{
+ struct rte_cryptodev_sym_session *sess;
/* Allocate a session structure from the session pool */
- if (rte_mempool_get(dev->data->session_pool, &_sess)) {
- CDEV_LOG_ERR("Couldn't get object from session mempool");
+ if (rte_mempool_get(mp, (void *)&sess)) {
+ CDEV_LOG_ERR("Couldn't get object from session mempool");
return NULL;
}
- sess = _sess;
-
- rte_cryptodev_sym_session_init(dev->data->session_pool, dev,
- sess);
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_configure, NULL);
- if (dev->dev_ops->session_configure(dev, xform, sess->_private) ==
- NULL) {
- CDEV_LOG_ERR("dev_id %d failed to configure session details",
- dev_id);
-
- /* Return session to mempool */
- rte_mempool_put(dev->data->session_pool, _sess);
- return NULL;
- }
+ /* Clear device session pointer */
+ memset(sess, 0, (sizeof(void *) * nb_drivers));
return sess;
}
@@ -1085,7 +1080,10 @@ rte_cryptodev_queue_pair_attach_sym_session(uint8_t dev_id, uint16_t qp_id,
/* The API is optional, not returning an error if the driver does not support it */
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->qp_attach_session, 0);
- if (dev->dev_ops->qp_attach_session(dev, qp_id, sess->_private)) {
+
+ void *sess_priv = get_session_private_data(sess, dev->driver_id);
+
+ if (dev->dev_ops->qp_attach_session(dev, qp_id, sess_priv)) {
CDEV_LOG_ERR("dev_id %d failed to attach qp: %d with session",
dev_id, qp_id);
return -EPERM;
@@ -1109,7 +1107,10 @@ rte_cryptodev_queue_pair_detach_sym_session(uint8_t dev_id, uint16_t qp_id,
/* The API is optional, not returning an error if the driver does not support it */
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->qp_detach_session, 0);
- if (dev->dev_ops->qp_detach_session(dev, qp_id, sess->_private)) {
+
+ void *sess_priv = get_session_private_data(sess, dev->driver_id);
+
+ if (dev->dev_ops->qp_detach_session(dev, qp_id, sess_priv)) {
CDEV_LOG_ERR("dev_id %d failed to detach qp: %d from session",
dev_id, qp_id);
return -EPERM;
@@ -1117,34 +1118,45 @@ rte_cryptodev_queue_pair_detach_sym_session(uint8_t dev_id, uint16_t qp_id,
return 0;
}
-struct rte_cryptodev_sym_session *
-rte_cryptodev_sym_session_free(uint8_t dev_id,
- struct rte_cryptodev_sym_session *sess)
+void
+rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess)
{
- struct rte_cryptodev *dev;
+ uint8_t i;
+ struct rte_mempool *sess_mp;
- if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
- CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
- return sess;
- }
+ if (sess == NULL)
+ return;
- dev = &rte_crypto_devices[dev_id];
+ void *sess_priv;
- /* Let device implementation clear session material */
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_clear, sess);
- dev->dev_ops->session_clear(dev, (void *)sess->_private);
+ for (i = 0; i < nb_drivers; i++) {
+ if (sess->sess_private_data[i] != NULL) {
+ sess_priv = get_session_private_data(sess, i);
+ sess_mp = rte_mempool_from_obj(sess_priv);
+ rte_mempool_put(sess_mp, sess_priv);
+ }
+ }
/* Return session to mempool */
- struct rte_mempool *mp = rte_mempool_from_obj(sess);
- rte_mempool_put(mp, (void *)sess);
+ sess_mp = rte_mempool_from_obj(sess);
+ rte_mempool_put(sess_mp, sess);
+}
- return NULL;
+unsigned int
+rte_cryptodev_get_header_session_size(void)
+{
+ /*
+ * Header contains pointers to the private data
+ * of all registered drivers
+ */
+ return (sizeof(void *) * nb_drivers);
}
unsigned int
rte_cryptodev_get_private_session_size(uint8_t dev_id)
{
struct rte_cryptodev *dev;
+ unsigned int header_size = sizeof(void *) * nb_drivers;
unsigned int priv_sess_size;
if (!rte_cryptodev_pmd_is_valid_dev(dev_id))
@@ -1157,6 +1169,14 @@ rte_cryptodev_get_private_session_size(uint8_t dev_id)
priv_sess_size = (*dev->dev_ops->session_get_size)(dev);
+ /*
+ * If size is less than session header size,
+ * return the latter, as this guarantees that
+ * sessionless operations will work
+ */
+ if (priv_sess_size < header_size)
+ return header_size;
+
return priv_sess_size;
}
@@ -1272,8 +1292,6 @@ struct cryptodev_driver {
uint8_t id;
};
-static uint8_t nb_drivers;
-
int
rte_cryptodev_driver_id_get(const char *name)
{
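From the application side, session setup is now a two-step flow:
allocate a device-independent header, then initialize it per device.
A minimal sketch, assuming a session mempool sess_mp, a configured
device dev_id and an xform chain already exist:

struct rte_cryptodev_sym_session *sess;

/* Allocate the device-independent session header */
sess = rte_cryptodev_sym_session_create(sess_mp);
if (sess == NULL)
	return -ENOMEM;

/* Attach device-specific private data; this call may be repeated
 * with other device ids to share one session across devices
 */
if (rte_cryptodev_sym_session_init(dev_id, sess, xform, sess_mp) < 0) {
	rte_cryptodev_sym_session_free(sess);
	return -EINVAL;
}

/* ... attach sess to crypto ops and enqueue them ... */

/* Returns the header and all per-driver private data to their pools */
rte_cryptodev_sym_session_free(sess);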
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 044a4aa..2204982 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -797,50 +797,59 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
/** Cryptodev symmetric crypto session */
struct rte_cryptodev_sym_session {
RTE_STD_C11
- __extension__ char _private[0];
+ __extension__ void *sess_private_data[0];
/**< Private session material */
};
/**
- * Initialise a session for symmetric cryptographic operations.
+ * Create symmetric crypto session (generic with no private data)
*
- * This function is used by the client to initialize immutable
- * parameters of symmetric cryptographic operation.
- * To perform the operation the rte_cryptodev_enqueue_burst function is
- * used. Each mbuf should contain a reference to the session
- * pointer returned from this function contained within it's crypto_op if a
- * session-based operation is being provisioned. Memory to contain the session
- * information is allocated from within mempool managed by the cryptodev.
- *
- * The rte_cryptodev_session_free must be called to free allocated
- * memory when the session is no longer required.
+ * @param mempool Symmetric session mempool to allocate session
+ * objects from
+ * @return
+ * - On success return pointer to sym-session
+ * - On failure returns NULL
+ */
+struct rte_cryptodev_sym_session *
+rte_cryptodev_sym_session_create(struct rte_mempool *mempool);
+
+/**
+ * Frees a symmetric crypto session header and all device type specific
+ * private data objects associated with it
*
- * @param dev_id The device identifier.
- * @param xform Crypto transform chain.
+ * @param session Session header to be freed
+ */
+void
+rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *session);
+/**
+ * Fill out private data for the device id, based on its device type
+ *
+ * @param dev_id ID of device that we want the session to be used on
+ * @param sess Session where the private data will be attached to
+ * @param xforms Symmetric crypto transform operations to apply on flow
+ * processed with this session
+ * @param mempool Mempool from which the private data is allocated.
*
* @return
- * Pointer to the created session or NULL
+ * - On success, zero.
+ * - On failure, a negative value.
*/
-extern struct rte_cryptodev_sym_session *
-rte_cryptodev_sym_session_create(uint8_t dev_id,
- struct rte_crypto_sym_xform *xform);
+int
+rte_cryptodev_sym_session_init(uint8_t dev_id,
+ struct rte_cryptodev_sym_session *sess,
+ struct rte_crypto_sym_xform *xforms,
+ struct rte_mempool *mempool);
/**
- * Free the memory associated with a previously allocated session.
- *
- * @param dev_id The device identifier.
- * @param session Session pointer previously allocated by
- * *rte_cryptodev_sym_session_create*.
+ * Get the size of the session header, accounting for all registered drivers.
*
* @return
- * NULL on successful freeing of session.
- * Session pointer on failure to free session.
+ * Size of the session header.
*/
-extern struct rte_cryptodev_sym_session *
-rte_cryptodev_sym_session_free(uint8_t dev_id,
- struct rte_cryptodev_sym_session *session);
+unsigned int
+rte_cryptodev_get_header_session_size(void);
/**
* Get the size of the private session data for a device.
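Since a single mempool can now back both session headers and per-driver
private data (the test suite below doubles max_nb_sessions for exactly
this reason), sizing it looks roughly like the sketch here, where
NB_SESSIONS is an application-chosen constant:

/* rte_cryptodev_get_private_session_size() already returns at least
 * the header size, so one element size fits both kinds of object
 */
unsigned int elt_size = rte_cryptodev_get_private_session_size(dev_id);

struct rte_mempool *sess_mp = rte_mempool_create("sess_mp",
		NB_SESSIONS * 2,	/* headers + private data objects */
		elt_size,
		0, 0,			/* cache size, private data size */
		NULL, NULL, NULL, NULL,
		SOCKET_ID_ANY, 0);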
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
index 5911b83..959a8ae 100644
--- a/lib/librte_cryptodev/rte_cryptodev_pmd.h
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -285,13 +285,16 @@ typedef void (*cryptodev_sym_initialize_session_t)(struct rte_mempool *mempool,
* @param dev Crypto device pointer
* @param xform Single or chain of crypto xforms
* @param priv_sess Pointer to cryptodev's private session structure
+ * @param mp Mempool where the private session is allocated
*
* @return
- * - Returns private session structure on success.
- * - Returns NULL on failure.
+ * - Returns 0 if the private session structure has been created successfully.
+ * - Returns -1 on failure.
*/
-typedef void * (*cryptodev_sym_configure_session_t)(struct rte_cryptodev *dev,
- struct rte_crypto_sym_xform *xform, void *session_private);
+typedef int (*cryptodev_sym_configure_session_t)(struct rte_cryptodev *dev,
+ struct rte_crypto_sym_xform *xform,
+ struct rte_cryptodev_sym_session *session,
+ struct rte_mempool *mp);
/**
* Free Crypto session.
@@ -413,6 +416,19 @@ void rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,
int
rte_cryptodev_pmd_create_dev_name(char *name, const char *dev_name_prefix);
+static inline void *
+get_session_private_data(struct rte_cryptodev_sym_session *sess,
+ uint8_t driver_id) {
+ return sess->sess_private_data[driver_id];
+}
+
+static inline void
+set_session_private_data(struct rte_cryptodev_sym_session *sess,
+ uint8_t driver_id, void *private_data)
+{
+ sess->sess_private_data[driver_id] = private_data;
+}
+
#ifdef __cplusplus
}
#endif
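On the data path, drivers fetch their private data back through the
accessor added above. A minimal sketch of the per-operation lookup a
PMD now performs, where mypmd_session and cryptodev_driver_id stand
for the driver's own symbols:

struct mypmd_session *sess = (struct mypmd_session *)
		get_session_private_data(op->sym->session,
				cryptodev_driver_id);

if (unlikely(sess == NULL))
	return -EINVAL;	/* session never initialized for this driver */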
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 8855a34..c117c4f 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -65,9 +65,11 @@ DPDK_17.08 {
rte_cryptodev_device_count_by_driver;
rte_cryptodev_driver_id_get;
rte_cryptodev_driver_name_get;
+ rte_cryptodev_get_header_session_size;
rte_cryptodev_get_private_session_size;
rte_cryptodev_pci_generic_probe;
rte_cryptodev_pci_generic_remove;
+ rte_cryptodev_sym_session_init;
rte_cryptodev_vdev_parse_init_params;
rte_cryptodev_vdev_pmd_init;
diff --git a/test/test/test_cryptodev.c b/test/test/test_cryptodev.c
index 3979145..9d08eb8 100644
--- a/test/test/test_cryptodev.c
+++ b/test/test/test_cryptodev.c
@@ -67,7 +67,6 @@ struct crypto_testsuite_params {
struct rte_mempool *large_mbuf_pool;
struct rte_mempool *op_mpool;
struct rte_mempool *session_mpool;
- struct rte_mempool *slave_session_mpool;
struct rte_cryptodev_config conf;
struct rte_cryptodev_qp_conf qp_conf;
@@ -384,12 +383,15 @@ testsuite_setup(void)
ts_params->conf.nb_queue_pairs = info.max_nb_queue_pairs;
ts_params->conf.socket_id = SOCKET_ID_ANY;
- unsigned int session_size = sizeof(struct rte_cryptodev_sym_session) +
- rte_cryptodev_get_private_session_size(dev_id);
+ unsigned int session_size = rte_cryptodev_get_private_session_size(dev_id);
+ /*
+ * Create mempool with maximum number of sessions * 2,
+ * to include the session headers
+ */
ts_params->session_mpool = rte_mempool_create(
"test_sess_mp",
- info.sym.max_nb_sessions,
+ info.sym.max_nb_sessions * 2,
session_size,
0, 0, NULL, NULL, NULL,
NULL, SOCKET_ID_ANY,
@@ -432,11 +434,10 @@ testsuite_teardown(void)
}
/* Free session mempools */
- if (ts_params->session_mpool != NULL)
+ if (ts_params->session_mpool != NULL) {
rte_mempool_free(ts_params->session_mpool);
-
- if (ts_params->slave_session_mpool != NULL)
- rte_mempool_free(ts_params->slave_session_mpool);
+ ts_params->session_mpool = NULL;
+ }
}
static int
@@ -487,8 +488,7 @@ ut_teardown(void)
/* free crypto session structure */
if (ut_params->sess) {
- rte_cryptodev_sym_session_free(ts_params->valid_devs[0],
- ut_params->sess);
+ rte_cryptodev_sym_session_free(ut_params->sess);
ut_params->sess = NULL;
}
@@ -1271,10 +1271,13 @@ test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
ut_params->auth_xform.auth.key.data = hmac_sha1_key;
ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA1;
- /* Create crypto session*/
ut_params->sess = rte_cryptodev_sym_session_create(
- ts_params->valid_devs[0],
- &ut_params->cipher_xform);
+ ts_params->session_mpool);
+
+ /* Create crypto session*/
+ rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
+ ut_params->sess, &ut_params->cipher_xform,
+ ts_params->session_mpool);
TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
/* Generate crypto op data structure */
@@ -1496,7 +1499,9 @@ test_AES_cipheronly_mb_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)),
BLKCIPHER_AES_CIPHERONLY_TYPE);
@@ -1513,7 +1518,9 @@ test_AES_docsis_mb_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)),
BLKCIPHER_AES_DOCSIS_TYPE);
@@ -1530,7 +1537,9 @@ test_AES_docsis_qat_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_AES_DOCSIS_TYPE);
@@ -1547,7 +1556,9 @@ test_DES_docsis_qat_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_DES_DOCSIS_TYPE);
@@ -1564,7 +1575,9 @@ test_authonly_mb_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)),
BLKCIPHER_AUTHONLY_TYPE);
@@ -1581,7 +1594,9 @@ test_AES_chain_mb_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)),
BLKCIPHER_AES_CHAIN_TYPE);
@@ -1600,7 +1615,9 @@ test_AES_cipheronly_scheduler_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD)),
BLKCIPHER_AES_CIPHERONLY_TYPE);
@@ -1617,7 +1634,9 @@ test_AES_chain_scheduler_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD)),
BLKCIPHER_AES_CHAIN_TYPE);
@@ -1634,7 +1653,9 @@ test_authonly_scheduler_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD)),
BLKCIPHER_AUTHONLY_TYPE);
@@ -1653,7 +1674,9 @@ test_AES_chain_openssl_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)),
BLKCIPHER_AES_CHAIN_TYPE);
@@ -1670,7 +1693,9 @@ test_AES_cipheronly_openssl_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)),
BLKCIPHER_AES_CIPHERONLY_TYPE);
@@ -1687,7 +1712,9 @@ test_AES_chain_qat_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_AES_CHAIN_TYPE);
@@ -1704,7 +1731,9 @@ test_AES_cipheronly_qat_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_AES_CIPHERONLY_TYPE);
@@ -1721,7 +1750,9 @@ test_AES_chain_dpaa2_sec_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD)),
BLKCIPHER_AES_CHAIN_TYPE);
@@ -1738,7 +1769,9 @@ test_AES_cipheronly_dpaa2_sec_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD)),
BLKCIPHER_AES_CIPHERONLY_TYPE);
@@ -1755,7 +1788,9 @@ test_authonly_openssl_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)),
BLKCIPHER_AUTHONLY_TYPE);
@@ -1772,7 +1807,9 @@ test_AES_chain_armv8_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_ARMV8_PMD)),
BLKCIPHER_AES_CHAIN_TYPE);
@@ -1792,6 +1829,7 @@ create_wireless_algo_hash_session(uint8_t dev_id,
{
uint8_t hash_key[key_len];
+ struct crypto_testsuite_params *ts_params = &testsuite_params;
struct crypto_unittest_params *ut_params = &unittest_params;
memcpy(hash_key, key, key_len);
@@ -1808,8 +1846,12 @@ create_wireless_algo_hash_session(uint8_t dev_id,
ut_params->auth_xform.auth.key.data = hash_key;
ut_params->auth_xform.auth.digest_length = auth_len;
ut_params->auth_xform.auth.add_auth_data_length = aad_len;
- ut_params->sess = rte_cryptodev_sym_session_create(dev_id,
- &ut_params->auth_xform);
+
+ ut_params->sess = rte_cryptodev_sym_session_create(
+ ts_params->session_mpool);
+
+ rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
+ &ut_params->auth_xform, ts_params->session_mpool);
TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
return 0;
}
@@ -1822,6 +1864,7 @@ create_wireless_algo_cipher_session(uint8_t dev_id,
{
uint8_t cipher_key[key_len];
+ struct crypto_testsuite_params *ts_params = &testsuite_params;
struct crypto_unittest_params *ut_params = &unittest_params;
memcpy(cipher_key, key, key_len);
@@ -1835,12 +1878,14 @@ create_wireless_algo_cipher_session(uint8_t dev_id,
ut_params->cipher_xform.cipher.key.data = cipher_key;
ut_params->cipher_xform.cipher.key.length = key_len;
+ ut_params->sess = rte_cryptodev_sym_session_create(
+ ts_params->session_mpool);
+
TEST_HEXDUMP(stdout, "key:", key, key_len);
/* Create Crypto session */
- ut_params->sess = rte_cryptodev_sym_session_create(dev_id,
- &ut_params->
- cipher_xform);
+ rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
+ &ut_params->cipher_xform, ts_params->session_mpool);
TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
return 0;
}
@@ -1951,6 +1996,7 @@ create_wireless_algo_cipher_auth_session(uint8_t dev_id,
{
uint8_t cipher_auth_key[key_len];
+ struct crypto_testsuite_params *ts_params = &testsuite_params;
struct crypto_unittest_params *ut_params = &unittest_params;
memcpy(cipher_auth_key, key, key_len);
@@ -1976,11 +2022,14 @@ create_wireless_algo_cipher_auth_session(uint8_t dev_id,
ut_params->cipher_xform.cipher.key.data = cipher_auth_key;
ut_params->cipher_xform.cipher.key.length = key_len;
+ ut_params->sess = rte_cryptodev_sym_session_create(
+ ts_params->session_mpool);
+
TEST_HEXDUMP(stdout, "key:", key, key_len);
/* Create Crypto session*/
- ut_params->sess = rte_cryptodev_sym_session_create(dev_id,
- &ut_params->cipher_xform);
+ rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
+ &ut_params->cipher_xform, ts_params->session_mpool);
TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
return 0;
@@ -1998,6 +2047,7 @@ create_wireless_cipher_auth_session(uint8_t dev_id,
uint8_t cipher_auth_key[key_len];
struct crypto_unittest_params *ut_params = &unittest_params;
+ struct crypto_testsuite_params *ts_params = &testsuite_params;
const uint8_t *key = tdata->key.data;
const uint8_t aad_len = tdata->aad.len;
const uint8_t auth_len = tdata->digest.len;
@@ -2025,11 +2075,14 @@ create_wireless_cipher_auth_session(uint8_t dev_id,
ut_params->cipher_xform.cipher.key.data = cipher_auth_key;
ut_params->cipher_xform.cipher.key.length = key_len;
+ ut_params->sess = rte_cryptodev_sym_session_create(
+ ts_params->session_mpool);
+
TEST_HEXDUMP(stdout, "key:", key, key_len);
/* Create Crypto session*/
- ut_params->sess = rte_cryptodev_sym_session_create(dev_id,
- &ut_params->cipher_xform);
+ rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
+ &ut_params->cipher_xform, ts_params->session_mpool);
TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
return 0;
@@ -2056,6 +2109,7 @@ create_wireless_algo_auth_cipher_session(uint8_t dev_id,
{
uint8_t auth_cipher_key[key_len];
+ struct crypto_testsuite_params *ts_params = &testsuite_params;
struct crypto_unittest_params *ut_params = &unittest_params;
memcpy(auth_cipher_key, key, key_len);
@@ -2078,11 +2132,14 @@ create_wireless_algo_auth_cipher_session(uint8_t dev_id,
ut_params->cipher_xform.cipher.key.data = auth_cipher_key;
ut_params->cipher_xform.cipher.key.length = key_len;
+ ut_params->sess = rte_cryptodev_sym_session_create(
+ ts_params->session_mpool);
+
TEST_HEXDUMP(stdout, "key:", key, key_len);
/* Create Crypto session*/
- ut_params->sess = rte_cryptodev_sym_session_create(dev_id,
- &ut_params->auth_xform);
+ rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
+ &ut_params->auth_xform, ts_params->session_mpool);
TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
@@ -4696,7 +4753,9 @@ test_3DES_chain_qat_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_3DES_CHAIN_TYPE);
@@ -4713,7 +4772,9 @@ test_DES_cipheronly_qat_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_DES_CIPHERONLY_TYPE);
@@ -4730,7 +4791,9 @@ test_DES_docsis_openssl_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)),
BLKCIPHER_DES_DOCSIS_TYPE);
@@ -4747,7 +4810,9 @@ test_3DES_chain_dpaa2_sec_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD)),
BLKCIPHER_3DES_CHAIN_TYPE);
@@ -4764,7 +4829,9 @@ test_3DES_cipheronly_dpaa2_sec_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD)),
BLKCIPHER_3DES_CIPHERONLY_TYPE);
@@ -4781,7 +4848,9 @@ test_3DES_cipheronly_qat_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_3DES_CIPHERONLY_TYPE);
@@ -4798,7 +4867,9 @@ test_3DES_chain_openssl_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)),
BLKCIPHER_3DES_CHAIN_TYPE);
@@ -4815,7 +4886,9 @@ test_3DES_cipheronly_openssl_all(void)
int status;
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
- ts_params->op_mpool, ts_params->valid_devs[0],
+ ts_params->op_mpool,
+ ts_params->session_mpool,
+ ts_params->valid_devs[0],
rte_cryptodev_driver_id_get(
RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)),
BLKCIPHER_3DES_CIPHERONLY_TYPE);
@@ -4835,6 +4908,7 @@ create_gcm_session(uint8_t dev_id, enum rte_crypto_cipher_operation op,
{
uint8_t cipher_key[key_len];
+ struct crypto_testsuite_params *ts_params = &testsuite_params;
struct crypto_unittest_params *ut_params = &unittest_params;
memcpy(cipher_key, key, key_len);
@@ -4862,16 +4936,19 @@ create_gcm_session(uint8_t dev_id, enum rte_crypto_cipher_operation op,
ut_params->auth_xform.auth.key.length = 0;
ut_params->auth_xform.auth.key.data = NULL;
+ ut_params->sess = rte_cryptodev_sym_session_create(
+ ts_params->session_mpool);
+
if (op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) {
ut_params->cipher_xform.next = &ut_params->auth_xform;
/* Create Crypto session*/
- ut_params->sess = rte_cryptodev_sym_session_create(dev_id,
- &ut_params->cipher_xform);
+ rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
+ &ut_params->cipher_xform, ts_params->session_mpool);
} else {/* Create Crypto session*/
ut_params->auth_xform.next = &ut_params->cipher_xform;
- ut_params->sess = rte_cryptodev_sym_session_create(dev_id,
- &ut_params->auth_xform);
+ rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
+ &ut_params->auth_xform, ts_params->session_mpool);
}
TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
@@ -5769,7 +5846,11 @@ static int MD5_HMAC_create_session(struct crypto_testsuite_params *ts_params,
ut_params->auth_xform.auth.key.data = key;
ut_params->sess = rte_cryptodev_sym_session_create(
- ts_params->valid_devs[0], &ut_params->auth_xform);
+ ts_params->session_mpool);
+
+ rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
+ ut_params->sess, &ut_params->auth_xform,
+ ts_params->session_mpool);
if (ut_params->sess == NULL)
return TEST_FAILED;
@@ -5945,9 +6026,13 @@ test_multi_session(void)
/* Create multiple crypto sessions*/
for (i = 0; i < dev_info.sym.max_nb_sessions; i++) {
+
sessions[i] = rte_cryptodev_sym_session_create(
- ts_params->valid_devs[0],
- &ut_params->auth_xform);
+ ts_params->session_mpool);
+
+ rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
+ sessions[i], &ut_params->auth_xform,
+ ts_params->session_mpool);
TEST_ASSERT_NOT_NULL(sessions[i],
"Session creation failed at session number %u",
i);
@@ -5983,14 +6068,14 @@ test_multi_session(void)
}
/* Next session create should fail */
- sessions[i] = rte_cryptodev_sym_session_create(ts_params->valid_devs[0],
- &ut_params->auth_xform);
+ rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
+ sessions[i], &ut_params->auth_xform,
+ ts_params->session_mpool);
TEST_ASSERT_NULL(sessions[i],
"Session creation succeeded unexpectedly!");
for (i = 0; i < dev_info.sym.max_nb_sessions; i++)
- rte_cryptodev_sym_session_free(ts_params->valid_devs[0],
- sessions[i]);
+ rte_cryptodev_sym_session_free(sessions[i]);
rte_free(sessions);
@@ -6048,6 +6133,9 @@ test_multi_session_random_usage(void)
* dev_info.sym.max_nb_sessions) + 1, 0);
for (i = 0; i < MB_SESSION_NUMBER; i++) {
+ sessions[i] = rte_cryptodev_sym_session_create(
+ ts_params->session_mpool);
+
rte_memcpy(&ut_paramz[i].ut_params, &testsuite_params,
sizeof(struct crypto_unittest_params));
@@ -6056,9 +6144,11 @@ test_multi_session_random_usage(void)
ut_paramz[i].cipher_key, ut_paramz[i].hmac_key);
/* Create multiple crypto sessions*/
- sessions[i] = rte_cryptodev_sym_session_create(
+ rte_cryptodev_sym_session_init(
ts_params->valid_devs[0],
- &ut_paramz[i].ut_params.auth_xform);
+ sessions[i],
+ &ut_paramz[i].ut_params.auth_xform,
+ ts_params->session_mpool);
TEST_ASSERT_NOT_NULL(sessions[i],
"Session creation failed at session number %u",
@@ -6102,8 +6192,7 @@ test_multi_session_random_usage(void)
}
for (i = 0; i < MB_SESSION_NUMBER; i++)
- rte_cryptodev_sym_session_free(ts_params->valid_devs[0],
- sessions[i]);
+ rte_cryptodev_sym_session_free(sessions[i]);
rte_free(sessions);
@@ -6127,9 +6216,14 @@ test_null_cipher_only_operation(void)
ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_NULL;
ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
- /* Create Crypto session*/
ut_params->sess = rte_cryptodev_sym_session_create(
- ts_params->valid_devs[0], &ut_params->cipher_xform);
+ ts_params->session_mpool);
+
+ /* Create Crypto session*/
+ rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
+ ut_params->sess,
+ &ut_params->cipher_xform,
+ ts_params->session_mpool);
TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
/* Generate Crypto op data structure */
@@ -6184,9 +6278,13 @@ test_null_auth_only_operation(void)
ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_NULL;
ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
- /* Create Crypto session*/
ut_params->sess = rte_cryptodev_sym_session_create(
- ts_params->valid_devs[0], &ut_params->auth_xform);
+ ts_params->session_mpool);
+
+ /* Create Crypto session*/
+ rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
+ ut_params->sess, &ut_params->auth_xform,
+ ts_params->session_mpool);
TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
/* Generate Crypto op data structure */
@@ -6240,9 +6338,13 @@ test_null_cipher_auth_operation(void)
ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_NULL;
ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
- /* Create Crypto session*/
ut_params->sess = rte_cryptodev_sym_session_create(
- ts_params->valid_devs[0], &ut_params->cipher_xform);
+ ts_params->session_mpool);
+
+ /* Create Crypto session*/
+ rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
+ ut_params->sess, &ut_params->cipher_xform,
+ ts_params->session_mpool);
TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
/* Generate Crypto op data structure */
@@ -6306,9 +6408,13 @@ test_null_auth_cipher_operation(void)
ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_NULL;
ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
- /* Create Crypto session*/
ut_params->sess = rte_cryptodev_sym_session_create(
- ts_params->valid_devs[0], &ut_params->cipher_xform);
+ ts_params->session_mpool);
+
+ /* Create Crypto session*/
+ rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
+ ut_params->sess, &ut_params->cipher_xform,
+ ts_params->session_mpool);
TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
/* Generate Crypto op data structure */
@@ -6354,6 +6460,7 @@ test_null_invalid_operation(void)
{
struct crypto_testsuite_params *ts_params = &testsuite_params;
struct crypto_unittest_params *ut_params = &unittest_params;
+ int ret;
/* Setup Cipher Parameters */
ut_params->cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
@@ -6362,10 +6469,14 @@ test_null_invalid_operation(void)
ut_params->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
- /* Create Crypto session*/
ut_params->sess = rte_cryptodev_sym_session_create(
- ts_params->valid_devs[0], &ut_params->cipher_xform);
- TEST_ASSERT_NULL(ut_params->sess,
+ ts_params->session_mpool);
+
+ /* Create Crypto session*/
+ ret = rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
+ ut_params->sess, &ut_params->cipher_xform,
+ ts_params->session_mpool);
+ TEST_ASSERT(ret == -1,
"Session creation succeeded unexpectedly");
@@ -6376,10 +6487,14 @@ test_null_invalid_operation(void)
ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
- /* Create Crypto session*/
ut_params->sess = rte_cryptodev_sym_session_create(
- ts_params->valid_devs[0], &ut_params->auth_xform);
- TEST_ASSERT_NULL(ut_params->sess,
+ ts_params->session_mpool);
+
+ /* Create Crypto session*/
+ ret = rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
+ ut_params->sess, &ut_params->auth_xform,
+ ts_params->session_mpool);
+ TEST_ASSERT(ret == -1,
"Session creation succeeded unexpectedly");
return TEST_SUCCESS;
@@ -6413,9 +6528,13 @@ test_null_burst_operation(void)
ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_NULL;
ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_GENERATE;
- /* Create Crypto session*/
ut_params->sess = rte_cryptodev_sym_session_create(
- ts_params->valid_devs[0], &ut_params->cipher_xform);
+ ts_params->session_mpool);
+
+ /* Create Crypto session*/
+ rte_cryptodev_sym_session_init(ts_params->valid_devs[0],
+ ut_params->sess, &ut_params->cipher_xform,
+ ts_params->session_mpool);
TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
TEST_ASSERT_EQUAL(rte_crypto_op_bulk_alloc(ts_params->op_mpool,
@@ -6556,6 +6675,7 @@ static int create_gmac_session(uint8_t dev_id,
{
uint8_t cipher_key[tdata->key.len];
+ struct crypto_testsuite_params *ts_params = &testsuite_params;
struct crypto_unittest_params *ut_params = &unittest_params;
memcpy(cipher_key, tdata->key.data, tdata->key.len);
@@ -6580,8 +6700,12 @@ static int create_gmac_session(uint8_t dev_id,
ut_params->cipher_xform.next = &ut_params->auth_xform;
- ut_params->sess = rte_cryptodev_sym_session_create(dev_id,
- &ut_params->cipher_xform);
+ ut_params->sess = rte_cryptodev_sym_session_create(
+ ts_params->session_mpool);
+
+ rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
+ &ut_params->cipher_xform,
+ ts_params->session_mpool);
TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
@@ -6911,6 +7035,7 @@ create_auth_session(struct crypto_unittest_params *ut_params,
const struct test_crypto_vector *reference,
enum rte_crypto_auth_operation auth_op)
{
+ struct crypto_testsuite_params *ts_params = &testsuite_params;
uint8_t auth_key[reference->auth_key.len + 1];
memcpy(auth_key, reference->auth_key.data, reference->auth_key.len);
@@ -6925,9 +7050,13 @@ create_auth_session(struct crypto_unittest_params *ut_params,
ut_params->auth_xform.auth.digest_length = reference->digest.len;
ut_params->auth_xform.auth.add_auth_data_length = reference->aad.len;
+ ut_params->sess = rte_cryptodev_sym_session_create(
+ ts_params->session_mpool);
+
/* Create Crypto session*/
- ut_params->sess = rte_cryptodev_sym_session_create(dev_id,
- &ut_params->auth_xform);
+ rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
+ &ut_params->auth_xform,
+ ts_params->session_mpool);
TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
@@ -6941,6 +7070,7 @@ create_auth_cipher_session(struct crypto_unittest_params *ut_params,
enum rte_crypto_auth_operation auth_op,
enum rte_crypto_cipher_operation cipher_op)
{
+ struct crypto_testsuite_params *ts_params = &testsuite_params;
uint8_t cipher_key[reference->cipher_key.len + 1];
uint8_t auth_key[reference->auth_key.len + 1];
@@ -6966,9 +7096,13 @@ create_auth_cipher_session(struct crypto_unittest_params *ut_params,
ut_params->cipher_xform.cipher.key.data = cipher_key;
ut_params->cipher_xform.cipher.key.length = reference->cipher_key.len;
+ ut_params->sess = rte_cryptodev_sym_session_create(
+ ts_params->session_mpool);
+
/* Create Crypto session*/
- ut_params->sess = rte_cryptodev_sym_session_create(dev_id,
- &ut_params->auth_xform);
+ rte_cryptodev_sym_session_init(dev_id, ut_params->sess,
+ &ut_params->auth_xform,
+ ts_params->session_mpool);
TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
@@ -7860,30 +7994,32 @@ test_scheduler_attach_slave_op(void)
continue;
/*
- * Create a separate mempool for the slaves, as they need different
- * session size and then configure them to store the pointer
- * to this mempool
+ * Create the session mempool again, since there are now new devices
+ * that will use the mempool.
*/
- unsigned int session_size = sizeof(struct rte_cryptodev_sym_session) +
- rte_cryptodev_get_private_session_size(i);
-
- if (ts_params->slave_session_mpool == NULL) {
- ts_params->slave_session_mpool = rte_mempool_create(
- "test_slave_sess_mp",
- info.sym.max_nb_sessions,
- session_size,
- 0, 0, NULL, NULL, NULL, NULL,
- SOCKET_ID_ANY, 0);
+ if (ts_params->session_mpool) {
+ rte_mempool_free(ts_params->session_mpool);
+ ts_params->session_mpool = NULL;
+ }
+ unsigned int session_size = rte_cryptodev_get_private_session_size(i);
- TEST_ASSERT_NOT_NULL(ts_params->slave_session_mpool,
+ /*
+ * Create mempool with maximum number of sessions * 2,
+ * to include the session headers
+ */
+ if (ts_params->session_mpool == NULL) {
+ ts_params->session_mpool = rte_mempool_create(
+ "test_sess_mp",
+ info.sym.max_nb_sessions * 2,
+ session_size,
+ 0, 0, NULL, NULL, NULL,
+ NULL, SOCKET_ID_ANY,
+ 0);
+
+ TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
"session mempool allocation failed");
}
- TEST_ASSERT_SUCCESS(rte_cryptodev_configure(i,
- &ts_params->conf, ts_params->slave_session_mpool),
- "Failed to configure cryptodev %u with %u qps",
- i, ts_params->conf.nb_queue_pairs);
-
ret = rte_cryptodev_scheduler_slave_attach(sched_id,
(uint8_t)i);
diff --git a/test/test/test_cryptodev_blockcipher.c b/test/test/test_cryptodev_blockcipher.c
index 4bc370d..b2e600f 100644
--- a/test/test/test_cryptodev_blockcipher.c
+++ b/test/test/test_cryptodev_blockcipher.c
@@ -52,6 +52,7 @@ static int
test_blockcipher_one_case(const struct blockcipher_test_case *t,
struct rte_mempool *mbuf_pool,
struct rte_mempool *op_mpool,
+ struct rte_mempool *sess_mpool,
uint8_t dev_id,
int driver_id,
char *test_msg)
@@ -64,8 +65,8 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
struct rte_crypto_sym_xform *init_xform = NULL;
struct rte_crypto_sym_op *sym_op = NULL;
struct rte_crypto_op *op = NULL;
- struct rte_cryptodev_sym_session *sess = NULL;
struct rte_cryptodev_info dev_info;
+ struct rte_cryptodev_sym_session *sess = NULL;
int status = TEST_SUCCESS;
const struct blockcipher_test_data *tdata = t->test_data;
@@ -349,8 +350,10 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
/* create session for sessioned op */
if (!(t->feature_mask & BLOCKCIPHER_TEST_FEATURE_SESSIONLESS)) {
- sess = rte_cryptodev_sym_session_create(dev_id,
- init_xform);
+ sess = rte_cryptodev_sym_session_create(sess_mpool);
+
+ rte_cryptodev_sym_session_init(dev_id, sess, init_xform,
+ sess_mpool);
if (!sess) {
snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, "line %u "
"FAILED: %s", __LINE__,
@@ -448,7 +451,6 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
else
auth_res = pktmbuf_mtod_offset(iobuf,
tdata->ciphertext.len);
-
if (memcmp(auth_res, tdata->digest.data, digest_len)) {
snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, "line %u "
"FAILED: %s", __LINE__, "Generated "
@@ -577,7 +579,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
error_exit:
if (!(t->feature_mask & BLOCKCIPHER_TEST_FEATURE_SESSIONLESS)) {
if (sess)
- rte_cryptodev_sym_session_free(dev_id, sess);
+ rte_cryptodev_sym_session_free(sess);
if (cipher_xform)
rte_free(cipher_xform);
if (auth_xform)
@@ -599,6 +601,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
int
test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
struct rte_mempool *op_mpool,
+ struct rte_mempool *sess_mpool,
uint8_t dev_id,
int driver_id,
enum blockcipher_test_type test_type)
@@ -690,7 +693,7 @@ test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
continue;
status = test_blockcipher_one_case(tc, mbuf_pool, op_mpool,
- dev_id, driver_id, test_msg);
+ sess_mpool, dev_id, driver_id, test_msg);
printf(" %u) TestCase %s %s\n", test_index ++,
tc->test_descr, test_msg);
diff --git a/test/test/test_cryptodev_blockcipher.h b/test/test/test_cryptodev_blockcipher.h
index 22fb420..22b8d20 100644
--- a/test/test/test_cryptodev_blockcipher.h
+++ b/test/test/test_cryptodev_blockcipher.h
@@ -125,6 +125,7 @@ struct blockcipher_test_data {
int
test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
struct rte_mempool *op_mpool,
+ struct rte_mempool *sess_mpool,
uint8_t dev_id,
int driver_id,
enum blockcipher_test_type test_type);
diff --git a/test/test/test_cryptodev_perf.c b/test/test/test_cryptodev_perf.c
index 526be82..68c5fdd 100644
--- a/test/test/test_cryptodev_perf.c
+++ b/test/test/test_cryptodev_perf.c
@@ -50,7 +50,7 @@
struct crypto_testsuite_params {
struct rte_mempool *mbuf_mp;
struct rte_mempool *op_mpool;
- struct rte_mempool *session_mpool;
+ struct rte_mempool *sess_mp;
uint16_t nb_queue_pairs;
@@ -100,6 +100,8 @@ struct symmetric_session_attrs {
uint32_t digest_len;
};
+static struct rte_cryptodev_sym_session *test_crypto_session;
+
#define ALIGN_POW2_ROUNDUP(num, align) \
(((num) + (align) - 1) & ~((align) - 1))
@@ -148,17 +150,17 @@ struct crypto_unittest_params {
uint8_t *digest;
};
-static struct rte_cryptodev_sym_session *
+static int
test_perf_create_snow3g_session(uint8_t dev_id, enum chain_mode chain,
enum rte_crypto_cipher_algorithm cipher_algo,
unsigned int cipher_key_len,
enum rte_crypto_auth_algorithm auth_algo);
-static struct rte_cryptodev_sym_session *
+static int
test_perf_create_openssl_session(uint8_t dev_id, enum chain_mode chain,
enum rte_crypto_cipher_algorithm cipher_algo,
unsigned int cipher_key_len,
enum rte_crypto_auth_algorithm auth_algo);
-static struct rte_cryptodev_sym_session *
+static int
test_perf_create_armv8_session(uint8_t dev_id, enum chain_mode chain,
enum rte_crypto_cipher_algorithm cipher_algo,
unsigned int cipher_key_len,
@@ -399,7 +401,7 @@ testsuite_setup(void)
unsigned int session_size = sizeof(struct rte_cryptodev_sym_session) +
rte_cryptodev_get_private_session_size(ts_params->dev_id);
- ts_params->session_mpool = rte_mempool_create(
+ ts_params->sess_mp = rte_mempool_create(
"test_sess_mp_perf",
info.sym.max_nb_sessions,
session_size,
@@ -407,11 +409,11 @@ testsuite_setup(void)
NULL, SOCKET_ID_ANY,
0);
- TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
+ TEST_ASSERT_NOT_NULL(ts_params->sess_mp,
"session mempool allocation failed");
TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->dev_id,
- &ts_params->conf, ts_params->session_mpool),
+ &ts_params->conf, ts_params->sess_mp),
"Failed to configure cryptodev %u",
ts_params->dev_id);
@@ -441,8 +443,10 @@ testsuite_teardown(void)
RTE_LOG(DEBUG, USER1, "CRYPTO_PERF_OP POOL count %u\n",
rte_mempool_avail_count(ts_params->op_mpool));
/* Free session mempool */
- if (ts_params->session_mpool != NULL)
- rte_mempool_free(ts_params->session_mpool);
+ if (ts_params->sess_mp != NULL) {
+ rte_mempool_free(ts_params->sess_mp);
+ ts_params->sess_mp = NULL;
+ }
}
@@ -476,8 +480,7 @@ ut_teardown(void)
/* free crypto session structure */
if (ut_params->sess)
- rte_cryptodev_sym_session_free(ts_params->dev_id,
- ut_params->sess);
+ rte_cryptodev_sym_session_free(ut_params->sess);
/* free crypto operation structure */
if (ut_params->op)
@@ -1956,10 +1959,13 @@ test_perf_crypto_qp_vary_burst_size(uint16_t dev_num)
ut_params->auth_xform.auth.digest_length = DIGEST_BYTE_LENGTH_SHA256;
/* Create Crypto session*/
- ut_params->sess = rte_cryptodev_sym_session_create(ts_params->dev_id,
- &ut_params->cipher_xform);
- TEST_ASSERT_NOT_NULL(ut_params->sess, "Session creation failed");
+ test_crypto_session = rte_cryptodev_sym_session_create(ts_params->sess_mp);
+
+ rte_cryptodev_sym_session_init(ts_params->dev_id, test_crypto_session,
+ &ut_params->cipher_xform, ts_params->sess_mp);
+
+ TEST_ASSERT_NOT_NULL(test_crypto_session, "Session creation failed");
/* Generate Crypto op data structure(s) */
for (i = 0; i < num_to_submit ; i++) {
@@ -1981,7 +1987,7 @@ test_perf_crypto_qp_vary_burst_size(uint16_t dev_num)
rte_crypto_op_alloc(ts_params->op_mpool,
RTE_CRYPTO_OP_TYPE_SYMMETRIC);
- rte_crypto_op_attach_sym_session(op, ut_params->sess);
+ rte_crypto_op_attach_sym_session(op, test_crypto_session);
op->sym->auth.digest.data = ut_params->digest;
op->sym->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(m,
@@ -2099,9 +2105,12 @@ test_perf_snow3G_optimise_cyclecount(struct perf_test_params *pparams)
}
/* Create Crypto session*/
- sess = test_perf_create_snow3g_session(ts_params->dev_id,
+ if (test_perf_create_snow3g_session(ts_params->dev_id,
pparams->chain, pparams->cipher_algo,
- pparams->cipher_key_length, pparams->auth_algo);
+ pparams->cipher_key_length, pparams->auth_algo) == 0)
+ sess = test_crypto_session;
+ else
+ sess = NULL;
TEST_ASSERT_NOT_NULL(sess, "Session creation failed");
/* Generate Crypto op data structure(s)*/
@@ -2197,7 +2206,7 @@ test_perf_snow3G_optimise_cyclecount(struct perf_test_params *pparams)
rte_pktmbuf_free(c_ops[i]->sym->m_src);
rte_crypto_op_free(c_ops[i]);
}
- rte_cryptodev_sym_session_free(ts_params->dev_id, sess);
+ rte_cryptodev_sym_session_free(sess);
return TEST_SUCCESS;
}
@@ -2278,9 +2287,13 @@ test_perf_openssl_optimise_cyclecount(struct perf_test_params *pparams)
}
/* Create Crypto session*/
- sess = test_perf_create_openssl_session(ts_params->dev_id,
+ if (test_perf_create_openssl_session(ts_params->dev_id,
pparams->chain, pparams->cipher_algo,
- pparams->cipher_key_length, pparams->auth_algo);
+ pparams->cipher_key_length, pparams->auth_algo) == 0)
+ sess = test_crypto_session;
+ else
+ sess = NULL;
+
TEST_ASSERT_NOT_NULL(sess, "Session creation failed");
/* Generate Crypto op data structure(s)*/
@@ -2401,7 +2414,7 @@ test_perf_openssl_optimise_cyclecount(struct perf_test_params *pparams)
rte_pktmbuf_free(c_ops[i]->sym->m_src);
rte_crypto_op_free(c_ops[i]);
}
- rte_cryptodev_sym_session_free(ts_params->dev_id, sess);
+ rte_cryptodev_sym_session_free(sess);
return TEST_SUCCESS;
}
@@ -2430,9 +2443,12 @@ test_perf_armv8_optimise_cyclecount(struct perf_test_params *pparams)
}
/* Create Crypto session*/
- sess = test_perf_create_armv8_session(ts_params->dev_id,
+ if (test_perf_create_armv8_session(ts_params->dev_id,
pparams->chain, pparams->cipher_algo,
- pparams->cipher_key_length, pparams->auth_algo);
+ pparams->cipher_key_length, pparams->auth_algo) == 0)
+ sess = test_crypto_session;
+ else
+ sess = NULL;
TEST_ASSERT_NOT_NULL(sess, "Session creation failed");
/* Generate Crypto op data structure(s)*/
@@ -2644,12 +2660,13 @@ static uint8_t snow3g_hash_key[] = {
0x1E, 0x26, 0x98, 0xD2, 0xE2, 0x2A, 0xD5, 0x7E
};
-static struct rte_cryptodev_sym_session *
+static int
test_perf_create_aes_sha_session(uint8_t dev_id, enum chain_mode chain,
enum rte_crypto_cipher_algorithm cipher_algo,
unsigned cipher_key_len,
enum rte_crypto_auth_algorithm auth_algo)
{
+ struct crypto_testsuite_params *ts_params = &testsuite_params;
struct rte_crypto_sym_xform cipher_xform = { 0 };
struct rte_crypto_sym_xform auth_xform = { 0 };
@@ -2671,33 +2688,42 @@ test_perf_create_aes_sha_session(uint8_t dev_id, enum chain_mode chain,
auth_xform.auth.digest_length =
get_auth_digest_length(auth_algo);
}
+
+ test_crypto_session = rte_cryptodev_sym_session_create(ts_params->sess_mp);
switch (chain) {
case CIPHER_HASH:
cipher_xform.next = &auth_xform;
auth_xform.next = NULL;
/* Create Crypto session*/
- return rte_cryptodev_sym_session_create(dev_id, &cipher_xform);
+ return rte_cryptodev_sym_session_init(dev_id,
+ test_crypto_session, &cipher_xform,
+ ts_params->sess_mp);
case HASH_CIPHER:
auth_xform.next = &cipher_xform;
cipher_xform.next = NULL;
/* Create Crypto session*/
- return rte_cryptodev_sym_session_create(dev_id, &auth_xform);
+ return rte_cryptodev_sym_session_init(dev_id,
+ test_crypto_session, &auth_xform,
+ ts_params->sess_mp);
case CIPHER_ONLY:
cipher_xform.next = NULL;
/* Create Crypto session*/
- return rte_cryptodev_sym_session_create(dev_id, &cipher_xform);
+ return rte_cryptodev_sym_session_init(dev_id,
+ test_crypto_session, &cipher_xform,
+ ts_params->sess_mp);
default:
- return NULL;
+ return -1;
}
}
#define SNOW3G_CIPHER_IV_LENGTH 16
-static struct rte_cryptodev_sym_session *
+static int
test_perf_create_snow3g_session(uint8_t dev_id, enum chain_mode chain,
enum rte_crypto_cipher_algorithm cipher_algo, unsigned cipher_key_len,
enum rte_crypto_auth_algorithm auth_algo)
{
+ struct crypto_testsuite_params *ts_params = &testsuite_params;
struct rte_crypto_sym_xform cipher_xform = {0};
struct rte_crypto_sym_xform auth_xform = {0};
@@ -2720,36 +2746,46 @@ test_perf_create_snow3g_session(uint8_t dev_id, enum chain_mode chain,
auth_xform.auth.key.length = get_auth_key_max_length(auth_algo);
auth_xform.auth.digest_length = get_auth_digest_length(auth_algo);
+ test_crypto_session = rte_cryptodev_sym_session_create(ts_params->sess_mp);
switch (chain) {
case CIPHER_HASH:
cipher_xform.next = &auth_xform;
auth_xform.next = NULL;
/* Create Crypto session*/
- return rte_cryptodev_sym_session_create(dev_id, &cipher_xform);
+ return rte_cryptodev_sym_session_init(dev_id,
+ test_crypto_session, &cipher_xform,
+ ts_params->sess_mp);
case HASH_CIPHER:
auth_xform.next = &cipher_xform;
cipher_xform.next = NULL;
/* Create Crypto session*/
- return rte_cryptodev_sym_session_create(dev_id, &auth_xform);
+ return rte_cryptodev_sym_session_init(dev_id,
+ test_crypto_session, &auth_xform,
+ ts_params->sess_mp);
case CIPHER_ONLY:
cipher_xform.next = NULL;
/* Create Crypto session*/
- return rte_cryptodev_sym_session_create(dev_id, &cipher_xform);
+ return rte_cryptodev_sym_session_init(dev_id,
+ test_crypto_session, &cipher_xform,
+ ts_params->sess_mp);
case HASH_ONLY:
auth_xform.next = NULL;
/* Create Crypto session */
- return rte_cryptodev_sym_session_create(dev_id, &auth_xform);
+ return rte_cryptodev_sym_session_init(dev_id,
+ test_crypto_session, &auth_xform,
+ ts_params->sess_mp);
default:
- return NULL;
+ return -1;
}
}
-static struct rte_cryptodev_sym_session *
+static int
test_perf_create_openssl_session(uint8_t dev_id, enum chain_mode chain,
enum rte_crypto_cipher_algorithm cipher_algo,
unsigned int cipher_key_len,
enum rte_crypto_auth_algorithm auth_algo)
{
+ struct crypto_testsuite_params *ts_params = &testsuite_params;
struct rte_crypto_sym_xform cipher_xform = { 0 };
struct rte_crypto_sym_xform auth_xform = { 0 };
@@ -2769,7 +2805,7 @@ test_perf_create_openssl_session(uint8_t dev_id, enum chain_mode chain,
cipher_xform.cipher.key.data = aes_key;
break;
default:
- return NULL;
+ return -1;
}
cipher_xform.cipher.key.length = cipher_key_len;
@@ -2787,34 +2823,40 @@ test_perf_create_openssl_session(uint8_t dev_id, enum chain_mode chain,
auth_xform.auth.key.data = NULL;
break;
default:
- return NULL;
+ return -1;
}
auth_xform.auth.key.length = get_auth_key_max_length(auth_algo);
auth_xform.auth.digest_length = get_auth_digest_length(auth_algo);
+ test_crypto_session = rte_cryptodev_sym_session_create(ts_params->sess_mp);
switch (chain) {
case CIPHER_HASH:
cipher_xform.next = &auth_xform;
auth_xform.next = NULL;
/* Create Crypto session*/
- return rte_cryptodev_sym_session_create(dev_id, &cipher_xform);
+ return rte_cryptodev_sym_session_init(dev_id,
+ test_crypto_session, &cipher_xform,
+ ts_params->sess_mp);
case HASH_CIPHER:
auth_xform.next = &cipher_xform;
cipher_xform.next = NULL;
/* Create Crypto session*/
- return rte_cryptodev_sym_session_create(dev_id, &auth_xform);
+ return rte_cryptodev_sym_session_init(dev_id,
+ test_crypto_session, &auth_xform,
+ ts_params->sess_mp);
default:
- return NULL;
+ return -1;
}
}
-static struct rte_cryptodev_sym_session *
+static int
test_perf_create_armv8_session(uint8_t dev_id, enum chain_mode chain,
enum rte_crypto_cipher_algorithm cipher_algo,
unsigned int cipher_key_len,
enum rte_crypto_auth_algorithm auth_algo)
{
+ struct crypto_testsuite_params *ts_params = &testsuite_params;
struct rte_crypto_sym_xform cipher_xform = { 0 };
struct rte_crypto_sym_xform auth_xform = { 0 };
@@ -2827,7 +2869,7 @@ test_perf_create_armv8_session(uint8_t dev_id, enum chain_mode chain,
cipher_xform.cipher.key.data = aes_cbc_128_key;
break;
default:
- return NULL;
+ return -1;
}
cipher_xform.cipher.key.length = cipher_key_len;
@@ -2839,6 +2881,8 @@ test_perf_create_armv8_session(uint8_t dev_id, enum chain_mode chain,
auth_xform.auth.digest_length = get_auth_digest_length(auth_algo);
+ test_crypto_session = rte_cryptodev_sym_session_create(ts_params->sess_mp);
+
switch (chain) {
case CIPHER_HASH:
cipher_xform.next = &auth_xform;
@@ -2846,16 +2890,20 @@ test_perf_create_armv8_session(uint8_t dev_id, enum chain_mode chain,
/* Encrypt and hash the result */
cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
/* Create Crypto session*/
- return rte_cryptodev_sym_session_create(dev_id, &cipher_xform);
+ return rte_cryptodev_sym_session_init(dev_id,
+ test_crypto_session, &cipher_xform,
+ ts_params->sess_mp);
case HASH_CIPHER:
auth_xform.next = &cipher_xform;
cipher_xform.next = NULL;
/* Hash encrypted message and decrypt */
cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
/* Create Crypto session*/
- return rte_cryptodev_sym_session_create(dev_id, &auth_xform);
+ return rte_cryptodev_sym_session_init(dev_id,
+ test_crypto_session, &auth_xform,
+ ts_params->sess_mp);
default:
- return NULL;
+ return -1;
}
}
@@ -3123,9 +3171,12 @@ test_perf_aes_sha(uint8_t dev_id, uint16_t queue_id,
}
/* Create Crypto session*/
- sess = test_perf_create_aes_sha_session(ts_params->dev_id,
+ if (test_perf_create_aes_sha_session(ts_params->dev_id,
pparams->chain, pparams->cipher_algo,
- pparams->cipher_key_length, pparams->auth_algo);
+ pparams->cipher_key_length, pparams->auth_algo) == 0)
+ sess = test_crypto_session;
+ else
+ sess = NULL;
TEST_ASSERT_NOT_NULL(sess, "Session creation failed");
/* Generate a burst of crypto operations */
@@ -3221,7 +3272,7 @@ test_perf_aes_sha(uint8_t dev_id, uint16_t queue_id,
for (i = 0; i < pparams->burst_size * NUM_MBUF_SETS; i++)
rte_pktmbuf_free(mbufs[i]);
- rte_cryptodev_sym_session_free(dev_id, sess);
+ rte_cryptodev_sym_session_free(sess);
printf("\n");
return TEST_SUCCESS;
@@ -3257,9 +3308,12 @@ test_perf_snow3g(uint8_t dev_id, uint16_t queue_id,
}
/* Create Crypto session*/
- sess = test_perf_create_snow3g_session(ts_params->dev_id,
+ if (test_perf_create_snow3g_session(ts_params->dev_id,
pparams->chain, pparams->cipher_algo,
- pparams->cipher_key_length, pparams->auth_algo);
+ pparams->cipher_key_length, pparams->auth_algo) == 0)
+ sess = test_crypto_session;
+ else
+ sess = NULL;
TEST_ASSERT_NOT_NULL(sess, "Session creation failed");
/* Generate a burst of crypto operations */
@@ -3386,7 +3440,7 @@ test_perf_snow3g(uint8_t dev_id, uint16_t queue_id,
for (i = 0; i < pparams->burst_size * NUM_MBUF_SETS; i++)
rte_pktmbuf_free(mbufs[i]);
- rte_cryptodev_sym_session_free(dev_id, sess);
+ rte_cryptodev_sym_session_free(sess);
printf("\n");
return TEST_SUCCESS;
@@ -3443,9 +3497,12 @@ test_perf_openssl(uint8_t dev_id, uint16_t queue_id,
}
/* Create Crypto session*/
- sess = test_perf_create_openssl_session(ts_params->dev_id,
+ if (test_perf_create_openssl_session(ts_params->dev_id,
pparams->chain, pparams->cipher_algo,
- pparams->cipher_key_length, pparams->auth_algo);
+ pparams->cipher_key_length, pparams->auth_algo) == 0)
+ sess = test_crypto_session;
+ else
+ sess = NULL;
TEST_ASSERT_NOT_NULL(sess, "Session creation failed");
/* Generate a burst of crypto operations */
@@ -3538,7 +3595,7 @@ test_perf_openssl(uint8_t dev_id, uint16_t queue_id,
for (i = 0; i < pparams->burst_size * NUM_MBUF_SETS; i++)
rte_pktmbuf_free(mbufs[i]);
- rte_cryptodev_sym_session_free(dev_id, sess);
+ rte_cryptodev_sym_session_free(sess);
printf("\n");
return TEST_SUCCESS;
@@ -3575,9 +3632,12 @@ test_perf_armv8(uint8_t dev_id, uint16_t queue_id,
}
/* Create Crypto session*/
- sess = test_perf_create_armv8_session(ts_params->dev_id,
+ if (test_perf_create_armv8_session(ts_params->dev_id,
pparams->chain, pparams->cipher_algo,
- pparams->cipher_key_length, pparams->auth_algo);
+ pparams->cipher_key_length, pparams->auth_algo) == 0)
+ sess = test_crypto_session;
+ else
+ sess = NULL;
TEST_ASSERT_NOT_NULL(sess, "Session creation failed");
/* Generate a burst of crypto operations */
@@ -4122,7 +4182,7 @@ test_perf_aes_cbc_vary_burst_size(void)
static struct rte_cryptodev_sym_session *
test_perf_create_session(uint8_t dev_id, struct perf_test_params *pparams)
{
- static struct rte_cryptodev_sym_session *sess;
+ struct crypto_testsuite_params *ts_params = &testsuite_params;
struct rte_crypto_sym_xform cipher_xform = { 0 };
struct rte_crypto_sym_xform auth_xform = { 0 };
@@ -4151,19 +4211,20 @@ test_perf_create_session(uint8_t dev_id, struct perf_test_params *pparams)
auth_xform.auth.digest_length = pparams->session_attrs->digest_len;
auth_xform.auth.key.length = pparams->session_attrs->key_auth_len;
+ test_crypto_session = rte_cryptodev_sym_session_create(ts_params->sess_mp);
cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
if (cipher_xform.cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) {
cipher_xform.next = &auth_xform;
- sess = rte_cryptodev_sym_session_create(dev_id,
- &cipher_xform);
+ rte_cryptodev_sym_session_init(dev_id, test_crypto_session,
+ &cipher_xform, ts_params->sess_mp);
} else {
auth_xform.next = &cipher_xform;
- sess = rte_cryptodev_sym_session_create(dev_id,
- &auth_xform);
+ rte_cryptodev_sym_session_init(dev_id, test_crypto_session,
+ &auth_xform, ts_params->sess_mp);
}
- return sess;
+ return test_crypto_session;
}
static inline struct rte_crypto_op *
@@ -4405,7 +4466,7 @@ perf_AES_GCM(uint8_t dev_id, uint16_t queue_id,
for (i = 0; i < burst; i++)
rte_pktmbuf_free(mbufs[i]);
- rte_cryptodev_sym_session_free(dev_id, sess);
+ rte_cryptodev_sym_session_free(sess);
return 0;
}
--
2.9.4
* [dpdk-dev] [PATCH v2 08/11] cryptodev: remove mempool from session
From: Pablo de Lara @ 2017-06-30 17:09 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal,
hemant.agrawal, fiona.trahe, john.griffin, deepak.k.jain
Cc: dev, Slawomir Mrozowicz, Pablo de Lara
From: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Mempool pointer can be obtained from the object itself,
which means that it is not required to actually store the pointer
in the session.
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
doc/guides/rel_notes/release_17_08.rst | 1 +
lib/librte_cryptodev/rte_cryptodev.c | 7 +++----
lib/librte_cryptodev/rte_cryptodev.h | 6 ------
3 files changed, 4 insertions(+), 10 deletions(-)
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index 04bd3d5..6215584 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -162,6 +162,7 @@ API Changes
the new parameter ``device id``.
* ``dev_id`` field has been removed from ``rte_cryptodev_sym_session`` structure.
* ``driver_id`` field has been removed from ``rte_cryptodev_sym_session`` structure.
+ * Mempool pointer ``mp`` has been removed from ``rte_cryptodev_sym_session`` structure.
ABI Changes
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index d2772fd..cc42c25 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -1026,8 +1026,6 @@ rte_cryptodev_sym_session_init(struct rte_mempool *mp,
{
memset(sess, 0, mp->elt_size);
- sess->mp = mp;
-
if (dev->dev_ops->session_initialize)
(*dev->dev_ops->session_initialize)(mp, sess);
}
@@ -1065,7 +1063,7 @@ rte_cryptodev_sym_session_create(uint8_t dev_id,
dev_id);
/* Return session to mempool */
- rte_mempool_put(sess->mp, _sess);
+ rte_mempool_put(dev->data->session_pool, _sess);
return NULL;
}
@@ -1137,7 +1135,8 @@ rte_cryptodev_sym_session_free(uint8_t dev_id,
dev->dev_ops->session_clear(dev, (void *)sess->_private);
/* Return session to mempool */
- rte_mempool_put(sess->mp, (void *)sess);
+ struct rte_mempool *mp = rte_mempool_from_obj(sess);
+ rte_mempool_put(mp, (void *)sess);
return NULL;
}
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 7d574f1..044a4aa 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -797,12 +797,6 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
/** Cryptodev symmetric crypto session */
struct rte_cryptodev_sym_session {
RTE_STD_C11
- struct {
- struct rte_mempool *mp;
- /**< Mempool session allocated from */
- } __rte_aligned(8);
- /**< Public symmetric session details */
-
__extension__ char _private[0];
/**< Private session material */
};
--
2.9.4
* [dpdk-dev] [PATCH v2 07/11] cryptodev: remove driver id from session
From: Pablo de Lara @ 2017-06-30 17:09 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal,
hemant.agrawal, fiona.trahe, john.griffin, deepak.k.jain
Cc: dev, Slawomir Mrozowicz, Pablo de Lara
From: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Since a crypto session will no longer be attached to a specific
device or driver, the driver_id field is not required anymore
(it was only used to check that a session was being
handled by the right device).
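With the check gone, the fast-path session lookup in each PMD reduces
to a direct private-data access; a minimal sketch of the resulting
pattern, based on the openssl hunk below:

	/* Session lookup without the per-driver ownership check */
	if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
		/* get existing session */
		if (likely(op->sym->session != NULL))
			sess = (struct openssl_session *)
					op->sym->session->_private;
	}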
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
doc/guides/rel_notes/release_17_08.rst | 1 +
drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 4 ----
drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c | 5 -----
drivers/crypto/armv8/rte_armv8_pmd.c | 4 +---
drivers/crypto/kasumi/rte_kasumi_pmd.c | 4 ----
drivers/crypto/null/null_crypto_pmd.c | 3 +--
drivers/crypto/openssl/rte_openssl_pmd.c | 4 +---
drivers/crypto/qat/qat_crypto.c | 6 ------
drivers/crypto/snow3g/rte_snow3g_pmd.c | 4 ----
drivers/crypto/zuc/rte_zuc_pmd.c | 4 ----
lib/librte_cryptodev/rte_cryptodev.c | 5 -----
lib/librte_cryptodev/rte_cryptodev.h | 2 --
12 files changed, 4 insertions(+), 42 deletions(-)
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index df66525..04bd3d5 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -161,6 +161,7 @@ API Changes
``rte_cryptodev_queue_pair_detach_sym_session()`` functions require
the new parameter ``device id``.
* ``dev_id`` field has been removed from ``rte_cryptodev_sym_session`` structure.
+ * ``driver_id`` field has been removed from ``rte_cryptodev_sym_session`` structure.
ABI Changes
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index ef290a3..2774b4e 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -146,10 +146,6 @@ aesni_gcm_get_session(struct aesni_gcm_qp *qp, struct rte_crypto_sym_op *op)
struct aesni_gcm_session *sess = NULL;
if (op->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->session->driver_id !=
- cryptodev_driver_id))
- return sess;
-
sess = (struct aesni_gcm_session *)op->session->_private;
} else {
void *_sess;
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index 4025978..ec348ab 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -348,11 +348,6 @@ get_session(struct aesni_mb_qp *qp, struct rte_crypto_op *op)
struct aesni_mb_session *sess = NULL;
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->sym->session->driver_id !=
- cryptodev_driver_id)) {
- return NULL;
- }
-
sess = (struct aesni_mb_session *)op->sym->session->_private;
} else {
void *_sess = NULL;
diff --git a/drivers/crypto/armv8/rte_armv8_pmd.c b/drivers/crypto/armv8/rte_armv8_pmd.c
index 9fe781b..1ddf6a2 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd.c
@@ -549,9 +549,7 @@ get_session(struct armv8_crypto_qp *qp, struct rte_crypto_op *op)
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
/* get existing session */
- if (likely(op->sym->session != NULL &&
- op->sym->session->driver_id ==
- cryptodev_driver_id)) {
+ if (likely(op->sym->session != NULL)) {
sess = (struct armv8_crypto_session *)
op->sym->session->_private;
}
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd.c b/drivers/crypto/kasumi/rte_kasumi_pmd.c
index a1a33c5..67f0b06 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd.c
@@ -146,10 +146,6 @@ kasumi_get_session(struct kasumi_qp *qp, struct rte_crypto_op *op)
struct kasumi_session *sess;
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->sym->session->driver_id !=
- cryptodev_driver_id))
- return NULL;
-
sess = (struct kasumi_session *)op->sym->session->_private;
} else {
struct rte_cryptodev_sym_session *c_sess = NULL;
diff --git a/drivers/crypto/null/null_crypto_pmd.c b/drivers/crypto/null/null_crypto_pmd.c
index 50ede06..9323874 100644
--- a/drivers/crypto/null/null_crypto_pmd.c
+++ b/drivers/crypto/null/null_crypto_pmd.c
@@ -97,8 +97,7 @@ get_session(struct null_crypto_qp *qp, struct rte_crypto_sym_op *op)
struct null_crypto_session *sess;
if (op->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->session == NULL || op->session->driver_id !=
- cryptodev_driver_id))
+ if (unlikely(op->session == NULL))
return NULL;
sess = (struct null_crypto_session *)op->session->_private;
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 4e4394f..3232455 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -450,9 +450,7 @@ get_session(struct openssl_qp *qp, struct rte_crypto_op *op)
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
/* get existing session */
- if (likely(op->sym->session != NULL &&
- op->sym->session->driver_id ==
- cryptodev_driver_id))
+ if (likely(op->sym->session != NULL))
sess = (struct openssl_session *)
op->sym->session->_private;
} else {
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
index 7c5a9a8..13bd0b5 100644
--- a/drivers/crypto/qat/qat_crypto.c
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -914,12 +914,6 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
return -EINVAL;
}
- if (unlikely(op->sym->session->driver_id !=
- cryptodev_qat_driver_id)) {
- PMD_DRV_LOG(ERR, "Session was not created for this device");
- return -EINVAL;
- }
-
ctx = (struct qat_session *)op->sym->session->_private;
qat_req = (struct icp_qat_fw_la_bulk_req *)out_msg;
rte_mov128((uint8_t *)qat_req, (const uint8_t *)&(ctx->fw_req));
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd.c b/drivers/crypto/snow3g/rte_snow3g_pmd.c
index 35a2bcd..677849d 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd.c
@@ -146,10 +146,6 @@ snow3g_get_session(struct snow3g_qp *qp, struct rte_crypto_op *op)
struct snow3g_session *sess;
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->sym->session->driver_id !=
- cryptodev_driver_id))
- return NULL;
-
sess = (struct snow3g_session *)op->sym->session->_private;
} else {
struct rte_cryptodev_sym_session *c_sess = NULL;
diff --git a/drivers/crypto/zuc/rte_zuc_pmd.c b/drivers/crypto/zuc/rte_zuc_pmd.c
index ff8f3b9..385e4e5 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd.c
@@ -145,10 +145,6 @@ zuc_get_session(struct zuc_qp *qp, struct rte_crypto_op *op)
struct zuc_session *sess;
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->sym->session->driver_id !=
- cryptodev_driver_id))
- return NULL;
-
sess = (struct zuc_session *)op->sym->session->_private;
} else {
struct rte_cryptodev_sym_session *c_sess = NULL;
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index 0b381ba..d2772fd 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -1026,7 +1026,6 @@ rte_cryptodev_sym_session_init(struct rte_mempool *mp,
{
memset(sess, 0, mp->elt_size);
- sess->driver_id = dev->driver_id;
sess->mp = mp;
if (dev->dev_ops->session_initialize)
@@ -1133,10 +1132,6 @@ rte_cryptodev_sym_session_free(uint8_t dev_id,
dev = &rte_crypto_devices[dev_id];
- /* Check the session belongs to this device type */
- if (sess->driver_id != dev->driver_id)
- return sess;
-
/* Let device implementation clear session material */
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_clear, sess);
dev->dev_ops->session_clear(dev, (void *)sess->_private);
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 77a763d..7d574f1 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -798,8 +798,6 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
struct rte_cryptodev_sym_session {
RTE_STD_C11
struct {
- uint8_t driver_id;
- /** Crypto driver identifier session created on */
struct rte_mempool *mp;
/**< Mempool session allocated from */
} __rte_aligned(8);
--
2.9.4
* [dpdk-dev] [PATCH v2 06/11] cryptodev: remove dev_id from crypto session
From: Pablo de Lara @ 2017-06-30 17:09 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal,
hemant.agrawal, fiona.trahe, john.griffin, deepak.k.jain
Cc: dev, Slawomir Mrozowicz, Pablo de Lara
From: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
The device id was only used by the functions that attach/detach
a session to a queue pair.
Since the session is not going to be attached to a device
anymore, this field is no longer necessary.
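After this patch the public session header keeps only the driver id
and the mempool pointer (each removed in turn by patches 07 and 08 of
this series); the interim layout, reconstructed from the hunks in the
series:

	struct rte_cryptodev_sym_session {
		RTE_STD_C11
		struct {
			uint8_t driver_id;
			/**< Crypto driver identifier session created on */
			struct rte_mempool *mp;
			/**< Mempool session allocated from */
		} __rte_aligned(8);
		/**< Public symmetric session details */

		__extension__ char _private[0];
		/**< Private session material */
	};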
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
doc/guides/rel_notes/release_17_08.rst | 1 +
lib/librte_cryptodev/rte_cryptodev.c | 1 -
lib/librte_cryptodev/rte_cryptodev.h | 2 --
3 files changed, 1 insertion(+), 3 deletions(-)
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index fe46d93..df66525 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -160,6 +160,7 @@ API Changes
* ``rte_cryptodev_queue_pair_attach_sym_session()`` and
``rte_cryptodev_queue_pair_detach_sym_session()`` functions require
the new parameter ``device id``.
+ * ``dev_id`` field has been removed from ``rte_cryptodev_sym_session`` structure.
ABI Changes
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index 1852ecf..0b381ba 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -1026,7 +1026,6 @@ rte_cryptodev_sym_session_init(struct rte_mempool *mp,
{
memset(sess, 0, mp->elt_size);
- sess->dev_id = dev->data->dev_id;
sess->driver_id = dev->driver_id;
sess->mp = mp;
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index d3d2794..77a763d 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -798,8 +798,6 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
struct rte_cryptodev_sym_session {
RTE_STD_C11
struct {
- uint8_t dev_id;
- /**< Device Id */
uint8_t driver_id;
/** Crypto driver identifier session created on */
struct rte_mempool *mp;
--
2.9.4
* [dpdk-dev] [PATCH v2 05/11] cryptodev: change attach session to queue pair API
From: Pablo de Lara @ 2017-06-30 17:09 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal,
hemant.agrawal, fiona.trahe, john.griffin, deepak.k.jain
Cc: dev, Slawomir Mrozowicz, Pablo de Lara
From: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Device id is going to be removed from the session,
as the session will be device independent.
Therefore, the functions that attach/detach a session
to a queue pair need to be updated to accept the device id
as a parameter, in addition to the queue pair id and the session.
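A minimal sketch of the updated calls (assuming a configured device
"dev_id", a queue pair "qp_id" and a session "sess" created
beforehand; error handling trimmed):

	int ret;

	/* the device id is now passed explicitly, instead of being
	 * read from the session */
	ret = rte_cryptodev_queue_pair_attach_sym_session(dev_id,
			qp_id, sess);
	if (ret < 0)
		printf("failed to attach session to qp %u\n", qp_id);

	/* ... enqueue/dequeue crypto operations ... */

	ret = rte_cryptodev_queue_pair_detach_sym_session(dev_id,
			qp_id, sess);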
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
doc/guides/rel_notes/release_17_08.rst | 3 +++
examples/ipsec-secgw/ipsec.c | 1 +
lib/librte_cryptodev/rte_cryptodev.c | 20 ++++++++++----------
lib/librte_cryptodev/rte_cryptodev.h | 10 ++++++----
4 files changed, 20 insertions(+), 14 deletions(-)
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index f5d6289..fe46d93 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -157,6 +157,9 @@ API Changes
replacing the previous device type enumeration.
* ``rte_cryptodev_configure()`` does not create the session mempool
for the device anymore.
+ * ``rte_cryptodev_queue_pair_attach_sym_session()`` and
+ ``rte_cryptodev_queue_pair_detach_sym_session()`` functions require
+ the new parameter ``device id``.
ABI Changes
diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index edca5f0..048e441 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -77,6 +77,7 @@ create_session(struct ipsec_ctx *ipsec_ctx __rte_unused, struct ipsec_sa *sa)
rte_cryptodev_info_get(ipsec_ctx->tbl[cdev_id_qp].id, &cdev_info);
if (cdev_info.sym.max_nb_sessions_per_qp > 0) {
ret = rte_cryptodev_queue_pair_attach_sym_session(
+ ipsec_ctx->tbl[cdev_id_qp].id,
ipsec_ctx->tbl[cdev_id_qp].qp,
sa->crypto_session);
if (ret < 0) {
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index 1e8d3b9..1852ecf 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -1075,23 +1075,23 @@ rte_cryptodev_sym_session_create(uint8_t dev_id,
}
int
-rte_cryptodev_queue_pair_attach_sym_session(uint16_t qp_id,
+rte_cryptodev_queue_pair_attach_sym_session(uint8_t dev_id, uint16_t qp_id,
struct rte_cryptodev_sym_session *sess)
{
struct rte_cryptodev *dev;
- if (!rte_cryptodev_pmd_is_valid_dev(sess->dev_id)) {
- CDEV_LOG_ERR("Invalid dev_id=%d", sess->dev_id);
+ if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
return -EINVAL;
}
- dev = &rte_crypto_devices[sess->dev_id];
+ dev = &rte_crypto_devices[dev_id];
/* The API is optional, not returning an error if the driver does not support it */
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->qp_attach_session, 0);
if (dev->dev_ops->qp_attach_session(dev, qp_id, sess->_private)) {
CDEV_LOG_ERR("dev_id %d failed to attach qp: %d with session",
- sess->dev_id, qp_id);
+ dev_id, qp_id);
return -EPERM;
}
@@ -1099,23 +1099,23 @@ rte_cryptodev_queue_pair_attach_sym_session(uint16_t qp_id,
}
int
-rte_cryptodev_queue_pair_detach_sym_session(uint16_t qp_id,
+rte_cryptodev_queue_pair_detach_sym_session(uint8_t dev_id, uint16_t qp_id,
struct rte_cryptodev_sym_session *sess)
{
struct rte_cryptodev *dev;
- if (!rte_cryptodev_pmd_is_valid_dev(sess->dev_id)) {
- CDEV_LOG_ERR("Invalid dev_id=%d", sess->dev_id);
+ if (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {
+ CDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
return -EINVAL;
}
- dev = &rte_crypto_devices[sess->dev_id];
+ dev = &rte_crypto_devices[dev_id];
/* The API is optional, not returning an error if the driver does not support it */
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->qp_detach_session, 0);
if (dev->dev_ops->qp_detach_session(dev, qp_id, sess->_private)) {
CDEV_LOG_ERR("dev_id %d failed to detach qp: %d from session",
- sess->dev_id, qp_id);
+ dev_id, qp_id);
return -EPERM;
}
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 1afd2d8..d3d2794 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -867,7 +867,8 @@ rte_cryptodev_get_private_session_size(uint8_t dev_id);
/**
* Attach queue pair with sym session.
*
- * @param qp_id Queue pair to which session will be attached.
+ * @param dev_id Device to which the session will be attached.
+ * @param qp_id Queue pair to which the session will be attached.
* @param session Session pointer previously allocated by
* *rte_cryptodev_sym_session_create*.
*
@@ -876,13 +877,14 @@ rte_cryptodev_get_private_session_size(uint8_t dev_id);
* - On failure, a negative value.
*/
int
-rte_cryptodev_queue_pair_attach_sym_session(uint16_t qp_id,
+rte_cryptodev_queue_pair_attach_sym_session(uint8_t dev_id, uint16_t qp_id,
struct rte_cryptodev_sym_session *session);
/**
* Detach queue pair with sym session.
*
- * @param qp_id Queue pair to which session is attached.
+ * @param dev_id Device to which the session is attached.
+ * @param qp_id Queue pair to which the session is attached.
* @param session Session pointer previously allocated by
* *rte_cryptodev_sym_session_create*.
*
@@ -891,7 +893,7 @@ rte_cryptodev_queue_pair_attach_sym_session(uint16_t qp_id,
* - On failure, a negative value.
*/
int
-rte_cryptodev_queue_pair_detach_sym_session(uint16_t qp_id,
+rte_cryptodev_queue_pair_detach_sym_session(uint8_t dev_id, uint16_t qp_id,
struct rte_cryptodev_sym_session *session);
/**
--
2.9.4
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v2 04/11] cryptodev: do not create session mempool internally
@ 2017-06-30 17:09 2% ` Pablo de Lara
2017-06-30 17:09 4% ` [dpdk-dev] [PATCH v2 05/11] cryptodev: change attach session to queue pair API Pablo de Lara
` (5 subsequent siblings)
6 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2017-06-30 17:09 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal,
hemant.agrawal, fiona.trahe, john.griffin, deepak.k.jain
Cc: dev, Slawomir Mrozowicz, Pablo de Lara
From: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Instead of creating the session mempool while configuring
the crypto device, apps will create the mempool themselves.
This gives the user the flexibility to have a single
mempool for all devices (as long as the objects are big
enough to hold the biggest private session size) or
separate mempools for different drivers.
Also, since the mempool is now created outside the
device configuration function, it needs to be passed
through this function, and will eventually be passed
when setting up the queue pairs, as ethernet devices do.
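A minimal sketch of the new flow (NB_SESSIONS and CACHE_SZ are
illustrative constants; error handling trimmed):

	unsigned int session_size =
		sizeof(struct rte_cryptodev_sym_session) +
		rte_cryptodev_get_private_session_size(dev_id);

	/* the application now owns the session mempool... */
	struct rte_mempool *sess_mp = rte_mempool_create("sess_mp",
			NB_SESSIONS, session_size, CACHE_SZ,
			0, NULL, NULL, NULL, NULL, SOCKET_ID_ANY, 0);

	/* ...and passes it at configuration time */
	ret = rte_cryptodev_configure(dev_id, &conf, sess_mp);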
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/main.c | 33 +++++++++---
doc/guides/rel_notes/release_17_08.rst | 2 +
drivers/crypto/scheduler/scheduler_pmd_ops.c | 12 +----
examples/ipsec-secgw/ipsec-secgw.c | 26 +++++++--
examples/l2fwd-crypto/main.c | 29 ++++++++--
lib/librte_cryptodev/rte_cryptodev.c | 77 ++------------------------
lib/librte_cryptodev/rte_cryptodev.h | 9 ++--
test/test/test_cryptodev.c | 81 ++++++++++++++++++++++------
test/test/test_cryptodev_perf.c | 22 +++++++-
9 files changed, 167 insertions(+), 124 deletions(-)
diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c
index 9ec2a4b..9a22925 100644
--- a/app/test-crypto-perf/main.c
+++ b/app/test-crypto-perf/main.c
@@ -11,6 +11,9 @@
#include "cperf_test_latency.h"
#include "cperf_test_verify.h"
+#define NUM_SESSIONS 2048
+#define SESS_MEMPOOL_CACHE_SIZE 64
+
const char *cperf_test_type_strs[] = {
[CPERF_TEST_TYPE_THROUGHPUT] = "throughput",
[CPERF_TEST_TYPE_LATENCY] = "latency",
@@ -72,16 +75,34 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs)
struct rte_cryptodev_config conf = {
.nb_queue_pairs = 1,
.socket_id = SOCKET_ID_ANY,
- .session_mp = {
- .nb_objs = 2048,
- .cache_size = 64
- }
- };
+ };
+
struct rte_cryptodev_qp_conf qp_conf = {
.nb_descriptors = 2048
};
- ret = rte_cryptodev_configure(enabled_cdevs[cdev_id], &conf);
+ unsigned int session_size = sizeof(struct rte_cryptodev_sym_session) +
+ rte_cryptodev_get_private_session_size(enabled_cdevs[cdev_id]);
+
+ char mp_name[RTE_MEMPOOL_NAMESIZE];
+ struct rte_mempool *sess_mp;
+
+ snprintf(mp_name, sizeof(mp_name), "sess_mp_%u", cdev_id);
+ sess_mp = rte_mempool_create(mp_name,
+ NUM_SESSIONS,
+ session_size,
+ SESS_MEMPOOL_CACHE_SIZE,
+ 0, NULL, NULL, NULL,
+ NULL, SOCKET_ID_ANY,
+ 0);
+
+ if (sess_mp == NULL) {
+ printf("Failed to create device session mempool\n");
+ return -ENOMEM;
+ }
+
+ ret = rte_cryptodev_configure(enabled_cdevs[cdev_id], &conf,
+ sess_mp);
if (ret < 0) {
printf("Failed to configure cryptodev %u",
enabled_cdevs[cdev_id]);
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index 23f1bae..f5d6289 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -155,6 +155,8 @@ API Changes
* Device type identification is changed to be based on a unique 1-byte driver id,
replacing the previous device type enumeration.
+ * ``rte_cryptodev_configure()`` does not create the session mempool
+ for the device anymore.
ABI Changes
diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
index 90e3734..b9d8973 100644
--- a/drivers/crypto/scheduler/scheduler_pmd_ops.c
+++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
@@ -85,10 +85,8 @@ scheduler_attach_init_slave(struct rte_cryptodev *dev)
/** Configure device */
static int
scheduler_pmd_config(struct rte_cryptodev *dev,
- struct rte_cryptodev_config *config)
+ struct rte_cryptodev_config *config __rte_unused)
{
- struct scheduler_ctx *sched_ctx = dev->data->dev_private;
- uint32_t i;
int ret;
/* although scheduler_attach_init_slave presents multiple times,
@@ -98,14 +96,6 @@ scheduler_pmd_config(struct rte_cryptodev *dev,
if (ret < 0)
return ret;
- for (i = 0; i < sched_ctx->nb_slaves; i++) {
- uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
-
- ret = rte_cryptodev_configure(slave_dev_id, config);
- if (ret < 0)
- break;
- }
-
return ret;
}
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 8cbf6ac..a2286fd 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -1266,11 +1266,29 @@ cryptodevs_init(void)
dev_conf.socket_id = rte_cryptodev_socket_id(cdev_id);
dev_conf.nb_queue_pairs = qp;
- dev_conf.session_mp.nb_objs = CDEV_MP_NB_OBJS;
- dev_conf.session_mp.cache_size = CDEV_MP_CACHE_SZ;
- if (rte_cryptodev_configure(cdev_id, &dev_conf))
- rte_panic("Failed to initialize crypodev %u\n",
+ char mp_name[RTE_MEMPOOL_NAMESIZE];
+ struct rte_mempool *sess_mp;
+
+ unsigned int session_size = sizeof(struct rte_cryptodev_sym_session) +
+ rte_cryptodev_get_private_session_size(cdev_id);
+
+ snprintf(mp_name, sizeof(mp_name), "sess_mp_%u", cdev_id);
+ sess_mp = rte_mempool_create(mp_name,
+ CDEV_MP_NB_OBJS,
+ session_size,
+ CDEV_MP_CACHE_SZ,
+ 0, NULL, NULL, NULL,
+ NULL, dev_conf.socket_id,
+ 0);
+
+ if (sess_mp == NULL) {
+ printf("Failed to create device session mempool\n");
+ return -ENOMEM;
+ }
+
+ if (rte_cryptodev_configure(cdev_id, &dev_conf, sess_mp))
+ rte_panic("Failed to initialize cryptodev %u\n",
cdev_id);
qp_conf.nb_descriptors = CDEV_QUEUE_DESC;
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 779b4fb..c3c6f45 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -89,6 +89,9 @@ enum cdev_type {
#define MAX_PKT_BURST 32
#define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
+#define NUM_SESSIONS 2048
+#define SESS_MEMPOOL_CACHE_SIZE 64
+
/*
* Configurable number of RX/TX ring descriptors
*/
@@ -1566,10 +1569,6 @@ initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports,
struct rte_cryptodev_config conf = {
.nb_queue_pairs = 1,
.socket_id = SOCKET_ID_ANY,
- .session_mp = {
- .nb_objs = 2048,
- .cache_size = 64
- }
};
if (check_cryptodev_mask(options, (uint8_t)cdev_id))
@@ -1797,7 +1796,27 @@ initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports,
cap->sym.auth.digest_size.min;
}
- retval = rte_cryptodev_configure(cdev_id, &conf);
+ unsigned int session_size = sizeof(struct rte_cryptodev_sym_session) +
+ rte_cryptodev_get_private_session_size(enabled_cdevs[cdev_id]);
+
+ char mp_name[RTE_MEMPOOL_NAMESIZE];
+ struct rte_mempool *sess_mp;
+
+ snprintf(mp_name, sizeof(mp_name), "sess_mp_%u", cdev_id);
+ sess_mp = rte_mempool_create(mp_name,
+ NUM_SESSIONS,
+ session_size,
+ SESS_MEMPOOL_CACHE_SIZE,
+ 0, NULL, NULL, NULL,
+ NULL, SOCKET_ID_ANY,
+ 0);
+
+ if (sess_mp == NULL) {
+ printf("Failed to create device session mempool\n");
+ return -ENOMEM;
+ }
+
+ retval = rte_cryptodev_configure(cdev_id, &conf, sess_mp);
if (retval < 0) {
printf("Failed to configure cryptodev %u", cdev_id);
return -1;
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index 312a740..1e8d3b9 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -683,12 +683,9 @@ rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id)
}
-static int
-rte_cryptodev_sym_session_pool_create(struct rte_cryptodev *dev,
- unsigned nb_objs, unsigned obj_cache_size, int socket_id);
-
int
-rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config,
+ struct rte_mempool *session_pool)
{
struct rte_cryptodev *dev;
int diag;
@@ -708,6 +705,8 @@ rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP);
+ dev->data->session_pool = session_pool;
+
/* Setup new number of queue pairs and reconfigure device. */
diag = rte_cryptodev_queue_pairs_config(dev, config->nb_queue_pairs,
config->socket_id);
@@ -717,14 +716,6 @@ rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)
return diag;
}
- /* Setup Session mempool for device */
- diag = rte_cryptodev_sym_session_pool_create(dev,
- config->session_mp.nb_objs,
- config->session_mp.cache_size,
- config->socket_id);
- if (diag != 0)
- return diag;
-
return (*dev->dev_ops->dev_configure)(dev, config);
}
@@ -1043,66 +1034,6 @@ rte_cryptodev_sym_session_init(struct rte_mempool *mp,
(*dev->dev_ops->session_initialize)(mp, sess);
}
-static int
-rte_cryptodev_sym_session_pool_create(struct rte_cryptodev *dev,
- unsigned nb_objs, unsigned obj_cache_size, int socket_id)
-{
- char mp_name[RTE_CRYPTODEV_NAME_MAX_LEN];
- unsigned priv_sess_size;
-
- unsigned n = snprintf(mp_name, sizeof(mp_name), "cdev_%d_sess_mp",
- dev->data->dev_id);
- if (n > sizeof(mp_name)) {
- CDEV_LOG_ERR("Unable to create unique name for session mempool");
- return -ENOMEM;
- }
-
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_get_size, -ENOTSUP);
- priv_sess_size = (*dev->dev_ops->session_get_size)(dev);
- if (priv_sess_size == 0) {
- CDEV_LOG_ERR("%s returned and invalid private session size ",
- dev->data->name);
- return -ENOMEM;
- }
-
- unsigned elt_size = sizeof(struct rte_cryptodev_sym_session) +
- priv_sess_size;
-
- dev->data->session_pool = rte_mempool_lookup(mp_name);
- if (dev->data->session_pool != NULL) {
- if ((dev->data->session_pool->elt_size != elt_size) ||
- (dev->data->session_pool->cache_size <
- obj_cache_size) ||
- (dev->data->session_pool->size < nb_objs)) {
-
- CDEV_LOG_ERR("%s mempool already exists with different"
- " initialization parameters", mp_name);
- dev->data->session_pool = NULL;
- return -ENOMEM;
- }
- } else {
- dev->data->session_pool = rte_mempool_create(
- mp_name, /* mempool name */
- nb_objs, /* number of elements*/
- elt_size, /* element size*/
- obj_cache_size, /* Cache size*/
- 0, /* private data size */
- NULL, /* obj initialization constructor */
- NULL, /* obj initialization constructor arg */
- NULL, /**< obj constructor*/
- dev, /* obj constructor arg */
- socket_id, /* socket id */
- 0); /* flags */
-
- if (dev->data->session_pool == NULL) {
- CDEV_LOG_ERR("%s mempool allocation failed", mp_name);
- return -ENOMEM;
- }
- }
-
- CDEV_LOG_DEBUG("%s mempool created!", mp_name);
- return 0;
-}
struct rte_cryptodev_sym_session *
rte_cryptodev_sym_session_create(uint8_t dev_id,
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index d883d8c..1afd2d8 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -428,11 +428,6 @@ struct rte_cryptodev_config {
int socket_id; /**< Socket to allocate resources on */
uint16_t nb_queue_pairs;
/**< Number of queue pairs to configure on device */
-
- struct {
- uint32_t nb_objs; /**< Number of objects in mempool */
- uint32_t cache_size; /**< l-core object cache size */
- } session_mp; /**< Session mempool configuration */
};
/**
@@ -444,13 +439,15 @@ struct rte_cryptodev_config {
*
* @param dev_id The identifier of the device to configure.
* @param config The crypto device configuration structure.
+ * @param session_pool Pointer to device session mempool
*
* @return
* - 0: Success, device configured.
* - <0: Error code returned by the driver configuration function.
*/
extern int
-rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config);
+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config,
+ struct rte_mempool *session_pool);
/**
* Start an device.
diff --git a/test/test/test_cryptodev.c b/test/test/test_cryptodev.c
index afa895e..3979145 100644
--- a/test/test/test_cryptodev.c
+++ b/test/test/test_cryptodev.c
@@ -66,6 +66,8 @@ struct crypto_testsuite_params {
struct rte_mempool *mbuf_pool;
struct rte_mempool *large_mbuf_pool;
struct rte_mempool *op_mpool;
+ struct rte_mempool *session_mpool;
+ struct rte_mempool *slave_session_mpool;
struct rte_cryptodev_config conf;
struct rte_cryptodev_qp_conf qp_conf;
@@ -381,10 +383,23 @@ testsuite_setup(void)
ts_params->conf.nb_queue_pairs = info.max_nb_queue_pairs;
ts_params->conf.socket_id = SOCKET_ID_ANY;
- ts_params->conf.session_mp.nb_objs = info.sym.max_nb_sessions;
+
+ unsigned int session_size = sizeof(struct rte_cryptodev_sym_session) +
+ rte_cryptodev_get_private_session_size(dev_id);
+
+ ts_params->session_mpool = rte_mempool_create(
+ "test_sess_mp",
+ info.sym.max_nb_sessions,
+ session_size,
+ 0, 0, NULL, NULL, NULL,
+ NULL, SOCKET_ID_ANY,
+ 0);
+
+ TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
+ "session mempool allocation failed");
TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
- &ts_params->conf),
+ &ts_params->conf, ts_params->session_mpool),
"Failed to configure cryptodev %u with %u qps",
dev_id, ts_params->conf.nb_queue_pairs);
@@ -416,6 +431,12 @@ testsuite_teardown(void)
rte_mempool_avail_count(ts_params->op_mpool));
}
+ /* Free session mempools */
+ if (ts_params->session_mpool != NULL)
+ rte_mempool_free(ts_params->session_mpool);
+
+ if (ts_params->slave_session_mpool != NULL)
+ rte_mempool_free(ts_params->slave_session_mpool);
}
static int
@@ -431,10 +452,9 @@ ut_setup(void)
/* Reconfigure device to default parameters */
ts_params->conf.socket_id = SOCKET_ID_ANY;
- ts_params->conf.session_mp.nb_objs = DEFAULT_NUM_OPS_INFLIGHT;
TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
- &ts_params->conf),
+ &ts_params->conf, ts_params->session_mpool),
"Failed to configure cryptodev %u",
ts_params->valid_devs[0]);
@@ -517,20 +537,23 @@ test_device_configure_invalid_dev_id(void)
/* Stop the device in case it's started so it can be configured */
rte_cryptodev_stop(ts_params->valid_devs[dev_id]);
- TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id, &ts_params->conf),
+ TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id, &ts_params->conf,
+ ts_params->session_mpool),
"Failed test for rte_cryptodev_configure: "
"invalid dev_num %u", dev_id);
/* invalid dev_id values */
dev_id = num_devs;
- TEST_ASSERT_FAIL(rte_cryptodev_configure(dev_id, &ts_params->conf),
+ TEST_ASSERT_FAIL(rte_cryptodev_configure(dev_id, &ts_params->conf,
+ ts_params->session_mpool),
"Failed test for rte_cryptodev_configure: "
"invalid dev_num %u", dev_id);
dev_id = 0xff;
- TEST_ASSERT_FAIL(rte_cryptodev_configure(dev_id, &ts_params->conf),
+ TEST_ASSERT_FAIL(rte_cryptodev_configure(dev_id, &ts_params->conf,
+ ts_params->session_mpool),
"Failed test for rte_cryptodev_configure:"
"invalid dev_num %u", dev_id);
@@ -550,7 +573,7 @@ test_device_configure_invalid_queue_pair_ids(void)
ts_params->conf.nb_queue_pairs = 1;
TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
- &ts_params->conf),
+ &ts_params->conf, ts_params->session_mpool),
"Failed to configure cryptodev: dev_id %u, qp_id %u",
ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
@@ -559,16 +582,17 @@ test_device_configure_invalid_queue_pair_ids(void)
ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE;
TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
- &ts_params->conf),
+ &ts_params->conf, ts_params->session_mpool),
"Failed to configure cryptodev: dev_id %u, qp_id %u",
- ts_params->valid_devs[0], ts_params->conf.nb_queue_pairs);
+ ts_params->valid_devs[0],
+ ts_params->conf.nb_queue_pairs);
/* invalid - zero queue pairs */
ts_params->conf.nb_queue_pairs = 0;
TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0],
- &ts_params->conf),
+ &ts_params->conf, ts_params->session_mpool),
"Failed test for rte_cryptodev_configure, dev_id %u,"
" invalid qps: %u",
ts_params->valid_devs[0],
@@ -579,7 +603,7 @@ test_device_configure_invalid_queue_pair_ids(void)
ts_params->conf.nb_queue_pairs = UINT16_MAX;
TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0],
- &ts_params->conf),
+ &ts_params->conf, ts_params->session_mpool),
"Failed test for rte_cryptodev_configure, dev_id %u,"
" invalid qps: %u",
ts_params->valid_devs[0],
@@ -590,7 +614,7 @@ test_device_configure_invalid_queue_pair_ids(void)
ts_params->conf.nb_queue_pairs = MAX_NUM_QPS_PER_QAT_DEVICE + 1;
TEST_ASSERT_FAIL(rte_cryptodev_configure(ts_params->valid_devs[0],
- &ts_params->conf),
+ &ts_params->conf, ts_params->session_mpool),
"Failed test for rte_cryptodev_configure, dev_id %u,"
" invalid qps: %u",
ts_params->valid_devs[0],
@@ -619,13 +643,11 @@ test_queue_pair_descriptor_setup(void)
rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
- ts_params->conf.session_mp.nb_objs = dev_info.sym.max_nb_sessions;
-
TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->valid_devs[0],
- &ts_params->conf), "Failed to configure cryptodev %u",
+ &ts_params->conf, ts_params->session_mpool),
+ "Failed to configure cryptodev %u",
ts_params->valid_devs[0]);
-
/*
* Test various ring sizes on this device. memzones can't be
* freed so are re-used if ring is released and re-created.
@@ -7837,6 +7859,31 @@ test_scheduler_attach_slave_op(void)
RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)))
continue;
+ /*
+ * Create a separate mempool for the slaves, as they need a
+ * different session size, and then configure them to store
+ * the pointer to this mempool
+ */
+ unsigned int session_size = sizeof(struct rte_cryptodev_sym_session) +
+ rte_cryptodev_get_private_session_size(i);
+
+ if (ts_params->slave_session_mpool == NULL) {
+ ts_params->slave_session_mpool = rte_mempool_create(
+ "test_slave_sess_mp",
+ info.sym.max_nb_sessions,
+ session_size,
+ 0, 0, NULL, NULL, NULL, NULL,
+ SOCKET_ID_ANY, 0);
+
+ TEST_ASSERT_NOT_NULL(ts_params->slave_session_mpool,
+ "session mempool allocation failed");
+ }
+
+ TEST_ASSERT_SUCCESS(rte_cryptodev_configure(i,
+ &ts_params->conf, ts_params->slave_session_mpool),
+ "Failed to configure cryptodev %u with %u qps",
+ i, ts_params->conf.nb_queue_pairs);
+
ret = rte_cryptodev_scheduler_slave_attach(sched_id,
(uint8_t)i);
diff --git a/test/test/test_cryptodev_perf.c b/test/test/test_cryptodev_perf.c
index 6553c94..526be82 100644
--- a/test/test/test_cryptodev_perf.c
+++ b/test/test/test_cryptodev_perf.c
@@ -50,6 +50,7 @@
struct crypto_testsuite_params {
struct rte_mempool *mbuf_mp;
struct rte_mempool *op_mpool;
+ struct rte_mempool *session_mpool;
uint16_t nb_queue_pairs;
@@ -394,10 +395,23 @@ testsuite_setup(void)
ts_params->conf.nb_queue_pairs = info.max_nb_queue_pairs;
ts_params->conf.socket_id = SOCKET_ID_ANY;
- ts_params->conf.session_mp.nb_objs = info.sym.max_nb_sessions;
+
+ unsigned int session_size = sizeof(struct rte_cryptodev_sym_session) +
+ rte_cryptodev_get_private_session_size(ts_params->dev_id);
+
+ ts_params->session_mpool = rte_mempool_create(
+ "test_sess_mp_perf",
+ info.sym.max_nb_sessions,
+ session_size,
+ 0, 0, NULL, NULL, NULL,
+ NULL, SOCKET_ID_ANY,
+ 0);
+
+ TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
+ "session mempool allocation failed");
TEST_ASSERT_SUCCESS(rte_cryptodev_configure(ts_params->dev_id,
- &ts_params->conf),
+ &ts_params->conf, ts_params->session_mpool),
"Failed to configure cryptodev %u",
ts_params->dev_id);
@@ -426,6 +440,10 @@ testsuite_teardown(void)
if (ts_params->op_mpool != NULL)
RTE_LOG(DEBUG, USER1, "CRYPTO_PERF_OP POOL count %u\n",
rte_mempool_avail_count(ts_params->op_mpool));
+ /* Free session mempool */
+ if (ts_params->session_mpool != NULL)
+ rte_mempool_free(ts_params->session_mpool);
+
}
static int
--
2.9.4
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v4] cryptodev: remove crypto device type enumeration
2017-06-30 14:10 2% ` [dpdk-dev] [PATCH v3] " Pablo de Lara
@ 2017-06-30 14:34 2% ` Pablo de Lara
0 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2017-06-30 14:34 UTC (permalink / raw)
To: declan.doherty; +Cc: dev, Slawomir Mrozowicz, Pablo de Lara
From: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Changes device type identification to be based on a unique
driver id, replacing the previous device type enumeration, which
required library changes every time a new crypto driver was added.
The driver id is assigned dynamically during driver registration
using the new macro RTE_PMD_REGISTER_CRYPTO_DRIVER, which gives the
driver a unique uint8_t identifier. New APIs are also introduced
to allow retrieval of the driver id using the driver name.
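For illustration, a minimal sketch (driver and PMD names here are
hypothetical):

	/* in a PMD: register the driver and have a unique id assigned */
	static uint8_t cryptodev_driver_id;
	RTE_PMD_REGISTER_CRYPTO_DRIVER(my_crypto_drv, cryptodev_driver_id);

	/* in an application: map between driver name and id */
	int id = rte_cryptodev_driver_id_get("crypto_foo_pmd");
	if (id >= 0)
		printf("driver %s has id %d\n",
			rte_cryptodev_driver_name_get((uint8_t)id), id);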
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
v4 changes:
- Reworded release notes
v3 changes:
- Replaced static array of crypto driver ids with
a dynamic queue.
- Renamed function rte_cryptodev_allocate_driver_id,
removing "_id"
v2 changes:
- Added release notes information
- Reduced some call of rte_cryptodev_driver_id_get
- Removed clang compiler error
- Added internal mark for function rte_cryptodev_allocate_driver_id
doc/guides/prog_guide/cryptodev_lib.rst | 5 +-
doc/guides/rel_notes/release_17_08.rst | 27 ++
drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 9 +-
drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c | 2 +-
drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c | 9 +-
drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c | 2 +-
drivers/crypto/armv8/rte_armv8_pmd.c | 9 +-
drivers/crypto/armv8/rte_armv8_pmd_ops.c | 2 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 7 +-
drivers/crypto/kasumi/rte_kasumi_pmd.c | 9 +-
drivers/crypto/kasumi/rte_kasumi_pmd_ops.c | 2 +-
drivers/crypto/null/null_crypto_pmd.c | 9 +-
drivers/crypto/null/null_crypto_pmd_ops.c | 2 +-
drivers/crypto/openssl/rte_openssl_pmd.c | 9 +-
drivers/crypto/openssl/rte_openssl_pmd_ops.c | 2 +-
drivers/crypto/qat/qat_crypto.c | 5 +-
drivers/crypto/qat/qat_crypto.h | 2 +
drivers/crypto/qat/rte_qat_cryptodev.c | 6 +-
drivers/crypto/scheduler/rte_cryptodev_scheduler.c | 22 +-
drivers/crypto/scheduler/scheduler_pmd.c | 6 +-
drivers/crypto/scheduler/scheduler_pmd_ops.c | 2 +-
drivers/crypto/scheduler/scheduler_pmd_private.h | 4 +-
drivers/crypto/snow3g/rte_snow3g_pmd.c | 9 +-
drivers/crypto/snow3g/rte_snow3g_pmd_ops.c | 2 +-
drivers/crypto/zuc/rte_zuc_pmd.c | 9 +-
drivers/crypto/zuc/rte_zuc_pmd_ops.c | 2 +-
lib/librte_cryptodev/rte_cryptodev.c | 65 +++-
lib/librte_cryptodev/rte_cryptodev.h | 68 +++--
lib/librte_cryptodev/rte_cryptodev_pmd.h | 2 +-
lib/librte_cryptodev/rte_cryptodev_version.map | 5 +-
test/test/test_cryptodev.c | 331 +++++++++++++--------
test/test/test_cryptodev_blockcipher.c | 68 +++--
test/test/test_cryptodev_blockcipher.h | 2 +-
test/test/test_cryptodev_perf.c | 217 +++++++++-----
34 files changed, 627 insertions(+), 305 deletions(-)
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index 4f98f28..4644802 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -291,7 +291,7 @@ relevant information for the device.
struct rte_cryptodev_info {
const char *driver_name;
- enum rte_cryptodev_type dev_type;
+ uint8_t driver_id;
struct rte_pci_device *pci_dev;
uint64_t feature_flags;
@@ -451,7 +451,8 @@ functions for the configuration of the session parameters and freeing function
so the PMD can manage the memory on destruction of a session.
**Note**: Sessions created on a particular device can only be used on Crypto
-devices of the same type, and if you try to use a session on a device different
+devices of the same type - the same driver id used by these devices -
+and if you try to use a session on a device different
to that on which it was created then the Crypto operation will fail.
``rte_cryptodev_sym_session_create()`` is used to create a symmetric session on
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index 842f46f..23f1bae 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -75,6 +75,10 @@ New Features
Added support for firmwares with multiple Ethernet ports per physical port.
+* **Updated cryptodev library.**
+
+ Added helper functions for crypto device driver identification.
+
Resolved Issues
---------------
@@ -144,6 +148,14 @@ API Changes
Also, make sure to start the actual text at the margin.
=========================================================
+* **Reworked rte_cryptodev library.**
+
+ The rte_cryptodev library has been reworked and updated. The following changes
+ have been made to it:
+
+ * Device type identification is changed to be based on a unique 1-byte driver id,
+ replacing the previous device type enumeration.
+
ABI Changes
-----------
@@ -159,6 +171,21 @@ ABI Changes
=========================================================
+Removed Items
+-------------
+
+.. This section should contain removed items in this release. Sample format:
+
+ * Add a short 1-2 sentence description of the removed item in the past
+ tense.
+
+ This section is a comment. do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =========================================================
+
+
+* The crypto device type enumeration has been removed from librte_cryptodev.
+
Shared Library Versions
-----------------------
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 1b95c23..ef290a3 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -43,6 +43,8 @@
#include "aesni_gcm_pmd_private.h"
+static uint8_t cryptodev_driver_id;
+
/** GCM encode functions pointer table */
static const struct aesni_gcm_ops aesni_gcm_enc[] = {
[AESNI_GCM_KEY_128] = {
@@ -144,8 +146,8 @@ aesni_gcm_get_session(struct aesni_gcm_qp *qp, struct rte_crypto_sym_op *op)
struct aesni_gcm_session *sess = NULL;
if (op->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->session->dev_type
- != RTE_CRYPTODEV_AESNI_GCM_PMD))
+ if (unlikely(op->session->driver_id !=
+ cryptodev_driver_id))
return sess;
sess = (struct aesni_gcm_session *)op->session->_private;
@@ -458,7 +460,7 @@ aesni_gcm_create(const char *name,
goto init_error;
}
- dev->dev_type = RTE_CRYPTODEV_AESNI_GCM_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_aesni_gcm_pmd_ops;
/* register rx/tx burst functions for data path */
@@ -541,3 +543,4 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_AESNI_GCM_PMD,
"max_nb_queue_pairs=<int> "
"max_nb_sessions=<int> "
"socket_id=<int>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(aesni_gcm_pmd_drv, cryptodev_driver_id);
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
index 7b68a20..721dbda 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
@@ -181,7 +181,7 @@ aesni_gcm_pmd_info_get(struct rte_cryptodev *dev,
struct aesni_gcm_private *internals = dev->data->dev_private;
if (dev_info != NULL) {
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->feature_flags = dev->feature_flags;
dev_info->capabilities = aesni_gcm_pmd_capabilities;
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index f9a7d5b..4025978 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -41,6 +41,8 @@
#include "rte_aesni_mb_pmd_private.h"
+static uint8_t cryptodev_driver_id;
+
typedef void (*hash_one_block_t)(const void *data, void *digest);
typedef void (*aes_keyexp_t)(const void *key, void *enc_exp_keys, void *dec_exp_keys);
@@ -346,8 +348,8 @@ get_session(struct aesni_mb_qp *qp, struct rte_crypto_op *op)
struct aesni_mb_session *sess = NULL;
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->sym->session->dev_type !=
- RTE_CRYPTODEV_AESNI_MB_PMD)) {
+ if (unlikely(op->sym->session->driver_id !=
+ cryptodev_driver_id)) {
return NULL;
}
@@ -703,7 +705,7 @@ cryptodev_aesni_mb_create(const char *name,
goto init_error;
}
- dev->dev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_aesni_mb_pmd_ops;
/* register rx/tx burst functions for data path */
@@ -804,3 +806,4 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_AESNI_MB_PMD,
"max_nb_queue_pairs=<int> "
"max_nb_sessions=<int> "
"socket_id=<int>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(cryptodev_aesni_mb_pmd_drv, cryptodev_driver_id);
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
index d1bc28e..3a2683b 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -321,7 +321,7 @@ aesni_mb_pmd_info_get(struct rte_cryptodev *dev,
struct aesni_mb_private *internals = dev->data->dev_private;
if (dev_info != NULL) {
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->feature_flags = dev->feature_flags;
dev_info->capabilities = aesni_mb_pmd_capabilities;
dev_info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
diff --git a/drivers/crypto/armv8/rte_armv8_pmd.c b/drivers/crypto/armv8/rte_armv8_pmd.c
index 83dae87..9fe781b 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd.c
@@ -45,6 +45,8 @@
#include "rte_armv8_pmd_private.h"
+static uint8_t cryptodev_driver_id;
+
static int cryptodev_armv8_crypto_uninit(struct rte_vdev_device *vdev);
/**
@@ -548,8 +550,8 @@ get_session(struct armv8_crypto_qp *qp, struct rte_crypto_op *op)
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
/* get existing session */
if (likely(op->sym->session != NULL &&
- op->sym->session->dev_type ==
- RTE_CRYPTODEV_ARMV8_PMD)) {
+ op->sym->session->driver_id ==
+ cryptodev_driver_id)) {
sess = (struct armv8_crypto_session *)
op->sym->session->_private;
}
@@ -816,7 +818,7 @@ cryptodev_armv8_crypto_create(const char *name,
goto init_error;
}
- dev->dev_type = RTE_CRYPTODEV_ARMV8_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_armv8_crypto_pmd_ops;
/* register rx/tx burst functions for data path */
@@ -906,3 +908,4 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_ARMV8_PMD,
"max_nb_queue_pairs=<int> "
"max_nb_sessions=<int> "
"socket_id=<int>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(armv8_crypto_drv, cryptodev_driver_id);
diff --git a/drivers/crypto/armv8/rte_armv8_pmd_ops.c b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
index 4d9ccbf..2911417 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd_ops.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
@@ -178,7 +178,7 @@ armv8_crypto_pmd_info_get(struct rte_cryptodev *dev,
struct armv8_crypto_private *internals = dev->data->dev_private;
if (dev_info != NULL) {
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->feature_flags = dev->feature_flags;
dev_info->capabilities = armv8_crypto_pmd_capabilities;
dev_info->max_nb_queue_pairs = internals->max_nb_qpairs;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index e32b27e..70ad07a 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -73,6 +73,8 @@
#define AES_CBC_IV_LEN 16
enum rta_sec_era rta_sec_era = RTA_SEC_ERA_8;
+static uint8_t cryptodev_driver_id;
+
static inline int
build_authenc_fd(dpaa2_sec_session *sess,
struct rte_crypto_op *op,
@@ -1383,7 +1385,7 @@ dpaa2_sec_dev_infos_get(struct rte_cryptodev *dev,
info->feature_flags = dev->feature_flags;
info->capabilities = dpaa2_sec_capabilities;
info->sym.max_nb_sessions = internals->max_nb_sessions;
- info->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+ info->driver_id = cryptodev_driver_id;
}
}
@@ -1508,7 +1510,7 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
}
hw_id = dpaa2_dev->object_id;
- cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+ cryptodev->driver_id = cryptodev_driver_id;
cryptodev->dev_ops = &crypto_ops;
cryptodev->enqueue_burst = dpaa2_sec_enqueue_burst;
@@ -1651,3 +1653,4 @@ static struct rte_dpaa2_driver rte_dpaa2_sec_driver = {
};
RTE_PMD_REGISTER_DPAA2(dpaa2_sec_pmd, rte_dpaa2_sec_driver);
+RTE_PMD_REGISTER_CRYPTO_DRIVER(rte_dpaa2_sec_driver, cryptodev_driver_id);
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd.c b/drivers/crypto/kasumi/rte_kasumi_pmd.c
index 70bf228..648718c 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd.c
@@ -48,6 +48,8 @@
#define KASUMI_MAX_BURST 4
#define BYTE_LEN 8
+static uint8_t cryptodev_driver_id;
+
/** Get xform chain order. */
static enum kasumi_operation
kasumi_get_mode(const struct rte_crypto_sym_xform *xform)
@@ -144,8 +146,8 @@ kasumi_get_session(struct kasumi_qp *qp, struct rte_crypto_op *op)
struct kasumi_session *sess;
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->sym->session->dev_type !=
- RTE_CRYPTODEV_KASUMI_PMD))
+ if (unlikely(op->sym->session->driver_id !=
+ cryptodev_driver_id))
return NULL;
sess = (struct kasumi_session *)op->sym->session->_private;
@@ -582,7 +584,7 @@ cryptodev_kasumi_create(const char *name,
goto init_error;
}
- dev->dev_type = RTE_CRYPTODEV_KASUMI_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_kasumi_pmd_ops;
/* Register RX/TX burst functions for data path. */
@@ -666,3 +668,4 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_KASUMI_PMD,
"max_nb_queue_pairs=<int> "
"max_nb_sessions=<int> "
"socket_id=<int>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(cryptodev_kasumi_pmd_drv, cryptodev_driver_id);
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
index 62ebdbd..343c9b3 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
@@ -156,7 +156,7 @@ kasumi_pmd_info_get(struct rte_cryptodev *dev,
struct kasumi_private *internals = dev->data->dev_private;
if (dev_info != NULL) {
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
dev_info->sym.max_nb_sessions = internals->max_nb_sessions;
dev_info->feature_flags = dev->feature_flags;
diff --git a/drivers/crypto/null/null_crypto_pmd.c b/drivers/crypto/null/null_crypto_pmd.c
index 53bdc3e..7ab3570 100644
--- a/drivers/crypto/null/null_crypto_pmd.c
+++ b/drivers/crypto/null/null_crypto_pmd.c
@@ -39,6 +39,8 @@
#include "null_crypto_pmd_private.h"
+static uint8_t cryptodev_driver_id;
+
/** verify and set session parameters */
int
null_crypto_set_session_parameters(
@@ -95,8 +97,8 @@ get_session(struct null_crypto_qp *qp, struct rte_crypto_sym_op *op)
struct null_crypto_session *sess;
if (op->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->session == NULL ||
- op->session->dev_type != RTE_CRYPTODEV_NULL_PMD))
+ if (unlikely(op->session == NULL || op->session->driver_id !=
+ cryptodev_driver_id))
return NULL;
sess = (struct null_crypto_session *)op->session->_private;
@@ -186,7 +188,7 @@ cryptodev_null_create(const char *name,
goto init_error;
}
- dev->dev_type = RTE_CRYPTODEV_NULL_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = null_crypto_pmd_ops;
/* register rx/tx burst functions for data path */
@@ -271,3 +273,4 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_NULL_PMD,
"max_nb_queue_pairs=<int> "
"max_nb_sessions=<int> "
"socket_id=<int>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(cryptodev_null_pmd_drv, cryptodev_driver_id);
diff --git a/drivers/crypto/null/null_crypto_pmd_ops.c b/drivers/crypto/null/null_crypto_pmd_ops.c
index 12c946c..a7c891e 100644
--- a/drivers/crypto/null/null_crypto_pmd_ops.c
+++ b/drivers/crypto/null/null_crypto_pmd_ops.c
@@ -151,7 +151,7 @@ null_crypto_pmd_info_get(struct rte_cryptodev *dev,
struct null_crypto_private *internals = dev->data->dev_private;
if (dev_info != NULL) {
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->max_nb_queue_pairs = internals->max_nb_qpairs;
dev_info->sym.max_nb_sessions = internals->max_nb_sessions;
dev_info->feature_flags = dev->feature_flags;
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 5d29171..4e4394f 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -45,6 +45,8 @@
#define DES_BLOCK_SIZE 8
+static uint8_t cryptodev_driver_id;
+
static int cryptodev_openssl_remove(struct rte_vdev_device *vdev);
/*----------------------------------------------------------------------------*/
@@ -449,8 +451,8 @@ get_session(struct openssl_qp *qp, struct rte_crypto_op *op)
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
/* get existing session */
if (likely(op->sym->session != NULL &&
- op->sym->session->dev_type ==
- RTE_CRYPTODEV_OPENSSL_PMD))
+ op->sym->session->driver_id ==
+ cryptodev_driver_id))
sess = (struct openssl_session *)
op->sym->session->_private;
} else {
@@ -1285,7 +1287,7 @@ cryptodev_openssl_create(const char *name,
goto init_error;
}
- dev->dev_type = RTE_CRYPTODEV_OPENSSL_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_openssl_pmd_ops;
/* register rx/tx burst functions for data path */
@@ -1374,3 +1376,4 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_OPENSSL_PMD,
"max_nb_queue_pairs=<int> "
"max_nb_sessions=<int> "
"socket_id=<int>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(cryptodev_openssl_pmd_drv, cryptodev_driver_id);
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index 22a6873..f65de53 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -536,7 +536,7 @@ openssl_pmd_info_get(struct rte_cryptodev *dev,
struct openssl_private *internals = dev->data->dev_private;
if (dev_info != NULL) {
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->feature_flags = dev->feature_flags;
dev_info->capabilities = openssl_pmd_capabilities;
dev_info->max_nb_queue_pairs = internals->max_nb_qpairs;
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
index f8e1d01..7c5a9a8 100644
--- a/drivers/crypto/qat/qat_crypto.c
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -914,7 +914,8 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
return -EINVAL;
}
- if (unlikely(op->sym->session->dev_type != RTE_CRYPTODEV_QAT_SYM_PMD)) {
+ if (unlikely(op->sym->session->driver_id !=
+ cryptodev_qat_driver_id)) {
PMD_DRV_LOG(ERR, "Session was not created for this device");
return -EINVAL;
}
@@ -1254,7 +1255,7 @@ void qat_dev_info_get(struct rte_cryptodev *dev,
info->feature_flags = dev->feature_flags;
info->capabilities = internals->qat_dev_capabilities;
info->sym.max_nb_sessions = internals->max_nb_sessions;
- info->dev_type = RTE_CRYPTODEV_QAT_SYM_PMD;
+ info->driver_id = cryptodev_qat_driver_id;
info->pci_dev = RTE_DEV_TO_PCI(dev->device);
}
}
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
index b740d6b..efcf607 100644
--- a/drivers/crypto/qat/qat_crypto.h
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -85,6 +85,8 @@ struct qat_pmd_private {
const struct rte_cryptodev_capabilities *qat_dev_capabilities;
};
+extern uint8_t cryptodev_qat_driver_id;
+
int qat_dev_config(struct rte_cryptodev *dev,
struct rte_cryptodev_config *config);
int qat_dev_start(struct rte_cryptodev *dev);
diff --git a/drivers/crypto/qat/rte_qat_cryptodev.c b/drivers/crypto/qat/rte_qat_cryptodev.c
index 78d50fb..1c5ff77 100644
--- a/drivers/crypto/qat/rte_qat_cryptodev.c
+++ b/drivers/crypto/qat/rte_qat_cryptodev.c
@@ -40,6 +40,8 @@
#include "qat_crypto.h"
#include "qat_logs.h"
+uint8_t cryptodev_qat_driver_id;
+
static const struct rte_cryptodev_capabilities qat_cpm16_capabilities[] = {
QAT_BASE_CPM16_SYM_CAPABILITIES,
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
@@ -106,7 +108,7 @@ crypto_qat_dev_init(struct rte_cryptodev *cryptodev)
RTE_DEV_TO_PCI(cryptodev->device)->addr.devid,
RTE_DEV_TO_PCI(cryptodev->device)->addr.function);
- cryptodev->dev_type = RTE_CRYPTODEV_QAT_SYM_PMD;
+ cryptodev->driver_id = cryptodev_qat_driver_id;
cryptodev->dev_ops = &crypto_qat_ops;
cryptodev->enqueue_burst = qat_pmd_enqueue_op_burst;
@@ -168,4 +170,4 @@ static struct rte_pci_driver rte_qat_pmd = {
RTE_PMD_REGISTER_PCI(CRYPTODEV_NAME_QAT_SYM_PMD, rte_qat_pmd);
RTE_PMD_REGISTER_PCI_TABLE(CRYPTODEV_NAME_QAT_SYM_PMD, pci_id_qat_map);
-
+RTE_PMD_REGISTER_CRYPTO_DRIVER(rte_qat_pmd, cryptodev_qat_driver_id);
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
index 95566d5..9c364c2 100644
--- a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
@@ -198,7 +198,7 @@ rte_cryptodev_scheduler_slave_attach(uint8_t scheduler_id, uint8_t slave_id)
return -ENOTSUP;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
@@ -226,12 +226,12 @@ rte_cryptodev_scheduler_slave_attach(uint8_t scheduler_id, uint8_t slave_id)
rte_cryptodev_info_get(slave_id, &dev_info);
slave->dev_id = slave_id;
- slave->dev_type = dev_info.dev_type;
+ slave->driver_id = dev_info.driver_id;
sched_ctx->nb_slaves++;
if (update_scheduler_capability(sched_ctx) < 0) {
slave->dev_id = 0;
- slave->dev_type = 0;
+ slave->driver_id = 0;
sched_ctx->nb_slaves--;
CS_LOG_ERR("capabilities update failed");
@@ -257,7 +257,7 @@ rte_cryptodev_scheduler_slave_detach(uint8_t scheduler_id, uint8_t slave_id)
return -ENOTSUP;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
@@ -314,7 +314,7 @@ rte_cryptodev_scheduler_mode_set(uint8_t scheduler_id,
return -ENOTSUP;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
@@ -370,7 +370,7 @@ rte_cryptodev_scheduler_mode_get(uint8_t scheduler_id)
return -ENOTSUP;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
@@ -392,7 +392,7 @@ rte_cryptodev_scheduler_ordering_set(uint8_t scheduler_id,
return -ENOTSUP;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
@@ -420,7 +420,7 @@ rte_cryptodev_scheduler_ordering_get(uint8_t scheduler_id)
return -ENOTSUP;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
@@ -442,7 +442,7 @@ rte_cryptodev_scheduler_load_user_scheduler(uint8_t scheduler_id,
return -ENOTSUP;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
@@ -499,7 +499,7 @@ rte_cryptodev_scheduler_slaves_get(uint8_t scheduler_id, uint8_t *slaves)
return -ENOTSUP;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
@@ -567,7 +567,7 @@ rte_cryptodev_scheduler_option_get(uint8_t scheduler_id,
return -EINVAL;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
diff --git a/drivers/crypto/scheduler/scheduler_pmd.c b/drivers/crypto/scheduler/scheduler_pmd.c
index fefd6cc..b385851 100644
--- a/drivers/crypto/scheduler/scheduler_pmd.c
+++ b/drivers/crypto/scheduler/scheduler_pmd.c
@@ -42,6 +42,8 @@
#include "rte_cryptodev_scheduler.h"
#include "scheduler_pmd_private.h"
+uint8_t cryptodev_driver_id;
+
struct scheduler_init_params {
struct rte_crypto_vdev_init_params def_p;
uint32_t nb_slaves;
@@ -113,7 +115,7 @@ cryptodev_scheduler_create(const char *name,
return -EFAULT;
}
- dev->dev_type = RTE_CRYPTODEV_SCHEDULER_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_crypto_scheduler_pmd_ops;
sched_ctx = dev->data->dev_private;
@@ -436,3 +438,5 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_SCHEDULER_PMD,
"max_nb_sessions=<int> "
"socket_id=<int> "
"slave=<name>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(cryptodev_scheduler_pmd_drv,
+ cryptodev_driver_id);
diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
index 4fc8b91..90e3734 100644
--- a/drivers/crypto/scheduler/scheduler_pmd_ops.c
+++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
@@ -369,7 +369,7 @@ scheduler_pmd_info_get(struct rte_cryptodev *dev,
max_nb_sessions;
}
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->feature_flags = dev->feature_flags;
dev_info->capabilities = sched_ctx->capabilities;
dev_info->max_nb_queue_pairs = sched_ctx->max_nb_queue_pairs;
diff --git a/drivers/crypto/scheduler/scheduler_pmd_private.h b/drivers/crypto/scheduler/scheduler_pmd_private.h
index 05a5916..efb2bbc 100644
--- a/drivers/crypto/scheduler/scheduler_pmd_private.h
+++ b/drivers/crypto/scheduler/scheduler_pmd_private.h
@@ -63,7 +63,7 @@ struct scheduler_slave {
uint16_t qp_id;
uint32_t nb_inflight_cops;
- enum rte_cryptodev_type dev_type;
+ uint8_t driver_id;
};
struct scheduler_ctx {
@@ -105,6 +105,8 @@ struct scheduler_session {
RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES];
};
+extern uint8_t cryptodev_driver_id;
+
static __rte_always_inline uint16_t
get_max_enqueue_order_count(struct rte_ring *order_ring, uint16_t nb_ops)
{
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd.c b/drivers/crypto/snow3g/rte_snow3g_pmd.c
index 8945f19..fe074c1 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd.c
@@ -47,6 +47,8 @@
#define SNOW3G_MAX_BURST 8
#define BYTE_LEN 8
+static uint8_t cryptodev_driver_id;
+
/** Get xform chain order. */
static enum snow3g_operation
snow3g_get_mode(const struct rte_crypto_sym_xform *xform)
@@ -144,8 +146,8 @@ snow3g_get_session(struct snow3g_qp *qp, struct rte_crypto_op *op)
struct snow3g_session *sess;
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->sym->session->dev_type !=
- RTE_CRYPTODEV_SNOW3G_PMD))
+ if (unlikely(op->sym->session->driver_id !=
+ cryptodev_driver_id))
return NULL;
sess = (struct snow3g_session *)op->sym->session->_private;
@@ -571,7 +573,7 @@ cryptodev_snow3g_create(const char *name,
goto init_error;
}
- dev->dev_type = RTE_CRYPTODEV_SNOW3G_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_snow3g_pmd_ops;
/* Register RX/TX burst functions for data path. */
@@ -655,3 +657,4 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_SNOW3G_PMD,
"max_nb_queue_pairs=<int> "
"max_nb_sessions=<int> "
"socket_id=<int>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(cryptodev_snow3g_pmd_drv, cryptodev_driver_id);
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
index 7ce96be..26cc3e9 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
@@ -156,7 +156,7 @@ snow3g_pmd_info_get(struct rte_cryptodev *dev,
struct snow3g_private *internals = dev->data->dev_private;
if (dev_info != NULL) {
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
dev_info->sym.max_nb_sessions = internals->max_nb_sessions;
dev_info->feature_flags = dev->feature_flags;
diff --git a/drivers/crypto/zuc/rte_zuc_pmd.c b/drivers/crypto/zuc/rte_zuc_pmd.c
index ec6d54f..b7b8dfc 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd.c
@@ -46,6 +46,8 @@
#define ZUC_MAX_BURST 8
#define BYTE_LEN 8
+static uint8_t cryptodev_driver_id;
+
/** Get xform chain order. */
static enum zuc_operation
zuc_get_mode(const struct rte_crypto_sym_xform *xform)
@@ -143,8 +145,8 @@ zuc_get_session(struct zuc_qp *qp, struct rte_crypto_op *op)
struct zuc_session *sess;
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->sym->session->dev_type !=
- RTE_CRYPTODEV_ZUC_PMD))
+ if (unlikely(op->sym->session->driver_id !=
+ cryptodev_driver_id))
return NULL;
sess = (struct zuc_session *)op->sym->session->_private;
@@ -471,7 +473,7 @@ cryptodev_zuc_create(const char *name,
goto init_error;
}
- dev->dev_type = RTE_CRYPTODEV_ZUC_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_zuc_pmd_ops;
/* Register RX/TX burst functions for data path. */
@@ -554,3 +556,4 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_ZUC_PMD,
"max_nb_queue_pairs=<int> "
"max_nb_sessions=<int> "
"socket_id=<int>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(cryptodev_zuc_pmd_drv, cryptodev_driver_id);
diff --git a/drivers/crypto/zuc/rte_zuc_pmd_ops.c b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
index e793459..645b80c 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd_ops.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
@@ -156,7 +156,7 @@ zuc_pmd_info_get(struct rte_cryptodev *dev,
struct zuc_private *internals = dev->data->dev_private;
if (dev_info != NULL) {
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
dev_info->sym.max_nb_sessions = internals->max_nb_sessions;
dev_info->feature_flags = dev->feature_flags;
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index a466ed7..17f7c63 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -391,12 +391,12 @@ rte_cryptodev_count(void)
}
uint8_t
-rte_cryptodev_count_devtype(enum rte_cryptodev_type type)
+rte_cryptodev_device_count_by_driver(uint8_t driver_id)
{
uint8_t i, dev_count = 0;
for (i = 0; i < rte_cryptodev_globals->max_devs; i++)
- if (rte_cryptodev_globals->devs[i].dev_type == type &&
+ if (rte_cryptodev_globals->devs[i].driver_id == driver_id &&
rte_cryptodev_globals->devs[i].attached ==
RTE_CRYPTODEV_ATTACHED)
dev_count++;
@@ -1040,7 +1040,7 @@ rte_cryptodev_sym_session_init(struct rte_mempool *mp,
memset(sess, 0, mp->elt_size);
sess->dev_id = dev->data->dev_id;
- sess->dev_type = dev->dev_type;
+ sess->driver_id = dev->driver_id;
sess->mp = mp;
if (dev->dev_ops->session_initialize)
@@ -1207,7 +1207,7 @@ rte_cryptodev_sym_session_free(uint8_t dev_id,
dev = &rte_crypto_devices[dev_id];
/* Check the session belongs to this device type */
- if (sess->dev_type != dev->dev_type)
+ if (sess->driver_id != dev->driver_id)
return sess;
/* Let device implementation clear session material */
@@ -1319,3 +1319,60 @@ rte_cryptodev_pmd_create_dev_name(char *name, const char *dev_name_prefix)
return -1;
}
+
+TAILQ_HEAD(cryptodev_driver_list, cryptodev_driver);
+
+static struct cryptodev_driver_list cryptodev_driver_list =
+ TAILQ_HEAD_INITIALIZER(cryptodev_driver_list);
+
+struct cryptodev_driver {
+ TAILQ_ENTRY(cryptodev_driver) next; /**< Next in list. */
+ const struct rte_driver *driver;
+ uint8_t id;
+};
+
+static uint8_t nb_drivers;
+
+int
+rte_cryptodev_driver_id_get(const char *name)
+{
+ struct cryptodev_driver *driver;
+ const char *driver_name;
+
+ if (name == NULL) {
+ RTE_LOG(DEBUG, CRYPTODEV, "name pointer NULL");
+ return -1;
+ }
+
+ TAILQ_FOREACH(driver, &cryptodev_driver_list, next) {
+ driver_name = driver->driver->name;
+ if (strncmp(driver_name, name, strlen(driver_name)) == 0)
+ return driver->id;
+ }
+ return -1;
+}
+
+const char *
+rte_cryptodev_driver_name_get(uint8_t driver_id)
+{
+ struct cryptodev_driver *driver;
+
+ TAILQ_FOREACH(driver, &cryptodev_driver_list, next)
+ if (driver->id == driver_id)
+ return driver->driver->name;
+ return NULL;
+}
+
+uint8_t
+rte_cryptodev_allocate_driver(const struct rte_driver *drv)
+{
+ struct cryptodev_driver *driver;
+
+ driver = malloc(sizeof(*driver));
+ driver->driver = drv;
+ driver->id = nb_drivers;
+
+ TAILQ_INSERT_TAIL(&cryptodev_driver_list, driver, next);
+
+ return nb_drivers++;
+}
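
For illustration, a minimal sketch of how an application could consume
these lookup helpers once drivers are registered; the driver name string
and the function name show_driver_devices are only examples, not part of
this patch:

#include <stdio.h>

#include <rte_cryptodev.h>

/* Resolve a driver name to its dynamically allocated id and report
 * how many attached devices use that driver. */
static void
show_driver_devices(const char *drv_name)
{
	int driver_id = rte_cryptodev_driver_id_get(drv_name);

	if (driver_id < 0) {
		printf("driver %s is not registered\n", drv_name);
		return;
	}

	printf("driver %s (id %d) has %u attached device(s)\n",
		rte_cryptodev_driver_name_get((uint8_t)driver_id),
		driver_id,
		(unsigned int)rte_cryptodev_device_count_by_driver(
			(uint8_t)driver_id));
}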
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 4e318f0..9d541a8 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -74,21 +74,6 @@ extern "C" {
#define CRYPTODEV_NAME_DPAA2_SEC_PMD cryptodev_dpaa2_sec_pmd
/**< NXP DPAA2 - SEC PMD device name */
-/** Crypto device type */
-enum rte_cryptodev_type {
- RTE_CRYPTODEV_NULL_PMD = 1, /**< Null crypto PMD */
- RTE_CRYPTODEV_AESNI_GCM_PMD, /**< AES-NI GCM PMD */
- RTE_CRYPTODEV_AESNI_MB_PMD, /**< AES-NI multi buffer PMD */
- RTE_CRYPTODEV_QAT_SYM_PMD, /**< QAT PMD Symmetric Crypto */
- RTE_CRYPTODEV_SNOW3G_PMD, /**< SNOW 3G PMD */
- RTE_CRYPTODEV_KASUMI_PMD, /**< KASUMI PMD */
- RTE_CRYPTODEV_ZUC_PMD, /**< ZUC PMD */
- RTE_CRYPTODEV_OPENSSL_PMD, /**< OpenSSL PMD */
- RTE_CRYPTODEV_ARMV8_PMD, /**< ARMv8 crypto PMD */
- RTE_CRYPTODEV_SCHEDULER_PMD, /**< Crypto Scheduler PMD */
- RTE_CRYPTODEV_DPAA2_SEC_PMD, /**< NXP DPAA2 - SEC PMD */
-};
-
extern const char **rte_cyptodev_names;
/* Logging Macros */
@@ -322,7 +307,7 @@ rte_cryptodev_get_feature_name(uint64_t flag);
/** Crypto device information */
struct rte_cryptodev_info {
const char *driver_name; /**< Driver name. */
- enum rte_cryptodev_type dev_type; /**< Device type */
+ uint8_t driver_id; /**< Driver identifier */
struct rte_pci_device *pci_dev; /**< PCI information. */
uint64_t feature_flags; /**< Feature flags */
@@ -426,13 +411,13 @@ rte_cryptodev_count(void);
/**
- * Get number of crypto device defined type.
+ * Get the number of crypto devices that use a given driver.
*
- * @param type type of device.
+ * @param driver_id driver identifier.
*
* @return
- * Returns number of crypto device.
+ * Returns the number of attached crypto devices that use the driver.
*/
extern uint8_t
-rte_cryptodev_count_devtype(enum rte_cryptodev_type type);
+rte_cryptodev_device_count_by_driver(uint8_t driver_id);
/**
* Get number and identifiers of attached crypto devices that
@@ -703,8 +688,8 @@ struct rte_cryptodev {
struct rte_device *device;
/**< Backing device */
- enum rte_cryptodev_type dev_type;
- /**< Crypto device type */
+ uint8_t driver_id;
+	/**< Crypto driver identifier */
struct rte_cryptodev_cb_list link_intr_cbs;
/**< User application callback for interrupts if present */
@@ -841,8 +826,8 @@ struct rte_cryptodev_sym_session {
struct {
uint8_t dev_id;
/**< Device Id */
- enum rte_cryptodev_type dev_type;
- /** Crypto Device type session created on */
+ uint8_t driver_id;
+	/**< Crypto driver identifier the session was created on */
struct rte_mempool *mp;
/**< Mempool session allocated from */
} __rte_aligned(8);
@@ -923,6 +908,45 @@ int
rte_cryptodev_queue_pair_detach_sym_session(uint16_t qp_id,
struct rte_cryptodev_sym_session *session);
+/**
+ * Provide driver identifier.
+ *
+ * @param name
+ * The pointer to a driver name.
+ * @return
+ *   The driver identifier, or -1 if no driver is found.
+ */
+int rte_cryptodev_driver_id_get(const char *name);
+
+/**
+ * Provide driver name.
+ *
+ * @param driver_id
+ * The driver identifier.
+ * @return
+ *   The driver name, or NULL if no driver is found.
+ */
+const char *rte_cryptodev_driver_name_get(uint8_t driver_id);
+
+/**
+ * @internal
+ * Allocate a unique identifier for a Cryptodev driver.
+ *
+ * @param driver
+ * Pointer to rte_driver.
+ * @return
+ *   The allocated driver identifier.
+ */
+uint8_t rte_cryptodev_allocate_driver(const struct rte_driver *driver);
+
+#define RTE_PMD_REGISTER_CRYPTO_DRIVER(drv, driver_id)\
+RTE_INIT(init_ ##driver_id);\
+static void init_ ##driver_id(void)\
+{\
+ driver_id = rte_cryptodev_allocate_driver(&(drv).driver);\
+}
+
#ifdef __cplusplus
}
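
To illustrate the registration path, here is a sketch of a hypothetical
PMD using the macro; my_pmd_drv, my_driver_id and the device name are
placeholder names, and the expansion shown assumes the RTE_INIT
constructor semantics:

#include <rte_vdev.h>
#include <rte_cryptodev.h>

/* Hypothetical virtual device driver; only the embedded rte_driver
 * matters for id allocation. */
static struct rte_vdev_driver my_pmd_drv = {
	.driver = {
		.name = "crypto_my_pmd",
	},
};

/* Filled in by the constructor the macro generates. */
static uint8_t my_driver_id;

RTE_PMD_REGISTER_CRYPTO_DRIVER(my_pmd_drv, my_driver_id);

/* The macro expands, via RTE_INIT, to roughly:
 *
 *	static void init_my_driver_id(void)
 *	{
 *		my_driver_id = rte_cryptodev_allocate_driver(
 *				&(my_pmd_drv).driver);
 *	}
 *
 * so the id is assigned before main() runs, in registration order.
 */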
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
index 8e8b2ad..f6aa84d 100644
--- a/lib/librte_cryptodev/rte_cryptodev_pmd.h
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -60,7 +60,7 @@ struct rte_cryptodev_session {
RTE_STD_C11
struct {
uint8_t dev_id;
- enum rte_cryptodev_type type;
+ uint8_t driver_id;
struct rte_mempool *mp;
} __rte_aligned(8);
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 7191607..afe148a 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -6,7 +6,6 @@ DPDK_16.04 {
rte_cryptodev_callback_unregister;
rte_cryptodev_close;
rte_cryptodev_count;
- rte_cryptodev_count_devtype;
rte_cryptodev_configure;
rte_cryptodev_create_vdev;
rte_cryptodev_get_dev_id;
@@ -62,6 +61,10 @@ DPDK_17.05 {
DPDK_17.08 {
global:
+ rte_cryptodev_allocate_driver;
+ rte_cryptodev_device_count_by_driver;
+ rte_cryptodev_driver_id_get;
+ rte_cryptodev_driver_name_get;
rte_cryptodev_pci_generic_probe;
rte_cryptodev_pci_generic_remove;
rte_cryptodev_vdev_parse_init_params;
diff --git a/test/test/test_cryptodev.c b/test/test/test_cryptodev.c
index f8f15c0..afa895e 100644
--- a/test/test/test_cryptodev.c
+++ b/test/test/test_cryptodev.c
@@ -60,7 +60,7 @@
#include "test_cryptodev_gcm_test_vectors.h"
#include "test_cryptodev_hmac_test_vectors.h"
-static enum rte_cryptodev_type gbl_cryptodev_type;
+static int gbl_driver_id;
struct crypto_testsuite_params {
struct rte_mempool *mbuf_pool;
@@ -210,14 +210,11 @@ testsuite_setup(void)
}
/* Create an AESNI MB device if required */
- if (gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD) {
-#ifndef RTE_LIBRTE_PMD_AESNI_MB
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_AESNI_MB must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_AESNI_MB_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD), NULL);
@@ -230,14 +227,11 @@ testsuite_setup(void)
}
/* Create an AESNI GCM device if required */
- if (gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_GCM_PMD) {
-#ifndef RTE_LIBRTE_PMD_AESNI_GCM
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_AESNI_GCM must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_AESNI_GCM_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD)));
if (nb_devs < 1) {
TEST_ASSERT_SUCCESS(rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD), NULL),
@@ -248,13 +242,11 @@ testsuite_setup(void)
}
/* Create a SNOW 3G device if required */
- if (gbl_cryptodev_type == RTE_CRYPTODEV_SNOW3G_PMD) {
-#ifndef RTE_LIBRTE_PMD_SNOW3G
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_SNOW3G must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_SNOW3G_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD)));
if (nb_devs < 1) {
TEST_ASSERT_SUCCESS(rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD), NULL),
@@ -265,13 +257,11 @@ testsuite_setup(void)
}
/* Create a KASUMI device if required */
- if (gbl_cryptodev_type == RTE_CRYPTODEV_KASUMI_PMD) {
-#ifndef RTE_LIBRTE_PMD_KASUMI
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_KASUMI must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_KASUMI_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_KASUMI_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_KASUMI_PMD)));
if (nb_devs < 1) {
TEST_ASSERT_SUCCESS(rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_KASUMI_PMD), NULL),
@@ -282,13 +272,11 @@ testsuite_setup(void)
}
/* Create a ZUC device if required */
- if (gbl_cryptodev_type == RTE_CRYPTODEV_ZUC_PMD) {
-#ifndef RTE_LIBRTE_PMD_ZUC
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_ZUC must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_ZUC_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ZUC_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ZUC_PMD)));
if (nb_devs < 1) {
TEST_ASSERT_SUCCESS(rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_ZUC_PMD), NULL),
@@ -299,14 +287,11 @@ testsuite_setup(void)
}
/* Create a NULL device if required */
- if (gbl_cryptodev_type == RTE_CRYPTODEV_NULL_PMD) {
-#ifndef RTE_LIBRTE_PMD_NULL_CRYPTO
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_NULL_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_NULL_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_NULL_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_NULL_PMD), NULL);
@@ -319,14 +304,11 @@ testsuite_setup(void)
}
/* Create an OPENSSL device if required */
- if (gbl_cryptodev_type == RTE_CRYPTODEV_OPENSSL_PMD) {
-#ifndef RTE_LIBRTE_PMD_OPENSSL
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_OPENSSL must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_OPENSSL_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD),
@@ -339,14 +321,11 @@ testsuite_setup(void)
}
/* Create a ARMv8 device if required */
- if (gbl_cryptodev_type == RTE_CRYPTODEV_ARMV8_PMD) {
-#ifndef RTE_LIBRTE_PMD_ARMV8_CRYPTO
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_ARMV8_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_ARMV8_PMD),
@@ -359,15 +338,12 @@ testsuite_setup(void)
}
#ifdef RTE_LIBRTE_PMD_CRYPTO_SCHEDULER
- if (gbl_cryptodev_type == RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD))) {
-#ifndef RTE_LIBRTE_PMD_AESNI_MB
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_AESNI_MB must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_SCHEDULER_PMD);
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),
@@ -381,14 +357,6 @@ testsuite_setup(void)
}
#endif /* RTE_LIBRTE_PMD_CRYPTO_SCHEDULER */
-#ifndef RTE_LIBRTE_PMD_QAT
- if (gbl_cryptodev_type == RTE_CRYPTODEV_QAT_SYM_PMD) {
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_QAT must be enabled "
- "in config file to run this testsuite.\n");
- return TEST_FAILED;
- }
-#endif
-
nb_devs = rte_cryptodev_count();
if (nb_devs < 1) {
RTE_LOG(ERR, USER1, "No crypto devices found?\n");
@@ -398,7 +366,7 @@ testsuite_setup(void)
/* Create list of valid crypto devs */
for (i = 0; i < nb_devs; i++) {
rte_cryptodev_info_get(i, &info);
- if (info.dev_type == gbl_cryptodev_type)
+ if (info.driver_id == gbl_driver_id)
ts_params->valid_devs[ts_params->valid_dev_count++] = i;
}
@@ -1341,7 +1309,8 @@ test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
TEST_ASSERT_BUFFERS_ARE_EQUAL(digest,
catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
- gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+ gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)) ?
TRUNCATED_DIGEST_BYTE_LENGTH_SHA1 :
DIGEST_BYTE_LENGTH_SHA1,
"Generated digest data not as expected");
@@ -1506,7 +1475,8 @@ test_AES_cipheronly_mb_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_AESNI_MB_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)),
BLKCIPHER_AES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1522,7 +1492,8 @@ test_AES_docsis_mb_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_AESNI_MB_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)),
BLKCIPHER_AES_DOCSIS_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1538,7 +1509,8 @@ test_AES_docsis_qat_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_QAT_SYM_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_AES_DOCSIS_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1554,7 +1526,8 @@ test_DES_docsis_qat_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_QAT_SYM_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_DES_DOCSIS_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1570,7 +1543,8 @@ test_authonly_mb_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_AESNI_MB_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)),
BLKCIPHER_AUTHONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1586,7 +1560,8 @@ test_AES_chain_mb_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_AESNI_MB_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)),
BLKCIPHER_AES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1604,7 +1579,8 @@ test_AES_cipheronly_scheduler_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_SCHEDULER_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD)),
BLKCIPHER_AES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1620,7 +1596,8 @@ test_AES_chain_scheduler_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_SCHEDULER_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD)),
BLKCIPHER_AES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1636,7 +1613,8 @@ test_authonly_scheduler_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_SCHEDULER_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD)),
BLKCIPHER_AUTHONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1654,7 +1632,8 @@ test_AES_chain_openssl_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_OPENSSL_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)),
BLKCIPHER_AES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1670,7 +1649,8 @@ test_AES_cipheronly_openssl_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_OPENSSL_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)),
BLKCIPHER_AES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1686,7 +1666,8 @@ test_AES_chain_qat_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_QAT_SYM_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_AES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1702,7 +1683,8 @@ test_AES_cipheronly_qat_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_QAT_SYM_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_AES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1718,7 +1700,8 @@ test_AES_chain_dpaa2_sec_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_DPAA2_SEC_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD)),
BLKCIPHER_AES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1734,7 +1717,8 @@ test_AES_cipheronly_dpaa2_sec_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_DPAA2_SEC_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD)),
BLKCIPHER_AES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1750,7 +1734,8 @@ test_authonly_openssl_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_OPENSSL_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)),
BLKCIPHER_AUTHONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1766,7 +1751,8 @@ test_AES_chain_armv8_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_ARMV8_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD)),
BLKCIPHER_AES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -4689,7 +4675,8 @@ test_3DES_chain_qat_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_QAT_SYM_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_3DES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -4705,7 +4692,8 @@ test_DES_cipheronly_qat_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_QAT_SYM_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_DES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -4721,7 +4709,8 @@ test_DES_docsis_openssl_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_OPENSSL_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)),
BLKCIPHER_DES_DOCSIS_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -4737,7 +4726,8 @@ test_3DES_chain_dpaa2_sec_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_DPAA2_SEC_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD)),
BLKCIPHER_3DES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -4753,7 +4743,8 @@ test_3DES_cipheronly_dpaa2_sec_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_DPAA2_SEC_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD)),
BLKCIPHER_3DES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -4769,7 +4760,8 @@ test_3DES_cipheronly_qat_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_QAT_SYM_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_3DES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -4785,7 +4777,8 @@ test_3DES_chain_openssl_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_OPENSSL_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)),
BLKCIPHER_3DES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -4801,7 +4794,8 @@ test_3DES_cipheronly_openssl_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_OPENSSL_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)),
BLKCIPHER_3DES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -7816,8 +7810,9 @@ test_scheduler_attach_slave_op(void)
char vdev_name[32];
/* create 2 AESNI_MB if necessary */
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_AESNI_MB_PMD);
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)));
if (nb_devs < 2) {
for (i = nb_devs; i < 2; i++) {
snprintf(vdev_name, sizeof(vdev_name), "%s_%u",
@@ -7838,7 +7833,8 @@ test_scheduler_attach_slave_op(void)
struct rte_cryptodev_info info;
rte_cryptodev_info_get(i, &info);
- if (info.dev_type != RTE_CRYPTODEV_AESNI_MB_PMD)
+ if (info.driver_id != rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)))
continue;
ret = rte_cryptodev_scheduler_slave_attach(sched_id,
@@ -8605,14 +8601,31 @@ static struct unit_test_suite cryptodev_armv8_testsuite = {
static int
test_cryptodev_qat(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_QAT_SYM_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "QAT PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_QAT is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
+
return unit_test_suite_runner(&cryptodev_qat_testsuite);
}
static int
test_cryptodev_aesni_mb(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "AESNI MB PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_AESNI_MB is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_aesni_mb_testsuite);
}
@@ -8620,7 +8633,15 @@ test_cryptodev_aesni_mb(void /*argv __rte_unused, int argc __rte_unused*/)
static int
test_cryptodev_openssl(void)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_OPENSSL_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "AESNI MB PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_AESNI_MB is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_openssl_testsuite);
}
@@ -8628,7 +8649,15 @@ test_cryptodev_openssl(void)
static int
test_cryptodev_aesni_gcm(void)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_AESNI_GCM_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "AESNI GCM PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_AESNI_GCM is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_aesni_gcm_testsuite);
}
@@ -8636,7 +8665,15 @@ test_cryptodev_aesni_gcm(void)
static int
test_cryptodev_null(void)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_NULL_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_NULL_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "NULL PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_NULL is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_null_testsuite);
}
@@ -8644,7 +8681,15 @@ test_cryptodev_null(void)
static int
test_cryptodev_sw_snow3g(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_SNOW3G_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "SNOW3G PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_SNOW3G is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_sw_snow3g_testsuite);
}
@@ -8652,7 +8697,15 @@ test_cryptodev_sw_snow3g(void /*argv __rte_unused, int argc __rte_unused*/)
static int
test_cryptodev_sw_kasumi(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_KASUMI_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_KASUMI_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "ZUC PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_KASUMI is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_sw_kasumi_testsuite);
}
@@ -8660,7 +8713,15 @@ test_cryptodev_sw_kasumi(void /*argv __rte_unused, int argc __rte_unused*/)
static int
test_cryptodev_sw_zuc(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_ZUC_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ZUC_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "ZUC PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_ZUC is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_sw_zuc_testsuite);
}
@@ -8668,7 +8729,15 @@ test_cryptodev_sw_zuc(void /*argv __rte_unused, int argc __rte_unused*/)
static int
test_cryptodev_armv8(void)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_ARMV8_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "ARMV8 PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_ARMV8 is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_armv8_testsuite);
}
@@ -8678,7 +8747,22 @@ test_cryptodev_armv8(void)
static int
test_cryptodev_scheduler(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_SCHEDULER_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "SCHEDULER PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_SCHEDULER is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
+
+ if (rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)) == -1) {
+ RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_AESNI_MB must be"
+ " enabled in config file to run this testsuite.\n");
+ return TEST_FAILED;
+	}
return unit_test_suite_runner(&cryptodev_scheduler_testsuite);
}
@@ -8689,7 +8773,16 @@ REGISTER_TEST_COMMAND(cryptodev_scheduler_autotest, test_cryptodev_scheduler);
static int
test_cryptodev_dpaa2_sec(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "DPAA2 SEC PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
+
return unit_test_suite_runner(&cryptodev_dpaa2_sec_testsuite);
}
diff --git a/test/test/test_cryptodev_blockcipher.c b/test/test/test_cryptodev_blockcipher.c
index 603c776..4bc370d 100644
--- a/test/test/test_cryptodev_blockcipher.c
+++ b/test/test/test_cryptodev_blockcipher.c
@@ -53,7 +53,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
struct rte_mempool *mbuf_pool,
struct rte_mempool *op_mpool,
uint8_t dev_id,
- enum rte_cryptodev_type cryptodev_type,
+ int driver_id,
char *test_msg)
{
struct rte_mbuf *ibuf = NULL;
@@ -79,6 +79,17 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
uint8_t tmp_src_buf[MBUF_SIZE];
uint8_t tmp_dst_buf[MBUF_SIZE];
+ int openssl_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD));
+ int scheduler_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD));
+ int armv8_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD));
+ int aesni_mb_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD));
+ int qat_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
+
int nb_segs = 1;
if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_SG) {
@@ -99,17 +110,14 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
memcpy(auth_key, tdata->auth_key.data,
tdata->auth_key.len);
- switch (cryptodev_type) {
- case RTE_CRYPTODEV_QAT_SYM_PMD:
- case RTE_CRYPTODEV_OPENSSL_PMD:
- case RTE_CRYPTODEV_ARMV8_PMD: /* Fall through */
+ if (driver_id == qat_pmd ||
+ driver_id == openssl_pmd ||
+			driver_id == armv8_pmd) {
digest_len = tdata->digest.len;
- break;
- case RTE_CRYPTODEV_AESNI_MB_PMD:
- case RTE_CRYPTODEV_SCHEDULER_PMD:
+ } else if (driver_id == aesni_mb_pmd ||
+ driver_id == scheduler_pmd) {
digest_len = tdata->digest.truncated_len;
- break;
- default:
+ } else {
snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
"line %u FAILED: %s",
__LINE__, "Unsupported PMD type");
@@ -592,7 +600,7 @@ int
test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
struct rte_mempool *op_mpool,
uint8_t dev_id,
- enum rte_cryptodev_type cryptodev_type,
+ int driver_id,
enum blockcipher_test_type test_type)
{
int status, overall_status = TEST_SUCCESS;
@@ -602,6 +610,19 @@ test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
uint32_t target_pmd_mask = 0;
const struct blockcipher_test_case *tcs = NULL;
+ int openssl_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD));
+ int dpaa2_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD));
+ int scheduler_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD));
+ int armv8_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD));
+ int aesni_mb_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD));
+ int qat_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
+
switch (test_type) {
case BLKCIPHER_AES_CHAIN_TYPE:
n_test_cases = sizeof(aes_chain_test_cases) /
@@ -647,29 +668,20 @@ test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
break;
}
- switch (cryptodev_type) {
- case RTE_CRYPTODEV_AESNI_MB_PMD:
+ if (driver_id == aesni_mb_pmd)
target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB;
- break;
- case RTE_CRYPTODEV_QAT_SYM_PMD:
+ else if (driver_id == qat_pmd)
target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_QAT;
- break;
- case RTE_CRYPTODEV_OPENSSL_PMD:
+ else if (driver_id == openssl_pmd)
target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL;
- break;
- case RTE_CRYPTODEV_ARMV8_PMD:
+ else if (driver_id == armv8_pmd)
target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_ARMV8;
- break;
- case RTE_CRYPTODEV_SCHEDULER_PMD:
+ else if (driver_id == scheduler_pmd)
target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER;
- break;
- case RTE_CRYPTODEV_DPAA2_SEC_PMD:
+ else if (driver_id == dpaa2_pmd)
target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC;
- break;
- default:
+ else
-		TEST_ASSERT(0, "Unrecognized cryptodev type");
+		TEST_ASSERT(0, "Unrecognized cryptodev driver");
- break;
- }
for (i = 0; i < n_test_cases; i++) {
const struct blockcipher_test_case *tc = &tcs[i];
@@ -678,7 +690,7 @@ test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
continue;
status = test_blockcipher_one_case(tc, mbuf_pool, op_mpool,
- dev_id, cryptodev_type, test_msg);
+ dev_id, driver_id, test_msg);
printf(" %u) TestCase %s %s\n", test_index ++,
tc->test_descr, test_msg);
diff --git a/test/test/test_cryptodev_blockcipher.h b/test/test/test_cryptodev_blockcipher.h
index 004122f..22fb420 100644
--- a/test/test/test_cryptodev_blockcipher.h
+++ b/test/test/test_cryptodev_blockcipher.h
@@ -126,7 +126,7 @@ int
test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
struct rte_mempool *op_mpool,
uint8_t dev_id,
- enum rte_cryptodev_type cryptodev_type,
+ int driver_id,
enum blockcipher_test_type test_type);
#endif /* TEST_CRYPTODEV_BLOCKCIPHER_H_ */
diff --git a/test/test/test_cryptodev_perf.c b/test/test/test_cryptodev_perf.c
index 7a90667..6553c94 100644
--- a/test/test/test_cryptodev_perf.c
+++ b/test/test/test_cryptodev_perf.c
@@ -195,23 +195,35 @@ static const char *chain_mode_name(enum chain_mode mode)
}
}
-static const char *pmd_name(enum rte_cryptodev_type pmd)
+static const char *pmd_name(uint8_t driver_id)
{
- switch (pmd) {
- case RTE_CRYPTODEV_NULL_PMD: return RTE_STR(CRYPTODEV_NAME_NULL_PMD); break;
- case RTE_CRYPTODEV_AESNI_GCM_PMD:
+ uint8_t null_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_NULL_PMD));
+ uint8_t dpaa2_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD));
+ uint8_t snow3g_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD));
+ uint8_t aesni_gcm_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD));
+ uint8_t aesni_mb_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD));
+ uint8_t qat_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
+
+ if (driver_id == null_pmd)
+ return RTE_STR(CRYPTODEV_NAME_NULL_PMD);
+ else if (driver_id == aesni_gcm_pmd)
return RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD);
- case RTE_CRYPTODEV_AESNI_MB_PMD:
+ else if (driver_id == aesni_mb_pmd)
return RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD);
- case RTE_CRYPTODEV_QAT_SYM_PMD:
+ else if (driver_id == qat_pmd)
return RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD);
- case RTE_CRYPTODEV_SNOW3G_PMD:
+ else if (driver_id == snow3g_pmd)
return RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD);
- case RTE_CRYPTODEV_DPAA2_SEC_PMD:
+ else if (driver_id == dpaa2_pmd)
return RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD);
- default:
+ else
return "";
- }
}
static struct rte_mbuf *
@@ -236,7 +248,7 @@ setup_test_string(struct rte_mempool *mpool,
static struct crypto_testsuite_params testsuite_params = { NULL };
static struct crypto_unittest_params unittest_params;
-static enum rte_cryptodev_type gbl_cryptodev_perftest_devtype;
+static int gbl_driver_id;
static int
testsuite_setup(void)
@@ -273,13 +285,11 @@ testsuite_setup(void)
}
/* Create an AESNI MB device if required */
- if (gbl_cryptodev_perftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD) {
-#ifndef RTE_LIBRTE_PMD_AESNI_MB
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_AESNI_MB must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_AESNI_MB_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD), NULL);
@@ -291,13 +301,11 @@ testsuite_setup(void)
}
/* Create an AESNI GCM device if required */
- if (gbl_cryptodev_perftest_devtype == RTE_CRYPTODEV_AESNI_GCM_PMD) {
-#ifndef RTE_LIBRTE_PMD_AESNI_GCM
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_AESNI_GCM must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_AESNI_GCM_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD), NULL);
@@ -309,13 +317,11 @@ testsuite_setup(void)
}
/* Create a SNOW3G device if required */
- if (gbl_cryptodev_perftest_devtype == RTE_CRYPTODEV_SNOW3G_PMD) {
-#ifndef RTE_LIBRTE_PMD_SNOW3G
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_SNOW3G must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_SNOW3G_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD), NULL);
@@ -327,14 +333,11 @@ testsuite_setup(void)
}
/* Create an OPENSSL device if required */
- if (gbl_cryptodev_perftest_devtype == RTE_CRYPTODEV_OPENSSL_PMD) {
-#ifndef RTE_LIBRTE_PMD_OPENSSL
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_OPENSSL must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_OPENSSL_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD),
@@ -347,14 +350,11 @@ testsuite_setup(void)
}
/* Create an ARMv8 device if required */
- if (gbl_cryptodev_perftest_devtype == RTE_CRYPTODEV_ARMV8_PMD) {
-#ifndef RTE_LIBRTE_PMD_ARMV8_CRYPTO
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_ARMV8_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_ARMV8_PMD),
@@ -366,14 +366,6 @@ testsuite_setup(void)
}
}
-#ifndef RTE_LIBRTE_PMD_QAT
- if (gbl_cryptodev_perftest_devtype == RTE_CRYPTODEV_QAT_SYM_PMD) {
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_QAT must be enabled "
- "in config file to run this testsuite.\n");
- return TEST_FAILED;
- }
-#endif
-
nb_devs = rte_cryptodev_count();
if (nb_devs < 1) {
RTE_LOG(ERR, USER1, "No crypto devices found?\n");
@@ -383,7 +375,7 @@ testsuite_setup(void)
/* Search for the first valid */
for (i = 0; i < nb_devs; i++) {
rte_cryptodev_info_get(i, &info);
- if (info.dev_type == gbl_cryptodev_perftest_devtype) {
+ if (info.driver_id == (uint8_t) gbl_driver_id) {
ts_params->dev_id = i;
valid_dev_id = 1;
break;
@@ -2042,8 +2034,9 @@ test_perf_crypto_qp_vary_burst_size(uint16_t dev_num)
}
while (num_received != num_to_submit) {
- if (gbl_cryptodev_perftest_devtype ==
- RTE_CRYPTODEV_AESNI_MB_PMD)
+ if (gbl_driver_id ==
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)))
rte_cryptodev_enqueue_burst(dev_num, 0,
NULL, 0);
@@ -2114,7 +2107,7 @@ test_perf_snow3G_optimise_cyclecount(struct perf_test_params *pparams)
printf("\nOn %s dev%u qp%u, %s, cipher algo:%s, auth_algo:%s, "
"Packet Size %u bytes",
- pmd_name(gbl_cryptodev_perftest_devtype),
+ pmd_name(gbl_driver_id),
ts_params->dev_id, 0,
chain_mode_name(pparams->chain),
rte_crypto_cipher_algorithm_strings[pparams->cipher_algo],
@@ -2158,8 +2151,9 @@ test_perf_snow3G_optimise_cyclecount(struct perf_test_params *pparams)
}
while (num_ops_received != num_to_submit) {
- if (gbl_cryptodev_perftest_devtype ==
- RTE_CRYPTODEV_AESNI_MB_PMD)
+ if (gbl_driver_id ==
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)))
rte_cryptodev_enqueue_burst(ts_params->dev_id, 0,
NULL, 0);
start_cycles = rte_rdtsc_precise();
@@ -2309,7 +2303,7 @@ test_perf_openssl_optimise_cyclecount(struct perf_test_params *pparams)
printf("\nOn %s dev%u qp%u, %s, cipher algo:%s, cipher key length:%u, "
"auth_algo:%s, Packet Size %u bytes",
- pmd_name(gbl_cryptodev_perftest_devtype),
+ pmd_name(gbl_driver_id),
ts_params->dev_id, 0,
chain_mode_name(pparams->chain),
rte_crypto_cipher_algorithm_strings[pparams->cipher_algo],
@@ -2444,7 +2438,7 @@ test_perf_armv8_optimise_cyclecount(struct perf_test_params *pparams)
printf("\nOn %s dev%u qp%u, %s, cipher algo:%s, cipher key length:%u, "
"auth_algo:%s, Packet Size %u bytes",
- pmd_name(gbl_cryptodev_perftest_devtype),
+ pmd_name(gbl_driver_id),
ts_params->dev_id, 0,
chain_mode_name(pparams->chain),
rte_crypto_cipher_algorithm_strings[pparams->cipher_algo],
@@ -3358,7 +3352,8 @@ test_perf_snow3g(uint8_t dev_id, uint16_t queue_id,
double cycles_B = cycles_buff / pparams->buf_size;
double throughput = (ops_s * pparams->buf_size * 8) / 1000000;
- if (gbl_cryptodev_perftest_devtype == RTE_CRYPTODEV_QAT_SYM_PMD) {
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD))) {
/* Cycle count misleading on HW devices for this test, so don't print */
printf("%4u\t%6.2f\t%10.2f\t n/a \t\t n/a "
"\t\t n/a \t\t%8"PRIu64"\t%8"PRIu64,
@@ -3797,7 +3792,7 @@ test_perf_snow3G_vary_pkt_size(void)
params_set[i].auth_algo;
printf("\nOn %s dev%u qp%u, %s, "
"cipher algo:%s, auth algo:%s, burst_size: %d ops",
- pmd_name(gbl_cryptodev_perftest_devtype),
+ pmd_name(gbl_driver_id),
testsuite_params.dev_id, 0,
chain_mode_name(params_set[i].chain),
rte_crypto_cipher_algorithm_strings[cipher_algo],
@@ -4678,7 +4673,15 @@ static struct unit_test_suite cryptodev_armv8_testsuite = {
static int
perftest_aesni_gcm_cryptodev(void)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_AESNI_GCM_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "AESNI GCM PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_AESNI_GCM is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_gcm_testsuite);
}
@@ -4686,7 +4689,15 @@ perftest_aesni_gcm_cryptodev(void)
static int
perftest_aesni_mb_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_AESNI_MB_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "AESNI MB PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_AESNI_MB is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_aes_testsuite);
}
@@ -4694,7 +4705,15 @@ perftest_aesni_mb_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
static int
perftest_qat_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_QAT_SYM_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "QAT PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_QAT is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_testsuite);
}
@@ -4702,7 +4721,15 @@ perftest_qat_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
static int
perftest_sw_snow3g_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_SNOW3G_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "SNOW3G PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_SNOW3G is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_snow3g_testsuite);
}
@@ -4710,7 +4737,15 @@ perftest_sw_snow3g_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
static int
perftest_qat_snow3g_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_QAT_SYM_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "QAT PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_QAT is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_snow3g_testsuite);
}
@@ -4718,7 +4753,15 @@ perftest_qat_snow3g_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
static int
perftest_openssl_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_OPENSSL_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "OpenSSL PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_OPENSSL is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_openssl_testsuite);
}
@@ -4726,7 +4769,15 @@ perftest_openssl_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
static int
perftest_qat_continual_cryptodev(void)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_QAT_SYM_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "QAT PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_QAT is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_qat_continual_testsuite);
}
@@ -4734,7 +4785,15 @@ perftest_qat_continual_cryptodev(void)
static int
perftest_sw_armv8_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_ARMV8_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "ARMV8 PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_ARMV8 is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_armv8_testsuite);
}
@@ -4742,7 +4801,15 @@ perftest_sw_armv8_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
static int
perftest_dpaa2_sec_cryptodev(void)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "DPAA2 SEC PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_dpaa2_sec_testsuite);
}
--
2.9.4
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v3] cryptodev: remove crypto device type enumeration
2017-05-22 11:10 3% ` [dpdk-dev] [PATCH v2] " Slawomir Mrozowicz
@ 2017-06-30 14:10 2% ` Pablo de Lara
2017-06-30 14:34 2% ` [dpdk-dev] [PATCH v4] " Pablo de Lara
0 siblings, 1 reply; 200+ results
From: Pablo de Lara @ 2017-06-30 14:10 UTC (permalink / raw)
To: declan.doherty; +Cc: dev, Slawomir Mrozowicz
From: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
Change device type identification to be based on a unique
driver id, replacing the current device type enumeration, which
required library changes every time a new crypto driver was added.
The driver id is assigned dynamically during driver registration
using the new macro RTE_PMD_REGISTER_CRYPTO_DRIVER, which assigns a
unique uint8_t identifier to that driver. New APIs are also
introduced to allow retrieval of the driver id from the driver name.
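
With this change, the per-PMD fast-path check becomes an id comparison.
A condensed, self-contained sketch of the pattern applied throughout the
diff below; my_pmd_get_session_private is a placeholder name and
cryptodev_driver_id stands for the PMD's file-scope id variable:

#include <rte_branch_prediction.h>
#include <rte_crypto.h>
#include <rte_cryptodev.h>

/* Set at load time by RTE_PMD_REGISTER_CRYPTO_DRIVER. */
static uint8_t cryptodev_driver_id;

/* A session is only usable here if it was created on a device
 * bound to this driver; otherwise reject the operation. */
static void *
my_pmd_get_session_private(struct rte_crypto_op *op)
{
	if (op->sym->sess_type != RTE_CRYPTO_SYM_OP_WITH_SESSION)
		return NULL;	/* session-less ops are handled elsewhere */

	if (unlikely(op->sym->session->driver_id != cryptodev_driver_id))
		return NULL;

	return (void *)op->sym->session->_private;
}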
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
---
v3 changes:
- Replaced static array of crypto driver ids with
a dynamic queue.
- Renamed function rte_cryptodev_allocate_driver_id to
  rte_cryptodev_allocate_driver, removing the "_id" suffix
v2 changes:
- Added release notes information
- Reduced some call of rte_cryptodev_driver_id_get
- Removed clang compiler error
- Added internal mark for function rte_cryptodev_allocate_driver_id
doc/guides/prog_guide/cryptodev_lib.rst | 5 +-
doc/guides/rel_notes/release_17_08.rst | 23 ++
drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 9 +-
drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c | 2 +-
drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c | 9 +-
drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c | 2 +-
drivers/crypto/armv8/rte_armv8_pmd.c | 9 +-
drivers/crypto/armv8/rte_armv8_pmd_ops.c | 2 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 7 +-
drivers/crypto/kasumi/rte_kasumi_pmd.c | 9 +-
drivers/crypto/kasumi/rte_kasumi_pmd_ops.c | 2 +-
drivers/crypto/null/null_crypto_pmd.c | 9 +-
drivers/crypto/null/null_crypto_pmd_ops.c | 2 +-
drivers/crypto/openssl/rte_openssl_pmd.c | 9 +-
drivers/crypto/openssl/rte_openssl_pmd_ops.c | 2 +-
drivers/crypto/qat/qat_crypto.c | 5 +-
drivers/crypto/qat/qat_crypto.h | 2 +
drivers/crypto/qat/rte_qat_cryptodev.c | 6 +-
drivers/crypto/scheduler/rte_cryptodev_scheduler.c | 22 +-
drivers/crypto/scheduler/scheduler_pmd.c | 6 +-
drivers/crypto/scheduler/scheduler_pmd_ops.c | 2 +-
drivers/crypto/scheduler/scheduler_pmd_private.h | 4 +-
drivers/crypto/snow3g/rte_snow3g_pmd.c | 9 +-
drivers/crypto/snow3g/rte_snow3g_pmd_ops.c | 2 +-
drivers/crypto/zuc/rte_zuc_pmd.c | 9 +-
drivers/crypto/zuc/rte_zuc_pmd_ops.c | 2 +-
lib/librte_cryptodev/rte_cryptodev.c | 65 +++-
lib/librte_cryptodev/rte_cryptodev.h | 68 +++--
lib/librte_cryptodev/rte_cryptodev_pmd.h | 2 +-
lib/librte_cryptodev/rte_cryptodev_version.map | 5 +-
test/test/test_cryptodev.c | 331 +++++++++++++--------
test/test/test_cryptodev_blockcipher.c | 68 +++--
test/test/test_cryptodev_blockcipher.h | 2 +-
test/test/test_cryptodev_perf.c | 217 +++++++++-----
34 files changed, 623 insertions(+), 305 deletions(-)
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index 4f98f28..4644802 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -291,7 +291,7 @@ relevant information for the device.
struct rte_cryptodev_info {
const char *driver_name;
- enum rte_cryptodev_type dev_type;
+ uint8_t driver_id;
struct rte_pci_device *pci_dev;
uint64_t feature_flags;
@@ -451,7 +451,8 @@ functions for the configuration of the session parameters and freeing function
so the PMD can manage the memory on destruction of a session.
**Note**: Sessions created on a particular device can only be used on Crypto
-devices of the same type, and if you try to use a session on a device different
+devices of the same type - that is, devices sharing the same driver id -
+and if you try to use a session on a device different
to that on which it was created then the Crypto operation will fail.
``rte_cryptodev_sym_session_create()`` is used to create a symmetric session on
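
As an application-side illustration of that rule, a small sketch that
verifies a device is served by the driver a session was created for
before submitting operations to it; the helper name and the driver name
argument are only examples:

#include <rte_cryptodev.h>

/* Return non-zero if dev_id is bound to the named driver, so a
 * session created for that driver can safely be used on it. */
static int
session_usable_on_dev(uint8_t dev_id, const char *drv_name)
{
	struct rte_cryptodev_info info;
	int driver_id = rte_cryptodev_driver_id_get(drv_name);

	rte_cryptodev_info_get(dev_id, &info);

	return driver_id >= 0 && info.driver_id == (uint8_t)driver_id;
}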
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index 842f46f..9308769 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -75,6 +75,10 @@ New Features
Added support for firmwares with multiple Ethernet ports per physical port.
+* **Updated cryptodev library.**
+
+ Added helper functions for crypto device driver identification.
+
Resolved Issues
---------------
@@ -145,6 +149,10 @@ API Changes
=========================================================
+* Changed device type identification to be based on a unique
+  driver id of type ``uint8_t``, replacing the previous device type enumeration.
+
+
ABI Changes
-----------
@@ -159,6 +167,21 @@ ABI Changes
=========================================================
+Removed Items
+-------------
+
+.. This section should contain removed items in this release. Sample format:
+
+ * Add a short 1-2 sentence description of the removed item in the past
+ tense.
+
+ This section is a comment. do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =========================================================
+
+
+* The crypto device type enumeration has been removed from librte_cryptodev.
+
Shared Library Versions
-----------------------
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 1b95c23..ef290a3 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -43,6 +43,8 @@
#include "aesni_gcm_pmd_private.h"
+static uint8_t cryptodev_driver_id;
+
/** GCM encode functions pointer table */
static const struct aesni_gcm_ops aesni_gcm_enc[] = {
[AESNI_GCM_KEY_128] = {
@@ -144,8 +146,8 @@ aesni_gcm_get_session(struct aesni_gcm_qp *qp, struct rte_crypto_sym_op *op)
struct aesni_gcm_session *sess = NULL;
if (op->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->session->dev_type
- != RTE_CRYPTODEV_AESNI_GCM_PMD))
+ if (unlikely(op->session->driver_id !=
+ cryptodev_driver_id))
return sess;
sess = (struct aesni_gcm_session *)op->session->_private;
@@ -458,7 +460,7 @@ aesni_gcm_create(const char *name,
goto init_error;
}
- dev->dev_type = RTE_CRYPTODEV_AESNI_GCM_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_aesni_gcm_pmd_ops;
/* register rx/tx burst functions for data path */
@@ -541,3 +543,4 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_AESNI_GCM_PMD,
"max_nb_queue_pairs=<int> "
"max_nb_sessions=<int> "
"socket_id=<int>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(aesni_gcm_pmd_drv, cryptodev_driver_id);
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
index 7b68a20..721dbda 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
@@ -181,7 +181,7 @@ aesni_gcm_pmd_info_get(struct rte_cryptodev *dev,
struct aesni_gcm_private *internals = dev->data->dev_private;
if (dev_info != NULL) {
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->feature_flags = dev->feature_flags;
dev_info->capabilities = aesni_gcm_pmd_capabilities;
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index f9a7d5b..4025978 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -41,6 +41,8 @@
#include "rte_aesni_mb_pmd_private.h"
+static uint8_t cryptodev_driver_id;
+
typedef void (*hash_one_block_t)(const void *data, void *digest);
typedef void (*aes_keyexp_t)(const void *key, void *enc_exp_keys, void *dec_exp_keys);
@@ -346,8 +348,8 @@ get_session(struct aesni_mb_qp *qp, struct rte_crypto_op *op)
struct aesni_mb_session *sess = NULL;
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->sym->session->dev_type !=
- RTE_CRYPTODEV_AESNI_MB_PMD)) {
+ if (unlikely(op->sym->session->driver_id !=
+ cryptodev_driver_id)) {
return NULL;
}
@@ -703,7 +705,7 @@ cryptodev_aesni_mb_create(const char *name,
goto init_error;
}
- dev->dev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_aesni_mb_pmd_ops;
/* register rx/tx burst functions for data path */
@@ -804,3 +806,4 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_AESNI_MB_PMD,
"max_nb_queue_pairs=<int> "
"max_nb_sessions=<int> "
"socket_id=<int>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(cryptodev_aesni_mb_pmd_drv, cryptodev_driver_id);
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
index d1bc28e..3a2683b 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -321,7 +321,7 @@ aesni_mb_pmd_info_get(struct rte_cryptodev *dev,
struct aesni_mb_private *internals = dev->data->dev_private;
if (dev_info != NULL) {
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->feature_flags = dev->feature_flags;
dev_info->capabilities = aesni_mb_pmd_capabilities;
dev_info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
diff --git a/drivers/crypto/armv8/rte_armv8_pmd.c b/drivers/crypto/armv8/rte_armv8_pmd.c
index 83dae87..9fe781b 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd.c
@@ -45,6 +45,8 @@
#include "rte_armv8_pmd_private.h"
+static uint8_t cryptodev_driver_id;
+
static int cryptodev_armv8_crypto_uninit(struct rte_vdev_device *vdev);
/**
@@ -548,8 +550,8 @@ get_session(struct armv8_crypto_qp *qp, struct rte_crypto_op *op)
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
/* get existing session */
if (likely(op->sym->session != NULL &&
- op->sym->session->dev_type ==
- RTE_CRYPTODEV_ARMV8_PMD)) {
+ op->sym->session->driver_id ==
+ cryptodev_driver_id)) {
sess = (struct armv8_crypto_session *)
op->sym->session->_private;
}
@@ -816,7 +818,7 @@ cryptodev_armv8_crypto_create(const char *name,
goto init_error;
}
- dev->dev_type = RTE_CRYPTODEV_ARMV8_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_armv8_crypto_pmd_ops;
/* register rx/tx burst functions for data path */
@@ -906,3 +908,4 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_ARMV8_PMD,
"max_nb_queue_pairs=<int> "
"max_nb_sessions=<int> "
"socket_id=<int>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(armv8_crypto_drv, cryptodev_driver_id);
diff --git a/drivers/crypto/armv8/rte_armv8_pmd_ops.c b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
index 4d9ccbf..2911417 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd_ops.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
@@ -178,7 +178,7 @@ armv8_crypto_pmd_info_get(struct rte_cryptodev *dev,
struct armv8_crypto_private *internals = dev->data->dev_private;
if (dev_info != NULL) {
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->feature_flags = dev->feature_flags;
dev_info->capabilities = armv8_crypto_pmd_capabilities;
dev_info->max_nb_queue_pairs = internals->max_nb_qpairs;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index e32b27e..70ad07a 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -73,6 +73,8 @@
#define AES_CBC_IV_LEN 16
enum rta_sec_era rta_sec_era = RTA_SEC_ERA_8;
+static uint8_t cryptodev_driver_id;
+
static inline int
build_authenc_fd(dpaa2_sec_session *sess,
struct rte_crypto_op *op,
@@ -1383,7 +1385,7 @@ dpaa2_sec_dev_infos_get(struct rte_cryptodev *dev,
info->feature_flags = dev->feature_flags;
info->capabilities = dpaa2_sec_capabilities;
info->sym.max_nb_sessions = internals->max_nb_sessions;
- info->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+ info->driver_id = cryptodev_driver_id;
}
}
@@ -1508,7 +1510,7 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
}
hw_id = dpaa2_dev->object_id;
- cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+ cryptodev->driver_id = cryptodev_driver_id;
cryptodev->dev_ops = &crypto_ops;
cryptodev->enqueue_burst = dpaa2_sec_enqueue_burst;
@@ -1651,3 +1653,4 @@ static struct rte_dpaa2_driver rte_dpaa2_sec_driver = {
};
RTE_PMD_REGISTER_DPAA2(dpaa2_sec_pmd, rte_dpaa2_sec_driver);
+RTE_PMD_REGISTER_CRYPTO_DRIVER(rte_dpaa2_sec_driver, cryptodev_driver_id);
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd.c b/drivers/crypto/kasumi/rte_kasumi_pmd.c
index 70bf228..648718c 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd.c
@@ -48,6 +48,8 @@
#define KASUMI_MAX_BURST 4
#define BYTE_LEN 8
+static uint8_t cryptodev_driver_id;
+
/** Get xform chain order. */
static enum kasumi_operation
kasumi_get_mode(const struct rte_crypto_sym_xform *xform)
@@ -144,8 +146,8 @@ kasumi_get_session(struct kasumi_qp *qp, struct rte_crypto_op *op)
struct kasumi_session *sess;
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->sym->session->dev_type !=
- RTE_CRYPTODEV_KASUMI_PMD))
+ if (unlikely(op->sym->session->driver_id !=
+ cryptodev_driver_id))
return NULL;
sess = (struct kasumi_session *)op->sym->session->_private;
@@ -582,7 +584,7 @@ cryptodev_kasumi_create(const char *name,
goto init_error;
}
- dev->dev_type = RTE_CRYPTODEV_KASUMI_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_kasumi_pmd_ops;
/* Register RX/TX burst functions for data path. */
@@ -666,3 +668,4 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_KASUMI_PMD,
"max_nb_queue_pairs=<int> "
"max_nb_sessions=<int> "
"socket_id=<int>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(cryptodev_kasumi_pmd_drv, cryptodev_driver_id);
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
index 62ebdbd..343c9b3 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
@@ -156,7 +156,7 @@ kasumi_pmd_info_get(struct rte_cryptodev *dev,
struct kasumi_private *internals = dev->data->dev_private;
if (dev_info != NULL) {
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
dev_info->sym.max_nb_sessions = internals->max_nb_sessions;
dev_info->feature_flags = dev->feature_flags;
diff --git a/drivers/crypto/null/null_crypto_pmd.c b/drivers/crypto/null/null_crypto_pmd.c
index 53bdc3e..7ab3570 100644
--- a/drivers/crypto/null/null_crypto_pmd.c
+++ b/drivers/crypto/null/null_crypto_pmd.c
@@ -39,6 +39,8 @@
#include "null_crypto_pmd_private.h"
+static uint8_t cryptodev_driver_id;
+
/** verify and set session parameters */
int
null_crypto_set_session_parameters(
@@ -95,8 +97,8 @@ get_session(struct null_crypto_qp *qp, struct rte_crypto_sym_op *op)
struct null_crypto_session *sess;
if (op->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->session == NULL ||
- op->session->dev_type != RTE_CRYPTODEV_NULL_PMD))
+ if (unlikely(op->session == NULL || op->session->driver_id !=
+ cryptodev_driver_id))
return NULL;
sess = (struct null_crypto_session *)op->session->_private;
@@ -186,7 +188,7 @@ cryptodev_null_create(const char *name,
goto init_error;
}
- dev->dev_type = RTE_CRYPTODEV_NULL_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = null_crypto_pmd_ops;
/* register rx/tx burst functions for data path */
@@ -271,3 +273,4 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_NULL_PMD,
"max_nb_queue_pairs=<int> "
"max_nb_sessions=<int> "
"socket_id=<int>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(cryptodev_null_pmd_drv, cryptodev_driver_id);
diff --git a/drivers/crypto/null/null_crypto_pmd_ops.c b/drivers/crypto/null/null_crypto_pmd_ops.c
index 12c946c..a7c891e 100644
--- a/drivers/crypto/null/null_crypto_pmd_ops.c
+++ b/drivers/crypto/null/null_crypto_pmd_ops.c
@@ -151,7 +151,7 @@ null_crypto_pmd_info_get(struct rte_cryptodev *dev,
struct null_crypto_private *internals = dev->data->dev_private;
if (dev_info != NULL) {
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->max_nb_queue_pairs = internals->max_nb_qpairs;
dev_info->sym.max_nb_sessions = internals->max_nb_sessions;
dev_info->feature_flags = dev->feature_flags;
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 5d29171..4e4394f 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -45,6 +45,8 @@
#define DES_BLOCK_SIZE 8
+static uint8_t cryptodev_driver_id;
+
static int cryptodev_openssl_remove(struct rte_vdev_device *vdev);
/*----------------------------------------------------------------------------*/
@@ -449,8 +451,8 @@ get_session(struct openssl_qp *qp, struct rte_crypto_op *op)
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
/* get existing session */
if (likely(op->sym->session != NULL &&
- op->sym->session->dev_type ==
- RTE_CRYPTODEV_OPENSSL_PMD))
+ op->sym->session->driver_id ==
+ cryptodev_driver_id))
sess = (struct openssl_session *)
op->sym->session->_private;
} else {
@@ -1285,7 +1287,7 @@ cryptodev_openssl_create(const char *name,
goto init_error;
}
- dev->dev_type = RTE_CRYPTODEV_OPENSSL_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_openssl_pmd_ops;
/* register rx/tx burst functions for data path */
@@ -1374,3 +1376,4 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_OPENSSL_PMD,
"max_nb_queue_pairs=<int> "
"max_nb_sessions=<int> "
"socket_id=<int>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(cryptodev_openssl_pmd_drv, cryptodev_driver_id);
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index 22a6873..f65de53 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -536,7 +536,7 @@ openssl_pmd_info_get(struct rte_cryptodev *dev,
struct openssl_private *internals = dev->data->dev_private;
if (dev_info != NULL) {
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->feature_flags = dev->feature_flags;
dev_info->capabilities = openssl_pmd_capabilities;
dev_info->max_nb_queue_pairs = internals->max_nb_qpairs;
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
index f8e1d01..7c5a9a8 100644
--- a/drivers/crypto/qat/qat_crypto.c
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -914,7 +914,8 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
return -EINVAL;
}
- if (unlikely(op->sym->session->dev_type != RTE_CRYPTODEV_QAT_SYM_PMD)) {
+ if (unlikely(op->sym->session->driver_id !=
+ cryptodev_qat_driver_id)) {
PMD_DRV_LOG(ERR, "Session was not created for this device");
return -EINVAL;
}
@@ -1254,7 +1255,7 @@ void qat_dev_info_get(struct rte_cryptodev *dev,
info->feature_flags = dev->feature_flags;
info->capabilities = internals->qat_dev_capabilities;
info->sym.max_nb_sessions = internals->max_nb_sessions;
- info->dev_type = RTE_CRYPTODEV_QAT_SYM_PMD;
+ info->driver_id = cryptodev_qat_driver_id;
info->pci_dev = RTE_DEV_TO_PCI(dev->device);
}
}
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
index b740d6b..efcf607 100644
--- a/drivers/crypto/qat/qat_crypto.h
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -85,6 +85,8 @@ struct qat_pmd_private {
const struct rte_cryptodev_capabilities *qat_dev_capabilities;
};
+extern uint8_t cryptodev_qat_driver_id;
+
int qat_dev_config(struct rte_cryptodev *dev,
struct rte_cryptodev_config *config);
int qat_dev_start(struct rte_cryptodev *dev);
diff --git a/drivers/crypto/qat/rte_qat_cryptodev.c b/drivers/crypto/qat/rte_qat_cryptodev.c
index 78d50fb..1c5ff77 100644
--- a/drivers/crypto/qat/rte_qat_cryptodev.c
+++ b/drivers/crypto/qat/rte_qat_cryptodev.c
@@ -40,6 +40,8 @@
#include "qat_crypto.h"
#include "qat_logs.h"
+uint8_t cryptodev_qat_driver_id;
+
static const struct rte_cryptodev_capabilities qat_cpm16_capabilities[] = {
QAT_BASE_CPM16_SYM_CAPABILITIES,
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
@@ -106,7 +108,7 @@ crypto_qat_dev_init(struct rte_cryptodev *cryptodev)
RTE_DEV_TO_PCI(cryptodev->device)->addr.devid,
RTE_DEV_TO_PCI(cryptodev->device)->addr.function);
- cryptodev->dev_type = RTE_CRYPTODEV_QAT_SYM_PMD;
+ cryptodev->driver_id = cryptodev_qat_driver_id;
cryptodev->dev_ops = &crypto_qat_ops;
cryptodev->enqueue_burst = qat_pmd_enqueue_op_burst;
@@ -168,4 +170,4 @@ static struct rte_pci_driver rte_qat_pmd = {
RTE_PMD_REGISTER_PCI(CRYPTODEV_NAME_QAT_SYM_PMD, rte_qat_pmd);
RTE_PMD_REGISTER_PCI_TABLE(CRYPTODEV_NAME_QAT_SYM_PMD, pci_id_qat_map);
-
+RTE_PMD_REGISTER_CRYPTO_DRIVER(rte_qat_pmd, cryptodev_qat_driver_id);
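[Editor's note] Worth noting while reading the QAT and scheduler conversions: their driver ids have external linkage rather than the static ids used by the single-file PMDs, because the session check (qat_crypto.c) and the registration (rte_qat_cryptodev.c) live in separate compilation units; the scheduler shares its id through scheduler_pmd_private.h the same way. In sketch form:

/* qat_crypto.h -- declaration visible to every QAT source file */
extern uint8_t cryptodev_qat_driver_id;

/* rte_qat_cryptodev.c -- single definition, assigned at constructor time */
uint8_t cryptodev_qat_driver_id;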
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
index 95566d5..9c364c2 100644
--- a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
@@ -198,7 +198,7 @@ rte_cryptodev_scheduler_slave_attach(uint8_t scheduler_id, uint8_t slave_id)
return -ENOTSUP;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
@@ -226,12 +226,12 @@ rte_cryptodev_scheduler_slave_attach(uint8_t scheduler_id, uint8_t slave_id)
rte_cryptodev_info_get(slave_id, &dev_info);
slave->dev_id = slave_id;
- slave->dev_type = dev_info.dev_type;
+ slave->driver_id = dev_info.driver_id;
sched_ctx->nb_slaves++;
if (update_scheduler_capability(sched_ctx) < 0) {
slave->dev_id = 0;
- slave->dev_type = 0;
+ slave->driver_id = 0;
sched_ctx->nb_slaves--;
CS_LOG_ERR("capabilities update failed");
@@ -257,7 +257,7 @@ rte_cryptodev_scheduler_slave_detach(uint8_t scheduler_id, uint8_t slave_id)
return -ENOTSUP;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
@@ -314,7 +314,7 @@ rte_cryptodev_scheduler_mode_set(uint8_t scheduler_id,
return -ENOTSUP;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
@@ -370,7 +370,7 @@ rte_cryptodev_scheduler_mode_get(uint8_t scheduler_id)
return -ENOTSUP;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
@@ -392,7 +392,7 @@ rte_cryptodev_scheduler_ordering_set(uint8_t scheduler_id,
return -ENOTSUP;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
@@ -420,7 +420,7 @@ rte_cryptodev_scheduler_ordering_get(uint8_t scheduler_id)
return -ENOTSUP;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
@@ -442,7 +442,7 @@ rte_cryptodev_scheduler_load_user_scheduler(uint8_t scheduler_id,
return -ENOTSUP;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
@@ -499,7 +499,7 @@ rte_cryptodev_scheduler_slaves_get(uint8_t scheduler_id, uint8_t *slaves)
return -ENOTSUP;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
@@ -567,7 +567,7 @@ rte_cryptodev_scheduler_option_get(uint8_t scheduler_id,
return -EINVAL;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
diff --git a/drivers/crypto/scheduler/scheduler_pmd.c b/drivers/crypto/scheduler/scheduler_pmd.c
index fefd6cc..b385851 100644
--- a/drivers/crypto/scheduler/scheduler_pmd.c
+++ b/drivers/crypto/scheduler/scheduler_pmd.c
@@ -42,6 +42,8 @@
#include "rte_cryptodev_scheduler.h"
#include "scheduler_pmd_private.h"
+uint8_t cryptodev_driver_id;
+
struct scheduler_init_params {
struct rte_crypto_vdev_init_params def_p;
uint32_t nb_slaves;
@@ -113,7 +115,7 @@ cryptodev_scheduler_create(const char *name,
return -EFAULT;
}
- dev->dev_type = RTE_CRYPTODEV_SCHEDULER_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_crypto_scheduler_pmd_ops;
sched_ctx = dev->data->dev_private;
@@ -436,3 +438,5 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_SCHEDULER_PMD,
"max_nb_sessions=<int> "
"socket_id=<int> "
"slave=<name>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(cryptodev_scheduler_pmd_drv,
+ cryptodev_driver_id);
diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
index 4fc8b91..90e3734 100644
--- a/drivers/crypto/scheduler/scheduler_pmd_ops.c
+++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
@@ -369,7 +369,7 @@ scheduler_pmd_info_get(struct rte_cryptodev *dev,
max_nb_sessions;
}
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->feature_flags = dev->feature_flags;
dev_info->capabilities = sched_ctx->capabilities;
dev_info->max_nb_queue_pairs = sched_ctx->max_nb_queue_pairs;
diff --git a/drivers/crypto/scheduler/scheduler_pmd_private.h b/drivers/crypto/scheduler/scheduler_pmd_private.h
index 05a5916..efb2bbc 100644
--- a/drivers/crypto/scheduler/scheduler_pmd_private.h
+++ b/drivers/crypto/scheduler/scheduler_pmd_private.h
@@ -63,7 +63,7 @@ struct scheduler_slave {
uint16_t qp_id;
uint32_t nb_inflight_cops;
- enum rte_cryptodev_type dev_type;
+ uint8_t driver_id;
};
struct scheduler_ctx {
@@ -105,6 +105,8 @@ struct scheduler_session {
RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES];
};
+extern uint8_t cryptodev_driver_id;
+
static __rte_always_inline uint16_t
get_max_enqueue_order_count(struct rte_ring *order_ring, uint16_t nb_ops)
{
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd.c b/drivers/crypto/snow3g/rte_snow3g_pmd.c
index 8945f19..fe074c1 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd.c
@@ -47,6 +47,8 @@
#define SNOW3G_MAX_BURST 8
#define BYTE_LEN 8
+static uint8_t cryptodev_driver_id;
+
/** Get xform chain order. */
static enum snow3g_operation
snow3g_get_mode(const struct rte_crypto_sym_xform *xform)
@@ -144,8 +146,8 @@ snow3g_get_session(struct snow3g_qp *qp, struct rte_crypto_op *op)
struct snow3g_session *sess;
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->sym->session->dev_type !=
- RTE_CRYPTODEV_SNOW3G_PMD))
+ if (unlikely(op->sym->session->driver_id !=
+ cryptodev_driver_id))
return NULL;
sess = (struct snow3g_session *)op->sym->session->_private;
@@ -571,7 +573,7 @@ cryptodev_snow3g_create(const char *name,
goto init_error;
}
- dev->dev_type = RTE_CRYPTODEV_SNOW3G_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_snow3g_pmd_ops;
/* Register RX/TX burst functions for data path. */
@@ -655,3 +657,4 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_SNOW3G_PMD,
"max_nb_queue_pairs=<int> "
"max_nb_sessions=<int> "
"socket_id=<int>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(cryptodev_snow3g_pmd_drv, cryptodev_driver_id);
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
index 7ce96be..26cc3e9 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
@@ -156,7 +156,7 @@ snow3g_pmd_info_get(struct rte_cryptodev *dev,
struct snow3g_private *internals = dev->data->dev_private;
if (dev_info != NULL) {
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
dev_info->sym.max_nb_sessions = internals->max_nb_sessions;
dev_info->feature_flags = dev->feature_flags;
diff --git a/drivers/crypto/zuc/rte_zuc_pmd.c b/drivers/crypto/zuc/rte_zuc_pmd.c
index ec6d54f..b7b8dfc 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd.c
@@ -46,6 +46,8 @@
#define ZUC_MAX_BURST 8
#define BYTE_LEN 8
+static uint8_t cryptodev_driver_id;
+
/** Get xform chain order. */
static enum zuc_operation
zuc_get_mode(const struct rte_crypto_sym_xform *xform)
@@ -143,8 +145,8 @@ zuc_get_session(struct zuc_qp *qp, struct rte_crypto_op *op)
struct zuc_session *sess;
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->sym->session->dev_type !=
- RTE_CRYPTODEV_ZUC_PMD))
+ if (unlikely(op->sym->session->driver_id !=
+ cryptodev_driver_id))
return NULL;
sess = (struct zuc_session *)op->sym->session->_private;
@@ -471,7 +473,7 @@ cryptodev_zuc_create(const char *name,
goto init_error;
}
- dev->dev_type = RTE_CRYPTODEV_ZUC_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_zuc_pmd_ops;
/* Register RX/TX burst functions for data path. */
@@ -554,3 +556,4 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_ZUC_PMD,
"max_nb_queue_pairs=<int> "
"max_nb_sessions=<int> "
"socket_id=<int>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(cryptodev_zuc_pmd_drv, cryptodev_driver_id);
diff --git a/drivers/crypto/zuc/rte_zuc_pmd_ops.c b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
index e793459..645b80c 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd_ops.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
@@ -156,7 +156,7 @@ zuc_pmd_info_get(struct rte_cryptodev *dev,
struct zuc_private *internals = dev->data->dev_private;
if (dev_info != NULL) {
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
dev_info->sym.max_nb_sessions = internals->max_nb_sessions;
dev_info->feature_flags = dev->feature_flags;
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index a466ed7..17f7c63 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -391,12 +391,12 @@ rte_cryptodev_count(void)
}
uint8_t
-rte_cryptodev_count_devtype(enum rte_cryptodev_type type)
+rte_cryptodev_device_count_by_driver(uint8_t driver_id)
{
uint8_t i, dev_count = 0;
for (i = 0; i < rte_cryptodev_globals->max_devs; i++)
- if (rte_cryptodev_globals->devs[i].dev_type == type &&
+ if (rte_cryptodev_globals->devs[i].driver_id == driver_id &&
rte_cryptodev_globals->devs[i].attached ==
RTE_CRYPTODEV_ATTACHED)
dev_count++;
@@ -1040,7 +1040,7 @@ rte_cryptodev_sym_session_init(struct rte_mempool *mp,
memset(sess, 0, mp->elt_size);
sess->dev_id = dev->data->dev_id;
- sess->dev_type = dev->dev_type;
+ sess->driver_id = dev->driver_id;
sess->mp = mp;
if (dev->dev_ops->session_initialize)
@@ -1207,7 +1207,7 @@ rte_cryptodev_sym_session_free(uint8_t dev_id,
dev = &rte_crypto_devices[dev_id];
/* Check the session belongs to this device type */
- if (sess->dev_type != dev->dev_type)
+ if (sess->driver_id != dev->driver_id)
return sess;
/* Let device implementation clear session material */
@@ -1319,3 +1319,60 @@ rte_cryptodev_pmd_create_dev_name(char *name, const char *dev_name_prefix)
return -1;
}
+
+TAILQ_HEAD(cryptodev_driver_list, cryptodev_driver);
+
+static struct cryptodev_driver_list cryptodev_driver_list =
+ TAILQ_HEAD_INITIALIZER(cryptodev_driver_list);
+
+struct cryptodev_driver {
+ TAILQ_ENTRY(cryptodev_driver) next; /**< Next in list. */
+ const struct rte_driver *driver;
+ uint8_t id;
+};
+
+static uint8_t nb_drivers;
+
+int
+rte_cryptodev_driver_id_get(const char *name)
+{
+ struct cryptodev_driver *driver;
+ const char *driver_name;
+
+ if (name == NULL) {
+ RTE_LOG(DEBUG, CRYPTODEV, "name pointer NULL\n");
+ return -1;
+ }
+
+ TAILQ_FOREACH(driver, &cryptodev_driver_list, next) {
+ driver_name = driver->driver->name;
+ if (strncmp(driver_name, name, strlen(driver_name)) == 0)
+ return driver->id;
+ }
+ return -1;
+}
+
+const char *
+rte_cryptodev_driver_name_get(uint8_t driver_id)
+{
+ struct cryptodev_driver *driver;
+
+ TAILQ_FOREACH(driver, &cryptodev_driver_list, next)
+ if (driver->id == driver_id)
+ return driver->driver->name;
+ return NULL;
+}
+
+uint8_t
+rte_cryptodev_allocate_driver(const struct rte_driver *drv)
+{
+ struct cryptodev_driver *driver;
+
+ driver = malloc(sizeof(*driver));
+ driver->driver = drv;
+ driver->id = nb_drivers;
+
+ TAILQ_INSERT_TAIL(&cryptodev_driver_list, driver, next);
+
+ return nb_drivers++;
+}
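[Editor's note] Two properties of the lookup code above are worth keeping in mind: ids are assigned in registration order (stable within a run, not across builds), and rte_cryptodev_driver_id_get() matches the registered name as a prefix (strncmp over strlen(driver_name)), so callers should pass the exact registered name. A short usage sketch of the three new calls (show_driver is a hypothetical helper):

#include <stdio.h>
#include <rte_cryptodev.h>

static void
show_driver(const char *name)
{
	int id = rte_cryptodev_driver_id_get(name);

	if (id < 0) {
		printf("driver %s not registered\n", name);
		return;
	}
	printf("driver %s: id %d, %u device(s) attached\n",
		rte_cryptodev_driver_name_get((uint8_t)id), id,
		(unsigned int)rte_cryptodev_device_count_by_driver(
			(uint8_t)id));
}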
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 4e318f0..9d541a8 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -74,21 +74,6 @@ extern "C" {
#define CRYPTODEV_NAME_DPAA2_SEC_PMD cryptodev_dpaa2_sec_pmd
/**< NXP DPAA2 - SEC PMD device name */
-/** Crypto device type */
-enum rte_cryptodev_type {
- RTE_CRYPTODEV_NULL_PMD = 1, /**< Null crypto PMD */
- RTE_CRYPTODEV_AESNI_GCM_PMD, /**< AES-NI GCM PMD */
- RTE_CRYPTODEV_AESNI_MB_PMD, /**< AES-NI multi buffer PMD */
- RTE_CRYPTODEV_QAT_SYM_PMD, /**< QAT PMD Symmetric Crypto */
- RTE_CRYPTODEV_SNOW3G_PMD, /**< SNOW 3G PMD */
- RTE_CRYPTODEV_KASUMI_PMD, /**< KASUMI PMD */
- RTE_CRYPTODEV_ZUC_PMD, /**< ZUC PMD */
- RTE_CRYPTODEV_OPENSSL_PMD, /**< OpenSSL PMD */
- RTE_CRYPTODEV_ARMV8_PMD, /**< ARMv8 crypto PMD */
- RTE_CRYPTODEV_SCHEDULER_PMD, /**< Crypto Scheduler PMD */
- RTE_CRYPTODEV_DPAA2_SEC_PMD, /**< NXP DPAA2 - SEC PMD */
-};
-
extern const char **rte_cyptodev_names;
/* Logging Macros */
@@ -322,7 +307,7 @@ rte_cryptodev_get_feature_name(uint64_t flag);
/** Crypto device information */
struct rte_cryptodev_info {
const char *driver_name; /**< Driver name. */
- enum rte_cryptodev_type dev_type; /**< Device type */
+ uint8_t driver_id; /**< Driver identifier */
struct rte_pci_device *pci_dev; /**< PCI information. */
uint64_t feature_flags; /**< Feature flags */
@@ -426,13 +411,13 @@ rte_cryptodev_count(void);
/**
- * Get number of crypto device defined type.
+ * Get the number of crypto devices bound to a given driver.
*
- * @param type type of device.
+ * @param driver_id driver identifier.
*
* @return
- * Returns number of crypto device.
+ * Returns the number of attached crypto devices for the driver.
*/
extern uint8_t
-rte_cryptodev_count_devtype(enum rte_cryptodev_type type);
+rte_cryptodev_device_count_by_driver(uint8_t driver_id);
/**
* Get number and identifiers of attached crypto devices that
@@ -703,8 +688,8 @@ struct rte_cryptodev {
struct rte_device *device;
/**< Backing device */
- enum rte_cryptodev_type dev_type;
- /**< Crypto device type */
+ uint8_t driver_id;
+ /**< Crypto driver identifier */
struct rte_cryptodev_cb_list link_intr_cbs;
/**< User application callback for interrupts if present */
@@ -841,8 +826,8 @@ struct rte_cryptodev_sym_session {
struct {
uint8_t dev_id;
/**< Device Id */
- enum rte_cryptodev_type dev_type;
- /** Crypto Device type session created on */
+ uint8_t driver_id;
+ /**< Crypto driver identifier the session was created on */
struct rte_mempool *mp;
/**< Mempool session allocated from */
} __rte_aligned(8);
@@ -923,6 +908,45 @@ int
rte_cryptodev_queue_pair_detach_sym_session(uint16_t qp_id,
struct rte_cryptodev_sym_session *session);
+/**
+ * Get the driver identifier for a given driver name.
+ *
+ * @param name
+ * The pointer to a driver name.
+ * @return
+ * The driver identifier, or -1 if no driver is found.
+ */
+int rte_cryptodev_driver_id_get(const char *name);
+
+/**
+ * Get the driver name for a given driver identifier.
+ *
+ * @param driver_id
+ * The driver identifier.
+ * @return
+ * The driver name, or NULL if no driver is found.
+ */
+const char *rte_cryptodev_driver_name_get(uint8_t driver_id);
+
+/**
+ * @internal
+ * Register a cryptodev driver and allocate its driver identifier.
+ *
+ * @param driver
+ * Pointer to rte_driver.
+ * @return
+ * The newly allocated driver identifier.
+ */
+uint8_t rte_cryptodev_allocate_driver(const struct rte_driver *driver);
+
+#define RTE_PMD_REGISTER_CRYPTO_DRIVER(drv, driver_id)\
+RTE_INIT(init_ ##driver_id);\
+static void init_ ##driver_id(void)\
+{\
+ driver_id = rte_cryptodev_allocate_driver(&(drv).driver);\
+}
+
#ifdef __cplusplus
}
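[Editor's note] For readers skimming the header changes: RTE_PMD_REGISTER_CRYPTO_DRIVER(foo_drv, foo_id) expands to roughly the following, where RTE_INIT marks the function as a constructor, so every id is assigned before main() and before any device probe:

/* Approximate expansion; exact attributes come from RTE_INIT. */
static void __attribute__((constructor, used))
init_foo_id(void)
{
	foo_id = rte_cryptodev_allocate_driver(&foo_drv.driver);
}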
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
index 8e8b2ad..f6aa84d 100644
--- a/lib/librte_cryptodev/rte_cryptodev_pmd.h
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -60,7 +60,7 @@ struct rte_cryptodev_session {
RTE_STD_C11
struct {
uint8_t dev_id;
- enum rte_cryptodev_type type;
+ uint8_t driver_id;
struct rte_mempool *mp;
} __rte_aligned(8);
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 7191607..afe148a 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -6,7 +6,6 @@ DPDK_16.04 {
rte_cryptodev_callback_unregister;
rte_cryptodev_close;
rte_cryptodev_count;
- rte_cryptodev_count_devtype;
rte_cryptodev_configure;
rte_cryptodev_create_vdev;
rte_cryptodev_get_dev_id;
@@ -62,6 +61,10 @@ DPDK_17.05 {
DPDK_17.08 {
global:
+ rte_cryptodev_allocate_driver;
+ rte_cryptodev_device_count_by_driver;
+ rte_cryptodev_driver_id_get;
+ rte_cryptodev_driver_name_get;
rte_cryptodev_pci_generic_probe;
rte_cryptodev_pci_generic_remove;
rte_cryptodev_vdev_parse_init_params;
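[Editor's note] The testsuite changes below repeat one shape per PMD: the compile-time #ifndef guard on the PMD's config option becomes a runtime driver-id lookup, and a vdev is created only when no device of that driver was probed. A condensed sketch with a hypothetical helper (ensure_one_dev is not in the patch):

#include <rte_cryptodev.h>
#include <rte_vdev.h>

static int
ensure_one_dev(int gbl_driver_id, const char *pmd)
{
	int driver_id = rte_cryptodev_driver_id_get(pmd);

	if (driver_id < 0 || gbl_driver_id != driver_id)
		return 0;	/* PMD absent, or testsuite targets another one */
	if (rte_cryptodev_device_count_by_driver((uint8_t)driver_id) < 1)
		return rte_vdev_init(pmd, NULL);
	return 0;
}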
diff --git a/test/test/test_cryptodev.c b/test/test/test_cryptodev.c
index f8f15c0..afa895e 100644
--- a/test/test/test_cryptodev.c
+++ b/test/test/test_cryptodev.c
@@ -60,7 +60,7 @@
#include "test_cryptodev_gcm_test_vectors.h"
#include "test_cryptodev_hmac_test_vectors.h"
-static enum rte_cryptodev_type gbl_cryptodev_type;
+static int gbl_driver_id;
struct crypto_testsuite_params {
struct rte_mempool *mbuf_pool;
@@ -210,14 +210,11 @@ testsuite_setup(void)
}
/* Create an AESNI MB device if required */
- if (gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD) {
-#ifndef RTE_LIBRTE_PMD_AESNI_MB
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_AESNI_MB must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_AESNI_MB_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD), NULL);
@@ -230,14 +227,11 @@ testsuite_setup(void)
}
/* Create an AESNI GCM device if required */
- if (gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_GCM_PMD) {
-#ifndef RTE_LIBRTE_PMD_AESNI_GCM
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_AESNI_GCM must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_AESNI_GCM_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD)));
if (nb_devs < 1) {
TEST_ASSERT_SUCCESS(rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD), NULL),
@@ -248,13 +242,11 @@ testsuite_setup(void)
}
/* Create a SNOW 3G device if required */
- if (gbl_cryptodev_type == RTE_CRYPTODEV_SNOW3G_PMD) {
-#ifndef RTE_LIBRTE_PMD_SNOW3G
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_SNOW3G must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_SNOW3G_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD)));
if (nb_devs < 1) {
TEST_ASSERT_SUCCESS(rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD), NULL),
@@ -265,13 +257,11 @@ testsuite_setup(void)
}
/* Create a KASUMI device if required */
- if (gbl_cryptodev_type == RTE_CRYPTODEV_KASUMI_PMD) {
-#ifndef RTE_LIBRTE_PMD_KASUMI
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_KASUMI must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_KASUMI_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_KASUMI_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_KASUMI_PMD)));
if (nb_devs < 1) {
TEST_ASSERT_SUCCESS(rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_KASUMI_PMD), NULL),
@@ -282,13 +272,11 @@ testsuite_setup(void)
}
/* Create a ZUC device if required */
- if (gbl_cryptodev_type == RTE_CRYPTODEV_ZUC_PMD) {
-#ifndef RTE_LIBRTE_PMD_ZUC
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_ZUC must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_ZUC_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ZUC_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ZUC_PMD)));
if (nb_devs < 1) {
TEST_ASSERT_SUCCESS(rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_ZUC_PMD), NULL),
@@ -299,14 +287,11 @@ testsuite_setup(void)
}
/* Create a NULL device if required */
- if (gbl_cryptodev_type == RTE_CRYPTODEV_NULL_PMD) {
-#ifndef RTE_LIBRTE_PMD_NULL_CRYPTO
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_NULL_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_NULL_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_NULL_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_NULL_PMD), NULL);
@@ -319,14 +304,11 @@ testsuite_setup(void)
}
/* Create an OPENSSL device if required */
- if (gbl_cryptodev_type == RTE_CRYPTODEV_OPENSSL_PMD) {
-#ifndef RTE_LIBRTE_PMD_OPENSSL
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_OPENSSL must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_OPENSSL_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD),
@@ -339,14 +321,11 @@ testsuite_setup(void)
}
/* Create a ARMv8 device if required */
- if (gbl_cryptodev_type == RTE_CRYPTODEV_ARMV8_PMD) {
-#ifndef RTE_LIBRTE_PMD_ARMV8_CRYPTO
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_ARMV8_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_ARMV8_PMD),
@@ -359,15 +338,12 @@ testsuite_setup(void)
}
#ifdef RTE_LIBRTE_PMD_CRYPTO_SCHEDULER
- if (gbl_cryptodev_type == RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD))) {
-#ifndef RTE_LIBRTE_PMD_AESNI_MB
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_AESNI_MB must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_SCHEDULER_PMD);
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),
@@ -381,14 +357,6 @@ testsuite_setup(void)
}
#endif /* RTE_LIBRTE_PMD_CRYPTO_SCHEDULER */
-#ifndef RTE_LIBRTE_PMD_QAT
- if (gbl_cryptodev_type == RTE_CRYPTODEV_QAT_SYM_PMD) {
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_QAT must be enabled "
- "in config file to run this testsuite.\n");
- return TEST_FAILED;
- }
-#endif
-
nb_devs = rte_cryptodev_count();
if (nb_devs < 1) {
RTE_LOG(ERR, USER1, "No crypto devices found?\n");
@@ -398,7 +366,7 @@ testsuite_setup(void)
/* Create list of valid crypto devs */
for (i = 0; i < nb_devs; i++) {
rte_cryptodev_info_get(i, &info);
- if (info.dev_type == gbl_cryptodev_type)
+ if (info.driver_id == gbl_driver_id)
ts_params->valid_devs[ts_params->valid_dev_count++] = i;
}
@@ -1341,7 +1309,8 @@ test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
TEST_ASSERT_BUFFERS_ARE_EQUAL(digest,
catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
- gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+ gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)) ?
TRUNCATED_DIGEST_BYTE_LENGTH_SHA1 :
DIGEST_BYTE_LENGTH_SHA1,
"Generated digest data not as expected");
@@ -1506,7 +1475,8 @@ test_AES_cipheronly_mb_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_AESNI_MB_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)),
BLKCIPHER_AES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1522,7 +1492,8 @@ test_AES_docsis_mb_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_AESNI_MB_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)),
BLKCIPHER_AES_DOCSIS_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1538,7 +1509,8 @@ test_AES_docsis_qat_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_QAT_SYM_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_AES_DOCSIS_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1554,7 +1526,8 @@ test_DES_docsis_qat_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_QAT_SYM_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_DES_DOCSIS_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1570,7 +1543,8 @@ test_authonly_mb_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_AESNI_MB_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)),
BLKCIPHER_AUTHONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1586,7 +1560,8 @@ test_AES_chain_mb_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_AESNI_MB_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)),
BLKCIPHER_AES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1604,7 +1579,8 @@ test_AES_cipheronly_scheduler_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_SCHEDULER_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD)),
BLKCIPHER_AES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1620,7 +1596,8 @@ test_AES_chain_scheduler_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_SCHEDULER_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD)),
BLKCIPHER_AES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1636,7 +1613,8 @@ test_authonly_scheduler_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_SCHEDULER_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD)),
BLKCIPHER_AUTHONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1654,7 +1632,8 @@ test_AES_chain_openssl_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_OPENSSL_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)),
BLKCIPHER_AES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1670,7 +1649,8 @@ test_AES_cipheronly_openssl_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_OPENSSL_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)),
BLKCIPHER_AES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1686,7 +1666,8 @@ test_AES_chain_qat_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_QAT_SYM_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_AES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1702,7 +1683,8 @@ test_AES_cipheronly_qat_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_QAT_SYM_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_AES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1718,7 +1700,8 @@ test_AES_chain_dpaa2_sec_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_DPAA2_SEC_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD)),
BLKCIPHER_AES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1734,7 +1717,8 @@ test_AES_cipheronly_dpaa2_sec_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_DPAA2_SEC_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD)),
BLKCIPHER_AES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1750,7 +1734,8 @@ test_authonly_openssl_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_OPENSSL_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)),
BLKCIPHER_AUTHONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1766,7 +1751,8 @@ test_AES_chain_armv8_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_ARMV8_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD)),
BLKCIPHER_AES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -4689,7 +4675,8 @@ test_3DES_chain_qat_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_QAT_SYM_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_3DES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -4705,7 +4692,8 @@ test_DES_cipheronly_qat_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_QAT_SYM_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_DES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -4721,7 +4709,8 @@ test_DES_docsis_openssl_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_OPENSSL_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)),
BLKCIPHER_DES_DOCSIS_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -4737,7 +4726,8 @@ test_3DES_chain_dpaa2_sec_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_DPAA2_SEC_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD)),
BLKCIPHER_3DES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -4753,7 +4743,8 @@ test_3DES_cipheronly_dpaa2_sec_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_DPAA2_SEC_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD)),
BLKCIPHER_3DES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -4769,7 +4760,8 @@ test_3DES_cipheronly_qat_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_QAT_SYM_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_3DES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -4785,7 +4777,8 @@ test_3DES_chain_openssl_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_OPENSSL_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)),
BLKCIPHER_3DES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -4801,7 +4794,8 @@ test_3DES_cipheronly_openssl_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_OPENSSL_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)),
BLKCIPHER_3DES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -7816,8 +7810,9 @@ test_scheduler_attach_slave_op(void)
char vdev_name[32];
/* create 2 AESNI_MB if necessary */
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_AESNI_MB_PMD);
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)));
if (nb_devs < 2) {
for (i = nb_devs; i < 2; i++) {
snprintf(vdev_name, sizeof(vdev_name), "%s_%u",
@@ -7838,7 +7833,8 @@ test_scheduler_attach_slave_op(void)
struct rte_cryptodev_info info;
rte_cryptodev_info_get(i, &info);
- if (info.dev_type != RTE_CRYPTODEV_AESNI_MB_PMD)
+ if (info.driver_id != rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)))
continue;
ret = rte_cryptodev_scheduler_slave_attach(sched_id,
@@ -8605,14 +8601,31 @@ static struct unit_test_suite cryptodev_armv8_testsuite = {
static int
test_cryptodev_qat(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_QAT_SYM_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "QAT PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_QAT is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
+
return unit_test_suite_runner(&cryptodev_qat_testsuite);
}
static int
test_cryptodev_aesni_mb(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "AESNI MB PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_AESNI_MB is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_aesni_mb_testsuite);
}
@@ -8620,7 +8633,15 @@ test_cryptodev_aesni_mb(void /*argv __rte_unused, int argc __rte_unused*/)
static int
test_cryptodev_openssl(void)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_OPENSSL_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "AESNI MB PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_AESNI_MB is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_openssl_testsuite);
}
@@ -8628,7 +8649,15 @@ test_cryptodev_openssl(void)
static int
test_cryptodev_aesni_gcm(void)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_AESNI_GCM_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "AESNI GCM PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_AESNI_GCM is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_aesni_gcm_testsuite);
}
@@ -8636,7 +8665,15 @@ test_cryptodev_aesni_gcm(void)
static int
test_cryptodev_null(void)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_NULL_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_NULL_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "NULL PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_NULL is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_null_testsuite);
}
@@ -8644,7 +8681,15 @@ test_cryptodev_null(void)
static int
test_cryptodev_sw_snow3g(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_SNOW3G_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "SNOW3G PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_SNOW3G is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_sw_snow3g_testsuite);
}
@@ -8652,7 +8697,15 @@ test_cryptodev_sw_snow3g(void /*argv __rte_unused, int argc __rte_unused*/)
static int
test_cryptodev_sw_kasumi(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_KASUMI_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_KASUMI_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "ZUC PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_KASUMI is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_sw_kasumi_testsuite);
}
@@ -8660,7 +8713,15 @@ test_cryptodev_sw_kasumi(void /*argv __rte_unused, int argc __rte_unused*/)
static int
test_cryptodev_sw_zuc(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_ZUC_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ZUC_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "ZUC PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_ZUC is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_sw_zuc_testsuite);
}
@@ -8668,7 +8729,15 @@ test_cryptodev_sw_zuc(void /*argv __rte_unused, int argc __rte_unused*/)
static int
test_cryptodev_armv8(void)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_ARMV8_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "ARMV8 PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_ARMV8 is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_armv8_testsuite);
}
@@ -8678,7 +8747,22 @@ test_cryptodev_armv8(void)
static int
test_cryptodev_scheduler(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_SCHEDULER_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "SCHEDULER PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_SCHEDULER is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
+
+ if (rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)) == -1) {
+ RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_AESNI_MB must be"
+ " enabled in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_scheduler_testsuite);
}
@@ -8689,7 +8773,16 @@ REGISTER_TEST_COMMAND(cryptodev_scheduler_autotest, test_cryptodev_scheduler);
static int
test_cryptodev_dpaa2_sec(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "DPAA2 SEC PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
+
return unit_test_suite_runner(&cryptodev_dpaa2_sec_testsuite);
}
diff --git a/test/test/test_cryptodev_blockcipher.c b/test/test/test_cryptodev_blockcipher.c
index 603c776..4bc370d 100644
--- a/test/test/test_cryptodev_blockcipher.c
+++ b/test/test/test_cryptodev_blockcipher.c
@@ -53,7 +53,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
struct rte_mempool *mbuf_pool,
struct rte_mempool *op_mpool,
uint8_t dev_id,
- enum rte_cryptodev_type cryptodev_type,
+ int driver_id,
char *test_msg)
{
struct rte_mbuf *ibuf = NULL;
@@ -79,6 +79,17 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
uint8_t tmp_src_buf[MBUF_SIZE];
uint8_t tmp_dst_buf[MBUF_SIZE];
+ int openssl_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD));
+ int scheduler_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD));
+ int armv8_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD));
+ int aesni_mb_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD));
+ int qat_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
+
int nb_segs = 1;
if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_SG) {
@@ -99,17 +110,14 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
memcpy(auth_key, tdata->auth_key.data,
tdata->auth_key.len);
- switch (cryptodev_type) {
- case RTE_CRYPTODEV_QAT_SYM_PMD:
- case RTE_CRYPTODEV_OPENSSL_PMD:
- case RTE_CRYPTODEV_ARMV8_PMD: /* Fall through */
+ if (driver_id == qat_pmd ||
+ driver_id == openssl_pmd ||
+ driver_id == armv8_pmd) {
digest_len = tdata->digest.len;
- break;
- case RTE_CRYPTODEV_AESNI_MB_PMD:
- case RTE_CRYPTODEV_SCHEDULER_PMD:
+ } else if (driver_id == aesni_mb_pmd ||
+ driver_id == scheduler_pmd) {
digest_len = tdata->digest.truncated_len;
- break;
- default:
+ } else {
snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
"line %u FAILED: %s",
__LINE__, "Unsupported PMD type");
@@ -592,7 +600,7 @@ int
test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
struct rte_mempool *op_mpool,
uint8_t dev_id,
- enum rte_cryptodev_type cryptodev_type,
+ int driver_id,
enum blockcipher_test_type test_type)
{
int status, overall_status = TEST_SUCCESS;
@@ -602,6 +610,19 @@ test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
uint32_t target_pmd_mask = 0;
const struct blockcipher_test_case *tcs = NULL;
+ int openssl_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD));
+ int dpaa2_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD));
+ int scheduler_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD));
+ int armv8_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD));
+ int aesni_mb_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD));
+ int qat_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
+
switch (test_type) {
case BLKCIPHER_AES_CHAIN_TYPE:
n_test_cases = sizeof(aes_chain_test_cases) /
@@ -647,29 +668,20 @@ test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
break;
}
- switch (cryptodev_type) {
- case RTE_CRYPTODEV_AESNI_MB_PMD:
+ if (driver_id == aesni_mb_pmd)
target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB;
- break;
- case RTE_CRYPTODEV_QAT_SYM_PMD:
+ else if (driver_id == qat_pmd)
target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_QAT;
- break;
- case RTE_CRYPTODEV_OPENSSL_PMD:
+ else if (driver_id == openssl_pmd)
target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL;
- break;
- case RTE_CRYPTODEV_ARMV8_PMD:
+ else if (driver_id == armv8_pmd)
target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_ARMV8;
- break;
- case RTE_CRYPTODEV_SCHEDULER_PMD:
+ else if (driver_id == scheduler_pmd)
target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER;
- break;
- case RTE_CRYPTODEV_DPAA2_SEC_PMD:
+ else if (driver_id == dpaa2_pmd)
target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC;
- break;
- default:
+ else
TEST_ASSERT(0, "Unrecognized cryptodev type");
- break;
- }
for (i = 0; i < n_test_cases; i++) {
const struct blockcipher_test_case *tc = &tcs[i];
@@ -678,7 +690,7 @@ test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
continue;
status = test_blockcipher_one_case(tc, mbuf_pool, op_mpool,
- dev_id, cryptodev_type, test_msg);
+ dev_id, driver_id, test_msg);
printf(" %u) TestCase %s %s\n", test_index ++,
tc->test_descr, test_msg);
diff --git a/test/test/test_cryptodev_blockcipher.h b/test/test/test_cryptodev_blockcipher.h
index 004122f..22fb420 100644
--- a/test/test/test_cryptodev_blockcipher.h
+++ b/test/test/test_cryptodev_blockcipher.h
@@ -126,7 +126,7 @@ int
test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
struct rte_mempool *op_mpool,
uint8_t dev_id,
- enum rte_cryptodev_type cryptodev_type,
+ int driver_id,
enum blockcipher_test_type test_type);
#endif /* TEST_CRYPTODEV_BLOCKCIPHER_H_ */
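[Editor's note] test_cryptodev_perf.c below re-resolves driver ids on every pmd_name() call; since ids never change after constructor time, they could equally be cached once per process. Sketch only, helper name hypothetical:

#include <rte_cryptodev.h>

static int qat_pmd_id, openssl_pmd_id;	/* resolved once, reused */

static void
cache_driver_ids(void)
{
	qat_pmd_id = rte_cryptodev_driver_id_get(
			RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
	openssl_pmd_id = rte_cryptodev_driver_id_get(
			RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD));
}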
diff --git a/test/test/test_cryptodev_perf.c b/test/test/test_cryptodev_perf.c
index 7a90667..6553c94 100644
--- a/test/test/test_cryptodev_perf.c
+++ b/test/test/test_cryptodev_perf.c
@@ -195,23 +195,35 @@ static const char *chain_mode_name(enum chain_mode mode)
}
}
-static const char *pmd_name(enum rte_cryptodev_type pmd)
+static const char *pmd_name(uint8_t driver_id)
{
- switch (pmd) {
- case RTE_CRYPTODEV_NULL_PMD: return RTE_STR(CRYPTODEV_NAME_NULL_PMD); break;
- case RTE_CRYPTODEV_AESNI_GCM_PMD:
+ int null_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_NULL_PMD));
+ int dpaa2_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD));
+ int snow3g_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD));
+ int aesni_gcm_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD));
+ int aesni_mb_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD));
+ int qat_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
+
+ if (driver_id == null_pmd)
+ return RTE_STR(CRYPTODEV_NAME_NULL_PMD);
+ else if (driver_id == aesni_gcm_pmd)
return RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD);
- case RTE_CRYPTODEV_AESNI_MB_PMD:
+ else if (driver_id == aesni_mb_pmd)
return RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD);
- case RTE_CRYPTODEV_QAT_SYM_PMD:
+ else if (driver_id == qat_pmd)
return RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD);
- case RTE_CRYPTODEV_SNOW3G_PMD:
+ else if (driver_id == snow3g_pmd)
return RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD);
- case RTE_CRYPTODEV_DPAA2_SEC_PMD:
+ else if (driver_id == dpaa2_pmd)
return RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD);
- default:
+ else
return "";
- }
}
static struct rte_mbuf *
@@ -236,7 +248,7 @@ setup_test_string(struct rte_mempool *mpool,
static struct crypto_testsuite_params testsuite_params = { NULL };
static struct crypto_unittest_params unittest_params;
-static enum rte_cryptodev_type gbl_cryptodev_perftest_devtype;
+static int gbl_driver_id;
static int
testsuite_setup(void)
@@ -273,13 +285,11 @@ testsuite_setup(void)
}
/* Create an AESNI MB device if required */
- if (gbl_cryptodev_perftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD) {
-#ifndef RTE_LIBRTE_PMD_AESNI_MB
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_AESNI_MB must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_AESNI_MB_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD), NULL);
@@ -291,13 +301,11 @@ testsuite_setup(void)
}
/* Create an AESNI GCM device if required */
- if (gbl_cryptodev_perftest_devtype == RTE_CRYPTODEV_AESNI_GCM_PMD) {
-#ifndef RTE_LIBRTE_PMD_AESNI_GCM
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_AESNI_GCM must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_AESNI_GCM_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD), NULL);
@@ -309,13 +317,11 @@ testsuite_setup(void)
}
/* Create a SNOW3G device if required */
- if (gbl_cryptodev_perftest_devtype == RTE_CRYPTODEV_SNOW3G_PMD) {
-#ifndef RTE_LIBRTE_PMD_SNOW3G
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_SNOW3G must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_SNOW3G_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD), NULL);
@@ -327,14 +333,11 @@ testsuite_setup(void)
}
/* Create an OPENSSL device if required */
- if (gbl_cryptodev_perftest_devtype == RTE_CRYPTODEV_OPENSSL_PMD) {
-#ifndef RTE_LIBRTE_PMD_OPENSSL
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_OPENSSL must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_OPENSSL_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD),
@@ -347,14 +350,11 @@ testsuite_setup(void)
}
/* Create an ARMv8 device if required */
- if (gbl_cryptodev_perftest_devtype == RTE_CRYPTODEV_ARMV8_PMD) {
-#ifndef RTE_LIBRTE_PMD_ARMV8_CRYPTO
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO must be"
- " enabled in config file to run this testsuite.\n");
- return TEST_FAILED;
-#endif
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_ARMV8_PMD);
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD))) {
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_ARMV8_PMD),
@@ -366,14 +366,6 @@ testsuite_setup(void)
}
}
-#ifndef RTE_LIBRTE_PMD_QAT
- if (gbl_cryptodev_perftest_devtype == RTE_CRYPTODEV_QAT_SYM_PMD) {
- RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_QAT must be enabled "
- "in config file to run this testsuite.\n");
- return TEST_FAILED;
- }
-#endif
-
nb_devs = rte_cryptodev_count();
if (nb_devs < 1) {
RTE_LOG(ERR, USER1, "No crypto devices found?\n");
@@ -383,7 +375,7 @@ testsuite_setup(void)
/* Search for the first valid */
for (i = 0; i < nb_devs; i++) {
rte_cryptodev_info_get(i, &info);
- if (info.dev_type == gbl_cryptodev_perftest_devtype) {
+ if (info.driver_id == (uint8_t) gbl_driver_id) {
ts_params->dev_id = i;
valid_dev_id = 1;
break;
@@ -2042,8 +2034,9 @@ test_perf_crypto_qp_vary_burst_size(uint16_t dev_num)
}
while (num_received != num_to_submit) {
- if (gbl_cryptodev_perftest_devtype ==
- RTE_CRYPTODEV_AESNI_MB_PMD)
+ if (gbl_driver_id ==
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)))
rte_cryptodev_enqueue_burst(dev_num, 0,
NULL, 0);
@@ -2114,7 +2107,7 @@ test_perf_snow3G_optimise_cyclecount(struct perf_test_params *pparams)
printf("\nOn %s dev%u qp%u, %s, cipher algo:%s, auth_algo:%s, "
"Packet Size %u bytes",
- pmd_name(gbl_cryptodev_perftest_devtype),
+ pmd_name(gbl_driver_id),
ts_params->dev_id, 0,
chain_mode_name(pparams->chain),
rte_crypto_cipher_algorithm_strings[pparams->cipher_algo],
@@ -2158,8 +2151,9 @@ test_perf_snow3G_optimise_cyclecount(struct perf_test_params *pparams)
}
while (num_ops_received != num_to_submit) {
- if (gbl_cryptodev_perftest_devtype ==
- RTE_CRYPTODEV_AESNI_MB_PMD)
+ if (gbl_driver_id ==
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)))
rte_cryptodev_enqueue_burst(ts_params->dev_id, 0,
NULL, 0);
start_cycles = rte_rdtsc_precise();
@@ -2309,7 +2303,7 @@ test_perf_openssl_optimise_cyclecount(struct perf_test_params *pparams)
printf("\nOn %s dev%u qp%u, %s, cipher algo:%s, cipher key length:%u, "
"auth_algo:%s, Packet Size %u bytes",
- pmd_name(gbl_cryptodev_perftest_devtype),
+ pmd_name(gbl_driver_id),
ts_params->dev_id, 0,
chain_mode_name(pparams->chain),
rte_crypto_cipher_algorithm_strings[pparams->cipher_algo],
@@ -2444,7 +2438,7 @@ test_perf_armv8_optimise_cyclecount(struct perf_test_params *pparams)
printf("\nOn %s dev%u qp%u, %s, cipher algo:%s, cipher key length:%u, "
"auth_algo:%s, Packet Size %u bytes",
- pmd_name(gbl_cryptodev_perftest_devtype),
+ pmd_name(gbl_driver_id),
ts_params->dev_id, 0,
chain_mode_name(pparams->chain),
rte_crypto_cipher_algorithm_strings[pparams->cipher_algo],
@@ -3358,7 +3352,8 @@ test_perf_snow3g(uint8_t dev_id, uint16_t queue_id,
double cycles_B = cycles_buff / pparams->buf_size;
double throughput = (ops_s * pparams->buf_size * 8) / 1000000;
- if (gbl_cryptodev_perftest_devtype == RTE_CRYPTODEV_QAT_SYM_PMD) {
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD))) {
/* Cycle count misleading on HW devices for this test, so don't print */
printf("%4u\t%6.2f\t%10.2f\t n/a \t\t n/a "
"\t\t n/a \t\t%8"PRIu64"\t%8"PRIu64,
@@ -3797,7 +3792,7 @@ test_perf_snow3G_vary_pkt_size(void)
params_set[i].auth_algo;
printf("\nOn %s dev%u qp%u, %s, "
"cipher algo:%s, auth algo:%s, burst_size: %d ops",
- pmd_name(gbl_cryptodev_perftest_devtype),
+ pmd_name(gbl_driver_id),
testsuite_params.dev_id, 0,
chain_mode_name(params_set[i].chain),
rte_crypto_cipher_algorithm_strings[cipher_algo],
@@ -4678,7 +4673,15 @@ static struct unit_test_suite cryptodev_armv8_testsuite = {
static int
perftest_aesni_gcm_cryptodev(void)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_AESNI_GCM_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "AESNI GCM PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_AESNI_GCM is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_gcm_testsuite);
}
@@ -4686,7 +4689,15 @@ perftest_aesni_gcm_cryptodev(void)
static int
perftest_aesni_mb_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_AESNI_MB_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "AESNI MB PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_AESNI_MB is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_aes_testsuite);
}
@@ -4694,7 +4705,15 @@ perftest_aesni_mb_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
static int
perftest_qat_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_QAT_SYM_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "QAT PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_QAT is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_testsuite);
}
@@ -4702,7 +4721,15 @@ perftest_qat_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
static int
perftest_sw_snow3g_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_SNOW3G_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "SNOW3G PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_SNOW3G is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_snow3g_testsuite);
}
@@ -4710,7 +4737,15 @@ perftest_sw_snow3g_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
static int
perftest_qat_snow3g_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_QAT_SYM_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "QAT PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_QAT is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_snow3g_testsuite);
}
@@ -4718,7 +4753,15 @@ perftest_qat_snow3g_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
static int
perftest_openssl_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_OPENSSL_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "OpenSSL PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_OPENSSL is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_openssl_testsuite);
}
@@ -4726,7 +4769,15 @@ perftest_openssl_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
static int
perftest_qat_continual_cryptodev(void)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_QAT_SYM_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "QAT PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_QAT is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_qat_continual_testsuite);
}
@@ -4734,7 +4785,15 @@ perftest_qat_continual_cryptodev(void)
static int
perftest_sw_armv8_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_ARMV8_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "ARMV8 PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_ARMV8 is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_armv8_testsuite);
}
@@ -4742,7 +4801,15 @@ perftest_sw_armv8_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
static int
perftest_dpaa2_sec_cryptodev(void)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD));
+
+ if (gbl_driver_id == -1) {
+ RTE_LOG(ERR, USER1, "DPAA2 SEC PMD must be loaded. Check if "
+ "CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC is enabled "
+ "in config file to run this testsuite.\n");
+ return TEST_FAILED;
+ }
return unit_test_suite_runner(&cryptodev_dpaa2_sec_testsuite);
}
--
2.9.4
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [RFC] ring: relax alignment constraint on ring structure
@ 2017-06-30 14:26 4% Olivier Matz
0 siblings, 0 replies; 200+ results
From: Olivier Matz @ 2017-06-30 14:26 UTC (permalink / raw)
To: dev; +Cc: bruce.richardson, konstantin.ananyev, daniel.verkamp
The initial objective of
commit d9f0d3a1ffd4 ("ring: remove split cacheline build setting")
was to add an empty cache line between the producer and consumer
data (on platforms with cache line size = 64B), preventing them
from ending up on adjacent cache lines.
Following discussion on the mailing list, it appears that this
also imposes an alignment constraint that is not required.
This patch removes the extra alignment constraint and adds the
empty cache lines using padding fields in the structure. The
size of the rte_ring structure and the offsets of the fields remain
the same on platforms with cache line size = 64B:
rte_ring = 384
rte_ring.name = 0
rte_ring.flags = 32
rte_ring.memzone = 40
rte_ring.size = 48
rte_ring.mask = 52
rte_ring.prod = 128
rte_ring.cons = 256
But it has an impact on platforms where the cache line size is 128B:
rte_ring = 384 -> 768
rte_ring.name = 0
rte_ring.flags = 32
rte_ring.memzone = 40
rte_ring.size = 48
rte_ring.mask = 52
rte_ring.prod = 128 -> 256
rte_ring.cons = 256 -> 512
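A stand-alone sketch (using a stand-in struct, not the real rte_ring)
showing that explicit padding keeps these offsets without over-aligning
the whole structure; with 64B cache lines it prints
sizeof=384 prod=128 cons=256, matching the numbers above:
---8<---
#include <stddef.h>
#include <stdio.h>

#define CACHE_LINE 64

struct headtail {
	unsigned int head;
	unsigned int tail;
};

struct ring_sketch {
	char hdr[56];	/* stand-in for name/flags/memzone/size/mask */
	char pad0 __attribute__((aligned(CACHE_LINE)));	/* empty line */
	struct headtail prod __attribute__((aligned(CACHE_LINE)));
	char pad1 __attribute__((aligned(CACHE_LINE)));	/* empty line */
	struct headtail cons __attribute__((aligned(CACHE_LINE)));
	char pad2 __attribute__((aligned(CACHE_LINE)));	/* empty line */
};

int main(void)
{
	printf("sizeof=%zu prod=%zu cons=%zu\n",
	       sizeof(struct ring_sketch),
	       offsetof(struct ring_sketch, prod),
	       offsetof(struct ring_sketch, cons));
	return 0;
}
---8<---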
Link: http://dpdk.org/dev/patchwork/patch/25039/
Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
I'm sending this patch to reopen the discussion, but since it
breaks the ABI on platforms with cache line size = 128B, I think we
should follow the usual ABI breakage process.
If everybody agrees, I'll send a notice and resend a similar patch after
17.08.
Olivier
lib/librte_ring/rte_ring.h | 16 ++++++----------
1 file changed, 6 insertions(+), 10 deletions(-)
diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index 1beb781b4..135b83df0 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -116,14 +116,6 @@ enum rte_ring_queue_behavior {
struct rte_memzone; /* forward declaration, so as not to require memzone.h */
-#if RTE_CACHE_LINE_SIZE < 128
-#define PROD_ALIGN (RTE_CACHE_LINE_SIZE * 2)
-#define CONS_ALIGN (RTE_CACHE_LINE_SIZE * 2)
-#else
-#define PROD_ALIGN RTE_CACHE_LINE_SIZE
-#define CONS_ALIGN RTE_CACHE_LINE_SIZE
-#endif
-
/* structure to hold a pair of head/tail values and other metadata */
struct rte_ring_headtail {
volatile uint32_t head; /**< Prod/consumer head. */
@@ -155,11 +147,15 @@ struct rte_ring {
uint32_t mask; /**< Mask (size-1) of ring. */
uint32_t capacity; /**< Usable size of ring */
+ char pad0 __rte_cache_aligned; /**< empty cache line */
+
/** Ring producer status. */
- struct rte_ring_headtail prod __rte_aligned(PROD_ALIGN);
+ struct rte_ring_headtail prod __rte_cache_aligned;
+ char pad1 __rte_cache_aligned; /**< empty cache line */
/** Ring consumer status. */
- struct rte_ring_headtail cons __rte_aligned(CONS_ALIGN);
+ struct rte_ring_headtail cons __rte_cache_aligned;
+ char pad2 __rte_cache_aligned; /**< empty cache line */
};
#define RING_F_SP_ENQ 0x0001 /**< The default enqueue is "single-producer". */
--
2.11.0
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH 1/3] timer: inform periodic timers of multiple expiries
2017-06-30 10:14 3% ` Olivier Matz
@ 2017-06-30 12:06 0% ` Bruce Richardson
0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2017-06-30 12:06 UTC (permalink / raw)
To: Olivier Matz; +Cc: Robert Sanford, dev
On Fri, Jun 30, 2017 at 12:14:31PM +0200, Olivier Matz wrote:
> Hi Bruce,
>
> On Wed, 31 May 2017 10:16:19 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> > If timer_manage is called much less frequently than the period of a
> > periodic timer, then timer expiries will be missed. For example, if a timer
> > has a period of 300us, but timer_manage is called every 1ms, then there
> > will only be one timer callback called every 1ms instead of 3 within that
> > time.
> >
> > While we can fix this by having each function called multiple times within
> > timer-manage, this will lead to out-of-order timeouts, and will be slower
> > with all the function call overheads - especially in the case of a timeout
> > doing something trivial like incrementing a counter. Therefore, we instead
> > modify the callback functions to take a counter value of the number of
> > expiries that have passed since the last time it was called.
> >
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
>
> Sorry, it's probably a bit late to react. If it's too late, never mind.
> I'm not really convinced that adding another argument to the callback
> function is the best solution.
>
> Invoking the callbacks several times would result in a much smaller patch
> that does not need heavy ABI compatibility work.
>
Yes, that is true. My first implementation did just that, and I'm not
averse to that as a possible solution. However, my opinion is that this
is a better solution - primarily because it lets the worker know that it
is very late (from the multiple-expiries info).
> I'm not sure the function call overhead is really significant in that
> case.
Yes, you are probably right in many cases. However, for a timeout doing
a simple operation, e.g. updating a counter or two, or setting a flag,
the function call overhead will be significant compared to the work
being done.
> I'm not sure I understand your point related to out-of-order timeouts,
> nor do I see why this patchset would behave better.
My problem with the multiple expiries and ordering is that if we call
timeout X multiple times, we should really order those calls with other
timeouts that also need to expire, rather than just calling them three
times in a row. Not doing so seems wrong. By instead passing in the
extra parameter, there is no expectation of ordering of the callbacks,
and the callback function itself can know that it is running very late
and can take appropriate action when needed.
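For illustration, a minimal sketch assuming the new third argument is
the expiry count (the exact prototype isn't shown in this thread):
---8<---
#include <stdint.h>
#include <rte_log.h>
#include <rte_timer.h>

static void
stats_cb(struct rte_timer *tim, void *arg, unsigned int nb_expiries)
{
	uint64_t *ticks = arg;

	(void)tim;
	*ticks += nb_expiries;	/* catch up on every missed period */

	if (nb_expiries > 1)
		RTE_LOG(WARNING, USER1,
			"timer_manage running late: %u periods\n",
			nb_expiries);
}
---8<---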
>
> About the problem itself, my understanding was that the timer manage
> function has to be called frequently enough to process the timers.
>
Yes. However, if some operation ends up taking longer than expected,
we should not drop timer expiries.
Anyone else want to weigh in on this problem? Any other opinions as to
which solution people would prefer?
/Bruce
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
2017-06-12 9:56 0% ` Bruce Richardson
@ 2017-06-30 11:35 0% ` Olivier Matz
0 siblings, 0 replies; 200+ results
From: Olivier Matz @ 2017-06-30 11:35 UTC (permalink / raw)
To: Bruce Richardson; +Cc: Ananyev, Konstantin, Verkamp, Daniel, dev
On Mon, 12 Jun 2017 10:56:09 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> On Mon, Jun 12, 2017 at 11:02:32AM +0200, Olivier Matz wrote:
> > On Fri, 9 Jun 2017 10:02:55 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> > > On Thu, Jun 08, 2017 at 05:42:00PM +0100, Ananyev, Konstantin wrote:
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Richardson, Bruce
> > > > > Sent: Thursday, June 8, 2017 5:21 PM
> > > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > > > Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > Subject: RE: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> > > > >
> > > > >
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Ananyev, Konstantin
> > > > > > Sent: Thursday, June 8, 2017 5:13 PM
> > > > > > To: Richardson, Bruce <bruce.richardson@intel.com>
> > > > > > Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel
> > > > > > <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > > Subject: RE: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> > > > > >
> > > > > >
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Richardson, Bruce
> > > > > > > Sent: Thursday, June 8, 2017 5:04 PM
> > > > > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > > > > > Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel
> > > > > > > <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone
> > > > > > > allocation
> > > > > > >
> > > > > > > On Thu, Jun 08, 2017 at 04:35:20PM +0100, Ananyev, Konstantin wrote:
> > > > > > > >
> > > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: Richardson, Bruce
> > > > > > > > > Sent: Thursday, June 8, 2017 4:25 PM
> > > > > > > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > > > > > > > Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel
> > > > > > > > > <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone
> > > > > > > > > allocation
> > > > > > > > >
> > > > > > > > > On Thu, Jun 08, 2017 at 03:50:34PM +0100, Ananyev, Konstantin wrote:
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > > -----Original Message-----
> > > > > > > > > > > From: Richardson, Bruce
> > > > > > > > > > > Sent: Thursday, June 8, 2017 3:12 PM
> > > > > > > > > > > To: Olivier Matz <olivier.matz@6wind.com>
> > > > > > > > > > > Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > > > > > > > > > > Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > > > > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone
> > > > > > > > > > > allocation
> > > > > > > > > > >
> > > > > > > > > > > On Thu, Jun 08, 2017 at 04:05:26PM +0200, Olivier Matz wrote:
> > > > > > > > > > > > On Thu, 8 Jun 2017 14:20:52 +0100, Bruce Richardson
> > > > > > <bruce.richardson@intel.com> wrote:
> > > > > > > > > > > > > On Thu, Jun 08, 2017 at 02:45:40PM +0200, Olivier Matz
> > > > > > wrote:
> > > > > > > > > > > > > > On Tue, 6 Jun 2017 15:56:28 +0100, Bruce Richardson
> > > > > > <bruce.richardson@intel.com> wrote:
> > > > > > > > > > > > > > > On Tue, Jun 06, 2017 at 02:19:21PM +0100, Ananyev,
> > > > > > Konstantin wrote:
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > -----Original Message-----
> > > > > > > > > > > > > > > > > From: Richardson, Bruce
> > > > > > > > > > > > > > > > > Sent: Tuesday, June 6, 2017 1:42 PM
> > > > > > > > > > > > > > > > > To: Ananyev, Konstantin
> > > > > > > > > > > > > > > > > <konstantin.ananyev@intel.com>
> > > > > > > > > > > > > > > > > Cc: Verkamp, Daniel <daniel.verkamp@intel.com>;
> > > > > > > > > > > > > > > > > dev@dpdk.org
> > > > > > > > > > > > > > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use
> > > > > > > > > > > > > > > > > aligned memzone allocation
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > On Tue, Jun 06, 2017 at 10:59:59AM +0100, Ananyev,
> > > > > > Konstantin wrote:
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > > The PROD/CONS_ALIGN values on x86-64 are
> > > > > > > > > > > > > > > > > > > > > set to 2 cache lines, so members
> > > > > > > > > > > > > > > > > > > > of struct rte_ring are 128 byte aligned,
> > > > > > > > > > > > > > > > > > > > >and therefore the whole struct needs
> > > > > > > > > > > > > > > > > > > > >128-byte alignment according to the ABI
> > > > > > > > > > > > > > > > > > > > so that the 128-byte alignment of the fields
> > > > > > can be guaranteed.
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > Ah ok, missed the fact that rte_ring is 128B
> > > > > > aligned these days.
> > > > > > > > > > > > > > > > > > > > BTW, I probably missed the initial discussion,
> > > > > > but what was the reason for that?
> > > > > > > > > > > > > > > > > > > > Konstantin
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > I don't know why PROD_ALIGN/CONS_ALIGN use 128
> > > > > > > > > > > > > > > > > > > byte alignment; it seems unnecessary if the
> > > > > > > > > > > > > > > > > > > cache line is only 64
> > > > > > > > > bytes.
> > > > > > > > > > > An
> > > > > > > > > > > > > > > > > alternate
> > > > > > > > > > > > > > > > > > > fix would be to just use cache line alignment
> > > > > > for these fields (since memzones are already cache line aligned).
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > Yes, had the same thought.
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > Maybe there is some deeper reason for the >=
> > > > > > 128-byte alignment logic in rte_ring.h?
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > Might be, would be good to hear opinion the author
> > > > > > of that change.
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > It gives improved performance for core-2-core
> > > > > > transfer.
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > You mean empty cache-line(s) after prod/cons, correct?
> > > > > > > > > > > > > > > > That's ok but why we can't keep them and whole
> > > > > > rte_ring aligned on cache-line boundaries?
> > > > > > > > > > > > > > > > Something like that:
> > > > > > > > > > > > > > > > struct rte_ring {
> > > > > > > > > > > > > > > > ...
> > > > > > > > > > > > > > > > struct rte_ring_headtail prod __rte_cache_aligned;
> > > > > > > > > > > > > > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > > > > > > > > > > > > struct rte_ring_headtail cons __rte_cache_aligned;
> > > > > > > > > > > > > > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > > > > > > > > > > > > };
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Konstantin
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Sure. That should probably work too.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > /Bruce
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > I also agree with Konstantin's proposal. One question
> > > > > > > > > > > > > > though: since it changes the alignment constraint of the
> > > > > > > > > > > > > > rte_ring structure, I think it is an ABI breakage: a
> > > > > > > > > > > > > > structure including the rte_ring structure inherits from
> > > > > > this constraint.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > How could we handle that, knowing this is probably a rare
> > > > > > case?
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > Is it an ABI break so long as we keep the resulting size
> > > > > > > > > > > > > and field placement of the structures the same? The
> > > > > > > > > > > > > alignment being reduced should not be a problem, as
> > > > > > > > > > > > > 128byte alignment is also valid as 64byte alignment, after
> > > > > > all.
> > > > > > > > > > > >
> > > > > > > > > > > > I'd say yes. Consider the following example:
> > > > > > > > > > > >
> > > > > > > > > > > > ---8<---
> > > > > > > > > > > > #include <stdio.h>
> > > > > > > > > > > > #include <stdlib.h>
> > > > > > > > > > > >
> > > > > > > > > > > > #define ALIGN 64
> > > > > > > > > > > > /* #define ALIGN 128 */
> > > > > > > > > > > >
> > > > > > > > > > > > /* dummy rte_ring struct */
> > > > > > > > > > > > struct rte_ring {
> > > > > > > > > > > > char x[128];
> > > > > > > > > > > > } __attribute__((aligned(ALIGN)));
> > > > > > > > > > > >
> > > > > > > > > > > > struct foo {
> > > > > > > > > > > > struct rte_ring r;
> > > > > > > > > > > > unsigned bar;
> > > > > > > > > > > > };
> > > > > > > > > > > >
> > > > > > > > > > > > int main(void)
> > > > > > > > > > > > {
> > > > > > > > > > > > struct foo array[2];
> > > > > > > > > > > >
> > > > > > > > > > > > printf("sizeof(ring)=%zu diff=%u\n",
> > > > > > > > > > > > sizeof(struct rte_ring),
> > > > > > > > > > > > (unsigned int)((char *)&array[1].r - (char
> > > > > > *)array));
> > > > > > > > > > > >
> > > > > > > > > > > > return 0;
> > > > > > > > > > > > }
> > > > > > > > > > > > ---8<---
> > > > > > > > > > > >
> > > > > > > > > > > > The size of rte_ring is always 128.
> > > > > > > > > > > > diff is 192 or 256, depending on the value of ALIGN.
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > Olivier
> > > > > > > > > >
> > > > > > > > > > About whether it would be an ABI breakage to 17.05 - I think it would...
> > > > > > > > > > Though for me the actual breakage happens in 17.05 when rte_ring
> > > > > > > > > > alignment was increased from 64B to 128B.
> > > > > > > > > > Now we are just restoring it.
> > > > > > > > > >
> > > > > > > > > Yes, ABI change was announced in advance and explicitly broken in
> > > > > > 17.05.
> > > > > > > > > There was no announcement of ABI break in 17.08 for rte_ring.
> > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Yes, the diff will change, but that is after a recompile. If
> > > > > > > > > > > we have rte_ring_create function always return a 128-byte
> > > > > > > > > > > aligned structure, will any already-compiled apps fail to work
> > > > > > > > > > > if we also change the alignment of the rte_ring struct in the
> > > > > > header?
> > > > > > > > > >
> > > > > > > > > > Why 128B?
> > > > > > > > > > I thought we were discussing making rte_ring 64B aligned again?
> > > > > > > > > >
> > > > > > > > > > Konstantin
> > > > > > > > >
> > > > > > > > > To avoid possibly breaking apps compiled against 17.05 when run
> > > > > > > > > against shared libs for 17.08. Having the extra alignment won't
> > > > > > > > > affect 17.08 apps, since they only require 64-byte alignment, but
> > > > > > > > > returning only 64-byte aligned memory for apps which expect
> > > > > > > > > 128byte aligned memory may cause issues.
> > > > > > > > >
> > > > > > > > > Therefore, we should reduce the required alignment to 64B, which
> > > > > > > > > should only affect any apps that do a recompile, and have memory
> > > > > > > > > allocation for rings return 128B aligned addresses to work with
> > > > > > > > > both 64B aligned and 128B aligned ring structures.
> > > > > > > >
> > > > > > > > Ah, I see - you are talking just about rte_ring_create().
> > > > > > > > BTW, are you sure that right now it allocates rings 128B aligned?
> > > > > > > > As I can see it does just:
> > > > > > > > mz = rte_memzone_reserve(mz_name, ring_size, socket_id, mz_flags);
> > > > > > > > which means cache line alignment.
> > > > > > > >
> > > > > > > It doesn't currently allocate with that alignment, which is something
> > > > > > > we need to fix - and what this patch was originally submitted to do.
> > > > > > > So I think this patch should be applied, along with a further patch to
> > > > > > > reduce the alignment going forward to avoid any other problems.
> > > > > >
> > > > > > But if we going to reduce alignment anyway (patch #2) why do we need patch
> > > > > > #1 at all?
> > > > >
> > > > > Because any app compiled against 17.05 will use the old alignment value. Therefore patch 1 should be applied to 17.08 for backward
> > > > > compatibility, and backported to 17.05.1.
> > > >
> > > > Why then just no backport patch #2 to 17.05.1?
> > > >
> > > Maybe so. I'm just a little wary about backporting changes like that to
> > > an older release, even though I'm not aware of any specific issues it
> > > might cause.
> >
> >
> > If we want to fully respect the API/ABI deprecation process, we should
> > have patch #1 in 17.05 and 17.08, a deprecation notice in 17.08, and patch
> > #2 starting from 17.11.
> >
> > More pragmatically, it's quite difficult to foresee really big problems
> > due to the changes in patch #2. One I can see is:
> >
> > - rte_ring.so: the dpdk ring library
> > - another_ring.so: a library based on dpdk ring. The struct another_ring
> > is like the struct foo in my previous example.
> > - application: uses another_ring structure
> >
> > After we apply patch #2 to dpdk and recompile the another_ring library,
> > its ABI will change.
> >
> >
> > So I suggest to follow the deprecation process for that issue.
> >
> While this theoretically can occur, I consider it fairly unlikely, so my
> preference is to have patch #1 in 17.05 and 17.08, as you suggest,
> but put patch #2 into 17.08 as well.
Ok, let's move forward. I'll ack Daniel's patch + CC stable.
Then I'll submit Konstantin's proposal on the ML.
Olivier
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 1/3] timer: inform periodic timers of multiple expiries
@ 2017-06-30 10:14 3% ` Olivier Matz
2017-06-30 12:06 0% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Olivier Matz @ 2017-06-30 10:14 UTC (permalink / raw)
To: Bruce Richardson; +Cc: Robert Sanford, dev
Hi Bruce,
On Wed, 31 May 2017 10:16:19 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> If timer_manage is called much less frequently than the period of a
> periodic timer, then timer expiries will be missed. For example, if a timer
> has a period of 300us, but timer_manage is called every 1ms, then there
> will only be one timer callback called every 1ms instead of 3 within that
> time.
>
> While we can fix this by having each function called multiple times within
> timer-manage, this will lead to out-of-order timeouts, and will be slower
> with all the function call overheads - especially in the case of a timeout
> doing something trivial like incrementing a counter. Therefore, we instead
> modify the callback functions to take a counter value of the number of
> expiries that have passed since the last time it was called.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Sorry, it's probably a bit late to react. If it's too late, nevermind.
I'm not really convinced that adding another argument to the callback
function is the best solution.
Invoking the callbacks several times would result in a much smaller patch
that does not need heavy ABI compatibility work.
I'm not sure the function call overhead is really significant in that
case. I'm not sure I understand your point related to out-of-order timeouts,
nor do I see why this patchset would behave better.
About the problem itself, my understanding was that the timer manage
function has to be called frequently enough to process the timers.
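For comparison, a rough sketch of the alternative (not from any posted
patch): keep the callback signature unchanged and invoke the handler
once per elapsed period inside the manage loop:
---8<---
#include <rte_timer.h>

/* Illustrative only; uses the public rte_timer fields. */
static void
run_periodic(struct rte_timer *tim, uint64_t now)
{
	/* Call the handler once per elapsed period, in order. */
	while (tim->expire <= now) {
		tim->f(tim, tim->arg);
		tim->expire += tim->period;
	}
}
---8<---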
Olivier
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v3 26/26] cryptodev: remove AAD from authentication structure
` (9 preceding siblings ...)
2017-06-29 11:35 2% ` [dpdk-dev] [PATCH v3 20/26] cryptodev: add AEAD parameters in crypto operation Pablo de Lara
@ 2017-06-29 11:35 4% ` Pablo de Lara
10 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2017-06-29 11:35 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal,
hemant.agrawal, fiona.trahe, john.griffin, deepak.k.jain
Cc: dev, Pablo de Lara
Now that AAD is only used in AEAD algorithms,
there is no need to keep AAD in the authentication
structure.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
app/test-crypto-perf/cperf_ops.c | 2 --
doc/guides/prog_guide/cryptodev_lib.rst | 6 ------
doc/guides/rel_notes/release_17_08.rst | 3 +++
drivers/crypto/openssl/rte_openssl_pmd.c | 1 -
lib/librte_cryptodev/rte_crypto_sym.h | 26 --------------------------
test/test/test_cryptodev.c | 4 ----
test/test/test_cryptodev_perf.c | 1 -
7 files changed, 3 insertions(+), 40 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index a63fd83..8407503 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -425,7 +425,6 @@ cperf_create_session(uint8_t dev_id,
test_vector->auth_iv.length;
} else {
auth_xform.auth.digest_length = 0;
- auth_xform.auth.add_auth_data_length = 0;
auth_xform.auth.key.length = 0;
auth_xform.auth.key.data = NULL;
auth_xform.auth.iv.length = 0;
@@ -478,7 +477,6 @@ cperf_create_session(uint8_t dev_id,
test_vector->auth_key.data;
} else {
auth_xform.auth.digest_length = 0;
- auth_xform.auth.add_auth_data_length = 0;
auth_xform.auth.key.length = 0;
auth_xform.auth.key.data = NULL;
auth_xform.auth.iv.length = 0;
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index 5048839..f250c00 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -567,12 +567,6 @@ chain.
uint8_t *data;
phys_addr_t phys_addr;
} digest; /**< Digest parameters */
-
- struct {
- uint8_t *data;
- phys_addr_t phys_addr;
- } aad;
- /**< Additional authentication parameters */
} auth;
};
};
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index 2c6bef5..d29b203 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -176,6 +176,9 @@ API Changes
* Changed field size of digest length in ``rte_crypto_auth_xform``,
from uint32_t to uint16_t.
* Added AEAD structure in ``rte_crypto_sym_op``.
+ * Removed AAD length from ``rte_crypto_auth_xform``.
+ * Removed AAD pointer and physical address from auth structure
+ in ``rte_crypto_sym_op``.
ABI Changes
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 7f5c6aa..73e7ff1 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -413,7 +413,6 @@ openssl_set_session_auth_parameters(struct openssl_session *sess,
return -EINVAL;
}
- sess->auth.aad_length = xform->auth.add_auth_data_length;
sess->auth.digest_length = xform->auth.digest_length;
return 0;
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index dab042b..742dc34 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -326,13 +326,6 @@ struct rte_crypto_auth_xform {
* the result shall be truncated.
*/
- uint16_t add_auth_data_length;
- /**< The length of the additional authenticated data (AAD) in bytes.
- * The maximum permitted value is 65535 (2^16 - 1) bytes, unless
- * otherwise specified below.
- *
- */
-
struct {
uint16_t offset;
/**< Starting point for Initialisation Vector or Counter,
@@ -670,25 +663,6 @@ struct rte_crypto_sym_op {
phys_addr_t phys_addr;
/**< Physical address of digest */
} digest; /**< Digest parameters */
-
- struct {
- uint8_t *data;
- /**< Pointer to Additional Authenticated
- * Data (AAD) needed for authenticated cipher
- * mechanisms (CCM and GCM).
- *
- * The length of the data pointed to by this
- * field is set up for the session in the @ref
- * rte_crypto_auth_xform structure as part of
- * the @ref rte_cryptodev_sym_session_create
- * function call.
- * This length must not exceed 65535 (2^16-1)
- * bytes.
- *
- */
- phys_addr_t phys_addr; /**< physical address */
- } aad;
- /**< Additional authentication parameters */
} auth;
};
};
diff --git a/test/test/test_cryptodev.c b/test/test/test_cryptodev.c
index 21c6270..db0999e 100644
--- a/test/test/test_cryptodev.c
+++ b/test/test/test_cryptodev.c
@@ -5530,7 +5530,6 @@ static int MD5_HMAC_create_session(struct crypto_testsuite_params *ts_params,
ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_MD5_HMAC;
ut_params->auth_xform.auth.digest_length = MD5_DIGEST_LEN;
- ut_params->auth_xform.auth.add_auth_data_length = 0;
ut_params->auth_xform.auth.key.length = test_case->key.len;
ut_params->auth_xform.auth.key.data = key;
@@ -6303,7 +6302,6 @@ static int create_gmac_session(uint8_t dev_id,
ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_AES_GMAC;
ut_params->auth_xform.auth.op = auth_op;
ut_params->auth_xform.auth.digest_length = tdata->gmac_tag.len;
- ut_params->auth_xform.auth.add_auth_data_length = 0;
ut_params->auth_xform.auth.key.length = tdata->key.len;
ut_params->auth_xform.auth.key.data = auth_key;
ut_params->auth_xform.auth.iv.offset = IV_OFFSET;
@@ -6683,7 +6681,6 @@ create_auth_session(struct crypto_unittest_params *ut_params,
ut_params->auth_xform.auth.key.length = reference->auth_key.len;
ut_params->auth_xform.auth.key.data = auth_key;
ut_params->auth_xform.auth.digest_length = reference->digest.len;
- ut_params->auth_xform.auth.add_auth_data_length = reference->aad.len;
/* Create Crypto session*/
ut_params->sess = rte_cryptodev_sym_session_create(dev_id,
@@ -6721,7 +6718,6 @@ create_auth_cipher_session(struct crypto_unittest_params *ut_params,
ut_params->auth_xform.auth.iv.length = reference->iv.len;
} else {
ut_params->auth_xform.next = &ut_params->cipher_xform;
- ut_params->auth_xform.auth.add_auth_data_length = reference->aad.len;
/* Setup Cipher Parameters */
ut_params->cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
diff --git a/test/test/test_cryptodev_perf.c b/test/test/test_cryptodev_perf.c
index 5b2468d..7ae1ae9 100644
--- a/test/test/test_cryptodev_perf.c
+++ b/test/test/test_cryptodev_perf.c
@@ -2936,7 +2936,6 @@ test_perf_set_crypto_op_aes(struct rte_crypto_op *op, struct rte_mbuf *m,
if (chain == CIPHER_ONLY) {
op->sym->auth.digest.data = NULL;
op->sym->auth.digest.phys_addr = 0;
- op->sym->auth.aad.data = NULL;
op->sym->auth.data.offset = 0;
op->sym->auth.data.length = 0;
} else {
--
2.9.4
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v3 20/26] cryptodev: add AEAD parameters in crypto operation
` (8 preceding siblings ...)
2017-06-29 11:35 1% ` [dpdk-dev] [PATCH v3 17/26] cryptodev: remove digest " Pablo de Lara
@ 2017-06-29 11:35 2% ` Pablo de Lara
2017-06-29 11:35 4% ` [dpdk-dev] [PATCH v3 26/26] cryptodev: remove AAD from authentication structure Pablo de Lara
10 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2017-06-29 11:35 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal,
hemant.agrawal, fiona.trahe, john.griffin, deepak.k.jain
Cc: dev, Pablo de Lara
AEAD operation parameters can be set in the new
aead structure, in the crypto operation.
This structure is within a union with the cipher
and authentication parameters, since operations can be:
- AEAD: using the aead structure
- Cipher only: using only the cipher structure
- Auth only: using only the authentication structure
- Cipher-then-auth/Auth-then-cipher: using both cipher
and authentication structures
Therefore, all three cannot be used at the same time.
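As an illustration (a hypothetical helper, not part of this patch), an
AES-GCM operation would now fill only the aead member of the union:
---8<---
#include <rte_crypto.h>
#include <rte_mbuf.h>

static void
fill_gcm_op(struct rte_crypto_op *op, struct rte_mbuf *m,
		uint32_t plaintext_len, uint8_t *aad, phys_addr_t aad_phys)
{
	struct rte_crypto_sym_op *sop = op->sym;

	sop->aead.data.offset = 0;
	sop->aead.data.length = plaintext_len;
	/* digest placed right after the plaintext in this sketch */
	sop->aead.digest.data = rte_pktmbuf_mtod_offset(m, uint8_t *,
			plaintext_len);
	sop->aead.digest.phys_addr =
			rte_pktmbuf_mtophys_offset(m, plaintext_len);
	sop->aead.aad.data = aad;	/* GCM: AAD written from byte 0 */
	sop->aead.aad.phys_addr = aad_phys;
}
---8<---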
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
doc/guides/prog_guide/cryptodev_lib.rst | 70 +++---
doc/guides/rel_notes/release_17_08.rst | 1 +
lib/librte_cryptodev/rte_crypto_sym.h | 375 ++++++++++++++++++++------------
3 files changed, 279 insertions(+), 167 deletions(-)
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index b888554..5048839 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -431,7 +431,6 @@ operations, as well as also supporting AEAD operations.
Session and Session Management
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Session are used in symmetric cryptographic processing to store the immutable
data defined in a cryptographic transform which is used in the operation
@@ -465,9 +464,6 @@ operation and its parameters. See the section below for details on transforms.
struct rte_cryptodev_sym_session * rte_cryptodev_sym_session_create(
uint8_t dev_id, struct rte_crypto_sym_xform *xform);
-**Note**: For AEAD operations the algorithm selected for authentication and
-ciphering must aligned, eg AES_GCM.
-
Transforms and Transform Chaining
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -533,30 +529,54 @@ chain.
/**< Session-less API Crypto operation parameters */
};
- struct {
- struct {
- uint32_t offset;
- uint32_t length;
- } data; /**< Data offsets and length for ciphering */
- } cipher;
-
- struct {
- struct {
- uint32_t offset;
- uint32_t length;
- } data; /**< Data offsets and length for authentication */
-
+ union {
struct {
- uint8_t *data;
- phys_addr_t phys_addr;
- } digest; /**< Digest parameters */
+ struct {
+ uint32_t offset;
+ uint32_t length;
+ } data; /**< Data offsets and length for AEAD */
+
+ struct {
+ uint8_t *data;
+ phys_addr_t phys_addr;
+ } digest; /**< Digest parameters */
+
+ struct {
+ uint8_t *data;
+ phys_addr_t phys_addr;
+ } aad;
+ /**< Additional authentication parameters */
+ } aead;
struct {
- uint8_t *data;
- phys_addr_t phys_addr;
- } aad; /**< Additional authentication parameters */
- } auth;
- }
+ struct {
+ struct {
+ uint32_t offset;
+ uint32_t length;
+ } data; /**< Data offsets and length for ciphering */
+ } cipher;
+
+ struct {
+ struct {
+ uint32_t offset;
+ uint32_t length;
+ } data;
+ /**< Data offsets and length for authentication */
+
+ struct {
+ uint8_t *data;
+ phys_addr_t phys_addr;
+ } digest; /**< Digest parameters */
+
+ struct {
+ uint8_t *data;
+ phys_addr_t phys_addr;
+ } aad;
+ /**< Additional authentication parameters */
+ } auth;
+ };
+ };
+ };
Asymmetric Cryptography
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index b920142..2c6bef5 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -175,6 +175,7 @@ API Changes
* Removed digest length from ``rte_crypto_sym_op``.
* Changed field size of digest length in ``rte_crypto_auth_xform``,
from uint32_t to uint16_t.
+ * Added AEAD structure in ``rte_crypto_sym_op``.
ABI Changes
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index db3957e..f03d2fd 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -556,151 +556,242 @@ struct rte_crypto_sym_op {
/**< Session-less API crypto operation parameters */
};
- struct {
- struct {
- uint32_t offset;
- /**< Starting point for cipher processing, specified
- * as number of bytes from start of data in the source
- * buffer. The result of the cipher operation will be
- * written back into the output buffer starting at
- * this location.
- *
- * @note
- * For SNOW 3G @ RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
- * KASUMI @ RTE_CRYPTO_CIPHER_KASUMI_F8
- * and ZUC @ RTE_CRYPTO_CIPHER_ZUC_EEA3,
- * this field should be in bits.
- */
-
- uint32_t length;
- /**< The message length, in bytes, of the source buffer
- * on which the cryptographic operation will be
- * computed. This must be a multiple of the block size
- * if a block cipher is being used. This is also the
- * same as the result length.
- *
- * @note
- * In the case of CCM @ref RTE_CRYPTO_AUTH_AES_CCM,
- * this value should not include the length of the
- * padding or the length of the MAC; the driver will
- * compute the actual number of bytes over which the
- * encryption will occur, which will include these
- * values.
- *
- * @note
- * For SNOW 3G @ RTE_CRYPTO_AUTH_SNOW3G_UEA2,
- * KASUMI @ RTE_CRYPTO_CIPHER_KASUMI_F8
- * and ZUC @ RTE_CRYPTO_CIPHER_ZUC_EEA3,
- * this field should be in bits.
- */
- } data; /**< Data offsets and length for ciphering */
-
- } cipher;
-
- struct {
- struct {
- uint32_t offset;
- /**< Starting point for hash processing, specified as
- * number of bytes from start of packet in source
- * buffer.
- *
- * @note
- * For CCM and GCM modes of operation, this field is
- * ignored. The field @ref aad field
- * should be set instead.
- *
- * @note
- * For SNOW 3G @ RTE_CRYPTO_AUTH_SNOW3G_UIA2,
- * KASUMI @ RTE_CRYPTO_AUTH_KASUMI_F9
- * and ZUC @ RTE_CRYPTO_AUTH_ZUC_EIA3,
- * this field should be in bits.
- */
-
- uint32_t length;
- /**< The message length, in bytes, of the source
- * buffer that the hash will be computed on.
- *
- * @note
- * For CCM and GCM modes of operation, this field is
- * ignored. The field @ref aad field should be set
- * instead.
- *
- * @note
- * For SNOW 3G @ RTE_CRYPTO_AUTH_SNOW3G_UIA2,
- * KASUMI @ RTE_CRYPTO_AUTH_KASUMI_F9
- * and ZUC @ RTE_CRYPTO_AUTH_ZUC_EIA3,
- * this field should be in bits.
- */
- } data; /**< Data offsets and length for authentication */
-
+ union {
struct {
- uint8_t *data;
- /**< This points to the location where the digest result
- * should be inserted (in the case of digest generation)
- * or where the purported digest exists (in the case of
- * digest verification).
- *
- * At session creation time, the client specified the
- * digest result length with the digest_length member
- * of the @ref rte_crypto_auth_xform structure. For
- * physical crypto devices the caller must allocate at
- * least digest_length of physically contiguous memory
- * at this location.
- *
- * For digest generation, the digest result will
- * overwrite any data at this location.
- *
- * @note
- * For GCM (@ref RTE_CRYPTO_AUTH_AES_GCM), for
- * "digest result" read "authentication tag T".
- */
- phys_addr_t phys_addr;
- /**< Physical address of digest */
- } digest; /**< Digest parameters */
+ struct {
+ uint32_t offset;
+ /**< Starting point for AEAD processing, specified as
+ * number of bytes from start of packet in source
+ * buffer.
+ */
+ uint32_t length;
+ /**< The message length, in bytes, of the source buffer
+ * on which the cryptographic operation will be
+ * computed. This must be a multiple of the block size
+ */
+ } data; /**< Data offsets and length for AEAD */
+ struct {
+ uint8_t *data;
+ /**< This points to the location where the digest result
+ * should be inserted (in the case of digest generation)
+ * or where the purported digest exists (in the case of
+ * digest verification).
+ *
+ * At session creation time, the client specified the
+ * digest result length with the digest_length member
+ * of the @ref rte_crypto_auth_xform structure. For
+ * physical crypto devices the caller must allocate at
+ * least digest_length of physically contiguous memory
+ * at this location.
+ *
+ * For digest generation, the digest result will
+ * overwrite any data at this location.
+ *
+ * @note
+ * For GCM (@ref RTE_CRYPTO_AEAD_AES_GCM), for
+ * "digest result" read "authentication tag T".
+ */
+ phys_addr_t phys_addr;
+ /**< Physical address of digest */
+ } digest; /**< Digest parameters */
+ struct {
+ uint8_t *data;
+ /**< Pointer to Additional Authenticated Data (AAD)
+ * needed for authenticated cipher mechanisms (CCM and
+ * GCM)
+ *
+ * Specifically for CCM (@ref RTE_CRYPTO_AEAD_AES_CCM),
+ * the caller should setup this field as follows:
+ *
+ * - the nonce should be written starting at an offset
+ * of one byte into the array, leaving room for the
+ * implementation to write in the flags to the first
+ * byte.
+ *
+ * - the additional authentication data itself should
+ * be written starting at an offset of 18 bytes into
+ * the array, leaving room for the length encoding in
+ * the first two bytes of the second block.
+ *
+ * - the array should be big enough to hold the above
+ * fields, plus any padding to round this up to the
+ * nearest multiple of the block size (16 bytes).
+ * Padding will be added by the implementation.
+ *
+ * Finally, for GCM (@ref RTE_CRYPTO_AEAD_AES_GCM), the
+ * caller should setup this field as follows:
+ *
+ * - the AAD is written in starting at byte 0
+ * - the array must be big enough to hold the AAD, plus
+ * any space to round this up to the nearest multiple
+ * of the block size (16 bytes).
+ *
+ */
+ phys_addr_t phys_addr; /**< physical address */
+ } aad;
+ /**< Additional authentication parameters */
+ } aead;
struct {
- uint8_t *data;
- /**< Pointer to Additional Authenticated Data (AAD)
- * needed for authenticated cipher mechanisms (CCM and
- * GCM).
- *
- * The length of the data pointed to by this field is
- * set up for the session in the @ref
- * rte_crypto_auth_xform structure as part of the @ref
- * rte_cryptodev_sym_session_create function call.
- * This length must not exceed 65535 (2^16-1) bytes.
- *
- * Specifically for CCM (@ref RTE_CRYPTO_AUTH_AES_CCM),
- * the caller should setup this field as follows:
- *
- * - the nonce should be written starting at an offset
- * of one byte into the array, leaving room for the
- * implementation to write in the flags to the first
- * byte.
- *
- * - the additional authentication data itself should
- * be written starting at an offset of 18 bytes into
- * the array, leaving room for the length encoding in
- * the first two bytes of the second block.
- *
- * - the array should be big enough to hold the above
- * fields, plus any padding to round this up to the
- * nearest multiple of the block size (16 bytes).
- * Padding will be added by the implementation.
- *
- * Finally, for GCM (@ref RTE_CRYPTO_AUTH_AES_GCM), the
- * caller should setup this field as follows:
- *
- * - the AAD is written in starting at byte 0
- * - the array must be big enough to hold the AAD, plus
- * any space to round this up to the nearest multiple
- * of the block size (16 bytes).
- *
- */
- phys_addr_t phys_addr; /**< physical address */
- } aad;
- /**< Additional authentication parameters */
- } auth;
+ struct {
+ struct {
+ uint32_t offset;
+ /**< Starting point for cipher processing,
+ * specified as number of bytes from start
+ * of data in the source buffer.
+ * The result of the cipher operation will be
+ * written back into the output buffer
+ * starting at this location.
+ *
+ * @note
+ * For SNOW 3G @ RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
+ * KASUMI @ RTE_CRYPTO_CIPHER_KASUMI_F8
+ * and ZUC @ RTE_CRYPTO_CIPHER_ZUC_EEA3,
+ * this field should be in bits.
+ */
+ uint32_t length;
+ /**< The message length, in bytes, of the
+ * source buffer on which the cryptographic
+ * operation will be computed.
+ * This must be a multiple of the block size
+ * if a block cipher is being used. This is
+ * also the same as the result length.
+ *
+ * @note
+ * In the case of CCM
+ * @ref RTE_CRYPTO_AUTH_AES_CCM, this value
+ * should not include the length of the padding
+ * or the length of the MAC; the driver will
+ * compute the actual number of bytes over
+ * which the encryption will occur, which will
+ * include these values.
+ *
+ * @note
+ * For SNOW 3G @ RTE_CRYPTO_AUTH_SNOW3G_UEA2,
+ * KASUMI @ RTE_CRYPTO_CIPHER_KASUMI_F8
+ * and ZUC @ RTE_CRYPTO_CIPHER_ZUC_EEA3,
+ * this field should be in bits.
+ */
+ } data; /**< Data offsets and length for ciphering */
+ } cipher;
+
+ struct {
+ struct {
+ uint32_t offset;
+ /**< Starting point for hash processing,
+ * specified as number of bytes from start of
+ * packet in source buffer.
+ *
+ * @note
+ * For CCM and GCM modes of operation,
+ * this field is ignored.
+ * The field @ref aad field should be set
+ * instead.
+ *
+ * @note
+ * For SNOW 3G @ RTE_CRYPTO_AUTH_SNOW3G_UIA2,
+ * KASUMI @ RTE_CRYPTO_AUTH_KASUMI_F9
+ * and ZUC @ RTE_CRYPTO_AUTH_ZUC_EIA3,
+ * this field should be in bits.
+ */
+ uint32_t length;
+ /**< The message length, in bytes, of the source
+ * buffer that the hash will be computed on.
+ *
+ * @note
+ * For CCM and GCM modes of operation,
+ * this field is ignored. The field @ref aad
+ * field should be set instead.
+ *
+ * @note
+ * For SNOW 3G @ RTE_CRYPTO_AUTH_SNOW3G_UIA2,
+ * KASUMI @ RTE_CRYPTO_AUTH_KASUMI_F9
+ * and ZUC @ RTE_CRYPTO_AUTH_ZUC_EIA3,
+ * this field should be in bits.
+ */
+ } data;
+ /**< Data offsets and length for authentication */
+
+ struct {
+ uint8_t *data;
+ /**< This points to the location where
+ * the digest result should be inserted
+ * (in the case of digest generation)
+ * or where the purported digest exists
+ * (in the case of digest verification).
+ *
+ * At session creation time, the client
+ * specified the digest result length with
+ * the digest_length member of the
+ * @ref rte_crypto_auth_xform structure.
+ * For physical crypto devices the caller
+ * must allocate at least digest_length of
+ * physically contiguous memory at this
+ * location.
+ *
+ * For digest generation, the digest result
+ * will overwrite any data at this location.
+ *
+ * @note
+ * For GCM (@ref RTE_CRYPTO_AUTH_AES_GCM), for
+ * "digest result" read "authentication tag T".
+ */
+ phys_addr_t phys_addr;
+ /**< Physical address of digest */
+ } digest; /**< Digest parameters */
+
+ struct {
+ uint8_t *data;
+ /**< Pointer to Additional Authenticated
+ * Data (AAD) needed for authenticated cipher
+ * mechanisms (CCM and GCM).
+ *
+ * The length of the data pointed to by this
+ * field is set up for the session in the @ref
+ * rte_crypto_auth_xform structure as part of
+ * the @ref rte_cryptodev_sym_session_create
+ * function call.
+ * This length must not exceed 65535 (2^16-1)
+ * bytes.
+ *
+ * Specifically for CCM
+ * (@ref RTE_CRYPTO_AUTH_AES_CCM),
+ * the caller should set up this field as follows:
+ *
+ * - the nonce should be written starting at
+ * an offset of one byte into the array,
+ * leaving room for the implementation to
+ * write the flags into the first byte.
+ *
+ * - the additional authentication data
+ * itself should be written starting at
+ * an offset of 18 bytes into the array,
+ * leaving room for the length encoding in
+ * the first two bytes of the second block.
+ *
+ * - the array should be big enough to hold
+ * the above fields, plus any padding to
+ * round this up to the nearest multiple of
+ * the block size (16 bytes).
+ * Padding will be added by the implementation.
+ *
+ * Finally, for GCM
+ * (@ref RTE_CRYPTO_AUTH_AES_GCM), the
+ * caller should set up this field as follows:
+ *
+ * - the AAD is written starting at byte 0
+ * - the array must be big enough to hold
+ * the AAD, plus any space to round this up to
+ * the nearest multiple of the block size
+ * (16 bytes).
+ *
+ */
+ phys_addr_t phys_addr; /**< physical address */
+ } aad;
+ /**< Additional authentication parameters */
+ } auth;
+ };
+ };
};
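To make the CCM layout documented above concrete, here is a minimal sketch
of how a caller might fill the AAD buffer; the helper name and its
parameters are illustrative assumptions, not part of this patch:

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical helper laying out the CCM AAD buffer as documented
     * above. Byte 0 (the flags byte) and bytes 16-17 (the length
     * encoding) are written by the implementation, so the caller
     * leaves them untouched. */
    static void
    fill_ccm_aad(uint8_t *aad_buf, const uint8_t *nonce, size_t nonce_len,
                 const uint8_t *aad, size_t aad_len)
    {
        memcpy(aad_buf + 1, nonce, nonce_len);  /* nonce at offset 1 */
        memcpy(aad_buf + 18, aad, aad_len);     /* AAD at offset 18 */
        /* aad_buf must be sized up to a multiple of the 16-byte block
         * size; the padding itself is added by the implementation. */
    }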
--
2.9.4
* [dpdk-dev] [PATCH v3 17/26] cryptodev: remove digest length from crypto op
` (7 preceding siblings ...)
2017-06-29 11:35 2% ` [dpdk-dev] [PATCH v3 16/26] cryptodev: remove AAD length from crypto op Pablo de Lara
@ 2017-06-29 11:35 1% ` Pablo de Lara
2017-06-29 11:35 2% ` [dpdk-dev] [PATCH v3 20/26] cryptodev: add AEAD parameters in crypto operation Pablo de Lara
2017-06-29 11:35 4% ` [dpdk-dev] [PATCH v3 26/26] cryptodev: remove AAD from authentication structure Pablo de Lara
10 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2017-06-29 11:35 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal,
hemant.agrawal, fiona.trahe, john.griffin, deepak.k.jain
Cc: dev, Pablo de Lara
The digest length was duplicated in the authentication transform
and in the crypto operation structure.
Since the digest length is not expected to change within a
session, it is removed from the crypto operation.
Also, the field has been shrunk to 16 bits,
which is sufficient for any digest.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
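As an illustration (not part of the patch itself), application-side usage
after this change looks roughly as follows; the algorithm choice, the
20-byte length and the digest_ptr/digest_phys variables are assumptions
for the example:

    struct rte_crypto_sym_xform auth_xform = {
        .type = RTE_CRYPTO_SYM_XFORM_AUTH,
        .auth = {
            .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
            .op = RTE_CRYPTO_AUTH_OP_GENERATE,
            .digest_length = 20, /* set once per session, now uint16_t */
        },
    };

    /* Per operation, only the digest location is supplied now: */
    op->sym->auth.digest.data = digest_ptr;
    op->sym->auth.digest.phys_addr = digest_phys;
    /* there is no longer an op->sym->auth.digest.length field */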
---
app/test-crypto-perf/cperf_ops.c | 7 ---
doc/guides/prog_guide/cryptodev_lib.rst | 1 -
doc/guides/rel_notes/release_17_08.rst | 3 ++
drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 34 +++++++-------
drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h | 2 +
drivers/crypto/armv8/rte_armv8_pmd.c | 9 ++--
drivers/crypto/armv8/rte_armv8_pmd_private.h | 2 +
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 34 +++++++-------
drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 1 +
drivers/crypto/kasumi/rte_kasumi_pmd.c | 18 ++++----
drivers/crypto/openssl/rte_openssl_pmd.c | 7 +--
drivers/crypto/openssl/rte_openssl_pmd_private.h | 2 +
drivers/crypto/qat/qat_adf/qat_algs.h | 1 +
drivers/crypto/qat/qat_crypto.c | 3 +-
drivers/crypto/snow3g/rte_snow3g_pmd.c | 18 ++++----
drivers/crypto/zuc/rte_zuc_pmd.c | 18 ++++----
examples/ipsec-secgw/esp.c | 2 -
examples/l2fwd-crypto/main.c | 1 -
lib/librte_cryptodev/rte_crypto_sym.h | 6 +--
test/test/test_cryptodev.c | 34 +++++---------
test/test/test_cryptodev_blockcipher.c | 5 +--
test/test/test_cryptodev_perf.c | 56 ++++++++----------------
22 files changed, 119 insertions(+), 145 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 15a4b58..bc74371 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -161,7 +161,6 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
sym_op->auth.digest.data = test_vector->digest.data;
sym_op->auth.digest.phys_addr =
test_vector->digest.phys_addr;
- sym_op->auth.digest.length = options->auth_digest_sz;
} else {
uint32_t offset = options->test_buffer_size;
@@ -184,7 +183,6 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
uint8_t *, offset);
sym_op->auth.digest.phys_addr =
rte_pktmbuf_mtophys_offset(buf, offset);
- sym_op->auth.digest.length = options->auth_digest_sz;
sym_op->auth.aad.phys_addr = test_vector->aad.phys_addr;
sym_op->auth.aad.data = test_vector->aad.data;
@@ -247,7 +245,6 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
sym_op->auth.digest.data = test_vector->digest.data;
sym_op->auth.digest.phys_addr =
test_vector->digest.phys_addr;
- sym_op->auth.digest.length = options->auth_digest_sz;
} else {
uint32_t offset = options->test_buffer_size;
@@ -270,7 +267,6 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
uint8_t *, offset);
sym_op->auth.digest.phys_addr =
rte_pktmbuf_mtophys_offset(buf, offset);
- sym_op->auth.digest.length = options->auth_digest_sz;
sym_op->auth.aad.phys_addr = test_vector->aad.phys_addr;
sym_op->auth.aad.data = test_vector->aad.data;
}
@@ -339,7 +335,6 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
sym_op->auth.digest.data = test_vector->digest.data;
sym_op->auth.digest.phys_addr =
test_vector->digest.phys_addr;
- sym_op->auth.digest.length = options->auth_digest_sz;
} else {
uint32_t offset = sym_op->cipher.data.length +
@@ -363,8 +358,6 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
uint8_t *, offset);
sym_op->auth.digest.phys_addr =
rte_pktmbuf_mtophys_offset(buf, offset);
-
- sym_op->auth.digest.length = options->auth_digest_sz;
}
sym_op->auth.data.length = options->test_buffer_size;
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index ea8fc00..e036611 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -547,7 +547,6 @@ chain.
struct {
uint8_t *data;
phys_addr_t phys_addr;
- uint16_t length;
} digest; /**< Digest parameters */
struct {
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index e633d73..a544639 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -166,6 +166,9 @@ API Changes
* Removed Additional Authentication Data (AAD) length from ``rte_crypto_sym_op``.
* Changed field size of AAD length in ``rte_crypto_auth_xform``,
from uint32_t to uint16_t.
+ * Removed digest length from ``rte_crypto_sym_op``.
+ * Changed field size of digest length in ``rte_crypto_auth_xform``,
+ from uint32_t to uint16_t.
ABI Changes
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index f6136ba..fcf0f8b 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -78,6 +78,7 @@ aesni_gcm_set_session_parameters(struct aesni_gcm_session *sess,
{
const struct rte_crypto_sym_xform *auth_xform;
const struct rte_crypto_sym_xform *cipher_xform;
+ uint16_t digest_length;
if (xform->next == NULL || xform->next->next != NULL) {
GCM_LOG_ERR("Two and only two chained xform required");
@@ -128,6 +129,8 @@ aesni_gcm_set_session_parameters(struct aesni_gcm_session *sess,
return -EINVAL;
}
+ digest_length = auth_xform->auth.digest_length;
+
/* Check key length and calculate GCM pre-compute. */
switch (cipher_xform->cipher.key.length) {
case 16:
@@ -146,6 +149,14 @@ aesni_gcm_set_session_parameters(struct aesni_gcm_session *sess,
}
sess->aad_length = auth_xform->auth.add_auth_data_length;
+ /* Digest check */
+ if (digest_length != 16 &&
+ digest_length != 12 &&
+ digest_length != 8) {
+ GCM_LOG_ERR("digest");
+ return -EINVAL;
+ }
+ sess->digest_length = digest_length;
return 0;
}
@@ -245,13 +256,6 @@ process_gcm_crypto_op(struct rte_crypto_op *op,
*iv_padd = rte_bswap32(1);
}
- if (sym_op->auth.digest.length != 16 &&
- sym_op->auth.digest.length != 12 &&
- sym_op->auth.digest.length != 8) {
- GCM_LOG_ERR("digest");
- return -1;
- }
-
if (session->op == AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION) {
aesni_gcm_enc[session->key].init(&session->gdata,
@@ -281,11 +285,11 @@ process_gcm_crypto_op(struct rte_crypto_op *op,
aesni_gcm_enc[session->key].finalize(&session->gdata,
sym_op->auth.digest.data,
- (uint64_t)sym_op->auth.digest.length);
+ (uint64_t)session->digest_length);
} else { /* session->op == AESNI_GCM_OP_AUTHENTICATED_DECRYPTION */
uint8_t *auth_tag = (uint8_t *)rte_pktmbuf_append(sym_op->m_dst ?
sym_op->m_dst : sym_op->m_src,
- sym_op->auth.digest.length);
+ session->digest_length);
if (!auth_tag) {
GCM_LOG_ERR("auth_tag");
@@ -319,7 +323,7 @@ process_gcm_crypto_op(struct rte_crypto_op *op,
aesni_gcm_dec[session->key].finalize(&session->gdata,
auth_tag,
- (uint64_t)sym_op->auth.digest.length);
+ (uint64_t)session->digest_length);
}
return 0;
@@ -349,21 +353,21 @@ post_process_gcm_crypto_op(struct rte_crypto_op *op)
if (session->op == AESNI_GCM_OP_AUTHENTICATED_DECRYPTION) {
uint8_t *tag = rte_pktmbuf_mtod_offset(m, uint8_t *,
- m->data_len - op->sym->auth.digest.length);
+ m->data_len - session->digest_length);
#ifdef RTE_LIBRTE_PMD_AESNI_GCM_DEBUG
rte_hexdump(stdout, "auth tag (orig):",
- op->sym->auth.digest.data, op->sym->auth.digest.length);
+ op->sym->auth.digest.data, session->digest_length);
rte_hexdump(stdout, "auth tag (calc):",
- tag, op->sym->auth.digest.length);
+ tag, session->digest_length);
#endif
if (memcmp(tag, op->sym->auth.digest.data,
- op->sym->auth.digest.length) != 0)
+ session->digest_length) != 0)
op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
/* trim area used for digest from mbuf */
- rte_pktmbuf_trim(m, op->sym->auth.digest.length);
+ rte_pktmbuf_trim(m, session->digest_length);
}
}
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
index bfd4d1c..05fabe6 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
@@ -95,6 +95,8 @@ struct aesni_gcm_session {
uint16_t offset;
} iv;
/**< IV parameters */
+ uint16_t digest_length;
+ /**< Digest length */
enum aesni_gcm_operation op;
/**< GCM operation type */
enum aesni_gcm_key key;
diff --git a/drivers/crypto/armv8/rte_armv8_pmd.c b/drivers/crypto/armv8/rte_armv8_pmd.c
index dac4fc3..4a23ff1 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd.c
@@ -452,6 +452,9 @@ armv8_crypto_set_session_chained_parameters(struct armv8_crypto_session *sess,
return -EINVAL;
}
+ /* Set the digest length */
+ sess->auth.digest_length = auth_xform->auth.digest_length;
+
/* Verify supported key lengths and extract proper algorithm */
switch (cipher_xform->cipher.key.length << 3) {
case 128:
@@ -649,7 +652,7 @@ process_armv8_chained_op
}
} else {
adst = (uint8_t *)rte_pktmbuf_append(m_asrc,
- op->sym->auth.digest.length);
+ sess->auth.digest_length);
}
arg.cipher.iv = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -667,12 +670,12 @@ process_armv8_chained_op
op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
if (sess->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) {
if (memcmp(adst, op->sym->auth.digest.data,
- op->sym->auth.digest.length) != 0) {
+ sess->auth.digest_length) != 0) {
op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
}
/* Trim area used for digest from mbuf. */
rte_pktmbuf_trim(m_asrc,
- op->sym->auth.digest.length);
+ sess->auth.digest_length);
}
}
diff --git a/drivers/crypto/armv8/rte_armv8_pmd_private.h b/drivers/crypto/armv8/rte_armv8_pmd_private.h
index 75bde9f..09d32f2 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd_private.h
+++ b/drivers/crypto/armv8/rte_armv8_pmd_private.h
@@ -199,6 +199,8 @@ struct armv8_crypto_session {
/**< HMAC key (max supported length)*/
} hmac;
};
+ uint16_t digest_length;
+ /* Digest length */
} auth;
} __rte_cache_aligned;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 3930794..8ee6ece 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -84,7 +84,7 @@ build_authenc_fd(dpaa2_sec_session *sess,
struct sec_flow_context *flc;
uint32_t auth_only_len = sym_op->auth.data.length -
sym_op->cipher.data.length;
- int icv_len = sym_op->auth.digest.length;
+ int icv_len = sess->digest_length;
uint8_t *old_icv;
uint32_t mem_len = (7 * sizeof(struct qbman_fle)) + icv_len;
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -135,7 +135,7 @@ build_authenc_fd(dpaa2_sec_session *sess,
"cipher_off: 0x%x/length %d, iv-len=%d data_off: 0x%x\n",
sym_op->auth.data.offset,
sym_op->auth.data.length,
- sym_op->auth.digest.length,
+ sess->digest_length,
sym_op->cipher.data.offset,
sym_op->cipher.data.length,
sess->iv.length,
@@ -161,7 +161,7 @@ build_authenc_fd(dpaa2_sec_session *sess,
sge++;
DPAA2_SET_FLE_ADDR(sge,
DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
- sge->length = sym_op->auth.digest.length;
+ sge->length = sess->digest_length;
DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
sess->iv.length));
}
@@ -177,7 +177,7 @@ build_authenc_fd(dpaa2_sec_session *sess,
fle->length = (sess->dir == DIR_ENC) ?
(sym_op->auth.data.length + sess->iv.length) :
(sym_op->auth.data.length + sess->iv.length +
- sym_op->auth.digest.length);
+ sess->digest_length);
/* Configure Input SGE for Encap/Decap */
DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(iv_ptr));
@@ -192,12 +192,12 @@ build_authenc_fd(dpaa2_sec_session *sess,
sge++;
old_icv = (uint8_t *)(sge + 1);
memcpy(old_icv, sym_op->auth.digest.data,
- sym_op->auth.digest.length);
- memset(sym_op->auth.digest.data, 0, sym_op->auth.digest.length);
+ sess->digest_length);
+ memset(sym_op->auth.digest.data, 0, sess->digest_length);
DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_icv));
- sge->length = sym_op->auth.digest.length;
+ sge->length = sess->digest_length;
DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
- sym_op->auth.digest.length +
+ sess->digest_length +
sess->iv.length));
}
DPAA2_SET_FLE_FIN(sge);
@@ -217,7 +217,7 @@ build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
uint32_t mem_len = (sess->dir == DIR_ENC) ?
(3 * sizeof(struct qbman_fle)) :
(5 * sizeof(struct qbman_fle) +
- sym_op->auth.digest.length);
+ sess->digest_length);
struct sec_flow_context *flc;
struct ctxt_priv *priv = sess->ctxt;
uint8_t *old_digest;
@@ -251,7 +251,7 @@ build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
- fle->length = sym_op->auth.digest.length;
+ fle->length = sess->digest_length;
DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
DPAA2_SET_FD_COMPOUND_FMT(fd);
@@ -282,17 +282,17 @@ build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
sym_op->m_src->data_off);
DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length +
- sym_op->auth.digest.length);
+ sess->digest_length);
sge->length = sym_op->auth.data.length;
sge++;
old_digest = (uint8_t *)(sge + 1);
rte_memcpy(old_digest, sym_op->auth.digest.data,
- sym_op->auth.digest.length);
- memset(sym_op->auth.digest.data, 0, sym_op->auth.digest.length);
+ sess->digest_length);
+ memset(sym_op->auth.digest.data, 0, sess->digest_length);
DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_digest));
- sge->length = sym_op->auth.digest.length;
+ sge->length = sess->digest_length;
fle->length = sym_op->auth.data.length +
- sym_op->auth.digest.length;
+ sess->digest_length;
DPAA2_SET_FLE_FIN(sge);
}
DPAA2_SET_FLE_FIN(fle);
@@ -912,6 +912,8 @@ dpaa2_sec_auth_init(struct rte_cryptodev *dev,
authdata.key_enc_flags = 0;
authdata.key_type = RTA_DATA_IMM;
+ session->digest_length = xform->auth.digest_length;
+
switch (xform->auth.algo) {
case RTE_CRYPTO_AUTH_SHA1_HMAC:
authdata.algtype = OP_ALG_ALGSEL_SHA1;
@@ -1064,6 +1066,8 @@ dpaa2_sec_aead_init(struct rte_cryptodev *dev,
authdata.key_enc_flags = 0;
authdata.key_type = RTA_DATA_IMM;
+ session->digest_length = auth_xform->digest_length;
+
switch (auth_xform->algo) {
case RTE_CRYPTO_AUTH_SHA1_HMAC:
authdata.algtype = OP_ALG_ALGSEL_SHA1;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index ff3be70..eda2eec 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -191,6 +191,7 @@ typedef struct dpaa2_sec_session_entry {
uint16_t length; /**< IV length in bytes */
uint16_t offset; /**< IV offset in bytes */
} iv;
+ uint16_t digest_length;
uint8_t status;
union {
struct dpaa2_sec_cipher_ctxt cipher_ctxt;
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd.c b/drivers/crypto/kasumi/rte_kasumi_pmd.c
index 3a3ffa4..6ece58c 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd.c
@@ -132,6 +132,12 @@ kasumi_set_session_parameters(struct kasumi_session *sess,
/* Only KASUMI F9 supported */
if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_KASUMI_F9)
return -EINVAL;
+
+ if (auth_xform->auth.digest_length != KASUMI_DIGEST_LENGTH) {
+ KASUMI_LOG_ERR("Wrong digest length");
+ return -EINVAL;
+ }
+
sess->auth_op = auth_xform->auth.op;
sess->auth_iv_offset = auth_xform->auth.iv.offset;
@@ -261,12 +267,6 @@ process_kasumi_hash_op(struct rte_crypto_op **ops,
uint8_t direction;
for (i = 0; i < num_ops; i++) {
- if (unlikely(ops[i]->sym->auth.digest.length != KASUMI_DIGEST_LENGTH)) {
- ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- KASUMI_LOG_ERR("digest");
- break;
- }
-
/* Data must be byte aligned */
if ((ops[i]->sym->auth.data.offset % BYTE_LEN) != 0) {
ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
@@ -288,19 +288,19 @@ process_kasumi_hash_op(struct rte_crypto_op **ops,
if (session->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
dst = (uint8_t *)rte_pktmbuf_append(ops[i]->sym->m_src,
- ops[i]->sym->auth.digest.length);
+ KASUMI_DIGEST_LENGTH);
sso_kasumi_f9_1_buffer_user(&session->pKeySched_hash,
iv, src,
length_in_bits, dst, direction);
/* Verify digest. */
if (memcmp(dst, ops[i]->sym->auth.digest.data,
- ops[i]->sym->auth.digest.length) != 0)
+ KASUMI_DIGEST_LENGTH) != 0)
ops[i]->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
/* Trim area used for digest from mbuf. */
rte_pktmbuf_trim(ops[i]->sym->m_src,
- ops[i]->sym->auth.digest.length);
+ KASUMI_DIGEST_LENGTH);
} else {
dst = ops[i]->sym->auth.digest.data;
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 9de4c68..46b1dd8 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -371,6 +371,7 @@ openssl_set_session_auth_parameters(struct openssl_session *sess,
}
sess->auth.aad_length = xform->auth.add_auth_data_length;
+ sess->auth.digest_length = xform->auth.digest_length;
return 0;
}
@@ -1130,7 +1131,7 @@ process_openssl_auth_op
if (sess->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY)
dst = (uint8_t *)rte_pktmbuf_append(mbuf_src,
- op->sym->auth.digest.length);
+ sess->auth.digest_length);
else {
dst = op->sym->auth.digest.data;
if (dst == NULL)
@@ -1158,11 +1159,11 @@ process_openssl_auth_op
if (sess->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) {
if (memcmp(dst, op->sym->auth.digest.data,
- op->sym->auth.digest.length) != 0) {
+ sess->auth.digest_length) != 0) {
op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
}
/* Trim area used for digest from mbuf. */
- rte_pktmbuf_trim(mbuf_src, op->sym->auth.digest.length);
+ rte_pktmbuf_trim(mbuf_src, sess->auth.digest_length);
}
if (status != 0)
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_private.h b/drivers/crypto/openssl/rte_openssl_pmd_private.h
index 045e532..4c9be05 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_private.h
+++ b/drivers/crypto/openssl/rte_openssl_pmd_private.h
@@ -165,6 +165,8 @@ struct openssl_session {
uint16_t aad_length;
/**< AAD length */
+ uint16_t digest_length;
+ /**< digest length */
} auth;
} __rte_cache_aligned;
diff --git a/drivers/crypto/qat/qat_adf/qat_algs.h b/drivers/crypto/qat/qat_adf/qat_algs.h
index f70c6cb..b13d90b 100644
--- a/drivers/crypto/qat/qat_adf/qat_algs.h
+++ b/drivers/crypto/qat/qat_adf/qat_algs.h
@@ -135,6 +135,7 @@ struct qat_session {
uint16_t offset;
uint16_t length;
} auth_iv;
+ uint16_t digest_length;
rte_spinlock_t lock; /* protects this struct */
};
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
index aada9dd..b365c8d 100644
--- a/drivers/crypto/qat/qat_crypto.c
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -606,6 +606,7 @@ qat_crypto_sym_configure_session_auth(struct rte_cryptodev *dev,
auth_xform->op))
goto error_out;
}
+ session->digest_length = auth_xform->digest_length;
return session;
error_out:
@@ -1200,7 +1201,7 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
ctx->auth_iv.length);
}
rte_hexdump(stdout, "digest:", op->sym->auth.digest.data,
- op->sym->auth.digest.length);
+ ctx->digest_length);
rte_hexdump(stdout, "aad:", op->sym->auth.aad.data,
ctx->aad_len);
}
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd.c b/drivers/crypto/snow3g/rte_snow3g_pmd.c
index afb5e92..fbdccd1 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd.c
@@ -132,6 +132,12 @@ snow3g_set_session_parameters(struct snow3g_session *sess,
/* Only SNOW 3G UIA2 supported */
if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_SNOW3G_UIA2)
return -EINVAL;
+
+ if (auth_xform->auth.digest_length != SNOW3G_DIGEST_LENGTH) {
+ SNOW3G_LOG_ERR("Wrong digest length");
+ return -EINVAL;
+ }
+
sess->auth_op = auth_xform->auth.op;
if (auth_xform->auth.iv.length != SNOW3G_IV_LENGTH) {
@@ -252,12 +258,6 @@ process_snow3g_hash_op(struct rte_crypto_op **ops,
uint8_t *iv;
for (i = 0; i < num_ops; i++) {
- if (unlikely(ops[i]->sym->auth.digest.length != SNOW3G_DIGEST_LENGTH)) {
- ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- SNOW3G_LOG_ERR("digest");
- break;
- }
-
/* Data must be byte aligned */
if ((ops[i]->sym->auth.data.offset % BYTE_LEN) != 0) {
ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
@@ -274,19 +274,19 @@ process_snow3g_hash_op(struct rte_crypto_op **ops,
if (session->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
dst = (uint8_t *)rte_pktmbuf_append(ops[i]->sym->m_src,
- ops[i]->sym->auth.digest.length);
+ SNOW3G_DIGEST_LENGTH);
sso_snow3g_f9_1_buffer(&session->pKeySched_hash,
iv, src,
length_in_bits, dst);
/* Verify digest. */
if (memcmp(dst, ops[i]->sym->auth.digest.data,
- ops[i]->sym->auth.digest.length) != 0)
+ SNOW3G_DIGEST_LENGTH) != 0)
ops[i]->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
/* Trim area used for digest from mbuf. */
rte_pktmbuf_trim(ops[i]->sym->m_src,
- ops[i]->sym->auth.digest.length);
+ SNOW3G_DIGEST_LENGTH);
} else {
dst = ops[i]->sym->auth.digest.data;
diff --git a/drivers/crypto/zuc/rte_zuc_pmd.c b/drivers/crypto/zuc/rte_zuc_pmd.c
index c79ea6e..80ddd5a 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd.c
@@ -131,6 +131,12 @@ zuc_set_session_parameters(struct zuc_session *sess,
/* Only ZUC EIA3 supported */
if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_ZUC_EIA3)
return -EINVAL;
+
+ if (auth_xform->auth.digest_length != ZUC_DIGEST_LENGTH) {
+ ZUC_LOG_ERR("Wrong digest length");
+ return -EINVAL;
+ }
+
sess->auth_op = auth_xform->auth.op;
if (auth_xform->auth.iv.length != ZUC_IV_KEY_LENGTH) {
@@ -249,12 +255,6 @@ process_zuc_hash_op(struct rte_crypto_op **ops,
uint8_t *iv;
for (i = 0; i < num_ops; i++) {
- if (unlikely(ops[i]->sym->auth.digest.length != ZUC_DIGEST_LENGTH)) {
- ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- ZUC_LOG_ERR("digest");
- break;
- }
-
/* Data must be byte aligned */
if ((ops[i]->sym->auth.data.offset % BYTE_LEN) != 0) {
ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
@@ -271,19 +271,19 @@ process_zuc_hash_op(struct rte_crypto_op **ops,
if (session->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
dst = (uint32_t *)rte_pktmbuf_append(ops[i]->sym->m_src,
- ops[i]->sym->auth.digest.length);
+ ZUC_DIGEST_LENGTH);
sso_zuc_eia3_1_buffer(session->pKey_hash,
iv, src,
length_in_bits, dst);
/* Verify digest. */
if (memcmp(dst, ops[i]->sym->auth.digest.data,
- ops[i]->sym->auth.digest.length) != 0)
+ ZUC_DIGEST_LENGTH) != 0)
ops[i]->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
/* Trim area used for digest from mbuf. */
rte_pktmbuf_trim(ops[i]->sym->m_src,
- ops[i]->sym->auth.digest.length);
+ ZUC_DIGEST_LENGTH);
} else {
dst = (uint32_t *)ops[i]->sym->auth.digest.data;
diff --git a/examples/ipsec-secgw/esp.c b/examples/ipsec-secgw/esp.c
index 571c2c6..d544a3c 100644
--- a/examples/ipsec-secgw/esp.c
+++ b/examples/ipsec-secgw/esp.c
@@ -140,7 +140,6 @@ esp_inbound(struct rte_mbuf *m, struct ipsec_sa *sa,
rte_pktmbuf_pkt_len(m) - sa->digest_len);
sym_cop->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(m,
rte_pktmbuf_pkt_len(m) - sa->digest_len);
- sym_cop->auth.digest.length = sa->digest_len;
return 0;
}
@@ -368,7 +367,6 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
rte_pktmbuf_pkt_len(m) - sa->digest_len);
sym_cop->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(m,
rte_pktmbuf_pkt_len(m) - sa->digest_len);
- sym_cop->auth.digest.length = sa->digest_len;
return 0;
}
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 6fe829e..6d88937 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -481,7 +481,6 @@ l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
op->sym->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(m,
rte_pktmbuf_pkt_len(m) - cparams->digest_length);
- op->sym->auth.digest.length = cparams->digest_length;
/* For wireless algorithms, offset/length must be in bits */
if (cparams->auth_algo == RTE_CRYPTO_AUTH_SNOW3G_UIA2 ||
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index b964a56..de4031a 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -354,7 +354,7 @@ struct rte_crypto_auth_xform {
* (for example RFC 2104, FIPS 198a).
*/
- uint32_t digest_length;
+ uint16_t digest_length;
/**< Length of the digest to be returned. If the verify option is set,
* this specifies the length of the digest to be compared for the
* session.
@@ -604,10 +604,6 @@ struct rte_crypto_sym_op {
*/
phys_addr_t phys_addr;
/**< Physical address of digest */
- uint16_t length;
- /**< Length of digest. This must be the same value as
- * @ref rte_crypto_auth_xform.digest_length.
- */
} digest; /**< Digest parameters */
struct {
diff --git a/test/test/test_cryptodev.c b/test/test/test_cryptodev.c
index 7acfa24..4698f26 100644
--- a/test/test/test_cryptodev.c
+++ b/test/test/test_cryptodev.c
@@ -1307,7 +1307,6 @@ test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
sym_op->auth.digest.data = ut_params->digest;
sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
ut_params->ibuf, QUOTE_512_BYTES);
- sym_op->auth.digest.length = DIGEST_BYTE_LENGTH_SHA1;
sym_op->auth.data.offset = 0;
sym_op->auth.data.length = QUOTE_512_BYTES;
@@ -1459,7 +1458,6 @@ test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_sym_session *sess,
sym_op->auth.digest.data = ut_params->digest;
sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
ut_params->ibuf, QUOTE_512_BYTES);
- sym_op->auth.digest.length = DIGEST_BYTE_LENGTH_SHA512;
sym_op->auth.data.offset = 0;
sym_op->auth.data.length = QUOTE_512_BYTES;
@@ -2102,7 +2100,6 @@ create_wireless_algo_hash_operation(const uint8_t *auth_tag,
ut_params->digest = sym_op->auth.digest.data;
sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
ut_params->ibuf, data_pad_len);
- sym_op->auth.digest.length = auth_tag_len;
if (op == RTE_CRYPTO_AUTH_OP_GENERATE)
memset(sym_op->auth.digest.data, 0, auth_tag_len);
else
@@ -2110,7 +2107,7 @@ create_wireless_algo_hash_operation(const uint8_t *auth_tag,
TEST_HEXDUMP(stdout, "digest:",
sym_op->auth.digest.data,
- sym_op->auth.digest.length);
+ auth_tag_len);
sym_op->auth.data.length = auth_len;
sym_op->auth.data.offset = auth_offset;
@@ -2159,7 +2156,6 @@ create_wireless_cipher_hash_operation(const struct wireless_test_data *tdata,
ut_params->digest = sym_op->auth.digest.data;
sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
ut_params->ibuf, data_pad_len);
- sym_op->auth.digest.length = auth_tag_len;
if (op == RTE_CRYPTO_AUTH_OP_GENERATE)
memset(sym_op->auth.digest.data, 0, auth_tag_len);
else
@@ -2167,7 +2163,7 @@ create_wireless_cipher_hash_operation(const struct wireless_test_data *tdata,
TEST_HEXDUMP(stdout, "digest:",
sym_op->auth.digest.data,
- sym_op->auth.digest.length);
+ auth_tag_len);
/* Copy cipher and auth IVs at the end of the crypto operation */
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ut_params->op, uint8_t *,
@@ -2227,7 +2223,6 @@ create_wireless_algo_cipher_hash_operation(const uint8_t *auth_tag,
ut_params->digest = sym_op->auth.digest.data;
sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
ut_params->ibuf, data_pad_len);
- sym_op->auth.digest.length = auth_tag_len;
if (op == RTE_CRYPTO_AUTH_OP_GENERATE)
memset(sym_op->auth.digest.data, 0, auth_tag_len);
else
@@ -2235,7 +2230,7 @@ create_wireless_algo_cipher_hash_operation(const uint8_t *auth_tag,
TEST_HEXDUMP(stdout, "digest:",
sym_op->auth.digest.data,
- sym_op->auth.digest.length);
+ auth_tag_len);
/* Copy cipher and auth IVs at the end of the crypto operation */
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ut_params->op, uint8_t *,
@@ -2286,13 +2281,12 @@ create_wireless_algo_auth_cipher_operation(unsigned int auth_tag_len,
sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
ut_params->ibuf, data_pad_len);
- sym_op->auth.digest.length = auth_tag_len;
memset(sym_op->auth.digest.data, 0, auth_tag_len);
TEST_HEXDUMP(stdout, "digest:",
sym_op->auth.digest.data,
- sym_op->auth.digest.length);
+ auth_tag_len);
/* Copy cipher and auth IVs at the end of the crypto operation */
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ut_params->op, uint8_t *,
@@ -4824,7 +4818,6 @@ create_gcm_operation(enum rte_crypto_cipher_operation op,
ut_params->ibuf,
plaintext_pad_len +
aad_pad_len);
- sym_op->auth.digest.length = tdata->auth_tag.len;
} else {
sym_op->auth.digest.data = (uint8_t *)rte_pktmbuf_append(
ut_params->ibuf, tdata->auth_tag.len);
@@ -4833,13 +4826,12 @@ create_gcm_operation(enum rte_crypto_cipher_operation op,
sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
ut_params->ibuf,
plaintext_pad_len + aad_pad_len);
- sym_op->auth.digest.length = tdata->auth_tag.len;
rte_memcpy(sym_op->auth.digest.data, tdata->auth_tag.data,
tdata->auth_tag.len);
TEST_HEXDUMP(stdout, "digest:",
sym_op->auth.digest.data,
- sym_op->auth.digest.length);
+ tdata->auth_tag.len);
}
sym_op->cipher.data.length = tdata->plaintext.len;
@@ -5614,7 +5606,6 @@ static int MD5_HMAC_create_op(struct crypto_unittest_params *ut_params,
"no room to append digest");
sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
ut_params->ibuf, plaintext_pad_len);
- sym_op->auth.digest.length = MD5_DIGEST_LEN;
if (ut_params->auth_xform.auth.op == RTE_CRYPTO_AUTH_OP_VERIFY) {
rte_memcpy(sym_op->auth.digest.data, test_case->auth_tag.data,
@@ -6325,14 +6316,13 @@ create_gmac_operation(enum rte_crypto_auth_operation op,
sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
ut_params->ibuf, aad_pad_len);
- sym_op->auth.digest.length = tdata->gmac_tag.len;
if (op == RTE_CRYPTO_AUTH_OP_VERIFY) {
rte_memcpy(sym_op->auth.digest.data, tdata->gmac_tag.data,
tdata->gmac_tag.len);
TEST_HEXDUMP(stdout, "digest:",
sym_op->auth.digest.data,
- sym_op->auth.digest.length);
+ tdata->gmac_tag.len);
}
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ut_params->op,
@@ -6810,7 +6800,6 @@ create_auth_operation(struct crypto_testsuite_params *ts_params,
sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
ut_params->ibuf, reference->plaintext.len);
- sym_op->auth.digest.length = reference->digest.len;
if (auth_generate)
memset(sym_op->auth.digest.data, 0, reference->digest.len);
@@ -6821,7 +6810,7 @@ create_auth_operation(struct crypto_testsuite_params *ts_params,
TEST_HEXDUMP(stdout, "digest:",
sym_op->auth.digest.data,
- sym_op->auth.digest.length);
+ reference->digest.len);
sym_op->auth.data.length = reference->plaintext.len;
sym_op->auth.data.offset = 0;
@@ -6868,7 +6857,6 @@ create_auth_GMAC_operation(struct crypto_testsuite_params *ts_params,
sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
ut_params->ibuf, reference->ciphertext.len);
- sym_op->auth.digest.length = reference->digest.len;
if (auth_generate)
memset(sym_op->auth.digest.data, 0, reference->digest.len);
@@ -6879,7 +6867,7 @@ create_auth_GMAC_operation(struct crypto_testsuite_params *ts_params,
TEST_HEXDUMP(stdout, "digest:",
sym_op->auth.digest.data,
- sym_op->auth.digest.length);
+ reference->digest.len);
rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
reference->iv.data, reference->iv.len);
@@ -6922,7 +6910,6 @@ create_cipher_auth_operation(struct crypto_testsuite_params *ts_params,
sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
ut_params->ibuf, reference->ciphertext.len);
- sym_op->auth.digest.length = reference->digest.len;
if (auth_generate)
memset(sym_op->auth.digest.data, 0, reference->digest.len);
@@ -6933,7 +6920,7 @@ create_cipher_auth_operation(struct crypto_testsuite_params *ts_params,
TEST_HEXDUMP(stdout, "digest:",
sym_op->auth.digest.data,
- sym_op->auth.digest.length);
+ reference->digest.len);
rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
reference->iv.data, reference->iv.len);
@@ -7170,14 +7157,13 @@ create_gcm_operation_SGL(enum rte_crypto_cipher_operation op,
"no room to append digest");
sym_op->auth.digest.phys_addr = digest_phys;
- sym_op->auth.digest.length = auth_tag_len;
if (op == RTE_CRYPTO_CIPHER_OP_DECRYPT) {
rte_memcpy(sym_op->auth.digest.data, tdata->auth_tag.data,
auth_tag_len);
TEST_HEXDUMP(stdout, "digest:",
sym_op->auth.digest.data,
- sym_op->auth.digest.length);
+ auth_tag_len);
}
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ut_params->op,
diff --git a/test/test/test_cryptodev_blockcipher.c b/test/test/test_cryptodev_blockcipher.c
index 9faf088..446ab4f 100644
--- a/test/test/test_cryptodev_blockcipher.c
+++ b/test/test/test_cryptodev_blockcipher.c
@@ -324,7 +324,6 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
sym_op->auth.data.offset = 0;
sym_op->auth.data.length = tdata->ciphertext.len;
- sym_op->auth.digest.length = digest_len;
}
/* create session for sessioned op */
@@ -474,7 +473,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
sym_op->auth.data.offset;
changed_len = sym_op->auth.data.length;
if (t->op_mask & BLOCKCIPHER_TEST_OP_AUTH_GEN)
- changed_len += sym_op->auth.digest.length;
+ changed_len += digest_len;
} else {
/* cipher-only */
head_unchanged_len = rte_pktmbuf_headroom(mbuf) +
@@ -516,7 +515,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
}
if (t->op_mask & BLOCKCIPHER_TEST_OP_AUTH_GEN)
- changed_len += sym_op->auth.digest.length;
+ changed_len += digest_len;
if (t->op_mask & BLOCKCIPHER_TEST_OP_AUTH_VERIFY) {
/* white-box test: PMDs use some of the
diff --git a/test/test/test_cryptodev_perf.c b/test/test/test_cryptodev_perf.c
index 7239976..3bd9351 100644
--- a/test/test/test_cryptodev_perf.c
+++ b/test/test/test_cryptodev_perf.c
@@ -168,20 +168,19 @@ static struct rte_mbuf *
test_perf_create_pktmbuf(struct rte_mempool *mpool, unsigned buf_sz);
static inline struct rte_crypto_op *
test_perf_set_crypto_op_snow3g(struct rte_crypto_op *op, struct rte_mbuf *m,
- struct rte_cryptodev_sym_session *sess, unsigned data_len,
- unsigned digest_len);
+ struct rte_cryptodev_sym_session *sess, unsigned int data_len);
static inline struct rte_crypto_op *
test_perf_set_crypto_op_aes(struct rte_crypto_op *op, struct rte_mbuf *m,
struct rte_cryptodev_sym_session *sess, unsigned int data_len,
- unsigned int digest_len, enum chain_mode chain);
+ enum chain_mode chain);
static inline struct rte_crypto_op *
test_perf_set_crypto_op_aes_gcm(struct rte_crypto_op *op, struct rte_mbuf *m,
struct rte_cryptodev_sym_session *sess, unsigned int data_len,
- unsigned int digest_len, enum chain_mode chain __rte_unused);
+ enum chain_mode chain __rte_unused);
static inline struct rte_crypto_op *
test_perf_set_crypto_op_3des(struct rte_crypto_op *op, struct rte_mbuf *m,
struct rte_cryptodev_sym_session *sess, unsigned int data_len,
- unsigned int digest_len, enum chain_mode chain __rte_unused);
+ enum chain_mode chain __rte_unused);
static uint32_t get_auth_digest_length(enum rte_crypto_auth_algorithm algo);
@@ -1979,7 +1978,6 @@ test_perf_crypto_qp_vary_burst_size(uint16_t dev_num)
op->sym->auth.digest.data = ut_params->digest;
op->sym->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(m,
data_params[0].length);
- op->sym->auth.digest.length = DIGEST_BYTE_LENGTH_SHA256;
op->sym->auth.data.offset = 0;
op->sym->auth.data.length = data_params[0].length;
@@ -2102,8 +2100,7 @@ test_perf_snow3G_optimise_cyclecount(struct perf_test_params *pparams)
RTE_CRYPTO_OP_TYPE_SYMMETRIC);
TEST_ASSERT_NOT_NULL(op, "Failed to allocate op");
- op = test_perf_set_crypto_op_snow3g(op, m, sess, pparams->buf_size,
- get_auth_digest_length(pparams->auth_algo));
+ op = test_perf_set_crypto_op_snow3g(op, m, sess, pparams->buf_size);
TEST_ASSERT_NOT_NULL(op, "Failed to attach op to session");
c_ops[i] = op;
@@ -2252,11 +2249,9 @@ test_perf_openssl_optimise_cyclecount(struct perf_test_params *pparams)
static struct rte_crypto_op *(*test_perf_set_crypto_op)
(struct rte_crypto_op *, struct rte_mbuf *,
struct rte_cryptodev_sym_session *,
- unsigned int, unsigned int,
+ unsigned int,
enum chain_mode);
- unsigned int digest_length = get_auth_digest_length(pparams->auth_algo);
-
if (rte_cryptodev_count() == 0) {
printf("\nNo crypto devices found. Is PMD build configured?\n");
return TEST_FAILED;
@@ -2298,7 +2293,7 @@ test_perf_openssl_optimise_cyclecount(struct perf_test_params *pparams)
}
op = test_perf_set_crypto_op(op, m, sess, pparams->buf_size,
- digest_length, pparams->chain);
+ pparams->chain);
TEST_ASSERT_NOT_NULL(op, "Failed to attach op to session");
c_ops[i] = op;
@@ -2407,8 +2402,6 @@ test_perf_armv8_optimise_cyclecount(struct perf_test_params *pparams)
static struct rte_cryptodev_sym_session *sess;
- unsigned int digest_length = get_auth_digest_length(pparams->auth_algo);
-
if (rte_cryptodev_count() == 0) {
printf("\nNo crypto devices found. Is PMD build configured?\n");
return TEST_FAILED;
@@ -2433,7 +2426,7 @@ test_perf_armv8_optimise_cyclecount(struct perf_test_params *pparams)
TEST_ASSERT_NOT_NULL(op, "Failed to allocate op");
op = test_perf_set_crypto_op_aes(op, m, sess, pparams->buf_size,
- digest_length, pparams->chain);
+ pparams->chain);
TEST_ASSERT_NOT_NULL(op, "Failed to attach op to session");
c_ops[i] = op;
@@ -2875,7 +2868,7 @@ test_perf_create_pktmbuf(struct rte_mempool *mpool, unsigned buf_sz)
static inline struct rte_crypto_op *
test_perf_set_crypto_op_aes(struct rte_crypto_op *op, struct rte_mbuf *m,
struct rte_cryptodev_sym_session *sess, unsigned int data_len,
- unsigned int digest_len, enum chain_mode chain)
+ enum chain_mode chain)
{
if (rte_crypto_op_attach_sym_session(op, sess) != 0) {
rte_crypto_op_free(op);
@@ -2886,7 +2879,6 @@ test_perf_set_crypto_op_aes(struct rte_crypto_op *op, struct rte_mbuf *m,
if (chain == CIPHER_ONLY) {
op->sym->auth.digest.data = NULL;
op->sym->auth.digest.phys_addr = 0;
- op->sym->auth.digest.length = 0;
op->sym->auth.aad.data = NULL;
op->sym->auth.data.offset = 0;
op->sym->auth.data.length = 0;
@@ -2895,7 +2887,6 @@ test_perf_set_crypto_op_aes(struct rte_crypto_op *op, struct rte_mbuf *m,
uint8_t *, data_len);
op->sym->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(m,
data_len);
- op->sym->auth.digest.length = digest_len;
op->sym->auth.data.offset = 0;
op->sym->auth.data.length = data_len;
}
@@ -2917,7 +2908,7 @@ test_perf_set_crypto_op_aes(struct rte_crypto_op *op, struct rte_mbuf *m,
static inline struct rte_crypto_op *
test_perf_set_crypto_op_aes_gcm(struct rte_crypto_op *op, struct rte_mbuf *m,
struct rte_cryptodev_sym_session *sess, unsigned int data_len,
- unsigned int digest_len, enum chain_mode chain __rte_unused)
+ enum chain_mode chain __rte_unused)
{
if (rte_crypto_op_attach_sym_session(op, sess) != 0) {
rte_crypto_op_free(op);
@@ -2929,7 +2920,6 @@ test_perf_set_crypto_op_aes_gcm(struct rte_crypto_op *op, struct rte_mbuf *m,
(m->data_off + data_len);
op->sym->auth.digest.phys_addr =
rte_pktmbuf_mtophys_offset(m, data_len);
- op->sym->auth.digest.length = digest_len;
op->sym->auth.aad.data = aes_gcm_aad;
/* Copy IV at the end of the crypto operation */
@@ -2950,8 +2940,7 @@ test_perf_set_crypto_op_aes_gcm(struct rte_crypto_op *op, struct rte_mbuf *m,
static inline struct rte_crypto_op *
test_perf_set_crypto_op_snow3g(struct rte_crypto_op *op, struct rte_mbuf *m,
- struct rte_cryptodev_sym_session *sess, unsigned data_len,
- unsigned digest_len)
+ struct rte_cryptodev_sym_session *sess, unsigned int data_len)
{
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op,
uint8_t *, IV_OFFSET);
@@ -2968,7 +2957,6 @@ test_perf_set_crypto_op_snow3g(struct rte_crypto_op *op, struct rte_mbuf *m,
(m->data_off + data_len);
op->sym->auth.digest.phys_addr =
rte_pktmbuf_mtophys_offset(m, data_len);
- op->sym->auth.digest.length = digest_len;
/* Data lengths/offsets Parameters */
op->sym->auth.data.offset = 0;
@@ -3015,8 +3003,7 @@ static inline struct rte_crypto_op *
test_perf_set_crypto_op_snow3g_hash(struct rte_crypto_op *op,
struct rte_mbuf *m,
struct rte_cryptodev_sym_session *sess,
- unsigned data_len,
- unsigned digest_len)
+ unsigned int data_len)
{
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op,
uint8_t *, IV_OFFSET);
@@ -3036,7 +3023,6 @@ test_perf_set_crypto_op_snow3g_hash(struct rte_crypto_op *op,
op->sym->auth.digest.phys_addr =
rte_pktmbuf_mtophys_offset(m, data_len +
SNOW3G_CIPHER_IV_LENGTH);
- op->sym->auth.digest.length = digest_len;
/* Data lengths/offsets Parameters */
op->sym->auth.data.offset = 0;
@@ -3051,7 +3037,7 @@ test_perf_set_crypto_op_snow3g_hash(struct rte_crypto_op *op,
static inline struct rte_crypto_op *
test_perf_set_crypto_op_3des(struct rte_crypto_op *op, struct rte_mbuf *m,
struct rte_cryptodev_sym_session *sess, unsigned int data_len,
- unsigned int digest_len, enum chain_mode chain __rte_unused)
+ enum chain_mode chain __rte_unused)
{
if (rte_crypto_op_attach_sym_session(op, sess) != 0) {
rte_crypto_op_free(op);
@@ -3063,7 +3049,6 @@ test_perf_set_crypto_op_3des(struct rte_crypto_op *op, struct rte_mbuf *m,
(m->data_off + data_len);
op->sym->auth.digest.phys_addr =
rte_pktmbuf_mtophys_offset(m, data_len);
- op->sym->auth.digest.length = digest_len;
/* Copy IV at the end of the crypto operation */
rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
@@ -3156,7 +3141,7 @@ test_perf_aes_sha(uint8_t dev_id, uint16_t queue_id,
ops[i] = test_perf_set_crypto_op_aes(ops[i],
mbufs[i + (pparams->burst_size *
(j % NUM_MBUF_SETS))],
- sess, pparams->buf_size, digest_length,
+ sess, pparams->buf_size,
pparams->chain);
/* enqueue burst */
@@ -3298,7 +3283,7 @@ test_perf_snow3g(uint8_t dev_id, uint16_t queue_id,
mbufs[i +
(pparams->burst_size * (j % NUM_MBUF_SETS))],
sess,
- pparams->buf_size, digest_length);
+ pparams->buf_size);
else if (pparams->chain == CIPHER_ONLY)
ops[i+op_offset] =
test_perf_set_crypto_op_snow3g_cipher(ops[i+op_offset],
@@ -3394,8 +3379,6 @@ test_perf_openssl(uint8_t dev_id, uint16_t queue_id,
uint64_t processed = 0, failed_polls = 0, retries = 0;
uint64_t tsc_start = 0, tsc_end = 0;
- unsigned int digest_length = get_auth_digest_length(pparams->auth_algo);
-
struct rte_crypto_op *ops[pparams->burst_size];
struct rte_crypto_op *proc_ops[pparams->burst_size];
@@ -3408,7 +3391,7 @@ test_perf_openssl(uint8_t dev_id, uint16_t queue_id,
static struct rte_crypto_op *(*test_perf_set_crypto_op)
(struct rte_crypto_op *, struct rte_mbuf *,
struct rte_cryptodev_sym_session *,
- unsigned int, unsigned int,
+ unsigned int,
enum chain_mode);
switch (pparams->cipher_algo) {
@@ -3470,7 +3453,7 @@ test_perf_openssl(uint8_t dev_id, uint16_t queue_id,
ops[i] = test_perf_set_crypto_op(ops[i],
mbufs[i + (pparams->burst_size *
(j % NUM_MBUF_SETS))],
- sess, pparams->buf_size, digest_length,
+ sess, pparams->buf_size,
pparams->chain);
/* enqueue burst */
@@ -3548,8 +3531,6 @@ test_perf_armv8(uint8_t dev_id, uint16_t queue_id,
uint64_t processed = 0, failed_polls = 0, retries = 0;
uint64_t tsc_start = 0, tsc_end = 0;
- unsigned int digest_length = get_auth_digest_length(pparams->auth_algo);
-
struct rte_crypto_op *ops[pparams->burst_size];
struct rte_crypto_op *proc_ops[pparams->burst_size];
@@ -3604,7 +3585,7 @@ test_perf_armv8(uint8_t dev_id, uint16_t queue_id,
ops[i] = test_perf_set_crypto_op_aes(ops[i],
mbufs[i + (pparams->burst_size *
(j % NUM_MBUF_SETS))], sess,
- pparams->buf_size, digest_length,
+ pparams->buf_size,
pparams->chain);
/* enqueue burst */
@@ -4179,7 +4160,6 @@ perf_gcm_set_crypto_op(struct rte_crypto_op *op, struct rte_mbuf *m,
params->session_attrs->aad_len +
params->symmetric_op->p_len);
- op->sym->auth.digest.length = params->symmetric_op->t_len;
op->sym->auth.aad.data = m_hlp->aad;
op->sym->auth.aad.phys_addr = rte_pktmbuf_mtophys(m);
--
2.9.4
* [dpdk-dev] [PATCH v3 16/26] cryptodev: remove AAD length from crypto op
` (6 preceding siblings ...)
2017-06-29 11:35 1% ` [dpdk-dev] [PATCH v3 14/26] cryptodev: add auth IV Pablo de Lara
@ 2017-06-29 11:35 2% ` Pablo de Lara
2017-06-29 11:35 1% ` [dpdk-dev] [PATCH v3 17/26] cryptodev: remove digest " Pablo de Lara
` (2 subsequent siblings)
10 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2017-06-29 11:35 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal,
hemant.agrawal, fiona.trahe, john.griffin, deepak.k.jain
Cc: dev, Pablo de Lara
Additional authenticated data (AAD) information was duplicated
in the authentication transform and in the crypto
operation structure.
Since the AAD length is not meant to change within a session,
it is removed from the crypto operation structure.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
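For illustration only (values are examples, not from the patch), the AAD
length is now fixed at session creation in the auth transform, while each
operation carries just the AAD location; aad_ptr/aad_phys are assumed
application variables:

    struct rte_crypto_sym_xform auth_xform = {
        .type = RTE_CRYPTO_SYM_XFORM_AUTH,
        .auth = {
            .algo = RTE_CRYPTO_AUTH_AES_GCM,
            .op = RTE_CRYPTO_AUTH_OP_GENERATE,
            .add_auth_data_length = 16, /* now uint16_t, per session */
        },
    };

    /* Per operation: */
    op->sym->auth.aad.data = aad_ptr;
    op->sym->auth.aad.phys_addr = aad_phys;
    /* op->sym->auth.aad.length no longer exists */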
---
app/test-crypto-perf/cperf_ops.c | 3 ---
doc/guides/prog_guide/cryptodev_lib.rst | 1 -
doc/guides/rel_notes/release_17_08.rst | 3 +++
drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 6 +++--
drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h | 2 ++
drivers/crypto/openssl/rte_openssl_pmd.c | 4 ++-
drivers/crypto/openssl/rte_openssl_pmd_private.h | 3 +++
drivers/crypto/qat/qat_adf/qat_algs_build_desc.c | 1 +
drivers/crypto/qat/qat_crypto.c | 4 +--
examples/ipsec-secgw/esp.c | 2 --
examples/l2fwd-crypto/main.c | 4 ---
lib/librte_cryptodev/rte_crypto_sym.h | 6 +----
test/test/test_cryptodev.c | 10 +++-----
test/test/test_cryptodev_perf.c | 31 +++++++++++++-----------
14 files changed, 39 insertions(+), 41 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 7d5d3f0..15a4b58 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -187,7 +187,6 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
sym_op->auth.digest.length = options->auth_digest_sz;
sym_op->auth.aad.phys_addr = test_vector->aad.phys_addr;
sym_op->auth.aad.data = test_vector->aad.data;
- sym_op->auth.aad.length = options->auth_aad_sz;
}
@@ -274,7 +273,6 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
sym_op->auth.digest.length = options->auth_digest_sz;
sym_op->auth.aad.phys_addr = test_vector->aad.phys_addr;
sym_op->auth.aad.data = test_vector->aad.data;
- sym_op->auth.aad.length = options->auth_aad_sz;
}
if (options->auth_algo == RTE_CRYPTO_AUTH_SNOW3G_UIA2 ||
@@ -335,7 +333,6 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
sym_op->auth.aad.data = rte_pktmbuf_mtod(bufs_in[i], uint8_t *);
sym_op->auth.aad.phys_addr = rte_pktmbuf_mtophys(bufs_in[i]);
- sym_op->auth.aad.length = options->auth_aad_sz;
/* authentication parameters */
if (options->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index 68890ff..ea8fc00 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -553,7 +553,6 @@ chain.
struct {
uint8_t *data;
phys_addr_t phys_addr;
- uint16_t length;
} aad; /**< Additional authentication parameters */
} auth;
}
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index eabf3dd..e633d73 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -163,6 +163,9 @@ API Changes
``rte_crypto_cipher_xform``.
* Added authentication IV parameters (offset and length) in
``rte_crypto_auth_xform``.
+ * Removed Additional Authentication Data (AAD) length from ``rte_crypto_sym_op``.
+ * Changed field size of AAD length in ``rte_crypto_auth_xform``,
+ from uint32_t to uint16_t.
ABI Changes
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 414f22b..f6136ba 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -145,6 +145,8 @@ aesni_gcm_set_session_parameters(struct aesni_gcm_session *sess,
return -EINVAL;
}
+ sess->aad_length = auth_xform->auth.add_auth_data_length;
+
return 0;
}
@@ -255,7 +257,7 @@ process_gcm_crypto_op(struct rte_crypto_op *op,
aesni_gcm_enc[session->key].init(&session->gdata,
iv_ptr,
sym_op->auth.aad.data,
- (uint64_t)sym_op->auth.aad.length);
+ (uint64_t)session->aad_length);
aesni_gcm_enc[session->key].update(&session->gdata, dst, src,
(uint64_t)part_len);
@@ -293,7 +295,7 @@ process_gcm_crypto_op(struct rte_crypto_op *op,
aesni_gcm_dec[session->key].init(&session->gdata,
iv_ptr,
sym_op->auth.aad.data,
- (uint64_t)sym_op->auth.aad.length);
+ (uint64_t)session->aad_length);
aesni_gcm_dec[session->key].update(&session->gdata, dst, src,
(uint64_t)part_len);
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
index 2ed96f8..bfd4d1c 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
@@ -99,6 +99,8 @@ struct aesni_gcm_session {
/**< GCM operation type */
enum aesni_gcm_key key;
/**< GCM key type */
+ uint16_t aad_length;
+ /**< AAD length */
struct gcm_data gdata __rte_cache_aligned;
/**< GCM parameters */
};
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 970c735..9de4c68 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -370,6 +370,8 @@ openssl_set_session_auth_parameters(struct openssl_session *sess,
return -EINVAL;
}
+ sess->auth.aad_length = xform->auth.add_auth_data_length;
+
return 0;
}
@@ -934,7 +936,7 @@ process_openssl_combined_op
sess->iv.offset);
ivlen = sess->iv.length;
aad = op->sym->auth.aad.data;
- aadlen = op->sym->auth.aad.length;
+ aadlen = sess->auth.aad_length;
tag = op->sym->auth.digest.data;
if (tag == NULL)
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_private.h b/drivers/crypto/openssl/rte_openssl_pmd_private.h
index 3a64853..045e532 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_private.h
+++ b/drivers/crypto/openssl/rte_openssl_pmd_private.h
@@ -162,6 +162,9 @@ struct openssl_session {
/**< pointer to EVP context structure */
} hmac;
};
+
+ uint16_t aad_length;
+ /**< AAD length */
} auth;
} __rte_cache_aligned;
diff --git a/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c b/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
index 5bf9c86..4df57aa 100644
--- a/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
+++ b/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
@@ -817,6 +817,7 @@ int qat_alg_aead_session_create_content_desc_auth(struct qat_session *cdesc,
ICP_QAT_HW_GALOIS_128_STATE1_SZ +
ICP_QAT_HW_GALOIS_H_SZ);
*aad_len = rte_bswap32(add_auth_data_length);
+ cdesc->aad_len = add_auth_data_length;
break;
case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_SNOW3G;
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
index 8f5532e..aada9dd 100644
--- a/drivers/crypto/qat/qat_crypto.c
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -1175,7 +1175,7 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
cipher_param->cipher_length = 0;
cipher_param->cipher_offset = 0;
auth_param->u1.aad_adr = 0;
- auth_param->auth_len = op->sym->auth.aad.length;
+ auth_param->auth_len = ctx->aad_len;
auth_param->auth_off = op->sym->auth.data.offset;
auth_param->u2.aad_sz = 0;
}
@@ -1202,7 +1202,7 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
rte_hexdump(stdout, "digest:", op->sym->auth.digest.data,
op->sym->auth.digest.length);
rte_hexdump(stdout, "aad:", op->sym->auth.aad.data,
- op->sym->auth.aad.length);
+ ctx->aad_len);
}
#endif
return 0;
diff --git a/examples/ipsec-secgw/esp.c b/examples/ipsec-secgw/esp.c
index 9e12782..571c2c6 100644
--- a/examples/ipsec-secgw/esp.c
+++ b/examples/ipsec-secgw/esp.c
@@ -129,7 +129,6 @@ esp_inbound(struct rte_mbuf *m, struct ipsec_sa *sa,
sym_cop->auth.aad.data = aad;
sym_cop->auth.aad.phys_addr = rte_pktmbuf_mtophys_offset(m,
aad - rte_pktmbuf_mtod(m, uint8_t *));
- sym_cop->auth.aad.length = 8;
break;
default:
RTE_LOG(ERR, IPSEC_ESP, "unsupported auth algorithm %u\n",
@@ -358,7 +357,6 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
sym_cop->auth.aad.data = aad;
sym_cop->auth.aad.phys_addr = rte_pktmbuf_mtophys_offset(m,
aad - rte_pktmbuf_mtod(m, uint8_t *));
- sym_cop->auth.aad.length = 8;
break;
default:
RTE_LOG(ERR, IPSEC_ESP, "unsupported auth algorithm %u\n",
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index ba5aef7..6fe829e 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -497,11 +497,9 @@ l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
if (cparams->aad.length) {
op->sym->auth.aad.data = cparams->aad.data;
op->sym->auth.aad.phys_addr = cparams->aad.phys_addr;
- op->sym->auth.aad.length = cparams->aad.length;
} else {
op->sym->auth.aad.data = NULL;
op->sym->auth.aad.phys_addr = 0;
- op->sym->auth.aad.length = 0;
}
}
@@ -709,8 +707,6 @@ l2fwd_main_loop(struct l2fwd_crypto_options *options)
options->auth_xform.auth.digest_length;
if (options->auth_xform.auth.add_auth_data_length) {
port_cparams[i].aad.data = options->aad.data;
- port_cparams[i].aad.length =
- options->auth_xform.auth.add_auth_data_length;
port_cparams[i].aad.phys_addr = options->aad.phys_addr;
if (!options->aad_param)
generate_random_key(port_cparams[i].aad.data,
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index 3ccb6fd..b964a56 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -365,7 +365,7 @@ struct rte_crypto_auth_xform {
* the result shall be truncated.
*/
- uint32_t add_auth_data_length;
+ uint16_t add_auth_data_length;
/**< The length of the additional authenticated data (AAD) in bytes.
* The maximum permitted value is 65535 (2^16 - 1) bytes, unless
* otherwise specified below.
@@ -653,10 +653,6 @@ struct rte_crypto_sym_op {
* operation, this field is used to pass plaintext.
*/
phys_addr_t phys_addr; /**< physical address */
- uint16_t length;
- /**< Length of additional authenticated data (AAD)
- * in bytes
- */
} aad;
/**< Additional authentication parameters */
} auth;
diff --git a/test/test/test_cryptodev.c b/test/test/test_cryptodev.c
index 853e3bd..7acfa24 100644
--- a/test/test/test_cryptodev.c
+++ b/test/test/test_cryptodev.c
@@ -4637,7 +4637,7 @@ test_3DES_cipheronly_openssl_all(void)
static int
create_gcm_session(uint8_t dev_id, enum rte_crypto_cipher_operation op,
const uint8_t *key, const uint8_t key_len,
- const uint8_t aad_len, const uint8_t auth_len,
+ const uint16_t aad_len, const uint8_t auth_len,
uint8_t iv_len,
enum rte_crypto_auth_operation auth_op)
{
@@ -4751,12 +4751,11 @@ create_gcm_operation(enum rte_crypto_cipher_operation op,
TEST_ASSERT_NOT_NULL(sym_op->auth.aad.data,
"no room to append aad");
- sym_op->auth.aad.length = tdata->aad.len;
sym_op->auth.aad.phys_addr =
rte_pktmbuf_mtophys(ut_params->ibuf);
memcpy(sym_op->auth.aad.data, tdata->aad.data, tdata->aad.len);
TEST_HEXDUMP(stdout, "aad:", sym_op->auth.aad.data,
- sym_op->auth.aad.length);
+ tdata->aad.len);
/* Append IV at the end of the crypto operation*/
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ut_params->op,
@@ -6315,7 +6314,6 @@ create_gmac_operation(enum rte_crypto_auth_operation op,
TEST_ASSERT_NOT_NULL(sym_op->auth.aad.data,
"no room to append aad");
- sym_op->auth.aad.length = tdata->aad.len;
sym_op->auth.aad.phys_addr =
rte_pktmbuf_mtophys(ut_params->ibuf);
memcpy(sym_op->auth.aad.data, tdata->aad.data, tdata->aad.len);
@@ -6380,7 +6378,7 @@ static int create_gmac_session(uint8_t dev_id,
ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_AES_GMAC;
ut_params->auth_xform.auth.op = auth_op;
ut_params->auth_xform.auth.digest_length = tdata->gmac_tag.len;
- ut_params->auth_xform.auth.add_auth_data_length = 0;
+ ut_params->auth_xform.auth.add_auth_data_length = tdata->aad.len;
ut_params->auth_xform.auth.key.length = 0;
ut_params->auth_xform.auth.key.data = NULL;
@@ -6860,7 +6858,6 @@ create_auth_GMAC_operation(struct crypto_testsuite_params *ts_params,
TEST_HEXDUMP(stdout, "AAD:", sym_op->auth.aad.data, reference->aad.len);
sym_op->auth.aad.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
- sym_op->auth.aad.length = reference->aad.len;
/* digest */
sym_op->auth.digest.data = (uint8_t *)rte_pktmbuf_append(
@@ -7194,7 +7191,6 @@ create_gcm_operation_SGL(enum rte_crypto_cipher_operation op,
"no room to prepend aad");
sym_op->auth.aad.phys_addr = rte_pktmbuf_mtophys(
ut_params->ibuf);
- sym_op->auth.aad.length = aad_len;
memset(sym_op->auth.aad.data, 0, aad_len);
rte_memcpy(sym_op->auth.aad.data, tdata->aad.data, aad_len);
diff --git a/test/test/test_cryptodev_perf.c b/test/test/test_cryptodev_perf.c
index 1d204fd..7239976 100644
--- a/test/test/test_cryptodev_perf.c
+++ b/test/test/test_cryptodev_perf.c
@@ -45,6 +45,7 @@
#define AES_CIPHER_IV_LENGTH 16
#define TRIPLE_DES_CIPHER_IV_LENGTH 8
+#define AES_GCM_AAD_LENGTH 16
#define PERF_NUM_OPS_INFLIGHT (128)
#define DEFAULT_NUM_REQS_TO_SUBMIT (10000000)
@@ -70,7 +71,6 @@ enum chain_mode {
struct symmetric_op {
const uint8_t *aad_data;
- uint32_t aad_len;
const uint8_t *p_data;
uint32_t p_len;
@@ -97,6 +97,7 @@ struct symmetric_session_attrs {
const uint8_t *iv_data;
uint16_t iv_len;
+ uint16_t aad_len;
uint32_t digest_len;
};
@@ -2779,6 +2780,7 @@ test_perf_create_openssl_session(uint8_t dev_id, enum chain_mode chain,
break;
case RTE_CRYPTO_AUTH_AES_GCM:
auth_xform.auth.key.data = NULL;
+ auth_xform.auth.add_auth_data_length = AES_GCM_AAD_LENGTH;
break;
default:
return NULL;
@@ -2855,8 +2857,6 @@ test_perf_create_armv8_session(uint8_t dev_id, enum chain_mode chain,
}
}
-#define AES_GCM_AAD_LENGTH 16
-
static struct rte_mbuf *
test_perf_create_pktmbuf(struct rte_mempool *mpool, unsigned buf_sz)
{
@@ -2888,7 +2888,6 @@ test_perf_set_crypto_op_aes(struct rte_crypto_op *op, struct rte_mbuf *m,
op->sym->auth.digest.phys_addr = 0;
op->sym->auth.digest.length = 0;
op->sym->auth.aad.data = NULL;
- op->sym->auth.aad.length = 0;
op->sym->auth.data.offset = 0;
op->sym->auth.data.length = 0;
} else {
@@ -2932,7 +2931,6 @@ test_perf_set_crypto_op_aes_gcm(struct rte_crypto_op *op, struct rte_mbuf *m,
rte_pktmbuf_mtophys_offset(m, data_len);
op->sym->auth.digest.length = digest_len;
op->sym->auth.aad.data = aes_gcm_aad;
- op->sym->auth.aad.length = AES_GCM_AAD_LENGTH;
/* Copy IV at the end of the crypto operation */
rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
@@ -2999,9 +2997,10 @@ test_perf_set_crypto_op_snow3g_cipher(struct rte_crypto_op *op,
rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
snow3g_iv, SNOW3G_CIPHER_IV_LENGTH);
+ /* Cipher Parameters */
op->sym->cipher.data.offset = 0;
op->sym->cipher.data.length = data_len << 3;
op->sym->m_src = m;
return op;
@@ -4137,6 +4140,7 @@ test_perf_create_session(uint8_t dev_id, struct perf_test_params *pparams)
auth_xform.auth.op = pparams->session_attrs->auth;
auth_xform.auth.algo = pparams->session_attrs->auth_algorithm;
+ auth_xform.auth.add_auth_data_length = pparams->session_attrs->aad_len;
auth_xform.auth.digest_length = pparams->session_attrs->digest_len;
auth_xform.auth.key.length = pparams->session_attrs->key_auth_len;
@@ -4172,17 +4176,16 @@ perf_gcm_set_crypto_op(struct rte_crypto_op *op, struct rte_mbuf *m,
op->sym->auth.digest.data = m_hlp->digest;
op->sym->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
m,
- params->symmetric_op->aad_len +
+ params->session_attrs->aad_len +
params->symmetric_op->p_len);
op->sym->auth.digest.length = params->symmetric_op->t_len;
op->sym->auth.aad.data = m_hlp->aad;
- op->sym->auth.aad.length = params->symmetric_op->aad_len;
op->sym->auth.aad.phys_addr = rte_pktmbuf_mtophys(m);
rte_memcpy(op->sym->auth.aad.data, params->symmetric_op->aad_data,
- params->symmetric_op->aad_len);
+ params->session_attrs->aad_len);
rte_memcpy(iv_ptr, params->session_attrs->iv_data,
params->session_attrs->iv_len);
@@ -4190,11 +4193,11 @@ perf_gcm_set_crypto_op(struct rte_crypto_op *op, struct rte_mbuf *m,
iv_ptr[15] = 1;
op->sym->auth.data.offset =
- params->symmetric_op->aad_len;
+ params->session_attrs->aad_len;
op->sym->auth.data.length = params->symmetric_op->p_len;
op->sym->cipher.data.offset =
- params->symmetric_op->aad_len;
+ params->session_attrs->aad_len;
op->sym->cipher.data.length = params->symmetric_op->p_len;
op->sym->m_src = m;
@@ -4208,7 +4211,7 @@ test_perf_create_pktmbuf_fill(struct rte_mempool *mpool,
unsigned buf_sz, struct crypto_params *m_hlp)
{
struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
- uint16_t aad_len = params->symmetric_op->aad_len;
+ uint16_t aad_len = params->session_attrs->aad_len;
uint16_t digest_size = params->symmetric_op->t_len;
char *p;
@@ -4344,14 +4347,14 @@ perf_AES_GCM(uint8_t dev_id, uint16_t queue_id,
TEST_ASSERT_BUFFERS_ARE_EQUAL(
pparams->symmetric_op->c_data,
pkt +
- pparams->symmetric_op->aad_len,
+ pparams->session_attrs->aad_len,
pparams->symmetric_op->c_len,
"GCM Ciphertext data not as expected");
TEST_ASSERT_BUFFERS_ARE_EQUAL(
pparams->symmetric_op->t_data,
pkt +
- pparams->symmetric_op->aad_len +
+ pparams->session_attrs->aad_len +
pparams->symmetric_op->c_len,
pparams->symmetric_op->t_len,
"GCM MAC data not as expected");
@@ -4423,13 +4426,13 @@ test_perf_AES_GCM(int continual_buf_len, int continual_size)
RTE_CRYPTO_AUTH_OP_GENERATE;
session_attrs[i].key_auth_data = NULL;
session_attrs[i].key_auth_len = 0;
+ session_attrs[i].aad_len = gcm_test->aad.len;
session_attrs[i].digest_len =
gcm_test->auth_tag.len;
session_attrs[i].iv_len = gcm_test->iv.len;
session_attrs[i].iv_data = gcm_test->iv.data;
ops_set[i].aad_data = gcm_test->aad.data;
- ops_set[i].aad_len = gcm_test->aad.len;
ops_set[i].p_data = gcm_test->plaintext.data;
ops_set[i].p_len = buf_lengths[i];
ops_set[i].c_data = gcm_test->ciphertext.data;
--
2.9.4
* [dpdk-dev] [PATCH v3 14/26] cryptodev: add auth IV
` (5 preceding siblings ...)
2017-06-29 11:35 1% ` [dpdk-dev] [PATCH v3 13/26] cryptodev: move IV parameters to crypto session Pablo de Lara
@ 2017-06-29 11:35 1% ` Pablo de Lara
2017-06-29 11:35 2% ` [dpdk-dev] [PATCH v3 16/26] cryptodev: remove AAD length from crypto op Pablo de Lara
` (3 subsequent siblings)
10 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2017-06-29 11:35 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal,
hemant.agrawal, fiona.trahe, john.griffin, deepak.k.jain
Cc: dev, Pablo de Lara
Authentication algorithms, such as AES-GMAC or the wireless
algorithms (like SNOW3G), use an IV, just as cipher algorithms do.
So far, AES-GMAC has used the IV from the cipher structure,
and the wireless algorithms have used the AAD field,
which is not technically correct.
Therefore, authentication IV parameters have been added,
so the API is more correct. Like the cipher IV, the auth IV
is expected to be copied right after the crypto operation.
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
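For illustration, a minimal sketch of how an application drives the new
auth IV fields (IV_OFFSET and the *_len/*_data variables are
application-defined here, following the l2fwd-crypto changes below):

	/* At session creation: the auth IV lives right after the cipher IV,
	 * both in the private area trailing the crypto operation. */
	auth_xform.auth.iv.offset = IV_OFFSET + cipher_iv_len;
	auth_xform.auth.iv.length = auth_iv_len;

	/* Per operation: copy the auth IV bytes to the declared offset. */
	uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
			IV_OFFSET + cipher_iv_len);
	rte_memcpy(iv_ptr, auth_iv_data, auth_iv_len);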
app/test-crypto-perf/cperf_ops.c | 61 +++++++++--
app/test-crypto-perf/cperf_options.h | 2 +
app/test-crypto-perf/cperf_options_parsing.c | 9 ++
app/test-crypto-perf/cperf_test_latency.c | 4 +-
app/test-crypto-perf/cperf_test_throughput.c | 3 +-
app/test-crypto-perf/cperf_test_vector_parsing.c | 54 +++++++---
app/test-crypto-perf/cperf_test_vectors.c | 37 +++++--
app/test-crypto-perf/cperf_test_vectors.h | 8 +-
app/test-crypto-perf/cperf_test_verify.c | 3 +-
app/test-crypto-perf/data/aes_cbc_128_sha.data | 2 +-
app/test-crypto-perf/data/aes_cbc_192_sha.data | 2 +-
app/test-crypto-perf/data/aes_cbc_256_sha.data | 2 +-
app/test-crypto-perf/main.c | 25 ++++-
doc/guides/prog_guide/cryptodev_lib.rst | 3 +-
doc/guides/rel_notes/release_17_08.rst | 2 +
doc/guides/sample_app_ug/l2_forward_crypto.rst | 17 ++-
doc/guides/tools/cryptoperf.rst | 14 ++-
drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c | 6 +-
drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c | 21 ++--
drivers/crypto/armv8/rte_armv8_pmd_ops.c | 6 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 18 ++--
drivers/crypto/kasumi/rte_kasumi_pmd_ops.c | 3 +-
drivers/crypto/null/null_crypto_pmd_ops.c | 3 +-
drivers/crypto/openssl/rte_openssl_pmd_ops.c | 78 ++++++++------
drivers/crypto/qat/qat_crypto_capabilities.h | 41 ++++---
drivers/crypto/snow3g/rte_snow3g_pmd_ops.c | 3 +-
drivers/crypto/zuc/rte_zuc_pmd_ops.c | 3 +-
examples/l2fwd-crypto/main.c | 132 +++++++++++++++++------
lib/librte_cryptodev/rte_crypto_sym.h | 24 +++++
lib/librte_cryptodev/rte_cryptodev.c | 6 +-
lib/librte_cryptodev/rte_cryptodev.h | 6 +-
31 files changed, 439 insertions(+), 159 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 16476ee..7d5d3f0 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -121,9 +121,11 @@ cperf_set_ops_cipher(struct rte_crypto_op **ops,
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ops[i],
uint8_t *, iv_offset);
- memcpy(iv_ptr, test_vector->iv.data,
- test_vector->iv.length);
- }
- }
+ memcpy(iv_ptr, test_vector->cipher_iv.data,
+ test_vector->cipher_iv.length);
+
+ }
+ }
return 0;
}
@@ -134,7 +136,7 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
- uint16_t iv_offset __rte_unused)
+ uint16_t iv_offset)
{
uint16_t i;
@@ -146,6 +148,14 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
sym_op->m_src = bufs_in[i];
sym_op->m_dst = bufs_out[i];
+ if (test_vector->auth_iv.length) {
+ uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ops[i],
+ uint8_t *,
+ iv_offset);
+ memcpy(iv_ptr, test_vector->auth_iv.data,
+ test_vector->auth_iv.length);
+ }
+
/* authentication parameters */
if (options->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
sym_op->auth.digest.data = test_vector->digest.data;
@@ -191,6 +201,17 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
sym_op->auth.data.offset = 0;
}
+ if (options->test == CPERF_TEST_TYPE_VERIFY) {
+ if (test_vector->auth_iv.length) {
+ for (i = 0; i < nb_ops; i++) {
+ uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ops[i],
+ uint8_t *, iv_offset);
+
+ memcpy(iv_ptr, test_vector->auth_iv.data,
+ test_vector->auth_iv.length);
+ }
+ }
+ }
return 0;
}
@@ -271,9 +292,19 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ops[i],
uint8_t *, iv_offset);
- memcpy(iv_ptr, test_vector->iv.data,
- test_vector->iv.length);
+ memcpy(iv_ptr, test_vector->cipher_iv.data,
+ test_vector->cipher_iv.length);
+ if (test_vector->auth_iv.length) {
+ /*
+ * Copy the auth IV after the crypto operation,
+ * right after the cipher IV
+ */
+ iv_ptr += test_vector->cipher_iv.length;
+ memcpy(iv_ptr, test_vector->auth_iv.data,
+ test_vector->auth_iv.length);
+ }
}
+
}
return 0;
@@ -348,8 +379,8 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ops[i],
uint8_t *, iv_offset);
- memcpy(iv_ptr, test_vector->iv.data,
- test_vector->iv.length);
+ memcpy(iv_ptr, test_vector->cipher_iv.data,
+ test_vector->cipher_iv.length);
}
}
@@ -382,8 +413,8 @@ cperf_create_session(uint8_t dev_id,
test_vector->cipher_key.data;
cipher_xform.cipher.key.length =
test_vector->cipher_key.length;
- cipher_xform.cipher.iv.length = test_vector->iv.length;
-
+ cipher_xform.cipher.iv.length =
+ test_vector->cipher_iv.length;
} else {
cipher_xform.cipher.key.data = NULL;
cipher_xform.cipher.key.length = 0;
@@ -409,11 +440,14 @@ cperf_create_session(uint8_t dev_id,
auth_xform.auth.key.length =
test_vector->auth_key.length;
auth_xform.auth.key.data = test_vector->auth_key.data;
+ auth_xform.auth.iv.length =
+ test_vector->auth_iv.length;
} else {
auth_xform.auth.digest_length = 0;
auth_xform.auth.add_auth_data_length = 0;
auth_xform.auth.key.length = 0;
auth_xform.auth.key.data = NULL;
+ auth_xform.auth.iv.length = 0;
}
/* create crypto session */
sess = rte_cryptodev_sym_session_create(dev_id, &auth_xform);
@@ -439,7 +473,8 @@ cperf_create_session(uint8_t dev_id,
test_vector->cipher_key.data;
cipher_xform.cipher.key.length =
test_vector->cipher_key.length;
- cipher_xform.cipher.iv.length = test_vector->iv.length;
+ cipher_xform.cipher.iv.length =
+ test_vector->cipher_iv.length;
} else {
cipher_xform.cipher.key.data = NULL;
cipher_xform.cipher.key.length = 0;
@@ -464,17 +499,21 @@ cperf_create_session(uint8_t dev_id,
options->auth_algo == RTE_CRYPTO_AUTH_AES_GCM) {
auth_xform.auth.key.length = 0;
auth_xform.auth.key.data = NULL;
+ auth_xform.auth.iv.length = 0;
} else { /* auth options for others */
auth_xform.auth.key.length =
test_vector->auth_key.length;
auth_xform.auth.key.data =
test_vector->auth_key.data;
+ auth_xform.auth.iv.length =
+ test_vector->auth_iv.length;
}
} else {
auth_xform.auth.digest_length = 0;
auth_xform.auth.add_auth_data_length = 0;
auth_xform.auth.key.length = 0;
auth_xform.auth.key.data = NULL;
+ auth_xform.auth.iv.length = 0;
}
/* create crypto session for aes gcm */
diff --git a/app/test-crypto-perf/cperf_options.h b/app/test-crypto-perf/cperf_options.h
index b928c58..0e53c03 100644
--- a/app/test-crypto-perf/cperf_options.h
+++ b/app/test-crypto-perf/cperf_options.h
@@ -28,6 +28,7 @@
#define CPERF_AUTH_ALGO ("auth-algo")
#define CPERF_AUTH_OP ("auth-op")
#define CPERF_AUTH_KEY_SZ ("auth-key-sz")
+#define CPERF_AUTH_IV_SZ ("auth-iv-sz")
#define CPERF_AUTH_DIGEST_SZ ("auth-digest-sz")
#define CPERF_AUTH_AAD_SZ ("auth-aad-sz")
#define CPERF_CSV ("csv-friendly")
@@ -76,6 +77,7 @@ struct cperf_options {
enum rte_crypto_auth_operation auth_op;
uint16_t auth_key_sz;
+ uint16_t auth_iv_sz;
uint16_t auth_digest_sz;
uint16_t auth_aad_sz;
diff --git a/app/test-crypto-perf/cperf_options_parsing.c b/app/test-crypto-perf/cperf_options_parsing.c
index 63ba37c..70b6a60 100644
--- a/app/test-crypto-perf/cperf_options_parsing.c
+++ b/app/test-crypto-perf/cperf_options_parsing.c
@@ -549,6 +549,12 @@ parse_auth_digest_sz(struct cperf_options *opts, const char *arg)
}
static int
+parse_auth_iv_sz(struct cperf_options *opts, const char *arg)
+{
+ return parse_uint16_t(&opts->auth_iv_sz, arg);
+}
+
+static int
parse_auth_aad_sz(struct cperf_options *opts, const char *arg)
{
return parse_uint16_t(&opts->auth_aad_sz, arg);
@@ -651,6 +657,7 @@ cperf_options_default(struct cperf_options *opts)
opts->auth_key_sz = 64;
opts->auth_digest_sz = 12;
+ opts->auth_iv_sz = 0;
opts->auth_aad_sz = 0;
}
@@ -678,6 +685,7 @@ cperf_opts_parse_long(int opt_idx, struct cperf_options *opts)
{ CPERF_AUTH_ALGO, parse_auth_algo },
{ CPERF_AUTH_OP, parse_auth_op },
{ CPERF_AUTH_KEY_SZ, parse_auth_key_sz },
+ { CPERF_AUTH_IV_SZ, parse_auth_iv_sz },
{ CPERF_AUTH_DIGEST_SZ, parse_auth_digest_sz },
{ CPERF_AUTH_AAD_SZ, parse_auth_aad_sz },
{ CPERF_CSV, parse_csv_friendly},
@@ -914,6 +922,7 @@ cperf_options_dump(struct cperf_options *opts)
printf("# auth operation: %s\n",
rte_crypto_auth_operation_strings[opts->auth_op]);
printf("# auth key size: %u\n", opts->auth_key_sz);
+ printf("# auth iv size: %u\n", opts->auth_iv_sz);
printf("# auth digest size: %u\n", opts->auth_digest_sz);
printf("# auth aad size: %u\n", opts->auth_aad_sz);
printf("#\n");
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index d37083f..f828366 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -285,7 +285,9 @@ cperf_latency_test_constructor(uint8_t dev_id, uint16_t qp_id,
snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
dev_id);
- uint16_t priv_size = sizeof(struct priv_op_data) + test_vector->iv.length;
+ uint16_t priv_size = sizeof(struct priv_op_data) +
+ test_vector->cipher_iv.length +
+ test_vector->auth_iv.length;
ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz, 0, priv_size,
rte_socket_id());
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 4d2b3d3..1e3f3b3 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -266,7 +266,8 @@ cperf_throughput_test_constructor(uint8_t dev_id, uint16_t qp_id,
snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
dev_id);
- uint16_t priv_size = test_vector->iv.length;
+ uint16_t priv_size = test_vector->cipher_iv.length +
+ test_vector->auth_iv.length;
ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz, 0, priv_size,
diff --git a/app/test-crypto-perf/cperf_test_vector_parsing.c b/app/test-crypto-perf/cperf_test_vector_parsing.c
index 62d0c91..277ff1e 100644
--- a/app/test-crypto-perf/cperf_test_vector_parsing.c
+++ b/app/test-crypto-perf/cperf_test_vector_parsing.c
@@ -15,7 +15,8 @@ free_test_vector(struct cperf_test_vector *vector, struct cperf_options *opts)
if (vector == NULL || opts == NULL)
return -1;
- rte_free(vector->iv.data);
+ rte_free(vector->cipher_iv.data);
+ rte_free(vector->auth_iv.data);
rte_free(vector->aad.data);
rte_free(vector->digest.data);
@@ -84,15 +85,28 @@ show_test_vector(struct cperf_test_vector *test_vector)
printf("\n");
}
- if (test_vector->iv.data) {
- printf("\niv =\n");
- for (i = 0; i < test_vector->iv.length; ++i) {
+ if (test_vector->cipher_iv.data) {
+ printf("\ncipher_iv =\n");
+ for (i = 0; i < test_vector->cipher_iv.length; ++i) {
if ((i % wrap == 0) && (i != 0))
printf("\n");
- if (i == (uint32_t)(test_vector->iv.length - 1))
- printf("0x%02x", test_vector->iv.data[i]);
+ if (i == (uint32_t)(test_vector->cipher_iv.length - 1))
+ printf("0x%02x", test_vector->cipher_iv.data[i]);
else
- printf("0x%02x, ", test_vector->iv.data[i]);
+ printf("0x%02x, ", test_vector->cipher_iv.data[i]);
+ }
+ printf("\n");
+ }
+
+ if (test_vector->auth_iv.data) {
+ printf("\nauth_iv =\n");
+ for (i = 0; i < test_vector->auth_iv.length; ++i) {
+ if ((i % wrap == 0) && (i != 0))
+ printf("\n");
+ if (i == (uint32_t)(test_vector->auth_iv.length - 1))
+ printf("0x%02x", test_vector->auth_iv.data[i]);
+ else
+ printf("0x%02x, ", test_vector->auth_iv.data[i]);
}
printf("\n");
}
@@ -300,18 +314,32 @@ parse_entry(char *entry, struct cperf_test_vector *vector,
vector->auth_key.length = opts->auth_key_sz;
}
- } else if (strstr(key_token, "iv")) {
- rte_free(vector->iv.data);
- vector->iv.data = data;
+ } else if (strstr(key_token, "cipher_iv")) {
+ rte_free(vector->cipher_iv.data);
+ vector->cipher_iv.data = data;
if (tc_found)
- vector->iv.length = data_length;
+ vector->cipher_iv.length = data_length;
else {
if (opts->cipher_iv_sz > data_length) {
- printf("Global iv shorter than "
+ printf("Global cipher iv shorter than "
"cipher_iv_sz\n");
return -1;
}
- vector->iv.length = opts->cipher_iv_sz;
+ vector->cipher_iv.length = opts->cipher_iv_sz;
+ }
+
+ } else if (strstr(key_token, "auth_iv")) {
+ rte_free(vector->auth_iv.data);
+ vector->auth_iv.data = data;
+ if (tc_found)
+ vector->auth_iv.length = data_length;
+ else {
+ if (opts->auth_iv_sz > data_length) {
+ printf("Global auth iv shorter than "
+ "auth_iv_sz\n");
+ return -1;
+ }
+ vector->auth_iv.length = opts->auth_iv_sz;
}
} else if (strstr(key_token, "ciphertext")) {
diff --git a/app/test-crypto-perf/cperf_test_vectors.c b/app/test-crypto-perf/cperf_test_vectors.c
index 4a14fb3..6829b86 100644
--- a/app/test-crypto-perf/cperf_test_vectors.c
+++ b/app/test-crypto-perf/cperf_test_vectors.c
@@ -409,32 +409,34 @@ cperf_test_vector_get_dummy(struct cperf_options *options)
t_vec->cipher_key.length = 0;
t_vec->ciphertext.data = plaintext;
t_vec->cipher_key.data = NULL;
- t_vec->iv.data = NULL;
+ t_vec->cipher_iv.data = NULL;
} else {
t_vec->cipher_key.length = options->cipher_key_sz;
t_vec->ciphertext.data = ciphertext;
t_vec->cipher_key.data = cipher_key;
- t_vec->iv.data = rte_malloc(NULL, options->cipher_iv_sz,
+ t_vec->cipher_iv.data = rte_malloc(NULL, options->cipher_iv_sz,
16);
- if (t_vec->iv.data == NULL) {
+ if (t_vec->cipher_iv.data == NULL) {
rte_free(t_vec);
return NULL;
}
- memcpy(t_vec->iv.data, iv, options->cipher_iv_sz);
+ memcpy(t_vec->cipher_iv.data, iv, options->cipher_iv_sz);
}
t_vec->ciphertext.length = options->max_buffer_size;
+
/* Set IV parameters */
- t_vec->iv.data = rte_malloc(NULL, options->cipher_iv_sz,
- 16);
- if (options->cipher_iv_sz && t_vec->iv.data == NULL) {
+ t_vec->cipher_iv.data = rte_malloc(NULL, options->cipher_iv_sz,
+ 16);
+ if (options->cipher_iv_sz && t_vec->cipher_iv.data == NULL) {
rte_free(t_vec);
return NULL;
}
- memcpy(t_vec->iv.data, iv, options->cipher_iv_sz);
- t_vec->iv.length = options->cipher_iv_sz;
+ memcpy(t_vec->cipher_iv.data, iv, options->cipher_iv_sz);
+ t_vec->cipher_iv.length = options->cipher_iv_sz;
t_vec->data.cipher_offset = 0;
t_vec->data.cipher_length = options->max_buffer_size;
+
}
if (options->op_type == CPERF_AUTH_ONLY ||
@@ -476,7 +478,7 @@ cperf_test_vector_get_dummy(struct cperf_options *options)
options->auth_aad_sz, 16);
if (t_vec->aad.data == NULL) {
if (options->op_type != CPERF_AUTH_ONLY)
- rte_free(t_vec->iv.data);
+ rte_free(t_vec->cipher_iv.data);
rte_free(t_vec);
return NULL;
}
@@ -485,13 +487,26 @@ cperf_test_vector_get_dummy(struct cperf_options *options)
t_vec->aad.data = NULL;
}
+ /* Set IV parameters */
+ t_vec->auth_iv.data = rte_malloc(NULL, options->auth_iv_sz,
+ 16);
+ if (options->auth_iv_sz && t_vec->auth_iv.data == NULL) {
+ if (options->op_type != CPERF_AUTH_ONLY)
+ rte_free(t_vec->cipher_iv.data);
+ rte_free(t_vec);
+ return NULL;
+ }
+ memcpy(t_vec->auth_iv.data, iv, options->auth_iv_sz);
+ t_vec->auth_iv.length = options->auth_iv_sz;
+
t_vec->aad.phys_addr = rte_malloc_virt2phy(t_vec->aad.data);
t_vec->aad.length = options->auth_aad_sz;
t_vec->digest.data = rte_malloc(NULL, options->auth_digest_sz,
16);
if (t_vec->digest.data == NULL) {
if (options->op_type != CPERF_AUTH_ONLY)
- rte_free(t_vec->iv.data);
+ rte_free(t_vec->cipher_iv.data);
+ rte_free(t_vec->auth_iv.data);
rte_free(t_vec->aad.data);
rte_free(t_vec);
return NULL;
diff --git a/app/test-crypto-perf/cperf_test_vectors.h b/app/test-crypto-perf/cperf_test_vectors.h
index e64f116..7f9c4fa 100644
--- a/app/test-crypto-perf/cperf_test_vectors.h
+++ b/app/test-crypto-perf/cperf_test_vectors.h
@@ -53,9 +53,13 @@ struct cperf_test_vector {
struct {
uint8_t *data;
- phys_addr_t phys_addr;
uint16_t length;
- } iv;
+ } cipher_iv;
+
+ struct {
+ uint8_t *data;
+ uint16_t length;
+ } auth_iv;
struct {
uint8_t *data;
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index 1b58b1d..81057ff 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -270,7 +270,8 @@ cperf_verify_test_constructor(uint8_t dev_id, uint16_t qp_id,
snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
dev_id);
- uint16_t priv_size = test_vector->iv.length;
+ uint16_t priv_size = test_vector->cipher_iv.length +
+ test_vector->auth_iv.length;
ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz, 0, priv_size,
rte_socket_id());
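All three test constructors above size the crypto op pool the same way;
condensed here as a sketch (the pool name and options fields are the
app's own):

	/* The private area trailing each rte_crypto_op must hold both IVs,
	 * since cipher and auth IV are copied there back to back. */
	uint16_t priv_size = test_vector->cipher_iv.length +
			test_vector->auth_iv.length;
	struct rte_mempool *op_pool = rte_crypto_op_pool_create(
			"cperf_op_pool", RTE_CRYPTO_OP_TYPE_SYMMETRIC,
			options->pool_sz, 0, priv_size, rte_socket_id());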
diff --git a/app/test-crypto-perf/data/aes_cbc_128_sha.data b/app/test-crypto-perf/data/aes_cbc_128_sha.data
index 0b054f5..ff55590 100644
--- a/app/test-crypto-perf/data/aes_cbc_128_sha.data
+++ b/app/test-crypto-perf/data/aes_cbc_128_sha.data
@@ -282,7 +282,7 @@ auth_key =
0xe8, 0x38, 0x36, 0x58, 0x39, 0xd9, 0x9a, 0xc5, 0xe7, 0x3b, 0xc4, 0x47, 0xe2, 0xbd, 0x80, 0x73,
0xf8, 0xd1, 0x9a, 0x5e, 0x4b, 0xfb, 0x52, 0x6b, 0x50, 0xaf, 0x8b, 0xb7, 0xb5, 0x2c, 0x52, 0x84
-iv =
+cipher_iv =
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F
####################
diff --git a/app/test-crypto-perf/data/aes_cbc_192_sha.data b/app/test-crypto-perf/data/aes_cbc_192_sha.data
index 7bfe3da..3f85a00 100644
--- a/app/test-crypto-perf/data/aes_cbc_192_sha.data
+++ b/app/test-crypto-perf/data/aes_cbc_192_sha.data
@@ -283,7 +283,7 @@ auth_key =
0xe8, 0x38, 0x36, 0x58, 0x39, 0xd9, 0x9a, 0xc5, 0xe7, 0x3b, 0xc4, 0x47, 0xe2, 0xbd, 0x80, 0x73,
0xf8, 0xd1, 0x9a, 0x5e, 0x4b, 0xfb, 0x52, 0x6b, 0x50, 0xaf, 0x8b, 0xb7, 0xb5, 0x2c, 0x52, 0x84
-iv =
+cipher_iv =
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F
####################
diff --git a/app/test-crypto-perf/data/aes_cbc_256_sha.data b/app/test-crypto-perf/data/aes_cbc_256_sha.data
index 52dafb9..8da8161 100644
--- a/app/test-crypto-perf/data/aes_cbc_256_sha.data
+++ b/app/test-crypto-perf/data/aes_cbc_256_sha.data
@@ -283,7 +283,7 @@ auth_key =
0xe8, 0x38, 0x36, 0x58, 0x39, 0xd9, 0x9a, 0xc5, 0xe7, 0x3b, 0xc4, 0x47, 0xe2, 0xbd, 0x80, 0x73,
0xf8, 0xd1, 0x9a, 0x5e, 0x4b, 0xfb, 0x52, 0x6b, 0x50, 0xaf, 0x8b, 0xb7, 0xb5, 0x2c, 0x52, 0x84
-iv =
+cipher_iv =
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F
####################
diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c
index 9ec2a4b..cf4fa4f 100644
--- a/app/test-crypto-perf/main.c
+++ b/app/test-crypto-perf/main.c
@@ -138,7 +138,8 @@ cperf_verify_devices_capabilities(struct cperf_options *opts,
capability,
opts->auth_key_sz,
opts->auth_digest_sz,
- opts->auth_aad_sz);
+ opts->auth_aad_sz,
+ opts->auth_iv_sz);
if (ret != 0)
return ret;
}
@@ -185,9 +186,9 @@ cperf_check_test_vector(struct cperf_options *opts,
return -1;
if (test_vec->ciphertext.length < opts->max_buffer_size)
return -1;
- if (test_vec->iv.data == NULL)
+ if (test_vec->cipher_iv.data == NULL)
return -1;
- if (test_vec->iv.length != opts->cipher_iv_sz)
+ if (test_vec->cipher_iv.length != opts->cipher_iv_sz)
return -1;
if (test_vec->cipher_key.data == NULL)
return -1;
@@ -204,6 +205,11 @@ cperf_check_test_vector(struct cperf_options *opts,
return -1;
if (test_vec->auth_key.length != opts->auth_key_sz)
return -1;
+ if (test_vec->auth_iv.length != opts->auth_iv_sz)
+ return -1;
+ /* Auth IV is only required for some algorithms */
+ if (opts->auth_iv_sz && test_vec->auth_iv.data == NULL)
+ return -1;
if (test_vec->digest.data == NULL)
return -1;
if (test_vec->digest.length < opts->auth_digest_sz)
@@ -226,9 +232,9 @@ cperf_check_test_vector(struct cperf_options *opts,
return -1;
if (test_vec->ciphertext.length < opts->max_buffer_size)
return -1;
- if (test_vec->iv.data == NULL)
+ if (test_vec->cipher_iv.data == NULL)
return -1;
- if (test_vec->iv.length != opts->cipher_iv_sz)
+ if (test_vec->cipher_iv.length != opts->cipher_iv_sz)
return -1;
if (test_vec->cipher_key.data == NULL)
return -1;
@@ -240,6 +246,11 @@ cperf_check_test_vector(struct cperf_options *opts,
return -1;
if (test_vec->auth_key.length != opts->auth_key_sz)
return -1;
+ if (test_vec->auth_iv.length != opts->auth_iv_sz)
+ return -1;
+ /* Auth IV is only required for some algorithms */
+ if (opts->auth_iv_sz && test_vec->auth_iv.data == NULL)
+ return -1;
if (test_vec->digest.data == NULL)
return -1;
if (test_vec->digest.length < opts->auth_digest_sz)
@@ -254,6 +265,10 @@ cperf_check_test_vector(struct cperf_options *opts,
return -1;
if (test_vec->ciphertext.length < opts->max_buffer_size)
return -1;
+ if (test_vec->cipher_iv.data == NULL)
+ return -1;
+ if (test_vec->cipher_iv.length != opts->cipher_iv_sz)
+ return -1;
if (test_vec->aad.data == NULL)
return -1;
if (test_vec->aad.length != opts->auth_aad_sz)
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index 4e352f4..68890ff 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -245,7 +245,8 @@ algorithm AES_CBC.
.max = 12,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}
}
},
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index 4775bd2..eabf3dd 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -161,6 +161,8 @@ API Changes
offset from the start of the crypto operation.
* Moved length and offset of cipher IV from ``rte_crypto_sym_op`` to
``rte_crypto_cipher_xform``.
+ * Added authentication IV parameters (offset and length) in
+ ``rte_crypto_auth_xform``.
ABI Changes
diff --git a/doc/guides/sample_app_ug/l2_forward_crypto.rst b/doc/guides/sample_app_ug/l2_forward_crypto.rst
index 45d8a12..b9aa573 100644
--- a/doc/guides/sample_app_ug/l2_forward_crypto.rst
+++ b/doc/guides/sample_app_ug/l2_forward_crypto.rst
@@ -86,9 +86,10 @@ The application requires a number of command line options:
./build/l2fwd-crypto [EAL options] -- [-p PORTMASK] [-q NQ] [-s] [-T PERIOD] /
[--cdev_type HW/SW/ANY] [--chain HASH_CIPHER/CIPHER_HASH/CIPHER_ONLY/HASH_ONLY] /
[--cipher_algo ALGO] [--cipher_op ENCRYPT/DECRYPT] [--cipher_key KEY] /
- [--cipher_key_random_size SIZE] [--iv IV] [--iv_random_size SIZE] /
+ [--cipher_key_random_size SIZE] [--cipher_iv IV] [--cipher_iv_random_size SIZE] /
[--auth_algo ALGO] [--auth_op GENERATE/VERIFY] [--auth_key KEY] /
- [--auth_key_random_size SIZE] [--aad AAD] [--aad_random_size SIZE] /
+ [--auth_key_random_size SIZE] [--auth_iv IV] [--auth_iv_random_size SIZE] /
+ [--aad AAD] [--aad_random_size SIZE] /
[--digest size SIZE] [--sessionless] [--cryptodev_mask MASK]
where,
@@ -127,11 +128,11 @@ where,
Note that if --cipher_key is used, this will be ignored.
-* iv: set the IV to be used. Bytes has to be separated with ":"
+* cipher_iv: set the cipher IV to be used. Bytes have to be separated with ":"
-* iv_random_size: set the size of the IV, which will be generated randomly.
+* cipher_iv_random_size: set the size of the cipher IV, which will be generated randomly.
- Note that if --iv is used, this will be ignored.
+ Note that if --cipher_iv is used, this will be ignored.
* auth_algo: select the authentication algorithm (default is sha1-hmac)
@@ -147,6 +148,12 @@ where,
Note that if --auth_key is used, this will be ignored.
+* auth_iv: set the auth IV to be used. Bytes have to be separated with ":"
+
+* auth_iv_random_size: set the size of the auth IV, which will be generated randomly.
+
+ Note that if --auth_iv is used, this will be ignored.
+
* aad: set the AAD to be used. Bytes has to be separated with ":"
* aad_random_size: set the size of the AAD, which will be generated randomly.
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index 1acde76..c0accfc 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -290,6 +290,10 @@ The following are the application command-line options:
Set the size of authentication key.
+* ``--auth-iv-sz <n>``
+
Set the size of the authentication IV.
+
* ``--auth-digest-sz <n>``
Set the size of authentication digest.
@@ -345,9 +349,13 @@ a string of bytes in C byte array format::
Key used in auth operation.
-* ``iv``
+* ``cipher_iv``
+
+ Cipher Initial Vector.
+
+* ``auth_iv``
- Initial vector.
+ Auth Initial Vector.
* ``aad``
@@ -412,7 +420,7 @@ Test vector file for cipher algorithm aes cbc 256 with authorization sha::
0xf5, 0x0c, 0xe7, 0xa2, 0xa6, 0x23, 0xd5, 0x3d, 0x95, 0xd8, 0xcd, 0x86, 0x79, 0xf5, 0x01, 0x47,
0x4f, 0xf9, 0x1d, 0x9d, 0x36, 0xf7, 0x68, 0x1a, 0x64, 0x44, 0x58, 0x5d, 0xe5, 0x81, 0x15, 0x2a,
0x41, 0xe4, 0x0e, 0xaa, 0x1f, 0x04, 0x21, 0xff, 0x2c, 0xf3, 0x73, 0x2b, 0x48, 0x1e, 0xd2, 0xf7
- iv =
+ cipher_iv =
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F
# Section sha 1 hmac buff 32
[sha1_hmac_buff_32]
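For example, a vector file for an algorithm needing both IVs would now carry
two separate entries (the byte values below are illustrative only):

	cipher_iv =
	0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F

	auth_iv =
	0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F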
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
index 7b68a20..542e6c4 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
@@ -60,7 +60,8 @@ static const struct rte_cryptodev_capabilities aesni_gcm_pmd_capabilities[] = {
.min = 0,
.max = 65535,
.increment = 1
- }
+ },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -85,7 +86,8 @@ static const struct rte_cryptodev_capabilities aesni_gcm_pmd_capabilities[] = {
.min = 0,
.max = 65535,
.increment = 1
- }
+ },
+ .iv_size = { 0 }
}, }
}, }
},
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
index d1bc28e..780b88b 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -57,7 +57,8 @@ static const struct rte_cryptodev_capabilities aesni_mb_pmd_capabilities[] = {
.max = 12,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -78,7 +79,8 @@ static const struct rte_cryptodev_capabilities aesni_mb_pmd_capabilities[] = {
.max = 12,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -99,7 +101,8 @@ static const struct rte_cryptodev_capabilities aesni_mb_pmd_capabilities[] = {
.max = 14,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -120,7 +123,8 @@ static const struct rte_cryptodev_capabilities aesni_mb_pmd_capabilities[] = {
.max = 16,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -141,7 +145,8 @@ static const struct rte_cryptodev_capabilities aesni_mb_pmd_capabilities[] = {
.max = 24,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -162,7 +167,8 @@ static const struct rte_cryptodev_capabilities aesni_mb_pmd_capabilities[] = {
.max = 32,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -183,7 +189,8 @@ static const struct rte_cryptodev_capabilities aesni_mb_pmd_capabilities[] = {
.max = 12,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
diff --git a/drivers/crypto/armv8/rte_armv8_pmd_ops.c b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
index 4d9ccbf..78ed770 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd_ops.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
@@ -59,7 +59,8 @@ static const struct rte_cryptodev_capabilities
.max = 20,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -80,7 +81,8 @@ static const struct rte_cryptodev_capabilities
.max = 32,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index d152161..ff3be70 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -217,7 +217,8 @@ static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
.max = 16,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -238,7 +239,8 @@ static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
.max = 20,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -259,7 +261,8 @@ static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
.max = 28,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -280,7 +283,8 @@ static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
.max = 32,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -301,7 +305,8 @@ static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
.max = 48,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -322,7 +327,8 @@ static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
.max = 64,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
index 62ebdbd..8f1a116 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
@@ -60,7 +60,8 @@ static const struct rte_cryptodev_capabilities kasumi_pmd_capabilities[] = {
.min = 8,
.max = 8,
.increment = 0
- }
+ },
+ .iv_size = { 0 }
}, }
}, }
},
diff --git a/drivers/crypto/null/null_crypto_pmd_ops.c b/drivers/crypto/null/null_crypto_pmd_ops.c
index 5f74f0c..f8ad8e4 100644
--- a/drivers/crypto/null/null_crypto_pmd_ops.c
+++ b/drivers/crypto/null/null_crypto_pmd_ops.c
@@ -56,7 +56,8 @@ static const struct rte_cryptodev_capabilities null_crypto_pmd_capabilities[] =
.max = 0,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, },
}, },
},
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index 22a6873..3026dbd 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -57,7 +57,8 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.max = 16,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -78,7 +79,8 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.max = 16,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -99,7 +101,8 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.max = 20,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -120,7 +123,8 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.max = 20,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -141,7 +145,8 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.max = 28,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -162,7 +167,8 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.max = 28,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -183,31 +189,33 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.max = 32,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
{ /* SHA256 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA256,
- .block_size = 64,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 32,
- .max = 32,
- .increment = 0
- },
- .aad_size = { 0 }
- }, }
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA256,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
- },
+ }, }
+ },
{ /* SHA384 HMAC */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
{.sym = {
@@ -225,7 +233,8 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.max = 48,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -246,7 +255,8 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.max = 48,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -267,7 +277,8 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.max = 64,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -288,7 +299,8 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.max = 64,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -353,7 +365,8 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.min = 0,
.max = 65535,
.increment = 1
- }
+ },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -398,7 +411,8 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.min = 8,
.max = 65532,
.increment = 4
- }
+ },
+ .iv_size = { 0 }
}, }
}, }
},
diff --git a/drivers/crypto/qat/qat_crypto_capabilities.h b/drivers/crypto/qat/qat_crypto_capabilities.h
index 1294f24..4bc2c97 100644
--- a/drivers/crypto/qat/qat_crypto_capabilities.h
+++ b/drivers/crypto/qat/qat_crypto_capabilities.h
@@ -52,7 +52,8 @@
.max = 20, \
.increment = 0 \
}, \
- .aad_size = { 0 } \
+ .aad_size = { 0 }, \
+ .iv_size = { 0 } \
}, } \
}, } \
}, \
@@ -73,7 +74,8 @@
.max = 28, \
.increment = 0 \
}, \
- .aad_size = { 0 } \
+ .aad_size = { 0 }, \
+ .iv_size = { 0 } \
}, } \
}, } \
}, \
@@ -94,7 +96,8 @@
.max = 32, \
.increment = 0 \
}, \
- .aad_size = { 0 } \
+ .aad_size = { 0 }, \
+ .iv_size = { 0 } \
}, } \
}, } \
}, \
@@ -114,8 +117,9 @@
.min = 48, \
.max = 48, \
.increment = 0 \
- }, \
- .aad_size = { 0 } \
+ }, \
+ .aad_size = { 0 }, \
+ .iv_size = { 0 } \
}, } \
}, } \
}, \
@@ -136,7 +140,8 @@
.max = 64, \
.increment = 0 \
}, \
- .aad_size = { 0 } \
+ .aad_size = { 0 }, \
+ .iv_size = { 0 } \
}, } \
}, } \
}, \
@@ -157,7 +162,8 @@
.max = 16, \
.increment = 0 \
}, \
- .aad_size = { 0 } \
+ .aad_size = { 0 }, \
+ .iv_size = { 0 } \
}, } \
}, } \
}, \
@@ -178,7 +184,8 @@
.max = 16, \
.increment = 0 \
}, \
- .aad_size = { 0 } \
+ .aad_size = { 0 }, \
+ .iv_size = { 0 } \
}, } \
}, } \
}, \
@@ -203,7 +210,8 @@
.min = 0, \
.max = 240, \
.increment = 1 \
- } \
+ }, \
+ .iv_size = { 0 } \
}, } \
}, } \
}, \
@@ -228,7 +236,8 @@
.min = 1, \
.max = 65535, \
.increment = 1 \
- } \
+ }, \
+ .iv_size = { 0 } \
}, } \
}, } \
}, \
@@ -253,7 +262,8 @@
.min = 16, \
.max = 16, \
.increment = 0 \
- } \
+ }, \
+ .iv_size = { 0 } \
}, } \
}, } \
}, \
@@ -374,7 +384,8 @@
.max = 0, \
.increment = 0 \
}, \
- .aad_size = { 0 } \
+ .aad_size = { 0 }, \
+ .iv_size = { 0 } \
}, }, \
}, }, \
}, \
@@ -439,7 +450,8 @@
.min = 8, \
.max = 8, \
.increment = 0 \
- } \
+ }, \
+ .iv_size = { 0 } \
}, } \
}, } \
}, \
@@ -566,7 +578,8 @@
.min = 16, \
.max = 16, \
.increment = 0 \
- } \
+ }, \
+ .iv_size = { 0 } \
}, } \
}, } \
}
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
index 7ce96be..68ede97 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
@@ -60,7 +60,8 @@ static const struct rte_cryptodev_capabilities snow3g_pmd_capabilities[] = {
.min = 16,
.max = 16,
.increment = 0
- }
+ },
+ .iv_size = { 0 },
}, }
}, }
},
diff --git a/drivers/crypto/zuc/rte_zuc_pmd_ops.c b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
index c24b9bd..02c3c4a 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd_ops.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
@@ -60,7 +60,8 @@ static const struct rte_cryptodev_capabilities zuc_pmd_capabilities[] = {
.min = 16,
.max = 16,
.increment = 0
- }
+ },
+ .iv_size = { 0 }
}, }
}, }
},
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 9f16806..ba5aef7 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -160,14 +160,18 @@ struct l2fwd_crypto_options {
unsigned ckey_param;
int ckey_random_size;
- struct l2fwd_iv iv;
- unsigned int iv_param;
- int iv_random_size;
+ struct l2fwd_iv cipher_iv;
+ unsigned int cipher_iv_param;
+ int cipher_iv_random_size;
struct rte_crypto_sym_xform auth_xform;
uint8_t akey_param;
int akey_random_size;
+ struct l2fwd_iv auth_iv;
+ unsigned int auth_iv_param;
+ int auth_iv_random_size;
+
struct l2fwd_key aad;
unsigned aad_param;
int aad_random_size;
@@ -188,7 +192,8 @@ struct l2fwd_crypto_params {
unsigned digest_length;
unsigned block_size;
- struct l2fwd_iv iv;
+ struct l2fwd_iv cipher_iv;
+ struct l2fwd_iv auth_iv;
struct l2fwd_key aad;
struct rte_cryptodev_sym_session *session;
@@ -453,6 +458,18 @@ l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
rte_crypto_op_attach_sym_session(op, cparams->session);
if (cparams->do_hash) {
+ if (cparams->auth_iv.length) {
+ uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ IV_OFFSET +
+ cparams->cipher_iv.length);
+ /*
+ * Copy IV at the end of the crypto operation,
+ * after the cipher IV, if added
+ */
+ rte_memcpy(iv_ptr, cparams->auth_iv.data,
+ cparams->auth_iv.length);
+ }
if (!cparams->hash_verify) {
/* Append space for digest to end of packet */
op->sym->auth.digest.data = (uint8_t *)rte_pktmbuf_append(m,
@@ -492,7 +509,8 @@ l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
IV_OFFSET);
/* Copy IV at the end of the crypto operation */
- rte_memcpy(iv_ptr, cparams->iv.data, cparams->iv.length);
+ rte_memcpy(iv_ptr, cparams->cipher_iv.data,
+ cparams->cipher_iv.length);
/* For wireless algorithms, offset/length must be in bits */
if (cparams->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
@@ -675,6 +693,18 @@ l2fwd_main_loop(struct l2fwd_crypto_options *options)
port_cparams[i].block_size = options->block_size;
if (port_cparams[i].do_hash) {
+ port_cparams[i].auth_iv.data = options->auth_iv.data;
+ port_cparams[i].auth_iv.length = options->auth_iv.length;
+ if (!options->auth_iv_param)
+ generate_random_key(port_cparams[i].auth_iv.data,
+ port_cparams[i].auth_iv.length);
+ /* Set IV parameters */
+ if (options->auth_iv.length) {
+ options->auth_xform.auth.iv.offset =
+ IV_OFFSET + options->cipher_iv.length;
+ options->auth_xform.auth.iv.length =
+ options->auth_iv.length;
+ }
port_cparams[i].digest_length =
options->auth_xform.auth.digest_length;
if (options->auth_xform.auth.add_auth_data_length) {
@@ -698,16 +728,17 @@ l2fwd_main_loop(struct l2fwd_crypto_options *options)
}
if (port_cparams[i].do_cipher) {
- port_cparams[i].iv.data = options->iv.data;
- port_cparams[i].iv.length = options->iv.length;
- if (!options->iv_param)
- generate_random_key(port_cparams[i].iv.data,
- port_cparams[i].iv.length);
+ port_cparams[i].cipher_iv.data = options->cipher_iv.data;
+ port_cparams[i].cipher_iv.length = options->cipher_iv.length;
+ if (!options->cipher_iv_param)
+ generate_random_key(port_cparams[i].cipher_iv.data,
+ port_cparams[i].cipher_iv.length);
port_cparams[i].cipher_algo = options->cipher_xform.cipher.algo;
/* Set IV parameters */
options->cipher_xform.cipher.iv.offset = IV_OFFSET;
- options->cipher_xform.cipher.iv.length = options->iv.length;
+ options->cipher_xform.cipher.iv.length =
+ options->cipher_iv.length;
}
port_cparams[i].session = initialize_crypto_session(options,
@@ -861,13 +892,15 @@ l2fwd_crypto_usage(const char *prgname)
" --cipher_op ENCRYPT / DECRYPT\n"
" --cipher_key KEY (bytes separated with \":\")\n"
" --cipher_key_random_size SIZE: size of cipher key when generated randomly\n"
- " --iv IV (bytes separated with \":\")\n"
- " --iv_random_size SIZE: size of IV when generated randomly\n"
+ " --cipher_iv IV (bytes separated with \":\")\n"
+ " --cipher_iv_random_size SIZE: size of cipher IV when generated randomly\n"
" --auth_algo ALGO\n"
" --auth_op GENERATE / VERIFY\n"
" --auth_key KEY (bytes separated with \":\")\n"
" --auth_key_random_size SIZE: size of auth key when generated randomly\n"
+ " --auth_iv IV (bytes separated with \":\")\n"
+ " --auth_iv_random_size SIZE: size of auth IV when generated randomly\n"
" --aad AAD (bytes separated with \":\")\n"
" --aad_random_size SIZE: size of AAD when generated randomly\n"
" --digest_size SIZE: size of digest to be generated/verified\n"
@@ -1078,18 +1111,18 @@ l2fwd_crypto_parse_args_long_options(struct l2fwd_crypto_options *options,
else if (strcmp(lgopts[option_index].name, "cipher_key_random_size") == 0)
return parse_size(&options->ckey_random_size, optarg);
- else if (strcmp(lgopts[option_index].name, "iv") == 0) {
- options->iv_param = 1;
- options->iv.length =
- parse_key(options->iv.data, optarg);
- if (options->iv.length > 0)
+ else if (strcmp(lgopts[option_index].name, "cipher_iv") == 0) {
+ options->cipher_iv_param = 1;
+ options->cipher_iv.length =
+ parse_key(options->cipher_iv.data, optarg);
+ if (options->cipher_iv.length > 0)
return 0;
else
return -1;
}
- else if (strcmp(lgopts[option_index].name, "iv_random_size") == 0)
- return parse_size(&options->iv_random_size, optarg);
+ else if (strcmp(lgopts[option_index].name, "cipher_iv_random_size") == 0)
+ return parse_size(&options->cipher_iv_random_size, optarg);
/* Authentication options */
else if (strcmp(lgopts[option_index].name, "auth_algo") == 0) {
@@ -1115,6 +1148,20 @@ l2fwd_crypto_parse_args_long_options(struct l2fwd_crypto_options *options,
return parse_size(&options->akey_random_size, optarg);
}
+
+ else if (strcmp(lgopts[option_index].name, "auth_iv") == 0) {
+ options->auth_iv_param = 1;
+ options->auth_iv.length =
+ parse_key(options->auth_iv.data, optarg);
+ if (options->auth_iv.length > 0)
+ return 0;
+ else
+ return -1;
+ }
+
+ else if (strcmp(lgopts[option_index].name, "auth_iv_random_size") == 0)
+ return parse_size(&options->auth_iv_random_size, optarg);
+
else if (strcmp(lgopts[option_index].name, "aad") == 0) {
options->aad_param = 1;
options->aad.length =
@@ -1233,9 +1280,9 @@ l2fwd_crypto_default_options(struct l2fwd_crypto_options *options)
options->ckey_param = 0;
options->ckey_random_size = -1;
options->cipher_xform.cipher.key.length = 0;
- options->iv_param = 0;
- options->iv_random_size = -1;
- options->iv.length = 0;
+ options->cipher_iv_param = 0;
+ options->cipher_iv_random_size = -1;
+ options->cipher_iv.length = 0;
options->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
options->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
@@ -1246,6 +1293,9 @@ l2fwd_crypto_default_options(struct l2fwd_crypto_options *options)
options->akey_param = 0;
options->akey_random_size = -1;
options->auth_xform.auth.key.length = 0;
+ options->auth_iv_param = 0;
+ options->auth_iv_random_size = -1;
+ options->auth_iv.length = 0;
options->aad_param = 0;
options->aad_random_size = -1;
options->aad.length = 0;
@@ -1267,7 +1317,7 @@ display_cipher_info(struct l2fwd_crypto_options *options)
rte_hexdump(stdout, "Cipher key:",
options->cipher_xform.cipher.key.data,
options->cipher_xform.cipher.key.length);
- rte_hexdump(stdout, "IV:", options->iv.data, options->iv.length);
+ rte_hexdump(stdout, "IV:", options->cipher_iv.data, options->cipher_iv.length);
}
static void
@@ -1279,6 +1329,7 @@ display_auth_info(struct l2fwd_crypto_options *options)
rte_hexdump(stdout, "Auth key:",
options->auth_xform.auth.key.data,
options->auth_xform.auth.key.length);
+ rte_hexdump(stdout, "IV:", options->auth_iv.data, options->auth_iv.length);
rte_hexdump(stdout, "AAD:", options->aad.data, options->aad.length);
}
@@ -1316,8 +1367,11 @@ l2fwd_crypto_options_print(struct l2fwd_crypto_options *options)
if (options->akey_param && (options->akey_random_size != -1))
printf("Auth key already parsed, ignoring size of random key\n");
- if (options->iv_param && (options->iv_random_size != -1))
- printf("IV already parsed, ignoring size of random IV\n");
+ if (options->cipher_iv_param && (options->cipher_iv_random_size != -1))
+ printf("Cipher IV already parsed, ignoring size of random IV\n");
+
+ if (options->auth_iv_param && (options->auth_iv_random_size != -1))
+ printf("Auth IV already parsed, ignoring size of random IV\n");
if (options->aad_param && (options->aad_random_size != -1))
printf("AAD already parsed, ignoring size of random AAD\n");
@@ -1365,14 +1419,16 @@ l2fwd_crypto_parse_args(struct l2fwd_crypto_options *options,
{ "cipher_op", required_argument, 0, 0 },
{ "cipher_key", required_argument, 0, 0 },
{ "cipher_key_random_size", required_argument, 0, 0 },
+ { "cipher_iv", required_argument, 0, 0 },
+ { "cipher_iv_random_size", required_argument, 0, 0 },
{ "auth_algo", required_argument, 0, 0 },
{ "auth_op", required_argument, 0, 0 },
{ "auth_key", required_argument, 0, 0 },
{ "auth_key_random_size", required_argument, 0, 0 },
+ { "auth_iv", required_argument, 0, 0 },
+ { "auth_iv_random_size", required_argument, 0, 0 },
- { "iv", required_argument, 0, 0 },
- { "iv_random_size", required_argument, 0, 0 },
{ "aad", required_argument, 0, 0 },
{ "aad_random_size", required_argument, 0, 0 },
{ "digest_size", required_argument, 0, 0 },
@@ -1660,8 +1716,10 @@ initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports,
options->block_size = cap->sym.cipher.block_size;
- check_iv_param(&cap->sym.cipher.iv_size, options->iv_param,
- options->iv_random_size, &options->iv.length);
+ check_iv_param(&cap->sym.cipher.iv_size,
+ options->cipher_iv_param,
+ options->cipher_iv_random_size,
+ &options->cipher_iv.length);
/*
* Check if length of provided cipher key is supported
@@ -1731,6 +1789,10 @@ initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports,
continue;
}
+ check_iv_param(&cap->sym.auth.iv_size,
+ options->auth_iv_param,
+ options->auth_iv_random_size,
+ &options->auth_iv.length);
/*
* Check if length of provided AAD is supported
* by the algorithm chosen.
@@ -1972,9 +2034,13 @@ reserve_key_memory(struct l2fwd_crypto_options *options)
if (options->auth_xform.auth.key.data == NULL)
rte_exit(EXIT_FAILURE, "Failed to allocate memory for auth key");
- options->iv.data = rte_malloc("iv", MAX_KEY_SIZE, 0);
- if (options->iv.data == NULL)
- rte_exit(EXIT_FAILURE, "Failed to allocate memory for IV");
+ options->cipher_iv.data = rte_malloc("cipher iv", MAX_KEY_SIZE, 0);
+ if (options->cipher_iv.data == NULL)
+ rte_exit(EXIT_FAILURE, "Failed to allocate memory for cipher IV");
+
+ options->auth_iv.data = rte_malloc("auth iv", MAX_KEY_SIZE, 0);
+ if (options->auth_iv.data == NULL)
+ rte_exit(EXIT_FAILURE, "Failed to allocate memory for auth IV");
options->aad.data = rte_malloc("aad", MAX_KEY_SIZE, 0);
if (options->aad.data == NULL)
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index c1a1e27..0e84bad 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -393,6 +393,30 @@ struct rte_crypto_auth_xform {
* of the AAD data is specified in additional authentication data
* length field of the rte_crypto_sym_op_data structure
*/
+
+ struct {
+ uint16_t offset;
+ /**< Starting point for Initialisation Vector or Counter,
+ * specified as number of bytes from start of crypto
+ * operation (rte_crypto_op).
+ *
+ * - For KASUMI in F9 mode, SNOW 3G in UIA2 mode,
+ * for ZUC in EIA3 mode and for AES-GMAC, this is the
+ * authentication Initialisation Vector (IV) value.
+ *
+ *
+ * For optimum performance, the data pointed to SHOULD
+ * be 8-byte aligned.
+ */
+ uint16_t length;
+ /**< Length of valid IV data.
+ *
+ * - For KASUMI in F9 mode, SNOW3G in UIA2 mode, for
+ * ZUC in EIA3 mode and for AES-GMAC, this is the length
+ * of the IV.
+ *
+ */
+ } iv; /**< Initialisation vector parameters */
};
/** Crypto transformation types */
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index a466ed7..5aa177f 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -272,7 +272,8 @@ rte_cryptodev_sym_capability_check_cipher(
int
rte_cryptodev_sym_capability_check_auth(
const struct rte_cryptodev_symmetric_capability *capability,
- uint16_t key_size, uint16_t digest_size, uint16_t aad_size)
+ uint16_t key_size, uint16_t digest_size, uint16_t aad_size,
+ uint16_t iv_size)
{
if (param_range_check(key_size, capability->auth.key_size))
return -1;
@@ -283,6 +284,9 @@ rte_cryptodev_sym_capability_check_auth(
if (param_range_check(aad_size, capability->auth.aad_size))
return -1;
+ if (param_range_check(iv_size, capability->auth.iv_size))
+ return -1;
+
return 0;
}
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 91f3375..75b423a 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -184,6 +184,8 @@ struct rte_cryptodev_symmetric_capability {
/**< digest size range */
struct rte_crypto_param_range aad_size;
/**< Additional authentication data size range */
+ struct rte_crypto_param_range iv_size;
+ /**< Initialisation vector data size range */
} auth;
/**< Symmetric Authentication transform capabilities */
struct {
@@ -260,6 +262,7 @@ rte_cryptodev_sym_capability_check_cipher(
* @param key_size Auth key size.
* @param digest_size Auth digest size.
* @param aad_size Auth aad size.
+ * @param iv_size Auth initialisation vector size.
*
* @return
* - Return 0 if the parameters are in range of the capability.
@@ -268,7 +271,8 @@ rte_cryptodev_sym_capability_check_cipher(
int
rte_cryptodev_sym_capability_check_auth(
const struct rte_cryptodev_symmetric_capability *capability,
- uint16_t key_size, uint16_t digest_size, uint16_t aad_size);
+ uint16_t key_size, uint16_t digest_size, uint16_t aad_size,
+ uint16_t iv_size);
/**
* Provide the cipher algorithm enum, given an algorithm string
--
2.9.4
^ permalink raw reply [relevance 1%]
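With the new iv_size range above, the auth capability check takes the IV length as a fourth size parameter; a minimal usage sketch (dev_id and the SNOW 3G UIA2 sizes are assumptions):

const struct rte_cryptodev_sym_capability *cap;
struct rte_cryptodev_sym_capability_idx cap_idx = {
	.type = RTE_CRYPTO_SYM_XFORM_AUTH,
	.algo.auth = RTE_CRYPTO_AUTH_SNOW3G_UIA2,
};

cap = rte_cryptodev_sym_capability_get(dev_id, &cap_idx);
if (cap == NULL || rte_cryptodev_sym_capability_check_auth(cap,
		16 /* key */, 4 /* digest */, 16 /* aad */,
		16 /* iv */) != 0)
	return -1;	/* parameters not supported by this device */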
* [dpdk-dev] [PATCH v3 13/26] cryptodev: move IV parameters to crypto session
` (4 preceding siblings ...)
2017-06-29 11:35 1% ` [dpdk-dev] [PATCH v3 12/26] cryptodev: pass IV as offset Pablo de Lara
@ 2017-06-29 11:35 1% ` Pablo de Lara
2017-06-29 11:35 1% ` [dpdk-dev] [PATCH v3 14/26] cryptodev: add auth IV Pablo de Lara
` (4 subsequent siblings)
10 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2017-06-29 11:35 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal,
hemant.agrawal, fiona.trahe, john.griffin, deepak.k.jain
Cc: dev, Pablo de Lara
Since the IV parameters (offset and length) should not
change between operations in the same session, these
parameters are moved to the crypto transform structure,
so they are stored in the session.
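In practice the application supplies the IV offset and length once,
at session creation, and per operation only copies the IV bytes into
the crypto op, e.g. (minimal sketch; IV_OFFSET follows the convention
used by the example apps below):

#define IV_OFFSET	(sizeof(struct rte_crypto_op) + \
		sizeof(struct rte_crypto_sym_op))

/* once, when building the transform chain */
cipher_xform.cipher.iv.offset = IV_OFFSET;
cipher_xform.cipher.iv.length = 16;

/* per operation: write only the IV bytes */
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET);
rte_memcpy(iv_ptr, iv, 16);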
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
app/test-crypto-perf/cperf_ops.c | 19 ++-
app/test-crypto-perf/cperf_ops.h | 3 +-
app/test-crypto-perf/cperf_test_latency.c | 7 +-
app/test-crypto-perf/cperf_test_throughput.c | 6 +-
app/test-crypto-perf/cperf_test_vectors.c | 9 ++
app/test-crypto-perf/cperf_test_verify.c | 6 +-
doc/guides/prog_guide/cryptodev_lib.rst | 5 -
doc/guides/rel_notes/release_17_08.rst | 2 +
drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 22 ++--
drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h | 5 +
drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c | 11 +-
drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h | 5 +
drivers/crypto/armv8/rte_armv8_pmd.c | 12 +-
drivers/crypto/armv8/rte_armv8_pmd_private.h | 7 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 39 ++++--
drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 8 +-
drivers/crypto/kasumi/rte_kasumi_pmd.c | 25 ++--
drivers/crypto/kasumi/rte_kasumi_pmd_private.h | 1 +
drivers/crypto/null/null_crypto_pmd_ops.c | 6 +-
drivers/crypto/openssl/rte_openssl_pmd.c | 17 ++-
drivers/crypto/openssl/rte_openssl_pmd_private.h | 5 +
drivers/crypto/qat/qat_adf/qat_algs.h | 4 +
drivers/crypto/qat/qat_crypto.c | 44 +++----
drivers/crypto/snow3g/rte_snow3g_pmd.c | 25 ++--
drivers/crypto/snow3g/rte_snow3g_pmd_private.h | 1 +
drivers/crypto/zuc/rte_zuc_pmd.c | 16 +--
drivers/crypto/zuc/rte_zuc_pmd_ops.c | 2 +-
drivers/crypto/zuc/rte_zuc_pmd_private.h | 1 +
examples/ipsec-secgw/esp.c | 9 --
examples/ipsec-secgw/ipsec.h | 3 +
examples/ipsec-secgw/sa.c | 20 +++
examples/l2fwd-crypto/main.c | 90 ++++++++------
lib/librte_cryptodev/rte_crypto_sym.h | 98 +++++++--------
test/test/test_cryptodev.c | 134 ++++++++++++---------
test/test/test_cryptodev_blockcipher.c | 4 +-
test/test/test_cryptodev_perf.c | 61 +++++-----
36 files changed, 417 insertions(+), 315 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 10002cd..16476ee 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -106,9 +106,6 @@ cperf_set_ops_cipher(struct rte_crypto_op **ops,
sym_op->m_dst = bufs_out[i];
/* cipher parameters */
- sym_op->cipher.iv.offset = iv_offset;
- sym_op->cipher.iv.length = test_vector->iv.length;
-
if (options->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
options->cipher_algo == RTE_CRYPTO_CIPHER_KASUMI_F8 ||
options->cipher_algo == RTE_CRYPTO_CIPHER_ZUC_EEA3)
@@ -216,9 +213,6 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
sym_op->m_dst = bufs_out[i];
/* cipher parameters */
- sym_op->cipher.iv.offset = iv_offset;
- sym_op->cipher.iv.length = test_vector->iv.length;
-
if (options->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
options->cipher_algo == RTE_CRYPTO_CIPHER_KASUMI_F8 ||
options->cipher_algo == RTE_CRYPTO_CIPHER_ZUC_EEA3)
@@ -304,9 +298,6 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
sym_op->m_dst = bufs_out[i];
/* cipher parameters */
- sym_op->cipher.iv.offset = iv_offset;
- sym_op->cipher.iv.length = test_vector->iv.length;
-
sym_op->cipher.data.length = options->test_buffer_size;
sym_op->cipher.data.offset =
RTE_ALIGN_CEIL(options->auth_aad_sz, 16);
@@ -368,7 +359,8 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
static struct rte_cryptodev_sym_session *
cperf_create_session(uint8_t dev_id,
const struct cperf_options *options,
- const struct cperf_test_vector *test_vector)
+ const struct cperf_test_vector *test_vector,
+ uint16_t iv_offset)
{
struct rte_crypto_sym_xform cipher_xform;
struct rte_crypto_sym_xform auth_xform;
@@ -382,6 +374,7 @@ cperf_create_session(uint8_t dev_id,
cipher_xform.next = NULL;
cipher_xform.cipher.algo = options->cipher_algo;
cipher_xform.cipher.op = options->cipher_op;
+ cipher_xform.cipher.iv.offset = iv_offset;
/* cipher different than null */
if (options->cipher_algo != RTE_CRYPTO_CIPHER_NULL) {
@@ -389,9 +382,12 @@ cperf_create_session(uint8_t dev_id,
test_vector->cipher_key.data;
cipher_xform.cipher.key.length =
test_vector->cipher_key.length;
+ cipher_xform.cipher.iv.length = test_vector->iv.length;
+
} else {
cipher_xform.cipher.key.data = NULL;
cipher_xform.cipher.key.length = 0;
+ cipher_xform.cipher.iv.length = 0;
}
/* create crypto session */
sess = rte_cryptodev_sym_session_create(dev_id, &cipher_xform);
@@ -435,6 +431,7 @@ cperf_create_session(uint8_t dev_id,
cipher_xform.next = NULL;
cipher_xform.cipher.algo = options->cipher_algo;
cipher_xform.cipher.op = options->cipher_op;
+ cipher_xform.cipher.iv.offset = iv_offset;
/* cipher different than null */
if (options->cipher_algo != RTE_CRYPTO_CIPHER_NULL) {
@@ -442,9 +439,11 @@ cperf_create_session(uint8_t dev_id,
test_vector->cipher_key.data;
cipher_xform.cipher.key.length =
test_vector->cipher_key.length;
+ cipher_xform.cipher.iv.length = test_vector->iv.length;
} else {
cipher_xform.cipher.key.data = NULL;
cipher_xform.cipher.key.length = 0;
+ cipher_xform.cipher.iv.length = 0;
}
/*
diff --git a/app/test-crypto-perf/cperf_ops.h b/app/test-crypto-perf/cperf_ops.h
index f7b431c..bb83cd5 100644
--- a/app/test-crypto-perf/cperf_ops.h
+++ b/app/test-crypto-perf/cperf_ops.h
@@ -42,7 +42,8 @@
typedef struct rte_cryptodev_sym_session *(*cperf_sessions_create_t)(
uint8_t dev_id, const struct cperf_options *options,
- const struct cperf_test_vector *test_vector);
+ const struct cperf_test_vector *test_vector,
+ uint16_t iv_offset);
typedef int (*cperf_populate_ops_t)(struct rte_crypto_op **ops,
struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 3aca1b4..d37083f 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -211,7 +211,12 @@ cperf_latency_test_constructor(uint8_t dev_id, uint16_t qp_id,
ctx->options = options;
ctx->test_vector = test_vector;
- ctx->sess = op_fns->sess_create(dev_id, options, test_vector);
+ /* IV goes at the end of the crypto operation */
+ uint16_t iv_offset = sizeof(struct rte_crypto_op) +
+ sizeof(struct rte_crypto_sym_op) +
+ sizeof(struct cperf_op_result *);
+
+ ctx->sess = op_fns->sess_create(dev_id, options, test_vector, iv_offset);
if (ctx->sess == NULL)
goto err;
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index ba883fd..4d2b3d3 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -195,7 +195,11 @@ cperf_throughput_test_constructor(uint8_t dev_id, uint16_t qp_id,
ctx->options = options;
ctx->test_vector = test_vector;
- ctx->sess = op_fns->sess_create(dev_id, options, test_vector);
+ /* IV goes at the end of the crypto operation */
+ uint16_t iv_offset = sizeof(struct rte_crypto_op) +
+ sizeof(struct rte_crypto_sym_op);
+
+ ctx->sess = op_fns->sess_create(dev_id, options, test_vector, iv_offset);
if (ctx->sess == NULL)
goto err;
diff --git a/app/test-crypto-perf/cperf_test_vectors.c b/app/test-crypto-perf/cperf_test_vectors.c
index 36b3f6f..4a14fb3 100644
--- a/app/test-crypto-perf/cperf_test_vectors.c
+++ b/app/test-crypto-perf/cperf_test_vectors.c
@@ -423,7 +423,16 @@ cperf_test_vector_get_dummy(struct cperf_options *options)
memcpy(t_vec->iv.data, iv, options->cipher_iv_sz);
}
t_vec->ciphertext.length = options->max_buffer_size;
+ /* Set IV parameters */
+ t_vec->iv.data = rte_malloc(NULL, options->cipher_iv_sz,
+ 16);
+ if (options->cipher_iv_sz && t_vec->iv.data == NULL) {
+ rte_free(t_vec);
+ return NULL;
+ }
+ memcpy(t_vec->iv.data, iv, options->cipher_iv_sz);
t_vec->iv.length = options->cipher_iv_sz;
+
t_vec->data.cipher_offset = 0;
t_vec->data.cipher_length = options->max_buffer_size;
}
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index d5a2b33..1b58b1d 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -199,7 +199,11 @@ cperf_verify_test_constructor(uint8_t dev_id, uint16_t qp_id,
ctx->options = options;
ctx->test_vector = test_vector;
- ctx->sess = op_fns->sess_create(dev_id, options, test_vector);
+ /* IV goes at the end of the crypto operation */
+ uint16_t iv_offset = sizeof(struct rte_crypto_op) +
+ sizeof(struct rte_crypto_sym_op);
+
+ ctx->sess = op_fns->sess_create(dev_id, options, test_vector, iv_offset);
if (ctx->sess == NULL)
goto err;
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index 48c58a9..4e352f4 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -535,11 +535,6 @@ chain.
uint32_t offset;
uint32_t length;
} data; /**< Data offsets and length for ciphering */
-
- struct {
- uint16_t offset;
- uint16_t length;
- } iv; /**< Initialisation vector parameters */
} cipher;
struct {
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index 68e8022..4775bd2 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -159,6 +159,8 @@ API Changes
with a zero length array.
* Replaced pointer and physical address of IV in ``rte_crypto_sym_op`` with
offset from the start of the crypto operation.
+ * Moved length and offset of cipher IV from ``rte_crypto_sym_op`` to
+ ``rte_crypto_cipher_xform``.
ABI Changes
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 217ea65..414f22b 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -104,6 +104,17 @@ aesni_gcm_set_session_parameters(struct aesni_gcm_session *sess,
return -EINVAL;
}
+ /* Set IV parameters */
+ sess->iv.offset = cipher_xform->cipher.iv.offset;
+ sess->iv.length = cipher_xform->cipher.iv.length;
+
+ /* IV check */
+ if (sess->iv.length != 16 && sess->iv.length != 12 &&
+ sess->iv.length != 0) {
+ GCM_LOG_ERR("Wrong IV length");
+ return -EINVAL;
+ }
+
/* Select Crypto operation */
if (cipher_xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT &&
auth_xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE)
@@ -221,20 +232,13 @@ process_gcm_crypto_op(struct rte_crypto_op *op,
src = rte_pktmbuf_mtod_offset(m_src, uint8_t *, offset);
- /* sanity checks */
- if (sym_op->cipher.iv.length != 16 && sym_op->cipher.iv.length != 12 &&
- sym_op->cipher.iv.length != 0) {
- GCM_LOG_ERR("iv");
- return -1;
- }
-
iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
- sym_op->cipher.iv.offset);
+ session->iv.offset);
/*
* GCM working in 12B IV mode => 16B pre-counter block we need
* to set BE LSB to 1, driver expects that 16B is allocated
*/
- if (sym_op->cipher.iv.length == 12) {
+ if (session->iv.length == 12) {
uint32_t *iv_padd = (uint32_t *)&(iv_ptr[12]);
*iv_padd = rte_bswap32(1);
}
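The 12-byte case above is the 96-bit IV construction of J0 from NIST
SP800-38D; a sketch of the pre-counter block the PMD expects:

/* J0 layout for a 96-bit GCM IV (all 16 bytes must be allocated
 * at iv.offset by the application; the PMD writes the counter):
 *
 *   bytes  0..11   IV
 *   bytes 12..15   0x00 0x00 0x00 0x01  (32-bit big-endian counter)
 */
uint32_t *counter = (uint32_t *)&iv_ptr[12];
*counter = rte_bswap32(1);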
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
index 0496b44..2ed96f8 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
@@ -90,6 +90,11 @@ enum aesni_gcm_key {
/** AESNI GCM private session structure */
struct aesni_gcm_session {
+ struct {
+ uint16_t length;
+ uint16_t offset;
+ } iv;
+ /**< IV parameters */
enum aesni_gcm_operation op;
/**< GCM operation type */
enum aesni_gcm_key key;
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index 1f03582..0a20dec 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -246,6 +246,10 @@ aesni_mb_set_session_cipher_parameters(const struct aesni_mb_op_fns *mb_ops,
return -1;
}
+ /* Set IV parameters */
+ sess->iv.offset = xform->cipher.iv.offset;
+ sess->iv.length = xform->cipher.iv.length;
+
/* Expanded cipher keys */
(*aes_keyexp_fn)(xform->cipher.key.data,
sess->cipher.expanded_aes_keys.encode,
@@ -300,6 +304,9 @@ aesni_mb_set_session_parameters(const struct aesni_mb_op_fns *mb_ops,
return -1;
}
+ /* Default IV length = 0 */
+ sess->iv.length = 0;
+
if (aesni_mb_set_session_auth_parameters(mb_ops, sess, auth_xform)) {
MB_LOG_ERR("Invalid/unsupported authentication parameters");
return -1;
@@ -472,8 +479,8 @@ set_mb_job_params(JOB_AES_HMAC *job, struct aesni_mb_qp *qp,
/* Set IV parameters */
job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
- op->sym->cipher.iv.offset);
- job->iv_len_in_bytes = op->sym->cipher.iv.length;
+ session->iv.offset);
+ job->iv_len_in_bytes = session->iv.length;
/* Data Parameter */
job->src = rte_pktmbuf_mtod(m_src, uint8_t *);
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
index 0d82699..5c50d37 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
@@ -167,6 +167,11 @@ struct aesni_mb_qp {
/** AES-NI multi-buffer private session structure */
struct aesni_mb_session {
JOB_CHAIN_ORDER chain_order;
+ struct {
+ uint16_t length;
+ uint16_t offset;
+ } iv;
+ /**< IV parameters */
/** Cipher Parameters */
struct {
diff --git a/drivers/crypto/armv8/rte_armv8_pmd.c b/drivers/crypto/armv8/rte_armv8_pmd.c
index 693eccd..dac4fc3 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd.c
@@ -432,7 +432,7 @@ armv8_crypto_set_session_chained_parameters(struct armv8_crypto_session *sess,
case RTE_CRYPTO_CIPHER_AES_CBC:
sess->cipher.algo = calg;
/* IV len is always 16 bytes (block size) for AES CBC */
- sess->cipher.iv_len = 16;
+ sess->cipher.iv.length = 16;
break;
default:
return -EINVAL;
@@ -523,6 +523,9 @@ armv8_crypto_set_session_parameters(struct armv8_crypto_session *sess,
return -EINVAL;
}
+ /* Set IV offset */
+ sess->cipher.iv.offset = cipher_xform->cipher.iv.offset;
+
if (is_chained_op) {
ret = armv8_crypto_set_session_chained_parameters(sess,
cipher_xform, auth_xform);
@@ -649,13 +652,8 @@ process_armv8_chained_op
op->sym->auth.digest.length);
}
- if (unlikely(op->sym->cipher.iv.length != sess->cipher.iv_len)) {
- op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- return;
- }
-
arg.cipher.iv = rte_crypto_op_ctod_offset(op, uint8_t *,
- op->sym->cipher.iv.offset);
+ sess->cipher.iv.offset);
arg.cipher.key = sess->cipher.key.data;
/* Acquire combined mode function */
crypto_func = sess->crypto_func;
diff --git a/drivers/crypto/armv8/rte_armv8_pmd_private.h b/drivers/crypto/armv8/rte_armv8_pmd_private.h
index b75107f..75bde9f 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd_private.h
+++ b/drivers/crypto/armv8/rte_armv8_pmd_private.h
@@ -159,8 +159,11 @@ struct armv8_crypto_session {
/**< cipher operation direction */
enum rte_crypto_cipher_algorithm algo;
/**< cipher algorithm */
- int iv_len;
- /**< IV length */
+ struct {
+ uint16_t length;
+ uint16_t offset;
+ } iv;
+ /**< IV parameters */
struct {
uint8_t data[256];
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 1605701..3930794 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -88,7 +88,7 @@ build_authenc_fd(dpaa2_sec_session *sess,
uint8_t *old_icv;
uint32_t mem_len = (7 * sizeof(struct qbman_fle)) + icv_len;
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
- op->sym->cipher.iv.offset);
+ sess->iv.offset);
PMD_INIT_FUNC_TRACE();
@@ -138,7 +138,7 @@ build_authenc_fd(dpaa2_sec_session *sess,
sym_op->auth.digest.length,
sym_op->cipher.data.offset,
sym_op->cipher.data.length,
- sym_op->cipher.iv.length,
+ sess->iv.length,
sym_op->m_src->data_off);
/* Configure Output FLE with Scatter/Gather Entry */
@@ -163,7 +163,7 @@ build_authenc_fd(dpaa2_sec_session *sess,
DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
sge->length = sym_op->auth.digest.length;
DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
- sym_op->cipher.iv.length));
+ sess->iv.length));
}
DPAA2_SET_FLE_FIN(sge);
@@ -175,13 +175,13 @@ build_authenc_fd(dpaa2_sec_session *sess,
DPAA2_SET_FLE_SG_EXT(fle);
DPAA2_SET_FLE_FIN(fle);
fle->length = (sess->dir == DIR_ENC) ?
- (sym_op->auth.data.length + sym_op->cipher.iv.length) :
- (sym_op->auth.data.length + sym_op->cipher.iv.length +
+ (sym_op->auth.data.length + sess->iv.length) :
+ (sym_op->auth.data.length + sess->iv.length +
sym_op->auth.digest.length);
/* Configure Input SGE for Encap/Decap */
DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(iv_ptr));
- sge->length = sym_op->cipher.iv.length;
+ sge->length = sess->iv.length;
sge++;
DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
@@ -198,7 +198,7 @@ build_authenc_fd(dpaa2_sec_session *sess,
sge->length = sym_op->auth.digest.length;
DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
sym_op->auth.digest.length +
- sym_op->cipher.iv.length));
+ sess->iv.length));
}
DPAA2_SET_FLE_FIN(sge);
if (auth_only_len) {
@@ -310,7 +310,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
struct sec_flow_context *flc;
struct ctxt_priv *priv = sess->ctxt;
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
- op->sym->cipher.iv.offset);
+ sess->iv.offset);
PMD_INIT_FUNC_TRACE();
@@ -347,21 +347,21 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
flc = &priv->flc_desc[0].flc;
DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
DPAA2_SET_FD_LEN(fd, sym_op->cipher.data.length +
- sym_op->cipher.iv.length);
+ sess->iv.length);
DPAA2_SET_FD_COMPOUND_FMT(fd);
DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
PMD_TX_LOG(DEBUG, "cipher_off: 0x%x/length %d,ivlen=%d data_off: 0x%x",
sym_op->cipher.data.offset,
sym_op->cipher.data.length,
- sym_op->cipher.iv.length,
+ sess->iv.length,
sym_op->m_src->data_off);
DPAA2_SET_FLE_ADDR(fle, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
DPAA2_SET_FLE_OFFSET(fle, sym_op->cipher.data.offset +
sym_op->m_src->data_off);
- fle->length = sym_op->cipher.data.length + sym_op->cipher.iv.length;
+ fle->length = sym_op->cipher.data.length + sess->iv.length;
PMD_TX_LOG(DEBUG, "1 - flc = %p, fle = %p FLEaddr = %x-%x, length %d",
flc, fle, fle->addr_hi, fle->addr_lo, fle->length);
@@ -369,12 +369,12 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
fle++;
DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
- fle->length = sym_op->cipher.data.length + sym_op->cipher.iv.length;
+ fle->length = sym_op->cipher.data.length + sess->iv.length;
DPAA2_SET_FLE_SG_EXT(fle);
DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(iv_ptr));
- sge->length = sym_op->cipher.iv.length;
+ sge->length = sess->iv.length;
sge++;
DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
@@ -798,6 +798,10 @@ dpaa2_sec_cipher_init(struct rte_cryptodev *dev,
cipherdata.key_enc_flags = 0;
cipherdata.key_type = RTA_DATA_IMM;
+ /* Set IV parameters */
+ session->iv.offset = xform->cipher.iv.offset;
+ session->iv.length = xform->cipher.iv.length;
+
switch (xform->cipher.algo) {
case RTE_CRYPTO_CIPHER_AES_CBC:
cipherdata.algtype = OP_ALG_ALGSEL_AES;
@@ -1016,6 +1020,11 @@ dpaa2_sec_aead_init(struct rte_cryptodev *dev,
(cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
DPAA2_SEC_HASH_CIPHER : DPAA2_SEC_CIPHER_HASH;
}
+
+ /* Set IV parameters */
+ session->iv.offset = cipher_xform->iv.offset;
+ session->iv.length = cipher_xform->iv.length;
+
/* For SEC AEAD only one descriptor is required */
priv = (struct ctxt_priv *)rte_zmalloc(NULL,
sizeof(struct ctxt_priv) + sizeof(struct sec_flc_desc),
@@ -1216,6 +1225,10 @@ dpaa2_sec_session_configure(struct rte_cryptodev *dev,
RTE_LOG(ERR, PMD, "invalid session struct");
return NULL;
}
+
+ /* Default IV length = 0 */
+ session->iv.length = 0;
+
/* Cipher Only */
if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER && xform->next == NULL) {
session->ctxt_type = DPAA2_SEC_CIPHER;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index f5c6169..d152161 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -187,6 +187,10 @@ typedef struct dpaa2_sec_session_entry {
uint8_t *data; /**< pointer to key data */
size_t length; /**< key length in bytes */
} auth_key;
+ struct {
+ uint16_t length; /**< IV length in bytes */
+ uint16_t offset; /**< IV offset in bytes */
+ } iv;
uint8_t status;
union {
struct dpaa2_sec_cipher_ctxt cipher_ctxt;
@@ -275,8 +279,8 @@ static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
.min = 32,
.max = 32,
.increment = 0
- },
- .aad_size = { 0 }
+ },
+ .aad_size = { 0 }
}, }
}, }
},
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd.c b/drivers/crypto/kasumi/rte_kasumi_pmd.c
index 9a0b4a8..c92f5d1 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd.c
@@ -116,6 +116,13 @@ kasumi_set_session_parameters(struct kasumi_session *sess,
/* Only KASUMI F8 supported */
if (cipher_xform->cipher.algo != RTE_CRYPTO_CIPHER_KASUMI_F8)
return -EINVAL;
+
+ sess->iv_offset = cipher_xform->cipher.iv.offset;
+ if (cipher_xform->cipher.iv.length != KASUMI_IV_LENGTH) {
+ KASUMI_LOG_ERR("Wrong IV length");
+ return -EINVAL;
+ }
+
/* Initialize key */
sso_kasumi_init_f8_key_sched(cipher_xform->cipher.key.data,
&sess->pKeySched_cipher);
@@ -179,13 +186,6 @@ process_kasumi_cipher_op(struct rte_crypto_op **ops,
uint32_t num_bytes[num_ops];
for (i = 0; i < num_ops; i++) {
- /* Sanity checks. */
- if (ops[i]->sym->cipher.iv.length != KASUMI_IV_LENGTH) {
- ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- KASUMI_LOG_ERR("iv");
- break;
- }
-
src[i] = rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
(ops[i]->sym->cipher.data.offset >> 3);
dst[i] = ops[i]->sym->m_dst ?
@@ -194,7 +194,7 @@ process_kasumi_cipher_op(struct rte_crypto_op **ops,
rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
(ops[i]->sym->cipher.data.offset >> 3);
iv_ptr = rte_crypto_op_ctod_offset(ops[i], uint8_t *,
- ops[i]->sym->cipher.iv.offset);
+ session->iv_offset);
iv[i] = *((uint64_t *)(iv_ptr));
num_bytes[i] = ops[i]->sym->cipher.data.length >> 3;
@@ -218,13 +218,6 @@ process_kasumi_cipher_op_bit(struct rte_crypto_op *op,
uint64_t iv;
uint32_t length_in_bits, offset_in_bits;
- /* Sanity checks. */
- if (unlikely(op->sym->cipher.iv.length != KASUMI_IV_LENGTH)) {
- op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- KASUMI_LOG_ERR("iv");
- return 0;
- }
-
offset_in_bits = op->sym->cipher.data.offset;
src = rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
if (op->sym->m_dst == NULL) {
@@ -234,7 +227,7 @@ process_kasumi_cipher_op_bit(struct rte_crypto_op *op,
}
dst = rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *);
iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
- op->sym->cipher.iv.offset);
+ session->iv_offset);
iv = *((uint64_t *)(iv_ptr));
length_in_bits = op->sym->cipher.data.length;
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_private.h b/drivers/crypto/kasumi/rte_kasumi_pmd_private.h
index fb586ca..6a0d47a 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd_private.h
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd_private.h
@@ -92,6 +92,7 @@ struct kasumi_session {
sso_kasumi_key_sched_t pKeySched_hash;
enum kasumi_operation op;
enum rte_crypto_auth_operation auth_op;
+ uint16_t iv_offset;
} __rte_cache_aligned;
diff --git a/drivers/crypto/null/null_crypto_pmd_ops.c b/drivers/crypto/null/null_crypto_pmd_ops.c
index 12c946c..5f74f0c 100644
--- a/drivers/crypto/null/null_crypto_pmd_ops.c
+++ b/drivers/crypto/null/null_crypto_pmd_ops.c
@@ -72,11 +72,7 @@ static const struct rte_cryptodev_capabilities null_crypto_pmd_capabilities[] =
.max = 0,
.increment = 0
},
- .iv_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- }
+ .iv_size = { 0 }
}, },
}, }
},
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 6bfa06f..970c735 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -264,6 +264,10 @@ openssl_set_session_cipher_parameters(struct openssl_session *sess,
/* Select cipher key */
sess->cipher.key.length = xform->cipher.key.length;
+ /* Set IV parameters */
+ sess->iv.offset = xform->cipher.iv.offset;
+ sess->iv.length = xform->cipher.iv.length;
+
/* Select cipher algo */
switch (xform->cipher.algo) {
case RTE_CRYPTO_CIPHER_3DES_CBC:
@@ -397,6 +401,9 @@ openssl_set_session_parameters(struct openssl_session *sess,
return -EINVAL;
}
+ /* Default IV length = 0 */
+ sess->iv.length = 0;
+
/* cipher_xform must be check before auth_xform */
if (cipher_xform) {
if (openssl_set_session_cipher_parameters(
@@ -924,8 +931,8 @@ process_openssl_combined_op
}
iv = rte_crypto_op_ctod_offset(op, uint8_t *,
- op->sym->cipher.iv.offset);
- ivlen = op->sym->cipher.iv.length;
+ sess->iv.offset);
+ ivlen = sess->iv.length;
aad = op->sym->auth.aad.data;
aadlen = op->sym->auth.aad.length;
@@ -989,7 +996,7 @@ process_openssl_cipher_op
op->sym->cipher.data.offset);
iv = rte_crypto_op_ctod_offset(op, uint8_t *,
- op->sym->cipher.iv.offset);
+ sess->iv.offset);
if (sess->cipher.mode == OPENSSL_CIPHER_LIB)
if (sess->cipher.direction == RTE_CRYPTO_CIPHER_OP_ENCRYPT)
@@ -1031,7 +1038,7 @@ process_openssl_docsis_bpi_op(struct rte_crypto_op *op,
op->sym->cipher.data.offset);
iv = rte_crypto_op_ctod_offset(op, uint8_t *,
- op->sym->cipher.iv.offset);
+ sess->iv.offset);
block_size = DES_BLOCK_SIZE;
@@ -1090,7 +1097,7 @@ process_openssl_docsis_bpi_op(struct rte_crypto_op *op,
last_block_len, sess->cipher.bpi_ctx);
/* Prepare parameters for CBC mode op */
iv = rte_crypto_op_ctod_offset(op, uint8_t *,
- op->sym->cipher.iv.offset);
+ sess->iv.offset);
dst += last_block_len - srclen;
srclen -= last_block_len;
}
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_private.h b/drivers/crypto/openssl/rte_openssl_pmd_private.h
index 4d820c5..3a64853 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_private.h
+++ b/drivers/crypto/openssl/rte_openssl_pmd_private.h
@@ -108,6 +108,11 @@ struct openssl_session {
enum openssl_chain_order chain_order;
/**< chain order mode */
+ struct {
+ uint16_t length;
+ uint16_t offset;
+ } iv;
+ /**< IV parameters */
/** Cipher Parameters */
struct {
enum rte_crypto_cipher_operation direction;
diff --git a/drivers/crypto/qat/qat_adf/qat_algs.h b/drivers/crypto/qat/qat_adf/qat_algs.h
index 5c63406..e8fa3d3 100644
--- a/drivers/crypto/qat/qat_adf/qat_algs.h
+++ b/drivers/crypto/qat/qat_adf/qat_algs.h
@@ -127,6 +127,10 @@ struct qat_session {
struct icp_qat_fw_la_bulk_req fw_req;
uint8_t aad_len;
struct qat_crypto_instance *inst;
+ struct {
+ uint16_t offset;
+ uint16_t length;
+ } iv;
rte_spinlock_t lock; /* protects this struct */
};
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
index a4f356f..5d7b68e 100644
--- a/drivers/crypto/qat/qat_crypto.c
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -298,6 +298,9 @@ qat_crypto_sym_configure_session_cipher(struct rte_cryptodev *dev,
/* Get cipher xform from crypto xform chain */
cipher_xform = qat_get_cipher_xform(xform);
+ session->iv.offset = cipher_xform->iv.offset;
+ session->iv.length = cipher_xform->iv.length;
+
switch (cipher_xform->algo) {
case RTE_CRYPTO_CIPHER_AES_CBC:
if (qat_alg_validate_aes_key(cipher_xform->key.length,
@@ -643,7 +646,7 @@ qat_bpicipher_preprocess(struct qat_session *ctx,
else
/* runt block, i.e. less than one full block */
iv = rte_crypto_op_ctod_offset(op, uint8_t *,
- sym_op->cipher.iv.offset);
+ ctx->iv.offset);
#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX
rte_hexdump(stdout, "BPI: src before pre-process:", last_block,
@@ -699,7 +702,7 @@ qat_bpicipher_postprocess(struct qat_session *ctx,
else
/* runt block, i.e. less than one full block */
iv = rte_crypto_op_ctod_offset(op, uint8_t *,
- sym_op->cipher.iv.offset);
+ ctx->iv.offset);
#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_RX
rte_hexdump(stdout, "BPI: src before post-process:", last_block,
@@ -975,27 +978,20 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
}
iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
- op->sym->cipher.iv.offset);
+ ctx->iv.offset);
/* copy IV into request if it fits */
- /*
- * If IV length is zero do not copy anything but still
- * use request descriptor embedded IV
- *
- */
- if (op->sym->cipher.iv.length) {
- if (op->sym->cipher.iv.length <=
- sizeof(cipher_param->u.cipher_IV_array)) {
- rte_memcpy(cipher_param->u.cipher_IV_array,
- iv_ptr,
- op->sym->cipher.iv.length);
- } else {
- ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(
- qat_req->comn_hdr.serv_specif_flags,
- ICP_QAT_FW_CIPH_IV_64BIT_PTR);
- cipher_param->u.s.cipher_IV_ptr =
- rte_crypto_op_ctophys_offset(op,
- op->sym->cipher.iv.offset);
- }
+ if (ctx->iv.length <=
+ sizeof(cipher_param->u.cipher_IV_array)) {
+ rte_memcpy(cipher_param->u.cipher_IV_array,
+ iv_ptr,
+ ctx->iv.length);
+ } else {
+ ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(
+ qat_req->comn_hdr.serv_specif_flags,
+ ICP_QAT_FW_CIPH_IV_64BIT_PTR);
+ cipher_param->u.s.cipher_IV_ptr =
+ rte_crypto_op_ctophys_offset(op,
+ ctx->iv.offset);
}
min_ofs = cipher_ofs;
}
@@ -1151,7 +1147,7 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 ||
ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64) {
- if (op->sym->cipher.iv.length == 12) {
+ if (ctx->iv.length == 12) {
/*
* For GCM a 12 bit IV is allowed,
* but we need to inform the f/w
@@ -1187,7 +1183,7 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
rte_pktmbuf_data_len(op->sym->m_src));
if (do_cipher)
rte_hexdump(stdout, "iv:", iv_ptr,
- op->sym->cipher.iv.length);
+ ctx->iv.length);
if (do_auth) {
rte_hexdump(stdout, "digest:", op->sym->auth.digest.data,
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd.c b/drivers/crypto/snow3g/rte_snow3g_pmd.c
index 3157d7b..4e93f64 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd.c
@@ -116,6 +116,13 @@ snow3g_set_session_parameters(struct snow3g_session *sess,
/* Only SNOW 3G UEA2 supported */
if (cipher_xform->cipher.algo != RTE_CRYPTO_CIPHER_SNOW3G_UEA2)
return -EINVAL;
+
+ if (cipher_xform->cipher.iv.length != SNOW3G_IV_LENGTH) {
+ SNOW3G_LOG_ERR("Wrong IV length");
+ return -EINVAL;
+ }
+ sess->iv_offset = cipher_xform->cipher.iv.offset;
+
/* Initialize key */
sso_snow3g_init_key_sched(cipher_xform->cipher.key.data,
&sess->pKeySched_cipher);
@@ -178,13 +185,6 @@ process_snow3g_cipher_op(struct rte_crypto_op **ops,
uint32_t num_bytes[SNOW3G_MAX_BURST];
for (i = 0; i < num_ops; i++) {
- /* Sanity checks. */
- if (unlikely(ops[i]->sym->cipher.iv.length != SNOW3G_IV_LENGTH)) {
- ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- SNOW3G_LOG_ERR("iv");
- break;
- }
-
src[i] = rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
(ops[i]->sym->cipher.data.offset >> 3);
dst[i] = ops[i]->sym->m_dst ?
@@ -193,7 +193,7 @@ process_snow3g_cipher_op(struct rte_crypto_op **ops,
rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
(ops[i]->sym->cipher.data.offset >> 3);
iv[i] = rte_crypto_op_ctod_offset(ops[i], uint8_t *,
- ops[i]->sym->cipher.iv.offset);
+ session->iv_offset);
num_bytes[i] = ops[i]->sym->cipher.data.length >> 3;
processed_ops++;
@@ -214,13 +214,6 @@ process_snow3g_cipher_op_bit(struct rte_crypto_op *op,
uint8_t *iv;
uint32_t length_in_bits, offset_in_bits;
- /* Sanity checks. */
- if (unlikely(op->sym->cipher.iv.length != SNOW3G_IV_LENGTH)) {
- op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- SNOW3G_LOG_ERR("iv");
- return 0;
- }
-
offset_in_bits = op->sym->cipher.data.offset;
src = rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
if (op->sym->m_dst == NULL) {
@@ -230,7 +223,7 @@ process_snow3g_cipher_op_bit(struct rte_crypto_op *op,
}
dst = rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *);
iv = rte_crypto_op_ctod_offset(op, uint8_t *,
- op->sym->cipher.iv.offset);
+ session->iv_offset);
length_in_bits = op->sym->cipher.data.length;
sso_snow3g_f8_1_buffer_bit(&session->pKeySched_cipher, iv,
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd_private.h b/drivers/crypto/snow3g/rte_snow3g_pmd_private.h
index 03973b9..e8943a7 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd_private.h
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd_private.h
@@ -91,6 +91,7 @@ struct snow3g_session {
enum rte_crypto_auth_operation auth_op;
sso_snow3g_key_schedule_t pKeySched_cipher;
sso_snow3g_key_schedule_t pKeySched_hash;
+ uint16_t iv_offset;
} __rte_cache_aligned;
diff --git a/drivers/crypto/zuc/rte_zuc_pmd.c b/drivers/crypto/zuc/rte_zuc_pmd.c
index b91b305..f3cb5f0 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd.c
@@ -115,6 +115,13 @@ zuc_set_session_parameters(struct zuc_session *sess,
/* Only ZUC EEA3 supported */
if (cipher_xform->cipher.algo != RTE_CRYPTO_CIPHER_ZUC_EEA3)
return -EINVAL;
+
+ if (cipher_xform->cipher.iv.length != ZUC_IV_KEY_LENGTH) {
+ ZUC_LOG_ERR("Wrong IV length");
+ return -EINVAL;
+ }
+ sess->iv_offset = cipher_xform->cipher.iv.offset;
+
/* Copy the key */
memcpy(sess->pKey_cipher, cipher_xform->cipher.key.data,
ZUC_IV_KEY_LENGTH);
@@ -178,13 +185,6 @@ process_zuc_cipher_op(struct rte_crypto_op **ops,
uint8_t *cipher_keys[ZUC_MAX_BURST];
for (i = 0; i < num_ops; i++) {
- /* Sanity checks. */
- if (unlikely(ops[i]->sym->cipher.iv.length != ZUC_IV_KEY_LENGTH)) {
- ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- ZUC_LOG_ERR("iv");
- break;
- }
-
if (((ops[i]->sym->cipher.data.length % BYTE_LEN) != 0)
|| ((ops[i]->sym->cipher.data.offset
% BYTE_LEN) != 0)) {
@@ -214,7 +214,7 @@ process_zuc_cipher_op(struct rte_crypto_op **ops,
rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
(ops[i]->sym->cipher.data.offset >> 3);
iv[i] = rte_crypto_op_ctod_offset(ops[i], uint8_t *,
- ops[i]->sym->cipher.iv.offset);
+ session->iv_offset);
num_bytes[i] = ops[i]->sym->cipher.data.length >> 3;
cipher_keys[i] = session->pKey_cipher;
diff --git a/drivers/crypto/zuc/rte_zuc_pmd_ops.c b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
index e793459..c24b9bd 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd_ops.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
@@ -80,7 +80,7 @@ static const struct rte_cryptodev_capabilities zuc_pmd_capabilities[] = {
.min = 16,
.max = 16,
.increment = 0
- }
+ },
}, }
}, }
},
diff --git a/drivers/crypto/zuc/rte_zuc_pmd_private.h b/drivers/crypto/zuc/rte_zuc_pmd_private.h
index 030f120..cee1b5d 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd_private.h
+++ b/drivers/crypto/zuc/rte_zuc_pmd_private.h
@@ -92,6 +92,7 @@ struct zuc_session {
enum rte_crypto_auth_operation auth_op;
uint8_t pKey_cipher[ZUC_IV_KEY_LENGTH];
uint8_t pKey_hash[ZUC_IV_KEY_LENGTH];
+ uint16_t iv_offset;
} __rte_cache_aligned;
diff --git a/examples/ipsec-secgw/esp.c b/examples/ipsec-secgw/esp.c
index 738a800..9e12782 100644
--- a/examples/ipsec-secgw/esp.c
+++ b/examples/ipsec-secgw/esp.c
@@ -50,9 +50,6 @@
#include "esp.h"
#include "ipip.h"
-#define IV_OFFSET (sizeof(struct rte_crypto_op) + \
- sizeof(struct rte_crypto_sym_op))
-
int
esp_inbound(struct rte_mbuf *m, struct ipsec_sa *sa,
struct rte_crypto_op *cop)
@@ -104,8 +101,6 @@ esp_inbound(struct rte_mbuf *m, struct ipsec_sa *sa,
case RTE_CRYPTO_CIPHER_AES_CBC:
/* Copy IV at the end of crypto operation */
rte_memcpy(iv_ptr, iv, sa->iv_len);
- sym_cop->cipher.iv.offset = IV_OFFSET;
- sym_cop->cipher.iv.length = sa->iv_len;
break;
case RTE_CRYPTO_CIPHER_AES_CTR:
case RTE_CRYPTO_CIPHER_AES_GCM:
@@ -113,8 +108,6 @@ esp_inbound(struct rte_mbuf *m, struct ipsec_sa *sa,
icb->salt = sa->salt;
memcpy(&icb->iv, iv, 8);
icb->cnt = rte_cpu_to_be_32(1);
- sym_cop->cipher.iv.offset = IV_OFFSET;
- sym_cop->cipher.iv.length = 16;
break;
default:
RTE_LOG(ERR, IPSEC_ESP, "unsupported cipher algorithm %u\n",
@@ -348,8 +341,6 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
icb->salt = sa->salt;
icb->iv = sa->seq;
icb->cnt = rte_cpu_to_be_32(1);
- sym_cop->cipher.iv.offset = IV_OFFSET;
- sym_cop->cipher.iv.length = 16;
uint8_t *aad;
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index de1df7b..405cf3d 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -48,6 +48,9 @@
#define MAX_DIGEST_SIZE 32 /* Bytes -- 256 bits */
+#define IV_OFFSET (sizeof(struct rte_crypto_op) + \
+ sizeof(struct rte_crypto_sym_op))
+
#define uint32_t_to_char(ip, a, b, c, d) do {\
*a = (uint8_t)(ip >> 24 & 0xff);\
*b = (uint8_t)(ip >> 16 & 0xff);\
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 39624c4..85e4d4e 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -589,6 +589,7 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
{
struct ipsec_sa *sa;
uint32_t i, idx;
+ uint16_t iv_length;
for (i = 0; i < nb_entries; i++) {
idx = SPI2IDX(entries[i].spi);
@@ -607,6 +608,21 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
sa->dst.ip.ip4 = rte_cpu_to_be_32(sa->dst.ip.ip4);
}
+ switch (sa->cipher_algo) {
+ case RTE_CRYPTO_CIPHER_NULL:
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ iv_length = sa->iv_len;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CTR:
+ case RTE_CRYPTO_CIPHER_AES_GCM:
+ iv_length = 16;
+ break;
+ default:
+ RTE_LOG(ERR, IPSEC_ESP, "unsupported cipher algorithm %u\n",
+ sa->cipher_algo);
+ return -EINVAL;
+ }
+
if (inbound) {
sa_ctx->xf[idx].b.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
sa_ctx->xf[idx].b.cipher.algo = sa->cipher_algo;
@@ -615,6 +631,8 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
sa->cipher_key_len;
sa_ctx->xf[idx].b.cipher.op =
RTE_CRYPTO_CIPHER_OP_DECRYPT;
+ sa_ctx->xf[idx].b.cipher.iv.offset = IV_OFFSET;
+ sa_ctx->xf[idx].b.cipher.iv.length = iv_length;
sa_ctx->xf[idx].b.next = NULL;
sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_AUTH;
@@ -637,6 +655,8 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
sa->cipher_key_len;
sa_ctx->xf[idx].a.cipher.op =
RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+ sa_ctx->xf[idx].a.cipher.iv.offset = IV_OFFSET;
+ sa_ctx->xf[idx].a.cipher.iv.length = iv_length;
sa_ctx->xf[idx].a.next = NULL;
sa_ctx->xf[idx].b.type = RTE_CRYPTO_SYM_XFORM_AUTH;
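Since the IV now lives in each operation's private area at IV_OFFSET,
the crypto op pool has to reserve that space up front; a hedged sketch
(pool name and sizes are illustrative, not from this patch):

#define MAX_IV_LENGTH	16

struct rte_mempool *op_pool = rte_crypto_op_pool_create(
		"crypto_op_pool", RTE_CRYPTO_OP_TYPE_SYMMETRIC,
		8192 /* nb ops */, 128 /* cache size */,
		MAX_IV_LENGTH /* private area holding the per-op IV */,
		rte_socket_id());
if (op_pool == NULL)
	rte_exit(EXIT_FAILURE, "Cannot create crypto op pool\n");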
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index ffd9731..9f16806 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -139,6 +139,11 @@ struct l2fwd_key {
phys_addr_t phys_addr;
};
+struct l2fwd_iv {
+ uint8_t *data;
+ uint16_t length;
+};
+
/** l2fwd crypto application command line options */
struct l2fwd_crypto_options {
unsigned portmask;
@@ -155,8 +160,8 @@ struct l2fwd_crypto_options {
unsigned ckey_param;
int ckey_random_size;
- struct l2fwd_key iv;
- unsigned iv_param;
+ struct l2fwd_iv iv;
+ unsigned int iv_param;
int iv_random_size;
struct rte_crypto_sym_xform auth_xform;
@@ -183,7 +188,7 @@ struct l2fwd_crypto_params {
unsigned digest_length;
unsigned block_size;
- struct l2fwd_key iv;
+ struct l2fwd_iv iv;
struct l2fwd_key aad;
struct rte_cryptodev_sym_session *session;
@@ -489,9 +494,6 @@ l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
/* Copy IV at the end of the crypto operation */
rte_memcpy(iv_ptr, cparams->iv.data, cparams->iv.length);
- op->sym->cipher.iv.offset = IV_OFFSET;
- op->sym->cipher.iv.length = cparams->iv.length;
-
/* For wireless algorithms, offset/length must be in bits */
if (cparams->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
cparams->cipher_algo == RTE_CRYPTO_CIPHER_KASUMI_F8 ||
@@ -703,6 +705,9 @@ l2fwd_main_loop(struct l2fwd_crypto_options *options)
port_cparams[i].iv.length);
port_cparams[i].cipher_algo = options->cipher_xform.cipher.algo;
+ /* Set IV parameters */
+ options->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ options->cipher_xform.cipher.iv.length = options->iv.length;
}
port_cparams[i].session = initialize_crypto_session(options,
@@ -1547,6 +1552,46 @@ check_supported_size(uint16_t length, uint16_t min, uint16_t max,
return -1;
}
+
+static int
+check_iv_param(const struct rte_crypto_param_range *iv_range_size,
+ unsigned int iv_param, int iv_random_size,
+ uint16_t *iv_length)
+{
+ /*
+ * Check if length of provided IV is supported
+ * by the algorithm chosen.
+ */
+ if (iv_param) {
+ if (check_supported_size(*iv_length,
+ iv_range_size->min,
+ iv_range_size->max,
+ iv_range_size->increment)
+ != 0) {
+ printf("Unsupported IV length\n");
+ return -1;
+ }
+ /*
+ * Check if length of IV to be randomly generated
+ * is supported by the algorithm chosen.
+ */
+ } else if (iv_random_size != -1) {
+ if (check_supported_size(iv_random_size,
+ iv_range_size->min,
+ iv_range_size->max,
+ iv_range_size->increment)
+ != 0) {
+ printf("Unsupported IV length\n");
+ return -1;
+ }
+ *iv_length = iv_random_size;
+ /* No size provided, use minimum size. */
+ } else
+ *iv_length = iv_range_size->min;
+
+ return 0;
+}
+
static int
initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports,
uint8_t *enabled_cdevs)
@@ -1614,36 +1659,9 @@ initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports,
}
options->block_size = cap->sym.cipher.block_size;
- /*
- * Check if length of provided IV is supported
- * by the algorithm chosen.
- */
- if (options->iv_param) {
- if (check_supported_size(options->iv.length,
- cap->sym.cipher.iv_size.min,
- cap->sym.cipher.iv_size.max,
- cap->sym.cipher.iv_size.increment)
- != 0) {
- printf("Unsupported IV length\n");
- return -1;
- }
- /*
- * Check if length of IV to be randomly generated
- * is supported by the algorithm chosen.
- */
- } else if (options->iv_random_size != -1) {
- if (check_supported_size(options->iv_random_size,
- cap->sym.cipher.iv_size.min,
- cap->sym.cipher.iv_size.max,
- cap->sym.cipher.iv_size.increment)
- != 0) {
- printf("Unsupported IV length\n");
- return -1;
- }
- options->iv.length = options->iv_random_size;
- /* No size provided, use minimum size. */
- } else
- options->iv.length = cap->sym.cipher.iv_size.min;
+
+ check_iv_param(&cap->sym.cipher.iv_size, options->iv_param,
+ options->iv_random_size, &options->iv.length);
/*
* Check if length of provided cipher key is supported
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index b35c45a..c1a1e27 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -190,6 +190,55 @@ struct rte_crypto_cipher_xform {
* - Each key can be either 128 bits (16 bytes) or 256 bits (32 bytes).
* - Both keys must have the same size.
**/
+ struct {
+ uint16_t offset;
+ /**< Starting point for Initialisation Vector or Counter,
+ * specified as number of bytes from start of crypto
+ * operation (rte_crypto_op).
+ *
+ * - For block ciphers in CBC or F8 mode, or for KASUMI
+ * in F8 mode, or for SNOW 3G in UEA2 mode, this is the
+ * Initialisation Vector (IV) value.
+ *
+ * - For block ciphers in CTR mode, this is the counter.
+ *
+ * - For GCM mode, this is either the IV (if the length
+ * is 96 bits) or J0 (for other sizes), where J0 is as
+ * defined by NIST SP800-38D. Regardless of the IV
+ * length, a full 16 bytes needs to be allocated.
+ *
+ * - For CCM mode, the first byte is reserved, and the
+ * nonce should be written starting at &iv[1] (to allow
+ * space for the implementation to write in the flags
+ * in the first byte). Note that a full 16 bytes should
+ * be allocated, even though the length field will
+ * have a value less than this.
+ *
+ * - For AES-XTS, this is the 128bit tweak, i, from
+ * IEEE Std 1619-2007.
+ *
+ * For optimum performance, the data pointed to SHOULD
+ * be 8-byte aligned.
+ */
+ uint16_t length;
+ /**< Length of valid IV data.
+ *
+ * - For block ciphers in CBC or F8 mode, or for KASUMI
+ * in F8 mode, or for SNOW 3G in UEA2 mode, this is the
+ * length of the IV (which must be the same as the
+ * block length of the cipher).
+ *
+ * - For block ciphers in CTR mode, this is the length
+ * of the counter (which must be the same as the block
+ * length of the cipher).
+ *
+ * - For GCM mode, this is either 12 (for 96-bit IVs)
+ * or 16, in which case data points to J0.
+ *
+ * - For CCM mode, this is the length of the nonce,
+ * which can be in the range 7 to 13 inclusive.
+ */
+ } iv; /**< Initialisation vector parameters */
};
/** Symmetric Authentication / Hash Algorithms */
@@ -463,55 +512,6 @@ struct rte_crypto_sym_op {
*/
} data; /**< Data offsets and length for ciphering */
- struct {
- uint16_t offset;
- /**< Starting point for Initialisation Vector or Counter,
- * specified as number of bytes from start of crypto
- * operation.
- *
- * - For block ciphers in CBC or F8 mode, or for KASUMI
- * in F8 mode, or for SNOW 3G in UEA2 mode, this is the
- * Initialisation Vector (IV) value.
- *
- * - For block ciphers in CTR mode, this is the counter.
- *
- * - For GCM mode, this is either the IV (if the length
- * is 96 bits) or J0 (for other sizes), where J0 is as
- * defined by NIST SP800-38D. Regardless of the IV
- * length, a full 16 bytes needs to be allocated.
- *
- * - For CCM mode, the first byte is reserved, and the
- * nonce should be written starting at &iv[1] (to allow
- * space for the implementation to write in the flags
- * in the first byte). Note that a full 16 bytes should
- * be allocated, even though the length field will
- * have a value less than this.
- *
- * - For AES-XTS, this is the 128bit tweak, i, from
- * IEEE Std 1619-2007.
- *
- * For optimum performance, the data pointed to SHOULD
- * be 8-byte aligned.
- */
- uint16_t length;
- /**< Length of valid IV data.
- *
- * - For block ciphers in CBC or F8 mode, or for KASUMI
- * in F8 mode, or for SNOW 3G in UEA2 mode, this is the
- * length of the IV (which must be the same as the
- * block length of the cipher).
- *
- * - For block ciphers in CTR mode, this is the length
- * of the counter (which must be the same as the block
- * length of the cipher).
- *
- * - For GCM mode, this is either 12 (for 96-bit IVs)
- * or 16, in which case data points to J0.
- *
- * - For CCM mode, this is the length of the nonce,
- * which can be in the range 7 to 13 inclusive.
- */
- } iv; /**< Initialisation vector parameters */
} cipher;
struct {
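Of the modes documented above, CCM is the only one with a reserved
byte: the nonce starts at &iv[1] so the PMD can write its flags byte
at iv[0]. Roughly (nonce, nonce_len and iv_offset are assumptions):

/* point at the full 16 bytes reserved at the IV offset */
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *, iv_offset);
memcpy(&iv_ptr[1], nonce, nonce_len);	/* iv.length = nonce_len (7..13) */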
diff --git a/test/test/test_cryptodev.c b/test/test/test_cryptodev.c
index fbcaaee..828a91b 100644
--- a/test/test/test_cryptodev.c
+++ b/test/test/test_cryptodev.c
@@ -1270,6 +1270,8 @@ test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+ ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ ut_params->cipher_xform.cipher.iv.length = CIPHER_IV_LENGTH_AES_CBC;
/* Setup HMAC Parameters */
ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
@@ -1310,13 +1312,11 @@ test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
sym_op->auth.data.offset = 0;
sym_op->auth.data.length = QUOTE_512_BYTES;
- /* Set crypto operation cipher parameters */
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = CIPHER_IV_LENGTH_AES_CBC;
-
+ /* Copy IV at the end of the crypto operation */
rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+ /* Set crypto operation cipher parameters */
sym_op->cipher.data.offset = 0;
sym_op->cipher.data.length = QUOTE_512_BYTES;
@@ -1404,6 +1404,8 @@ test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
ut_params->cipher_xform.cipher.key.data = cipher_key;
ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+ ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ ut_params->cipher_xform.cipher.iv.length = CIPHER_IV_LENGTH_AES_CBC;
/* Setup HMAC Parameters */
ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
@@ -1462,9 +1464,7 @@ test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_sym_session *sess,
sym_op->auth.data.offset = 0;
sym_op->auth.data.length = QUOTE_512_BYTES;
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = CIPHER_IV_LENGTH_AES_CBC;
-
+ /* Copy IV at the end of the crypto operation */
rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
iv, CIPHER_IV_LENGTH_AES_CBC);
@@ -1806,7 +1806,8 @@ static int
create_wireless_algo_cipher_session(uint8_t dev_id,
enum rte_crypto_cipher_operation op,
enum rte_crypto_cipher_algorithm algo,
- const uint8_t *key, const uint8_t key_len)
+ const uint8_t *key, const uint8_t key_len,
+ uint8_t iv_len)
{
uint8_t cipher_key[key_len];
@@ -1822,6 +1823,8 @@ create_wireless_algo_cipher_session(uint8_t dev_id,
ut_params->cipher_xform.cipher.op = op;
ut_params->cipher_xform.cipher.key.data = cipher_key;
ut_params->cipher_xform.cipher.key.length = key_len;
+ ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ ut_params->cipher_xform.cipher.iv.length = iv_len;
TEST_HEXDUMP(stdout, "key:", key, key_len);
@@ -1856,9 +1859,6 @@ create_wireless_algo_cipher_operation(const uint8_t *iv, uint8_t iv_len,
sym_op->m_src = ut_params->ibuf;
/* iv */
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = iv_len;
-
rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
iv, iv_len);
sym_op->cipher.data.length = cipher_len;
@@ -1890,9 +1890,6 @@ create_wireless_algo_cipher_operation_oop(const uint8_t *iv, uint8_t iv_len,
sym_op->m_dst = ut_params->obuf;
/* iv */
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = iv_len;
-
rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
iv, iv_len);
sym_op->cipher.data.length = cipher_len;
@@ -1907,7 +1904,8 @@ create_wireless_algo_cipher_auth_session(uint8_t dev_id,
enum rte_crypto_auth_algorithm auth_algo,
enum rte_crypto_cipher_algorithm cipher_algo,
const uint8_t *key, const uint8_t key_len,
- const uint8_t aad_len, const uint8_t auth_len)
+ const uint8_t aad_len, const uint8_t auth_len,
+ uint8_t iv_len)
{
uint8_t cipher_auth_key[key_len];
@@ -1936,6 +1934,8 @@ create_wireless_algo_cipher_auth_session(uint8_t dev_id,
ut_params->cipher_xform.cipher.op = cipher_op;
ut_params->cipher_xform.cipher.key.data = cipher_auth_key;
ut_params->cipher_xform.cipher.key.length = key_len;
+ ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ ut_params->cipher_xform.cipher.iv.length = iv_len;
TEST_HEXDUMP(stdout, "key:", key, key_len);
@@ -1962,6 +1962,7 @@ create_wireless_cipher_auth_session(uint8_t dev_id,
const uint8_t *key = tdata->key.data;
const uint8_t aad_len = tdata->aad.len;
const uint8_t auth_len = tdata->digest.len;
+ uint8_t iv_len = tdata->iv.len;
memcpy(cipher_auth_key, key, key_len);
@@ -1985,6 +1986,9 @@ create_wireless_cipher_auth_session(uint8_t dev_id,
ut_params->cipher_xform.cipher.op = cipher_op;
ut_params->cipher_xform.cipher.key.data = cipher_auth_key;
ut_params->cipher_xform.cipher.key.length = key_len;
+ ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ ut_params->cipher_xform.cipher.iv.length = iv_len;
+
TEST_HEXDUMP(stdout, "key:", key, key_len);
@@ -2013,7 +2017,8 @@ create_wireless_algo_auth_cipher_session(uint8_t dev_id,
enum rte_crypto_auth_algorithm auth_algo,
enum rte_crypto_cipher_algorithm cipher_algo,
const uint8_t *key, const uint8_t key_len,
- const uint8_t aad_len, const uint8_t auth_len)
+ const uint8_t aad_len, const uint8_t auth_len,
+ uint8_t iv_len)
{
uint8_t auth_cipher_key[key_len];
@@ -2038,6 +2043,8 @@ create_wireless_algo_auth_cipher_session(uint8_t dev_id,
ut_params->cipher_xform.cipher.op = cipher_op;
ut_params->cipher_xform.cipher.key.data = auth_cipher_key;
ut_params->cipher_xform.cipher.key.length = key_len;
+ ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ ut_params->cipher_xform.cipher.iv.length = iv_len;
TEST_HEXDUMP(stdout, "key:", key, key_len);
@@ -2211,9 +2218,6 @@ create_wireless_cipher_hash_operation(const struct wireless_test_data *tdata,
TEST_HEXDUMP(stdout, "aad:", sym_op->auth.aad.data, aad_len);
/* iv */
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = iv_len;
-
rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
iv, iv_len);
sym_op->cipher.data.length = cipher_len;
@@ -2306,9 +2310,6 @@ create_wireless_algo_cipher_hash_operation(const uint8_t *auth_tag,
TEST_HEXDUMP(stdout, "aad:", sym_op->auth.aad.data, aad_len);
/* iv */
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = iv_len;
-
rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
iv, iv_len);
sym_op->cipher.data.length = cipher_len;
@@ -2389,9 +2390,6 @@ create_wireless_algo_auth_cipher_operation(const unsigned auth_tag_len,
sym_op->auth.aad.data, aad_len);
/* iv */
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = iv_len;
-
rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
iv, iv_len);
sym_op->cipher.data.length = cipher_len;
@@ -2801,7 +2799,8 @@ test_kasumi_encryption(const struct kasumi_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
RTE_CRYPTO_CIPHER_KASUMI_F8,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -2876,7 +2875,8 @@ test_kasumi_encryption_sgl(const struct kasumi_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
RTE_CRYPTO_CIPHER_KASUMI_F8,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -2940,7 +2940,8 @@ test_kasumi_encryption_oop(const struct kasumi_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
RTE_CRYPTO_CIPHER_KASUMI_F8,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -3017,7 +3018,8 @@ test_kasumi_encryption_oop_sgl(const struct kasumi_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
RTE_CRYPTO_CIPHER_KASUMI_F8,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -3080,7 +3082,8 @@ test_kasumi_decryption_oop(const struct kasumi_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_DECRYPT,
RTE_CRYPTO_CIPHER_KASUMI_F8,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -3146,7 +3149,8 @@ test_kasumi_decryption(const struct kasumi_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_DECRYPT,
RTE_CRYPTO_CIPHER_KASUMI_F8,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -3210,7 +3214,8 @@ test_snow3g_encryption(const struct snow3g_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -3274,7 +3279,8 @@ test_snow3g_encryption_oop(const struct snow3g_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -3355,7 +3361,8 @@ test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -3444,7 +3451,8 @@ test_snow3g_encryption_offset_oop(const struct snow3g_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -3534,7 +3542,8 @@ static int test_snow3g_decryption(const struct snow3g_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_DECRYPT,
RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -3595,7 +3604,8 @@ static int test_snow3g_decryption_oop(const struct snow3g_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_DECRYPT,
RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -3758,7 +3768,8 @@ test_snow3g_cipher_auth(const struct snow3g_test_data *tdata)
RTE_CRYPTO_AUTH_SNOW3G_UIA2,
RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
tdata->key.data, tdata->key.len,
- tdata->aad.len, tdata->digest.len);
+ tdata->aad.len, tdata->digest.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
@@ -3840,7 +3851,8 @@ test_snow3g_auth_cipher(const struct snow3g_test_data *tdata)
RTE_CRYPTO_AUTH_SNOW3G_UIA2,
RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
tdata->key.data, tdata->key.len,
- tdata->aad.len, tdata->digest.len);
+ tdata->aad.len, tdata->digest.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -3926,7 +3938,8 @@ test_kasumi_auth_cipher(const struct kasumi_test_data *tdata)
RTE_CRYPTO_AUTH_KASUMI_F9,
RTE_CRYPTO_CIPHER_KASUMI_F8,
tdata->key.data, tdata->key.len,
- tdata->aad.len, tdata->digest.len);
+ tdata->aad.len, tdata->digest.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
@@ -4008,7 +4021,8 @@ test_kasumi_cipher_auth(const struct kasumi_test_data *tdata)
RTE_CRYPTO_AUTH_KASUMI_F9,
RTE_CRYPTO_CIPHER_KASUMI_F8,
tdata->key.data, tdata->key.len,
- tdata->aad.len, tdata->digest.len);
+ tdata->aad.len, tdata->digest.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -4097,7 +4111,8 @@ test_zuc_encryption(const struct wireless_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
RTE_CRYPTO_CIPHER_ZUC_EEA3,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -4192,7 +4207,8 @@ test_zuc_encryption_sgl(const struct wireless_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
RTE_CRYPTO_CIPHER_ZUC_EEA3,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -4724,6 +4740,7 @@ static int
create_gcm_session(uint8_t dev_id, enum rte_crypto_cipher_operation op,
const uint8_t *key, const uint8_t key_len,
const uint8_t aad_len, const uint8_t auth_len,
+ uint8_t iv_len,
enum rte_crypto_auth_operation auth_op)
{
uint8_t cipher_key[key_len];
@@ -4741,6 +4758,8 @@ create_gcm_session(uint8_t dev_id, enum rte_crypto_cipher_operation op,
ut_params->cipher_xform.cipher.op = op;
ut_params->cipher_xform.cipher.key.data = cipher_key;
ut_params->cipher_xform.cipher.key.length = key_len;
+ ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ ut_params->cipher_xform.cipher.iv.length = iv_len;
TEST_HEXDUMP(stdout, "key:", key, key_len);
@@ -4777,6 +4796,7 @@ create_gcm_xforms(struct rte_crypto_op *op,
enum rte_crypto_cipher_operation cipher_op,
uint8_t *key, const uint8_t key_len,
const uint8_t aad_len, const uint8_t auth_len,
+ uint8_t iv_len,
enum rte_crypto_auth_operation auth_op)
{
TEST_ASSERT_NOT_NULL(rte_crypto_op_sym_xforms_alloc(op, 2),
@@ -4790,6 +4810,8 @@ create_gcm_xforms(struct rte_crypto_op *op,
sym_op->xform->cipher.op = cipher_op;
sym_op->xform->cipher.key.data = key;
sym_op->xform->cipher.key.length = key_len;
+ sym_op->xform->cipher.iv.offset = IV_OFFSET;
+ sym_op->xform->cipher.iv.length = iv_len;
TEST_HEXDUMP(stdout, "key:", key, key_len);
@@ -4841,12 +4863,10 @@ create_gcm_operation(enum rte_crypto_cipher_operation op,
/* Append IV at the end of the crypto operation*/
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ut_params->op,
uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = tdata->iv.len;
rte_memcpy(iv_ptr, tdata->iv.data, tdata->iv.len);
TEST_HEXDUMP(stdout, "iv:", iv_ptr,
- sym_op->cipher.iv.length);
+ tdata->iv.len);
/* Append plaintext/ciphertext */
if (op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) {
@@ -4950,6 +4970,7 @@ test_mb_AES_GCM_authenticated_encryption(const struct gcm_test_data *tdata)
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
tdata->key.data, tdata->key.len,
tdata->aad.len, tdata->auth_tag.len,
+ tdata->iv.len,
RTE_CRYPTO_AUTH_OP_GENERATE);
if (retval < 0)
return retval;
@@ -5127,6 +5148,7 @@ test_mb_AES_GCM_authenticated_decryption(const struct gcm_test_data *tdata)
RTE_CRYPTO_CIPHER_OP_DECRYPT,
tdata->key.data, tdata->key.len,
tdata->aad.len, tdata->auth_tag.len,
+ tdata->iv.len,
RTE_CRYPTO_AUTH_OP_VERIFY);
if (retval < 0)
return retval;
@@ -5293,6 +5315,7 @@ test_AES_GCM_authenticated_encryption_oop(const struct gcm_test_data *tdata)
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
tdata->key.data, tdata->key.len,
tdata->aad.len, tdata->auth_tag.len,
+ tdata->iv.len,
RTE_CRYPTO_AUTH_OP_GENERATE);
if (retval < 0)
return retval;
@@ -5369,6 +5392,7 @@ test_AES_GCM_authenticated_decryption_oop(const struct gcm_test_data *tdata)
RTE_CRYPTO_CIPHER_OP_DECRYPT,
tdata->key.data, tdata->key.len,
tdata->aad.len, tdata->auth_tag.len,
+ tdata->iv.len,
RTE_CRYPTO_AUTH_OP_VERIFY);
if (retval < 0)
return retval;
@@ -5452,6 +5476,7 @@ test_AES_GCM_authenticated_encryption_sessionless(
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
key, tdata->key.len,
tdata->aad.len, tdata->auth_tag.len,
+ tdata->iv.len,
RTE_CRYPTO_AUTH_OP_GENERATE);
if (retval < 0)
return retval;
@@ -5532,6 +5557,7 @@ test_AES_GCM_authenticated_decryption_sessionless(
RTE_CRYPTO_CIPHER_OP_DECRYPT,
key, tdata->key.len,
tdata->aad.len, tdata->auth_tag.len,
+ tdata->iv.len,
RTE_CRYPTO_AUTH_OP_VERIFY);
if (retval < 0)
return retval;
@@ -6416,9 +6442,6 @@ create_gmac_operation(enum rte_crypto_auth_operation op,
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ut_params->op,
uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = tdata->iv.len;
-
rte_memcpy(iv_ptr, tdata->iv.data, tdata->iv.len);
TEST_HEXDUMP(stdout, "iv:", iv_ptr, tdata->iv.len);
@@ -6450,6 +6473,8 @@ static int create_gmac_session(uint8_t dev_id,
ut_params->cipher_xform.cipher.op = op;
ut_params->cipher_xform.cipher.key.data = cipher_key;
ut_params->cipher_xform.cipher.key.length = tdata->key.len;
+ ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ ut_params->cipher_xform.cipher.iv.length = tdata->iv.len;
ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
ut_params->auth_xform.next = NULL;
@@ -6848,6 +6873,8 @@ create_auth_cipher_session(struct crypto_unittest_params *ut_params,
ut_params->cipher_xform.cipher.op = cipher_op;
ut_params->cipher_xform.cipher.key.data = cipher_key;
ut_params->cipher_xform.cipher.key.length = reference->cipher_key.len;
+ ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ ut_params->cipher_xform.cipher.iv.length = reference->iv.len;
/* Create Crypto session*/
ut_params->sess = rte_cryptodev_sym_session_create(dev_id,
@@ -6959,9 +6986,6 @@ create_auth_GMAC_operation(struct crypto_testsuite_params *ts_params,
sym_op->auth.digest.data,
sym_op->auth.digest.length);
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = reference->iv.len;
-
rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
reference->iv.data, reference->iv.len);
@@ -7016,9 +7040,6 @@ create_cipher_auth_operation(struct crypto_testsuite_params *ts_params,
sym_op->auth.digest.data,
sym_op->auth.digest.length);
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = reference->iv.len;
-
rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
reference->iv.data, reference->iv.len);
@@ -7266,8 +7287,6 @@ create_gcm_operation_SGL(enum rte_crypto_cipher_operation op,
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ut_params->op,
uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = iv_len;
rte_memcpy(iv_ptr, tdata->iv.data, iv_len);
@@ -7348,6 +7367,7 @@ test_AES_GCM_authenticated_encryption_SGL(const struct gcm_test_data *tdata,
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
tdata->key.data, tdata->key.len,
tdata->aad.len, tdata->auth_tag.len,
+ tdata->iv.len,
RTE_CRYPTO_AUTH_OP_GENERATE);
if (retval < 0)
return retval;
diff --git a/test/test/test_cryptodev_blockcipher.c b/test/test/test_cryptodev_blockcipher.c
index 312405b..9faf088 100644
--- a/test/test/test_cryptodev_blockcipher.c
+++ b/test/test/test_cryptodev_blockcipher.c
@@ -287,11 +287,11 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
RTE_CRYPTO_CIPHER_OP_DECRYPT;
cipher_xform->cipher.key.data = cipher_key;
cipher_xform->cipher.key.length = tdata->cipher_key.len;
+ cipher_xform->cipher.iv.offset = IV_OFFSET;
+ cipher_xform->cipher.iv.length = tdata->iv.len;
sym_op->cipher.data.offset = 0;
sym_op->cipher.data.length = tdata->ciphertext.len;
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = tdata->iv.len;
rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
tdata->iv.data,
tdata->iv.len);
diff --git a/test/test/test_cryptodev_perf.c b/test/test/test_cryptodev_perf.c
index 86bdc6e..7238bfa 100644
--- a/test/test/test_cryptodev_perf.c
+++ b/test/test/test_cryptodev_perf.c
@@ -43,6 +43,8 @@
#include "test_cryptodev.h"
#include "test_cryptodev_gcm_test_vectors.h"
+#define AES_CIPHER_IV_LENGTH 16
+#define TRIPLE_DES_CIPHER_IV_LENGTH 8
#define PERF_NUM_OPS_INFLIGHT (128)
#define DEFAULT_NUM_REQS_TO_SUBMIT (10000000)
@@ -67,9 +69,6 @@ enum chain_mode {
struct symmetric_op {
- const uint8_t *iv_data;
- uint32_t iv_len;
-
const uint8_t *aad_data;
uint32_t aad_len;
@@ -96,6 +95,8 @@ struct symmetric_session_attrs {
const uint8_t *key_auth_data;
uint32_t key_auth_len;
+ const uint8_t *iv_data;
+ uint16_t iv_len;
uint32_t digest_len;
};
@@ -1933,7 +1934,8 @@ test_perf_crypto_qp_vary_burst_size(uint16_t dev_num)
ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
ut_params->cipher_xform.cipher.key.data = aes_cbc_128_key;
ut_params->cipher_xform.cipher.key.length = CIPHER_IV_LENGTH_AES_CBC;
-
+ ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ ut_params->cipher_xform.cipher.iv.length = CIPHER_IV_LENGTH_AES_CBC;
/* Setup HMAC Parameters */
ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
@@ -1981,9 +1983,6 @@ test_perf_crypto_qp_vary_burst_size(uint16_t dev_num)
op->sym->auth.data.offset = 0;
op->sym->auth.data.length = data_params[0].length;
- op->sym->cipher.iv.offset = IV_OFFSET;
- op->sym->cipher.iv.length = CIPHER_IV_LENGTH_AES_CBC;
-
rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
aes_cbc_128_iv, CIPHER_IV_LENGTH_AES_CBC);
@@ -2646,6 +2645,8 @@ test_perf_create_aes_sha_session(uint8_t dev_id, enum chain_mode chain,
cipher_xform.cipher.key.data = aes_key;
cipher_xform.cipher.key.length = cipher_key_len;
+ cipher_xform.cipher.iv.offset = IV_OFFSET;
+ cipher_xform.cipher.iv.length = AES_CIPHER_IV_LENGTH;
if (chain != CIPHER_ONLY) {
/* Setup HMAC Parameters */
auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
@@ -2694,6 +2695,9 @@ test_perf_create_snow3g_session(uint8_t dev_id, enum chain_mode chain,
cipher_xform.cipher.key.data = snow3g_cipher_key;
cipher_xform.cipher.key.length = cipher_key_len;
+ cipher_xform.cipher.iv.offset = IV_OFFSET;
+ cipher_xform.cipher.iv.length = SNOW3G_CIPHER_IV_LENGTH;
+
/* Setup HMAC Parameters */
auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
@@ -2741,17 +2745,20 @@ test_perf_create_openssl_session(uint8_t dev_id, enum chain_mode chain,
/* Setup Cipher Parameters */
cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
cipher_xform.cipher.algo = cipher_algo;
+ cipher_xform.cipher.iv.offset = IV_OFFSET;
cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
switch (cipher_algo) {
case RTE_CRYPTO_CIPHER_3DES_CBC:
case RTE_CRYPTO_CIPHER_3DES_CTR:
cipher_xform.cipher.key.data = triple_des_key;
+ cipher_xform.cipher.iv.length = TRIPLE_DES_CIPHER_IV_LENGTH;
break;
case RTE_CRYPTO_CIPHER_AES_CBC:
case RTE_CRYPTO_CIPHER_AES_CTR:
case RTE_CRYPTO_CIPHER_AES_GCM:
cipher_xform.cipher.key.data = aes_key;
+ cipher_xform.cipher.iv.length = AES_CIPHER_IV_LENGTH;
break;
default:
return NULL;
@@ -2816,6 +2823,8 @@ test_perf_create_armv8_session(uint8_t dev_id, enum chain_mode chain,
}
cipher_xform.cipher.key.length = cipher_key_len;
+ cipher_xform.cipher.iv.offset = IV_OFFSET;
+ cipher_xform.cipher.iv.length = AES_CIPHER_IV_LENGTH;
/* Setup Auth Parameters */
auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
@@ -2844,9 +2853,7 @@ test_perf_create_armv8_session(uint8_t dev_id, enum chain_mode chain,
}
}
-#define AES_CIPHER_IV_LENGTH 16
#define AES_GCM_AAD_LENGTH 16
-#define TRIPLE_DES_CIPHER_IV_LENGTH 8
static struct rte_mbuf *
test_perf_create_pktmbuf(struct rte_mempool *mpool, unsigned buf_sz)
@@ -2893,12 +2900,11 @@ test_perf_set_crypto_op_aes(struct rte_crypto_op *op, struct rte_mbuf *m,
}
- /* Cipher Parameters */
- op->sym->cipher.iv.offset = IV_OFFSET;
- op->sym->cipher.iv.length = AES_CIPHER_IV_LENGTH;
+ /* Copy the IV at the end of the crypto operation */
rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
aes_iv, AES_CIPHER_IV_LENGTH);
+ /* Cipher Parameters */
op->sym->cipher.data.offset = 0;
op->sym->cipher.data.length = data_len;
@@ -2926,9 +2932,7 @@ test_perf_set_crypto_op_aes_gcm(struct rte_crypto_op *op, struct rte_mbuf *m,
op->sym->auth.aad.data = aes_gcm_aad;
op->sym->auth.aad.length = AES_GCM_AAD_LENGTH;
- /* Cipher Parameters */
- op->sym->cipher.iv.offset = IV_OFFSET;
- op->sym->cipher.iv.length = AES_CIPHER_IV_LENGTH;
+ /* Copy IV at the end of the crypto operation */
rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
aes_iv, AES_CIPHER_IV_LENGTH);
@@ -2970,10 +2974,6 @@ test_perf_set_crypto_op_snow3g(struct rte_crypto_op *op, struct rte_mbuf *m,
IV_OFFSET);
op->sym->auth.aad.length = SNOW3G_CIPHER_IV_LENGTH;
- /* Cipher Parameters */
- op->sym->cipher.iv.offset = IV_OFFSET;
- op->sym->cipher.iv.length = SNOW3G_CIPHER_IV_LENGTH;
-
/* Data lengths/offsets Parameters */
op->sym->auth.data.offset = 0;
op->sym->auth.data.length = data_len << 3;
@@ -2997,9 +2997,7 @@ test_perf_set_crypto_op_snow3g_cipher(struct rte_crypto_op *op,
return NULL;
}
- /* Cipher Parameters */
- op->sym->cipher.iv.offset = IV_OFFSET;
- op->sym->cipher.iv.length = SNOW3G_CIPHER_IV_LENGTH;
+ /* Copy IV at the end of the crypto operation */
rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
snow3g_iv, SNOW3G_CIPHER_IV_LENGTH);
@@ -3068,9 +3066,7 @@ test_perf_set_crypto_op_3des(struct rte_crypto_op *op, struct rte_mbuf *m,
rte_pktmbuf_mtophys_offset(m, data_len);
op->sym->auth.digest.length = digest_len;
- /* Cipher Parameters */
- op->sym->cipher.iv.offset = IV_OFFSET;
- op->sym->cipher.iv.length = TRIPLE_DES_CIPHER_IV_LENGTH;
+ /* Copy IV at the end of the crypto operation */
rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
triple_des_iv, TRIPLE_DES_CIPHER_IV_LENGTH);
@@ -4136,6 +4132,8 @@ test_perf_create_session(uint8_t dev_id, struct perf_test_params *pparams)
cipher_xform.cipher.op = pparams->session_attrs->cipher;
cipher_xform.cipher.key.data = cipher_key;
cipher_xform.cipher.key.length = pparams->session_attrs->key_cipher_len;
+ cipher_xform.cipher.iv.length = pparams->session_attrs->iv_len;
+ cipher_xform.cipher.iv.offset = IV_OFFSET;
auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
auth_xform.next = NULL;
@@ -4190,14 +4188,11 @@ perf_gcm_set_crypto_op(struct rte_crypto_op *op, struct rte_mbuf *m,
rte_memcpy(op->sym->auth.aad.data, params->symmetric_op->aad_data,
params->symmetric_op->aad_len);
- op->sym->cipher.iv.offset = IV_OFFSET;
- rte_memcpy(iv_ptr, params->symmetric_op->iv_data,
- params->symmetric_op->iv_len);
- if (params->symmetric_op->iv_len == 12)
+ rte_memcpy(iv_ptr, params->session_attrs->iv_data,
+ params->session_attrs->iv_len);
+ if (params->session_attrs->iv_len == 12)
iv_ptr[15] = 1;
- op->sym->cipher.iv.length = params->symmetric_op->iv_len;
-
op->sym->auth.data.offset =
params->symmetric_op->aad_len;
op->sym->auth.data.length = params->symmetric_op->p_len;
@@ -4434,11 +4429,11 @@ test_perf_AES_GCM(int continual_buf_len, int continual_size)
session_attrs[i].key_auth_len = 0;
session_attrs[i].digest_len =
gcm_test->auth_tag.len;
+ session_attrs[i].iv_len = gcm_test->iv.len;
+ session_attrs[i].iv_data = gcm_test->iv.data;
ops_set[i].aad_data = gcm_test->aad.data;
ops_set[i].aad_len = gcm_test->aad.len;
- ops_set[i].iv_data = gcm_test->iv.data;
- ops_set[i].iv_len = gcm_test->iv.len;
ops_set[i].p_data = gcm_test->plaintext.data;
ops_set[i].p_len = buf_lengths[i];
ops_set[i].c_data = gcm_test->ciphertext.data;
--
2.9.4
* [dpdk-dev] [PATCH v3 12/26] cryptodev: pass IV as offset
` (3 preceding siblings ...)
2017-06-29 11:34 4% ` [dpdk-dev] [PATCH v3 04/26] cryptodev: do not store pointer to op specific params Pablo de Lara
@ 2017-06-29 11:35 1% ` Pablo de Lara
2017-06-29 11:35 1% ` [dpdk-dev] [PATCH v3 13/26] cryptodev: move IV parameters to crypto session Pablo de Lara
` (5 subsequent siblings)
10 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2017-06-29 11:35 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal,
hemant.agrawal, fiona.trahe, john.griffin, deepak.k.jain
Cc: dev, Pablo de Lara
Since the IV is now copied into the private data area
after the crypto operation, it can be passed simply as
an offset and a length, instead of a pointer and a
physical address.
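Below is a minimal sketch of how an application fills and
references the IV under the new scheme. IV_OFFSET follows the
convention used by the tests (IV stored right after the symmetric
op); the op, iv_data and iv_len variables are assumed to exist:

#define IV_OFFSET	(sizeof(struct rte_crypto_op) + \
		sizeof(struct rte_crypto_sym_op))

	uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
			IV_OFFSET);

	/* Copy the IV into the private area after the crypto op */
	rte_memcpy(iv_ptr, iv_data, iv_len);

	/* Reference it by offset from the start of the operation */
	op->sym->cipher.iv.offset = IV_OFFSET;
	op->sym->cipher.iv.length = iv_len;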
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
app/test-crypto-perf/cperf_ops.c | 49 +++++++------
doc/guides/prog_guide/cryptodev_lib.rst | 3 +-
doc/guides/rel_notes/release_17_08.rst | 2 +
drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 80 +++++++++++----------
drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c | 3 +-
drivers/crypto/armv8/rte_armv8_pmd.c | 3 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 8 ++-
drivers/crypto/kasumi/rte_kasumi_pmd.c | 26 ++++---
drivers/crypto/openssl/rte_openssl_pmd.c | 12 ++--
drivers/crypto/qat/qat_crypto.c | 30 +++++---
drivers/crypto/snow3g/rte_snow3g_pmd.c | 14 ++--
drivers/crypto/zuc/rte_zuc_pmd.c | 7 +-
examples/ipsec-secgw/esp.c | 14 +---
examples/l2fwd-crypto/main.c | 5 +-
lib/librte_cryptodev/rte_crypto_sym.h | 7 +-
test/test/test_cryptodev.c | 107 +++++++++++-----------------
test/test/test_cryptodev_blockcipher.c | 8 +--
test/test/test_cryptodev_perf.c | 60 ++++++----------
18 files changed, 211 insertions(+), 227 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 7404abc..10002cd 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -106,10 +106,7 @@ cperf_set_ops_cipher(struct rte_crypto_op **ops,
sym_op->m_dst = bufs_out[i];
/* cipher parameters */
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ops[i],
- uint8_t *, iv_offset);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ops[i],
- iv_offset);
+ sym_op->cipher.iv.offset = iv_offset;
sym_op->cipher.iv.length = test_vector->iv.length;
if (options->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
@@ -123,11 +120,13 @@ cperf_set_ops_cipher(struct rte_crypto_op **ops,
}
if (options->test == CPERF_TEST_TYPE_VERIFY) {
- for (i = 0; i < nb_ops; i++)
- memcpy(ops[i]->sym->cipher.iv.data,
- test_vector->iv.data,
- test_vector->iv.length);
- }
+ for (i = 0; i < nb_ops; i++) {
+ uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ops[i],
+ uint8_t *, iv_offset);
+
+ memcpy(iv_ptr, test_vector->iv.data,
+ test_vector->iv.length);
+ }
+ }
return 0;
}
@@ -217,10 +216,7 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
sym_op->m_dst = bufs_out[i];
/* cipher parameters */
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ops[i],
- uint8_t *, iv_offset);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ops[i],
- iv_offset);
+ sym_op->cipher.iv.offset = iv_offset;
sym_op->cipher.iv.length = test_vector->iv.length;
if (options->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
@@ -277,10 +273,13 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
}
if (options->test == CPERF_TEST_TYPE_VERIFY) {
- for (i = 0; i < nb_ops; i++)
- memcpy(ops[i]->sym->cipher.iv.data,
- test_vector->iv.data,
- test_vector->iv.length);
+ for (i = 0; i < nb_ops; i++) {
+ uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ops[i],
+ uint8_t *, iv_offset);
+
+ memcpy(iv_ptr, test_vector->iv.data,
+ test_vector->iv.length);
+ }
}
return 0;
@@ -305,10 +304,7 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
sym_op->m_dst = bufs_out[i];
/* cipher parameters */
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ops[i],
- uint8_t *, iv_offset);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ops[i],
- iv_offset);
+ sym_op->cipher.iv.offset = iv_offset;
sym_op->cipher.iv.length = test_vector->iv.length;
sym_op->cipher.data.length = options->test_buffer_size;
@@ -357,10 +353,13 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
}
if (options->test == CPERF_TEST_TYPE_VERIFY) {
- for (i = 0; i < nb_ops; i++)
- memcpy(ops[i]->sym->cipher.iv.data,
- test_vector->iv.data,
- test_vector->iv.length);
+ for (i = 0; i < nb_ops; i++) {
+ uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ops[i],
+ uint8_t *, iv_offset);
+
+ memcpy(iv_ptr, test_vector->iv.data,
+ test_vector->iv.length);
+ }
}
return 0;
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index c9a29f8..48c58a9 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -537,8 +537,7 @@ chain.
} data; /**< Data offsets and length for ciphering */
struct {
- uint8_t *data;
- phys_addr_t phys_addr;
+ uint16_t offset;
uint16_t length;
} iv; /**< Initialisation vector parameters */
} cipher;
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index 6acbf35..68e8022 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -157,6 +157,8 @@ API Changes
* Removed the field ``opaque_data`` from ``rte_crypto_op``.
* Pointer to ``rte_crypto_sym_op`` in ``rte_crypto_op`` has been replaced
with a zero length array.
+ * Replaced pointer and physical address of IV in ``rte_crypto_sym_op`` with
+ offset from the start of the crypto operation.
ABI Changes
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index a0154ff..217ea65 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -180,12 +180,14 @@ aesni_gcm_get_session(struct aesni_gcm_qp *qp, struct rte_crypto_op *op)
*
*/
static int
-process_gcm_crypto_op(struct rte_crypto_sym_op *op,
+process_gcm_crypto_op(struct rte_crypto_op *op,
struct aesni_gcm_session *session)
{
uint8_t *src, *dst;
- struct rte_mbuf *m_src = op->m_src;
- uint32_t offset = op->cipher.data.offset;
+ uint8_t *iv_ptr;
+ struct rte_crypto_sym_op *sym_op = op->sym;
+ struct rte_mbuf *m_src = sym_op->m_src;
+ uint32_t offset = sym_op->cipher.data.offset;
uint32_t part_len, total_len, data_len;
RTE_ASSERT(m_src != NULL);
@@ -198,46 +200,48 @@ process_gcm_crypto_op(struct rte_crypto_sym_op *op,
}
data_len = m_src->data_len - offset;
- part_len = (data_len < op->cipher.data.length) ? data_len :
- op->cipher.data.length;
+ part_len = (data_len < sym_op->cipher.data.length) ? data_len :
+ sym_op->cipher.data.length;
/* Destination buffer is required when segmented source buffer */
- RTE_ASSERT((part_len == op->cipher.data.length) ||
- ((part_len != op->cipher.data.length) &&
- (op->m_dst != NULL)));
+ RTE_ASSERT((part_len == sym_op->cipher.data.length) ||
+ ((part_len != sym_op->cipher.data.length) &&
+ (sym_op->m_dst != NULL)));
/* Segmented destination buffer is not supported */
- RTE_ASSERT((op->m_dst == NULL) ||
- ((op->m_dst != NULL) &&
- rte_pktmbuf_is_contiguous(op->m_dst)));
+ RTE_ASSERT((sym_op->m_dst == NULL) ||
+ ((sym_op->m_dst != NULL) &&
+ rte_pktmbuf_is_contiguous(sym_op->m_dst)));
- dst = op->m_dst ?
- rte_pktmbuf_mtod_offset(op->m_dst, uint8_t *,
- op->cipher.data.offset) :
- rte_pktmbuf_mtod_offset(op->m_src, uint8_t *,
- op->cipher.data.offset);
+ dst = sym_op->m_dst ?
+ rte_pktmbuf_mtod_offset(sym_op->m_dst, uint8_t *,
+ sym_op->cipher.data.offset) :
+ rte_pktmbuf_mtod_offset(sym_op->m_src, uint8_t *,
+ sym_op->cipher.data.offset);
src = rte_pktmbuf_mtod_offset(m_src, uint8_t *, offset);
/* sanity checks */
- if (op->cipher.iv.length != 16 && op->cipher.iv.length != 12 &&
- op->cipher.iv.length != 0) {
+ if (sym_op->cipher.iv.length != 16 && sym_op->cipher.iv.length != 12 &&
+ sym_op->cipher.iv.length != 0) {
GCM_LOG_ERR("iv");
return -1;
}
+ iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
+ sym_op->cipher.iv.offset);
/*
* GCM working in 12B IV mode => 16B pre-counter block we need
* to set BE LSB to 1, driver expects that 16B is allocated
*/
- if (op->cipher.iv.length == 12) {
- uint32_t *iv_padd = (uint32_t *)&op->cipher.iv.data[12];
+ if (sym_op->cipher.iv.length == 12) {
+ uint32_t *iv_padd = (uint32_t *)&(iv_ptr[12]);
*iv_padd = rte_bswap32(1);
}
- if (op->auth.digest.length != 16 &&
- op->auth.digest.length != 12 &&
- op->auth.digest.length != 8) {
+ if (sym_op->auth.digest.length != 16 &&
+ sym_op->auth.digest.length != 12 &&
+ sym_op->auth.digest.length != 8) {
GCM_LOG_ERR("digest");
return -1;
}
@@ -245,13 +249,13 @@ process_gcm_crypto_op(struct rte_crypto_sym_op *op,
if (session->op == AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION) {
aesni_gcm_enc[session->key].init(&session->gdata,
- op->cipher.iv.data,
- op->auth.aad.data,
- (uint64_t)op->auth.aad.length);
+ iv_ptr,
+ sym_op->auth.aad.data,
+ (uint64_t)sym_op->auth.aad.length);
aesni_gcm_enc[session->key].update(&session->gdata, dst, src,
(uint64_t)part_len);
- total_len = op->cipher.data.length - part_len;
+ total_len = sym_op->cipher.data.length - part_len;
while (total_len) {
dst += part_len;
@@ -270,12 +274,12 @@ process_gcm_crypto_op(struct rte_crypto_sym_op *op,
}
aesni_gcm_enc[session->key].finalize(&session->gdata,
- op->auth.digest.data,
- (uint64_t)op->auth.digest.length);
+ sym_op->auth.digest.data,
+ (uint64_t)sym_op->auth.digest.length);
} else { /* session->op == AESNI_GCM_OP_AUTHENTICATED_DECRYPTION */
- uint8_t *auth_tag = (uint8_t *)rte_pktmbuf_append(op->m_dst ?
- op->m_dst : op->m_src,
- op->auth.digest.length);
+ uint8_t *auth_tag = (uint8_t *)rte_pktmbuf_append(sym_op->m_dst ?
+ sym_op->m_dst : sym_op->m_src,
+ sym_op->auth.digest.length);
if (!auth_tag) {
GCM_LOG_ERR("auth_tag");
@@ -283,13 +287,13 @@ process_gcm_crypto_op(struct rte_crypto_sym_op *op,
}
aesni_gcm_dec[session->key].init(&session->gdata,
- op->cipher.iv.data,
- op->auth.aad.data,
- (uint64_t)op->auth.aad.length);
+ iv_ptr,
+ sym_op->auth.aad.data,
+ (uint64_t)sym_op->auth.aad.length);
aesni_gcm_dec[session->key].update(&session->gdata, dst, src,
(uint64_t)part_len);
- total_len = op->cipher.data.length - part_len;
+ total_len = sym_op->cipher.data.length - part_len;
while (total_len) {
dst += part_len;
@@ -309,7 +313,7 @@ process_gcm_crypto_op(struct rte_crypto_sym_op *op,
aesni_gcm_dec[session->key].finalize(&session->gdata,
auth_tag,
- (uint64_t)op->auth.digest.length);
+ (uint64_t)sym_op->auth.digest.length);
}
return 0;
@@ -401,7 +405,7 @@ aesni_gcm_pmd_dequeue_burst(void *queue_pair,
break;
}
- retval = process_gcm_crypto_op(ops[i]->sym, sess);
+ retval = process_gcm_crypto_op(ops[i], sess);
if (retval < 0) {
ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
qp->qp_stats.dequeue_err_count++;
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index ccdb3a7..1f03582 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -471,7 +471,8 @@ set_mb_job_params(JOB_AES_HMAC *job, struct aesni_mb_qp *qp,
get_truncated_digest_byte_length(job->hash_alg);
/* Set IV parameters */
- job->iv = op->sym->cipher.iv.data;
+ job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+ op->sym->cipher.iv.offset);
job->iv_len_in_bytes = op->sym->cipher.iv.length;
/* Data Parameter */
diff --git a/drivers/crypto/armv8/rte_armv8_pmd.c b/drivers/crypto/armv8/rte_armv8_pmd.c
index 4a79b61..693eccd 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd.c
@@ -654,7 +654,8 @@ process_armv8_chained_op
return;
}
- arg.cipher.iv = op->sym->cipher.iv.data;
+ arg.cipher.iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+ op->sym->cipher.iv.offset);
arg.cipher.key = sess->cipher.key.data;
/* Acquire combined mode function */
crypto_func = sess->crypto_func;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index e154395..1605701 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -87,6 +87,8 @@ build_authenc_fd(dpaa2_sec_session *sess,
int icv_len = sym_op->auth.digest.length;
uint8_t *old_icv;
uint32_t mem_len = (7 * sizeof(struct qbman_fle)) + icv_len;
+ uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
+ op->sym->cipher.iv.offset);
PMD_INIT_FUNC_TRACE();
@@ -178,7 +180,7 @@ build_authenc_fd(dpaa2_sec_session *sess,
sym_op->auth.digest.length);
/* Configure Input SGE for Encap/Decap */
- DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->cipher.iv.data));
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(iv_ptr));
sge->length = sym_op->cipher.iv.length;
sge++;
@@ -307,6 +309,8 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
uint32_t mem_len = (5 * sizeof(struct qbman_fle));
struct sec_flow_context *flc;
struct ctxt_priv *priv = sess->ctxt;
+ uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
+ op->sym->cipher.iv.offset);
PMD_INIT_FUNC_TRACE();
@@ -369,7 +373,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
DPAA2_SET_FLE_SG_EXT(fle);
- DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->cipher.iv.data));
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(iv_ptr));
sge->length = sym_op->cipher.iv.length;
sge++;
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd.c b/drivers/crypto/kasumi/rte_kasumi_pmd.c
index c539650..9a0b4a8 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd.c
@@ -174,7 +174,8 @@ process_kasumi_cipher_op(struct rte_crypto_op **ops,
unsigned i;
uint8_t processed_ops = 0;
uint8_t *src[num_ops], *dst[num_ops];
- uint64_t IV[num_ops];
+ uint8_t *iv_ptr;
+ uint64_t iv[num_ops];
uint32_t num_bytes[num_ops];
for (i = 0; i < num_ops; i++) {
@@ -192,14 +193,16 @@ process_kasumi_cipher_op(struct rte_crypto_op **ops,
(ops[i]->sym->cipher.data.offset >> 3) :
rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
(ops[i]->sym->cipher.data.offset >> 3);
- IV[i] = *((uint64_t *)(ops[i]->sym->cipher.iv.data));
+ iv_ptr = rte_crypto_op_ctod_offset(ops[i], uint8_t *,
+ ops[i]->sym->cipher.iv.offset);
+ iv[i] = *((uint64_t *)(iv_ptr));
num_bytes[i] = ops[i]->sym->cipher.data.length >> 3;
processed_ops++;
}
if (processed_ops != 0)
- sso_kasumi_f8_n_buffer(&session->pKeySched_cipher, IV,
+ sso_kasumi_f8_n_buffer(&session->pKeySched_cipher, iv,
src, dst, num_bytes, processed_ops);
return processed_ops;
@@ -211,7 +214,8 @@ process_kasumi_cipher_op_bit(struct rte_crypto_op *op,
struct kasumi_session *session)
{
uint8_t *src, *dst;
- uint64_t IV;
+ uint8_t *iv_ptr;
+ uint64_t iv;
uint32_t length_in_bits, offset_in_bits;
/* Sanity checks. */
@@ -229,10 +233,12 @@ process_kasumi_cipher_op_bit(struct rte_crypto_op *op,
return 0;
}
dst = rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *);
- IV = *((uint64_t *)(op->sym->cipher.iv.data));
+ iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
+ op->sym->cipher.iv.offset);
+ iv = *((uint64_t *)(iv_ptr));
length_in_bits = op->sym->cipher.data.length;
- sso_kasumi_f8_1_buffer_bit(&session->pKeySched_cipher, IV,
+ sso_kasumi_f8_1_buffer_bit(&session->pKeySched_cipher, iv,
src, dst, length_in_bits, offset_in_bits);
return 1;
@@ -250,7 +256,7 @@ process_kasumi_hash_op(struct rte_crypto_op **ops,
uint32_t length_in_bits;
uint32_t num_bytes;
uint32_t shift_bits;
- uint64_t IV;
+ uint64_t iv;
uint8_t direction;
for (i = 0; i < num_ops; i++) {
@@ -278,7 +284,7 @@ process_kasumi_hash_op(struct rte_crypto_op **ops,
src = rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
(ops[i]->sym->auth.data.offset >> 3);
/* IV from AAD */
- IV = *((uint64_t *)(ops[i]->sym->auth.aad.data));
+ iv = *((uint64_t *)(ops[i]->sym->auth.aad.data));
/* Direction from next bit after end of message */
num_bytes = (length_in_bits >> 3) + 1;
shift_bits = (BYTE_LEN - 1 - length_in_bits) % BYTE_LEN;
@@ -289,7 +295,7 @@ process_kasumi_hash_op(struct rte_crypto_op **ops,
ops[i]->sym->auth.digest.length);
sso_kasumi_f9_1_buffer_user(&session->pKeySched_hash,
- IV, src,
+ iv, src,
length_in_bits, dst, direction);
/* Verify digest. */
if (memcmp(dst, ops[i]->sym->auth.digest.data,
@@ -303,7 +309,7 @@ process_kasumi_hash_op(struct rte_crypto_op **ops,
dst = ops[i]->sym->auth.digest.data;
sso_kasumi_f9_1_buffer_user(&session->pKeySched_hash,
- IV, src,
+ iv, src,
length_in_bits, dst, direction);
}
processed_ops++;
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 9f4d9b7..6bfa06f 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -923,7 +923,8 @@ process_openssl_combined_op
return;
}
- iv = op->sym->cipher.iv.data;
+ iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+ op->sym->cipher.iv.offset);
ivlen = op->sym->cipher.iv.length;
aad = op->sym->auth.aad.data;
aadlen = op->sym->auth.aad.length;
@@ -987,7 +988,8 @@ process_openssl_cipher_op
dst = rte_pktmbuf_mtod_offset(mbuf_dst, uint8_t *,
op->sym->cipher.data.offset);
- iv = op->sym->cipher.iv.data;
+ iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+ op->sym->cipher.iv.offset);
if (sess->cipher.mode == OPENSSL_CIPHER_LIB)
if (sess->cipher.direction == RTE_CRYPTO_CIPHER_OP_ENCRYPT)
@@ -1028,7 +1030,8 @@ process_openssl_docsis_bpi_op(struct rte_crypto_op *op,
dst = rte_pktmbuf_mtod_offset(mbuf_dst, uint8_t *,
op->sym->cipher.data.offset);
- iv = op->sym->cipher.iv.data;
+ iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+ op->sym->cipher.iv.offset);
block_size = DES_BLOCK_SIZE;
@@ -1086,7 +1089,8 @@ process_openssl_docsis_bpi_op(struct rte_crypto_op *op,
dst, iv,
last_block_len, sess->cipher.bpi_ctx);
/* Prepare parameters for CBC mode op */
- iv = op->sym->cipher.iv.data;
+ iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+ op->sym->cipher.iv.offset);
dst += last_block_len - srclen;
srclen -= last_block_len;
}
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
index 9b294e4..a4f356f 100644
--- a/drivers/crypto/qat/qat_crypto.c
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -642,7 +642,8 @@ qat_bpicipher_preprocess(struct qat_session *ctx,
iv = last_block - block_len;
else
/* runt block, i.e. less than one full block */
- iv = sym_op->cipher.iv.data;
+ iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+ sym_op->cipher.iv.offset);
#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX
rte_hexdump(stdout, "BPI: src before pre-process:", last_block,
@@ -697,7 +698,8 @@ qat_bpicipher_postprocess(struct qat_session *ctx,
iv = dst - block_len;
else
/* runt block, i.e. less than one full block */
- iv = sym_op->cipher.iv.data;
+ iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+ sym_op->cipher.iv.offset);
#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_RX
rte_hexdump(stdout, "BPI: src before post-process:", last_block,
@@ -898,6 +900,7 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
uint32_t min_ofs = 0;
uint64_t src_buf_start = 0, dst_buf_start = 0;
uint8_t do_sgl = 0;
+ uint8_t *iv_ptr;
#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX
@@ -971,6 +974,8 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
cipher_ofs = op->sym->cipher.data.offset;
}
+ iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
+ op->sym->cipher.iv.offset);
/* copy IV into request if it fits */
/*
* If IV length is zero do not copy anything but still
@@ -981,14 +986,15 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
if (op->sym->cipher.iv.length <=
sizeof(cipher_param->u.cipher_IV_array)) {
rte_memcpy(cipher_param->u.cipher_IV_array,
- op->sym->cipher.iv.data,
+ iv_ptr,
op->sym->cipher.iv.length);
} else {
ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(
qat_req->comn_hdr.serv_specif_flags,
ICP_QAT_FW_CIPH_IV_64BIT_PTR);
cipher_param->u.s.cipher_IV_ptr =
- op->sym->cipher.iv.phys_addr;
+ rte_crypto_op_ctophys_offset(op,
+ op->sym->cipher.iv.offset);
}
}
min_ofs = cipher_ofs;
@@ -1179,12 +1185,16 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
rte_hexdump(stdout, "src_data:",
rte_pktmbuf_mtod(op->sym->m_src, uint8_t*),
rte_pktmbuf_data_len(op->sym->m_src));
- rte_hexdump(stdout, "iv:", op->sym->cipher.iv.data,
- op->sym->cipher.iv.length);
- rte_hexdump(stdout, "digest:", op->sym->auth.digest.data,
- op->sym->auth.digest.length);
- rte_hexdump(stdout, "aad:", op->sym->auth.aad.data,
- op->sym->auth.aad.length);
+ if (do_cipher)
+ rte_hexdump(stdout, "iv:", iv_ptr,
+ op->sym->cipher.iv.length);
+
+ if (do_auth) {
+ rte_hexdump(stdout, "digest:", op->sym->auth.digest.data,
+ op->sym->auth.digest.length);
+ rte_hexdump(stdout, "aad:", op->sym->auth.aad.data,
+ op->sym->auth.aad.length);
+ }
#endif
return 0;
}
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd.c b/drivers/crypto/snow3g/rte_snow3g_pmd.c
index 84757ac..3157d7b 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd.c
@@ -174,7 +174,7 @@ process_snow3g_cipher_op(struct rte_crypto_op **ops,
unsigned i;
uint8_t processed_ops = 0;
uint8_t *src[SNOW3G_MAX_BURST], *dst[SNOW3G_MAX_BURST];
- uint8_t *IV[SNOW3G_MAX_BURST];
+ uint8_t *iv[SNOW3G_MAX_BURST];
uint32_t num_bytes[SNOW3G_MAX_BURST];
for (i = 0; i < num_ops; i++) {
@@ -192,13 +192,14 @@ process_snow3g_cipher_op(struct rte_crypto_op **ops,
(ops[i]->sym->cipher.data.offset >> 3) :
rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
(ops[i]->sym->cipher.data.offset >> 3);
- IV[i] = ops[i]->sym->cipher.iv.data;
+ iv[i] = rte_crypto_op_ctod_offset(ops[i], uint8_t *,
+ ops[i]->sym->cipher.iv.offset);
num_bytes[i] = ops[i]->sym->cipher.data.length >> 3;
processed_ops++;
}
- sso_snow3g_f8_n_buffer(&session->pKeySched_cipher, IV, src, dst,
+ sso_snow3g_f8_n_buffer(&session->pKeySched_cipher, iv, src, dst,
num_bytes, processed_ops);
return processed_ops;
@@ -210,7 +211,7 @@ process_snow3g_cipher_op_bit(struct rte_crypto_op *op,
struct snow3g_session *session)
{
uint8_t *src, *dst;
- uint8_t *IV;
+ uint8_t *iv;
uint32_t length_in_bits, offset_in_bits;
/* Sanity checks. */
@@ -228,10 +229,11 @@ process_snow3g_cipher_op_bit(struct rte_crypto_op *op,
return 0;
}
dst = rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *);
- IV = op->sym->cipher.iv.data;
+ iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+ op->sym->cipher.iv.offset);
length_in_bits = op->sym->cipher.data.length;
- sso_snow3g_f8_1_buffer_bit(&session->pKeySched_cipher, IV,
+ sso_snow3g_f8_1_buffer_bit(&session->pKeySched_cipher, iv,
src, dst, length_in_bits, offset_in_bits);
return 1;
diff --git a/drivers/crypto/zuc/rte_zuc_pmd.c b/drivers/crypto/zuc/rte_zuc_pmd.c
index 63236ac..b91b305 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd.c
@@ -173,7 +173,7 @@ process_zuc_cipher_op(struct rte_crypto_op **ops,
unsigned i;
uint8_t processed_ops = 0;
uint8_t *src[ZUC_MAX_BURST], *dst[ZUC_MAX_BURST];
- uint8_t *IV[ZUC_MAX_BURST];
+ uint8_t *iv[ZUC_MAX_BURST];
uint32_t num_bytes[ZUC_MAX_BURST];
uint8_t *cipher_keys[ZUC_MAX_BURST];
@@ -213,7 +213,8 @@ process_zuc_cipher_op(struct rte_crypto_op **ops,
(ops[i]->sym->cipher.data.offset >> 3) :
rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
(ops[i]->sym->cipher.data.offset >> 3);
- IV[i] = ops[i]->sym->cipher.iv.data;
+ iv[i] = rte_crypto_op_ctod_offset(ops[i], uint8_t *,
+ ops[i]->sym->cipher.iv.offset);
num_bytes[i] = ops[i]->sym->cipher.data.length >> 3;
cipher_keys[i] = session->pKey_cipher;
@@ -221,7 +222,7 @@ process_zuc_cipher_op(struct rte_crypto_op **ops,
processed_ops++;
}
- sso_zuc_eea3_n_buffer(cipher_keys, IV, src, dst,
+ sso_zuc_eea3_n_buffer(cipher_keys, iv, src, dst,
num_bytes, processed_ops);
return processed_ops;
diff --git a/examples/ipsec-secgw/esp.c b/examples/ipsec-secgw/esp.c
index 5bf2d7d..738a800 100644
--- a/examples/ipsec-secgw/esp.c
+++ b/examples/ipsec-secgw/esp.c
@@ -104,9 +104,7 @@ esp_inbound(struct rte_mbuf *m, struct ipsec_sa *sa,
case RTE_CRYPTO_CIPHER_AES_CBC:
/* Copy IV at the end of crypto operation */
rte_memcpy(iv_ptr, iv, sa->iv_len);
- sym_cop->cipher.iv.data = iv_ptr;
- sym_cop->cipher.iv.phys_addr =
- rte_crypto_op_ctophys_offset(cop, IV_OFFSET);
+ sym_cop->cipher.iv.offset = IV_OFFSET;
sym_cop->cipher.iv.length = sa->iv_len;
break;
case RTE_CRYPTO_CIPHER_AES_CTR:
@@ -115,9 +113,7 @@ esp_inbound(struct rte_mbuf *m, struct ipsec_sa *sa,
icb->salt = sa->salt;
memcpy(&icb->iv, iv, 8);
icb->cnt = rte_cpu_to_be_32(1);
- sym_cop->cipher.iv.data = iv_ptr;
- sym_cop->cipher.iv.phys_addr =
- rte_crypto_op_ctophys_offset(cop, IV_OFFSET);
+ sym_cop->cipher.iv.offset = IV_OFFSET;
sym_cop->cipher.iv.length = 16;
break;
default:
@@ -348,15 +344,11 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
padding[pad_len - 2] = pad_len - 2;
padding[pad_len - 1] = nlp;
- uint8_t *iv_ptr = rte_crypto_op_ctod_offset(cop,
- uint8_t *, IV_OFFSET);
struct cnt_blk *icb = get_cnt_blk(m);
icb->salt = sa->salt;
icb->iv = sa->seq;
icb->cnt = rte_cpu_to_be_32(1);
- sym_cop->cipher.iv.data = iv_ptr;
- sym_cop->cipher.iv.phys_addr =
- rte_crypto_op_ctophys_offset(cop, IV_OFFSET);
+ sym_cop->cipher.iv.offset = IV_OFFSET;
sym_cop->cipher.iv.length = 16;
uint8_t *aad;
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 1380bc6..ffd9731 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -489,9 +489,7 @@ l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
/* Copy IV at the end of the crypto operation */
rte_memcpy(iv_ptr, cparams->iv.data, cparams->iv.length);
- op->sym->cipher.iv.data = iv_ptr;
- op->sym->cipher.iv.phys_addr =
- rte_crypto_op_ctophys_offset(op, IV_OFFSET);
+ op->sym->cipher.iv.offset = IV_OFFSET;
op->sym->cipher.iv.length = cparams->iv.length;
/* For wireless algorithms, offset/length must be in bits */
@@ -700,7 +698,6 @@ l2fwd_main_loop(struct l2fwd_crypto_options *options)
if (port_cparams[i].do_cipher) {
port_cparams[i].iv.data = options->iv.data;
port_cparams[i].iv.length = options->iv.length;
- port_cparams[i].iv.phys_addr = options->iv.phys_addr;
if (!options->iv_param)
generate_random_key(port_cparams[i].iv.data,
port_cparams[i].iv.length);
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index 39ad1e3..b35c45a 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -464,8 +464,10 @@ struct rte_crypto_sym_op {
} data; /**< Data offsets and length for ciphering */
struct {
- uint8_t *data;
- /**< Initialisation Vector or Counter.
+ uint16_t offset;
+ /**< Starting point for Initialisation Vector or Counter,
+ * specified as number of bytes from start of crypto
+ * operation.
*
* - For block ciphers in CBC or F8 mode, or for KASUMI
* in F8 mode, or for SNOW 3G in UEA2 mode, this is the
@@ -491,7 +493,6 @@ struct rte_crypto_sym_op {
* For optimum performance, the data pointed to SHOULD
* be 8-byte aligned.
*/
- phys_addr_t phys_addr;
uint16_t length;
/**< Length of valid IV data.
*
diff --git a/test/test/test_cryptodev.c b/test/test/test_cryptodev.c
index 0037e88..fbcaaee 100644
--- a/test/test/test_cryptodev.c
+++ b/test/test/test_cryptodev.c
@@ -1311,13 +1311,11 @@ test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
sym_op->auth.data.length = QUOTE_512_BYTES;
/* Set crypto operation cipher parameters */
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ut_params->op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ut_params->op,
- IV_OFFSET);
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = CIPHER_IV_LENGTH_AES_CBC;
- rte_memcpy(sym_op->cipher.iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+ rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
+ aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
sym_op->cipher.data.offset = 0;
sym_op->cipher.data.length = QUOTE_512_BYTES;
@@ -1464,13 +1462,11 @@ test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_sym_session *sess,
sym_op->auth.data.offset = 0;
sym_op->auth.data.length = QUOTE_512_BYTES;
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ut_params->op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ut_params->op,
- IV_OFFSET);
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = CIPHER_IV_LENGTH_AES_CBC;
- rte_memcpy(sym_op->cipher.iv.data, iv, CIPHER_IV_LENGTH_AES_CBC);
+ rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
+ iv, CIPHER_IV_LENGTH_AES_CBC);
sym_op->cipher.data.offset = 0;
sym_op->cipher.data.length = QUOTE_512_BYTES;
@@ -1860,13 +1856,11 @@ create_wireless_algo_cipher_operation(const uint8_t *iv, uint8_t iv_len,
sym_op->m_src = ut_params->ibuf;
/* iv */
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ut_params->op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ut_params->op,
- IV_OFFSET);
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = iv_len;
- rte_memcpy(sym_op->cipher.iv.data, iv, iv_len);
+ rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
+ iv, iv_len);
sym_op->cipher.data.length = cipher_len;
sym_op->cipher.data.offset = cipher_offset;
return 0;
@@ -1896,13 +1890,11 @@ create_wireless_algo_cipher_operation_oop(const uint8_t *iv, uint8_t iv_len,
sym_op->m_dst = ut_params->obuf;
/* iv */
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ut_params->op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ut_params->op,
- IV_OFFSET);
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = iv_len;
- rte_memcpy(sym_op->cipher.iv.data, iv, iv_len);
+ rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
+ iv, iv_len);
sym_op->cipher.data.length = cipher_len;
sym_op->cipher.data.offset = cipher_offset;
return 0;
@@ -2219,13 +2211,11 @@ create_wireless_cipher_hash_operation(const struct wireless_test_data *tdata,
TEST_HEXDUMP(stdout, "aad:", sym_op->auth.aad.data, aad_len);
/* iv */
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ut_params->op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ut_params->op,
- IV_OFFSET);
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = iv_len;
- rte_memcpy(sym_op->cipher.iv.data, iv, iv_len);
+ rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
+ iv, iv_len);
sym_op->cipher.data.length = cipher_len;
sym_op->cipher.data.offset = cipher_offset + auth_offset;
sym_op->auth.data.length = auth_len;
@@ -2316,13 +2306,11 @@ create_wireless_algo_cipher_hash_operation(const uint8_t *auth_tag,
TEST_HEXDUMP(stdout, "aad:", sym_op->auth.aad.data, aad_len);
/* iv */
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ut_params->op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ut_params->op,
- IV_OFFSET);
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = iv_len;
- rte_memcpy(sym_op->cipher.iv.data, iv, iv_len);
+ rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
+ iv, iv_len);
sym_op->cipher.data.length = cipher_len;
sym_op->cipher.data.offset = cipher_offset + auth_offset;
sym_op->auth.data.length = auth_len;
@@ -2401,14 +2389,11 @@ create_wireless_algo_auth_cipher_operation(const unsigned auth_tag_len,
sym_op->auth.aad.data, aad_len);
/* iv */
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ut_params->op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ut_params->op,
- IV_OFFSET);
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = iv_len;
- rte_memcpy(sym_op->cipher.iv.data, iv, iv_len);
-
+ rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
+ iv, iv_len);
sym_op->cipher.data.length = cipher_len;
sym_op->cipher.data.offset = auth_offset + cipher_offset;
@@ -4854,14 +4839,13 @@ create_gcm_operation(enum rte_crypto_cipher_operation op,
sym_op->auth.aad.length);
/* Append IV at the end of the crypto operation*/
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ut_params->op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ut_params->op,
- IV_OFFSET);
+ uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ut_params->op,
+ uint8_t *, IV_OFFSET);
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = tdata->iv.len;
- rte_memcpy(sym_op->cipher.iv.data, tdata->iv.data, tdata->iv.len);
- TEST_HEXDUMP(stdout, "iv:", sym_op->cipher.iv.data,
+ rte_memcpy(iv_ptr, tdata->iv.data, tdata->iv.len);
+ TEST_HEXDUMP(stdout, "iv:", iv_ptr,
sym_op->cipher.iv.length);
/* Append plaintext/ciphertext */
@@ -6429,15 +6413,15 @@ create_gmac_operation(enum rte_crypto_auth_operation op,
sym_op->auth.digest.length);
}
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ut_params->op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ut_params->op,
- IV_OFFSET);
+ uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ut_params->op,
+ uint8_t *, IV_OFFSET);
+
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = tdata->iv.len;
- rte_memcpy(sym_op->cipher.iv.data, tdata->iv.data, tdata->iv.len);
+ rte_memcpy(iv_ptr, tdata->iv.data, tdata->iv.len);
- TEST_HEXDUMP(stdout, "iv:", sym_op->cipher.iv.data, tdata->iv.len);
+ TEST_HEXDUMP(stdout, "iv:", iv_ptr, tdata->iv.len);
sym_op->cipher.data.length = 0;
sym_op->cipher.data.offset = 0;
@@ -6975,13 +6959,11 @@ create_auth_GMAC_operation(struct crypto_testsuite_params *ts_params,
sym_op->auth.digest.data,
sym_op->auth.digest.length);
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ut_params->op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ut_params->op,
- IV_OFFSET);
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = reference->iv.len;
- rte_memcpy(sym_op->cipher.iv.data, reference->iv.data, reference->iv.len);
+ rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
+ reference->iv.data, reference->iv.len);
sym_op->cipher.data.length = 0;
sym_op->cipher.data.offset = 0;
@@ -7034,13 +7016,11 @@ create_cipher_auth_operation(struct crypto_testsuite_params *ts_params,
sym_op->auth.digest.data,
sym_op->auth.digest.length);
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ut_params->op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ut_params->op,
- IV_OFFSET);
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = reference->iv.len;
- rte_memcpy(sym_op->cipher.iv.data, reference->iv.data, reference->iv.len);
+ rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
+ reference->iv.data, reference->iv.len);
sym_op->cipher.data.length = reference->ciphertext.len;
sym_op->cipher.data.offset = 0;
@@ -7284,13 +7264,12 @@ create_gcm_operation_SGL(enum rte_crypto_cipher_operation op,
sym_op->auth.digest.length);
}
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ut_params->op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ut_params->op,
- IV_OFFSET);
+ uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ut_params->op,
+ uint8_t *, IV_OFFSET);
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = iv_len;
- rte_memcpy(sym_op->cipher.iv.data, tdata->iv.data, iv_len);
+ rte_memcpy(iv_ptr, tdata->iv.data, iv_len);
sym_op->auth.aad.data = (uint8_t *)rte_pktmbuf_prepend(
ut_params->ibuf, aad_len);
@@ -7303,7 +7282,7 @@ create_gcm_operation_SGL(enum rte_crypto_cipher_operation op,
memset(sym_op->auth.aad.data, 0, aad_len);
rte_memcpy(sym_op->auth.aad.data, tdata->aad.data, aad_len);
- TEST_HEXDUMP(stdout, "iv:", sym_op->cipher.iv.data, iv_len);
+ TEST_HEXDUMP(stdout, "iv:", iv_ptr, iv_len);
TEST_HEXDUMP(stdout, "aad:",
sym_op->auth.aad.data, aad_len);
diff --git a/test/test/test_cryptodev_blockcipher.c b/test/test/test_cryptodev_blockcipher.c
index 2a0c364..312405b 100644
--- a/test/test/test_cryptodev_blockcipher.c
+++ b/test/test/test_cryptodev_blockcipher.c
@@ -290,12 +290,10 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
sym_op->cipher.data.offset = 0;
sym_op->cipher.data.length = tdata->ciphertext.len;
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(op,
- IV_OFFSET);
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = tdata->iv.len;
- rte_memcpy(sym_op->cipher.iv.data, tdata->iv.data,
+ rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
+ tdata->iv.data,
tdata->iv.len);
}
diff --git a/test/test/test_cryptodev_perf.c b/test/test/test_cryptodev_perf.c
index b08451d..86bdc6e 100644
--- a/test/test/test_cryptodev_perf.c
+++ b/test/test/test_cryptodev_perf.c
@@ -1981,15 +1981,11 @@ test_perf_crypto_qp_vary_burst_size(uint16_t dev_num)
op->sym->auth.data.offset = 0;
op->sym->auth.data.length = data_params[0].length;
-
- op->sym->cipher.iv.data = rte_crypto_op_ctod_offset(op,
- uint8_t *, IV_OFFSET);
- op->sym->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(op,
- IV_OFFSET);
+ op->sym->cipher.iv.offset = IV_OFFSET;
op->sym->cipher.iv.length = CIPHER_IV_LENGTH_AES_CBC;
- rte_memcpy(op->sym->cipher.iv.data, aes_cbc_128_iv,
- CIPHER_IV_LENGTH_AES_CBC);
+ rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
+ aes_cbc_128_iv, CIPHER_IV_LENGTH_AES_CBC);
op->sym->cipher.data.offset = 0;
op->sym->cipher.data.length = data_params[0].length;
@@ -2898,13 +2894,10 @@ test_perf_set_crypto_op_aes(struct rte_crypto_op *op, struct rte_mbuf *m,
/* Cipher Parameters */
- op->sym->cipher.iv.data = rte_crypto_op_ctod_offset(op,
- uint8_t *, IV_OFFSET);
- op->sym->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(op,
- IV_OFFSET);
+ op->sym->cipher.iv.offset = IV_OFFSET;
op->sym->cipher.iv.length = AES_CIPHER_IV_LENGTH;
-
- rte_memcpy(op->sym->cipher.iv.data, aes_iv, AES_CIPHER_IV_LENGTH);
+ rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
+ aes_iv, AES_CIPHER_IV_LENGTH);
op->sym->cipher.data.offset = 0;
op->sym->cipher.data.length = data_len;
@@ -2934,12 +2927,10 @@ test_perf_set_crypto_op_aes_gcm(struct rte_crypto_op *op, struct rte_mbuf *m,
op->sym->auth.aad.length = AES_GCM_AAD_LENGTH;
/* Cipher Parameters */
- op->sym->cipher.iv.data = rte_crypto_op_ctod_offset(op,
- uint8_t *, IV_OFFSET);
- op->sym->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(op,
- IV_OFFSET);
+ op->sym->cipher.iv.offset = IV_OFFSET;
op->sym->cipher.iv.length = AES_CIPHER_IV_LENGTH;
- rte_memcpy(op->sym->cipher.iv.data, aes_iv, AES_CIPHER_IV_LENGTH);
+ rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
+ aes_iv, AES_CIPHER_IV_LENGTH);
/* Data lengths/offsets Parameters */
op->sym->auth.data.offset = 0;
@@ -2980,9 +2971,7 @@ test_perf_set_crypto_op_snow3g(struct rte_crypto_op *op, struct rte_mbuf *m,
op->sym->auth.aad.length = SNOW3G_CIPHER_IV_LENGTH;
/* Cipher Parameters */
- op->sym->cipher.iv.data = iv_ptr;
- op->sym->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(op,
- IV_OFFSET);
+ op->sym->cipher.iv.offset = IV_OFFSET;
op->sym->cipher.iv.length = SNOW3G_CIPHER_IV_LENGTH;
/* Data lengths/offsets Parameters */
@@ -3009,12 +2998,10 @@ test_perf_set_crypto_op_snow3g_cipher(struct rte_crypto_op *op,
}
/* Cipher Parameters */
- op->sym->cipher.iv.data = rte_crypto_op_ctod_offset(op,
- uint8_t *, IV_OFFSET);
- op->sym->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(op,
- IV_OFFSET);
+ op->sym->cipher.iv.offset = IV_OFFSET;
op->sym->cipher.iv.length = SNOW3G_CIPHER_IV_LENGTH;
- rte_memcpy(op->sym->cipher.iv.data, snow3g_iv, SNOW3G_CIPHER_IV_LENGTH);
+ rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
+ snow3g_iv, SNOW3G_CIPHER_IV_LENGTH);
op->sym->cipher.data.offset = 0;
op->sym->cipher.data.length = data_len << 3;
@@ -3082,13 +3069,10 @@ test_perf_set_crypto_op_3des(struct rte_crypto_op *op, struct rte_mbuf *m,
op->sym->auth.digest.length = digest_len;
/* Cipher Parameters */
- op->sym->cipher.iv.data = rte_crypto_op_ctod_offset(op,
- uint8_t *, IV_OFFSET);
- op->sym->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(op,
- IV_OFFSET);
+ op->sym->cipher.iv.offset = IV_OFFSET;
op->sym->cipher.iv.length = TRIPLE_DES_CIPHER_IV_LENGTH;
- rte_memcpy(op->sym->cipher.iv.data, triple_des_iv,
- TRIPLE_DES_CIPHER_IV_LENGTH);
+ rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
+ triple_des_iv, TRIPLE_DES_CIPHER_IV_LENGTH);
/* Data lengths/offsets Parameters */
op->sym->auth.data.offset = 0;
@@ -4183,6 +4167,9 @@ perf_gcm_set_crypto_op(struct rte_crypto_op *op, struct rte_mbuf *m,
struct crypto_params *m_hlp,
struct perf_test_params *params)
{
+ uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op,
+ uint8_t *, IV_OFFSET);
+
if (rte_crypto_op_attach_sym_session(op, sess) != 0) {
rte_crypto_op_free(op);
return NULL;
@@ -4203,14 +4190,11 @@ perf_gcm_set_crypto_op(struct rte_crypto_op *op, struct rte_mbuf *m,
rte_memcpy(op->sym->auth.aad.data, params->symmetric_op->aad_data,
params->symmetric_op->aad_len);
- op->sym->cipher.iv.data = rte_crypto_op_ctod_offset(op,
- uint8_t *, IV_OFFSET);
- op->sym->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(op,
- IV_OFFSET);
- rte_memcpy(op->sym->cipher.iv.data, params->symmetric_op->iv_data,
+ op->sym->cipher.iv.offset = IV_OFFSET;
+ rte_memcpy(iv_ptr, params->symmetric_op->iv_data,
params->symmetric_op->iv_len);
if (params->symmetric_op->iv_len == 12)
- op->sym->cipher.iv.data[15] = 1;
+ iv_ptr[15] = 1;
op->sym->cipher.iv.length = params->symmetric_op->iv_len;
--
2.9.4
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v3 04/26] cryptodev: do not store pointer to op specific params
` (2 preceding siblings ...)
2017-06-29 11:34 4% ` [dpdk-dev] [PATCH v3 03/26] cryptodev: remove opaque data pointer in crypto op Pablo de Lara
@ 2017-06-29 11:34 4% ` Pablo de Lara
2017-06-29 11:35 1% ` [dpdk-dev] [PATCH v3 12/26] cryptodev: pass IV as offset Pablo de Lara
` (6 subsequent siblings)
10 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2017-06-29 11:34 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal,
hemant.agrawal, fiona.trahe, john.griffin, deepak.k.jain
Cc: dev, Pablo de Lara
Instead of storing a pointer to the operation specific parameters,
such as the symmetric crypto parameters, use a zero-length array,
to mark that these parameters are stored right after the
generic crypto operation structure (which the code already
assumed), reducing the memory footprint of the crypto operation.
Besides, rte_crypto_op and rte_crypto_sym_op (currently the only
operation specific parameters structure) are always expected to be
contiguous, as they are initialized as a single object in the
crypto operation pool.
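As a minimal standalone sketch of the layout this relies on (stand-in
struct names, not the actual DPDK headers): the parameters begin at the
first byte after the generic operation, so a GNU-style zero-length
array member addresses them without a stored pointer.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct sym_params {                /* stand-in for rte_crypto_sym_op */
	uint32_t cipher_len;
};

struct crypto_op {                 /* stand-in for rte_crypto_op */
	uint8_t type;
	struct sym_params sym[0];  /* params live right after the op */
};

int main(void)
{
	/* one allocation holds both parts, as the op mempool does */
	struct crypto_op *op = malloc(sizeof(*op) + sizeof(struct sym_params));
	if (op == NULL)
		return 1;
	op->sym[0].cipher_len = 64; /* no op->sym pointer to initialize */
	printf("%u\n", op->sym[0].cipher_len);
	free(op);
	return 0;
}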
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
doc/guides/rel_notes/release_17_08.rst | 2 ++
examples/ipsec-secgw/ipsec.c | 1 -
lib/librte_cryptodev/rte_crypto.h | 8 +-------
3 files changed, 3 insertions(+), 8 deletions(-)
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index 20f459e..6acbf35 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -155,6 +155,8 @@ API Changes
``rte_crypto_op_sess_type`` in ``rte_crypto_op`` have been modified to be
uint8_t values.
* Removed the field ``opaque_data`` from ``rte_crypto_op``.
+ * Pointer to ``rte_crypto_sym_op`` in ``rte_crypto_op`` has been replaced
+ with a zero length array.
ABI Changes
diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index edca5f0..126d79f 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -140,7 +140,6 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
priv->cop.status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_prefetch0(&priv->sym_cop);
- priv->cop.sym = &priv->sym_cop;
if ((unlikely(sa->crypto_session == NULL)) &&
create_session(ipsec_ctx, sa)) {
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
index c2677fa..85716a6 100644
--- a/lib/librte_cryptodev/rte_crypto.h
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -124,7 +124,7 @@ struct rte_crypto_op {
RTE_STD_C11
union {
- struct rte_crypto_sym_op *sym;
+ struct rte_crypto_sym_op sym[0];
/**< Symmetric operation parameters */
}; /**< operation specific parameters */
} __rte_cache_aligned;
@@ -144,12 +144,6 @@ __rte_crypto_op_reset(struct rte_crypto_op *op, enum rte_crypto_op_type type)
switch (type) {
case RTE_CRYPTO_OP_TYPE_SYMMETRIC:
- /** Symmetric operation structure starts after the end of the
- * rte_crypto_op structure.
- */
- op->sym = (struct rte_crypto_sym_op *)(op + 1);
- op->type = type;
-
__rte_crypto_sym_op_reset(op->sym);
break;
default:
--
2.9.4
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v3 03/26] cryptodev: remove opaque data pointer in crypto op
2017-06-29 11:34 2% ` [dpdk-dev] [PATCH v3 01/26] cryptodev: move session type to generic crypto op Pablo de Lara
2017-06-29 11:34 4% ` [dpdk-dev] [PATCH v3 02/26] cryptodev: replace enums with 1-byte variables Pablo de Lara
@ 2017-06-29 11:34 4% ` Pablo de Lara
2017-06-29 11:34 4% ` [dpdk-dev] [PATCH v3 04/26] cryptodev: do not store pointer to op specific params Pablo de Lara
` (7 subsequent siblings)
10 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2017-06-29 11:34 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal,
hemant.agrawal, fiona.trahe, john.griffin, deepak.k.jain
Cc: dev, Pablo de Lara
Storing a pointer to the user data is unnecessary,
since the user can store additional data right after
the crypto operation.
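As a hedged usage sketch of the replacement pattern (the struct name
is hypothetical): the application reserves private space when creating
the operation pool and locates it right after the symmetric
parameters, exactly as the latency test below now does.

#include <stdint.h>
#include <rte_crypto.h>

struct my_priv {                      /* hypothetical user data */
	uint64_t tsc_start;
};

/* Pool created with a private area of sizeof(struct my_priv), e.g.:
 * rte_crypto_op_pool_create("pool", RTE_CRYPTO_OP_TYPE_SYMMETRIC,
 *                           8192, 128, sizeof(struct my_priv),
 *                           rte_socket_id());
 */
static inline struct my_priv *
op_priv(struct rte_crypto_op *op)
{
	/* private data starts right after rte_crypto_sym_op */
	return (struct my_priv *)(op->sym + 1);
}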
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
app/test-crypto-perf/cperf_test_latency.c | 36 +++++++++++++++++++++----------
doc/guides/prog_guide/cryptodev_lib.rst | 3 +--
doc/guides/rel_notes/release_17_08.rst | 1 +
lib/librte_cryptodev/rte_crypto.h | 5 -----
4 files changed, 27 insertions(+), 18 deletions(-)
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index e61ac97..a7443a3 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -66,6 +66,10 @@ struct cperf_latency_ctx {
struct cperf_op_result *res;
};
+struct priv_op_data {
+ struct cperf_op_result *result;
+};
+
#define max(a, b) (a > b ? (uint64_t)a : (uint64_t)b)
#define min(a, b) (a < b ? (uint64_t)a : (uint64_t)b)
@@ -276,8 +280,9 @@ cperf_latency_test_constructor(uint8_t dev_id, uint16_t qp_id,
snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
dev_id);
+ uint16_t priv_size = sizeof(struct priv_op_data);
ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz, 0, 0,
+ RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz, 0, priv_size,
rte_socket_id());
if (ctx->crypto_op_pool == NULL)
goto err;
@@ -295,11 +300,20 @@ cperf_latency_test_constructor(uint8_t dev_id, uint16_t qp_id,
return NULL;
}
+static inline void
+store_timestamp(struct rte_crypto_op *op, uint64_t timestamp)
+{
+ struct priv_op_data *priv_data;
+
+ priv_data = (struct priv_op_data *) (op->sym + 1);
+ priv_data->result->status = op->status;
+ priv_data->result->tsc_end = timestamp;
+}
+
int
cperf_latency_test_runner(void *arg)
{
struct cperf_latency_ctx *ctx = arg;
- struct cperf_op_result *pres;
uint16_t test_burst_size;
uint8_t burst_size_idx = 0;
@@ -311,6 +325,7 @@ cperf_latency_test_runner(void *arg)
struct rte_crypto_op *ops[ctx->options->max_burst_size];
struct rte_crypto_op *ops_processed[ctx->options->max_burst_size];
uint64_t i;
+ struct priv_op_data *priv_data;
uint32_t lcore = rte_lcore_id();
@@ -398,7 +413,12 @@ cperf_latency_test_runner(void *arg)
for (i = 0; i < ops_enqd; i++) {
ctx->res[tsc_idx].tsc_start = tsc_start;
- ops[i]->opaque_data = (void *)&ctx->res[tsc_idx];
+ /*
+ * Private data structure starts after the end of the
+ * rte_crypto_sym_op structure.
+ */
+ priv_data = (struct priv_op_data *) (ops[i]->sym + 1);
+ priv_data->result = (void *)&ctx->res[tsc_idx];
tsc_idx++;
}
@@ -410,10 +430,7 @@ cperf_latency_test_runner(void *arg)
* failures.
*/
for (i = 0; i < ops_deqd; i++) {
- pres = (struct cperf_op_result *)
- (ops_processed[i]->opaque_data);
- pres->status = ops_processed[i]->status;
- pres->tsc_end = tsc_end;
+ store_timestamp(ops_processed[i], tsc_end);
rte_crypto_op_free(ops_processed[i]);
}
@@ -446,10 +463,7 @@ cperf_latency_test_runner(void *arg)
if (ops_deqd != 0) {
for (i = 0; i < ops_deqd; i++) {
- pres = (struct cperf_op_result *)
- (ops_processed[i]->opaque_data);
- pres->status = ops_processed[i]->status;
- pres->tsc_end = tsc_end;
+ store_timestamp(ops_processed[i], tsc_end);
rte_crypto_op_free(ops_processed[i]);
}
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index 229cb7a..c9a29f8 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -363,8 +363,7 @@ The operation structure includes the operation type, the operation status
and the session type (session-based/less), a reference to the operation
specific data, which can vary in size and content depending on the operation
being provisioned. It also contains the source mempool for the operation,
-if it allocate from a mempool. Finally an opaque pointer for user specific
-data is provided.
+if it allocated from a mempool.
If Crypto operations are allocated from a Crypto operation mempool, see next
section, there is also the ability to allocate private memory with the
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index bbb14a9..20f459e 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -154,6 +154,7 @@ API Changes
* Enumerations ``rte_crypto_op_type``, ``rte_crypto_op_status`` and
``rte_crypto_op_sess_type`` in ``rte_crypto_op`` have been modified to be
uint8_t values.
+ * Removed the field ``opaque_data`` from ``rte_crypto_op``.
ABI Changes
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
index 8e2b640..c2677fa 100644
--- a/lib/librte_cryptodev/rte_crypto.h
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -122,9 +122,6 @@ struct rte_crypto_op {
phys_addr_t phys_addr;
/**< physical address of crypto operation */
- void *opaque_data;
- /**< Opaque pointer for user data */
-
RTE_STD_C11
union {
struct rte_crypto_sym_op *sym;
@@ -158,8 +155,6 @@ __rte_crypto_op_reset(struct rte_crypto_op *op, enum rte_crypto_op_type type)
default:
break;
}
-
- op->opaque_data = NULL;
}
/**
--
2.9.4
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v3 02/26] cryptodev: replace enums with 1-byte variables
2017-06-29 11:34 2% ` [dpdk-dev] [PATCH v3 01/26] cryptodev: move session type to generic crypto op Pablo de Lara
@ 2017-06-29 11:34 4% ` Pablo de Lara
2017-06-29 11:34 4% ` [dpdk-dev] [PATCH v3 03/26] cryptodev: remove opaque data pointer in crypto op Pablo de Lara
` (8 subsequent siblings)
10 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2017-06-29 11:34 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal,
hemant.agrawal, fiona.trahe, john.griffin, deepak.k.jain
Cc: dev, Pablo de Lara
Instead of storing some crypto operation flags,
such as the operation status, as enumerations,
store them as uint8_t values, for memory efficiency.
Also, reserve an extra 5 bytes in the crypto operation
for future additions.
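A back-of-envelope sketch of the saving (stand-in structs, assuming
enums are int-sized, as on common ABIs): three enums occupy 12 bytes,
while three uint8_t fields plus the 5 reserved bytes fit in a single
64-bit word ahead of the mempool pointer.

#include <assert.h>
#include <stdint.h>

struct op_head_old {       /* layout before this patch */
	int type;
	int status;
	int sess_type;
};

struct op_head_new {       /* layout after this patch */
	uint8_t type;
	uint8_t status;
	uint8_t sess_type;
	uint8_t reserved[5];
};

static_assert(sizeof(struct op_head_old) == 12, "three 4-byte enums");
static_assert(sizeof(struct op_head_new) == 8, "one 64-bit word");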
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
doc/guides/rel_notes/release_17_08.rst | 3 +++
lib/librte_cryptodev/rte_crypto.h | 9 +++++----
2 files changed, 8 insertions(+), 4 deletions(-)
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index 2bc405d..bbb14a9 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -151,6 +151,9 @@ API Changes
* Removed the field ``rte_crypto_sym_op_sess_type`` from ``rte_crypto_sym_op``,
and moved it to ``rte_crypto_op`` as ``rte_crypto_op_sess_type``.
+ * Enumerations ``rte_crypto_op_type``, ``rte_crypto_op_status`` and
+ ``rte_crypto_op_sess_type`` in ``rte_crypto_op`` have been modified to be
+ uint8_t values.
ABI Changes
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
index ac5c184..8e2b640 100644
--- a/lib/librte_cryptodev/rte_crypto.h
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -102,19 +102,20 @@ enum rte_crypto_op_sess_type {
* rte_cryptodev_enqueue_burst() / rte_cryptodev_dequeue_burst() .
*/
struct rte_crypto_op {
- enum rte_crypto_op_type type;
+ uint8_t type;
/**< operation type */
-
- enum rte_crypto_op_status status;
+ uint8_t status;
/**<
* operation status - this is reset to
* RTE_CRYPTO_OP_STATUS_NOT_PROCESSED on allocation from mempool and
* will be set to RTE_CRYPTO_OP_STATUS_SUCCESS after crypto operation
* is successfully processed by a crypto PMD
*/
- enum rte_crypto_op_sess_type sess_type;
+ uint8_t sess_type;
/**< operation session type */
+ uint8_t reserved[5];
+ /**< Reserved bytes to fill 64 bits for future additions */
struct rte_mempool *mempool;
/**< crypto operation mempool which operation is allocated from */
--
2.9.4
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v3 01/26] cryptodev: move session type to generic crypto op
@ 2017-06-29 11:34 2% ` Pablo de Lara
2017-06-29 11:34 4% ` [dpdk-dev] [PATCH v3 02/26] cryptodev: replace enums with 1-byte variables Pablo de Lara
` (9 subsequent siblings)
10 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2017-06-29 11:34 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal,
hemant.agrawal, fiona.trahe, john.griffin, deepak.k.jain
Cc: dev, Pablo de Lara
Session type (operation with or without session) is not
something specific to symmetric operations.
Therefore, the variable is moved to the generic crypto operation
structure.
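As a hedged sketch of the access-pattern change this implies in the
PMDs (the session type and helpers are hypothetical stand-ins for a
PMD's own code; only the sess_type access is the point):

#include <rte_crypto.h>

struct pmd_session;
struct pmd_session *session_from_handle(void *handle);
struct pmd_session *session_from_xform(struct rte_crypto_sym_xform *xform);

static struct pmd_session *
get_session(struct rte_crypto_op *op)
{
	/* before this patch: op->sym->sess_type */
	if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
		return session_from_handle(op->sym->session);
	/* RTE_CRYPTO_OP_SESSIONLESS: build from the transform chain */
	return session_from_xform(op->sym->xform);
}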
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
doc/guides/prog_guide/cryptodev_lib.rst | 21 ++++++++++-----------
doc/guides/rel_notes/release_17_08.rst | 8 ++++++++
drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 15 ++++++++-------
drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c | 4 ++--
drivers/crypto/armv8/rte_armv8_pmd.c | 4 ++--
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 2 +-
drivers/crypto/kasumi/rte_kasumi_pmd.c | 6 +++---
drivers/crypto/null/null_crypto_pmd.c | 15 ++++++++-------
drivers/crypto/openssl/rte_openssl_pmd.c | 4 ++--
drivers/crypto/qat/qat_crypto.c | 2 +-
drivers/crypto/snow3g/rte_snow3g_pmd.c | 6 +++---
drivers/crypto/zuc/rte_zuc_pmd.c | 4 ++--
lib/librte_cryptodev/rte_crypto.h | 15 +++++++++++++++
lib/librte_cryptodev/rte_crypto_sym.h | 16 ----------------
test/test/test_cryptodev.c | 8 ++++----
15 files changed, 69 insertions(+), 61 deletions(-)
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index 4f98f28..229cb7a 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -1,5 +1,5 @@
.. BSD LICENSE
- Copyright(c) 2016 Intel Corporation. All rights reserved.
+ Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
@@ -359,11 +359,12 @@ Crypto operation to be processed on a particular Crypto device poll mode driver.
.. figure:: img/crypto_op.*
-The operation structure includes the operation type and the operation status,
-a reference to the operation specific data, which can vary in size and content
-depending on the operation being provisioned. It also contains the source
-mempool for the operation, if it allocate from a mempool. Finally an
-opaque pointer for user specific data is provided.
+The operation structure includes the operation type, the operation status
+and the session type (session-based/less), a reference to the operation
+specific data, which can vary in size and content depending on the operation
+being provisioned. It also contains the source mempool for the operation,
+if it allocate from a mempool. Finally an opaque pointer for user specific
+data is provided.
If Crypto operations are allocated from a Crypto operation mempool, see next
section, there is also the ability to allocate private memory with the
@@ -512,9 +513,9 @@ buffer. It is used for either cipher, authentication, AEAD and chained
operations.
As a minimum the symmetric operation must have a source data buffer (``m_src``),
-the session type (session-based/less), a valid session (or transform chain if in
-session-less mode) and the minimum authentication/ cipher parameters required
-depending on the type of operation specified in the session or the transform
+a valid session (or transform chain if in session-less mode) and the minimum
+authentication/ cipher parameters required depending on the type of operation
+specified in the session or the transform
chain.
.. code-block:: c
@@ -523,8 +524,6 @@ chain.
struct rte_mbuf *m_src;
struct rte_mbuf *m_dst;
- enum rte_crypto_sym_op_sess_type type;
-
union {
struct rte_cryptodev_sym_session *session;
/**< Handle for the initialised session context */
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index 842f46f..2bc405d 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -144,6 +144,14 @@ API Changes
Also, make sure to start the actual text at the margin.
=========================================================
+* **Reworked rte_cryptodev library.**
+
+ The rte_cryptodev library has been reworked and updated. The following changes
+ have been made to it:
+
+ * Removed the field ``rte_crypto_sym_op_sess_type`` from ``rte_crypto_sym_op``,
+ and moved it to ``rte_crypto_op`` as ``rte_crypto_op_sess_type``.
+
ABI Changes
-----------
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 1b95c23..a0154ff 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -139,16 +139,17 @@ aesni_gcm_set_session_parameters(struct aesni_gcm_session *sess,
/** Get gcm session */
static struct aesni_gcm_session *
-aesni_gcm_get_session(struct aesni_gcm_qp *qp, struct rte_crypto_sym_op *op)
+aesni_gcm_get_session(struct aesni_gcm_qp *qp, struct rte_crypto_op *op)
{
struct aesni_gcm_session *sess = NULL;
+ struct rte_crypto_sym_op *sym_op = op->sym;
- if (op->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->session->dev_type
+ if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+ if (unlikely(sym_op->session->dev_type
!= RTE_CRYPTODEV_AESNI_GCM_PMD))
return sess;
- sess = (struct aesni_gcm_session *)op->session->_private;
+ sess = (struct aesni_gcm_session *)sym_op->session->_private;
} else {
void *_sess;
@@ -159,7 +160,7 @@ aesni_gcm_get_session(struct aesni_gcm_qp *qp, struct rte_crypto_sym_op *op)
((struct rte_cryptodev_sym_session *)_sess)->_private;
if (unlikely(aesni_gcm_set_session_parameters(sess,
- op->xform) != 0)) {
+ sym_op->xform) != 0)) {
rte_mempool_put(qp->sess_mp, _sess);
sess = NULL;
}
@@ -372,7 +373,7 @@ handle_completed_gcm_crypto_op(struct aesni_gcm_qp *qp,
post_process_gcm_crypto_op(op);
/* Free session if a session-less crypto op */
- if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+ if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
rte_mempool_put(qp->sess_mp, op->sym->session);
op->sym->session = NULL;
}
@@ -393,7 +394,7 @@ aesni_gcm_pmd_dequeue_burst(void *queue_pair,
for (i = 0; i < nb_dequeued; i++) {
- sess = aesni_gcm_get_session(qp, ops[i]->sym);
+ sess = aesni_gcm_get_session(qp, ops[i]);
if (unlikely(sess == NULL)) {
ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
qp->qp_stats.dequeue_err_count++;
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index f9a7d5b..ccdb3a7 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -345,7 +345,7 @@ get_session(struct aesni_mb_qp *qp, struct rte_crypto_op *op)
{
struct aesni_mb_session *sess = NULL;
- if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+ if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
if (unlikely(op->sym->session->dev_type !=
RTE_CRYPTODEV_AESNI_MB_PMD)) {
return NULL;
@@ -541,7 +541,7 @@ post_process_mb_job(struct aesni_mb_qp *qp, JOB_AES_HMAC *job)
}
/* Free session if a session-less crypto op */
- if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+ if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
rte_mempool_put(qp->sess_mp, op->sym->session);
op->sym->session = NULL;
}
diff --git a/drivers/crypto/armv8/rte_armv8_pmd.c b/drivers/crypto/armv8/rte_armv8_pmd.c
index 83dae87..4a79b61 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd.c
@@ -545,7 +545,7 @@ get_session(struct armv8_crypto_qp *qp, struct rte_crypto_op *op)
{
struct armv8_crypto_session *sess = NULL;
- if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+ if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
/* get existing session */
if (likely(op->sym->session != NULL &&
op->sym->session->dev_type ==
@@ -700,7 +700,7 @@ process_op(const struct armv8_crypto_qp *qp, struct rte_crypto_op *op,
}
/* Free session if a session-less crypto op */
- if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+ if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
memset(sess, 0, sizeof(struct armv8_crypto_session));
rte_mempool_put(qp->sess_mp, op->sym->session);
op->sym->session = NULL;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index e32b27e..e154395 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -437,7 +437,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
if (unlikely(nb_ops == 0))
return 0;
- if (ops[0]->sym->sess_type != RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+ if (ops[0]->sess_type != RTE_CRYPTO_OP_WITH_SESSION) {
RTE_LOG(ERR, PMD, "sessionless crypto op not supported\n");
return 0;
}
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd.c b/drivers/crypto/kasumi/rte_kasumi_pmd.c
index 70bf228..c539650 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd.c
@@ -143,7 +143,7 @@ kasumi_get_session(struct kasumi_qp *qp, struct rte_crypto_op *op)
{
struct kasumi_session *sess;
- if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+ if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
if (unlikely(op->sym->session->dev_type !=
RTE_CRYPTODEV_KASUMI_PMD))
return NULL;
@@ -353,7 +353,7 @@ process_ops(struct rte_crypto_op **ops, struct kasumi_session *session,
if (ops[i]->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)
ops[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
/* Free session if a session-less crypto op. */
- if (ops[i]->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+ if (ops[i]->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
rte_mempool_put(qp->sess_mp, ops[i]->sym->session);
ops[i]->sym->session = NULL;
}
@@ -405,7 +405,7 @@ process_op_bit(struct rte_crypto_op *op, struct kasumi_session *session,
op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
/* Free session if a session-less crypto op. */
- if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+ if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
rte_mempool_put(qp->sess_mp, op->sym->session);
op->sym->session = NULL;
}
diff --git a/drivers/crypto/null/null_crypto_pmd.c b/drivers/crypto/null/null_crypto_pmd.c
index 53bdc3e..b1e465a 100644
--- a/drivers/crypto/null/null_crypto_pmd.c
+++ b/drivers/crypto/null/null_crypto_pmd.c
@@ -90,16 +90,17 @@ process_op(const struct null_crypto_qp *qp, struct rte_crypto_op *op,
}
static struct null_crypto_session *
-get_session(struct null_crypto_qp *qp, struct rte_crypto_sym_op *op)
+get_session(struct null_crypto_qp *qp, struct rte_crypto_op *op)
{
struct null_crypto_session *sess;
+ struct rte_crypto_sym_op *sym_op = op->sym;
- if (op->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->session == NULL ||
- op->session->dev_type != RTE_CRYPTODEV_NULL_PMD))
+ if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+ if (unlikely(sym_op->session == NULL ||
+ sym_op->session->dev_type != RTE_CRYPTODEV_NULL_PMD))
return NULL;
- sess = (struct null_crypto_session *)op->session->_private;
+ sess = (struct null_crypto_session *)sym_op->session->_private;
} else {
struct rte_cryptodev_session *c_sess = NULL;
@@ -108,7 +109,7 @@ get_session(struct null_crypto_qp *qp, struct rte_crypto_sym_op *op)
sess = (struct null_crypto_session *)c_sess->_private;
- if (null_crypto_set_session_parameters(sess, op->xform) != 0)
+ if (null_crypto_set_session_parameters(sess, sym_op->xform) != 0)
return NULL;
}
@@ -126,7 +127,7 @@ null_crypto_pmd_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
int i, retval;
for (i = 0; i < nb_ops; i++) {
- sess = get_session(qp, ops[i]->sym);
+ sess = get_session(qp, ops[i]);
if (unlikely(sess == NULL))
goto enqueue_err;
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 5d29171..9f4d9b7 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -446,7 +446,7 @@ get_session(struct openssl_qp *qp, struct rte_crypto_op *op)
{
struct openssl_session *sess = NULL;
- if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+ if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
/* get existing session */
if (likely(op->sym->session != NULL &&
op->sym->session->dev_type ==
@@ -1196,7 +1196,7 @@ process_op(const struct openssl_qp *qp, struct rte_crypto_op *op,
}
/* Free session if a session-less crypto op */
- if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+ if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
openssl_reset_session(sess);
memset(sess, 0, sizeof(struct openssl_session));
rte_mempool_put(qp->sess_mp, op->sym->session);
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
index 8b7b2fa..9b294e4 100644
--- a/drivers/crypto/qat/qat_crypto.c
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -908,7 +908,7 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
return -EINVAL;
}
#endif
- if (unlikely(op->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS)) {
+ if (unlikely(op->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
PMD_DRV_LOG(ERR, "QAT PMD only supports session oriented"
" requests, op (%p) is sessionless.", op);
return -EINVAL;
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd.c b/drivers/crypto/snow3g/rte_snow3g_pmd.c
index 8945f19..84757ac 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd.c
@@ -143,7 +143,7 @@ snow3g_get_session(struct snow3g_qp *qp, struct rte_crypto_op *op)
{
struct snow3g_session *sess;
- if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+ if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
if (unlikely(op->sym->session->dev_type !=
RTE_CRYPTODEV_SNOW3G_PMD))
return NULL;
@@ -357,7 +357,7 @@ process_ops(struct rte_crypto_op **ops, struct snow3g_session *session,
if (ops[i]->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)
ops[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
/* Free session if a session-less crypto op. */
- if (ops[i]->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+ if (ops[i]->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
rte_mempool_put(qp->sess_mp, ops[i]->sym->session);
ops[i]->sym->session = NULL;
}
@@ -409,7 +409,7 @@ process_op_bit(struct rte_crypto_op *op, struct snow3g_session *session,
op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
/* Free session if a session-less crypto op. */
- if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+ if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
rte_mempool_put(qp->sess_mp, op->sym->session);
op->sym->session = NULL;
}
diff --git a/drivers/crypto/zuc/rte_zuc_pmd.c b/drivers/crypto/zuc/rte_zuc_pmd.c
index ec6d54f..63236ac 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd.c
@@ -142,7 +142,7 @@ zuc_get_session(struct zuc_qp *qp, struct rte_crypto_op *op)
{
struct zuc_session *sess;
- if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+ if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
if (unlikely(op->sym->session->dev_type !=
RTE_CRYPTODEV_ZUC_PMD))
return NULL;
@@ -333,7 +333,7 @@ process_ops(struct rte_crypto_op **ops, struct zuc_session *session,
if (ops[i]->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)
ops[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
/* Free session if a session-less crypto op. */
- if (ops[i]->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+ if (ops[i]->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
rte_mempool_put(qp->sess_mp, ops[i]->sym->session);
ops[i]->sym->session = NULL;
}
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
index 9019518..ac5c184 100644
--- a/lib/librte_cryptodev/rte_crypto.h
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -82,6 +82,16 @@ enum rte_crypto_op_status {
};
/**
+ * Crypto operation session type. This is used to specify whether a crypto
+ * operation has session structure attached for immutable parameters or if all
+ * operation information is included in the operation data structure.
+ */
+enum rte_crypto_op_sess_type {
+ RTE_CRYPTO_OP_WITH_SESSION, /**< Session based crypto operation */
+ RTE_CRYPTO_OP_SESSIONLESS /**< Session-less crypto operation */
+};
+
+/**
* Cryptographic Operation.
*
* This structure contains data relating to performing cryptographic
@@ -102,6 +112,8 @@ struct rte_crypto_op {
* will be set to RTE_CRYPTO_OP_STATUS_SUCCESS after crypto operation
* is successfully processed by a crypto PMD
*/
+ enum rte_crypto_op_sess_type sess_type;
+ /**< operation session type */
struct rte_mempool *mempool;
/**< crypto operation mempool which operation is allocated from */
@@ -130,6 +142,7 @@ __rte_crypto_op_reset(struct rte_crypto_op *op, enum rte_crypto_op_type type)
{
op->type = type;
op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+ op->sess_type = RTE_CRYPTO_OP_SESSIONLESS;
switch (type) {
case RTE_CRYPTO_OP_TYPE_SYMMETRIC:
@@ -407,6 +420,8 @@ rte_crypto_op_attach_sym_session(struct rte_crypto_op *op,
if (unlikely(op->type != RTE_CRYPTO_OP_TYPE_SYMMETRIC))
return -1;
+ op->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+
return __rte_crypto_sym_op_attach_sym_session(op->sym, sess);
}
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index 3a40844..386b120 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -376,17 +376,6 @@ struct rte_crypto_sym_xform {
};
};
-/**
- * Crypto operation session type. This is used to specify whether a crypto
- * operation has session structure attached for immutable parameters or if all
- * operation information is included in the operation data structure.
- */
-enum rte_crypto_sym_op_sess_type {
- RTE_CRYPTO_SYM_OP_WITH_SESSION, /**< Session based crypto operation */
- RTE_CRYPTO_SYM_OP_SESSIONLESS /**< Session-less crypto operation */
-};
-
-
struct rte_cryptodev_sym_session;
/**
@@ -423,8 +412,6 @@ struct rte_crypto_sym_op {
struct rte_mbuf *m_src; /**< source mbuf */
struct rte_mbuf *m_dst; /**< destination mbuf */
- enum rte_crypto_sym_op_sess_type sess_type;
-
RTE_STD_C11
union {
struct rte_cryptodev_sym_session *session;
@@ -665,8 +652,6 @@ static inline void
__rte_crypto_sym_op_reset(struct rte_crypto_sym_op *op)
{
memset(op, 0, sizeof(*op));
-
- op->sess_type = RTE_CRYPTO_SYM_OP_SESSIONLESS;
}
@@ -708,7 +693,6 @@ __rte_crypto_sym_op_attach_sym_session(struct rte_crypto_sym_op *sym_op,
struct rte_cryptodev_sym_session *sess)
{
sym_op->session = sess;
- sym_op->sess_type = RTE_CRYPTO_SYM_OP_WITH_SESSION;
return 0;
}
diff --git a/test/test/test_cryptodev.c b/test/test/test_cryptodev.c
index f8f15c0..04620f3 100644
--- a/test/test/test_cryptodev.c
+++ b/test/test/test_cryptodev.c
@@ -5555,8 +5555,8 @@ test_AES_GCM_authenticated_encryption_sessionless(
ut_params->op->sym->m_src = ut_params->ibuf;
- TEST_ASSERT_EQUAL(ut_params->op->sym->sess_type,
- RTE_CRYPTO_SYM_OP_SESSIONLESS,
+ TEST_ASSERT_EQUAL(ut_params->op->sess_type,
+ RTE_CRYPTO_OP_SESSIONLESS,
"crypto op session type not sessionless");
/* Process crypto operation */
@@ -5635,8 +5635,8 @@ test_AES_GCM_authenticated_decryption_sessionless(
ut_params->op->sym->m_src = ut_params->ibuf;
- TEST_ASSERT_EQUAL(ut_params->op->sym->sess_type,
- RTE_CRYPTO_SYM_OP_SESSIONLESS,
+ TEST_ASSERT_EQUAL(ut_params->op->sess_type,
+ RTE_CRYPTO_OP_SESSIONLESS,
"crypto op session type not sessionless");
/* Process crypto operation */
--
2.9.4
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v2 1/5] service cores: header and implementation
@ 2017-06-29 11:23 1% ` Harry van Haaren
0 siblings, 0 replies; 200+ results
From: Harry van Haaren @ 2017-06-29 11:23 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, thomas, keith.wiles, bruce.richardson, Harry van Haaren
Add header files, update .map files with new service
functions, and add the service header to the doxygen
for building.
This service header API allows DPDK to model a service as
a component that requires CPU cycles to do its work. An
example is a PMD that schedules events in software, where a
hardware version exists that does not require a CPU.
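As a hedged usage sketch, built only from the rte_service_spec
structure and rte_service_register() added below (the service name
and callback are hypothetical):

#include <rte_service.h>
#include <rte_service_private.h>

/* one iteration of the component's work, run from a service core */
static int32_t
my_service_run(void *userdata)
{
	(void)userdata;              /* comes from callback_userdata */
	return 0;
}

static int
register_my_service(void)
{
	struct rte_service_spec spec = {
		.name = "my_service",        /* hypothetical name */
		.callback = my_service_run,
		.callback_userdata = NULL,
		.capabilities = 0,           /* not RTE_SERVICE_CAP_MT_SAFE */
		.socket_id = 0,
	};
	/* returns 0 on success, -EINVAL for an invalid spec */
	return rte_service_register(&spec);
}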
The code presented here is based on an initial RFC:
http://dpdk.org/ml/archives/dev/2017-May/065207.html
This was then reworked, and an RFC v2 with the changes was posted:
http://dpdk.org/ml/archives/dev/2017-June/067194.html
This is the fourth iteration of the service core concept,
with 2 RFCs and this being v2 of the implementation.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
---
v2:
Thanks Jerin for the review - below is a list of your suggested changes:
- Doxygen rename to "service cores" for consistency
- Use lcore instead of core for function names
- Fix about 10 typos / seplling msitakse ;)
- Fix doxygen /** comments for functions
- Doxygen @param[out] improvements
- Changed socket_id from int8_t to ordinary int
- Rename MACROS for readability
- Align structs to cache lines
- Allocate fastpath-used data from hugepages
- Added/fixed memory barriers for multi-core scheduling
- Add const to variables, and hoist above loop
- Optimize cmpset atomic if MT_SAFE or only one core mapped
- Statistics collection only when requested
- Add error check for array pointer
- Remove panic() calls from library
- Fix TODO notes from previous patchset
There are also some other changes;
- Checkpatch issues fixed
- .map file updates
- Add rte_service_get_by_name() function
---
doc/api/doxy-api-index.md | 1 +
lib/librte_eal/bsdapp/eal/Makefile | 1 +
lib/librte_eal/bsdapp/eal/rte_eal_version.map | 28 +
lib/librte_eal/common/Makefile | 1 +
lib/librte_eal/common/eal_common_lcore.c | 1 +
lib/librte_eal/common/include/rte_eal.h | 4 +
lib/librte_eal/common/include/rte_lcore.h | 3 +-
lib/librte_eal/common/include/rte_service.h | 298 +++++++++
.../common/include/rte_service_private.h | 118 ++++
lib/librte_eal/common/rte_service.c | 671 +++++++++++++++++++++
lib/librte_eal/linuxapp/eal/Makefile | 1 +
lib/librte_eal/linuxapp/eal/eal_thread.c | 9 +-
lib/librte_eal/linuxapp/eal/rte_eal_version.map | 29 +
13 files changed, 1163 insertions(+), 2 deletions(-)
create mode 100644 lib/librte_eal/common/include/rte_service.h
create mode 100644 lib/librte_eal/common/include/rte_service_private.h
create mode 100644 lib/librte_eal/common/rte_service.c
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index f5f1f19..1284402 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -158,6 +158,7 @@ There are many libraries, so their headers may be grouped by topics:
[common] (@ref rte_common.h),
[ABI compat] (@ref rte_compat.h),
[keepalive] (@ref rte_keepalive.h),
+ [service cores] (@ref rte_service.h),
[device metrics] (@ref rte_metrics.h),
[bitrate statistics] (@ref rte_bitrate.h),
[latency statistics] (@ref rte_latencystats.h),
diff --git a/lib/librte_eal/bsdapp/eal/Makefile b/lib/librte_eal/bsdapp/eal/Makefile
index a0f9950..05517a2 100644
--- a/lib/librte_eal/bsdapp/eal/Makefile
+++ b/lib/librte_eal/bsdapp/eal/Makefile
@@ -87,6 +87,7 @@ SRCS-$(CONFIG_RTE_EXEC_ENV_BSDAPP) += rte_malloc.c
SRCS-$(CONFIG_RTE_EXEC_ENV_BSDAPP) += malloc_elem.c
SRCS-$(CONFIG_RTE_EXEC_ENV_BSDAPP) += malloc_heap.c
SRCS-$(CONFIG_RTE_EXEC_ENV_BSDAPP) += rte_keepalive.c
+SRCS-$(CONFIG_RTE_EXEC_ENV_BSDAPP) += rte_service.c
# from arch dir
SRCS-$(CONFIG_RTE_EXEC_ENV_BSDAPP) += rte_cpuflags.c
diff --git a/lib/librte_eal/bsdapp/eal/rte_eal_version.map b/lib/librte_eal/bsdapp/eal/rte_eal_version.map
index 2e48a73..5493a13 100644
--- a/lib/librte_eal/bsdapp/eal/rte_eal_version.map
+++ b/lib/librte_eal/bsdapp/eal/rte_eal_version.map
@@ -193,3 +193,31 @@ DPDK_17.05 {
vfio_get_group_no;
} DPDK_17.02;
+
+DPDK_17.08 {
+ global:
+
+ rte_service_disable_on_lcore;
+ rte_service_dump;
+ rte_service_enable_on_lcore;
+ rte_service_get_by_id;
+ rte_service_get_by_name;
+ rte_service_get_count;
+ rte_service_get_enabled_on_lcore;
+ rte_service_is_running;
+ rte_service_lcore_add;
+ rte_service_lcore_count;
+ rte_service_lcore_del;
+ rte_service_lcore_list;
+ rte_service_lcore_reset_all;
+ rte_service_lcore_start;
+ rte_service_lcore_stop;
+ rte_service_probe_capability;
+ rte_service_register;
+ rte_service_reset;
+ rte_service_set_stats_enable;
+ rte_service_start;
+ rte_service_stop;
+ rte_service_unregister;
+
+} DPDK_17.05;
diff --git a/lib/librte_eal/common/Makefile b/lib/librte_eal/common/Makefile
index a5bd108..2a93397 100644
--- a/lib/librte_eal/common/Makefile
+++ b/lib/librte_eal/common/Makefile
@@ -41,6 +41,7 @@ INC += rte_eal_memconfig.h rte_malloc_heap.h
INC += rte_hexdump.h rte_devargs.h rte_bus.h rte_dev.h rte_vdev.h
INC += rte_pci_dev_feature_defs.h rte_pci_dev_features.h
INC += rte_malloc.h rte_keepalive.h rte_time.h
+INC += rte_service.h rte_service_private.h
GENERIC_INC := rte_atomic.h rte_byteorder.h rte_cycles.h rte_prefetch.h
GENERIC_INC += rte_spinlock.h rte_memcpy.h rte_cpuflags.h rte_rwlock.h
diff --git a/lib/librte_eal/common/eal_common_lcore.c b/lib/librte_eal/common/eal_common_lcore.c
index 84fa0cb..0db1555 100644
--- a/lib/librte_eal/common/eal_common_lcore.c
+++ b/lib/librte_eal/common/eal_common_lcore.c
@@ -81,6 +81,7 @@ rte_eal_cpu_init(void)
/* By default, each detected core is enabled */
config->lcore_role[lcore_id] = ROLE_RTE;
+ lcore_config[lcore_id].core_role = ROLE_RTE;
lcore_config[lcore_id].core_id = eal_cpu_core_id(lcore_id);
lcore_config[lcore_id].socket_id = eal_cpu_socket_id(lcore_id);
if (lcore_config[lcore_id].socket_id >= RTE_MAX_NUMA_NODES) {
diff --git a/lib/librte_eal/common/include/rte_eal.h b/lib/librte_eal/common/include/rte_eal.h
index abf020b..4dd0518 100644
--- a/lib/librte_eal/common/include/rte_eal.h
+++ b/lib/librte_eal/common/include/rte_eal.h
@@ -61,6 +61,7 @@ extern "C" {
enum rte_lcore_role_t {
ROLE_RTE,
ROLE_OFF,
+ ROLE_SERVICE,
};
/**
@@ -80,6 +81,7 @@ enum rte_proc_type_t {
struct rte_config {
uint32_t master_lcore; /**< Id of the master lcore */
uint32_t lcore_count; /**< Number of available logical cores. */
+ uint32_t service_lcore_count;/**< Number of available service cores. */
enum rte_lcore_role_t lcore_role[RTE_MAX_LCORE]; /**< State of cores. */
/** Primary or secondary configuration */
@@ -185,6 +187,8 @@ int rte_eal_iopl_init(void);
*
* EPROTO indicates that the PCI bus is either not present, or is not
* readable by the eal.
+ *
+ * ENOEXEC indicates that a service core failed to launch successfully.
*/
int rte_eal_init(int argc, char **argv);
diff --git a/lib/librte_eal/common/include/rte_lcore.h b/lib/librte_eal/common/include/rte_lcore.h
index fe7b586..50e0d0f 100644
--- a/lib/librte_eal/common/include/rte_lcore.h
+++ b/lib/librte_eal/common/include/rte_lcore.h
@@ -73,6 +73,7 @@ struct lcore_config {
unsigned core_id; /**< core number on socket for this lcore */
int core_index; /**< relative index, starting from 0 */
rte_cpuset_t cpuset; /**< cpu set which the lcore affinity to */
+ uint8_t core_role; /**< role of core eg: OFF, RTE, SERVICE */
};
/**
@@ -175,7 +176,7 @@ rte_lcore_is_enabled(unsigned lcore_id)
struct rte_config *cfg = rte_eal_get_configuration();
if (lcore_id >= RTE_MAX_LCORE)
return 0;
- return cfg->lcore_role[lcore_id] != ROLE_OFF;
+ return cfg->lcore_role[lcore_id] == ROLE_RTE;
}
/**
diff --git a/lib/librte_eal/common/include/rte_service.h b/lib/librte_eal/common/include/rte_service.h
new file mode 100644
index 0000000..3be59ea
--- /dev/null
+++ b/lib/librte_eal/common/include/rte_service.h
@@ -0,0 +1,298 @@
+/*
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_SERVICE_H_
+#define _RTE_SERVICE_H_
+
+/**
+ * @file
+ *
+ * Service functions
+ *
+ * The service functionality provided by this header allows a DPDK component
+ * to indicate that it requires a function call in order for it to perform
+ * its processing.
+ *
+ * An example usage of this functionality would be a component that registers
+ * a service to perform a particular packet processing duty: for example the
+ * eventdev software PMD. At startup the application requests all services
+ * that have been registered, and the cores in the service-coremask run the
+ * required services. The EAL removes these number of cores from the available
+ * runtime cores, and dedicates them to performing service-core workloads. The
+ * application has access to the remaining lcores as normal.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <sys/queue.h>
+
+#include <rte_lcore.h>
+
+/* forward declaration only. Definition in rte_service_private.h */
+struct rte_service_spec;
+
+#define RTE_SERVICE_NAME_MAX 32
+
+/* Capabilities of a service.
+ *
+ * Use the *rte_service_probe_capability* function to check whether a service
+ * supports a specific capability.
+ */
+/** When set, the service is capable of having multiple threads run it at the
+ * same time.
+ */
+#define RTE_SERVICE_CAP_MT_SAFE (1 << 0)
+
+/** Return the number of services registered.
+ *
+ * The number of services registered can be passed to *rte_service_get_by_id*,
+ * enabling the application to retrieve the specification of each service.
+ *
+ * @return The number of services registered.
+ */
+uint32_t rte_service_get_count(void);
+
+/** Return the specification of a service by integer id.
+ *
+ * This function provides the specification of a service. This can be used by
+ * the application to understand what the service represents. The service
+ * must not be modified by the application directly, only passed to the various
+ * rte_service_* functions.
+ *
+ * @param id The integer id of the service to retrieve
+ * @retval non-zero A valid pointer to the service_spec
+ * @retval NULL Invalid *id* provided.
+ */
+struct rte_service_spec *rte_service_get_by_id(uint32_t id);
+
+/** Return the specification of a service by name.
+ *
+ * This function provides the specification of a service using the service name
+ * as lookup key. This can be used by the application to understand what the
+ * service represents. The service must not be modified by the application
+ * directly, only passed to the various rte_service_* functions.
+ *
+ * @param name The name of the service to retrieve
+ * @retval non-zero A valid pointer to the service_spec
+ * @retval NULL Invalid *name* provided.
+ */
+struct rte_service_spec *rte_service_get_by_name(const char *name);
+
+/** Return the name of the service.
+ *
+ * @return A pointer to the name of the service. The returned pointer remains
+ * in ownership of the service, and the application must not free it.
+ */
+const char *rte_service_get_name(const struct rte_service_spec *service);
+
+/** Check if a service has a specific capability.
+ *
+ * This function returns whether *service* implements *capability*.
+ * See RTE_SERVICE_CAP_* defines for a list of valid capabilities.
+ * @retval 1 Capability supported by this service instance
+ * @retval 0 Capability not supported by this service instance
+ */
+int32_t rte_service_probe_capability(const struct rte_service_spec *service,
+ uint32_t capability);
+
+/** Enable a core to run a service.
+ *
+ * Each core can be added or removed from running specific services. This
+ * function adds *lcore* to the set of cores that will run *service*.
+ *
+ * If multiple cores are enabled on a service, an atomic is used to ensure that
+ * only one core runs the service at a time. The exception to this is when
+ * a service indicates that it is multi-thread safe by setting the capability
+ * called RTE_SERVICE_CAP_MT_SAFE. With the multi-thread safe capability set,
+ * the service function can be run on multiple threads at the same time.
+ *
+ * @retval 0 lcore added successfully
+ * @retval -EINVAL An invalid service or lcore was provided.
+ */
+int32_t rte_service_enable_on_lcore(struct rte_service_spec *service,
+ uint32_t lcore);
+
+/** Disable a core from running a service.
+ *
+ * Each core can be added or removed from running specific services. This
+ * function removes *lcore* from the set of cores that will run *service*.
+ *
+ * @retval 0 Lcore removed successfully
+ * @retval -EINVAL An invalid service or lcore was provided.
+ */
+int32_t rte_service_disable_on_lcore(struct rte_service_spec *service,
+ uint32_t lcore);
+
+/** Return if an lcore is enabled for the service.
+ *
+ * This function allows the application to query if *lcore* is currently set to
+ * run *service*.
+ *
+ * @retval 1 Service is enabled on this lcore
+ * @retval 0 Service is disabled on this lcore
+ * @retval -EINVAL An invalid service or lcore was provided.
+ */
+int32_t rte_service_get_enabled_on_lcore(struct rte_service_spec *service,
+ uint32_t lcore);
+
+
+/** Enable *service* to run.
+ *
+ * This function switches on a service during runtime.
+ * @retval 0 The service was successfully started
+ */
+int32_t rte_service_start(struct rte_service_spec *service);
+
+/** Disable *service*.
+ *
+ * Switch off a service, so it is not run until *rte_service_start* is
+ * called on it.
+ * @retval 0 Service successfully switched off
+ */
+int32_t rte_service_stop(struct rte_service_spec *service);
+
+/** Returns if *service* is currently running.
+ *
+ * This function returns true if the service has been started using
+ * *rte_service_start*, AND a service core is mapped to the service. This
+ * function can be used to ensure that the service will be run.
+ *
+ * @retval 1 Service is currently running, and has a service lcore mapped
+ * @retval 0 Service is currently stopped, or no service lcore is mapped
+ * @retval -EINVAL Invalid service pointer provided
+ */
+int32_t rte_service_is_running(const struct rte_service_spec *service);
+
+/** Start a service core.
+ *
+ * Starting a core makes the core begin polling. Any services assigned to it
+ * will be run as fast as possible.
+ *
+ * @retval 0 Success
+ * @retval -EINVAL Failed to start core. The *lcore_id* passed in is not
+ * currently assigned to be a service core.
+ */
+int32_t rte_service_lcore_start(uint32_t lcore_id);
+
+/** Stop a service core.
+ *
+ * Stopping a core makes the core become idle, but remains assigned as a
+ * service core.
+ *
+ * @retval 0 Success
+ * @retval -EINVAL Invalid *lcore_id* provided
+ * @retval -EALREADY Already stopped core
+ * @retval -EBUSY Failed to stop core, as it would cause a service to not
+ * be run, as this is the only core currently running the service.
+ * The application must stop the service first, and then stop the
+ * lcore.
+ */
+int32_t rte_service_lcore_stop(uint32_t lcore_id);
+
+/** Adds lcore to the list of service cores.
+ *
+ * This function can be used at runtime in order to modify the service core
+ * mask.
+ *
+ * @retval 0 Success
+ * @retval -EBUSY lcore is busy, and not available for service core duty
+ * @retval -EALREADY lcore is already added to the service core list
+ * @retval -EINVAL Invalid lcore provided
+ */
+int32_t rte_service_lcore_add(uint32_t lcore);
+
+/** Removes lcore from the list of service cores.
+ *
+ * This can fail if the core is not stopped, see *rte_service_lcore_stop*.
+ *
+ * @retval 0 Success
+ * @retval -EBUSY Lcore is not stopped, stop service core before removing.
+ * @retval -EINVAL Failed to remove lcore from the service core list.
+ */
+int32_t rte_service_lcore_del(uint32_t lcore);
+
+/** Retrieve the number of service cores currently available.
+ *
+ * This function returns the integer count of service cores available. The
+ * service core count can be used in mapping logic when creating mappings
+ * from service cores to services.
+ *
+ * See *rte_service_lcore_list* for details on retrieving the lcore_id of each
+ * service core.
+ *
+ * @return The number of service cores currently configured.
+ */
+int32_t rte_service_lcore_count(void);
+
+/** Reset all service core mappings.
+ * @retval 0 Success
+ */
+int32_t rte_service_lcore_reset_all(void);
+
+/** Enable or disable statistics collection.
+ *
+ * This function enables per core, per-service cycle count collection.
+ * @param enabled Zero to turn off statistics collection, non-zero to enable.
+ */
+void rte_service_set_stats_enable(int enabled);
+
+/** Retrieve the list of currently enabled service cores.
+ *
+ * This function fills in an application supplied array, with each element
+ * indicating the lcore_id of a service core.
+ *
+ * Adding and removing service cores can be performed using
+ * *rte_service_lcore_add* and *rte_service_lcore_del*.
+ * @param [out] array An array of at least *n* items.
+ * @param n The size of *array*.
+ * @retval >=0 Number of service cores that have been populated in the array
+ * @retval -ENOMEM The provided array is not large enough to fill in the
+ * service core list. No items have been populated, call this function
+ * with a size of at least *rte_service_lcore_count* items.
+ */
+int32_t rte_service_lcore_list(uint32_t array[], uint32_t n);
+
+/** Dumps any information available about the service. If service is NULL,
+ * dumps info for all services.
+ */
+int32_t rte_service_dump(FILE *f, struct rte_service_spec *service);
+
+#ifdef __cplusplus
+}
+#endif
+
+
+#endif /* _RTE_SERVICE_H_ */
diff --git a/lib/librte_eal/common/include/rte_service_private.h b/lib/librte_eal/common/include/rte_service_private.h
new file mode 100644
index 0000000..d518b02
--- /dev/null
+++ b/lib/librte_eal/common/include/rte_service_private.h
@@ -0,0 +1,118 @@
+/*
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_SERVICE_PRIVATE_H_
+#define _RTE_SERVICE_PRIVATE_H_
+
+/* This file specifies the internal service specification.
+ * Include this file if you are writing a component that requires CPU cycles to
+ * operate, and you wish to run the component using service cores
+ */
+
+#include <rte_service.h>
+
+/**
+ * Signature of callback function to run a service.
+ */
+typedef int32_t (*rte_service_func)(void *args);
+
+/**
+ * The specification of a service.
+ *
+ * This struct contains metadata about the service itself, the callback
+ * function to run one iteration of the service, a userdata pointer, flags etc.
+ */
+struct rte_service_spec {
+ /** The name of the service. This should be used by the application to
+ * understand what purpose this service provides.
+ */
+ char name[RTE_SERVICE_NAME_MAX];
+ /** The callback to invoke to run one iteration of the service. */
+ rte_service_func callback;
+ /** The userdata pointer provided to the service callback. */
+ void *callback_userdata;
+ /** Flags to indicate the capabilities of this service. See defines in
+ * the public header file for values of RTE_SERVICE_CAP_*
+ */
+ uint32_t capabilities;
+ /** NUMA socket ID that this service is affinitized to */
+ int socket_id;
+};
+
+/** Register a new service.
+ *
+ * A service represents a component that requires CPU time periodically to
+ * achieve its purpose.
+ *
+ * For example the eventdev SW PMD requires CPU cycles to perform its
+ * scheduling. This can be achieved by registering it as a service, and the
+ * application can then assign CPU resources to it using
+ * *rte_service_set_coremask*.
+ *
+ * @param spec The specification of the service to register
+ * @retval 0 Successfully registered the service.
+ * @retval -EINVAL Attempted to register an invalid service (e.g. no callback
+ * set)
+ */
+int32_t rte_service_register(const struct rte_service_spec *spec);
+
+/** Unregister a service.
+ *
+ * The service being removed must be stopped before calling this function.
+ *
+ * @retval 0 The service was successfully unregistered.
+ * @retval -EBUSY The service is currently running, stop the service before
+ * calling unregister. No action has been taken.
+ */
+int32_t rte_service_unregister(struct rte_service_spec *service);
+
+/** Private function to allow EAL to initialize default mappings.
+ *
+ * This function iterates all the services, and maps them to the available
+ * cores. Based on the capabilities of the services, they are set to run on the
+ * available cores in a round-robin manner.
+ *
+ * @retval 0 Success
+ */
+int32_t rte_service_set_default_mapping(void);
+
+/** Initialize the service library.
+ *
+ * In order to use the service library, it must be initialized. EAL initializes
+ * the library at startup.
+ *
+ * @retval 0 Success
+ * @retval -EALREADY Service library is already initialized
+ */
+int32_t rte_service_init(void);
+
+#endif /* _RTE_SERVICE_PRIVATE_H_ */
diff --git a/lib/librte_eal/common/rte_service.c b/lib/librte_eal/common/rte_service.c
new file mode 100644
index 0000000..67338db
--- /dev/null
+++ b/lib/librte_eal/common/rte_service.c
@@ -0,0 +1,671 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <unistd.h>
+#include <inttypes.h>
+#include <limits.h>
+#include <string.h>
+#include <dirent.h>
+
+#include <rte_service.h>
+#include "include/rte_service_private.h"
+
+#include <rte_eal.h>
+#include <rte_lcore.h>
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_cycles.h>
+#include <rte_atomic.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+
+#define RTE_SERVICE_NUM_MAX 64
+
+#define SERVICE_F_REGISTERED 0
+
+/* runstates for services and lcores, denoting if they are active or not */
+#define RUNSTATE_STOPPED 0
+#define RUNSTATE_RUNNING 1
+
+/* internal representation of a service */
+struct rte_service_spec_impl {
+ /* public part of the struct */
+ struct rte_service_spec spec;
+
+ /* atomic lock that when set indicates a service core is currently
+ * running this service callback. When not set, a core may take the
+ * lock and then run the service callback.
+ */
+ rte_atomic32_t execute_lock;
+
+ /* API set/get-able variables */
+ int32_t runstate;
+ uint8_t internal_flags;
+
+ /* per service statistics */
+ uint32_t num_mapped_cores;
+ uint64_t calls;
+ uint64_t cycles_spent;
+} __rte_cache_aligned;
+
+/* the internal values of a service core */
+struct core_state {
+ /* map of the service IDs run on this core */
+ uint64_t service_mask;
+ uint8_t runstate; /* running or stopped */
+ uint8_t is_service_core; /* set if core is currently a service core */
+ uint8_t collect_statistics; /* if set, measure cycle counts */
+
+ /* per-service statistics */
+ uint64_t calls_per_service[RTE_SERVICE_NUM_MAX];
+} __rte_cache_aligned;
+
+static uint32_t rte_service_count;
+static struct rte_service_spec_impl *rte_services;
+static struct core_state *cores_state;
+static uint32_t rte_service_library_initialized;
+
+int32_t rte_service_init(void)
+{
+ if (rte_service_library_initialized) {
+ printf("service library init() called, init flag %d\n",
+ rte_service_library_initialized);
+ return -EALREADY;
+ }
+
+ rte_services = rte_calloc("rte_services", RTE_SERVICE_NUM_MAX,
+ sizeof(struct rte_service_spec_impl),
+ RTE_CACHE_LINE_SIZE);
+ if (!rte_services) {
+ printf("error allocating rte services array\n");
+ return -ENOMEM;
+ }
+
+ cores_state = rte_calloc("rte_service_core_states", RTE_MAX_LCORE,
+ sizeof(struct core_state), RTE_CACHE_LINE_SIZE);
+ if (!cores_state) {
+ printf("error allocating core states array\n");
+ return -ENOMEM;
+ }
+
+ int i;
+ int count = 0;
+ struct rte_config *cfg = rte_eal_get_configuration();
+ for (i = 0; i < RTE_MAX_LCORE; i++) {
+ if (lcore_config[i].core_role == ROLE_SERVICE) {
+ if ((unsigned)i == cfg->master_lcore)
+ continue;
+ rte_service_lcore_add(i);
+ count++;
+ }
+ }
+
+ rte_service_library_initialized = 1;
+ return 0;
+}
+
+void rte_service_set_stats_enable(int enabled)
+{
+ uint32_t i;
+ for (i = 0; i < RTE_MAX_LCORE; i++)
+ cores_state[i].collect_statistics = enabled;
+}
+
+/* Returns 1 if the service is registered and has not been unregistered.
+ * Returns 0 if the service was never registered, or has been unregistered.
+ */
+static inline int
+service_valid(uint32_t id) {
+ return !!(rte_services[id].internal_flags &
+ (1 << SERVICE_F_REGISTERED));
+}
+
+uint32_t
+rte_service_get_count(void)
+{
+ return rte_service_count;
+}
+
+struct rte_service_spec *
+rte_service_get_by_id(uint32_t id)
+{
+ struct rte_service_spec *service = NULL;
+ if (id < rte_service_count)
+ service = (struct rte_service_spec *)&rte_services[id];
+
+ return service;
+}
+
+struct rte_service_spec *rte_service_get_by_name(const char *name)
+{
+ struct rte_service_spec *service = NULL;
+ int i;
+ for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
+ if (service_valid(i) &&
+ strcmp(name, rte_services[i].spec.name) == 0) {
+ service = (struct rte_service_spec *)&rte_services[i];
+ break;
+ }
+ }
+
+ return service;
+}
+
+const char *
+rte_service_get_name(const struct rte_service_spec *service)
+{
+ return service->name;
+}
+
+int32_t
+rte_service_probe_capability(const struct rte_service_spec *service,
+ uint32_t capability)
+{
+ return service->capabilities & capability;
+}
+
+int32_t
+rte_service_is_running(const struct rte_service_spec *spec)
+{
+ const struct rte_service_spec_impl *impl =
+ (const struct rte_service_spec_impl *)spec;
+ if (!impl)
+ return -EINVAL;
+
+ return (impl->runstate == RUNSTATE_RUNNING) &&
+ (impl->num_mapped_cores > 0);
+}
+
+int32_t
+rte_service_register(const struct rte_service_spec *spec)
+{
+ uint32_t i;
+ int32_t free_slot = -1;
+
+ if (spec->callback == NULL || strlen(spec->name) == 0)
+ return -EINVAL;
+
+ for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
+ if (!service_valid(i)) {
+ free_slot = i;
+ break;
+ }
+ }
+
+ if ((free_slot < 0) || (i == RTE_SERVICE_NUM_MAX))
+ return -ENOSPC;
+
+ struct rte_service_spec_impl *s = &rte_services[free_slot];
+ s->spec = *spec;
+ s->internal_flags |= (1 << SERVICE_F_REGISTERED);
+
+ rte_smp_wmb();
+ rte_service_count++;
+
+ return 0;
+}
+
+int32_t
+rte_service_unregister(struct rte_service_spec *spec)
+{
+ struct rte_service_spec_impl *s = NULL;
+ struct rte_service_spec_impl *spec_impl =
+ (struct rte_service_spec_impl *)spec;
+
+ uint32_t i;
+ uint32_t service_id;
+ for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
+ if (&rte_services[i] == spec_impl) {
+ s = spec_impl;
+ service_id = i;
+ break;
+ }
+ }
+
+ if (!s)
+ return -EINVAL;
+
+ rte_service_count--;
+ rte_smp_wmb();
+
+ s->internal_flags &= ~(1 << SERVICE_F_REGISTERED);
+
+ for (i = 0; i < RTE_MAX_LCORE; i++)
+ cores_state[i].service_mask &= ~(UINT64_C(1) << service_id);
+
+ memset(&rte_services[service_id], 0,
+ sizeof(struct rte_service_spec_impl));
+
+ return 0;
+}
+
+int32_t
+rte_service_start(struct rte_service_spec *service)
+{
+ struct rte_service_spec_impl *s =
+ (struct rte_service_spec_impl *)service;
+ s->runstate = RUNSTATE_RUNNING;
+ rte_smp_wmb();
+ return 0;
+}
+
+int32_t
+rte_service_stop(struct rte_service_spec *service)
+{
+ struct rte_service_spec_impl *s =
+ (struct rte_service_spec_impl *)service;
+ s->runstate = RUNSTATE_STOPPED;
+ rte_smp_wmb();
+ return 0;
+}
+
+static int32_t
+rte_service_runner_func(void *arg)
+{
+ RTE_SET_USED(arg);
+ uint32_t i;
+ const int lcore = rte_lcore_id();
+ struct core_state *cs = &cores_state[lcore];
+
+ while (cores_state[lcore].runstate == RUNSTATE_RUNNING) {
+ const uint64_t service_mask = cs->service_mask;
+ for (i = 0; i < rte_service_count; i++) {
+ struct rte_service_spec_impl *s = &rte_services[i];
+ if (s->runstate != RUNSTATE_RUNNING ||
+ !(service_mask & (UINT64_C(1) << i)))
+ continue;
+
+ /* check if this is the only core mapped, else use
+ * atomic to serialize cores mapped to this service
+ */
+ uint32_t *lock = (uint32_t *)&s->execute_lock;
+ if ((s->spec.capabilities & RTE_SERVICE_CAP_MT_SAFE) ||
+ (s->num_mapped_cores == 1 ||
+ rte_atomic32_cmpset(lock, 0, 1))) {
+ void *userdata = s->spec.callback_userdata;
+
+ if (cs->collect_statistics) {
+ uint64_t start = rte_rdtsc();
+ s->spec.callback(userdata);
+ uint64_t end = rte_rdtsc();
+ s->cycles_spent += end - start;
+ cs->calls_per_service[i]++;
+ s->calls++;
+ } else {
+ cs->calls_per_service[i]++;
+ s->spec.callback(userdata);
+ s->calls++;
+ }
+
+ rte_atomic32_clear(&s->execute_lock);
+ }
+ }
+ }
+
+ lcore_config[lcore].state = WAIT;
+
+ return 0;
+}
+
+int32_t
+rte_service_lcore_count(void)
+{
+ int32_t count = 0;
+ uint32_t i;
+ for (i = 0; i < RTE_MAX_LCORE; i++)
+ count += cores_state[i].is_service_core;
+ return count;
+}
+
+int32_t
+rte_service_lcore_list(uint32_t array[], uint32_t n)
+{
+ uint32_t count = rte_service_lcore_count();
+ if (count > n)
+ return -ENOMEM;
+
+ if (!array)
+ return -EINVAL;
+
+ uint32_t i;
+ uint32_t idx = 0;
+ for (i = 0; i < RTE_MAX_LCORE; i++) {
+ struct core_state *cs = &cores_state[i];
+ if (cs->is_service_core) {
+ array[idx] = i;
+ idx++;
+ }
+ }
+
+ return count;
+}
+
+int32_t
+rte_service_set_default_mapping(void)
+{
+ /* create a default mapping from cores to services, then start the
+ * services to make them transparent to unaware applications.
+ */
+ uint32_t i;
+ int ret;
+ uint32_t count = rte_service_get_count();
+
+ int32_t lcore_iter = 0;
+ uint32_t ids[RTE_MAX_LCORE];
+ int32_t lcore_count = rte_service_lcore_list(ids, RTE_MAX_LCORE);
+
+ for (i = 0; i < count; i++) {
+ struct rte_service_spec *s = rte_service_get_by_id(i);
+ if (!s)
+ return -EINVAL;
+
+ /* if no lcores available as services cores, don't setup map.
+ * This means app logic must add cores, and setup mappings
+ */
+ if (lcore_count > 0) {
+ /* do 1:1 core mapping here, with each service getting
+ * assigned a single core by default. Adding multiple
+ * services should multiplex to a single core, or 1:1
+ * if services == cores
+ */
+ ret = rte_service_enable_on_lcore(s, ids[lcore_iter]);
+ if (ret)
+ return -ENODEV;
+ }
+
+ lcore_iter++;
+ if (lcore_iter >= lcore_count)
+ lcore_iter = 0;
+
+ ret = rte_service_start(s);
+ if (ret)
+ return -ENOEXEC;
+ }
+
+ return 0;
+}
+
+static int32_t
+service_update(struct rte_service_spec *service, uint32_t lcore,
+ uint32_t *set, uint32_t *enabled)
+{
+ uint32_t i;
+ int32_t sid = -1;
+
+ for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
+ if ((struct rte_service_spec *)&rte_services[i] == service &&
+ service_valid(i)) {
+ sid = i;
+ break;
+ }
+ }
+
+ if (sid == -1 || lcore >= RTE_MAX_LCORE)
+ return -EINVAL;
+
+ if (!cores_state[lcore].is_service_core)
+ return -EINVAL;
+
+ if (set) {
+ if (*set) {
+ cores_state[lcore].service_mask |= (UINT64_C(1) << sid);
+ rte_services[sid].num_mapped_cores++;
+ } else {
+ cores_state[lcore].service_mask &= ~(UINT64_C(1) << sid);
+ rte_services[sid].num_mapped_cores--;
+ }
+ }
+
+ if (enabled)
+ *enabled = !!(cores_state[lcore].service_mask & (UINT64_C(1) << sid));
+
+ rte_smp_wmb();
+
+ return 0;
+}
+
+int32_t rte_service_get_enabled_on_lcore(struct rte_service_spec *service,
+ uint32_t lcore)
+{
+ uint32_t enabled;
+ int ret = service_update(service, lcore, 0, &enabled);
+ if (ret == 0)
+ return enabled;
+ return -EINVAL;
+}
+
+int32_t
+rte_service_enable_on_lcore(struct rte_service_spec *service, uint32_t lcore)
+{
+ uint32_t on = 1;
+ return service_update(service, lcore, &on, 0);
+}
+
+int32_t
+rte_service_disable_on_lcore(struct rte_service_spec *service, uint32_t lcore)
+{
+ uint32_t off = 0;
+ return service_update(service, lcore, &off, 0);
+}
+
+int32_t rte_service_lcore_reset_all(void)
+{
+ /* loop over cores, reset all to mask 0 */
+ uint32_t i;
+ for (i = 0; i < RTE_MAX_LCORE; i++) {
+ cores_state[i].service_mask = 0;
+ cores_state[i].is_service_core = 0;
+ cores_state[i].runstate = RUNSTATE_STOPPED;
+ }
+ for (i = 0; i < RTE_SERVICE_NUM_MAX; i++)
+ rte_services[i].num_mapped_cores = 0;
+
+ rte_smp_wmb();
+
+ return 0;
+}
+
+int32_t
+rte_service_lcore_add(uint32_t lcore)
+{
+ if (lcore >= RTE_MAX_LCORE)
+ return -EINVAL;
+ if (cores_state[lcore].is_service_core)
+ return -EALREADY;
+
+ lcore_config[lcore].core_role = ROLE_SERVICE;
+
+ cores_state[lcore].is_service_core = 1;
+ cores_state[lcore].service_mask = 0;
+ cores_state[lcore].runstate = RUNSTATE_STOPPED;
+
+ return 0;
+}
+
+int32_t
+rte_service_lcore_del(uint32_t lcore)
+{
+ if (lcore >= RTE_MAX_LCORE)
+ return -EINVAL;
+
+ struct core_state *cs = &cores_state[lcore];
+ if (!cs->is_service_core)
+ return -EINVAL;
+
+ if (cs->runstate != RUNSTATE_STOPPED)
+ return -EBUSY;
+
+ lcore_config[lcore].core_role = ROLE_RTE;
+ cores_state[lcore].is_service_core = 0;
+
+ return 0;
+}
+
+int32_t
+rte_service_lcore_start(uint32_t lcore)
+{
+ if (lcore >= RTE_MAX_LCORE)
+ return -EINVAL;
+
+ struct core_state *cs = &cores_state[lcore];
+ if (!cs->is_service_core)
+ return -EINVAL;
+
+ if (cs->runstate == RUNSTATE_RUNNING)
+ return -EALREADY;
+
+ /* set the core to the run state first, and then launch; otherwise the
+ * runner would return immediately, as the runstate check is what keeps
+ * it in the service poll loop
+ */
+ cores_state[lcore].runstate = RUNSTATE_RUNNING;
+
+ int ret = rte_eal_remote_launch(rte_service_runner_func, 0, lcore);
+ /* returns -EBUSY if the core is already launched, 0 on success */
+ return ret;
+}
+
+int32_t
+rte_service_lcore_stop(uint32_t lcore)
+{
+ if (lcore >= RTE_MAX_LCORE)
+ return -EINVAL;
+
+ if (cores_state[lcore].runstate == RUNSTATE_STOPPED)
+ return -EALREADY;
+
+ uint32_t i;
+ for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
+ int32_t enabled = !!(cores_state[lcore].service_mask &
+ (UINT64_C(1) << i));
+ int32_t service_running = rte_services[i].runstate !=
+ RUNSTATE_STOPPED;
+ int32_t only_core = rte_services[i].num_mapped_cores == 1;
+
+ /* if the core is mapped, and the service is running, and this
+ * is the only core that is mapped, the service would cease to
+ * run if this core stopped, so fail instead.
+ */
+ if (enabled && service_running && only_core)
+ return -EBUSY;
+ }
+
+ cores_state[lcore].runstate = RUNSTATE_STOPPED;
+
+ return 0;
+}
+
+static void
+rte_service_dump_one(FILE *f, struct rte_service_spec_impl *s,
+ uint64_t all_cycles, uint32_t reset)
+{
+ /* avoid divide by zero */
+ if (all_cycles == 0)
+ all_cycles = 1;
+
+ int calls = 1;
+ if (s->calls != 0)
+ calls = s->calls;
+
+ float cycles_pct = (((float)s->cycles_spent) / all_cycles) * 100.f;
+ fprintf(f,
+ " %s : %0.1f %%\tcalls %"PRIu64"\tcycles %"
+ PRIu64"\tavg: %"PRIu64"\n",
+ s->spec.name, cycles_pct, s->calls, s->cycles_spent,
+ s->cycles_spent / calls);
+
+ if (reset) {
+ s->cycles_spent = 0;
+ s->calls = 0;
+ }
+}
+
+static void
+service_dump_calls_per_lcore(FILE *f, uint32_t lcore, uint32_t reset)
+{
+ uint32_t i;
+ struct core_state *cs = &cores_state[lcore];
+
+ fprintf(f, "%02d\t", lcore);
+ for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
+ if (!service_valid(i))
+ continue;
+ fprintf(f, "%"PRIu64"\t", cs->calls_per_service[i]);
+ if (reset)
+ cs->calls_per_service[i] = 0;
+ }
+ fprintf(f, "\n");
+}
+
+int32_t rte_service_dump(FILE *f, struct rte_service_spec *service)
+{
+ uint32_t i;
+
+ uint64_t total_cycles = 0;
+ for (i = 0; i < rte_service_count; i++) {
+ if (!service_valid(i))
+ continue;
+ total_cycles += rte_services[i].cycles_spent;
+ }
+
+ int print_no_collect_warning = 0;
+ for (i = 0; i < RTE_MAX_LCORE; i++)
+ if (cores_state[i].collect_statistics == 0)
+ print_no_collect_warning = 1;
+ if (print_no_collect_warning)
+ fprintf(f, "Warning; cycle counts not collectd; refer to rte_service_set_stats_enable\n");
+
+ if (service) {
+ struct rte_service_spec_impl *s =
+ (struct rte_service_spec_impl *)service;
+ fprintf(f, "Service %s Summary\n", s->spec.name);
+ uint32_t reset = 0;
+ rte_service_dump_one(f, s, total_cycles, reset);
+ return 0;
+ }
+
+ fprintf(f, "Services Summary\n");
+ for (i = 0; i < rte_service_count; i++) {
+ uint32_t reset = 1;
+ rte_service_dump_one(f, &rte_services[i], total_cycles, reset);
+ }
+
+ fprintf(f, "Service Cores Summary\n");
+ for (i = 0; i < RTE_MAX_LCORE; i++) {
+ if (lcore_config[i].core_role != ROLE_SERVICE)
+ continue;
+
+ uint32_t reset = 0;
+ service_dump_calls_per_lcore(f, i, reset);
+ }
+
+ return 0;
+}
diff --git a/lib/librte_eal/linuxapp/eal/Makefile b/lib/librte_eal/linuxapp/eal/Makefile
index 640afd0..438dcf9 100644
--- a/lib/librte_eal/linuxapp/eal/Makefile
+++ b/lib/librte_eal/linuxapp/eal/Makefile
@@ -96,6 +96,7 @@ SRCS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += rte_malloc.c
SRCS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += malloc_elem.c
SRCS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += malloc_heap.c
SRCS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += rte_keepalive.c
+SRCS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += rte_service.c
# from arch dir
SRCS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += rte_cpuflags.c
diff --git a/lib/librte_eal/linuxapp/eal/eal_thread.c b/lib/librte_eal/linuxapp/eal/eal_thread.c
index 9f88530..831ba07 100644
--- a/lib/librte_eal/linuxapp/eal/eal_thread.c
+++ b/lib/librte_eal/linuxapp/eal/eal_thread.c
@@ -184,7 +184,14 @@ eal_thread_loop(__attribute__((unused)) void *arg)
ret = lcore_config[lcore_id].f(fct_arg);
lcore_config[lcore_id].ret = ret;
rte_wmb();
- lcore_config[lcore_id].state = FINISHED;
+
+ /* when a service core returns, it should go directly to WAIT
+ * state, because the application will not lcore_wait() for it.
+ */
+ if (lcore_config[lcore_id].core_role == ROLE_SERVICE)
+ lcore_config[lcore_id].state = WAIT;
+ else
+ lcore_config[lcore_id].state = FINISHED;
}
/* never reached */
diff --git a/lib/librte_eal/linuxapp/eal/rte_eal_version.map b/lib/librte_eal/linuxapp/eal/rte_eal_version.map
index 670bab3..830d224 100644
--- a/lib/librte_eal/linuxapp/eal/rte_eal_version.map
+++ b/lib/librte_eal/linuxapp/eal/rte_eal_version.map
@@ -198,3 +198,32 @@ DPDK_17.05 {
vfio_get_group_no;
} DPDK_17.02;
+
+DPDK_17.08 {
+ global:
+
+ rte_service_disable_on_lcore;
+ rte_service_dump;
+ rte_service_enable_on_lcore;
+ rte_service_get_by_id;
+ rte_service_get_by_name;
+ rte_service_get_count;
+ rte_service_get_enabled_on_lcore;
+ rte_service_is_running;
+ rte_service_lcore_add;
+ rte_service_lcore_count;
+ rte_service_lcore_del;
+ rte_service_lcore_list;
+ rte_service_lcore_reset_all;
+ rte_service_lcore_start;
+ rte_service_lcore_stop;
+ rte_service_probe_capability;
+ rte_service_register;
+ rte_service_set_stats_enable;
+ rte_service_start;
+ rte_service_stop;
+ rte_service_unregister;
+
+} DPDK_17.05;
--
2.7.4
^ permalink raw reply [relevance 1%]
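A minimal usage sketch of the lcore-facing API exported above (the service
name and lcore ID are illustrative, not from the patch; error checks omitted):

    #include <rte_service.h>

    /* convert lcore 2 from an EAL worker into a service core */
    rte_service_lcore_add(2);

    /* look up a registered service, map it to the core, allow it to run */
    struct rte_service_spec *s = rte_service_get_by_name("app_service");
    rte_service_enable_on_lcore(s, 2);
    rte_service_start(s);

    /* launch the service runner loop on the service core */
    rte_service_lcore_start(2);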
* Re: [dpdk-dev] [PATCH 1/6] service cores: header and implementation
2017-06-26 11:59 0% ` Jerin Jacob
@ 2017-06-29 11:13 3% ` Van Haaren, Harry
0 siblings, 0 replies; 200+ results
From: Van Haaren, Harry @ 2017-06-29 11:13 UTC (permalink / raw)
To: 'Jerin Jacob'; +Cc: dev, thomas, Wiles, Keith, Richardson, Bruce
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
<snip>
> > This is the third iteration of the service core concept,
> > now with an implementation.
> >
> > Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
>
> Nice work. Detailed review comments below
Thanks for the (prompt!) feedback. Mostly agree, comments inline, <snip> lots of noisy code out :)
I have a follow-up question on service-core usage, and how we can work e.g. eventdev PMDs into service cores. I'll kick off a new thread on the mailing list to discuss it.
Patchset v2 on the way soon.
> > [keepalive] (@ref rte_keepalive.h),
> > + [Service Cores] (@ref rte_service.h),
>
> 1) IMO, To keep the consistency we can rename to "[service cores]"
Done.
> 2) I thought, we decided to expose rte_service_register() and
> rte_service_unregister() as well, Considering the case where even application
> as register for service functions if required. If it is true then I
> think, registration functions can moved of private header file so that
> it will visible in doxygen.
To avoid bleeding implementation details out of the API, this was not done. The service register API is not currently publicly visible - this keeps the API abstraction very powerful. If we decide to make the register struct public, we lose (almost) all of the API encapsulation, as the struct itself has to be public.
Applications can #include <rte_service_private.h> if they insist - which would provide the functionality as desired, but then the application is aware that it is using DPDK private data structures.
I suggest we leave the service register API as private for this release. We can always move it to public if required - once we are more comfortable with the API and it is more widely implemented. This will help keep API/ABI stability - we don't have the "luxury" of EXPERIMENTAL tag in EAL :D
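As a sketch of what that opt-in looks like (the callback and service name
below are illustrative, not part of the patch):

    #include <rte_service_private.h>

    static int32_t app_poll(void *userdata)
    {
        /* one iteration of the component's work; 0 on success */
        (void)userdata;
        return 0;
    }

    static struct rte_service_spec spec = {
        .name = "app_poller",
        .callback = app_poll,
        .callback_userdata = NULL,
        .capabilities = 0, /* not RTE_SERVICE_CAP_MT_SAFE: serialized */
        .socket_id = 0,
    };

    /* -EINVAL on a bad spec, -ENOSPC when all slots are taken */
    int32_t ret = rte_service_register(&spec);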
> 3) Should we change core function name as lcore like
> rte_service_lcore_add(), rte_service_lcore_del() etc as we are operating
> on lcore here.
Yep, done. Added "l" to all core related functions for consistency, so lcore is now used everywhere.
> > struct rte_config {
> > uint32_t master_lcore; /**< Id of the master lcore */
> > uint32_t lcore_count; /**< Number of available logical cores. */
> > + uint32_t score_count; /**< Number of available service cores. */
>
> Should we call it as service core or service lcore?
Done
> > +/** Return the number of services registered.
> > + *
> > + * The number of services registered can be passed to *rte_service_get_by_id*,
> > + * enabling the application to retireve the specificaion of each service.
>
> s/retireve the specificaion/retrieve the specification
>
> > + *
> > + * @return The number of services registered.
> > + */
> > +uint32_t rte_service_get_count(void);
> > +
> > +/** Return the specificaion of each service.
>
> s/specificaion/specification
Fixed
> > +/* Check if a service has a specific capability.
> Missing the doxygen marker(ie. change to /** Check)
Fixed
> > +/* Start a service core.
> Missing the doxygen marker(ie. change to /** Start)
Fixed
> > +/** Retreve the number of service cores currently avaialble.
> typo: ^^^^^^^^ ^^^^^^^^^^
> Retrieve the number of service cores currently available.
Oh my do I have talent for mis-spelling :D Fixed
> > + * @param array An array of at least N items.
>
> @param [out] array An array of at least n items
>
> > + * @param The size of *array*.
>
> @param n The size of *array*.
Done!
> > + /** NUMA socket ID that this service is affinitized to */
> > + int8_t socket_id;
>
> All other places socket_id is of type "int".
Done
> > +/** Private function to allow EAL to initialied default mappings.
>
> typo: ^^^^^^^^^^^
Fixed
> > +#define RTE_SERVICE_FLAG_REGISTERED_SHIFT 0
>
> Internal macro, Can be shorten to reduce the length(SERVICE_F_REGISTERED?)
>
> > +
> > +#define RTE_SERVICE_RUNSTATE_STOPPED 0
> > +#define RTE_SERVICE_RUNSTATE_RUNNING 1
>
> Internal macro, Can be shorten to reduce the length(SERVICE_STATE_RUNNING?)
These are used for services and for lcore state, so I just used RUNSTATE_RUNNING and RUNSTATE_STOPPED.
> > +struct rte_service_spec_impl {
> > + /* public part of the struct */
> > + struct rte_service_spec spec;
>
> Nice approach.
<snip>
> Since it been used in fastpath. better to align to cache line
Done :)
> > +struct core_state {
<snip>
> aligned to cache line?
Done
> > +static uint32_t rte_service_count;
> > +static struct rte_service_spec_impl rte_services[RTE_SERVICE_NUM_MAX];
> > +static struct core_state cores_state[RTE_MAX_LCORE];
>
> Since these variable are used in fastpath, better to allocate form
> huge page area. It will avoid lot of global variables in code as well.
> Like other module, you can add a private function for service init and it can be
> called from eal_init()
Yep good point, done.
> > +static int
>
> static inline int
> > +service_valid(uint32_t id) {
Done
> > +int32_t
>
> bool could be enough here
>
> > +rte_service_probe_capability(const struct rte_service_spec *service,
> > + uint32_t capability)
Currently the entire API is <stdint.h> only, leaving as is.
> > +int32_t
> > +rte_service_register(const struct rte_service_spec *spec)
> > +{
> > + uint32_t i;
> > + int32_t free_slot = -1;
> > +
> > + if (spec->callback == NULL || strlen(spec->name) == 0)
> > + return -EINVAL;
> > +
> > + for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
> > + if (!service_valid(i)) {
> > + free_slot = i;
> > + break;
> > + }
> > + }
> > +
> > + if (free_slot < 0)
>
> if ((free_slot < 0) || (i == RTE_SERVICE_NUM_MAX))
Ah - a bug! Nice catch, fixed.
> > + s->internal_flags |= (1 << RTE_SERVICE_FLAG_REGISTERED_SHIFT);
> > +
> > + rte_smp_wmb();
> > + rte_service_count++;
>
> IMO, You can move above rte_smp_wmb() here.
Perhaps I'm not understanding correctly, but don't we need the writes to the service spec to be completed before allowing other cores to see the extra service count? In short, I think the wmb() is in the right place?
> > + memset(&rte_services[service_id], 0,
> > + sizeof(struct rte_service_spec_impl));
> > +
> > + rte_smp_wmb();
> > + rte_service_count--;
>
> IMO, You can move above rte_smp_wmb() here.
I think this part needs refactoring actually;
count--;
wmb();
memset();
Stop cores from seeing service, wmb() to ensure writes complete, then clear internal config?
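Concretely, the v2 unregister path above ends up as (sketch of the relevant
lines):

    rte_service_count--;    /* runner loops stop seeing this slot */
    rte_smp_wmb();          /* make the new count visible first */
    s->internal_flags &= ~(1 << SERVICE_F_REGISTERED);
    /* then clear each core's service_mask bit and memset the slot */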
> > +int32_t
> > +rte_service_start(struct rte_service_spec *service)
> > +{
> > + struct rte_service_spec_impl *s =
> > + (struct rte_service_spec_impl *)service;
> > + s->runstate = RTE_SERVICE_RUNSTATE_RUNNING;
>
> Is this function can called from worker thread? if so add rte_smp_wmb()
Done
> > + return 0;
> > +}
> > +
> > +int32_t
> > +rte_service_stop(struct rte_service_spec *service)
> > +{
> > + struct rte_service_spec_impl *s =
> > + (struct rte_service_spec_impl *)service;
> > + s->runstate = RTE_SERVICE_RUNSTATE_STOPPED;
>
> Is this function can called from worker thread? if so add rte_smp_wmb()
Done
> > +static int32_t
> > +rte_service_runner_func(void *arg)
> > +{
> > + RTE_SET_USED(arg);
> > + uint32_t i;
> > + const int lcore = rte_lcore_id();
> > + struct core_state *cs = &cores_state[lcore];
> > +
> > + while (cores_state[lcore].runstate == RTE_SERVICE_RUNSTATE_RUNNING) {
> > + for (i = 0; i < rte_service_count; i++) {
> > + struct rte_service_spec_impl *s = &rte_services[i];
> > + uint64_t service_mask = cs->service_mask;
>
> No need to read in loop, Move it above while loop and add const.
> const uint64_t service_mask = cs->service_mask;
Yep done, I wonder would a compiler be smart enough.. :)
> > + uint32_t *lock = (uint32_t *)&s->execute_lock;
> > + if (rte_atomic32_cmpset(lock, 0, 1)) {
>
> rte_atomic32 is costly. How about checking RTE_SERVICE_CAP_MT_SAFE
> first.
Yep, this has been on my radar as something to optimize down the line.
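For reference, the v2 runner above now checks the capability and the
mapped-core count before falling back to the atomic (statistics handling
omitted here):

    uint32_t *lock = (uint32_t *)&s->execute_lock;
    if ((s->spec.capabilities & RTE_SERVICE_CAP_MT_SAFE) ||
            (s->num_mapped_cores == 1 ||
            rte_atomic32_cmpset(lock, 0, 1))) {
        s->spec.callback(s->spec.callback_userdata);
        rte_atomic32_clear(&s->execute_lock);
    }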
> > + void *userdata = s->spec.callback_userdata;
> > + uint64_t start = rte_rdtsc();
> > + s->spec.callback(userdata);
> > + uint64_t end = rte_rdtsc();
> > +
> > + uint64_t spent = end - start;
> > + s->cycles_spent += spent;
> > + s->calls++;
> > + cs->calls_per_service[i]++;
>
> How about enabling the statistics based on some runtime configuration?
Good idea - added an API to enable/disable statistics collection.
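A sketch of the resulting knob, based on the v2 code above (dumping to
stdout is illustrative):

    rte_service_set_stats_enable(1);  /* all lcores collect cycle counts */
    /* ... services run for a while ... */
    rte_service_dump(stdout, NULL);   /* NULL service: dump all summaries */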
> > + rte_atomic32_clear(&s->execute_lock);
> > + }
> > + }
> > + rte_mb();
>
> Do we need full barrier here. Is rte_smp_rmb() inside the loop is
> enough?
Actually I'm not quite sure why there's a barrier at all.. removed.
> > + uint32_t i;
> > + uint32_t idx = 0;
> > + for (i = 0; i < RTE_MAX_LCORE; i++) {
>
> Are we good if "count" being the upper limit instead of RTE_MAX_LCORE?
Nope, the cores could be anywhere from 0 to RTE_MAX_LCORE - we gotta scan them all.
> > + struct core_state *cs = &cores_state[i];
> > + if (cs->is_service_core) {
> > + array[idx] = i;
> > + idx++;
> > + }
> > + }
> > +
<snip>
> > + ret = rte_service_enable_on_core(s, j);
> > + if (ret)
> > + rte_panic("Enabling service core %d on service %s failed\n",
> > + j, s->name);
>
> avoid panic in library
Done
> > + ret = rte_service_start(s);
> > + if (ret)
> > + rte_panic("failed to start service %s\n", s->name);
>
> avoid panic in library
Done
> > +static int32_t
> > +service_update(struct rte_service_spec *service, uint32_t lcore,
> > + uint32_t *set, uint32_t *enabled)
> > +{
<snip>
>
> If the parent functions can be called from worker thread then add
> rte_smp_wmb() here.
Yes they could, done.
> > + lcore_config[lcore].core_role = ROLE_SERVICE;
> > +
> > + /* TODO: take from EAL by setting ROLE_SERVICE? */
>
> I think, we need to fix TODO in v2
Good point :) done
> > + lcore_config[lcore].core_role = ROLE_RTE;
> > + cores_state[lcore].is_service_core = 0;
> > + /* TODO: return to EAL by setting ROLE_RTE? */
>
> I think, we need to fix TODO in v2
Done
> > + /* set core to run state first, and then launch otherwise it will
> > + * return immidiatly as runstate keeps it in the service poll loop
>
> s/immidiatly/immediately
Fixed
> > + int ret = rte_eal_remote_launch(rte_service_runner_func, 0, lcore);
> > + /* returns -EBUSY if the core is already launched, 0 on success */
> > + return ret;
>
> return rte_eal_remote_launch(rte_service_runner_func, 0, lcore);
I got bitten by this twice - documenting the return values, and making it obvious where they come from is worth the variable IMO. Any compiler will optimize away anyways :)
> > + /* avoid divide by zeros */
>
> s/zeros/zero
Fixed!
Thanks for the lengthy review - the code has improved a lot - appreciated.
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v2 27/27] cryptodev: remove AAD from authentication structure
` (9 preceding siblings ...)
2017-06-26 10:22 2% ` [dpdk-dev] [PATCH v2 21/27] cryptodev: add AEAD parameters in crypto operation Pablo de Lara
@ 2017-06-26 10:23 4% ` Pablo de Lara
11 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2017-06-26 10:23 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal, hemant.agrawal
Cc: dev, Pablo de Lara
Now that AAD is only used in AEAD algorithms,
there is no need to keep AAD in the authentication
structure.
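With this change the AAD length travels with the AEAD transform added
earlier in this series, and only the per-operation pointer remains in the
op - a sketch, assuming the aad_length field name from that earlier patch
(buffer variables are placeholders):

    /* session creation time: AAD length set once in the AEAD transform */
    aead_xform.aead.aad_length = aad_len;

    /* per operation: only the pointer and physical address */
    op->sym->aead.aad.data = aad_buf;
    op->sym->aead.aad.phys_addr = aad_phys;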
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_ops.c | 2 --
doc/guides/prog_guide/cryptodev_lib.rst | 6 ------
doc/guides/rel_notes/release_17_08.rst | 3 +++
drivers/crypto/openssl/rte_openssl_pmd.c | 1 -
lib/librte_cryptodev/rte_crypto_sym.h | 26 --------------------------
test/test/test_cryptodev.c | 4 ----
test/test/test_cryptodev_perf.c | 1 -
7 files changed, 3 insertions(+), 40 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index ac4a12b..75038cd 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -394,7 +394,6 @@ cperf_create_session(uint8_t dev_id,
test_vector->auth_iv.length;
} else {
auth_xform.auth.digest_length = 0;
- auth_xform.auth.add_auth_data_length = 0;
auth_xform.auth.key.length = 0;
auth_xform.auth.key.data = NULL;
auth_xform.auth.iv.length = 0;
@@ -447,7 +446,6 @@ cperf_create_session(uint8_t dev_id,
test_vector->auth_key.data;
} else {
auth_xform.auth.digest_length = 0;
- auth_xform.auth.add_auth_data_length = 0;
auth_xform.auth.key.length = 0;
auth_xform.auth.key.data = NULL;
auth_xform.auth.iv.length = 0;
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index 5048839..f250c00 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -567,12 +567,6 @@ chain.
uint8_t *data;
phys_addr_t phys_addr;
} digest; /**< Digest parameters */
-
- struct {
- uint8_t *data;
- phys_addr_t phys_addr;
- } aad;
- /**< Additional authentication parameters */
} auth;
};
};
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index 2c6bef5..d29b203 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -176,6 +176,9 @@ API Changes
* Changed field size of digest length in ``rte_crypto_auth_xform``,
from uint32_t to uint16_t.
* Added AEAD structure in ``rte_crypto_sym_op``.
+ * Removed AAD length from ``rte_crypto_auth_xform``.
+ * Removed AAD pointer and physical address from auth structure
+ in ``rte_crypto_sym_op``.
ABI Changes
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index ee2d71f..5258f06 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -413,7 +413,6 @@ openssl_set_session_auth_parameters(struct openssl_session *sess,
return -EINVAL;
}
- sess->auth.aad_length = xform->auth.add_auth_data_length;
sess->auth.digest_length = xform->auth.digest_length;
return 0;
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index dab042b..742dc34 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -326,13 +326,6 @@ struct rte_crypto_auth_xform {
* the result shall be truncated.
*/
- uint16_t add_auth_data_length;
- /**< The length of the additional authenticated data (AAD) in bytes.
- * The maximum permitted value is 65535 (2^16 - 1) bytes, unless
- * otherwise specified below.
- *
- */
-
struct {
uint16_t offset;
/**< Starting point for Initialisation Vector or Counter,
@@ -670,25 +663,6 @@ struct rte_crypto_sym_op {
phys_addr_t phys_addr;
/**< Physical address of digest */
} digest; /**< Digest parameters */
-
- struct {
- uint8_t *data;
- /**< Pointer to Additional Authenticated
- * Data (AAD) needed for authenticated cipher
- * mechanisms (CCM and GCM).
- *
- * The length of the data pointed to by this
- * field is set up for the session in the @ref
- * rte_crypto_auth_xform structure as part of
- * the @ref rte_cryptodev_sym_session_create
- * function call.
- * This length must not exceed 65535 (2^16-1)
- * bytes.
- *
- */
- phys_addr_t phys_addr; /**< physical address */
- } aad;
- /**< Additional authentication parameters */
} auth;
};
};
diff --git a/test/test/test_cryptodev.c b/test/test/test_cryptodev.c
index 0f6c619..c10225f 100644
--- a/test/test/test_cryptodev.c
+++ b/test/test/test_cryptodev.c
@@ -5531,7 +5531,6 @@ static int MD5_HMAC_create_session(struct crypto_testsuite_params *ts_params,
ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_MD5_HMAC;
ut_params->auth_xform.auth.digest_length = MD5_DIGEST_LEN;
- ut_params->auth_xform.auth.add_auth_data_length = 0;
ut_params->auth_xform.auth.key.length = test_case->key.len;
ut_params->auth_xform.auth.key.data = key;
@@ -6304,7 +6303,6 @@ static int create_gmac_session(uint8_t dev_id,
ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_AES_GMAC;
ut_params->auth_xform.auth.op = auth_op;
ut_params->auth_xform.auth.digest_length = tdata->gmac_tag.len;
- ut_params->auth_xform.auth.add_auth_data_length = 0;
ut_params->auth_xform.auth.key.length = tdata->key.len;
ut_params->auth_xform.auth.key.data = auth_key;
ut_params->auth_xform.auth.iv.offset = IV_OFFSET;
@@ -6684,7 +6682,6 @@ create_auth_session(struct crypto_unittest_params *ut_params,
ut_params->auth_xform.auth.key.length = reference->auth_key.len;
ut_params->auth_xform.auth.key.data = auth_key;
ut_params->auth_xform.auth.digest_length = reference->digest.len;
- ut_params->auth_xform.auth.add_auth_data_length = reference->aad.len;
/* Create Crypto session*/
ut_params->sess = rte_cryptodev_sym_session_create(dev_id,
@@ -6722,7 +6719,6 @@ create_auth_cipher_session(struct crypto_unittest_params *ut_params,
ut_params->auth_xform.auth.iv.length = reference->iv.len;
} else {
ut_params->auth_xform.next = &ut_params->cipher_xform;
- ut_params->auth_xform.auth.add_auth_data_length = reference->aad.len;
/* Setup Cipher Parameters */
ut_params->cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
diff --git a/test/test/test_cryptodev_perf.c b/test/test/test_cryptodev_perf.c
index 5b2468d..7ae1ae9 100644
--- a/test/test/test_cryptodev_perf.c
+++ b/test/test/test_cryptodev_perf.c
@@ -2936,7 +2936,6 @@ test_perf_set_crypto_op_aes(struct rte_crypto_op *op, struct rte_mbuf *m,
if (chain == CIPHER_ONLY) {
op->sym->auth.digest.data = NULL;
op->sym->auth.digest.phys_addr = 0;
- op->sym->auth.aad.data = NULL;
op->sym->auth.data.offset = 0;
op->sym->auth.data.length = 0;
} else {
--
2.9.4
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v2 21/27] cryptodev: add AEAD parameters in crypto operation
` (8 preceding siblings ...)
2017-06-26 10:22 1% ` [dpdk-dev] [PATCH v2 18/27] cryptodev: remove digest " Pablo de Lara
@ 2017-06-26 10:22 2% ` Pablo de Lara
2017-06-26 10:23 4% ` [dpdk-dev] [PATCH v2 27/27] cryptodev: remove AAD from authentication structure Pablo de Lara
11 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2017-06-26 10:22 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal, hemant.agrawal
Cc: dev, Pablo de Lara
AEAD operation parameters can be set in the new
aead structure, in the crypto operation.
This structure is within a union with the cipher
and authentication parameters, since operations can be:
- AEAD: using the aead structure
- Cipher only: using only the cipher structure
- Auth only: using only the authentication structure
- Cipher-then-auth/Auth-then-cipher: using both cipher
and authentication structures
Therefore, all three cannot be used at the same time.
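A sketch of how an application fills the new aead branch of the union
(buffer pointers, physical addresses and lengths are placeholders):

    struct rte_crypto_sym_op *sym_op = op->sym;

    /* offset/length of the data to be encrypted and authenticated */
    sym_op->aead.data.offset = 0;
    sym_op->aead.data.length = plaintext_len;

    /* where the tag is written (generation) or read (verification) */
    sym_op->aead.digest.data = digest_ptr;
    sym_op->aead.digest.phys_addr = digest_phys;

    /* additional authenticated data */
    sym_op->aead.aad.data = aad_ptr;
    sym_op->aead.aad.phys_addr = aad_phys;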
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
doc/guides/prog_guide/cryptodev_lib.rst | 70 +++---
doc/guides/rel_notes/release_17_08.rst | 1 +
lib/librte_cryptodev/rte_crypto_sym.h | 375 ++++++++++++++++++++------------
3 files changed, 279 insertions(+), 167 deletions(-)
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index b888554..5048839 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -431,7 +431,6 @@ operations, as well as also supporting AEAD operations.
Session and Session Management
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Session are used in symmetric cryptographic processing to store the immutable
data defined in a cryptographic transform which is used in the operation
@@ -465,9 +464,6 @@ operation and its parameters. See the section below for details on transforms.
struct rte_cryptodev_sym_session * rte_cryptodev_sym_session_create(
uint8_t dev_id, struct rte_crypto_sym_xform *xform);
-**Note**: For AEAD operations the algorithm selected for authentication and
-ciphering must aligned, eg AES_GCM.
-
Transforms and Transform Chaining
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -533,30 +529,54 @@ chain.
/**< Session-less API Crypto operation parameters */
};
- struct {
- struct {
- uint32_t offset;
- uint32_t length;
- } data; /**< Data offsets and length for ciphering */
- } cipher;
-
- struct {
- struct {
- uint32_t offset;
- uint32_t length;
- } data; /**< Data offsets and length for authentication */
-
+ union {
struct {
- uint8_t *data;
- phys_addr_t phys_addr;
- } digest; /**< Digest parameters */
+ struct {
+ uint32_t offset;
+ uint32_t length;
+ } data; /**< Data offsets and length for AEAD */
+
+ struct {
+ uint8_t *data;
+ phys_addr_t phys_addr;
+ } digest; /**< Digest parameters */
+
+ struct {
+ uint8_t *data;
+ phys_addr_t phys_addr;
+ } aad;
+ /**< Additional authentication parameters */
+ } aead;
struct {
- uint8_t *data;
- phys_addr_t phys_addr;
- } aad; /**< Additional authentication parameters */
- } auth;
- }
+ struct {
+ struct {
+ uint32_t offset;
+ uint32_t length;
+ } data; /**< Data offsets and length for ciphering */
+ } cipher;
+
+ struct {
+ struct {
+ uint32_t offset;
+ uint32_t length;
+ } data;
+ /**< Data offsets and length for authentication */
+
+ struct {
+ uint8_t *data;
+ phys_addr_t phys_addr;
+ } digest; /**< Digest parameters */
+
+ struct {
+ uint8_t *data;
+ phys_addr_t phys_addr;
+ } aad;
+ /**< Additional authentication parameters */
+ } auth;
+ };
+ };
+ };
Asymmetric Cryptography
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index b920142..2c6bef5 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -175,6 +175,7 @@ API Changes
* Removed digest length from ``rte_crypto_sym_op``.
* Changed field size of digest length in ``rte_crypto_auth_xform``,
from uint32_t to uint16_t.
+ * Added AEAD structure in ``rte_crypto_sym_op``.
ABI Changes
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index db3957e..f03d2fd 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -556,151 +556,242 @@ struct rte_crypto_sym_op {
/**< Session-less API crypto operation parameters */
};
- struct {
- struct {
- uint32_t offset;
- /**< Starting point for cipher processing, specified
- * as number of bytes from start of data in the source
- * buffer. The result of the cipher operation will be
- * written back into the output buffer starting at
- * this location.
- *
- * @note
- * For SNOW 3G @ RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
- * KASUMI @ RTE_CRYPTO_CIPHER_KASUMI_F8
- * and ZUC @ RTE_CRYPTO_CIPHER_ZUC_EEA3,
- * this field should be in bits.
- */
-
- uint32_t length;
- /**< The message length, in bytes, of the source buffer
- * on which the cryptographic operation will be
- * computed. This must be a multiple of the block size
- * if a block cipher is being used. This is also the
- * same as the result length.
- *
- * @note
- * In the case of CCM @ref RTE_CRYPTO_AUTH_AES_CCM,
- * this value should not include the length of the
- * padding or the length of the MAC; the driver will
- * compute the actual number of bytes over which the
- * encryption will occur, which will include these
- * values.
- *
- * @note
- * For SNOW 3G @ RTE_CRYPTO_AUTH_SNOW3G_UEA2,
- * KASUMI @ RTE_CRYPTO_CIPHER_KASUMI_F8
- * and ZUC @ RTE_CRYPTO_CIPHER_ZUC_EEA3,
- * this field should be in bits.
- */
- } data; /**< Data offsets and length for ciphering */
-
- } cipher;
-
- struct {
- struct {
- uint32_t offset;
- /**< Starting point for hash processing, specified as
- * number of bytes from start of packet in source
- * buffer.
- *
- * @note
- * For CCM and GCM modes of operation, this field is
- * ignored. The field @ref aad field
- * should be set instead.
- *
- * @note
- * For SNOW 3G @ RTE_CRYPTO_AUTH_SNOW3G_UIA2,
- * KASUMI @ RTE_CRYPTO_AUTH_KASUMI_F9
- * and ZUC @ RTE_CRYPTO_AUTH_ZUC_EIA3,
- * this field should be in bits.
- */
-
- uint32_t length;
- /**< The message length, in bytes, of the source
- * buffer that the hash will be computed on.
- *
- * @note
- * For CCM and GCM modes of operation, this field is
- * ignored. The field @ref aad field should be set
- * instead.
- *
- * @note
- * For SNOW 3G @ RTE_CRYPTO_AUTH_SNOW3G_UIA2,
- * KASUMI @ RTE_CRYPTO_AUTH_KASUMI_F9
- * and ZUC @ RTE_CRYPTO_AUTH_ZUC_EIA3,
- * this field should be in bits.
- */
- } data; /**< Data offsets and length for authentication */
-
+ union {
struct {
- uint8_t *data;
- /**< This points to the location where the digest result
- * should be inserted (in the case of digest generation)
- * or where the purported digest exists (in the case of
- * digest verification).
- *
- * At session creation time, the client specified the
- * digest result length with the digest_length member
- * of the @ref rte_crypto_auth_xform structure. For
- * physical crypto devices the caller must allocate at
- * least digest_length of physically contiguous memory
- * at this location.
- *
- * For digest generation, the digest result will
- * overwrite any data at this location.
- *
- * @note
- * For GCM (@ref RTE_CRYPTO_AUTH_AES_GCM), for
- * "digest result" read "authentication tag T".
- */
- phys_addr_t phys_addr;
- /**< Physical address of digest */
- } digest; /**< Digest parameters */
+ struct {
+ uint32_t offset;
+ /**< Starting point for AEAD processing, specified as
+ * number of bytes from start of packet in source
+ * buffer.
+ */
+ uint32_t length;
+ /**< The message length, in bytes, of the source buffer
+ * on which the cryptographic operation will be
+ * computed. This must be a multiple of the block size
+ */
+ } data; /**< Data offsets and length for AEAD */
+ struct {
+ uint8_t *data;
+ /**< This points to the location where the digest result
+ * should be inserted (in the case of digest generation)
+ * or where the purported digest exists (in the case of
+ * digest verification).
+ *
+ * At session creation time, the client specified the
+ * digest result length with the digest_length member
+ * of the @ref rte_crypto_aead_xform structure. For
+ * physical crypto devices the caller must allocate at
+ * least digest_length of physically contiguous memory
+ * at this location.
+ *
+ * For digest generation, the digest result will
+ * overwrite any data at this location.
+ *
+ * @note
+ * For GCM (@ref RTE_CRYPTO_AEAD_AES_GCM), for
+ * "digest result" read "authentication tag T".
+ */
+ phys_addr_t phys_addr;
+ /**< Physical address of digest */
+ } digest; /**< Digest parameters */
+ struct {
+ uint8_t *data;
+ /**< Pointer to Additional Authenticated Data (AAD)
+ * needed for authenticated cipher mechanisms (CCM and
+ * GCM)
+ *
+ * Specifically for CCM (@ref RTE_CRYPTO_AEAD_AES_CCM),
+ * the caller should setup this field as follows:
+ *
+ * - the nonce should be written starting at an offset
+ * of one byte into the array, leaving room for the
+ * implementation to write in the flags to the first
+ * byte.
+ *
+ * - the additional authentication data itself should
+ * be written starting at an offset of 18 bytes into
+ * the array, leaving room for the length encoding in
+ * the first two bytes of the second block.
+ *
+ * - the array should be big enough to hold the above
+ * fields, plus any padding to round this up to the
+ * nearest multiple of the block size (16 bytes).
+ * Padding will be added by the implementation.
+ *
+ * Finally, for GCM (@ref RTE_CRYPTO_AEAD_AES_GCM), the
+ * caller should setup this field as follows:
+ *
+ * - the AAD is written in starting at byte 0
+ * - the array must be big enough to hold the AAD, plus
+ * any space to round this up to the nearest multiple
+ * of the block size (16 bytes).
+ *
+ */
+ phys_addr_t phys_addr; /**< physical address */
+ } aad;
+ /**< Additional authentication parameters */
+ } aead;
struct {
- uint8_t *data;
- /**< Pointer to Additional Authenticated Data (AAD)
- * needed for authenticated cipher mechanisms (CCM and
- * GCM).
- *
- * The length of the data pointed to by this field is
- * set up for the session in the @ref
- * rte_crypto_auth_xform structure as part of the @ref
- * rte_cryptodev_sym_session_create function call.
- * This length must not exceed 65535 (2^16-1) bytes.
- *
- * Specifically for CCM (@ref RTE_CRYPTO_AUTH_AES_CCM),
- * the caller should setup this field as follows:
- *
- * - the nonce should be written starting at an offset
- * of one byte into the array, leaving room for the
- * implementation to write in the flags to the first
- * byte.
- *
- * - the additional authentication data itself should
- * be written starting at an offset of 18 bytes into
- * the array, leaving room for the length encoding in
- * the first two bytes of the second block.
- *
- * - the array should be big enough to hold the above
- * fields, plus any padding to round this up to the
- * nearest multiple of the block size (16 bytes).
- * Padding will be added by the implementation.
- *
- * Finally, for GCM (@ref RTE_CRYPTO_AUTH_AES_GCM), the
- * caller should setup this field as follows:
- *
- * - the AAD is written in starting at byte 0
- * - the array must be big enough to hold the AAD, plus
- * any space to round this up to the nearest multiple
- * of the block size (16 bytes).
- *
- */
- phys_addr_t phys_addr; /**< physical address */
- } aad;
- /**< Additional authentication parameters */
- } auth;
+ struct {
+ struct {
+ uint32_t offset;
+ /**< Starting point for cipher processing,
+ * specified as number of bytes from start
+ * of data in the source buffer.
+ * The result of the cipher operation will be
+ * written back into the output buffer
+ * starting at this location.
+ *
+ * @note
+ * For SNOW 3G @ RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
+ * KASUMI @ RTE_CRYPTO_CIPHER_KASUMI_F8
+ * and ZUC @ RTE_CRYPTO_CIPHER_ZUC_EEA3,
+ * this field should be in bits.
+ */
+ uint32_t length;
+ /**< The message length, in bytes, of the
+ * source buffer on which the cryptographic
+ * operation will be computed.
+ * This must be a multiple of the block size
+ * if a block cipher is being used. This is
+ * also the same as the result length.
+ *
+ * @note
+ * In the case of CCM
+ * @ref RTE_CRYPTO_AUTH_AES_CCM, this value
+ * should not include the length of the padding
+ * or the length of the MAC; the driver will
+ * compute the actual number of bytes over
+ * which the encryption will occur, which will
+ * include these values.
+ *
+ * @note
+ * For SNOW 3G @ RTE_CRYPTO_AUTH_SNOW3G_UEA2,
+ * KASUMI @ RTE_CRYPTO_CIPHER_KASUMI_F8
+ * and ZUC @ RTE_CRYPTO_CIPHER_ZUC_EEA3,
+ * this field should be in bits.
+ */
+ } data; /**< Data offsets and length for ciphering */
+ } cipher;
+
+ struct {
+ struct {
+ uint32_t offset;
+ /**< Starting point for hash processing,
+ * specified as number of bytes from start of
+ * packet in source buffer.
+ *
+ * @note
+ * For CCM and GCM modes of operation,
+ * this field is ignored.
+ * The field @ref aad field should be set
+ * instead.
+ *
+ * @note
+ * For SNOW 3G @ RTE_CRYPTO_AUTH_SNOW3G_UIA2,
+ * KASUMI @ RTE_CRYPTO_AUTH_KASUMI_F9
+ * and ZUC @ RTE_CRYPTO_AUTH_ZUC_EIA3,
+ * this field should be in bits.
+ */
+ uint32_t length;
+ /**< The message length, in bytes, of the source
+ * buffer that the hash will be computed on.
+ *
+ * @note
+ * For CCM and GCM modes of operation,
+ * this field is ignored. The field @ref aad
+ * field should be set instead.
+ *
+ * @note
+ * For SNOW 3G @ RTE_CRYPTO_AUTH_SNOW3G_UIA2,
+ * KASUMI @ RTE_CRYPTO_AUTH_KASUMI_F9
+ * and ZUC @ RTE_CRYPTO_AUTH_ZUC_EIA3,
+ * this field should be in bits.
+ */
+ } data;
+ /**< Data offsets and length for authentication */
+
+ struct {
+ uint8_t *data;
+ /**< This points to the location where
+ * the digest result should be inserted
+ * (in the case of digest generation)
+ * or where the purported digest exists
+ * (in the case of digest verification).
+ *
+ * At session creation time, the client
+ * specified the digest result length with
+ * the digest_length member of the
+ * @ref rte_crypto_auth_xform structure.
+ * For physical crypto devices the caller
+ * must allocate at least digest_length of
+ * physically contiguous memory at this
+ * location.
+ *
+ * For digest generation, the digest result
+ * will overwrite any data at this location.
+ *
+ * @note
+ * For GCM (@ref RTE_CRYPTO_AUTH_AES_GCM), for
+ * "digest result" read "authentication tag T".
+ */
+ phys_addr_t phys_addr;
+ /**< Physical address of digest */
+ } digest; /**< Digest parameters */
+
+ struct {
+ uint8_t *data;
+ /**< Pointer to Additional Authenticated
+ * Data (AAD) needed for authenticated cipher
+ * mechanisms (CCM and GCM).
+ *
+ * The length of the data pointed to by this
+ * field is set up for the session in the @ref
+ * rte_crypto_auth_xform structure as part of
+ * the @ref rte_cryptodev_sym_session_create
+ * function call.
+ * This length must not exceed 65535 (2^16-1)
+ * bytes.
+ *
+ * Specifically for CCM
+ * (@ref RTE_CRYPTO_AUTH_AES_CCM),
+ * the caller should setup this field as follows:
+ *
+ * - the nonce should be written starting at
+ * an offset of one byte into the array,
+ * leaving room for the implementation to
+ * write in the flags to the first byte.
+ *
+ * - the additional authentication data
+ * itself should be written starting at
+ * an offset of 18 bytes into the array,
+ * leaving room for the length encoding in
+ * the first two bytes of the second block.
+ *
+ * - the array should be big enough to hold
+ * the above fields, plus any padding to
+ * round this up to the nearest multiple of
+ * the block size (16 bytes).
+ * Padding will be added by the implementation.
+ *
+ * Finally, for GCM
+ * (@ref RTE_CRYPTO_AUTH_AES_GCM), the
+ * caller should setup this field as follows:
+ *
+ * - the AAD is written in starting at byte 0
+ * - the array must be big enough to hold
+ * the AAD, plus any space to round this up to
+ * the nearest multiple of the block size
+ * (16 bytes).
+ *
+ */
+ phys_addr_t phys_addr; /**< physical address */
+ } aad;
+ /**< Additional authentication parameters */
+ } auth;
+ };
+ };
};
--
2.9.4
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v2 18/27] cryptodev: remove digest length from crypto op
` (7 preceding siblings ...)
2017-06-26 10:22 2% ` [dpdk-dev] [PATCH v2 17/27] cryptodev: remove AAD length from crypto op Pablo de Lara
@ 2017-06-26 10:22 1% ` Pablo de Lara
2017-06-26 10:22 2% ` [dpdk-dev] [PATCH v2 21/27] cryptodev: add AEAD parameters in crypto operation Pablo de Lara
` (2 subsequent siblings)
11 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2017-06-26 10:22 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal, hemant.agrawal
Cc: dev, Pablo de Lara
Digest length was duplicated in the authentication transform
and the crypto operation structures.
Since digest length is not expected to change within a
session, it is removed from the crypto operation.
Also, the length has been shrunk to 16 bits,
which should be sufficient for any digest.
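In practice the length moves from per-operation setup to the transform
passed at session creation - a sketch (algorithm choice, length and
placeholder variables are illustrative):

    struct rte_crypto_sym_xform auth_xform = { 0 };
    auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
    auth_xform.auth.algo = RTE_CRYPTO_AUTH_SHA256_HMAC;
    auth_xform.auth.digest_length = 32;  /* set once per session */

    sess = rte_cryptodev_sym_session_create(dev_id, &auth_xform);

    /* per operation, only the digest location remains */
    op->sym->auth.digest.data = digest_ptr;
    op->sym->auth.digest.phys_addr = digest_phys;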
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_ops.c | 7 ---
doc/guides/prog_guide/cryptodev_lib.rst | 1 -
doc/guides/rel_notes/release_17_08.rst | 3 ++
drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 34 +++++++-------
drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h | 2 +
drivers/crypto/armv8/rte_armv8_pmd.c | 9 ++--
drivers/crypto/armv8/rte_armv8_pmd_private.h | 2 +
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 34 +++++++-------
drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 1 +
drivers/crypto/kasumi/rte_kasumi_pmd.c | 18 ++++----
drivers/crypto/openssl/rte_openssl_pmd.c | 7 +--
drivers/crypto/openssl/rte_openssl_pmd_private.h | 2 +
drivers/crypto/qat/qat_adf/qat_algs.h | 1 +
drivers/crypto/qat/qat_crypto.c | 3 +-
drivers/crypto/snow3g/rte_snow3g_pmd.c | 18 ++++----
drivers/crypto/zuc/rte_zuc_pmd.c | 18 ++++----
examples/ipsec-secgw/esp.c | 2 -
examples/l2fwd-crypto/main.c | 1 -
lib/librte_cryptodev/rte_crypto_sym.h | 6 +--
test/test/test_cryptodev.c | 34 +++++---------
test/test/test_cryptodev_blockcipher.c | 5 +--
test/test/test_cryptodev_perf.c | 56 ++++++++----------------
22 files changed, 119 insertions(+), 145 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index c45d369..b8bd397 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -154,7 +154,6 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
sym_op->auth.digest.data = test_vector->digest.data;
sym_op->auth.digest.phys_addr =
test_vector->digest.phys_addr;
- sym_op->auth.digest.length = options->auth_digest_sz;
} else {
uint32_t offset = options->test_buffer_size;
@@ -177,7 +176,6 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
uint8_t *, offset);
sym_op->auth.digest.phys_addr =
rte_pktmbuf_mtophys_offset(buf, offset);
- sym_op->auth.digest.length = options->auth_digest_sz;
sym_op->auth.aad.phys_addr = test_vector->aad.phys_addr;
sym_op->auth.aad.data = test_vector->aad.data;
@@ -242,7 +240,6 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
sym_op->auth.digest.data = test_vector->digest.data;
sym_op->auth.digest.phys_addr =
test_vector->digest.phys_addr;
- sym_op->auth.digest.length = options->auth_digest_sz;
} else {
uint32_t offset = options->test_buffer_size;
@@ -265,7 +262,6 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
uint8_t *, offset);
sym_op->auth.digest.phys_addr =
rte_pktmbuf_mtophys_offset(buf, offset);
- sym_op->auth.digest.length = options->auth_digest_sz;
sym_op->auth.aad.phys_addr = test_vector->aad.phys_addr;
sym_op->auth.aad.data = test_vector->aad.data;
}
@@ -318,7 +314,6 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
sym_op->auth.digest.data = test_vector->digest.data;
sym_op->auth.digest.phys_addr =
test_vector->digest.phys_addr;
- sym_op->auth.digest.length = options->auth_digest_sz;
} else {
uint32_t offset = sym_op->cipher.data.length +
@@ -342,8 +337,6 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
uint8_t *, offset);
sym_op->auth.digest.phys_addr =
rte_pktmbuf_mtophys_offset(buf, offset);
-
- sym_op->auth.digest.length = options->auth_digest_sz;
}
sym_op->auth.data.length = options->test_buffer_size;
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index ea8fc00..e036611 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -547,7 +547,6 @@ chain.
struct {
uint8_t *data;
phys_addr_t phys_addr;
- uint16_t length;
} digest; /**< Digest parameters */
struct {
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index e633d73..a544639 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -166,6 +166,9 @@ API Changes
* Removed Additional Authentication Data (AAD) length from ``rte_crypto_sym_op``.
* Changed field size of AAD length in ``rte_crypto_auth_xform``,
from uint32_t to uint16_t.
+ * Removed digest length from ``rte_crypto_sym_op``.
+ * Changed field size of digest length in ``rte_crypto_auth_xform``,
+ from uint32_t to uint16_t.
ABI Changes
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 77808b4..cf115d3 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -78,6 +78,7 @@ aesni_gcm_set_session_parameters(struct aesni_gcm_session *sess,
{
const struct rte_crypto_sym_xform *auth_xform;
const struct rte_crypto_sym_xform *cipher_xform;
+ uint16_t digest_length;
if (xform->next == NULL || xform->next->next != NULL) {
GCM_LOG_ERR("Two and only two chained xform required");
@@ -128,6 +129,8 @@ aesni_gcm_set_session_parameters(struct aesni_gcm_session *sess,
return -EINVAL;
}
+ digest_length = auth_xform->auth.digest_length;
+
/* Check key length and calculate GCM pre-compute. */
switch (cipher_xform->cipher.key.length) {
case 16:
@@ -146,6 +149,14 @@ aesni_gcm_set_session_parameters(struct aesni_gcm_session *sess,
}
sess->aad_length = auth_xform->auth.add_auth_data_length;
+ /* Digest check */
+ if (digest_length != 16 &&
+ digest_length != 12 &&
+ digest_length != 8) {
+ GCM_LOG_ERR("digest");
+ return -EINVAL;
+ }
+ sess->digest_length = digest_length;
return 0;
}
@@ -245,13 +256,6 @@ process_gcm_crypto_op(struct rte_crypto_op *op,
*iv_padd = rte_bswap32(1);
}
- if (sym_op->auth.digest.length != 16 &&
- sym_op->auth.digest.length != 12 &&
- sym_op->auth.digest.length != 8) {
- GCM_LOG_ERR("digest");
- return -1;
- }
-
if (session->op == AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION) {
aesni_gcm_enc[session->key].init(&session->gdata,
@@ -281,11 +285,11 @@ process_gcm_crypto_op(struct rte_crypto_op *op,
aesni_gcm_enc[session->key].finalize(&session->gdata,
sym_op->auth.digest.data,
- (uint64_t)sym_op->auth.digest.length);
+ (uint64_t)session->digest_length);
} else { /* session->op == AESNI_GCM_OP_AUTHENTICATED_DECRYPTION */
uint8_t *auth_tag = (uint8_t *)rte_pktmbuf_append(sym_op->m_dst ?
sym_op->m_dst : sym_op->m_src,
- sym_op->auth.digest.length);
+ session->digest_length);
if (!auth_tag) {
GCM_LOG_ERR("auth_tag");
@@ -319,7 +323,7 @@ process_gcm_crypto_op(struct rte_crypto_op *op,
aesni_gcm_dec[session->key].finalize(&session->gdata,
auth_tag,
- (uint64_t)sym_op->auth.digest.length);
+ (uint64_t)session->digest_length);
}
return 0;
@@ -349,21 +353,21 @@ post_process_gcm_crypto_op(struct rte_crypto_op *op)
if (session->op == AESNI_GCM_OP_AUTHENTICATED_DECRYPTION) {
uint8_t *tag = rte_pktmbuf_mtod_offset(m, uint8_t *,
- m->data_len - op->sym->auth.digest.length);
+ m->data_len - session->digest_length);
#ifdef RTE_LIBRTE_PMD_AESNI_GCM_DEBUG
rte_hexdump(stdout, "auth tag (orig):",
- op->sym->auth.digest.data, op->sym->auth.digest.length);
+ op->sym->auth.digest.data, session->digest_length);
rte_hexdump(stdout, "auth tag (calc):",
- tag, op->sym->auth.digest.length);
+ tag, session->digest_length);
#endif
if (memcmp(tag, op->sym->auth.digest.data,
- op->sym->auth.digest.length) != 0)
+ session->digest_length) != 0)
op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
/* trim area used for digest from mbuf */
- rte_pktmbuf_trim(m, op->sym->auth.digest.length);
+ rte_pktmbuf_trim(m, session->digest_length);
}
}
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
index bfd4d1c..05fabe6 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
@@ -95,6 +95,8 @@ struct aesni_gcm_session {
uint16_t offset;
} iv;
/**< IV parameters */
+ uint16_t digest_length;
+ /**< Digest length */
enum aesni_gcm_operation op;
/**< GCM operation type */
enum aesni_gcm_key key;
diff --git a/drivers/crypto/armv8/rte_armv8_pmd.c b/drivers/crypto/armv8/rte_armv8_pmd.c
index 5256f66..dae74c8 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd.c
@@ -452,6 +452,9 @@ armv8_crypto_set_session_chained_parameters(struct armv8_crypto_session *sess,
return -EINVAL;
}
+ /* Set the digest length */
+ sess->auth.digest_length = auth_xform->auth.digest_length;
+
/* Verify supported key lengths and extract proper algorithm */
switch (cipher_xform->cipher.key.length << 3) {
case 128:
@@ -649,7 +652,7 @@ process_armv8_chained_op
}
} else {
adst = (uint8_t *)rte_pktmbuf_append(m_asrc,
- op->sym->auth.digest.length);
+ sess->auth.digest_length);
}
arg.cipher.iv = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -667,12 +670,12 @@ process_armv8_chained_op
op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
if (sess->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) {
if (memcmp(adst, op->sym->auth.digest.data,
- op->sym->auth.digest.length) != 0) {
+ sess->auth.digest_length) != 0) {
op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
}
/* Trim area used for digest from mbuf. */
rte_pktmbuf_trim(m_asrc,
- op->sym->auth.digest.length);
+ sess->auth.digest_length);
}
}
diff --git a/drivers/crypto/armv8/rte_armv8_pmd_private.h b/drivers/crypto/armv8/rte_armv8_pmd_private.h
index 75bde9f..09d32f2 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd_private.h
+++ b/drivers/crypto/armv8/rte_armv8_pmd_private.h
@@ -199,6 +199,8 @@ struct armv8_crypto_session {
/**< HMAC key (max supported length)*/
} hmac;
};
+ uint16_t digest_length;
+ /* Digest length */
} auth;
} __rte_cache_aligned;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 3930794..8ee6ece 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -84,7 +84,7 @@ build_authenc_fd(dpaa2_sec_session *sess,
struct sec_flow_context *flc;
uint32_t auth_only_len = sym_op->auth.data.length -
sym_op->cipher.data.length;
- int icv_len = sym_op->auth.digest.length;
+ int icv_len = sess->digest_length;
uint8_t *old_icv;
uint32_t mem_len = (7 * sizeof(struct qbman_fle)) + icv_len;
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -135,7 +135,7 @@ build_authenc_fd(dpaa2_sec_session *sess,
"cipher_off: 0x%x/length %d, iv-len=%d data_off: 0x%x\n",
sym_op->auth.data.offset,
sym_op->auth.data.length,
- sym_op->auth.digest.length,
+ sess->digest_length,
sym_op->cipher.data.offset,
sym_op->cipher.data.length,
sess->iv.length,
@@ -161,7 +161,7 @@ build_authenc_fd(dpaa2_sec_session *sess,
sge++;
DPAA2_SET_FLE_ADDR(sge,
DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
- sge->length = sym_op->auth.digest.length;
+ sge->length = sess->digest_length;
DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
sess->iv.length));
}
@@ -177,7 +177,7 @@ build_authenc_fd(dpaa2_sec_session *sess,
fle->length = (sess->dir == DIR_ENC) ?
(sym_op->auth.data.length + sess->iv.length) :
(sym_op->auth.data.length + sess->iv.length +
- sym_op->auth.digest.length);
+ sess->digest_length);
/* Configure Input SGE for Encap/Decap */
DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(iv_ptr));
@@ -192,12 +192,12 @@ build_authenc_fd(dpaa2_sec_session *sess,
sge++;
old_icv = (uint8_t *)(sge + 1);
memcpy(old_icv, sym_op->auth.digest.data,
- sym_op->auth.digest.length);
- memset(sym_op->auth.digest.data, 0, sym_op->auth.digest.length);
+ sess->digest_length);
+ memset(sym_op->auth.digest.data, 0, sess->digest_length);
DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_icv));
- sge->length = sym_op->auth.digest.length;
+ sge->length = sess->digest_length;
DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
- sym_op->auth.digest.length +
+ sess->digest_length +
sess->iv.length));
}
DPAA2_SET_FLE_FIN(sge);
@@ -217,7 +217,7 @@ build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
uint32_t mem_len = (sess->dir == DIR_ENC) ?
(3 * sizeof(struct qbman_fle)) :
(5 * sizeof(struct qbman_fle) +
- sym_op->auth.digest.length);
+ sess->digest_length);
struct sec_flow_context *flc;
struct ctxt_priv *priv = sess->ctxt;
uint8_t *old_digest;
@@ -251,7 +251,7 @@ build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
- fle->length = sym_op->auth.digest.length;
+ fle->length = sess->digest_length;
DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
DPAA2_SET_FD_COMPOUND_FMT(fd);
@@ -282,17 +282,17 @@ build_auth_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
sym_op->m_src->data_off);
DPAA2_SET_FD_LEN(fd, sym_op->auth.data.length +
- sym_op->auth.digest.length);
+ sess->digest_length);
sge->length = sym_op->auth.data.length;
sge++;
old_digest = (uint8_t *)(sge + 1);
rte_memcpy(old_digest, sym_op->auth.digest.data,
- sym_op->auth.digest.length);
- memset(sym_op->auth.digest.data, 0, sym_op->auth.digest.length);
+ sess->digest_length);
+ memset(sym_op->auth.digest.data, 0, sess->digest_length);
DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_digest));
- sge->length = sym_op->auth.digest.length;
+ sge->length = sess->digest_length;
fle->length = sym_op->auth.data.length +
- sym_op->auth.digest.length;
+ sess->digest_length;
DPAA2_SET_FLE_FIN(sge);
}
DPAA2_SET_FLE_FIN(fle);
@@ -912,6 +912,8 @@ dpaa2_sec_auth_init(struct rte_cryptodev *dev,
authdata.key_enc_flags = 0;
authdata.key_type = RTA_DATA_IMM;
+ session->digest_length = xform->auth.digest_length;
+
switch (xform->auth.algo) {
case RTE_CRYPTO_AUTH_SHA1_HMAC:
authdata.algtype = OP_ALG_ALGSEL_SHA1;
@@ -1064,6 +1066,8 @@ dpaa2_sec_aead_init(struct rte_cryptodev *dev,
authdata.key_enc_flags = 0;
authdata.key_type = RTA_DATA_IMM;
+ session->digest_length = auth_xform->digest_length;
+
switch (auth_xform->algo) {
case RTE_CRYPTO_AUTH_SHA1_HMAC:
authdata.algtype = OP_ALG_ALGSEL_SHA1;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index ff3be70..eda2eec 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -191,6 +191,7 @@ typedef struct dpaa2_sec_session_entry {
uint16_t length; /**< IV length in bytes */
uint16_t offset; /**< IV offset in bytes */
} iv;
+ uint16_t digest_length;
uint8_t status;
union {
struct dpaa2_sec_cipher_ctxt cipher_ctxt;
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd.c b/drivers/crypto/kasumi/rte_kasumi_pmd.c
index 43f7fe7..32a5a7c 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd.c
@@ -132,6 +132,12 @@ kasumi_set_session_parameters(struct kasumi_session *sess,
/* Only KASUMI F9 supported */
if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_KASUMI_F9)
return -EINVAL;
+
+ if (auth_xform->auth.digest_length != KASUMI_DIGEST_LENGTH) {
+ KASUMI_LOG_ERR("Wrong digest length");
+ return -EINVAL;
+ }
+
sess->auth_op = auth_xform->auth.op;
sess->auth_iv_offset = auth_xform->auth.iv.offset;
@@ -261,12 +267,6 @@ process_kasumi_hash_op(struct rte_crypto_op **ops,
uint8_t direction;
for (i = 0; i < num_ops; i++) {
- if (unlikely(ops[i]->sym->auth.digest.length != KASUMI_DIGEST_LENGTH)) {
- ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- KASUMI_LOG_ERR("digest");
- break;
- }
-
/* Data must be byte aligned */
if ((ops[i]->sym->auth.data.offset % BYTE_LEN) != 0) {
ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
@@ -288,19 +288,19 @@ process_kasumi_hash_op(struct rte_crypto_op **ops,
if (session->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
dst = (uint8_t *)rte_pktmbuf_append(ops[i]->sym->m_src,
- ops[i]->sym->auth.digest.length);
+ KASUMI_DIGEST_LENGTH);
sso_kasumi_f9_1_buffer_user(&session->pKeySched_hash,
iv, src,
length_in_bits, dst, direction);
/* Verify digest. */
if (memcmp(dst, ops[i]->sym->auth.digest.data,
- ops[i]->sym->auth.digest.length) != 0)
+ KASUMI_DIGEST_LENGTH) != 0)
ops[i]->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
/* Trim area used for digest from mbuf. */
rte_pktmbuf_trim(ops[i]->sym->m_src,
- ops[i]->sym->auth.digest.length);
+ KASUMI_DIGEST_LENGTH);
} else {
dst = ops[i]->sym->auth.digest.data;
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 8853a67..7b39691 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -371,6 +371,7 @@ openssl_set_session_auth_parameters(struct openssl_session *sess,
}
sess->auth.aad_length = xform->auth.add_auth_data_length;
+ sess->auth.digest_length = xform->auth.digest_length;
return 0;
}
@@ -1130,7 +1131,7 @@ process_openssl_auth_op
if (sess->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY)
dst = (uint8_t *)rte_pktmbuf_append(mbuf_src,
- op->sym->auth.digest.length);
+ sess->auth.digest_length);
else {
dst = op->sym->auth.digest.data;
if (dst == NULL)
@@ -1158,11 +1159,11 @@ process_openssl_auth_op
if (sess->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) {
if (memcmp(dst, op->sym->auth.digest.data,
- op->sym->auth.digest.length) != 0) {
+ sess->auth.digest_length) != 0) {
op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
}
/* Trim area used for digest from mbuf. */
- rte_pktmbuf_trim(mbuf_src, op->sym->auth.digest.length);
+ rte_pktmbuf_trim(mbuf_src, sess->auth.digest_length);
}
if (status != 0)
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_private.h b/drivers/crypto/openssl/rte_openssl_pmd_private.h
index 045e532..4c9be05 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_private.h
+++ b/drivers/crypto/openssl/rte_openssl_pmd_private.h
@@ -165,6 +165,8 @@ struct openssl_session {
uint16_t aad_length;
/**< AAD length */
+ uint16_t digest_length;
+ /**< digest length */
} auth;
} __rte_cache_aligned;
diff --git a/drivers/crypto/qat/qat_adf/qat_algs.h b/drivers/crypto/qat/qat_adf/qat_algs.h
index f70c6cb..b13d90b 100644
--- a/drivers/crypto/qat/qat_adf/qat_algs.h
+++ b/drivers/crypto/qat/qat_adf/qat_algs.h
@@ -135,6 +135,7 @@ struct qat_session {
uint16_t offset;
uint16_t length;
} auth_iv;
+ uint16_t digest_length;
rte_spinlock_t lock; /* protects this struct */
};
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
index 6adc1eb..6174f61 100644
--- a/drivers/crypto/qat/qat_crypto.c
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -606,6 +606,7 @@ qat_crypto_sym_configure_session_auth(struct rte_cryptodev *dev,
auth_xform->op))
goto error_out;
}
+ session->digest_length = auth_xform->digest_length;
return session;
error_out:
@@ -1215,7 +1216,7 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
ctx->auth_iv.length);
}
rte_hexdump(stdout, "digest:", op->sym->auth.digest.data,
- op->sym->auth.digest.length);
+ ctx->digest_length);
rte_hexdump(stdout, "aad:", op->sym->auth.aad.data,
ctx->aad_len);
}
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd.c b/drivers/crypto/snow3g/rte_snow3g_pmd.c
index 23f00a6..064fc69 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd.c
@@ -132,6 +132,12 @@ snow3g_set_session_parameters(struct snow3g_session *sess,
/* Only SNOW 3G UIA2 supported */
if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_SNOW3G_UIA2)
return -EINVAL;
+
+ if (auth_xform->auth.digest_length != SNOW3G_DIGEST_LENGTH) {
+ SNOW3G_LOG_ERR("Wrong digest length");
+ return -EINVAL;
+ }
+
sess->auth_op = auth_xform->auth.op;
if (auth_xform->auth.iv.length != SNOW3G_IV_LENGTH) {
@@ -252,12 +258,6 @@ process_snow3g_hash_op(struct rte_crypto_op **ops,
uint8_t *iv;
for (i = 0; i < num_ops; i++) {
- if (unlikely(ops[i]->sym->auth.digest.length != SNOW3G_DIGEST_LENGTH)) {
- ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- SNOW3G_LOG_ERR("digest");
- break;
- }
-
/* Data must be byte aligned */
if ((ops[i]->sym->auth.data.offset % BYTE_LEN) != 0) {
ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
@@ -274,19 +274,19 @@ process_snow3g_hash_op(struct rte_crypto_op **ops,
if (session->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
dst = (uint8_t *)rte_pktmbuf_append(ops[i]->sym->m_src,
- ops[i]->sym->auth.digest.length);
+ SNOW3G_DIGEST_LENGTH);
sso_snow3g_f9_1_buffer(&session->pKeySched_hash,
iv, src,
length_in_bits, dst);
/* Verify digest. */
if (memcmp(dst, ops[i]->sym->auth.digest.data,
- ops[i]->sym->auth.digest.length) != 0)
+ SNOW3G_DIGEST_LENGTH) != 0)
ops[i]->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
/* Trim area used for digest from mbuf. */
rte_pktmbuf_trim(ops[i]->sym->m_src,
- ops[i]->sym->auth.digest.length);
+ SNOW3G_DIGEST_LENGTH);
} else {
dst = ops[i]->sym->auth.digest.data;
diff --git a/drivers/crypto/zuc/rte_zuc_pmd.c b/drivers/crypto/zuc/rte_zuc_pmd.c
index c824d9c..3c2d437 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd.c
@@ -131,6 +131,12 @@ zuc_set_session_parameters(struct zuc_session *sess,
/* Only ZUC EIA3 supported */
if (auth_xform->auth.algo != RTE_CRYPTO_AUTH_ZUC_EIA3)
return -EINVAL;
+
+ if (auth_xform->auth.digest_length != ZUC_DIGEST_LENGTH) {
+ ZUC_LOG_ERR("Wrong digest length");
+ return -EINVAL;
+ }
+
sess->auth_op = auth_xform->auth.op;
if (auth_xform->auth.iv.length != ZUC_IV_KEY_LENGTH) {
@@ -249,12 +255,6 @@ process_zuc_hash_op(struct rte_crypto_op **ops,
uint8_t *iv;
for (i = 0; i < num_ops; i++) {
- if (unlikely(ops[i]->sym->auth.digest.length != ZUC_DIGEST_LENGTH)) {
- ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- ZUC_LOG_ERR("digest");
- break;
- }
-
/* Data must be byte aligned */
if ((ops[i]->sym->auth.data.offset % BYTE_LEN) != 0) {
ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
@@ -271,19 +271,19 @@ process_zuc_hash_op(struct rte_crypto_op **ops,
if (session->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
dst = (uint32_t *)rte_pktmbuf_append(ops[i]->sym->m_src,
- ops[i]->sym->auth.digest.length);
+ ZUC_DIGEST_LENGTH);
sso_zuc_eia3_1_buffer(session->pKey_hash,
iv, src,
length_in_bits, dst);
/* Verify digest. */
if (memcmp(dst, ops[i]->sym->auth.digest.data,
- ops[i]->sym->auth.digest.length) != 0)
+ ZUC_DIGEST_LENGTH) != 0)
ops[i]->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
/* Trim area used for digest from mbuf. */
rte_pktmbuf_trim(ops[i]->sym->m_src,
- ops[i]->sym->auth.digest.length);
+ ZUC_DIGEST_LENGTH);
} else {
dst = (uint32_t *)ops[i]->sym->auth.digest.data;
diff --git a/examples/ipsec-secgw/esp.c b/examples/ipsec-secgw/esp.c
index 571c2c6..d544a3c 100644
--- a/examples/ipsec-secgw/esp.c
+++ b/examples/ipsec-secgw/esp.c
@@ -140,7 +140,6 @@ esp_inbound(struct rte_mbuf *m, struct ipsec_sa *sa,
rte_pktmbuf_pkt_len(m) - sa->digest_len);
sym_cop->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(m,
rte_pktmbuf_pkt_len(m) - sa->digest_len);
- sym_cop->auth.digest.length = sa->digest_len;
return 0;
}
@@ -368,7 +367,6 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
rte_pktmbuf_pkt_len(m) - sa->digest_len);
sym_cop->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(m,
rte_pktmbuf_pkt_len(m) - sa->digest_len);
- sym_cop->auth.digest.length = sa->digest_len;
return 0;
}
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 6fe829e..6d88937 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -481,7 +481,6 @@ l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
op->sym->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(m,
rte_pktmbuf_pkt_len(m) - cparams->digest_length);
- op->sym->auth.digest.length = cparams->digest_length;
/* For wireless algorithms, offset/length must be in bits */
if (cparams->auth_algo == RTE_CRYPTO_AUTH_SNOW3G_UIA2 ||
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index b964a56..de4031a 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -354,7 +354,7 @@ struct rte_crypto_auth_xform {
* (for example RFC 2104, FIPS 198a).
*/
- uint32_t digest_length;
+ uint16_t digest_length;
/**< Length of the digest to be returned. If the verify option is set,
* this specifies the length of the digest to be compared for the
* session.
@@ -604,10 +604,6 @@ struct rte_crypto_sym_op {
*/
phys_addr_t phys_addr;
/**< Physical address of digest */
- uint16_t length;
- /**< Length of digest. This must be the same value as
- * @ref rte_crypto_auth_xform.digest_length.
- */
} digest; /**< Digest parameters */
struct {
diff --git a/test/test/test_cryptodev.c b/test/test/test_cryptodev.c
index 91078ab..a1243d0 100644
--- a/test/test/test_cryptodev.c
+++ b/test/test/test_cryptodev.c
@@ -1307,7 +1307,6 @@ test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
sym_op->auth.digest.data = ut_params->digest;
sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
ut_params->ibuf, QUOTE_512_BYTES);
- sym_op->auth.digest.length = DIGEST_BYTE_LENGTH_SHA1;
sym_op->auth.data.offset = 0;
sym_op->auth.data.length = QUOTE_512_BYTES;
@@ -1459,7 +1458,6 @@ test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_sym_session *sess,
sym_op->auth.digest.data = ut_params->digest;
sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
ut_params->ibuf, QUOTE_512_BYTES);
- sym_op->auth.digest.length = DIGEST_BYTE_LENGTH_SHA512;
sym_op->auth.data.offset = 0;
sym_op->auth.data.length = QUOTE_512_BYTES;
@@ -2102,7 +2100,6 @@ create_wireless_algo_hash_operation(const uint8_t *auth_tag,
ut_params->digest = sym_op->auth.digest.data;
sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
ut_params->ibuf, data_pad_len);
- sym_op->auth.digest.length = auth_tag_len;
if (op == RTE_CRYPTO_AUTH_OP_GENERATE)
memset(sym_op->auth.digest.data, 0, auth_tag_len);
else
@@ -2110,7 +2107,7 @@ create_wireless_algo_hash_operation(const uint8_t *auth_tag,
TEST_HEXDUMP(stdout, "digest:",
sym_op->auth.digest.data,
- sym_op->auth.digest.length);
+ auth_tag_len);
sym_op->auth.data.length = auth_len;
sym_op->auth.data.offset = auth_offset;
@@ -2159,7 +2156,6 @@ create_wireless_cipher_hash_operation(const struct wireless_test_data *tdata,
ut_params->digest = sym_op->auth.digest.data;
sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
ut_params->ibuf, data_pad_len);
- sym_op->auth.digest.length = auth_tag_len;
if (op == RTE_CRYPTO_AUTH_OP_GENERATE)
memset(sym_op->auth.digest.data, 0, auth_tag_len);
else
@@ -2167,7 +2163,7 @@ create_wireless_cipher_hash_operation(const struct wireless_test_data *tdata,
TEST_HEXDUMP(stdout, "digest:",
sym_op->auth.digest.data,
- sym_op->auth.digest.length);
+ auth_tag_len);
/* Copy cipher and auth IVs at the end of the crypto operation */
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ut_params->op, uint8_t *,
@@ -2227,7 +2223,6 @@ create_wireless_algo_cipher_hash_operation(const uint8_t *auth_tag,
ut_params->digest = sym_op->auth.digest.data;
sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
ut_params->ibuf, data_pad_len);
- sym_op->auth.digest.length = auth_tag_len;
if (op == RTE_CRYPTO_AUTH_OP_GENERATE)
memset(sym_op->auth.digest.data, 0, auth_tag_len);
else
@@ -2235,7 +2230,7 @@ create_wireless_algo_cipher_hash_operation(const uint8_t *auth_tag,
TEST_HEXDUMP(stdout, "digest:",
sym_op->auth.digest.data,
- sym_op->auth.digest.length);
+ auth_tag_len);
/* Copy cipher and auth IVs at the end of the crypto operation */
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ut_params->op, uint8_t *,
@@ -2286,13 +2281,12 @@ create_wireless_algo_auth_cipher_operation(unsigned int auth_tag_len,
sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
ut_params->ibuf, data_pad_len);
- sym_op->auth.digest.length = auth_tag_len;
memset(sym_op->auth.digest.data, 0, auth_tag_len);
TEST_HEXDUMP(stdout, "digest:",
sym_op->auth.digest.data,
- sym_op->auth.digest.length);
+ auth_tag_len);
/* Copy cipher and auth IVs at the end of the crypto operation */
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ut_params->op, uint8_t *,
@@ -4825,7 +4819,6 @@ create_gcm_operation(enum rte_crypto_cipher_operation op,
ut_params->ibuf,
plaintext_pad_len +
aad_pad_len);
- sym_op->auth.digest.length = tdata->auth_tag.len;
} else {
sym_op->auth.digest.data = (uint8_t *)rte_pktmbuf_append(
ut_params->ibuf, tdata->auth_tag.len);
@@ -4834,13 +4827,12 @@ create_gcm_operation(enum rte_crypto_cipher_operation op,
sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
ut_params->ibuf,
plaintext_pad_len + aad_pad_len);
- sym_op->auth.digest.length = tdata->auth_tag.len;
rte_memcpy(sym_op->auth.digest.data, tdata->auth_tag.data,
tdata->auth_tag.len);
TEST_HEXDUMP(stdout, "digest:",
sym_op->auth.digest.data,
- sym_op->auth.digest.length);
+ tdata->auth_tag.len);
}
sym_op->cipher.data.length = tdata->plaintext.len;
@@ -5615,7 +5607,6 @@ static int MD5_HMAC_create_op(struct crypto_unittest_params *ut_params,
"no room to append digest");
sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
ut_params->ibuf, plaintext_pad_len);
- sym_op->auth.digest.length = MD5_DIGEST_LEN;
if (ut_params->auth_xform.auth.op == RTE_CRYPTO_AUTH_OP_VERIFY) {
rte_memcpy(sym_op->auth.digest.data, test_case->auth_tag.data,
@@ -6326,14 +6317,13 @@ create_gmac_operation(enum rte_crypto_auth_operation op,
sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
ut_params->ibuf, aad_pad_len);
- sym_op->auth.digest.length = tdata->gmac_tag.len;
if (op == RTE_CRYPTO_AUTH_OP_VERIFY) {
rte_memcpy(sym_op->auth.digest.data, tdata->gmac_tag.data,
tdata->gmac_tag.len);
TEST_HEXDUMP(stdout, "digest:",
sym_op->auth.digest.data,
- sym_op->auth.digest.length);
+ tdata->gmac_tag.len);
}
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ut_params->op,
@@ -6811,7 +6801,6 @@ create_auth_operation(struct crypto_testsuite_params *ts_params,
sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
ut_params->ibuf, reference->plaintext.len);
- sym_op->auth.digest.length = reference->digest.len;
if (auth_generate)
memset(sym_op->auth.digest.data, 0, reference->digest.len);
@@ -6822,7 +6811,7 @@ create_auth_operation(struct crypto_testsuite_params *ts_params,
TEST_HEXDUMP(stdout, "digest:",
sym_op->auth.digest.data,
- sym_op->auth.digest.length);
+ reference->digest.len);
sym_op->auth.data.length = reference->plaintext.len;
sym_op->auth.data.offset = 0;
@@ -6869,7 +6858,6 @@ create_auth_GMAC_operation(struct crypto_testsuite_params *ts_params,
sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
ut_params->ibuf, reference->ciphertext.len);
- sym_op->auth.digest.length = reference->digest.len;
if (auth_generate)
memset(sym_op->auth.digest.data, 0, reference->digest.len);
@@ -6880,7 +6868,7 @@ create_auth_GMAC_operation(struct crypto_testsuite_params *ts_params,
TEST_HEXDUMP(stdout, "digest:",
sym_op->auth.digest.data,
- sym_op->auth.digest.length);
+ reference->digest.len);
rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
reference->iv.data, reference->iv.len);
@@ -6923,7 +6911,6 @@ create_cipher_auth_operation(struct crypto_testsuite_params *ts_params,
sym_op->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
ut_params->ibuf, reference->ciphertext.len);
- sym_op->auth.digest.length = reference->digest.len;
if (auth_generate)
memset(sym_op->auth.digest.data, 0, reference->digest.len);
@@ -6934,7 +6921,7 @@ create_cipher_auth_operation(struct crypto_testsuite_params *ts_params,
TEST_HEXDUMP(stdout, "digest:",
sym_op->auth.digest.data,
- sym_op->auth.digest.length);
+ reference->digest.len);
rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
reference->iv.data, reference->iv.len);
@@ -7171,14 +7158,13 @@ create_gcm_operation_SGL(enum rte_crypto_cipher_operation op,
"no room to append digest");
sym_op->auth.digest.phys_addr = digest_phys;
- sym_op->auth.digest.length = auth_tag_len;
if (op == RTE_CRYPTO_CIPHER_OP_DECRYPT) {
rte_memcpy(sym_op->auth.digest.data, tdata->auth_tag.data,
auth_tag_len);
TEST_HEXDUMP(stdout, "digest:",
sym_op->auth.digest.data,
- sym_op->auth.digest.length);
+ auth_tag_len);
}
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ut_params->op,
diff --git a/test/test/test_cryptodev_blockcipher.c b/test/test/test_cryptodev_blockcipher.c
index c69e83e..6c1f1ec 100644
--- a/test/test/test_cryptodev_blockcipher.c
+++ b/test/test/test_cryptodev_blockcipher.c
@@ -324,7 +324,6 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
sym_op->auth.data.offset = 0;
sym_op->auth.data.length = tdata->ciphertext.len;
- sym_op->auth.digest.length = digest_len;
}
/* create session for sessioned op */
@@ -480,7 +479,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
sym_op->auth.data.offset;
changed_len = sym_op->auth.data.length;
if (t->op_mask & BLOCKCIPHER_TEST_OP_AUTH_GEN)
- changed_len += sym_op->auth.digest.length;
+ changed_len += digest_len;
} else {
/* cipher-only */
head_unchanged_len = rte_pktmbuf_headroom(mbuf) +
@@ -522,7 +521,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
}
if (t->op_mask & BLOCKCIPHER_TEST_OP_AUTH_GEN)
- changed_len += sym_op->auth.digest.length;
+ changed_len += digest_len;
if (t->op_mask & BLOCKCIPHER_TEST_OP_AUTH_VERIFY) {
/* white-box test: PMDs use some of the
diff --git a/test/test/test_cryptodev_perf.c b/test/test/test_cryptodev_perf.c
index 7239976..3bd9351 100644
--- a/test/test/test_cryptodev_perf.c
+++ b/test/test/test_cryptodev_perf.c
@@ -168,20 +168,19 @@ static struct rte_mbuf *
test_perf_create_pktmbuf(struct rte_mempool *mpool, unsigned buf_sz);
static inline struct rte_crypto_op *
test_perf_set_crypto_op_snow3g(struct rte_crypto_op *op, struct rte_mbuf *m,
- struct rte_cryptodev_sym_session *sess, unsigned data_len,
- unsigned digest_len);
+ struct rte_cryptodev_sym_session *sess, unsigned int data_len);
static inline struct rte_crypto_op *
test_perf_set_crypto_op_aes(struct rte_crypto_op *op, struct rte_mbuf *m,
struct rte_cryptodev_sym_session *sess, unsigned int data_len,
- unsigned int digest_len, enum chain_mode chain);
+ enum chain_mode chain);
static inline struct rte_crypto_op *
test_perf_set_crypto_op_aes_gcm(struct rte_crypto_op *op, struct rte_mbuf *m,
struct rte_cryptodev_sym_session *sess, unsigned int data_len,
- unsigned int digest_len, enum chain_mode chain __rte_unused);
+ enum chain_mode chain __rte_unused);
static inline struct rte_crypto_op *
test_perf_set_crypto_op_3des(struct rte_crypto_op *op, struct rte_mbuf *m,
struct rte_cryptodev_sym_session *sess, unsigned int data_len,
- unsigned int digest_len, enum chain_mode chain __rte_unused);
+ enum chain_mode chain __rte_unused);
static uint32_t get_auth_digest_length(enum rte_crypto_auth_algorithm algo);
@@ -1979,7 +1978,6 @@ test_perf_crypto_qp_vary_burst_size(uint16_t dev_num)
op->sym->auth.digest.data = ut_params->digest;
op->sym->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(m,
data_params[0].length);
- op->sym->auth.digest.length = DIGEST_BYTE_LENGTH_SHA256;
op->sym->auth.data.offset = 0;
op->sym->auth.data.length = data_params[0].length;
@@ -2102,8 +2100,7 @@ test_perf_snow3G_optimise_cyclecount(struct perf_test_params *pparams)
RTE_CRYPTO_OP_TYPE_SYMMETRIC);
TEST_ASSERT_NOT_NULL(op, "Failed to allocate op");
- op = test_perf_set_crypto_op_snow3g(op, m, sess, pparams->buf_size,
- get_auth_digest_length(pparams->auth_algo));
+ op = test_perf_set_crypto_op_snow3g(op, m, sess, pparams->buf_size);
TEST_ASSERT_NOT_NULL(op, "Failed to attach op to session");
c_ops[i] = op;
@@ -2252,11 +2249,9 @@ test_perf_openssl_optimise_cyclecount(struct perf_test_params *pparams)
static struct rte_crypto_op *(*test_perf_set_crypto_op)
(struct rte_crypto_op *, struct rte_mbuf *,
struct rte_cryptodev_sym_session *,
- unsigned int, unsigned int,
+ unsigned int,
enum chain_mode);
- unsigned int digest_length = get_auth_digest_length(pparams->auth_algo);
-
if (rte_cryptodev_count() == 0) {
printf("\nNo crypto devices found. Is PMD build configured?\n");
return TEST_FAILED;
@@ -2298,7 +2293,7 @@ test_perf_openssl_optimise_cyclecount(struct perf_test_params *pparams)
}
op = test_perf_set_crypto_op(op, m, sess, pparams->buf_size,
- digest_length, pparams->chain);
+ pparams->chain);
TEST_ASSERT_NOT_NULL(op, "Failed to attach op to session");
c_ops[i] = op;
@@ -2407,8 +2402,6 @@ test_perf_armv8_optimise_cyclecount(struct perf_test_params *pparams)
static struct rte_cryptodev_sym_session *sess;
- unsigned int digest_length = get_auth_digest_length(pparams->auth_algo);
-
if (rte_cryptodev_count() == 0) {
printf("\nNo crypto devices found. Is PMD build configured?\n");
return TEST_FAILED;
@@ -2433,7 +2426,7 @@ test_perf_armv8_optimise_cyclecount(struct perf_test_params *pparams)
TEST_ASSERT_NOT_NULL(op, "Failed to allocate op");
op = test_perf_set_crypto_op_aes(op, m, sess, pparams->buf_size,
- digest_length, pparams->chain);
+ pparams->chain);
TEST_ASSERT_NOT_NULL(op, "Failed to attach op to session");
c_ops[i] = op;
@@ -2875,7 +2868,7 @@ test_perf_create_pktmbuf(struct rte_mempool *mpool, unsigned buf_sz)
static inline struct rte_crypto_op *
test_perf_set_crypto_op_aes(struct rte_crypto_op *op, struct rte_mbuf *m,
struct rte_cryptodev_sym_session *sess, unsigned int data_len,
- unsigned int digest_len, enum chain_mode chain)
+ enum chain_mode chain)
{
if (rte_crypto_op_attach_sym_session(op, sess) != 0) {
rte_crypto_op_free(op);
@@ -2886,7 +2879,6 @@ test_perf_set_crypto_op_aes(struct rte_crypto_op *op, struct rte_mbuf *m,
if (chain == CIPHER_ONLY) {
op->sym->auth.digest.data = NULL;
op->sym->auth.digest.phys_addr = 0;
- op->sym->auth.digest.length = 0;
op->sym->auth.aad.data = NULL;
op->sym->auth.data.offset = 0;
op->sym->auth.data.length = 0;
@@ -2895,7 +2887,6 @@ test_perf_set_crypto_op_aes(struct rte_crypto_op *op, struct rte_mbuf *m,
uint8_t *, data_len);
op->sym->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(m,
data_len);
- op->sym->auth.digest.length = digest_len;
op->sym->auth.data.offset = 0;
op->sym->auth.data.length = data_len;
}
@@ -2917,7 +2908,7 @@ test_perf_set_crypto_op_aes(struct rte_crypto_op *op, struct rte_mbuf *m,
static inline struct rte_crypto_op *
test_perf_set_crypto_op_aes_gcm(struct rte_crypto_op *op, struct rte_mbuf *m,
struct rte_cryptodev_sym_session *sess, unsigned int data_len,
- unsigned int digest_len, enum chain_mode chain __rte_unused)
+ enum chain_mode chain __rte_unused)
{
if (rte_crypto_op_attach_sym_session(op, sess) != 0) {
rte_crypto_op_free(op);
@@ -2929,7 +2920,6 @@ test_perf_set_crypto_op_aes_gcm(struct rte_crypto_op *op, struct rte_mbuf *m,
(m->data_off + data_len);
op->sym->auth.digest.phys_addr =
rte_pktmbuf_mtophys_offset(m, data_len);
- op->sym->auth.digest.length = digest_len;
op->sym->auth.aad.data = aes_gcm_aad;
/* Copy IV at the end of the crypto operation */
@@ -2950,8 +2940,7 @@ test_perf_set_crypto_op_aes_gcm(struct rte_crypto_op *op, struct rte_mbuf *m,
static inline struct rte_crypto_op *
test_perf_set_crypto_op_snow3g(struct rte_crypto_op *op, struct rte_mbuf *m,
- struct rte_cryptodev_sym_session *sess, unsigned data_len,
- unsigned digest_len)
+ struct rte_cryptodev_sym_session *sess, unsigned int data_len)
{
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op,
uint8_t *, IV_OFFSET);
@@ -2968,7 +2957,6 @@ test_perf_set_crypto_op_snow3g(struct rte_crypto_op *op, struct rte_mbuf *m,
(m->data_off + data_len);
op->sym->auth.digest.phys_addr =
rte_pktmbuf_mtophys_offset(m, data_len);
- op->sym->auth.digest.length = digest_len;
/* Data lengths/offsets Parameters */
op->sym->auth.data.offset = 0;
@@ -3015,8 +3003,7 @@ static inline struct rte_crypto_op *
test_perf_set_crypto_op_snow3g_hash(struct rte_crypto_op *op,
struct rte_mbuf *m,
struct rte_cryptodev_sym_session *sess,
- unsigned data_len,
- unsigned digest_len)
+ unsigned int data_len)
{
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op,
uint8_t *, IV_OFFSET);
@@ -3036,7 +3023,6 @@ test_perf_set_crypto_op_snow3g_hash(struct rte_crypto_op *op,
op->sym->auth.digest.phys_addr =
rte_pktmbuf_mtophys_offset(m, data_len +
SNOW3G_CIPHER_IV_LENGTH);
- op->sym->auth.digest.length = digest_len;
/* Data lengths/offsets Parameters */
op->sym->auth.data.offset = 0;
@@ -3051,7 +3037,7 @@ test_perf_set_crypto_op_snow3g_hash(struct rte_crypto_op *op,
static inline struct rte_crypto_op *
test_perf_set_crypto_op_3des(struct rte_crypto_op *op, struct rte_mbuf *m,
struct rte_cryptodev_sym_session *sess, unsigned int data_len,
- unsigned int digest_len, enum chain_mode chain __rte_unused)
+ enum chain_mode chain __rte_unused)
{
if (rte_crypto_op_attach_sym_session(op, sess) != 0) {
rte_crypto_op_free(op);
@@ -3063,7 +3049,6 @@ test_perf_set_crypto_op_3des(struct rte_crypto_op *op, struct rte_mbuf *m,
(m->data_off + data_len);
op->sym->auth.digest.phys_addr =
rte_pktmbuf_mtophys_offset(m, data_len);
- op->sym->auth.digest.length = digest_len;
/* Copy IV at the end of the crypto operation */
rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
@@ -3156,7 +3141,7 @@ test_perf_aes_sha(uint8_t dev_id, uint16_t queue_id,
ops[i] = test_perf_set_crypto_op_aes(ops[i],
mbufs[i + (pparams->burst_size *
(j % NUM_MBUF_SETS))],
- sess, pparams->buf_size, digest_length,
+ sess, pparams->buf_size,
pparams->chain);
/* enqueue burst */
@@ -3298,7 +3283,7 @@ test_perf_snow3g(uint8_t dev_id, uint16_t queue_id,
mbufs[i +
(pparams->burst_size * (j % NUM_MBUF_SETS))],
sess,
- pparams->buf_size, digest_length);
+ pparams->buf_size);
else if (pparams->chain == CIPHER_ONLY)
ops[i+op_offset] =
test_perf_set_crypto_op_snow3g_cipher(ops[i+op_offset],
@@ -3394,8 +3379,6 @@ test_perf_openssl(uint8_t dev_id, uint16_t queue_id,
uint64_t processed = 0, failed_polls = 0, retries = 0;
uint64_t tsc_start = 0, tsc_end = 0;
- unsigned int digest_length = get_auth_digest_length(pparams->auth_algo);
-
struct rte_crypto_op *ops[pparams->burst_size];
struct rte_crypto_op *proc_ops[pparams->burst_size];
@@ -3408,7 +3391,7 @@ test_perf_openssl(uint8_t dev_id, uint16_t queue_id,
static struct rte_crypto_op *(*test_perf_set_crypto_op)
(struct rte_crypto_op *, struct rte_mbuf *,
struct rte_cryptodev_sym_session *,
- unsigned int, unsigned int,
+ unsigned int,
enum chain_mode);
switch (pparams->cipher_algo) {
@@ -3470,7 +3453,7 @@ test_perf_openssl(uint8_t dev_id, uint16_t queue_id,
ops[i] = test_perf_set_crypto_op(ops[i],
mbufs[i + (pparams->burst_size *
(j % NUM_MBUF_SETS))],
- sess, pparams->buf_size, digest_length,
+ sess, pparams->buf_size,
pparams->chain);
/* enqueue burst */
@@ -3548,8 +3531,6 @@ test_perf_armv8(uint8_t dev_id, uint16_t queue_id,
uint64_t processed = 0, failed_polls = 0, retries = 0;
uint64_t tsc_start = 0, tsc_end = 0;
- unsigned int digest_length = get_auth_digest_length(pparams->auth_algo);
-
struct rte_crypto_op *ops[pparams->burst_size];
struct rte_crypto_op *proc_ops[pparams->burst_size];
@@ -3604,7 +3585,7 @@ test_perf_armv8(uint8_t dev_id, uint16_t queue_id,
ops[i] = test_perf_set_crypto_op_aes(ops[i],
mbufs[i + (pparams->burst_size *
(j % NUM_MBUF_SETS))], sess,
- pparams->buf_size, digest_length,
+ pparams->buf_size,
pparams->chain);
/* enqueue burst */
@@ -4179,7 +4160,6 @@ perf_gcm_set_crypto_op(struct rte_crypto_op *op, struct rte_mbuf *m,
params->session_attrs->aad_len +
params->symmetric_op->p_len);
- op->sym->auth.digest.length = params->symmetric_op->t_len;
op->sym->auth.aad.data = m_hlp->aad;
op->sym->auth.aad.phys_addr = rte_pktmbuf_mtophys(m);
--
2.9.4
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v2 17/27] cryptodev: remove AAD length from crypto op
` (6 preceding siblings ...)
2017-06-26 10:22 1% ` [dpdk-dev] [PATCH v2 15/27] cryptodev: add auth IV Pablo de Lara
@ 2017-06-26 10:22 2% ` Pablo de Lara
2017-06-26 10:22 1% ` [dpdk-dev] [PATCH v2 18/27] cryptodev: remove digest " Pablo de Lara
` (3 subsequent siblings)
11 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2017-06-26 10:22 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal, hemant.agrawal
Cc: dev, Pablo de Lara
Additional authenticated data (AAD) information was duplicated
in the authentication transform and in the crypto
operation structures.
Since the AAD length is not meant to change within a session,
it is removed from the crypto operation structure.
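As a minimal sketch (not part of the patch; the algorithm and AAD
length shown are illustrative), the AAD length is now fixed in the
auth transform at session creation, and the operation carries only
the AAD buffer itself:

	/* Session creation time: AAD length lives in the transform */
	struct rte_crypto_sym_xform auth_xform = {
		.type = RTE_CRYPTO_SYM_XFORM_AUTH,
		.next = NULL,
		.auth = {
			.algo = RTE_CRYPTO_AUTH_AES_GCM,
			.op = RTE_CRYPTO_AUTH_OP_GENERATE,
			.add_auth_data_length = 16,	/* uint16_t after this patch */
		},
	};

	/* Enqueue time: pointer and physical address only */
	sym_op->auth.aad.data = rte_pktmbuf_mtod(m, uint8_t *);
	sym_op->auth.aad.phys_addr = rte_pktmbuf_mtophys(m);
	/* sym_op->auth.aad.length has been removed; PMDs read the
	 * length from the session instead. */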
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_ops.c | 3 ---
doc/guides/prog_guide/cryptodev_lib.rst | 1 -
doc/guides/rel_notes/release_17_08.rst | 3 +++
drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 6 +++--
drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h | 2 ++
drivers/crypto/openssl/rte_openssl_pmd.c | 4 ++-
drivers/crypto/openssl/rte_openssl_pmd_private.h | 3 +++
drivers/crypto/qat/qat_adf/qat_algs_build_desc.c | 1 +
drivers/crypto/qat/qat_crypto.c | 4 +--
examples/ipsec-secgw/esp.c | 2 --
examples/l2fwd-crypto/main.c | 4 ---
lib/librte_cryptodev/rte_crypto_sym.h | 6 +----
test/test/test_cryptodev.c | 10 +++-----
test/test/test_cryptodev_perf.c | 31 +++++++++++++-----------
14 files changed, 39 insertions(+), 41 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 0ed51e5..c45d369 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -180,7 +180,6 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
sym_op->auth.digest.length = options->auth_digest_sz;
sym_op->auth.aad.phys_addr = test_vector->aad.phys_addr;
sym_op->auth.aad.data = test_vector->aad.data;
- sym_op->auth.aad.length = options->auth_aad_sz;
}
@@ -269,7 +268,6 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
sym_op->auth.digest.length = options->auth_digest_sz;
sym_op->auth.aad.phys_addr = test_vector->aad.phys_addr;
sym_op->auth.aad.data = test_vector->aad.data;
- sym_op->auth.aad.length = options->auth_aad_sz;
}
if (options->auth_algo == RTE_CRYPTO_AUTH_SNOW3G_UIA2 ||
@@ -314,7 +312,6 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
sym_op->auth.aad.data = rte_pktmbuf_mtod(bufs_in[i], uint8_t *);
sym_op->auth.aad.phys_addr = rte_pktmbuf_mtophys(bufs_in[i]);
- sym_op->auth.aad.length = options->auth_aad_sz;
/* authentication parameters */
if (options->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index 68890ff..ea8fc00 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -553,7 +553,6 @@ chain.
struct {
uint8_t *data;
phys_addr_t phys_addr;
- uint16_t length;
} aad; /**< Additional authentication parameters */
} auth;
}
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index eabf3dd..e633d73 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -163,6 +163,9 @@ API Changes
``rte_crypto_cipher_xform``.
* Added authentication IV parameters (offset and length) in
``rte_crypto_auth_xform``.
+ * Removed Additional Authentication Data (AAD) length from ``rte_crypto_sym_op``.
+ * Changed field size of AAD length in ``rte_crypto_auth_xform``,
+ from uint32_t to uint16_t.
ABI Changes
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 28ac035..77808b4 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -145,6 +145,8 @@ aesni_gcm_set_session_parameters(struct aesni_gcm_session *sess,
return -EINVAL;
}
+ sess->aad_length = auth_xform->auth.add_auth_data_length;
+
return 0;
}
@@ -255,7 +257,7 @@ process_gcm_crypto_op(struct rte_crypto_op *op,
aesni_gcm_enc[session->key].init(&session->gdata,
iv_ptr,
sym_op->auth.aad.data,
- (uint64_t)sym_op->auth.aad.length);
+ (uint64_t)session->aad_length);
aesni_gcm_enc[session->key].update(&session->gdata, dst, src,
(uint64_t)part_len);
@@ -293,7 +295,7 @@ process_gcm_crypto_op(struct rte_crypto_op *op,
aesni_gcm_dec[session->key].init(&session->gdata,
iv_ptr,
sym_op->auth.aad.data,
- (uint64_t)sym_op->auth.aad.length);
+ (uint64_t)session->aad_length);
aesni_gcm_dec[session->key].update(&session->gdata, dst, src,
(uint64_t)part_len);
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
index 2ed96f8..bfd4d1c 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
@@ -99,6 +99,8 @@ struct aesni_gcm_session {
/**< GCM operation type */
enum aesni_gcm_key key;
/**< GCM key type */
+ uint16_t aad_length;
+ /**< AAD length */
struct gcm_data gdata __rte_cache_aligned;
/**< GCM parameters */
};
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index ab4333e..8853a67 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -370,6 +370,8 @@ openssl_set_session_auth_parameters(struct openssl_session *sess,
return -EINVAL;
}
+ sess->auth.aad_length = xform->auth.add_auth_data_length;
+
return 0;
}
@@ -934,7 +936,7 @@ process_openssl_combined_op
sess->iv.offset);
ivlen = sess->iv.length;
aad = op->sym->auth.aad.data;
- aadlen = op->sym->auth.aad.length;
+ aadlen = sess->auth.aad_length;
tag = op->sym->auth.digest.data;
if (tag == NULL)
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_private.h b/drivers/crypto/openssl/rte_openssl_pmd_private.h
index 3a64853..045e532 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_private.h
+++ b/drivers/crypto/openssl/rte_openssl_pmd_private.h
@@ -162,6 +162,9 @@ struct openssl_session {
/**< pointer to EVP context structure */
} hmac;
};
+
+ uint16_t aad_length;
+ /**< AAD length */
} auth;
} __rte_cache_aligned;
diff --git a/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c b/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
index 5bf9c86..4df57aa 100644
--- a/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
+++ b/drivers/crypto/qat/qat_adf/qat_algs_build_desc.c
@@ -817,6 +817,7 @@ int qat_alg_aead_session_create_content_desc_auth(struct qat_session *cdesc,
ICP_QAT_HW_GALOIS_128_STATE1_SZ +
ICP_QAT_HW_GALOIS_H_SZ);
*aad_len = rte_bswap32(add_auth_data_length);
+ cdesc->aad_len = add_auth_data_length;
break;
case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_SNOW3G;
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
index a384b24..6adc1eb 100644
--- a/drivers/crypto/qat/qat_crypto.c
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -1190,7 +1190,7 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
cipher_param->cipher_length = 0;
cipher_param->cipher_offset = 0;
auth_param->u1.aad_adr = 0;
- auth_param->auth_len = op->sym->auth.aad.length;
+ auth_param->auth_len = ctx->aad_len;
auth_param->auth_off = op->sym->auth.data.offset;
auth_param->u2.aad_sz = 0;
}
@@ -1217,7 +1217,7 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
rte_hexdump(stdout, "digest:", op->sym->auth.digest.data,
op->sym->auth.digest.length);
rte_hexdump(stdout, "aad:", op->sym->auth.aad.data,
- op->sym->auth.aad.length);
+ ctx->aad_len);
}
#endif
return 0;
diff --git a/examples/ipsec-secgw/esp.c b/examples/ipsec-secgw/esp.c
index 9e12782..571c2c6 100644
--- a/examples/ipsec-secgw/esp.c
+++ b/examples/ipsec-secgw/esp.c
@@ -129,7 +129,6 @@ esp_inbound(struct rte_mbuf *m, struct ipsec_sa *sa,
sym_cop->auth.aad.data = aad;
sym_cop->auth.aad.phys_addr = rte_pktmbuf_mtophys_offset(m,
aad - rte_pktmbuf_mtod(m, uint8_t *));
- sym_cop->auth.aad.length = 8;
break;
default:
RTE_LOG(ERR, IPSEC_ESP, "unsupported auth algorithm %u\n",
@@ -358,7 +357,6 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
sym_cop->auth.aad.data = aad;
sym_cop->auth.aad.phys_addr = rte_pktmbuf_mtophys_offset(m,
aad - rte_pktmbuf_mtod(m, uint8_t *));
- sym_cop->auth.aad.length = 8;
break;
default:
RTE_LOG(ERR, IPSEC_ESP, "unsupported auth algorithm %u\n",
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index ba5aef7..6fe829e 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -497,11 +497,9 @@ l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
if (cparams->aad.length) {
op->sym->auth.aad.data = cparams->aad.data;
op->sym->auth.aad.phys_addr = cparams->aad.phys_addr;
- op->sym->auth.aad.length = cparams->aad.length;
} else {
op->sym->auth.aad.data = NULL;
op->sym->auth.aad.phys_addr = 0;
- op->sym->auth.aad.length = 0;
}
}
@@ -709,8 +707,6 @@ l2fwd_main_loop(struct l2fwd_crypto_options *options)
options->auth_xform.auth.digest_length;
if (options->auth_xform.auth.add_auth_data_length) {
port_cparams[i].aad.data = options->aad.data;
- port_cparams[i].aad.length =
- options->auth_xform.auth.add_auth_data_length;
port_cparams[i].aad.phys_addr = options->aad.phys_addr;
if (!options->aad_param)
generate_random_key(port_cparams[i].aad.data,
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index 3ccb6fd..b964a56 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -365,7 +365,7 @@ struct rte_crypto_auth_xform {
* the result shall be truncated.
*/
- uint32_t add_auth_data_length;
+ uint16_t add_auth_data_length;
/**< The length of the additional authenticated data (AAD) in bytes.
* The maximum permitted value is 65535 (2^16 - 1) bytes, unless
* otherwise specified below.
@@ -653,10 +653,6 @@ struct rte_crypto_sym_op {
* operation, this field is used to pass plaintext.
*/
phys_addr_t phys_addr; /**< physical address */
- uint16_t length;
- /**< Length of additional authenticated data (AAD)
- * in bytes
- */
} aad;
/**< Additional authentication parameters */
} auth;
diff --git a/test/test/test_cryptodev.c b/test/test/test_cryptodev.c
index b5b499a..91078ab 100644
--- a/test/test/test_cryptodev.c
+++ b/test/test/test_cryptodev.c
@@ -4638,7 +4638,7 @@ test_3DES_cipheronly_openssl_all(void)
static int
create_gcm_session(uint8_t dev_id, enum rte_crypto_cipher_operation op,
const uint8_t *key, const uint8_t key_len,
- const uint8_t aad_len, const uint8_t auth_len,
+ const uint16_t aad_len, const uint8_t auth_len,
uint8_t iv_len,
enum rte_crypto_auth_operation auth_op)
{
@@ -4752,12 +4752,11 @@ create_gcm_operation(enum rte_crypto_cipher_operation op,
TEST_ASSERT_NOT_NULL(sym_op->auth.aad.data,
"no room to append aad");
- sym_op->auth.aad.length = tdata->aad.len;
sym_op->auth.aad.phys_addr =
rte_pktmbuf_mtophys(ut_params->ibuf);
memcpy(sym_op->auth.aad.data, tdata->aad.data, tdata->aad.len);
TEST_HEXDUMP(stdout, "aad:", sym_op->auth.aad.data,
- sym_op->auth.aad.length);
+ tdata->aad.len);
/* Append IV at the end of the crypto operation*/
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ut_params->op,
@@ -6316,7 +6315,6 @@ create_gmac_operation(enum rte_crypto_auth_operation op,
TEST_ASSERT_NOT_NULL(sym_op->auth.aad.data,
"no room to append aad");
- sym_op->auth.aad.length = tdata->aad.len;
sym_op->auth.aad.phys_addr =
rte_pktmbuf_mtophys(ut_params->ibuf);
memcpy(sym_op->auth.aad.data, tdata->aad.data, tdata->aad.len);
@@ -6381,7 +6379,7 @@ static int create_gmac_session(uint8_t dev_id,
ut_params->auth_xform.auth.algo = RTE_CRYPTO_AUTH_AES_GMAC;
ut_params->auth_xform.auth.op = auth_op;
ut_params->auth_xform.auth.digest_length = tdata->gmac_tag.len;
- ut_params->auth_xform.auth.add_auth_data_length = 0;
+ ut_params->auth_xform.auth.add_auth_data_length = tdata->aad.len;
ut_params->auth_xform.auth.key.length = 0;
ut_params->auth_xform.auth.key.data = NULL;
@@ -6861,7 +6859,6 @@ create_auth_GMAC_operation(struct crypto_testsuite_params *ts_params,
TEST_HEXDUMP(stdout, "AAD:", sym_op->auth.aad.data, reference->aad.len);
sym_op->auth.aad.phys_addr = rte_pktmbuf_mtophys(ut_params->ibuf);
- sym_op->auth.aad.length = reference->aad.len;
/* digest */
sym_op->auth.digest.data = (uint8_t *)rte_pktmbuf_append(
@@ -7195,7 +7192,6 @@ create_gcm_operation_SGL(enum rte_crypto_cipher_operation op,
"no room to prepend aad");
sym_op->auth.aad.phys_addr = rte_pktmbuf_mtophys(
ut_params->ibuf);
- sym_op->auth.aad.length = aad_len;
memset(sym_op->auth.aad.data, 0, aad_len);
rte_memcpy(sym_op->auth.aad.data, tdata->aad.data, aad_len);
diff --git a/test/test/test_cryptodev_perf.c b/test/test/test_cryptodev_perf.c
index 1d204fd..7239976 100644
--- a/test/test/test_cryptodev_perf.c
+++ b/test/test/test_cryptodev_perf.c
@@ -45,6 +45,7 @@
#define AES_CIPHER_IV_LENGTH 16
#define TRIPLE_DES_CIPHER_IV_LENGTH 8
+#define AES_GCM_AAD_LENGTH 16
#define PERF_NUM_OPS_INFLIGHT (128)
#define DEFAULT_NUM_REQS_TO_SUBMIT (10000000)
@@ -70,7 +71,6 @@ enum chain_mode {
struct symmetric_op {
const uint8_t *aad_data;
- uint32_t aad_len;
const uint8_t *p_data;
uint32_t p_len;
@@ -97,6 +97,7 @@ struct symmetric_session_attrs {
const uint8_t *iv_data;
uint16_t iv_len;
+ uint16_t aad_len;
uint32_t digest_len;
};
@@ -2779,6 +2780,7 @@ test_perf_create_openssl_session(uint8_t dev_id, enum chain_mode chain,
break;
case RTE_CRYPTO_AUTH_AES_GCM:
auth_xform.auth.key.data = NULL;
+ auth_xform.auth.add_auth_data_length = AES_GCM_AAD_LENGTH;
break;
default:
return NULL;
@@ -2855,8 +2857,6 @@ test_perf_create_armv8_session(uint8_t dev_id, enum chain_mode chain,
}
}
-#define AES_GCM_AAD_LENGTH 16
-
static struct rte_mbuf *
test_perf_create_pktmbuf(struct rte_mempool *mpool, unsigned buf_sz)
{
@@ -2888,7 +2888,6 @@ test_perf_set_crypto_op_aes(struct rte_crypto_op *op, struct rte_mbuf *m,
op->sym->auth.digest.phys_addr = 0;
op->sym->auth.digest.length = 0;
op->sym->auth.aad.data = NULL;
- op->sym->auth.aad.length = 0;
op->sym->auth.data.offset = 0;
op->sym->auth.data.length = 0;
} else {
@@ -2932,7 +2931,6 @@ test_perf_set_crypto_op_aes_gcm(struct rte_crypto_op *op, struct rte_mbuf *m,
rte_pktmbuf_mtophys_offset(m, data_len);
op->sym->auth.digest.length = digest_len;
op->sym->auth.aad.data = aes_gcm_aad;
- op->sym->auth.aad.length = AES_GCM_AAD_LENGTH;
/* Copy IV at the end of the crypto operation */
rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
@@ -2999,9 +2997,14 @@ test_perf_set_crypto_op_snow3g_cipher(struct rte_crypto_op *op,
rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
snow3g_iv, SNOW3G_CIPHER_IV_LENGTH);
+ /* Cipher Parameters */
op->sym->cipher.data.offset = 0;
op->sym->cipher.data.length = data_len << 3;
+ rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
+ snow3g_iv,
+ SNOW3G_CIPHER_IV_LENGTH);
+
op->sym->m_src = m;
return op;
@@ -4137,6 +4140,7 @@ test_perf_create_session(uint8_t dev_id, struct perf_test_params *pparams)
auth_xform.auth.op = pparams->session_attrs->auth;
auth_xform.auth.algo = pparams->session_attrs->auth_algorithm;
+ auth_xform.auth.add_auth_data_length = pparams->session_attrs->aad_len;
auth_xform.auth.digest_length = pparams->session_attrs->digest_len;
auth_xform.auth.key.length = pparams->session_attrs->key_auth_len;
@@ -4172,17 +4176,16 @@ perf_gcm_set_crypto_op(struct rte_crypto_op *op, struct rte_mbuf *m,
op->sym->auth.digest.data = m_hlp->digest;
op->sym->auth.digest.phys_addr = rte_pktmbuf_mtophys_offset(
m,
- params->symmetric_op->aad_len +
+ params->session_attrs->aad_len +
params->symmetric_op->p_len);
op->sym->auth.digest.length = params->symmetric_op->t_len;
op->sym->auth.aad.data = m_hlp->aad;
- op->sym->auth.aad.length = params->symmetric_op->aad_len;
op->sym->auth.aad.phys_addr = rte_pktmbuf_mtophys(m);
rte_memcpy(op->sym->auth.aad.data, params->symmetric_op->aad_data,
- params->symmetric_op->aad_len);
+ params->session_attrs->aad_len);
rte_memcpy(iv_ptr, params->session_attrs->iv_data,
params->session_attrs->iv_len);
@@ -4190,11 +4193,11 @@ perf_gcm_set_crypto_op(struct rte_crypto_op *op, struct rte_mbuf *m,
iv_ptr[15] = 1;
op->sym->auth.data.offset =
- params->symmetric_op->aad_len;
+ params->session_attrs->aad_len;
op->sym->auth.data.length = params->symmetric_op->p_len;
op->sym->cipher.data.offset =
- params->symmetric_op->aad_len;
+ params->session_attrs->aad_len;
op->sym->cipher.data.length = params->symmetric_op->p_len;
op->sym->m_src = m;
@@ -4208,7 +4211,7 @@ test_perf_create_pktmbuf_fill(struct rte_mempool *mpool,
unsigned buf_sz, struct crypto_params *m_hlp)
{
struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
- uint16_t aad_len = params->symmetric_op->aad_len;
+ uint16_t aad_len = params->session_attrs->aad_len;
uint16_t digest_size = params->symmetric_op->t_len;
char *p;
@@ -4344,14 +4347,14 @@ perf_AES_GCM(uint8_t dev_id, uint16_t queue_id,
TEST_ASSERT_BUFFERS_ARE_EQUAL(
pparams->symmetric_op->c_data,
pkt +
- pparams->symmetric_op->aad_len,
+ pparams->session_attrs->aad_len,
pparams->symmetric_op->c_len,
"GCM Ciphertext data not as expected");
TEST_ASSERT_BUFFERS_ARE_EQUAL(
pparams->symmetric_op->t_data,
pkt +
- pparams->symmetric_op->aad_len +
+ pparams->session_attrs->aad_len +
pparams->symmetric_op->c_len,
pparams->symmetric_op->t_len,
"GCM MAC data not as expected");
@@ -4423,13 +4426,13 @@ test_perf_AES_GCM(int continual_buf_len, int continual_size)
RTE_CRYPTO_AUTH_OP_GENERATE;
session_attrs[i].key_auth_data = NULL;
session_attrs[i].key_auth_len = 0;
+ session_attrs[i].aad_len = gcm_test->aad.len;
session_attrs[i].digest_len =
gcm_test->auth_tag.len;
session_attrs[i].iv_len = gcm_test->iv.len;
session_attrs[i].iv_data = gcm_test->iv.data;
ops_set[i].aad_data = gcm_test->aad.data;
- ops_set[i].aad_len = gcm_test->aad.len;
ops_set[i].p_data = gcm_test->plaintext.data;
ops_set[i].p_len = buf_lengths[i];
ops_set[i].c_data = gcm_test->ciphertext.data;
--
2.9.4
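A minimal sketch of the call pattern after this change, assuming a GCM
session and hypothetical aad_buf/mbuf variables: the AAD length is
supplied once through the auth transform, while each operation only
sets the AAD pointer and physical address.

#include <rte_crypto.h>
#include <rte_mbuf.h>

#define AES_GCM_AAD_LENGTH 16

/* Session creation time: the AAD length now lives in the transform */
static void
fill_auth_xform(struct rte_crypto_sym_xform *auth_xform)
{
	auth_xform->type = RTE_CRYPTO_SYM_XFORM_AUTH;
	auth_xform->auth.algo = RTE_CRYPTO_AUTH_AES_GCM;
	auth_xform->auth.add_auth_data_length = AES_GCM_AAD_LENGTH;
}

/* Per operation: only pointer and address; auth.aad.length is gone */
static void
fill_op_aad(struct rte_crypto_op *op, struct rte_mbuf *m, uint8_t *aad_buf)
{
	op->sym->auth.aad.data = aad_buf;
	op->sym->auth.aad.phys_addr = rte_pktmbuf_mtophys(m);
}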
* [dpdk-dev] [PATCH v2 15/27] cryptodev: add auth IV
From: Pablo de Lara @ 2017-06-26 10:22 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal, hemant.agrawal
Cc: dev, Pablo de Lara
Authentication algorithms, such as AES-GMAC or the wireless
algorithms (like SNOW3G), use an IV, just like cipher algorithms.
So far, AES-GMAC has used the IV from the cipher structure,
and the wireless algorithms have used the AAD field,
which is not technically correct.
Therefore, authentication IV parameters have been added,
making the API more correct. Like the cipher IV, the auth IV
is expected to be copied after the crypto operation.
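A minimal sketch of the resulting usage, assuming hypothetical
iv_offset, auth_iv and auth_iv_len values: the IV placement is
declared once in the transform, and the IV bytes themselves are
written into each operation's private area.

#include <string.h>
#include <rte_crypto.h>

static void
set_auth_iv(struct rte_crypto_sym_xform *auth_xform, struct rte_crypto_op *op,
	    uint16_t iv_offset, const uint8_t *auth_iv, uint16_t auth_iv_len)
{
	/* At session creation: declare offset (from the start of the
	 * rte_crypto_op) and length of the auth IV */
	auth_xform->auth.iv.offset = iv_offset;
	auth_xform->auth.iv.length = auth_iv_len;

	/* Per operation: copy the IV bytes after the crypto operation */
	memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, iv_offset),
	       auth_iv, auth_iv_len);
}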
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_ops.c | 47 ++++++--
app/test-crypto-perf/cperf_options.h | 2 +
app/test-crypto-perf/cperf_options_parsing.c | 9 ++
app/test-crypto-perf/cperf_test_latency.c | 4 +-
app/test-crypto-perf/cperf_test_throughput.c | 3 +-
app/test-crypto-perf/cperf_test_vector_parsing.c | 54 +++++++---
app/test-crypto-perf/cperf_test_vectors.c | 37 +++++--
app/test-crypto-perf/cperf_test_vectors.h | 8 +-
app/test-crypto-perf/cperf_test_verify.c | 3 +-
app/test-crypto-perf/data/aes_cbc_128_sha.data | 2 +-
app/test-crypto-perf/data/aes_cbc_192_sha.data | 2 +-
app/test-crypto-perf/data/aes_cbc_256_sha.data | 2 +-
app/test-crypto-perf/main.c | 25 ++++-
doc/guides/prog_guide/cryptodev_lib.rst | 3 +-
doc/guides/rel_notes/release_17_08.rst | 2 +
doc/guides/sample_app_ug/l2_forward_crypto.rst | 17 ++-
doc/guides/tools/cryptoperf.rst | 14 ++-
drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c | 6 +-
drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c | 21 ++--
drivers/crypto/armv8/rte_armv8_pmd_ops.c | 6 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 18 ++--
drivers/crypto/kasumi/rte_kasumi_pmd_ops.c | 3 +-
drivers/crypto/null/null_crypto_pmd_ops.c | 3 +-
drivers/crypto/openssl/rte_openssl_pmd_ops.c | 78 ++++++++------
drivers/crypto/qat/qat_crypto_capabilities.h | 41 ++++---
drivers/crypto/snow3g/rte_snow3g_pmd_ops.c | 3 +-
drivers/crypto/zuc/rte_zuc_pmd_ops.c | 3 +-
examples/l2fwd-crypto/main.c | 132 +++++++++++++++++------
lib/librte_cryptodev/rte_crypto_sym.h | 24 +++++
lib/librte_cryptodev/rte_cryptodev.c | 6 +-
lib/librte_cryptodev/rte_cryptodev.h | 6 +-
31 files changed, 425 insertions(+), 159 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index d6d9f14..0ed51e5 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -106,8 +106,8 @@ cperf_set_ops_cipher(struct rte_crypto_op **ops,
sym_op->m_dst = bufs_out[i];
memcpy(rte_crypto_op_ctod_offset(ops[i], uint8_t *, iv_offset),
- test_vector->iv.data,
- test_vector->iv.length);
+ test_vector->cipher_iv.data,
+ test_vector->cipher_iv.length);
/* cipher parameters */
if (options->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
@@ -129,7 +129,7 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
const struct cperf_options *options,
const struct cperf_test_vector *test_vector,
- uint16_t iv_offset __rte_unused)
+ uint16_t iv_offset)
{
uint16_t i;
@@ -141,6 +141,14 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
sym_op->m_src = bufs_in[i];
sym_op->m_dst = bufs_out[i];
+ if (test_vector->auth_iv.length) {
+ uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ops[i],
+ uint8_t *,
+ iv_offset);
+ memcpy(iv_ptr, test_vector->auth_iv.data,
+ test_vector->auth_iv.length);
+ }
+
/* authentication parameters */
if (options->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
sym_op->auth.digest.data = test_vector->digest.data;
@@ -207,9 +215,11 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
sym_op->m_src = bufs_in[i];
sym_op->m_dst = bufs_out[i];
- memcpy(rte_crypto_op_ctod_offset(ops[i], uint8_t *, iv_offset),
- test_vector->iv.data,
- test_vector->iv.length);
+ uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ops[i],
+ uint8_t *,
+ iv_offset);
+ memcpy(iv_ptr, test_vector->cipher_iv.data,
+ test_vector->cipher_iv.length);
/* cipher parameters */
if (options->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
@@ -221,6 +231,13 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
sym_op->cipher.data.offset = 0;
+ if (test_vector->auth_iv.length) {
+ /* Copy IV after the crypto operation and the cipher IV */
+ iv_ptr += test_vector->cipher_iv.length;
+ memcpy(iv_ptr, test_vector->auth_iv.data,
+ test_vector->auth_iv.length);
+ }
+
/* authentication parameters */
if (options->auth_op == RTE_CRYPTO_AUTH_OP_VERIFY) {
sym_op->auth.digest.data = test_vector->digest.data;
@@ -287,8 +304,8 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
sym_op->m_dst = bufs_out[i];
memcpy(rte_crypto_op_ctod_offset(ops[i], uint8_t *, iv_offset),
- test_vector->iv.data,
- test_vector->iv.length);
+ test_vector->cipher_iv.data,
+ test_vector->cipher_iv.length);
/* cipher parameters */
sym_op->cipher.data.length = options->test_buffer_size;
@@ -365,8 +382,8 @@ cperf_create_session(uint8_t dev_id,
test_vector->cipher_key.data;
cipher_xform.cipher.key.length =
test_vector->cipher_key.length;
- cipher_xform.cipher.iv.length = test_vector->iv.length;
-
+ cipher_xform.cipher.iv.length =
+ test_vector->cipher_iv.length;
} else {
cipher_xform.cipher.key.data = NULL;
cipher_xform.cipher.key.length = 0;
@@ -392,11 +409,14 @@ cperf_create_session(uint8_t dev_id,
auth_xform.auth.key.length =
test_vector->auth_key.length;
auth_xform.auth.key.data = test_vector->auth_key.data;
+ auth_xform.auth.iv.length =
+ test_vector->auth_iv.length;
} else {
auth_xform.auth.digest_length = 0;
auth_xform.auth.add_auth_data_length = 0;
auth_xform.auth.key.length = 0;
auth_xform.auth.key.data = NULL;
+ auth_xform.auth.iv.length = 0;
}
/* create crypto session */
sess = rte_cryptodev_sym_session_create(dev_id, &auth_xform);
@@ -422,7 +442,8 @@ cperf_create_session(uint8_t dev_id,
test_vector->cipher_key.data;
cipher_xform.cipher.key.length =
test_vector->cipher_key.length;
- cipher_xform.cipher.iv.length = test_vector->iv.length;
+ cipher_xform.cipher.iv.length =
+ test_vector->cipher_iv.length;
} else {
cipher_xform.cipher.key.data = NULL;
cipher_xform.cipher.key.length = 0;
@@ -447,17 +468,21 @@ cperf_create_session(uint8_t dev_id,
options->auth_algo == RTE_CRYPTO_AUTH_AES_GCM) {
auth_xform.auth.key.length = 0;
auth_xform.auth.key.data = NULL;
+ auth_xform.auth.iv.length = 0;
} else { /* auth options for others */
auth_xform.auth.key.length =
test_vector->auth_key.length;
auth_xform.auth.key.data =
test_vector->auth_key.data;
+ auth_xform.auth.iv.length =
+ test_vector->auth_iv.length;
}
} else {
auth_xform.auth.digest_length = 0;
auth_xform.auth.add_auth_data_length = 0;
auth_xform.auth.key.length = 0;
auth_xform.auth.key.data = NULL;
+ auth_xform.auth.iv.length = 0;
}
/* create crypto session for aes gcm */
diff --git a/app/test-crypto-perf/cperf_options.h b/app/test-crypto-perf/cperf_options.h
index b928c58..0e53c03 100644
--- a/app/test-crypto-perf/cperf_options.h
+++ b/app/test-crypto-perf/cperf_options.h
@@ -28,6 +28,7 @@
#define CPERF_AUTH_ALGO ("auth-algo")
#define CPERF_AUTH_OP ("auth-op")
#define CPERF_AUTH_KEY_SZ ("auth-key-sz")
+#define CPERF_AUTH_IV_SZ ("auth-iv-sz")
#define CPERF_AUTH_DIGEST_SZ ("auth-digest-sz")
#define CPERF_AUTH_AAD_SZ ("auth-aad-sz")
#define CPERF_CSV ("csv-friendly")
@@ -76,6 +77,7 @@ struct cperf_options {
enum rte_crypto_auth_operation auth_op;
uint16_t auth_key_sz;
+ uint16_t auth_iv_sz;
uint16_t auth_digest_sz;
uint16_t auth_aad_sz;
diff --git a/app/test-crypto-perf/cperf_options_parsing.c b/app/test-crypto-perf/cperf_options_parsing.c
index 63ba37c..70b6a60 100644
--- a/app/test-crypto-perf/cperf_options_parsing.c
+++ b/app/test-crypto-perf/cperf_options_parsing.c
@@ -549,6 +549,12 @@ parse_auth_digest_sz(struct cperf_options *opts, const char *arg)
}
static int
+parse_auth_iv_sz(struct cperf_options *opts, const char *arg)
+{
+ return parse_uint16_t(&opts->auth_iv_sz, arg);
+}
+
+static int
parse_auth_aad_sz(struct cperf_options *opts, const char *arg)
{
return parse_uint16_t(&opts->auth_aad_sz, arg);
@@ -651,6 +657,7 @@ cperf_options_default(struct cperf_options *opts)
opts->auth_key_sz = 64;
opts->auth_digest_sz = 12;
+ opts->auth_iv_sz = 0;
opts->auth_aad_sz = 0;
}
@@ -678,6 +685,7 @@ cperf_opts_parse_long(int opt_idx, struct cperf_options *opts)
{ CPERF_AUTH_ALGO, parse_auth_algo },
{ CPERF_AUTH_OP, parse_auth_op },
{ CPERF_AUTH_KEY_SZ, parse_auth_key_sz },
+ { CPERF_AUTH_IV_SZ, parse_auth_iv_sz },
{ CPERF_AUTH_DIGEST_SZ, parse_auth_digest_sz },
{ CPERF_AUTH_AAD_SZ, parse_auth_aad_sz },
{ CPERF_CSV, parse_csv_friendly},
@@ -914,6 +922,7 @@ cperf_options_dump(struct cperf_options *opts)
printf("# auth operation: %s\n",
rte_crypto_auth_operation_strings[opts->auth_op]);
printf("# auth key size: %u\n", opts->auth_key_sz);
+ printf("# auth iv size: %u\n", opts->auth_iv_sz);
printf("# auth digest size: %u\n", opts->auth_digest_sz);
printf("# auth aad size: %u\n", opts->auth_aad_sz);
printf("#\n");
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index d37083f..f828366 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -285,7 +285,9 @@ cperf_latency_test_constructor(uint8_t dev_id, uint16_t qp_id,
snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
dev_id);
- uint16_t priv_size = sizeof(struct priv_op_data) + test_vector->iv.length;
+ uint16_t priv_size = sizeof(struct priv_op_data) +
+ test_vector->cipher_iv.length +
+ test_vector->auth_iv.length;
ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz, 0, priv_size,
rte_socket_id());
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 4d2b3d3..1e3f3b3 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -266,7 +266,8 @@ cperf_throughput_test_constructor(uint8_t dev_id, uint16_t qp_id,
snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
dev_id);
- uint16_t priv_size = test_vector->iv.length;
+ uint16_t priv_size = test_vector->cipher_iv.length +
+ test_vector->auth_iv.length;
ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz, 0, priv_size,
diff --git a/app/test-crypto-perf/cperf_test_vector_parsing.c b/app/test-crypto-perf/cperf_test_vector_parsing.c
index 62d0c91..277ff1e 100644
--- a/app/test-crypto-perf/cperf_test_vector_parsing.c
+++ b/app/test-crypto-perf/cperf_test_vector_parsing.c
@@ -15,7 +15,8 @@ free_test_vector(struct cperf_test_vector *vector, struct cperf_options *opts)
if (vector == NULL || opts == NULL)
return -1;
- rte_free(vector->iv.data);
+ rte_free(vector->cipher_iv.data);
+ rte_free(vector->auth_iv.data);
rte_free(vector->aad.data);
rte_free(vector->digest.data);
@@ -84,15 +85,28 @@ show_test_vector(struct cperf_test_vector *test_vector)
printf("\n");
}
- if (test_vector->iv.data) {
- printf("\niv =\n");
- for (i = 0; i < test_vector->iv.length; ++i) {
+ if (test_vector->cipher_iv.data) {
+ printf("\ncipher_iv =\n");
+ for (i = 0; i < test_vector->cipher_iv.length; ++i) {
if ((i % wrap == 0) && (i != 0))
printf("\n");
- if (i == (uint32_t)(test_vector->iv.length - 1))
- printf("0x%02x", test_vector->iv.data[i]);
+ if (i == (uint32_t)(test_vector->cipher_iv.length - 1))
+ printf("0x%02x", test_vector->cipher_iv.data[i]);
else
- printf("0x%02x, ", test_vector->iv.data[i]);
+ printf("0x%02x, ", test_vector->cipher_iv.data[i]);
+ }
+ printf("\n");
+ }
+
+ if (test_vector->auth_iv.data) {
+ printf("\nauth_iv =\n");
+ for (i = 0; i < test_vector->auth_iv.length; ++i) {
+ if ((i % wrap == 0) && (i != 0))
+ printf("\n");
+ if (i == (uint32_t)(test_vector->auth_iv.length - 1))
+ printf("0x%02x", test_vector->auth_iv.data[i]);
+ else
+ printf("0x%02x, ", test_vector->auth_iv.data[i]);
}
printf("\n");
}
@@ -300,18 +314,32 @@ parse_entry(char *entry, struct cperf_test_vector *vector,
vector->auth_key.length = opts->auth_key_sz;
}
- } else if (strstr(key_token, "iv")) {
- rte_free(vector->iv.data);
- vector->iv.data = data;
+ } else if (strstr(key_token, "cipher_iv")) {
+ rte_free(vector->cipher_iv.data);
+ vector->cipher_iv.data = data;
if (tc_found)
- vector->iv.length = data_length;
+ vector->cipher_iv.length = data_length;
else {
if (opts->cipher_iv_sz > data_length) {
- printf("Global iv shorter than "
+ printf("Global cipher iv shorter than "
"cipher_iv_sz\n");
return -1;
}
- vector->iv.length = opts->cipher_iv_sz;
+ vector->cipher_iv.length = opts->cipher_iv_sz;
+ }
+
+ } else if (strstr(key_token, "auth_iv")) {
+ rte_free(vector->auth_iv.data);
+ vector->auth_iv.data = data;
+ if (tc_found)
+ vector->auth_iv.length = data_length;
+ else {
+ if (opts->auth_iv_sz > data_length) {
+ printf("Global auth iv shorter than "
+ "auth_iv_sz\n");
+ return -1;
+ }
+ vector->auth_iv.length = opts->auth_iv_sz;
}
} else if (strstr(key_token, "ciphertext")) {
diff --git a/app/test-crypto-perf/cperf_test_vectors.c b/app/test-crypto-perf/cperf_test_vectors.c
index 4a14fb3..6829b86 100644
--- a/app/test-crypto-perf/cperf_test_vectors.c
+++ b/app/test-crypto-perf/cperf_test_vectors.c
@@ -409,32 +409,34 @@ cperf_test_vector_get_dummy(struct cperf_options *options)
t_vec->cipher_key.length = 0;
t_vec->ciphertext.data = plaintext;
t_vec->cipher_key.data = NULL;
- t_vec->iv.data = NULL;
+ t_vec->cipher_iv.data = NULL;
} else {
t_vec->cipher_key.length = options->cipher_key_sz;
t_vec->ciphertext.data = ciphertext;
t_vec->cipher_key.data = cipher_key;
- t_vec->iv.data = rte_malloc(NULL, options->cipher_iv_sz,
+ t_vec->cipher_iv.data = rte_malloc(NULL, options->cipher_iv_sz,
16);
- if (t_vec->iv.data == NULL) {
+ if (t_vec->cipher_iv.data == NULL) {
rte_free(t_vec);
return NULL;
}
- memcpy(t_vec->iv.data, iv, options->cipher_iv_sz);
+ memcpy(t_vec->cipher_iv.data, iv, options->cipher_iv_sz);
}
t_vec->ciphertext.length = options->max_buffer_size;
+
/* Set IV parameters */
- t_vec->iv.data = rte_malloc(NULL, options->cipher_iv_sz,
- 16);
- if (options->cipher_iv_sz && t_vec->iv.data == NULL) {
+ t_vec->cipher_iv.data = rte_malloc(NULL, options->cipher_iv_sz,
+ 16);
+ if (options->cipher_iv_sz && t_vec->cipher_iv.data == NULL) {
rte_free(t_vec);
return NULL;
}
- memcpy(t_vec->iv.data, iv, options->cipher_iv_sz);
- t_vec->iv.length = options->cipher_iv_sz;
+ memcpy(t_vec->cipher_iv.data, iv, options->cipher_iv_sz);
+ t_vec->cipher_iv.length = options->cipher_iv_sz;
t_vec->data.cipher_offset = 0;
t_vec->data.cipher_length = options->max_buffer_size;
+
}
if (options->op_type == CPERF_AUTH_ONLY ||
@@ -476,7 +478,7 @@ cperf_test_vector_get_dummy(struct cperf_options *options)
options->auth_aad_sz, 16);
if (t_vec->aad.data == NULL) {
if (options->op_type != CPERF_AUTH_ONLY)
- rte_free(t_vec->iv.data);
+ rte_free(t_vec->cipher_iv.data);
rte_free(t_vec);
return NULL;
}
@@ -485,13 +487,26 @@ cperf_test_vector_get_dummy(struct cperf_options *options)
t_vec->aad.data = NULL;
}
+ /* Set IV parameters */
+ t_vec->auth_iv.data = rte_malloc(NULL, options->auth_iv_sz,
+ 16);
+ if (options->auth_iv_sz && t_vec->auth_iv.data == NULL) {
+ if (options->op_type != CPERF_AUTH_ONLY)
+ rte_free(t_vec->cipher_iv.data);
+ rte_free(t_vec);
+ return NULL;
+ }
+ memcpy(t_vec->auth_iv.data, iv, options->auth_iv_sz);
+ t_vec->auth_iv.length = options->auth_iv_sz;
+
t_vec->aad.phys_addr = rte_malloc_virt2phy(t_vec->aad.data);
t_vec->aad.length = options->auth_aad_sz;
t_vec->digest.data = rte_malloc(NULL, options->auth_digest_sz,
16);
if (t_vec->digest.data == NULL) {
if (options->op_type != CPERF_AUTH_ONLY)
- rte_free(t_vec->iv.data);
+ rte_free(t_vec->cipher_iv.data);
+ rte_free(t_vec->auth_iv.data);
rte_free(t_vec->aad.data);
rte_free(t_vec);
return NULL;
diff --git a/app/test-crypto-perf/cperf_test_vectors.h b/app/test-crypto-perf/cperf_test_vectors.h
index e64f116..7f9c4fa 100644
--- a/app/test-crypto-perf/cperf_test_vectors.h
+++ b/app/test-crypto-perf/cperf_test_vectors.h
@@ -53,9 +53,13 @@ struct cperf_test_vector {
struct {
uint8_t *data;
- phys_addr_t phys_addr;
uint16_t length;
- } iv;
+ } cipher_iv;
+
+ struct {
+ uint8_t *data;
+ uint16_t length;
+ } auth_iv;
struct {
uint8_t *data;
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index 1b58b1d..81057ff 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -270,7 +270,8 @@ cperf_verify_test_constructor(uint8_t dev_id, uint16_t qp_id,
snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
dev_id);
- uint16_t priv_size = test_vector->iv.length;
+ uint16_t priv_size = test_vector->cipher_iv.length +
+ test_vector->auth_iv.length;
ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz, 0, priv_size,
rte_socket_id());
diff --git a/app/test-crypto-perf/data/aes_cbc_128_sha.data b/app/test-crypto-perf/data/aes_cbc_128_sha.data
index 0b054f5..ff55590 100644
--- a/app/test-crypto-perf/data/aes_cbc_128_sha.data
+++ b/app/test-crypto-perf/data/aes_cbc_128_sha.data
@@ -282,7 +282,7 @@ auth_key =
0xe8, 0x38, 0x36, 0x58, 0x39, 0xd9, 0x9a, 0xc5, 0xe7, 0x3b, 0xc4, 0x47, 0xe2, 0xbd, 0x80, 0x73,
0xf8, 0xd1, 0x9a, 0x5e, 0x4b, 0xfb, 0x52, 0x6b, 0x50, 0xaf, 0x8b, 0xb7, 0xb5, 0x2c, 0x52, 0x84
-iv =
+cipher_iv =
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F
####################
diff --git a/app/test-crypto-perf/data/aes_cbc_192_sha.data b/app/test-crypto-perf/data/aes_cbc_192_sha.data
index 7bfe3da..3f85a00 100644
--- a/app/test-crypto-perf/data/aes_cbc_192_sha.data
+++ b/app/test-crypto-perf/data/aes_cbc_192_sha.data
@@ -283,7 +283,7 @@ auth_key =
0xe8, 0x38, 0x36, 0x58, 0x39, 0xd9, 0x9a, 0xc5, 0xe7, 0x3b, 0xc4, 0x47, 0xe2, 0xbd, 0x80, 0x73,
0xf8, 0xd1, 0x9a, 0x5e, 0x4b, 0xfb, 0x52, 0x6b, 0x50, 0xaf, 0x8b, 0xb7, 0xb5, 0x2c, 0x52, 0x84
-iv =
+cipher_iv =
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F
####################
diff --git a/app/test-crypto-perf/data/aes_cbc_256_sha.data b/app/test-crypto-perf/data/aes_cbc_256_sha.data
index 52dafb9..8da8161 100644
--- a/app/test-crypto-perf/data/aes_cbc_256_sha.data
+++ b/app/test-crypto-perf/data/aes_cbc_256_sha.data
@@ -283,7 +283,7 @@ auth_key =
0xe8, 0x38, 0x36, 0x58, 0x39, 0xd9, 0x9a, 0xc5, 0xe7, 0x3b, 0xc4, 0x47, 0xe2, 0xbd, 0x80, 0x73,
0xf8, 0xd1, 0x9a, 0x5e, 0x4b, 0xfb, 0x52, 0x6b, 0x50, 0xaf, 0x8b, 0xb7, 0xb5, 0x2c, 0x52, 0x84
-iv =
+cipher_iv =
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F
####################
diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c
index 9ec2a4b..cf4fa4f 100644
--- a/app/test-crypto-perf/main.c
+++ b/app/test-crypto-perf/main.c
@@ -138,7 +138,8 @@ cperf_verify_devices_capabilities(struct cperf_options *opts,
capability,
opts->auth_key_sz,
opts->auth_digest_sz,
- opts->auth_aad_sz);
+ opts->auth_aad_sz,
+ opts->auth_iv_sz);
if (ret != 0)
return ret;
}
@@ -185,9 +186,9 @@ cperf_check_test_vector(struct cperf_options *opts,
return -1;
if (test_vec->ciphertext.length < opts->max_buffer_size)
return -1;
- if (test_vec->iv.data == NULL)
+ if (test_vec->cipher_iv.data == NULL)
return -1;
- if (test_vec->iv.length != opts->cipher_iv_sz)
+ if (test_vec->cipher_iv.length != opts->cipher_iv_sz)
return -1;
if (test_vec->cipher_key.data == NULL)
return -1;
@@ -204,6 +205,11 @@ cperf_check_test_vector(struct cperf_options *opts,
return -1;
if (test_vec->auth_key.length != opts->auth_key_sz)
return -1;
+ if (test_vec->auth_iv.length != opts->auth_iv_sz)
+ return -1;
+ /* Auth IV is only required for some algorithms */
+ if (opts->auth_iv_sz && test_vec->auth_iv.data == NULL)
+ return -1;
if (test_vec->digest.data == NULL)
return -1;
if (test_vec->digest.length < opts->auth_digest_sz)
@@ -226,9 +232,9 @@ cperf_check_test_vector(struct cperf_options *opts,
return -1;
if (test_vec->ciphertext.length < opts->max_buffer_size)
return -1;
- if (test_vec->iv.data == NULL)
+ if (test_vec->cipher_iv.data == NULL)
return -1;
- if (test_vec->iv.length != opts->cipher_iv_sz)
+ if (test_vec->cipher_iv.length != opts->cipher_iv_sz)
return -1;
if (test_vec->cipher_key.data == NULL)
return -1;
@@ -240,6 +246,11 @@ cperf_check_test_vector(struct cperf_options *opts,
return -1;
if (test_vec->auth_key.length != opts->auth_key_sz)
return -1;
+ if (test_vec->auth_iv.length != opts->auth_iv_sz)
+ return -1;
+ /* Auth IV is only required for some algorithms */
+ if (opts->auth_iv_sz && test_vec->auth_iv.data == NULL)
+ return -1;
if (test_vec->digest.data == NULL)
return -1;
if (test_vec->digest.length < opts->auth_digest_sz)
@@ -254,6 +265,10 @@ cperf_check_test_vector(struct cperf_options *opts,
return -1;
if (test_vec->ciphertext.length < opts->max_buffer_size)
return -1;
+ if (test_vec->cipher_iv.data == NULL)
+ return -1;
+ if (test_vec->cipher_iv.length != opts->cipher_iv_sz)
+ return -1;
if (test_vec->aad.data == NULL)
return -1;
if (test_vec->aad.length != opts->auth_aad_sz)
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index 4e352f4..68890ff 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -245,7 +245,8 @@ algorithm AES_CBC.
.max = 12,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}
}
},
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index 4775bd2..eabf3dd 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -161,6 +161,8 @@ API Changes
offset from the start of the crypto operation.
* Moved length and offset of cipher IV from ``rte_crypto_sym_op`` to
``rte_crypto_cipher_xform``.
+ * Added authentication IV parameters (offset and length) in
+ ``rte_crypto_auth_xform``.
ABI Changes
diff --git a/doc/guides/sample_app_ug/l2_forward_crypto.rst b/doc/guides/sample_app_ug/l2_forward_crypto.rst
index 45d8a12..b9aa573 100644
--- a/doc/guides/sample_app_ug/l2_forward_crypto.rst
+++ b/doc/guides/sample_app_ug/l2_forward_crypto.rst
@@ -86,9 +86,10 @@ The application requires a number of command line options:
./build/l2fwd-crypto [EAL options] -- [-p PORTMASK] [-q NQ] [-s] [-T PERIOD] /
[--cdev_type HW/SW/ANY] [--chain HASH_CIPHER/CIPHER_HASH/CIPHER_ONLY/HASH_ONLY] /
[--cipher_algo ALGO] [--cipher_op ENCRYPT/DECRYPT] [--cipher_key KEY] /
- [--cipher_key_random_size SIZE] [--iv IV] [--iv_random_size SIZE] /
+ [--cipher_key_random_size SIZE] [--cipher_iv IV] [--cipher_iv_random_size SIZE] /
[--auth_algo ALGO] [--auth_op GENERATE/VERIFY] [--auth_key KEY] /
- [--auth_key_random_size SIZE] [--aad AAD] [--aad_random_size SIZE] /
+ [--auth_key_random_size SIZE] [--auth_iv IV] [--auth_iv_random_size SIZE] /
+ [--aad AAD] [--aad_random_size SIZE] /
[--digest_size SIZE] [--sessionless] [--cryptodev_mask MASK]
where,
@@ -127,11 +128,11 @@ where,
Note that if --cipher_key is used, this will be ignored.
-* iv: set the IV to be used. Bytes has to be separated with ":"
+* cipher_iv: set the cipher IV to be used. Bytes have to be separated with ":"
-* iv_random_size: set the size of the IV, which will be generated randomly.
+* cipher_iv_random_size: set the size of the cipher IV, which will be generated randomly.
- Note that if --iv is used, this will be ignored.
+ Note that if --cipher_iv is used, this will be ignored.
* auth_algo: select the authentication algorithm (default is sha1-hmac)
@@ -147,6 +148,12 @@ where,
Note that if --auth_key is used, this will be ignored.
+* auth_iv: set the auth IV to be used. Bytes have to be separated with ":"
+
+* auth_iv_random_size: set the size of the auth IV, which will be generated randomly.
+
+ Note that if --auth_iv is used, this will be ignored.
+
* aad: set the AAD to be used. Bytes has to be separated with ":"
* aad_random_size: set the size of the AAD, which will be generated randomly.
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index 1acde76..c0accfc 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -290,6 +290,10 @@ The following are the application command-line options:
Set the size of authentication key.
+* ``--auth-iv-sz <n>``
+
+ Set the size of the authentication IV.
+
* ``--auth-digest-sz <n>``
Set the size of authentication digest.
@@ -345,9 +349,13 @@ a string of bytes in C byte array format::
Key used in auth operation.
-* ``iv``
+* ``cipher_iv``
+
+ Cipher Initial Vector.
+
+* ``auth_iv``
- Initial vector.
+ Auth Initial Vector.
* ``aad``
@@ -412,7 +420,7 @@ Test vector file for cipher algorithm aes cbc 256 with authorization sha::
0xf5, 0x0c, 0xe7, 0xa2, 0xa6, 0x23, 0xd5, 0x3d, 0x95, 0xd8, 0xcd, 0x86, 0x79, 0xf5, 0x01, 0x47,
0x4f, 0xf9, 0x1d, 0x9d, 0x36, 0xf7, 0x68, 0x1a, 0x64, 0x44, 0x58, 0x5d, 0xe5, 0x81, 0x15, 0x2a,
0x41, 0xe4, 0x0e, 0xaa, 0x1f, 0x04, 0x21, 0xff, 0x2c, 0xf3, 0x73, 0x2b, 0x48, 0x1e, 0xd2, 0xf7
- iv =
+ cipher_iv =
0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F
# Section sha 1 hmac buff 32
[sha1_hmac_buff_32]
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
index 7b68a20..542e6c4 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
@@ -60,7 +60,8 @@ static const struct rte_cryptodev_capabilities aesni_gcm_pmd_capabilities[] = {
.min = 0,
.max = 65535,
.increment = 1
- }
+ },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -85,7 +86,8 @@ static const struct rte_cryptodev_capabilities aesni_gcm_pmd_capabilities[] = {
.min = 0,
.max = 65535,
.increment = 1
- }
+ },
+ .iv_size = { 0 }
}, }
}, }
},
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
index d1bc28e..780b88b 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -57,7 +57,8 @@ static const struct rte_cryptodev_capabilities aesni_mb_pmd_capabilities[] = {
.max = 12,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -78,7 +79,8 @@ static const struct rte_cryptodev_capabilities aesni_mb_pmd_capabilities[] = {
.max = 12,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -99,7 +101,8 @@ static const struct rte_cryptodev_capabilities aesni_mb_pmd_capabilities[] = {
.max = 14,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -120,7 +123,8 @@ static const struct rte_cryptodev_capabilities aesni_mb_pmd_capabilities[] = {
.max = 16,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -141,7 +145,8 @@ static const struct rte_cryptodev_capabilities aesni_mb_pmd_capabilities[] = {
.max = 24,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -162,7 +167,8 @@ static const struct rte_cryptodev_capabilities aesni_mb_pmd_capabilities[] = {
.max = 32,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -183,7 +189,8 @@ static const struct rte_cryptodev_capabilities aesni_mb_pmd_capabilities[] = {
.max = 12,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
diff --git a/drivers/crypto/armv8/rte_armv8_pmd_ops.c b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
index 4d9ccbf..78ed770 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd_ops.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
@@ -59,7 +59,8 @@ static const struct rte_cryptodev_capabilities
.max = 20,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -80,7 +81,8 @@ static const struct rte_cryptodev_capabilities
.max = 32,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index d152161..ff3be70 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -217,7 +217,8 @@ static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
.max = 16,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -238,7 +239,8 @@ static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
.max = 20,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -259,7 +261,8 @@ static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
.max = 28,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -280,7 +283,8 @@ static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
.max = 32,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -301,7 +305,8 @@ static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
.max = 48,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -322,7 +327,8 @@ static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
.max = 64,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
index 62ebdbd..8f1a116 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
@@ -60,7 +60,8 @@ static const struct rte_cryptodev_capabilities kasumi_pmd_capabilities[] = {
.min = 8,
.max = 8,
.increment = 0
- }
+ },
+ .iv_size = { 0 }
}, }
}, }
},
diff --git a/drivers/crypto/null/null_crypto_pmd_ops.c b/drivers/crypto/null/null_crypto_pmd_ops.c
index 5f74f0c..f8ad8e4 100644
--- a/drivers/crypto/null/null_crypto_pmd_ops.c
+++ b/drivers/crypto/null/null_crypto_pmd_ops.c
@@ -56,7 +56,8 @@ static const struct rte_cryptodev_capabilities null_crypto_pmd_capabilities[] =
.max = 0,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, },
}, },
},
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index 22a6873..3026dbd 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -57,7 +57,8 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.max = 16,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -78,7 +79,8 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.max = 16,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -99,7 +101,8 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.max = 20,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -120,7 +123,8 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.max = 20,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -141,7 +145,8 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.max = 28,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -162,7 +167,8 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.max = 28,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -183,31 +189,33 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.max = 32,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
{ /* SHA256 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA256,
- .block_size = 64,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 32,
- .max = 32,
- .increment = 0
- },
- .aad_size = { 0 }
- }, }
+ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ {.sym = {
+ .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+ {.auth = {
+ .algo = RTE_CRYPTO_AUTH_SHA256,
+ .block_size = 64,
+ .key_size = {
+ .min = 0,
+ .max = 0,
+ .increment = 0
+ },
+ .digest_size = {
+ .min = 32,
+ .max = 32,
+ .increment = 0
+ },
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
- },
+ }, }
+ },
{ /* SHA384 HMAC */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
{.sym = {
@@ -225,7 +233,8 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.max = 48,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -246,7 +255,8 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.max = 48,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -267,7 +277,8 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.max = 64,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -288,7 +299,8 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.max = 64,
.increment = 0
},
- .aad_size = { 0 }
+ .aad_size = { 0 },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -353,7 +365,8 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.min = 0,
.max = 65535,
.increment = 1
- }
+ },
+ .iv_size = { 0 }
}, }
}, }
},
@@ -398,7 +411,8 @@ static const struct rte_cryptodev_capabilities openssl_pmd_capabilities[] = {
.min = 8,
.max = 65532,
.increment = 4
- }
+ },
+ .iv_size = { 0 }
}, }
}, }
},
diff --git a/drivers/crypto/qat/qat_crypto_capabilities.h b/drivers/crypto/qat/qat_crypto_capabilities.h
index 1294f24..4bc2c97 100644
--- a/drivers/crypto/qat/qat_crypto_capabilities.h
+++ b/drivers/crypto/qat/qat_crypto_capabilities.h
@@ -52,7 +52,8 @@
.max = 20, \
.increment = 0 \
}, \
- .aad_size = { 0 } \
+ .aad_size = { 0 }, \
+ .iv_size = { 0 } \
}, } \
}, } \
}, \
@@ -73,7 +74,8 @@
.max = 28, \
.increment = 0 \
}, \
- .aad_size = { 0 } \
+ .aad_size = { 0 }, \
+ .iv_size = { 0 } \
}, } \
}, } \
}, \
@@ -94,7 +96,8 @@
.max = 32, \
.increment = 0 \
}, \
- .aad_size = { 0 } \
+ .aad_size = { 0 }, \
+ .iv_size = { 0 } \
}, } \
}, } \
}, \
@@ -114,8 +117,9 @@
.min = 48, \
.max = 48, \
.increment = 0 \
- }, \
- .aad_size = { 0 } \
+ }, \
+ .aad_size = { 0 }, \
+ .iv_size = { 0 } \
}, } \
}, } \
}, \
@@ -136,7 +140,8 @@
.max = 64, \
.increment = 0 \
}, \
- .aad_size = { 0 } \
+ .aad_size = { 0 }, \
+ .iv_size = { 0 } \
}, } \
}, } \
}, \
@@ -157,7 +162,8 @@
.max = 16, \
.increment = 0 \
}, \
- .aad_size = { 0 } \
+ .aad_size = { 0 }, \
+ .iv_size = { 0 } \
}, } \
}, } \
}, \
@@ -178,7 +184,8 @@
.max = 16, \
.increment = 0 \
}, \
- .aad_size = { 0 } \
+ .aad_size = { 0 }, \
+ .iv_size = { 0 } \
}, } \
}, } \
}, \
@@ -203,7 +210,8 @@
.min = 0, \
.max = 240, \
.increment = 1 \
- } \
+ }, \
+ .iv_size = { 0 } \
}, } \
}, } \
}, \
@@ -228,7 +236,8 @@
.min = 1, \
.max = 65535, \
.increment = 1 \
- } \
+ }, \
+ .iv_size = { 0 } \
}, } \
}, } \
}, \
@@ -253,7 +262,8 @@
.min = 16, \
.max = 16, \
.increment = 0 \
- } \
+ }, \
+ .iv_size = { 0 } \
}, } \
}, } \
}, \
@@ -374,7 +384,8 @@
.max = 0, \
.increment = 0 \
}, \
- .aad_size = { 0 } \
+ .aad_size = { 0 }, \
+ .iv_size = { 0 } \
}, }, \
}, }, \
}, \
@@ -439,7 +450,8 @@
.min = 8, \
.max = 8, \
.increment = 0 \
- } \
+ }, \
+ .iv_size = { 0 } \
}, } \
}, } \
}, \
@@ -566,7 +578,8 @@
.min = 16, \
.max = 16, \
.increment = 0 \
- } \
+ }, \
+ .iv_size = { 0 } \
}, } \
}, } \
}
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
index 7ce96be..68ede97 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
@@ -60,7 +60,8 @@ static const struct rte_cryptodev_capabilities snow3g_pmd_capabilities[] = {
.min = 16,
.max = 16,
.increment = 0
- }
+ },
+ .iv_size = { 0 },
}, }
}, }
},
diff --git a/drivers/crypto/zuc/rte_zuc_pmd_ops.c b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
index c24b9bd..02c3c4a 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd_ops.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
@@ -60,7 +60,8 @@ static const struct rte_cryptodev_capabilities zuc_pmd_capabilities[] = {
.min = 16,
.max = 16,
.increment = 0
- }
+ },
+ .iv_size = { 0 }
}, }
}, }
},
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 9f16806..ba5aef7 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -160,14 +160,18 @@ struct l2fwd_crypto_options {
unsigned ckey_param;
int ckey_random_size;
- struct l2fwd_iv iv;
- unsigned int iv_param;
- int iv_random_size;
+ struct l2fwd_iv cipher_iv;
+ unsigned int cipher_iv_param;
+ int cipher_iv_random_size;
struct rte_crypto_sym_xform auth_xform;
uint8_t akey_param;
int akey_random_size;
+ struct l2fwd_iv auth_iv;
+ unsigned int auth_iv_param;
+ int auth_iv_random_size;
+
struct l2fwd_key aad;
unsigned aad_param;
int aad_random_size;
@@ -188,7 +192,8 @@ struct l2fwd_crypto_params {
unsigned digest_length;
unsigned block_size;
- struct l2fwd_iv iv;
+ struct l2fwd_iv cipher_iv;
+ struct l2fwd_iv auth_iv;
struct l2fwd_key aad;
struct rte_cryptodev_sym_session *session;
@@ -453,6 +458,18 @@ l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
rte_crypto_op_attach_sym_session(op, cparams->session);
if (cparams->do_hash) {
+ if (cparams->auth_iv.length) {
+ uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op,
+ uint8_t *,
+ IV_OFFSET +
+ cparams->cipher_iv.length);
+ /*
+ * Copy IV at the end of the crypto operation,
+ * after the cipher IV, if added
+ */
+ rte_memcpy(iv_ptr, cparams->auth_iv.data,
+ cparams->auth_iv.length);
+ }
if (!cparams->hash_verify) {
/* Append space for digest to end of packet */
op->sym->auth.digest.data = (uint8_t *)rte_pktmbuf_append(m,
@@ -492,7 +509,8 @@ l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
IV_OFFSET);
/* Copy IV at the end of the crypto operation */
- rte_memcpy(iv_ptr, cparams->iv.data, cparams->iv.length);
+ rte_memcpy(iv_ptr, cparams->cipher_iv.data,
+ cparams->cipher_iv.length);
/* For wireless algorithms, offset/length must be in bits */
if (cparams->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
@@ -675,6 +693,18 @@ l2fwd_main_loop(struct l2fwd_crypto_options *options)
port_cparams[i].block_size = options->block_size;
if (port_cparams[i].do_hash) {
+ port_cparams[i].auth_iv.data = options->auth_iv.data;
+ port_cparams[i].auth_iv.length = options->auth_iv.length;
+ if (!options->auth_iv_param)
+ generate_random_key(port_cparams[i].auth_iv.data,
+ port_cparams[i].auth_iv.length);
+ /* Set IV parameters */
+ if (options->auth_iv.length) {
+ options->auth_xform.auth.iv.offset =
+ IV_OFFSET + options->cipher_iv.length;
+ options->auth_xform.auth.iv.length =
+ options->auth_iv.length;
+ }
port_cparams[i].digest_length =
options->auth_xform.auth.digest_length;
if (options->auth_xform.auth.add_auth_data_length) {
@@ -698,16 +728,17 @@ l2fwd_main_loop(struct l2fwd_crypto_options *options)
}
if (port_cparams[i].do_cipher) {
- port_cparams[i].iv.data = options->iv.data;
- port_cparams[i].iv.length = options->iv.length;
- if (!options->iv_param)
- generate_random_key(port_cparams[i].iv.data,
- port_cparams[i].iv.length);
+ port_cparams[i].cipher_iv.data = options->cipher_iv.data;
+ port_cparams[i].cipher_iv.length = options->cipher_iv.length;
+ if (!options->cipher_iv_param)
+ generate_random_key(port_cparams[i].cipher_iv.data,
+ port_cparams[i].cipher_iv.length);
port_cparams[i].cipher_algo = options->cipher_xform.cipher.algo;
/* Set IV parameters */
options->cipher_xform.cipher.iv.offset = IV_OFFSET;
- options->cipher_xform.cipher.iv.length = options->iv.length;
+ options->cipher_xform.cipher.iv.length =
+ options->cipher_iv.length;
}
port_cparams[i].session = initialize_crypto_session(options,
@@ -861,13 +892,15 @@ l2fwd_crypto_usage(const char *prgname)
" --cipher_op ENCRYPT / DECRYPT\n"
" --cipher_key KEY (bytes separated with \":\")\n"
" --cipher_key_random_size SIZE: size of cipher key when generated randomly\n"
- " --iv IV (bytes separated with \":\")\n"
- " --iv_random_size SIZE: size of IV when generated randomly\n"
+ " --cipher_iv IV (bytes separated with \":\")\n"
+ " --cipher_iv_random_size SIZE: size of cipher IV when generated randomly\n"
" --auth_algo ALGO\n"
" --auth_op GENERATE / VERIFY\n"
" --auth_key KEY (bytes separated with \":\")\n"
" --auth_key_random_size SIZE: size of auth key when generated randomly\n"
+ " --auth_iv IV (bytes separated with \":\")\n"
+ " --auth_iv_random_size SIZE: size of auth IV when generated randomly\n"
" --aad AAD (bytes separated with \":\")\n"
" --aad_random_size SIZE: size of AAD when generated randomly\n"
" --digest_size SIZE: size of digest to be generated/verified\n"
@@ -1078,18 +1111,18 @@ l2fwd_crypto_parse_args_long_options(struct l2fwd_crypto_options *options,
else if (strcmp(lgopts[option_index].name, "cipher_key_random_size") == 0)
return parse_size(&options->ckey_random_size, optarg);
- else if (strcmp(lgopts[option_index].name, "iv") == 0) {
- options->iv_param = 1;
- options->iv.length =
- parse_key(options->iv.data, optarg);
- if (options->iv.length > 0)
+ else if (strcmp(lgopts[option_index].name, "cipher_iv") == 0) {
+ options->cipher_iv_param = 1;
+ options->cipher_iv.length =
+ parse_key(options->cipher_iv.data, optarg);
+ if (options->cipher_iv.length > 0)
return 0;
else
return -1;
}
- else if (strcmp(lgopts[option_index].name, "iv_random_size") == 0)
- return parse_size(&options->iv_random_size, optarg);
+ else if (strcmp(lgopts[option_index].name, "cipher_iv_random_size") == 0)
+ return parse_size(&options->cipher_iv_random_size, optarg);
/* Authentication options */
else if (strcmp(lgopts[option_index].name, "auth_algo") == 0) {
@@ -1115,6 +1148,20 @@ l2fwd_crypto_parse_args_long_options(struct l2fwd_crypto_options *options,
return parse_size(&options->akey_random_size, optarg);
}
+
+ else if (strcmp(lgopts[option_index].name, "auth_iv") == 0) {
+ options->auth_iv_param = 1;
+ options->auth_iv.length =
+ parse_key(options->auth_iv.data, optarg);
+ if (options->auth_iv.length > 0)
+ return 0;
+ else
+ return -1;
+ }
+
+ else if (strcmp(lgopts[option_index].name, "auth_iv_random_size") == 0)
+ return parse_size(&options->auth_iv_random_size, optarg);
+
else if (strcmp(lgopts[option_index].name, "aad") == 0) {
options->aad_param = 1;
options->aad.length =
@@ -1233,9 +1280,9 @@ l2fwd_crypto_default_options(struct l2fwd_crypto_options *options)
options->ckey_param = 0;
options->ckey_random_size = -1;
options->cipher_xform.cipher.key.length = 0;
- options->iv_param = 0;
- options->iv_random_size = -1;
- options->iv.length = 0;
+ options->cipher_iv_param = 0;
+ options->cipher_iv_random_size = -1;
+ options->cipher_iv.length = 0;
options->cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
options->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
@@ -1246,6 +1293,9 @@ l2fwd_crypto_default_options(struct l2fwd_crypto_options *options)
options->akey_param = 0;
options->akey_random_size = -1;
options->auth_xform.auth.key.length = 0;
+ options->auth_iv_param = 0;
+ options->auth_iv_random_size = -1;
+ options->auth_iv.length = 0;
options->aad_param = 0;
options->aad_random_size = -1;
options->aad.length = 0;
@@ -1267,7 +1317,7 @@ display_cipher_info(struct l2fwd_crypto_options *options)
rte_hexdump(stdout, "Cipher key:",
options->cipher_xform.cipher.key.data,
options->cipher_xform.cipher.key.length);
- rte_hexdump(stdout, "IV:", options->iv.data, options->iv.length);
+ rte_hexdump(stdout, "IV:", options->cipher_iv.data, options->cipher_iv.length);
}
static void
@@ -1279,6 +1329,7 @@ display_auth_info(struct l2fwd_crypto_options *options)
rte_hexdump(stdout, "Auth key:",
options->auth_xform.auth.key.data,
options->auth_xform.auth.key.length);
+ rte_hexdump(stdout, "IV:", options->auth_iv.data, options->auth_iv.length);
rte_hexdump(stdout, "AAD:", options->aad.data, options->aad.length);
}
@@ -1316,8 +1367,11 @@ l2fwd_crypto_options_print(struct l2fwd_crypto_options *options)
if (options->akey_param && (options->akey_random_size != -1))
printf("Auth key already parsed, ignoring size of random key\n");
- if (options->iv_param && (options->iv_random_size != -1))
- printf("IV already parsed, ignoring size of random IV\n");
+ if (options->cipher_iv_param && (options->cipher_iv_random_size != -1))
+ printf("Cipher IV already parsed, ignoring size of random IV\n");
+
+ if (options->auth_iv_param && (options->auth_iv_random_size != -1))
+ printf("Auth IV already parsed, ignoring size of random IV\n");
if (options->aad_param && (options->aad_random_size != -1))
printf("AAD already parsed, ignoring size of random AAD\n");
@@ -1365,14 +1419,16 @@ l2fwd_crypto_parse_args(struct l2fwd_crypto_options *options,
{ "cipher_op", required_argument, 0, 0 },
{ "cipher_key", required_argument, 0, 0 },
{ "cipher_key_random_size", required_argument, 0, 0 },
+ { "cipher_iv", required_argument, 0, 0 },
+ { "cipher_iv_random_size", required_argument, 0, 0 },
{ "auth_algo", required_argument, 0, 0 },
{ "auth_op", required_argument, 0, 0 },
{ "auth_key", required_argument, 0, 0 },
{ "auth_key_random_size", required_argument, 0, 0 },
+ { "auth_iv", required_argument, 0, 0 },
+ { "auth_iv_random_size", required_argument, 0, 0 },
- { "iv", required_argument, 0, 0 },
- { "iv_random_size", required_argument, 0, 0 },
{ "aad", required_argument, 0, 0 },
{ "aad_random_size", required_argument, 0, 0 },
{ "digest_size", required_argument, 0, 0 },
@@ -1660,8 +1716,10 @@ initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports,
options->block_size = cap->sym.cipher.block_size;
- check_iv_param(&cap->sym.cipher.iv_size, options->iv_param,
- options->iv_random_size, &options->iv.length);
+ check_iv_param(&cap->sym.cipher.iv_size,
+ options->cipher_iv_param,
+ options->cipher_iv_random_size,
+ &options->cipher_iv.length);
/*
* Check if length of provided cipher key is supported
@@ -1731,6 +1789,10 @@ initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports,
continue;
}
+ check_iv_param(&cap->sym.auth.iv_size,
+ options->auth_iv_param,
+ options->auth_iv_random_size,
+ &options->auth_iv.length);
/*
* Check if length of provided AAD is supported
* by the algorithm chosen.
@@ -1972,9 +2034,13 @@ reserve_key_memory(struct l2fwd_crypto_options *options)
if (options->auth_xform.auth.key.data == NULL)
rte_exit(EXIT_FAILURE, "Failed to allocate memory for auth key");
- options->iv.data = rte_malloc("iv", MAX_KEY_SIZE, 0);
- if (options->iv.data == NULL)
- rte_exit(EXIT_FAILURE, "Failed to allocate memory for IV");
+ options->cipher_iv.data = rte_malloc("cipher iv", MAX_KEY_SIZE, 0);
+ if (options->cipher_iv.data == NULL)
+ rte_exit(EXIT_FAILURE, "Failed to allocate memory for cipher IV");
+
+ options->auth_iv.data = rte_malloc("auth iv", MAX_KEY_SIZE, 0);
+ if (options->auth_iv.data == NULL)
+ rte_exit(EXIT_FAILURE, "Failed to allocate memory for auth IV");
options->aad.data = rte_malloc("aad", MAX_KEY_SIZE, 0);
if (options->aad.data == NULL)
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index c1a1e27..0e84bad 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -393,6 +393,30 @@ struct rte_crypto_auth_xform {
* of the AAD data is specified in additional authentication data
* length field of the rte_crypto_sym_op_data structure
*/
+
+ struct {
+ uint16_t offset;
+ /**< Starting point for Initialisation Vector or Counter,
+ * specified as number of bytes from start of crypto
+ * operation (rte_crypto_op).
+ *
+ * - For KASUMI in F9 mode, SNOW 3G in UIA2 mode,
+ * for ZUC in EIA3 mode and for AES-GMAC, this is the
+ * authentication Initialisation Vector (IV) value.
+ *
+ *
+ * For optimum performance, the data pointed to SHOULD
+ * be 8-byte aligned.
+ */
+ uint16_t length;
+ /**< Length of valid IV data.
+ *
+ * - For KASUMI in F9 mode, SNOW3G in UIA2 mode, for
+ * ZUC in EIA3 mode and for AES-GMAC, this is the length
+ * of the IV.
+ *
+ */
+ } iv; /**< Initialisation vector parameters */
};
/** Crypto transformation types */
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index a466ed7..5aa177f 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -272,7 +272,8 @@ rte_cryptodev_sym_capability_check_cipher(
int
rte_cryptodev_sym_capability_check_auth(
const struct rte_cryptodev_symmetric_capability *capability,
- uint16_t key_size, uint16_t digest_size, uint16_t aad_size)
+ uint16_t key_size, uint16_t digest_size, uint16_t aad_size,
+ uint16_t iv_size)
{
if (param_range_check(key_size, capability->auth.key_size))
return -1;
@@ -283,6 +284,9 @@ rte_cryptodev_sym_capability_check_auth(
if (param_range_check(aad_size, capability->auth.aad_size))
return -1;
+ if (param_range_check(iv_size, capability->auth.iv_size))
+ return -1;
+
return 0;
}
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 91f3375..75b423a 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -184,6 +184,8 @@ struct rte_cryptodev_symmetric_capability {
/**< digest size range */
struct rte_crypto_param_range aad_size;
/**< Additional authentication data size range */
+ struct rte_crypto_param_range iv_size;
+ /**< Initialisation vector data size range */
} auth;
/**< Symmetric Authentication transform capabilities */
struct {
@@ -260,6 +262,7 @@ rte_cryptodev_sym_capability_check_cipher(
* @param key_size Auth key size.
* @param digest_size Auth digest size.
* @param aad_size Auth aad size.
+ * @param iv_size Auth initial vector size.
*
* @return
* - Return 0 if the parameters are in range of the capability.
@@ -268,7 +271,8 @@ rte_cryptodev_sym_capability_check_cipher(
int
rte_cryptodev_sym_capability_check_auth(
const struct rte_cryptodev_symmetric_capability *capability,
- uint16_t key_size, uint16_t digest_size, uint16_t aad_size);
+ uint16_t key_size, uint16_t digest_size, uint16_t aad_size,
+ uint16_t iv_size);
/**
* Provide the cipher algorithm enum, given an algorithm string
--
2.9.4
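Callers of rte_cryptodev_sym_capability_check_auth() gain a trailing
iv_size argument with this patch; a minimal sketch, using illustrative
SNOW 3G UIA2-style sizes (16-byte key, 4-byte digest, no AAD,
16-byte IV):

#include <rte_cryptodev.h>

static int
auth_params_supported(const struct rte_cryptodev_symmetric_capability *cap)
{
	/* All four sizes below are illustrative, not mandated values */
	return rte_cryptodev_sym_capability_check_auth(cap,
			16 /* key_size */, 4 /* digest_size */,
			0 /* aad_size */, 16 /* iv_size */) == 0;
}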
* [dpdk-dev] [PATCH v2 14/27] cryptodev: move IV parameters to crypto session
From: Pablo de Lara @ 2017-06-26 10:22 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal, hemant.agrawal
Cc: dev, Pablo de Lara
Since the IV parameters (offset and length) should not
change between operations in the same session, these parameters
are moved to the crypto transform structure, so that they
are stored in the session.
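A minimal sketch of the cipher-side usage after this change
(iv_offset, iv_data and iv_len are hypothetical): the offset and
length move into the transform, and only the IV bytes are copied
per operation.

#include <string.h>
#include <rte_crypto.h>

static void
set_cipher_iv(struct rte_crypto_sym_xform *cipher_xform,
	      struct rte_crypto_op *op, uint16_t iv_offset,
	      const uint8_t *iv_data, uint16_t iv_len)
{
	/* At session creation: the per-op cipher.iv fields are gone */
	cipher_xform->cipher.iv.offset = iv_offset;
	cipher_xform->cipher.iv.length = iv_len;

	/* Per operation: write the IV bytes into the op's private area */
	memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, iv_offset),
	       iv_data, iv_len);
}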
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_ops.c | 22 ++--
app/test-crypto-perf/cperf_ops.h | 3 +-
app/test-crypto-perf/cperf_test_latency.c | 7 +-
app/test-crypto-perf/cperf_test_throughput.c | 6 +-
app/test-crypto-perf/cperf_test_vectors.c | 9 ++
app/test-crypto-perf/cperf_test_verify.c | 6 +-
doc/guides/prog_guide/cryptodev_lib.rst | 5 -
doc/guides/rel_notes/release_17_08.rst | 2 +
drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 22 ++--
drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h | 5 +
drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c | 11 +-
drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h | 5 +
drivers/crypto/armv8/rte_armv8_pmd.c | 12 +-
drivers/crypto/armv8/rte_armv8_pmd_private.h | 7 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 39 ++++--
drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 8 +-
drivers/crypto/kasumi/rte_kasumi_pmd.c | 25 ++--
drivers/crypto/kasumi/rte_kasumi_pmd_private.h | 1 +
drivers/crypto/null/null_crypto_pmd_ops.c | 6 +-
drivers/crypto/openssl/rte_openssl_pmd.c | 17 ++-
drivers/crypto/openssl/rte_openssl_pmd_private.h | 5 +
drivers/crypto/qat/qat_adf/qat_algs.h | 4 +
drivers/crypto/qat/qat_crypto.c | 44 +++----
drivers/crypto/snow3g/rte_snow3g_pmd.c | 25 ++--
drivers/crypto/snow3g/rte_snow3g_pmd_private.h | 1 +
drivers/crypto/zuc/rte_zuc_pmd.c | 16 +--
drivers/crypto/zuc/rte_zuc_pmd_ops.c | 2 +-
drivers/crypto/zuc/rte_zuc_pmd_private.h | 1 +
examples/ipsec-secgw/esp.c | 9 --
examples/ipsec-secgw/ipsec.h | 3 +
examples/ipsec-secgw/sa.c | 20 +++
examples/l2fwd-crypto/main.c | 90 ++++++++------
lib/librte_cryptodev/rte_crypto_sym.h | 98 +++++++--------
test/test/test_cryptodev.c | 134 ++++++++++++---------
test/test/test_cryptodev_blockcipher.c | 4 +-
test/test/test_cryptodev_perf.c | 61 +++++-----
36 files changed, 420 insertions(+), 315 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 1e151a9..d6d9f14 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -105,13 +105,11 @@ cperf_set_ops_cipher(struct rte_crypto_op **ops,
sym_op->m_src = bufs_in[i];
sym_op->m_dst = bufs_out[i];
- /* cipher parameters */
- sym_op->cipher.iv.offset = iv_offset;
- sym_op->cipher.iv.length = test_vector->iv.length;
memcpy(rte_crypto_op_ctod_offset(ops[i], uint8_t *, iv_offset),
test_vector->iv.data,
test_vector->iv.length);
+ /* cipher parameters */
if (options->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
options->cipher_algo == RTE_CRYPTO_CIPHER_KASUMI_F8 ||
options->cipher_algo == RTE_CRYPTO_CIPHER_ZUC_EEA3)
@@ -209,13 +207,11 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
sym_op->m_src = bufs_in[i];
sym_op->m_dst = bufs_out[i];
- /* cipher parameters */
- sym_op->cipher.iv.offset = iv_offset;
- sym_op->cipher.iv.length = test_vector->iv.length;
memcpy(rte_crypto_op_ctod_offset(ops[i], uint8_t *, iv_offset),
test_vector->iv.data,
test_vector->iv.length);
+ /* cipher parameters */
if (options->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
options->cipher_algo == RTE_CRYPTO_CIPHER_KASUMI_F8 ||
options->cipher_algo == RTE_CRYPTO_CIPHER_ZUC_EEA3)
@@ -290,13 +286,11 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
sym_op->m_src = bufs_in[i];
sym_op->m_dst = bufs_out[i];
- /* cipher parameters */
- sym_op->cipher.iv.offset = iv_offset;
- sym_op->cipher.iv.length = test_vector->iv.length;
memcpy(rte_crypto_op_ctod_offset(ops[i], uint8_t *, iv_offset),
test_vector->iv.data,
test_vector->iv.length);
+ /* cipher parameters */
sym_op->cipher.data.length = options->test_buffer_size;
sym_op->cipher.data.offset =
RTE_ALIGN_CEIL(options->auth_aad_sz, 16);
@@ -348,7 +342,8 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
static struct rte_cryptodev_sym_session *
cperf_create_session(uint8_t dev_id,
const struct cperf_options *options,
- const struct cperf_test_vector *test_vector)
+ const struct cperf_test_vector *test_vector,
+ uint16_t iv_offset)
{
struct rte_crypto_sym_xform cipher_xform;
struct rte_crypto_sym_xform auth_xform;
@@ -362,6 +357,7 @@ cperf_create_session(uint8_t dev_id,
cipher_xform.next = NULL;
cipher_xform.cipher.algo = options->cipher_algo;
cipher_xform.cipher.op = options->cipher_op;
+ cipher_xform.cipher.iv.offset = iv_offset;
/* cipher different than null */
if (options->cipher_algo != RTE_CRYPTO_CIPHER_NULL) {
@@ -369,9 +365,12 @@ cperf_create_session(uint8_t dev_id,
test_vector->cipher_key.data;
cipher_xform.cipher.key.length =
test_vector->cipher_key.length;
+ cipher_xform.cipher.iv.length = test_vector->iv.length;
+
} else {
cipher_xform.cipher.key.data = NULL;
cipher_xform.cipher.key.length = 0;
+ cipher_xform.cipher.iv.length = 0;
}
/* create crypto session */
sess = rte_cryptodev_sym_session_create(dev_id, &cipher_xform);
@@ -415,6 +414,7 @@ cperf_create_session(uint8_t dev_id,
cipher_xform.next = NULL;
cipher_xform.cipher.algo = options->cipher_algo;
cipher_xform.cipher.op = options->cipher_op;
+ cipher_xform.cipher.iv.offset = iv_offset;
/* cipher different than null */
if (options->cipher_algo != RTE_CRYPTO_CIPHER_NULL) {
@@ -422,9 +422,11 @@ cperf_create_session(uint8_t dev_id,
test_vector->cipher_key.data;
cipher_xform.cipher.key.length =
test_vector->cipher_key.length;
+ cipher_xform.cipher.iv.length = test_vector->iv.length;
} else {
cipher_xform.cipher.key.data = NULL;
cipher_xform.cipher.key.length = 0;
+ cipher_xform.cipher.iv.length = 0;
}
/*
diff --git a/app/test-crypto-perf/cperf_ops.h b/app/test-crypto-perf/cperf_ops.h
index f7b431c..bb83cd5 100644
--- a/app/test-crypto-perf/cperf_ops.h
+++ b/app/test-crypto-perf/cperf_ops.h
@@ -42,7 +42,8 @@
typedef struct rte_cryptodev_sym_session *(*cperf_sessions_create_t)(
uint8_t dev_id, const struct cperf_options *options,
- const struct cperf_test_vector *test_vector);
+ const struct cperf_test_vector *test_vector,
+ uint16_t iv_offset);
typedef int (*cperf_populate_ops_t)(struct rte_crypto_op **ops,
struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 3aca1b4..d37083f 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -211,7 +211,12 @@ cperf_latency_test_constructor(uint8_t dev_id, uint16_t qp_id,
ctx->options = options;
ctx->test_vector = test_vector;
- ctx->sess = op_fns->sess_create(dev_id, options, test_vector);
+ /* IV goes at the end of the crypto operation */
+ uint16_t iv_offset = sizeof(struct rte_crypto_op) +
+ sizeof(struct rte_crypto_sym_op) +
+ sizeof(struct cperf_op_result *);
+
+ ctx->sess = op_fns->sess_create(dev_id, options, test_vector, iv_offset);
if (ctx->sess == NULL)
goto err;
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index ba883fd..4d2b3d3 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -195,7 +195,11 @@ cperf_throughput_test_constructor(uint8_t dev_id, uint16_t qp_id,
ctx->options = options;
ctx->test_vector = test_vector;
- ctx->sess = op_fns->sess_create(dev_id, options, test_vector);
+ /* IV goes at the end of the crypto operation */
+ uint16_t iv_offset = sizeof(struct rte_crypto_op) +
+ sizeof(struct rte_crypto_sym_op);
+
+ ctx->sess = op_fns->sess_create(dev_id, options, test_vector, iv_offset);
if (ctx->sess == NULL)
goto err;
diff --git a/app/test-crypto-perf/cperf_test_vectors.c b/app/test-crypto-perf/cperf_test_vectors.c
index 36b3f6f..4a14fb3 100644
--- a/app/test-crypto-perf/cperf_test_vectors.c
+++ b/app/test-crypto-perf/cperf_test_vectors.c
@@ -423,7 +423,16 @@ cperf_test_vector_get_dummy(struct cperf_options *options)
memcpy(t_vec->iv.data, iv, options->cipher_iv_sz);
}
t_vec->ciphertext.length = options->max_buffer_size;
+ /* Set IV parameters */
+ t_vec->iv.data = rte_malloc(NULL, options->cipher_iv_sz,
+ 16);
+ if (options->cipher_iv_sz && t_vec->iv.data == NULL) {
+ rte_free(t_vec);
+ return NULL;
+ }
+ memcpy(t_vec->iv.data, iv, options->cipher_iv_sz);
t_vec->iv.length = options->cipher_iv_sz;
+
t_vec->data.cipher_offset = 0;
t_vec->data.cipher_length = options->max_buffer_size;
}
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index d5a2b33..1b58b1d 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -199,7 +199,11 @@ cperf_verify_test_constructor(uint8_t dev_id, uint16_t qp_id,
ctx->options = options;
ctx->test_vector = test_vector;
- ctx->sess = op_fns->sess_create(dev_id, options, test_vector);
+ /* IV goes at the end of the crypto operation */
+ uint16_t iv_offset = sizeof(struct rte_crypto_op) +
+ sizeof(struct rte_crypto_sym_op);
+
+ ctx->sess = op_fns->sess_create(dev_id, options, test_vector, iv_offset);
if (ctx->sess == NULL)
goto err;
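
In all three constructors above the IV now lives at a fixed offset inside the
crypto operation itself, so each test computes that offset once and hands it
to the session-create callback. A minimal sketch of the resulting layout and
of how the IV pointer is recovered (the helper below is illustrative, not part
of the patch; it only assumes the rte_crypto_op_ctod_offset() macro from
rte_crypto.h):

/*
 * Op layout assumed by the perf tests:
 *
 *   [rte_crypto_op][rte_crypto_sym_op][per-test data][IV bytes]
 *
 * The latency test reserves room for one extra pointer
 * (struct cperf_op_result *) before the IV, hence its larger offset.
 */
static inline uint8_t *
test_op_iv_ptr(struct rte_crypto_op *op, uint16_t iv_offset)
{
	/* iv_offset is the same value passed to sess_create() */
	return rte_crypto_op_ctod_offset(op, uint8_t *, iv_offset);
}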
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index 48c58a9..4e352f4 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -535,11 +535,6 @@ chain.
uint32_t offset;
uint32_t length;
} data; /**< Data offsets and length for ciphering */
-
- struct {
- uint16_t offset;
- uint16_t length;
- } iv; /**< Initialisation vector parameters */
} cipher;
struct {
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index 68e8022..4775bd2 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -159,6 +159,8 @@ API Changes
with a zero length array.
* Replaced pointer and physical address of IV in ``rte_crypto_sym_op`` with
offset from the start of the crypto operation.
+ * Moved length and offset of cipher IV from ``rte_crypto_sym_op`` to
+ ``rte_crypto_cipher_xform``.
ABI Changes
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 9a23415..28ac035 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -104,6 +104,17 @@ aesni_gcm_set_session_parameters(struct aesni_gcm_session *sess,
return -EINVAL;
}
+ /* Set IV parameters */
+ sess->iv.offset = cipher_xform->cipher.iv.offset;
+ sess->iv.length = cipher_xform->cipher.iv.length;
+
+ /* IV check */
+ if (sess->iv.length != 16 && sess->iv.length != 12 &&
+ sess->iv.length != 0) {
+ GCM_LOG_ERR("Wrong IV length");
+ return -EINVAL;
+ }
+
/* Select Crypto operation */
if (cipher_xform->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT &&
auth_xform->auth.op == RTE_CRYPTO_AUTH_OP_GENERATE)
@@ -221,20 +232,13 @@ process_gcm_crypto_op(struct rte_crypto_op *op,
src = rte_pktmbuf_mtod_offset(m_src, uint8_t *, offset);
- /* sanity checks */
- if (sym_op->cipher.iv.length != 16 && sym_op->cipher.iv.length != 12 &&
- sym_op->cipher.iv.length != 0) {
- GCM_LOG_ERR("iv");
- return -1;
- }
-
iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
- sym_op->cipher.iv.offset);
+ session->iv.offset);
/*
* GCM working in 12B IV mode => 16B pre-counter block we need
* to set BE LSB to 1, driver expects that 16B is allocated
*/
- if (sym_op->cipher.iv.length == 12) {
+ if (session->iv.length == 12) {
uint32_t *iv_padd = (uint32_t *)&(iv_ptr[12]);
*iv_padd = rte_bswap32(1);
}
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
index 0496b44..2ed96f8 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
@@ -90,6 +90,11 @@ enum aesni_gcm_key {
/** AESNI GCM private session structure */
struct aesni_gcm_session {
+ struct {
+ uint16_t length;
+ uint16_t offset;
+ } iv;
+ /**< IV parameters */
enum aesni_gcm_operation op;
/**< GCM operation type */
enum aesni_gcm_key key;
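
This header change is representative of the whole series: every PMD session
now caches the IV offset and length, which is what lets the per-operation
sanity check above move to session-configure time. A condensed sketch of the
pattern, using this PMD's own field names as shown in the diff:

/* Session setup -- runs once per session: */
sess->iv.offset = cipher_xform->cipher.iv.offset;
sess->iv.length = cipher_xform->cipher.iv.length;
if (sess->iv.length != 16 && sess->iv.length != 12 &&
		sess->iv.length != 0)
	return -EINVAL;	/* bad IV rejected before any packet is seen */

/* Datapath -- runs per operation, no length branch needed: */
iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *, sess->iv.offset);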
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index 1685fa8..aa9fcf8 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -246,6 +246,10 @@ aesni_mb_set_session_cipher_parameters(const struct aesni_mb_op_fns *mb_ops,
return -1;
}
+ /* Set IV parameters */
+ sess->iv.offset = xform->cipher.iv.offset;
+ sess->iv.length = xform->cipher.iv.length;
+
/* Expanded cipher keys */
(*aes_keyexp_fn)(xform->cipher.key.data,
sess->cipher.expanded_aes_keys.encode,
@@ -300,6 +304,9 @@ aesni_mb_set_session_parameters(const struct aesni_mb_op_fns *mb_ops,
return -1;
}
+ /* Default IV length = 0 */
+ sess->iv.length = 0;
+
if (aesni_mb_set_session_auth_parameters(mb_ops, sess, auth_xform)) {
MB_LOG_ERR("Invalid/unsupported authentication parameters");
return -1;
@@ -472,8 +479,8 @@ set_mb_job_params(JOB_AES_HMAC *job, struct aesni_mb_qp *qp,
/* Set IV parameters */
job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
- op->sym->cipher.iv.offset);
- job->iv_len_in_bytes = op->sym->cipher.iv.length;
+ session->iv.offset);
+ job->iv_len_in_bytes = session->iv.length;
/* Data Parameter */
job->src = rte_pktmbuf_mtod(m_src, uint8_t *);
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
index 0d82699..5c50d37 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
@@ -167,6 +167,11 @@ struct aesni_mb_qp {
/** AES-NI multi-buffer private session structure */
struct aesni_mb_session {
JOB_CHAIN_ORDER chain_order;
+ struct {
+ uint16_t length;
+ uint16_t offset;
+ } iv;
+ /**< IV parameters */
/** Cipher Parameters */
struct {
diff --git a/drivers/crypto/armv8/rte_armv8_pmd.c b/drivers/crypto/armv8/rte_armv8_pmd.c
index fa6a7d5..5256f66 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd.c
@@ -432,7 +432,7 @@ armv8_crypto_set_session_chained_parameters(struct armv8_crypto_session *sess,
case RTE_CRYPTO_CIPHER_AES_CBC:
sess->cipher.algo = calg;
/* IV len is always 16 bytes (block size) for AES CBC */
- sess->cipher.iv_len = 16;
+ sess->cipher.iv.length = 16;
break;
default:
return -EINVAL;
@@ -523,6 +523,9 @@ armv8_crypto_set_session_parameters(struct armv8_crypto_session *sess,
return -EINVAL;
}
+ /* Set IV offset */
+ sess->cipher.iv.offset = cipher_xform->cipher.iv.offset;
+
if (is_chained_op) {
ret = armv8_crypto_set_session_chained_parameters(sess,
cipher_xform, auth_xform);
@@ -649,13 +652,8 @@ process_armv8_chained_op
op->sym->auth.digest.length);
}
- if (unlikely(op->sym->cipher.iv.length != sess->cipher.iv_len)) {
- op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- return;
- }
-
arg.cipher.iv = rte_crypto_op_ctod_offset(op, uint8_t *,
- op->sym->cipher.iv.offset);
+ sess->cipher.iv.offset);
arg.cipher.key = sess->cipher.key.data;
/* Acquire combined mode function */
crypto_func = sess->crypto_func;
diff --git a/drivers/crypto/armv8/rte_armv8_pmd_private.h b/drivers/crypto/armv8/rte_armv8_pmd_private.h
index b75107f..75bde9f 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd_private.h
+++ b/drivers/crypto/armv8/rte_armv8_pmd_private.h
@@ -159,8 +159,11 @@ struct armv8_crypto_session {
/**< cipher operation direction */
enum rte_crypto_cipher_algorithm algo;
/**< cipher algorithm */
- int iv_len;
- /**< IV length */
+ struct {
+ uint16_t length;
+ uint16_t offset;
+ } iv;
+ /**< IV parameters */
struct {
uint8_t data[256];
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 1605701..3930794 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -88,7 +88,7 @@ build_authenc_fd(dpaa2_sec_session *sess,
uint8_t *old_icv;
uint32_t mem_len = (7 * sizeof(struct qbman_fle)) + icv_len;
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
- op->sym->cipher.iv.offset);
+ sess->iv.offset);
PMD_INIT_FUNC_TRACE();
@@ -138,7 +138,7 @@ build_authenc_fd(dpaa2_sec_session *sess,
sym_op->auth.digest.length,
sym_op->cipher.data.offset,
sym_op->cipher.data.length,
- sym_op->cipher.iv.length,
+ sess->iv.length,
sym_op->m_src->data_off);
/* Configure Output FLE with Scatter/Gather Entry */
@@ -163,7 +163,7 @@ build_authenc_fd(dpaa2_sec_session *sess,
DPAA2_VADDR_TO_IOVA(sym_op->auth.digest.data));
sge->length = sym_op->auth.digest.length;
DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
- sym_op->cipher.iv.length));
+ sess->iv.length));
}
DPAA2_SET_FLE_FIN(sge);
@@ -175,13 +175,13 @@ build_authenc_fd(dpaa2_sec_session *sess,
DPAA2_SET_FLE_SG_EXT(fle);
DPAA2_SET_FLE_FIN(fle);
fle->length = (sess->dir == DIR_ENC) ?
- (sym_op->auth.data.length + sym_op->cipher.iv.length) :
- (sym_op->auth.data.length + sym_op->cipher.iv.length +
+ (sym_op->auth.data.length + sess->iv.length) :
+ (sym_op->auth.data.length + sess->iv.length +
sym_op->auth.digest.length);
/* Configure Input SGE for Encap/Decap */
DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(iv_ptr));
- sge->length = sym_op->cipher.iv.length;
+ sge->length = sess->iv.length;
sge++;
DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
@@ -198,7 +198,7 @@ build_authenc_fd(dpaa2_sec_session *sess,
sge->length = sym_op->auth.digest.length;
DPAA2_SET_FD_LEN(fd, (sym_op->auth.data.length +
sym_op->auth.digest.length +
- sym_op->cipher.iv.length));
+ sess->iv.length));
}
DPAA2_SET_FLE_FIN(sge);
if (auth_only_len) {
@@ -310,7 +310,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
struct sec_flow_context *flc;
struct ctxt_priv *priv = sess->ctxt;
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
- op->sym->cipher.iv.offset);
+ sess->iv.offset);
PMD_INIT_FUNC_TRACE();
@@ -347,21 +347,21 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
flc = &priv->flc_desc[0].flc;
DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(fle));
DPAA2_SET_FD_LEN(fd, sym_op->cipher.data.length +
- sym_op->cipher.iv.length);
+ sess->iv.length);
DPAA2_SET_FD_COMPOUND_FMT(fd);
DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
PMD_TX_LOG(DEBUG, "cipher_off: 0x%x/length %d,ivlen=%d data_off: 0x%x",
sym_op->cipher.data.offset,
sym_op->cipher.data.length,
- sym_op->cipher.iv.length,
+ sess->iv.length,
sym_op->m_src->data_off);
DPAA2_SET_FLE_ADDR(fle, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
DPAA2_SET_FLE_OFFSET(fle, sym_op->cipher.data.offset +
sym_op->m_src->data_off);
- fle->length = sym_op->cipher.data.length + sym_op->cipher.iv.length;
+ fle->length = sym_op->cipher.data.length + sess->iv.length;
PMD_TX_LOG(DEBUG, "1 - flc = %p, fle = %p FLEaddr = %x-%x, length %d",
flc, fle, fle->addr_hi, fle->addr_lo, fle->length);
@@ -369,12 +369,12 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
fle++;
DPAA2_SET_FLE_ADDR(fle, DPAA2_VADDR_TO_IOVA(sge));
- fle->length = sym_op->cipher.data.length + sym_op->cipher.iv.length;
+ fle->length = sym_op->cipher.data.length + sess->iv.length;
DPAA2_SET_FLE_SG_EXT(fle);
DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(iv_ptr));
- sge->length = sym_op->cipher.iv.length;
+ sge->length = sess->iv.length;
sge++;
DPAA2_SET_FLE_ADDR(sge, DPAA2_MBUF_VADDR_TO_IOVA(sym_op->m_src));
@@ -798,6 +798,10 @@ dpaa2_sec_cipher_init(struct rte_cryptodev *dev,
cipherdata.key_enc_flags = 0;
cipherdata.key_type = RTA_DATA_IMM;
+ /* Set IV parameters */
+ session->iv.offset = xform->cipher.iv.offset;
+ session->iv.length = xform->cipher.iv.length;
+
switch (xform->cipher.algo) {
case RTE_CRYPTO_CIPHER_AES_CBC:
cipherdata.algtype = OP_ALG_ALGSEL_AES;
@@ -1016,6 +1020,11 @@ dpaa2_sec_aead_init(struct rte_cryptodev *dev,
(cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) ?
DPAA2_SEC_HASH_CIPHER : DPAA2_SEC_CIPHER_HASH;
}
+
+ /* Set IV parameters */
+ session->iv.offset = cipher_xform->iv.offset;
+ session->iv.length = cipher_xform->iv.length;
+
/* For SEC AEAD only one descriptor is required */
priv = (struct ctxt_priv *)rte_zmalloc(NULL,
sizeof(struct ctxt_priv) + sizeof(struct sec_flc_desc),
@@ -1216,6 +1225,10 @@ dpaa2_sec_session_configure(struct rte_cryptodev *dev,
RTE_LOG(ERR, PMD, "invalid session struct");
return NULL;
}
+
+ /* Default IV length = 0 */
+ session->iv.length = 0;
+
/* Cipher Only */
if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER && xform->next == NULL) {
session->ctxt_type = DPAA2_SEC_CIPHER;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index f5c6169..d152161 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -187,6 +187,10 @@ typedef struct dpaa2_sec_session_entry {
uint8_t *data; /**< pointer to key data */
size_t length; /**< key length in bytes */
} auth_key;
+ struct {
+ uint16_t length; /**< IV length in bytes */
+ uint16_t offset; /**< IV offset in bytes */
+ } iv;
uint8_t status;
union {
struct dpaa2_sec_cipher_ctxt cipher_ctxt;
@@ -275,8 +279,8 @@ static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
.min = 32,
.max = 32,
.increment = 0
- },
- .aad_size = { 0 }
+ },
+ .aad_size = { 0 }
}, }
}, }
},
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd.c b/drivers/crypto/kasumi/rte_kasumi_pmd.c
index edf84e8..2af7769 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd.c
@@ -116,6 +116,13 @@ kasumi_set_session_parameters(struct kasumi_session *sess,
/* Only KASUMI F8 supported */
if (cipher_xform->cipher.algo != RTE_CRYPTO_CIPHER_KASUMI_F8)
return -EINVAL;
+
+ sess->iv_offset = cipher_xform->cipher.iv.offset;
+ if (cipher_xform->cipher.iv.length != KASUMI_IV_LENGTH) {
+ KASUMI_LOG_ERR("Wrong IV length");
+ return -EINVAL;
+ }
+
/* Initialize key */
sso_kasumi_init_f8_key_sched(cipher_xform->cipher.key.data,
&sess->pKeySched_cipher);
@@ -179,13 +186,6 @@ process_kasumi_cipher_op(struct rte_crypto_op **ops,
uint32_t num_bytes[num_ops];
for (i = 0; i < num_ops; i++) {
- /* Sanity checks. */
- if (ops[i]->sym->cipher.iv.length != KASUMI_IV_LENGTH) {
- ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- KASUMI_LOG_ERR("iv");
- break;
- }
-
src[i] = rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
(ops[i]->sym->cipher.data.offset >> 3);
dst[i] = ops[i]->sym->m_dst ?
@@ -194,7 +194,7 @@ process_kasumi_cipher_op(struct rte_crypto_op **ops,
rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
(ops[i]->sym->cipher.data.offset >> 3);
iv_ptr = rte_crypto_op_ctod_offset(ops[i], uint8_t *,
- ops[i]->sym->cipher.iv.offset);
+ session->iv_offset);
iv[i] = *((uint64_t *)(iv_ptr));
num_bytes[i] = ops[i]->sym->cipher.data.length >> 3;
@@ -218,13 +218,6 @@ process_kasumi_cipher_op_bit(struct rte_crypto_op *op,
uint64_t iv;
uint32_t length_in_bits, offset_in_bits;
- /* Sanity checks. */
- if (unlikely(op->sym->cipher.iv.length != KASUMI_IV_LENGTH)) {
- op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- KASUMI_LOG_ERR("iv");
- return 0;
- }
-
offset_in_bits = op->sym->cipher.data.offset;
src = rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
if (op->sym->m_dst == NULL) {
@@ -234,7 +227,7 @@ process_kasumi_cipher_op_bit(struct rte_crypto_op *op,
}
dst = rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *);
iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
- op->sym->cipher.iv.offset);
+ session->iv_offset);
iv = *((uint64_t *)(iv_ptr));
length_in_bits = op->sym->cipher.data.length;
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_private.h b/drivers/crypto/kasumi/rte_kasumi_pmd_private.h
index fb586ca..6a0d47a 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd_private.h
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd_private.h
@@ -92,6 +92,7 @@ struct kasumi_session {
sso_kasumi_key_sched_t pKeySched_hash;
enum kasumi_operation op;
enum rte_crypto_auth_operation auth_op;
+ uint16_t iv_offset;
} __rte_cache_aligned;
diff --git a/drivers/crypto/null/null_crypto_pmd_ops.c b/drivers/crypto/null/null_crypto_pmd_ops.c
index 12c946c..5f74f0c 100644
--- a/drivers/crypto/null/null_crypto_pmd_ops.c
+++ b/drivers/crypto/null/null_crypto_pmd_ops.c
@@ -72,11 +72,7 @@ static const struct rte_cryptodev_capabilities null_crypto_pmd_capabilities[] =
.max = 0,
.increment = 0
},
- .iv_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- }
+ .iv_size = { 0 }
}, },
}, }
},
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 02cf25a..ab4333e 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -264,6 +264,10 @@ openssl_set_session_cipher_parameters(struct openssl_session *sess,
/* Select cipher key */
sess->cipher.key.length = xform->cipher.key.length;
+ /* Set IV parameters */
+ sess->iv.offset = xform->cipher.iv.offset;
+ sess->iv.length = xform->cipher.iv.length;
+
/* Select cipher algo */
switch (xform->cipher.algo) {
case RTE_CRYPTO_CIPHER_3DES_CBC:
@@ -397,6 +401,9 @@ openssl_set_session_parameters(struct openssl_session *sess,
return -EINVAL;
}
+ /* Default IV length = 0 */
+ sess->iv.length = 0;
+
/* cipher_xform must be check before auth_xform */
if (cipher_xform) {
if (openssl_set_session_cipher_parameters(
@@ -924,8 +931,8 @@ process_openssl_combined_op
}
iv = rte_crypto_op_ctod_offset(op, uint8_t *,
- op->sym->cipher.iv.offset);
- ivlen = op->sym->cipher.iv.length;
+ sess->iv.offset);
+ ivlen = sess->iv.length;
aad = op->sym->auth.aad.data;
aadlen = op->sym->auth.aad.length;
@@ -989,7 +996,7 @@ process_openssl_cipher_op
op->sym->cipher.data.offset);
iv = rte_crypto_op_ctod_offset(op, uint8_t *,
- op->sym->cipher.iv.offset);
+ sess->iv.offset);
if (sess->cipher.mode == OPENSSL_CIPHER_LIB)
if (sess->cipher.direction == RTE_CRYPTO_CIPHER_OP_ENCRYPT)
@@ -1031,7 +1038,7 @@ process_openssl_docsis_bpi_op(struct rte_crypto_op *op,
op->sym->cipher.data.offset);
iv = rte_crypto_op_ctod_offset(op, uint8_t *,
- op->sym->cipher.iv.offset);
+ sess->iv.offset);
block_size = DES_BLOCK_SIZE;
@@ -1090,7 +1097,7 @@ process_openssl_docsis_bpi_op(struct rte_crypto_op *op,
last_block_len, sess->cipher.bpi_ctx);
/* Prepare parameters for CBC mode op */
iv = rte_crypto_op_ctod_offset(op, uint8_t *,
- op->sym->cipher.iv.offset);
+ sess->iv.offset);
dst += last_block_len - srclen;
srclen -= last_block_len;
}
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_private.h b/drivers/crypto/openssl/rte_openssl_pmd_private.h
index 4d820c5..3a64853 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_private.h
+++ b/drivers/crypto/openssl/rte_openssl_pmd_private.h
@@ -108,6 +108,11 @@ struct openssl_session {
enum openssl_chain_order chain_order;
/**< chain order mode */
+ struct {
+ uint16_t length;
+ uint16_t offset;
+ } iv;
+ /**< IV parameters */
/** Cipher Parameters */
struct {
enum rte_crypto_cipher_operation direction;
diff --git a/drivers/crypto/qat/qat_adf/qat_algs.h b/drivers/crypto/qat/qat_adf/qat_algs.h
index 5c63406..e8fa3d3 100644
--- a/drivers/crypto/qat/qat_adf/qat_algs.h
+++ b/drivers/crypto/qat/qat_adf/qat_algs.h
@@ -127,6 +127,10 @@ struct qat_session {
struct icp_qat_fw_la_bulk_req fw_req;
uint8_t aad_len;
struct qat_crypto_instance *inst;
+ struct {
+ uint16_t offset;
+ uint16_t length;
+ } iv;
rte_spinlock_t lock; /* protects this struct */
};
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
index 7015549..01f7de1 100644
--- a/drivers/crypto/qat/qat_crypto.c
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -298,6 +298,9 @@ qat_crypto_sym_configure_session_cipher(struct rte_cryptodev *dev,
/* Get cipher xform from crypto xform chain */
cipher_xform = qat_get_cipher_xform(xform);
+ session->iv.offset = cipher_xform->iv.offset;
+ session->iv.length = cipher_xform->iv.length;
+
switch (cipher_xform->algo) {
case RTE_CRYPTO_CIPHER_AES_CBC:
if (qat_alg_validate_aes_key(cipher_xform->key.length,
@@ -643,7 +646,7 @@ qat_bpicipher_preprocess(struct qat_session *ctx,
else
/* runt block, i.e. less than one full block */
iv = rte_crypto_op_ctod_offset(op, uint8_t *,
- sym_op->cipher.iv.offset);
+ ctx->iv.offset);
#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX
rte_hexdump(stdout, "BPI: src before pre-process:", last_block,
@@ -699,7 +702,7 @@ qat_bpicipher_postprocess(struct qat_session *ctx,
else
/* runt block, i.e. less than one full block */
iv = rte_crypto_op_ctod_offset(op, uint8_t *,
- sym_op->cipher.iv.offset);
+ ctx->iv.offset);
#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_RX
rte_hexdump(stdout, "BPI: src before post-process:", last_block,
@@ -980,27 +983,20 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
}
iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
- op->sym->cipher.iv.offset);
+ ctx->iv.offset);
/* copy IV into request if it fits */
- /*
- * If IV length is zero do not copy anything but still
- * use request descriptor embedded IV
- *
- */
- if (op->sym->cipher.iv.length) {
- if (op->sym->cipher.iv.length <=
- sizeof(cipher_param->u.cipher_IV_array)) {
- rte_memcpy(cipher_param->u.cipher_IV_array,
- iv_ptr,
- op->sym->cipher.iv.length);
- } else {
- ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(
- qat_req->comn_hdr.serv_specif_flags,
- ICP_QAT_FW_CIPH_IV_64BIT_PTR);
- cipher_param->u.s.cipher_IV_ptr =
- rte_crypto_op_ctophys_offset(op,
- op->sym->cipher.iv.offset);
- }
+ if (ctx->iv.length <=
+ sizeof(cipher_param->u.cipher_IV_array)) {
+ rte_memcpy(cipher_param->u.cipher_IV_array,
+ iv_ptr,
+ ctx->iv.length);
+ } else {
+ ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(
+ qat_req->comn_hdr.serv_specif_flags,
+ ICP_QAT_FW_CIPH_IV_64BIT_PTR);
+ cipher_param->u.s.cipher_IV_ptr =
+ rte_crypto_op_ctophys_offset(op,
+ ctx->iv.offset);
}
min_ofs = cipher_ofs;
}
@@ -1166,7 +1162,7 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128 ||
ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64) {
- if (op->sym->cipher.iv.length == 12) {
+ if (ctx->iv.length == 12) {
/*
* For GCM a 12 byte IV is allowed,
* but we need to inform the f/w
@@ -1202,7 +1198,7 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
rte_pktmbuf_data_len(op->sym->m_src));
if (do_cipher)
rte_hexdump(stdout, "iv:", iv_ptr,
- op->sym->cipher.iv.length);
+ ctx->iv.length);
if (do_auth) {
rte_hexdump(stdout, "digest:", op->sym->auth.digest.data,
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd.c b/drivers/crypto/snow3g/rte_snow3g_pmd.c
index 6df8416..2b1689b 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd.c
@@ -116,6 +116,13 @@ snow3g_set_session_parameters(struct snow3g_session *sess,
/* Only SNOW 3G UEA2 supported */
if (cipher_xform->cipher.algo != RTE_CRYPTO_CIPHER_SNOW3G_UEA2)
return -EINVAL;
+
+ if (cipher_xform->cipher.iv.length != SNOW3G_IV_LENGTH) {
+ SNOW3G_LOG_ERR("Wrong IV length");
+ return -EINVAL;
+ }
+ sess->iv_offset = cipher_xform->cipher.iv.offset;
+
/* Initialize key */
sso_snow3g_init_key_sched(cipher_xform->cipher.key.data,
&sess->pKeySched_cipher);
@@ -178,13 +185,6 @@ process_snow3g_cipher_op(struct rte_crypto_op **ops,
uint32_t num_bytes[SNOW3G_MAX_BURST];
for (i = 0; i < num_ops; i++) {
- /* Sanity checks. */
- if (unlikely(ops[i]->sym->cipher.iv.length != SNOW3G_IV_LENGTH)) {
- ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- SNOW3G_LOG_ERR("iv");
- break;
- }
-
src[i] = rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
(ops[i]->sym->cipher.data.offset >> 3);
dst[i] = ops[i]->sym->m_dst ?
@@ -193,7 +193,7 @@ process_snow3g_cipher_op(struct rte_crypto_op **ops,
rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
(ops[i]->sym->cipher.data.offset >> 3);
iv[i] = rte_crypto_op_ctod_offset(ops[i], uint8_t *,
- ops[i]->sym->cipher.iv.offset);
+ session->iv_offset);
num_bytes[i] = ops[i]->sym->cipher.data.length >> 3;
processed_ops++;
@@ -214,13 +214,6 @@ process_snow3g_cipher_op_bit(struct rte_crypto_op *op,
uint8_t *iv;
uint32_t length_in_bits, offset_in_bits;
- /* Sanity checks. */
- if (unlikely(op->sym->cipher.iv.length != SNOW3G_IV_LENGTH)) {
- op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- SNOW3G_LOG_ERR("iv");
- return 0;
- }
-
offset_in_bits = op->sym->cipher.data.offset;
src = rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
if (op->sym->m_dst == NULL) {
@@ -230,7 +223,7 @@ process_snow3g_cipher_op_bit(struct rte_crypto_op *op,
}
dst = rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *);
iv = rte_crypto_op_ctod_offset(op, uint8_t *,
- op->sym->cipher.iv.offset);
+ session->iv_offset);
length_in_bits = op->sym->cipher.data.length;
sso_snow3g_f8_1_buffer_bit(&session->pKeySched_cipher, iv,
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd_private.h b/drivers/crypto/snow3g/rte_snow3g_pmd_private.h
index 03973b9..e8943a7 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd_private.h
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd_private.h
@@ -91,6 +91,7 @@ struct snow3g_session {
enum rte_crypto_auth_operation auth_op;
sso_snow3g_key_schedule_t pKeySched_cipher;
sso_snow3g_key_schedule_t pKeySched_hash;
+ uint16_t iv_offset;
} __rte_cache_aligned;
diff --git a/drivers/crypto/zuc/rte_zuc_pmd.c b/drivers/crypto/zuc/rte_zuc_pmd.c
index 8374f65..c48a2d6 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd.c
@@ -115,6 +115,13 @@ zuc_set_session_parameters(struct zuc_session *sess,
/* Only ZUC EEA3 supported */
if (cipher_xform->cipher.algo != RTE_CRYPTO_CIPHER_ZUC_EEA3)
return -EINVAL;
+
+ if (cipher_xform->cipher.iv.length != ZUC_IV_KEY_LENGTH) {
+ ZUC_LOG_ERR("Wrong IV length");
+ return -EINVAL;
+ }
+ sess->iv_offset = cipher_xform->cipher.iv.offset;
+
/* Copy the key */
memcpy(sess->pKey_cipher, cipher_xform->cipher.key.data,
ZUC_IV_KEY_LENGTH);
@@ -178,13 +185,6 @@ process_zuc_cipher_op(struct rte_crypto_op **ops,
uint8_t *cipher_keys[ZUC_MAX_BURST];
for (i = 0; i < num_ops; i++) {
- /* Sanity checks. */
- if (unlikely(ops[i]->sym->cipher.iv.length != ZUC_IV_KEY_LENGTH)) {
- ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- ZUC_LOG_ERR("iv");
- break;
- }
-
if (((ops[i]->sym->cipher.data.length % BYTE_LEN) != 0)
|| ((ops[i]->sym->cipher.data.offset
% BYTE_LEN) != 0)) {
@@ -214,7 +214,7 @@ process_zuc_cipher_op(struct rte_crypto_op **ops,
rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
(ops[i]->sym->cipher.data.offset >> 3);
iv[i] = rte_crypto_op_ctod_offset(ops[i], uint8_t *,
- ops[i]->sym->cipher.iv.offset);
+ session->iv_offset);
num_bytes[i] = ops[i]->sym->cipher.data.length >> 3;
cipher_keys[i] = session->pKey_cipher;
diff --git a/drivers/crypto/zuc/rte_zuc_pmd_ops.c b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
index e793459..c24b9bd 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd_ops.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
@@ -80,7 +80,7 @@ static const struct rte_cryptodev_capabilities zuc_pmd_capabilities[] = {
.min = 16,
.max = 16,
.increment = 0
- }
+ },
}, }
}, }
},
diff --git a/drivers/crypto/zuc/rte_zuc_pmd_private.h b/drivers/crypto/zuc/rte_zuc_pmd_private.h
index 030f120..cee1b5d 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd_private.h
+++ b/drivers/crypto/zuc/rte_zuc_pmd_private.h
@@ -92,6 +92,7 @@ struct zuc_session {
enum rte_crypto_auth_operation auth_op;
uint8_t pKey_cipher[ZUC_IV_KEY_LENGTH];
uint8_t pKey_hash[ZUC_IV_KEY_LENGTH];
+ uint16_t iv_offset;
} __rte_cache_aligned;
diff --git a/examples/ipsec-secgw/esp.c b/examples/ipsec-secgw/esp.c
index 738a800..9e12782 100644
--- a/examples/ipsec-secgw/esp.c
+++ b/examples/ipsec-secgw/esp.c
@@ -50,9 +50,6 @@
#include "esp.h"
#include "ipip.h"
-#define IV_OFFSET (sizeof(struct rte_crypto_op) + \
- sizeof(struct rte_crypto_sym_op))
-
int
esp_inbound(struct rte_mbuf *m, struct ipsec_sa *sa,
struct rte_crypto_op *cop)
@@ -104,8 +101,6 @@ esp_inbound(struct rte_mbuf *m, struct ipsec_sa *sa,
case RTE_CRYPTO_CIPHER_AES_CBC:
/* Copy IV at the end of crypto operation */
rte_memcpy(iv_ptr, iv, sa->iv_len);
- sym_cop->cipher.iv.offset = IV_OFFSET;
- sym_cop->cipher.iv.length = sa->iv_len;
break;
case RTE_CRYPTO_CIPHER_AES_CTR:
case RTE_CRYPTO_CIPHER_AES_GCM:
@@ -113,8 +108,6 @@ esp_inbound(struct rte_mbuf *m, struct ipsec_sa *sa,
icb->salt = sa->salt;
memcpy(&icb->iv, iv, 8);
icb->cnt = rte_cpu_to_be_32(1);
- sym_cop->cipher.iv.offset = IV_OFFSET;
- sym_cop->cipher.iv.length = 16;
break;
default:
RTE_LOG(ERR, IPSEC_ESP, "unsupported cipher algorithm %u\n",
@@ -348,8 +341,6 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
icb->salt = sa->salt;
icb->iv = sa->seq;
icb->cnt = rte_cpu_to_be_32(1);
- sym_cop->cipher.iv.offset = IV_OFFSET;
- sym_cop->cipher.iv.length = 16;
uint8_t *aad;
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index de1df7b..405cf3d 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -48,6 +48,9 @@
#define MAX_DIGEST_SIZE 32 /* Bytes -- 256 bits */
+#define IV_OFFSET (sizeof(struct rte_crypto_op) + \
+ sizeof(struct rte_crypto_sym_op))
+
#define uint32_t_to_char(ip, a, b, c, d) do {\
*a = (uint8_t)(ip >> 24 & 0xff);\
*b = (uint8_t)(ip >> 16 & 0xff);\
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 39624c4..85e4d4e 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -589,6 +589,7 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
{
struct ipsec_sa *sa;
uint32_t i, idx;
+ uint16_t iv_length;
for (i = 0; i < nb_entries; i++) {
idx = SPI2IDX(entries[i].spi);
@@ -607,6 +608,21 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
sa->dst.ip.ip4 = rte_cpu_to_be_32(sa->dst.ip.ip4);
}
+ switch (sa->cipher_algo) {
+ case RTE_CRYPTO_CIPHER_NULL:
+ case RTE_CRYPTO_CIPHER_AES_CBC:
+ iv_length = sa->iv_len;
+ break;
+ case RTE_CRYPTO_CIPHER_AES_CTR:
+ case RTE_CRYPTO_CIPHER_AES_GCM:
+ iv_length = 16;
+ break;
+ default:
+ RTE_LOG(ERR, IPSEC_ESP, "unsupported cipher algorithm %u\n",
+ sa->cipher_algo);
+ return -EINVAL;
+ }
+
if (inbound) {
sa_ctx->xf[idx].b.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
sa_ctx->xf[idx].b.cipher.algo = sa->cipher_algo;
@@ -615,6 +631,8 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
sa->cipher_key_len;
sa_ctx->xf[idx].b.cipher.op =
RTE_CRYPTO_CIPHER_OP_DECRYPT;
+ sa_ctx->xf[idx].b.cipher.iv.offset = IV_OFFSET;
+ sa_ctx->xf[idx].b.cipher.iv.length = iv_length;
sa_ctx->xf[idx].b.next = NULL;
sa_ctx->xf[idx].a.type = RTE_CRYPTO_SYM_XFORM_AUTH;
@@ -637,6 +655,8 @@ sa_add_rules(struct sa_ctx *sa_ctx, const struct ipsec_sa entries[],
sa->cipher_key_len;
sa_ctx->xf[idx].a.cipher.op =
RTE_CRYPTO_CIPHER_OP_ENCRYPT;
+ sa_ctx->xf[idx].a.cipher.iv.offset = IV_OFFSET;
+ sa_ctx->xf[idx].a.cipher.iv.length = iv_length;
sa_ctx->xf[idx].a.next = NULL;
sa_ctx->xf[idx].b.type = RTE_CRYPTO_SYM_XFORM_AUTH;
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index ffd9731..9f16806 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -139,6 +139,11 @@ struct l2fwd_key {
phys_addr_t phys_addr;
};
+struct l2fwd_iv {
+ uint8_t *data;
+ uint16_t length;
+};
+
/** l2fwd crypto application command line options */
struct l2fwd_crypto_options {
unsigned portmask;
@@ -155,8 +160,8 @@ struct l2fwd_crypto_options {
unsigned ckey_param;
int ckey_random_size;
- struct l2fwd_key iv;
- unsigned iv_param;
+ struct l2fwd_iv iv;
+ unsigned int iv_param;
int iv_random_size;
struct rte_crypto_sym_xform auth_xform;
@@ -183,7 +188,7 @@ struct l2fwd_crypto_params {
unsigned digest_length;
unsigned block_size;
- struct l2fwd_key iv;
+ struct l2fwd_iv iv;
struct l2fwd_key aad;
struct rte_cryptodev_sym_session *session;
@@ -489,9 +494,6 @@ l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
/* Copy IV at the end of the crypto operation */
rte_memcpy(iv_ptr, cparams->iv.data, cparams->iv.length);
- op->sym->cipher.iv.offset = IV_OFFSET;
- op->sym->cipher.iv.length = cparams->iv.length;
-
/* For wireless algorithms, offset/length must be in bits */
if (cparams->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
cparams->cipher_algo == RTE_CRYPTO_CIPHER_KASUMI_F8 ||
@@ -703,6 +705,9 @@ l2fwd_main_loop(struct l2fwd_crypto_options *options)
port_cparams[i].iv.length);
port_cparams[i].cipher_algo = options->cipher_xform.cipher.algo;
+ /* Set IV parameters */
+ options->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ options->cipher_xform.cipher.iv.length = options->iv.length;
}
port_cparams[i].session = initialize_crypto_session(options,
@@ -1547,6 +1552,46 @@ check_supported_size(uint16_t length, uint16_t min, uint16_t max,
return -1;
}
+
+static int
+check_iv_param(const struct rte_crypto_param_range *iv_range_size,
+ unsigned int iv_param, int iv_random_size,
+ uint16_t *iv_length)
+{
+ /*
+ * Check if length of provided IV is supported
+ * by the algorithm chosen.
+ */
+ if (iv_param) {
+ if (check_supported_size(*iv_length,
+ iv_range_size->min,
+ iv_range_size->max,
+ iv_range_size->increment)
+ != 0) {
+ printf("Unsupported IV length\n");
+ return -1;
+ }
+ /*
+ * Check if length of IV to be randomly generated
+ * is supported by the algorithm chosen.
+ */
+ } else if (iv_random_size != -1) {
+ if (check_supported_size(iv_random_size,
+ iv_range_size->min,
+ iv_range_size->max,
+ iv_range_size->increment)
+ != 0) {
+ printf("Unsupported IV length\n");
+ return -1;
+ }
+ *iv_length = iv_random_size;
+ /* No size provided, use minimum size. */
+ } else
+ *iv_length = iv_range_size->min;
+
+ return 0;
+}
+
static int
initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports,
uint8_t *enabled_cdevs)
@@ -1614,36 +1659,9 @@ initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports,
}
options->block_size = cap->sym.cipher.block_size;
- /*
- * Check if length of provided IV is supported
- * by the algorithm chosen.
- */
- if (options->iv_param) {
- if (check_supported_size(options->iv.length,
- cap->sym.cipher.iv_size.min,
- cap->sym.cipher.iv_size.max,
- cap->sym.cipher.iv_size.increment)
- != 0) {
- printf("Unsupported IV length\n");
- return -1;
- }
- /*
- * Check if length of IV to be randomly generated
- * is supported by the algorithm chosen.
- */
- } else if (options->iv_random_size != -1) {
- if (check_supported_size(options->iv_random_size,
- cap->sym.cipher.iv_size.min,
- cap->sym.cipher.iv_size.max,
- cap->sym.cipher.iv_size.increment)
- != 0) {
- printf("Unsupported IV length\n");
- return -1;
- }
- options->iv.length = options->iv_random_size;
- /* No size provided, use minimum size. */
- } else
- options->iv.length = cap->sym.cipher.iv_size.min;
+
+ check_iv_param(&cap->sym.cipher.iv_size, options->iv_param,
+ options->iv_random_size, &options->iv.length);
/*
* Check if length of provided cipher key is supported
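
Note that check_iv_param() returns -1 on an unsupported length, but the call
above discards that result; a call site that propagates the failure could
look like the following sketch (not part of the patch):

if (check_iv_param(&cap->sym.cipher.iv_size, options->iv_param,
		options->iv_random_size, &options->iv.length) != 0)
	return -1;	/* unsupported IV length for this device */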
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index b35c45a..c1a1e27 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -190,6 +190,55 @@ struct rte_crypto_cipher_xform {
* - Each key can be either 128 bits (16 bytes) or 256 bits (32 bytes).
* - Both keys must have the same size.
**/
+ struct {
+ uint16_t offset;
+ /**< Starting point for Initialisation Vector or Counter,
+ * specified as number of bytes from start of crypto
+ * operation (rte_crypto_op).
+ *
+ * - For block ciphers in CBC or F8 mode, or for KASUMI
+ * in F8 mode, or for SNOW 3G in UEA2 mode, this is the
+ * Initialisation Vector (IV) value.
+ *
+ * - For block ciphers in CTR mode, this is the counter.
+ *
+ * - For GCM mode, this is either the IV (if the length
+ * is 96 bits) or J0 (for other sizes), where J0 is as
+ * defined by NIST SP800-38D. Regardless of the IV
+ * length, a full 16 bytes needs to be allocated.
+ *
+ * - For CCM mode, the first byte is reserved, and the
+ * nonce should be written starting at &iv[1] (to allow
+ * space for the implementation to write in the flags
+ * in the first byte). Note that a full 16 bytes should
+ * be allocated, even though the length field will
+ * have a value less than this.
+ *
+ * - For AES-XTS, this is the 128bit tweak, i, from
+ * IEEE Std 1619-2007.
+ *
+ * For optimum performance, the data pointed to SHOULD
+ * be 8-byte aligned.
+ */
+ uint16_t length;
+ /**< Length of valid IV data.
+ *
+ * - For block ciphers in CBC or F8 mode, or for KASUMI
+ * in F8 mode, or for SNOW 3G in UEA2 mode, this is the
+ * length of the IV (which must be the same as the
+ * block length of the cipher).
+ *
+ * - For block ciphers in CTR mode, this is the length
+ * of the counter (which must be the same as the block
+ * length of the cipher).
+ *
+ * - For GCM mode, this is either 12 (for 96-bit IVs)
+ * or 16, in which case data points to J0.
+ *
+ * - For CCM mode, this is the length of the nonce,
+ * which can be in the range 7 to 13 inclusive.
+ */
+ } iv; /**< Initialisation vector parameters */
};
/** Symmetric Authentication / Hash Algorithms */
@@ -463,55 +512,6 @@ struct rte_crypto_sym_op {
*/
} data; /**< Data offsets and length for ciphering */
- struct {
- uint16_t offset;
- /**< Starting point for Initialisation Vector or Counter,
- * specified as number of bytes from start of crypto
- * operation.
- *
- * - For block ciphers in CBC or F8 mode, or for KASUMI
- * in F8 mode, or for SNOW 3G in UEA2 mode, this is the
- * Initialisation Vector (IV) value.
- *
- * - For block ciphers in CTR mode, this is the counter.
- *
- * - For GCM mode, this is either the IV (if the length
- * is 96 bits) or J0 (for other sizes), where J0 is as
- * defined by NIST SP800-38D. Regardless of the IV
- * length, a full 16 bytes needs to be allocated.
- *
- * - For CCM mode, the first byte is reserved, and the
- * nonce should be written starting at &iv[1] (to allow
- * space for the implementation to write in the flags
- * in the first byte). Note that a full 16 bytes should
- * be allocated, even though the length field will
- * have a value less than this.
- *
- * - For AES-XTS, this is the 128bit tweak, i, from
- * IEEE Std 1619-2007.
- *
- * For optimum performance, the data pointed to SHOULD
- * be 8-byte aligned.
- */
- uint16_t length;
- /**< Length of valid IV data.
- *
- * - For block ciphers in CBC or F8 mode, or for KASUMI
- * in F8 mode, or for SNOW 3G in UEA2 mode, this is the
- * length of the IV (which must be the same as the
- * block length of the cipher).
- *
- * - For block ciphers in CTR mode, this is the length
- * of the counter (which must be the same as the block
- * length of the cipher).
- *
- * - For GCM mode, this is either 12 (for 96-bit IVs)
- * or 16, in which case data points to J0.
- *
- * - For CCM mode, this is the length of the nonce,
- * which can be in the range 7 to 13 inclusive.
- */
- } iv; /**< Initialisation vector parameters */
} cipher;
struct {
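
With the iv struct relocated as above, an application sets the IV offset and
length once in the cipher transform instead of on every operation. A minimal
sketch, assuming the IV is appended directly after the symmetric op (the
IV_OFFSET convention used by the examples in this patch); cipher_key and
iv_data are hypothetical application buffers:

#define IV_OFFSET (sizeof(struct rte_crypto_op) + \
		sizeof(struct rte_crypto_sym_op))

struct rte_crypto_sym_xform cipher_xform = {
	.type = RTE_CRYPTO_SYM_XFORM_CIPHER,
	.cipher = {
		.algo = RTE_CRYPTO_CIPHER_AES_CBC,
		.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
		.key = { .data = cipher_key, .length = 16 },
		.iv = { .offset = IV_OFFSET, .length = 16 },
	},
};

/* Per operation, only the IV bytes themselves are copied: */
rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
		iv_data, 16);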
diff --git a/test/test/test_cryptodev.c b/test/test/test_cryptodev.c
index 0dd95ca..1f066fb 100644
--- a/test/test/test_cryptodev.c
+++ b/test/test/test_cryptodev.c
@@ -1270,6 +1270,8 @@ test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
ut_params->cipher_xform.cipher.key.data = aes_cbc_key;
ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+ ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ ut_params->cipher_xform.cipher.iv.length = CIPHER_IV_LENGTH_AES_CBC;
/* Setup HMAC Parameters */
ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
@@ -1310,13 +1312,11 @@ test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
sym_op->auth.data.offset = 0;
sym_op->auth.data.length = QUOTE_512_BYTES;
- /* Set crypto operation cipher parameters */
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = CIPHER_IV_LENGTH_AES_CBC;
-
+ /* Copy IV at the end of the crypto operation */
rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+ /* Set crypto operation cipher parameters */
sym_op->cipher.data.offset = 0;
sym_op->cipher.data.length = QUOTE_512_BYTES;
@@ -1404,6 +1404,8 @@ test_AES_CBC_HMAC_SHA512_decrypt_create_session_params(
ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
ut_params->cipher_xform.cipher.key.data = cipher_key;
ut_params->cipher_xform.cipher.key.length = CIPHER_KEY_LENGTH_AES_CBC;
+ ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ ut_params->cipher_xform.cipher.iv.length = CIPHER_IV_LENGTH_AES_CBC;
/* Setup HMAC Parameters */
ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
@@ -1462,9 +1464,7 @@ test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_sym_session *sess,
sym_op->auth.data.offset = 0;
sym_op->auth.data.length = QUOTE_512_BYTES;
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = CIPHER_IV_LENGTH_AES_CBC;
-
+ /* Copy IV at the end of the crypto operation */
rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
iv, CIPHER_IV_LENGTH_AES_CBC);
@@ -1806,7 +1806,8 @@ static int
create_wireless_algo_cipher_session(uint8_t dev_id,
enum rte_crypto_cipher_operation op,
enum rte_crypto_cipher_algorithm algo,
- const uint8_t *key, const uint8_t key_len)
+ const uint8_t *key, const uint8_t key_len,
+ uint8_t iv_len)
{
uint8_t cipher_key[key_len];
@@ -1822,6 +1823,8 @@ create_wireless_algo_cipher_session(uint8_t dev_id,
ut_params->cipher_xform.cipher.op = op;
ut_params->cipher_xform.cipher.key.data = cipher_key;
ut_params->cipher_xform.cipher.key.length = key_len;
+ ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ ut_params->cipher_xform.cipher.iv.length = iv_len;
TEST_HEXDUMP(stdout, "key:", key, key_len);
@@ -1856,9 +1859,6 @@ create_wireless_algo_cipher_operation(const uint8_t *iv, uint8_t iv_len,
sym_op->m_src = ut_params->ibuf;
/* iv */
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = iv_len;
-
rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
iv, iv_len);
sym_op->cipher.data.length = cipher_len;
@@ -1890,9 +1890,6 @@ create_wireless_algo_cipher_operation_oop(const uint8_t *iv, uint8_t iv_len,
sym_op->m_dst = ut_params->obuf;
/* iv */
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = iv_len;
-
rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
iv, iv_len);
sym_op->cipher.data.length = cipher_len;
@@ -1907,7 +1904,8 @@ create_wireless_algo_cipher_auth_session(uint8_t dev_id,
enum rte_crypto_auth_algorithm auth_algo,
enum rte_crypto_cipher_algorithm cipher_algo,
const uint8_t *key, const uint8_t key_len,
- const uint8_t aad_len, const uint8_t auth_len)
+ const uint8_t aad_len, const uint8_t auth_len,
+ uint8_t iv_len)
{
uint8_t cipher_auth_key[key_len];
@@ -1936,6 +1934,8 @@ create_wireless_algo_cipher_auth_session(uint8_t dev_id,
ut_params->cipher_xform.cipher.op = cipher_op;
ut_params->cipher_xform.cipher.key.data = cipher_auth_key;
ut_params->cipher_xform.cipher.key.length = key_len;
+ ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ ut_params->cipher_xform.cipher.iv.length = iv_len;
TEST_HEXDUMP(stdout, "key:", key, key_len);
@@ -1962,6 +1962,7 @@ create_wireless_cipher_auth_session(uint8_t dev_id,
const uint8_t *key = tdata->key.data;
const uint8_t aad_len = tdata->aad.len;
const uint8_t auth_len = tdata->digest.len;
+ uint8_t iv_len = tdata->iv.len;
memcpy(cipher_auth_key, key, key_len);
@@ -1985,6 +1986,9 @@ create_wireless_cipher_auth_session(uint8_t dev_id,
ut_params->cipher_xform.cipher.op = cipher_op;
ut_params->cipher_xform.cipher.key.data = cipher_auth_key;
ut_params->cipher_xform.cipher.key.length = key_len;
+ ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ ut_params->cipher_xform.cipher.iv.length = iv_len;
+
TEST_HEXDUMP(stdout, "key:", key, key_len);
@@ -2013,7 +2017,8 @@ create_wireless_algo_auth_cipher_session(uint8_t dev_id,
enum rte_crypto_auth_algorithm auth_algo,
enum rte_crypto_cipher_algorithm cipher_algo,
const uint8_t *key, const uint8_t key_len,
- const uint8_t aad_len, const uint8_t auth_len)
+ const uint8_t aad_len, const uint8_t auth_len,
+ uint8_t iv_len)
{
uint8_t auth_cipher_key[key_len];
@@ -2038,6 +2043,8 @@ create_wireless_algo_auth_cipher_session(uint8_t dev_id,
ut_params->cipher_xform.cipher.op = cipher_op;
ut_params->cipher_xform.cipher.key.data = auth_cipher_key;
ut_params->cipher_xform.cipher.key.length = key_len;
+ ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ ut_params->cipher_xform.cipher.iv.length = iv_len;
TEST_HEXDUMP(stdout, "key:", key, key_len);
@@ -2211,9 +2218,6 @@ create_wireless_cipher_hash_operation(const struct wireless_test_data *tdata,
TEST_HEXDUMP(stdout, "aad:", sym_op->auth.aad.data, aad_len);
/* iv */
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = iv_len;
-
rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
iv, iv_len);
sym_op->cipher.data.length = cipher_len;
@@ -2306,9 +2310,6 @@ create_wireless_algo_cipher_hash_operation(const uint8_t *auth_tag,
TEST_HEXDUMP(stdout, "aad:", sym_op->auth.aad.data, aad_len);
/* iv */
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = iv_len;
-
rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
iv, iv_len);
sym_op->cipher.data.length = cipher_len;
@@ -2389,9 +2390,6 @@ create_wireless_algo_auth_cipher_operation(const unsigned auth_tag_len,
sym_op->auth.aad.data, aad_len);
/* iv */
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = iv_len;
-
rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
iv, iv_len);
sym_op->cipher.data.length = cipher_len;
@@ -2801,7 +2799,8 @@ test_kasumi_encryption(const struct kasumi_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
RTE_CRYPTO_CIPHER_KASUMI_F8,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -2876,7 +2875,8 @@ test_kasumi_encryption_sgl(const struct kasumi_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
RTE_CRYPTO_CIPHER_KASUMI_F8,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -2940,7 +2940,8 @@ test_kasumi_encryption_oop(const struct kasumi_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
RTE_CRYPTO_CIPHER_KASUMI_F8,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -3017,7 +3018,8 @@ test_kasumi_encryption_oop_sgl(const struct kasumi_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
RTE_CRYPTO_CIPHER_KASUMI_F8,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -3080,7 +3082,8 @@ test_kasumi_decryption_oop(const struct kasumi_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_DECRYPT,
RTE_CRYPTO_CIPHER_KASUMI_F8,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -3146,7 +3149,8 @@ test_kasumi_decryption(const struct kasumi_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_DECRYPT,
RTE_CRYPTO_CIPHER_KASUMI_F8,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -3210,7 +3214,8 @@ test_snow3g_encryption(const struct snow3g_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -3274,7 +3279,8 @@ test_snow3g_encryption_oop(const struct snow3g_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -3355,7 +3361,8 @@ test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -3444,7 +3451,8 @@ test_snow3g_encryption_offset_oop(const struct snow3g_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -3535,7 +3543,8 @@ static int test_snow3g_decryption(const struct snow3g_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_DECRYPT,
RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -3596,7 +3605,8 @@ static int test_snow3g_decryption_oop(const struct snow3g_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_DECRYPT,
RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -3759,7 +3769,8 @@ test_snow3g_cipher_auth(const struct snow3g_test_data *tdata)
RTE_CRYPTO_AUTH_SNOW3G_UIA2,
RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
tdata->key.data, tdata->key.len,
- tdata->aad.len, tdata->digest.len);
+ tdata->aad.len, tdata->digest.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
@@ -3841,7 +3852,8 @@ test_snow3g_auth_cipher(const struct snow3g_test_data *tdata)
RTE_CRYPTO_AUTH_SNOW3G_UIA2,
RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
tdata->key.data, tdata->key.len,
- tdata->aad.len, tdata->digest.len);
+ tdata->aad.len, tdata->digest.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -3927,7 +3939,8 @@ test_kasumi_auth_cipher(const struct kasumi_test_data *tdata)
RTE_CRYPTO_AUTH_KASUMI_F9,
RTE_CRYPTO_CIPHER_KASUMI_F8,
tdata->key.data, tdata->key.len,
- tdata->aad.len, tdata->digest.len);
+ tdata->aad.len, tdata->digest.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
@@ -4009,7 +4022,8 @@ test_kasumi_cipher_auth(const struct kasumi_test_data *tdata)
RTE_CRYPTO_AUTH_KASUMI_F9,
RTE_CRYPTO_CIPHER_KASUMI_F8,
tdata->key.data, tdata->key.len,
- tdata->aad.len, tdata->digest.len);
+ tdata->aad.len, tdata->digest.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -4098,7 +4112,8 @@ test_zuc_encryption(const struct wireless_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
RTE_CRYPTO_CIPHER_ZUC_EEA3,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -4193,7 +4208,8 @@ test_zuc_encryption_sgl(const struct wireless_test_data *tdata)
retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
RTE_CRYPTO_CIPHER_ZUC_EEA3,
- tdata->key.data, tdata->key.len);
+ tdata->key.data, tdata->key.len,
+ tdata->iv.len);
if (retval < 0)
return retval;
@@ -4725,6 +4741,7 @@ static int
create_gcm_session(uint8_t dev_id, enum rte_crypto_cipher_operation op,
const uint8_t *key, const uint8_t key_len,
const uint8_t aad_len, const uint8_t auth_len,
+ uint8_t iv_len,
enum rte_crypto_auth_operation auth_op)
{
uint8_t cipher_key[key_len];
@@ -4742,6 +4759,8 @@ create_gcm_session(uint8_t dev_id, enum rte_crypto_cipher_operation op,
ut_params->cipher_xform.cipher.op = op;
ut_params->cipher_xform.cipher.key.data = cipher_key;
ut_params->cipher_xform.cipher.key.length = key_len;
+ ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ ut_params->cipher_xform.cipher.iv.length = iv_len;
TEST_HEXDUMP(stdout, "key:", key, key_len);
@@ -4778,6 +4797,7 @@ create_gcm_xforms(struct rte_crypto_op *op,
enum rte_crypto_cipher_operation cipher_op,
uint8_t *key, const uint8_t key_len,
const uint8_t aad_len, const uint8_t auth_len,
+ uint8_t iv_len,
enum rte_crypto_auth_operation auth_op)
{
TEST_ASSERT_NOT_NULL(rte_crypto_op_sym_xforms_alloc(op, 2),
@@ -4791,6 +4811,8 @@ create_gcm_xforms(struct rte_crypto_op *op,
sym_op->xform->cipher.op = cipher_op;
sym_op->xform->cipher.key.data = key;
sym_op->xform->cipher.key.length = key_len;
+ sym_op->xform->cipher.iv.offset = IV_OFFSET;
+ sym_op->xform->cipher.iv.length = iv_len;
TEST_HEXDUMP(stdout, "key:", key, key_len);
@@ -4842,12 +4864,10 @@ create_gcm_operation(enum rte_crypto_cipher_operation op,
/* Append IV at the end of the crypto operation*/
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ut_params->op,
uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = tdata->iv.len;
rte_memcpy(iv_ptr, tdata->iv.data, tdata->iv.len);
TEST_HEXDUMP(stdout, "iv:", iv_ptr,
- sym_op->cipher.iv.length);
+ tdata->iv.len);
/* Append plaintext/ciphertext */
if (op == RTE_CRYPTO_CIPHER_OP_ENCRYPT) {
@@ -4951,6 +4971,7 @@ test_mb_AES_GCM_authenticated_encryption(const struct gcm_test_data *tdata)
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
tdata->key.data, tdata->key.len,
tdata->aad.len, tdata->auth_tag.len,
+ tdata->iv.len,
RTE_CRYPTO_AUTH_OP_GENERATE);
if (retval < 0)
return retval;
@@ -5128,6 +5149,7 @@ test_mb_AES_GCM_authenticated_decryption(const struct gcm_test_data *tdata)
RTE_CRYPTO_CIPHER_OP_DECRYPT,
tdata->key.data, tdata->key.len,
tdata->aad.len, tdata->auth_tag.len,
+ tdata->iv.len,
RTE_CRYPTO_AUTH_OP_VERIFY);
if (retval < 0)
return retval;
@@ -5294,6 +5316,7 @@ test_AES_GCM_authenticated_encryption_oop(const struct gcm_test_data *tdata)
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
tdata->key.data, tdata->key.len,
tdata->aad.len, tdata->auth_tag.len,
+ tdata->iv.len,
RTE_CRYPTO_AUTH_OP_GENERATE);
if (retval < 0)
return retval;
@@ -5370,6 +5393,7 @@ test_AES_GCM_authenticated_decryption_oop(const struct gcm_test_data *tdata)
RTE_CRYPTO_CIPHER_OP_DECRYPT,
tdata->key.data, tdata->key.len,
tdata->aad.len, tdata->auth_tag.len,
+ tdata->iv.len,
RTE_CRYPTO_AUTH_OP_VERIFY);
if (retval < 0)
return retval;
@@ -5453,6 +5477,7 @@ test_AES_GCM_authenticated_encryption_sessionless(
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
key, tdata->key.len,
tdata->aad.len, tdata->auth_tag.len,
+ tdata->iv.len,
RTE_CRYPTO_AUTH_OP_GENERATE);
if (retval < 0)
return retval;
@@ -5533,6 +5558,7 @@ test_AES_GCM_authenticated_decryption_sessionless(
RTE_CRYPTO_CIPHER_OP_DECRYPT,
key, tdata->key.len,
tdata->aad.len, tdata->auth_tag.len,
+ tdata->iv.len,
RTE_CRYPTO_AUTH_OP_VERIFY);
if (retval < 0)
return retval;
@@ -6417,9 +6443,6 @@ create_gmac_operation(enum rte_crypto_auth_operation op,
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ut_params->op,
uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = tdata->iv.len;
-
rte_memcpy(iv_ptr, tdata->iv.data, tdata->iv.len);
TEST_HEXDUMP(stdout, "iv:", iv_ptr, tdata->iv.len);
@@ -6451,6 +6474,8 @@ static int create_gmac_session(uint8_t dev_id,
ut_params->cipher_xform.cipher.op = op;
ut_params->cipher_xform.cipher.key.data = cipher_key;
ut_params->cipher_xform.cipher.key.length = tdata->key.len;
+ ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ ut_params->cipher_xform.cipher.iv.length = tdata->iv.len;
ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
ut_params->auth_xform.next = NULL;
@@ -6849,6 +6874,8 @@ create_auth_cipher_session(struct crypto_unittest_params *ut_params,
ut_params->cipher_xform.cipher.op = cipher_op;
ut_params->cipher_xform.cipher.key.data = cipher_key;
ut_params->cipher_xform.cipher.key.length = reference->cipher_key.len;
+ ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ ut_params->cipher_xform.cipher.iv.length = reference->iv.len;
/* Create Crypto session*/
ut_params->sess = rte_cryptodev_sym_session_create(dev_id,
@@ -6960,9 +6987,6 @@ create_auth_GMAC_operation(struct crypto_testsuite_params *ts_params,
sym_op->auth.digest.data,
sym_op->auth.digest.length);
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = reference->iv.len;
-
rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
reference->iv.data, reference->iv.len);
@@ -7017,9 +7041,6 @@ create_cipher_auth_operation(struct crypto_testsuite_params *ts_params,
sym_op->auth.digest.data,
sym_op->auth.digest.length);
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = reference->iv.len;
-
rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
reference->iv.data, reference->iv.len);
@@ -7267,8 +7288,6 @@ create_gcm_operation_SGL(enum rte_crypto_cipher_operation op,
uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ut_params->op,
uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = iv_len;
rte_memcpy(iv_ptr, tdata->iv.data, iv_len);
@@ -7349,6 +7368,7 @@ test_AES_GCM_authenticated_encryption_SGL(const struct gcm_test_data *tdata,
RTE_CRYPTO_CIPHER_OP_ENCRYPT,
tdata->key.data, tdata->key.len,
tdata->aad.len, tdata->auth_tag.len,
+ tdata->iv.len,
RTE_CRYPTO_AUTH_OP_GENERATE);
if (retval < 0)
return retval;
diff --git a/test/test/test_cryptodev_blockcipher.c b/test/test/test_cryptodev_blockcipher.c
index ad5fb0d..c69e83e 100644
--- a/test/test/test_cryptodev_blockcipher.c
+++ b/test/test/test_cryptodev_blockcipher.c
@@ -287,11 +287,11 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
RTE_CRYPTO_CIPHER_OP_DECRYPT;
cipher_xform->cipher.key.data = cipher_key;
cipher_xform->cipher.key.length = tdata->cipher_key.len;
+ cipher_xform->cipher.iv.offset = IV_OFFSET;
+ cipher_xform->cipher.iv.length = tdata->iv.len;
sym_op->cipher.data.offset = 0;
sym_op->cipher.data.length = tdata->ciphertext.len;
- sym_op->cipher.iv.offset = IV_OFFSET;
- sym_op->cipher.iv.length = tdata->iv.len;
rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
tdata->iv.data,
tdata->iv.len);
diff --git a/test/test/test_cryptodev_perf.c b/test/test/test_cryptodev_perf.c
index 86bdc6e..7238bfa 100644
--- a/test/test/test_cryptodev_perf.c
+++ b/test/test/test_cryptodev_perf.c
@@ -43,6 +43,8 @@
#include "test_cryptodev.h"
#include "test_cryptodev_gcm_test_vectors.h"
+#define AES_CIPHER_IV_LENGTH 16
+#define TRIPLE_DES_CIPHER_IV_LENGTH 8
#define PERF_NUM_OPS_INFLIGHT (128)
#define DEFAULT_NUM_REQS_TO_SUBMIT (10000000)
@@ -67,9 +69,6 @@ enum chain_mode {
struct symmetric_op {
- const uint8_t *iv_data;
- uint32_t iv_len;
-
const uint8_t *aad_data;
uint32_t aad_len;
@@ -96,6 +95,8 @@ struct symmetric_session_attrs {
const uint8_t *key_auth_data;
uint32_t key_auth_len;
+ const uint8_t *iv_data;
+ uint16_t iv_len;
uint32_t digest_len;
};
@@ -1933,7 +1934,8 @@ test_perf_crypto_qp_vary_burst_size(uint16_t dev_num)
ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
ut_params->cipher_xform.cipher.key.data = aes_cbc_128_key;
ut_params->cipher_xform.cipher.key.length = CIPHER_IV_LENGTH_AES_CBC;
-
+ ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+ ut_params->cipher_xform.cipher.iv.length = CIPHER_IV_LENGTH_AES_CBC;
/* Setup HMAC Parameters */
ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
@@ -1981,9 +1983,6 @@ test_perf_crypto_qp_vary_burst_size(uint16_t dev_num)
op->sym->auth.data.offset = 0;
op->sym->auth.data.length = data_params[0].length;
- op->sym->cipher.iv.offset = IV_OFFSET;
- op->sym->cipher.iv.length = CIPHER_IV_LENGTH_AES_CBC;
-
rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
aes_cbc_128_iv, CIPHER_IV_LENGTH_AES_CBC);
@@ -2646,6 +2645,8 @@ test_perf_create_aes_sha_session(uint8_t dev_id, enum chain_mode chain,
cipher_xform.cipher.key.data = aes_key;
cipher_xform.cipher.key.length = cipher_key_len;
+ cipher_xform.cipher.iv.offset = IV_OFFSET;
+ cipher_xform.cipher.iv.length = AES_CIPHER_IV_LENGTH;
if (chain != CIPHER_ONLY) {
/* Setup HMAC Parameters */
auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
@@ -2694,6 +2695,9 @@ test_perf_create_snow3g_session(uint8_t dev_id, enum chain_mode chain,
cipher_xform.cipher.key.data = snow3g_cipher_key;
cipher_xform.cipher.key.length = cipher_key_len;
+ cipher_xform.cipher.iv.offset = IV_OFFSET;
+ cipher_xform.cipher.iv.length = SNOW3G_CIPHER_IV_LENGTH;
+
/* Setup HMAC Parameters */
auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
@@ -2741,17 +2745,20 @@ test_perf_create_openssl_session(uint8_t dev_id, enum chain_mode chain,
/* Setup Cipher Parameters */
cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
cipher_xform.cipher.algo = cipher_algo;
+ cipher_xform.cipher.iv.offset = IV_OFFSET;
cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
switch (cipher_algo) {
case RTE_CRYPTO_CIPHER_3DES_CBC:
case RTE_CRYPTO_CIPHER_3DES_CTR:
cipher_xform.cipher.key.data = triple_des_key;
+ cipher_xform.cipher.iv.length = TRIPLE_DES_CIPHER_IV_LENGTH;
break;
case RTE_CRYPTO_CIPHER_AES_CBC:
case RTE_CRYPTO_CIPHER_AES_CTR:
case RTE_CRYPTO_CIPHER_AES_GCM:
cipher_xform.cipher.key.data = aes_key;
+ cipher_xform.cipher.iv.length = AES_CIPHER_IV_LENGTH;
break;
default:
return NULL;
@@ -2816,6 +2823,8 @@ test_perf_create_armv8_session(uint8_t dev_id, enum chain_mode chain,
}
cipher_xform.cipher.key.length = cipher_key_len;
+ cipher_xform.cipher.iv.offset = IV_OFFSET;
+ cipher_xform.cipher.iv.length = AES_CIPHER_IV_LENGTH;
/* Setup Auth Parameters */
auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
@@ -2844,9 +2853,7 @@ test_perf_create_armv8_session(uint8_t dev_id, enum chain_mode chain,
}
}
-#define AES_CIPHER_IV_LENGTH 16
#define AES_GCM_AAD_LENGTH 16
-#define TRIPLE_DES_CIPHER_IV_LENGTH 8
static struct rte_mbuf *
test_perf_create_pktmbuf(struct rte_mempool *mpool, unsigned buf_sz)
@@ -2893,12 +2900,11 @@ test_perf_set_crypto_op_aes(struct rte_crypto_op *op, struct rte_mbuf *m,
}
- /* Cipher Parameters */
- op->sym->cipher.iv.offset = IV_OFFSET;
- op->sym->cipher.iv.length = AES_CIPHER_IV_LENGTH;
+ /* Copy the IV at the end of the crypto operation */
rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
aes_iv, AES_CIPHER_IV_LENGTH);
+ /* Cipher Parameters */
op->sym->cipher.data.offset = 0;
op->sym->cipher.data.length = data_len;
@@ -2926,9 +2932,7 @@ test_perf_set_crypto_op_aes_gcm(struct rte_crypto_op *op, struct rte_mbuf *m,
op->sym->auth.aad.data = aes_gcm_aad;
op->sym->auth.aad.length = AES_GCM_AAD_LENGTH;
- /* Cipher Parameters */
- op->sym->cipher.iv.offset = IV_OFFSET;
- op->sym->cipher.iv.length = AES_CIPHER_IV_LENGTH;
+ /* Copy IV at the end of the crypto operation */
rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
aes_iv, AES_CIPHER_IV_LENGTH);
@@ -2970,10 +2974,6 @@ test_perf_set_crypto_op_snow3g(struct rte_crypto_op *op, struct rte_mbuf *m,
IV_OFFSET);
op->sym->auth.aad.length = SNOW3G_CIPHER_IV_LENGTH;
- /* Cipher Parameters */
- op->sym->cipher.iv.offset = IV_OFFSET;
- op->sym->cipher.iv.length = SNOW3G_CIPHER_IV_LENGTH;
-
/* Data lengths/offsets Parameters */
op->sym->auth.data.offset = 0;
op->sym->auth.data.length = data_len << 3;
@@ -2997,9 +2997,7 @@ test_perf_set_crypto_op_snow3g_cipher(struct rte_crypto_op *op,
return NULL;
}
- /* Cipher Parameters */
- op->sym->cipher.iv.offset = IV_OFFSET;
- op->sym->cipher.iv.length = SNOW3G_CIPHER_IV_LENGTH;
+ /* Copy IV at the end of the crypto operation */
rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
snow3g_iv, SNOW3G_CIPHER_IV_LENGTH);
@@ -3068,9 +3066,7 @@ test_perf_set_crypto_op_3des(struct rte_crypto_op *op, struct rte_mbuf *m,
rte_pktmbuf_mtophys_offset(m, data_len);
op->sym->auth.digest.length = digest_len;
- /* Cipher Parameters */
- op->sym->cipher.iv.offset = IV_OFFSET;
- op->sym->cipher.iv.length = TRIPLE_DES_CIPHER_IV_LENGTH;
+ /* Copy IV at the end of the crypto operation */
rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
triple_des_iv, TRIPLE_DES_CIPHER_IV_LENGTH);
@@ -4136,6 +4132,8 @@ test_perf_create_session(uint8_t dev_id, struct perf_test_params *pparams)
cipher_xform.cipher.op = pparams->session_attrs->cipher;
cipher_xform.cipher.key.data = cipher_key;
cipher_xform.cipher.key.length = pparams->session_attrs->key_cipher_len;
+ cipher_xform.cipher.iv.length = pparams->session_attrs->iv_len;
+ cipher_xform.cipher.iv.offset = IV_OFFSET;
auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
auth_xform.next = NULL;
@@ -4190,14 +4188,11 @@ perf_gcm_set_crypto_op(struct rte_crypto_op *op, struct rte_mbuf *m,
rte_memcpy(op->sym->auth.aad.data, params->symmetric_op->aad_data,
params->symmetric_op->aad_len);
- op->sym->cipher.iv.offset = IV_OFFSET;
- rte_memcpy(iv_ptr, params->symmetric_op->iv_data,
- params->symmetric_op->iv_len);
- if (params->symmetric_op->iv_len == 12)
+ rte_memcpy(iv_ptr, params->session_attrs->iv_data,
+ params->session_attrs->iv_len);
+ if (params->session_attrs->iv_len == 12)
iv_ptr[15] = 1;
- op->sym->cipher.iv.length = params->symmetric_op->iv_len;
-
op->sym->auth.data.offset =
params->symmetric_op->aad_len;
op->sym->auth.data.length = params->symmetric_op->p_len;
@@ -4434,11 +4429,11 @@ test_perf_AES_GCM(int continual_buf_len, int continual_size)
session_attrs[i].key_auth_len = 0;
session_attrs[i].digest_len =
gcm_test->auth_tag.len;
+ session_attrs[i].iv_len = gcm_test->iv.len;
+ session_attrs[i].iv_data = gcm_test->iv.data;
ops_set[i].aad_data = gcm_test->aad.data;
ops_set[i].aad_len = gcm_test->aad.len;
- ops_set[i].iv_data = gcm_test->iv.data;
- ops_set[i].iv_len = gcm_test->iv.len;
ops_set[i].p_data = gcm_test->plaintext.data;
ops_set[i].p_len = buf_lengths[i];
ops_set[i].c_data = gcm_test->ciphertext.data;
--
2.9.4
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v2 13/27] cryptodev: pass IV as offset
` (3 preceding siblings ...)
2017-06-26 10:22 4% ` [dpdk-dev] [PATCH v2 04/27] cryptodev: do not store pointer to op specific params Pablo de Lara
@ 2017-06-26 10:22 1% ` Pablo de Lara
2017-06-26 10:22 1% ` [dpdk-dev] [PATCH v2 14/27] cryptodev: move IV parameters to crypto session Pablo de Lara
` (6 subsequent siblings)
11 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2017-06-26 10:22 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal, hemant.agrawal
Cc: dev, Pablo de Lara
Since the IV is now copied right after the crypto operation,
into its private data area, the IV can be passed as just an
offset and length, instead of a pointer and physical address.
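As a rough illustration of the new convention (a sketch, not part
of this patch: IV_OFFSET is an assumption here, the private area
right after the symmetric op, which is what the unit tests reserve
when creating the op pool):

#include <stdint.h>
#include <rte_crypto.h>
#include <rte_memcpy.h>

/* Assumption: the op pool was created with enough private space
 * for the IV, located right after rte_crypto_op + rte_crypto_sym_op
 * (this matches the IV_OFFSET used by the unit tests).
 */
#define IV_OFFSET (sizeof(struct rte_crypto_op) + \
		sizeof(struct rte_crypto_sym_op))

static void
set_cipher_iv(struct rte_crypto_op *op, const uint8_t *iv, uint16_t iv_len)
{
	uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
			IV_OFFSET);

	/* The descriptor now carries only offset and length */
	op->sym->cipher.iv.offset = IV_OFFSET;
	op->sym->cipher.iv.length = iv_len;

	/* The IV bytes themselves live in the op private data area */
	rte_memcpy(iv_ptr, iv, iv_len);
}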
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_ops.c | 21 ++----
doc/guides/prog_guide/cryptodev_lib.rst | 3 +-
doc/guides/rel_notes/release_17_08.rst | 2 +
drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 80 +++++++++++----------
drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c | 3 +-
drivers/crypto/armv8/rte_armv8_pmd.c | 3 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 8 ++-
drivers/crypto/kasumi/rte_kasumi_pmd.c | 26 ++++---
drivers/crypto/openssl/rte_openssl_pmd.c | 12 ++--
drivers/crypto/qat/qat_crypto.c | 30 +++++---
drivers/crypto/snow3g/rte_snow3g_pmd.c | 14 ++--
drivers/crypto/zuc/rte_zuc_pmd.c | 7 +-
examples/ipsec-secgw/esp.c | 14 +---
examples/l2fwd-crypto/main.c | 5 +-
lib/librte_cryptodev/rte_crypto_sym.h | 7 +-
test/test/test_cryptodev.c | 107 +++++++++++-----------------
test/test/test_cryptodev_blockcipher.c | 8 +--
test/test/test_cryptodev_perf.c | 60 ++++++----------
18 files changed, 193 insertions(+), 217 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 423bdae..1e151a9 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -106,12 +106,9 @@ cperf_set_ops_cipher(struct rte_crypto_op **ops,
sym_op->m_dst = bufs_out[i];
/* cipher parameters */
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ops[i],
- uint8_t *, iv_offset);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ops[i],
- iv_offset);
+ sym_op->cipher.iv.offset = iv_offset;
sym_op->cipher.iv.length = test_vector->iv.length;
- memcpy(sym_op->cipher.iv.data,
+ memcpy(rte_crypto_op_ctod_offset(ops[i], uint8_t *, iv_offset),
test_vector->iv.data,
test_vector->iv.length);
@@ -213,12 +210,9 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
sym_op->m_dst = bufs_out[i];
/* cipher parameters */
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ops[i],
- uint8_t *, iv_offset);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ops[i],
- iv_offset);
+ sym_op->cipher.iv.offset = iv_offset;
sym_op->cipher.iv.length = test_vector->iv.length;
- memcpy(sym_op->cipher.iv.data,
+ memcpy(rte_crypto_op_ctod_offset(ops[i], uint8_t *, iv_offset),
test_vector->iv.data,
test_vector->iv.length);
@@ -297,12 +291,9 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
sym_op->m_dst = bufs_out[i];
/* cipher parameters */
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ops[i],
- uint8_t *, iv_offset);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ops[i],
- iv_offset);
+ sym_op->cipher.iv.offset = iv_offset;
sym_op->cipher.iv.length = test_vector->iv.length;
- memcpy(sym_op->cipher.iv.data,
+ memcpy(rte_crypto_op_ctod_offset(ops[i], uint8_t *, iv_offset),
test_vector->iv.data,
test_vector->iv.length);
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index c9a29f8..48c58a9 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -537,8 +537,7 @@ chain.
} data; /**< Data offsets and length for ciphering */
struct {
- uint8_t *data;
- phys_addr_t phys_addr;
+ uint16_t offset;
uint16_t length;
} iv; /**< Initialisation vector parameters */
} cipher;
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index 6acbf35..68e8022 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -157,6 +157,8 @@ API Changes
* Removed the field ``opaque_data`` from ``rte_crypto_op``.
* Pointer to ``rte_crypto_sym_op`` in ``rte_crypto_op`` has been replaced
with a zero length array.
+ * Replaced pointer and physical address of IV in ``rte_crypto_sym_op`` with
+ offset from the start of the crypto operation.
ABI Changes
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 165f5a1..9a23415 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -180,12 +180,14 @@ aesni_gcm_get_session(struct aesni_gcm_qp *qp, struct rte_crypto_op *op)
*
*/
static int
-process_gcm_crypto_op(struct rte_crypto_sym_op *op,
+process_gcm_crypto_op(struct rte_crypto_op *op,
struct aesni_gcm_session *session)
{
uint8_t *src, *dst;
- struct rte_mbuf *m_src = op->m_src;
- uint32_t offset = op->cipher.data.offset;
+ uint8_t *iv_ptr;
+ struct rte_crypto_sym_op *sym_op = op->sym;
+ struct rte_mbuf *m_src = sym_op->m_src;
+ uint32_t offset = sym_op->cipher.data.offset;
uint32_t part_len, total_len, data_len;
RTE_ASSERT(m_src != NULL);
@@ -198,46 +200,48 @@ process_gcm_crypto_op(struct rte_crypto_sym_op *op,
}
data_len = m_src->data_len - offset;
- part_len = (data_len < op->cipher.data.length) ? data_len :
- op->cipher.data.length;
+ part_len = (data_len < sym_op->cipher.data.length) ? data_len :
+ sym_op->cipher.data.length;
/* Destination buffer is required when segmented source buffer */
- RTE_ASSERT((part_len == op->cipher.data.length) ||
- ((part_len != op->cipher.data.length) &&
- (op->m_dst != NULL)));
+ RTE_ASSERT((part_len == sym_op->cipher.data.length) ||
+ ((part_len != sym_op->cipher.data.length) &&
+ (sym_op->m_dst != NULL)));
/* Segmented destination buffer is not supported */
- RTE_ASSERT((op->m_dst == NULL) ||
- ((op->m_dst != NULL) &&
- rte_pktmbuf_is_contiguous(op->m_dst)));
+ RTE_ASSERT((sym_op->m_dst == NULL) ||
+ ((sym_op->m_dst != NULL) &&
+ rte_pktmbuf_is_contiguous(sym_op->m_dst)));
- dst = op->m_dst ?
- rte_pktmbuf_mtod_offset(op->m_dst, uint8_t *,
- op->cipher.data.offset) :
- rte_pktmbuf_mtod_offset(op->m_src, uint8_t *,
- op->cipher.data.offset);
+ dst = sym_op->m_dst ?
+ rte_pktmbuf_mtod_offset(sym_op->m_dst, uint8_t *,
+ sym_op->cipher.data.offset) :
+ rte_pktmbuf_mtod_offset(sym_op->m_src, uint8_t *,
+ sym_op->cipher.data.offset);
src = rte_pktmbuf_mtod_offset(m_src, uint8_t *, offset);
/* sanity checks */
- if (op->cipher.iv.length != 16 && op->cipher.iv.length != 12 &&
- op->cipher.iv.length != 0) {
+ if (sym_op->cipher.iv.length != 16 && sym_op->cipher.iv.length != 12 &&
+ sym_op->cipher.iv.length != 0) {
GCM_LOG_ERR("iv");
return -1;
}
+ iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
+ sym_op->cipher.iv.offset);
/*
* GCM working in 12B IV mode => 16B pre-counter block we need
* to set BE LSB to 1, driver expects that 16B is allocated
*/
- if (op->cipher.iv.length == 12) {
- uint32_t *iv_padd = (uint32_t *)&op->cipher.iv.data[12];
+ if (sym_op->cipher.iv.length == 12) {
+ uint32_t *iv_padd = (uint32_t *)&(iv_ptr[12]);
*iv_padd = rte_bswap32(1);
}
- if (op->auth.digest.length != 16 &&
- op->auth.digest.length != 12 &&
- op->auth.digest.length != 8) {
+ if (sym_op->auth.digest.length != 16 &&
+ sym_op->auth.digest.length != 12 &&
+ sym_op->auth.digest.length != 8) {
GCM_LOG_ERR("digest");
return -1;
}
@@ -245,13 +249,13 @@ process_gcm_crypto_op(struct rte_crypto_sym_op *op,
if (session->op == AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION) {
aesni_gcm_enc[session->key].init(&session->gdata,
- op->cipher.iv.data,
- op->auth.aad.data,
- (uint64_t)op->auth.aad.length);
+ iv_ptr,
+ sym_op->auth.aad.data,
+ (uint64_t)sym_op->auth.aad.length);
aesni_gcm_enc[session->key].update(&session->gdata, dst, src,
(uint64_t)part_len);
- total_len = op->cipher.data.length - part_len;
+ total_len = sym_op->cipher.data.length - part_len;
while (total_len) {
dst += part_len;
@@ -270,12 +274,12 @@ process_gcm_crypto_op(struct rte_crypto_sym_op *op,
}
aesni_gcm_enc[session->key].finalize(&session->gdata,
- op->auth.digest.data,
- (uint64_t)op->auth.digest.length);
+ sym_op->auth.digest.data,
+ (uint64_t)sym_op->auth.digest.length);
} else { /* session->op == AESNI_GCM_OP_AUTHENTICATED_DECRYPTION */
- uint8_t *auth_tag = (uint8_t *)rte_pktmbuf_append(op->m_dst ?
- op->m_dst : op->m_src,
- op->auth.digest.length);
+ uint8_t *auth_tag = (uint8_t *)rte_pktmbuf_append(sym_op->m_dst ?
+ sym_op->m_dst : sym_op->m_src,
+ sym_op->auth.digest.length);
if (!auth_tag) {
GCM_LOG_ERR("auth_tag");
@@ -283,13 +287,13 @@ process_gcm_crypto_op(struct rte_crypto_sym_op *op,
}
aesni_gcm_dec[session->key].init(&session->gdata,
- op->cipher.iv.data,
- op->auth.aad.data,
- (uint64_t)op->auth.aad.length);
+ iv_ptr,
+ sym_op->auth.aad.data,
+ (uint64_t)sym_op->auth.aad.length);
aesni_gcm_dec[session->key].update(&session->gdata, dst, src,
(uint64_t)part_len);
- total_len = op->cipher.data.length - part_len;
+ total_len = sym_op->cipher.data.length - part_len;
while (total_len) {
dst += part_len;
@@ -309,7 +313,7 @@ process_gcm_crypto_op(struct rte_crypto_sym_op *op,
aesni_gcm_dec[session->key].finalize(&session->gdata,
auth_tag,
- (uint64_t)op->auth.digest.length);
+ (uint64_t)sym_op->auth.digest.length);
}
return 0;
@@ -401,7 +405,7 @@ aesni_gcm_pmd_dequeue_burst(void *queue_pair,
break;
}
- retval = process_gcm_crypto_op(ops[i]->sym, sess);
+ retval = process_gcm_crypto_op(ops[i], sess);
if (retval < 0) {
ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
qp->qp_stats.dequeue_err_count++;
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index efdc321..1685fa8 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -471,7 +471,8 @@ set_mb_job_params(JOB_AES_HMAC *job, struct aesni_mb_qp *qp,
get_truncated_digest_byte_length(job->hash_alg);
/* Set IV parameters */
- job->iv = op->sym->cipher.iv.data;
+ job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+ op->sym->cipher.iv.offset);
job->iv_len_in_bytes = op->sym->cipher.iv.length;
/* Data Parameter */
diff --git a/drivers/crypto/armv8/rte_armv8_pmd.c b/drivers/crypto/armv8/rte_armv8_pmd.c
index 04d8781..fa6a7d5 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd.c
@@ -654,7 +654,8 @@ process_armv8_chained_op
return;
}
- arg.cipher.iv = op->sym->cipher.iv.data;
+ arg.cipher.iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+ op->sym->cipher.iv.offset);
arg.cipher.key = sess->cipher.key.data;
/* Acquire combined mode function */
crypto_func = sess->crypto_func;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index e154395..1605701 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -87,6 +87,8 @@ build_authenc_fd(dpaa2_sec_session *sess,
int icv_len = sym_op->auth.digest.length;
uint8_t *old_icv;
uint32_t mem_len = (7 * sizeof(struct qbman_fle)) + icv_len;
+ uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
+ op->sym->cipher.iv.offset);
PMD_INIT_FUNC_TRACE();
@@ -178,7 +180,7 @@ build_authenc_fd(dpaa2_sec_session *sess,
sym_op->auth.digest.length);
/* Configure Input SGE for Encap/Decap */
- DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->cipher.iv.data));
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(iv_ptr));
sge->length = sym_op->cipher.iv.length;
sge++;
@@ -307,6 +309,8 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
uint32_t mem_len = (5 * sizeof(struct qbman_fle));
struct sec_flow_context *flc;
struct ctxt_priv *priv = sess->ctxt;
+ uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
+ op->sym->cipher.iv.offset);
PMD_INIT_FUNC_TRACE();
@@ -369,7 +373,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
DPAA2_SET_FLE_SG_EXT(fle);
- DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->cipher.iv.data));
+ DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(iv_ptr));
sge->length = sym_op->cipher.iv.length;
sge++;
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd.c b/drivers/crypto/kasumi/rte_kasumi_pmd.c
index 667ebfc..edf84e8 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd.c
@@ -174,7 +174,8 @@ process_kasumi_cipher_op(struct rte_crypto_op **ops,
unsigned i;
uint8_t processed_ops = 0;
uint8_t *src[num_ops], *dst[num_ops];
- uint64_t IV[num_ops];
+ uint8_t *iv_ptr;
+ uint64_t iv[num_ops];
uint32_t num_bytes[num_ops];
for (i = 0; i < num_ops; i++) {
@@ -192,14 +193,16 @@ process_kasumi_cipher_op(struct rte_crypto_op **ops,
(ops[i]->sym->cipher.data.offset >> 3) :
rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
(ops[i]->sym->cipher.data.offset >> 3);
- IV[i] = *((uint64_t *)(ops[i]->sym->cipher.iv.data));
+ iv_ptr = rte_crypto_op_ctod_offset(ops[i], uint8_t *,
+ ops[i]->sym->cipher.iv.offset);
+ iv[i] = *((uint64_t *)(iv_ptr));
num_bytes[i] = ops[i]->sym->cipher.data.length >> 3;
processed_ops++;
}
if (processed_ops != 0)
- sso_kasumi_f8_n_buffer(&session->pKeySched_cipher, IV,
+ sso_kasumi_f8_n_buffer(&session->pKeySched_cipher, iv,
src, dst, num_bytes, processed_ops);
return processed_ops;
@@ -211,7 +214,8 @@ process_kasumi_cipher_op_bit(struct rte_crypto_op *op,
struct kasumi_session *session)
{
uint8_t *src, *dst;
- uint64_t IV;
+ uint8_t *iv_ptr;
+ uint64_t iv;
uint32_t length_in_bits, offset_in_bits;
/* Sanity checks. */
@@ -229,10 +233,12 @@ process_kasumi_cipher_op_bit(struct rte_crypto_op *op,
return 0;
}
dst = rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *);
- IV = *((uint64_t *)(op->sym->cipher.iv.data));
+ iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
+ op->sym->cipher.iv.offset);
+ iv = *((uint64_t *)(iv_ptr));
length_in_bits = op->sym->cipher.data.length;
- sso_kasumi_f8_1_buffer_bit(&session->pKeySched_cipher, IV,
+ sso_kasumi_f8_1_buffer_bit(&session->pKeySched_cipher, iv,
src, dst, length_in_bits, offset_in_bits);
return 1;
@@ -250,7 +256,7 @@ process_kasumi_hash_op(struct rte_crypto_op **ops,
uint32_t length_in_bits;
uint32_t num_bytes;
uint32_t shift_bits;
- uint64_t IV;
+ uint64_t iv;
uint8_t direction;
for (i = 0; i < num_ops; i++) {
@@ -278,7 +284,7 @@ process_kasumi_hash_op(struct rte_crypto_op **ops,
src = rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
(ops[i]->sym->auth.data.offset >> 3);
/* IV from AAD */
- IV = *((uint64_t *)(ops[i]->sym->auth.aad.data));
+ iv = *((uint64_t *)(ops[i]->sym->auth.aad.data));
/* Direction from next bit after end of message */
num_bytes = (length_in_bits >> 3) + 1;
shift_bits = (BYTE_LEN - 1 - length_in_bits) % BYTE_LEN;
@@ -289,7 +295,7 @@ process_kasumi_hash_op(struct rte_crypto_op **ops,
ops[i]->sym->auth.digest.length);
sso_kasumi_f9_1_buffer_user(&session->pKeySched_hash,
- IV, src,
+ iv, src,
length_in_bits, dst, direction);
/* Verify digest. */
if (memcmp(dst, ops[i]->sym->auth.digest.data,
@@ -303,7 +309,7 @@ process_kasumi_hash_op(struct rte_crypto_op **ops,
dst = ops[i]->sym->auth.digest.data;
sso_kasumi_f9_1_buffer_user(&session->pKeySched_hash,
- IV, src,
+ iv, src,
length_in_bits, dst, direction);
}
processed_ops++;
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 9f032d3..02cf25a 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -923,7 +923,8 @@ process_openssl_combined_op
return;
}
- iv = op->sym->cipher.iv.data;
+ iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+ op->sym->cipher.iv.offset);
ivlen = op->sym->cipher.iv.length;
aad = op->sym->auth.aad.data;
aadlen = op->sym->auth.aad.length;
@@ -987,7 +988,8 @@ process_openssl_cipher_op
dst = rte_pktmbuf_mtod_offset(mbuf_dst, uint8_t *,
op->sym->cipher.data.offset);
- iv = op->sym->cipher.iv.data;
+ iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+ op->sym->cipher.iv.offset);
if (sess->cipher.mode == OPENSSL_CIPHER_LIB)
if (sess->cipher.direction == RTE_CRYPTO_CIPHER_OP_ENCRYPT)
@@ -1028,7 +1030,8 @@ process_openssl_docsis_bpi_op(struct rte_crypto_op *op,
dst = rte_pktmbuf_mtod_offset(mbuf_dst, uint8_t *,
op->sym->cipher.data.offset);
- iv = op->sym->cipher.iv.data;
+ iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+ op->sym->cipher.iv.offset);
block_size = DES_BLOCK_SIZE;
@@ -1086,7 +1089,8 @@ process_openssl_docsis_bpi_op(struct rte_crypto_op *op,
dst, iv,
last_block_len, sess->cipher.bpi_ctx);
/* Prepare parameters for CBC mode op */
- iv = op->sym->cipher.iv.data;
+ iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+ op->sym->cipher.iv.offset);
dst += last_block_len - srclen;
srclen -= last_block_len;
}
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
index bdd5bab..7015549 100644
--- a/drivers/crypto/qat/qat_crypto.c
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -642,7 +642,8 @@ qat_bpicipher_preprocess(struct qat_session *ctx,
iv = last_block - block_len;
else
/* runt block, i.e. less than one full block */
- iv = sym_op->cipher.iv.data;
+ iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+ sym_op->cipher.iv.offset);
#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX
rte_hexdump(stdout, "BPI: src before pre-process:", last_block,
@@ -697,7 +698,8 @@ qat_bpicipher_postprocess(struct qat_session *ctx,
iv = dst - block_len;
else
/* runt block, i.e. less than one full block */
- iv = sym_op->cipher.iv.data;
+ iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+ sym_op->cipher.iv.offset);
#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_RX
rte_hexdump(stdout, "BPI: src before post-process:", last_block,
@@ -903,6 +905,7 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
uint32_t min_ofs = 0;
uint64_t src_buf_start = 0, dst_buf_start = 0;
uint8_t do_sgl = 0;
+ uint8_t *iv_ptr;
#ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX
@@ -976,6 +979,8 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
cipher_ofs = op->sym->cipher.data.offset;
}
+ iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
+ op->sym->cipher.iv.offset);
/* copy IV into request if it fits */
/*
* If IV length is zero do not copy anything but still
@@ -986,14 +991,15 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
if (op->sym->cipher.iv.length <=
sizeof(cipher_param->u.cipher_IV_array)) {
rte_memcpy(cipher_param->u.cipher_IV_array,
- op->sym->cipher.iv.data,
+ iv_ptr,
op->sym->cipher.iv.length);
} else {
ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(
qat_req->comn_hdr.serv_specif_flags,
ICP_QAT_FW_CIPH_IV_64BIT_PTR);
cipher_param->u.s.cipher_IV_ptr =
- op->sym->cipher.iv.phys_addr;
+ rte_crypto_op_ctophys_offset(op,
+ op->sym->cipher.iv.offset);
}
}
min_ofs = cipher_ofs;
@@ -1194,12 +1200,16 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
rte_hexdump(stdout, "src_data:",
rte_pktmbuf_mtod(op->sym->m_src, uint8_t*),
rte_pktmbuf_data_len(op->sym->m_src));
- rte_hexdump(stdout, "iv:", op->sym->cipher.iv.data,
- op->sym->cipher.iv.length);
- rte_hexdump(stdout, "digest:", op->sym->auth.digest.data,
- op->sym->auth.digest.length);
- rte_hexdump(stdout, "aad:", op->sym->auth.aad.data,
- op->sym->auth.aad.length);
+ if (do_cipher)
+ rte_hexdump(stdout, "iv:", iv_ptr,
+ op->sym->cipher.iv.length);
+
+ if (do_auth) {
+ rte_hexdump(stdout, "digest:", op->sym->auth.digest.data,
+ op->sym->auth.digest.length);
+ rte_hexdump(stdout, "aad:", op->sym->auth.aad.data,
+ op->sym->auth.aad.length);
+ }
#endif
return 0;
}
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd.c b/drivers/crypto/snow3g/rte_snow3g_pmd.c
index 6261656..6df8416 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd.c
@@ -174,7 +174,7 @@ process_snow3g_cipher_op(struct rte_crypto_op **ops,
unsigned i;
uint8_t processed_ops = 0;
uint8_t *src[SNOW3G_MAX_BURST], *dst[SNOW3G_MAX_BURST];
- uint8_t *IV[SNOW3G_MAX_BURST];
+ uint8_t *iv[SNOW3G_MAX_BURST];
uint32_t num_bytes[SNOW3G_MAX_BURST];
for (i = 0; i < num_ops; i++) {
@@ -192,13 +192,14 @@ process_snow3g_cipher_op(struct rte_crypto_op **ops,
(ops[i]->sym->cipher.data.offset >> 3) :
rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
(ops[i]->sym->cipher.data.offset >> 3);
- IV[i] = ops[i]->sym->cipher.iv.data;
+ iv[i] = rte_crypto_op_ctod_offset(ops[i], uint8_t *,
+ ops[i]->sym->cipher.iv.offset);
num_bytes[i] = ops[i]->sym->cipher.data.length >> 3;
processed_ops++;
}
- sso_snow3g_f8_n_buffer(&session->pKeySched_cipher, IV, src, dst,
+ sso_snow3g_f8_n_buffer(&session->pKeySched_cipher, iv, src, dst,
num_bytes, processed_ops);
return processed_ops;
@@ -210,7 +211,7 @@ process_snow3g_cipher_op_bit(struct rte_crypto_op *op,
struct snow3g_session *session)
{
uint8_t *src, *dst;
- uint8_t *IV;
+ uint8_t *iv;
uint32_t length_in_bits, offset_in_bits;
/* Sanity checks. */
@@ -228,10 +229,11 @@ process_snow3g_cipher_op_bit(struct rte_crypto_op *op,
return 0;
}
dst = rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *);
- IV = op->sym->cipher.iv.data;
+ iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+ op->sym->cipher.iv.offset);
length_in_bits = op->sym->cipher.data.length;
- sso_snow3g_f8_1_buffer_bit(&session->pKeySched_cipher, IV,
+ sso_snow3g_f8_1_buffer_bit(&session->pKeySched_cipher, iv,
src, dst, length_in_bits, offset_in_bits);
return 1;
diff --git a/drivers/crypto/zuc/rte_zuc_pmd.c b/drivers/crypto/zuc/rte_zuc_pmd.c
index d2263b4..8374f65 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd.c
@@ -173,7 +173,7 @@ process_zuc_cipher_op(struct rte_crypto_op **ops,
unsigned i;
uint8_t processed_ops = 0;
uint8_t *src[ZUC_MAX_BURST], *dst[ZUC_MAX_BURST];
- uint8_t *IV[ZUC_MAX_BURST];
+ uint8_t *iv[ZUC_MAX_BURST];
uint32_t num_bytes[ZUC_MAX_BURST];
uint8_t *cipher_keys[ZUC_MAX_BURST];
@@ -213,7 +213,8 @@ process_zuc_cipher_op(struct rte_crypto_op **ops,
(ops[i]->sym->cipher.data.offset >> 3) :
rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
(ops[i]->sym->cipher.data.offset >> 3);
- IV[i] = ops[i]->sym->cipher.iv.data;
+ iv[i] = rte_crypto_op_ctod_offset(ops[i], uint8_t *,
+ ops[i]->sym->cipher.iv.offset);
num_bytes[i] = ops[i]->sym->cipher.data.length >> 3;
cipher_keys[i] = session->pKey_cipher;
@@ -221,7 +222,7 @@ process_zuc_cipher_op(struct rte_crypto_op **ops,
processed_ops++;
}
- sso_zuc_eea3_n_buffer(cipher_keys, IV, src, dst,
+ sso_zuc_eea3_n_buffer(cipher_keys, iv, src, dst,
num_bytes, processed_ops);
return processed_ops;
diff --git a/examples/ipsec-secgw/esp.c b/examples/ipsec-secgw/esp.c
index 5bf2d7d..738a800 100644
--- a/examples/ipsec-secgw/esp.c
+++ b/examples/ipsec-secgw/esp.c
@@ -104,9 +104,7 @@ esp_inbound(struct rte_mbuf *m, struct ipsec_sa *sa,
case RTE_CRYPTO_CIPHER_AES_CBC:
/* Copy IV at the end of crypto operation */
rte_memcpy(iv_ptr, iv, sa->iv_len);
- sym_cop->cipher.iv.data = iv_ptr;
- sym_cop->cipher.iv.phys_addr =
- rte_crypto_op_ctophys_offset(cop, IV_OFFSET);
+ sym_cop->cipher.iv.offset = IV_OFFSET;
sym_cop->cipher.iv.length = sa->iv_len;
break;
case RTE_CRYPTO_CIPHER_AES_CTR:
@@ -115,9 +113,7 @@ esp_inbound(struct rte_mbuf *m, struct ipsec_sa *sa,
icb->salt = sa->salt;
memcpy(&icb->iv, iv, 8);
icb->cnt = rte_cpu_to_be_32(1);
- sym_cop->cipher.iv.data = iv_ptr;
- sym_cop->cipher.iv.phys_addr =
- rte_crypto_op_ctophys_offset(cop, IV_OFFSET);
+ sym_cop->cipher.iv.offset = IV_OFFSET;
sym_cop->cipher.iv.length = 16;
break;
default:
@@ -348,15 +344,11 @@ esp_outbound(struct rte_mbuf *m, struct ipsec_sa *sa,
padding[pad_len - 2] = pad_len - 2;
padding[pad_len - 1] = nlp;
- uint8_t *iv_ptr = rte_crypto_op_ctod_offset(cop,
- uint8_t *, IV_OFFSET);
struct cnt_blk *icb = get_cnt_blk(m);
icb->salt = sa->salt;
icb->iv = sa->seq;
icb->cnt = rte_cpu_to_be_32(1);
- sym_cop->cipher.iv.data = iv_ptr;
- sym_cop->cipher.iv.phys_addr =
- rte_crypto_op_ctophys_offset(cop, IV_OFFSET);
+ sym_cop->cipher.iv.offset = IV_OFFSET;
sym_cop->cipher.iv.length = 16;
uint8_t *aad;
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 1380bc6..ffd9731 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -489,9 +489,7 @@ l2fwd_simple_crypto_enqueue(struct rte_mbuf *m,
/* Copy IV at the end of the crypto operation */
rte_memcpy(iv_ptr, cparams->iv.data, cparams->iv.length);
- op->sym->cipher.iv.data = iv_ptr;
- op->sym->cipher.iv.phys_addr =
- rte_crypto_op_ctophys_offset(op, IV_OFFSET);
+ op->sym->cipher.iv.offset = IV_OFFSET;
op->sym->cipher.iv.length = cparams->iv.length;
/* For wireless algorithms, offset/length must be in bits */
@@ -700,7 +698,6 @@ l2fwd_main_loop(struct l2fwd_crypto_options *options)
if (port_cparams[i].do_cipher) {
port_cparams[i].iv.data = options->iv.data;
port_cparams[i].iv.length = options->iv.length;
- port_cparams[i].iv.phys_addr = options->iv.phys_addr;
if (!options->iv_param)
generate_random_key(port_cparams[i].iv.data,
port_cparams[i].iv.length);
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index 39ad1e3..b35c45a 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -464,8 +464,10 @@ struct rte_crypto_sym_op {
} data; /**< Data offsets and length for ciphering */
struct {
- uint8_t *data;
- /**< Initialisation Vector or Counter.
+ uint16_t offset;
+ /**< Starting point for Initialisation Vector or Counter,
+ * specified as number of bytes from start of crypto
+ * operation.
*
* - For block ciphers in CBC or F8 mode, or for KASUMI
* in F8 mode, or for SNOW 3G in UEA2 mode, this is the
@@ -491,7 +493,6 @@ struct rte_crypto_sym_op {
* For optimum performance, the data pointed to SHOULD
* be 8-byte aligned.
*/
- phys_addr_t phys_addr;
uint16_t length;
/**< Length of valid IV data.
*
diff --git a/test/test/test_cryptodev.c b/test/test/test_cryptodev.c
index 133c439..0dd95ca 100644
--- a/test/test/test_cryptodev.c
+++ b/test/test/test_cryptodev.c
@@ -1311,13 +1311,11 @@ test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
sym_op->auth.data.length = QUOTE_512_BYTES;
/* Set crypto operation cipher parameters */
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ut_params->op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ut_params->op,
- IV_OFFSET);
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = CIPHER_IV_LENGTH_AES_CBC;
- rte_memcpy(sym_op->cipher.iv.data, aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
+ rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
+ aes_cbc_iv, CIPHER_IV_LENGTH_AES_CBC);
sym_op->cipher.data.offset = 0;
sym_op->cipher.data.length = QUOTE_512_BYTES;
@@ -1464,13 +1462,11 @@ test_AES_CBC_HMAC_SHA512_decrypt_perform(struct rte_cryptodev_sym_session *sess,
sym_op->auth.data.offset = 0;
sym_op->auth.data.length = QUOTE_512_BYTES;
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ut_params->op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ut_params->op,
- IV_OFFSET);
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = CIPHER_IV_LENGTH_AES_CBC;
- rte_memcpy(sym_op->cipher.iv.data, iv, CIPHER_IV_LENGTH_AES_CBC);
+ rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
+ iv, CIPHER_IV_LENGTH_AES_CBC);
sym_op->cipher.data.offset = 0;
sym_op->cipher.data.length = QUOTE_512_BYTES;
@@ -1860,13 +1856,11 @@ create_wireless_algo_cipher_operation(const uint8_t *iv, uint8_t iv_len,
sym_op->m_src = ut_params->ibuf;
/* iv */
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ut_params->op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ut_params->op,
- IV_OFFSET);
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = iv_len;
- rte_memcpy(sym_op->cipher.iv.data, iv, iv_len);
+ rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
+ iv, iv_len);
sym_op->cipher.data.length = cipher_len;
sym_op->cipher.data.offset = cipher_offset;
return 0;
@@ -1896,13 +1890,11 @@ create_wireless_algo_cipher_operation_oop(const uint8_t *iv, uint8_t iv_len,
sym_op->m_dst = ut_params->obuf;
/* iv */
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ut_params->op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ut_params->op,
- IV_OFFSET);
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = iv_len;
- rte_memcpy(sym_op->cipher.iv.data, iv, iv_len);
+ rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
+ iv, iv_len);
sym_op->cipher.data.length = cipher_len;
sym_op->cipher.data.offset = cipher_offset;
return 0;
@@ -2219,13 +2211,11 @@ create_wireless_cipher_hash_operation(const struct wireless_test_data *tdata,
TEST_HEXDUMP(stdout, "aad:", sym_op->auth.aad.data, aad_len);
/* iv */
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ut_params->op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ut_params->op,
- IV_OFFSET);
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = iv_len;
- rte_memcpy(sym_op->cipher.iv.data, iv, iv_len);
+ rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
+ iv, iv_len);
sym_op->cipher.data.length = cipher_len;
sym_op->cipher.data.offset = cipher_offset + auth_offset;
sym_op->auth.data.length = auth_len;
@@ -2316,13 +2306,11 @@ create_wireless_algo_cipher_hash_operation(const uint8_t *auth_tag,
TEST_HEXDUMP(stdout, "aad:", sym_op->auth.aad.data, aad_len);
/* iv */
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ut_params->op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ut_params->op,
- IV_OFFSET);
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = iv_len;
- rte_memcpy(sym_op->cipher.iv.data, iv, iv_len);
+ rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
+ iv, iv_len);
sym_op->cipher.data.length = cipher_len;
sym_op->cipher.data.offset = cipher_offset + auth_offset;
sym_op->auth.data.length = auth_len;
@@ -2401,14 +2389,11 @@ create_wireless_algo_auth_cipher_operation(const unsigned auth_tag_len,
sym_op->auth.aad.data, aad_len);
/* iv */
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ut_params->op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ut_params->op,
- IV_OFFSET);
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = iv_len;
- rte_memcpy(sym_op->cipher.iv.data, iv, iv_len);
-
+ rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
+ iv, iv_len);
sym_op->cipher.data.length = cipher_len;
sym_op->cipher.data.offset = auth_offset + cipher_offset;
@@ -4855,14 +4840,13 @@ create_gcm_operation(enum rte_crypto_cipher_operation op,
sym_op->auth.aad.length);
/* Append IV at the end of the crypto operation*/
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ut_params->op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ut_params->op,
- IV_OFFSET);
+ uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ut_params->op,
+ uint8_t *, IV_OFFSET);
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = tdata->iv.len;
- rte_memcpy(sym_op->cipher.iv.data, tdata->iv.data, tdata->iv.len);
- TEST_HEXDUMP(stdout, "iv:", sym_op->cipher.iv.data,
+ rte_memcpy(iv_ptr, tdata->iv.data, tdata->iv.len);
+ TEST_HEXDUMP(stdout, "iv:", iv_ptr,
sym_op->cipher.iv.length);
/* Append plaintext/ciphertext */
@@ -6430,15 +6414,15 @@ create_gmac_operation(enum rte_crypto_auth_operation op,
sym_op->auth.digest.length);
}
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ut_params->op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ut_params->op,
- IV_OFFSET);
+ uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ut_params->op,
+ uint8_t *, IV_OFFSET);
+
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = tdata->iv.len;
- rte_memcpy(sym_op->cipher.iv.data, tdata->iv.data, tdata->iv.len);
+ rte_memcpy(iv_ptr, tdata->iv.data, tdata->iv.len);
- TEST_HEXDUMP(stdout, "iv:", sym_op->cipher.iv.data, tdata->iv.len);
+ TEST_HEXDUMP(stdout, "iv:", iv_ptr, tdata->iv.len);
sym_op->cipher.data.length = 0;
sym_op->cipher.data.offset = 0;
@@ -6976,13 +6960,11 @@ create_auth_GMAC_operation(struct crypto_testsuite_params *ts_params,
sym_op->auth.digest.data,
sym_op->auth.digest.length);
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ut_params->op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ut_params->op,
- IV_OFFSET);
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = reference->iv.len;
- rte_memcpy(sym_op->cipher.iv.data, reference->iv.data, reference->iv.len);
+ rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
+ reference->iv.data, reference->iv.len);
sym_op->cipher.data.length = 0;
sym_op->cipher.data.offset = 0;
@@ -7035,13 +7017,11 @@ create_cipher_auth_operation(struct crypto_testsuite_params *ts_params,
sym_op->auth.digest.data,
sym_op->auth.digest.length);
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ut_params->op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ut_params->op,
- IV_OFFSET);
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = reference->iv.len;
- rte_memcpy(sym_op->cipher.iv.data, reference->iv.data, reference->iv.len);
+ rte_memcpy(rte_crypto_op_ctod_offset(ut_params->op, uint8_t *, IV_OFFSET),
+ reference->iv.data, reference->iv.len);
sym_op->cipher.data.length = reference->ciphertext.len;
sym_op->cipher.data.offset = 0;
@@ -7285,13 +7265,12 @@ create_gcm_operation_SGL(enum rte_crypto_cipher_operation op,
sym_op->auth.digest.length);
}
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ut_params->op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ut_params->op,
- IV_OFFSET);
+ uint8_t *iv_ptr = rte_crypto_op_ctod_offset(ut_params->op,
+ uint8_t *, IV_OFFSET);
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = iv_len;
- rte_memcpy(sym_op->cipher.iv.data, tdata->iv.data, iv_len);
+ rte_memcpy(iv_ptr, tdata->iv.data, iv_len);
sym_op->auth.aad.data = (uint8_t *)rte_pktmbuf_prepend(
ut_params->ibuf, aad_len);
@@ -7304,7 +7283,7 @@ create_gcm_operation_SGL(enum rte_crypto_cipher_operation op,
memset(sym_op->auth.aad.data, 0, aad_len);
rte_memcpy(sym_op->auth.aad.data, tdata->aad.data, aad_len);
- TEST_HEXDUMP(stdout, "iv:", sym_op->cipher.iv.data, iv_len);
+ TEST_HEXDUMP(stdout, "iv:", iv_ptr, iv_len);
TEST_HEXDUMP(stdout, "aad:",
sym_op->auth.aad.data, aad_len);
diff --git a/test/test/test_cryptodev_blockcipher.c b/test/test/test_cryptodev_blockcipher.c
index 186e169..ad5fb0d 100644
--- a/test/test/test_cryptodev_blockcipher.c
+++ b/test/test/test_cryptodev_blockcipher.c
@@ -290,12 +290,10 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
sym_op->cipher.data.offset = 0;
sym_op->cipher.data.length = tdata->ciphertext.len;
- sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(op,
- uint8_t *, IV_OFFSET);
- sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(op,
- IV_OFFSET);
+ sym_op->cipher.iv.offset = IV_OFFSET;
sym_op->cipher.iv.length = tdata->iv.len;
- rte_memcpy(sym_op->cipher.iv.data, tdata->iv.data,
+ rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
+ tdata->iv.data,
tdata->iv.len);
}
diff --git a/test/test/test_cryptodev_perf.c b/test/test/test_cryptodev_perf.c
index b08451d..86bdc6e 100644
--- a/test/test/test_cryptodev_perf.c
+++ b/test/test/test_cryptodev_perf.c
@@ -1981,15 +1981,11 @@ test_perf_crypto_qp_vary_burst_size(uint16_t dev_num)
op->sym->auth.data.offset = 0;
op->sym->auth.data.length = data_params[0].length;
-
- op->sym->cipher.iv.data = rte_crypto_op_ctod_offset(op,
- uint8_t *, IV_OFFSET);
- op->sym->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(op,
- IV_OFFSET);
+ op->sym->cipher.iv.offset = IV_OFFSET;
op->sym->cipher.iv.length = CIPHER_IV_LENGTH_AES_CBC;
- rte_memcpy(op->sym->cipher.iv.data, aes_cbc_128_iv,
- CIPHER_IV_LENGTH_AES_CBC);
+ rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
+ aes_cbc_128_iv, CIPHER_IV_LENGTH_AES_CBC);
op->sym->cipher.data.offset = 0;
op->sym->cipher.data.length = data_params[0].length;
@@ -2898,13 +2894,10 @@ test_perf_set_crypto_op_aes(struct rte_crypto_op *op, struct rte_mbuf *m,
/* Cipher Parameters */
- op->sym->cipher.iv.data = rte_crypto_op_ctod_offset(op,
- uint8_t *, IV_OFFSET);
- op->sym->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(op,
- IV_OFFSET);
+ op->sym->cipher.iv.offset = IV_OFFSET;
op->sym->cipher.iv.length = AES_CIPHER_IV_LENGTH;
-
- rte_memcpy(op->sym->cipher.iv.data, aes_iv, AES_CIPHER_IV_LENGTH);
+ rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
+ aes_iv, AES_CIPHER_IV_LENGTH);
op->sym->cipher.data.offset = 0;
op->sym->cipher.data.length = data_len;
@@ -2934,12 +2927,10 @@ test_perf_set_crypto_op_aes_gcm(struct rte_crypto_op *op, struct rte_mbuf *m,
op->sym->auth.aad.length = AES_GCM_AAD_LENGTH;
/* Cipher Parameters */
- op->sym->cipher.iv.data = rte_crypto_op_ctod_offset(op,
- uint8_t *, IV_OFFSET);
- op->sym->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(op,
- IV_OFFSET);
+ op->sym->cipher.iv.offset = IV_OFFSET;
op->sym->cipher.iv.length = AES_CIPHER_IV_LENGTH;
- rte_memcpy(op->sym->cipher.iv.data, aes_iv, AES_CIPHER_IV_LENGTH);
+ rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
+ aes_iv, AES_CIPHER_IV_LENGTH);
/* Data lengths/offsets Parameters */
op->sym->auth.data.offset = 0;
@@ -2980,9 +2971,7 @@ test_perf_set_crypto_op_snow3g(struct rte_crypto_op *op, struct rte_mbuf *m,
op->sym->auth.aad.length = SNOW3G_CIPHER_IV_LENGTH;
/* Cipher Parameters */
- op->sym->cipher.iv.data = iv_ptr;
- op->sym->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(op,
- IV_OFFSET);
+ op->sym->cipher.iv.offset = IV_OFFSET;
op->sym->cipher.iv.length = SNOW3G_CIPHER_IV_LENGTH;
/* Data lengths/offsets Parameters */
@@ -3009,12 +2998,10 @@ test_perf_set_crypto_op_snow3g_cipher(struct rte_crypto_op *op,
}
/* Cipher Parameters */
- op->sym->cipher.iv.data = rte_crypto_op_ctod_offset(op,
- uint8_t *, IV_OFFSET);
- op->sym->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(op,
- IV_OFFSET);
+ op->sym->cipher.iv.offset = IV_OFFSET;
op->sym->cipher.iv.length = SNOW3G_CIPHER_IV_LENGTH;
- rte_memcpy(op->sym->cipher.iv.data, snow3g_iv, SNOW3G_CIPHER_IV_LENGTH);
+ rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
+ snow3g_iv, SNOW3G_CIPHER_IV_LENGTH);
op->sym->cipher.data.offset = 0;
op->sym->cipher.data.length = data_len << 3;
@@ -3082,13 +3069,10 @@ test_perf_set_crypto_op_3des(struct rte_crypto_op *op, struct rte_mbuf *m,
op->sym->auth.digest.length = digest_len;
/* Cipher Parameters */
- op->sym->cipher.iv.data = rte_crypto_op_ctod_offset(op,
- uint8_t *, IV_OFFSET);
- op->sym->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(op,
- IV_OFFSET);
+ op->sym->cipher.iv.offset = IV_OFFSET;
op->sym->cipher.iv.length = TRIPLE_DES_CIPHER_IV_LENGTH;
- rte_memcpy(op->sym->cipher.iv.data, triple_des_iv,
- TRIPLE_DES_CIPHER_IV_LENGTH);
+ rte_memcpy(rte_crypto_op_ctod_offset(op, uint8_t *, IV_OFFSET),
+ triple_des_iv, TRIPLE_DES_CIPHER_IV_LENGTH);
/* Data lengths/offsets Parameters */
op->sym->auth.data.offset = 0;
@@ -4183,6 +4167,9 @@ perf_gcm_set_crypto_op(struct rte_crypto_op *op, struct rte_mbuf *m,
struct crypto_params *m_hlp,
struct perf_test_params *params)
{
+ uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op,
+ uint8_t *, IV_OFFSET);
+
if (rte_crypto_op_attach_sym_session(op, sess) != 0) {
rte_crypto_op_free(op);
return NULL;
@@ -4203,14 +4190,11 @@ perf_gcm_set_crypto_op(struct rte_crypto_op *op, struct rte_mbuf *m,
rte_memcpy(op->sym->auth.aad.data, params->symmetric_op->aad_data,
params->symmetric_op->aad_len);
- op->sym->cipher.iv.data = rte_crypto_op_ctod_offset(op,
- uint8_t *, IV_OFFSET);
- op->sym->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(op,
- IV_OFFSET);
- rte_memcpy(op->sym->cipher.iv.data, params->symmetric_op->iv_data,
+ op->sym->cipher.iv.offset = IV_OFFSET;
+ rte_memcpy(iv_ptr, params->symmetric_op->iv_data,
params->symmetric_op->iv_len);
if (params->symmetric_op->iv_len == 12)
- op->sym->cipher.iv.data[15] = 1;
+ iv_ptr[15] = 1;
op->sym->cipher.iv.length = params->symmetric_op->iv_len;
--
2.9.4
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v2 04/27] cryptodev: do not store pointer to op specific params
` (2 preceding siblings ...)
2017-06-26 10:22 4% ` [dpdk-dev] [PATCH v2 03/27] cryptodev: remove opaque data pointer in crypto op Pablo de Lara
@ 2017-06-26 10:22 4% ` Pablo de Lara
2017-06-26 10:22 1% ` [dpdk-dev] [PATCH v2 13/27] cryptodev: pass IV as offset Pablo de Lara
` (7 subsequent siblings)
11 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2017-06-26 10:22 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal, hemant.agrawal
Cc: dev, Pablo de Lara
Instead of storing a pointer to operation specific parameters,
such as symmetric crypto parameters, use a zero-length array
to mark that these parameters are stored right after the
generic crypto operation structure (which the code already
assumed), reducing the memory footprint of the crypto operation.

Besides, rte_crypto_op and rte_crypto_sym_op (the only
operation specific parameters structure right now) are always
expected to be contiguous, as they are allocated as a single
object from the crypto operation pool.
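A minimal sketch of what the zero-length array buys (the helper
name is illustrative, not part of the patch):

#include <rte_crypto.h>

/* Resulting single-object layout:
 *   | rte_crypto_op | rte_crypto_sym_op | optional private data |
 */
static void *
op_private_area(struct rte_crypto_op *op)
{
	/* op->sym now aliases the memory immediately following the
	 * generic structure; no pointer is stored or initialized.
	 */
	return (void *)(op->sym + 1);
}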
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
doc/guides/rel_notes/release_17_08.rst | 2 ++
examples/ipsec-secgw/ipsec.c | 1 -
lib/librte_cryptodev/rte_crypto.h | 8 +-------
3 files changed, 3 insertions(+), 8 deletions(-)
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index 20f459e..6acbf35 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -155,6 +155,8 @@ API Changes
``rte_crypto_op_sess_type`` in ``rte_crypto_op`` have been modified to be
uint8_t values.
* Removed the field ``opaque_data`` from ``rte_crypto_op``.
+ * Pointer to ``rte_crypto_sym_op`` in ``rte_crypto_op`` has been replaced
+ with a zero length array.
ABI Changes
diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index edca5f0..126d79f 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -140,7 +140,6 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
priv->cop.status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
rte_prefetch0(&priv->sym_cop);
- priv->cop.sym = &priv->sym_cop;
if ((unlikely(sa->crypto_session == NULL)) &&
create_session(ipsec_ctx, sa)) {
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
index c2677fa..85716a6 100644
--- a/lib/librte_cryptodev/rte_crypto.h
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -124,7 +124,7 @@ struct rte_crypto_op {
RTE_STD_C11
union {
- struct rte_crypto_sym_op *sym;
+ struct rte_crypto_sym_op sym[0];
/**< Symmetric operation parameters */
}; /**< operation specific parameters */
} __rte_cache_aligned;
@@ -144,12 +144,6 @@ __rte_crypto_op_reset(struct rte_crypto_op *op, enum rte_crypto_op_type type)
switch (type) {
case RTE_CRYPTO_OP_TYPE_SYMMETRIC:
- /** Symmetric operation structure starts after the end of the
- * rte_crypto_op structure.
- */
- op->sym = (struct rte_crypto_sym_op *)(op + 1);
- op->type = type;
-
__rte_crypto_sym_op_reset(op->sym);
break;
default:
--
2.9.4
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v2 03/27] cryptodev: remove opaque data pointer in crypto op
2017-06-26 10:22 2% ` [dpdk-dev] [PATCH v2 01/27] cryptodev: move session type to generic crypto op Pablo de Lara
2017-06-26 10:22 4% ` [dpdk-dev] [PATCH v2 02/27] cryptodev: replace enums with 1-byte variables Pablo de Lara
@ 2017-06-26 10:22 4% ` Pablo de Lara
2017-06-26 10:22 4% ` [dpdk-dev] [PATCH v2 04/27] cryptodev: do not store pointer to op specific params Pablo de Lara
` (8 subsequent siblings)
11 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2017-06-26 10:22 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal, hemant.agrawal
Cc: dev, Pablo de Lara
Storing a pointer to the user data is unnecessary,
since the user can store additional data right after the
crypto operation.
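The cperf latency test below shows the replacement pattern in
situ; as a sketch (struct and field names are illustrative, and
the pool is assumed to have been created with enough private
space per op):

#include <rte_crypto.h>

struct priv_op_data {
	void *user_ctx; /* whatever the application tracks per op */
};

/* Assumes the op pool was created with
 * priv_size >= sizeof(struct priv_op_data).
 */
static void
set_user_data(struct rte_crypto_op *op, void *ctx)
{
	/* Private area starts right after the symmetric op,
	 * the same arithmetic the latency test uses below.
	 */
	struct priv_op_data *priv =
			(struct priv_op_data *)(op->sym + 1);

	priv->user_ctx = ctx;
}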
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
app/test-crypto-perf/cperf_test_latency.c | 36 +++++++++++++++++++++----------
doc/guides/prog_guide/cryptodev_lib.rst | 3 +--
doc/guides/rel_notes/release_17_08.rst | 1 +
lib/librte_cryptodev/rte_crypto.h | 5 -----
4 files changed, 27 insertions(+), 18 deletions(-)
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index e61ac97..a7443a3 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -66,6 +66,10 @@ struct cperf_latency_ctx {
struct cperf_op_result *res;
};
+struct priv_op_data {
+ struct cperf_op_result *result;
+};
+
#define max(a, b) (a > b ? (uint64_t)a : (uint64_t)b)
#define min(a, b) (a < b ? (uint64_t)a : (uint64_t)b)
@@ -276,8 +280,9 @@ cperf_latency_test_constructor(uint8_t dev_id, uint16_t qp_id,
snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
dev_id);
+ uint16_t priv_size = sizeof(struct priv_op_data);
ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
- RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz, 0, 0,
+ RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz, 0, priv_size,
rte_socket_id());
if (ctx->crypto_op_pool == NULL)
goto err;
@@ -295,11 +300,20 @@ cperf_latency_test_constructor(uint8_t dev_id, uint16_t qp_id,
return NULL;
}
+static inline void
+store_timestamp(struct rte_crypto_op *op, uint64_t timestamp)
+{
+ struct priv_op_data *priv_data;
+
+ priv_data = (struct priv_op_data *) (op->sym + 1);
+ priv_data->result->status = op->status;
+ priv_data->result->tsc_end = timestamp;
+}
+
int
cperf_latency_test_runner(void *arg)
{
struct cperf_latency_ctx *ctx = arg;
- struct cperf_op_result *pres;
uint16_t test_burst_size;
uint8_t burst_size_idx = 0;
@@ -311,6 +325,7 @@ cperf_latency_test_runner(void *arg)
struct rte_crypto_op *ops[ctx->options->max_burst_size];
struct rte_crypto_op *ops_processed[ctx->options->max_burst_size];
uint64_t i;
+ struct priv_op_data *priv_data;
uint32_t lcore = rte_lcore_id();
@@ -398,7 +413,12 @@ cperf_latency_test_runner(void *arg)
for (i = 0; i < ops_enqd; i++) {
ctx->res[tsc_idx].tsc_start = tsc_start;
- ops[i]->opaque_data = (void *)&ctx->res[tsc_idx];
+ /*
+ * Private data structure starts after the end of the
+ * rte_crypto_sym_op structure.
+ */
+ priv_data = (struct priv_op_data *) (ops[i]->sym + 1);
+ priv_data->result = (void *)&ctx->res[tsc_idx];
tsc_idx++;
}
@@ -410,10 +430,7 @@ cperf_latency_test_runner(void *arg)
* failures.
*/
for (i = 0; i < ops_deqd; i++) {
- pres = (struct cperf_op_result *)
- (ops_processed[i]->opaque_data);
- pres->status = ops_processed[i]->status;
- pres->tsc_end = tsc_end;
+ store_timestamp(ops_processed[i], tsc_end);
rte_crypto_op_free(ops_processed[i]);
}
@@ -446,10 +463,7 @@ cperf_latency_test_runner(void *arg)
if (ops_deqd != 0) {
for (i = 0; i < ops_deqd; i++) {
- pres = (struct cperf_op_result *)
- (ops_processed[i]->opaque_data);
- pres->status = ops_processed[i]->status;
- pres->tsc_end = tsc_end;
+ store_timestamp(ops_processed[i], tsc_end);
rte_crypto_op_free(ops_processed[i]);
}
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index 229cb7a..c9a29f8 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -363,8 +363,7 @@ The operation structure includes the operation type, the operation status
and the session type (session-based/less), a reference to the operation
specific data, which can vary in size and content depending on the operation
being provisioned. It also contains the source mempool for the operation,
-if it allocate from a mempool. Finally an opaque pointer for user specific
-data is provided.
+if it allocated from a mempool.
If Crypto operations are allocated from a Crypto operation mempool, see next
section, there is also the ability to allocate private memory with the
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index bbb14a9..20f459e 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -154,6 +154,7 @@ API Changes
* Enumerations ``rte_crypto_op_sess_type``, ``rte_crypto_op_status`` and
``rte_crypto_op_sess_type`` in ``rte_crypto_op`` have been modified to be
uint8_t values.
+ * Removed the field ``opaque_data`` from ``rte_crypto_op``.
ABI Changes
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
index 8e2b640..c2677fa 100644
--- a/lib/librte_cryptodev/rte_crypto.h
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -122,9 +122,6 @@ struct rte_crypto_op {
phys_addr_t phys_addr;
/**< physical address of crypto operation */
- void *opaque_data;
- /**< Opaque pointer for user data */
-
RTE_STD_C11
union {
struct rte_crypto_sym_op *sym;
@@ -158,8 +155,6 @@ __rte_crypto_op_reset(struct rte_crypto_op *op, enum rte_crypto_op_type type)
default:
break;
}
-
- op->opaque_data = NULL;
}
/**
--
2.9.4
* [dpdk-dev] [PATCH v2 02/27] cryptodev: replace enums with 1-byte variables
2017-06-26 10:22 2% ` [dpdk-dev] [PATCH v2 01/27] cryptodev: move session type to generic crypto op Pablo de Lara
@ 2017-06-26 10:22 4% ` Pablo de Lara
2017-06-26 10:22 4% ` [dpdk-dev] [PATCH v2 03/27] cryptodev: remove opaque data pointer in crypto op Pablo de Lara
` (9 subsequent siblings)
11 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2017-06-26 10:22 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal, hemant.agrawal
Cc: dev, Pablo de Lara
Instead of storing some crypto operation flags,
such as the operation status, as enumerations,
store them as uint8_t, for memory efficiency.
Also, reserve an extra 5 bytes in the crypto operation
for future additions.
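For illustration, the first 64 bits of the structure after this change
(a sketch matching the diff below):

    uint8_t type;        /**< operation type */
    uint8_t status;      /**< operation status */
    uint8_t sess_type;   /**< operation session type */
    uint8_t reserved[5]; /**< fill 64 bits for future additions */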
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
doc/guides/rel_notes/release_17_08.rst | 3 +++
lib/librte_cryptodev/rte_crypto.h | 9 +++++----
2 files changed, 8 insertions(+), 4 deletions(-)
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index 2bc405d..bbb14a9 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -151,6 +151,9 @@ API Changes
* Removed the field ``rte_crypto_sym_op_sess_type`` from ``rte_crypto_sym_op``,
and moved it to ``rte_crypto_op`` as ``rte_crypto_op_sess_type``.
+ * Enumerations ``rte_crypto_op_sess_type``, ``rte_crypto_op_status`` and
+ ``rte_crypto_op_sess_type`` in ``rte_crypto_op`` have been modified to be
+ uint8_t values.
ABI Changes
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
index ac5c184..8e2b640 100644
--- a/lib/librte_cryptodev/rte_crypto.h
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -102,19 +102,20 @@ enum rte_crypto_op_sess_type {
* rte_cryptodev_enqueue_burst() / rte_cryptodev_dequeue_burst() .
*/
struct rte_crypto_op {
- enum rte_crypto_op_type type;
+ uint8_t type;
/**< operation type */
-
- enum rte_crypto_op_status status;
+ uint8_t status;
/**<
* operation status - this is reset to
* RTE_CRYPTO_OP_STATUS_NOT_PROCESSED on allocation from mempool and
* will be set to RTE_CRYPTO_OP_STATUS_SUCCESS after crypto operation
* is successfully processed by a crypto PMD
*/
- enum rte_crypto_op_sess_type sess_type;
+ uint8_t sess_type;
/**< operation session type */
+ uint8_t reserved[5];
+ /**< Reserved bytes to fill 64 bits for future additions */
struct rte_mempool *mempool;
/**< crypto operation mempool which operation is allocated from */
--
2.9.4
* [dpdk-dev] [PATCH v2 01/27] cryptodev: move session type to generic crypto op
@ 2017-06-26 10:22 2% ` Pablo de Lara
2017-06-26 10:22 4% ` [dpdk-dev] [PATCH v2 02/27] cryptodev: replace enums with 1-byte variables Pablo de Lara
` (10 subsequent siblings)
11 siblings, 0 replies; 200+ results
From: Pablo de Lara @ 2017-06-26 10:22 UTC (permalink / raw)
To: declan.doherty, zbigniew.bodek, jerin.jacob, akhil.goyal, hemant.agrawal
Cc: dev, Pablo de Lara
Session type (operation with or without session) is not
something specific to symmetric operations.
Therefore, the variable is moved to the generic crypto operation
structure.
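For illustration, a minimal sketch of the resulting check in a PMD
(mirroring the driver changes below):

    /* the session type now lives in the generic op, not in op->sym */
    if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
        sess = (struct aesni_mb_session *)op->sym->session->_private;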
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
doc/guides/prog_guide/cryptodev_lib.rst | 21 ++++++++++-----------
doc/guides/rel_notes/release_17_08.rst | 8 ++++++++
drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 15 ++++++++-------
drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c | 4 ++--
drivers/crypto/armv8/rte_armv8_pmd.c | 4 ++--
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 2 +-
drivers/crypto/kasumi/rte_kasumi_pmd.c | 6 +++---
drivers/crypto/null/null_crypto_pmd.c | 15 ++++++++-------
drivers/crypto/openssl/rte_openssl_pmd.c | 4 ++--
drivers/crypto/qat/qat_crypto.c | 2 +-
drivers/crypto/snow3g/rte_snow3g_pmd.c | 6 +++---
drivers/crypto/zuc/rte_zuc_pmd.c | 4 ++--
lib/librte_cryptodev/rte_crypto.h | 15 +++++++++++++++
lib/librte_cryptodev/rte_crypto_sym.h | 16 ----------------
test/test/test_cryptodev.c | 8 ++++----
15 files changed, 69 insertions(+), 61 deletions(-)
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index 4f98f28..229cb7a 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -1,5 +1,5 @@
.. BSD LICENSE
- Copyright(c) 2016 Intel Corporation. All rights reserved.
+ Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
@@ -359,11 +359,12 @@ Crypto operation to be processed on a particular Crypto device poll mode driver.
.. figure:: img/crypto_op.*
-The operation structure includes the operation type and the operation status,
-a reference to the operation specific data, which can vary in size and content
-depending on the operation being provisioned. It also contains the source
-mempool for the operation, if it allocate from a mempool. Finally an
-opaque pointer for user specific data is provided.
+The operation structure includes the operation type, the operation status
+and the session type (session-based/less), a reference to the operation
+specific data, which can vary in size and content depending on the operation
+being provisioned. It also contains the source mempool for the operation,
+if it allocate from a mempool. Finally an opaque pointer for user specific
+data is provided.
If Crypto operations are allocated from a Crypto operation mempool, see next
section, there is also the ability to allocate private memory with the
@@ -512,9 +513,9 @@ buffer. It is used for either cipher, authentication, AEAD and chained
operations.
As a minimum the symmetric operation must have a source data buffer (``m_src``),
-the session type (session-based/less), a valid session (or transform chain if in
-session-less mode) and the minimum authentication/ cipher parameters required
-depending on the type of operation specified in the session or the transform
+a valid session (or transform chain if in session-less mode) and the minimum
+authentication/ cipher parameters required depending on the type of operation
+specified in the session or the transform
chain.
.. code-block:: c
@@ -523,8 +524,6 @@ chain.
struct rte_mbuf *m_src;
struct rte_mbuf *m_dst;
- enum rte_crypto_sym_op_sess_type type;
-
union {
struct rte_cryptodev_sym_session *session;
/**< Handle for the initialised session context */
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index 842f46f..2bc405d 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -144,6 +144,14 @@ API Changes
Also, make sure to start the actual text at the margin.
=========================================================
+* **Reworked rte_cryptodev library.**
+
+ The rte_cryptodev library has been reworked and updated. The following changes
+ have been made to it:
+
+ * Removed the field ``rte_crypto_sym_op_sess_type`` from ``rte_crypto_sym_op``,
+ and moved it to ``rte_crypto_op`` as ``rte_crypto_op_sess_type``.
+
ABI Changes
-----------
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 4d7aa4f..165f5a1 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -139,16 +139,17 @@ aesni_gcm_set_session_parameters(struct aesni_gcm_session *sess,
/** Get gcm session */
static struct aesni_gcm_session *
-aesni_gcm_get_session(struct aesni_gcm_qp *qp, struct rte_crypto_sym_op *op)
+aesni_gcm_get_session(struct aesni_gcm_qp *qp, struct rte_crypto_op *op)
{
struct aesni_gcm_session *sess = NULL;
+ struct rte_crypto_sym_op *sym_op = op->sym;
- if (op->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->session->dev_type
+ if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+ if (unlikely(sym_op->session->dev_type
!= RTE_CRYPTODEV_AESNI_GCM_PMD))
return sess;
- sess = (struct aesni_gcm_session *)op->session->_private;
+ sess = (struct aesni_gcm_session *)sym_op->session->_private;
} else {
void *_sess;
@@ -159,7 +160,7 @@ aesni_gcm_get_session(struct aesni_gcm_qp *qp, struct rte_crypto_sym_op *op)
((struct rte_cryptodev_sym_session *)_sess)->_private;
if (unlikely(aesni_gcm_set_session_parameters(sess,
- op->xform) != 0)) {
+ sym_op->xform) != 0)) {
rte_mempool_put(qp->sess_mp, _sess);
sess = NULL;
}
@@ -372,7 +373,7 @@ handle_completed_gcm_crypto_op(struct aesni_gcm_qp *qp,
post_process_gcm_crypto_op(op);
/* Free session if a session-less crypto op */
- if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+ if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
rte_mempool_put(qp->sess_mp, op->sym->session);
op->sym->session = NULL;
}
@@ -393,7 +394,7 @@ aesni_gcm_pmd_dequeue_burst(void *queue_pair,
for (i = 0; i < nb_dequeued; i++) {
- sess = aesni_gcm_get_session(qp, ops[i]->sym);
+ sess = aesni_gcm_get_session(qp, ops[i]);
if (unlikely(sess == NULL)) {
ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
qp->qp_stats.dequeue_err_count++;
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index e88d3cd..efdc321 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -345,7 +345,7 @@ get_session(struct aesni_mb_qp *qp, struct rte_crypto_op *op)
{
struct aesni_mb_session *sess = NULL;
- if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+ if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
if (unlikely(op->sym->session->dev_type !=
RTE_CRYPTODEV_AESNI_MB_PMD)) {
return NULL;
@@ -541,7 +541,7 @@ post_process_mb_job(struct aesni_mb_qp *qp, JOB_AES_HMAC *job)
}
/* Free session if a session-less crypto op */
- if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+ if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
rte_mempool_put(qp->sess_mp, op->sym->session);
op->sym->session = NULL;
}
diff --git a/drivers/crypto/armv8/rte_armv8_pmd.c b/drivers/crypto/armv8/rte_armv8_pmd.c
index 8ed26db..04d8781 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd.c
@@ -545,7 +545,7 @@ get_session(struct armv8_crypto_qp *qp, struct rte_crypto_op *op)
{
struct armv8_crypto_session *sess = NULL;
- if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+ if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
/* get existing session */
if (likely(op->sym->session != NULL &&
op->sym->session->dev_type ==
@@ -700,7 +700,7 @@ process_op(const struct armv8_crypto_qp *qp, struct rte_crypto_op *op,
}
/* Free session if a session-less crypto op */
- if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+ if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
memset(sess, 0, sizeof(struct armv8_crypto_session));
rte_mempool_put(qp->sess_mp, op->sym->session);
op->sym->session = NULL;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index e32b27e..e154395 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -437,7 +437,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
if (unlikely(nb_ops == 0))
return 0;
- if (ops[0]->sym->sess_type != RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+ if (ops[0]->sess_type != RTE_CRYPTO_OP_WITH_SESSION) {
RTE_LOG(ERR, PMD, "sessionless crypto op not supported\n");
return 0;
}
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd.c b/drivers/crypto/kasumi/rte_kasumi_pmd.c
index ac80473..667ebfc 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd.c
@@ -143,7 +143,7 @@ kasumi_get_session(struct kasumi_qp *qp, struct rte_crypto_op *op)
{
struct kasumi_session *sess;
- if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+ if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
if (unlikely(op->sym->session->dev_type !=
RTE_CRYPTODEV_KASUMI_PMD))
return NULL;
@@ -353,7 +353,7 @@ process_ops(struct rte_crypto_op **ops, struct kasumi_session *session,
if (ops[i]->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)
ops[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
/* Free session if a session-less crypto op. */
- if (ops[i]->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+ if (ops[i]->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
rte_mempool_put(qp->sess_mp, ops[i]->sym->session);
ops[i]->sym->session = NULL;
}
@@ -405,7 +405,7 @@ process_op_bit(struct rte_crypto_op *op, struct kasumi_session *session,
op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
/* Free session if a session-less crypto op. */
- if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+ if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
rte_mempool_put(qp->sess_mp, op->sym->session);
op->sym->session = NULL;
}
diff --git a/drivers/crypto/null/null_crypto_pmd.c b/drivers/crypto/null/null_crypto_pmd.c
index 47b8dad..fcf2adc 100644
--- a/drivers/crypto/null/null_crypto_pmd.c
+++ b/drivers/crypto/null/null_crypto_pmd.c
@@ -90,16 +90,17 @@ process_op(const struct null_crypto_qp *qp, struct rte_crypto_op *op,
}
static struct null_crypto_session *
-get_session(struct null_crypto_qp *qp, struct rte_crypto_sym_op *op)
+get_session(struct null_crypto_qp *qp, struct rte_crypto_op *op)
{
struct null_crypto_session *sess;
+ struct rte_crypto_sym_op *sym_op = op->sym;
- if (op->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->session == NULL ||
- op->session->dev_type != RTE_CRYPTODEV_NULL_PMD))
+ if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+ if (unlikely(sym_op->session == NULL ||
+ sym_op->session->dev_type != RTE_CRYPTODEV_NULL_PMD))
return NULL;
- sess = (struct null_crypto_session *)op->session->_private;
+ sess = (struct null_crypto_session *)sym_op->session->_private;
} else {
struct rte_cryptodev_session *c_sess = NULL;
@@ -108,7 +109,7 @@ get_session(struct null_crypto_qp *qp, struct rte_crypto_sym_op *op)
sess = (struct null_crypto_session *)c_sess->_private;
- if (null_crypto_set_session_parameters(sess, op->xform) != 0)
+ if (null_crypto_set_session_parameters(sess, sym_op->xform) != 0)
return NULL;
}
@@ -126,7 +127,7 @@ null_crypto_pmd_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
int i, retval;
for (i = 0; i < nb_ops; i++) {
- sess = get_session(qp, ops[i]->sym);
+ sess = get_session(qp, ops[i]);
if (unlikely(sess == NULL))
goto enqueue_err;
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index a6438a8..9f032d3 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -446,7 +446,7 @@ get_session(struct openssl_qp *qp, struct rte_crypto_op *op)
{
struct openssl_session *sess = NULL;
- if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+ if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
/* get existing session */
if (likely(op->sym->session != NULL &&
op->sym->session->dev_type ==
@@ -1196,7 +1196,7 @@ process_op(const struct openssl_qp *qp, struct rte_crypto_op *op,
}
/* Free session if a session-less crypto op */
- if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+ if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
openssl_reset_session(sess);
memset(sess, 0, sizeof(struct openssl_session));
rte_mempool_put(qp->sess_mp, op->sym->session);
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
index 8b7b2fa..9b294e4 100644
--- a/drivers/crypto/qat/qat_crypto.c
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -908,7 +908,7 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
return -EINVAL;
}
#endif
- if (unlikely(op->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS)) {
+ if (unlikely(op->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
PMD_DRV_LOG(ERR, "QAT PMD only supports session oriented"
" requests, op (%p) is sessionless.", op);
return -EINVAL;
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd.c b/drivers/crypto/snow3g/rte_snow3g_pmd.c
index 855be72..6261656 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd.c
@@ -143,7 +143,7 @@ snow3g_get_session(struct snow3g_qp *qp, struct rte_crypto_op *op)
{
struct snow3g_session *sess;
- if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+ if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
if (unlikely(op->sym->session->dev_type !=
RTE_CRYPTODEV_SNOW3G_PMD))
return NULL;
@@ -357,7 +357,7 @@ process_ops(struct rte_crypto_op **ops, struct snow3g_session *session,
if (ops[i]->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)
ops[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
/* Free session if a session-less crypto op. */
- if (ops[i]->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+ if (ops[i]->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
rte_mempool_put(qp->sess_mp, ops[i]->sym->session);
ops[i]->sym->session = NULL;
}
@@ -409,7 +409,7 @@ process_op_bit(struct rte_crypto_op *op, struct snow3g_session *session,
op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
/* Free session if a session-less crypto op. */
- if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+ if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
rte_mempool_put(qp->sess_mp, op->sym->session);
op->sym->session = NULL;
}
diff --git a/drivers/crypto/zuc/rte_zuc_pmd.c b/drivers/crypto/zuc/rte_zuc_pmd.c
index 7681587..d2263b4 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd.c
@@ -142,7 +142,7 @@ zuc_get_session(struct zuc_qp *qp, struct rte_crypto_op *op)
{
struct zuc_session *sess;
- if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
+ if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
if (unlikely(op->sym->session->dev_type !=
RTE_CRYPTODEV_ZUC_PMD))
return NULL;
@@ -333,7 +333,7 @@ process_ops(struct rte_crypto_op **ops, struct zuc_session *session,
if (ops[i]->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)
ops[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
/* Free session if a session-less crypto op. */
- if (ops[i]->sym->sess_type == RTE_CRYPTO_SYM_OP_SESSIONLESS) {
+ if (ops[i]->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
rte_mempool_put(qp->sess_mp, ops[i]->sym->session);
ops[i]->sym->session = NULL;
}
diff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h
index 9019518..ac5c184 100644
--- a/lib/librte_cryptodev/rte_crypto.h
+++ b/lib/librte_cryptodev/rte_crypto.h
@@ -82,6 +82,16 @@ enum rte_crypto_op_status {
};
/**
+ * Crypto operation session type. This is used to specify whether a crypto
+ * operation has session structure attached for immutable parameters or if all
+ * operation information is included in the operation data structure.
+ */
+enum rte_crypto_op_sess_type {
+ RTE_CRYPTO_OP_WITH_SESSION, /**< Session based crypto operation */
+ RTE_CRYPTO_OP_SESSIONLESS /**< Session-less crypto operation */
+};
+
+/**
* Cryptographic Operation.
*
* This structure contains data relating to performing cryptographic
@@ -102,6 +112,8 @@ struct rte_crypto_op {
* will be set to RTE_CRYPTO_OP_STATUS_SUCCESS after crypto operation
* is successfully processed by a crypto PMD
*/
+ enum rte_crypto_op_sess_type sess_type;
+ /**< operation session type */
struct rte_mempool *mempool;
/**< crypto operation mempool which operation is allocated from */
@@ -130,6 +142,7 @@ __rte_crypto_op_reset(struct rte_crypto_op *op, enum rte_crypto_op_type type)
{
op->type = type;
op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+ op->sess_type = RTE_CRYPTO_OP_SESSIONLESS;
switch (type) {
case RTE_CRYPTO_OP_TYPE_SYMMETRIC:
@@ -407,6 +420,8 @@ rte_crypto_op_attach_sym_session(struct rte_crypto_op *op,
if (unlikely(op->type != RTE_CRYPTO_OP_TYPE_SYMMETRIC))
return -1;
+ op->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+
return __rte_crypto_sym_op_attach_sym_session(op->sym, sess);
}
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index 3a40844..386b120 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -376,17 +376,6 @@ struct rte_crypto_sym_xform {
};
};
-/**
- * Crypto operation session type. This is used to specify whether a crypto
- * operation has session structure attached for immutable parameters or if all
- * operation information is included in the operation data structure.
- */
-enum rte_crypto_sym_op_sess_type {
- RTE_CRYPTO_SYM_OP_WITH_SESSION, /**< Session based crypto operation */
- RTE_CRYPTO_SYM_OP_SESSIONLESS /**< Session-less crypto operation */
-};
-
-
struct rte_cryptodev_sym_session;
/**
@@ -423,8 +412,6 @@ struct rte_crypto_sym_op {
struct rte_mbuf *m_src; /**< source mbuf */
struct rte_mbuf *m_dst; /**< destination mbuf */
- enum rte_crypto_sym_op_sess_type sess_type;
-
RTE_STD_C11
union {
struct rte_cryptodev_sym_session *session;
@@ -665,8 +652,6 @@ static inline void
__rte_crypto_sym_op_reset(struct rte_crypto_sym_op *op)
{
memset(op, 0, sizeof(*op));
-
- op->sess_type = RTE_CRYPTO_SYM_OP_SESSIONLESS;
}
@@ -708,7 +693,6 @@ __rte_crypto_sym_op_attach_sym_session(struct rte_crypto_sym_op *sym_op,
struct rte_cryptodev_sym_session *sess)
{
sym_op->session = sess;
- sym_op->sess_type = RTE_CRYPTO_SYM_OP_WITH_SESSION;
return 0;
}
diff --git a/test/test/test_cryptodev.c b/test/test/test_cryptodev.c
index eed7385..cf2f90d 100644
--- a/test/test/test_cryptodev.c
+++ b/test/test/test_cryptodev.c
@@ -5556,8 +5556,8 @@ test_AES_GCM_authenticated_encryption_sessionless(
ut_params->op->sym->m_src = ut_params->ibuf;
- TEST_ASSERT_EQUAL(ut_params->op->sym->sess_type,
- RTE_CRYPTO_SYM_OP_SESSIONLESS,
+ TEST_ASSERT_EQUAL(ut_params->op->sess_type,
+ RTE_CRYPTO_OP_SESSIONLESS,
"crypto op session type not sessionless");
/* Process crypto operation */
@@ -5636,8 +5636,8 @@ test_AES_GCM_authenticated_decryption_sessionless(
ut_params->op->sym->m_src = ut_params->ibuf;
- TEST_ASSERT_EQUAL(ut_params->op->sym->sess_type,
- RTE_CRYPTO_SYM_OP_SESSIONLESS,
+ TEST_ASSERT_EQUAL(ut_params->op->sess_type,
+ RTE_CRYPTO_OP_SESSIONLESS,
"crypto op session type not sessionless");
/* Process crypto operation */
--
2.9.4
* Re: [dpdk-dev] [PATCH 1/6] service cores: header and implementation
2017-06-23 9:06 1% [dpdk-dev] [PATCH 1/6] service cores: header and implementation Harry van Haaren
@ 2017-06-26 11:59 0% ` Jerin Jacob
2017-06-29 11:13 3% ` Van Haaren, Harry
1 sibling, 1 reply; 200+ results
From: Jerin Jacob @ 2017-06-26 11:59 UTC (permalink / raw)
To: Harry van Haaren; +Cc: dev, thomas, keith.wiles, bruce.richardson
-----Original Message-----
> Date: Fri, 23 Jun 2017 10:06:14 +0100
> From: Harry van Haaren <harry.van.haaren@intel.com>
> To: dev@dpdk.org
> CC: thomas@monjalon.net, jerin.jacob@caviumnetworks.com,
> keith.wiles@intel.com, bruce.richardson@intel.com, Harry van Haaren
> <harry.van.haaren@intel.com>
> Subject: [PATCH 1/6] service cores: header and implementation
> X-Mailer: git-send-email 2.7.4
>
> Add header files, update .map files with new service
> functions, and add the service header to the doxygen
> for building.
>
> This service header API allows DPDK to use services as
> a concept of something that requires CPU cycles. An example
> is a PMD that runs in software to schedule events, where a
> hardware version exists that does not require a CPU.
>
> The code presented here is based on an initial RFC:
> http://dpdk.org/ml/archives/dev/2017-May/065207.html
>
> This was then reworked, and RFC v2 with the changes posted:
> http://dpdk.org/ml/archives/dev/2017-June/067194.html
>
> This is the third iteration of the service core concept,
> now with an implementation.
>
> Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Nice work. Detailed review comments below
> ---
> doc/api/doxy-api-index.md | 1 +
> lib/librte_eal/bsdapp/eal/Makefile | 1 +
> lib/librte_eal/bsdapp/eal/rte_eal_version.map | 24 +
> lib/librte_eal/common/Makefile | 1 +
> lib/librte_eal/common/include/rte_eal.h | 4 +
> lib/librte_eal/common/include/rte_lcore.h | 3 +-
> lib/librte_eal/common/include/rte_service.h | 274 ++++++++++
> .../common/include/rte_service_private.h | 108 ++++
> lib/librte_eal/common/rte_service.c | 568 +++++++++++++++++++++
> lib/librte_eal/linuxapp/eal/Makefile | 1 +
> lib/librte_eal/linuxapp/eal/rte_eal_version.map | 24 +
> 11 files changed, 1008 insertions(+), 1 deletion(-)
> create mode 100644 lib/librte_eal/common/include/rte_service.h
> create mode 100644 lib/librte_eal/common/include/rte_service_private.h
> create mode 100644 lib/librte_eal/common/rte_service.c
>
> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
> index f5f1f19..55d522a 100644
> --- a/doc/api/doxy-api-index.md
> +++ b/doc/api/doxy-api-index.md
> @@ -158,6 +158,7 @@ There are many libraries, so their headers may be grouped by topics:
> [common] (@ref rte_common.h),
> [ABI compat] (@ref rte_compat.h),
> [keepalive] (@ref rte_keepalive.h),
> + [Service Cores] (@ref rte_service.h),
1) IMO, to keep consistency we can rename this to "[service cores]".
2) I thought we decided to expose rte_service_register() and
rte_service_unregister() as well, considering the case where even an
application may register service functions if required. If that is true,
I think the registration functions can be moved out of the private
header file so that they are visible in doxygen.
3) Should we change the core function names to lcore, like
rte_service_lcore_add(), rte_service_lcore_del() etc., as we are
operating on lcores here?
> [device metrics] (@ref rte_metrics.h),
> [bitrate statistics] (@ref rte_bitrate.h),
> [latency statistics] (@ref rte_latencystats.h),
> diff --git a/lib/librte_eal/bsdapp/eal/Makefile b/lib/librte_eal/bsdapp/eal/Makefile
> index a0f9950..05517a2 100644
> --- a/lib/librte_eal/bsdapp/eal/Makefile
> +++ b/lib/librte_eal/bsdapp/eal/Makefile
> @@ -87,6 +87,7 @@ SRCS-$(CONFIG_RTE_EXEC_ENV_BSDAPP) += rte_malloc.c
> SRCS-$(CONFIG_RTE_EXEC_ENV_BSDAPP) += malloc_elem.c
> SRCS-$(CONFIG_RTE_EXEC_ENV_BSDAPP) += malloc_heap.c
> SRCS-$(CONFIG_RTE_EXEC_ENV_BSDAPP) += rte_keepalive.c
> +SRCS-$(CONFIG_RTE_EXEC_ENV_BSDAPP) += rte_service.c
>
> # from arch dir
> SRCS-$(CONFIG_RTE_EXEC_ENV_BSDAPP) += rte_cpuflags.c
> diff --git a/lib/librte_eal/bsdapp/eal/rte_eal_version.map b/lib/librte_eal/bsdapp/eal/rte_eal_version.map
> index 2e48a73..843d4ee 100644
> --- a/lib/librte_eal/bsdapp/eal/rte_eal_version.map
> +++ b/lib/librte_eal/bsdapp/eal/rte_eal_version.map
> @@ -193,3 +193,27 @@ DPDK_17.05 {
> vfio_get_group_no;
>
> } DPDK_17.02;
> +
> +DPDK_17.08 {
> + global:
> +
> + rte_service_core_add;
> + rte_service_core_count;
> + rte_service_core_del;
> + rte_service_core_list;
> + rte_service_core_reset_all;
> + rte_service_core_start;
> + rte_service_core_stop;
> + rte_service_disable_on_core;
> + rte_service_enable_on_core;
> + rte_service_get_by_id;
> + rte_service_get_count;
> + rte_service_get_enabled_on_core;
> + rte_service_is_running;
> + rte_service_register;
> + rte_service_reset;
> + rte_service_start;
> + rte_service_stop;
> + rte_service_unregister;
> +
> +} DPDK_17.05;
> diff --git a/lib/librte_eal/common/Makefile b/lib/librte_eal/common/Makefile
> index a5bd108..2a93397 100644
> --- a/lib/librte_eal/common/Makefile
> +++ b/lib/librte_eal/common/Makefile
> @@ -41,6 +41,7 @@ INC += rte_eal_memconfig.h rte_malloc_heap.h
> INC += rte_hexdump.h rte_devargs.h rte_bus.h rte_dev.h rte_vdev.h
> INC += rte_pci_dev_feature_defs.h rte_pci_dev_features.h
> INC += rte_malloc.h rte_keepalive.h rte_time.h
> +INC += rte_service.h rte_service_private.h
>
> GENERIC_INC := rte_atomic.h rte_byteorder.h rte_cycles.h rte_prefetch.h
> GENERIC_INC += rte_spinlock.h rte_memcpy.h rte_cpuflags.h rte_rwlock.h
> diff --git a/lib/librte_eal/common/include/rte_eal.h b/lib/librte_eal/common/include/rte_eal.h
> index abf020b..1f203f8 100644
> --- a/lib/librte_eal/common/include/rte_eal.h
> +++ b/lib/librte_eal/common/include/rte_eal.h
> @@ -61,6 +61,7 @@ extern "C" {
> enum rte_lcore_role_t {
> ROLE_RTE,
> ROLE_OFF,
> + ROLE_SERVICE,
> };
>
> /**
> @@ -80,6 +81,7 @@ enum rte_proc_type_t {
> struct rte_config {
> uint32_t master_lcore; /**< Id of the master lcore */
> uint32_t lcore_count; /**< Number of available logical cores. */
> + uint32_t score_count; /**< Number of available service cores. */
Should we call it a service core or a service lcore?
> enum rte_lcore_role_t lcore_role[RTE_MAX_LCORE]; /**< State of cores. */
>
> /** Primary or secondary configuration */
> @@ -185,6 +187,8 @@ int rte_eal_iopl_init(void);
> *
> * EPROTO indicates that the PCI bus is either not present, or is not
> * readable by the eal.
> + *
> + * ENOEXEC indicates that a service core failed to launch successfully.
> */
> +#define RTE_SERVICE_CAP_MT_SAFE (1 << 0)
> +
> +/** Return the number of services registered.
> + *
> + * The number of services registered can be passed to *rte_service_get_by_id*,
> + * enabling the application to retireve the specificaion of each service.
s/retireve the specificaion/retrieve the specification
> + *
> + * @return The number of services registered.
> + */
> +uint32_t rte_service_get_count(void);
> +
> +/** Return the specificaion of each service.
s/specificaion/specification
> + *
> + * This function provides the specification of a service. This can be used by
> + * the application to understand what the service represents. The service
> + * must not be modified by the application directly, only passed to the various
> + * rte_service_* functions.
> + *
> + * @param id The integer id of the service to retrieve
> + * @retval non-zero A valid pointer to the service_spec
> + * @retval NULL Invalid *id* provided.
> + */
> +struct rte_service_spec *rte_service_get_by_id(uint32_t id);
> +
> +/** Return the name of the service.
> + *
> + * @return A pointer to the name of the service. The returned pointer remains
> + * in ownership of the service, and the application must not free it.
> + */
> +const char *rte_service_get_name(const struct rte_service_spec *service);
> +
> +/* Check if a service has a specific capability.
Missing the doxygen marker (i.e. change to /** Check)
> + *
> + * This function returns if *service* has implements *capability*.
> + * See RTE_SERVICE_CAP_* defines for a list of valid capabilities.
> + * @retval 1 Capability supported by this service instance
> + * @retval 0 Capability not supported by this service instance
> + */
> +int32_t rte_service_probe_capability(const struct rte_service_spec *service,
> + uint32_t capability);
> +
> +/* Start a service core.
Missing the doxygen marker (i.e. change to /** Start)
> + *
> + * Starting a core makes the core begin polling. Any services assigned to it
> + * will be run as fast as possible.
> + *
> + * @retval 0 Success
> + * @retval -EINVAL Failed to start core. The *lcore_id* passed in is not
> + * currently assigned to be a service core.
> + */
> +int32_t rte_service_core_start(uint32_t lcore_id);
> +
> +/* Stop a service core.
Missing the doxygen marker (i.e. change to /** Stop)
> + *
> + * Stopping a core makes the core become idle, but remains assigned as a
> + * service core.
> + *
> + * @retval 0 Success
> + * @retval -EINVAL Invalid *lcore_id* provided
> + * @retval -EALREADY Already stopped core
> + * @retval -EBUSY Failed to stop core, as it would cause a service to not
> + * be run, as this is the only core currently running the service.
> + * The application must stop the service first, and then stop the
> + * lcore.
> + */
> +int32_t rte_service_core_stop(uint32_t lcore_id);
> +
> +/** Retreve the number of service cores currently avaialble.
typo: ^^^^^^^^ ^^^^^^^^^^
Retrieve the number of service cores currently available.
> + *
> + * This function returns the integer count of service cores available. The
> + * service core count can be used in mapping logic when creating mappings
> + * from service cores to services.
> + *
> + * See *rte_service_core_list* for details on retrieving the lcore_id of each
> + * service core.
> + *
> + * @return The number of service cores currently configured.
> + */
> +int32_t rte_service_core_count(void);
> +
> +/** Retrieve the list of currently enabled service cores.
> + *
> + * This function fills in an application supplied array, with each element
> + * indicating the lcore_id of a service core.
> + *
> + * Adding and removing service cores can be performed using
> + * *rte_service_core_add* and *rte_service_core_del*.
> + * @param array An array of at least N items.
@param [out] array An array of at least n items
> + * @param The size of *array*.
@param n The size of *array*.
> + * @retval >=0 Number of service cores that have been populated in the array
> + * @retval -ENOMEM The provided array is not large enough to fill in the
> + * service core list. No items have been populated, call this function
> + * with a size of at least *rte_service_core_count* items.
> + */
> +int32_t rte_service_core_list(uint32_t array[], uint32_t n);
> +
> +/** Dumps any information available about the service. If service is NULL,
> + * dumps info for all services.
> + */
> +int32_t rte_service_dump(FILE *f, struct rte_service_spec *service);
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +
> +#endif /* _RTE_SERVICE_H_ */
> diff --git a/lib/librte_eal/common/include/rte_service_private.h b/lib/librte_eal/common/include/rte_service_private.h
> new file mode 100644
> index 0000000..d8bb644
> --- /dev/null
> +++ b/lib/librte_eal/common/include/rte_service_private.h
> @@ -0,0 +1,108 @@
> +/* This file specifies the internal service specification.
> + * Include this file if you are writing a component that requires CPU cycles to
> + * operate, and you wish to run the component using service cores
> + */
> +
> +#include <rte_service.h>
> +struct rte_service_spec {
> + /** The name of the service. This should be used by the application to
> + * understand what purpose this service provides.
> + */
> + char name[RTE_SERVICE_NAME_MAX];
> + /** The callback to invoke to run one iteration of the service */
> + rte_service_func callback;
> + /** The userdata pointer provided to the service callback. */
> + void *callback_userdata;
> + /** Flags to indicate the capabilities of this service. See
> + * defines in
> + * the public header file for values of RTE_SERVICE_CAP_*
> + */
> + uint32_t capabilities;
> + /** NUMA socket ID that this service is affinitized to */
> + int8_t socket_id;
All other places socket_id is of type "int". I think, we can maintenance
the consistency here too. Looks like socket_id == SOCKET_ID_ANY not take
care in implementation if so, take care of it.
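Something like this, for example (just a sketch, the exact check is up
to you):
	if (spec->socket_id != SOCKET_ID_ANY &&
			spec->socket_id >= RTE_MAX_NUMA_NODES)
		return -EINVAL;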
> +};
> +
> +int32_t rte_service_register(const struct rte_service_spec *spec);
> +
> +/** Unregister a service.
> + *
> + * The service being removed must be stopped before calling this function.
> + *
> + * @retval 0 The service was successfully unregistered.
> + * @retval -EBUSY The service is currently running, stop the service before
> + * calling unregister. No action has been taken.
> + */
> +int32_t rte_service_unregister(struct rte_service_spec *service);
> +
> +/** Private function to allow EAL to initialied default mappings.
typo: ^^^^^^^^^^^
> + *
> + * This function iterates all the services, and maps then to the available
> + * cores. Based on the capabilities of the services, they are set to run on the
> + * available cores in a round-robin manner.
> + *
> + * @retval 0 Success
> + */
> +int32_t rte_service_init_default_mapping(void);
> +
> +#endif /* _RTE_SERVICE_PRIVATE_H_ */
> diff --git a/lib/librte_eal/common/rte_service.c b/lib/librte_eal/common/rte_service.c
> new file mode 100644
> index 0000000..8b5e344
> --- /dev/null
> +++ b/lib/librte_eal/common/rte_service.c
> +#define RTE_SERVICE_NUM_MAX 64
> +
> +#define RTE_SERVICE_FLAG_REGISTERED_SHIFT 0
Internal macro; it can be shortened to reduce the length (SERVICE_F_REGISTERED?)
> +
> +#define RTE_SERVICE_RUNSTATE_STOPPED 0
> +#define RTE_SERVICE_RUNSTATE_RUNNING 1
Internal macro; it can be shortened to reduce the length (SERVICE_STATE_RUNNING?)
> +
> +/* internal representation of a service */
> +struct rte_service_spec_impl {
> + /* public part of the struct */
> + struct rte_service_spec spec;
Nice approach.
> +
> + /* atomic lock that when set indicates a service core is currently
> + * running this service callback. When not set, a core may take the
> + * lock and then run the service callback.
> + */
> + rte_atomic32_t execute_lock;
> +
> + /* API set/get-able variables */
> + int32_t runstate;
> + uint8_t internal_flags;
> +
> + /* per service statistics */
> + uint32_t num_mapped_cores;
> + uint64_t calls;
> + uint64_t cycles_spent;
> +};
Since it is used in the fast path, better to align it to a cache line.
> +
> +/* the internal values of a service core */
> +struct core_state {
> + uint64_t service_mask; /* map of services IDs are run on this core */
> + uint8_t runstate; /* running or stopped */
> + uint8_t is_service_core; /* set if core is currently a service core */
> +
> + /* extreme statistics */
> + uint64_t calls_per_service[RTE_SERVICE_NUM_MAX];
> +};
Aligned to a cache line?
> +
> +static uint32_t rte_service_count;
> +static struct rte_service_spec_impl rte_services[RTE_SERVICE_NUM_MAX];
> +static struct core_state cores_state[RTE_MAX_LCORE];
Since these variables are used in the fast path, better to allocate them
from the hugepage area. It will avoid a lot of global variables in the
code as well. Like other modules, you can add a private function for
service init, which can be called from eal_init().
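For example (a sketch; rte_calloc returns zeroed memory from the
hugepage heap):
	rte_services = rte_calloc("rte_services", RTE_SERVICE_NUM_MAX,
			sizeof(struct rte_service_spec_impl),
			RTE_CACHE_LINE_SIZE);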
> +
> +/* returns 1 if service is registered and has not been unregistered
> + * Returns 0 if service never registered, or has been unregistered
> + */
> +static int
static inline int
> +service_valid(uint32_t id) {
> + return !!(rte_services[id].internal_flags &
> + (1 << RTE_SERVICE_FLAG_REGISTERED_SHIFT));
> +}
> +
> +uint32_t
> +rte_service_get_count(void)
> +{
> + return rte_service_count;
> +}
> +
> +struct rte_service_spec *
> +rte_service_get_by_id(uint32_t id)
> +{
> + struct rte_service_spec *service = NULL;
> + if (id < rte_service_count)
> + service = (struct rte_service_spec *)&rte_services[id];
> +
> + return service;
> +}
> +
> +const char *
> +rte_service_get_name(const struct rte_service_spec *service)
> +{
> + return service->name;
> +}
> +
> +int32_t
bool could be enough here
> +rte_service_probe_capability(const struct rte_service_spec *service,
> + uint32_t capability)
> +{
> + return service->capabilities & capability;
> +}
> +
> +int32_t
> +rte_service_is_running(const struct rte_service_spec *spec)
> +{
> + if (!spec)
> + return -EINVAL;
> +
> + const struct rte_service_spec_impl *impl =
> + (const struct rte_service_spec_impl *)spec;
> + return impl->runstate == RTE_SERVICE_RUNSTATE_RUNNING;
> +}
> +
> +int32_t
> +rte_service_register(const struct rte_service_spec *spec)
> +{
> + uint32_t i;
> + int32_t free_slot = -1;
> +
> + if (spec->callback == NULL || strlen(spec->name) == 0)
> + return -EINVAL;
> +
> + for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
> + if (!service_valid(i)) {
> + free_slot = i;
> + break;
> + }
> + }
> +
> + if (free_slot < 0)
if ((free_slot < 0) || (i == RTE_SERVICE_NUM_MAX))
> + return -ENOSPC;
> +
> + struct rte_service_spec_impl *s = &rte_services[free_slot];
> + s->spec = *spec;
> + s->internal_flags |= (1 << RTE_SERVICE_FLAG_REGISTERED_SHIFT);
> +
> + rte_smp_wmb();
> + rte_service_count++;
IMO, you can move the rte_smp_wmb() above to here.
> +
> + return 0;
> +}
> +
> +int32_t
> +rte_service_unregister(struct rte_service_spec *spec)
> +{
> + struct rte_service_spec_impl *s = NULL;
> + struct rte_service_spec_impl *spec_impl =
> + (struct rte_service_spec_impl *)spec;
> +
> + uint32_t i;
> + uint32_t service_id;
> + for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
> + if (&rte_services[i] == spec_impl) {
> + s = spec_impl;
> + service_id = i;
> + break;
> + }
> + }
> +
> + if (!s)
> + return -EINVAL;
> +
> + s->internal_flags &= ~(1 << RTE_SERVICE_FLAG_REGISTERED_SHIFT);
> +
> + for (i = 0; i < RTE_MAX_LCORE; i++)
> + cores_state[i].service_mask &= ~(1 << service_id);
> +
> + memset(&rte_services[service_id], 0,
> + sizeof(struct rte_service_spec_impl));
> +
> + rte_smp_wmb();
> + rte_service_count--;
IMO, you can move the rte_smp_wmb() above to here.
> +
> + return 0;
> +}
> +
> +int32_t
> +rte_service_start(struct rte_service_spec *service)
> +{
> + struct rte_service_spec_impl *s =
> + (struct rte_service_spec_impl *)service;
> + s->runstate = RTE_SERVICE_RUNSTATE_RUNNING;
Can this function be called from a worker thread? If so, add rte_smp_wmb().
> + return 0;
> +}
> +
> +int32_t
> +rte_service_stop(struct rte_service_spec *service)
> +{
> + struct rte_service_spec_impl *s =
> + (struct rte_service_spec_impl *)service;
> + s->runstate = RTE_SERVICE_RUNSTATE_STOPPED;
Can this function be called from a worker thread? If so, add rte_smp_wmb().
> + return 0;
> +}
> +
> +static int32_t
> +rte_service_runner_func(void *arg)
> +{
> + RTE_SET_USED(arg);
> + uint32_t i;
> + const int lcore = rte_lcore_id();
> + struct core_state *cs = &cores_state[lcore];
> +
> + while (cores_state[lcore].runstate == RTE_SERVICE_RUNSTATE_RUNNING) {
> + for (i = 0; i < rte_service_count; i++) {
> + struct rte_service_spec_impl *s = &rte_services[i];
> + uint64_t service_mask = cs->service_mask;
No need to read this in the loop; move it above the while loop and add const:
const uint64_t service_mask = cs->service_mask;
> +
> + if (s->runstate != RTE_SERVICE_RUNSTATE_RUNNING ||
> + !(service_mask & (1 << i)))
> + continue;
> +
> + uint32_t *lock = (uint32_t *)&s->execute_lock;
> + if (rte_atomic32_cmpset(lock, 0, 1)) {
rte_atomic32 is costly. How about checking RTE_SERVICE_CAP_MT_SAFE
first?
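e.g. take the lock only when the service is not MT safe (a sketch; the
matching rte_atomic32_clear() below would need the same condition):
	if ((s->spec.capabilities & RTE_SERVICE_CAP_MT_SAFE) ||
			rte_atomic32_cmpset(lock, 0, 1)) {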
> + void *userdata = s->spec.callback_userdata;
> + uint64_t start = rte_rdtsc();
> + s->spec.callback(userdata);
> + uint64_t end = rte_rdtsc();
> +
> + uint64_t spent = end - start;
> + s->cycles_spent += spent;
> + s->calls++;
> + cs->calls_per_service[i]++;
How about enabling the statistics based on some runtime configuration?
> +
> + rte_atomic32_clear(&s->execute_lock);
> + }
> + }
> + rte_mb();
Do we need a full barrier here? Would rte_smp_rmb() inside the loop be
enough?
> + }
> +
> + /* mark core as ready to accept work again */
> + lcore_config[lcore].state = WAIT;
> +
> + return 0;
> +}
> +
> +int32_t
> +rte_service_core_count(void)
> +{
> + int32_t count = 0;
> + uint32_t i;
> + for (i = 0; i < RTE_MAX_LCORE; i++)
> + count += cores_state[i].is_service_core;
> + return count;
> +}
> +
> +int32_t
> +rte_service_core_list(uint32_t array[], uint32_t n)
> +{
> + uint32_t count = rte_service_core_count();
if (!array)
return -EINVAL;
> + if (count > n)
> + return -ENOMEM;
> +
> + uint32_t i;
> + uint32_t idx = 0;
> + for (i = 0; i < RTE_MAX_LCORE; i++) {
Are we good if "count" being the upper limit instead of RTE_MAX_LCORE?
> + struct core_state *cs = &cores_state[i];
> + if (cs->is_service_core) {
> + array[idx] = i;
> + idx++;
> + }
> + }
> +
> + return count;
> +}
> +
> +int32_t
> +rte_service_init_default_mapping(void)
> +{
> + /* create a default mapping from cores to services, then start the
> + * services to make them transparent to unaware applications.
> + */
> + uint32_t i;
> + int ret;
> + uint32_t count = rte_service_get_count();
> + struct rte_config *cfg = rte_eal_get_configuration();
> +
> + for (i = 0; i < count; i++) {
> + struct rte_service_spec *s = rte_service_get_by_id(i);
> + if (!s)
> + return -EINVAL;
> +
> + ret = 0;
> + int j;
> + for (j = 0; j < RTE_MAX_LCORE; j++) {
> + /* TODO: add lcore -> service mapping logic here */
> + if (cfg->lcore_role[j] == ROLE_SERVICE) {
> + ret = rte_service_enable_on_core(s, j);
> + if (ret)
> + rte_panic("Enabling service core %d on service %s failed\n",
> + j, s->name);
avoid panic in library
> + }
> + }
> +
> + ret = rte_service_start(s);
> + if (ret)
> + rte_panic("failed to start service %s\n", s->name);
avoid panic in library
> + }
> +
> + return 0;
> +}
> +
> +static int32_t
> +service_update(struct rte_service_spec *service, uint32_t lcore,
> + uint32_t *set, uint32_t *enabled)
> +{
> + uint32_t i;
> + int32_t sid = -1;
> +
> + for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
> + if ((struct rte_service_spec *)&rte_services[i] == service &&
> + service_valid(i)) {
> + sid = i;
> + break;
> + }
> + }
> +
> + if (sid == -1 || lcore >= RTE_MAX_LCORE)
> + return -EINVAL;
> +
> + if (!cores_state[lcore].is_service_core)
> + return -EINVAL;
> +
> + if (set) {
> + if (*set) {
> + cores_state[lcore].service_mask |= (1 << sid);
> + rte_services[sid].num_mapped_cores++;
> + } else {
> + cores_state[lcore].service_mask &= ~(1 << sid);
> + rte_services[sid].num_mapped_cores--;
> + }
> + }
> +
> + if (enabled)
> + *enabled = (cores_state[lcore].service_mask & (1 << sid));
If the parent functions can be called from a worker thread, then add
rte_smp_wmb() here.
> +
> + return 0;
> +}
> +
> +int32_t rte_service_get_enabled_on_core(struct rte_service_spec *service,
> + uint32_t lcore)
> +{
> + uint32_t enabled;
> + int ret = service_update(service, lcore, 0, &enabled);
> + if (ret == 0)
> + return enabled;
> + return -EINVAL;
> +}
> +
> +int32_t
> +rte_service_enable_on_core(struct rte_service_spec *service, uint32_t lcore)
> +{
> + uint32_t on = 1;
> + return service_update(service, lcore, &on, 0);
> +}
> +
> +int32_t
> +rte_service_disable_on_core(struct rte_service_spec *service, uint32_t lcore)
> +{
> + uint32_t off = 0;
> + return service_update(service, lcore, &off, 0);
> +}
> +
> +int32_t rte_service_core_reset_all(void)
> +{
> + /* loop over cores, reset all to mask 0 */
> + uint32_t i;
> + for (i = 0; i < RTE_MAX_LCORE; i++) {
> + cores_state[i].service_mask = 0;
> + cores_state[i].is_service_core = 0;
> + }
> +
> + return 0;
> +}
> +
> +int32_t
> +rte_service_core_add(uint32_t lcore)
> +{
> + if (lcore >= RTE_MAX_LCORE)
> + return -EINVAL;
> + if (cores_state[lcore].is_service_core)
> + return -EALREADY;
> +
> + lcore_config[lcore].core_role = ROLE_SERVICE;
> +
> + /* TODO: take from EAL by setting ROLE_SERVICE? */
I think we need to fix this TODO in v2.
> + cores_state[lcore].is_service_core = 1;
> + cores_state[lcore].service_mask = 0;
> +
> + return 0;
> +}
> +
> +int32_t
> +rte_service_core_del(uint32_t lcore)
> +{
> + if (lcore >= RTE_MAX_LCORE)
> + return -EINVAL;
> +
> + struct core_state *cs = &cores_state[lcore];
> + if (!cs->is_service_core)
> + return -EINVAL;
> +
> + if (cs->runstate != RTE_SERVICE_RUNSTATE_STOPPED)
> + return -EBUSY;
> +
> + lcore_config[lcore].core_role = ROLE_RTE;
> + cores_state[lcore].is_service_core = 0;
> + /* TODO: return to EAL by setting ROLE_RTE? */
I think we need to fix this TODO in v2.
> +
> + return 0;
> +}
> +
> +int32_t
> +rte_service_core_start(uint32_t lcore)
> +{
> + if (lcore >= RTE_MAX_LCORE)
> + return -EINVAL;
> +
> + struct core_state *cs = &cores_state[lcore];
> + if (!cs->is_service_core)
> + return -EINVAL;
> +
> + if (cs->runstate == RTE_SERVICE_RUNSTATE_RUNNING)
> + return -EALREADY;
> +
> + /* set core to run state first, and then launch otherwise it will
> + * return immidiatly as runstate keeps it in the service poll loop
s/immidiatly/immediately
> + */
> + cores_state[lcore].runstate = RTE_SERVICE_RUNSTATE_RUNNING;
> +
> + int ret = rte_eal_remote_launch(rte_service_runner_func, 0, lcore);
> + /* returns -EBUSY if the core is already launched, 0 on success */
> + return ret;
return rte_eal_remote_launch(rte_service_runner_func, 0, lcore);
> +}
> +
> +int32_t
> +rte_service_core_stop(uint32_t lcore)
> +{
> + if (lcore >= RTE_MAX_LCORE)
> + return -EINVAL;
> +
> + if (cores_state[lcore].runstate == RTE_SERVICE_RUNSTATE_STOPPED)
> + return -EALREADY;
> +
> + uint32_t i;
> + for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
> + int32_t enabled = cores_state[i].service_mask & (1 << i);
> + int32_t service_running = rte_services[i].runstate !=
> + RTE_SERVICE_RUNSTATE_STOPPED;
> + int32_t only_core = rte_services[i].num_mapped_cores == 1;
> +
> + /* if the core is mapped, and the service is running, and this
> + * is the only core that is mapped, the service would cease to
> + * run if this core stopped, so fail instead.
> + */
> + if (enabled && service_running && only_core)
> + return -EBUSY;
> + }
> +
> + cores_state[lcore].runstate = RTE_SERVICE_RUNSTATE_STOPPED;
> +
> + return 0;
> +}
> +
> +static void
> +rte_service_dump_one(FILE *f, struct rte_service_spec_impl *s,
> + uint64_t all_cycles, uint32_t reset)
> +{
> + /* avoid divide by zeros */
s/zeros/zero
> + if (all_cycles == 0)
> + all_cycles = 1;
> +
> + int calls = 1;
> + if (s->calls != 0)
> + calls = s->calls;
> +
> + float cycles_pct = (((float)s->cycles_spent) / all_cycles) * 100.f;
> + fprintf(f,
> + " %s : %0.1f %%\tcalls %"PRIu64"\tcycles %"PRIu64"\tavg: %"PRIu64"\n",
> + s->spec.name, cycles_pct, s->calls, s->cycles_spent,
> + s->cycles_spent / calls);
> +
> + if (reset) {
> + s->cycles_spent = 0;
> + s->calls = 0;
> + }
> +}
> +
* [dpdk-dev] [PATCH 1/6] service cores: header and implementation
@ 2017-06-23 9:06 1% Harry van Haaren
2017-06-26 11:59 0% ` Jerin Jacob
0 siblings, 2 replies; 200+ results
From: Harry van Haaren @ 2017-06-23 9:06 UTC (permalink / raw)
To: dev; +Cc: thomas, jerin.jacob, keith.wiles, bruce.richardson, Harry van Haaren
Add header files, update .map files with new service
functions, and add the service header to the doxygen
for building.
This service header API allows DPDK to use services as
a concept of something that requires CPU cycles. An example
is a PMD that runs in software to schedule events, where a
hardware version exists that does not require a CPU.
The code presented here is based on an initial RFC:
http://dpdk.org/ml/archives/dev/2017-May/065207.html
This was then reworked, and RFC v2 with the changes posted:
http://dpdk.org/ml/archives/dev/2017-June/067194.html
This is the third iteration of the service core concept,
now with an implementation.
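For illustration, a minimal sketch of using this API (function names
and rte_service_spec fields as added by this patch; lcore_id is
assumed to be a free lcore, and error handling is omitted):

    static int32_t
    my_service_run(void *userdata)
    {
        /* one iteration of the service's work */
        RTE_SET_USED(userdata);
        return 0;
    }

    struct rte_service_spec spec = {
        .name = "my_service",
        .callback = my_service_run,
        .callback_userdata = NULL,
        .capabilities = 0,
    };
    rte_service_register(&spec);
    /* map it to a service core (assuming it is the first service) */
    struct rte_service_spec *s = rte_service_get_by_id(0);
    rte_service_core_add(lcore_id);
    rte_service_enable_on_core(s, lcore_id);
    rte_service_start(s);
    rte_service_core_start(lcore_id);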
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
---
doc/api/doxy-api-index.md | 1 +
lib/librte_eal/bsdapp/eal/Makefile | 1 +
lib/librte_eal/bsdapp/eal/rte_eal_version.map | 24 +
lib/librte_eal/common/Makefile | 1 +
lib/librte_eal/common/include/rte_eal.h | 4 +
lib/librte_eal/common/include/rte_lcore.h | 3 +-
lib/librte_eal/common/include/rte_service.h | 274 ++++++++++
.../common/include/rte_service_private.h | 108 ++++
lib/librte_eal/common/rte_service.c | 568 +++++++++++++++++++++
lib/librte_eal/linuxapp/eal/Makefile | 1 +
lib/librte_eal/linuxapp/eal/rte_eal_version.map | 24 +
11 files changed, 1008 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_eal/common/include/rte_service.h
create mode 100644 lib/librte_eal/common/include/rte_service_private.h
create mode 100644 lib/librte_eal/common/rte_service.c
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index f5f1f19..55d522a 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -158,6 +158,7 @@ There are many libraries, so their headers may be grouped by topics:
[common] (@ref rte_common.h),
[ABI compat] (@ref rte_compat.h),
[keepalive] (@ref rte_keepalive.h),
+ [Service Cores] (@ref rte_service.h),
[device metrics] (@ref rte_metrics.h),
[bitrate statistics] (@ref rte_bitrate.h),
[latency statistics] (@ref rte_latencystats.h),
diff --git a/lib/librte_eal/bsdapp/eal/Makefile b/lib/librte_eal/bsdapp/eal/Makefile
index a0f9950..05517a2 100644
--- a/lib/librte_eal/bsdapp/eal/Makefile
+++ b/lib/librte_eal/bsdapp/eal/Makefile
@@ -87,6 +87,7 @@ SRCS-$(CONFIG_RTE_EXEC_ENV_BSDAPP) += rte_malloc.c
SRCS-$(CONFIG_RTE_EXEC_ENV_BSDAPP) += malloc_elem.c
SRCS-$(CONFIG_RTE_EXEC_ENV_BSDAPP) += malloc_heap.c
SRCS-$(CONFIG_RTE_EXEC_ENV_BSDAPP) += rte_keepalive.c
+SRCS-$(CONFIG_RTE_EXEC_ENV_BSDAPP) += rte_service.c
# from arch dir
SRCS-$(CONFIG_RTE_EXEC_ENV_BSDAPP) += rte_cpuflags.c
diff --git a/lib/librte_eal/bsdapp/eal/rte_eal_version.map b/lib/librte_eal/bsdapp/eal/rte_eal_version.map
index 2e48a73..843d4ee 100644
--- a/lib/librte_eal/bsdapp/eal/rte_eal_version.map
+++ b/lib/librte_eal/bsdapp/eal/rte_eal_version.map
@@ -193,3 +193,27 @@ DPDK_17.05 {
vfio_get_group_no;
} DPDK_17.02;
+
+DPDK_17.08 {
+ global:
+
+ rte_service_core_add;
+ rte_service_core_count;
+ rte_service_core_del;
+ rte_service_core_list;
+ rte_service_core_reset_all;
+ rte_service_core_start;
+ rte_service_core_stop;
+ rte_service_disable_on_core;
+ rte_service_enable_on_core;
+ rte_service_get_by_id;
+ rte_service_get_count;
+ rte_service_get_enabled_on_core;
+ rte_service_is_running;
+ rte_service_register;
+ rte_service_reset;
+ rte_service_start;
+ rte_service_stop;
+ rte_service_unregister;
+
+} DPDK_17.05;
diff --git a/lib/librte_eal/common/Makefile b/lib/librte_eal/common/Makefile
index a5bd108..2a93397 100644
--- a/lib/librte_eal/common/Makefile
+++ b/lib/librte_eal/common/Makefile
@@ -41,6 +41,7 @@ INC += rte_eal_memconfig.h rte_malloc_heap.h
INC += rte_hexdump.h rte_devargs.h rte_bus.h rte_dev.h rte_vdev.h
INC += rte_pci_dev_feature_defs.h rte_pci_dev_features.h
INC += rte_malloc.h rte_keepalive.h rte_time.h
+INC += rte_service.h rte_service_private.h
GENERIC_INC := rte_atomic.h rte_byteorder.h rte_cycles.h rte_prefetch.h
GENERIC_INC += rte_spinlock.h rte_memcpy.h rte_cpuflags.h rte_rwlock.h
diff --git a/lib/librte_eal/common/include/rte_eal.h b/lib/librte_eal/common/include/rte_eal.h
index abf020b..1f203f8 100644
--- a/lib/librte_eal/common/include/rte_eal.h
+++ b/lib/librte_eal/common/include/rte_eal.h
@@ -61,6 +61,7 @@ extern "C" {
enum rte_lcore_role_t {
ROLE_RTE,
ROLE_OFF,
+ ROLE_SERVICE,
};
/**
@@ -80,6 +81,7 @@ enum rte_proc_type_t {
struct rte_config {
uint32_t master_lcore; /**< Id of the master lcore */
uint32_t lcore_count; /**< Number of available logical cores. */
+ uint32_t score_count; /**< Number of available service cores. */
enum rte_lcore_role_t lcore_role[RTE_MAX_LCORE]; /**< State of cores. */
/** Primary or secondary configuration */
@@ -185,6 +187,8 @@ int rte_eal_iopl_init(void);
*
* EPROTO indicates that the PCI bus is either not present, or is not
* readable by the eal.
+ *
+ * ENOEXEC indicates that a service core failed to launch successfully.
*/
int rte_eal_init(int argc, char **argv);
diff --git a/lib/librte_eal/common/include/rte_lcore.h b/lib/librte_eal/common/include/rte_lcore.h
index fe7b586..50e0d0f 100644
--- a/lib/librte_eal/common/include/rte_lcore.h
+++ b/lib/librte_eal/common/include/rte_lcore.h
@@ -73,6 +73,7 @@ struct lcore_config {
unsigned core_id; /**< core number on socket for this lcore */
int core_index; /**< relative index, starting from 0 */
rte_cpuset_t cpuset; /**< cpu set which the lcore affinity to */
+ uint8_t core_role; /**< role of core eg: OFF, RTE, SERVICE */
};
/**
@@ -175,7 +176,7 @@ rte_lcore_is_enabled(unsigned lcore_id)
struct rte_config *cfg = rte_eal_get_configuration();
if (lcore_id >= RTE_MAX_LCORE)
return 0;
- return cfg->lcore_role[lcore_id] != ROLE_OFF;
+ return cfg->lcore_role[lcore_id] == ROLE_RTE;
}
/**
diff --git a/lib/librte_eal/common/include/rte_service.h b/lib/librte_eal/common/include/rte_service.h
new file mode 100644
index 0000000..de079dd
--- /dev/null
+++ b/lib/librte_eal/common/include/rte_service.h
@@ -0,0 +1,274 @@
+/*
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_SERVICE_H_
+#define _RTE_SERVICE_H_
+
+/**
+ * @file
+ *
+ * Service functions
+ *
+ * The service functionality provided by this header allows a DPDK component
+ * to indicate that it requires a function call in order for it to perform
+ * its processing.
+ *
+ * An example usage of this functionality would be a component that registers
+ * a service to perform a particular packet processing duty: for example the
+ * eventdev software PMD. At startup the application requests all services
+ * that have been registered, and the cores in the service-coremask run the
+ * required services. The EAL removes these cores from the set of available
+ * runtime cores, and dedicates them to service-core workloads. The
+ * application has access to the remaining lcores as normal.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdio.h>
+#include <stdint.h>
+#include <sys/queue.h>
+
+#include <rte_lcore.h>
+
+/* forward declaration only. Definition in rte_service_private.h */
+struct rte_service_spec;
+
+#define RTE_SERVICE_NAME_MAX 32
+
+/* Capabilities of a service.
+ *
+ * Use the *rte_service_probe_capability* function to check whether a service
+ * implements a specific capability.
+ */
+/** When set, the service is capable of having multiple threads run it at the
+ * same time.
+ */
+#define RTE_SERVICE_CAP_MT_SAFE (1 << 0)
+
+/** Return the number of services registered.
+ *
+ * The number of services registered defines the valid range of ids (0 to
+ * count - 1) that can be passed to *rte_service_get_by_id*, enabling the
+ * application to retrieve the specification of each service.
+ *
+ * @return The number of services registered.
+ */
+uint32_t rte_service_get_count(void);
+
+/** Return the specification of a service.
+ *
+ * This function provides the specification of a service. This can be used by
+ * the application to understand what the service represents. The service
+ * must not be modified by the application directly, only passed to the various
+ * rte_service_* functions.
+ *
+ * @param id The integer id of the service to retrieve
+ * @retval non-zero A valid pointer to the service_spec
+ * @retval NULL Invalid *id* provided.
+ */
+struct rte_service_spec *rte_service_get_by_id(uint32_t id);
+
+/** Return the name of the service.
+ *
+ * @return A pointer to the name of the service. The returned pointer remains
+ * in ownership of the service, and the application must not free it.
+ */
+const char *rte_service_get_name(const struct rte_service_spec *service);
+
+/** Check if a service has a specific capability.
+ *
+ * This function returns whether *service* implements *capability*.
+ * See RTE_SERVICE_CAP_* defines for a list of valid capabilities.
+ * @retval 1 Capability supported by this service instance
+ * @retval 0 Capability not supported by this service instance
+ */
+int32_t rte_service_probe_capability(const struct rte_service_spec *service,
+ uint32_t capability);
+
+/** Enable a core to run a service.
+ *
+ * Each core can be added or removed from running specific services. This
+ * function adds *lcore* to the set of cores that will run *service*.
+ *
+ * If multiple cores are enabled on a service, an atomic is used to ensure that
+ * only one core runs the service at a time. The exception to this is when
+ * a service indicates that it is multi-thread safe by setting the capability
+ * called RTE_SERVICE_CAP_MT_SAFE. With the multi-thread safe capability set,
+ * the service function can be run on multiple threads at the same time.
+ *
+ * @retval 0 lcore added successfully
+ * @retval -EINVAL An invalid service or lcore was provided.
+ */
+int32_t rte_service_enable_on_core(struct rte_service_spec *service,
+ uint32_t lcore);
+
+/** Disable a core from running a service.
+ *
+ * Each core can be added or removed from running specific services. This
+ * function removes *lcore* from the set of cores that will run *service*.
+ *
+ * @retval 0 Lcore removed successfully
+ * @retval -EINVAL An invalid service or lcore was provided.
+ */
+int32_t rte_service_disable_on_core(struct rte_service_spec *service,
+ uint32_t lcore);
+
+/** Return whether an lcore is enabled for the service.
+ *
+ * This function allows the application to query if *lcore* is currently set to
+ * run *service*.
+ *
+ * @retval 1 Service is enabled to run on this lcore
+ * @retval 0 Service is disabled on this lcore
+ * @retval -EINVAL An invalid service or lcore was provided.
+ */
+int32_t rte_service_get_enabled_on_core(struct rte_service_spec *service,
+ uint32_t lcore);
+
+
+/** Enable *service* to run.
+ *
+ * This function switches on a service during runtime.
+ * @retval 0 The service was successfully started
+ */
+int32_t rte_service_start(struct rte_service_spec *service);
+
+/** Disable *service*.
+ *
+ * Switch off a service, so it is not run until *rte_service_start* is
+ * called on it.
+ * @retval 0 Service successfully switched off
+ */
+int32_t rte_service_stop(struct rte_service_spec *service);
+
+/** Return whether *service* is currently running.
+ *
+ * @retval 1 Service is currently running
+ * @retval 0 Service is currently stopped
+ * @retval -EINVAL Invalid service pointer provided
+ */
+int32_t rte_service_is_running(const struct rte_service_spec *service);
+
+/** Start a service core.
+ *
+ * Starting a core makes the core begin polling. Any services assigned to it
+ * will be run as fast as possible.
+ *
+ * @retval 0 Success
+ * @retval -EINVAL Failed to start core. The *lcore_id* passed in is not
+ * currently assigned to be a service core.
+ */
+int32_t rte_service_core_start(uint32_t lcore_id);
+
+/** Stop a service core.
+ *
+ * Stopping a core makes the core idle, but it remains assigned as a
+ * service core.
+ *
+ * @retval 0 Success
+ * @retval -EINVAL Invalid *lcore_id* provided
+ * @retval -EALREADY Already stopped core
+ * @retval -EBUSY Failed to stop core, as it would cause a service to not
+ * be run, as this is the only core currently running the service.
+ * The application must stop the service first, and then stop the
+ * lcore.
+ */
+int32_t rte_service_core_stop(uint32_t lcore_id);
+
+/** Adds lcore to the list of service cores.
+ *
+ * This function can be used at runtime in order to modify the service core
+ * mask.
+ *
+ * @retval 0 Success
+ * @retval -EBUSY lcore is busy, and not available for service core duty
+ * @retval -EALREADY lcore is already added to the service core list
+ * @retval -EINVAL Invalid lcore provided
+ */
+int32_t rte_service_core_add(uint32_t lcore);
+
+/** Removes lcore from the list of service cores.
+ *
+ * This can fail if the core is not stopped, see *rte_service_core_stop*.
+ *
+ * @retval 0 Success
+ * @retval -EBUSY Lcore is not stopped, stop service core before removing.
+ * @retval -EINVAL Invalid lcore provided.
+ */
+int32_t rte_service_core_del(uint32_t lcore);
+
+/** Retrieve the number of service cores currently available.
+ *
+ * This function returns the integer count of service cores available. The
+ * service core count can be used in mapping logic when creating mappings
+ * from service cores to services.
+ *
+ * See *rte_service_core_list* for details on retrieving the lcore_id of each
+ * service core.
+ *
+ * @return The number of service cores currently configured.
+ */
+int32_t rte_service_core_count(void);
+
+/** Reset all service core mappings.
+ * @retval 0 Success
+ */
+int32_t rte_service_core_reset_all(void);
+
+/** Retrieve the list of currently enabled service cores.
+ *
+ * This function fills in an application supplied array, with each element
+ * indicating the lcore_id of a service core.
+ *
+ * Adding and removing service cores can be performed using
+ * *rte_service_core_add* and *rte_service_core_del*.
+ * @param array An array of at least *n* items.
+ * @param n The size of *array*.
+ * @retval >=0 Number of service cores that have been populated in the array
+ * @retval -ENOMEM The provided array is not large enough to fill in the
+ * service core list. No items have been populated, call this function
+ * with a size of at least *rte_service_core_count* items.
+ */
+int32_t rte_service_core_list(uint32_t array[], uint32_t n);
+
+/** Dumps any information available about the service. If service is NULL,
+ * dumps info for all services.
+ */
+int32_t rte_service_dump(FILE *f, struct rte_service_spec *service);
+
+#ifdef __cplusplus
+}
+#endif
+
+
+#endif /* _RTE_SERVICE_H_ */
diff --git a/lib/librte_eal/common/include/rte_service_private.h b/lib/librte_eal/common/include/rte_service_private.h
new file mode 100644
index 0000000..d8bb644
--- /dev/null
+++ b/lib/librte_eal/common/include/rte_service_private.h
@@ -0,0 +1,108 @@
+/*
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_SERVICE_PRIVATE_H_
+#define _RTE_SERVICE_PRIVATE_H_
+
+/* This file specifies the internal service specification.
+ * Include this file if you are writing a component that requires CPU cycles to
+ * operate, and you wish to run the component using service cores
+ */
+
+#include <rte_service.h>
+
+/**
+ * Signature of callback function to run a service.
+ */
+typedef int32_t (*rte_service_func)(void *args);
+
+/**
+ * The specification of a service.
+ *
+ * This struct contains metadata about the service itself, the callback
+ * function to run one iteration of the service, a userdata pointer, flags etc.
+ */
+struct rte_service_spec {
+ /** The name of the service. This should be used by the application to
+ * understand what purpose this service provides.
+ */
+ char name[RTE_SERVICE_NAME_MAX];
+ /** The callback to invoke to run one iteration of the service. */
+ rte_service_func callback;
+ /** The userdata pointer provided to the service callback. */
+ void *callback_userdata;
+ /** Flags to indicate the capabilities of this service. See defines in
+ * the public header file for values of RTE_SERVICE_CAP_*
+ */
+ uint32_t capabilities;
+ /** NUMA socket ID that this service is affinitized to */
+ int8_t socket_id;
+};
+
+/** Register a new service.
+ *
+ * A service represents a component that requires CPU time periodically to
+ * achieve its purpose.
+ *
+ * For example the eventdev SW PMD requires CPU cycles to perform its
+ * scheduling. This can be achieved by registering it as a service, and the
+ * application can then assign CPU resources to it using
+ * *rte_service_enable_on_core*.
+ *
+ * @param spec The specification of the service to register
+ * @retval 0 Successfully registered the service.
+ * @retval -EINVAL Attempted to register an invalid service (eg, no callback
+ * set)
+ */
+int32_t rte_service_register(const struct rte_service_spec *spec);
+
+/** Unregister a service.
+ *
+ * The service being removed must be stopped before calling this function.
+ *
+ * @retval 0 The service was successfully unregistered.
+ * @retval -EBUSY The service is currently running, stop the service before
+ * calling unregister. No action has been taken.
+ */
+int32_t rte_service_unregister(struct rte_service_spec *service);
+
+/** Private function to allow the EAL to initialize default mappings.
+ *
+ * This function iterates over all the services, and maps them to the available
+ * cores. Based on the capabilities of the services, they are set to run on the
+ * available cores in a round-robin manner.
+ *
+ * @retval 0 Success
+ */
+int32_t rte_service_init_default_mapping(void);
+
+#endif /* _RTE_SERVICE_PRIVATE_H_ */
diff --git a/lib/librte_eal/common/rte_service.c b/lib/librte_eal/common/rte_service.c
new file mode 100644
index 0000000..8b5e344
--- /dev/null
+++ b/lib/librte_eal/common/rte_service.c
@@ -0,0 +1,568 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdio.h>
+#include <unistd.h>
+#include <inttypes.h>
+#include <limits.h>
+#include <string.h>
+#include <dirent.h>
+
+#include <rte_service.h>
+#include "include/rte_service_private.h"
+
+#include <rte_eal.h>
+#include <rte_lcore.h>
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_cycles.h>
+#include <rte_atomic.h>
+
+#define RTE_SERVICE_NUM_MAX 64
+
+#define RTE_SERVICE_FLAG_REGISTERED_SHIFT 0
+
+#define RTE_SERVICE_RUNSTATE_STOPPED 0
+#define RTE_SERVICE_RUNSTATE_RUNNING 1
+
+/* internal representation of a service */
+struct rte_service_spec_impl {
+ /* public part of the struct */
+ struct rte_service_spec spec;
+
+ /* atomic lock that when set indicates a service core is currently
+ * running this service callback. When not set, a core may take the
+ * lock and then run the service callback.
+ */
+ rte_atomic32_t execute_lock;
+
+ /* API set/get-able variables */
+ int32_t runstate;
+ uint8_t internal_flags;
+
+ /* per service statistics */
+ uint32_t num_mapped_cores;
+ uint64_t calls;
+ uint64_t cycles_spent;
+};
+
+/* the internal values of a service core */
+struct core_state {
+	uint64_t service_mask; /* bitmap of service IDs run on this core */
+ uint8_t runstate; /* running or stopped */
+ uint8_t is_service_core; /* set if core is currently a service core */
+
+	/* per-service call statistics */
+ uint64_t calls_per_service[RTE_SERVICE_NUM_MAX];
+};
+
+static uint32_t rte_service_count;
+static struct rte_service_spec_impl rte_services[RTE_SERVICE_NUM_MAX];
+static struct core_state cores_state[RTE_MAX_LCORE];
+
+/* Returns 1 if the service is registered and has not been unregistered.
+ * Returns 0 if the service was never registered, or has been unregistered.
+ */
+static int
+service_valid(uint32_t id) {
+ return !!(rte_services[id].internal_flags &
+ (1 << RTE_SERVICE_FLAG_REGISTERED_SHIFT));
+}
+
+uint32_t
+rte_service_get_count(void)
+{
+ return rte_service_count;
+}
+
+struct rte_service_spec *
+rte_service_get_by_id(uint32_t id)
+{
+ struct rte_service_spec *service = NULL;
+ if (id < rte_service_count)
+ service = (struct rte_service_spec *)&rte_services[id];
+
+ return service;
+}
+
+const char *
+rte_service_get_name(const struct rte_service_spec *service)
+{
+ return service->name;
+}
+
+int32_t
+rte_service_probe_capability(const struct rte_service_spec *service,
+ uint32_t capability)
+{
+ return service->capabilities & capability;
+}
+
+int32_t
+rte_service_is_running(const struct rte_service_spec *spec)
+{
+ if (!spec)
+ return -EINVAL;
+
+ const struct rte_service_spec_impl *impl =
+ (const struct rte_service_spec_impl *)spec;
+ return impl->runstate == RTE_SERVICE_RUNSTATE_RUNNING;
+}
+
+int32_t
+rte_service_register(const struct rte_service_spec *spec)
+{
+ uint32_t i;
+ int32_t free_slot = -1;
+
+ if (spec->callback == NULL || strlen(spec->name) == 0)
+ return -EINVAL;
+
+ for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
+ if (!service_valid(i)) {
+ free_slot = i;
+ break;
+ }
+ }
+
+ if (free_slot < 0)
+ return -ENOSPC;
+
+ struct rte_service_spec_impl *s = &rte_services[free_slot];
+ s->spec = *spec;
+ s->internal_flags |= (1 << RTE_SERVICE_FLAG_REGISTERED_SHIFT);
+
+ rte_smp_wmb();
+ rte_service_count++;
+
+ return 0;
+}
+
+int32_t
+rte_service_unregister(struct rte_service_spec *spec)
+{
+ struct rte_service_spec_impl *s = NULL;
+ struct rte_service_spec_impl *spec_impl =
+ (struct rte_service_spec_impl *)spec;
+
+ uint32_t i;
+ uint32_t service_id;
+ for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
+ if (&rte_services[i] == spec_impl) {
+ s = spec_impl;
+ service_id = i;
+ break;
+ }
+ }
+
+ if (!s)
+ return -EINVAL;
+
+ s->internal_flags &= ~(1 << RTE_SERVICE_FLAG_REGISTERED_SHIFT);
+
+ for (i = 0; i < RTE_MAX_LCORE; i++)
+ cores_state[i].service_mask &= ~(1 << service_id);
+
+ memset(&rte_services[service_id], 0,
+ sizeof(struct rte_service_spec_impl));
+
+ rte_smp_wmb();
+ rte_service_count--;
+
+ return 0;
+}
+
+int32_t
+rte_service_start(struct rte_service_spec *service)
+{
+ struct rte_service_spec_impl *s =
+ (struct rte_service_spec_impl *)service;
+ s->runstate = RTE_SERVICE_RUNSTATE_RUNNING;
+ return 0;
+}
+
+int32_t
+rte_service_stop(struct rte_service_spec *service)
+{
+ struct rte_service_spec_impl *s =
+ (struct rte_service_spec_impl *)service;
+ s->runstate = RTE_SERVICE_RUNSTATE_STOPPED;
+ return 0;
+}
+
+static int32_t
+rte_service_runner_func(void *arg)
+{
+ RTE_SET_USED(arg);
+ uint32_t i;
+ const int lcore = rte_lcore_id();
+ struct core_state *cs = &cores_state[lcore];
+
+ while (cores_state[lcore].runstate == RTE_SERVICE_RUNSTATE_RUNNING) {
+ for (i = 0; i < rte_service_count; i++) {
+ struct rte_service_spec_impl *s = &rte_services[i];
+ uint64_t service_mask = cs->service_mask;
+
+ if (s->runstate != RTE_SERVICE_RUNSTATE_RUNNING ||
+ !(service_mask & (1 << i)))
+ continue;
+
+ uint32_t *lock = (uint32_t *)&s->execute_lock;
+ if (rte_atomic32_cmpset(lock, 0, 1)) {
+ void *userdata = s->spec.callback_userdata;
+ uint64_t start = rte_rdtsc();
+ s->spec.callback(userdata);
+ uint64_t end = rte_rdtsc();
+
+ uint64_t spent = end - start;
+ s->cycles_spent += spent;
+ s->calls++;
+ cs->calls_per_service[i]++;
+
+ rte_atomic32_clear(&s->execute_lock);
+ }
+ }
+ rte_mb();
+ }
+
+ /* mark core as ready to accept work again */
+ lcore_config[lcore].state = WAIT;
+
+ return 0;
+}
+
+int32_t
+rte_service_core_count(void)
+{
+ int32_t count = 0;
+ uint32_t i;
+ for (i = 0; i < RTE_MAX_LCORE; i++)
+ count += cores_state[i].is_service_core;
+ return count;
+}
+
+int32_t
+rte_service_core_list(uint32_t array[], uint32_t n)
+{
+ uint32_t count = rte_service_core_count();
+ if (count > n)
+ return -ENOMEM;
+
+ uint32_t i;
+ uint32_t idx = 0;
+ for (i = 0; i < RTE_MAX_LCORE; i++) {
+ struct core_state *cs = &cores_state[i];
+ if (cs->is_service_core) {
+ array[idx] = i;
+ idx++;
+ }
+ }
+
+ return count;
+}
+
+int32_t
+rte_service_init_default_mapping(void)
+{
+ /* create a default mapping from cores to services, then start the
+ * services to make them transparent to unaware applications.
+ */
+ uint32_t i;
+ int ret;
+ uint32_t count = rte_service_get_count();
+ struct rte_config *cfg = rte_eal_get_configuration();
+
+ for (i = 0; i < count; i++) {
+ struct rte_service_spec *s = rte_service_get_by_id(i);
+ if (!s)
+ return -EINVAL;
+
+ ret = 0;
+ int j;
+ for (j = 0; j < RTE_MAX_LCORE; j++) {
+ /* TODO: add lcore -> service mapping logic here */
+ if (cfg->lcore_role[j] == ROLE_SERVICE) {
+ ret = rte_service_enable_on_core(s, j);
+ if (ret)
+ rte_panic("Enabling service core %d on service %s failed\n",
+ j, s->name);
+ }
+ }
+
+ ret = rte_service_start(s);
+ if (ret)
+ rte_panic("failed to start service %s\n", s->name);
+ }
+
+ return 0;
+}
+
+static int32_t
+service_update(struct rte_service_spec *service, uint32_t lcore,
+ uint32_t *set, uint32_t *enabled)
+{
+ uint32_t i;
+ int32_t sid = -1;
+
+ for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
+ if ((struct rte_service_spec *)&rte_services[i] == service &&
+ service_valid(i)) {
+ sid = i;
+ break;
+ }
+ }
+
+ if (sid == -1 || lcore >= RTE_MAX_LCORE)
+ return -EINVAL;
+
+ if (!cores_state[lcore].is_service_core)
+ return -EINVAL;
+
+ if (set) {
+ if (*set) {
+ cores_state[lcore].service_mask |= (1 << sid);
+ rte_services[sid].num_mapped_cores++;
+ } else {
+ cores_state[lcore].service_mask &= ~(1 << sid);
+ rte_services[sid].num_mapped_cores--;
+ }
+ }
+
+ if (enabled)
+ *enabled = (cores_state[lcore].service_mask & (1 << sid));
+
+ return 0;
+}
+
+int32_t rte_service_get_enabled_on_core(struct rte_service_spec *service,
+ uint32_t lcore)
+{
+ uint32_t enabled;
+ int ret = service_update(service, lcore, 0, &enabled);
+ if (ret == 0)
+ return enabled;
+ return -EINVAL;
+}
+
+int32_t
+rte_service_enable_on_core(struct rte_service_spec *service, uint32_t lcore)
+{
+ uint32_t on = 1;
+ return service_update(service, lcore, &on, 0);
+}
+
+int32_t
+rte_service_disable_on_core(struct rte_service_spec *service, uint32_t lcore)
+{
+ uint32_t off = 0;
+ return service_update(service, lcore, &off, 0);
+}
+
+int32_t rte_service_core_reset_all(void)
+{
+ /* loop over cores, reset all to mask 0 */
+ uint32_t i;
+ for (i = 0; i < RTE_MAX_LCORE; i++) {
+ cores_state[i].service_mask = 0;
+ cores_state[i].is_service_core = 0;
+ }
+
+ return 0;
+}
+
+int32_t
+rte_service_core_add(uint32_t lcore)
+{
+ if (lcore >= RTE_MAX_LCORE)
+ return -EINVAL;
+ if (cores_state[lcore].is_service_core)
+ return -EALREADY;
+
+ lcore_config[lcore].core_role = ROLE_SERVICE;
+
+ /* TODO: take from EAL by setting ROLE_SERVICE? */
+ cores_state[lcore].is_service_core = 1;
+ cores_state[lcore].service_mask = 0;
+
+ return 0;
+}
+
+int32_t
+rte_service_core_del(uint32_t lcore)
+{
+ if (lcore >= RTE_MAX_LCORE)
+ return -EINVAL;
+
+ struct core_state *cs = &cores_state[lcore];
+ if (!cs->is_service_core)
+ return -EINVAL;
+
+ if (cs->runstate != RTE_SERVICE_RUNSTATE_STOPPED)
+ return -EBUSY;
+
+ lcore_config[lcore].core_role = ROLE_RTE;
+ cores_state[lcore].is_service_core = 0;
+ /* TODO: return to EAL by setting ROLE_RTE? */
+
+ return 0;
+}
+
+int32_t
+rte_service_core_start(uint32_t lcore)
+{
+ if (lcore >= RTE_MAX_LCORE)
+ return -EINVAL;
+
+ struct core_state *cs = &cores_state[lcore];
+ if (!cs->is_service_core)
+ return -EINVAL;
+
+ if (cs->runstate == RTE_SERVICE_RUNSTATE_RUNNING)
+ return -EALREADY;
+
+	/* set the core to the run state first, and then launch; otherwise the
+	 * runner would return immediately, as the runstate check is what keeps
+	 * it in the service poll loop
+ */
+ cores_state[lcore].runstate = RTE_SERVICE_RUNSTATE_RUNNING;
+
+ int ret = rte_eal_remote_launch(rte_service_runner_func, 0, lcore);
+ /* returns -EBUSY if the core is already launched, 0 on success */
+ return ret;
+}
+
+int32_t
+rte_service_core_stop(uint32_t lcore)
+{
+ if (lcore >= RTE_MAX_LCORE)
+ return -EINVAL;
+
+ if (cores_state[lcore].runstate == RTE_SERVICE_RUNSTATE_STOPPED)
+ return -EALREADY;
+
+ uint32_t i;
+ for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
+		int32_t enabled = cores_state[lcore].service_mask & (1 << i);
+ int32_t service_running = rte_services[i].runstate !=
+ RTE_SERVICE_RUNSTATE_STOPPED;
+ int32_t only_core = rte_services[i].num_mapped_cores == 1;
+
+ /* if the core is mapped, and the service is running, and this
+ * is the only core that is mapped, the service would cease to
+ * run if this core stopped, so fail instead.
+ */
+ if (enabled && service_running && only_core)
+ return -EBUSY;
+ }
+
+ cores_state[lcore].runstate = RTE_SERVICE_RUNSTATE_STOPPED;
+
+ return 0;
+}
+
+static void
+rte_service_dump_one(FILE *f, struct rte_service_spec_impl *s,
+ uint64_t all_cycles, uint32_t reset)
+{
+	/* avoid divide by zero */
+ if (all_cycles == 0)
+ all_cycles = 1;
+
+ int calls = 1;
+ if (s->calls != 0)
+ calls = s->calls;
+
+ float cycles_pct = (((float)s->cycles_spent) / all_cycles) * 100.f;
+ fprintf(f,
+ " %s : %0.1f %%\tcalls %"PRIu64"\tcycles %"PRIu64"\tavg: %"PRIu64"\n",
+ s->spec.name, cycles_pct, s->calls, s->cycles_spent,
+ s->cycles_spent / calls);
+
+ if (reset) {
+ s->cycles_spent = 0;
+ s->calls = 0;
+ }
+}
+
+static void
+service_dump_calls_per_core(FILE *f, uint32_t lcore, uint32_t reset)
+{
+ uint32_t i;
+ struct core_state *cs = &cores_state[lcore];
+
+ fprintf(f, "%02d\t", lcore);
+ for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
+ if (!service_valid(i))
+ continue;
+ fprintf(f, "%04"PRIu64"\t", cs->calls_per_service[i]);
+ if (reset)
+ cs->calls_per_service[i] = 0;
+ }
+ fprintf(f, "\n");
+}
+
+int32_t rte_service_dump(FILE *f, struct rte_service_spec *service)
+{
+ uint32_t i;
+
+ uint64_t total_cycles = 0;
+ for (i = 0; i < rte_service_count; i++)
+ total_cycles += rte_services[i].cycles_spent;
+
+ if (service) {
+ struct rte_service_spec_impl *s =
+ (struct rte_service_spec_impl *)service;
+ fprintf(f, "Service %s Summary\n", s->spec.name);
+ uint32_t reset = 0;
+ rte_service_dump_one(f, s, total_cycles, reset);
+ return 0;
+ }
+
+ struct rte_config *cfg = rte_eal_get_configuration();
+
+ fprintf(f, "Services Summary\n");
+ for (i = 0; i < rte_service_count; i++) {
+ uint32_t reset = 1;
+ rte_service_dump_one(f, &rte_services[i], total_cycles, reset);
+ }
+
+ fprintf(f, "Service Cores Summary\n");
+ for (i = 0; i < RTE_MAX_LCORE; i++) {
+ if (cfg->lcore_role[i] != ROLE_SERVICE)
+ continue;
+
+ uint32_t reset = 0;
+ service_dump_calls_per_core(f, i, reset);
+ }
+
+ return 0;
+}
diff --git a/lib/librte_eal/linuxapp/eal/Makefile b/lib/librte_eal/linuxapp/eal/Makefile
index 640afd0..438dcf9 100644
--- a/lib/librte_eal/linuxapp/eal/Makefile
+++ b/lib/librte_eal/linuxapp/eal/Makefile
@@ -96,6 +96,7 @@ SRCS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += rte_malloc.c
SRCS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += malloc_elem.c
SRCS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += malloc_heap.c
SRCS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += rte_keepalive.c
+SRCS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += rte_service.c
# from arch dir
SRCS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += rte_cpuflags.c
diff --git a/lib/librte_eal/linuxapp/eal/rte_eal_version.map b/lib/librte_eal/linuxapp/eal/rte_eal_version.map
index 670bab3..f0ed607 100644
--- a/lib/librte_eal/linuxapp/eal/rte_eal_version.map
+++ b/lib/librte_eal/linuxapp/eal/rte_eal_version.map
@@ -198,3 +198,27 @@ DPDK_17.05 {
vfio_get_group_no;
} DPDK_17.02;
+
+DPDK_17.08 {
+ global:
+
+ rte_service_core_add;
+ rte_service_core_count;
+ rte_service_core_del;
+ rte_service_core_list;
+ rte_service_core_reset_all;
+ rte_service_core_start;
+ rte_service_core_stop;
+ rte_service_disable_on_core;
+ rte_service_enable_on_core;
+ rte_service_get_by_id;
+ rte_service_get_count;
+ rte_service_get_enabled_on_core;
+ rte_service_is_running;
+ rte_service_register;
+ rte_service_reset;
+ rte_service_start;
+ rte_service_stop;
+ rte_service_unregister;
+
+} DPDK_17.05;
--
2.7.4
^ permalink raw reply [relevance 1%]
* Re: [dpdk-dev] [PATCH v2 2/3] eal: PCI domain should be 32 bits
2017-06-22 16:05 0% ` Thomas Monjalon
@ 2017-06-22 16:23 0% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2017-06-22 16:23 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev, Stephen Hemminger
On Thu, 22 Jun 2017 18:05:42 +0200
Thomas Monjalon <thomas@monjalon.net> wrote:
> 22/06/2017 17:56, Stephen Hemminger:
> > In some environments, the PCI domain can be larger than 16 bits.
> > For example, a PCI device passed through in Azure gets a synthetic domain
> > id which is internally generated based on GUID. The PCI standard does
> > not restrict domain to be 16 bits.
> >
> > This change breaks ABI for APIs that expose PCI address structure.
> >
> > Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
> > ---
> [...]
> > --- a/lib/librte_eal/common/include/rte_pci.h
> > +++ b/lib/librte_eal/common/include/rte_pci.h
> > @@ -63,7 +63,7 @@ const char *pci_get_sysfs_path(void);
> >
> > /** Formatting string for PCI device identifier: Ex: 0000:00:01.0 */
> > #define PCI_PRI_FMT "%.4" PRIx16 ":%.2" PRIx8 ":%.2" PRIx8 ".%" PRIx8
> > -#define PCI_PRI_STR_SIZE sizeof("XXXX:XX:XX.X")
> > +#define PCI_PRI_STR_SIZE sizeof("XXXXXXXX:XX:XX.X")
>
> I think you need to change PCI_PRI_FMT accordingly.
No. I don't want all outputs to have extra leading zeros on other platforms.
The existing format works:
Example:
--- cut here ---
struct rte_pci_addr {
uint32_t domain; /**< Device domain */
uint8_t bus; /**< Device bus */
uint8_t devid; /**< Device ID */
uint8_t function; /**< Device function. */
};
#define PCI_PRI_FMT "%.4" PRIx16 ":%.2" PRIx8 ":%.2" PRIx8 ".%" PRIx8
int main(void)
{
struct rte_pci_addr pci_addr = { 0, 5, 0, 0 };
printf(PCI_PRI_FMT "\n",
pci_addr.domain,
pci_addr.bus, pci_addr.devid,
pci_addr.function);
pci_addr.domain = 0xdeadbeef;
printf(PCI_PRI_FMT "\n",
pci_addr.domain,
pci_addr.bus, pci_addr.devid,
pci_addr.function);
return 0;
}
--- output ---
0000:05:00.0
deadbeef:05:00.0
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2 2/3] eal: PCI domain should be 32 bits
2017-06-22 15:56 3% ` [dpdk-dev] [PATCH v2 2/3] eal: PCI domain should be 32 bits Stephen Hemminger
@ 2017-06-22 16:05 0% ` Thomas Monjalon
2017-06-22 16:23 0% ` Stephen Hemminger
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2017-06-22 16:05 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev, Stephen Hemminger
22/06/2017 17:56, Stephen Hemminger:
> In some environments, the PCI domain can be larger than 16 bits.
> For example, a PCI device passed through in Azure gets a synthetic domain
> id which is internally generated based on GUID. The PCI standard does
> not restrict domain to be 16 bits.
>
> This change breaks ABI for APIs that expose PCI address structure.
>
> Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
> ---
[...]
> --- a/lib/librte_eal/common/include/rte_pci.h
> +++ b/lib/librte_eal/common/include/rte_pci.h
> @@ -63,7 +63,7 @@ const char *pci_get_sysfs_path(void);
>
> /** Formatting string for PCI device identifier: Ex: 0000:00:01.0 */
> #define PCI_PRI_FMT "%.4" PRIx16 ":%.2" PRIx8 ":%.2" PRIx8 ".%" PRIx8
> -#define PCI_PRI_STR_SIZE sizeof("XXXX:XX:XX.X")
> +#define PCI_PRI_STR_SIZE sizeof("XXXXXXXX:XX:XX.X")
I think you need to change PCI_PRI_FMT accordingly.
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v2 2/3] eal: PCI domain should be 32 bits
@ 2017-06-22 15:56 3% ` Stephen Hemminger
2017-06-22 16:05 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2017-06-22 15:56 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Stephen Hemminger
In some environments, the PCI domain can be larger than 16 bits.
For example, a PCI device passed through in Azure gets a synthetic domain
id which is internally generated based on GUID. The PCI standard does
not restrict domain to be 16 bits.
This change breaks ABI for APIs that expose PCI address structure.
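To illustrate why the casts below are needed (a standalone sketch, not part
of the patch): with a 32-bit domain, the old expression evaluated the shift
in 32-bit arithmetic, so the upper bits of the domain were silently dropped
before being stored in the 64-bit comparison value:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t domain = 0xdeadbeef;
	/* shift performed in 32 bits: high domain bits are lost */
	uint64_t truncated = domain << 24;
	/* widen first, as the patch does: full domain preserved */
	uint64_t widened = (uint64_t)domain << 24;

	printf("%" PRIx64 " vs %" PRIx64 "\n", truncated, widened);
	return 0;
}

The first value collapses to ef000000 while the second keeps deadbeef000000,
which is why rte_eal_compare_pci_addr() must widen the domain before shifting.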
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
---
v2 -- need to expand string size and shifts in compare function
lib/librte_eal/common/include/rte_pci.h | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/lib/librte_eal/common/include/rte_pci.h b/lib/librte_eal/common/include/rte_pci.h
index 0284a6208aa5..e416714b32ff 100644
--- a/lib/librte_eal/common/include/rte_pci.h
+++ b/lib/librte_eal/common/include/rte_pci.h
@@ -63,7 +63,7 @@ const char *pci_get_sysfs_path(void);
/** Formatting string for PCI device identifier: Ex: 0000:00:01.0 */
#define PCI_PRI_FMT "%.4" PRIx16 ":%.2" PRIx8 ":%.2" PRIx8 ".%" PRIx8
-#define PCI_PRI_STR_SIZE sizeof("XXXX:XX:XX.X")
+#define PCI_PRI_STR_SIZE sizeof("XXXXXXXX:XX:XX.X")
/** Short formatting string, without domain, for PCI device: Ex: 00:01.0 */
#define PCI_SHORT_PRI_FMT "%.2" PRIx8 ":%.2" PRIx8 ".%" PRIx8
@@ -112,7 +112,7 @@ struct rte_pci_id {
* A structure describing the location of a PCI device.
*/
struct rte_pci_addr {
- uint16_t domain; /**< Device domain */
+ uint32_t domain; /**< Device domain */
uint8_t bus; /**< Device bus */
uint8_t devid; /**< Device ID */
uint8_t function; /**< Device function. */
@@ -346,10 +346,10 @@ rte_eal_compare_pci_addr(const struct rte_pci_addr *addr,
if ((addr == NULL) || (addr2 == NULL))
return -1;
- dev_addr = (addr->domain << 24) | (addr->bus << 16) |
- (addr->devid << 8) | addr->function;
- dev_addr2 = (addr2->domain << 24) | (addr2->bus << 16) |
- (addr2->devid << 8) | addr2->function;
+ dev_addr = ((uint64_t)addr->domain << 24) |
+ (addr->bus << 16) | (addr->devid << 8) | addr->function;
+ dev_addr2 = ((uint64_t)addr2->domain << 24) |
+ (addr2->bus << 16) | (addr2->devid << 8) | addr2->function;
if (dev_addr > dev_addr2)
return 1;
--
2.11.0
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH 2/3] eal: PCI domain should be 32 bits
2017-06-21 16:35 3% ` [dpdk-dev] [PATCH 2/3] eal: PCI domain should be 32 bits Stephen Hemminger
@ 2017-06-22 9:28 0% ` Chang, Cunyin
0 siblings, 0 replies; 200+ results
From: Chang, Cunyin @ 2017-06-22 9:28 UTC (permalink / raw)
To: Stephen Hemminger, dev; +Cc: Stephen Hemminger
I think this patch series does not cover all the areas which need to adapt to the u32 PCI domain;
we still need some other work to do:
We need to define another macro, similar to PCI_PRI_FMT. Something like:
#define PCI_XXX_PRI_FMT "%.5" PRIx32 ":%.2" PRIx8 ":%.2" PRIx8 ".%" PRIx8
PCI_PRI_STR_SIZE also needs to be modified:
#define PCI_PRI_STR_SIZE sizeof("XXXXX:XX:XX.X")
The macro PCI_PRI_FMT will not work if
the domain exceeds 16 bits. It will impact the following functions:
1 RTE_LOG calls; there are lots of RTE_LOG call sites such as:
RTE_LOG(WARNING, EAL,
"Requested device " PCI_PRI_FMT " cannot be used\n",
addr->domain, addr->bus, addr->devid, addr->function);
2 pci_dump_one_device().
3 rte_eal_pci_device_name()
4 pci_update_device()
5 pci_ioport_map()
6 pci_get_uio_dev()
7 pci_uio_map_resource_by_index()
8 pci_uio_ioport_map()
9 pci_vfio_map_resource()
10 pci_vfio_unmap_resource()
All the above functions use the macro PCI_PRI_FMT, so I think they need to be modified too.
There is some other code that needs to be modified:
In rte_eal_compare_pci_addr(), we need to do the following:
dev_addr = ((uint64_t)addr->domain << 24) | ((uint64_t)addr->bus << 16) |
((uint64_t)addr->devid << 8) | (uint64_t)addr->function;
dev_addr2 = ((uint64_t)addr2->domain << 24) | ((uint64_t)addr2->bus << 16) |
((uint64_t)addr2->devid << 8) | (uint64_t)addr2->function;
In eal_parse_pci_BDF(), we need to do the following:
GET_PCIADDR_FIELD(input, dev_addr->domain, UINT32_MAX, ':');
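As a rough sketch of what one of the RTE_LOG call sites listed above could
look like after the change (the macro name PCI_U32_PRI_FMT and the
eight-hex-digit width are my assumptions, not settled names):

#define PCI_U32_PRI_FMT "%.8" PRIx32 ":%.2" PRIx8 ":%.2" PRIx8 ".%" PRIx8

	RTE_LOG(WARNING, EAL,
		"Requested device " PCI_U32_PRI_FMT " cannot be used\n",
		addr->domain, addr->bus, addr->devid, addr->function);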
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Stephen
> Hemminger
> Sent: Thursday, June 22, 2017 12:36 AM
> To: dev@dpdk.org
> Cc: Stephen Hemminger <stephen@networkplumber.org>; Stephen
> Hemminger <sthemmin@microsoft.com>
> Subject: [dpdk-dev] [PATCH 2/3] eal: PCI domain should be 32 bits
>
> In some environments, the PCI domain can be larger than 16 bits.
> For example, a PCI device passed through in Azure gets a synthetic domain id
> which is internally generated based on GUID. The PCI standard does not
> restrict domain to be 16 bits.
>
> This change breaks ABI for API's that expose PCI address structure.
>
> Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
> ---
> lib/librte_eal/common/include/rte_pci.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/lib/librte_eal/common/include/rte_pci.h
> b/lib/librte_eal/common/include/rte_pci.h
> index 0284a6208aa5..8b549aadfbe6 100644
> --- a/lib/librte_eal/common/include/rte_pci.h
> +++ b/lib/librte_eal/common/include/rte_pci.h
> @@ -112,7 +112,7 @@ struct rte_pci_id {
> * A structure describing the location of a PCI device.
> */
> struct rte_pci_addr {
> - uint16_t domain; /**< Device domain */
> + uint32_t domain; /**< Device domain */
> uint8_t bus; /**< Device bus */
> uint8_t devid; /**< Device ID */
> uint8_t function; /**< Device function. */
> --
> 2.11.0
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH 2/3] eal: PCI domain should be 32 bits
@ 2017-06-21 16:35 3% ` Stephen Hemminger
2017-06-22 9:28 0% ` Chang, Cunyin
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2017-06-21 16:35 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Stephen Hemminger
In some environments, the PCI domain can be larger than 16 bits.
For example, a PCI device passed through in Azure gets a synthetic domain
id which is internally generated based on GUID. The PCI standard does
not restrict domain to be 16 bits.
This change breaks ABI for APIs that expose PCI address structure.
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
---
lib/librte_eal/common/include/rte_pci.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/librte_eal/common/include/rte_pci.h b/lib/librte_eal/common/include/rte_pci.h
index 0284a6208aa5..8b549aadfbe6 100644
--- a/lib/librte_eal/common/include/rte_pci.h
+++ b/lib/librte_eal/common/include/rte_pci.h
@@ -112,7 +112,7 @@ struct rte_pci_id {
* A structure describing the location of a PCI device.
*/
struct rte_pci_addr {
- uint16_t domain; /**< Device domain */
+ uint32_t domain; /**< Device domain */
uint8_t bus; /**< Device bus */
uint8_t devid; /**< Device ID */
uint8_t function; /**< Device function. */
--
2.11.0
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [RFC PATCH] mk: symlink every headers first
2017-06-21 15:14 3% ` Gaëtan Rivet
@ 2017-06-21 15:53 0% ` Wiles, Keith
0 siblings, 0 replies; 200+ results
From: Wiles, Keith @ 2017-06-21 15:53 UTC (permalink / raw)
To: Gaëtan Rivet; +Cc: Richardson, Bruce, Thomas Monjalon, dev
> On Jun 21, 2017, at 10:14 AM, Gaëtan Rivet <gaetan.rivet@6wind.com> wrote:
>
> Hi,
>
> On Wed, Jun 21, 2017 at 02:27:49PM +0000, Wiles, Keith wrote:
>>
>>> On Jun 21, 2017, at 5:27 AM, Bruce Richardson <bruce.richardson@intel.com> wrote:
>>>
>>> On Tue, Jun 20, 2017 at 11:21:39PM +0200, Thomas Monjalon wrote:
>>>> If a library or a build tool uses a definition from a driver,
>>>> there is a build ordering issue, like seen when moving PCI code
>>>> into a bus driver.
>>>>
>>>> One option is to keep PCI helpers and some common definitions in EAL.
>>>> The other option is to symlink every headers at the beginning of
>>>> the build so they can be included by any other component.
>>>>
>>>> This patch shows how to achieve the second option.
>>>>
>>>> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
>>>> ---
>>>
>>> My 2c.
>>>
>>> This may be worth doing, however, two points to consider.
>>>
>>> 1. If we look to change build system this may not be worth doing unless
>>> a compelling need becomes obvious in the short term. In the meantime,
>>> for cases where it is needed...
>>> 2. libraries can already access the includes from drivers or other
>>> places fairly easily, just by adding the relevant "-I" flag to the
>>> CFLAGS for that lib.
>>>
>>> That said, since the work is already done developing this, and if it
>>> doesn't hurt in terms of build time, I suppose we might as well merge
>>> it in.
>>>
>>> So tentative ack from me, subject to testing and feedback from others.
>>
>> +1, I already have to make sure everything is symlinked first in my private DPDK work for other reasons. This patch would allow me to remove that special code.
>>
>>>
>>> /Bruce
>>
>> Regards,
>> Keith
>>
>
> Personally I do not like this approach.
>
> A well designed architecture introduces constraints that developers
> ought to follow. These constraints, when well defined, foster a cleaner
> growth on top of it with better practices.
>
> This solution is a crutch meant to alleviate other deep-rooted issues that
> should be fixed anyway. It makes them disappear right now, only for
> them to reappear when people will write libs and drivers with blurred
> hierarchies and hard-to-determine dependencies.
>
> These constraints should be a tool for developers to be critical of their work
> and help them see whether they are making a mistake, for example by
> putting implementation specific data in generic structures (as seen by
> the problems at the root of this RFC: KNI, pmdinfogen dependencies).
>
> This would have led for example eventdev, cryptodev, ethdev to leave PCI
> specific data within their structures. Yes, it works. But it is not
> clean, it leads to ABI instability, unsafe design practices.
>
> This RFC is the easy way out, making it work, at the price of technical
> debt.
>
> I understand that it is a lot of work to properly clean up the
> structures and that ressource is scarce. Having a clear architecture and
> proper structures however helps external eyes to read, understand, use,
> and ultimately contribute.
>
> This kind of move goes against the previous work that was done to
> make devices, drivers and buses generic, which I think was right.
>
> I do not feel that it is my place to nack, nor that it is constructive
> to block this if others are thinking that it is useful; however I hope
> to sway others to my position.
I do not think this is a crutch, as it allows the headers to be included from a single location instead of via relative includes, which is the worst method.
The real problem is that we did not police or restrict usage of structures/includes in the current work or patches. I suggest we start the cleanup of these dependencies and be more strict on what can be included in a feature or file. This way we can have both, and it makes it easier to locate headers. We also do not really distinguish public from private headers. Private headers should never be accessed by another feature/module or exported to the global location. Most if not all of the PMD headers should be private, or they need to be split into a public/private set of headers.
>
> Cheers,
> --
> Gaëtan Rivet
> 6WIND
Regards,
Keith
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [RFC PATCH] mk: symlink every headers first
@ 2017-06-21 15:14 3% ` Gaëtan Rivet
2017-06-21 15:53 0% ` Wiles, Keith
0 siblings, 1 reply; 200+ results
From: Gaëtan Rivet @ 2017-06-21 15:14 UTC (permalink / raw)
To: Wiles, Keith; +Cc: Richardson, Bruce, Thomas Monjalon, dev
Hi,
On Wed, Jun 21, 2017 at 02:27:49PM +0000, Wiles, Keith wrote:
>
> > On Jun 21, 2017, at 5:27 AM, Bruce Richardson <bruce.richardson@intel.com> wrote:
> >
> > On Tue, Jun 20, 2017 at 11:21:39PM +0200, Thomas Monjalon wrote:
> >> If a library or a build tool uses a definition from a driver,
> >> there is a build ordering issue, like seen when moving PCI code
> >> into a bus driver.
> >>
> >> One option is to keep PCI helpers and some common definitions in EAL.
> >> The other option is to symlink every headers at the beginning of
> >> the build so they can be included by any other component.
> >>
> >> This patch shows how to achieve the second option.
> >>
> >> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> >> ---
> >
> > My 2c.
> >
> > This may be worth doing, however, two points to consider.
> >
> > 1. If we look to change build system this may not be worth doing unless
> > a compelling need becomes obvious in the short term. In the meantime,
> > for cases where it is needed...
> > 2. libraries can already access the includes from drivers or other
> > places fairly easily, just by adding the relevant "-I" flag to the
> > CFLAGS for that lib.
> >
> > That said, since the work is already done developing this, and if it
> > doesn't hurt in terms of build time, I suppose we might as well merge
> > it in.
> >
> > So tentative ack from me, subject to testing and feedback from others.
>
> +1, I already have to make sure everything is symlinked first in my private DPDK work for other reasons. This patch would allow me to remove that special code.
>
> >
> > /Bruce
>
> Regards,
> Keith
>
Personally I do not like this approach.
A well designed architecture introduces constraints that developers
ought to follow. These constraints, when well defined, foster a cleaner
growth on top of it with better practices.
This solution is a crutch meant to alleviate other deep-rooted issues that
should be fixed anyway. It makes them disappear right now, only for
them to reappear when people will write libs and drivers with blurred
hierarchies and hard-to-determine dependencies.
These constraints should be a tool for developers to be critical of their work
and help them see whether they are making a mistake, for example by
putting implementation specific data in generic structures (as seen by
the problems at the root of this RFC: KNI, pmdinfogen dependencies).
This would have led for example eventdev, cryptodev, ethdev to leave PCI
specific data within their structures. Yes, it works. But it is not
clean, it leads to ABI instability, unsafe design practices.
This RFC is the easy way out, making it work, at the price of technical
debt.
I understand that it is a lot of work to properly clean up the
structures and that resources are scarce. Having a clear architecture and
proper structures however helps external eyes to read, understand, use,
and ultimately contribute.
This kind of move goes against the previous work that was done to
make devices, drivers and buses generic, which I think was right.
I do not feel that it is my place to nack, nor that it is constructive
to block this if others are thinking that it is useful; however I hope
to sway others to my position.
Cheers,
--
Gaëtan Rivet
6WIND
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v3 5/9] pmdinfogen: move to drivers subdirectory
@ 2017-06-20 23:36 1% ` Gaetan Rivet
0 siblings, 0 replies; 200+ results
From: Gaetan Rivet @ 2017-06-20 23:36 UTC (permalink / raw)
To: dev; +Cc: Gaetan Rivet
pmdinfogen has a dependency on the PCI bus. The latter must be built
first.
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
---
GNUmakefile | 2 +-
MAINTAINERS | 2 +-
buildtools/Makefile | 36 ----
buildtools/pmdinfogen/Makefile | 47 -----
buildtools/pmdinfogen/pmdinfogen.c | 422 -------------------------------------
buildtools/pmdinfogen/pmdinfogen.h | 125 -----------
drivers/Makefile | 4 +-
drivers/pmdinfogen/Makefile | 47 +++++
drivers/pmdinfogen/pmdinfogen.c | 422 +++++++++++++++++++++++++++++++++++++
drivers/pmdinfogen/pmdinfogen.h | 125 +++++++++++
10 files changed, 599 insertions(+), 633 deletions(-)
delete mode 100644 buildtools/Makefile
delete mode 100644 buildtools/pmdinfogen/Makefile
delete mode 100644 buildtools/pmdinfogen/pmdinfogen.c
delete mode 100644 buildtools/pmdinfogen/pmdinfogen.h
create mode 100644 drivers/pmdinfogen/Makefile
create mode 100644 drivers/pmdinfogen/pmdinfogen.c
create mode 100644 drivers/pmdinfogen/pmdinfogen.h
diff --git a/GNUmakefile b/GNUmakefile
index 45b7fbb..c292646 100644
--- a/GNUmakefile
+++ b/GNUmakefile
@@ -40,7 +40,7 @@ export RTE_SDK
# directory list
#
-ROOTDIRS-y := buildtools lib drivers app
+ROOTDIRS-y := lib drivers app
ROOTDIRS- := test
include $(RTE_SDK)/mk/rte.sdkroot.mk
diff --git a/MAINTAINERS b/MAINTAINERS
index f6095ef..c8c57cb 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -72,7 +72,7 @@ F: doc/guides/rel_notes/deprecation.rst
F: devtools/validate-abi.sh
Driver information
-F: buildtools/pmdinfogen/
+F: drivers/pmdinfogen/
F: usertools/dpdk-pmdinfo.py
F: doc/guides/tools/pmdinfo.rst
diff --git a/buildtools/Makefile b/buildtools/Makefile
deleted file mode 100644
index 35a42ff..0000000
--- a/buildtools/Makefile
+++ /dev/null
@@ -1,36 +0,0 @@
-# BSD LICENSE
-#
-# Copyright(c) 2016 Neil Horman. All rights reserved.
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in
-# the documentation and/or other materials provided with the
-# distribution.
-# * Neither the name of Intel Corporation nor the names of its
-# contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-include $(RTE_SDK)/mk/rte.vars.mk
-
-DIRS-y += pmdinfogen
-
-include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/buildtools/pmdinfogen/Makefile b/buildtools/pmdinfogen/Makefile
deleted file mode 100644
index bf07b6f..0000000
--- a/buildtools/pmdinfogen/Makefile
+++ /dev/null
@@ -1,47 +0,0 @@
-# BSD LICENSE
-#
-# Copyright(c) 2016 Neil Horman. All rights reserved.
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in
-# the documentation and/or other materials provided with the
-# distribution.
-# * Neither the name of Intel Corporation nor the names of its
-# contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-include $(RTE_SDK)/mk/rte.vars.mk
-
-#
-# library name
-#
-HOSTAPP = dpdk-pmdinfogen
-
-#
-# all sources are stored in SRCS-y
-#
-SRCS-y += pmdinfogen.c
-
-HOST_CFLAGS += $(WERROR_FLAGS) -g
-HOST_CFLAGS += -I$(RTE_OUTPUT)/include
-
-include $(RTE_SDK)/mk/rte.hostapp.mk
diff --git a/buildtools/pmdinfogen/pmdinfogen.c b/buildtools/pmdinfogen/pmdinfogen.c
deleted file mode 100644
index ba1a12e..0000000
--- a/buildtools/pmdinfogen/pmdinfogen.c
+++ /dev/null
@@ -1,422 +0,0 @@
-/* Postprocess pmd object files to export hw support
- *
- * Copyright 2016 Neil Horman <nhorman@tuxdriver.com>
- * Based in part on modpost.c from the linux kernel
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License V2, incorporated herein by reference.
- *
- */
-
-#define _GNU_SOURCE
-#include <stdio.h>
-#include <ctype.h>
-#include <string.h>
-#include <limits.h>
-#include <stdbool.h>
-#include <errno.h>
-#include <libgen.h>
-
-#include <rte_common.h>
-#include "pmdinfogen.h"
-
-#ifdef RTE_ARCH_64
-#define ADDR_SIZE 64
-#else
-#define ADDR_SIZE 32
-#endif
-
-
-static const char *sym_name(struct elf_info *elf, Elf_Sym *sym)
-{
- if (sym)
- return elf->strtab + sym->st_name;
- else
- return "(unknown)";
-}
-
-static void *grab_file(const char *filename, unsigned long *size)
-{
- struct stat st;
- void *map = MAP_FAILED;
- int fd;
-
- fd = open(filename, O_RDONLY);
- if (fd < 0)
- return NULL;
- if (fstat(fd, &st))
- goto failed;
-
- *size = st.st_size;
- map = mmap(NULL, *size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
-
-failed:
- close(fd);
- if (map == MAP_FAILED)
- return NULL;
- return map;
-}
-
-/**
- * Return a copy of the next line in a mmap'ed file.
- * spaces in the beginning of the line is trimmed away.
- * Return a pointer to a static buffer.
- **/
-static void release_file(void *file, unsigned long size)
-{
- munmap(file, size);
-}
-
-
-static void *get_sym_value(struct elf_info *info, const Elf_Sym *sym)
-{
- return RTE_PTR_ADD(info->hdr,
- info->sechdrs[sym->st_shndx].sh_offset + sym->st_value);
-}
-
-static Elf_Sym *find_sym_in_symtab(struct elf_info *info,
- const char *name, Elf_Sym *last)
-{
- Elf_Sym *idx;
- if (last)
- idx = last+1;
- else
- idx = info->symtab_start;
-
- for (; idx < info->symtab_stop; idx++) {
- const char *n = sym_name(info, idx);
- if (!strncmp(n, name, strlen(name)))
- return idx;
- }
- return NULL;
-}
-
-static int parse_elf(struct elf_info *info, const char *filename)
-{
- unsigned int i;
- Elf_Ehdr *hdr;
- Elf_Shdr *sechdrs;
- Elf_Sym *sym;
- int endian;
- unsigned int symtab_idx = ~0U, symtab_shndx_idx = ~0U;
-
- hdr = grab_file(filename, &info->size);
- if (!hdr) {
- perror(filename);
- exit(1);
- }
- info->hdr = hdr;
- if (info->size < sizeof(*hdr)) {
- /* file too small, assume this is an empty .o file */
- return 0;
- }
- /* Is this a valid ELF file? */
- if ((hdr->e_ident[EI_MAG0] != ELFMAG0) ||
- (hdr->e_ident[EI_MAG1] != ELFMAG1) ||
- (hdr->e_ident[EI_MAG2] != ELFMAG2) ||
- (hdr->e_ident[EI_MAG3] != ELFMAG3)) {
- /* Not an ELF file - silently ignore it */
- return 0;
- }
-
- if (!hdr->e_ident[EI_DATA]) {
- /* Unknown endian */
- return 0;
- }
-
- endian = hdr->e_ident[EI_DATA];
-
- /* Fix endianness in ELF header */
- hdr->e_type = TO_NATIVE(endian, 16, hdr->e_type);
- hdr->e_machine = TO_NATIVE(endian, 16, hdr->e_machine);
- hdr->e_version = TO_NATIVE(endian, 32, hdr->e_version);
- hdr->e_entry = TO_NATIVE(endian, ADDR_SIZE, hdr->e_entry);
- hdr->e_phoff = TO_NATIVE(endian, ADDR_SIZE, hdr->e_phoff);
- hdr->e_shoff = TO_NATIVE(endian, ADDR_SIZE, hdr->e_shoff);
- hdr->e_flags = TO_NATIVE(endian, 32, hdr->e_flags);
- hdr->e_ehsize = TO_NATIVE(endian, 16, hdr->e_ehsize);
- hdr->e_phentsize = TO_NATIVE(endian, 16, hdr->e_phentsize);
- hdr->e_phnum = TO_NATIVE(endian, 16, hdr->e_phnum);
- hdr->e_shentsize = TO_NATIVE(endian, 16, hdr->e_shentsize);
- hdr->e_shnum = TO_NATIVE(endian, 16, hdr->e_shnum);
- hdr->e_shstrndx = TO_NATIVE(endian, 16, hdr->e_shstrndx);
-
- sechdrs = RTE_PTR_ADD(hdr, hdr->e_shoff);
- info->sechdrs = sechdrs;
-
- /* Check if file offset is correct */
- if (hdr->e_shoff > info->size) {
- fprintf(stderr, "section header offset=%lu in file '%s' "
- "is bigger than filesize=%lu\n",
- (unsigned long)hdr->e_shoff,
- filename, info->size);
- return 0;
- }
-
- if (hdr->e_shnum == SHN_UNDEF) {
- /*
- * There are more than 64k sections,
- * read count from .sh_size.
- */
- info->num_sections = TO_NATIVE(endian, 32, sechdrs[0].sh_size);
- } else {
- info->num_sections = hdr->e_shnum;
- }
- if (hdr->e_shstrndx == SHN_XINDEX)
- info->secindex_strings =
- TO_NATIVE(endian, 32, sechdrs[0].sh_link);
- else
- info->secindex_strings = hdr->e_shstrndx;
-
- /* Fix endianness in section headers */
- for (i = 0; i < info->num_sections; i++) {
- sechdrs[i].sh_name =
- TO_NATIVE(endian, 32, sechdrs[i].sh_name);
- sechdrs[i].sh_type =
- TO_NATIVE(endian, 32, sechdrs[i].sh_type);
- sechdrs[i].sh_flags =
- TO_NATIVE(endian, 32, sechdrs[i].sh_flags);
- sechdrs[i].sh_addr =
- TO_NATIVE(endian, ADDR_SIZE, sechdrs[i].sh_addr);
- sechdrs[i].sh_offset =
- TO_NATIVE(endian, ADDR_SIZE, sechdrs[i].sh_offset);
- sechdrs[i].sh_size =
- TO_NATIVE(endian, 32, sechdrs[i].sh_size);
- sechdrs[i].sh_link =
- TO_NATIVE(endian, 32, sechdrs[i].sh_link);
- sechdrs[i].sh_info =
- TO_NATIVE(endian, 32, sechdrs[i].sh_info);
- sechdrs[i].sh_addralign =
- TO_NATIVE(endian, ADDR_SIZE, sechdrs[i].sh_addralign);
- sechdrs[i].sh_entsize =
- TO_NATIVE(endian, ADDR_SIZE, sechdrs[i].sh_entsize);
- }
- /* Find symbol table. */
- for (i = 1; i < info->num_sections; i++) {
- int nobits = sechdrs[i].sh_type == SHT_NOBITS;
-
- if (!nobits && sechdrs[i].sh_offset > info->size) {
- fprintf(stderr, "%s is truncated. "
- "sechdrs[i].sh_offset=%lu > sizeof(*hrd)=%zu\n",
- filename, (unsigned long)sechdrs[i].sh_offset,
- sizeof(*hdr));
- return 0;
- }
-
- if (sechdrs[i].sh_type == SHT_SYMTAB) {
- unsigned int sh_link_idx;
- symtab_idx = i;
- info->symtab_start = RTE_PTR_ADD(hdr,
- sechdrs[i].sh_offset);
- info->symtab_stop = RTE_PTR_ADD(hdr,
- sechdrs[i].sh_offset + sechdrs[i].sh_size);
- sh_link_idx = sechdrs[i].sh_link;
- info->strtab = RTE_PTR_ADD(hdr,
- sechdrs[sh_link_idx].sh_offset);
- }
-
- /* 32bit section no. table? ("more than 64k sections") */
- if (sechdrs[i].sh_type == SHT_SYMTAB_SHNDX) {
- symtab_shndx_idx = i;
- info->symtab_shndx_start = RTE_PTR_ADD(hdr,
- sechdrs[i].sh_offset);
- info->symtab_shndx_stop = RTE_PTR_ADD(hdr,
- sechdrs[i].sh_offset + sechdrs[i].sh_size);
- }
- }
- if (!info->symtab_start)
- fprintf(stderr, "%s has no symtab?\n", filename);
- else {
- /* Fix endianness in symbols */
- for (sym = info->symtab_start; sym < info->symtab_stop; sym++) {
- sym->st_shndx = TO_NATIVE(endian, 16, sym->st_shndx);
- sym->st_name = TO_NATIVE(endian, 32, sym->st_name);
- sym->st_value = TO_NATIVE(endian, ADDR_SIZE, sym->st_value);
- sym->st_size = TO_NATIVE(endian, ADDR_SIZE, sym->st_size);
- }
- }
-
- if (symtab_shndx_idx != ~0U) {
- Elf32_Word *p;
- if (symtab_idx != sechdrs[symtab_shndx_idx].sh_link)
- fprintf(stderr,
- "%s: SYMTAB_SHNDX has bad sh_link: %u!=%u\n",
- filename, sechdrs[symtab_shndx_idx].sh_link,
- symtab_idx);
- /* Fix endianness */
- for (p = info->symtab_shndx_start; p < info->symtab_shndx_stop;
- p++)
- *p = TO_NATIVE(endian, 32, *p);
- }
-
- return 1;
-}
-
-static void parse_elf_finish(struct elf_info *info)
-{
- struct pmd_driver *tmp, *idx = info->drivers;
- release_file(info->hdr, info->size);
- while (idx) {
- tmp = idx->next;
- free(idx);
- idx = tmp;
- }
-}
-
-struct opt_tag {
- const char *suffix;
- const char *json_id;
-};
-
-static const struct opt_tag opt_tags[] = {
- {"_param_string_export", "params"},
- {"_kmod_dep_export", "kmod"},
-};
-
-static int complete_pmd_entry(struct elf_info *info, struct pmd_driver *drv)
-{
- const char *tname;
- int i;
- char tmpsymname[128];
- Elf_Sym *tmpsym;
-
- drv->name = get_sym_value(info, drv->name_sym);
-
- for (i = 0; i < PMD_OPT_MAX; i++) {
- memset(tmpsymname, 0, 128);
- sprintf(tmpsymname, "__%s%s", drv->name, opt_tags[i].suffix);
- tmpsym = find_sym_in_symtab(info, tmpsymname, NULL);
- if (!tmpsym)
- continue;
- drv->opt_vals[i] = get_sym_value(info, tmpsym);
- }
-
- memset(tmpsymname, 0, 128);
- sprintf(tmpsymname, "__%s_pci_tbl_export", drv->name);
-
- tmpsym = find_sym_in_symtab(info, tmpsymname, NULL);
-
-
- /*
- * If this returns NULL, then this is a PMD_VDEV, because
- * it has no pci table reference
- */
- if (!tmpsym) {
- drv->pci_tbl = NULL;
- return 0;
- }
-
- tname = get_sym_value(info, tmpsym);
- tmpsym = find_sym_in_symtab(info, tname, NULL);
- if (!tmpsym)
- return -ENOENT;
-
- drv->pci_tbl = (struct rte_pci_id *)get_sym_value(info, tmpsym);
- if (!drv->pci_tbl)
- return -ENOENT;
-
- return 0;
-}
-
-static int locate_pmd_entries(struct elf_info *info)
-{
- Elf_Sym *last = NULL;
- struct pmd_driver *new;
-
- info->drivers = NULL;
-
- do {
- new = calloc(sizeof(struct pmd_driver), 1);
- new->name_sym = find_sym_in_symtab(info, "this_pmd_name", last);
- last = new->name_sym;
- if (!new->name_sym)
- free(new);
- else {
- if (complete_pmd_entry(info, new)) {
- fprintf(stderr,
- "Failed to complete pmd entry\n");
- free(new);
- } else {
- new->next = info->drivers;
- info->drivers = new;
- }
- }
- } while (last);
-
- return 0;
-}
-
-static void output_pmd_info_string(struct elf_info *info, char *outfile)
-{
- FILE *ofd;
- struct pmd_driver *drv;
- struct rte_pci_id *pci_ids;
- int idx = 0;
-
- ofd = fopen(outfile, "w+");
- if (!ofd) {
- fprintf(stderr, "Unable to open output file\n");
- return;
- }
-
- drv = info->drivers;
-
- while (drv) {
- fprintf(ofd, "const char %s_pmd_info[] __attribute__((used)) = "
- "\"PMD_INFO_STRING= {",
- drv->name);
- fprintf(ofd, "\\\"name\\\" : \\\"%s\\\", ", drv->name);
-
- for (idx = 0; idx < PMD_OPT_MAX; idx++) {
- if (drv->opt_vals[idx])
- fprintf(ofd, "\\\"%s\\\" : \\\"%s\\\", ",
- opt_tags[idx].json_id,
- drv->opt_vals[idx]);
- }
-
- pci_ids = drv->pci_tbl;
- fprintf(ofd, "\\\"pci_ids\\\" : [");
-
- while (pci_ids && pci_ids->device_id) {
- fprintf(ofd, "[%d, %d, %d, %d]",
- pci_ids->vendor_id, pci_ids->device_id,
- pci_ids->subsystem_vendor_id,
- pci_ids->subsystem_device_id);
- pci_ids++;
- if (pci_ids->device_id)
- fprintf(ofd, ",");
- else
- fprintf(ofd, " ");
- }
- fprintf(ofd, "]}\";\n");
- drv = drv->next;
- }
-
- fclose(ofd);
-}
-
-int main(int argc, char **argv)
-{
- struct elf_info info;
- int rc = 1;
-
- if (argc < 3) {
- fprintf(stderr,
- "usage: %s <object file> <c output file>\n",
- basename(argv[0]));
- exit(127);
- }
- parse_elf(&info, argv[1]);
-
- locate_pmd_entries(&info);
-
- if (info.drivers) {
- output_pmd_info_string(&info, argv[2]);
- rc = 0;
- } else {
- fprintf(stderr, "No drivers registered\n");
- }
-
- parse_elf_finish(&info);
- exit(rc);
-}
diff --git a/buildtools/pmdinfogen/pmdinfogen.h b/buildtools/pmdinfogen/pmdinfogen.h
deleted file mode 100644
index 27bab30..0000000
--- a/buildtools/pmdinfogen/pmdinfogen.h
+++ /dev/null
@@ -1,125 +0,0 @@
-
-/* Postprocess pmd object files to export hw support
- *
- * Copyright 2016 Neil Horman <nhorman@tuxdriver.com>
- * Based in part on modpost.c from the linux kernel
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License V2, incorporated herein by reference.
- *
- */
-
-#include <stdio.h>
-#include <stdlib.h>
-#include <stdarg.h>
-#include <string.h>
-#include <sys/types.h>
-#include <sys/stat.h>
-#include <sys/mman.h>
-#ifdef __linux__
-#include <endian.h>
-#else
-#include <sys/endian.h>
-#endif
-#include <fcntl.h>
-#include <unistd.h>
-#include <elf.h>
-#include <rte_config.h>
-#include <rte_pci.h>
-
-/* On BSD-alike OSes elf.h defines these according to host's word size */
-#undef ELF_ST_BIND
-#undef ELF_ST_TYPE
-#undef ELF_R_SYM
-#undef ELF_R_TYPE
-
-/*
- * Define ELF64_* to ELF_*, the latter being defined in both 32 and 64 bit
- * flavors in elf.h. This makes our code a bit more generic between arches
- * and allows us to support 32 bit code in the future should we ever want to
- */
-#ifdef RTE_ARCH_64
-#define Elf_Ehdr Elf64_Ehdr
-#define Elf_Shdr Elf64_Shdr
-#define Elf_Sym Elf64_Sym
-#define Elf_Addr Elf64_Addr
-#define Elf_Sword Elf64_Sxword
-#define Elf_Section Elf64_Half
-#define ELF_ST_BIND ELF64_ST_BIND
-#define ELF_ST_TYPE ELF64_ST_TYPE
-
-#define Elf_Rel Elf64_Rel
-#define Elf_Rela Elf64_Rela
-#define ELF_R_SYM ELF64_R_SYM
-#define ELF_R_TYPE ELF64_R_TYPE
-#else
-#define Elf_Ehdr Elf32_Ehdr
-#define Elf_Shdr Elf32_Shdr
-#define Elf_Sym Elf32_Sym
-#define Elf_Addr Elf32_Addr
-#define Elf_Sword Elf32_Sxword
-#define Elf_Section Elf32_Half
-#define ELF_ST_BIND ELF32_ST_BIND
-#define ELF_ST_TYPE ELF32_ST_TYPE
-
-#define Elf_Rel Elf32_Rel
-#define Elf_Rela Elf32_Rela
-#define ELF_R_SYM ELF32_R_SYM
-#define ELF_R_TYPE ELF32_R_TYPE
-#endif
-
-
-/*
- * Note, it seems odd that we have both a CONVERT_NATIVE and a TO_NATIVE macro
- * below. We do this because the values passed to TO_NATIVE may themselves be
- * macros and need both macros here to get expanded. Specifically its the width
- * variable we are concerned with, because it needs to get expanded prior to
- * string concatenation
- */
-#define CONVERT_NATIVE(fend, width, x) ({ \
-typeof(x) ___x; \
-if ((fend) == ELFDATA2LSB) \
- ___x = le##width##toh(x); \
-else \
- ___x = be##width##toh(x); \
- ___x; \
-})
-
-#define TO_NATIVE(fend, width, x) CONVERT_NATIVE(fend, width, x)
-
-enum opt_params {
- PMD_PARAM_STRING = 0,
- PMD_KMOD_DEP,
- PMD_OPT_MAX
-};
-
-struct pmd_driver {
- Elf_Sym *name_sym;
- const char *name;
- struct rte_pci_id *pci_tbl;
- struct pmd_driver *next;
-
- const char *opt_vals[PMD_OPT_MAX];
-};
-
-struct elf_info {
- unsigned long size;
- Elf_Ehdr *hdr;
- Elf_Shdr *sechdrs;
- Elf_Sym *symtab_start;
- Elf_Sym *symtab_stop;
- char *strtab;
-
- /* support for 32bit section numbers */
-
- unsigned int num_sections; /* max_secindex + 1 */
- unsigned int secindex_strings;
- /* if Nth symbol table entry has .st_shndx = SHN_XINDEX,
- * take shndx from symtab_shndx_start[N] instead
- */
- Elf32_Word *symtab_shndx_start;
- Elf32_Word *symtab_shndx_stop;
-
- struct pmd_driver *drivers;
-};
-
diff --git a/drivers/Makefile b/drivers/Makefile
index a04a01f..f3f9417 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -32,10 +32,12 @@
include $(RTE_SDK)/mk/rte.vars.mk
DIRS-y += bus
+DIRS-y += pmdinfogen
+DEPDIRS-pmdinfogen := bus
DIRS-y += mempool
DEPDIRS-mempool := bus
DIRS-y += net
-DEPDIRS-net := bus mempool
+DEPDIRS-net := bus pmdinfogen mempool
DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += crypto
DEPDIRS-crypto := mempool
DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += event
diff --git a/drivers/pmdinfogen/Makefile b/drivers/pmdinfogen/Makefile
new file mode 100644
index 0000000..bf07b6f
--- /dev/null
+++ b/drivers/pmdinfogen/Makefile
@@ -0,0 +1,47 @@
+# BSD LICENSE
+#
+# Copyright(c) 2016 Neil Horman. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Intel Corporation nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+HOSTAPP = dpdk-pmdinfogen
+
+#
+# all sources are stored in SRCS-y
+#
+SRCS-y += pmdinfogen.c
+
+HOST_CFLAGS += $(WERROR_FLAGS) -g
+HOST_CFLAGS += -I$(RTE_OUTPUT)/include
+
+include $(RTE_SDK)/mk/rte.hostapp.mk
diff --git a/drivers/pmdinfogen/pmdinfogen.c b/drivers/pmdinfogen/pmdinfogen.c
new file mode 100644
index 0000000..ba1a12e
--- /dev/null
+++ b/drivers/pmdinfogen/pmdinfogen.c
@@ -0,0 +1,422 @@
+/* Postprocess pmd object files to export hw support
+ *
+ * Copyright 2016 Neil Horman <nhorman@tuxdriver.com>
+ * Based in part on modpost.c from the linux kernel
+ *
+ * This software may be used and distributed according to the terms
+ * of the GNU General Public License V2, incorporated herein by reference.
+ *
+ */
+
+#define _GNU_SOURCE
+#include <stdio.h>
+#include <ctype.h>
+#include <string.h>
+#include <limits.h>
+#include <stdbool.h>
+#include <errno.h>
+#include <libgen.h>
+
+#include <rte_common.h>
+#include "pmdinfogen.h"
+
+#ifdef RTE_ARCH_64
+#define ADDR_SIZE 64
+#else
+#define ADDR_SIZE 32
+#endif
+
+
+static const char *sym_name(struct elf_info *elf, Elf_Sym *sym)
+{
+ if (sym)
+ return elf->strtab + sym->st_name;
+ else
+ return "(unknown)";
+}
+
+static void *grab_file(const char *filename, unsigned long *size)
+{
+ struct stat st;
+ void *map = MAP_FAILED;
+ int fd;
+
+ fd = open(filename, O_RDONLY);
+ if (fd < 0)
+ return NULL;
+ if (fstat(fd, &st))
+ goto failed;
+
+ *size = st.st_size;
+ map = mmap(NULL, *size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
+
+failed:
+ close(fd);
+ if (map == MAP_FAILED)
+ return NULL;
+ return map;
+}
+
+/**
+ * Return a copy of the next line in a mmap'ed file.
+ * spaces in the beginning of the line is trimmed away.
+ * Return a pointer to a static buffer.
+ **/
+static void release_file(void *file, unsigned long size)
+{
+ munmap(file, size);
+}
+
+
+static void *get_sym_value(struct elf_info *info, const Elf_Sym *sym)
+{
+ return RTE_PTR_ADD(info->hdr,
+ info->sechdrs[sym->st_shndx].sh_offset + sym->st_value);
+}
+
+static Elf_Sym *find_sym_in_symtab(struct elf_info *info,
+ const char *name, Elf_Sym *last)
+{
+ Elf_Sym *idx;
+ if (last)
+ idx = last+1;
+ else
+ idx = info->symtab_start;
+
+ for (; idx < info->symtab_stop; idx++) {
+ const char *n = sym_name(info, idx);
+ if (!strncmp(n, name, strlen(name)))
+ return idx;
+ }
+ return NULL;
+}
+
+static int parse_elf(struct elf_info *info, const char *filename)
+{
+ unsigned int i;
+ Elf_Ehdr *hdr;
+ Elf_Shdr *sechdrs;
+ Elf_Sym *sym;
+ int endian;
+ unsigned int symtab_idx = ~0U, symtab_shndx_idx = ~0U;
+
+ hdr = grab_file(filename, &info->size);
+ if (!hdr) {
+ perror(filename);
+ exit(1);
+ }
+ info->hdr = hdr;
+ if (info->size < sizeof(*hdr)) {
+ /* file too small, assume this is an empty .o file */
+ return 0;
+ }
+ /* Is this a valid ELF file? */
+ if ((hdr->e_ident[EI_MAG0] != ELFMAG0) ||
+ (hdr->e_ident[EI_MAG1] != ELFMAG1) ||
+ (hdr->e_ident[EI_MAG2] != ELFMAG2) ||
+ (hdr->e_ident[EI_MAG3] != ELFMAG3)) {
+ /* Not an ELF file - silently ignore it */
+ return 0;
+ }
+
+ if (!hdr->e_ident[EI_DATA]) {
+ /* Unknown endian */
+ return 0;
+ }
+
+ endian = hdr->e_ident[EI_DATA];
+
+ /* Fix endianness in ELF header */
+ hdr->e_type = TO_NATIVE(endian, 16, hdr->e_type);
+ hdr->e_machine = TO_NATIVE(endian, 16, hdr->e_machine);
+ hdr->e_version = TO_NATIVE(endian, 32, hdr->e_version);
+ hdr->e_entry = TO_NATIVE(endian, ADDR_SIZE, hdr->e_entry);
+ hdr->e_phoff = TO_NATIVE(endian, ADDR_SIZE, hdr->e_phoff);
+ hdr->e_shoff = TO_NATIVE(endian, ADDR_SIZE, hdr->e_shoff);
+ hdr->e_flags = TO_NATIVE(endian, 32, hdr->e_flags);
+ hdr->e_ehsize = TO_NATIVE(endian, 16, hdr->e_ehsize);
+ hdr->e_phentsize = TO_NATIVE(endian, 16, hdr->e_phentsize);
+ hdr->e_phnum = TO_NATIVE(endian, 16, hdr->e_phnum);
+ hdr->e_shentsize = TO_NATIVE(endian, 16, hdr->e_shentsize);
+ hdr->e_shnum = TO_NATIVE(endian, 16, hdr->e_shnum);
+ hdr->e_shstrndx = TO_NATIVE(endian, 16, hdr->e_shstrndx);
+
+ sechdrs = RTE_PTR_ADD(hdr, hdr->e_shoff);
+ info->sechdrs = sechdrs;
+
+ /* Check if file offset is correct */
+ if (hdr->e_shoff > info->size) {
+ fprintf(stderr, "section header offset=%lu in file '%s' "
+ "is bigger than filesize=%lu\n",
+ (unsigned long)hdr->e_shoff,
+ filename, info->size);
+ return 0;
+ }
+
+ if (hdr->e_shnum == SHN_UNDEF) {
+ /*
+ * There are more than 64k sections,
+ * read count from .sh_size.
+ */
+ info->num_sections = TO_NATIVE(endian, 32, sechdrs[0].sh_size);
+ } else {
+ info->num_sections = hdr->e_shnum;
+ }
+ if (hdr->e_shstrndx == SHN_XINDEX)
+ info->secindex_strings =
+ TO_NATIVE(endian, 32, sechdrs[0].sh_link);
+ else
+ info->secindex_strings = hdr->e_shstrndx;
+
+ /* Fix endianness in section headers */
+ for (i = 0; i < info->num_sections; i++) {
+ sechdrs[i].sh_name =
+ TO_NATIVE(endian, 32, sechdrs[i].sh_name);
+ sechdrs[i].sh_type =
+ TO_NATIVE(endian, 32, sechdrs[i].sh_type);
+ sechdrs[i].sh_flags =
+ TO_NATIVE(endian, 32, sechdrs[i].sh_flags);
+ sechdrs[i].sh_addr =
+ TO_NATIVE(endian, ADDR_SIZE, sechdrs[i].sh_addr);
+ sechdrs[i].sh_offset =
+ TO_NATIVE(endian, ADDR_SIZE, sechdrs[i].sh_offset);
+ sechdrs[i].sh_size =
+ TO_NATIVE(endian, 32, sechdrs[i].sh_size);
+ sechdrs[i].sh_link =
+ TO_NATIVE(endian, 32, sechdrs[i].sh_link);
+ sechdrs[i].sh_info =
+ TO_NATIVE(endian, 32, sechdrs[i].sh_info);
+ sechdrs[i].sh_addralign =
+ TO_NATIVE(endian, ADDR_SIZE, sechdrs[i].sh_addralign);
+ sechdrs[i].sh_entsize =
+ TO_NATIVE(endian, ADDR_SIZE, sechdrs[i].sh_entsize);
+ }
+ /* Find symbol table. */
+ for (i = 1; i < info->num_sections; i++) {
+ int nobits = sechdrs[i].sh_type == SHT_NOBITS;
+
+ if (!nobits && sechdrs[i].sh_offset > info->size) {
+ fprintf(stderr, "%s is truncated. "
+ "sechdrs[i].sh_offset=%lu > sizeof(*hrd)=%zu\n",
+ filename, (unsigned long)sechdrs[i].sh_offset,
+ sizeof(*hdr));
+ return 0;
+ }
+
+ if (sechdrs[i].sh_type == SHT_SYMTAB) {
+ unsigned int sh_link_idx;
+ symtab_idx = i;
+ info->symtab_start = RTE_PTR_ADD(hdr,
+ sechdrs[i].sh_offset);
+ info->symtab_stop = RTE_PTR_ADD(hdr,
+ sechdrs[i].sh_offset + sechdrs[i].sh_size);
+ sh_link_idx = sechdrs[i].sh_link;
+ info->strtab = RTE_PTR_ADD(hdr,
+ sechdrs[sh_link_idx].sh_offset);
+ }
+
+ /* 32bit section no. table? ("more than 64k sections") */
+ if (sechdrs[i].sh_type == SHT_SYMTAB_SHNDX) {
+ symtab_shndx_idx = i;
+ info->symtab_shndx_start = RTE_PTR_ADD(hdr,
+ sechdrs[i].sh_offset);
+ info->symtab_shndx_stop = RTE_PTR_ADD(hdr,
+ sechdrs[i].sh_offset + sechdrs[i].sh_size);
+ }
+ }
+ if (!info->symtab_start)
+ fprintf(stderr, "%s has no symtab?\n", filename);
+ else {
+ /* Fix endianness in symbols */
+ for (sym = info->symtab_start; sym < info->symtab_stop; sym++) {
+ sym->st_shndx = TO_NATIVE(endian, 16, sym->st_shndx);
+ sym->st_name = TO_NATIVE(endian, 32, sym->st_name);
+ sym->st_value = TO_NATIVE(endian, ADDR_SIZE, sym->st_value);
+ sym->st_size = TO_NATIVE(endian, ADDR_SIZE, sym->st_size);
+ }
+ }
+
+ if (symtab_shndx_idx != ~0U) {
+ Elf32_Word *p;
+ if (symtab_idx != sechdrs[symtab_shndx_idx].sh_link)
+ fprintf(stderr,
+ "%s: SYMTAB_SHNDX has bad sh_link: %u!=%u\n",
+ filename, sechdrs[symtab_shndx_idx].sh_link,
+ symtab_idx);
+ /* Fix endianness */
+ for (p = info->symtab_shndx_start; p < info->symtab_shndx_stop;
+ p++)
+ *p = TO_NATIVE(endian, 32, *p);
+ }
+
+ return 1;
+}
+
+static void parse_elf_finish(struct elf_info *info)
+{
+ struct pmd_driver *tmp, *idx = info->drivers;
+ release_file(info->hdr, info->size);
+ while (idx) {
+ tmp = idx->next;
+ free(idx);
+ idx = tmp;
+ }
+}
+
+struct opt_tag {
+ const char *suffix;
+ const char *json_id;
+};
+
+static const struct opt_tag opt_tags[] = {
+ {"_param_string_export", "params"},
+ {"_kmod_dep_export", "kmod"},
+};
+
+static int complete_pmd_entry(struct elf_info *info, struct pmd_driver *drv)
+{
+ const char *tname;
+ int i;
+ char tmpsymname[128];
+ Elf_Sym *tmpsym;
+
+ drv->name = get_sym_value(info, drv->name_sym);
+
+ for (i = 0; i < PMD_OPT_MAX; i++) {
+ memset(tmpsymname, 0, 128);
+ sprintf(tmpsymname, "__%s%s", drv->name, opt_tags[i].suffix);
+ tmpsym = find_sym_in_symtab(info, tmpsymname, NULL);
+ if (!tmpsym)
+ continue;
+ drv->opt_vals[i] = get_sym_value(info, tmpsym);
+ }
+
+ memset(tmpsymname, 0, 128);
+ sprintf(tmpsymname, "__%s_pci_tbl_export", drv->name);
+
+ tmpsym = find_sym_in_symtab(info, tmpsymname, NULL);
+
+
+ /*
+ * If this returns NULL, then this is a PMD_VDEV, because
+ * it has no pci table reference
+ */
+ if (!tmpsym) {
+ drv->pci_tbl = NULL;
+ return 0;
+ }
+
+ tname = get_sym_value(info, tmpsym);
+ tmpsym = find_sym_in_symtab(info, tname, NULL);
+ if (!tmpsym)
+ return -ENOENT;
+
+ drv->pci_tbl = (struct rte_pci_id *)get_sym_value(info, tmpsym);
+ if (!drv->pci_tbl)
+ return -ENOENT;
+
+ return 0;
+}
+
+static int locate_pmd_entries(struct elf_info *info)
+{
+ Elf_Sym *last = NULL;
+ struct pmd_driver *new;
+
+ info->drivers = NULL;
+
+ do {
+ new = calloc(sizeof(struct pmd_driver), 1);
+ new->name_sym = find_sym_in_symtab(info, "this_pmd_name", last);
+ last = new->name_sym;
+ if (!new->name_sym)
+ free(new);
+ else {
+ if (complete_pmd_entry(info, new)) {
+ fprintf(stderr,
+ "Failed to complete pmd entry\n");
+ free(new);
+ } else {
+ new->next = info->drivers;
+ info->drivers = new;
+ }
+ }
+ } while (last);
+
+ return 0;
+}
+
+static void output_pmd_info_string(struct elf_info *info, char *outfile)
+{
+ FILE *ofd;
+ struct pmd_driver *drv;
+ struct rte_pci_id *pci_ids;
+ int idx = 0;
+
+ ofd = fopen(outfile, "w+");
+ if (!ofd) {
+ fprintf(stderr, "Unable to open output file\n");
+ return;
+ }
+
+ drv = info->drivers;
+
+ while (drv) {
+ fprintf(ofd, "const char %s_pmd_info[] __attribute__((used)) = "
+ "\"PMD_INFO_STRING= {",
+ drv->name);
+ fprintf(ofd, "\\\"name\\\" : \\\"%s\\\", ", drv->name);
+
+ for (idx = 0; idx < PMD_OPT_MAX; idx++) {
+ if (drv->opt_vals[idx])
+ fprintf(ofd, "\\\"%s\\\" : \\\"%s\\\", ",
+ opt_tags[idx].json_id,
+ drv->opt_vals[idx]);
+ }
+
+ pci_ids = drv->pci_tbl;
+ fprintf(ofd, "\\\"pci_ids\\\" : [");
+
+ while (pci_ids && pci_ids->device_id) {
+ fprintf(ofd, "[%d, %d, %d, %d]",
+ pci_ids->vendor_id, pci_ids->device_id,
+ pci_ids->subsystem_vendor_id,
+ pci_ids->subsystem_device_id);
+ pci_ids++;
+ if (pci_ids->device_id)
+ fprintf(ofd, ",");
+ else
+ fprintf(ofd, " ");
+ }
+ fprintf(ofd, "]}\";\n");
+ drv = drv->next;
+ }
+
+ fclose(ofd);
+}
+
+int main(int argc, char **argv)
+{
+ struct elf_info info;
+ int rc = 1;
+
+ if (argc < 3) {
+ fprintf(stderr,
+ "usage: %s <object file> <c output file>\n",
+ basename(argv[0]));
+ exit(127);
+ }
+ parse_elf(&info, argv[1]);
+
+ locate_pmd_entries(&info);
+
+ if (info.drivers) {
+ output_pmd_info_string(&info, argv[2]);
+ rc = 0;
+ } else {
+ fprintf(stderr, "No drivers registered\n");
+ }
+
+ parse_elf_finish(&info);
+ exit(rc);
+}
diff --git a/drivers/pmdinfogen/pmdinfogen.h b/drivers/pmdinfogen/pmdinfogen.h
new file mode 100644
index 0000000..27bab30
--- /dev/null
+++ b/drivers/pmdinfogen/pmdinfogen.h
@@ -0,0 +1,125 @@
+
+/* Postprocess pmd object files to export hw support
+ *
+ * Copyright 2016 Neil Horman <nhorman@tuxdriver.com>
+ * Based in part on modpost.c from the linux kernel
+ *
+ * This software may be used and distributed according to the terms
+ * of the GNU General Public License V2, incorporated herein by reference.
+ *
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdarg.h>
+#include <string.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/mman.h>
+#ifdef __linux__
+#include <endian.h>
+#else
+#include <sys/endian.h>
+#endif
+#include <fcntl.h>
+#include <unistd.h>
+#include <elf.h>
+#include <rte_config.h>
+#include <rte_pci.h>
+
+/* On BSD-alike OSes elf.h defines these according to host's word size */
+#undef ELF_ST_BIND
+#undef ELF_ST_TYPE
+#undef ELF_R_SYM
+#undef ELF_R_TYPE
+
+/*
+ * Define ELF64_* to ELF_*, the latter being defined in both 32 and 64 bit
+ * flavors in elf.h. This makes our code a bit more generic between arches
+ * and allows us to support 32 bit code in the future should we ever want to
+ */
+#ifdef RTE_ARCH_64
+#define Elf_Ehdr Elf64_Ehdr
+#define Elf_Shdr Elf64_Shdr
+#define Elf_Sym Elf64_Sym
+#define Elf_Addr Elf64_Addr
+#define Elf_Sword Elf64_Sxword
+#define Elf_Section Elf64_Half
+#define ELF_ST_BIND ELF64_ST_BIND
+#define ELF_ST_TYPE ELF64_ST_TYPE
+
+#define Elf_Rel Elf64_Rel
+#define Elf_Rela Elf64_Rela
+#define ELF_R_SYM ELF64_R_SYM
+#define ELF_R_TYPE ELF64_R_TYPE
+#else
+#define Elf_Ehdr Elf32_Ehdr
+#define Elf_Shdr Elf32_Shdr
+#define Elf_Sym Elf32_Sym
+#define Elf_Addr Elf32_Addr
+#define Elf_Sword Elf32_Sxword
+#define Elf_Section Elf32_Half
+#define ELF_ST_BIND ELF32_ST_BIND
+#define ELF_ST_TYPE ELF32_ST_TYPE
+
+#define Elf_Rel Elf32_Rel
+#define Elf_Rela Elf32_Rela
+#define ELF_R_SYM ELF32_R_SYM
+#define ELF_R_TYPE ELF32_R_TYPE
+#endif
+
+
+/*
+ * Note, it seems odd that we have both a CONVERT_NATIVE and a TO_NATIVE macro
+ * below. We do this because the values passed to TO_NATIVE may themselves be
+ * macros and need both macros here to get expanded. Specifically its the width
+ * variable we are concerned with, because it needs to get expanded prior to
+ * string concatenation
+ */
+#define CONVERT_NATIVE(fend, width, x) ({ \
+typeof(x) ___x; \
+if ((fend) == ELFDATA2LSB) \
+ ___x = le##width##toh(x); \
+else \
+ ___x = be##width##toh(x); \
+ ___x; \
+})
+
+#define TO_NATIVE(fend, width, x) CONVERT_NATIVE(fend, width, x)
+
+enum opt_params {
+ PMD_PARAM_STRING = 0,
+ PMD_KMOD_DEP,
+ PMD_OPT_MAX
+};
+
+struct pmd_driver {
+ Elf_Sym *name_sym;
+ const char *name;
+ struct rte_pci_id *pci_tbl;
+ struct pmd_driver *next;
+
+ const char *opt_vals[PMD_OPT_MAX];
+};
+
+struct elf_info {
+ unsigned long size;
+ Elf_Ehdr *hdr;
+ Elf_Shdr *sechdrs;
+ Elf_Sym *symtab_start;
+ Elf_Sym *symtab_stop;
+ char *strtab;
+
+ /* support for 32bit section numbers */
+
+ unsigned int num_sections; /* max_secindex + 1 */
+ unsigned int secindex_strings;
+ /* if Nth symbol table entry has .st_shndx = SHN_XINDEX,
+ * take shndx from symtab_shndx_start[N] instead
+ */
+ Elf32_Word *symtab_shndx_start;
+ Elf32_Word *symtab_shndx_stop;
+
+ struct pmd_driver *drivers;
+};
+
--
2.1.4
^ permalink raw reply [relevance 1%]
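For reference, dpdk-pmdinfogen emits one C string constant per driver found in the object file; the string is compiled into the PMD and later recovered from the binary by usertools/dpdk-pmdinfo.py. A hypothetical result for a driver named "net_foo", following the escaping done in output_pmd_info_string() above (driver name, parameter string and PCI IDs are illustrative only):

/* Sketch of generated output; 32902 is vendor ID 0x8086 in decimal. */
const char net_foo_pmd_info[] __attribute__((used)) =
	"PMD_INFO_STRING= {"
	"\"name\" : \"net_foo\", "
	"\"params\" : \"iface=<ifc>\", "
	"\"pci_ids\" : [[32902, 5376, 65535, 65535] ]}";

A vdev-only PMD (drv->pci_tbl == NULL) simply gets an empty "pci_ids" array.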
* [dpdk-dev] [PATCH v2 3/3] ethdev: tidy up endianness handling in flow API
@ 2017-06-15 15:48 3% ` Adrien Mazarguil
0 siblings, 0 replies; 200+ results
From: Adrien Mazarguil @ 2017-06-15 15:48 UTC (permalink / raw)
To: dev; +Cc: Thomas Monjalon
The flow API defines several structures whose fields must be specified in
network order. This commit documents them using explicit type names and
related endianness conversion macros.
No ABI change.
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
---
lib/librte_ether/rte_flow.h | 52 ++++++++++++++++++----------------------
1 file changed, 23 insertions(+), 29 deletions(-)
diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
index 34a5876..057f7e7 100644
--- a/lib/librte_ether/rte_flow.h
+++ b/lib/librte_ether/rte_flow.h
@@ -429,7 +429,7 @@ static const struct rte_flow_item_raw rte_flow_item_raw_mask = {
struct rte_flow_item_eth {
struct ether_addr dst; /**< Destination MAC. */
struct ether_addr src; /**< Source MAC. */
- uint16_t type; /**< EtherType. */
+ rte_be16_t type; /**< EtherType. */
};
/** Default mask for RTE_FLOW_ITEM_TYPE_ETH. */
@@ -437,7 +437,7 @@ struct rte_flow_item_eth {
static const struct rte_flow_item_eth rte_flow_item_eth_mask = {
.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
- .type = 0x0000,
+ .type = RTE_BE16(0x0000),
};
#endif
@@ -450,15 +450,15 @@ static const struct rte_flow_item_eth rte_flow_item_eth_mask = {
* RTE_FLOW_ITEM_TYPE_VLAN.
*/
struct rte_flow_item_vlan {
- uint16_t tpid; /**< Tag protocol identifier. */
- uint16_t tci; /**< Tag control information. */
+ rte_be16_t tpid; /**< Tag protocol identifier. */
+ rte_be16_t tci; /**< Tag control information. */
};
/** Default mask for RTE_FLOW_ITEM_TYPE_VLAN. */
#ifndef __cplusplus
static const struct rte_flow_item_vlan rte_flow_item_vlan_mask = {
- .tpid = 0x0000,
- .tci = 0xffff,
+ .tpid = RTE_BE16(0x0000),
+ .tci = RTE_BE16(0xffff),
};
#endif
@@ -477,8 +477,8 @@ struct rte_flow_item_ipv4 {
#ifndef __cplusplus
static const struct rte_flow_item_ipv4 rte_flow_item_ipv4_mask = {
.hdr = {
- .src_addr = 0xffffffff,
- .dst_addr = 0xffffffff,
+ .src_addr = RTE_BE32(0xffffffff),
+ .dst_addr = RTE_BE32(0xffffffff),
},
};
#endif
@@ -540,8 +540,8 @@ struct rte_flow_item_udp {
#ifndef __cplusplus
static const struct rte_flow_item_udp rte_flow_item_udp_mask = {
.hdr = {
- .src_port = 0xffff,
- .dst_port = 0xffff,
+ .src_port = RTE_BE16(0xffff),
+ .dst_port = RTE_BE16(0xffff),
},
};
#endif
@@ -559,8 +559,8 @@ struct rte_flow_item_tcp {
#ifndef __cplusplus
static const struct rte_flow_item_tcp rte_flow_item_tcp_mask = {
.hdr = {
- .src_port = 0xffff,
- .dst_port = 0xffff,
+ .src_port = RTE_BE16(0xffff),
+ .dst_port = RTE_BE16(0xffff),
},
};
#endif
@@ -578,8 +578,8 @@ struct rte_flow_item_sctp {
#ifndef __cplusplus
static const struct rte_flow_item_sctp rte_flow_item_sctp_mask = {
.hdr = {
- .src_port = 0xffff,
- .dst_port = 0xffff,
+ .src_port = RTE_BE16(0xffff),
+ .dst_port = RTE_BE16(0xffff),
},
};
#endif
@@ -609,14 +609,14 @@ static const struct rte_flow_item_vxlan rte_flow_item_vxlan_mask = {
* Matches a E-tag header.
*/
struct rte_flow_item_e_tag {
- uint16_t tpid; /**< Tag protocol identifier (0x893F). */
+ rte_be16_t tpid; /**< Tag protocol identifier (0x893F). */
/**
* E-Tag control information (E-TCI).
* E-PCP (3b), E-DEI (1b), ingress E-CID base (12b).
*/
- uint16_t epcp_edei_in_ecid_b;
+ rte_be16_t epcp_edei_in_ecid_b;
/** Reserved (2b), GRP (2b), E-CID base (12b). */
- uint16_t rsvd_grp_ecid_b;
+ rte_be16_t rsvd_grp_ecid_b;
uint8_t in_ecid_e; /**< Ingress E-CID ext. */
uint8_t ecid_e; /**< E-CID ext. */
};
@@ -624,13 +624,7 @@ struct rte_flow_item_e_tag {
/** Default mask for RTE_FLOW_ITEM_TYPE_E_TAG. */
#ifndef __cplusplus
static const struct rte_flow_item_e_tag rte_flow_item_e_tag_mask = {
-#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
- .rsvd_grp_ecid_b = 0x3fff,
-#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
- .rsvd_grp_ecid_b = 0xff3f,
-#else
-#error Unsupported endianness.
-#endif
+ .rsvd_grp_ecid_b = RTE_BE16(0x3fff),
};
#endif
@@ -646,8 +640,8 @@ struct rte_flow_item_nvgre {
*
* c_k_s_rsvd0_ver must have value 0x2000 according to RFC 7637.
*/
- uint16_t c_k_s_rsvd0_ver;
- uint16_t protocol; /**< Protocol type (0x6558). */
+ rte_be16_t c_k_s_rsvd0_ver;
+ rte_be16_t protocol; /**< Protocol type (0x6558). */
uint8_t tni[3]; /**< Virtual subnet ID. */
uint8_t flow_id; /**< Flow ID. */
};
@@ -689,14 +683,14 @@ struct rte_flow_item_gre {
* Checksum (1b), reserved 0 (12b), version (3b).
* Refer to RFC 2784.
*/
- uint16_t c_rsvd0_ver;
- uint16_t protocol; /**< Protocol type. */
+ rte_be16_t c_rsvd0_ver;
+ rte_be16_t protocol; /**< Protocol type. */
};
/** Default mask for RTE_FLOW_ITEM_TYPE_GRE. */
#ifndef __cplusplus
static const struct rte_flow_item_gre rte_flow_item_gre_mask = {
- .protocol = 0xffff,
+ .protocol = RTE_BE16(0xffff),
};
#endif
--
2.1.4
^ permalink raw reply [relevance 3%]
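As a usage note for the typed fields introduced above, here is a minimal, hypothetical sketch: compile-time constants (such as the default masks) can use RTE_BE16()/RTE_BE32(), while values computed at run time go through rte_cpu_to_be_16() and friends.

#include <rte_byteorder.h>
#include <rte_flow.h>

/* Compile-time constant in network order, as in the masks above. */
static const struct rte_flow_item_udp udp_mask = {
	.hdr = {
		.src_port = RTE_BE16(0xffff),
		.dst_port = RTE_BE16(0xffff),
	},
};

/* Run-time value: convert from host to big-endian (network) order. */
static void
set_udp_dst(struct rte_flow_item_udp *spec, uint16_t dport)
{
	spec->hdr.dst_port = rte_cpu_to_be_16(dport);
}

Since rte_be16_t and rte_be32_t are plain aliases of uint16_t/uint32_t used for documentation purposes, the memory layout is unchanged, which is why the commit can state "No ABI change."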
* Re: [dpdk-dev] [PATCH v2 08/12] kni: disabled by default
2017-06-15 13:09 0% ` Ferruh Yigit
@ 2017-06-15 14:48 3% ` Gaëtan Rivet
0 siblings, 0 replies; 200+ results
From: Gaëtan Rivet @ 2017-06-15 14:48 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev
On Thu, Jun 15, 2017 at 02:09:41PM +0100, Ferruh Yigit wrote:
> On 6/9/2017 10:06 AM, Gaëtan Rivet wrote:
> > Hi Ferruh,
> >
> > On Fri, Jun 09, 2017 at 09:56:14AM +0100, Ferruh Yigit wrote:
> >> On 6/8/2017 12:59 AM, Gaetan Rivet wrote:
> >>> Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
> >>> ---
> >>> config/common_linuxapp | 2 +-
> >>> 1 file changed, 1 insertion(+), 1 deletion(-)
> >>>
> >>> diff --git a/config/common_linuxapp b/config/common_linuxapp
> >>> index b3cf41b..cc85cc6 100644
> >>> --- a/config/common_linuxapp
> >>> +++ b/config/common_linuxapp
> >>> @@ -38,7 +38,7 @@ CONFIG_RTE_EXEC_ENV_LINUXAPP=y
> >>> CONFIG_RTE_EAL_IGB_UIO=y
> >>> CONFIG_RTE_EAL_VFIO=y
> >>> CONFIG_RTE_KNI_KMOD=y
> >>> -CONFIG_RTE_LIBRTE_KNI=y
> >>> +CONFIG_RTE_LIBRTE_KNI=n
> >>> CONFIG_RTE_LIBRTE_PMD_KNI=y
> >>> CONFIG_RTE_LIBRTE_VHOST=y
> >>> CONFIG_RTE_LIBRTE_PMD_VHOST=y
> >>>
> >>
> >> Hi Gaetan,
> >>
> >> We shouldn't just disable components that don't compile.
> >>
> >
> > Ah, sure :) . This patch is not meant to be integrated as is, but only
> > as a convenient way for testers to apply the patchset and verify the
> > compilation, setting KNI aside.
> >
> > Eventdev and cryptodev fixed this dependency. I was thinking about
> > looking into it for KNI and PDUMP but I don't have the time right now,
> > and I'm not sure I will have until the end of June.
>
> Moved from another mail thread
> (http://dpdk.org/ml/archives/dev/2017-June/067936.html)
>
> >> KNI uses / depends on pci; I am not sure what to fix here.
> >>
> >> The problem with enabling KNI is a build dependency problem, right?
> >>
> >> I guess the problem will be fixed if we can build in the following order:
> >> - lib/eal
> >> - drivers/bus
> >> - lib
> >> - drivers
> >>
> >> This was the case when bus drivers were compiled within eal. What do you think
> >> about this build order?
> >>
> >
> > Yes, that build order would fix the issue.
> > However, IMO this is not the proper way to proceed.
> > It obscures the architecture, the distinction between DPDK abstractions
> > and their implementations.
> >
> > Looking quickly into this dependency, it seems that the PCI info is only
> > used during allocation, and only to register PCI information within
> > device infos. They do not seem to be used afterward at the library level,
> > except to print some device description upon device start.
> >
> > They can be completely removed from KNI (both from the lib and the
> > driver), without breaking the compilation.
> > This however changes the API of rte_kni_alloc() and the ABI of
> > rte_kni_conf.
>
> PCI info is not only for printing, it is required for ethtool support.
>
I see. Sorry, I never really looked into it before.
> The pci info is sent to the KNI kernel module, which uses this information
> to associate the kernel driver with the DPDK interface. Basically, this is
> required for the control-path support of KNI.
>
> So I believe we can't just remove this.
>
Ok, those infos should stay reachable. Do they have to stay in the same
place, however?
> > Ideally KNI interfaces should be able to use any rte_device, not only
> > PCI. But if it is forced to use only PCI devices, then pointing to an
> > rte_pci_device seems a better way to proceed, as it has all those infos
> > readily available. It would allow the PCI device to grow and evolve
> > without breaking the KNI lib.
>
> Ideally it is a good idea to make DPDK libraries bus agnostic.
>
> But for KNI, it is not just lib/librte_kni; it has a kernel module
> counterpart that needs to know the bus information, and in this sense KNI
> is different.
>
> Even if we assume it is possible to make KNI independent of the bus, this
> effort is not very useful because we don't want to continue KNI ethtool
> support as it is (by having Linux NIC drivers in the kernel module), so
> there won't be any other NIC that benefits from the update.
>
> An option would be to replace KNI ethtool support with KCP, but we have
> been struggling to upstream KCP for a while, and that is another story.
>
> As for KNI specifically, we can say it is a library that depends on a
> specific bus. I believe the build system should be able to support
> components that depend on a specific bus.
>
Would it be feasible to have the same kind of bus-specific interface
that was added for ethdev, cryptodev, and eventdev? Having an rte_pci_device
pointer within dev_infos for KNI itself, but setting it at the
drivers/net/kni level?
The problem is that it breaks KNI API / ABI. But I believe it's possible
this way to reach feature parity while leaving the build order alone.
There may be a compromise, where for this release the build order is
modified only for KNI and fixed in the next, thus respecting the
API change rules in place.
Before that call could be made though, we should first determine whether KNI
is fixable and a proper solution can be reached without impacting the
build order in the long term. Once we have this plan, we can decide whether
the priority is the KNI API and its users or the DPDK arch integrity.
--
Gaëtan Rivet
6WIND
^ permalink raw reply [relevance 3%]
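For context, the dependency discussed in this thread sits in the allocation path: the rte_kni_conf structure of this era embeds the PCI address and ID that the KNI kernel module uses to associate a kernel driver with the DPDK interface. A rough sketch of the call site (field names follow rte_kni.h of the time; port_id, pci_dev, pktmbuf_pool, ops and MAX_PACKET_SZ are illustrative):

/* Sketch only: fill in the PCI fields the KNI kernel module relies on. */
struct rte_kni_conf conf;

memset(&conf, 0, sizeof(conf));
snprintf(conf.name, RTE_KNI_NAMESIZE, "vEth%u", port_id);
conf.group_id = port_id;
conf.mbuf_size = MAX_PACKET_SZ;
conf.addr = pci_dev->addr;  /* struct rte_pci_addr: domain/bus/devid/function */
conf.id = pci_dev->id;      /* struct rte_pci_id: vendor/device IDs */

kni = rte_kni_alloc(pktmbuf_pool, &conf, &ops);

Removing conf.addr and conf.id, or replacing them with a bus-level reference, is the rte_kni_alloc()/rte_kni_conf API/ABI change referred to above.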
* Re: [dpdk-dev] [PATCH v2 08/12] kni: disabled by default
@ 2017-06-15 13:09 0% ` Ferruh Yigit
2017-06-15 14:48 3% ` Gaëtan Rivet
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2017-06-15 13:09 UTC (permalink / raw)
To: Gaëtan Rivet; +Cc: dev
On 6/9/2017 10:06 AM, Gaëtan Rivet wrote:
> Hi Ferruh,
>
> On Fri, Jun 09, 2017 at 09:56:14AM +0100, Ferruh Yigit wrote:
>> On 6/8/2017 12:59 AM, Gaetan Rivet wrote:
>>> Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
>>> ---
>>> config/common_linuxapp | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/config/common_linuxapp b/config/common_linuxapp
>>> index b3cf41b..cc85cc6 100644
>>> --- a/config/common_linuxapp
>>> +++ b/config/common_linuxapp
>>> @@ -38,7 +38,7 @@ CONFIG_RTE_EXEC_ENV_LINUXAPP=y
>>> CONFIG_RTE_EAL_IGB_UIO=y
>>> CONFIG_RTE_EAL_VFIO=y
>>> CONFIG_RTE_KNI_KMOD=y
>>> -CONFIG_RTE_LIBRTE_KNI=y
>>> +CONFIG_RTE_LIBRTE_KNI=n
>>> CONFIG_RTE_LIBRTE_PMD_KNI=y
>>> CONFIG_RTE_LIBRTE_VHOST=y
>>> CONFIG_RTE_LIBRTE_PMD_VHOST=y
>>>
>>
>> Hi Gaetan,
>>
>> We shouldn't just disable components that don't compile.
>>
>
> Ah, sure :) . This patch is not meant to be integrated as is, but only
> as a convenient way for testers to apply the patchset and verify the
> compilation, setting KNI aside.
>
> Eventdev and cryptodev fixed this dependency. I was thinking about
> looking into it for KNI and PDUMP but I don't have the time right now,
> and I'm not sure I will have until the end of June.
Moved from another mail thread
(http://dpdk.org/ml/archives/dev/2017-June/067936.html)
>> KNI uses / depends on pci; I am not sure what to fix here.
>>
>> The problem with enabling KNI is a build dependency problem, right?
>>
>> I guess the problem will be fixed if we can build in the following order:
>> - lib/eal
>> - drivers/bus
>> - lib
>> - drivers
>>
>> This was the case when bus drivers were compiled within eal. What do you think
>> about this build order?
>>
>
> Yes, that build order would fix the issue.
> However, IMO this is not the proper way to proceed.
> It obscures the architecture, the distinction between DPDK abstractions
> and their implementations.
>
> Looking quickly into this dependency, it seems that the PCI info is only
> used during allocation, and only to register PCI information within
> device infos. They do not seem to be used afterward at the library level,
> except to print some device description upon device start.
>
> They can be completely removed from KNI (both from the lib and the
> driver), without breaking the compilation.
> This however changes the API of rte_kni_alloc() and the ABI of
> rte_kni_conf.
PCI info is not only for printing, it is required for ethtool support.
The pci info is sent to the KNI kernel module, which uses this information
to associate the kernel driver with the DPDK interface. Basically, this is
required for the control-path support of KNI.
So I believe we can't just remove this.
>
> But it seems better than changing the build order and opening a can of
> all kinds of worms, allowing a few libraries to skirt around their duty to
> remain generic and independent from abstraction implementations.
I see your point.
>
> Ideally KNI interfaces should be able to use any rte_device, not only
> PCI. But if it is forced to use only PCI devices, then pointing to an
> rte_pci_device seems a better way to proceed, as it has all those infos
> readily available. It would allow the PCI device to grow and evolve
> without breaking the KNI lib.
Ideally it is a good idea to make DPDK libraries bus agnostic.
But for KNI, it is not just lib/librte_kni; it has a kernel module
counterpart that needs to know the bus information, and in this sense KNI
is different.
Even if we assume it is possible to make KNI independent of the bus, this
effort is not very useful because we don't want to continue KNI ethtool
support as it is (by having Linux NIC drivers in the kernel module), so
there won't be any other NIC that benefits from the update.
An option would be to replace KNI ethtool support with KCP, but we have
been struggling to upstream KCP for a while, and that is another story.
As for KNI specifically, we can say it is a library that depends on a
specific bus. I believe the build system should be able to support
components that depend on a specific bus.
>
> Anyway, I think there are several possible solutions to this, before
> resorting to modifying the build order.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2 07/12] pdump: disabled by default
2017-06-14 23:01 3% ` Gaëtan Rivet
@ 2017-06-15 13:07 0% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2017-06-15 13:07 UTC (permalink / raw)
To: Gaëtan Rivet; +Cc: Pattan, Reshma, dev
On 6/15/2017 12:01 AM, Gaëtan Rivet wrote:
> Hi Ferruh,
>
> On Tue, Jun 13, 2017 at 06:15:45PM +0100, Ferruh Yigit wrote:
>> On 6/11/2017 8:42 PM, Gaëtan Rivet wrote:
>>> Hi Reshma,
>>>
>>> On Fri, Jun 09, 2017 at 02:24:58PM +0000, Pattan, Reshma wrote:
>>>> Hi,
>>>>
>>>>> -----Original Message-----
>>>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Gaetan Rivet
>>>>> Sent: Thursday, June 8, 2017 12:59 AM
>>>>> To: dev@dpdk.org
>>>>> Cc: Gaetan Rivet <gaetan.rivet@6wind.com>
>>>>> Subject: [dpdk-dev] [PATCH v2 07/12] pdump: disabled by default
>>>>>
>>>>> Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
>>>>> ---
>>>>> config/common_base | 2 +-
>>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/config/common_base b/config/common_base index
>>>>> cade611..8ec5e4e 100644
>>>>> --- a/config/common_base
>>>>> +++ b/config/common_base
>>>>> @@ -700,7 +700,7 @@ CONFIG_RTE_KNI_PREEMPT_DEFAULT=y # # Compile
>>>>> the pdump library # -CONFIG_RTE_LIBRTE_PDUMP=y
>>>>> +CONFIG_RTE_LIBRTE_PDUMP=n
>>>>>
>>>>> #
>>>>> # Compile vhost user library
>>>>> --
>>>>> 2.1.4
>>>>
>>>> Since you already mentioned in another mail to Ferruh that the config flag disabling patches are only for testers' compilation purposes and you plan to make a proper fix by the end of June, I will wait for the actual patch.
>>>>
>>>
>>> I said I planned to do so, but found out that I would not have enough
>>> time before the end of June. Sorry about the ambiguous phrasing.
>>>
>>> Do you think you will be able to fix this library in time?
>>
>> KNI uses / depends on pci; I am not sure what to fix here.
>>
>> The problem with enabling KNI is a build dependency problem, right?
>>
>> I guess the problem will be fixed if we can build in the following order:
>> - lib/eal
>> - drivers/bus
>> - lib
>> - drivers
>>
>> This was the case when bus drivers were compiled within eal. What do you think
>> about this build order?
>>
>
> Yes, that build order would fix the issue.
> However, IMO this is not the proper way to proceed.
> It obscures the architecture, the distinction between DPDK abstractions
> and their implementations.
>
> Looking quickly into this dependency, it seems that the PCI info is only
> used during allocation, and only to register PCI information within
> device infos. They do not seem to be used afterward at the library level,
> except to print some device description upon device start.
>
> They can be completely removed from KNI (both from the lib and the
> driver), without breaking the compilation.
> This however changes the API of rte_kni_alloc() and the ABI of
> rte_kni_conf.
>
> But it seems better than changing the build order and opening a can of
> all kinds of worms, allowing a few libraries to skirt around their duty to
> remain generic and independent from abstraction implementations.
>
> Ideally KNI interfaces should be able to use any rte_device, not only
> PCI. But if it is forced to use only PCI devices, then pointing to an
> rte_pci_device seems a better way to proceed, as it has all those infos
> readily available. It would allow the PCI device to grow and evolve without
> breaking the KNI lib.
>
> Anyway, I think there are several possible solutions to this, before
> resorting to modifying the build order.
I started the discussion in the wrong thread; I will copy your mail and
answer from the correct thread, hoping this won't make things more confusing.
For future reference, moving to:
http://dpdk.org/ml/archives/dev/2017-June/067688.html
>
>>>
>>>> Please see if rte_pci.h can be removed from the includes of rte_pdump.c; it might be an unnecessary include.
>>>>
>>>> Thanks,
>>>> Reshma
>>>
>>
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2 1/4] ethdev: modify callback process API
2017-06-14 21:22 3% ` Thomas Monjalon
@ 2017-06-15 10:56 3% ` Iremonger, Bernard
0 siblings, 0 replies; 200+ results
From: Iremonger, Bernard @ 2017-06-15 10:56 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
Hi Thomas,
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Wednesday, June 14, 2017 10:22 PM
> To: Iremonger, Bernard <bernard.iremonger@intel.com>
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v2 1/4] ethdev: modify callback process API
>
> Hi,
>
> 12/06/2017 17:18, Bernard Iremonger:
> > Change the rte_eth_dev_callback_process function to return int, and
> > add a void *ret_param parameter.
>
> You should squash the tests and examples changes into this change to avoid
> breaking compilation.
> Doc patch can also be squashed.
>
I will squash the patches into one patch.
> > --- a/lib/librte_ether/rte_ether_version.map
> > +++ b/lib/librte_ether/rte_ether_version.map
> > +DPDK_17.08 {
> > + global:
> > +
> > + _rte_eth_dev_callback_process;
> > +
> > +} DPDK_17.05;
>
> You should remove the original function from 2.2 ABI block.
I will remove it from the 2.2 ABI block.
Thanks for the review.
Regards,
Bernard.
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v2 07/12] pdump: disabled by default
@ 2017-06-14 23:01 3% ` Gaëtan Rivet
2017-06-15 13:07 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Gaëtan Rivet @ 2017-06-14 23:01 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: Pattan, Reshma, dev
Hi Ferruh,
On Tue, Jun 13, 2017 at 06:15:45PM +0100, Ferruh Yigit wrote:
> On 6/11/2017 8:42 PM, Gaëtan Rivet wrote:
> > Hi Reshma,
> >
> > On Fri, Jun 09, 2017 at 02:24:58PM +0000, Pattan, Reshma wrote:
> >> Hi,
> >>
> >>> -----Original Message-----
> >>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Gaetan Rivet
> >>> Sent: Thursday, June 8, 2017 12:59 AM
> >>> To: dev@dpdk.org
> >>> Cc: Gaetan Rivet <gaetan.rivet@6wind.com>
> >>> Subject: [dpdk-dev] [PATCH v2 07/12] pdump: disabled by default
> >>>
> >>> Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
> >>> ---
> >>> config/common_base | 2 +-
> >>> 1 file changed, 1 insertion(+), 1 deletion(-)
> >>>
> >>> diff --git a/config/common_base b/config/common_base index
> >>> cade611..8ec5e4e 100644
> >>> --- a/config/common_base
> >>> +++ b/config/common_base
> >>> @@ -700,7 +700,7 @@ CONFIG_RTE_KNI_PREEMPT_DEFAULT=y # # Compile
> >>> the pdump library # -CONFIG_RTE_LIBRTE_PDUMP=y
> >>> +CONFIG_RTE_LIBRTE_PDUMP=n
> >>>
> >>> #
> >>> # Compile vhost user library
> >>> --
> >>> 2.1.4
> >>
> >> Since you already mentioned in another mail to Ferruh that the config flag disabling patches are only for testers' compilation purposes and you plan to make a proper fix by the end of June, I will wait for the actual patch.
> >>
> >
> > I said I planned to do so, but found out that I would not have enough
> > time before the end of June. Sorry about the ambiguous phrasing.
> >
> > Do you think you will be able to fix this library in time?
>
> KNI uses / depends on pci; I am not sure what to fix here.
>
> The problem with enabling KNI is a build dependency problem, right?
>
> I guess the problem will be fixed if we can build in the following order:
> - lib/eal
> - drivers/bus
> - lib
> - drivers
>
> This was the case when bus drivers were compiled within eal. What do you think
> about this build order?
>
Yes, that build order would fix the issue.
However, IMO this is not the proper way to proceed.
It obscures the architecture, the distinction between DPDK abstractions
and their implementations.
Looking quickly into this dependency, it seems that the PCI info is only
used during allocation, and only to register PCI information within
device infos. They do not seem to be used afterward at the library level,
except to print some device description upon device start.
They can be completely removed from KNI (both from the lib and the
driver), without breaking the compilation.
This however changes the API of rte_kni_alloc() and the ABI of
rte_kni_conf.
But it seems better than changing the build order and opening a can of
all kinds of worms, allowing a few libraries to skirt around their duty to
remain generic and independent from abstraction implementations.
Ideally KNI interfaces should be able to use any rte_device, not only
PCI. But if it is forced to use only PCI devices, then pointing to an
rte_pci_device seems a better way to proceed, as it has all those infos
readily available. It would allow the PCI device to grow and evolve without
breaking the KNI lib.
Anyway, I think there are several possible solutions to this, before
resorting to modifying the build order.
> >
> >> Please see if rte_pci.h can be removed from the includes of rte_pdump.c; it might be an unnecessary include.
> >>
> >> Thanks,
> >> Reshma
> >
>
--
Gaëtan Rivet
6WIND
^ permalink raw reply [relevance 3%]
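One shape the rte_pci_device suggestion above could take — purely illustrative, not part of any submitted patch — is to replace the embedded PCI copies in rte_kni_conf with a reference to the bus-level device, letting rte_pci_device evolve without breaking the KNI ABI:

/* Hypothetical rte_kni_conf layout; unrelated fields elided. */
struct rte_kni_conf {
	char name[RTE_KNI_NAMESIZE];
	uint16_t group_id;
	unsigned int mbuf_size;
	/* Replaces struct rte_pci_addr addr and struct rte_pci_id id. */
	const struct rte_pci_device *pci_dev;
	/* ... remaining fields unchanged ... */
};

This would keep the kernel module's driver-matching information reachable while decoupling the library's layout from PCI structure growth.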
* Re: [dpdk-dev] [PATCH v2 1/4] ethdev: modify callback process API
@ 2017-06-14 21:22 3% ` Thomas Monjalon
2017-06-15 10:56 3% ` Iremonger, Bernard
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2017-06-14 21:22 UTC (permalink / raw)
To: bernard.iremonger; +Cc: dev
Hi,
12/06/2017 17:18, Bernard Iremonger:
> Change the rte_eth_dev_callback_process function to return int,
> and add a void *ret_param parameter.
You should squash the tests and examples changes into this change to avoid
breaking compilation.
Doc patch can also be squashed.
> --- a/lib/librte_ether/rte_ether_version.map
> +++ b/lib/librte_ether/rte_ether_version.map
> +DPDK_17.08 {
> + global:
> +
> + _rte_eth_dev_callback_process;
> +
> +} DPDK_17.05;
You should remove the original function from 2.2 ABI block.
^ permalink raw reply [relevance 3%]
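For clarity, the prototype change under discussion amounts to the following sketch (reconstructed from the commit description; exact parameter names may differ in the merged patch):

/* Before: no status and no way to pass data back to the caller. */
void _rte_eth_dev_callback_process(struct rte_eth_dev *dev,
		enum rte_eth_event_type event, void *cb_arg);

/* After: returns int and carries an output parameter. */
int _rte_eth_dev_callback_process(struct rte_eth_dev *dev,
		enum rte_eth_event_type event, void *cb_arg,
		void *ret_param);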
* Re: [dpdk-dev] [PATCH v3] ethdev: add isolated mode to flow API
2017-06-14 13:35 3% ` Adrien Mazarguil
@ 2017-06-14 14:04 0% ` Andrew Rybchenko
0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2017-06-14 14:04 UTC (permalink / raw)
To: Adrien Mazarguil; +Cc: dev
On 06/14/2017 04:35 PM, Adrien Mazarguil wrote:
> On Wed, Jun 14, 2017 at 04:01:46PM +0300, Andrew Rybchenko wrote:
>> On 06/14/2017 03:45 PM, Adrien Mazarguil wrote:
>>> Isolated mode can be requested by applications on individual ports to avoid
>>> ingress traffic outside of the flow rules they define.
>>>
>>> Besides making ingress more deterministic, it allows PMDs to safely reuse
>>> resources otherwise assigned to handle the remaining traffic, such as
>>> global RSS configuration settings, VLAN filters, MAC address entries,
>>> legacy filter API rules and so on in order to expand the set of possible
>>> flow rule types.
>>>
>>> To minimize code complexity, PMDs implementing this mode may provide
>>> partial (or even no) support for flow rules when not enabled (e.g. no
>>> priorities, no RSS action). Applications written to use the flow API are
>>> therefore encouraged to enable it.
>>>
>>> Once effective, leaving isolated mode may not be possible depending on PMD
>>> implementation.
>>>
>>> Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
>>> Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
>>>
>>> ---
>>>
>>> v3:
>>> - Rebased on next-net/master. Note this patch depends on
>>> commit c0688ef1eded ("net/igb: parse flow API n-tuple filter") due to a
>>> necessary fix in igb's rte_flow_ops definition to avoid a compilation
>>> issue.
>>>
>>> v2:
>>> - Rebased on master.
>>> ---
>>> app/test-pmd/cmdline.c | 4 ++
>>> app/test-pmd/cmdline_flow.c | 49 ++++++++++++++++++++-
>>> app/test-pmd/config.c | 16 +++++++
>>> app/test-pmd/testpmd.h | 1 +
>>> doc/guides/prog_guide/rte_flow.rst | 56 ++++++++++++++++++++++++
>>> doc/guides/testpmd_app_ug/testpmd_funcs.rst | 48 +++++++++++++++++++-
>>> drivers/net/e1000/igb_flow.c | 9 ++--
>>> drivers/net/ixgbe/ixgbe_flow.c | 9 ++--
>>> lib/librte_ether/rte_ether_version.map | 7 +++
>>> lib/librte_ether/rte_flow.c | 18 ++++++++
>>> lib/librte_ether/rte_flow.h | 33 ++++++++++++++
>>> lib/librte_ether/rte_flow_driver.h | 5 +++
>>> 12 files changed, 242 insertions(+), 13 deletions(-)
>> <snip>
>>
>>> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
>>> index b587ba9..699d2b2 100644
>>> --- a/doc/guides/prog_guide/rte_flow.rst
>>> +++ b/doc/guides/prog_guide/rte_flow.rst
>>> @@ -1517,6 +1517,62 @@ Return values:
>>> - 0 on success, a negative errno value otherwise and ``rte_errno`` is set.
>>> +Isolated mode
>>> +-------------
>>> +
>>> +The general expectation for ingress traffic is that flow rules process it
>>> +first; the remaining unmatched or pass-through traffic usually ends up in a
>>> +queue (with or without RSS, locally or in some sub-device instance)
>>> +depending on the global configuration settings of a port.
>>> +
>>> +While fine from a compatibility standpoint, this approach makes drivers more
>>> +complex as they have to check for possible side effects outside of this API
>>> +when creating or destroying flow rules. It results in a more limited set of
>>> +available rule types due to the way device resources are assigned (e.g. no
>>> +support for the RSS action even on capable hardware).
>>> +
>>> +Given that nonspecific traffic can be handled by flow rules as well,
>>> +isolated mode is a means for applications to tell a driver that ingress on
>>> +the underlying port must be injected from the defined flow rules only; that
>>> +no default traffic is expected outside those rules.
>>> +
>>> +This has the following benefits:
>>> +
>>> +- Applications get finer-grained control over the kind of traffic they want
>>> + to receive (no traffic by default).
>>> +
>>> +- More importantly they control at what point nonspecific traffic is handled
>>> + relative to other flow rules, by adjusting priority levels.
>>> +
>>> +- Drivers can assign more hardware resources to flow rules and expand the
>>> + set of supported rule types.
>>> +
>>> +Because toggling isolated mode may cause profound changes to the ingress
>>> +processing path of a driver, it may not be possible to leave it once
>>> +entered. Likewise, existing flow rules or global configuration settings may
>>> +prevent a driver from entering isolated mode.
>>> +
>>> +Applications relying on this mode are therefore encouraged to toggle it as
>>> +soon as possible after device initialization, ideally before the first call
>>> +to ``rte_eth_dev_configure()`` to avoid possible failures due to conflicting
>>> +settings.
>>> +
>> I think it would be useful to highlight how isolated mode coexists with
>> promiscuous
>> and all-multicast. What is the expected behaviour of the functions which
>> toggle
>> promiscuous and all-multicast mode if isolated mode is enabled? These
>> functions
>> return void right now, so it is impossible to return error. What should
>> rte_eth_promiscuous_get() and rte_eth_allmulticast_get() return?
> They can technically return nothing/anything as long as they have no effect
> on received traffic, as described.
I was just asking to highlight it in the documentation. Yes, the idea of
the isolated mode is clear, and maybe that is enough.
> Modifying existing wrappers that currently return void instead of an error
> is outside the scope of this patch and requires ABI breakage. This can be
> done later when the need arises.
It is perfectly clear.
> For mlx4/mlx5, we plan to expose a different set of rte_eth_dev_ops
> depending on whether isolated mode is toggled. When enabled, the
> allmulti/promisc/MAC/VLAN/etc callbacks would be NULL for instance, and the
> associated ethdev wrappers would automatically return an error where
> applicable.
Thanks for the idea. We'll consider it as well.
Andrew.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3] ethdev: add isolated mode to flow API
@ 2017-06-14 13:35 3% ` Adrien Mazarguil
2017-06-14 14:04 0% ` Andrew Rybchenko
0 siblings, 1 reply; 200+ results
From: Adrien Mazarguil @ 2017-06-14 13:35 UTC (permalink / raw)
To: Andrew Rybchenko; +Cc: dev
On Wed, Jun 14, 2017 at 04:01:46PM +0300, Andrew Rybchenko wrote:
> On 06/14/2017 03:45 PM, Adrien Mazarguil wrote:
> >Isolated mode can be requested by applications on individual ports to avoid
> >ingress traffic outside of the flow rules they define.
> >
> >Besides making ingress more deterministic, it allows PMDs to safely reuse
> >resources otherwise assigned to handle the remaining traffic, such as
> >global RSS configuration settings, VLAN filters, MAC address entries,
> >legacy filter API rules and so on in order to expand the set of possible
> >flow rule types.
> >
> >To minimize code complexity, PMDs implementing this mode may provide
> >partial (or even no) support for flow rules when not enabled (e.g. no
> >priorities, no RSS action). Applications written to use the flow API are
> >therefore encouraged to enable it.
> >
> >Once effective, leaving isolated mode may not be possible depending on PMD
> >implementation.
> >
> >Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
> >Acked-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
> >
> >---
> >
> >v3:
> >- Rebased on next-net/master. Note this patch depends on
> > commit c0688ef1eded ("net/igb: parse flow API n-tuple filter") due to a
> > necessary fix in igb's rte_flow_ops definition to avoid a compilation
> > issue.
> >
> >v2:
> >- Rebased on master.
> >---
> > app/test-pmd/cmdline.c | 4 ++
> > app/test-pmd/cmdline_flow.c | 49 ++++++++++++++++++++-
> > app/test-pmd/config.c | 16 +++++++
> > app/test-pmd/testpmd.h | 1 +
> > doc/guides/prog_guide/rte_flow.rst | 56 ++++++++++++++++++++++++
> > doc/guides/testpmd_app_ug/testpmd_funcs.rst | 48 +++++++++++++++++++-
> > drivers/net/e1000/igb_flow.c | 9 ++--
> > drivers/net/ixgbe/ixgbe_flow.c | 9 ++--
> > lib/librte_ether/rte_ether_version.map | 7 +++
> > lib/librte_ether/rte_flow.c | 18 ++++++++
> > lib/librte_ether/rte_flow.h | 33 ++++++++++++++
> > lib/librte_ether/rte_flow_driver.h | 5 +++
> > 12 files changed, 242 insertions(+), 13 deletions(-)
>
> <snip>
>
> >diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> >index b587ba9..699d2b2 100644
> >--- a/doc/guides/prog_guide/rte_flow.rst
> >+++ b/doc/guides/prog_guide/rte_flow.rst
> >@@ -1517,6 +1517,62 @@ Return values:
> > - 0 on success, a negative errno value otherwise and ``rte_errno`` is set.
> >+Isolated mode
> >+-------------
> >+
> >+The general expectation for ingress traffic is that flow rules process it
> >+first; the remaining unmatched or pass-through traffic usually ends up in a
> >+queue (with or without RSS, locally or in some sub-device instance)
> >+depending on the global configuration settings of a port.
> >+
> >+While fine from a compatibility standpoint, this approach makes drivers more
> >+complex as they have to check for possible side effects outside of this API
> >+when creating or destroying flow rules. It results in a more limited set of
> >+available rule types due to the way device resources are assigned (e.g. no
> >+support for the RSS action even on capable hardware).
> >+
> >+Given that nonspecific traffic can be handled by flow rules as well,
> >+isolated mode is a means for applications to tell a driver that ingress on
> >+the underlying port must be injected from the defined flow rules only; that
> >+no default traffic is expected outside those rules.
> >+
> >+This has the following benefits:
> >+
> >+- Applications get finer-grained control over the kind of traffic they want
> >+ to receive (no traffic by default).
> >+
> >+- More importantly they control at what point nonspecific traffic is handled
> >+ relative to other flow rules, by adjusting priority levels.
> >+
> >+- Drivers can assign more hardware resources to flow rules and expand the
> >+ set of supported rule types.
> >+
> >+Because toggling isolated mode may cause profound changes to the ingress
> >+processing path of a driver, it may not be possible to leave it once
> >+entered. Likewise, existing flow rules or global configuration settings may
> >+prevent a driver from entering isolated mode.
> >+
> >+Applications relying on this mode are therefore encouraged to toggle it as
> >+soon as possible after device initialization, ideally before the first call
> >+to ``rte_eth_dev_configure()`` to avoid possible failures due to conflicting
> >+settings.
> >+
>
> I think it would be useful to highlight how isolated mode coexists with
> promiscuous
> and all-multicast. What is the expected behaviour of the functions which
> toggle
> promiscuous and all-multicast mode if isolated mode is enabled? These
> functions
> return void right now, so it is impossible to return error. What should
> rte_eth_promiscuous_get() and rte_eth_allmulticast_get() return?
They can technically return nothing/anything as long as they have no effect
on received traffic, as described.
Modifying existing wrappers that currently return void instead of an error
is outside the scope of this patch and requires ABI breakage. This can be
done later when the need arises.
For mlx4/mlx5, we plan to expose a different set of rte_eth_dev_ops
depending on whether isolated mode is toggled. When enabled, the
allmulti/promisc/MAC/VLAN/etc callbacks would be NULL for instance, and the
associated ethdev wrappers would automatically return an error where
applicable.
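As an illustration (not part of the patch), a minimal sketch of the
"toggle before configure" recommendation above, assuming the
rte_flow_isolate() wrapper added by this patch:

	#include <rte_ethdev.h>
	#include <rte_flow.h>

	/* Enter isolated mode before the first rte_eth_dev_configure()
	 * call, as the documentation recommends; bail out if the PMD
	 * refuses to isolate the port. */
	static int
	port_init_isolated(uint8_t port_id, const struct rte_eth_conf *conf)
	{
		struct rte_flow_error error;

		if (rte_flow_isolate(port_id, 1, &error) < 0)
			return -1;
		return rte_eth_dev_configure(port_id, 1, 1, conf);
	}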
--
Adrien Mazarguil
6WIND
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v3] ethdev: add fuzzy match in flow API
2017-05-30 12:46 4% ` Adrien Mazarguil
@ 2017-06-13 3:07 2% ` Qi Zhang
1 sibling, 0 replies; 200+ results
From: Qi Zhang @ 2017-06-13 3:07 UTC (permalink / raw)
To: adrien.mazarguil; +Cc: dev, Qi Zhang
Add a new meta pattern item, RTE_FLOW_ITEM_TYPE_FUZZY, to the flow API.
This is for devices that support a fuzzy match option.
Usually a fuzzy match is fast but the cost is accuracy,
e.g. Signature Match only matches a pattern's hash value, so it is
possible that two different patterns have the same hash value.
The matching accuracy level can be configured through the threshold
subfield. The driver can divide the threshold range and map it to the
different accuracy levels that the device supports.
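As an illustration (not part of this patch), a sketch of how an
application could combine the new FUZZY meta item with the existing
rte_flow_create() call; the queue index and threshold value are
hypothetical:

	#include <rte_flow.h>

	/* Fuzzy-match TCPv4 traffic and steer it to RX queue 0. With no
	 * explicit mask on the FUZZY item, the default
	 * rte_flow_item_fuzzy_mask (full "thresh" mask) applies. */
	static struct rte_flow *
	create_fuzzy_tcp_rule(uint8_t port_id)
	{
		struct rte_flow_attr attr = { .ingress = 1 };
		struct rte_flow_item_fuzzy fuzzy = { .thresh = 1 };
		struct rte_flow_item pattern[] = {
			{ .type = RTE_FLOW_ITEM_TYPE_FUZZY, .spec = &fuzzy },
			{ .type = RTE_FLOW_ITEM_TYPE_ETH },
			{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
			{ .type = RTE_FLOW_ITEM_TYPE_TCP },
			{ .type = RTE_FLOW_ITEM_TYPE_END },
		};
		struct rte_flow_action_queue queue = { .index = 0 };
		struct rte_flow_action actions[] = {
			{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
			{ .type = RTE_FLOW_ACTION_TYPE_END },
		};
		struct rte_flow_error error;

		return rte_flow_create(port_id, &attr, pattern, actions, &error);
	}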
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
v2:
- replace "Roughly" with "Fuzzy"
v3:
- Append the new item to avoid ABI/API break.
- "last" and "mask" now take effect.
- "threshold" is abbreviated as "thresh".
- modified a couple of comments (probably need further polishing).
app/test-pmd/cmdline_flow.c | 25 +++++++++++++
app/test-pmd/config.c | 1 +
doc/guides/prog_guide/rte_flow.rst | 54 ++++++++++++++++++++++++++---
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 4 +++
lib/librte_ether/rte_flow.h | 37 ++++++++++++++++++++
5 files changed, 117 insertions(+), 4 deletions(-)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 0fd69f9..6befcaf 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -167,6 +167,8 @@ enum index {
ITEM_MPLS_LABEL,
ITEM_GRE,
ITEM_GRE_PROTO,
+ ITEM_FUZZY,
+ ITEM_FUZZY_THRESH,
/* Validate/create actions. */
ACTIONS,
@@ -444,6 +446,13 @@ static const enum index next_item[] = {
ITEM_NVGRE,
ITEM_MPLS,
ITEM_GRE,
+ ITEM_FUZZY,
+ ZERO,
+};
+
+static const enum index item_fuzzy[] = {
+ ITEM_FUZZY_THRESH,
+ ITEM_NEXT,
ZERO,
};
@@ -1372,6 +1381,22 @@ static const struct token token_list[] = {
.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_gre,
protocol)),
},
+ [ITEM_FUZZY] = {
+ .name = "fuzzy",
+ .help = "fuzzy pattern match, expected to be faster than default",
+ .priv = PRIV_ITEM(FUZZY,
+ sizeof(struct rte_flow_item_fuzzy)),
+ .next = NEXT(item_fuzzy),
+ .call = parse_vc,
+ },
+ [ITEM_FUZZY_THRESH] = {
+ .name = "thresh",
+ .help = "match accuracy threshold",
+ .next = NEXT(item_fuzzy, NEXT_ENTRY(UNSIGNED), item_param),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_item_fuzzy,
+ thresh)),
+ },
+
/* Validate/create actions. */
[ACTIONS] = {
.name = "actions",
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 4d873cd..8846df3 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -970,6 +970,7 @@ static const struct {
MK_FLOW_ITEM(VXLAN, sizeof(struct rte_flow_item_vxlan)),
MK_FLOW_ITEM(MPLS, sizeof(struct rte_flow_item_mpls)),
MK_FLOW_ITEM(GRE, sizeof(struct rte_flow_item_gre)),
+ MK_FLOW_ITEM(FUZZY, sizeof(struct rte_flow_item_fuzzy)),
};
/** Compute storage space needed by item specification. */
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index b587ba9..bf801de 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -906,6 +906,52 @@ Matches a GRE header.
- ``protocol``: protocol type.
- Default ``mask`` matches protocol only.
+Item: ``FUZZY``
+^^^^^^^^^^^^^^^^^
+
+Fuzzy pattern match, expected to be faster than the default.
+
+This is for devices that support a fuzzy match option. Usually a fuzzy match
+is fast but the cost is accuracy, e.g. Signature Match only matches a
+pattern's hash value, so two different patterns may share the same hash value.
+
+The matching accuracy level can be configured by threshold. The driver can
+divide the threshold range and map it to accuracy levels the device supports.
+
+.. _table_rte_flow_item_fuzzy:
+
+.. table:: FUZZY
+
+ +----------+---------------+--------------------------------------------------+
+ | Field    | Subfield      | Value                                            |
+ +==========+===============+==================================================+
+ | ``spec`` | ``threshold`` | 0 as perfect match, 0xffffffff as fuzziest match |
+ +----------+---------------+--------------------------------------------------+
+ | ``last`` | ``threshold`` | upper range value                                |
+ +----------+---------------+--------------------------------------------------+
+ | ``mask`` | ``threshold`` | bit-mask applied to "spec" and "last"            |
+ +----------+---------------+--------------------------------------------------+
+
+Usage example, fuzzy matching TCPv4 packets:
+
+.. _table_rte_flow_item_fuzzy_example:
+
+.. table:: Fuzzy matching
+
+ +-------+----------+
+ | Index | Item |
+ +=======+==========+
+ | 0 | FUZZY |
+ +-------+----------+
+ | 1 | Ethernet |
+ +-------+----------+
+ | 2 | IPv4 |
+ +-------+----------+
+ | 3 | TCP |
+ +-------+----------+
+ | 4 | END |
+ +-------+----------+
+
Actions
~~~~~~~
@@ -2026,8 +2072,8 @@ A few features are intentionally not supported:
- "MAC VLAN" or "tunnel" perfect matching modes should be automatically set
according to the created flow rules.
-- Signature mode of operation is not defined but could be handled through a
- specific item type if needed.
+- Signature mode of operation is not defined but could be handled through
+ the "FUZZY" item.
.. _table_rte_flow_migration_fdir:
@@ -2054,8 +2100,8 @@ A few features are intentionally not supported:
| | +----------+-----+ |
| | | ``mask`` | any | |
+---+-------------------+----------+-----+ |
- | 3 | VF, PF (optional) | ``spec`` | any | |
- | | +----------+-----+ |
+ | 3 | VF, PF, FUZZY | ``spec`` | any | |
+ | | (optional) +----------+-----+ |
| | | ``last`` | N/A | |
| | +----------+-----+ |
| | | ``mask`` | any | |
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 0e50c10..8437f7a 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -2608,6 +2608,10 @@ This section lists supported pattern items and their attributes, if any.
- ``protocol {unsigned}``: protocol type.
+- ``fuzzy``: fuzzy pattern match, expected to be faster than default.
+
+ - ``thresh {unsigned}``: accuracy threshold.
+
Actions list
^^^^^^^^^^^^
diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
index c47edbc..90749b1 100644
--- a/lib/librte_ether/rte_flow.h
+++ b/lib/librte_ether/rte_flow.h
@@ -297,6 +297,18 @@ enum rte_flow_item_type {
* See struct rte_flow_item_gre.
*/
RTE_FLOW_ITEM_TYPE_GRE,
+
+ /**
+ * [META]
+ *
+ * Fuzzy pattern match, expected to be faster than the default.
+ *
+ * This is for devices that support a fuzzy matching option.
+ * Usually fuzzy matching is fast but the cost is accuracy.
+ *
+ * See struct rte_flow_item_fuzzy.
+ */
+ RTE_FLOW_ITEM_TYPE_FUZZY,
};
/**
@@ -701,6 +713,31 @@ static const struct rte_flow_item_gre rte_flow_item_gre_mask = {
#endif
/**
+ * RTE_FLOW_ITEM_TYPE_FUZZY
+ *
+ * Fuzzy pattern match, expected to be faster than the default.
+ *
+ * This is for devices that support a fuzzy match option.
+ * Usually a fuzzy match is fast but the cost is accuracy,
+ * e.g. Signature Match only matches a pattern's hash value, so it
+ * is possible that two different patterns have the same hash value.
+ *
+ * The matching accuracy level can be configured by threshold.
+ * The driver can divide the threshold range and map it to the
+ * different accuracy levels that the device supports.
+ */
+struct rte_flow_item_fuzzy {
+ uint32_t thresh; /**< Accuracy threshold. */
+};
+
+/** Default mask for RTE_FLOW_ITEM_TYPE_FUZZY. */
+#ifndef __cplusplus
+static const struct rte_flow_item_fuzzy rte_flow_item_fuzzy_mask = {
+ .thresh = 0xffffffff,
+};
+#endif
+
+/**
* Matching pattern item definition.
*
* A pattern is formed by stacking items starting from the lowest protocol
--
2.7.4
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v6 2/2] ethdev: add traffic management API
@ 2017-06-12 13:35 1% ` Cristian Dumitrescu
0 siblings, 0 replies; 200+ results
From: Cristian Dumitrescu @ 2017-06-12 13:35 UTC (permalink / raw)
To: dev
Cc: thomas, jerin.jacob, balasubramanian.manoharan, hemant.agrawal,
shreyansh.jain, jasvinder.singh, wenzhuo.lu
This patch introduces the generic ethdev API for the traffic manager
capability, which includes: hierarchical scheduling, traffic shaping,
congestion management, packet marking.
Main features:
- Exposed as ethdev plugin capability (similar to rte_flow)
- Capability query API per port, per level and per node
- Scheduling algorithms: Strict Priority (SP), Weighted Fair Queuing (WFQ)
- Traffic shaping: single/dual rate, private (per node) and shared (by
multiple nodes) shapers
- Congestion management for hierarchy leaf nodes: algorithms of tail drop,
head drop, WRED; private (per node) and shared (by multiple nodes) WRED
contexts
- Packet marking: IEEE 802.1q (VLAN DEI), IETF RFC 3168 (IPv4/IPv6 ECN for
TCP and SCTP), IETF RFC 2597 (IPv4 / IPv6 DSCP)
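For illustration, a minimal configuration sketch (not part of the patch):
one shaper profile, one root node, one leaf node per TX queue, then a
hierarchy commit. The rte_tm_shaper_params/rte_tm_node_params field names
are taken from the full header (their definitions fall outside the hunks
quoted here), so treat them as assumptions; omitted fields are zeroed:

	#include <rte_tm.h>

	/* Leaf node IDs are predefined to match TX queue IDs, so the
	 * root node takes the first ID past the leaf range. */
	static int
	tm_setup_minimal(uint8_t port_id, uint16_t n_txq)
	{
		struct rte_tm_error error;
		struct rte_tm_shaper_params sp = {
			.committed = { .rate = 125000000, .size = 4096 }, /* 1 Gbps */
			.pkt_length_adjust = RTE_TM_ETH_FRAMING_OVERHEAD_FCS,
		};
		struct rte_tm_node_params np = {
			.shaper_profile_id = 0,
			.stats_mask = RTE_TM_STATS_N_PKTS | RTE_TM_STATS_N_BYTES,
		};
		uint32_t root = n_txq;
		uint16_t q;

		if (rte_tm_shaper_profile_add(port_id, 0, &sp, &error))
			return -1;
		if (rte_tm_node_add(port_id, root, RTE_TM_NODE_ID_NULL, 0, 1,
				RTE_TM_NODE_LEVEL_ID_ANY, &np, &error))
			return -1;
		for (q = 0; q < n_txq; q++)
			if (rte_tm_node_add(port_id, q, root, 0, 1,
					RTE_TM_NODE_LEVEL_ID_ANY, &np, &error))
				return -1;
		return rte_tm_hierarchy_commit(port_id, 1 /* clear_on_fail */,
				&error);
	}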
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Balasubramanian.Manoharan <balasubramanian.manoharan@caviumnetworks.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
Changes in v6:
- Implemented feedback from Jerin [9]
- Doxygen: improved @see to point to specific capability fields
- Doxygen: fixed hooks in doc/api/doxy-api-index.md
Changes in v5:
- Implemented feedback from Jerin [8]
- Add level parameter to node add API function
- Doxygen: fixed comments applicable to field below/before
- Doxygen: added missing @see
- Doxygen: fixed hooks in doc/api/doxy-api-index.md
- Doxygen: fixed table rendering
- Added copyright on API header file from Cavium and NXP to
existing Intel copyright
- MAINTAINERS: added next-tm tree
- Added V4 ACKs from Jerin, Bala and Hemant
Changes in v4:
- Implemented feedback from Hemant [6]
- Capability API: Reworked the port, level and node capability API
data structure to remove confusion due to "summary across all
nodes" approach, which made it unclear whether a particular
capability is supported by all nodes or by at least one node.
- Capability API: Added flags for "all nodes have identical
capability set"
- Suspended state: documented the required behavior in Doxygen
description
- Implemented feedback from Jerin [7]
- Node add: added level parameter (see new API function:
rte_tm_node_add_check_level())
- RTE_TM_ETH_FRAMING_OVERHEAD, RTE_TM_ETH_FRAMING_OVERHEAD_FCS:
documented their usage in their Doxygen description
- Capability API: for each function, mention the related
capability field (Doxygen @see)
- stats_mask, capability_mask: document the enum flags used to
build each mask (Doxygen @see)
- Rename rte_tm_get_leaf_nodes() to
rte_tm_get_number_of_leaf_nodes()
- Doxygen: add @param[in, out] to the description of all API funcs
- Doxygen: fix hooks in doc/api/doxy-api-index.md
- Rename rte_tm_hierarchy_set() to rte_tm_hierarchy_commit(), improved
Doxygen description
- Node add, node delete: improved Doxygen description
- Fixed incorrect design assumption that packet-based weight mode for WFQ
is identical to WRR. As result, removed all references to WRR support.
Renamed the "scheduling mode" node parameters to "wfq_weight_mode".
Changes in v3:
- Implemented feedback from Jerin [5]
- Changed naming convention: scheddev -> tm
- Improvements on the capability API:
- Specification of marking capabilities per color
- WFQ/WRR groups: sp_n_children_max ->
wfq_wrr_n_children_per_group_max, added wfq_wrr_n_groups_max,
improved description of both, improved description of
wfq_wrr_weight_max
- Dynamic updates: added KEEP_LEVEL and CHANGE_LEVEL for parent
update
- Enforced/documented restrictions for root node (node_add() and
update())
- Enforced/documented shaper profile restrictions on PIR: PIR != 0,
PIR >= CIR
- Turned repetitive code in rte_tm.c into macro
- Removed dependency on rte_red.h file (added RED params to rte_tm.h)
- Color: removed "e_" from color names enum
- Fixed small Doxygen style issues
Changes in v2:
- Implemented feedback from Hemant [4]
- Improvements on the capability API
- Added capability API for hierarchy level
- Merged stats capability into the capability API
- Added dynamic updates
- Added non-leaf/leaf union to the node capability structure
- Renamed sp_priority_min to sp_n_priorities_max, added
clarifications
- Fixed description for sp_n_children_max
- Clarified and enforced rule on node ID range for leaf and non-leaf nodes
- Added API functions to get node type (i.e. leaf/non-leaf):
get_leaf_nodes(), node_type_get()
- Added clarification for the root node: its creation, parent, role
- Macro NODE_ID_NULL as root node's parent
- Description of the node_add() and node_parent_update() API funcs
- Added clarification for the first time add vs. subsequent updates rule
- Cleaned up the description for the node_add() function
- Statistics API improvements
- Merged stats capability into the capability API
- Added API function node_stats_update()
- Added more stats per packet color
- Added more error types
- Fixed small Doxygen style issues
Changes in v1 (since RFC [1]):
- Implemented as ethdev plugin (similar to rte_flow) as opposed to more
monolithic additions to ethdev itself
- Implemented feedback from Jerin [2] and Hemant [3]. Implemented all the
suggested items with only one exception, see the long list below,
hopefully nothing was forgotten.
- The item not done (hopefully for a good reason): driver-generated
object IDs. IMO the choice to have application-generated object IDs
adds marginal complexity to the driver (search ID function
required), but it provides huge simplification for the application.
The app does not need to worry about building & managing tree-like
structure for storing driver-generated object IDs, the app can use
its own convention for node IDs depending on the specific hierarchy
that it needs. Trivial example: identify all level-2 nodes with IDs
like 100, 200, 300, … and the level-3 nodes based on their level-2
parents: 110, 120, 130, 140, …, 210, 220, 230, 240, …, 310, 320,
330, … and level-4 nodes based on their level-3 parents: 111, 112,
113, 114, …, 121, 122, 123, 124, …). Moreover, see the change log
for the other related simplification that was implemented: leaf
nodes now have predefined IDs that are the same as their Ethernet
TX queue ID (therefore no translation is required for leaf nodes).
- Capability API. Done per port and per node as well.
- Dual rate shapers
- Added configuration of private shaper (per node) directly from the
shaper profile as part of node API (no shaper ID needed for private
shapers), while the shared shapers are configured outside of the node
API using shaper profile and communicated to the node using shared
shaper ID. So there is no configuration overhead for shared shapers if
the app does not use any of them.
- Leaf nodes now have predefined IDs that are the same as their Ethernet
TX queue ID (therefore no translation is required for leaf nodes). This
is also used to differentiate between a leaf node and a non-leaf node.
- Domain-specific errors to give a precise indication of the error cause
(same as done by rte_flow)
- Packet marking API
- Packet length optional adjustment for shapers, positive (e.g. for adding
Ethernet framing overhead of 20 bytes) or negative (e.g. for rate
limiting based on IP packet bytes)
[1] RFC: http://dpdk.org/ml/archives/dev/2016-November/050956.html
[2] Jerin’s feedback on RFC: http://www.dpdk.org/ml/archives/dev/2017-January/054484.html
[3] Hemant’s feedback on RFC: http://www.dpdk.org/ml/archives/dev/2017-January/054866.html
[4] Hemant's feedback on v1: http://www.dpdk.org/ml/archives/dev/2017-February/058033.html
[5] Jerin's feedback on v1: http://www.dpdk.org/ml/archives/dev/2017-March/058895.html
[6] Hemant's feedback on v3: http://www.dpdk.org/ml/archives/dev/2017-March/062354.html
[7] Jerin's feedback on v3: http://www.dpdk.org/ml/archives/dev/2017-April/063429.html
[8] Jerin's feedback on v4: http://www.dpdk.org/ml/archives/dev/2017-May/066932.html
MAINTAINERS | 5 +
doc/api/doxy-api-index.md | 2 +
lib/librte_ether/Makefile | 5 +-
lib/librte_ether/rte_ether_version.map | 30 +
lib/librte_ether/rte_tm.c | 438 ++++++++
lib/librte_ether/rte_tm.h | 1904 ++++++++++++++++++++++++++++++++
lib/librte_ether/rte_tm_driver.h | 366 ++++++
7 files changed, 2749 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_ether/rte_tm.c
create mode 100644 lib/librte_ether/rte_tm.h
create mode 100644 lib/librte_ether/rte_tm_driver.h
diff --git a/MAINTAINERS b/MAINTAINERS
index f6095ef..3c7414f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -240,6 +240,11 @@ Flow API
M: Adrien Mazarguil <adrien.mazarguil@6wind.com>
F: lib/librte_ether/rte_flow*
+Traffic Management API
+M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+T: git://dpdk.org/next/dpdk-next-tm
+F: lib/librte_ether/rte_tm*
+
Crypto API
M: Declan Doherty <declan.doherty@intel.com>
F: lib/librte_cryptodev/
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index f5f1f19..bcd0fdd 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -41,6 +41,8 @@ There are many libraries, so their headers may be grouped by topics:
[ethctrl] (@ref rte_eth_ctrl.h),
[rte_flow] (@ref rte_flow.h),
[rte_flow_driver] (@ref rte_flow_driver.h),
+ [rte_tm] (@ref rte_tm.h),
+ [rte_tm_driver] (@ref rte_tm_driver.h),
[cryptodev] (@ref rte_cryptodev.h),
[eventdev] (@ref rte_eventdev.h),
[devargs] (@ref rte_devargs.h),
diff --git a/lib/librte_ether/Makefile b/lib/librte_ether/Makefile
index 93fdde1..db692ae 100644
--- a/lib/librte_ether/Makefile
+++ b/lib/librte_ether/Makefile
@@ -1,6 +1,6 @@
# BSD LICENSE
#
-# Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+# Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
@@ -45,6 +45,7 @@ LIBABIVER := 6
SRCS-y += rte_ethdev.c
SRCS-y += rte_flow.c
+SRCS-y += rte_tm.c
#
# Export include files
@@ -56,5 +57,7 @@ SYMLINK-y-include += rte_eth_ctrl.h
SYMLINK-y-include += rte_dev_info.h
SYMLINK-y-include += rte_flow.h
SYMLINK-y-include += rte_flow_driver.h
+SYMLINK-y-include += rte_tm.h
+SYMLINK-y-include += rte_tm_driver.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map
index 2788e7b..5e8651d 100644
--- a/lib/librte_ether/rte_ether_version.map
+++ b/lib/librte_ether/rte_ether_version.map
@@ -161,4 +161,34 @@ DPDK_17.08 {
global:
rte_eth_dev_tm_ops_get;
+ rte_tm_get_leaf_nodes;
+ rte_tm_node_type_get;
+ rte_tm_capabilities_get;
+ rte_tm_level_capabilities_get;
+ rte_tm_node_capabilities_get;
+ rte_tm_wred_profile_add;
+ rte_tm_wred_profile_delete;
+ rte_tm_shared_wred_context_add_update;
+ rte_tm_shared_wred_context_delete;
+ rte_tm_shaper_profile_add;
+ rte_tm_shaper_profile_delete;
+ rte_tm_shared_shaper_add_update;
+ rte_tm_shared_shaper_delete;
+ rte_tm_node_add;
+ rte_tm_node_delete;
+ rte_tm_node_suspend;
+ rte_tm_node_resume;
+ rte_tm_hierarchy_commit;
+ rte_tm_node_parent_update;
+ rte_tm_node_shaper_update;
+ rte_tm_node_shared_shaper_update;
+ rte_tm_node_stats_update;
+ rte_tm_node_wfq_weight_mode_update;
+ rte_tm_node_cman_update;
+ rte_tm_node_wred_context_update;
+ rte_tm_node_shared_wred_context_update;
+ rte_tm_node_stats_read;
+ rte_tm_mark_vlan_dei;
+ rte_tm_mark_ip_ecn;
+ rte_tm_mark_ip_dscp;
} DPDK_17.05;
diff --git a/lib/librte_ether/rte_tm.c b/lib/librte_ether/rte_tm.c
new file mode 100644
index 0000000..7167965
--- /dev/null
+++ b/lib/librte_ether/rte_tm.c
@@ -0,0 +1,438 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdint.h>
+
+#include <rte_errno.h>
+#include "rte_ethdev.h"
+#include "rte_tm_driver.h"
+#include "rte_tm.h"
+
+/* Get generic traffic manager operations structure from a port. */
+const struct rte_tm_ops *
+rte_tm_ops_get(uint8_t port_id, struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ const struct rte_tm_ops *ops;
+
+ if (!rte_eth_dev_is_valid_port(port_id)) {
+ rte_tm_error_set(error,
+ ENODEV,
+ RTE_TM_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ rte_strerror(ENODEV));
+ return NULL;
+ }
+
+ if ((dev->dev_ops->tm_ops_get == NULL) ||
+ (dev->dev_ops->tm_ops_get(dev, &ops) != 0) ||
+ (ops == NULL)) {
+ rte_tm_error_set(error,
+ ENOSYS,
+ RTE_TM_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ rte_strerror(ENOSYS));
+ return NULL;
+ }
+
+ return ops;
+}
+
+#define RTE_TM_FUNC(port_id, func) \
+({ \
+ const struct rte_tm_ops *ops = \
+ rte_tm_ops_get(port_id, error); \
+ if (ops == NULL) \
+ return -rte_errno; \
+ \
+ if (ops->func == NULL) \
+ return -rte_tm_error_set(error, \
+ ENOSYS, \
+ RTE_TM_ERROR_TYPE_UNSPECIFIED, \
+ NULL, \
+ rte_strerror(ENOSYS)); \
+ \
+ ops->func; \
+})
+
+/* Get number of leaf nodes */
+int
+rte_tm_get_number_of_leaf_nodes(uint8_t port_id,
+ uint32_t *n_leaf_nodes,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ const struct rte_tm_ops *ops =
+ rte_tm_ops_get(port_id, error);
+
+ if (ops == NULL)
+ return -rte_errno;
+
+ if (n_leaf_nodes == NULL) {
+ rte_tm_error_set(error,
+ EINVAL,
+ RTE_TM_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ rte_strerror(EINVAL));
+ return -rte_errno;
+ }
+
+ *n_leaf_nodes = dev->data->nb_tx_queues;
+ return 0;
+}
+
+/* Check node type (leaf or non-leaf) */
+int
+rte_tm_node_type_get(uint8_t port_id,
+ uint32_t node_id,
+ int *is_leaf,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_type_get)(dev,
+ node_id, is_leaf, error);
+}
+
+/* Get capabilities */
+int rte_tm_capabilities_get(uint8_t port_id,
+ struct rte_tm_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, capabilities_get)(dev,
+ cap, error);
+}
+
+/* Get level capabilities */
+int rte_tm_level_capabilities_get(uint8_t port_id,
+ uint32_t level_id,
+ struct rte_tm_level_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, level_capabilities_get)(dev,
+ level_id, cap, error);
+}
+
+/* Get node capabilities */
+int rte_tm_node_capabilities_get(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_node_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_capabilities_get)(dev,
+ node_id, cap, error);
+}
+
+/* Add WRED profile */
+int rte_tm_wred_profile_add(uint8_t port_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_wred_params *profile,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, wred_profile_add)(dev,
+ wred_profile_id, profile, error);
+}
+
+/* Delete WRED profile */
+int rte_tm_wred_profile_delete(uint8_t port_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, wred_profile_delete)(dev,
+ wred_profile_id, error);
+}
+
+/* Add/update shared WRED context */
+int rte_tm_shared_wred_context_add_update(uint8_t port_id,
+ uint32_t shared_wred_context_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, shared_wred_context_add_update)(dev,
+ shared_wred_context_id, wred_profile_id, error);
+}
+
+/* Delete shared WRED context */
+int rte_tm_shared_wred_context_delete(uint8_t port_id,
+ uint32_t shared_wred_context_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, shared_wred_context_delete)(dev,
+ shared_wred_context_id, error);
+}
+
+/* Add shaper profile */
+int rte_tm_shaper_profile_add(uint8_t port_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, shaper_profile_add)(dev,
+ shaper_profile_id, profile, error);
+}
+
+/* Delete shaper profile */
+int rte_tm_shaper_profile_delete(uint8_t port_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, shaper_profile_delete)(dev,
+ shaper_profile_id, error);
+}
+
+/* Add/update shared shaper */
+int rte_tm_shared_shaper_add_update(uint8_t port_id,
+ uint32_t shared_shaper_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, shared_shaper_add_update)(dev,
+ shared_shaper_id, shaper_profile_id, error);
+}
+
+/* Delete shared shaper */
+int rte_tm_shared_shaper_delete(uint8_t port_id,
+ uint32_t shared_shaper_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, shared_shaper_delete)(dev,
+ shared_shaper_id, error);
+}
+
+/* Add node to port traffic manager hierarchy */
+int rte_tm_node_add(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t parent_node_id,
+ uint32_t priority,
+ uint32_t weight,
+ uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_add)(dev,
+ node_id, parent_node_id, priority, weight, level_id,
+ params, error);
+}
+
+/* Delete node from traffic manager hierarchy */
+int rte_tm_node_delete(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_delete)(dev,
+ node_id, error);
+}
+
+/* Suspend node */
+int rte_tm_node_suspend(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_suspend)(dev,
+ node_id, error);
+}
+
+/* Resume node */
+int rte_tm_node_resume(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_resume)(dev,
+ node_id, error);
+}
+
+/* Commit the initial port traffic manager hierarchy */
+int rte_tm_hierarchy_commit(uint8_t port_id,
+ int clear_on_fail,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, hierarchy_commit)(dev,
+ clear_on_fail, error);
+}
+
+/* Update node parent */
+int rte_tm_node_parent_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t parent_node_id,
+ uint32_t priority,
+ uint32_t weight,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_parent_update)(dev,
+ node_id, parent_node_id, priority, weight, error);
+}
+
+/* Update node private shaper */
+int rte_tm_node_shaper_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_shaper_update)(dev,
+ node_id, shaper_profile_id, error);
+}
+
+/* Update node shared shapers */
+int rte_tm_node_shared_shaper_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t shared_shaper_id,
+ int add,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_shared_shaper_update)(dev,
+ node_id, shared_shaper_id, add, error);
+}
+
+/* Update node stats */
+int rte_tm_node_stats_update(uint8_t port_id,
+ uint32_t node_id,
+ uint64_t stats_mask,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_stats_update)(dev,
+ node_id, stats_mask, error);
+}
+
+/* Update WFQ weight mode */
+int rte_tm_node_wfq_weight_mode_update(uint8_t port_id,
+ uint32_t node_id,
+ int *wfq_weight_mode,
+ uint32_t n_sp_priorities,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_wfq_weight_mode_update)(dev,
+ node_id, wfq_weight_mode, n_sp_priorities, error);
+}
+
+/* Update node congestion management mode */
+int rte_tm_node_cman_update(uint8_t port_id,
+ uint32_t node_id,
+ enum rte_tm_cman_mode cman,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_cman_update)(dev,
+ node_id, cman, error);
+}
+
+/* Update node private WRED context */
+int rte_tm_node_wred_context_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_wred_context_update)(dev,
+ node_id, wred_profile_id, error);
+}
+
+/* Update node shared WRED context */
+int rte_tm_node_shared_wred_context_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t shared_wred_context_id,
+ int add,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_shared_wred_context_update)(dev,
+ node_id, shared_wred_context_id, add, error);
+}
+
+/* Read and/or clear stats counters for specific node */
+int rte_tm_node_stats_read(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_node_stats *stats,
+ uint64_t *stats_mask,
+ int clear,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_stats_read)(dev,
+ node_id, stats, stats_mask, clear, error);
+}
+
+/* Packet marking - VLAN DEI */
+int rte_tm_mark_vlan_dei(uint8_t port_id,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, mark_vlan_dei)(dev,
+ mark_green, mark_yellow, mark_red, error);
+}
+
+/* Packet marking - IPv4/IPv6 ECN */
+int rte_tm_mark_ip_ecn(uint8_t port_id,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, mark_ip_ecn)(dev,
+ mark_green, mark_yellow, mark_red, error);
+}
+
+/* Packet marking - IPv4/IPv6 DSCP */
+int rte_tm_mark_ip_dscp(uint8_t port_id,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, mark_ip_dscp)(dev,
+ mark_green, mark_yellow, mark_red, error);
+}
diff --git a/lib/librte_ether/rte_tm.h b/lib/librte_ether/rte_tm.h
new file mode 100644
index 0000000..c8ef2e1
--- /dev/null
+++ b/lib/librte_ether/rte_tm.h
@@ -0,0 +1,1904 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation.
+ * Copyright(c) 2017 Cavium.
+ * Copyright(c) 2017 NXP.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_TM_H__
+#define __INCLUDE_RTE_TM_H__
+
+/**
+ * @file
+ * RTE Generic Traffic Manager API
+ *
+ * This interface provides the ability to configure the traffic manager in a
+ * generic way. It includes features such as: hierarchical scheduling,
+ * traffic shaping, congestion management, packet marking, etc.
+ */
+
+#include <stdint.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Ethernet framing overhead.
+ *
+ * Overhead fields per Ethernet frame:
+ * 1. Preamble: 7 bytes;
+ * 2. Start of Frame Delimiter (SFD): 1 byte;
+ * 3. Inter-Frame Gap (IFG): 12 bytes.
+ *
+ * One of the typical values for the *pkt_length_adjust* field of the shaper
+ * profile.
+ *
+ * @see struct rte_tm_shaper_params
+ */
+#define RTE_TM_ETH_FRAMING_OVERHEAD 20
+
+/**
+ * Ethernet framing overhead including the Frame Check Sequence (FCS) field.
+ * Useful when FCS is generated and added at the end of the Ethernet frame on
+ * TX side without any SW intervention.
+ *
+ * One of the typical values for the pkt_length_adjust field of the shaper
+ * profile.
+ *
+ * @see struct rte_tm_shaper_params
+ */
+#define RTE_TM_ETH_FRAMING_OVERHEAD_FCS 24
+
+/**
+ * Invalid WRED profile ID.
+ *
+ * @see struct rte_tm_node_params
+ * @see rte_tm_node_add()
+ * @see rte_tm_node_wred_context_update()
+ */
+#define RTE_TM_WRED_PROFILE_ID_NONE UINT32_MAX
+
+/**
+ * Invalid shaper profile ID.
+ *
+ * @see struct rte_tm_node_params
+ * @see rte_tm_node_add()
+ * @see rte_tm_node_shaper_update()
+ */
+#define RTE_TM_SHAPER_PROFILE_ID_NONE UINT32_MAX
+
+/**
+ * Node ID for the parent of the root node.
+ *
+ * @see rte_tm_node_add()
+ */
+#define RTE_TM_NODE_ID_NULL UINT32_MAX
+
+/**
+ * Node level ID used to disable level ID checking.
+ *
+ * @see rte_tm_node_add()
+ */
+#define RTE_TM_NODE_LEVEL_ID_ANY UINT32_MAX
+
+/**
+ * Color
+ */
+enum rte_tm_color {
+ RTE_TM_GREEN = 0, /**< Green */
+ RTE_TM_YELLOW, /**< Yellow */
+ RTE_TM_RED, /**< Red */
+ RTE_TM_COLORS /**< Number of colors */
+};
+
+/**
+ * Node statistics counter type
+ */
+enum rte_tm_stats_type {
+ /** Number of packets scheduled from current node. */
+ RTE_TM_STATS_N_PKTS = 1 << 0,
+
+ /** Number of bytes scheduled from current node. */
+ RTE_TM_STATS_N_BYTES = 1 << 1,
+
+ /** Number of green packets dropped by current leaf node. */
+ RTE_TM_STATS_N_PKTS_GREEN_DROPPED = 1 << 2,
+
+ /** Number of yellow packets dropped by current leaf node. */
+ RTE_TM_STATS_N_PKTS_YELLOW_DROPPED = 1 << 3,
+
+ /** Number of red packets dropped by current leaf node. */
+ RTE_TM_STATS_N_PKTS_RED_DROPPED = 1 << 4,
+
+ /** Number of green bytes dropped by current leaf node. */
+ RTE_TM_STATS_N_BYTES_GREEN_DROPPED = 1 << 5,
+
+ /** Number of yellow bytes dropped by current leaf node. */
+ RTE_TM_STATS_N_BYTES_YELLOW_DROPPED = 1 << 6,
+
+ /** Number of red bytes dropped by current leaf node. */
+ RTE_TM_STATS_N_BYTES_RED_DROPPED = 1 << 7,
+
+ /** Number of packets currently waiting in the packet queue of current
+ * leaf node.
+ */
+ RTE_TM_STATS_N_PKTS_QUEUED = 1 << 8,
+
+ /** Number of bytes currently waiting in the packet queue of current
+ * leaf node.
+ */
+ RTE_TM_STATS_N_BYTES_QUEUED = 1 << 9,
+};
+
+/**
+ * Node statistics counters
+ */
+struct rte_tm_node_stats {
+ /** Number of packets scheduled from current node. */
+ uint64_t n_pkts;
+
+ /** Number of bytes scheduled from current node. */
+ uint64_t n_bytes;
+
+ /** Statistics counters for leaf nodes only. */
+ struct {
+ /** Number of packets dropped by current leaf node per each
+ * color.
+ */
+ uint64_t n_pkts_dropped[RTE_TM_COLORS];
+
+ /** Number of bytes dropped by current leaf node per each
+ * color.
+ */
+ uint64_t n_bytes_dropped[RTE_TM_COLORS];
+
+ /** Number of packets currently waiting in the packet queue of
+ * current leaf node.
+ */
+ uint64_t n_pkts_queued;
+
+ /** Number of bytes currently waiting in the packet queue of
+ * current leaf node.
+ */
+ uint64_t n_bytes_queued;
+ } leaf;
+};
+
+/**
+ * Traffic manager dynamic updates
+ */
+enum rte_tm_dynamic_update_type {
+ /** Dynamic parent node update. The new parent node is located on same
+ * hierarchy level as the former parent node. Consequently, the node
+ * whose parent is changed preserves its hierarchy level.
+ */
+ RTE_TM_UPDATE_NODE_PARENT_KEEP_LEVEL = 1 << 0,
+
+ /** Dynamic parent node update. The new parent node is located on
+ * different hierarchy level than the former parent node. Consequently,
+ * the node whose parent is changed also changes its hierarchy level.
+ */
+ RTE_TM_UPDATE_NODE_PARENT_CHANGE_LEVEL = 1 << 1,
+
+ /** Dynamic node add/delete. */
+ RTE_TM_UPDATE_NODE_ADD_DELETE = 1 << 2,
+
+ /** Suspend/resume nodes. */
+ RTE_TM_UPDATE_NODE_SUSPEND_RESUME = 1 << 3,
+
+ /** Dynamic switch between byte-based and packet-based WFQ weights. */
+ RTE_TM_UPDATE_NODE_WFQ_WEIGHT_MODE = 1 << 4,
+
+ /** Dynamic update on number of SP priorities. */
+ RTE_TM_UPDATE_NODE_N_SP_PRIORITIES = 1 << 5,
+
+ /** Dynamic update of congestion management mode for leaf nodes. */
+ RTE_TM_UPDATE_NODE_CMAN = 1 << 6,
+
+ /** Dynamic update of the set of enabled stats counter types. */
+ RTE_TM_UPDATE_NODE_STATS = 1 << 7,
+};
+
+/**
+ * Traffic manager capabilities
+ */
+struct rte_tm_capabilities {
+ /** Maximum number of nodes. */
+ uint32_t n_nodes_max;
+
+ /** Maximum number of levels (i.e. number of nodes connecting the root
+ * node with any leaf node, including the root and the leaf).
+ */
+ uint32_t n_levels_max;
+
+ /** When non-zero, this flag indicates that all the non-leaf nodes
+ * (with the exception of the root node) have identical capability set.
+ */
+ int non_leaf_nodes_identical;
+
+ /** When non-zero, this flag indicates that all the leaf nodes have
+ * identical capability set.
+ */
+ int leaf_nodes_identical;
+
+ /** Maximum number of shapers, either private or shared. In case the
+ * implementation does not share any resources between private and
+ * shared shapers, it is typically equal to the sum of
+ * *shaper_private_n_max* and *shaper_shared_n_max*. The
+ * value of zero indicates that traffic shaping is not supported.
+ */
+ uint32_t shaper_n_max;
+
+ /** Maximum number of private shapers. Indicates the maximum number of
+ * nodes that can concurrently have their private shaper enabled. The
+ * value of zero indicates that private shapers are not supported.
+ */
+ uint32_t shaper_private_n_max;
+
+ /** Maximum number of private shapers that support dual rate shaping.
+ * Indicates the maximum number of nodes that can concurrently have
+ * their private shaper enabled with dual rate support. Only valid when
+ * private shapers are supported. The value of zero indicates that dual
+ * rate shaping is not available for private shapers. The maximum value
+ * is *shaper_private_n_max*.
+ */
+ int shaper_private_dual_rate_n_max;
+
+ /** Minimum committed/peak rate (bytes per second) for any private
+ * shaper. Valid only when private shapers are supported.
+ */
+ uint64_t shaper_private_rate_min;
+
+ /** Maximum committed/peak rate (bytes per second) for any private
+ * shaper. Valid only when private shapers are supported.
+ */
+ uint64_t shaper_private_rate_max;
+
+ /** Maximum number of shared shapers. The value of zero indicates that
+ * shared shapers are not supported.
+ */
+ uint32_t shaper_shared_n_max;
+
+ /** Maximum number of nodes that can share the same shared shaper.
+ * Only valid when shared shapers are supported.
+ */
+ uint32_t shaper_shared_n_nodes_per_shaper_max;
+
+ /** Maximum number of shared shapers a node can be part of. This
+ * parameter indicates that there is at least one node that can be
+ * configured with this many shared shapers, which might not be true for
+ * all the nodes. Only valid when shared shapers are supported, in which
+ * case it ranges from 1 to *shaper_shared_n_max*.
+ */
+ uint32_t shaper_shared_n_shapers_per_node_max;
+
+ /** Maximum number of shared shapers that can be configured with dual
+ * rate shaping. The value of zero indicates that dual rate shaping
+ * support is not available for shared shapers.
+ */
+ uint32_t shaper_shared_dual_rate_n_max;
+
+ /** Minimum committed/peak rate (bytes per second) for any shared
+ * shaper. Only valid when shared shapers are supported.
+ */
+ uint64_t shaper_shared_rate_min;
+
+ /** Maximum committed/peak rate (bytes per second) for any shared
+ * shaper. Only valid when shared shapers are supported.
+ */
+ uint64_t shaper_shared_rate_max;
+
+ /** Minimum value allowed for packet length adjustment for any private
+ * or shared shaper.
+ */
+ int shaper_pkt_length_adjust_min;
+
+ /** Maximum value allowed for packet length adjustment for any private
+ * or shared shaper.
+ */
+ int shaper_pkt_length_adjust_max;
+
+ /** Maximum number of children nodes. This parameter indicates that
+ * there is at least one non-leaf node that can be configured with this
+ * many children nodes, which might not be true for all the non-leaf
+ * nodes.
+ */
+ uint32_t sched_n_children_max;
+
+ /** Maximum number of supported priority levels. This parameter
+ * indicates that there is at least one non-leaf node that can be
+ * configured with this many priority levels for managing its children
+ * nodes, which might not be true for all the non-leaf nodes. The value
+ * of zero is invalid. The value of 1 indicates that only priority 0 is
+ * supported, which essentially means that Strict Priority (SP)
+ * algorithm is not supported.
+ */
+ uint32_t sched_sp_n_priorities_max;
+
+ /** Maximum number of sibling nodes that can have the same priority at
+ * any given time, i.e. maximum size of the WFQ sibling node group. This
+ * parameter indicates there is at least one non-leaf node that meets
+ * this condition, which might not be true for all the non-leaf nodes.
+ * The value of zero is invalid. The value of 1 indicates that WFQ
+ * algorithm is not supported. The maximum value is
+ * *sched_n_children_max*.
+ */
+ uint32_t sched_wfq_n_children_per_group_max;
+
+ /** Maximum number of priority levels that can have more than one child
+ * node at any given time, i.e. maximum number of WFQ sibling node
+ * groups that have two or more members. This parameter indicates there
+ * is at least one non-leaf node that meets this condition, which might
+ * not be true for all the non-leaf nodes. The value of zero states that
+ * WFQ algorithm is not supported. The value of 1 indicates that
+ * (*sched_sp_n_priorities_max* - 1) priority levels have at most one
+ * child node, so there can be only one priority level with two or
+ * more sibling nodes making up a WFQ group. The maximum value is:
+ * min(floor(*sched_n_children_max* / 2), *sched_sp_n_priorities_max*).
+ */
+ uint32_t sched_wfq_n_groups_max;
+
+ /** Maximum WFQ weight. The value of 1 indicates that all sibling nodes
+ * with same priority have the same WFQ weight, so WFQ is reduced to FQ.
+ */
+ uint32_t sched_wfq_weight_max;
+
+ /** Head drop algorithm support. When non-zero, this parameter
+ * indicates that there is at least one leaf node that supports the head
+ * drop algorithm, which might not be true for all the leaf nodes.
+ */
+ int cman_head_drop_supported;
+
+ /** Maximum number of WRED contexts, either private or shared. In case
+ * the implementation does not share any resources between private and
+ * shared WRED contexts, it is typically equal to the sum of
+ * *cman_wred_context_private_n_max* and
+ * *cman_wred_context_shared_n_max*. The value of zero indicates that
+ * WRED is not supported.
+ */
+ uint32_t cman_wred_context_n_max;
+
+ /** Maximum number of private WRED contexts. Indicates the maximum
+ * number of leaf nodes that can concurrently have their private WRED
+ * context enabled. The value of zero indicates that private WRED
+ * contexts are not supported.
+ */
+ uint32_t cman_wred_context_private_n_max;
+
+ /** Maximum number of shared WRED contexts. The value of zero
+ * indicates that shared WRED contexts are not supported.
+ */
+ uint32_t cman_wred_context_shared_n_max;
+
+ /** Maximum number of leaf nodes that can share the same WRED context.
+ * Only valid when shared WRED contexts are supported.
+ */
+ uint32_t cman_wred_context_shared_n_nodes_per_context_max;
+
+ /** Maximum number of shared WRED contexts a leaf node can be part of.
+ * This parameter indicates that there is at least one leaf node that
+ * can be configured with this many shared WRED contexts, which might
+ * not be true for all the leaf nodes. Only valid when shared WRED
+ * contexts are supported, in which case it ranges from 1 to
+ * *cman_wred_context_shared_n_max*.
+ */
+ uint32_t cman_wred_context_shared_n_contexts_per_node_max;
+
+ /** Support for VLAN DEI packet marking (per color). */
+ int mark_vlan_dei_supported[RTE_TM_COLORS];
+
+ /** Support for IPv4/IPv6 ECN marking of TCP packets (per color). */
+ int mark_ip_ecn_tcp_supported[RTE_TM_COLORS];
+
+ /** Support for IPv4/IPv6 ECN marking of SCTP packets (per color). */
+ int mark_ip_ecn_sctp_supported[RTE_TM_COLORS];
+
+ /** Support for IPv4/IPv6 DSCP packet marking (per color). */
+ int mark_ip_dscp_supported[RTE_TM_COLORS];
+
+ /** Set of supported dynamic update operations.
+ * @see enum rte_tm_dynamic_update_type
+ */
+ uint64_t dynamic_update_mask;
+
+ /** Set of supported statistics counter types.
+ * @see enum rte_tm_stats_type
+ */
+ uint64_t stats_mask;
+};
+
+/**
+ * Traffic manager level capabilities
+ */
+struct rte_tm_level_capabilities {
+ /** Maximum number of nodes for the current hierarchy level. */
+ uint32_t n_nodes_max;
+
+ /** Maximum number of non-leaf nodes for the current hierarchy level.
+ * The value of 0 indicates that current level only supports leaf
+ * nodes. The maximum value is *n_nodes_max*.
+ */
+ uint32_t n_nodes_nonleaf_max;
+
+ /** Maximum number of leaf nodes for the current hierarchy level. The
+ * value of 0 indicates that current level only supports non-leaf
+ * nodes. The maximum value is *n_nodes_max*.
+ */
+ uint32_t n_nodes_leaf_max;
+
+ /** When non-zero, this flag indicates that all the non-leaf nodes on
+ * this level have identical capability set. Valid only when
+ * *n_nodes_nonleaf_max* is non-zero.
+ */
+ int non_leaf_nodes_identical;
+
+ /** When non-zero, this flag indicates that all the leaf nodes on this
+ * level have identical capability set. Valid only when
+ * *n_nodes_leaf_max* is non-zero.
+ */
+ int leaf_nodes_identical;
+
+ union {
+ /** Items valid only for the non-leaf nodes on this level. */
+ struct {
+ /** Private shaper support. When non-zero, it indicates
+ * there is at least one non-leaf node on this level
+ * with private shaper support, which may not be the
+ * case for all the non-leaf nodes on this level.
+ */
+ int shaper_private_supported;
+
+ /** Dual rate support for private shaper. Valid only
+ * when private shaper is supported for the non-leaf
+ * nodes on the current level. When non-zero, it
+ * indicates there is at least one non-leaf node on this
+ * level with dual rate private shaper support, which
+ * may not be the case for all the non-leaf nodes on
+ * this level.
+ */
+ int shaper_private_dual_rate_supported;
+
+ /** Minimum committed/peak rate (bytes per second) for
+ * private shapers of the non-leaf nodes of this level.
+ * Valid only when private shaper is supported on this
+ * level.
+ */
+ uint64_t shaper_private_rate_min;
+
+ /** Maximum committed/peak rate (bytes per second) for
+ * private shapers of the non-leaf nodes on this level.
+ * Valid only when private shaper is supported on this
+ * level.
+ */
+ uint64_t shaper_private_rate_max;
+
+ /** Maximum number of shared shapers that any non-leaf
+ * node on this level can be part of. The value of zero
+ * indicates that shared shapers are not supported by
+ * the non-leaf nodes on this level. When non-zero, it
+ * indicates there is at least one non-leaf node on this
+ * level that meets this condition, which may not be the
+ * case for all the non-leaf nodes on this level.
+ */
+ uint32_t shaper_shared_n_max;
+
+ /** Maximum number of children nodes. This parameter
+ * indicates that there is at least one non-leaf node on
+ * this level that can be configured with this many
+ * children nodes, which might not be true for all the
+ * non-leaf nodes on this level.
+ */
+ uint32_t sched_n_children_max;
+
+ /** Maximum number of supported priority levels. This
+ * parameter indicates that there is at least one
+ * non-leaf node on this level that can be configured
+ * with this many priority levels for managing its
+ * children nodes, which might not be true for all the
+ * non-leaf nodes on this level. The value of zero is
+ * invalid. The value of 1 indicates that only priority
+ * 0 is supported, which essentially means that Strict
+ * Priority (SP) algorithm is not supported on this
+ * level.
+ */
+ uint32_t sched_sp_n_priorities_max;
+
+ /** Maximum number of sibling nodes that can have the
+ * same priority at any given time, i.e. maximum size of
+ * the WFQ sibling node group. This parameter indicates
+ * there is at least one non-leaf node on this level
+ * that meets this condition, which may not be true for
+ * all the non-leaf nodes on this level. The value of
+ * zero is invalid. The value of 1 indicates that WFQ
+ * algorithm is not supported on this level. The maximum
+ * value is *sched_n_children_max*.
+ */
+ uint32_t sched_wfq_n_children_per_group_max;
+
+ /** Maximum number of priority levels that can have
+ * more than one child node at any given time, i.e.
+ * maximum number of WFQ sibling node groups that
+ * have two or more members. This parameter indicates
+ * there is at least one non-leaf node on this level
+ * that meets this condition, which might not be true
+ * for all the non-leaf nodes. The value of zero states
+ * that WFQ algorithm is not supported on this level.
+ * The value of 1 indicates that
+ * (*sched_sp_n_priorities_max* - 1) priority levels on
+ * this level have at most one child node, so there can
+ * be only one priority level with two or more sibling
+ * nodes making up a WFQ group on this level. The
+ * maximum value is:
+ * min(floor(*sched_n_children_max* / 2),
+ * *sched_sp_n_priorities_max*).
+ */
+ uint32_t sched_wfq_n_groups_max;
+
+ /** Maximum WFQ weight. The value of 1 indicates that
+ * all sibling nodes on this level with same priority
+ * have the same WFQ weight, so on this level WFQ is
+ * reduced to FQ.
+ */
+ uint32_t sched_wfq_weight_max;
+
+ /** Mask of statistics counter types supported by the
+ * non-leaf nodes on this level. Every supported
+ * statistics counter type is supported by at least one
+ * non-leaf node on this level, which may not be true
+ * for all the non-leaf nodes on this level.
+ * @see enum rte_tm_stats_type
+ */
+ uint64_t stats_mask;
+ } nonleaf;
+
+ /** Items valid only for the leaf nodes on this level. */
+ struct {
+ /** Private shaper support. When non-zero, it indicates
+ * there is at least one leaf node on this level with
+ * private shaper support, which may not be the case for
+ * all the leaf nodes on this level.
+ */
+ int shaper_private_supported;
+
+ /** Dual rate support for private shaper. Valid only
+ * when private shaper is supported for the leaf nodes
+ * on this level. When non-zero, it indicates there is
+ * at least one leaf node on this level with dual rate
+ * private shaper support, which may not be the case for
+ * all the leaf nodes on this level.
+ */
+ int shaper_private_dual_rate_supported;
+
+ /** Minimum committed/peak rate (bytes per second) for
+ * private shapers of the leaf nodes of this level.
+ * Valid only when private shaper is supported for the
+ * leaf nodes on this level.
+ */
+ uint64_t shaper_private_rate_min;
+
+ /** Maximum committed/peak rate (bytes per second) for
+ * private shapers of the leaf nodes on this level.
+ * Valid only when private shaper is supported for the
+ * leaf nodes on this level.
+ */
+ uint64_t shaper_private_rate_max;
+
+ /** Maximum number of shared shapers that any leaf node
+ * on this level can be part of. The value of zero
+ * indicates that shared shapers are not supported by
+ * the leaf nodes on this level. When non-zero, it
+ * indicates there is at least one leaf node on this
+ * level that meets this condition, which may not be the
+ * case for all the leaf nodes on this level.
+ */
+ uint32_t shaper_shared_n_max;
+
+ /** Head drop algorithm support. When non-zero, this
+ * parameter indicates that there is at least one leaf
+ * node on this level that supports the head drop
+ * algorithm, which might not be true for all the leaf
+ * nodes on this level.
+ */
+ int cman_head_drop_supported;
+
+ /** Private WRED context support. When non-zero, it
+ * indicates there is at least one node on this level
+ * with private WRED context support, which may not be
+ * true for all the leaf nodes on this level.
+ */
+ int cman_wred_context_private_supported;
+
+ /** Maximum number of shared WRED contexts that any
+ * leaf node on this level can be part of. The value of
+ * zero indicates that shared WRED contexts are not
+ * supported by the leaf nodes on this level. When
+ * non-zero, it indicates there is at least one leaf
+ * node on this level that meets this condition, which
+ * may not be the case for all the leaf nodes on this
+ * level.
+ */
+ uint32_t cman_wred_context_shared_n_max;
+
+ /** Mask of statistics counter types supported by the
+ * leaf nodes on this level. Every supported statistics
+ * counter type is supported by at least one leaf node
+ * on this level, which may not be true for all the leaf
+ * nodes on this level.
+ * @see enum rte_tm_stats_type
+ */
+ uint64_t stats_mask;
+ } leaf;
+ };
+};
+
+/**
+ * Traffic manager node capabilities
+ */
+struct rte_tm_node_capabilities {
+ /** Private shaper support for the current node. */
+ int shaper_private_supported;
+
+ /** Dual rate shaping support for private shaper of current node.
+ * Valid only when private shaper is supported by the current node.
+ */
+ int shaper_private_dual_rate_supported;
+
+ /** Minimum committed/peak rate (bytes per second) for private
+ * shaper of current node. Valid only when private shaper is supported
+ * by the current node.
+ */
+ uint64_t shaper_private_rate_min;
+
+ /** Maximum committed/peak rate (bytes per second) for private
+ * shaper of current node. Valid only when private shaper is supported
+ * by the current node.
+ */
+ uint64_t shaper_private_rate_max;
+
+ /** Maximum number of shared shapers the current node can be part of.
+ * The value of zero indicates that shared shapers are not supported by
+ * the current node.
+ */
+ uint32_t shaper_shared_n_max;
+
+ union {
+ /** Items valid only for non-leaf nodes. */
+ struct {
+ /** Maximum number of children nodes. */
+ uint32_t sched_n_children_max;
+
+ /** Maximum number of supported priority levels. The
+ * value of zero is invalid. The value of 1 indicates
+ * that only priority 0 is supported, which essentially
+ * means that Strict Priority (SP) algorithm is not
+ * supported.
+ */
+ uint32_t sched_sp_n_priorities_max;
+
+ /** Maximum number of sibling nodes that can have the
+ * same priority at any given time, i.e. maximum size
+ * of the WFQ sibling node group. The value of zero
+ * is invalid. The value of 1 indicates that WFQ
+ * algorithm is not supported. The maximum value is
+ * *sched_n_children_max*.
+ */
+ uint32_t sched_wfq_n_children_per_group_max;
+
+ /** Maximum number of priority levels that can have
+ * more than one child node at any given time, i.e.
+ * maximum number of WFQ sibling node groups that have
+ * two or more members. The value of zero states that
+ * WFQ algorithm is not supported. The value of 1
+ * indicates that (*sched_sp_n_priorities_max* - 1)
+ * priority levels have at most one child node, so there
+ * can be only one priority level with two or more
+ * sibling nodes making up a WFQ group. The maximum
+ * value is: min(floor(*sched_n_children_max* / 2),
+ * *sched_sp_n_priorities_max*).
+ */
+ uint32_t sched_wfq_n_groups_max;
+
+ /** Maximum WFQ weight. The value of 1 indicates that
+ * all sibling nodes with same priority have the same
+ * WFQ weight, so WFQ is reduced to FQ.
+ */
+ uint32_t sched_wfq_weight_max;
+ } nonleaf;
+
+ /** Items valid only for leaf nodes. */
+ struct {
+ /** Head drop algorithm support for current node. */
+ int cman_head_drop_supported;
+
+ /** Private WRED context support for current node. */
+ int cman_wred_context_private_supported;
+
+ /** Maximum number of shared WRED contexts the current
+ * node can be part of. The value of zero indicates that
+ * shared WRED contexts are not supported by the current
+ * node.
+ */
+ uint32_t cman_wred_context_shared_n_max;
+ } leaf;
+ };
+
+ /** Mask of statistics counter types supported by the current node.
+ * @see enum rte_tm_stats_type
+ */
+ uint64_t stats_mask;
+};
+
+/**
+ * Congestion management (CMAN) mode
+ *
+ * This is used for controlling the admission of packets into a packet queue or
+ * group of packet queues on congestion. When a new packet is written to a
+ * queue that is already full, the *tail drop* algorithm
+ * drops the new packet while leaving the queue unmodified, as opposed to *head
+ * drop* algorithm, which drops the packet at the head of the queue (the oldest
+ * packet waiting in the queue) and admits the new packet at the tail of the
+ * queue.
+ *
+ * The *Random Early Detection (RED)* algorithm works by proactively dropping
+ * more and more input packets as the queue occupancy builds up. When the queue
+ * is full or almost full, RED effectively works as *tail drop*. The *Weighted
+ * RED* algorithm uses a separate set of RED thresholds for each packet color.
+ */
+enum rte_tm_cman_mode {
+ RTE_TM_CMAN_TAIL_DROP = 0, /**< Tail drop */
+ RTE_TM_CMAN_HEAD_DROP, /**< Head drop */
+ RTE_TM_CMAN_WRED, /**< Weighted Random Early Detection (WRED) */
+};
+
+/**
+ * Random Early Detection (RED) profile
+ */
+struct rte_tm_red_params {
+ /** Minimum queue threshold */
+ uint16_t min_th;
+
+ /** Maximum queue threshold */
+ uint16_t max_th;
+
+ /** Inverse of packet marking probability maximum value (maxp), i.e.
+ * maxp_inv = 1 / maxp
+ */
+ uint16_t maxp_inv;
+
+ /** Negated log2 of queue weight (wq), i.e. wq = 1 / (2 ^ wq_log2) */
+ uint16_t wq_log2;
+};
+
+/**
+ * Weighted RED (WRED) profile
+ *
+ * Multiple WRED contexts can share the same WRED profile. Each leaf node with
+ * WRED enabled as its congestion management mode has zero or one private WRED
+ * context (only one leaf node using it) and/or zero, one or several shared
+ * WRED contexts (multiple leaf nodes use the same WRED context). A private
+ * WRED context is used to perform congestion management for a single leaf
+ * node, while a shared WRED context is used to perform congestion management
+ * for a group of leaf nodes.
+ */
+struct rte_tm_wred_params {
+ /** One set of RED parameters per packet color */
+ struct rte_tm_red_params red_params[RTE_TM_COLORS];
+};
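+
+/*
+ * Illustrative sketch (not part of this patch): filling in a WRED profile
+ * with per-color RED thresholds, indexed by the *enum rte_tm_color* values
+ * defined earlier in this file. All numeric values are hypothetical and
+ * only show the intent of each field.
+ *
+ *    struct rte_tm_wred_params wred = {
+ *        .red_params = {
+ *            // Green packets: drop late and gently.
+ *            [RTE_TM_GREEN] = {.min_th = 48, .max_th = 64,
+ *                .maxp_inv = 10, .wq_log2 = 9},
+ *            // Yellow packets: start dropping earlier.
+ *            [RTE_TM_YELLOW] = {.min_th = 32, .max_th = 64,
+ *                .maxp_inv = 10, .wq_log2 = 9},
+ *            // Red packets: drop most aggressively.
+ *            [RTE_TM_RED] = {.min_th = 16, .max_th = 64,
+ *                .maxp_inv = 10, .wq_log2 = 9},
+ *        },
+ *    };
+ */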
+
+/**
+ * Token bucket
+ */
+struct rte_tm_token_bucket {
+ /** Token bucket rate (bytes per second) */
+ uint64_t rate;
+
+ /** Token bucket size (bytes), a.k.a. max burst size */
+ uint64_t size;
+};
+
+/**
+ * Shaper (rate limiter) profile
+ *
+ * Multiple shaper instances can share the same shaper profile. Each node has
+ * zero or one private shaper (only one node using it) and/or zero, one or
+ * several shared shapers (multiple nodes use the same shaper instance).
+ * A private shaper is used to perform traffic shaping for a single node, while
+ * a shared shaper is used to perform traffic shaping for a group of nodes.
+ *
+ * Single rate shapers use a single token bucket. A single rate shaper can be
+ * configured by setting the rate of the committed bucket to zero, which
+ * effectively disables this bucket. The peak bucket is used to limit the rate
+ * and the burst size for the current shaper.
+ *
+ * Dual rate shapers use both the committed and the peak token buckets. The
+ * rate of the peak bucket has to be bigger than zero, as well as greater than
+ * or equal to the rate of the committed bucket.
+ */
+struct rte_tm_shaper_params {
+ /** Committed token bucket */
+ struct rte_tm_token_bucket committed;
+
+ /** Peak token bucket */
+ struct rte_tm_token_bucket peak;
+
+ /** Signed value to be added to the length of each packet for the
+ * purpose of shaping. Can be used to correct the packet length with
+ * the framing overhead bytes that are also consumed on the wire (e.g.
+ * RTE_TM_ETH_FRAMING_OVERHEAD_FCS).
+ */
+ int32_t pkt_length_adjust;
+};
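+
+/*
+ * Illustrative sketch (not part of this patch): a dual rate shaper profile
+ * with a 1 MByte/s committed rate and a 2 MByte/s peak rate. The rates and
+ * burst sizes are hypothetical example values.
+ *
+ *    struct rte_tm_shaper_params shaper = {
+ *        .committed = {.rate = 1000000, .size = 4096},
+ *        .peak = {.rate = 2000000, .size = 4096},
+ *        // Account for the Ethernet FCS bytes consumed on the wire.
+ *        .pkt_length_adjust = RTE_TM_ETH_FRAMING_OVERHEAD_FCS,
+ *    };
+ *
+ * For a single rate shaper, the committed bucket would instead be disabled
+ * by setting its rate to zero, leaving only the peak bucket in use.
+ */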
+
+/**
+ * Node parameters
+ *
+ * Each non-leaf node has multiple inputs (its children nodes) and single output
+ * (which is input to its parent node). It arbitrates its inputs using Strict
+ * Priority (SP) and Weighted Fair Queuing (WFQ) algorithms to schedule input
+ * packets to its output while observing its shaping (rate limiting)
+ * constraints.
+ *
+ * Algorithms such as Weighted Round Robin (WRR), Byte-level WRR, Deficit WRR
+ * (DWRR), etc. are considered approximations of the WFQ ideal and are
+ * assimilated to WFQ, although an associated implementation-dependent trade-off
+ * on accuracy, performance and resource usage might exist.
+ *
+ * Children nodes with different priorities are scheduled using the SP algorithm
+ * based on their priority, with zero (0) as the highest priority. Children with
+ * the same priority are scheduled using the WFQ algorithm according to their
+ * weights. The WFQ weight of a given child node is relative to the sum of the
+ * weights of all its sibling nodes that have the same priority, with one (1) as
+ * the lowest weight. For each SP priority, the WFQ weight mode can be set as
+ * either byte-based or packet-based.
+ *
+ * Each leaf node sits on top of a TX queue of the current Ethernet port. Hence,
+ * the leaf nodes are predefined, with their node IDs set to 0 .. (N-1), where N
+ * is the number of TX queues configured for the current Ethernet port. The
+ * non-leaf nodes have their IDs generated by the application.
+ */
+struct rte_tm_node_params {
+ /** Shaper profile for the private shaper. The absence of the private
+ * shaper for the current node is indicated by setting this parameter
+ * to RTE_TM_SHAPER_PROFILE_ID_NONE.
+ */
+ uint32_t shaper_profile_id;
+
+ /** User allocated array of valid shared shaper IDs. */
+ uint32_t *shared_shaper_id;
+
+ /** Number of shared shaper IDs in the *shared_shaper_id* array. */
+ uint32_t n_shared_shapers;
+
+ union {
+ /** Parameters only valid for non-leaf nodes. */
+ struct {
+ /** WFQ weight mode for each SP priority. When NULL, it
+ * indicates that WFQ is to be used for all priorities.
+ * When non-NULL, it points to a pre-allocated array of
+ * *n_sp_priorities* values, with non-zero value for
+ * byte-mode and zero for packet-mode.
+ */
+ int *wfq_weight_mode;
+
+ /** Number of SP priorities. */
+ uint32_t n_sp_priorities;
+ } nonleaf;
+
+ /** Parameters only valid for leaf nodes. */
+ struct {
+ /** Congestion management mode */
+ enum rte_tm_cman_mode cman;
+
+ /** WRED parameters (only valid when *cman* is set to
+ * WRED).
+ */
+ struct {
+ /** WRED profile for private WRED context. The
+ * absence of a private WRED context for the
+ * current leaf node is indicated by value
+ * RTE_TM_WRED_PROFILE_ID_NONE.
+ */
+ uint32_t wred_profile_id;
+
+ /** User allocated array of shared WRED context
+ * IDs. When set to NULL, it indicates that the
+ * current leaf node is not to be part of any
+ * shared WRED contexts.
+ */
+ uint32_t *shared_wred_context_id;
+
+ /** Number of elements in the
+ * *shared_wred_context_id* array. Only valid
+ * when *shared_wred_context_id* is non-NULL,
+ * in which case it should be non-zero.
+ */
+ uint32_t n_shared_wred_contexts;
+ } wred;
+ } leaf;
+ };
+
+ /** Mask of statistics counter types to be enabled for this node. This
+ * needs to be a subset of the statistics counter types available for
+ * the current node. Any statistics counter type not included in this
+ * set is to be disabled for the current node.
+ * @see enum rte_tm_stats_type
+ */
+ uint64_t stats_mask;
+};
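+
+/*
+ * Illustrative sketch (not part of this patch): node parameters for a leaf
+ * node using tail drop and no private shaper. The designated initializer of
+ * the anonymous union member assumes a C11 compiler; all values are
+ * hypothetical.
+ *
+ *    struct rte_tm_node_params leaf_params = {
+ *        .shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE,
+ *        .shared_shaper_id = NULL,
+ *        .n_shared_shapers = 0,
+ *        .leaf = {
+ *            .cman = RTE_TM_CMAN_TAIL_DROP,
+ *        },
+ *        .stats_mask = 0, // no counters enabled for this node
+ *    };
+ */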
+
+/**
+ * Verbose error types.
+ *
+ * Most of them provide the type of the object referenced by struct
+ * rte_tm_error::cause.
+ */
+enum rte_tm_error_type {
+ RTE_TM_ERROR_TYPE_NONE, /**< No error. */
+ RTE_TM_ERROR_TYPE_UNSPECIFIED, /**< Cause unspecified. */
+ RTE_TM_ERROR_TYPE_CAPABILITIES,
+ RTE_TM_ERROR_TYPE_LEVEL_ID,
+ RTE_TM_ERROR_TYPE_WRED_PROFILE,
+ RTE_TM_ERROR_TYPE_WRED_PROFILE_GREEN,
+ RTE_TM_ERROR_TYPE_WRED_PROFILE_YELLOW,
+ RTE_TM_ERROR_TYPE_WRED_PROFILE_RED,
+ RTE_TM_ERROR_TYPE_WRED_PROFILE_ID,
+ RTE_TM_ERROR_TYPE_SHARED_WRED_CONTEXT_ID,
+ RTE_TM_ERROR_TYPE_SHAPER_PROFILE,
+ RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_RATE,
+ RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE,
+ RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_RATE,
+ RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE,
+ RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN,
+ RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID,
+ RTE_TM_ERROR_TYPE_SHARED_SHAPER_ID,
+ RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID,
+ RTE_TM_ERROR_TYPE_NODE_PRIORITY,
+ RTE_TM_ERROR_TYPE_NODE_WEIGHT,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WRED_PROFILE_ID,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_WRED_CONTEXT_ID,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_WRED_CONTEXTS,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_STATS,
+ RTE_TM_ERROR_TYPE_NODE_ID,
+};
+
+/**
+ * Verbose error structure definition.
+ *
+ * This object is normally allocated by applications and set by PMDs. The
+ * message points to a constant string which does not need to be freed by the
+ * application; however, its pointer can be considered valid only as long
+ * as its associated DPDK port remains configured. Closing the underlying
+ * device or unloading the PMD invalidates it.
+ *
+ * Both cause and message may be NULL regardless of the error type.
+ */
+struct rte_tm_error {
+ enum rte_tm_error_type type; /**< Cause field and error type. */
+ const void *cause; /**< Object responsible for the error. */
+ const char *message; /**< Human-readable error message. */
+};
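+
+/*
+ * Illustrative sketch (not part of this patch): the typical error handling
+ * pattern. The application allocates the error object and the PMD fills it
+ * in on failure; *port_id* and *node_id* are assumed to be valid here.
+ *
+ *    struct rte_tm_error error;
+ *    int ret = rte_tm_node_delete(port_id, node_id, &error);
+ *
+ *    if (ret != 0)
+ *        printf("TM error %d (type %d): %s\n", ret, error.type,
+ *            error.message != NULL ? error.message : "no message");
+ */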
+
+/**
+ * Traffic manager get number of leaf nodes
+ *
+ * Each leaf node sits on top of a TX queue of the current Ethernet port.
+ * Therefore, the set of leaf nodes is predefined, their number is always equal
+ * to N (where N is the number of TX queues configured for the current port)
+ * and their IDs are 0 .. (N-1).
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[out] n_leaf_nodes
+ * Number of leaf nodes for the current port.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_get_number_of_leaf_nodes(uint8_t port_id,
+ uint32_t *n_leaf_nodes,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node ID validate and type (i.e. leaf or non-leaf) get
+ *
+ * The leaf nodes have predefined IDs in the range of 0 .. (N-1), where N is
+ * the number of TX queues of the current Ethernet port. The non-leaf nodes
+ * have their IDs generated by the application outside of the above range,
+ * which is reserved for leaf nodes.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID value. Needs to be valid.
+ * @param[out] is_leaf
+ * Set to non-zero value when node is leaf and to zero otherwise (non-leaf).
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_node_type_get(uint8_t port_id,
+ uint32_t node_id,
+ int *is_leaf,
+ struct rte_tm_error *error);
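+
+/*
+ * Illustrative sketch (not part of this patch): discovering the predefined
+ * leaf nodes of a port and checking the type of a given node ID. Error
+ * handling is omitted for brevity.
+ *
+ *    uint32_t n_leaf_nodes = 0;
+ *    int is_leaf = 0;
+ *    struct rte_tm_error error;
+ *
+ *    // Leaf node IDs are 0 .. (n_leaf_nodes - 1).
+ *    rte_tm_get_number_of_leaf_nodes(port_id, &n_leaf_nodes, &error);
+ *
+ *    // Node ID 0 is expected to be reported as a leaf.
+ *    rte_tm_node_type_get(port_id, 0, &is_leaf, &error);
+ */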
+
+/**
+ * Traffic manager capabilities get
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[out] cap
+ * Traffic manager capabilities. Needs to be pre-allocated and valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_capabilities_get(uint8_t port_id,
+ struct rte_tm_capabilities *cap,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager level capabilities get
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] level_id
+ * The hierarchy level identifier. The value of 0 identifies the level of the
+ * root node.
+ * @param[out] cap
+ * Traffic manager level capabilities. Needs to be pre-allocated and valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_level_capabilities_get(uint8_t port_id,
+ uint32_t level_id,
+ struct rte_tm_level_capabilities *cap,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node capabilities get
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[out] cap
+ * Traffic manager node capabilities. Needs to be pre-allocated and valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_node_capabilities_get(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_node_capabilities *cap,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager WRED profile add
+ *
+ * Create a new WRED profile with ID set to *wred_profile_id*. The new profile
+ * is used to create one or several WRED contexts.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] wred_profile_id
+ * WRED profile ID for the new profile. Needs to be unused.
+ * @param[in] profile
+ * WRED profile parameters. Needs to be pre-allocated and valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities::cman_wred_context_n_max
+ */
+int
+rte_tm_wred_profile_add(uint8_t port_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_wred_params *profile,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager WRED profile delete
+ *
+ * Delete an existing WRED profile. This operation fails when there is
+ * currently at least one user (i.e. WRED context) of this WRED profile.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] wred_profile_id
+ * WRED profile ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities::cman_wred_context_n_max
+ */
+int
+rte_tm_wred_profile_delete(uint8_t port_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager shared WRED context add or update
+ *
+ * When *shared_wred_context_id* is not a valid shared WRED context ID, a new
+ * WRED context with this ID is created using the WRED profile identified by
+ * *wred_profile_id*.
+ *
+ * When *shared_wred_context_id* is valid, this WRED context is no longer using
+ * the profile previously assigned to it and is updated to use the profile
+ * identified by *wred_profile_id*.
+ *
+ * A valid shared WRED context can be assigned to several hierarchy leaf nodes
+ * configured to use WRED as the congestion management mode.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] shared_wred_context_id
+ * Shared WRED context ID
+ * @param[in] wred_profile_id
+ * WRED profile ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities::cman_wred_context_shared_n_max
+ */
+int
+rte_tm_shared_wred_context_add_update(uint8_t port_id,
+ uint32_t shared_wred_context_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager shared WRED context delete
+ *
+ * Delete an existing shared WRED context. This operation fails when there is
+ * currently at least one user (i.e. hierarchy leaf node) of this shared WRED
+ * context.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] shared_wred_context_id
+ * Shared WRED context ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities::cman_wred_context_shared_n_max
+ */
+int
+rte_tm_shared_wred_context_delete(uint8_t port_id,
+ uint32_t shared_wred_context_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager shaper profile add
+ *
+ * Create a new shaper profile with ID set to *shaper_profile_id*. The new
+ * shaper profile is used to create one or several shapers.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] shaper_profile_id
+ * Shaper profile ID for the new profile. Needs to be unused.
+ * @param[in] profile
+ * Shaper profile parameters. Needs to be pre-allocated and valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities::shaper_n_max
+ */
+int
+rte_tm_shaper_profile_add(uint8_t port_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager shaper profile delete
+ *
+ * Delete an existing shaper profile. This operation fails when there is
+ * currently at least one user (i.e. shaper) of this shaper profile.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] shaper_profile_id
+ * Shaper profile ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities::shaper_n_max
+ */
+int
+rte_tm_shaper_profile_delete(uint8_t port_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager shared shaper add or update
+ *
+ * When *shared_shaper_id* is not a valid shared shaper ID, a new shared shaper
+ * with this ID is created using the shaper profile identified by
+ * *shaper_profile_id*.
+ *
+ * When *shared_shaper_id* is a valid shared shaper ID, this shared shaper is
+ * no longer using the shaper profile previously assigned to it and is updated
+ * to use the shaper profile identified by *shaper_profile_id*.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] shared_shaper_id
+ * Shared shaper ID
+ * @param[in] shaper_profile_id
+ * Shaper profile ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities::shaper_shared_n_max
+ */
+int
+rte_tm_shared_shaper_add_update(uint8_t port_id,
+ uint32_t shared_shaper_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager shared shaper delete
+ *
+ * Delete an existing shared shaper. This operation fails when there is
+ * currently at least one user (i.e. hierarchy node) of this shared shaper.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] shared_shaper_id
+ * Shared shaper ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities::shaper_shared_n_max
+ */
+int
+rte_tm_shared_shaper_delete(uint8_t port_id,
+ uint32_t shared_shaper_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node add
+ *
+ * Create new node and connect it as child of an existing node. The new node is
+ * further identified by *node_id*, which needs to be unused by any of the
+ * existing nodes. The parent node is identified by *parent_node_id*, which
+ * needs to be the valid ID of an existing non-leaf node. The parent node is
+ * going to use the provided SP *priority* and WFQ *weight* to schedule its new
+ * child node.
+ *
+ * This function has to be called for both leaf and non-leaf nodes. In the case
+ * of leaf nodes (i.e. *node_id* is within the range of 0 .. (N-1), with N as
+ * the number of configured TX queues of the current port), the leaf node is
+ * configured rather than created (as the set of leaf nodes is predefined) and
+ * it is also connected as child of an existing node.
+ *
+ * The first node that is added becomes the root node and all the nodes that
+ * are subsequently added have to be added as descendants of the root node. The
+ * parent of the root node has to be specified as RTE_TM_NODE_ID_NULL and there
+ * can only be one node with this parent ID (i.e. the root node). Further
+ * restrictions for root node: needs to be non-leaf, its private shaper profile
+ * needs to be valid and single rate, cannot use any shared shapers.
+ *
+ * When called before rte_tm_hierarchy_commit() invocation, this function is
+ * typically used to define the initial start-up hierarchy for the port.
+ * Provided that dynamic hierarchy updates are supported by the current port (as
+ * advertised in the port capability set), this function can be also called
+ * after the rte_tm_hierarchy_commit() invocation.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be unused by any of the existing nodes.
+ * @param[in] parent_node_id
+ * Parent node ID. Needs to be valid.
+ * @param[in] priority
+ * Node priority. The highest node priority is zero. Used by the SP algorithm
+ * running on the parent of the current node for scheduling this child node.
+ * @param[in] weight
+ * Node weight. The node weight is relative to the weight sum of all siblings
+ * that have the same priority. The lowest weight is one. Used by the WFQ
+ * algorithm running on the parent of the current node for scheduling this
+ * child node.
+ * @param[in] level_id
+ * Level ID that should be met by this node. The hierarchy level of the
+ * current node is already fully specified through its parent node (i.e. the
+ * level of this node is equal to the level of its parent node plus one),
+ * therefore the reason for providing this parameter is to enable the
+ * application to perform step-by-step checking of the node level during
+ * successive invocations of this function. When not desired, this check can
+ * be disabled by assigning value RTE_TM_NODE_LEVEL_ID_ANY to this parameter.
+ * @param[in] params
+ * Node parameters. Needs to be pre-allocated and valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see rte_tm_hierarchy_commit()
+ * @see RTE_TM_UPDATE_NODE_ADD_DELETE
+ * @see RTE_TM_NODE_LEVEL_ID_ANY
+ * @see struct rte_tm_capabilities
+ */
+int
+rte_tm_node_add(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t parent_node_id,
+ uint32_t priority,
+ uint32_t weight,
+ uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error);
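+
+/*
+ * Illustrative sketch (not part of this patch): building a minimal two-level
+ * start-up hierarchy with one root node and one leaf node (TX queue 0). The
+ * node ID and *root_shaper_profile_id* are hypothetical; the root shaper
+ * profile is assumed to have been added as single rate, as required above.
+ * Error handling is omitted for brevity.
+ *
+ *    #define ROOT_NODE_ID 1000000 // outside the leaf node ID range
+ *
+ *    struct rte_tm_node_params root_params = {
+ *        .shaper_profile_id = root_shaper_profile_id,
+ *        .nonleaf = {.wfq_weight_mode = NULL, .n_sp_priorities = 1},
+ *    };
+ *    struct rte_tm_node_params leaf_params = {
+ *        .shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE,
+ *        .leaf = {.cman = RTE_TM_CMAN_TAIL_DROP},
+ *    };
+ *
+ *    // The root node has no parent, hence RTE_TM_NODE_ID_NULL.
+ *    rte_tm_node_add(port_id, ROOT_NODE_ID, RTE_TM_NODE_ID_NULL,
+ *        0, 1, RTE_TM_NODE_LEVEL_ID_ANY, &root_params, &error);
+ *
+ *    // Leaf node 0 (TX queue 0): priority 0 (highest), weight 1 (lowest).
+ *    rte_tm_node_add(port_id, 0, ROOT_NODE_ID,
+ *        0, 1, RTE_TM_NODE_LEVEL_ID_ANY, &leaf_params, &error);
+ */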
+
+/**
+ * Traffic manager node delete
+ *
+ * Delete an existing node. This operation fails when this node currently has
+ * at least one user (i.e. child node).
+ *
+ * When called before rte_tm_hierarchy_commit() invocation, this function is
+ * typically used to define the initial start-up hierarchy for the port.
+ * Provided that dynamic hierarchy updates are supported by the current port (as
+ * advertised in the port capability set), this function can be also called
+ * after the rte_tm_hierarchy_commit() invocation.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see RTE_TM_UPDATE_NODE_ADD_DELETE
+ */
+int
+rte_tm_node_delete(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node suspend
+ *
+ * Suspend an existing node. While the node is in suspended state, no packet is
+ * scheduled from this node and its descendants. The node exits the suspended
+ * state through the node resume operation.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see rte_tm_node_resume()
+ * @see RTE_TM_UPDATE_NODE_SUSPEND_RESUME
+ */
+int
+rte_tm_node_suspend(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node resume
+ *
+ * Resume an existing node that is currently in suspended state. The node
+ * entered the suspended state as result of a previous node suspend operation.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see rte_tm_node_suspend()
+ * @see RTE_TM_UPDATE_NODE_SUSPEND_RESUME
+ */
+int
+rte_tm_node_resume(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager hierarchy commit
+ *
+ * This function is called during the port initialization phase (before the
+ * Ethernet port is started) to freeze the start-up hierarchy.
+ *
+ * This function typically performs the following steps:
+ * a) It validates the start-up hierarchy that was previously defined for the
+ * current port through successive rte_tm_node_add() invocations;
+ * b) Assuming successful validation, it performs all the necessary port
+ * specific configuration operations to install the specified hierarchy on
+ * the current port, with immediate effect once the port is started.
+ *
+ * This function fails when the currently configured hierarchy is not supported
+ * by the Ethernet port, in which case the user can abort or try out another
+ * hierarchy configuration (e.g. a hierarchy with fewer leaf nodes), which can
+ * be built from scratch (when *clear_on_fail* is enabled) or by modifying the
+ * existing hierarchy configuration (when *clear_on_fail* is disabled).
+ *
+ * Note that this function can still fail due to other causes (e.g. not enough
+ * memory available in the system, etc), even though the specified hierarchy is
+ * supported in principle by the current port.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] clear_on_fail
+ * On function call failure, hierarchy is cleared when this parameter is
+ * non-zero and preserved when this parameter is equal to zero.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see rte_tm_node_add()
+ * @see rte_tm_node_delete()
+ */
+int
+rte_tm_hierarchy_commit(uint8_t port_id,
+ int clear_on_fail,
+ struct rte_tm_error *error);
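+
+/*
+ * Illustrative sketch (not part of this patch): freezing the start-up
+ * hierarchy before starting the port. With *clear_on_fail* set to 1, a
+ * rejected hierarchy is cleared so a smaller one can be rebuilt from
+ * scratch.
+ *
+ *    if (rte_tm_hierarchy_commit(port_id, 1, &error) != 0) {
+ *        // Hierarchy was cleared; rebuild a smaller one and retry.
+ *    }
+ */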
+
+/**
+ * Traffic manager node parent update
+ *
+ * Restriction for root node: its parent cannot be changed.
+ *
+ * This function can only be called after the rte_tm_hierarchy_commit()
+ * invocation. Its success depends on the port support for this operation, as
+ * advertised through the port capability set.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[in] parent_node_id
+ * Node ID for the new parent. Needs to be valid.
+ * @param[in] priority
+ * Node priority. The highest node priority is zero. Used by the SP algorithm
+ * running on the parent of the current node for scheduling this child node.
+ * @param[in] weight
+ * Node weight. The node weight is relative to the weight sum of all siblings
+ * that have the same priority. The lowest weight is one. Used by the WFQ
+ * algorithm running on the parent of the current node for scheduling this
+ * child node.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see RTE_TM_UPDATE_NODE_PARENT_KEEP_LEVEL
+ * @see RTE_TM_UPDATE_NODE_PARENT_CHANGE_LEVEL
+ */
+int
+rte_tm_node_parent_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t parent_node_id,
+ uint32_t priority,
+ uint32_t weight,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node private shaper update
+ *
+ * Restriction for the root node: its private shaper profile needs to be valid
+ * and single rate.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[in] shaper_profile_id
+ * Shaper profile ID for the private shaper of the current node. Needs to be
+ * either valid shaper profile ID or RTE_TM_SHAPER_PROFILE_ID_NONE, with
+ * the latter disabling the private shaper of the current node.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities::shaper_private_n_max
+ */
+int
+rte_tm_node_shaper_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node shared shapers update
+ *
+ * Restriction for root node: cannot use any shared shapers.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[in] shared_shaper_id
+ * Shared shaper ID. Needs to be valid.
+ * @param[in] add
+ * Set to non-zero value to add this shared shaper to current node or to zero
+ * to delete this shared shaper from current node.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities::shaper_shared_n_max
+ */
+int
+rte_tm_node_shared_shaper_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t shared_shaper_id,
+ int add,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node enabled statistics counters update
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[in] stats_mask
+ * Mask of statistics counter types to be enabled for the current node. This
+ * needs to be a subset of the statistics counter types available for the
+ * current node. Any statistics counter type not included in this set is to
+ * be disabled for the current node.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see enum rte_tm_stats_type
+ * @see RTE_TM_UPDATE_NODE_STATS
+ */
+int
+rte_tm_node_stats_update(uint8_t port_id,
+ uint32_t node_id,
+ uint64_t stats_mask,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node WFQ weight mode update
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid non-leaf node ID.
+ * @param[in] wfq_weight_mode
+ * WFQ weight mode for each SP priority. When NULL, it indicates that WFQ is
+ * to be used for all priorities. When non-NULL, it points to a pre-allocated
+ * array of *n_sp_priorities* values, with non-zero value for byte-mode and
+ * zero for packet-mode.
+ * @param[in] n_sp_priorities
+ * Number of SP priorities.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see RTE_TM_UPDATE_NODE_WFQ_WEIGHT_MODE
+ * @see RTE_TM_UPDATE_NODE_N_SP_PRIORITIES
+ */
+int
+rte_tm_node_wfq_weight_mode_update(uint8_t port_id,
+ uint32_t node_id,
+ int *wfq_weight_mode,
+ uint32_t n_sp_priorities,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node congestion management mode update
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid leaf node ID.
+ * @param[in] cman
+ * Congestion management mode.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see RTE_TM_UPDATE_NODE_CMAN
+ */
+int
+rte_tm_node_cman_update(uint8_t port_id,
+ uint32_t node_id,
+ enum rte_tm_cman_mode cman,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node private WRED context update
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid leaf node ID.
+ * @param[in] wred_profile_id
+ * WRED profile ID for the private WRED context of the current node. Needs to
+ * be either valid WRED profile ID or RTE_TM_WRED_PROFILE_ID_NONE, with the
+ * latter disabling the private WRED context of the current node.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities::cman_wred_context_private_n_max
+*/
+int
+rte_tm_node_wred_context_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node shared WRED context update
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid leaf node ID.
+ * @param[in] shared_wred_context_id
+ * Shared WRED context ID. Needs to be valid.
+ * @param[in] add
+ * Set to non-zero value to add this shared WRED context to current node or
+ * to zero to delete this shared WRED context from current node.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities::cman_wred_context_shared_n_max
+ */
+int
+rte_tm_node_shared_wred_context_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t shared_wred_context_id,
+ int add,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node statistics counters read
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[out] stats
+ * When non-NULL, it contains the current value for the statistics counters
+ * enabled for the current node.
+ * @param[out] stats_mask
+ * When non-NULL, it contains the mask of statistics counter types that are
+ * currently enabled for this node, indicating which of the counters
+ * retrieved with the *stats* structure are valid.
+ * @param[in] clear
+ * When this parameter has a non-zero value, the statistics counters are
+ * cleared (i.e. set to zero) immediately after they have been read,
+ * otherwise the statistics counters are left untouched.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see enum rte_tm_stats_type
+ */
+int
+rte_tm_node_stats_read(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_node_stats *stats,
+ uint64_t *stats_mask,
+ int clear,
+ struct rte_tm_error *error);
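+
+/*
+ * Illustrative sketch (not part of this patch): reading and clearing the
+ * enabled counters of a leaf node. The returned *stats_mask* indicates
+ * which fields of *stats* are valid; *leaf_node_id* is hypothetical.
+ *
+ *    struct rte_tm_node_stats stats;
+ *    uint64_t stats_mask = 0;
+ *
+ *    // The last int argument (1) clears the counters after reading.
+ *    rte_tm_node_stats_read(port_id, leaf_node_id, &stats, &stats_mask,
+ *        1, &error);
+ */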
+
+/**
+ * Traffic manager packet marking - VLAN DEI (IEEE 802.1Q)
+ *
+ * IEEE 802.1p maps the traffic class to the VLAN Priority Code Point (PCP)
+ * field (3 bits), while IEEE 802.1Q maps the drop priority to the VLAN Drop
+ * Eligible Indicator (DEI) field (1 bit), which was previously named Canonical
+ * Format Indicator (CFI).
+ *
+ * All VLAN frames of a given color get their DEI bit set if marking is enabled
+ * for this color; otherwise, their DEI bit is left as is (either set or not).
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] mark_green
+ * Set to non-zero value to enable marking of green packets and to zero to
+ * disable it.
+ * @param[in] mark_yellow
+ * Set to non-zero value to enable marking of yellow packets and to zero to
+ * disable it.
+ * @param[in] mark_red
+ * Set to non-zero value to enable marking of red packets and to zero to
+ * disable it.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities::mark_vlan_dei_supported
+ */
+int
+rte_tm_mark_vlan_dei(uint8_t port_id,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error);
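+
+/*
+ * Illustrative sketch (not part of this patch): enabling DEI marking only
+ * for the non-conforming colors, leaving green frames untouched. The same
+ * per-color pattern applies to rte_tm_mark_ip_ecn() and
+ * rte_tm_mark_ip_dscp() below.
+ *
+ *    // green = 0 (disabled), yellow = 1, red = 1
+ *    rte_tm_mark_vlan_dei(port_id, 0, 1, 1, &error);
+ */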
+
+/**
+ * Traffic manager packet marking - IPv4 / IPv6 ECN (IETF RFC 3168)
+ *
+ * IETF RFCs 2474 and 3168 reorganize the IPv4 Type of Service (TOS) field
+ * (8 bits) and the IPv6 Traffic Class (TC) field (8 bits) into Differentiated
+ * Services Codepoint (DSCP) field (6 bits) and Explicit Congestion
+ * Notification (ECN) field (2 bits). The DSCP field is typically used to
+ * encode the traffic class and/or drop priority (RFC 2597), while the ECN
+ * field is used by RFC 3168 to implement a congestion notification mechanism
+ * to be leveraged by transport layer protocols such as TCP and SCTP that have
+ * congestion control mechanisms.
+ *
+ * When congestion is experienced, as alternative to dropping the packet,
+ * routers can change the ECN field of input packets from 2'b01 or 2'b10
+ * (values indicating that source endpoint is ECN-capable) to 2'b11 (meaning
+ * that congestion is experienced). The destination endpoint can use the
+ * ECN-Echo (ECE) TCP flag to relay the congestion indication back to the
+ * source endpoint, which acknowledges it back to the destination endpoint with
+ * the Congestion Window Reduced (CWR) TCP flag.
+ *
+ * All IPv4/IPv6 packets of a given color with ECN set to 2'b01 or 2'b10
+ * carrying TCP or SCTP have their ECN set to 2'b11 if the marking feature is
+ * enabled for the current color, otherwise the ECN field is left as is.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] mark_green
+ * Set to non-zero value to enable marking of green packets and to zero to
+ * disable it.
+ * @param[in] mark_yellow
+ * Set to non-zero value to enable marking of yellow packets and to zero to
+ * disable it.
+ * @param[in] mark_red
+ * Set to non-zero value to enable marking of red packets and to zero to
+ * disable it.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities::mark_ip_ecn_tcp_supported
+ * @see struct rte_tm_capabilities::mark_ip_ecn_sctp_supported
+ */
+int
+rte_tm_mark_ip_ecn(uint8_t port_id,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager packet marking - IPv4 / IPv6 DSCP (IETF RFC 2597)
+ *
+ * IETF RFC 2597 maps the traffic class and the drop priority to the IPv4/IPv6
+ * Differentiated Services Codepoint (DSCP) field (6 bits). Here are the DSCP
+ * values proposed by this RFC:
+ *
+ * <pre> Class 1 Class 2 Class 3 Class 4 </pre>
+ * <pre> +----------+----------+----------+----------+</pre>
+ * <pre>Low Drop Prec | 001010 | 010010 | 011010 | 100010 |</pre>
+ * <pre>Medium Drop Prec | 001100 | 010100 | 011100 | 100100 |</pre>
+ * <pre>High Drop Prec | 001110 | 010110 | 011110 | 100110 |</pre>
+ * <pre> +----------+----------+----------+----------+</pre>
+ *
+ * There are 4 traffic classes (classes 1 .. 4) encoded by DSCP bits 1 and 2,
+ * as well as 3 drop priorities (low/medium/high) encoded by DSCP bits 3 and 4.
+ *
+ * All IPv4/IPv6 packets have their color marked into DSCP bits 3 and 4 as
+ * follows: green mapped to Low Drop Precedence (2'b01), yellow to Medium
+ * (2'b10) and red to High (2'b11). Marking needs to be explicitly enabled
+ * for each color; when not enabled for a given color, the DSCP field of all
+ * packets with that color is left as is.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] mark_green
+ * Set to non-zero value to enable marking of green packets and to zero to
+ * disable it.
+ * @param[in] mark_yellow
+ * Set to non-zero value to enable marking of yellow packets and to zero to
+ * disable it.
+ * @param[in] mark_red
+ * Set to non-zero value to enable marking of red packets and to zero to
+ * disable it.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities::mark_ip_dscp_supported
+ */
+int
+rte_tm_mark_ip_dscp(uint8_t port_id,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __INCLUDE_RTE_TM_H__ */
diff --git a/lib/librte_ether/rte_tm_driver.h b/lib/librte_ether/rte_tm_driver.h
new file mode 100644
index 0000000..a5b698f
--- /dev/null
+++ b/lib/librte_ether/rte_tm_driver.h
@@ -0,0 +1,366 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_TM_DRIVER_H__
+#define __INCLUDE_RTE_TM_DRIVER_H__
+
+/**
+ * @file
+ * RTE Generic Traffic Manager API (Driver Side)
+ *
+ * This file provides implementation helpers for internal use by PMDs; they
+ * are not intended to be exposed to applications and are not subject to ABI
+ * versioning.
+ */
+
+#include <stdint.h>
+
+#include <rte_errno.h>
+#include "rte_ethdev.h"
+#include "rte_tm.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/** @internal Traffic manager node ID validate and type get */
+typedef int (*rte_tm_node_type_get_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ int *is_leaf,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager capabilities get */
+typedef int (*rte_tm_capabilities_get_t)(struct rte_eth_dev *dev,
+ struct rte_tm_capabilities *cap,
+ struct rte_tm_error *error);
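+
+/*
+ * Illustrative sketch (not part of this patch): how a PMD might implement
+ * this callback for a hypothetical device without SP, WFQ or WRED support.
+ * Only capability fields defined in rte_tm.h are set; the rest are zeroed.
+ * Requires <string.h> for memset().
+ *
+ *    static int
+ *    my_pmd_tm_capabilities_get(struct rte_eth_dev *dev,
+ *        struct rte_tm_capabilities *cap,
+ *        struct rte_tm_error *error)
+ *    {
+ *        (void)dev;
+ *        (void)error;
+ *        memset(cap, 0, sizeof(*cap));
+ *        cap->sched_n_children_max = 8; // hypothetical device limit
+ *        cap->sched_sp_n_priorities_max = 1; // SP not supported
+ *        cap->sched_wfq_n_children_per_group_max = 1; // WFQ not supported
+ *        cap->sched_wfq_weight_max = 1;
+ *        return 0;
+ *    }
+ */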
+
+/** @internal Traffic manager level capabilities get */
+typedef int (*rte_tm_level_capabilities_get_t)(struct rte_eth_dev *dev,
+ uint32_t level_id,
+ struct rte_tm_level_capabilities *cap,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node capabilities get */
+typedef int (*rte_tm_node_capabilities_get_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_node_capabilities *cap,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager WRED profile add */
+typedef int (*rte_tm_wred_profile_add_t)(struct rte_eth_dev *dev,
+ uint32_t wred_profile_id,
+ struct rte_tm_wred_params *profile,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager WRED profile delete */
+typedef int (*rte_tm_wred_profile_delete_t)(struct rte_eth_dev *dev,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager shared WRED context add/update */
+typedef int (*rte_tm_shared_wred_context_add_update_t)(
+ struct rte_eth_dev *dev,
+ uint32_t shared_wred_context_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager shared WRED context delete */
+typedef int (*rte_tm_shared_wred_context_delete_t)(
+ struct rte_eth_dev *dev,
+ uint32_t shared_wred_context_id,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager shaper profile add */
+typedef int (*rte_tm_shaper_profile_add_t)(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager shaper profile delete */
+typedef int (*rte_tm_shaper_profile_delete_t)(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager shared shaper add/update */
+typedef int (*rte_tm_shared_shaper_add_update_t)(struct rte_eth_dev *dev,
+ uint32_t shared_shaper_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager shared shaper delete */
+typedef int (*rte_tm_shared_shaper_delete_t)(struct rte_eth_dev *dev,
+ uint32_t shared_shaper_id,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node add */
+typedef int (*rte_tm_node_add_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ uint32_t parent_node_id,
+ uint32_t priority,
+ uint32_t weight,
+ uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node delete */
+typedef int (*rte_tm_node_delete_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node suspend */
+typedef int (*rte_tm_node_suspend_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node resume */
+typedef int (*rte_tm_node_resume_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager hierarchy commit */
+typedef int (*rte_tm_hierarchy_commit_t)(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node parent update */
+typedef int (*rte_tm_node_parent_update_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ uint32_t parent_node_id,
+ uint32_t priority,
+ uint32_t weight,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node shaper update */
+typedef int (*rte_tm_node_shaper_update_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node shared shaper update */
+typedef int (*rte_tm_node_shared_shaper_update_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ uint32_t shared_shaper_id,
+ int32_t add,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node stats update */
+typedef int (*rte_tm_node_stats_update_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ uint64_t stats_mask,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node WFQ weight mode update */
+typedef int (*rte_tm_node_wfq_weight_mode_update_t)(
+ struct rte_eth_dev *dev,
+ uint32_t node_id,
+ int *wfq_weight_mode,
+ uint32_t n_sp_priorities,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node congestion management mode update */
+typedef int (*rte_tm_node_cman_update_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ enum rte_tm_cman_mode cman,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node WRED context update */
+typedef int (*rte_tm_node_wred_context_update_t)(
+ struct rte_eth_dev *dev,
+ uint32_t node_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node shared WRED context update */
+typedef int (*rte_tm_node_shared_wred_context_update_t)(
+ struct rte_eth_dev *dev,
+ uint32_t node_id,
+ uint32_t shared_wred_context_id,
+ int add,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager read stats counters for specific node */
+typedef int (*rte_tm_node_stats_read_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_node_stats *stats,
+ uint64_t *stats_mask,
+ int clear,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager packet marking - VLAN DEI */
+typedef int (*rte_tm_mark_vlan_dei_t)(struct rte_eth_dev *dev,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager packet marking - IPv4/IPv6 ECN */
+typedef int (*rte_tm_mark_ip_ecn_t)(struct rte_eth_dev *dev,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager packet marking - IPv4/IPv6 DSCP */
+typedef int (*rte_tm_mark_ip_dscp_t)(struct rte_eth_dev *dev,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error);
+
+struct rte_tm_ops {
+ /** Traffic manager node type get */
+ rte_tm_node_type_get_t node_type_get;
+
+ /** Traffic manager capabilities get */
+ rte_tm_capabilities_get_t capabilities_get;
+ /** Traffic manager level capabilities get */
+ rte_tm_level_capabilities_get_t level_capabilities_get;
+ /** Traffic manager node capabilities get */
+ rte_tm_node_capabilities_get_t node_capabilities_get;
+
+ /** Traffic manager WRED profile add */
+ rte_tm_wred_profile_add_t wred_profile_add;
+ /** Traffic manager WRED profile delete */
+ rte_tm_wred_profile_delete_t wred_profile_delete;
+ /** Traffic manager shared WRED context add/update */
+ rte_tm_shared_wred_context_add_update_t
+ shared_wred_context_add_update;
+ /** Traffic manager shared WRED context delete */
+ rte_tm_shared_wred_context_delete_t
+ shared_wred_context_delete;
+
+ /** Traffic manager shaper profile add */
+ rte_tm_shaper_profile_add_t shaper_profile_add;
+ /** Traffic manager shaper profile delete */
+ rte_tm_shaper_profile_delete_t shaper_profile_delete;
+ /** Traffic manager shared shaper add/update */
+ rte_tm_shared_shaper_add_update_t shared_shaper_add_update;
+ /** Traffic manager shared shaper delete */
+ rte_tm_shared_shaper_delete_t shared_shaper_delete;
+
+ /** Traffic manager node add */
+ rte_tm_node_add_t node_add;
+ /** Traffic manager node delete */
+ rte_tm_node_delete_t node_delete;
+ /** Traffic manager node suspend */
+ rte_tm_node_suspend_t node_suspend;
+ /** Traffic manager node resume */
+ rte_tm_node_resume_t node_resume;
+ /** Traffic manager hierarchy commit */
+ rte_tm_hierarchy_commit_t hierarchy_commit;
+
+ /** Traffic manager node parent update */
+ rte_tm_node_parent_update_t node_parent_update;
+ /** Traffic manager node shaper update */
+ rte_tm_node_shaper_update_t node_shaper_update;
+ /** Traffic manager node shared shaper update */
+ rte_tm_node_shared_shaper_update_t node_shared_shaper_update;
+ /** Traffic manager node stats update */
+ rte_tm_node_stats_update_t node_stats_update;
+ /** Traffic manager node WFQ weight mode update */
+ rte_tm_node_wfq_weight_mode_update_t node_wfq_weight_mode_update;
+ /** Traffic manager node congestion management mode update */
+ rte_tm_node_cman_update_t node_cman_update;
+ /** Traffic manager node WRED context update */
+ rte_tm_node_wred_context_update_t node_wred_context_update;
+ /** Traffic manager node shared WRED context update */
+ rte_tm_node_shared_wred_context_update_t
+ node_shared_wred_context_update;
+ /** Traffic manager read stats counters for specific node */
+ rte_tm_node_stats_read_t node_stats_read;
+
+ /** Traffic manager packet marking - VLAN DEI */
+ rte_tm_mark_vlan_dei_t mark_vlan_dei;
+ /** Traffic manager packet marking - IPv4/IPv6 ECN */
+ rte_tm_mark_ip_ecn_t mark_ip_ecn;
+ /** Traffic manager packet marking - IPv4/IPv6 DSCP */
+ rte_tm_mark_ip_dscp_t mark_ip_dscp;
+};
+
+/**
+ * Initialize generic error structure.
+ *
+ * This function also sets rte_errno to a given value.
+ *
+ * @param[out] error
+ * Pointer to error structure (may be NULL).
+ * @param[in] code
+ * Related error code (rte_errno).
+ * @param[in] type
+ * Cause field and error type.
+ * @param[in] cause
+ * Object responsible for the error.
+ * @param[in] message
+ * Human-readable error message.
+ *
+ * @return
+ * Error code.
+ */
+static inline int
+rte_tm_error_set(struct rte_tm_error *error,
+ int code,
+ enum rte_tm_error_type type,
+ const void *cause,
+ const char *message)
+{
+ if (error) {
+ *error = (struct rte_tm_error){
+ .type = type,
+ .cause = cause,
+ .message = message,
+ };
+ }
+ rte_errno = code;
+ return code;
+}
+
+/**
+ * Get generic traffic manager operations structure from a port
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[out] error
+ * Error details
+ *
+ * @return
+ * The traffic manager operations structure associated with port_id on
+ * success, NULL otherwise.
+ */
+const struct rte_tm_ops *
+rte_tm_ops_get(uint8_t port_id, struct rte_tm_error *error);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __INCLUDE_RTE_TM_DRIVER_H__ */
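
For PMD authors, a minimal sketch of how these hooks are meant to be wired
together: a hypothetical PMD (all names invented here) implements one op
using rte_tm_error_set() and exposes its ops table through the ethdev
tm_ops_get hook. This is a sketch under those assumptions, not a reference
implementation.

---8<---
#include <rte_errno.h>
#include "rte_ethdev.h"
#include "rte_tm_driver.h"

/* report node type; leaf node IDs are predefined as the TX queue IDs */
static int
example_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
	int *is_leaf, struct rte_tm_error *error)
{
	if (is_leaf == NULL)
		return -rte_tm_error_set(error, EINVAL,
			RTE_TM_ERROR_TYPE_UNSPECIFIED, NULL,
			rte_strerror(EINVAL));

	*is_leaf = node_id < dev->data->nb_tx_queues;
	return 0;
}

static const struct rte_tm_ops example_tm_ops = {
	.node_type_get = example_node_type_get,
	/* ops left NULL are reported as ENOSYS by the generic layer */
};

/* wired into eth_dev_ops as .tm_ops_get */
static int
example_tm_ops_get(struct rte_eth_dev *dev __rte_unused, void *ops)
{
	*(const struct rte_tm_ops **)ops = &example_tm_ops;
	return 0;
}
---8<---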
--
2.7.4
^ permalink raw reply [relevance 1%]
* Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
2017-06-12 9:02 4% ` Olivier Matz
@ 2017-06-12 9:56 0% ` Bruce Richardson
2017-06-30 11:35 0% ` Olivier Matz
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2017-06-12 9:56 UTC (permalink / raw)
To: Olivier Matz; +Cc: Ananyev, Konstantin, Verkamp, Daniel, dev
On Mon, Jun 12, 2017 at 11:02:32AM +0200, Olivier Matz wrote:
> On Fri, 9 Jun 2017 10:02:55 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> > On Thu, Jun 08, 2017 at 05:42:00PM +0100, Ananyev, Konstantin wrote:
> > >
> > >
> > > > -----Original Message-----
> > > > From: Richardson, Bruce
> > > > Sent: Thursday, June 8, 2017 5:21 PM
> > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > > Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > Subject: RE: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> > > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Ananyev, Konstantin
> > > > > Sent: Thursday, June 8, 2017 5:13 PM
> > > > > To: Richardson, Bruce <bruce.richardson@intel.com>
> > > > > Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel
> > > > > <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > Subject: RE: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> > > > >
> > > > >
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Richardson, Bruce
> > > > > > Sent: Thursday, June 8, 2017 5:04 PM
> > > > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > > > > Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel
> > > > > > <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone
> > > > > > allocation
> > > > > >
> > > > > > On Thu, Jun 08, 2017 at 04:35:20PM +0100, Ananyev, Konstantin wrote:
> > > > > > >
> > > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: Richardson, Bruce
> > > > > > > > Sent: Thursday, June 8, 2017 4:25 PM
> > > > > > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > > > > > > Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel
> > > > > > > > <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone
> > > > > > > > allocation
> > > > > > > >
> > > > > > > > On Thu, Jun 08, 2017 at 03:50:34PM +0100, Ananyev, Konstantin wrote:
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > -----Original Message-----
> > > > > > > > > > From: Richardson, Bruce
> > > > > > > > > > Sent: Thursday, June 8, 2017 3:12 PM
> > > > > > > > > > To: Olivier Matz <olivier.matz@6wind.com>
> > > > > > > > > > Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > > > > > > > > > Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > > > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone
> > > > > > > > > > allocation
> > > > > > > > > >
> > > > > > > > > > On Thu, Jun 08, 2017 at 04:05:26PM +0200, Olivier Matz wrote:
> > > > > > > > > > > On Thu, 8 Jun 2017 14:20:52 +0100, Bruce Richardson
> > > > > <bruce.richardson@intel.com> wrote:
> > > > > > > > > > > > On Thu, Jun 08, 2017 at 02:45:40PM +0200, Olivier Matz
> > > > > wrote:
> > > > > > > > > > > > > On Tue, 6 Jun 2017 15:56:28 +0100, Bruce Richardson
> > > > > <bruce.richardson@intel.com> wrote:
> > > > > > > > > > > > > > On Tue, Jun 06, 2017 at 02:19:21PM +0100, Ananyev,
> > > > > Konstantin wrote:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > -----Original Message-----
> > > > > > > > > > > > > > > > From: Richardson, Bruce
> > > > > > > > > > > > > > > > Sent: Tuesday, June 6, 2017 1:42 PM
> > > > > > > > > > > > > > > > To: Ananyev, Konstantin
> > > > > > > > > > > > > > > > <konstantin.ananyev@intel.com>
> > > > > > > > > > > > > > > > Cc: Verkamp, Daniel <daniel.verkamp@intel.com>;
> > > > > > > > > > > > > > > > dev@dpdk.org
> > > > > > > > > > > > > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use
> > > > > > > > > > > > > > > > aligned memzone allocation
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > On Tue, Jun 06, 2017 at 10:59:59AM +0100, Ananyev,
> > > > > Konstantin wrote:
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > > The PROD/CONS_ALIGN values on x86-64 are
> > > > > > > > > > > > > > > > > > > > set to 2 cache lines, so members
> > > > > > > > > > > > > > > > > > > of struct rte_ring are 128 byte aligned,
> > > > > > > > > > > > > > > > > > > and therefore the whole struct needs
> > > > > > > > > > > > > > > > > > > 128-byte alignment according to the ABI
> > > > > > > > > > > > > > > > > > > so that the 128-byte alignment of the fields
> > > > > can be guaranteed.
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > Ah ok, missed the fact that rte_ring is 128B
> > > > > aligned these days.
> > > > > > > > > > > > > > > > > > > BTW, I probably missed the initial discussion,
> > > > > but what was the reason for that?
> > > > > > > > > > > > > > > > > > > Konstantin
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > I don't know why PROD_ALIGN/CONS_ALIGN use 128
> > > > > > > > > > > > > > > > > > byte alignment; it seems unnecessary if the
> > > > > > > > > > > > > > > > > > cache line is only 64
> > > > > > > > bytes.
> > > > > > > > > > An
> > > > > > > > > > > > > > > > alternate
> > > > > > > > > > > > > > > > > > fix would be to just use cache line alignment
> > > > > for these fields (since memzones are already cache line aligned).
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Yes, had the same thought.
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > Maybe there is some deeper reason for the >=
> > > > > 128-byte alignment logic in rte_ring.h?
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Might be, would be good to hear the opinion of the author
> > > > > of that change.
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > It gives improved performance for core-2-core
> > > > > transfer.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > You mean empty cache-line(s) after prod/cons, correct?
> > > > > > > > > > > > > > > That's ok but why we can't keep them and whole
> > > > > rte_ring aligned on cache-line boundaries?
> > > > > > > > > > > > > > > Something like that:
> > > > > > > > > > > > > > > struct rte_ring {
> > > > > > > > > > > > > > > ...
> > > > > > > > > > > > > > > struct rte_ring_headtail prod __rte_cache_aligned;
> > > > > > > > > > > > > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > > > > > > > > > > > struct rte_ring_headtail cons __rte_cache_aligned;
> > > > > > > > > > > > > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > > > > > > > > > > > };
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Konstantin
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Sure. That should probably work too.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > /Bruce
> > > > > > > > > > > > >
> > > > > > > > > > > > > I also agree with Konstantin's proposal. One question
> > > > > > > > > > > > > though: since it changes the alignment constraint of the
> > > > > > > > > > > > > rte_ring structure, I think it is an ABI breakage: a
> > > > > > > > > > > > > structure including the rte_ring structure inherits from
> > > > > this constraint.
> > > > > > > > > > > > >
> > > > > > > > > > > > > How could we handle that, knowing this is probably a rare
> > > > > case?
> > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > Is it an ABI break so long as we keep the resulting size
> > > > > > > > > > > > and field placement of the structures the same? The
> > > > > > > > > > > > alignment being reduced should not be a problem, as
> > > > > > > > > > > > 128byte alignment is also valid as 64byte alignment, after
> > > > > all.
> > > > > > > > > > >
> > > > > > > > > > > I'd say yes. Consider the following example:
> > > > > > > > > > >
> > > > > > > > > > > ---8<---
> > > > > > > > > > > #include <stdio.h>
> > > > > > > > > > > #include <stdlib.h>
> > > > > > > > > > >
> > > > > > > > > > > #define ALIGN 64
> > > > > > > > > > > /* #define ALIGN 128 */
> > > > > > > > > > >
> > > > > > > > > > > /* dummy rte_ring struct */
> > > > > > > > > > > struct rte_ring {
> > > > > > > > > > > char x[128];
> > > > > > > > > > > } __attribute__((aligned(ALIGN)));
> > > > > > > > > > >
> > > > > > > > > > > struct foo {
> > > > > > > > > > > struct rte_ring r;
> > > > > > > > > > > unsigned bar;
> > > > > > > > > > > };
> > > > > > > > > > >
> > > > > > > > > > > int main(void)
> > > > > > > > > > > {
> > > > > > > > > > > struct foo array[2];
> > > > > > > > > > >
> > > > > > > > > > > printf("sizeof(ring)=%zu diff=%u\n",
> > > > > > > > > > > sizeof(struct rte_ring),
> > > > > > > > > > > (unsigned int)((char *)&array[1].r - (char
> > > > > *)array));
> > > > > > > > > > >
> > > > > > > > > > > return 0;
> > > > > > > > > > > }
> > > > > > > > > > > ---8<---
> > > > > > > > > > >
> > > > > > > > > > > The size of rte_ring is always 128.
> > > > > > > > > > > diff is 192 or 256, depending on the value of ALIGN.
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Olivier
> > > > > > > > >
> > > > > > > > > About would it be an ABI breakage to 17.05 - I think it would...
> > > > > > > > > Though for me the actual breakage happens in 17.05 when rte_ring
> > > > > > > > > alignment was increased from 64B to 128B.
> > > > > > > > > Now we are just restoring it.
> > > > > > > > >
> > > > > > > > Yes, ABI change was announced in advance and explicitly broken in
> > > > > 17.05.
> > > > > > > > There was no announcement of ABI break in 17.08 for rte_ring.
> > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Yes, the diff will change, but that is after a recompile. If
> > > > > > > > > > we have rte_ring_create function always return a 128-byte
> > > > > > > > > > aligned structure, will any already-compiled apps fail to work
> > > > > > > > > > if we also change the alignment of the rte_ring struct in the
> > > > > header?
> > > > > > > > >
> > > > > > > > > Why 128B?
> > > > > > > > > I thought we are discussing making rte_ring 64B aligned again?
> > > > > > > > >
> > > > > > > > > Konstantin
> > > > > > > >
> > > > > > > > To avoid possibly breaking apps compiled against 17.05 when run
> > > > > > > > against shared libs for 17.08. Having the extra alignment won't
> > > > > > > > affect 17.08 apps, since they only require 64-byte alignment, but
> > > > > > > > returning only 64-byte aligned memory for apps which expect
> > > > > > > > 128byte aligned memory may cause issues.
> > > > > > > >
> > > > > > > > Therefore, we should reduce the required alignment to 64B, which
> > > > > > > > should only affect any apps that do a recompile, and have memory
> > > > > > > > allocation for rings return 128B aligned addresses to work with
> > > > > > > > both 64B aligned and 128B aligned ring structures.
> > > > > > >
> > > > > > > Ah, I see - you are talking just about rte_ring_create().
> > > > > > > BTW, are you sure that right now it allocates rings 128B aligned?
> > > > > > > As I can see it does just:
> > > > > > > mz = rte_memzone_reserve(mz_name, ring_size, socket_id, mz_flags);
> > > > > > > which means cache line alignment.
> > > > > > >
> > > > > > It doesn't currently allocate with that alignment, which is something
> > > > > > we need to fix - and what this patch was originally submitted to do.
> > > > > > So I think this patch should be applied, along with a further patch to
> > > > > > reduce the alignment going forward to avoid any other problems.
> > > > >
> > > > > But if we are going to reduce alignment anyway (patch #2), why do we need patch
> > > > > #1 at all?
> > > >
> > > > Because any app compiled against 17.05 will use the old alignment value. Therefore patch 1 should be applied to 17.08 for backward
> > > > compatibility, and backported to 17.05.1.
> > >
> > > Why then just not backport patch #2 to 17.05.1?
> > >
> > Maybe so. I'm just a little wary about backporting changes like that to
> > an older release, even though I'm not aware of any specific issues it
> > might cause.
>
>
> If we want to fully respect the API/ABI deprecation process, we should
> have patch #1 in 17.05 and 17.08, a deprecation notice in 17.08, and patch
> #2 starting from 17.11.
>
> More pragmatically, it's quite difficult to foresee really big problems
> due to the changes in patch #2. One I can see is:
>
> - rte_ring.so: the dpdk ring library
> - another_ring.so: a library based on dpdk ring. The struct another_ring
> is like the struct foo in my previous example.
> - application: uses another_ring structure
>
> After we apply patch #2 on dpdk, and recompile the another_ring library,
> its ABI will change.
>
>
> So I suggest to follow the deprecation process for that issue.
>
While this theoretically can occur, I consider it fairly unlikely, so my
preference is to have patch #1 in 17.05 and .08, as you suggest,
but put patch #2 into 17.08 as well.
/Bruce
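
For reference, a minimal sketch of what patch #1 amounts to inside
rte_ring_create() (assuming the memzone API of that era; flags and error
handling elided):

---8<---
/* reserve the ring memzone with explicit 128B alignment instead of
 * relying on the default cache-line alignment of rte_memzone_reserve() */
mz = rte_memzone_reserve_aligned(mz_name, ring_size, socket_id,
		mz_flags, 128);
---8<---

This keeps 17.05-built apps, which expect 128B-aligned rings, working
against 17.08 shared libraries even once patch #2 relaxes the struct
alignment back to 64B.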
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
2017-06-09 9:02 0% ` Bruce Richardson
@ 2017-06-12 9:02 4% ` Olivier Matz
2017-06-12 9:56 0% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Olivier Matz @ 2017-06-12 9:02 UTC (permalink / raw)
To: Bruce Richardson; +Cc: Ananyev, Konstantin, Verkamp, Daniel, dev
On Fri, 9 Jun 2017 10:02:55 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> On Thu, Jun 08, 2017 at 05:42:00PM +0100, Ananyev, Konstantin wrote:
> >
> >
> > > -----Original Message-----
> > > From: Richardson, Bruce
> > > Sent: Thursday, June 8, 2017 5:21 PM
> > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > > Subject: RE: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Ananyev, Konstantin
> > > > Sent: Thursday, June 8, 2017 5:13 PM
> > > > To: Richardson, Bruce <bruce.richardson@intel.com>
> > > > Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel
> > > > <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > Subject: RE: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> > > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Richardson, Bruce
> > > > > Sent: Thursday, June 8, 2017 5:04 PM
> > > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > > > Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel
> > > > > <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone
> > > > > allocation
> > > > >
> > > > > On Thu, Jun 08, 2017 at 04:35:20PM +0100, Ananyev, Konstantin wrote:
> > > > > >
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Richardson, Bruce
> > > > > > > Sent: Thursday, June 8, 2017 4:25 PM
> > > > > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > > > > > Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel
> > > > > > > <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone
> > > > > > > allocation
> > > > > > >
> > > > > > > On Thu, Jun 08, 2017 at 03:50:34PM +0100, Ananyev, Konstantin wrote:
> > > > > > > >
> > > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: Richardson, Bruce
> > > > > > > > > Sent: Thursday, June 8, 2017 3:12 PM
> > > > > > > > > To: Olivier Matz <olivier.matz@6wind.com>
> > > > > > > > > Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > > > > > > > > Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone
> > > > > > > > > allocation
> > > > > > > > >
> > > > > > > > > On Thu, Jun 08, 2017 at 04:05:26PM +0200, Olivier Matz wrote:
> > > > > > > > > > On Thu, 8 Jun 2017 14:20:52 +0100, Bruce Richardson
> > > > <bruce.richardson@intel.com> wrote:
> > > > > > > > > > > On Thu, Jun 08, 2017 at 02:45:40PM +0200, Olivier Matz
> > > > wrote:
> > > > > > > > > > > > On Tue, 6 Jun 2017 15:56:28 +0100, Bruce Richardson
> > > > <bruce.richardson@intel.com> wrote:
> > > > > > > > > > > > > On Tue, Jun 06, 2017 at 02:19:21PM +0100, Ananyev,
> > > > Konstantin wrote:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > > -----Original Message-----
> > > > > > > > > > > > > > > From: Richardson, Bruce
> > > > > > > > > > > > > > > Sent: Tuesday, June 6, 2017 1:42 PM
> > > > > > > > > > > > > > > To: Ananyev, Konstantin
> > > > > > > > > > > > > > > <konstantin.ananyev@intel.com>
> > > > > > > > > > > > > > > Cc: Verkamp, Daniel <daniel.verkamp@intel.com>;
> > > > > > > > > > > > > > > dev@dpdk.org
> > > > > > > > > > > > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use
> > > > > > > > > > > > > > > aligned memzone allocation
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > On Tue, Jun 06, 2017 at 10:59:59AM +0100, Ananyev,
> > > > Konstantin wrote:
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > > The PROD/CONS_ALIGN values on x86-64 are
> > > > > > > > > > > > > > > > > > > set to 2 cache lines, so members
> > > > > > > > > > > > > > > > > > of struct rte_ring are 128 byte aligned,
> > > > > > > > > > > > > > > > > > and therefore the whole struct needs
> > > > > > > > > > > > > > > > > > 128-byte alignment according to the ABI
> > > > > > > > > > > > > > > > > > so that the 128-byte alignment of the fields
> > > > can be guaranteed.
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > Ah ok, missed the fact that rte_ring is 128B
> > > > aligned these days.
> > > > > > > > > > > > > > > > > > BTW, I probably missed the initial discussion,
> > > > but what was the reason for that?
> > > > > > > > > > > > > > > > > > Konstantin
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > I don't know why PROD_ALIGN/CONS_ALIGN use 128
> > > > > > > > > > > > > > > > > byte alignment; it seems unnecessary if the
> > > > > > > > > > > > > > > > > cache line is only 64
> > > > > > > bytes.
> > > > > > > > > An
> > > > > > > > > > > > > > > alternate
> > > > > > > > > > > > > > > > > fix would be to just use cache line alignment
> > > > for these fields (since memzones are already cache line aligned).
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Yes, had the same thought.
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Maybe there is some deeper reason for the >=
> > > > 128-byte alignment logic in rte_ring.h?
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Might be, would be good to hear the opinion of the author
> > > > of that change.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > It gives improved performance for core-2-core
> > > > transfer.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > You mean empty cache-line(s) after prod/cons, correct?
> > > > > > > > > > > > > > That's ok but why we can't keep them and whole
> > > > rte_ring aligned on cache-line boundaries?
> > > > > > > > > > > > > > Something like that:
> > > > > > > > > > > > > > struct rte_ring {
> > > > > > > > > > > > > > ...
> > > > > > > > > > > > > > struct rte_ring_headtail prod __rte_cache_aligned;
> > > > > > > > > > > > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > > > > > > > > > > struct rte_ring_headtail cons __rte_cache_aligned;
> > > > > > > > > > > > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > > > > > > > > > > };
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Konstantin
> > > > > > > > > > > > >
> > > > > > > > > > > > > Sure. That should probably work too.
> > > > > > > > > > > > >
> > > > > > > > > > > > > /Bruce
> > > > > > > > > > > >
> > > > > > > > > > > > I also agree with Konstantin's proposal. One question
> > > > > > > > > > > > though: since it changes the alignment constraint of the
> > > > > > > > > > > > rte_ring structure, I think it is an ABI breakage: a
> > > > > > > > > > > > structure including the rte_ring structure inherits from
> > > > this constraint.
> > > > > > > > > > > >
> > > > > > > > > > > > How could we handle that, knowing this is probably a rare
> > > > case?
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > Is it an ABI break so long as we keep the resulting size
> > > > > > > > > > > and field placement of the structures the same? The
> > > > > > > > > > > alignment being reduced should not be a problem, as
> > > > > > > > > > > 128byte alignment is also valid as 64byte alignment, after
> > > > all.
> > > > > > > > > >
> > > > > > > > > > I'd say yes. Consider the following example:
> > > > > > > > > >
> > > > > > > > > > ---8<---
> > > > > > > > > > #include <stdio.h>
> > > > > > > > > > #include <stdlib.h>
> > > > > > > > > >
> > > > > > > > > > #define ALIGN 64
> > > > > > > > > > /* #define ALIGN 128 */
> > > > > > > > > >
> > > > > > > > > > /* dummy rte_ring struct */
> > > > > > > > > > struct rte_ring {
> > > > > > > > > > char x[128];
> > > > > > > > > > } __attribute__((aligned(ALIGN)));
> > > > > > > > > >
> > > > > > > > > > struct foo {
> > > > > > > > > > struct rte_ring r;
> > > > > > > > > > unsigned bar;
> > > > > > > > > > };
> > > > > > > > > >
> > > > > > > > > > int main(void)
> > > > > > > > > > {
> > > > > > > > > > struct foo array[2];
> > > > > > > > > >
> > > > > > > > > > printf("sizeof(ring)=%zu diff=%u\n",
> > > > > > > > > > sizeof(struct rte_ring),
> > > > > > > > > > (unsigned int)((char *)&array[1].r - (char
> > > > *)array));
> > > > > > > > > >
> > > > > > > > > > return 0;
> > > > > > > > > > }
> > > > > > > > > > ---8<---
> > > > > > > > > >
> > > > > > > > > > The size of rte_ring is always 128.
> > > > > > > > > > diff is 192 or 256, depending on the value of ALIGN.
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Olivier
> > > > > > > >
> > > > > > > > About would it be an ABI breakage to 17.05 - I think it would...
> > > > > > > > Though for me the actual breakage happens in 17.05 when rte_ring
> > > > > > > > alignment was increased from 64B to 128B.
> > > > > > > > Now we are just restoring it.
> > > > > > > >
> > > > > > > Yes, ABI change was announced in advance and explicitly broken in
> > > > 17.05.
> > > > > > > There was no announcement of ABI break in 17.08 for rte_ring.
> > > > > > >
> > > > > > > > >
> > > > > > > > > Yes, the diff will change, but that is after a recompile. If
> > > > > > > > > we have rte_ring_create function always return a 128-byte
> > > > > > > > > aligned structure, will any already-compiled apps fail to work
> > > > > > > > > if we also change the alignment of the rte_ring struct in the
> > > > header?
> > > > > > > >
> > > > > > > > Why 128B?
> > > > > > > > I thought we are discussing making rte_ring 64B aligned again?
> > > > > > > >
> > > > > > > > Konstantin
> > > > > > >
> > > > > > > To avoid possibly breaking apps compiled against 17.05 when run
> > > > > > > against shared libs for 17.08. Having the extra alignment won't
> > > > > > > affect 17.08 apps, since they only require 64-byte alignment, but
> > > > > > > returning only 64-byte aligned memory for apps which expect
> > > > > > > 128byte aligned memory may cause issues.
> > > > > > >
> > > > > > > Therefore, we should reduce the required alignment to 64B, which
> > > > > > > should only affect any apps that do a recompile, and have memory
> > > > > > > allocation for rings return 128B aligned addresses to work with
> > > > > > > both 64B aligned and 128B aligned ring structures.
> > > > > >
> > > > > > Ah, I see - you are talking just about rte_ring_create().
> > > > > > BTW, are you sure that right now it allocates rings 128B aligned?
> > > > > > As I can see it does just:
> > > > > > mz = rte_memzone_reserve(mz_name, ring_size, socket_id, mz_flags);
> > > > > > which means cache line alignment.
> > > > > >
> > > > > It doesn't currently allocate with that alignment, which is something
> > > > > we need to fix - and what this patch was originally submitted to do.
> > > > > So I think this patch should be applied, along with a further patch to
> > > > > reduce the alignment going forward to avoid any other problems.
> > > >
> > > > But if we are going to reduce alignment anyway (patch #2), why do we need patch
> > > > #1 at all?
> > >
> > > Because any app compiled against 17.05 will use the old alignment value. Therefore patch 1 should be applied to 17.08 for backward
> > > compatibility, and backported to 17.05.1.
> >
> > Why then just not backport patch #2 to 17.05.1?
> >
> Maybe so. I'm just a little wary about backporting changes like that to
> an older release, even though I'm not aware of any specific issues it
> might cause.
If we want to fully respect the API/ABI deprecation process, we should
have patch #1 in 17.05 and 17.08, a deprecation notice in 17.08, and patch
#2 starting from 17.11.
More pragmatically, it's quite difficult to foresee really big problems
due to the changes in patch #2. One I can see is:
- rte_ring.so: the dpdk ring library
- another_ring.so: a library based on dpdk ring. The struct another_ring
is like the struct foo in my previous example.
- application: uses another_ring structure
After we apply patch #2 on dpdk, and recompile the another_ring library,
its ABI will change.
So I suggest to follow the deprecation process for that issue.
Olivier
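
Spelling out the dependent-ABI case above with a hypothetical another_ring
built on the earlier struct foo pattern:

---8<---
/* another_ring.so: embedding struct rte_ring makes the outer struct
 * inherit its alignment. Changing ALIGN from 128 to 64 shrinks
 * sizeof(struct another_ring) from 256 to 192 after a recompile (the
 * diff printed by the earlier example), so arrays of it change their
 * stride: an ABI change visible to the application. */
struct another_ring {
	struct rte_ring r;
	unsigned bar;
};
---8<---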
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH 2/2] drivers/net: use device name from device structure
@ 2017-06-12 8:57 3% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2017-06-12 8:57 UTC (permalink / raw)
To: Jan Blunck, Thomas Monjalon
Cc: dev, John W. Linville, Stephen Hurd, Ajit Khaparde,
Declan Doherty, Helin Zhang, Jingjing Wu, Wenzhuo Lu,
Konstantin Ananyev, Pascal Mazon, Gaetan Rivet
On 6/10/2017 8:35 AM, Jan Blunck wrote:
> On Fri, Jun 9, 2017 at 3:52 PM, Thomas Monjalon <thomas@monjalon.net> wrote:
>> 26/05/2017 18:11, Ferruh Yigit:
>>> Device name resides in two different locations, in rte_device->name and
>>> in ethernet device private data.
>>
>> Yes would be nice to remove the name from rte_eth_dev_data.
>>
>
> I wonder if this is really the right thing to do. The name in the
> eth_dev data is the eth_dev device name and it might be different from
> the low-level device name. Some busses might use UUID as the device
> identifier and I don't believe that this is a user friendly name.
Right now eth_dev->data->name is the same as rte_dev->device->name,
and there is an assumption that they will be the same [1].
But if you think they can be different in the future, I think we can:
1- Keep as it is.
2- Reduce to a single variable as much as possible (this patch); when
different naming is implemented, update the relevant parts accordingly.
Since this is an internal structure, I believe it won't cause an ABI issue.
[1]
rte_eth_dev_pci_allocate() and rte_eth_vdev_allocate() use
rte_dev->device->name to call rte_eth_dev_allocate(), which internally
calls rte_eth_dev_allocated() with the same name. This assumes that any
previously created eth_dev was created with rte_dev->device->name.
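
A condensed sketch of that flow (simplified from the 17.08-era code;
checks and error paths elided):

---8<---
/* primary process: eth_dev is created under the low-level device name */
eth_dev = rte_eth_dev_allocate(pci_dev->device.name);

/* secondary process: the same name is the lookup key */
eth_dev = rte_eth_dev_attach_secondary(pci_dev->device.name);
---8<---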
>
>>> For now, the copy in the ethernet device private data is required for
>>> multi process support, the name is the how secondary process finds about
>>> primary process device.
>>
>> Yes it is in rte_eth_dev_attach_secondary().
>> This secondary process forces us to write ugly data structures.
>>
>>> But for drivers there is no reason to use the copy in the ethernet
>>> device private data.
>>
>> Yes I agree.
>
> Probably. But it also depends on at what stage the driver is using the
> name and what information is printed. During probing I would expect
> the low-level device name to be printed. After probing the eth_dev PMD
> should use the user friendly device name.
This makes sense when different naming is used for the core device and
the eth_dev, but this is not the case for now.
>
>> There are probably other places where we can avoid using this field.
>> I see rte_eth_dev_get_name_by_port() and rte_eth_dev_get_port_by_name()
>> using rte_eth_dev_data[port].name.
>>
>>> This patch updates PMDs to use only rte_device->name.
>>>
>>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>>
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v2 1/2] mbuf: introduce new Tx offload flag for MPLS-in-UDP
[not found] ` <20170609091811.0867b1d1@platinum>
@ 2017-06-10 6:17 0% ` Patil, Harish
0 siblings, 0 replies; 200+ results
From: Patil, Harish @ 2017-06-10 6:17 UTC (permalink / raw)
To: Olivier Matz; +Cc: Mody, Rasesh, Ferruh Yigit, dev, Dept-Eng DPDK Dev
>
>On Thu, 8 Jun 2017 21:46:00 +0000, "Patil, Harish"
><Harish.Patil@cavium.com> wrote:
>> >Hi Rasesh,
>> >
>> >On Wed, 7 Jun 2017 00:43:48 -0700, Rasesh Mody <rasesh.mody@cavium.com>
>> >wrote:
>> >> From: Harish Patil <harish.patil@cavium.com>
>> >>
>> >> Some PMDs need to know the tunnel type in order to handle advance TX
>> >> features. This patch adds a new TX offload flag for MPLS-in-UDP
>>packets.
>> >>
>> >> Signed-off-by: Harish Patil <harish.patil@cavium.com>
>> >> ---
>> >> lib/librte_mbuf/rte_mbuf.c | 2 ++
>> >> lib/librte_mbuf/rte_mbuf.h | 17 ++++++++++-------
>> >> 2 files changed, 12 insertions(+), 7 deletions(-)
>> >>
>> >> diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
>> >> index 0e3e36a..c2793fb 100644
>> >> --- a/lib/librte_mbuf/rte_mbuf.c
>> >> +++ b/lib/librte_mbuf/rte_mbuf.c
>> >> @@ -410,6 +410,7 @@ const char *rte_get_tx_ol_flag_name(uint64_t
>>mask)
>> >> case PKT_TX_TUNNEL_IPIP: return "PKT_TX_TUNNEL_IPIP";
>> >> case PKT_TX_TUNNEL_GENEVE: return "PKT_TX_TUNNEL_GENEVE";
>> >> case PKT_TX_MACSEC: return "PKT_TX_MACSEC";
>> >> + case PKT_TX_TUNNEL_MPLSINUDP: return "PKT_TX_TUNNEL_MPLSINUDP";
>> >> default: return NULL;
>> >> }
>> >> }
>> >> @@ -441,6 +442,7 @@ const char *rte_get_tx_ol_flag_name(uint64_t
>>mask)
>> >> { PKT_TX_TUNNEL_GENEVE, PKT_TX_TUNNEL_MASK,
>> >> "PKT_TX_TUNNEL_NONE" },
>> >> { PKT_TX_MACSEC, PKT_TX_MACSEC, NULL },
>> >> + { PKT_TX_TUNNEL_MPLSINUDP, PKT_TX_TUNNEL_MPLSINUDP, NULL },
>> >> };
>> >> const char *name;
>> >> unsigned int i;
>> >> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
>> >> index 1cb0310..27ad421 100644
>> >> --- a/lib/librte_mbuf/rte_mbuf.h
>> >> +++ b/lib/librte_mbuf/rte_mbuf.h
>> >> @@ -197,19 +197,22 @@
>> >> * Offload the MACsec. This flag must be set by the application to
>> >>enable
>> >> * this offload feature for a packet to be transmitted.
>> >> */
>> >> -#define PKT_TX_MACSEC (1ULL << 44)
>> >> +#define PKT_TX_MACSEC (1ULL << 43)
>> >
>> >I'm not sure it is suitable to change the value of an existing
>> >flag, since it breaks the ABI.
>> >
>> >
>> >> /**
>> >> - * Bits 45:48 used for the tunnel type.
>> >> + * Bits 44:48 used for the tunnel type.
>> >> * When doing Tx offload like TSO or checksum, the HW needs to
>> >>configure the
>> >> * tunnel type into the HW descriptors.
>> >> */
>> >> -#define PKT_TX_TUNNEL_VXLAN (0x1ULL << 45)
>> >> -#define PKT_TX_TUNNEL_GRE (0x2ULL << 45)
>> >> -#define PKT_TX_TUNNEL_IPIP (0x3ULL << 45)
>> >> -#define PKT_TX_TUNNEL_GENEVE (0x4ULL << 45)
>> >> +/**< TX packet with MPLS-in-UDP RFC 7510 header. */
>> >> +#define PKT_TX_TUNNEL_MPLSINUDP (0x1ULL << 44)
>> >> +
>> >> +#define PKT_TX_TUNNEL_VXLAN (0x2ULL << 44)
>> >> +#define PKT_TX_TUNNEL_GRE (0x3ULL << 44)
>> >> +#define PKT_TX_TUNNEL_IPIP (0x4ULL << 44)
>> >> +#define PKT_TX_TUNNEL_GENEVE (0x5ULL << 45)
>> >> /* add new TX TUNNEL type here */
>> >> -#define PKT_TX_TUNNEL_MASK (0xFULL << 45)
>> >> +#define PKT_TX_TUNNEL_MASK (0x1FULL << 44)
>> >>
>> >> /**
>> >> * Second VLAN insertion (QinQ) flag.
>> >
>> >I dont understand why adding
>> >#define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
>> >wouldn't do the job?
>> >
>> >Currently, the tunnel mask is 0xF << 45, which gives 16 possible
>>values.
>>
>> [Harish] Hi Olivier,
>> Not too sure whether I understand your comment.
>> My understanding is that those are bitmapped values for each Tx tunnel
>> type in the range [48:45].
>> They are not values. So defining PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
>> won’t work.
>
>Currently, we have:
>
>#define PKT_TX_TUNNEL_VXLAN (0x1ULL << 45)
>in binary: 000..000[0001]000..000
>
>#define PKT_TX_TUNNEL_GRE (0x2ULL << 45)
>in binary: 000..000[0010]000..000
>
>#define PKT_TX_TUNNEL_IPIP (0x3ULL << 45)
>in binary: 000..000[0011]000..000
>
>#define PKT_TX_TUNNEL_GENEVE (0x4ULL << 45)
>in binary: 000..000[0100]000..000
>
>So, I'm still saying there's a room for 11 more values.
>
>
>
>Olivier
>
[Harish] Okay thanks, got it. I shall send the v2 patch.
I also have to update the driver to use it as a value rather than as a
bitmapped value.
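
A minimal sketch of the difference (keeping the existing 45-bit shift and
taking the next free value, as suggested above):

---8<---
#define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45) /* next free 4-bit value */

/* wrong: bitmap-style test; value 0x5 shares bits with VXLAN (0x1) and
 * GENEVE (0x4), so this also fires for those tunnel types */
if (m->ol_flags & PKT_TX_TUNNEL_MPLSINUDP)
	/* ... */;

/* right: mask out the 4-bit field and compare the value */
if ((m->ol_flags & PKT_TX_TUNNEL_MASK) == PKT_TX_TUNNEL_MPLSINUDP)
	/* ... */;
---8<---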
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v5 2/2] ethdev: add traffic management API
@ 2017-06-09 16:51 1% ` Cristian Dumitrescu
0 siblings, 1 reply; 200+ results
From: Cristian Dumitrescu @ 2017-06-09 16:51 UTC (permalink / raw)
To: dev
Cc: thomas, jerin.jacob, balasubramanian.manoharan, hemant.agrawal,
shreyansh.jain, jasvinder.singh, wenzhuo.lu
This patch introduces the generic ethdev API for the traffic manager
capability, which includes: hierarchical scheduling, traffic shaping,
congestion management, packet marking.
Main features:
- Exposed as ethdev plugin capability (similar to rte_flow)
- Capability query API per port, per level and per node
- Scheduling algorithms: Strict Priority (SP), Weighed Fair Queuing (WFQ)
- Traffic shaping: single/dual rate, private (per node) and shared (by
multiple nodes) shapers
- Congestion management for hierarchy leaf nodes: algorithms of tail drop,
head drop, WRED; private (per node) and shared (by multiple nodes) WRED
contexts
- Packet marking: IEEE 802.1q (VLAN DEI), IETF RFC 3168 (IPv4/IPv6 ECN for
TCP and SCTP), IETF RFC 2597 (IPv4 / IPv6 DSCP)
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Balasubramanian.Manoharan <balasubramanian.manoharan@caviumnetworks.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
Changes in v5:
- Implemented feedback from Jerin [8]
- Add level parameter to node add API function
- Doxygen: fixed comments applicable to field below/before
- Doxygen: added missing @see
- Doxygen: fixed hooks in doc/api/doxy-api-index.md
- Doxygen: fixed table rendering
- Added copyright on API header file from Cavium and NXP to
existing Intel copyright
- MANTAINERS: added next-tm tree
- Added V4 ACKs from Jerin, Bala and Hemant
Changes in v4:
- Implemented feedback from Hemant [6]
- Capability API: Reworked the port, level and node capability API
data structure to remove confusion due to "summary across all
nodes" approach, which made it unclear whether a particular
capability is supported by all nodes or by at least one node.
- Capability API: Added flags for "all nodes have identical
capability set"
- Suspended state: documented the required behavior in Doxygen
description
- Implemented feedback from Jerin [7]
- Node add: added level parameter (see new API function:
rte_tm_node_add_check_level())
- RTE_TM_ETH_FRAMING_OVERHEAD, RTE_TM_ETH_FRAMING_OVERHEAD_FCS:
documented their usage in their Doxygen description
- Capability API: for each function, mention the related
capability field (Doxygen @see)
- stats_mask, capability_mask: document the enum flags used to
build each mask (Doxygen @see)
- Rename rte_tm_get_leaf_nodes() to
rte_tm_get_number_of_leaf_nodes()
- Doxygen: add @param[in, out] to the description of all API funcs
- Doxygen: fix hooks in doc/api/doxy-api-index.md
- Rename rte_tm_hierarchy_set() to rte_tm_hierarchy_commit(), improved
Doxygen description
- Node add, node delete: improved Doxygen description
- Fixed incorrect design assumption that packet-based weight mode for WFQ
is identical to WRR. As result, removed all references to WRR support.
Renamed the "scheduling mode" node parameters to "wfq_weight_mode".
Changes in v3:
- Implemented feedback from Jerin [5]
- Changed naming convention: scheddev -> tm
- Improvements on the capability API:
- Specification of marking capabilities per color
- WFQ/WRR groups: sp_n_children_max ->
wfq_wrr_n_children_per_group_max, added wfq_wrr_n_groups_max,
improved description of both, improved description of
wfq_wrr_weight_max
- Dynamic updates: added KEEP_LEVEL and CHANGE_LEVEL for parent
update
- Enforced/documented restrictions for root node (node_add() and
update())
- Enforced/documented shaper profile restrictions on PIR: PIR != 0,
PIR >= CIR
- Turned repetitive code in rte_tm.c into macro
- Removed dependency on rte_red.h file (added RED params to rte_tm.h)
- Color: removed "e_" from color names enum
- Fixed small Doxygen style issues
Changes in v2:
- Implemented feedback from Hemant [4]
- Improvements on the capability API
- Added capability API for hierarchy level
- Merged stats capability into the capability API
- Added dynamic updates
- Added non-leaf/leaf union to the node capability structure
- Renamed sp_priority_min to sp_n_priorities_max, added
clarifications
- Fixed description for sp_n_children_max
- Clarified and enforced rule on node ID range for leaf and non-leaf nodes
- Added API functions to get node type (i.e. leaf/non-leaf):
get_leaf_nodes(), node_type_get()
- Added clarification for the root node: its creation, parent, role
- Macro NODE_ID_NULL as root node's parent
- Description of the node_add() and node_parent_update() API funcs
- Added clarification for the first time add vs. subsequent updates rule
- Cleaned up the description for the node_add() function
- Statistics API improvements
- Merged stats capability into the capability API
- Added API function node_stats_update()
- Added more stats per packet color
- Added more error types
- Fixed small Doxygen style issues
Changes in v1 (since RFC [1]):
- Implemented as ethdev plugin (similar to rte_flow) as opposed to more
monolithic additions to ethdev itself
- Implemented feedback from Jerin [2] and Hemant [3]. Implemented all the
suggested items with only one exception, see the long list below,
hopefully nothing was forgotten.
- The item not done (hopefully for a good reason): driver-generated
object IDs. IMO the choice to have application-generated object IDs
adds marginal complexity to the driver (search ID function
required), but it provides huge simplification for the application.
The app does not need to worry about building & managing tree-like
structure for storing driver-generated object IDs, the app can use
its own convention for node IDs depending on the specific hierarchy
that it needs. Trivial example: identify all level-2 nodes with IDs
like 100, 200, 300, … and the level-3 nodes based on their level-2
parents: 110, 120, 130, 140, …, 210, 220, 230, 240, …, 310, 320,
330, … and level-4 nodes based on their level-3 parents: 111, 112,
113, 114, …, 121, 122, 123, 124, …). Moreover, see the change log
for the other related simplification that was implemented: leaf
nodes now have predefined IDs that are the same with their Ethernet
TX queue ID ( therefore no translation is required for leaf nodes).
- Capability API. Done per port and per node as well.
- Dual rate shapers
- Added configuration of private shaper (per node) directly from the
shaper profile as part of node API (no shaper ID needed for private
shapers), while the shared shapers are configured outside of the node
API using shaper profile and communicated to the node using shared
shaper ID. So there is no configuration overhead for shared shapers if
the app does not use any of them.
- Leaf nodes now have predefined IDs that are the same with their Ethernet
TX queue ID (therefore no translation is required for leaf nodes). This
is also used to differentiate between a leaf node and a non-leaf node.
- Domain-specific errors to give a precise indication of the error cause
(same as done by rte_flow)
- Packet marking API
- Packet length optional adjustment for shapers, positive (e.g. for adding
Ethernet framing overhead of 20 bytes) or negative (e.g. for rate
limiting based on IP packet bytes)
[1] RFC: http://dpdk.org/ml/archives/dev/2016-November/050956.html
[2] Jerin’s feedback on RFC: http://www.dpdk.org/ml/archives/dev/2017-January/054484.html
[3] Hemant’s feedback on RFC: http://www.dpdk.org/ml/archives/dev/2017-January/054866.html
[4] Hemant's feedback on v1: http://www.dpdk.org/ml/archives/dev/2017-February/058033.html
[5] Jerin's feedback on v1: http://www.dpdk.org/ml/archives/dev/2017-March/058895.html
[6] Hemant's feedback on v3: http://www.dpdk.org/ml/archives/dev/2017-March/062354.html
[7] Jerin's feedback on v3: http://www.dpdk.org/ml/archives/dev/2017-April/063429.html
[8] Jerin's feedback on v4: http://www.dpdk.org/ml/archives/dev/2017-May/066932.html
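
For context, a rough application-side sketch of the intended call flow
(hypothetical IDs and rates; capability checks and error handling elided):

---8<---
struct rte_tm_error err;
struct rte_tm_shaper_params sp = {
	.peak = { .rate = 1250000000, .size = 4096 }, /* bytes/s, bytes */
};
struct rte_tm_node_params np = { .shaper_profile_id = 0 };

rte_tm_shaper_profile_add(port_id, 0, &sp, &err);
/* root first (parent is RTE_TM_NODE_ID_NULL), then one leaf per TX
 * queue; leaf node IDs are predefined as the TX queue IDs */
rte_tm_node_add(port_id, 1000, RTE_TM_NODE_ID_NULL, 0, 1,
	RTE_TM_NODE_LEVEL_ID_ANY, &np, &err);
rte_tm_node_add(port_id, 0 /* = TX queue 0 */, 1000, 0, 1,
	RTE_TM_NODE_LEVEL_ID_ANY, &np, &err);
rte_tm_hierarchy_commit(port_id, 1 /* clear_on_fail */, &err);
---8<---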
MAINTAINERS | 5 +
lib/librte_ether/Makefile | 5 +-
lib/librte_ether/rte_ether_version.map | 30 +
lib/librte_ether/rte_tm.c | 438 ++++++++
lib/librte_ether/rte_tm.h | 1899 ++++++++++++++++++++++++++++++++
lib/librte_ether/rte_tm_driver.h | 366 ++++++
6 files changed, 2742 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_ether/rte_tm.c
create mode 100644 lib/librte_ether/rte_tm.h
create mode 100644 lib/librte_ether/rte_tm_driver.h
diff --git a/MAINTAINERS b/MAINTAINERS
index f6095ef..3c7414f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -240,6 +240,11 @@ Flow API
M: Adrien Mazarguil <adrien.mazarguil@6wind.com>
F: lib/librte_ether/rte_flow*
+Traffic Management API
+M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+T: git://dpdk.org/next/dpdk-next-tm
+F: lib/librte_ether/rte_tm*
+
Crypto API
M: Declan Doherty <declan.doherty@intel.com>
F: lib/librte_cryptodev/
diff --git a/lib/librte_ether/Makefile b/lib/librte_ether/Makefile
index 93fdde1..db692ae 100644
--- a/lib/librte_ether/Makefile
+++ b/lib/librte_ether/Makefile
@@ -1,6 +1,6 @@
# BSD LICENSE
#
-# Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+# Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
@@ -45,6 +45,7 @@ LIBABIVER := 6
SRCS-y += rte_ethdev.c
SRCS-y += rte_flow.c
+SRCS-y += rte_tm.c
#
# Export include files
@@ -56,5 +57,7 @@ SYMLINK-y-include += rte_eth_ctrl.h
SYMLINK-y-include += rte_dev_info.h
SYMLINK-y-include += rte_flow.h
SYMLINK-y-include += rte_flow_driver.h
+SYMLINK-y-include += rte_tm.h
+SYMLINK-y-include += rte_tm_driver.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map
index 2788e7b..5e8651d 100644
--- a/lib/librte_ether/rte_ether_version.map
+++ b/lib/librte_ether/rte_ether_version.map
@@ -161,4 +161,34 @@ DPDK_17.08 {
global:
rte_eth_dev_tm_ops_get;
+ rte_tm_get_number_of_leaf_nodes;
+ rte_tm_node_type_get;
+ rte_tm_capabilities_get;
+ rte_tm_level_capabilities_get;
+ rte_tm_node_capabilities_get;
+ rte_tm_wred_profile_add;
+ rte_tm_wred_profile_delete;
+ rte_tm_shared_wred_context_add_update;
+ rte_tm_shared_wred_context_delete;
+ rte_tm_shaper_profile_add;
+ rte_tm_shaper_profile_delete;
+ rte_tm_shared_shaper_add_update;
+ rte_tm_shared_shaper_delete;
+ rte_tm_node_add;
+ rte_tm_node_delete;
+ rte_tm_node_suspend;
+ rte_tm_node_resume;
+ rte_tm_hierarchy_commit;
+ rte_tm_node_parent_update;
+ rte_tm_node_shaper_update;
+ rte_tm_node_shared_shaper_update;
+ rte_tm_node_stats_update;
+ rte_tm_node_wfq_weight_mode_update;
+ rte_tm_node_cman_update;
+ rte_tm_node_wred_context_update;
+ rte_tm_node_shared_wred_context_update;
+ rte_tm_node_stats_read;
+ rte_tm_mark_vlan_dei;
+ rte_tm_mark_ip_ecn;
+ rte_tm_mark_ip_dscp;
} DPDK_17.05;
diff --git a/lib/librte_ether/rte_tm.c b/lib/librte_ether/rte_tm.c
new file mode 100644
index 0000000..7167965
--- /dev/null
+++ b/lib/librte_ether/rte_tm.c
@@ -0,0 +1,438 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdint.h>
+
+#include <rte_errno.h>
+#include "rte_ethdev.h"
+#include "rte_tm_driver.h"
+#include "rte_tm.h"
+
+/* Get generic traffic manager operations structure from a port. */
+const struct rte_tm_ops *
+rte_tm_ops_get(uint8_t port_id, struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ const struct rte_tm_ops *ops;
+
+ if (!rte_eth_dev_is_valid_port(port_id)) {
+ rte_tm_error_set(error,
+ ENODEV,
+ RTE_TM_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ rte_strerror(ENODEV));
+ return NULL;
+ }
+
+ if ((dev->dev_ops->tm_ops_get == NULL) ||
+ (dev->dev_ops->tm_ops_get(dev, &ops) != 0) ||
+ (ops == NULL)) {
+ rte_tm_error_set(error,
+ ENOSYS,
+ RTE_TM_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ rte_strerror(ENOSYS));
+ return NULL;
+ }
+
+ return ops;
+}
+
+#define RTE_TM_FUNC(port_id, func) \
+({ \
+ const struct rte_tm_ops *ops = \
+ rte_tm_ops_get(port_id, error); \
+ if (ops == NULL) \
+ return -rte_errno; \
+ \
+ if (ops->func == NULL) \
+ return -rte_tm_error_set(error, \
+ ENOSYS, \
+ RTE_TM_ERROR_TYPE_UNSPECIFIED, \
+ NULL, \
+ rte_strerror(ENOSYS)); \
+ \
+ ops->func; \
+})
+
+/* Get number of leaf nodes */
+int
+rte_tm_get_number_of_leaf_nodes(uint8_t port_id,
+ uint32_t *n_leaf_nodes,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ const struct rte_tm_ops *ops =
+ rte_tm_ops_get(port_id, error);
+
+ if (ops == NULL)
+ return -rte_errno;
+
+ if (n_leaf_nodes == NULL) {
+ rte_tm_error_set(error,
+ EINVAL,
+ RTE_TM_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ rte_strerror(EINVAL));
+ return -rte_errno;
+ }
+
+ *n_leaf_nodes = dev->data->nb_tx_queues;
+ return 0;
+}
+
+/* Check node type (leaf or non-leaf) */
+int
+rte_tm_node_type_get(uint8_t port_id,
+ uint32_t node_id,
+ int *is_leaf,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_type_get)(dev,
+ node_id, is_leaf, error);
+}
+
+/* Get capabilities */
+int rte_tm_capabilities_get(uint8_t port_id,
+ struct rte_tm_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, capabilities_get)(dev,
+ cap, error);
+}
+
+/* Get level capabilities */
+int rte_tm_level_capabilities_get(uint8_t port_id,
+ uint32_t level_id,
+ struct rte_tm_level_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, level_capabilities_get)(dev,
+ level_id, cap, error);
+}
+
+/* Get node capabilities */
+int rte_tm_node_capabilities_get(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_node_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_capabilities_get)(dev,
+ node_id, cap, error);
+}
+
+/* Add WRED profile */
+int rte_tm_wred_profile_add(uint8_t port_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_wred_params *profile,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, wred_profile_add)(dev,
+ wred_profile_id, profile, error);
+}
+
+/* Delete WRED profile */
+int rte_tm_wred_profile_delete(uint8_t port_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, wred_profile_delete)(dev,
+ wred_profile_id, error);
+}
+
+/* Add/update shared WRED context */
+int rte_tm_shared_wred_context_add_update(uint8_t port_id,
+ uint32_t shared_wred_context_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, shared_wred_context_add_update)(dev,
+ shared_wred_context_id, wred_profile_id, error);
+}
+
+/* Delete shared WRED context */
+int rte_tm_shared_wred_context_delete(uint8_t port_id,
+ uint32_t shared_wred_context_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, shared_wred_context_delete)(dev,
+ shared_wred_context_id, error);
+}
+
+/* Add shaper profile */
+int rte_tm_shaper_profile_add(uint8_t port_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, shaper_profile_add)(dev,
+ shaper_profile_id, profile, error);
+}
+
+/* Delete shaper profile */
+int rte_tm_shaper_profile_delete(uint8_t port_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, shaper_profile_delete)(dev,
+ shaper_profile_id, error);
+}
+
+/* Add shared shaper */
+int rte_tm_shared_shaper_add_update(uint8_t port_id,
+ uint32_t shared_shaper_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, shared_shaper_add_update)(dev,
+ shared_shaper_id, shaper_profile_id, error);
+}
+
+/* Delete shared shaper */
+int rte_tm_shared_shaper_delete(uint8_t port_id,
+ uint32_t shared_shaper_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, shared_shaper_delete)(dev,
+ shared_shaper_id, error);
+}
+
+/* Add node to port traffic manager hierarchy */
+int rte_tm_node_add(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t parent_node_id,
+ uint32_t priority,
+ uint32_t weight,
+ uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_add)(dev,
+ node_id, parent_node_id, priority, weight, level_id,
+ params, error);
+}
+
+/* Delete node from traffic manager hierarchy */
+int rte_tm_node_delete(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_delete)(dev,
+ node_id, error);
+}
+
+/* Suspend node */
+int rte_tm_node_suspend(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_suspend)(dev,
+ node_id, error);
+}
+
+/* Resume node */
+int rte_tm_node_resume(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_resume)(dev,
+ node_id, error);
+}
+
+/* Commit the initial port traffic manager hierarchy */
+int rte_tm_hierarchy_commit(uint8_t port_id,
+ int clear_on_fail,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, hierarchy_commit)(dev,
+ clear_on_fail, error);
+}
+
+/* Update node parent */
+int rte_tm_node_parent_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t parent_node_id,
+ uint32_t priority,
+ uint32_t weight,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_parent_update)(dev,
+ node_id, parent_node_id, priority, weight, error);
+}
+
+/* Update node private shaper */
+int rte_tm_node_shaper_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_shaper_update)(dev,
+ node_id, shaper_profile_id, error);
+}
+
+/* Update node shared shapers */
+int rte_tm_node_shared_shaper_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t shared_shaper_id,
+ int add,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_shared_shaper_update)(dev,
+ node_id, shared_shaper_id, add, error);
+}
+
+/* Update node stats */
+int rte_tm_node_stats_update(uint8_t port_id,
+ uint32_t node_id,
+ uint64_t stats_mask,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_stats_update)(dev,
+ node_id, stats_mask, error);
+}
+
+/* Update WFQ weight mode */
+int rte_tm_node_wfq_weight_mode_update(uint8_t port_id,
+ uint32_t node_id,
+ int *wfq_weight_mode,
+ uint32_t n_sp_priorities,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_wfq_weight_mode_update)(dev,
+ node_id, wfq_weight_mode, n_sp_priorities, error);
+}
+
+/* Update node congestion management mode */
+int rte_tm_node_cman_update(uint8_t port_id,
+ uint32_t node_id,
+ enum rte_tm_cman_mode cman,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_cman_update)(dev,
+ node_id, cman, error);
+}
+
+/* Update node private WRED context */
+int rte_tm_node_wred_context_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_wred_context_update)(dev,
+ node_id, wred_profile_id, error);
+}
+
+/* Update node shared WRED context */
+int rte_tm_node_shared_wred_context_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t shared_wred_context_id,
+ int add,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_shared_wred_context_update)(dev,
+ node_id, shared_wred_context_id, add, error);
+}
+
+/* Read and/or clear stats counters for specific node */
+int rte_tm_node_stats_read(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_node_stats *stats,
+ uint64_t *stats_mask,
+ int clear,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_stats_read)(dev,
+ node_id, stats, stats_mask, clear, error);
+}
+
+/* Packet marking - VLAN DEI */
+int rte_tm_mark_vlan_dei(uint8_t port_id,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, mark_vlan_dei)(dev,
+ mark_green, mark_yellow, mark_red, error);
+}
+
+/* Packet marking - IPv4/IPv6 ECN */
+int rte_tm_mark_ip_ecn(uint8_t port_id,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, mark_ip_ecn)(dev,
+ mark_green, mark_yellow, mark_red, error);
+}
+
+/* Packet marking - IPv4/IPv6 DSCP */
+int rte_tm_mark_ip_dscp(uint8_t port_id,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, mark_ip_dscp)(dev,
+ mark_green, mark_yellow, mark_red, error);
+}
diff --git a/lib/librte_ether/rte_tm.h b/lib/librte_ether/rte_tm.h
new file mode 100644
index 0000000..9513f13
--- /dev/null
+++ b/lib/librte_ether/rte_tm.h
@@ -0,0 +1,1899 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation.
+ * Copyright(c) 2017 Cavium.
+ * Copyright(c) 2017 NXP.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_TM_H__
+#define __INCLUDE_RTE_TM_H__
+
+/**
+ * @file
+ * RTE Generic Traffic Manager API
+ *
+ * This interface provides the ability to configure the traffic manager in a
+ * generic way. It includes features such as: hierarchical scheduling,
+ * traffic shaping, congestion management, packet marking, etc.
+ */
+
+#include <stdint.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Ethernet framing overhead.
+ *
+ * Overhead fields per Ethernet frame:
+ * 1. Preamble: 7 bytes;
+ * 2. Start of Frame Delimiter (SFD): 1 byte;
+ * 3. Inter-Frame Gap (IFG): 12 bytes.
+ *
+ * One of the typical values for the *pkt_length_adjust* field of the shaper
+ * profile.
+ *
+ * @see struct rte_tm_shaper_params
+ */
+#define RTE_TM_ETH_FRAMING_OVERHEAD 20
+
+/**
+ * Ethernet framing overhead including the Frame Check Sequence (FCS) field.
+ * Useful when FCS is generated and added at the end of the Ethernet frame on
+ * TX side without any SW intervention.
+ *
+ * One of the typical values for the pkt_length_adjust field of the shaper
+ * profile.
+ *
+ * @see struct rte_tm_shaper_params
+ */
+#define RTE_TM_ETH_FRAMING_OVERHEAD_FCS 24
+
+/**
+ * Invalid WRED profile ID.
+ *
+ * @see struct rte_tm_node_params
+ * @see rte_tm_node_add()
+ * @see rte_tm_node_wred_context_update()
+ */
+#define RTE_TM_WRED_PROFILE_ID_NONE UINT32_MAX
+
+/**
+ * Invalid shaper profile ID.
+ *
+ * @see struct rte_tm_node_params
+ * @see rte_tm_node_add()
+ * @see rte_tm_node_shaper_update()
+ */
+#define RTE_TM_SHAPER_PROFILE_ID_NONE UINT32_MAX
+
+/**
+ * Node ID for the parent of the root node.
+ *
+ * @see rte_tm_node_add()
+ */
+#define RTE_TM_NODE_ID_NULL UINT32_MAX
+
+/**
+ * Node level ID used to disable level ID checking.
+ *
+ * @see rte_tm_node_add()
+ */
+#define RTE_TM_NODE_LEVEL_ID_ANY UINT32_MAX
+
+/**
+ * Color
+ */
+enum rte_tm_color {
+ RTE_TM_GREEN = 0, /**< Green */
+ RTE_TM_YELLOW, /**< Yellow */
+ RTE_TM_RED, /**< Red */
+ RTE_TM_COLORS /**< Number of colors */
+};
+
+/**
+ * Node statistics counter type
+ */
+enum rte_tm_stats_type {
+ /** Number of packets scheduled from current node. */
+ RTE_TM_STATS_N_PKTS = 1 << 0,
+
+ /** Number of bytes scheduled from current node. */
+ RTE_TM_STATS_N_BYTES = 1 << 1,
+
+ /** Number of green packets dropped by current leaf node. */
+ RTE_TM_STATS_N_PKTS_GREEN_DROPPED = 1 << 2,
+
+ /** Number of yellow packets dropped by current leaf node. */
+ RTE_TM_STATS_N_PKTS_YELLOW_DROPPED = 1 << 3,
+
+ /** Number of red packets dropped by current leaf node. */
+ RTE_TM_STATS_N_PKTS_RED_DROPPED = 1 << 4,
+
+ /** Number of green bytes dropped by current leaf node. */
+ RTE_TM_STATS_N_BYTES_GREEN_DROPPED = 1 << 5,
+
+ /** Number of yellow bytes dropped by current leaf node. */
+ RTE_TM_STATS_N_BYTES_YELLOW_DROPPED = 1 << 6,
+
+ /** Number of red bytes dropped by current leaf node. */
+ RTE_TM_STATS_N_BYTES_RED_DROPPED = 1 << 7,
+
+ /** Number of packets currently waiting in the packet queue of current
+ * leaf node.
+ */
+ RTE_TM_STATS_N_PKTS_QUEUED = 1 << 8,
+
+ /** Number of bytes currently waiting in the packet queue of current
+ * leaf node.
+ */
+ RTE_TM_STATS_N_BYTES_QUEUED = 1 << 9,
+};
+
+/**
+ * Node statistics counters
+ */
+struct rte_tm_node_stats {
+ /** Number of packets scheduled from current node. */
+ uint64_t n_pkts;
+
+ /** Number of bytes scheduled from current node. */
+ uint64_t n_bytes;
+
+ /** Statistics counters for leaf nodes only. */
+ struct {
+ /** Number of packets dropped by current leaf node per each
+ * color.
+ */
+ uint64_t n_pkts_dropped[RTE_TM_COLORS];
+
+ /** Number of bytes dropped by current leaf node per each
+ * color.
+ */
+ uint64_t n_bytes_dropped[RTE_TM_COLORS];
+
+ /** Number of packets currently waiting in the packet queue of
+ * current leaf node.
+ */
+ uint64_t n_pkts_queued;
+
+ /** Number of bytes currently waiting in the packet queue of
+ * current leaf node.
+ */
+ uint64_t n_bytes_queued;
+ } leaf;
+};
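+
+/* Usage sketch (illustrative only; port_id and node_id are placeholders):
+ * read and clear the counters of one leaf node, then inspect only the
+ * fields that the returned *stats_mask* reports as valid.
+ *
+ *	struct rte_tm_node_stats stats;
+ *	uint64_t mask;
+ *	struct rte_tm_error error;
+ *
+ *	if (rte_tm_node_stats_read(port_id, node_id, &stats, &mask, 1,
+ *			&error) == 0 &&
+ *	    (mask & RTE_TM_STATS_N_PKTS_QUEUED))
+ *		printf("queued pkts: %" PRIu64 "\n", stats.leaf.n_pkts_queued);
+ */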
+
+/**
+ * Traffic manager dynamic updates
+ */
+enum rte_tm_dynamic_update_type {
+ /** Dynamic parent node update. The new parent node is located on same
+ * hierarchy level as the former parent node. Consequently, the node
+ * whose parent is changed preserves its hierarchy level.
+ */
+ RTE_TM_UPDATE_NODE_PARENT_KEEP_LEVEL = 1 << 0,
+
+ /** Dynamic parent node update. The new parent node is located on
+ * different hierarchy level than the former parent node. Consequently,
+ * the node whose parent is changed also changes its hierarchy level.
+ */
+ RTE_TM_UPDATE_NODE_PARENT_CHANGE_LEVEL = 1 << 1,
+
+ /** Dynamic node add/delete. */
+ RTE_TM_UPDATE_NODE_ADD_DELETE = 1 << 2,
+
+ /** Suspend/resume nodes. */
+ RTE_TM_UPDATE_NODE_SUSPEND_RESUME = 1 << 3,
+
+ /** Dynamic switch between byte-based and packet-based WFQ weights. */
+ RTE_TM_UPDATE_NODE_WFQ_WEIGHT_MODE = 1 << 4,
+
+ /** Dynamic update on number of SP priorities. */
+ RTE_TM_UPDATE_NODE_N_SP_PRIORITIES = 1 << 5,
+
+ /** Dynamic update of congestion management mode for leaf nodes. */
+ RTE_TM_UPDATE_NODE_CMAN = 1 << 6,
+
+ /** Dynamic update of the set of enabled stats counter types. */
+ RTE_TM_UPDATE_NODE_STATS = 1 << 7,
+};
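+
+/* Usage sketch (illustrative only): before attempting a node add/delete after
+ * rte_tm_hierarchy_commit(), check that the port advertises support for it in
+ * its capability set.
+ *
+ *	struct rte_tm_capabilities cap;
+ *
+ *	if (rte_tm_capabilities_get(port_id, &cap, NULL) == 0 &&
+ *	    (cap.dynamic_update_mask & RTE_TM_UPDATE_NODE_ADD_DELETE))
+ *		... dynamic rte_tm_node_add()/rte_tm_node_delete() allowed ...
+ */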
+
+/**
+ * Traffic manager capabilities
+ */
+struct rte_tm_capabilities {
+ /** Maximum number of nodes. */
+ uint32_t n_nodes_max;
+
+ /** Maximum number of levels (i.e. number of nodes connecting the root
+ * node with any leaf node, including the root and the leaf).
+ */
+ uint32_t n_levels_max;
+
+ /** When non-zero, this flag indicates that all the non-leaf nodes
+ * (with the exception of the root node) have identical capability set.
+ */
+ int non_leaf_nodes_identical;
+
+ /** When non-zero, this flag indicates that all the leaf nodes have
+ * identical capability set.
+ */
+ int leaf_nodes_identical;
+
+ /** Maximum number of shapers, either private or shared. In case the
+ * implementation does not share any resources between private and
+ * shared shapers, it is typically equal to the sum of
+ * *shaper_private_n_max* and *shaper_shared_n_max*.
+ */
+ uint32_t shaper_n_max;
+
+ /** Maximum number of private shapers. Indicates the maximum number of
+ * nodes that can concurrently have their private shaper enabled.
+ */
+ uint32_t shaper_private_n_max;
+
+ /** Maximum number of private shapers that support dual rate shaping.
+ * Indicates the maximum number of nodes that can concurrently have
+ * their private shaper enabled with dual rate support. Only valid when
+ * private shapers are supported. The value of zero indicates that dual
+ * rate shaping is not available for private shapers. The maximum value
+ * is *shaper_private_n_max*.
+ */
+ int shaper_private_dual_rate_n_max;
+
+ /** Minimum committed/peak rate (bytes per second) for any private
+ * shaper. Valid only when private shapers are supported.
+ */
+ uint64_t shaper_private_rate_min;
+
+ /** Maximum committed/peak rate (bytes per second) for any private
+ * shaper. Valid only when private shapers are supported.
+ */
+ uint64_t shaper_private_rate_max;
+
+ /** Maximum number of shared shapers. The value of zero indicates that
+ * shared shapers are not supported.
+ */
+ uint32_t shaper_shared_n_max;
+
+ /** Maximum number of nodes that can share the same shared shaper.
+ * Only valid when shared shapers are supported.
+ */
+ uint32_t shaper_shared_n_nodes_per_shaper_max;
+
+ /** Maximum number of shared shapers a node can be part of. This
+ * parameter indicates that there is at least one node that can be
+ * configured with this many shared shapers, which might not be true for
+ * all the nodes. Only valid when shared shapers are supported, in which
+ * case it ranges from 1 to *shaper_shared_n_max*.
+ */
+ uint32_t shaper_shared_n_shapers_per_node_max;
+
+ /** Maximum number of shared shapers that can be configured with dual
+ * rate shaping. The value of zero indicates that dual rate shaping
+ * support is not available for shared shapers.
+ */
+ uint32_t shaper_shared_dual_rate_n_max;
+
+ /** Minimum committed/peak rate (bytes per second) for any shared
+ * shaper. Only valid when shared shapers are supported.
+ */
+ uint64_t shaper_shared_rate_min;
+
+ /** Maximum committed/peak rate (bytes per second) for any shared
+ * shaper. Only valid when shared shapers are supported.
+ */
+ uint64_t shaper_shared_rate_max;
+
+ /** Minimum value allowed for packet length adjustment for any private
+ * or shared shaper.
+ */
+ int shaper_pkt_length_adjust_min;
+
+ /** Maximum value allowed for packet length adjustment for any private
+ * or shared shaper.
+ */
+ int shaper_pkt_length_adjust_max;
+
+ /** Maximum number of children nodes. This parameter indicates that
+ * there is at least one non-leaf node that can be configured with this
+ * many children nodes, which might not be true for all the non-leaf
+ * nodes.
+ */
+ uint32_t sched_n_children_max;
+
+ /** Maximum number of supported priority levels. This parameter
+ * indicates that there is at least one non-leaf node that can be
+ * configured with this many priority levels for managing its children
+ * nodes, which might not be true for all the non-leaf nodes. The value
+ * of zero is invalid. The value of 1 indicates that only priority 0 is
+ * supported, which essentially means that Strict Priority (SP)
+ * algorithm is not supported.
+ */
+ uint32_t sched_sp_n_priorities_max;
+
+ /** Maximum number of sibling nodes that can have the same priority at
+ * any given time, i.e. maximum size of the WFQ sibling node group. This
+ * parameter indicates there is at least one non-leaf node that meets
+ * this condition, which might not be true for all the non-leaf nodes.
+ * The value of zero is invalid. The value of 1 indicates that WFQ
+ * algorithm is not supported. The maximum value is
+ * *sched_n_children_max*.
+ */
+ uint32_t sched_wfq_n_children_per_group_max;
+
+ /** Maximum number of priority levels that can have more than one child
+ * node at any given time, i.e. maximum number of WFQ sibling node
+ * groups that have two or more members. This parameter indicates there
+ * is at least one non-leaf node that meets this condition, which might
+ * not be true for all the non-leaf nodes. The value of zero states that
+ * WFQ algorithm is not supported. The value of 1 indicates that
+ * (*sched_sp_n_priorities_max* - 1) priority levels have at most one
+ * child node, so there can be only one priority level with two or
+ * more sibling nodes making up a WFQ group. The maximum value is:
+ * min(floor(*sched_n_children_max* / 2), *sched_sp_n_priorities_max*).
+ */
+ uint32_t sched_wfq_n_groups_max;
+
+ /** Maximum WFQ weight. The value of 1 indicates that all sibling nodes
+ * with same priority have the same WFQ weight, so WFQ is reduced to FQ.
+ */
+ uint32_t sched_wfq_weight_max;
+
+ /** Head drop algorithm support. When non-zero, this parameter
+ * indicates that there is at least one leaf node that supports the head
+ * drop algorithm, which might not be true for all the leaf nodes.
+ */
+ int cman_head_drop_supported;
+
+ /** Maximum number of WRED contexts, either private or shared. In case
+ * the implementation does not share any resources between private and
+ * shared WRED contexts, it is typically equal to the sum of
+ * *cman_wred_context_private_n_max* and
+ * *cman_wred_context_shared_n_max*.
+ */
+ uint32_t cman_wred_context_n_max;
+
+ /** Maximum number of private WRED contexts. Indicates the maximum
+ * number of leaf nodes that can concurrently have their private WRED
+ * context enabled.
+ */
+ uint32_t cman_wred_context_private_n_max;
+
+ /** Maximum number of shared WRED contexts. The value of zero
+ * indicates that shared WRED contexts are not supported.
+ */
+ uint32_t cman_wred_context_shared_n_max;
+
+ /** Maximum number of leaf nodes that can share the same WRED context.
+ * Only valid when shared WRED contexts are supported.
+ */
+ uint32_t cman_wred_context_shared_n_nodes_per_context_max;
+
+ /** Maximum number of shared WRED contexts a leaf node can be part of.
+ * This parameter indicates that there is at least one leaf node that
+ * can be configured with this many shared WRED contexts, which might
+ * not be true for all the leaf nodes. Only valid when shared WRED
+ * contexts are supported, in which case it ranges from 1 to
+ * *cman_wred_context_shared_n_max*.
+ */
+ uint32_t cman_wred_context_shared_n_contexts_per_node_max;
+
+ /** Support for VLAN DEI packet marking (per color). */
+ int mark_vlan_dei_supported[RTE_TM_COLORS];
+
+ /** Support for IPv4/IPv6 ECN marking of TCP packets (per color). */
+ int mark_ip_ecn_tcp_supported[RTE_TM_COLORS];
+
+ /** Support for IPv4/IPv6 ECN marking of SCTP packets (per color). */
+ int mark_ip_ecn_sctp_supported[RTE_TM_COLORS];
+
+ /** Support for IPv4/IPv6 DSCP packet marking (per color). */
+ int mark_ip_dscp_supported[RTE_TM_COLORS];
+
+ /** Set of supported dynamic update operations.
+ * @see enum rte_tm_dynamic_update_type
+ */
+ uint64_t dynamic_update_mask;
+
+ /** Set of supported statistics counter types.
+ * @see enum rte_tm_stats_type
+ */
+ uint64_t stats_mask;
+};
+
+/**
+ * Traffic manager level capabilities
+ */
+struct rte_tm_level_capabilities {
+ /** Maximum number of nodes for the current hierarchy level. */
+ uint32_t n_nodes_max;
+
+ /** Maximum number of non-leaf nodes for the current hierarchy level.
+	 * The value of 0 indicates that the current level only supports leaf
+ * nodes. The maximum value is *n_nodes_max*.
+ */
+ uint32_t n_nodes_nonleaf_max;
+
+ /** Maximum number of leaf nodes for the current hierarchy level. The
+	 * value of 0 indicates that the current level only supports non-leaf
+ * nodes. The maximum value is *n_nodes_max*.
+ */
+ uint32_t n_nodes_leaf_max;
+
+ /** When non-zero, this flag indicates that all the non-leaf nodes on
+ * this level have identical capability set. Valid only when
+ * *n_nodes_nonleaf_max* is non-zero.
+ */
+ int non_leaf_nodes_identical;
+
+ /** When non-zero, this flag indicates that all the leaf nodes on this
+ * level have identical capability set. Valid only when
+ * *n_nodes_leaf_max* is non-zero.
+ */
+ int leaf_nodes_identical;
+
+ union {
+ /** Items valid only for the non-leaf nodes on this level. */
+ struct {
+ /** Private shaper support. When non-zero, it indicates
+ * there is at least one non-leaf node on this level
+ * with private shaper support, which may not be the
+ * case for all the non-leaf nodes on this level.
+ */
+ int shaper_private_supported;
+
+ /** Dual rate support for private shaper. Valid only
+ * when private shaper is supported for the non-leaf
+ * nodes on the current level. When non-zero, it
+ * indicates there is at least one non-leaf node on this
+ * level with dual rate private shaper support, which
+ * may not be the case for all the non-leaf nodes on
+ * this level.
+ */
+ int shaper_private_dual_rate_supported;
+
+ /** Minimum committed/peak rate (bytes per second) for
+ * private shapers of the non-leaf nodes of this level.
+ * Valid only when private shaper is supported on this
+ * level.
+ */
+ uint64_t shaper_private_rate_min;
+
+ /** Maximum committed/peak rate (bytes per second) for
+ * private shapers of the non-leaf nodes on this level.
+ * Valid only when private shaper is supported on this
+ * level.
+ */
+ uint64_t shaper_private_rate_max;
+
+ /** Maximum number of shared shapers that any non-leaf
+ * node on this level can be part of. The value of zero
+ * indicates that shared shapers are not supported by
+ * the non-leaf nodes on this level. When non-zero, it
+ * indicates there is at least one non-leaf node on this
+ * level that meets this condition, which may not be the
+ * case for all the non-leaf nodes on this level.
+ */
+ uint32_t shaper_shared_n_max;
+
+ /** Maximum number of children nodes. This parameter
+ * indicates that there is at least one non-leaf node on
+ * this level that can be configured with this many
+ * children nodes, which might not be true for all the
+ * non-leaf nodes on this level.
+ */
+ uint32_t sched_n_children_max;
+
+ /** Maximum number of supported priority levels. This
+ * parameter indicates that there is at least one
+ * non-leaf node on this level that can be configured
+ * with this many priority levels for managing its
+ * children nodes, which might not be true for all the
+ * non-leaf nodes on this level. The value of zero is
+ * invalid. The value of 1 indicates that only priority
+ * 0 is supported, which essentially means that Strict
+ * Priority (SP) algorithm is not supported on this
+ * level.
+ */
+ uint32_t sched_sp_n_priorities_max;
+
+ /** Maximum number of sibling nodes that can have the
+ * same priority at any given time, i.e. maximum size of
+ * the WFQ sibling node group. This parameter indicates
+ * there is at least one non-leaf node on this level
+ * that meets this condition, which may not be true for
+ * all the non-leaf nodes on this level. The value of
+ * zero is invalid. The value of 1 indicates that WFQ
+ * algorithm is not supported on this level. The maximum
+ * value is *sched_n_children_max*.
+ */
+ uint32_t sched_wfq_n_children_per_group_max;
+
+ /** Maximum number of priority levels that can have
+ * more than one child node at any given time, i.e.
+ * maximum number of WFQ sibling node groups that
+ * have two or more members. This parameter indicates
+ * there is at least one non-leaf node on this level
+ * that meets this condition, which might not be true
+ * for all the non-leaf nodes. The value of zero states
+ * that WFQ algorithm is not supported on this level.
+ * The value of 1 indicates that
+ * (*sched_sp_n_priorities_max* - 1) priority levels on
+ * this level have at most one child node, so there can
+ * be only one priority level with two or more sibling
+ * nodes making up a WFQ group on this level. The
+ * maximum value is:
+ * min(floor(*sched_n_children_max* / 2),
+ * *sched_sp_n_priorities_max*).
+ */
+ uint32_t sched_wfq_n_groups_max;
+
+ /** Maximum WFQ weight. The value of 1 indicates that
+ * all sibling nodes on this level with same priority
+ * have the same WFQ weight, so on this level WFQ is
+ * reduced to FQ.
+ */
+ uint32_t sched_wfq_weight_max;
+
+ /** Mask of statistics counter types supported by the
+ * non-leaf nodes on this level. Every supported
+ * statistics counter type is supported by at least one
+ * non-leaf node on this level, which may not be true
+ * for all the non-leaf nodes on this level.
+ * @see enum rte_tm_stats_type
+ */
+ uint64_t stats_mask;
+ } nonleaf;
+
+ /** Items valid only for the leaf nodes on this level. */
+ struct {
+ /** Private shaper support. When non-zero, it indicates
+ * there is at least one leaf node on this level with
+ * private shaper support, which may not be the case for
+ * all the leaf nodes on this level.
+ */
+ int shaper_private_supported;
+
+ /** Dual rate support for private shaper. Valid only
+ * when private shaper is supported for the leaf nodes
+ * on this level. When non-zero, it indicates there is
+ * at least one leaf node on this level with dual rate
+ * private shaper support, which may not be the case for
+ * all the leaf nodes on this level.
+ */
+ int shaper_private_dual_rate_supported;
+
+ /** Minimum committed/peak rate (bytes per second) for
+ * private shapers of the leaf nodes of this level.
+ * Valid only when private shaper is supported for the
+ * leaf nodes on this level.
+ */
+ uint64_t shaper_private_rate_min;
+
+ /** Maximum committed/peak rate (bytes per second) for
+ * private shapers of the leaf nodes on this level.
+ * Valid only when private shaper is supported for the
+ * leaf nodes on this level.
+ */
+ uint64_t shaper_private_rate_max;
+
+ /** Maximum number of shared shapers that any leaf node
+ * on this level can be part of. The value of zero
+ * indicates that shared shapers are not supported by
+ * the leaf nodes on this level. When non-zero, it
+ * indicates there is at least one leaf node on this
+ * level that meets this condition, which may not be the
+ * case for all the leaf nodes on this level.
+ */
+ uint32_t shaper_shared_n_max;
+
+ /** Head drop algorithm support. When non-zero, this
+ * parameter indicates that there is at least one leaf
+ * node on this level that supports the head drop
+ * algorithm, which might not be true for all the leaf
+ * nodes on this level.
+ */
+ int cman_head_drop_supported;
+
+ /** Private WRED context support. When non-zero, it
+ * indicates there is at least one node on this level
+ * with private WRED context support, which may not be
+ * true for all the leaf nodes on this level.
+ */
+ int cman_wred_context_private_supported;
+
+ /** Maximum number of shared WRED contexts that any
+ * leaf node on this level can be part of. The value of
+ * zero indicates that shared WRED contexts are not
+ * supported by the leaf nodes on this level. When
+ * non-zero, it indicates there is at least one leaf
+ * node on this level that meets this condition, which
+ * may not be the case for all the leaf nodes on this
+ * level.
+ */
+ uint32_t cman_wred_context_shared_n_max;
+
+ /** Mask of statistics counter types supported by the
+ * leaf nodes on this level. Every supported statistics
+ * counter type is supported by at least one leaf node
+ * on this level, which may not be true for all the leaf
+ * nodes on this level.
+ * @see enum rte_tm_stats_type
+ */
+ uint64_t stats_mask;
+ } leaf;
+ };
+};
+
+/**
+ * Traffic manager node capabilities
+ */
+struct rte_tm_node_capabilities {
+ /** Private shaper support for the current node. */
+ int shaper_private_supported;
+
+ /** Dual rate shaping support for private shaper of current node.
+ * Valid only when private shaper is supported by the current node.
+ */
+ int shaper_private_dual_rate_supported;
+
+ /** Minimum committed/peak rate (bytes per second) for private
+ * shaper of current node. Valid only when private shaper is supported
+ * by the current node.
+ */
+ uint64_t shaper_private_rate_min;
+
+ /** Maximum committed/peak rate (bytes per second) for private
+ * shaper of current node. Valid only when private shaper is supported
+ * by the current node.
+ */
+ uint64_t shaper_private_rate_max;
+
+ /** Maximum number of shared shapers the current node can be part of.
+ * The value of zero indicates that shared shapers are not supported by
+ * the current node.
+ */
+ uint32_t shaper_shared_n_max;
+
+ union {
+ /** Items valid only for non-leaf nodes. */
+ struct {
+ /** Maximum number of children nodes. */
+ uint32_t sched_n_children_max;
+
+ /** Maximum number of supported priority levels. The
+ * value of zero is invalid. The value of 1 indicates
+ * that only priority 0 is supported, which essentially
+ * means that Strict Priority (SP) algorithm is not
+ * supported.
+ */
+ uint32_t sched_sp_n_priorities_max;
+
+ /** Maximum number of sibling nodes that can have the
+ * same priority at any given time, i.e. maximum size
+ * of the WFQ sibling node group. The value of zero
+ * is invalid. The value of 1 indicates that WFQ
+ * algorithm is not supported. The maximum value is
+ * *sched_n_children_max*.
+ */
+ uint32_t sched_wfq_n_children_per_group_max;
+
+ /** Maximum number of priority levels that can have
+ * more than one child node at any given time, i.e.
+ * maximum number of WFQ sibling node groups that have
+ * two or more members. The value of zero states that
+ * WFQ algorithm is not supported. The value of 1
+ * indicates that (*sched_sp_n_priorities_max* - 1)
+ * priority levels have at most one child node, so there
+ * can be only one priority level with two or more
+ * sibling nodes making up a WFQ group. The maximum
+ * value is: min(floor(*sched_n_children_max* / 2),
+ * *sched_sp_n_priorities_max*).
+ */
+ uint32_t sched_wfq_n_groups_max;
+
+ /** Maximum WFQ weight. The value of 1 indicates that
+ * all sibling nodes with same priority have the same
+ * WFQ weight, so WFQ is reduced to FQ.
+ */
+ uint32_t sched_wfq_weight_max;
+ } nonleaf;
+
+ /** Items valid only for leaf nodes. */
+ struct {
+ /** Head drop algorithm support for current node. */
+ int cman_head_drop_supported;
+
+ /** Private WRED context support for current node. */
+ int cman_wred_context_private_supported;
+
+ /** Maximum number of shared WRED contexts the current
+ * node can be part of. The value of zero indicates that
+ * shared WRED contexts are not supported by the current
+ * node.
+ */
+ uint32_t cman_wred_context_shared_n_max;
+ } leaf;
+ };
+
+ /** Mask of statistics counter types supported by the current node.
+ * @see enum rte_tm_stats_type
+ */
+ uint64_t stats_mask;
+};
+
+/**
+ * Congestion management (CMAN) mode
+ *
+ * This is used for controlling the admission of packets into a packet queue or
+ * group of packet queues on congestion. When a new packet is to be enqueued
+ * while the queue is full, the *tail drop* algorithm drops the new packet and
+ * leaves the queue unmodified, whereas the *head drop* algorithm drops the
+ * packet at the head of the queue (the oldest packet waiting in the queue)
+ * and admits the new packet at the tail of the queue.
+ *
+ * The *Random Early Detection (RED)* algorithm works by proactively dropping
+ * more and more input packets as the queue occupancy builds up. When the queue
+ * is full or almost full, RED effectively works as *tail drop*. The *Weighted
+ * RED* algorithm uses a separate set of RED thresholds for each packet color.
+ */
+enum rte_tm_cman_mode {
+ RTE_TM_CMAN_TAIL_DROP = 0, /**< Tail drop */
+ RTE_TM_CMAN_HEAD_DROP, /**< Head drop */
+ RTE_TM_CMAN_WRED, /**< Weighted Random Early Detection (WRED) */
+};
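+
+/* Usage sketch (illustrative only; assumes a previously filled capability
+ * structure *cap*): fall back to tail drop when head drop is not advertised
+ * for any leaf node.
+ *
+ *	enum rte_tm_cman_mode mode = cap.cman_head_drop_supported ?
+ *		RTE_TM_CMAN_HEAD_DROP : RTE_TM_CMAN_TAIL_DROP;
+ */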
+
+/**
+ * Random Early Detection (RED) profile
+ */
+struct rte_tm_red_params {
+ /** Minimum queue threshold */
+ uint16_t min_th;
+
+ /** Maximum queue threshold */
+ uint16_t max_th;
+
+ /** Inverse of packet marking probability maximum value (maxp), i.e.
+ * maxp_inv = 1 / maxp
+ */
+ uint16_t maxp_inv;
+
+ /** Negated log2 of queue weight (wq), i.e. wq = 1 / (2 ^ wq_log2) */
+ uint16_t wq_log2;
+};
+
+/**
+ * Weighted RED (WRED) profile
+ *
+ * Multiple WRED contexts can share the same WRED profile. Each leaf node with
+ * WRED enabled as its congestion management mode has zero or one private WRED
+ * context (only one leaf node using it) and/or zero, one or several shared
+ * WRED contexts (multiple leaf nodes use the same WRED context). A private
+ * WRED context is used to perform congestion management for a single leaf
+ * node, while a shared WRED context is used to perform congestion management
+ * for a group of leaf nodes.
+ */
+struct rte_tm_wred_params {
+ /** One set of RED parameters per packet color */
+ struct rte_tm_red_params red_params[RTE_TM_COLORS];
+};
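+
+/* Usage sketch (illustrative only; the thresholds and WRED_PROFILE_ID are
+ * made-up values): a WRED profile with identical RED parameters for all
+ * colors, a maximum marking probability of 1/10 (maxp_inv = 10) and a queue
+ * weight of 1/512 (wq_log2 = 9).
+ *
+ *	struct rte_tm_red_params red = {
+ *		.min_th = 32, .max_th = 128, .maxp_inv = 10, .wq_log2 = 9,
+ *	};
+ *	struct rte_tm_wred_params wp = {
+ *		.red_params[RTE_TM_GREEN] = red,
+ *		.red_params[RTE_TM_YELLOW] = red,
+ *		.red_params[RTE_TM_RED] = red,
+ *	};
+ *
+ *	rte_tm_wred_profile_add(port_id, WRED_PROFILE_ID, &wp, &error);
+ */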
+
+/**
+ * Token bucket
+ */
+struct rte_tm_token_bucket {
+ /** Token bucket rate (bytes per second) */
+ uint64_t rate;
+
+ /** Token bucket size (bytes), a.k.a. max burst size */
+ uint64_t size;
+};
+
+/**
+ * Shaper (rate limiter) profile
+ *
+ * Multiple shaper instances can share the same shaper profile. Each node has
+ * zero or one private shaper (only one node using it) and/or zero, one or
+ * several shared shapers (multiple nodes use the same shaper instance).
+ * A private shaper is used to perform traffic shaping for a single node, while
+ * a shared shaper is used to perform traffic shaping for a group of nodes.
+ *
+ * Single rate shapers use a single token bucket. A single rate shaper can be
+ * configured by setting the rate of the committed bucket to zero, which
+ * effectively disables this bucket. The peak bucket is used to limit the rate
+ * and the burst size for the current shaper.
+ *
+ * Dual rate shapers use both the committed and the peak token buckets. The
+ * rate of the peak bucket has to be bigger than zero, as well as greater than
+ * or equal to the rate of the committed bucket.
+ */
+struct rte_tm_shaper_params {
+ /** Committed token bucket */
+ struct rte_tm_token_bucket committed;
+
+ /** Peak token bucket */
+ struct rte_tm_token_bucket peak;
+
+ /** Signed value to be added to the length of each packet for the
+ * purpose of shaping. Can be used to correct the packet length with
+ * the framing overhead bytes that are also consumed on the wire (e.g.
+ * RTE_TM_ETH_FRAMING_OVERHEAD_FCS).
+ */
+ int32_t pkt_length_adjust;
+};
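+
+/* Usage sketch (illustrative only; the rate, burst size and
+ * SHAPER_PROFILE_ID are made-up values): a single rate shaper of
+ * 10 Mbytes/s with a 4 KB burst, built by disabling the committed bucket
+ * (rate = 0) and correcting the packet length with the on-wire framing
+ * overhead.
+ *
+ *	struct rte_tm_shaper_params sp = {
+ *		.committed = { .rate = 0, .size = 0 },
+ *		.peak = { .rate = 10000000, .size = 4096 },
+ *		.pkt_length_adjust = RTE_TM_ETH_FRAMING_OVERHEAD_FCS,
+ *	};
+ *
+ *	rte_tm_shaper_profile_add(port_id, SHAPER_PROFILE_ID, &sp, &error);
+ */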
+
+/**
+ * Node parameters
+ *
+ * Each non-leaf node has multiple inputs (its children nodes) and a single
+ * output (which is the input to its parent node). It arbitrates its inputs
+ * using the Strict Priority (SP) and Weighted Fair Queuing (WFQ) algorithms
+ * to schedule input packets to its output while observing its shaping (rate
+ * limiting) constraints.
+ *
+ * Algorithms such as Weighted Round Robin (WRR), Byte-level WRR, Deficit WRR
+ * (DWRR), etc. are considered approximations of the WFQ ideal and are
+ * assimilated to WFQ, although an associated implementation-dependent trade-off
+ * on accuracy, performance and resource usage might exist.
+ *
+ * Children nodes with different priorities are scheduled using the SP algorithm
+ * based on their priority, with zero (0) as the highest priority. Children with
+ * the same priority are scheduled using the WFQ algorithm according to their
+ * weights. The WFQ weight of a given child node is relative to the sum of the
+ * weights of all its sibling nodes that have the same priority, with one (1) as
+ * the lowest weight. For each SP priority, the WFQ weight mode can be set as
+ * either byte-based or packet-based.
+ *
+ * Each leaf node sits on top of a TX queue of the current Ethernet port. Hence,
+ * the leaf nodes are predefined, with their node IDs set to 0 .. (N-1), where N
+ * is the number of TX queues configured for the current Ethernet port. The
+ * non-leaf nodes have their IDs generated by the application.
+ */
+struct rte_tm_node_params {
+ /** Shaper profile for the private shaper. The absence of the private
+ * shaper for the current node is indicated by setting this parameter
+ * to RTE_TM_SHAPER_PROFILE_ID_NONE.
+ */
+ uint32_t shaper_profile_id;
+
+ /** User allocated array of valid shared shaper IDs. */
+ uint32_t *shared_shaper_id;
+
+ /** Number of shared shaper IDs in the *shared_shaper_id* array. */
+ uint32_t n_shared_shapers;
+
+ union {
+ /** Parameters only valid for non-leaf nodes. */
+ struct {
+ /** WFQ weight mode for each SP priority. When NULL, it
+ * indicates that WFQ is to be used for all priorities.
+ * When non-NULL, it points to a pre-allocated array of
+ * *n_sp_priorities* values, with non-zero value for
+ * byte-mode and zero for packet-mode.
+ */
+ int *wfq_weight_mode;
+
+ /** Number of SP priorities. */
+ uint32_t n_sp_priorities;
+ } nonleaf;
+
+ /** Parameters only valid for leaf nodes. */
+ struct {
+ /** Congestion management mode */
+ enum rte_tm_cman_mode cman;
+
+ /** WRED parameters (only valid when *cman* is set to
+ * WRED).
+ */
+ struct {
+ /** WRED profile for private WRED context. The
+ * absence of a private WRED context for the
+ * current leaf node is indicated by value
+ * RTE_TM_WRED_PROFILE_ID_NONE.
+ */
+ uint32_t wred_profile_id;
+
+ /** User allocated array of shared WRED context
+ * IDs. When set to NULL, it indicates that the
+ * current leaf node should not currently be
+ * part of any shared WRED contexts.
+ */
+ uint32_t *shared_wred_context_id;
+
+ /** Number of elements in the
+ * *shared_wred_context_id* array. Only valid
+ * when *shared_wred_context_id* is non-NULL,
+ * in which case it should be non-zero.
+ */
+ uint32_t n_shared_wred_contexts;
+ } wred;
+ } leaf;
+ };
+
+ /** Mask of statistics counter types to be enabled for this node. This
+ * needs to be a subset of the statistics counter types available for
+ * the current node. Any statistics counter type not included in this
+ * set is to be disabled for the current node.
+ * @see enum rte_tm_stats_type
+ */
+ uint64_t stats_mask;
+};
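+
+/* Usage sketch (illustrative only; all profile IDs are placeholders):
+ * parameters for a leaf node using WRED as its congestion management mode,
+ * with a private WRED context only and no shared shapers.
+ *
+ *	struct rte_tm_node_params np = {
+ *		.shaper_profile_id = SHAPER_PROFILE_ID,
+ *		.shared_shaper_id = NULL,
+ *		.n_shared_shapers = 0,
+ *		.leaf = {
+ *			.cman = RTE_TM_CMAN_WRED,
+ *			.wred = {
+ *				.wred_profile_id = WRED_PROFILE_ID,
+ *				.shared_wred_context_id = NULL,
+ *				.n_shared_wred_contexts = 0,
+ *			},
+ *		},
+ *		.stats_mask = RTE_TM_STATS_N_PKTS | RTE_TM_STATS_N_BYTES,
+ *	};
+ */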
+
+/**
+ * Verbose error types.
+ *
+ * Most of them provide the type of the object referenced by struct
+ * rte_tm_error::cause.
+ */
+enum rte_tm_error_type {
+ RTE_TM_ERROR_TYPE_NONE, /**< No error. */
+ RTE_TM_ERROR_TYPE_UNSPECIFIED, /**< Cause unspecified. */
+ RTE_TM_ERROR_TYPE_CAPABILITIES,
+ RTE_TM_ERROR_TYPE_LEVEL_ID,
+ RTE_TM_ERROR_TYPE_WRED_PROFILE,
+ RTE_TM_ERROR_TYPE_WRED_PROFILE_GREEN,
+ RTE_TM_ERROR_TYPE_WRED_PROFILE_YELLOW,
+ RTE_TM_ERROR_TYPE_WRED_PROFILE_RED,
+ RTE_TM_ERROR_TYPE_WRED_PROFILE_ID,
+ RTE_TM_ERROR_TYPE_SHARED_WRED_CONTEXT_ID,
+ RTE_TM_ERROR_TYPE_SHAPER_PROFILE,
+ RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_RATE,
+ RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE,
+ RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_RATE,
+ RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE,
+ RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN,
+ RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID,
+ RTE_TM_ERROR_TYPE_SHARED_SHAPER_ID,
+ RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID,
+ RTE_TM_ERROR_TYPE_NODE_PRIORITY,
+ RTE_TM_ERROR_TYPE_NODE_WEIGHT,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WRED_PROFILE_ID,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_WRED_CONTEXT_ID,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_WRED_CONTEXTS,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_STATS,
+ RTE_TM_ERROR_TYPE_NODE_ID,
+};
+
+/**
+ * Verbose error structure definition.
+ *
+ * This object is normally allocated by applications and set by PMDs. The
+ * message points to a constant string which does not need to be freed by
+ * the application; however, its pointer can be considered valid only as long
+ * as its associated DPDK port remains configured. Closing the underlying
+ * device or unloading the PMD invalidates it.
+ *
+ * Both cause and message may be NULL regardless of the error type.
+ */
+struct rte_tm_error {
+ enum rte_tm_error_type type; /**< Cause field and error type. */
+ const void *cause; /**< Object responsible for the error. */
+ const char *message; /**< Human-readable error message. */
+};
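+
+/* Usage sketch (illustrative only): typical error handling around a TM API
+ * call, using the verbose error object to report the failure cause.
+ *
+ *	struct rte_tm_capabilities cap;
+ *	struct rte_tm_error error;
+ *
+ *	if (rte_tm_capabilities_get(port_id, &cap, &error) != 0)
+ *		printf("TM error type %d: %s\n", error.type,
+ *			error.message ? error.message : "(no message)");
+ */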
+
+/**
+ * Traffic manager get number of leaf nodes
+ *
+ * Each leaf node sits on top of a TX queue of the current Ethernet port.
+ * Therefore, the set of leaf nodes is predefined, their number is always equal
+ * to N (where N is the number of TX queues configured for the current port)
+ * and their IDs are 0 .. (N-1).
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[out] n_leaf_nodes
+ * Number of leaf nodes for the current port.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_get_number_of_leaf_nodes(uint8_t port_id,
+ uint32_t *n_leaf_nodes,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node ID validate and type (i.e. leaf or non-leaf) get
+ *
+ * The leaf nodes have predefined IDs in the range of 0 .. (N-1), where N is
+ * the number of TX queues of the current Ethernet port. The non-leaf nodes
+ * have their IDs generated by the application outside of the above range,
+ * which is reserved for leaf nodes.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID value. Needs to be valid.
+ * @param[out] is_leaf
+ * Set to non-zero value when node is leaf and to zero otherwise (non-leaf).
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_node_type_get(uint8_t port_id,
+ uint32_t node_id,
+ int *is_leaf,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager capabilities get
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[out] cap
+ * Traffic manager capabilities. Needs to be pre-allocated and valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_capabilities_get(uint8_t port_id,
+ struct rte_tm_capabilities *cap,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager level capabilities get
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] level_id
+ * The hierarchy level identifier. The value of 0 identifies the level of the
+ * root node.
+ * @param[out] cap
+ * Traffic manager level capabilities. Needs to be pre-allocated and valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_level_capabilities_get(uint8_t port_id,
+ uint32_t level_id,
+ struct rte_tm_level_capabilities *cap,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node capabilities get
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[out] cap
+ * Traffic manager node capabilities. Needs to be pre-allocated and valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_node_capabilities_get(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_node_capabilities *cap,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager WRED profile add
+ *
+ * Create a new WRED profile with ID set to *wred_profile_id*. The new profile
+ * is used to create one or several WRED contexts.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] wred_profile_id
+ * WRED profile ID for the new profile. Needs to be unused.
+ * @param[in] profile
+ * WRED profile parameters. Needs to be pre-allocated and valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities
+ */
+int
+rte_tm_wred_profile_add(uint8_t port_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_wred_params *profile,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager WRED profile delete
+ *
+ * Delete an existing WRED profile. This operation fails when there is
+ * currently at least one user (i.e. WRED context) of this WRED profile.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] wred_profile_id
+ *   WRED profile ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities
+ */
+int
+rte_tm_wred_profile_delete(uint8_t port_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager shared WRED context add or update
+ *
+ * When *shared_wred_context_id* is invalid, a new WRED context with this ID is
+ * created by using the WRED profile identified by *wred_profile_id*.
+ *
+ * When *shared_wred_context_id* is valid, this WRED context is no longer using
+ * the profile previously assigned to it and is updated to use the profile
+ * identified by *wred_profile_id*.
+ *
+ * A valid shared WRED context can be assigned to several hierarchy leaf nodes
+ * configured to use WRED as the congestion management mode.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] shared_wred_context_id
+ * Shared WRED context ID
+ * @param[in] wred_profile_id
+ *   WRED profile ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities
+ */
+int
+rte_tm_shared_wred_context_add_update(uint8_t port_id,
+ uint32_t shared_wred_context_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager shared WRED context delete
+ *
+ * Delete an existing shared WRED context. This operation fails when there is
+ * currently at least one user (i.e. hierarchy leaf node) of this shared WRED
+ * context.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] shared_wred_context_id
+ *   Shared WRED context ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities
+ */
+int
+rte_tm_shared_wred_context_delete(uint8_t port_id,
+ uint32_t shared_wred_context_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager shaper profile add
+ *
+ * Create a new shaper profile with ID set to *shaper_profile_id*. The new
+ * shaper profile is used to create one or several shapers.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] shaper_profile_id
+ * Shaper profile ID for the new profile. Needs to be unused.
+ * @param[in] profile
+ * Shaper profile parameters. Needs to be pre-allocated and valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities
+ */
+int
+rte_tm_shaper_profile_add(uint8_t port_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager shaper profile delete
+ *
+ * Delete an existing shaper profile. This operation fails when there is
+ * currently at least one user (i.e. shaper) of this shaper profile.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] shaper_profile_id
+ *   Shaper profile ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities
+ */
+int
+rte_tm_shaper_profile_delete(uint8_t port_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager shared shaper add or update
+ *
+ * When *shared_shaper_id* is not a valid shared shaper ID, a new shared shaper
+ * with this ID is created using the shaper profile identified by
+ * *shaper_profile_id*.
+ *
+ * When *shared_shaper_id* is a valid shared shaper ID, this shared shaper is
+ * no longer using the shaper profile previously assigned to it and is updated
+ * to use the shaper profile identified by *shaper_profile_id*.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] shared_shaper_id
+ * Shared shaper ID
+ * @param[in] shaper_profile_id
+ *   Shaper profile ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities
+ */
+int
+rte_tm_shared_shaper_add_update(uint8_t port_id,
+ uint32_t shared_shaper_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager shared shaper delete
+ *
+ * Delete an existing shared shaper. This operation fails when there is
+ * currently at least one user (i.e. hierarchy node) of this shared shaper.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] shared_shaper_id
+ *   Shared shaper ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities
+ */
+int
+rte_tm_shared_shaper_delete(uint8_t port_id,
+ uint32_t shared_shaper_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node add
+ *
+ * Create new node and connect it as child of an existing node. The new node is
+ * further identified by *node_id*, which needs to be unused by any of the
+ * existing nodes. The parent node is identified by *parent_node_id*, which
+ * needs to be the valid ID of an existing non-leaf node. The parent node is
+ * going to use the provided SP *priority* and WFQ *weight* to schedule its new
+ * child node.
+ *
+ * This function has to be called for both leaf and non-leaf nodes. In the case
+ * of leaf nodes (i.e. *node_id* is within the range of 0 .. (N-1), with N as
+ * the number of configured TX queues of the current port), the leaf node is
+ * configured rather than created (as the set of leaf nodes is predefined) and
+ * it is also connected as child of an existing node.
+ *
+ * The first node that is added becomes the root node and all the nodes that
+ * are subsequently added have to be added as descendants of the root node. The
+ * parent of the root node has to be specified as RTE_TM_NODE_ID_NULL and there
+ * can only be one node with this parent ID (i.e. the root node). Further
+ * restrictions for root node: needs to be non-leaf, its private shaper profile
+ * needs to be valid and single rate, cannot use any shared shapers.
+ *
+ * When called before rte_tm_hierarchy_commit() invocation, this function is
+ * typically used to define the initial start-up hierarchy for the port.
+ * Provided that dynamic hierarchy updates are supported by the current port (as
+ * advertised in the port capability set), this function can be also called
+ * after the rte_tm_hierarchy_commit() invocation.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be unused by any of the existing nodes.
+ * @param[in] parent_node_id
+ *   Parent node ID. Needs to be valid.
+ * @param[in] priority
+ * Node priority. The highest node priority is zero. Used by the SP algorithm
+ * running on the parent of the current node for scheduling this child node.
+ * @param[in] weight
+ * Node weight. The node weight is relative to the weight sum of all siblings
+ * that have the same priority. The lowest weight is one. Used by the WFQ
+ * algorithm running on the parent of the current node for scheduling this
+ * child node.
+ * @param[in] level_id
+ * Level ID that should be met by this node. The hierarchy level of the
+ * current node is already fully specified through its parent node (i.e. the
+ * level of this node is equal to the level of its parent node plus one),
+ * therefore the reason for providing this parameter is to enable the
+ * application to perform step-by-step checking of the node level during
+ * successive invocations of this function. When not desired, this check can
+ * be disabled by assigning value RTE_TM_NODE_LEVEL_ID_ANY to this parameter.
+ * @param[in] params
+ * Node parameters. Needs to be pre-allocated and valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see rte_tm_hierarchy_commit()
+ * @see RTE_TM_UPDATE_NODE_ADD_DELETE
+ * @see RTE_TM_NODE_LEVEL_ID_ANY
+ */
+int
+rte_tm_node_add(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t parent_node_id,
+ uint32_t priority,
+ uint32_t weight,
+ uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error);
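
To make the calling convention concrete, here is a minimal sketch that
builds a two-level start-up hierarchy (one root node plus one leaf per TX
queue). The zeroed rte_tm_node_params, the use of shaper profile 0 for the
root and the choice of root node ID are illustrative assumptions, not
requirements of the API beyond those documented above:

---8<---
#include <string.h>

/* Assumes shaper profile 0 was created earlier with
 * rte_tm_shaper_profile_add() (the root needs a valid single-rate profile).
 */
static int
setup_tm_hierarchy(uint8_t port_id, uint16_t nb_txq)
{
	struct rte_tm_node_params np;
	struct rte_tm_error err;
	uint32_t root_id = nb_txq;	/* any ID outside 0..(nb_txq - 1) */
	uint16_t q;
	int ret;

	memset(&np, 0, sizeof(np));
	np.shaper_profile_id = 0;

	/* First node added becomes the root. */
	ret = rte_tm_node_add(port_id, root_id, RTE_TM_NODE_ID_NULL,
			0, 1, RTE_TM_NODE_LEVEL_ID_ANY, &np, &err);
	if (ret != 0)
		return ret;

	memset(&np, 0, sizeof(np));
	np.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
	for (q = 0; q < nb_txq; q++) {
		/* Leaf node IDs are the TX queue IDs 0..(N-1). */
		ret = rte_tm_node_add(port_id, q, root_id, 0, 1,
				RTE_TM_NODE_LEVEL_ID_ANY, &np, &err);
		if (ret != 0)
			return ret;
	}
	return 0;
}
---8<---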
+
+/**
+ * Traffic manager node delete
+ *
+ * Delete an existing node. This operation fails when this node currently has
+ * at least one user (i.e. child node).
+ *
+ * When called before rte_tm_hierarchy_commit() invocation, this function is
+ * typically used to define the initial start-up hierarchy for the port.
+ * Provided that dynamic hierarchy updates are supported by the current port (as
+ * advertised in the port capability set), this function can be also called
+ * after the rte_tm_hierarchy_commit() invocation.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see RTE_TM_UPDATE_NODE_ADD_DELETE
+ */
+int
+rte_tm_node_delete(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node suspend
+ *
+ * Suspend an existing node. While the node is in suspended state, no packet is
+ * scheduled from this node and its descendants. The node exits the suspended
+ * state through the node resume operation.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see rte_tm_node_resume()
+ * @see RTE_TM_UPDATE_NODE_SUSPEND_RESUME
+ */
+int
+rte_tm_node_suspend(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node resume
+ *
+ * Resume an existing node that is currently in suspended state. The node
+ * entered the suspended state as result of a previous node suspend operation.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see rte_tm_node_suspend()
+ * @see RTE_TM_UPDATE_NODE_SUSPEND_RESUME
+ */
+int
+rte_tm_node_resume(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager hierarchy commit
+ *
+ * This function is called during the port initialization phase (before the
+ * Ethernet port is started) to freeze the start-up hierarchy.
+ *
+ * This function typically performs the following steps:
+ * a) It validates the start-up hierarchy that was previously defined for the
+ * current port through successive rte_tm_node_add() invocations;
+ * b) Assuming successful validation, it performs all the necessary port
+ * specific configuration operations to install the specified hierarchy on
+ * the current port, with immediate effect once the port is started.
+ *
+ * This function fails when the currently configured hierarchy is not supported
+ * by the Ethernet port, in which case the user can abort or try out another
+ * hierarchy configuration (e.g. a hierarchy with fewer leaf nodes), which can be
+ * built from scratch (when *clear_on_fail* is enabled) or by modifying the
+ * existing hierarchy configuration (when *clear_on_fail* is disabled).
+ *
+ * Note that this function can still fail due to other causes (e.g. not enough
+ * memory available in the system, etc), even though the specified hierarchy is
+ * supported in principle by the current port.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] clear_on_fail
+ * On function call failure, hierarchy is cleared when this parameter is
+ * non-zero and preserved when this parameter is equal to zero.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see rte_tm_node_add()
+ * @see rte_tm_node_delete()
+ */
+int
+rte_tm_hierarchy_commit(uint8_t port_id,
+ int clear_on_fail,
+ struct rte_tm_error *error);
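
A short usage sketch of the commit step, following on from the node
additions above; the printf-based error handling is illustrative only:

---8<---
struct rte_tm_error err = { 0 };

/* Freeze the start-up hierarchy. With clear_on_fail non-zero, a failed
 * commit leaves a clean slate so a smaller hierarchy can be built from
 * scratch.
 */
if (rte_tm_hierarchy_commit(port_id, 1, &err) != 0)
	printf("TM commit failed (type %d): %s\n", err.type,
		err.message ? err.message : "unspecified");
---8<---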
+
+/**
+ * Traffic manager node parent update
+ *
+ * Restriction for root node: its parent cannot be changed.
+ *
+ * This function can only be called after the rte_tm_hierarchy_commit()
+ * invocation. Its success depends on the port support for this operation, as
+ * advertised through the port capability set.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[in] parent_node_id
+ * Node ID for the new parent. Needs to be valid.
+ * @param[in] priority
+ * Node priority. The highest node priority is zero. Used by the SP algorithm
+ * running on the parent of the current node for scheduling this child node.
+ * @param[in] weight
+ * Node weight. The node weight is relative to the weight sum of all siblings
+ * that have the same priority. The lowest weight is one. Used by the WFQ
+ * algorithm running on the parent of the current node for scheduling this
+ * child node.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see RTE_TM_UPDATE_NODE_PARENT_KEEP_LEVEL
+ * @see RTE_TM_UPDATE_NODE_PARENT_CHANGE_LEVEL
+ */
+int
+rte_tm_node_parent_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t parent_node_id,
+ uint32_t priority,
+ uint32_t weight,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node private shaper update
+ *
+ * Restriction for the root node: its private shaper profile needs to be valid
+ * and single rate.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[in] shaper_profile_id
+ * Shaper profile ID for the private shaper of the current node. Needs to be
+ * either valid shaper profile ID or RTE_TM_SHAPER_PROFILE_ID_NONE, with
+ * the latter disabling the private shaper of the current node.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities
+ */
+int
+rte_tm_node_shaper_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node shared shapers update
+ *
+ * Restriction for the root node: cannot use any shared shapers.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[in] shared_shaper_id
+ * Shared shaper ID. Needs to be valid.
+ * @param[in] add
+ * Set to non-zero value to add this shared shaper to current node or to zero
+ * to delete this shared shaper from current node.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities
+ */
+int
+rte_tm_node_shared_shaper_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t shared_shaper_id,
+ int add,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node enabled statistics counters update
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[in] stats_mask
+ * Mask of statistics counter types to be enabled for the current node. This
+ * needs to be a subset of the statistics counter types available for the
+ * current node. Any statistics counter type not included in this set is to
+ * be disabled for the current node.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see enum rte_tm_stats_type
+ * @see RTE_TM_UPDATE_NODE_STATS
+ */
+int
+rte_tm_node_stats_update(uint8_t port_id,
+ uint32_t node_id,
+ uint64_t stats_mask,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node WFQ weight mode update
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid leaf node ID.
+ * @param[in] wfq_weight_mode
+ * WFQ weight mode for each SP priority. When NULL, it indicates that WFQ is
+ * to be used for all priorities. When non-NULL, it points to a pre-allocated
+ * array of *n_sp_priorities* values, with non-zero value for byte-mode and
+ * zero for packet-mode.
+ * @param[in] n_sp_priorities
+ * Number of SP priorities.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see RTE_TM_UPDATE_NODE_WFQ_WEIGHT_MODE
+ * @see RTE_TM_UPDATE_NODE_N_SP_PRIORITIES
+ */
+int
+rte_tm_node_wfq_weight_mode_update(uint8_t port_id,
+ uint32_t node_id,
+ int *wfq_weight_mode,
+ uint32_t n_sp_priorities,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node congestion management mode update
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid leaf node ID.
+ * @param[in] cman
+ * Congestion management mode.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see RTE_TM_UPDATE_NODE_CMAN
+ */
+int
+rte_tm_node_cman_update(uint8_t port_id,
+ uint32_t node_id,
+ enum rte_tm_cman_mode cman,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node private WRED context update
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid leaf node ID.
+ * @param[in] wred_profile_id
+ * WRED profile ID for the private WRED context of the current node. Needs to
+ * be either valid WRED profile ID or RTE_TM_WRED_PROFILE_ID_NONE, with the
+ * latter disabling the private WRED context of the current node.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities
+ */
+int
+rte_tm_node_wred_context_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node shared WRED context update
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid leaf node ID.
+ * @param[in] shared_wred_context_id
+ * Shared WRED context ID. Needs to be valid.
+ * @param[in] add
+ * Set to non-zero value to add this shared WRED context to current node or
+ * to zero to delete this shared WRED context from current node.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities
+ */
+int
+rte_tm_node_shared_wred_context_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t shared_wred_context_id,
+ int add,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node statistics counters read
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[out] stats
+ * When non-NULL, it contains the current value for the statistics counters
+ * enabled for the current node.
+ * @param[out] stats_mask
+ * When non-NULL, it contains the mask of statistics counter types that are
+ * currently enabled for this node, indicating which of the counters
+ * retrieved with the *stats* structure are valid.
+ * @param[in] clear
+ * When this parameter has a non-zero value, the statistics counters are
+ * cleared (i.e. set to zero) immediately after they have been read,
+ * otherwise the statistics counters are left untouched.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see enum rte_tm_stats_type
+ */
+int
+rte_tm_node_stats_read(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_node_stats *stats,
+ uint64_t *stats_mask,
+ int clear,
+ struct rte_tm_error *error);
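
For illustration, a sketch of a read-and-clear poll of one node's counters;
RTE_TM_STATS_N_PKTS and the stats field name n_pkts are assumed from the
statistics definitions earlier in this header:

---8<---
#include <inttypes.h>
#include <stdio.h>

struct rte_tm_node_stats stats;
uint64_t mask = 0;
struct rte_tm_error err;

if (rte_tm_node_stats_read(port_id, node_id, &stats, &mask,
		1 /* clear after read */, &err) == 0 &&
		(mask & RTE_TM_STATS_N_PKTS))
	printf("node %" PRIu32 ": %" PRIu64 " pkts\n", node_id,
		stats.n_pkts);
---8<---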
+
+/**
+ * Traffic manager packet marking - VLAN DEI (IEEE 802.1Q)
+ *
+ * IEEE 802.1p maps the traffic class to the VLAN Priority Code Point (PCP)
+ * field (3 bits), while IEEE 802.1q maps the drop priority to the VLAN Drop
+ * Eligible Indicator (DEI) field (1 bit), which was previously named Canonical
+ * Format Indicator (CFI).
+ *
+ * All VLAN frames of a given color get their DEI bit set if marking is enabled
+ * for this color; otherwise, their DEI bit is left as is (either set or not).
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] mark_green
+ * Set to non-zero value to enable marking of green packets and to zero to
+ * disable it.
+ * @param[in] mark_yellow
+ * Set to non-zero value to enable marking of yellow packets and to zero to
+ * disable it.
+ * @param[in] mark_red
+ * Set to non-zero value to enable marking of red packets and to zero to
+ * disable it.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities::mark_vlan_dei_supported
+ */
+int
+rte_tm_mark_vlan_dei(uint8_t port_id,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager packet marking - IPv4 / IPv6 ECN (IETF RFC 3168)
+ *
+ * IETF RFCs 2474 and 3168 reorganize the IPv4 Type of Service (TOS) field
+ * (8 bits) and the IPv6 Traffic Class (TC) field (8 bits) into Differentiated
+ * Services Codepoint (DSCP) field (6 bits) and Explicit Congestion
+ * Notification (ECN) field (2 bits). The DSCP field is typically used to
+ * encode the traffic class and/or drop priority (RFC 2597), while the ECN
+ * field is used by RFC 3168 to implement a congestion notification mechanism
+ * to be leveraged by transport layer protocols such as TCP and SCTP that have
+ * congestion control mechanisms.
+ *
+ * When congestion is experienced, as an alternative to dropping the packet,
+ * routers can change the ECN field of input packets from 2'b01 or 2'b10
+ * (values indicating that source endpoint is ECN-capable) to 2'b11 (meaning
+ * that congestion is experienced). The destination endpoint can use the
+ * ECN-Echo (ECE) TCP flag to relay the congestion indication back to the
+ * source endpoint, which acknowledges it back to the destination endpoint with
+ * the Congestion Window Reduced (CWR) TCP flag.
+ *
+ * All IPv4/IPv6 packets of a given color with ECN set to 2'b01 or 2'b10
+ * carrying TCP or SCTP have their ECN set to 2'b11 if the marking feature is
+ * enabled for the current color, otherwise the ECN field is left as is.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] mark_green
+ * Set to non-zero value to enable marking of green packets and to zero to
+ * disable it.
+ * @param[in] mark_yellow
+ * Set to non-zero value to enable marking of yellow packets and to zero to
+ * disable it.
+ * @param[in] mark_red
+ * Set to non-zero value to enable marking of red packets and to zero to
+ * disable it.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities::mark_ip_ecn_tcp_supported
+ * @see struct rte_tm_capabilities::mark_ip_ecn_sctp_supported
+ */
+int
+rte_tm_mark_ip_ecn(uint8_t port_id,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager packet marking - IPv4 / IPv6 DSCP (IETF RFC 2597)
+ *
+ * IETF RFC 2597 maps the traffic class and the drop priority to the IPv4/IPv6
+ * Differentiated Services Codepoint (DSCP) field (6 bits). Here are the DSCP
+ * values proposed by this RFC:
+ *
+ * <pre> Class 1 Class 2 Class 3 Class 4 </pre>
+ * <pre> +----------+----------+----------+----------+</pre>
+ * <pre>Low Drop Prec | 001010 | 010010 | 011010 | 100010 |</pre>
+ * <pre>Medium Drop Prec | 001100 | 010100 | 011100 | 100100 |</pre>
+ * <pre>High Drop Prec | 001110 | 010110 | 011110 | 100110 |</pre>
+ * <pre> +----------+----------+----------+----------+</pre>
+ *
+ * There are 4 traffic classes (classes 1 .. 4) encoded by DSCP bits 1 and 2,
+ * as well as 3 drop priorities (low/medium/high) encoded by DSCP bits 3 and 4.
+ *
+ * All IPv4/IPv6 packets have their color marked into DSCP bits 3 and 4 as
+ * follows: green mapped to Low Drop Precedence (2'b01), yellow to Medium
+ * (2'b10) and red to High (2'b11). Marking needs to be explicitly enabled
+ * for each color; when not enabled for a given color, the DSCP field of all
+ * packets with that color is left as is.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] mark_green
+ * Set to non-zero value to enable marking of green packets and to zero to
+ * disable it.
+ * @param[in] mark_yellow
+ * Set to non-zero value to enable marking of yellow packets and to zero to
+ * disable it.
+ * @param[in] mark_red
+ * Set to non-zero value to enable marking of red packets and to zero to
+ * disable it.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities::mark_ip_dscp_supported
+ */
+int
+rte_tm_mark_ip_dscp(uint8_t port_id,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error);
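
All three marking entry points share the same per-color enable pattern; for
example, to mark only yellow and red packets (a sketch, assuming the port
advertises mark_ip_dscp_supported):

---8<---
struct rte_tm_error err;

if (rte_tm_mark_ip_dscp(port_id, 0 /* green */, 1 /* yellow */,
		1 /* red */, &err) != 0)
	printf("DSCP marking not enabled: %s\n",
		err.message ? err.message : "unknown");
---8<---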
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __INCLUDE_RTE_TM_H__ */
diff --git a/lib/librte_ether/rte_tm_driver.h b/lib/librte_ether/rte_tm_driver.h
new file mode 100644
index 0000000..a5b698f
--- /dev/null
+++ b/lib/librte_ether/rte_tm_driver.h
@@ -0,0 +1,366 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_TM_DRIVER_H__
+#define __INCLUDE_RTE_TM_DRIVER_H__
+
+/**
+ * @file
+ * RTE Generic Traffic Manager API (Driver Side)
+ *
+ * This file provides implementation helpers for internal use by PMDs; they
+ * are not intended to be exposed to applications and are not subject to ABI
+ * versioning.
+ */
+
+#include <stdint.h>
+
+#include <rte_errno.h>
+#include "rte_ethdev.h"
+#include "rte_tm.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/** @internal Traffic manager node ID validate and type get */
+typedef int (*rte_tm_node_type_get_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ int *is_leaf,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager capabilities get */
+typedef int (*rte_tm_capabilities_get_t)(struct rte_eth_dev *dev,
+ struct rte_tm_capabilities *cap,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager level capabilities get */
+typedef int (*rte_tm_level_capabilities_get_t)(struct rte_eth_dev *dev,
+ uint32_t level_id,
+ struct rte_tm_level_capabilities *cap,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node capabilities get */
+typedef int (*rte_tm_node_capabilities_get_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_node_capabilities *cap,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager WRED profile add */
+typedef int (*rte_tm_wred_profile_add_t)(struct rte_eth_dev *dev,
+ uint32_t wred_profile_id,
+ struct rte_tm_wred_params *profile,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager WRED profile delete */
+typedef int (*rte_tm_wred_profile_delete_t)(struct rte_eth_dev *dev,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager shared WRED context add */
+typedef int (*rte_tm_shared_wred_context_add_update_t)(
+ struct rte_eth_dev *dev,
+ uint32_t shared_wred_context_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager shared WRED context delete */
+typedef int (*rte_tm_shared_wred_context_delete_t)(
+ struct rte_eth_dev *dev,
+ uint32_t shared_wred_context_id,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager shaper profile add */
+typedef int (*rte_tm_shaper_profile_add_t)(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager shaper profile delete */
+typedef int (*rte_tm_shaper_profile_delete_t)(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager shared shaper add/update */
+typedef int (*rte_tm_shared_shaper_add_update_t)(struct rte_eth_dev *dev,
+ uint32_t shared_shaper_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager shared shaper delete */
+typedef int (*rte_tm_shared_shaper_delete_t)(struct rte_eth_dev *dev,
+ uint32_t shared_shaper_id,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node add */
+typedef int (*rte_tm_node_add_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ uint32_t parent_node_id,
+ uint32_t priority,
+ uint32_t weight,
+ uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node delete */
+typedef int (*rte_tm_node_delete_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node suspend */
+typedef int (*rte_tm_node_suspend_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node resume */
+typedef int (*rte_tm_node_resume_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager hierarchy commit */
+typedef int (*rte_tm_hierarchy_commit_t)(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node parent update */
+typedef int (*rte_tm_node_parent_update_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ uint32_t parent_node_id,
+ uint32_t priority,
+ uint32_t weight,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node shaper update */
+typedef int (*rte_tm_node_shaper_update_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node shared shaper update */
+typedef int (*rte_tm_node_shared_shaper_update_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ uint32_t shared_shaper_id,
+ int32_t add,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node stats update */
+typedef int (*rte_tm_node_stats_update_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ uint64_t stats_mask,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node WFQ weight mode update */
+typedef int (*rte_tm_node_wfq_weight_mode_update_t)(
+ struct rte_eth_dev *dev,
+ uint32_t node_id,
+ int *wfq_weight_mode,
+ uint32_t n_sp_priorities,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node congestion management mode update */
+typedef int (*rte_tm_node_cman_update_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ enum rte_tm_cman_mode cman,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node WRED context update */
+typedef int (*rte_tm_node_wred_context_update_t)(
+ struct rte_eth_dev *dev,
+ uint32_t node_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager node shared WRED context update */
+typedef int (*rte_tm_node_shared_wred_context_update_t)(
+ struct rte_eth_dev *dev,
+ uint32_t node_id,
+ uint32_t shared_wred_context_id,
+ int add,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager read stats counters for specific node */
+typedef int (*rte_tm_node_stats_read_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_node_stats *stats,
+ uint64_t *stats_mask,
+ int clear,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager packet marking - VLAN DEI */
+typedef int (*rte_tm_mark_vlan_dei_t)(struct rte_eth_dev *dev,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager packet marking - IPv4/IPv6 ECN */
+typedef int (*rte_tm_mark_ip_ecn_t)(struct rte_eth_dev *dev,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error);
+
+/** @internal Traffic manager packet marking - IPv4/IPv6 DSCP */
+typedef int (*rte_tm_mark_ip_dscp_t)(struct rte_eth_dev *dev,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error);
+
+struct rte_tm_ops {
+ /** Traffic manager node type get */
+ rte_tm_node_type_get_t node_type_get;
+
+ /** Traffic manager capabilities get */
+ rte_tm_capabilities_get_t capabilities_get;
+ /** Traffic manager level capabilities get */
+ rte_tm_level_capabilities_get_t level_capabilities_get;
+ /** Traffic manager node capabilities get */
+ rte_tm_node_capabilities_get_t node_capabilities_get;
+
+ /** Traffic manager WRED profile add */
+ rte_tm_wred_profile_add_t wred_profile_add;
+ /** Traffic manager WRED profile delete */
+ rte_tm_wred_profile_delete_t wred_profile_delete;
+ /** Traffic manager shared WRED context add/update */
+ rte_tm_shared_wred_context_add_update_t
+ shared_wred_context_add_update;
+ /** Traffic manager shared WRED context delete */
+ rte_tm_shared_wred_context_delete_t
+ shared_wred_context_delete;
+
+ /** Traffic manager shaper profile add */
+ rte_tm_shaper_profile_add_t shaper_profile_add;
+ /** Traffic manager shaper profile delete */
+ rte_tm_shaper_profile_delete_t shaper_profile_delete;
+ /** Traffic manager shared shaper add/update */
+ rte_tm_shared_shaper_add_update_t shared_shaper_add_update;
+ /** Traffic manager shared shaper delete */
+ rte_tm_shared_shaper_delete_t shared_shaper_delete;
+
+ /** Traffic manager node add */
+ rte_tm_node_add_t node_add;
+ /** Traffic manager node delete */
+ rte_tm_node_delete_t node_delete;
+ /** Traffic manager node suspend */
+ rte_tm_node_suspend_t node_suspend;
+ /** Traffic manager node resume */
+ rte_tm_node_resume_t node_resume;
+ /** Traffic manager hierarchy commit */
+ rte_tm_hierarchy_commit_t hierarchy_commit;
+
+ /** Traffic manager node parent update */
+ rte_tm_node_parent_update_t node_parent_update;
+ /** Traffic manager node shaper update */
+ rte_tm_node_shaper_update_t node_shaper_update;
+ /** Traffic manager node shared shaper update */
+ rte_tm_node_shared_shaper_update_t node_shared_shaper_update;
+ /** Traffic manager node stats update */
+ rte_tm_node_stats_update_t node_stats_update;
+ /** Traffic manager node WFQ weight mode update */
+ rte_tm_node_wfq_weight_mode_update_t node_wfq_weight_mode_update;
+ /** Traffic manager node congestion management mode update */
+ rte_tm_node_cman_update_t node_cman_update;
+ /** Traffic manager node WRED context update */
+ rte_tm_node_wred_context_update_t node_wred_context_update;
+ /** Traffic manager node shared WRED context update */
+ rte_tm_node_shared_wred_context_update_t
+ node_shared_wred_context_update;
+ /** Traffic manager read statistics counters for current node */
+ rte_tm_node_stats_read_t node_stats_read;
+
+ /** Traffic manager packet marking - VLAN DEI */
+ rte_tm_mark_vlan_dei_t mark_vlan_dei;
+ /** Traffic manager packet marking - IPv4/IPv6 ECN */
+ rte_tm_mark_ip_ecn_t mark_ip_ecn;
+ /** Traffic manager packet marking - IPv4/IPv6 DSCP */
+ rte_tm_mark_ip_dscp_t mark_ip_dscp;
+};
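
For a driver author, supporting the API is a matter of filling in this
structure; a hedged sketch with hypothetical mypmd_* callbacks:

---8<---
static const struct rte_tm_ops mypmd_tm_ops = {
	.node_type_get = mypmd_node_type_get,
	.capabilities_get = mypmd_capabilities_get,
	.node_add = mypmd_node_add,
	.node_delete = mypmd_node_delete,
	.hierarchy_commit = mypmd_hierarchy_commit,
	/* callbacks left NULL can be reported as unsupported to the
	 * application by the generic layer */
};
---8<---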
+
+/**
+ * Initialize generic error structure.
+ *
+ * This function also sets rte_errno to a given value.
+ *
+ * @param[out] error
+ * Pointer to error structure (may be NULL).
+ * @param[in] code
+ * Related error code (rte_errno).
+ * @param[in] type
+ * Cause field and error type.
+ * @param[in] cause
+ * Object responsible for the error.
+ * @param[in] message
+ * Human-readable error message.
+ *
+ * @return
+ * Error code.
+ */
+static inline int
+rte_tm_error_set(struct rte_tm_error *error,
+ int code,
+ enum rte_tm_error_type type,
+ const void *cause,
+ const char *message)
+{
+ if (error) {
+ *error = (struct rte_tm_error){
+ .type = type,
+ .cause = cause,
+ .message = message,
+ };
+ }
+ rte_errno = code;
+ return code;
+}
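
A sketch of a driver callback using this helper to flag an invalid node ID;
mypmd_node_delete and MYPMD_MAX_NODES are hypothetical, and
RTE_TM_ERROR_TYPE_NODE_ID is assumed from the rte_tm_error_type enum in
rte_tm.h:

---8<---
static int
mypmd_node_delete(struct rte_eth_dev *dev __rte_unused,
	uint32_t node_id, struct rte_tm_error *error)
{
	if (node_id >= MYPMD_MAX_NODES)
		return -rte_tm_error_set(error, EINVAL,
				RTE_TM_ERROR_TYPE_NODE_ID, NULL,
				"node ID out of range");
	/* ... release the node ... */
	return 0;
}
---8<---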
+
+/**
+ * Get generic traffic manager operations structure from a port
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ *
+ * @return
+ * The traffic manager operations structure associated with port_id on
+ * success, NULL otherwise.
+ */
+const struct rte_tm_ops *
+rte_tm_ops_get(uint8_t port_id, struct rte_tm_error *error);
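
This accessor is what lets the generic layer dispatch each public call to
the PMD; a simplified sketch of the dispatch pattern (not the exact library
code, and RTE_TM_ERROR_TYPE_UNSPECIFIED is assumed from rte_tm.h):

---8<---
int
rte_tm_node_suspend(uint8_t port_id, uint32_t node_id,
	struct rte_tm_error *error)
{
	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
	const struct rte_tm_ops *ops = rte_tm_ops_get(port_id, error);

	if (ops == NULL)
		return -rte_errno;
	if (ops->node_suspend == NULL)
		return -rte_tm_error_set(error, ENOSYS,
				RTE_TM_ERROR_TYPE_UNSPECIFIED, NULL,
				"operation not supported");
	return ops->node_suspend(dev, node_id, error);
}
---8<---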
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __INCLUDE_RTE_TM_DRIVER_H__ */
--
2.7.4
^ permalink raw reply [relevance 1%]
* Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
2017-06-08 16:42 0% ` Ananyev, Konstantin
@ 2017-06-09 9:02 0% ` Bruce Richardson
2017-06-12 9:02 4% ` Olivier Matz
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2017-06-09 9:02 UTC (permalink / raw)
To: Ananyev, Konstantin; +Cc: Olivier Matz, Verkamp, Daniel, dev
On Thu, Jun 08, 2017 at 05:42:00PM +0100, Ananyev, Konstantin wrote:
>
>
> > -----Original Message-----
> > From: Richardson, Bruce
> > Sent: Thursday, June 8, 2017 5:21 PM
> > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > Subject: RE: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> >
> >
> >
> > > -----Original Message-----
> > > From: Ananyev, Konstantin
> > > Sent: Thursday, June 8, 2017 5:13 PM
> > > To: Richardson, Bruce <bruce.richardson@intel.com>
> > > Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel
> > > <daniel.verkamp@intel.com>; dev@dpdk.org
> > > Subject: RE: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Richardson, Bruce
> > > > Sent: Thursday, June 8, 2017 5:04 PM
> > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > > Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel
> > > > <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone
> > > > allocation
> > > >
> > > > On Thu, Jun 08, 2017 at 04:35:20PM +0100, Ananyev, Konstantin wrote:
> > > > >
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Richardson, Bruce
> > > > > > Sent: Thursday, June 8, 2017 4:25 PM
> > > > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > > > > Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel
> > > > > > <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone
> > > > > > allocation
> > > > > >
> > > > > > On Thu, Jun 08, 2017 at 03:50:34PM +0100, Ananyev, Konstantin wrote:
> > > > > > >
> > > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: Richardson, Bruce
> > > > > > > > Sent: Thursday, June 8, 2017 3:12 PM
> > > > > > > > To: Olivier Matz <olivier.matz@6wind.com>
> > > > > > > > Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > > > > > > > Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone
> > > > > > > > allocation
> > > > > > > >
> > > > > > > > On Thu, Jun 08, 2017 at 04:05:26PM +0200, Olivier Matz wrote:
> > > > > > > > > On Thu, 8 Jun 2017 14:20:52 +0100, Bruce Richardson
> > > <bruce.richardson@intel.com> wrote:
> > > > > > > > > > On Thu, Jun 08, 2017 at 02:45:40PM +0200, Olivier Matz
> > > wrote:
> > > > > > > > > > > On Tue, 6 Jun 2017 15:56:28 +0100, Bruce Richardson
> > > <bruce.richardson@intel.com> wrote:
> > > > > > > > > > > > On Tue, Jun 06, 2017 at 02:19:21PM +0100, Ananyev,
> > > Konstantin wrote:
> > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > > > -----Original Message-----
> > > > > > > > > > > > > > From: Richardson, Bruce
> > > > > > > > > > > > > > Sent: Tuesday, June 6, 2017 1:42 PM
> > > > > > > > > > > > > > To: Ananyev, Konstantin
> > > > > > > > > > > > > > <konstantin.ananyev@intel.com>
> > > > > > > > > > > > > > Cc: Verkamp, Daniel <daniel.verkamp@intel.com>;
> > > > > > > > > > > > > > dev@dpdk.org
> > > > > > > > > > > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use
> > > > > > > > > > > > > > aligned memzone allocation
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > On Tue, Jun 06, 2017 at 10:59:59AM +0100, Ananyev,
> > > Konstantin wrote:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > The PROD/CONS_ALIGN values on x86-64 are
> > > > > > > > > > > > > > > > > > set to 2 cache lines, so members
> > > > > > > > > > > > > > > > > of struct rte_ring are 128 byte aligned,
> > > > > > > > > > > > > > > > > >and therefore the whole struct needs
> > > > > > > > > > > > > > > > > >128-byte alignment according to the ABI
> > > > > > > > > > > > > > > > > so that the 128-byte alignment of the fields
> > > can be guaranteed.
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > Ah ok, missed the fact that rte_ring is 128B
> > > aligned these days.
> > > > > > > > > > > > > > > > > BTW, I probably missed the initial discussion,
> > > but what was the reason for that?
> > > > > > > > > > > > > > > > > Konstantin
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > I don't know why PROD_ALIGN/CONS_ALIGN use 128
> > > > > > > > > > > > > > > > byte alignment; it seems unnecessary if the
> > > > > > > > > > > > > > > > cache line is only 64
> > > > > > bytes.
> > > > > > > > An
> > > > > > > > > > > > > > alternate
> > > > > > > > > > > > > > > > fix would be to just use cache line alignment
> > > for these fields (since memzones are already cache line aligned).
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Yes, had the same thought.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Maybe there is some deeper reason for the >=
> > > 128-byte alignment logic in rte_ring.h?
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Might be, would be good to hear opinion the author
> > > of that change.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > It gives improved performance for core-2-core
> > > transfer.
> > > > > > > > > > > > >
> > > > > > > > > > > > > You mean empty cache-line(s) after prod/cons, correct?
> > > > > > > > > > > > > That's ok but why we can't keep them and whole
> > > rte_ring aligned on cache-line boundaries?
> > > > > > > > > > > > > Something like that:
> > > > > > > > > > > > > struct rte_ring {
> > > > > > > > > > > > > ...
> > > > > > > > > > > > > struct rte_ring_headtail prod __rte_cache_aligned;
> > > > > > > > > > > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > > > > > > > > > struct rte_ring_headtail cons __rte_cache_aligned;
> > > > > > > > > > > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > > > > > > > > > };
> > > > > > > > > > > > >
> > > > > > > > > > > > > Konstantin
> > > > > > > > > > > >
> > > > > > > > > > > > Sure. That should probably work too.
> > > > > > > > > > > >
> > > > > > > > > > > > /Bruce
> > > > > > > > > > >
> > > > > > > > > > > I also agree with Konstantin's proposal. One question
> > > > > > > > > > > though: since it changes the alignment constraint of the
> > > > > > > > > > > rte_ring structure, I think it is an ABI breakage: a
> > > > > > > > > > > structure including the rte_ring structure inherits from
> > > this constraint.
> > > > > > > > > > >
> > > > > > > > > > > How could we handle that, knowing this is probably a rare
> > > case?
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > Is it an ABI break so long as we keep the resulting size
> > > > > > > > > > and field placement of the structures the same? The
> > > > > > > > > > alignment being reduced should not be a problem, as
> > > > > > > > > > 128byte alignment is also valid as 64byte alignment, after
> > > all.
> > > > > > > > >
> > > > > > > > > I'd say yes. Consider the following example:
> > > > > > > > >
> > > > > > > > > ---8<---
> > > > > > > > > #include <stdio.h>
> > > > > > > > > #include <stdlib.h>
> > > > > > > > >
> > > > > > > > > #define ALIGN 64
> > > > > > > > > /* #define ALIGN 128 */
> > > > > > > > >
> > > > > > > > > /* dummy rte_ring struct */
> > > > > > > > > struct rte_ring {
> > > > > > > > > char x[128];
> > > > > > > > > } __attribute__((aligned(ALIGN)));
> > > > > > > > >
> > > > > > > > > struct foo {
> > > > > > > > > struct rte_ring r;
> > > > > > > > > unsigned bar;
> > > > > > > > > };
> > > > > > > > >
> > > > > > > > > int main(void)
> > > > > > > > > {
> > > > > > > > > struct foo array[2];
> > > > > > > > >
> > > > > > > > > printf("sizeof(ring)=%zu diff=%u\n",
> > > > > > > > > sizeof(struct rte_ring),
> > > > > > > > > (unsigned int)((char *)&array[1].r - (char
> > > *)array));
> > > > > > > > >
> > > > > > > > > return 0;
> > > > > > > > > }
> > > > > > > > > ---8<---
> > > > > > > > >
> > > > > > > > > The size of rte_ring is always 128.
> > > > > > > > > diff is 192 or 256, depending on the value of ALIGN.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Olivier
> > > > > > >
> > > > > > > About would it be an ABI breakage to 17.05 - I think would...
> > > > > > > Though for me the actual breakage happens in 17.05 when rte_ring
> > > > > > > alignment was increased from 64B to 128B.
> > > > > > > Now we just restoring it.
> > > > > > >
> > > > > > Yes, ABI change was announced in advance and explicitly broken in
> > > 17.05.
> > > > > > There was no announcement of ABI break in 17.08 for rte_ring.
> > > > > >
> > > > > > > >
> > > > > > > > Yes, the diff will change, but that is after a recompile. If
> > > > > > > > we have rte_ring_create function always return a 128-byte
> > > > > > > > aligned structure, will any already-compiled apps fail to work
> > > > > > > > if we also change the alignment of the rte_ring struct in the
> > > header?
> > > > > > >
> > > > > > > Why 128B?
> > > > > > > I thought we are discussing making rte_ring 64B aligned again?
> > > > > > >
> > > > > > > Konstantin
> > > > > >
> > > > > > To avoid possibly breaking apps compiled against 17.05 when run
> > > > > > against shared libs for 17.08. Having the extra alignment won't
> > > > > > affect 17.08 apps, since they only require 64-byte alignment, but
> > > > > > returning only 64-byte aligned memory for apps which expect
> > > > > > 128byte aligned memory may cause issues.
> > > > > >
> > > > > > Therefore, we should reduce the required alignment to 64B, which
> > > > > > should only affect any apps that do a recompile, and have memory
> > > > > > allocation for rings return 128B aligned addresses to work with
> > > > > > both 64B aligned and 128B aligned ring structures.
> > > > >
> > > > > Ah, I see - you are talking just about rte_ring_create().
> > > > > BTW, are you sure that right now it allocates rings 128B aligned?
> > > > > As I can see it does just:
> > > > > mz = rte_memzone_reserve(mz_name, ring_size, socket_id, mz_flags);
> > > > > which means cache line alignment.
> > > > >
> > > > It doesn't currently allocate with that alignment, which is something
> > > > we need to fix - and what this patch was originally submitted to do.
> > > > So I think this patch should be applied, along with a further patch to
> > > > reduce the alignment going forward to avoid any other problems.
> > >
> > > But if we going to reduce alignment anyway (patch #2) why do we need patch
> > > #1 at all?
> >
> > Because any app compiled against 17.05 will use the old alignment value. Therefore patch 1 should be applied to 17.08 for backward
> > compatibility, and backported to 17.05.1.
>
> Why then just no backport patch #2 to 17.05.1?
>
Maybe so. I'm just a little wary about backporting changes like that to
an older release, even though I'm not aware of any specific issues it
might cause.
/Bruce
^ permalink raw reply [relevance 0%]
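
To make the layout question in this thread concrete, here is a sketch of
Konstantin's proposal (64-byte struct alignment with explicit empty cache
lines), with compile-time checks; the field names are placeholders, not the
real rte_ring definition:

---8<---
#include <stddef.h>
#include <assert.h>

#define CACHE_LINE 64

struct ring_layout {
	char hdr[CACHE_LINE];				/* name, flags, ... */
	char prod[CACHE_LINE] __attribute__((aligned(CACHE_LINE)));
	char pad0[CACHE_LINE];				/* empty cache line */
	char cons[CACHE_LINE] __attribute__((aligned(CACHE_LINE)));
	char pad1[CACHE_LINE];				/* empty cache line */
};

/* prod and cons still sit two cache lines apart (the core-2-core win),
 * while the struct alignment stays at 64B, avoiding the ABI concern.
 */
static_assert(offsetof(struct ring_layout, cons) -
	offsetof(struct ring_layout, prod) == 2 * CACHE_LINE, "spacing");
static_assert(_Alignof(struct ring_layout) == CACHE_LINE, "alignment");
---8<---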
* Re: [dpdk-dev] [dpdk-users] how to build 'example' folder in dpdk-2.2.0?
@ 2017-06-09 7:01 0% ` Dharmesh Mehta
0 siblings, 0 replies; 200+ results
From: Dharmesh Mehta @ 2017-06-09 7:01 UTC (permalink / raw)
To: Shyam Shrivastav, Sam; +Cc: users, dev
[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/plain; charset=UTF-8, Size: 16208 bytes --]
Following is my makefile. I hope this will help. Copy the following content
into "Makefile" and just run:

1) make download_and_extract
2) make

HOME_DIR = $(shell pwd)
DPDK_VERSION=17.05
PKTGEN_VERSION=3.2.10
####################################################
DPDK_PACKAGE_NAME=dpdk-$(DPDK_VERSION)
DPDK_TAR_FILE_NAME=$(DPDK_PACKAGE_NAME).tar.xz
####################################################
PKTGEN_PACKAGE_NAME=pktgen-$(PKTGEN_VERSION)
PKTGEN_TAR_FILE_NAME=$(PKTGEN_PACKAGE_NAME).tar.xz
####################################################
DPDK_SOURCE_CODE_URL=http://fast.dpdk.org/rel/$(DPDK_TAR_FILE_NAME)
PKTGEN_SOURCE_CODE_URL=http://dpdk.org/browse/apps/pktgen-dpdk/snapshot/$(PKTGEN_TAR_FILE_NAME)
####################################################
export RTE_SDK=$(HOME_DIR)/$(DPDK_PACKAGE_NAME)
export RTE_TARGET=x86_64-native-linuxapp-gcc
####################################################
export PKTGEN_PATH=$(HOME_DIR)/$(PKTGEN_PACKAGE_NAME)
####################################################

PATH_TO_DPDK_PACKAGE = $(HOME_DIR)/packages/$(DPDK_TAR_FILE_NAME)
ifneq ("$(wildcard $(PATH_TO_DPDK_PACKAGE))","")
export DPDK_PACKAGE_FILE_EXISTS = 1
else
export DPDK_PACKAGE_FILE_EXISTS = 0
endif

PATH_TO_PKTGEN_PACKAGE = $(HOME_DIR)/packages/$(PKTGEN_TAR_FILE_NAME)
ifneq ("$(wildcard $(PATH_TO_PKTGEN_PACKAGE))","")
export PKTGEN_PACKAGE_FILE_EXISTS = 1
else
export PKTGEN_PACKAGE_FILE_EXISTS = 0
endif

####################################################

all: banner
	make build_dpdk
	make build_dpdk_all_examples
	make load_module

####################################################

download_and_extract: banner
	make download
	make extract

download: banner
ifeq ($(DPDK_PACKAGE_FILE_EXISTS),1)
	cp $(HOME_DIR)/packages/$(DPDK_TAR_FILE_NAME) $(HOME_DIR)
else
	rm -rf $(DPDK_TAR_FILE_NAME)
	wget $(DPDK_SOURCE_CODE_URL)
	mkdir -p $(HOME_DIR)/packages
	cp $(HOME_DIR)/$(DPDK_TAR_FILE_NAME) $(HOME_DIR)/packages
endif
####
ifeq ($(PKTGEN_PACKAGE_FILE_EXISTS),1)
	cp $(HOME_DIR)/packages/$(PKTGEN_TAR_FILE_NAME) $(HOME_DIR)
else
	rm -rf $(PKTGEN_TAR_FILE_NAME)
	wget $(PKTGEN_SOURCE_CODE_URL)
	mkdir -p $(HOME_DIR)/packages
	cp $(HOME_DIR)/$(PKTGEN_TAR_FILE_NAME) $(HOME_DIR)/packages
endif

####################################################

extract:
	tar -xJf $(DPDK_TAR_FILE_NAME)
	tar -xJf $(PKTGEN_TAR_FILE_NAME)

####################################################

build_dpdk:
	cd $(RTE_SDK) && mkdir -p $(RTE_SDK)/$(RTE_TARGET)
	cd $(RTE_SDK) && make config T=$(RTE_TARGET)
	#cd $(RTE_SDK) && sed -ri 's,(PMD_PCAP=).*,\1y,' build/.config
	cd $(RTE_SDK) && make T=$(RTE_TARGET)
	cd $(RTE_SDK) && sudo make install
	cd $(RTE_SDK) && cp $(RTE_SDK)/build/.config $(RTE_SDK)/$(RTE_TARGET)
	cd $(RTE_SDK) && cp -r $(RTE_SDK)/build/include $(RTE_SDK)/$(RTE_TARGET)

####################################################

load_module:
	@echo "Loading module uio_pci_generic"
	sudo modprobe uio_pci_generic
	sudo modprobe uio
	-sudo insmod $(RTE_SDK)/build/kmod/igb_uio.ko
	sudo modprobe vfio-pci

####################################################

clean: banner
	rm -rf $(RTE_SDK)
	rm -rf $(PKTGEN_PATH)

####################################################

clean_all: banner
	rm -rf $(RTE_SDK)
	rm -rf $(DPDK_TAR_FILE_NAME)
	rm -rf $(PKTGEN_TAR_FILE_NAME)

####################################################
example_helloworld:
	cd $(RTE_SDK)/examples/helloworld && make

example_cmdline:
	cd $(RTE_SDK)/examples/cmdline && make

example_vhost:
	cd $(RTE_SDK)/examples/vhost && make

example_exception_path:
	cd $(RTE_SDK)/examples/exception_path && make

example_bond:
	cd $(RTE_SDK)/examples/bond && make

example_ethtool:
	cd $(RTE_SDK)/examples/ethtool && make

example_ip_pipeline:
	cd $(RTE_SDK)/examples/ip_pipeline && make

example_kni:
	cd $(RTE_SDK)/examples/kni && make

example_l2fwd-jobstats:
	cd $(RTE_SDK)/examples/l2fwd-jobstats && make

example_l3fwd-power:
	cd $(RTE_SDK)/examples/l3fwd-power && make

example_performance-thread:
	cd $(RTE_SDK)/examples/performance-thread && make

example_quota_watermark:
	cd $(RTE_SDK)/examples/quota_watermark && make

example_tep_termination:
	cd $(RTE_SDK)/examples/tep_termination && make

example_vmdq:
	cd $(RTE_SDK)/examples/vmdq && make

example_ip_reassembly:
	cd $(RTE_SDK)/examples/ip_reassembly && make

example_l2fwd:
	cd $(RTE_SDK)/examples/l2fwd && make

example_l2fwd-keepalive:
	cd $(RTE_SDK)/examples/l2fwd-keepalive && make

example_l3fwd-vf:
	cd $(RTE_SDK)/examples/l3fwd-vf && make

example_multi_process:
	cd $(RTE_SDK)/examples/multi_process && make

example_ptpclient:
	cd $(RTE_SDK)/examples/ptpclient && make

example_rxtx_callbacks:
	cd $(RTE_SDK)/examples/rxtx_callbacks && make

example_timer:
	cd $(RTE_SDK)/examples/timer && make

example_vmdq_dcb:
	cd $(RTE_SDK)/examples/vmdq_dcb && make

example_distributor:
	cd $(RTE_SDK)/examples/distributor && make

example_ipsec-secgw:
	cd $(RTE_SDK)/examples/ipsec-secgw && make

example_l2fwd-cat:
	cd $(RTE_SDK)/examples/l2fwd-cat && make

example_l3fwd:
	cd $(RTE_SDK)/examples/l3fwd && make

example_link_status_interrupt:
	cd $(RTE_SDK)/examples/link_status_interrupt && make

example_netmap_compat:
	cd $(RTE_SDK)/examples/netmap_compat && make

example_qos_meter:
	cd $(RTE_SDK)/examples/qos_meter && make

example_server_node_efd:
	cd $(RTE_SDK)/examples/server_node_efd && make

example_vm_power_manager:
	cd $(RTE_SDK)/examples/vm_power_manager && make

example_dpdk_qat:
	cd $(RTE_SDK)/examples/dpdk_qat && make

example_ip_fragmentation:
	cd $(RTE_SDK)/examples/ip_fragmentation && make

example_ipv4_multicast:
	cd $(RTE_SDK)/examples/ipv4_multicast && make

example_l2fwd-crypto:
	cd $(RTE_SDK)/examples/l2fwd-crypto && make

example_l3fwd-acl:
	cd $(RTE_SDK)/examples/l3fwd-acl && make

example_load_balancer:
	cd $(RTE_SDK)/examples/load_balancer && make

example_packet_ordering:
	cd $(RTE_SDK)/examples/packet_ordering && make

example_qos_sched:
	cd $(RTE_SDK)/examples/qos_sched && make

example_skeleton:
	cd $(RTE_SDK)/examples/skeleton && make

example_vhost_xen:
	cd $(RTE_SDK)/examples/vhost_xen && make

####################################################

build_dpdk_all_examples: banner
	make example_vhost
	make example_helloworld
	make example_cmdline
	make example_exception_path
	make example_bond
	make example_ethtool
	make example_ip_pipeline
	make example_kni
	make example_l2fwd-jobstats
	make example_l3fwd-power
	make example_performance-thread
	make example_quota_watermark
	make example_tep_termination
	make example_vmdq
	make example_ip_reassembly
	make example_l2fwd
	make example_l2fwd-keepalive
	make example_l3fwd-vf
	make example_multi_process
	make example_ptpclient
	make example_rxtx_callbacks
	make example_timer
	make example_vmdq_dcb
	make example_distributor
	make example_ipsec-secgw
	make example_l3fwd
	make example_link_status_interrupt
	make example_netmap_compat
	make example_qos_meter
	make example_server_node_efd
	make example_ip_fragmentation
	make example_ipv4_multicast
	make example_l2fwd-crypto
	make example_l3fwd-acl
	make example_load_balancer
	make example_packet_ordering
	make example_qos_sched
	make example_skeleton
	#make example_l2fwd-cat
	#make example_vm_power_manager
	#make example_dpdk_qat
	#make example_vhost_xen

####################################################

banner:
	@echo ""
	@echo "**************************************************"
	@echo "HOME_DIR=$(HOME_DIR)"
	@echo "DPDK_VERSION=$(DPDK_VERSION)"
	@echo "DPDK_PACKAGE_NAME=$(DPDK_PACKAGE_NAME)"
	@echo "DPDK_TAR_FILE_NAME=$(DPDK_TAR_FILE_NAME)"
	@echo "DPDK_SOURCE_CODE_URL=$(DPDK_SOURCE_CODE_URL)"
	@echo "RTE_SDK=$(RTE_SDK)"
	@echo "RTE_TARGET=$(RTE_TARGET)"
	@echo "PKTGEN_PACKAGE_NAME=$(PKTGEN_PACKAGE_NAME)"
	@echo "PKTGEN_TAR_FILE_NAME=$(PKTGEN_TAR_FILE_NAME)"
	@echo "PKTGEN_SOURCE_CODE_URL=$(PKTGEN_SOURCE_CODE_URL)"
	@echo "**************************************************"
	@echo ""

####################################################
From: Shyam Shrivastav <shrivastav.shyam@gmail.com>
To: Sam <batmanustc@gmail.com>
Cc: users@dpdk.org; dev@dpdk.org
Sent: Thursday, June 8, 2017 11:43 PM
Subject: Re: [dpdk-users] [dpdk-dev] how to build 'example' folder in dpdk-2.2.0?
For linux
http://dpdk.org/doc/guides/linux_gsg/index.html
http://dpdk.org/doc/guides/linux_gsg/build_dpdk.html
http://dpdk.org/doc/guides/linux_gsg/build_sample_apps.html
On Fri, Jun 9, 2017 at 8:31 AM, Sam <batmanustc@gmail.com> wrote:
> hi all,
>
> I want to build example(DPDK_HOME/example/*) in dpdk, and to have a look at
> vhost demo. But I can't find guide in dpdk home page or document.
>
> So is there some document to tell me HOWTO?
>
From olivier.matz@6wind.com Fri Jun  9 09:18:14 2017
Date: Fri, 9 Jun 2017 09:18:11 +0200
From: Olivier Matz <olivier.matz@6wind.com>
To: "Patil, Harish" <Harish.Patil@cavium.com>
Cc: "Mody, Rasesh" <Rasesh.Mody@cavium.com>, Ferruh Yigit
 <ferruh.yigit@intel.com>, "dev@dpdk.org" <dev@dpdk.org>, Dept-Eng DPDK Dev
 <Dept-EngDPDKDev@cavium.com>
Message-ID: <20170609091811.0867b1d1@platinum>
In-Reply-To: <D55F0C99.1469AE%Harish.Patil@cavium.com>
Subject: Re: [dpdk-dev] [PATCH v2 1/2] mbuf: introduce new Tx offload flag
 for MPLS-in-UDP
On Thu, 8 Jun 2017 21:46:00 +0000, "Patil, Harish" <Harish.Patil@cavium.com> wrote:
> >Hi Rasesh,
> >
> >On Wed, 7 Jun 2017 00:43:48 -0700, Rasesh Mody <rasesh.mody@cavium.com>
> >wrote:
> >> From: Harish Patil <harish.patil@cavium.com>
> >>
> >> Some PMDs need to know the tunnel type in order to handle advanced TX
> >> features. This patch adds a new TX offload flag for MPLS-in-UDP packets.
> >>
> >> Signed-off-by: Harish Patil <harish.patil@cavium.com>
> >> ---
> >> lib/librte_mbuf/rte_mbuf.c | 2 ++
> >> lib/librte_mbuf/rte_mbuf.h | 17 ++++++++++-------
> >> 2 files changed, 12 insertions(+), 7 deletions(-)
> >>
> >> diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
> >> index 0e3e36a..c2793fb 100644
> >> --- a/lib/librte_mbuf/rte_mbuf.c
> >> +++ b/lib/librte_mbuf/rte_mbuf.c
> >> @@ -410,6 +410,7 @@ const char *rte_get_tx_ol_flag_name(uint64_t mask)
> >> case PKT_TX_TUNNEL_IPIP: return "PKT_TX_TUNNEL_IPIP";
> >> case PKT_TX_TUNNEL_GENEVE: return "PKT_TX_TUNNEL_GENEVE";
> >> case PKT_TX_MACSEC: return "PKT_TX_MACSEC";
> >> + case PKT_TX_TUNNEL_MPLSINUDP: return "PKT_TX_TUNNEL_MPLSINUDP";
> >> default: return NULL;
> >> }
> >> }
> >> @@ -441,6 +442,7 @@ const char *rte_get_tx_ol_flag_name(uint64_t mask)
> >> { PKT_TX_TUNNEL_GENEVE, PKT_TX_TUNNEL_MASK,
> >> "PKT_TX_TUNNEL_NONE" },
> >> { PKT_TX_MACSEC, PKT_TX_MACSEC, NULL },
> >> + { PKT_TX_TUNNEL_MPLSINUDP, PKT_TX_TUNNEL_MPLSINUDP, NULL },
> >> };
> >> const char *name;
> >> unsigned int i;
> >> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> >> index 1cb0310..27ad421 100644
> >> --- a/lib/librte_mbuf/rte_mbuf.h
> >> +++ b/lib/librte_mbuf/rte_mbuf.h
> >> @@ -197,19 +197,22 @@
> >> * Offload the MACsec. This flag must be set by the application to
> >>enable
> >> * this offload feature for a packet to be transmitted.
> >> */
> >> -#define PKT_TX_MACSEC (1ULL << 44)
> >> +#define PKT_TX_MACSEC (1ULL << 43)
> >
> >I'm not sure it is suitable to change the value of an existing
> >flag, since it breaks the ABI.
> >
> >
> >> /**
> >> - * Bits 45:48 used for the tunnel type.
> >> + * Bits 44:48 used for the tunnel type.
> >> * When doing Tx offload like TSO or checksum, the HW needs to
> >>configure the
> >> * tunnel type into the HW descriptors.
> >> */
> >> -#define PKT_TX_TUNNEL_VXLAN (0x1ULL << 45)
> >> -#define PKT_TX_TUNNEL_GRE (0x2ULL << 45)
> >> -#define PKT_TX_TUNNEL_IPIP (0x3ULL << 45)
> >> -#define PKT_TX_TUNNEL_GENEVE (0x4ULL << 45)
> >> +/**< TX packet with MPLS-in-UDP RFC 7510 header. */
> >> +#define PKT_TX_TUNNEL_MPLSINUDP (0x1ULL << 44)
> >> +
> >> +#define PKT_TX_TUNNEL_VXLAN (0x2ULL << 44)
> >> +#define PKT_TX_TUNNEL_GRE (0x3ULL << 44)
> >> +#define PKT_TX_TUNNEL_IPIP (0x4ULL << 44)
> >> +#define PKT_TX_TUNNEL_GENEVE (0x5ULL << 44)
> >> /* add new TX TUNNEL type here */
> >> -#define PKT_TX_TUNNEL_MASK (0xFULL << 45)
> >> +#define PKT_TX_TUNNEL_MASK (0x1FULL << 44)
> >>
> >> /**
> >> * Second VLAN insertion (QinQ) flag.
> >
> >I don't understand why adding
> >#define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
> >wouldn't do the job?
> >
> >Currently, the tunnel mask is 0xF << 45, which gives 16 possible values.
>
> [Harish] Hi Olivier,
> Not too sure whether I understand your comment.
> My understanding is that those are bitmapped values for each Tx tunnel
> type in the range [48:45].
> They are not values. So defining PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
> won't work.
Currently, we have:
#define PKT_TX_TUNNEL_VXLAN (0x1ULL << 45)
in binary: 000..000[0001]000..000
#define PKT_TX_TUNNEL_GRE (0x2ULL << 45)
in binary: 000..000[0010]000..000
#define PKT_TX_TUNNEL_IPIP (0x3ULL << 45)
in binary: 000..000[0011]000..000
#define PKT_TX_TUNNEL_GENEVE (0x4ULL << 45)
in binary: 000..000[0100]000..000
So, I'm still saying there's room for 11 more values.
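To make it concrete, here is a minimal sketch of how a driver would read
the field (the flag values are the existing ones from rte_mbuf.h; the
helper name is made up):
---8<---
#include <stdint.h>

#define PKT_TX_TUNNEL_VXLAN  (0x1ULL << 45)
#define PKT_TX_TUNNEL_GRE    (0x2ULL << 45)
#define PKT_TX_TUNNEL_IPIP   (0x3ULL << 45)
#define PKT_TX_TUNNEL_GENEVE (0x4ULL << 45)
/* a fifth type fits without widening the field: */
#define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
#define PKT_TX_TUNNEL_MASK   (0xFULL << 45)

/* The tunnel type is an enumerated value stored in bits 45:48, not one
 * bit per type, so always compare the whole masked field: */
static inline int
pkt_is_mplsinudp(uint64_t ol_flags)
{
	return (ol_flags & PKT_TX_TUNNEL_MASK) == PKT_TX_TUNNEL_MPLSINUDP;
}
---8<---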
Olivier
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2 1/2] mbuf: introduce new Tx offload flag for MPLS-in-UDP
2017-06-08 12:25 3% ` Olivier Matz
@ 2017-06-08 21:46 3% ` Patil, Harish
[not found] ` <20170609091811.0867b1d1@platinum>
0 siblings, 1 reply; 200+ results
From: Patil, Harish @ 2017-06-08 21:46 UTC (permalink / raw)
To: Olivier Matz, Mody, Rasesh, Ferruh Yigit; +Cc: dev, Dept-Eng DPDK Dev
>Hi Rasesh,
>
>On Wed, 7 Jun 2017 00:43:48 -0700, Rasesh Mody <rasesh.mody@cavium.com>
>wrote:
>> From: Harish Patil <harish.patil@cavium.com>
>>
>> Some PMDs need to know the tunnel type in order to handle advanced TX
>> features. This patch adds a new TX offload flag for MPLS-in-UDP packets.
>>
>> Signed-off-by: Harish Patil <harish.patil@cavium.com>
>> ---
>> lib/librte_mbuf/rte_mbuf.c | 2 ++
>> lib/librte_mbuf/rte_mbuf.h | 17 ++++++++++-------
>> 2 files changed, 12 insertions(+), 7 deletions(-)
>>
>> diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
>> index 0e3e36a..c2793fb 100644
>> --- a/lib/librte_mbuf/rte_mbuf.c
>> +++ b/lib/librte_mbuf/rte_mbuf.c
>> @@ -410,6 +410,7 @@ const char *rte_get_tx_ol_flag_name(uint64_t mask)
>> case PKT_TX_TUNNEL_IPIP: return "PKT_TX_TUNNEL_IPIP";
>> case PKT_TX_TUNNEL_GENEVE: return "PKT_TX_TUNNEL_GENEVE";
>> case PKT_TX_MACSEC: return "PKT_TX_MACSEC";
>> + case PKT_TX_TUNNEL_MPLSINUDP: return "PKT_TX_TUNNEL_MPLSINUDP";
>> default: return NULL;
>> }
>> }
>> @@ -441,6 +442,7 @@ const char *rte_get_tx_ol_flag_name(uint64_t mask)
>> { PKT_TX_TUNNEL_GENEVE, PKT_TX_TUNNEL_MASK,
>> "PKT_TX_TUNNEL_NONE" },
>> { PKT_TX_MACSEC, PKT_TX_MACSEC, NULL },
>> + { PKT_TX_TUNNEL_MPLSINUDP, PKT_TX_TUNNEL_MPLSINUDP, NULL },
>> };
>> const char *name;
>> unsigned int i;
>> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
>> index 1cb0310..27ad421 100644
>> --- a/lib/librte_mbuf/rte_mbuf.h
>> +++ b/lib/librte_mbuf/rte_mbuf.h
>> @@ -197,19 +197,22 @@
>> * Offload the MACsec. This flag must be set by the application to
>>enable
>> * this offload feature for a packet to be transmitted.
>> */
>> -#define PKT_TX_MACSEC (1ULL << 44)
>> +#define PKT_TX_MACSEC (1ULL << 43)
>
>I'm not sure it is suitable to change the value of an existing
>flag, since it breaks the ABI.
>
>
>> /**
>> - * Bits 45:48 used for the tunnel type.
>> + * Bits 44:48 used for the tunnel type.
>> * When doing Tx offload like TSO or checksum, the HW needs to
>>configure the
>> * tunnel type into the HW descriptors.
>> */
>> -#define PKT_TX_TUNNEL_VXLAN (0x1ULL << 45)
>> -#define PKT_TX_TUNNEL_GRE (0x2ULL << 45)
>> -#define PKT_TX_TUNNEL_IPIP (0x3ULL << 45)
>> -#define PKT_TX_TUNNEL_GENEVE (0x4ULL << 45)
>> +/**< TX packet with MPLS-in-UDP RFC 7510 header. */
>> +#define PKT_TX_TUNNEL_MPLSINUDP (0x1ULL << 44)
>> +
>> +#define PKT_TX_TUNNEL_VXLAN (0x2ULL << 44)
>> +#define PKT_TX_TUNNEL_GRE (0x3ULL << 44)
>> +#define PKT_TX_TUNNEL_IPIP (0x4ULL << 44)
>> +#define PKT_TX_TUNNEL_GENEVE (0x5ULL << 44)
>> /* add new TX TUNNEL type here */
>> -#define PKT_TX_TUNNEL_MASK (0xFULL << 45)
>> +#define PKT_TX_TUNNEL_MASK (0x1FULL << 44)
>>
>> /**
>> * Second VLAN insertion (QinQ) flag.
>
>I don't understand why adding
>#define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
>wouldn't do the job?
>
>Currently, the tunnel mask is 0xF << 45, which gives 16 possible values.
[Harish] Hi Olivier,
Not too sure whether I understand your comment.
My understanding is that those are bitmapped values for each Tx tunnel
type in the range [48:45].
They are not values. So defining PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
won’t work.
Currently bits[48:45] are reserved for Tx tunnel types. Bits[63:49] and
bit 44 are already taken.
Bits [43:18] are free. That’s why we see a code comment there:
/* add new RX flags here */
/* add new TX flags here */
So I could have added MPLSINUDP as:
#define PKT_TX_TUNNEL_MPLSINUDP (1ULL << 18)
But I wanted to group all Tx tunnel types together, which is logical, and
update PKT_TX_TUNNEL_MASK accordingly. So to accommodate the new MPLSoUDP
flag, I had to move PKT_TX_MACSEC back by one bit position, from 44
to 43, hence the code comment change:
- * Bits 45:48 used for the tunnel type.
+ * Bits 44:48 used for the tunnel type.
But if this would cause an ABI breakage, the option is to use bit 18 as
shown above and update the mask accordingly.
The downside is that the tunnel types would then not be grouped together.
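A sketch of that fallback, for clarity (the driver-side comment is only
my reading of what it would imply):
/* ABI-preserving alternative: take a free flag bit instead of
 * widening the tunnel-type field. */
#define PKT_TX_TUNNEL_MPLSINUDP (1ULL << 18)
/* PKT_TX_TUNNEL_MASK stays (0xFULL << 45), so a driver would need an
 * extra check on this flag next to its masked tunnel-type compare. */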
Please let me know if this is okay?
Thanks,
Harish
>
>Regards,
>Olivier
>
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
2017-06-08 16:20 0% ` Richardson, Bruce
@ 2017-06-08 16:42 0% ` Ananyev, Konstantin
2017-06-09 9:02 0% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2017-06-08 16:42 UTC (permalink / raw)
To: Richardson, Bruce; +Cc: Olivier Matz, Verkamp, Daniel, dev
> -----Original Message-----
> From: Richardson, Bruce
> Sent: Thursday, June 8, 2017 5:21 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
>
>
>
> > -----Original Message-----
> > From: Ananyev, Konstantin
> > Sent: Thursday, June 8, 2017 5:13 PM
> > To: Richardson, Bruce <bruce.richardson@intel.com>
> > Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel
> > <daniel.verkamp@intel.com>; dev@dpdk.org
> > Subject: RE: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> >
> >
> >
> > > -----Original Message-----
> > > From: Richardson, Bruce
> > > Sent: Thursday, June 8, 2017 5:04 PM
> > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel
> > > <daniel.verkamp@intel.com>; dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone
> > > allocation
> > >
> > > On Thu, Jun 08, 2017 at 04:35:20PM +0100, Ananyev, Konstantin wrote:
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Richardson, Bruce
> > > > > Sent: Thursday, June 8, 2017 4:25 PM
> > > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > > > Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel
> > > > > <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone
> > > > > allocation
> > > > >
> > > > > On Thu, Jun 08, 2017 at 03:50:34PM +0100, Ananyev, Konstantin wrote:
> > > > > >
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Richardson, Bruce
> > > > > > > Sent: Thursday, June 8, 2017 3:12 PM
> > > > > > > To: Olivier Matz <olivier.matz@6wind.com>
> > > > > > > Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > > > > > > Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone
> > > > > > > allocation
> > > > > > >
> > > > > > > On Thu, Jun 08, 2017 at 04:05:26PM +0200, Olivier Matz wrote:
> > > > > > > > On Thu, 8 Jun 2017 14:20:52 +0100, Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> > > > > > > > > On Thu, Jun 08, 2017 at 02:45:40PM +0200, Olivier Matz
> > wrote:
> > > > > > > > > > On Tue, 6 Jun 2017 15:56:28 +0100, Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> > > > > > > > > > > On Tue, Jun 06, 2017 at 02:19:21PM +0100, Ananyev,
> > Konstantin wrote:
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > > -----Original Message-----
> > > > > > > > > > > > > From: Richardson, Bruce
> > > > > > > > > > > > > Sent: Tuesday, June 6, 2017 1:42 PM
> > > > > > > > > > > > > To: Ananyev, Konstantin
> > > > > > > > > > > > > <konstantin.ananyev@intel.com>
> > > > > > > > > > > > > Cc: Verkamp, Daniel <daniel.verkamp@intel.com>;
> > > > > > > > > > > > > dev@dpdk.org
> > > > > > > > > > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use
> > > > > > > > > > > > > aligned memzone allocation
> > > > > > > > > > > > >
> > > > > > > > > > > > > On Tue, Jun 06, 2017 at 10:59:59AM +0100, Ananyev,
> > Konstantin wrote:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > The PROD/CONS_ALIGN values on x86-64 are
> > > > > > > > > > > > > > > > > set to 2 cache lines, so members
> > > > > > > > > > > > > > > > of struct rte_ring are 128 byte aligned,
> > > > > > > > > > > > > > > > >and therefore the whole struct needs
> > > > > > > > > > > > > > > > >128-byte alignment according to the ABI
> > > > > > > > > > > > > > > > so that the 128-byte alignment of the fields
> > can be guaranteed.
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > Ah ok, missed the fact that rte_ring is 128B
> > aligned these days.
> > > > > > > > > > > > > > > > BTW, I probably missed the initial discussion,
> > but what was the reason for that?
> > > > > > > > > > > > > > > > Konstantin
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > I don't know why PROD_ALIGN/CONS_ALIGN use 128
> > > > > > > > > > > > > > > byte alignment; it seems unnecessary if the
> > > > > > > > > > > > > > > cache line is only 64
> > > > > bytes.
> > > > > > > An
> > > > > > > > > > > > > alternate
> > > > > > > > > > > > > > > fix would be to just use cache line alignment
> > for these fields (since memzones are already cache line aligned).
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Yes, had the same thought.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Maybe there is some deeper reason for the >=
> > 128-byte alignment logic in rte_ring.h?
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Might be, would be good to hear the opinion of the author
> > of that change.
> > > > > > > > > > > > >
> > > > > > > > > > > > > It gives improved performance for core-2-core
> > transfer.
> > > > > > > > > > > >
> > > > > > > > > > > > You mean empty cache-line(s) after prod/cons, correct?
> > > > > > > > > > > > That's ok but why we can't keep them and whole
> > rte_ring aligned on cache-line boundaries?
> > > > > > > > > > > > Something like that:
> > > > > > > > > > > > struct rte_ring {
> > > > > > > > > > > > ...
> > > > > > > > > > > > struct rte_ring_headtail prod __rte_cache_aligned;
> > > > > > > > > > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > > > > > > > > struct rte_ring_headtail cons __rte_cache_aligned;
> > > > > > > > > > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > > > > > > > > };
> > > > > > > > > > > >
> > > > > > > > > > > > Konstantin
> > > > > > > > > > >
> > > > > > > > > > > Sure. That should probably work too.
> > > > > > > > > > >
> > > > > > > > > > > /Bruce
> > > > > > > > > >
> > > > > > > > > > I also agree with Konstantin's proposal. One question
> > > > > > > > > > though: since it changes the alignment constraint of the
> > > > > > > > > > rte_ring structure, I think it is an ABI breakage: a
> > > > > > > > > > structure including the rte_ring structure inherits from
> > this constraint.
> > > > > > > > > >
> > > > > > > > > > How could we handle that, knowing this is probably a rare
> > case?
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > Is it an ABI break so long as we keep the resulting size
> > > > > > > > > and field placement of the structures the same? The
> > > > > > > > > alignment being reduced should not be a problem, as
> > > > > > > > > 128byte alignment is also valid as 64byte alignment, after
> > all.
> > > > > > > >
> > > > > > > > I'd say yes. Consider the following example:
> > > > > > > >
> > > > > > > > ---8<---
> > > > > > > > #include <stdio.h>
> > > > > > > > #include <stdlib.h>
> > > > > > > >
> > > > > > > > #define ALIGN 64
> > > > > > > > /* #define ALIGN 128 */
> > > > > > > >
> > > > > > > > /* dummy rte_ring struct */
> > > > > > > > struct rte_ring {
> > > > > > > > char x[128];
> > > > > > > > } __attribute__((aligned(ALIGN)));
> > > > > > > >
> > > > > > > > struct foo {
> > > > > > > > struct rte_ring r;
> > > > > > > > unsigned bar;
> > > > > > > > };
> > > > > > > >
> > > > > > > > int main(void)
> > > > > > > > {
> > > > > > > > struct foo array[2];
> > > > > > > >
> > > > > > > > printf("sizeof(ring)=%zu diff=%u\n",
> > > > > > > > sizeof(struct rte_ring),
> > > > > > > > (unsigned int)((char *)&array[1].r - (char
> > *)array));
> > > > > > > >
> > > > > > > > return 0;
> > > > > > > > }
> > > > > > > > ---8<---
> > > > > > > >
> > > > > > > > The size of rte_ring is always 128.
> > > > > > > > diff is 192 or 256, depending on the value of ALIGN.
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Olivier
> > > > > >
> > > > > > About whether it would be an ABI breakage to 17.05 - I think it would...
> > > > > > Though for me the actual breakage happened in 17.05 when rte_ring
> > > > > > alignment was increased from 64B to 128B.
> > > > > > Now we are just restoring it.
> > > > > >
> > > > > Yes, ABI change was announced in advance and explicitly broken in
> > 17.05.
> > > > > There was no announcement of ABI break in 17.08 for rte_ring.
> > > > >
> > > > > > >
> > > > > > > Yes, the diff will change, but that is after a recompile. If
> > > > > > > we have rte_ring_create function always return a 128-byte
> > > > > > > aligned structure, will any already-compiled apps fail to work
> > > > > > > if we also change the alignment of the rte_ring struct in the
> > header?
> > > > > >
> > > > > > Why 128B?
> > > > > > I thought we are discussing making rte_ring 64B aligned again?
> > > > > >
> > > > > > Konstantin
> > > > >
> > > > > To avoid possibly breaking apps compiled against 17.05 when run
> > > > > against shared libs for 17.08. Having the extra alignment won't
> > > > > affect 17.08 apps, since they only require 64-byte alignment, but
> > > > > returning only 64-byte aligned memory for apps which expect
> > > > > 128byte aligned memory may cause issues.
> > > > >
> > > > > Therefore, we should reduce the required alignment to 64B, which
> > > > > should only affect any apps that do a recompile, and have memory
> > > > > allocation for rings return 128B aligned addresses to work with
> > > > > both 64B aligned and 128B aligned ring structures.
> > > >
> > > > Ah, I see - you are talking just about rte_ring_create().
> > > > BTW, are you sure that right now it allocates rings 128B aligned?
> > > > As I can see it does just:
> > > > mz = rte_memzone_reserve(mz_name, ring_size, socket_id, mz_flags);
> > > > which means cache line alignment.
> > > >
> > > It doesn't currently allocate with that alignment, which is something
> > > we need to fix - and what this patch was originally submitted to do.
> > > So I think this patch should be applied, along with a further patch to
> > > reduce the alignment going forward to avoid any other problems.
> >
> > But if we are going to reduce the alignment anyway (patch #2), why do we
> > need patch #1 at all?
>
> Because any app compiled against 17.05 will use the old alignment value. Therefore patch 1 should be applied to 17.08 for backward
> compatibility, and backported to 17.05.1.
Why not then just backport patch #2 to 17.05.1?
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
2017-06-08 16:12 0% ` Ananyev, Konstantin
@ 2017-06-08 16:20 0% ` Richardson, Bruce
2017-06-08 16:42 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Richardson, Bruce @ 2017-06-08 16:20 UTC (permalink / raw)
To: Ananyev, Konstantin; +Cc: Olivier Matz, Verkamp, Daniel, dev
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Thursday, June 8, 2017 5:13 PM
> To: Richardson, Bruce <bruce.richardson@intel.com>
> Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel
> <daniel.verkamp@intel.com>; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
>
>
>
> > -----Original Message-----
> > From: Richardson, Bruce
> > Sent: Thursday, June 8, 2017 5:04 PM
> > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel
> > <daniel.verkamp@intel.com>; dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone
> > allocation
> >
> > On Thu, Jun 08, 2017 at 04:35:20PM +0100, Ananyev, Konstantin wrote:
> > >
> > >
> > > > -----Original Message-----
> > > > From: Richardson, Bruce
> > > > Sent: Thursday, June 8, 2017 4:25 PM
> > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > > Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel
> > > > <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone
> > > > allocation
> > > >
> > > > On Thu, Jun 08, 2017 at 03:50:34PM +0100, Ananyev, Konstantin wrote:
> > > > >
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Richardson, Bruce
> > > > > > Sent: Thursday, June 8, 2017 3:12 PM
> > > > > > To: Olivier Matz <olivier.matz@6wind.com>
> > > > > > Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > > > > > Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone
> > > > > > allocation
> > > > > >
> > > > > > On Thu, Jun 08, 2017 at 04:05:26PM +0200, Olivier Matz wrote:
> > > > > > > On Thu, 8 Jun 2017 14:20:52 +0100, Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> > > > > > > > On Thu, Jun 08, 2017 at 02:45:40PM +0200, Olivier Matz
> wrote:
> > > > > > > > > On Tue, 6 Jun 2017 15:56:28 +0100, Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> > > > > > > > > > On Tue, Jun 06, 2017 at 02:19:21PM +0100, Ananyev,
> Konstantin wrote:
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > > -----Original Message-----
> > > > > > > > > > > > From: Richardson, Bruce
> > > > > > > > > > > > Sent: Tuesday, June 6, 2017 1:42 PM
> > > > > > > > > > > > To: Ananyev, Konstantin
> > > > > > > > > > > > <konstantin.ananyev@intel.com>
> > > > > > > > > > > > Cc: Verkamp, Daniel <daniel.verkamp@intel.com>;
> > > > > > > > > > > > dev@dpdk.org
> > > > > > > > > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use
> > > > > > > > > > > > aligned memzone allocation
> > > > > > > > > > > >
> > > > > > > > > > > > On Tue, Jun 06, 2017 at 10:59:59AM +0100, Ananyev,
> Konstantin wrote:
> > > > > > > > > > > > >
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > The PROD/CONS_ALIGN values on x86-64 are
> > > > > > > > > > > > > > > > set to 2 cache lines, so members
> > > > > > > > > > > > > > > of struct rte_ring are 128 byte aligned,
> > > > > > > > > > > > > > > >and therefore the whole struct needs
> > > > > > > > > > > > > > > >128-byte alignment according to the ABI
> > > > > > > > > > > > > > > so that the 128-byte alignment of the fields
> can be guaranteed.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Ah ok, missed the fact that rte_ring is 128B
> aligned these days.
> > > > > > > > > > > > > > > BTW, I probably missed the initial discussion,
> but what was the reason for that?
> > > > > > > > > > > > > > > Konstantin
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > I don't know why PROD_ALIGN/CONS_ALIGN use 128
> > > > > > > > > > > > > > byte alignment; it seems unnecessary if the
> > > > > > > > > > > > > > cache line is only 64
> > > > bytes.
> > > > > > An
> > > > > > > > > > > > alternate
> > > > > > > > > > > > > > fix would be to just use cache line alignment
> for these fields (since memzones are already cache line aligned).
> > > > > > > > > > > > >
> > > > > > > > > > > > > Yes, had the same thought.
> > > > > > > > > > > > >
> > > > > > > > > > > > > > Maybe there is some deeper reason for the >=
> 128-byte alignment logic in rte_ring.h?
> > > > > > > > > > > > >
> > > > > > > > > > > > > Might be, would be good to hear the opinion of the author
> of that change.
> > > > > > > > > > > >
> > > > > > > > > > > > It gives improved performance for core-2-core
> transfer.
> > > > > > > > > > >
> > > > > > > > > > > You mean empty cache-line(s) after prod/cons, correct?
> > > > > > > > > > > That's ok but why we can't keep them and whole
> rte_ring aligned on cache-line boundaries?
> > > > > > > > > > > Something like that:
> > > > > > > > > > > struct rte_ring {
> > > > > > > > > > > ...
> > > > > > > > > > > struct rte_ring_headtail prod __rte_cache_aligned;
> > > > > > > > > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > > > > > > > struct rte_ring_headtail cons __rte_cache_aligned;
> > > > > > > > > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > > > > > > > };
> > > > > > > > > > >
> > > > > > > > > > > Konstantin
> > > > > > > > > >
> > > > > > > > > > Sure. That should probably work too.
> > > > > > > > > >
> > > > > > > > > > /Bruce
> > > > > > > > >
> > > > > > > > > I also agree with Konstantin's proposal. One question
> > > > > > > > > though: since it changes the alignment constraint of the
> > > > > > > > > rte_ring structure, I think it is an ABI breakage: a
> > > > > > > > > structure including the rte_ring structure inherits from
> this constraint.
> > > > > > > > >
> > > > > > > > > How could we handle that, knowing this is probably a rare
> case?
> > > > > > > > >
> > > > > > > > >
> > > > > > > > Is it an ABI break so long as we keep the resulting size
> > > > > > > > and field placement of the structures the same? The
> > > > > > > > alignment being reduced should not be a problem, as
> > > > > > > > 128byte alignment is also valid as 64byte alignment, after
> all.
> > > > > > >
> > > > > > > I'd say yes. Consider the following example:
> > > > > > >
> > > > > > > ---8<---
> > > > > > > #include <stdio.h>
> > > > > > > #include <stdlib.h>
> > > > > > >
> > > > > > > #define ALIGN 64
> > > > > > > /* #define ALIGN 128 */
> > > > > > >
> > > > > > > /* dummy rte_ring struct */
> > > > > > > struct rte_ring {
> > > > > > > char x[128];
> > > > > > > } __attribute__((aligned(ALIGN)));
> > > > > > >
> > > > > > > struct foo {
> > > > > > > struct rte_ring r;
> > > > > > > unsigned bar;
> > > > > > > };
> > > > > > >
> > > > > > > int main(void)
> > > > > > > {
> > > > > > > struct foo array[2];
> > > > > > >
> > > > > > > printf("sizeof(ring)=%zu diff=%u\n",
> > > > > > > sizeof(struct rte_ring),
> > > > > > > (unsigned int)((char *)&array[1].r - (char
> *)array));
> > > > > > >
> > > > > > > return 0;
> > > > > > > }
> > > > > > > ---8<---
> > > > > > >
> > > > > > > The size of rte_ring is always 128.
> > > > > > > diff is 192 or 256, depending on the value of ALIGN.
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Olivier
> > > > >
> > > > > About whether it would be an ABI breakage to 17.05 - I think it would...
> > > > > Though for me the actual breakage happened in 17.05 when rte_ring
> > > > > alignment was increased from 64B to 128B.
> > > > > Now we are just restoring it.
> > > > >
> > > > Yes, ABI change was announced in advance and explicitly broken in
> 17.05.
> > > > There was no announcement of ABI break in 17.08 for rte_ring.
> > > >
> > > > > >
> > > > > > Yes, the diff will change, but that is after a recompile. If
> > > > > > we have rte_ring_create function always return a 128-byte
> > > > > > aligned structure, will any already-compiled apps fail to work
> > > > > > if we also change the alignment of the rte_ring struct in the
> header?
> > > > >
> > > > > Why 128B?
> > > > > I thought we are discussing making rte_ring 64B aligned again?
> > > > >
> > > > > Konstantin
> > > >
> > > > To avoid possibly breaking apps compiled against 17.05 when run
> > > > against shared libs for 17.08. Having the extra alignment won't
> > > > affect 17.08 apps, since they only require 64-byte alignment, but
> > > > returning only 64-byte aligned memory for apps which expect
> > > > 128byte aligned memory may cause issues.
> > > >
> > > > Therefore, we should reduce the required alignment to 64B, which
> > > > should only affect any apps that do a recompile, and have memory
> > > > allocation for rings return 128B aligned addresses to work with
> > > > both 64B aligned and 128B aligned ring structures.
> > >
> > > Ah, I see - you are talking just about rte_ring_create().
> > > BTW, are you sure that right now it allocates rings 128B aligned?
> > > As I can see it does just:
> > > mz = rte_memzone_reserve(mz_name, ring_size, socket_id, mz_flags);
> > > which means cache line alignment.
> > >
> > It doesn't currently allocate with that alignment, which is something
> > we need to fix - and what this patch was originally submitted to do.
> > So I think this patch should be applied, along with a further patch to
> > reduce the alignment going forward to avoid any other problems.
>
> But if we are going to reduce the alignment anyway (patch #2), why do we
> need patch #1 at all?
Because any app compiled against 17.05 will use the old alignment value. Therefore patch 1 should be applied to 17.08 for backward compatibility, and backported to 17.05.1.
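To spell out the failure mode with a sketch (assuming an application
binary built against the 17.05 headers):
/* The 17.05 headers declare struct rte_ring 128B-aligned, so code
 * compiled against them may assume this of any ring pointer: */
struct rte_ring *r = rte_ring_create("r0", 1024, SOCKET_ID_ANY, 0);
assert(((uintptr_t)r & 127) == 0);
/* A 17.08 shared library returning only 64B-aligned rings would break
 * that assumption without the application ever being recompiled. */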
/Bruce
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
2017-06-08 16:03 0% ` Bruce Richardson
@ 2017-06-08 16:12 0% ` Ananyev, Konstantin
2017-06-08 16:20 0% ` Richardson, Bruce
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2017-06-08 16:12 UTC (permalink / raw)
To: Richardson, Bruce; +Cc: Olivier Matz, Verkamp, Daniel, dev
> -----Original Message-----
> From: Richardson, Bruce
> Sent: Thursday, June 8, 2017 5:04 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
>
> On Thu, Jun 08, 2017 at 04:35:20PM +0100, Ananyev, Konstantin wrote:
> >
> >
> > > -----Original Message-----
> > > From: Richardson, Bruce
> > > Sent: Thursday, June 8, 2017 4:25 PM
> > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> > >
> > > On Thu, Jun 08, 2017 at 03:50:34PM +0100, Ananyev, Konstantin wrote:
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Richardson, Bruce
> > > > > Sent: Thursday, June 8, 2017 3:12 PM
> > > > > To: Olivier Matz <olivier.matz@6wind.com>
> > > > > Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> > > > >
> > > > > On Thu, Jun 08, 2017 at 04:05:26PM +0200, Olivier Matz wrote:
> > > > > > On Thu, 8 Jun 2017 14:20:52 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> > > > > > > On Thu, Jun 08, 2017 at 02:45:40PM +0200, Olivier Matz wrote:
> > > > > > > > On Tue, 6 Jun 2017 15:56:28 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> > > > > > > > > On Tue, Jun 06, 2017 at 02:19:21PM +0100, Ananyev, Konstantin wrote:
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > > -----Original Message-----
> > > > > > > > > > > From: Richardson, Bruce
> > > > > > > > > > > Sent: Tuesday, June 6, 2017 1:42 PM
> > > > > > > > > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > > > > > > > > > Cc: Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > > > > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> > > > > > > > > > >
> > > > > > > > > > > On Tue, Jun 06, 2017 at 10:59:59AM +0100, Ananyev, Konstantin wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > The PROD/CONS_ALIGN values on x86-64 are set to 2 cache lines, so members
> > > > > > > > > > > > > > of struct rte_ring are 128 byte aligned,
> > > > > > > > > > > > > > >and therefore the whole struct needs 128-byte alignment according to the ABI
> > > > > > > > > > > > > > so that the 128-byte alignment of the fields can be guaranteed.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Ah ok, missed the fact that rte_ring is 128B aligned these days.
> > > > > > > > > > > > > > BTW, I probably missed the initial discussion, but what was the reason for that?
> > > > > > > > > > > > > > Konstantin
> > > > > > > > > > > > >
> > > > > > > > > > > > > I don't know why PROD_ALIGN/CONS_ALIGN use 128 byte alignment; it seems unnecessary if the cache line is only 64
> > > bytes.
> > > > > An
> > > > > > > > > > > alternate
> > > > > > > > > > > > > fix would be to just use cache line alignment for these fields (since memzones are already cache line aligned).
> > > > > > > > > > > >
> > > > > > > > > > > > Yes, had the same thought.
> > > > > > > > > > > >
> > > > > > > > > > > > > Maybe there is some deeper reason for the >= 128-byte alignment logic in rte_ring.h?
> > > > > > > > > > > >
> > > > > > > > > > > > Might be, would be good to hear the opinion of the author of that change.
> > > > > > > > > > >
> > > > > > > > > > > It gives improved performance for core-2-core transfer.
> > > > > > > > > >
> > > > > > > > > > You mean empty cache-line(s) after prod/cons, correct?
> > > > > > > > > > That's ok but why we can't keep them and whole rte_ring aligned on cache-line boundaries?
> > > > > > > > > > Something like that:
> > > > > > > > > > struct rte_ring {
> > > > > > > > > > ...
> > > > > > > > > > struct rte_ring_headtail prod __rte_cache_aligned;
> > > > > > > > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > > > > > > struct rte_ring_headtail cons __rte_cache_aligned;
> > > > > > > > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > > > > > > };
> > > > > > > > > >
> > > > > > > > > > Konstantin
> > > > > > > > >
> > > > > > > > > Sure. That should probably work too.
> > > > > > > > >
> > > > > > > > > /Bruce
> > > > > > > >
> > > > > > > > I also agree with Konstantin's proposal. One question though: since it
> > > > > > > > changes the alignment constraint of the rte_ring structure, I think it is
> > > > > > > > an ABI breakage: a structure including the rte_ring structure inherits
> > > > > > > > from this constraint.
> > > > > > > >
> > > > > > > > How could we handle that, knowing this is probably a rare case?
> > > > > > > >
> > > > > > > >
> > > > > > > Is it an ABI break so long as we keep the resulting size and field
> > > > > > > placement of the structures the same? The alignment being reduced should
> > > > > > > not be a problem, as 128byte alignment is also valid as 64byte
> > > > > > > alignment, after all.
> > > > > >
> > > > > > I'd say yes. Consider the following example:
> > > > > >
> > > > > > ---8<---
> > > > > > #include <stdio.h>
> > > > > > #include <stdlib.h>
> > > > > >
> > > > > > #define ALIGN 64
> > > > > > /* #define ALIGN 128 */
> > > > > >
> > > > > > /* dummy rte_ring struct */
> > > > > > struct rte_ring {
> > > > > > char x[128];
> > > > > > } __attribute__((aligned(ALIGN)));
> > > > > >
> > > > > > struct foo {
> > > > > > struct rte_ring r;
> > > > > > unsigned bar;
> > > > > > };
> > > > > >
> > > > > > int main(void)
> > > > > > {
> > > > > > struct foo array[2];
> > > > > >
> > > > > > printf("sizeof(ring)=%zu diff=%u\n",
> > > > > > sizeof(struct rte_ring),
> > > > > > (unsigned int)((char *)&array[1].r - (char *)array));
> > > > > >
> > > > > > return 0;
> > > > > > }
> > > > > > ---8<---
> > > > > >
> > > > > > The size of rte_ring is always 128.
> > > > > > diff is 192 or 256, depending on the value of ALIGN.
> > > > > >
> > > > > >
> > > > > >
> > > > > > Olivier
> > > >
> > > > About whether it would be an ABI breakage to 17.05 - I think it would...
> > > > Though for me the actual breakage happened in 17.05 when rte_ring
> > > > alignment was increased from 64B to 128B.
> > > > Now we are just restoring it.
> > > >
> > > Yes, ABI change was announced in advance and explicitly broken in 17.05.
> > > There was no announcement of ABI break in 17.08 for rte_ring.
> > >
> > > > >
> > > > > Yes, the diff will change, but that is after a recompile. If we have
> > > > > rte_ring_create function always return a 128-byte aligned structure,
> > > > > will any already-compiled apps fail to work if we also change the alignment
> > > > > of the rte_ring struct in the header?
> > > >
> > > > Why 128B?
> > > > I thought we are discussing making rte_ring 64B aligned again?
> > > >
> > > > Konstantin
> > >
> > > To avoid possibly breaking apps compiled against 17.05 when run against
> > > shared libs for 17.08. Having the extra alignment won't affect 17.08
> > > apps, since they only require 64-byte alignment, but returning only
> > > 64-byte aligned memory for apps which expect 128byte aligned memory may
> > > cause issues.
> > >
> > > Therefore, we should reduce the required alignment to 64B, which should
> > > only affect any apps that do a recompile, and have memory allocation for
> > > rings return 128B aligned addresses to work with both 64B aligned and
> > > 128B aligned ring structures.
> >
> > Ah, I see - you are talking just about rte_ring_create().
> > BTW, are you sure that right now it allocates rings 128B aligned?
> > As I can see it does just:
> > mz = rte_memzone_reserve(mz_name, ring_size, socket_id, mz_flags);
> > which means cache line alignment.
> >
> It doesn't currently allocate with that alignment, which is something we
> need to fix - and what this patch was originally submitted to do. So I
> think this patch should be applied, along with a further patch to reduce
> the alignment going forward to avoid any other problems.
But if we are going to reduce the alignment anyway (patch #2), why do we
need patch #1 at all?
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
2017-06-08 15:35 0% ` Ananyev, Konstantin
@ 2017-06-08 16:03 0% ` Bruce Richardson
2017-06-08 16:12 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2017-06-08 16:03 UTC (permalink / raw)
To: Ananyev, Konstantin; +Cc: Olivier Matz, Verkamp, Daniel, dev
On Thu, Jun 08, 2017 at 04:35:20PM +0100, Ananyev, Konstantin wrote:
>
>
> > -----Original Message-----
> > From: Richardson, Bruce
> > Sent: Thursday, June 8, 2017 4:25 PM
> > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> >
> > On Thu, Jun 08, 2017 at 03:50:34PM +0100, Ananyev, Konstantin wrote:
> > >
> > >
> > > > -----Original Message-----
> > > > From: Richardson, Bruce
> > > > Sent: Thursday, June 8, 2017 3:12 PM
> > > > To: Olivier Matz <olivier.matz@6wind.com>
> > > > Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> > > >
> > > > On Thu, Jun 08, 2017 at 04:05:26PM +0200, Olivier Matz wrote:
> > > > > On Thu, 8 Jun 2017 14:20:52 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> > > > > > On Thu, Jun 08, 2017 at 02:45:40PM +0200, Olivier Matz wrote:
> > > > > > > On Tue, 6 Jun 2017 15:56:28 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> > > > > > > > On Tue, Jun 06, 2017 at 02:19:21PM +0100, Ananyev, Konstantin wrote:
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > -----Original Message-----
> > > > > > > > > > From: Richardson, Bruce
> > > > > > > > > > Sent: Tuesday, June 6, 2017 1:42 PM
> > > > > > > > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > > > > > > > > Cc: Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > > > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> > > > > > > > > >
> > > > > > > > > > On Tue, Jun 06, 2017 at 10:59:59AM +0100, Ananyev, Konstantin wrote:
> > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > The PROD/CONS_ALIGN values on x86-64 are set to 2 cache lines, so members
> > > > > > > > > > > > > of struct rte_ring are 128 byte aligned,
> > > > > > > > > > > > > >and therefore the whole struct needs 128-byte alignment according to the ABI
> > > > > > > > > > > > > so that the 128-byte alignment of the fields can be guaranteed.
> > > > > > > > > > > > >
> > > > > > > > > > > > > Ah ok, missed the fact that rte_ring is 128B aligned these days.
> > > > > > > > > > > > > BTW, I probably missed the initial discussion, but what was the reason for that?
> > > > > > > > > > > > > Konstantin
> > > > > > > > > > > >
> > > > > > > > > > > > I don't know why PROD_ALIGN/CONS_ALIGN use 128 byte alignment; it seems unnecessary if the cache line is only 64
> > bytes.
> > > > An
> > > > > > > > > > alternate
> > > > > > > > > > > > fix would be to just use cache line alignment for these fields (since memzones are already cache line aligned).
> > > > > > > > > > >
> > > > > > > > > > > Yes, had the same thought.
> > > > > > > > > > >
> > > > > > > > > > > > Maybe there is some deeper reason for the >= 128-byte alignment logic in rte_ring.h?
> > > > > > > > > > >
> > > > > > > > > > > Might be, would be good to hear the opinion of the author of that change.
> > > > > > > > > >
> > > > > > > > > > It gives improved performance for core-2-core transfer.
> > > > > > > > >
> > > > > > > > > You mean empty cache-line(s) after prod/cons, correct?
> > > > > > > > > That's ok but why we can't keep them and whole rte_ring aligned on cache-line boundaries?
> > > > > > > > > Something like that:
> > > > > > > > > struct rte_ring {
> > > > > > > > > ...
> > > > > > > > > struct rte_ring_headtail prod __rte_cache_aligned;
> > > > > > > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > > > > > struct rte_ring_headtail cons __rte_cache_aligned;
> > > > > > > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > > > > > };
> > > > > > > > >
> > > > > > > > > Konstantin
> > > > > > > >
> > > > > > > > Sure. That should probably work too.
> > > > > > > >
> > > > > > > > /Bruce
> > > > > > >
> > > > > > > I also agree with Konstantin's proposal. One question though: since it
> > > > > > > changes the alignment constraint of the rte_ring structure, I think it is
> > > > > > > an ABI breakage: a structure including the rte_ring structure inherits
> > > > > > > from this constraint.
> > > > > > >
> > > > > > > How could we handle that, knowing this is probably a rare case?
> > > > > > >
> > > > > > >
> > > > > > Is it an ABI break so long as we keep the resulting size and field
> > > > > > placement of the structures the same? The alignment being reduced should
> > > > > > not be a problem, as 128byte alignment is also valid as 64byte
> > > > > > alignment, after all.
> > > > >
> > > > > I'd say yes. Consider the following example:
> > > > >
> > > > > ---8<---
> > > > > #include <stdio.h>
> > > > > #include <stdlib.h>
> > > > >
> > > > > #define ALIGN 64
> > > > > /* #define ALIGN 128 */
> > > > >
> > > > > /* dummy rte_ring struct */
> > > > > struct rte_ring {
> > > > > char x[128];
> > > > > } __attribute__((aligned(ALIGN)));
> > > > >
> > > > > struct foo {
> > > > > struct rte_ring r;
> > > > > unsigned bar;
> > > > > };
> > > > >
> > > > > int main(void)
> > > > > {
> > > > > struct foo array[2];
> > > > >
> > > > > printf("sizeof(ring)=%zu diff=%u\n",
> > > > > sizeof(struct rte_ring),
> > > > > (unsigned int)((char *)&array[1].r - (char *)array));
> > > > >
> > > > > return 0;
> > > > > }
> > > > > ---8<---
> > > > >
> > > > > The size of rte_ring is always 128.
> > > > > diff is 192 or 256, depending on the value of ALIGN.
> > > > >
> > > > >
> > > > >
> > > > > Olivier
> > >
> > > About whether it would be an ABI breakage to 17.05 - I think it would...
> > > Though for me the actual breakage happened in 17.05 when rte_ring
> > > alignment was increased from 64B to 128B.
> > > Now we are just restoring it.
> > >
> > Yes, ABI change was announced in advance and explicitly broken in 17.05.
> > There was no announcement of ABI break in 17.08 for rte_ring.
> >
> > > >
> > > > Yes, the diff will change, but that is after a recompile. If we have
> > > > rte_ring_create function always return a 128-byte aligned structure,
> > > > will any already-compiled apps fail to work if we also change the alignment
> > > > of the rte_ring struct in the header?
> > >
> > > Why 128B?
> > > I thought we are discussing making rte_ring 64B aligned again?
> > >
> > > Konstantin
> >
> > To avoid possibly breaking apps compiled against 17.05 when run against
> > shared libs for 17.08. Having the extra alignment won't affect 17.08
> > apps, since they only require 64-byte alignment, but returning only
> > 64-byte aligned memory for apps which expect 128byte aligned memory may
> > cause issues.
> >
> > Therefore, we should reduce the required alignment to 64B, which should
> > only affect any apps that do a recompile, and have memory allocation for
> > rings return 128B aligned addresses to work with both 64B aligned and
> > 128B aligned ring structures.
>
> Ah, I see - you are talking just about rte_ring_create().
> BTW, are you sure that right now it allocates rings 128B aligned?
> As I can see it does just:
> mz = rte_memzone_reserve(mz_name, ring_size, socket_id, mz_flags);
> which means cache line alignment.
>
It doesn't currently allocate with that alignment, which is something we
need to fix - and what this patch was originally submitted to do. So I
think this patch should be applied, along with a further patch to reduce
the alignment going forward to avoid any other problems.
/Bruce
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
2017-06-08 15:24 4% ` Bruce Richardson
@ 2017-06-08 15:35 0% ` Ananyev, Konstantin
2017-06-08 16:03 0% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2017-06-08 15:35 UTC (permalink / raw)
To: Richardson, Bruce; +Cc: Olivier Matz, Verkamp, Daniel, dev
> -----Original Message-----
> From: Richardson, Bruce
> Sent: Thursday, June 8, 2017 4:25 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Cc: Olivier Matz <olivier.matz@6wind.com>; Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
>
> On Thu, Jun 08, 2017 at 03:50:34PM +0100, Ananyev, Konstantin wrote:
> >
> >
> > > -----Original Message-----
> > > From: Richardson, Bruce
> > > Sent: Thursday, June 8, 2017 3:12 PM
> > > To: Olivier Matz <olivier.matz@6wind.com>
> > > Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> > >
> > > On Thu, Jun 08, 2017 at 04:05:26PM +0200, Olivier Matz wrote:
> > > > On Thu, 8 Jun 2017 14:20:52 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> > > > > On Thu, Jun 08, 2017 at 02:45:40PM +0200, Olivier Matz wrote:
> > > > > > On Tue, 6 Jun 2017 15:56:28 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> > > > > > > On Tue, Jun 06, 2017 at 02:19:21PM +0100, Ananyev, Konstantin wrote:
> > > > > > > >
> > > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: Richardson, Bruce
> > > > > > > > > Sent: Tuesday, June 6, 2017 1:42 PM
> > > > > > > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > > > > > > > Cc: Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> > > > > > > > >
> > > > > > > > > On Tue, Jun 06, 2017 at 10:59:59AM +0100, Ananyev, Konstantin wrote:
> > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > > The PROD/CONS_ALIGN values on x86-64 are set to 2 cache lines, so members
> > > > > > > > > > > > of struct rte_ring are 128 byte aligned,
> > > > > > > > > > > > >and therefore the whole struct needs 128-byte alignment according to the ABI
> > > > > > > > > > > > so that the 128-byte alignment of the fields can be guaranteed.
> > > > > > > > > > > >
> > > > > > > > > > > > Ah ok, missed the fact that rte_ring is 128B aligned these days.
> > > > > > > > > > > > BTW, I probably missed the initial discussion, but what was the reason for that?
> > > > > > > > > > > > Konstantin
> > > > > > > > > > >
> > > > > > > > > > > I don't know why PROD_ALIGN/CONS_ALIGN use 128 byte alignment; it seems unnecessary if the cache line is only 64
> bytes.
> > > An
> > > > > > > > > alternate
> > > > > > > > > > > fix would be to just use cache line alignment for these fields (since memzones are already cache line aligned).
> > > > > > > > > >
> > > > > > > > > > Yes, had the same thought.
> > > > > > > > > >
> > > > > > > > > > > Maybe there is some deeper reason for the >= 128-byte alignment logic in rte_ring.h?
> > > > > > > > > >
> > > > > > > > > > Might be, would be good to hear the opinion of the author of that change.
> > > > > > > > >
> > > > > > > > > It gives improved performance for core-2-core transfer.
> > > > > > > >
> > > > > > > > You mean empty cache-line(s) after prod/cons, correct?
> > > > > > > > That's ok but why we can't keep them and whole rte_ring aligned on cache-line boundaries?
> > > > > > > > Something like that:
> > > > > > > > struct rte_ring {
> > > > > > > > ...
> > > > > > > > struct rte_ring_headtail prod __rte_cache_aligned;
> > > > > > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > > > > struct rte_ring_headtail cons __rte_cache_aligned;
> > > > > > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > > > > };
> > > > > > > >
> > > > > > > > Konstantin
> > > > > > >
> > > > > > > Sure. That should probably work too.
> > > > > > >
> > > > > > > /Bruce
> > > > > >
> > > > > > I also agree with Konstantin's proposal. One question though: since it
> > > > > > changes the alignment constraint of the rte_ring structure, I think it is
> > > > > > an ABI breakage: a structure including the rte_ring structure inherits
> > > > > > from this constraint.
> > > > > >
> > > > > > How could we handle that, knowing this is probably a rare case?
> > > > > >
> > > > > >
> > > > > Is it an ABI break so long as we keep the resulting size and field
> > > > > placement of the structures the same? The alignment being reduced should
> > > > > not be a problem, as 128byte alignment is also valid as 64byte
> > > > > alignment, after all.
> > > >
> > > > I'd say yes. Consider the following example:
> > > >
> > > > ---8<---
> > > > #include <stdio.h>
> > > > #include <stdlib.h>
> > > >
> > > > #define ALIGN 64
> > > > /* #define ALIGN 128 */
> > > >
> > > > /* dummy rte_ring struct */
> > > > struct rte_ring {
> > > > char x[128];
> > > > } __attribute__((aligned(ALIGN)));
> > > >
> > > > struct foo {
> > > > struct rte_ring r;
> > > > unsigned bar;
> > > > };
> > > >
> > > > int main(void)
> > > > {
> > > > struct foo array[2];
> > > >
> > > > printf("sizeof(ring)=%zu diff=%u\n",
> > > > sizeof(struct rte_ring),
> > > > (unsigned int)((char *)&array[1].r - (char *)array));
> > > >
> > > > return 0;
> > > > }
> > > > ---8<---
> > > >
> > > > The size of rte_ring is always 128.
> > > > diff is 192 or 256, depending on the value of ALIGN.
> > > >
> > > >
> > > >
> > > > Olivier
> >
> > About whether it would be an ABI breakage to 17.05 - I think it would...
> > Though for me the actual breakage happened in 17.05 when rte_ring
> > alignment was increased from 64B to 128B.
> > Now we are just restoring it.
> >
> Yes, ABI change was announced in advance and explicitly broken in 17.05.
> There was no announcement of ABI break in 17.08 for rte_ring.
>
> > >
> > > Yes, the diff will change, but that is after a recompile. If we have
> > > rte_ring_create function always return a 128-byte aligned structure,
> > > will any already-compiled apps fail to work if we also change the alignment
> > > of the rte_ring struct in the header?
> >
> > Why 128B?
> > I thought we are discussing making rte_ring 64B aligned again?
> >
> > Konstantin
>
> To avoid possibly breaking apps compiled against 17.05 when run against
> shared libs for 17.08. Having the extra alignment won't affect 17.08
> apps, since they only require 64-byte alignment, but returning only
> 64-byte aligned memory for apps which expect 128byte aligned memory may
> cause issues.
>
> Therefore, we should reduce the required alignment to 64B, which should
> only affect any apps that do a recompile, and have memory allocation for
> rings return 128B aligned addresses to work with both 64B aligned and
> 128B aligned ring structures.
Ah, I see - you are talking just about rte_ring_create().
BTW, are you sure that right now it allocates rings 128B aligned?
As I can see it does just:
mz = rte_memzone_reserve(mz_name, ring_size, socket_id, mz_flags);
which means cache line alignment.
Konstantin
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
2017-06-08 14:50 3% ` Ananyev, Konstantin
@ 2017-06-08 15:24 4% ` Bruce Richardson
2017-06-08 15:35 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2017-06-08 15:24 UTC (permalink / raw)
To: Ananyev, Konstantin; +Cc: Olivier Matz, Verkamp, Daniel, dev
On Thu, Jun 08, 2017 at 03:50:34PM +0100, Ananyev, Konstantin wrote:
>
>
> > -----Original Message-----
> > From: Richardson, Bruce
> > Sent: Thursday, June 8, 2017 3:12 PM
> > To: Olivier Matz <olivier.matz@6wind.com>
> > Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> >
> > On Thu, Jun 08, 2017 at 04:05:26PM +0200, Olivier Matz wrote:
> > > On Thu, 8 Jun 2017 14:20:52 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> > > > On Thu, Jun 08, 2017 at 02:45:40PM +0200, Olivier Matz wrote:
> > > > > On Tue, 6 Jun 2017 15:56:28 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> > > > > > On Tue, Jun 06, 2017 at 02:19:21PM +0100, Ananyev, Konstantin wrote:
> > > > > > >
> > > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: Richardson, Bruce
> > > > > > > > Sent: Tuesday, June 6, 2017 1:42 PM
> > > > > > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > > > > > > Cc: Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> > > > > > > >
> > > > > > > > On Tue, Jun 06, 2017 at 10:59:59AM +0100, Ananyev, Konstantin wrote:
> > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > The PROD/CONS_ALIGN values on x86-64 are set to 2 cache lines, so members
> > > > > > > > > > > of struct rte_ring are 128 byte aligned,
> > > > > > > > > > > >and therefore the whole struct needs 128-byte alignment according to the ABI
> > > > > > > > > > > so that the 128-byte alignment of the fields can be guaranteed.
> > > > > > > > > > >
> > > > > > > > > > > Ah ok, missed the fact that rte_ring is 128B aligned these days.
> > > > > > > > > > > BTW, I probably missed the initial discussion, but what was the reason for that?
> > > > > > > > > > > Konstantin
> > > > > > > > > >
> > > > > > > > > > I don't know why PROD_ALIGN/CONS_ALIGN use 128 byte alignment; it seems unnecessary if the cache line is only 64 bytes.
> > An
> > > > > > > > alternate
> > > > > > > > > > fix would be to just use cache line alignment for these fields (since memzones are already cache line aligned).
> > > > > > > > >
> > > > > > > > > Yes, had the same thought.
> > > > > > > > >
> > > > > > > > > > Maybe there is some deeper reason for the >= 128-byte alignment logic in rte_ring.h?
> > > > > > > > >
> > > > > > > > > Might be, would be good to hear the opinion of the author of that change.
> > > > > > > >
> > > > > > > > It gives improved performance for core-2-core transfer.
> > > > > > >
> > > > > > > You mean empty cache-line(s) after prod/cons, correct?
> > > > > > > That's ok but why we can't keep them and whole rte_ring aligned on cache-line boundaries?
> > > > > > > Something like that:
> > > > > > > struct rte_ring {
> > > > > > > ...
> > > > > > > struct rte_ring_headtail prod __rte_cache_aligned;
> > > > > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > > > struct rte_ring_headtail cons __rte_cache_aligned;
> > > > > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > > > };
> > > > > > >
> > > > > > > Konstantin
> > > > > >
> > > > > > Sure. That should probably work too.
> > > > > >
> > > > > > /Bruce
> > > > >
> > > > > I also agree with Konstantin's proposal. One question though: since it
> > > > > changes the alignment constraint of the rte_ring structure, I think it is
> > > > > an ABI breakage: a structure including the rte_ring structure inherits
> > > > > from this constraint.
> > > > >
> > > > > How could we handle that, knowing this is probably a rare case?
> > > > >
> > > > >
> > > > Is it an ABI break so long as we keep the resulting size and field
> > > > placement of the structures the same? The alignment being reduced should
> > > > not be a problem, as 128byte alignment is also valid as 64byte
> > > > alignment, after all.
> > >
> > > I'd say yes. Consider the following example:
> > >
> > > ---8<---
> > > #include <stdio.h>
> > > #include <stdlib.h>
> > >
> > > #define ALIGN 64
> > > /* #define ALIGN 128 */
> > >
> > > /* dummy rte_ring struct */
> > > struct rte_ring {
> > > char x[128];
> > > } __attribute__((aligned(ALIGN)));
> > >
> > > struct foo {
> > > struct rte_ring r;
> > > unsigned bar;
> > > };
> > >
> > > int main(void)
> > > {
> > > struct foo array[2];
> > >
> > > printf("sizeof(ring)=%zu diff=%u\n",
> > > sizeof(struct rte_ring),
> > > (unsigned int)((char *)&array[1].r - (char *)array));
> > >
> > > return 0;
> > > }
> > > ---8<---
> > >
> > > The size of rte_ring is always 128.
> > > diff is 192 or 256, depending on the value of ALIGN.
> > >
> > >
> > >
> > > Olivier
>
> About whether it would be an ABI breakage to 17.05 - I think it would...
> Though for me the actual breakage happened in 17.05 when rte_ring
> alignment was increased from 64B to 128B.
> Now we are just restoring it.
>
Yes, ABI change was announced in advance and explicitly broken in 17.05.
There was no announcement of ABI break in 17.08 for rte_ring.
> >
> > Yes, the diff will change, but that is after a recompile. If we have
> > rte_ring_create function always return a 128-byte aligned structure,
> > will any already-compiled apps fail to work if we also change the alignment
> > of the rte_ring struct in the header?
>
> Why 128B?
> I thought we are discussing making rte_ring 64B aligned again?
>
> Konstantin
To avoid possibly breaking apps compiled against 17.05 when run against
shared libs for 17.08. Having the extra alignment won't affect 17.08
apps, since they only require 64-byte alignment, but returning only
64-byte aligned memory for apps which expect 128byte aligned memory may
cause issues.
Therefore, we should reduce the required alignment to 64B, which should
only affect any apps that do a recompile, and have memory allocation for
rings return 128B aligned addresses to work with both 64B aligned and
128B aligned ring structures.
/Bruce
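To make the compatibility scheme concrete, here is a minimal, hypothetical C
sketch (stand-in names, not the actual rte_ring definition): the structure
itself only demands 64B alignment and uses explicit empty cache lines, while
allocation keeps handing out 128B-aligned addresses for binaries built
against the 17.05 layout.

---8<---
#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

#define CACHE_LINE 64

struct headtail {                  /* stand-in for rte_ring_headtail */
	volatile unsigned int head;
	volatile unsigned int tail;
} __attribute__((aligned(CACHE_LINE)));

struct ring {                      /* stand-in for rte_ring */
	struct headtail prod;
	char pad_prod[CACHE_LINE]; /* empty cache line after prod */
	struct headtail cons;
	char pad_cons[CACHE_LINE]; /* empty cache line after cons */
} __attribute__((aligned(CACHE_LINE)));

int main(void)
{
	/* allocation still returns 128B-aligned memory for old binaries;
	 * sizeof(struct ring) == 256 is a multiple of 128 as C11 requires */
	struct ring *r = aligned_alloc(2 * CACHE_LINE, sizeof(*r));

	/* prints alignof=64, prod@0, cons@128: same offsets as before */
	printf("alignof=%zu prod@%zu cons@%zu addr=%p\n",
	       __alignof__(struct ring),
	       offsetof(struct ring, prod), offsetof(struct ring, cons),
	       (void *)r);
	free(r);
	return 0;
}
---8<---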
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
2017-06-08 14:11 0% ` Bruce Richardson
@ 2017-06-08 14:50 3% ` Ananyev, Konstantin
2017-06-08 15:24 4% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2017-06-08 14:50 UTC (permalink / raw)
To: Richardson, Bruce, Olivier Matz; +Cc: Verkamp, Daniel, dev
> -----Original Message-----
> From: Richardson, Bruce
> Sent: Thursday, June 8, 2017 3:12 PM
> To: Olivier Matz <olivier.matz@6wind.com>
> Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
>
> On Thu, Jun 08, 2017 at 04:05:26PM +0200, Olivier Matz wrote:
> > On Thu, 8 Jun 2017 14:20:52 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> > > On Thu, Jun 08, 2017 at 02:45:40PM +0200, Olivier Matz wrote:
> > > > On Tue, 6 Jun 2017 15:56:28 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> > > > > On Tue, Jun 06, 2017 at 02:19:21PM +0100, Ananyev, Konstantin wrote:
> > > > > >
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Richardson, Bruce
> > > > > > > Sent: Tuesday, June 6, 2017 1:42 PM
> > > > > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > > > > > Cc: Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> > > > > > >
> > > > > > > On Tue, Jun 06, 2017 at 10:59:59AM +0100, Ananyev, Konstantin wrote:
> > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > The PROD/CONS_ALIGN values on x86-64 are set to 2 cache lines, so members
> > > > > > > > > > > of struct rte_ring are 128 byte aligned,
> > > > > > > > > > > and therefore the whole struct needs 128-byte alignment according to the ABI
> > > > > > > > > > > so that the 128-byte alignment of the fields can be guaranteed.
> > > > > > > > > >
> > > > > > > > > > Ah ok, missed the fact that rte_ring is 128B aligned these days.
> > > > > > > > > > BTW, I probably missed the initial discussion, but what was the reason for that?
> > > > > > > > > > Konstantin
> > > > > > > > >
> > > > > > > > > I don't know why PROD_ALIGN/CONS_ALIGN use 128 byte alignment; it seems unnecessary if the cache line is only 64 bytes. An alternate
> > > > > > > > > fix would be to just use cache line alignment for these fields (since memzones are already cache line aligned).
> > > > > > > >
> > > > > > > > Yes, had the same thought.
> > > > > > > >
> > > > > > > > > Maybe there is some deeper reason for the >= 128-byte alignment logic in rte_ring.h?
> > > > > > > >
> > > > > > > > Might be; it would be good to hear the opinion of the author of that change.
> > > > > > >
> > > > > > > It gives improved performance for core-2-core transfer.
> > > > > >
> > > > > > You mean empty cache-line(s) after prod/cons, correct?
> > > > > > That's ok, but why can't we keep them and the whole rte_ring aligned on cache-line boundaries?
> > > > > > Something like this:
> > > > > > struct rte_ring {
> > > > > > ...
> > > > > > struct rte_ring_headtail prod __rte_cache_aligned;
> > > > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > > struct rte_ring_headtail cons __rte_cache_aligned;
> > > > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > > };
> > > > > >
> > > > > > Konstantin
> > > > >
> > > > > Sure. That should probably work too.
> > > > >
> > > > > /Bruce
> > > >
> > > > I also agree with Konstantin's proposal. One question though: since it
> > > > changes the alignment constraint of the rte_ring structure, I think it is
> > > > an ABI breakage: a structure including the rte_ring structure inherits
> > > > this constraint.
> > > >
> > > > How could we handle that, knowing this is probably a rare case?
> > > >
> > > >
> > > Is it an ABI break so long as we keep the resulting size and field
> > > placement of the structures the same? The alignment being reduced should
> > > not be a problem, as 128-byte alignment also satisfies 64-byte
> > > alignment, after all.
> >
> > I'd say yes. Consider the following example:
> >
> > ---8<---
> > #include <stdio.h>
> > #include <stdlib.h>
> >
> > #define ALIGN 64
> > /* #define ALIGN 128 */
> >
> > /* dummy rte_ring struct */
> > struct rte_ring {
> > char x[128];
> > } __attribute__((aligned(ALIGN)));
> >
> > struct foo {
> > struct rte_ring r;
> > unsigned bar;
> > };
> >
> > int main(void)
> > {
> > struct foo array[2];
> >
> > printf("sizeof(ring)=%zu diff=%u\n",
> > sizeof(struct rte_ring),
> > (unsigned int)((char *)&array[1].r - (char *)array));
> >
> > return 0;
> > }
> > ---8<---
> >
> > The size of rte_ring is always 128.
> > diff is 192 or 256, depending on the value of ALIGN.
> >
> >
> >
> > Olivier
As to whether it would be an ABI breakage relative to 17.05 - I think it would...
Though for me the actual breakage happened in 17.05, when the rte_ring
alignment was increased from 64B to 128B.
Now we are just restoring it.
>
> Yes, the diff will change, but that is after a recompile. If we have
> the rte_ring_create function always return a 128-byte aligned structure,
> will any already-compiled apps fail to work if we also change the alignment
> of the rte_ring struct in the header?
Why 128B?
I thought we are discussing making rte_ring 64B aligned again?
Konstantin
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
2017-06-08 14:05 0% ` Olivier Matz
@ 2017-06-08 14:11 0% ` Bruce Richardson
2017-06-08 14:50 3% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2017-06-08 14:11 UTC (permalink / raw)
To: Olivier Matz; +Cc: Ananyev, Konstantin, Verkamp, Daniel, dev
On Thu, Jun 08, 2017 at 04:05:26PM +0200, Olivier Matz wrote:
> On Thu, 8 Jun 2017 14:20:52 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> > On Thu, Jun 08, 2017 at 02:45:40PM +0200, Olivier Matz wrote:
> > > On Tue, 6 Jun 2017 15:56:28 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> > > > On Tue, Jun 06, 2017 at 02:19:21PM +0100, Ananyev, Konstantin wrote:
> > > > >
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Richardson, Bruce
> > > > > > Sent: Tuesday, June 6, 2017 1:42 PM
> > > > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > > > > Cc: Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> > > > > >
> > > > > > On Tue, Jun 06, 2017 at 10:59:59AM +0100, Ananyev, Konstantin wrote:
> > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > The PROD/CONS_ALIGN values on x86-64 are set to 2 cache lines, so members
> > > > > > > > > > of struct rte_ring are 128 byte aligned,
> > > > > > > > > > and therefore the whole struct needs 128-byte alignment according to the ABI
> > > > > > > > > > so that the 128-byte alignment of the fields can be guaranteed.
> > > > > > > > >
> > > > > > > > > Ah ok, missed the fact that rte_ring is 128B aligned these days.
> > > > > > > > > BTW, I probably missed the initial discussion, but what was the reason for that?
> > > > > > > > > Konstantin
> > > > > > > >
> > > > > > > > I don't know why PROD_ALIGN/CONS_ALIGN use 128 byte alignment; it seems unnecessary if the cache line is only 64 bytes. An alternate
> > > > > > > > fix would be to just use cache line alignment for these fields (since memzones are already cache line aligned).
> > > > > > >
> > > > > > > Yes, had the same thought.
> > > > > > >
> > > > > > > > Maybe there is some deeper reason for the >= 128-byte alignment logic in rte_ring.h?
> > > > > > >
> > > > > > > Might be; it would be good to hear the opinion of the author of that change.
> > > > > >
> > > > > > It gives improved performance for core-2-core transfer.
> > > > >
> > > > > You mean empty cache-line(s) after prod/cons, correct?
> > > > > That's ok, but why can't we keep them and the whole rte_ring aligned on cache-line boundaries?
> > > > > Something like this:
> > > > > struct rte_ring {
> > > > > ...
> > > > > struct rte_ring_headtail prod __rte_cache_aligned;
> > > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > struct rte_ring_headtail cons __rte_cache_aligned;
> > > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > > };
> > > > >
> > > > > Konstantin
> > > >
> > > > Sure. That should probably work too.
> > > >
> > > > /Bruce
> > >
> > > I also agree with Konstantin's proposal. One question though: since it
> > > changes the alignment constraint of the rte_ring structure, I think it is
> > > an ABI breakage: a structure including the rte_ring structure inherits
> > > this constraint.
> > >
> > > How could we handle that, knowing this is probably a rare case?
> > >
> > >
> > Is it an ABI break so long as we keep the resulting size and field
> > placement of the structures the same? The alignment being reduced should
> > not be a problem, as 128-byte alignment also satisfies 64-byte
> > alignment, after all.
>
> I'd say yes. Consider the following example:
>
> ---8<---
> #include <stdio.h>
> #include <stdlib.h>
>
> #define ALIGN 64
> /* #define ALIGN 128 */
>
> /* dummy rte_ring struct */
> struct rte_ring {
> char x[128];
> } __attribute__((aligned(ALIGN)));
>
> struct foo {
> struct rte_ring r;
> unsigned bar;
> };
>
> int main(void)
> {
> struct foo array[2];
>
> printf("sizeof(ring)=%zu diff=%u\n",
> sizeof(struct rte_ring),
> (unsigned int)((char *)&array[1].r - (char *)array));
>
> return 0;
> }
> ---8<---
>
> The size of rte_ring is always 128.
> diff is 192 or 256, depending on the value of ALIGN.
>
>
>
> Olivier
Yes, the diff will change, but that is after a recompile. If we have
the rte_ring_create function always return a 128-byte aligned structure,
will any already-compiled apps fail to work if we also change the alignment
of the rte_ring struct in the header?
/Bruce
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
2017-06-08 13:20 3% ` Bruce Richardson
@ 2017-06-08 14:05 0% ` Olivier Matz
2017-06-08 14:11 0% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Olivier Matz @ 2017-06-08 14:05 UTC (permalink / raw)
To: Bruce Richardson; +Cc: Ananyev, Konstantin, Verkamp, Daniel, dev
On Thu, 8 Jun 2017 14:20:52 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> On Thu, Jun 08, 2017 at 02:45:40PM +0200, Olivier Matz wrote:
> > On Tue, 6 Jun 2017 15:56:28 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> > > On Tue, Jun 06, 2017 at 02:19:21PM +0100, Ananyev, Konstantin wrote:
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Richardson, Bruce
> > > > > Sent: Tuesday, June 6, 2017 1:42 PM
> > > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > > > Cc: Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> > > > >
> > > > > On Tue, Jun 06, 2017 at 10:59:59AM +0100, Ananyev, Konstantin wrote:
> > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > >
> > > > > > > > > The PROD/CONS_ALIGN values on x86-64 are set to 2 cache lines, so members
> > > > > > > > > of struct rte_ring are 128 byte aligned,
> > > > > > > > > and therefore the whole struct needs 128-byte alignment according to the ABI
> > > > > > > > > so that the 128-byte alignment of the fields can be guaranteed.
> > > > > > > >
> > > > > > > > Ah ok, missed the fact that rte_ring is 128B aligned these days.
> > > > > > > > BTW, I probably missed the initial discussion, but what was the reason for that?
> > > > > > > > Konstantin
> > > > > > >
> > > > > > > I don't know why PROD_ALIGN/CONS_ALIGN use 128 byte alignment; it seems unnecessary if the cache line is only 64 bytes. An alternate
> > > > > > > fix would be to just use cache line alignment for these fields (since memzones are already cache line aligned).
> > > > > >
> > > > > > Yes, had the same thought.
> > > > > >
> > > > > > > Maybe there is some deeper reason for the >= 128-byte alignment logic in rte_ring.h?
> > > > > >
> > > > > > Might be; it would be good to hear the opinion of the author of that change.
> > > > >
> > > > > It gives improved performance for core-2-core transfer.
> > > >
> > > > You mean empty cache-line(s) after prod/cons, correct?
> > > > That's ok, but why can't we keep them and the whole rte_ring aligned on cache-line boundaries?
> > > > Something like this:
> > > > struct rte_ring {
> > > > ...
> > > > struct rte_ring_headtail prod __rte_cache_aligned;
> > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > struct rte_ring_headtail cons __rte_cache_aligned;
> > > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > > };
> > > >
> > > > Konstantin
> > >
> > > Sure. That should probably work too.
> > >
> > > /Bruce
> >
> > I also agree with Konstantin's proposal. One question though: since it
> > changes the alignment constraint of the rte_ring structure, I think it is
> > an ABI breakage: a structure including the rte_ring structure inherits
> > this constraint.
> >
> > How could we handle that, knowing this is probably a rare case?
> >
> >
> Is it an ABI break so long as we keep the resulting size and field
> placement of the structures the same? The alignment being reduced should
> not be a problem, as 128-byte alignment also satisfies 64-byte
> alignment, after all.
I'd say yes. Consider the following example:
---8<---
#include <stdio.h>
#include <stdlib.h>
#define ALIGN 64
/* #define ALIGN 128 */
/* dummy rte_ring struct */
struct rte_ring {
char x[128];
} __attribute__((aligned(ALIGN)));
struct foo {
struct rte_ring r;
unsigned bar;
};
int main(void)
{
struct foo array[2];
printf("sizeof(ring)=%zu diff=%u\n",
sizeof(struct rte_ring),
(unsigned int)((char *)&array[1].r - (char *)array));
return 0;
}
---8<---
The size of rte_ring is always 128.
diff is 192 or 256, depending on the value of ALIGN.
Olivier
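A small variant of the example makes the inherited constraint visible
directly in the embedding struct (same dummy types as above, nothing
DPDK-specific):

---8<---
#include <stdio.h>

#define ALIGN 64   /* switch to 128 to mimic the 17.05 layout */

/* same dummy struct as above */
struct rte_ring {
	char x[128];
} __attribute__((aligned(ALIGN)));

struct foo {
	struct rte_ring r;
	unsigned bar;
};

int main(void)
{
	/* ALIGN=64 prints sizeof(foo)=192; ALIGN=128 prints 256:
	 * foo's padding, and hence its ABI, follows rte_ring's alignment
	 * even though sizeof(struct rte_ring) stays 128 in both cases. */
	printf("alignof(ring)=%zu sizeof(foo)=%zu\n",
	       __alignof__(struct rte_ring), sizeof(struct foo));
	return 0;
}
---8<---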
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
2017-06-08 12:45 3% ` Olivier Matz
@ 2017-06-08 13:20 3% ` Bruce Richardson
2017-06-08 14:05 0% ` Olivier Matz
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2017-06-08 13:20 UTC (permalink / raw)
To: Olivier Matz; +Cc: Ananyev, Konstantin, Verkamp, Daniel, dev
On Thu, Jun 08, 2017 at 02:45:40PM +0200, Olivier Matz wrote:
> On Tue, 6 Jun 2017 15:56:28 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> > On Tue, Jun 06, 2017 at 02:19:21PM +0100, Ananyev, Konstantin wrote:
> > >
> > >
> > > > -----Original Message-----
> > > > From: Richardson, Bruce
> > > > Sent: Tuesday, June 6, 2017 1:42 PM
> > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > > Cc: Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> > > >
> > > > On Tue, Jun 06, 2017 at 10:59:59AM +0100, Ananyev, Konstantin wrote:
> > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > The PROD/CONS_ALIGN values on x86-64 are set to 2 cache lines, so members
> > > > > > > > of struct rte_ring are 128 byte aligned,
> > > > > > > > and therefore the whole struct needs 128-byte alignment according to the ABI
> > > > > > > > so that the 128-byte alignment of the fields can be guaranteed.
> > > > > > >
> > > > > > > Ah ok, missed the fact that rte_ring is 128B aligned these days.
> > > > > > > BTW, I probably missed the initial discussion, but what was the reason for that?
> > > > > > > Konstantin
> > > > > >
> > > > > > I don't know why PROD_ALIGN/CONS_ALIGN use 128 byte alignment; it seems unnecessary if the cache line is only 64 bytes. An alternate
> > > > > > fix would be to just use cache line alignment for these fields (since memzones are already cache line aligned).
> > > > >
> > > > > Yes, had the same thought.
> > > > >
> > > > > > Maybe there is some deeper reason for the >= 128-byte alignment logic in rte_ring.h?
> > > > >
> > > > > Might be; it would be good to hear the opinion of the author of that change.
> > > >
> > > > It gives improved performance for core-2-core transfer.
> > >
> > > You mean empty cache-line(s) after prod/cons, correct?
> > > That's ok, but why can't we keep them and the whole rte_ring aligned on cache-line boundaries?
> > > Something like this:
> > > struct rte_ring {
> > > ...
> > > struct rte_ring_headtail prod __rte_cache_aligned;
> > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > struct rte_ring_headtail cons __rte_cache_aligned;
> > > EMPTY_CACHE_LINE __rte_cache_aligned;
> > > };
> > >
> > > Konstantin
> >
> > Sure. That should probably work too.
> >
> > /Bruce
>
> I also agree with Konstantin's proposal. One question though: since it
> changes the alignment constraint of the rte_ring structure, I think it is
> an ABI breakage: a structure including the rte_ring structure inherits
> this constraint.
>
> How could we handle that, knowing this is probably a rare case?
>
>
Is it an ABI break so long as we keep the resulting size and field
placement of the structures the same? The alignment being reduced should
not be a problem, as 128-byte alignment also satisfies 64-byte
alignment, after all.
/Bruce
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
2017-06-06 14:56 0% ` Bruce Richardson
@ 2017-06-08 12:45 3% ` Olivier Matz
2017-06-08 13:20 3% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Olivier Matz @ 2017-06-08 12:45 UTC (permalink / raw)
To: Bruce Richardson; +Cc: Ananyev, Konstantin, Verkamp, Daniel, dev
On Tue, 6 Jun 2017 15:56:28 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> On Tue, Jun 06, 2017 at 02:19:21PM +0100, Ananyev, Konstantin wrote:
> >
> >
> > > -----Original Message-----
> > > From: Richardson, Bruce
> > > Sent: Tuesday, June 6, 2017 1:42 PM
> > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > > Cc: Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> > >
> > > On Tue, Jun 06, 2017 at 10:59:59AM +0100, Ananyev, Konstantin wrote:
> > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > >
> > > > > > > The PROD/CONS_ALIGN values on x86-64 are set to 2 cache lines, so members
> > > > > > > of struct rte_ring are 128 byte aligned,
> > > > > > > and therefore the whole struct needs 128-byte alignment according to the ABI
> > > > > > > so that the 128-byte alignment of the fields can be guaranteed.
> > > > > >
> > > > > > Ah ok, missed the fact that rte_ring is 128B aligned these days.
> > > > > > BTW, I probably missed the initial discussion, but what was the reason for that?
> > > > > > Konstantin
> > > > >
> > > > > I don't know why PROD_ALIGN/CONS_ALIGN use 128 byte alignment; it seems unnecessary if the cache line is only 64 bytes. An alternate
> > > > > fix would be to just use cache line alignment for these fields (since memzones are already cache line aligned).
> > > >
> > > > Yes, had the same thought.
> > > >
> > > > > Maybe there is some deeper reason for the >= 128-byte alignment logic in rte_ring.h?
> > > >
> > > > Might be; it would be good to hear the opinion of the author of that change.
> > >
> > > It gives improved performance for core-2-core transfer.
> >
> > You mean empty cache-line(s) after prod/cons, correct?
> > That's ok, but why can't we keep them and the whole rte_ring aligned on cache-line boundaries?
> > Something like this:
> > struct rte_ring {
> > ...
> > struct rte_ring_headtail prod __rte_cache_aligned;
> > EMPTY_CACHE_LINE __rte_cache_aligned;
> > struct rte_ring_headtail cons __rte_cache_aligned;
> > EMPTY_CACHE_LINE __rte_cache_aligned;
> > };
> >
> > Konstantin
>
> Sure. That should probably work too.
>
> /Bruce
I also agree with Konstantin's proposal. One question though: since it
changes the alignment constraint of the rte_ring structure, I think it is
an ABI breakage: a structure including the rte_ring structure inherits
this constraint.
How could we handle that, knowing this is probably a rare case?
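For reference, here is a compilable sketch of how the EMPTY_CACHE_LINE
placeholder from the proposal could be spelled; the macro name and the
simplified rte_ring_headtail below are illustrative only, not the real
definitions.

---8<---
#define RTE_CL_SIZE 64
#define __cl_aligned __attribute__((aligned(RTE_CL_SIZE)))
#define EMPTY_CACHE_LINE(name) char name[RTE_CL_SIZE] __cl_aligned

/* simplified stand-in */
struct rte_ring_headtail {
	volatile unsigned int head;
	volatile unsigned int tail;
};

struct rte_ring {
	/* ... */
	struct rte_ring_headtail prod __cl_aligned;
	EMPTY_CACHE_LINE(pad_prod);
	struct rte_ring_headtail cons __cl_aligned;
	EMPTY_CACHE_LINE(pad_cons);
};
/* the whole struct is now only 64B-aligned, yet prod and cons each keep
 * a free cache line behind them, preserving the core-2-core benefit */
---8<---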
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v2 1/2] mbuf: introduce new Tx offload flag for MPLS-in-UDP
@ 2017-06-08 12:25 3% ` Olivier Matz
2017-06-08 21:46 3% ` Patil, Harish
0 siblings, 1 reply; 200+ results
From: Olivier Matz @ 2017-06-08 12:25 UTC (permalink / raw)
To: Rasesh Mody; +Cc: dev, Harish Patil, Dept-EngDPDKDev
Hi Rasesh,
On Wed, 7 Jun 2017 00:43:48 -0700, Rasesh Mody <rasesh.mody@cavium.com> wrote:
> From: Harish Patil <harish.patil@cavium.com>
>
> Some PMDs need to know the tunnel type in order to handle advanced TX
> features. This patch adds a new TX offload flag for MPLS-in-UDP packets.
>
> Signed-off-by: Harish Patil <harish.patil@cavium.com>
> ---
> lib/librte_mbuf/rte_mbuf.c | 2 ++
> lib/librte_mbuf/rte_mbuf.h | 17 ++++++++++-------
> 2 files changed, 12 insertions(+), 7 deletions(-)
>
> diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
> index 0e3e36a..c2793fb 100644
> --- a/lib/librte_mbuf/rte_mbuf.c
> +++ b/lib/librte_mbuf/rte_mbuf.c
> @@ -410,6 +410,7 @@ const char *rte_get_tx_ol_flag_name(uint64_t mask)
> case PKT_TX_TUNNEL_IPIP: return "PKT_TX_TUNNEL_IPIP";
> case PKT_TX_TUNNEL_GENEVE: return "PKT_TX_TUNNEL_GENEVE";
> case PKT_TX_MACSEC: return "PKT_TX_MACSEC";
> + case PKT_TX_TUNNEL_MPLSINUDP: return "PKT_TX_TUNNEL_MPLSINUDP";
> default: return NULL;
> }
> }
> @@ -441,6 +442,7 @@ const char *rte_get_tx_ol_flag_name(uint64_t mask)
> { PKT_TX_TUNNEL_GENEVE, PKT_TX_TUNNEL_MASK,
> "PKT_TX_TUNNEL_NONE" },
> { PKT_TX_MACSEC, PKT_TX_MACSEC, NULL },
> + { PKT_TX_TUNNEL_MPLSINUDP, PKT_TX_TUNNEL_MPLSINUDP, NULL },
> };
> const char *name;
> unsigned int i;
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index 1cb0310..27ad421 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -197,19 +197,22 @@
> * Offload the MACsec. This flag must be set by the application to enable
> * this offload feature for a packet to be transmitted.
> */
> -#define PKT_TX_MACSEC (1ULL << 44)
> +#define PKT_TX_MACSEC (1ULL << 43)
I'm not sure it is suitable to change the value of an existing
flag, since it breaks the ABI.
> /**
> - * Bits 45:48 used for the tunnel type.
> + * Bits 44:48 used for the tunnel type.
> * When doing Tx offload like TSO or checksum, the HW needs to configure the
> * tunnel type into the HW descriptors.
> */
> -#define PKT_TX_TUNNEL_VXLAN (0x1ULL << 45)
> -#define PKT_TX_TUNNEL_GRE (0x2ULL << 45)
> -#define PKT_TX_TUNNEL_IPIP (0x3ULL << 45)
> -#define PKT_TX_TUNNEL_GENEVE (0x4ULL << 45)
> +/**< TX packet with MPLS-in-UDP RFC 7510 header. */
> +#define PKT_TX_TUNNEL_MPLSINUDP (0x1ULL << 44)
> +
> +#define PKT_TX_TUNNEL_VXLAN (0x2ULL << 44)
> +#define PKT_TX_TUNNEL_GRE (0x3ULL << 44)
> +#define PKT_TX_TUNNEL_IPIP (0x4ULL << 44)
> +#define PKT_TX_TUNNEL_GENEVE (0x5ULL << 44)
> /* add new TX TUNNEL type here */
> -#define PKT_TX_TUNNEL_MASK (0xFULL << 45)
> +#define PKT_TX_TUNNEL_MASK (0x1FULL << 44)
>
> /**
> * Second VLAN insertion (QinQ) flag.
I don't understand why adding
#define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45)
wouldn't do the job?
Currently, the tunnel mask is 0xF << 45, which gives 16 possible values.
Regards,
Olivier
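Spelled out, that suggestion would keep the 17.05 layout untouched and only
consume one of the free code points; the values below match the existing
header, and only the MPLSINUDP line is new:

---8<---
/* bits 45:48 still carry the tunnel type, PKT_TX_MACSEC stays at bit 44 */
#define PKT_TX_TUNNEL_VXLAN     (0x1ULL << 45)
#define PKT_TX_TUNNEL_GRE       (0x2ULL << 45)
#define PKT_TX_TUNNEL_IPIP      (0x3ULL << 45)
#define PKT_TX_TUNNEL_GENEVE    (0x4ULL << 45)
#define PKT_TX_TUNNEL_MPLSINUDP (0x5ULL << 45) /* proposed addition */
#define PKT_TX_TUNNEL_MASK      (0xFULL << 45) /* unchanged, 16 values */
---8<---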
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] cryptodev: fix cryptodev start return value
@ 2017-06-08 8:12 3% ` Pavan Nikhilesh Bhagavatula
0 siblings, 0 replies; 200+ results
From: Pavan Nikhilesh Bhagavatula @ 2017-06-08 8:12 UTC (permalink / raw)
To: dev, Trahe, Fiona; +Cc: Doherty, Declan, Pavan Nikhilesh
On Wed, Jun 07, 2017 at 03:54:23PM +0000, Trahe, Fiona wrote:
> Hi Pavan,
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Pavan Nikhilesh
> > Sent: Wednesday, June 7, 2017 11:37 AM
> > To: dev@dpdk.org
> > Cc: Doherty, Declan <declan.doherty@intel.com>; Pavan Nikhilesh Bhagavatula
> > <pbhagavatula@caviumnetworks.com>
> > Subject: [dpdk-dev] [PATCH] cryptodev: fix cryptodev start return value
> >
> > From: Pavan Nikhilesh Bhagavatula <pbhagavatula@caviumnetworks.com>
> >
> > If cryptodev has already started it should return -EBUSY instead of 0
> > when rte_cryptodev_start is called.
> >
> > Fixes: d11b0f30df88 ("cryptodev: introduce API and framework for crypto devices")
> >
> > Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> > ---
> > lib/librte_cryptodev/rte_cryptodev.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
> > index b65cd9c..c815038 100644
> > --- a/lib/librte_cryptodev/rte_cryptodev.c
> > +++ b/lib/librte_cryptodev/rte_cryptodev.c
> > @@ -1000,7 +1000,7 @@ rte_cryptodev_start(uint8_t dev_id)
> > if (dev->data->dev_started != 0) {
> > CDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already started",
> > dev_id);
> > - return 0;
> > + return -EBUSY;
> It makes sense to me to return 0/success in this case, as the end result is the
> same: the device is successfully started.
> But I don't feel strongly about it if there's a good argument for making the change?
I do agree with this, but from an application perspective, if the API
is called again after the device has already started (without calling
the stop API), it means there is an underlying issue in the
application's logic that would otherwise go undetected, so I feel
we should strictly treat this scenario as an error.
> However, as it is an API change, doesn't it need to be flagged in a release before the change is made?
I don't think that this would be an API change as it doesn't deprecate
the existing ABI.
Any thoughts about this from the community are welcome as the same
issue affects multiple core libraries (cryptodev, ethdev, eventdev).
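As a concrete illustration, assuming the proposed -EBUSY return, an
application could make a double start visible; app_start_cryptodev is a
hypothetical helper, only rte_cryptodev_start() is from the API:

---8<---
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <errno.h>
#include <rte_cryptodev.h>

static void app_start_cryptodev(uint8_t dev_id)
{
	int ret = rte_cryptodev_start(dev_id);

	if (ret == -EBUSY) {
		/* double start: a logic bug in the app, now detectable */
		fprintf(stderr, "cryptodev %u started twice\n",
			(unsigned int)dev_id);
		exit(EXIT_FAILURE);
	}
	if (ret < 0) {
		fprintf(stderr, "cryptodev %u start failed: %d\n",
			(unsigned int)dev_id, ret);
		exit(EXIT_FAILURE);
	}
}
---8<---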
>
>
> }
> >
> > diag = (*dev->dev_ops->dev_start)(dev);
> > --
> > 2.7.4
>
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v2 05/12] pmdinfogen: move to drivers subdirectory
@ 2017-06-07 23:59 1% ` Gaetan Rivet
` (2 subsequent siblings)
3 siblings, 0 replies; 200+ results
From: Gaetan Rivet @ 2017-06-07 23:59 UTC (permalink / raw)
To: dev; +Cc: Gaetan Rivet
pmdinfogen has a dependency on the PCI bus. The latter must be built
first.
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
---
GNUmakefile | 2 +-
MAINTAINERS | 2 +-
buildtools/Makefile | 36 ----
buildtools/pmdinfogen/Makefile | 47 -----
buildtools/pmdinfogen/pmdinfogen.c | 422 -------------------------------------
buildtools/pmdinfogen/pmdinfogen.h | 125 -----------
drivers/Makefile | 4 +-
drivers/pmdinfogen/Makefile | 47 +++++
drivers/pmdinfogen/pmdinfogen.c | 422 +++++++++++++++++++++++++++++++++++++
drivers/pmdinfogen/pmdinfogen.h | 125 +++++++++++
10 files changed, 599 insertions(+), 633 deletions(-)
delete mode 100644 buildtools/Makefile
delete mode 100644 buildtools/pmdinfogen/Makefile
delete mode 100644 buildtools/pmdinfogen/pmdinfogen.c
delete mode 100644 buildtools/pmdinfogen/pmdinfogen.h
create mode 100644 drivers/pmdinfogen/Makefile
create mode 100644 drivers/pmdinfogen/pmdinfogen.c
create mode 100644 drivers/pmdinfogen/pmdinfogen.h
diff --git a/GNUmakefile b/GNUmakefile
index 45b7fbb..c292646 100644
--- a/GNUmakefile
+++ b/GNUmakefile
@@ -40,7 +40,7 @@ export RTE_SDK
# directory list
#
-ROOTDIRS-y := buildtools lib drivers app
+ROOTDIRS-y := lib drivers app
ROOTDIRS- := test
include $(RTE_SDK)/mk/rte.sdkroot.mk
diff --git a/MAINTAINERS b/MAINTAINERS
index f6095ef..c8c57cb 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -72,7 +72,7 @@ F: doc/guides/rel_notes/deprecation.rst
F: devtools/validate-abi.sh
Driver information
-F: buildtools/pmdinfogen/
+F: drivers/pmdinfogen/
F: usertools/dpdk-pmdinfo.py
F: doc/guides/tools/pmdinfo.rst
diff --git a/buildtools/Makefile b/buildtools/Makefile
deleted file mode 100644
index 35a42ff..0000000
--- a/buildtools/Makefile
+++ /dev/null
@@ -1,36 +0,0 @@
-# BSD LICENSE
-#
-# Copyright(c) 2016 Neil Horman. All rights reserved.
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in
-# the documentation and/or other materials provided with the
-# distribution.
-# * Neither the name of Intel Corporation nor the names of its
-# contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-include $(RTE_SDK)/mk/rte.vars.mk
-
-DIRS-y += pmdinfogen
-
-include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/buildtools/pmdinfogen/Makefile b/buildtools/pmdinfogen/Makefile
deleted file mode 100644
index bf07b6f..0000000
--- a/buildtools/pmdinfogen/Makefile
+++ /dev/null
@@ -1,47 +0,0 @@
-# BSD LICENSE
-#
-# Copyright(c) 2016 Neil Horman. All rights reserved.
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in
-# the documentation and/or other materials provided with the
-# distribution.
-# * Neither the name of Intel Corporation nor the names of its
-# contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-include $(RTE_SDK)/mk/rte.vars.mk
-
-#
-# library name
-#
-HOSTAPP = dpdk-pmdinfogen
-
-#
-# all sources are stored in SRCS-y
-#
-SRCS-y += pmdinfogen.c
-
-HOST_CFLAGS += $(WERROR_FLAGS) -g
-HOST_CFLAGS += -I$(RTE_OUTPUT)/include
-
-include $(RTE_SDK)/mk/rte.hostapp.mk
diff --git a/buildtools/pmdinfogen/pmdinfogen.c b/buildtools/pmdinfogen/pmdinfogen.c
deleted file mode 100644
index ba1a12e..0000000
--- a/buildtools/pmdinfogen/pmdinfogen.c
+++ /dev/null
@@ -1,422 +0,0 @@
-/* Postprocess pmd object files to export hw support
- *
- * Copyright 2016 Neil Horman <nhorman@tuxdriver.com>
- * Based in part on modpost.c from the linux kernel
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License V2, incorporated herein by reference.
- *
- */
-
-#define _GNU_SOURCE
-#include <stdio.h>
-#include <ctype.h>
-#include <string.h>
-#include <limits.h>
-#include <stdbool.h>
-#include <errno.h>
-#include <libgen.h>
-
-#include <rte_common.h>
-#include "pmdinfogen.h"
-
-#ifdef RTE_ARCH_64
-#define ADDR_SIZE 64
-#else
-#define ADDR_SIZE 32
-#endif
-
-
-static const char *sym_name(struct elf_info *elf, Elf_Sym *sym)
-{
- if (sym)
- return elf->strtab + sym->st_name;
- else
- return "(unknown)";
-}
-
-static void *grab_file(const char *filename, unsigned long *size)
-{
- struct stat st;
- void *map = MAP_FAILED;
- int fd;
-
- fd = open(filename, O_RDONLY);
- if (fd < 0)
- return NULL;
- if (fstat(fd, &st))
- goto failed;
-
- *size = st.st_size;
- map = mmap(NULL, *size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
-
-failed:
- close(fd);
- if (map == MAP_FAILED)
- return NULL;
- return map;
-}
-
-/**
- * Release a file previously mapped with grab_file(),
- * unmapping the memory region of the given size.
- **/
-static void release_file(void *file, unsigned long size)
-{
- munmap(file, size);
-}
-
-
-static void *get_sym_value(struct elf_info *info, const Elf_Sym *sym)
-{
- return RTE_PTR_ADD(info->hdr,
- info->sechdrs[sym->st_shndx].sh_offset + sym->st_value);
-}
-
-static Elf_Sym *find_sym_in_symtab(struct elf_info *info,
- const char *name, Elf_Sym *last)
-{
- Elf_Sym *idx;
- if (last)
- idx = last+1;
- else
- idx = info->symtab_start;
-
- for (; idx < info->symtab_stop; idx++) {
- const char *n = sym_name(info, idx);
- if (!strncmp(n, name, strlen(name)))
- return idx;
- }
- return NULL;
-}
-
-static int parse_elf(struct elf_info *info, const char *filename)
-{
- unsigned int i;
- Elf_Ehdr *hdr;
- Elf_Shdr *sechdrs;
- Elf_Sym *sym;
- int endian;
- unsigned int symtab_idx = ~0U, symtab_shndx_idx = ~0U;
-
- hdr = grab_file(filename, &info->size);
- if (!hdr) {
- perror(filename);
- exit(1);
- }
- info->hdr = hdr;
- if (info->size < sizeof(*hdr)) {
- /* file too small, assume this is an empty .o file */
- return 0;
- }
- /* Is this a valid ELF file? */
- if ((hdr->e_ident[EI_MAG0] != ELFMAG0) ||
- (hdr->e_ident[EI_MAG1] != ELFMAG1) ||
- (hdr->e_ident[EI_MAG2] != ELFMAG2) ||
- (hdr->e_ident[EI_MAG3] != ELFMAG3)) {
- /* Not an ELF file - silently ignore it */
- return 0;
- }
-
- if (!hdr->e_ident[EI_DATA]) {
- /* Unknown endian */
- return 0;
- }
-
- endian = hdr->e_ident[EI_DATA];
-
- /* Fix endianness in ELF header */
- hdr->e_type = TO_NATIVE(endian, 16, hdr->e_type);
- hdr->e_machine = TO_NATIVE(endian, 16, hdr->e_machine);
- hdr->e_version = TO_NATIVE(endian, 32, hdr->e_version);
- hdr->e_entry = TO_NATIVE(endian, ADDR_SIZE, hdr->e_entry);
- hdr->e_phoff = TO_NATIVE(endian, ADDR_SIZE, hdr->e_phoff);
- hdr->e_shoff = TO_NATIVE(endian, ADDR_SIZE, hdr->e_shoff);
- hdr->e_flags = TO_NATIVE(endian, 32, hdr->e_flags);
- hdr->e_ehsize = TO_NATIVE(endian, 16, hdr->e_ehsize);
- hdr->e_phentsize = TO_NATIVE(endian, 16, hdr->e_phentsize);
- hdr->e_phnum = TO_NATIVE(endian, 16, hdr->e_phnum);
- hdr->e_shentsize = TO_NATIVE(endian, 16, hdr->e_shentsize);
- hdr->e_shnum = TO_NATIVE(endian, 16, hdr->e_shnum);
- hdr->e_shstrndx = TO_NATIVE(endian, 16, hdr->e_shstrndx);
-
- sechdrs = RTE_PTR_ADD(hdr, hdr->e_shoff);
- info->sechdrs = sechdrs;
-
- /* Check if file offset is correct */
- if (hdr->e_shoff > info->size) {
- fprintf(stderr, "section header offset=%lu in file '%s' "
- "is bigger than filesize=%lu\n",
- (unsigned long)hdr->e_shoff,
- filename, info->size);
- return 0;
- }
-
- if (hdr->e_shnum == SHN_UNDEF) {
- /*
- * There are more than 64k sections,
- * read count from .sh_size.
- */
- info->num_sections = TO_NATIVE(endian, 32, sechdrs[0].sh_size);
- } else {
- info->num_sections = hdr->e_shnum;
- }
- if (hdr->e_shstrndx == SHN_XINDEX)
- info->secindex_strings =
- TO_NATIVE(endian, 32, sechdrs[0].sh_link);
- else
- info->secindex_strings = hdr->e_shstrndx;
-
- /* Fix endianness in section headers */
- for (i = 0; i < info->num_sections; i++) {
- sechdrs[i].sh_name =
- TO_NATIVE(endian, 32, sechdrs[i].sh_name);
- sechdrs[i].sh_type =
- TO_NATIVE(endian, 32, sechdrs[i].sh_type);
- sechdrs[i].sh_flags =
- TO_NATIVE(endian, 32, sechdrs[i].sh_flags);
- sechdrs[i].sh_addr =
- TO_NATIVE(endian, ADDR_SIZE, sechdrs[i].sh_addr);
- sechdrs[i].sh_offset =
- TO_NATIVE(endian, ADDR_SIZE, sechdrs[i].sh_offset);
- sechdrs[i].sh_size =
- TO_NATIVE(endian, 32, sechdrs[i].sh_size);
- sechdrs[i].sh_link =
- TO_NATIVE(endian, 32, sechdrs[i].sh_link);
- sechdrs[i].sh_info =
- TO_NATIVE(endian, 32, sechdrs[i].sh_info);
- sechdrs[i].sh_addralign =
- TO_NATIVE(endian, ADDR_SIZE, sechdrs[i].sh_addralign);
- sechdrs[i].sh_entsize =
- TO_NATIVE(endian, ADDR_SIZE, sechdrs[i].sh_entsize);
- }
- /* Find symbol table. */
- for (i = 1; i < info->num_sections; i++) {
- int nobits = sechdrs[i].sh_type == SHT_NOBITS;
-
- if (!nobits && sechdrs[i].sh_offset > info->size) {
- fprintf(stderr, "%s is truncated. "
- "sechdrs[i].sh_offset=%lu > sizeof(*hrd)=%zu\n",
- filename, (unsigned long)sechdrs[i].sh_offset,
- sizeof(*hdr));
- return 0;
- }
-
- if (sechdrs[i].sh_type == SHT_SYMTAB) {
- unsigned int sh_link_idx;
- symtab_idx = i;
- info->symtab_start = RTE_PTR_ADD(hdr,
- sechdrs[i].sh_offset);
- info->symtab_stop = RTE_PTR_ADD(hdr,
- sechdrs[i].sh_offset + sechdrs[i].sh_size);
- sh_link_idx = sechdrs[i].sh_link;
- info->strtab = RTE_PTR_ADD(hdr,
- sechdrs[sh_link_idx].sh_offset);
- }
-
- /* 32bit section no. table? ("more than 64k sections") */
- if (sechdrs[i].sh_type == SHT_SYMTAB_SHNDX) {
- symtab_shndx_idx = i;
- info->symtab_shndx_start = RTE_PTR_ADD(hdr,
- sechdrs[i].sh_offset);
- info->symtab_shndx_stop = RTE_PTR_ADD(hdr,
- sechdrs[i].sh_offset + sechdrs[i].sh_size);
- }
- }
- if (!info->symtab_start)
- fprintf(stderr, "%s has no symtab?\n", filename);
- else {
- /* Fix endianness in symbols */
- for (sym = info->symtab_start; sym < info->symtab_stop; sym++) {
- sym->st_shndx = TO_NATIVE(endian, 16, sym->st_shndx);
- sym->st_name = TO_NATIVE(endian, 32, sym->st_name);
- sym->st_value = TO_NATIVE(endian, ADDR_SIZE, sym->st_value);
- sym->st_size = TO_NATIVE(endian, ADDR_SIZE, sym->st_size);
- }
- }
-
- if (symtab_shndx_idx != ~0U) {
- Elf32_Word *p;
- if (symtab_idx != sechdrs[symtab_shndx_idx].sh_link)
- fprintf(stderr,
- "%s: SYMTAB_SHNDX has bad sh_link: %u!=%u\n",
- filename, sechdrs[symtab_shndx_idx].sh_link,
- symtab_idx);
- /* Fix endianness */
- for (p = info->symtab_shndx_start; p < info->symtab_shndx_stop;
- p++)
- *p = TO_NATIVE(endian, 32, *p);
- }
-
- return 1;
-}
-
-static void parse_elf_finish(struct elf_info *info)
-{
- struct pmd_driver *tmp, *idx = info->drivers;
- release_file(info->hdr, info->size);
- while (idx) {
- tmp = idx->next;
- free(idx);
- idx = tmp;
- }
-}
-
-struct opt_tag {
- const char *suffix;
- const char *json_id;
-};
-
-static const struct opt_tag opt_tags[] = {
- {"_param_string_export", "params"},
- {"_kmod_dep_export", "kmod"},
-};
-
-static int complete_pmd_entry(struct elf_info *info, struct pmd_driver *drv)
-{
- const char *tname;
- int i;
- char tmpsymname[128];
- Elf_Sym *tmpsym;
-
- drv->name = get_sym_value(info, drv->name_sym);
-
- for (i = 0; i < PMD_OPT_MAX; i++) {
- memset(tmpsymname, 0, 128);
- sprintf(tmpsymname, "__%s%s", drv->name, opt_tags[i].suffix);
- tmpsym = find_sym_in_symtab(info, tmpsymname, NULL);
- if (!tmpsym)
- continue;
- drv->opt_vals[i] = get_sym_value(info, tmpsym);
- }
-
- memset(tmpsymname, 0, 128);
- sprintf(tmpsymname, "__%s_pci_tbl_export", drv->name);
-
- tmpsym = find_sym_in_symtab(info, tmpsymname, NULL);
-
-
- /*
- * If this returns NULL, then this is a PMD_VDEV, because
- * it has no pci table reference
- */
- if (!tmpsym) {
- drv->pci_tbl = NULL;
- return 0;
- }
-
- tname = get_sym_value(info, tmpsym);
- tmpsym = find_sym_in_symtab(info, tname, NULL);
- if (!tmpsym)
- return -ENOENT;
-
- drv->pci_tbl = (struct rte_pci_id *)get_sym_value(info, tmpsym);
- if (!drv->pci_tbl)
- return -ENOENT;
-
- return 0;
-}
-
-static int locate_pmd_entries(struct elf_info *info)
-{
- Elf_Sym *last = NULL;
- struct pmd_driver *new;
-
- info->drivers = NULL;
-
- do {
- new = calloc(sizeof(struct pmd_driver), 1);
- new->name_sym = find_sym_in_symtab(info, "this_pmd_name", last);
- last = new->name_sym;
- if (!new->name_sym)
- free(new);
- else {
- if (complete_pmd_entry(info, new)) {
- fprintf(stderr,
- "Failed to complete pmd entry\n");
- free(new);
- } else {
- new->next = info->drivers;
- info->drivers = new;
- }
- }
- } while (last);
-
- return 0;
-}
-
-static void output_pmd_info_string(struct elf_info *info, char *outfile)
-{
- FILE *ofd;
- struct pmd_driver *drv;
- struct rte_pci_id *pci_ids;
- int idx = 0;
-
- ofd = fopen(outfile, "w+");
- if (!ofd) {
- fprintf(stderr, "Unable to open output file\n");
- return;
- }
-
- drv = info->drivers;
-
- while (drv) {
- fprintf(ofd, "const char %s_pmd_info[] __attribute__((used)) = "
- "\"PMD_INFO_STRING= {",
- drv->name);
- fprintf(ofd, "\\\"name\\\" : \\\"%s\\\", ", drv->name);
-
- for (idx = 0; idx < PMD_OPT_MAX; idx++) {
- if (drv->opt_vals[idx])
- fprintf(ofd, "\\\"%s\\\" : \\\"%s\\\", ",
- opt_tags[idx].json_id,
- drv->opt_vals[idx]);
- }
-
- pci_ids = drv->pci_tbl;
- fprintf(ofd, "\\\"pci_ids\\\" : [");
-
- while (pci_ids && pci_ids->device_id) {
- fprintf(ofd, "[%d, %d, %d, %d]",
- pci_ids->vendor_id, pci_ids->device_id,
- pci_ids->subsystem_vendor_id,
- pci_ids->subsystem_device_id);
- pci_ids++;
- if (pci_ids->device_id)
- fprintf(ofd, ",");
- else
- fprintf(ofd, " ");
- }
- fprintf(ofd, "]}\";\n");
- drv = drv->next;
- }
-
- fclose(ofd);
-}
-
-int main(int argc, char **argv)
-{
- struct elf_info info;
- int rc = 1;
-
- if (argc < 3) {
- fprintf(stderr,
- "usage: %s <object file> <c output file>\n",
- basename(argv[0]));
- exit(127);
- }
- parse_elf(&info, argv[1]);
-
- locate_pmd_entries(&info);
-
- if (info.drivers) {
- output_pmd_info_string(&info, argv[2]);
- rc = 0;
- } else {
- fprintf(stderr, "No drivers registered\n");
- }
-
- parse_elf_finish(&info);
- exit(rc);
-}
diff --git a/buildtools/pmdinfogen/pmdinfogen.h b/buildtools/pmdinfogen/pmdinfogen.h
deleted file mode 100644
index 27bab30..0000000
--- a/buildtools/pmdinfogen/pmdinfogen.h
+++ /dev/null
@@ -1,125 +0,0 @@
-
-/* Postprocess pmd object files to export hw support
- *
- * Copyright 2016 Neil Horman <nhorman@tuxdriver.com>
- * Based in part on modpost.c from the linux kernel
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License V2, incorporated herein by reference.
- *
- */
-
-#include <stdio.h>
-#include <stdlib.h>
-#include <stdarg.h>
-#include <string.h>
-#include <sys/types.h>
-#include <sys/stat.h>
-#include <sys/mman.h>
-#ifdef __linux__
-#include <endian.h>
-#else
-#include <sys/endian.h>
-#endif
-#include <fcntl.h>
-#include <unistd.h>
-#include <elf.h>
-#include <rte_config.h>
-#include <rte_pci.h>
-
-/* On BSD-alike OSes elf.h defines these according to host's word size */
-#undef ELF_ST_BIND
-#undef ELF_ST_TYPE
-#undef ELF_R_SYM
-#undef ELF_R_TYPE
-
-/*
- * Define ELF64_* to ELF_*, the latter being defined in both 32 and 64 bit
- * flavors in elf.h. This makes our code a bit more generic between arches
- * and allows us to support 32 bit code in the future should we ever want to
- */
-#ifdef RTE_ARCH_64
-#define Elf_Ehdr Elf64_Ehdr
-#define Elf_Shdr Elf64_Shdr
-#define Elf_Sym Elf64_Sym
-#define Elf_Addr Elf64_Addr
-#define Elf_Sword Elf64_Sxword
-#define Elf_Section Elf64_Half
-#define ELF_ST_BIND ELF64_ST_BIND
-#define ELF_ST_TYPE ELF64_ST_TYPE
-
-#define Elf_Rel Elf64_Rel
-#define Elf_Rela Elf64_Rela
-#define ELF_R_SYM ELF64_R_SYM
-#define ELF_R_TYPE ELF64_R_TYPE
-#else
-#define Elf_Ehdr Elf32_Ehdr
-#define Elf_Shdr Elf32_Shdr
-#define Elf_Sym Elf32_Sym
-#define Elf_Addr Elf32_Addr
-#define Elf_Sword Elf32_Sxword
-#define Elf_Section Elf32_Half
-#define ELF_ST_BIND ELF32_ST_BIND
-#define ELF_ST_TYPE ELF32_ST_TYPE
-
-#define Elf_Rel Elf32_Rel
-#define Elf_Rela Elf32_Rela
-#define ELF_R_SYM ELF32_R_SYM
-#define ELF_R_TYPE ELF32_R_TYPE
-#endif
-
-
-/*
- * Note, it seems odd that we have both a CONVERT_NATIVE and a TO_NATIVE macro
- * below. We do this because the values passed to TO_NATIVE may themselves be
- * macros and need both macros here to get expanded. Specifically its the width
- * variable we are concerned with, because it needs to get expanded prior to
- * string concatenation
- */
-#define CONVERT_NATIVE(fend, width, x) ({ \
-typeof(x) ___x; \
-if ((fend) == ELFDATA2LSB) \
- ___x = le##width##toh(x); \
-else \
- ___x = be##width##toh(x); \
- ___x; \
-})
-
-#define TO_NATIVE(fend, width, x) CONVERT_NATIVE(fend, width, x)
-
-enum opt_params {
- PMD_PARAM_STRING = 0,
- PMD_KMOD_DEP,
- PMD_OPT_MAX
-};
-
-struct pmd_driver {
- Elf_Sym *name_sym;
- const char *name;
- struct rte_pci_id *pci_tbl;
- struct pmd_driver *next;
-
- const char *opt_vals[PMD_OPT_MAX];
-};
-
-struct elf_info {
- unsigned long size;
- Elf_Ehdr *hdr;
- Elf_Shdr *sechdrs;
- Elf_Sym *symtab_start;
- Elf_Sym *symtab_stop;
- char *strtab;
-
- /* support for 32bit section numbers */
-
- unsigned int num_sections; /* max_secindex + 1 */
- unsigned int secindex_strings;
- /* if Nth symbol table entry has .st_shndx = SHN_XINDEX,
- * take shndx from symtab_shndx_start[N] instead
- */
- Elf32_Word *symtab_shndx_start;
- Elf32_Word *symtab_shndx_stop;
-
- struct pmd_driver *drivers;
-};
-
diff --git a/drivers/Makefile b/drivers/Makefile
index a04a01f..f3f9417 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -32,10 +32,12 @@
include $(RTE_SDK)/mk/rte.vars.mk
DIRS-y += bus
+DIRS-y += pmdinfogen
+DEPDIRS-pmdinfogen := bus
DIRS-y += mempool
DEPDIRS-mempool := bus
DIRS-y += net
-DEPDIRS-net := bus mempool
+DEPDIRS-net := bus pmdinfogen mempool
DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += crypto
DEPDIRS-crypto := mempool
DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += event
diff --git a/drivers/pmdinfogen/Makefile b/drivers/pmdinfogen/Makefile
new file mode 100644
index 0000000..bf07b6f
--- /dev/null
+++ b/drivers/pmdinfogen/Makefile
@@ -0,0 +1,47 @@
+# BSD LICENSE
+#
+# Copyright(c) 2016 Neil Horman. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Intel Corporation nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# host application name
+#
+HOSTAPP = dpdk-pmdinfogen
+
+#
+# all sources are stored in SRCS-y
+#
+SRCS-y += pmdinfogen.c
+
+HOST_CFLAGS += $(WERROR_FLAGS) -g
+HOST_CFLAGS += -I$(RTE_OUTPUT)/include
+
+include $(RTE_SDK)/mk/rte.hostapp.mk
diff --git a/drivers/pmdinfogen/pmdinfogen.c b/drivers/pmdinfogen/pmdinfogen.c
new file mode 100644
index 0000000..ba1a12e
--- /dev/null
+++ b/drivers/pmdinfogen/pmdinfogen.c
@@ -0,0 +1,422 @@
+/* Postprocess pmd object files to export hw support
+ *
+ * Copyright 2016 Neil Horman <nhorman@tuxdriver.com>
+ * Based in part on modpost.c from the linux kernel
+ *
+ * This software may be used and distributed according to the terms
+ * of the GNU General Public License V2, incorporated herein by reference.
+ *
+ */
+
+#define _GNU_SOURCE
+#include <stdio.h>
+#include <ctype.h>
+#include <string.h>
+#include <limits.h>
+#include <stdbool.h>
+#include <errno.h>
+#include <libgen.h>
+
+#include <rte_common.h>
+#include "pmdinfogen.h"
+
+#ifdef RTE_ARCH_64
+#define ADDR_SIZE 64
+#else
+#define ADDR_SIZE 32
+#endif
+
+
+static const char *sym_name(struct elf_info *elf, Elf_Sym *sym)
+{
+ if (sym)
+ return elf->strtab + sym->st_name;
+ else
+ return "(unknown)";
+}
+
+static void *grab_file(const char *filename, unsigned long *size)
+{
+ struct stat st;
+ void *map = MAP_FAILED;
+ int fd;
+
+ fd = open(filename, O_RDONLY);
+ if (fd < 0)
+ return NULL;
+ if (fstat(fd, &st))
+ goto failed;
+
+ *size = st.st_size;
+ map = mmap(NULL, *size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
+
+failed:
+ close(fd);
+ if (map == MAP_FAILED)
+ return NULL;
+ return map;
+}
+
+/**
+ * Release a file previously mapped with grab_file(),
+ * unmapping the memory region of the given size.
+ **/
+static void release_file(void *file, unsigned long size)
+{
+ munmap(file, size);
+}
+
+
+static void *get_sym_value(struct elf_info *info, const Elf_Sym *sym)
+{
+ return RTE_PTR_ADD(info->hdr,
+ info->sechdrs[sym->st_shndx].sh_offset + sym->st_value);
+}
+
+static Elf_Sym *find_sym_in_symtab(struct elf_info *info,
+ const char *name, Elf_Sym *last)
+{
+ Elf_Sym *idx;
+ if (last)
+ idx = last+1;
+ else
+ idx = info->symtab_start;
+
+ for (; idx < info->symtab_stop; idx++) {
+ const char *n = sym_name(info, idx);
+ if (!strncmp(n, name, strlen(name)))
+ return idx;
+ }
+ return NULL;
+}
+
+static int parse_elf(struct elf_info *info, const char *filename)
+{
+ unsigned int i;
+ Elf_Ehdr *hdr;
+ Elf_Shdr *sechdrs;
+ Elf_Sym *sym;
+ int endian;
+ unsigned int symtab_idx = ~0U, symtab_shndx_idx = ~0U;
+
+ hdr = grab_file(filename, &info->size);
+ if (!hdr) {
+ perror(filename);
+ exit(1);
+ }
+ info->hdr = hdr;
+ if (info->size < sizeof(*hdr)) {
+ /* file too small, assume this is an empty .o file */
+ return 0;
+ }
+ /* Is this a valid ELF file? */
+ if ((hdr->e_ident[EI_MAG0] != ELFMAG0) ||
+ (hdr->e_ident[EI_MAG1] != ELFMAG1) ||
+ (hdr->e_ident[EI_MAG2] != ELFMAG2) ||
+ (hdr->e_ident[EI_MAG3] != ELFMAG3)) {
+ /* Not an ELF file - silently ignore it */
+ return 0;
+ }
+
+ if (!hdr->e_ident[EI_DATA]) {
+ /* Unknown endian */
+ return 0;
+ }
+
+ endian = hdr->e_ident[EI_DATA];
+
+ /* Fix endianness in ELF header */
+ hdr->e_type = TO_NATIVE(endian, 16, hdr->e_type);
+ hdr->e_machine = TO_NATIVE(endian, 16, hdr->e_machine);
+ hdr->e_version = TO_NATIVE(endian, 32, hdr->e_version);
+ hdr->e_entry = TO_NATIVE(endian, ADDR_SIZE, hdr->e_entry);
+ hdr->e_phoff = TO_NATIVE(endian, ADDR_SIZE, hdr->e_phoff);
+ hdr->e_shoff = TO_NATIVE(endian, ADDR_SIZE, hdr->e_shoff);
+ hdr->e_flags = TO_NATIVE(endian, 32, hdr->e_flags);
+ hdr->e_ehsize = TO_NATIVE(endian, 16, hdr->e_ehsize);
+ hdr->e_phentsize = TO_NATIVE(endian, 16, hdr->e_phentsize);
+ hdr->e_phnum = TO_NATIVE(endian, 16, hdr->e_phnum);
+ hdr->e_shentsize = TO_NATIVE(endian, 16, hdr->e_shentsize);
+ hdr->e_shnum = TO_NATIVE(endian, 16, hdr->e_shnum);
+ hdr->e_shstrndx = TO_NATIVE(endian, 16, hdr->e_shstrndx);
+
+ sechdrs = RTE_PTR_ADD(hdr, hdr->e_shoff);
+ info->sechdrs = sechdrs;
+
+ /* Check if file offset is correct */
+ if (hdr->e_shoff > info->size) {
+ fprintf(stderr, "section header offset=%lu in file '%s' "
+ "is bigger than filesize=%lu\n",
+ (unsigned long)hdr->e_shoff,
+ filename, info->size);
+ return 0;
+ }
+
+ if (hdr->e_shnum == SHN_UNDEF) {
+ /*
+ * There are more than 64k sections,
+ * read count from .sh_size.
+ */
+ info->num_sections = TO_NATIVE(endian, 32, sechdrs[0].sh_size);
+ } else {
+ info->num_sections = hdr->e_shnum;
+ }
+ if (hdr->e_shstrndx == SHN_XINDEX)
+ info->secindex_strings =
+ TO_NATIVE(endian, 32, sechdrs[0].sh_link);
+ else
+ info->secindex_strings = hdr->e_shstrndx;
+
+ /* Fix endianness in section headers */
+ for (i = 0; i < info->num_sections; i++) {
+ sechdrs[i].sh_name =
+ TO_NATIVE(endian, 32, sechdrs[i].sh_name);
+ sechdrs[i].sh_type =
+ TO_NATIVE(endian, 32, sechdrs[i].sh_type);
+ sechdrs[i].sh_flags =
+ TO_NATIVE(endian, 32, sechdrs[i].sh_flags);
+ sechdrs[i].sh_addr =
+ TO_NATIVE(endian, ADDR_SIZE, sechdrs[i].sh_addr);
+ sechdrs[i].sh_offset =
+ TO_NATIVE(endian, ADDR_SIZE, sechdrs[i].sh_offset);
+ sechdrs[i].sh_size =
+ TO_NATIVE(endian, 32, sechdrs[i].sh_size);
+ sechdrs[i].sh_link =
+ TO_NATIVE(endian, 32, sechdrs[i].sh_link);
+ sechdrs[i].sh_info =
+ TO_NATIVE(endian, 32, sechdrs[i].sh_info);
+ sechdrs[i].sh_addralign =
+ TO_NATIVE(endian, ADDR_SIZE, sechdrs[i].sh_addralign);
+ sechdrs[i].sh_entsize =
+ TO_NATIVE(endian, ADDR_SIZE, sechdrs[i].sh_entsize);
+ }
+ /* Find symbol table. */
+ for (i = 1; i < info->num_sections; i++) {
+ int nobits = sechdrs[i].sh_type == SHT_NOBITS;
+
+ if (!nobits && sechdrs[i].sh_offset > info->size) {
+ fprintf(stderr, "%s is truncated. "
+ "sechdrs[i].sh_offset=%lu > sizeof(*hrd)=%zu\n",
+ filename, (unsigned long)sechdrs[i].sh_offset,
+ sizeof(*hdr));
+ return 0;
+ }
+
+ if (sechdrs[i].sh_type == SHT_SYMTAB) {
+ unsigned int sh_link_idx;
+ symtab_idx = i;
+ info->symtab_start = RTE_PTR_ADD(hdr,
+ sechdrs[i].sh_offset);
+ info->symtab_stop = RTE_PTR_ADD(hdr,
+ sechdrs[i].sh_offset + sechdrs[i].sh_size);
+ sh_link_idx = sechdrs[i].sh_link;
+ info->strtab = RTE_PTR_ADD(hdr,
+ sechdrs[sh_link_idx].sh_offset);
+ }
+
+ /* 32bit section no. table? ("more than 64k sections") */
+ if (sechdrs[i].sh_type == SHT_SYMTAB_SHNDX) {
+ symtab_shndx_idx = i;
+ info->symtab_shndx_start = RTE_PTR_ADD(hdr,
+ sechdrs[i].sh_offset);
+ info->symtab_shndx_stop = RTE_PTR_ADD(hdr,
+ sechdrs[i].sh_offset + sechdrs[i].sh_size);
+ }
+ }
+ if (!info->symtab_start)
+ fprintf(stderr, "%s has no symtab?\n", filename);
+ else {
+ /* Fix endianness in symbols */
+ for (sym = info->symtab_start; sym < info->symtab_stop; sym++) {
+ sym->st_shndx = TO_NATIVE(endian, 16, sym->st_shndx);
+ sym->st_name = TO_NATIVE(endian, 32, sym->st_name);
+ sym->st_value = TO_NATIVE(endian, ADDR_SIZE, sym->st_value);
+ sym->st_size = TO_NATIVE(endian, ADDR_SIZE, sym->st_size);
+ }
+ }
+
+ if (symtab_shndx_idx != ~0U) {
+ Elf32_Word *p;
+ if (symtab_idx != sechdrs[symtab_shndx_idx].sh_link)
+ fprintf(stderr,
+ "%s: SYMTAB_SHNDX has bad sh_link: %u!=%u\n",
+ filename, sechdrs[symtab_shndx_idx].sh_link,
+ symtab_idx);
+ /* Fix endianness */
+ for (p = info->symtab_shndx_start; p < info->symtab_shndx_stop;
+ p++)
+ *p = TO_NATIVE(endian, 32, *p);
+ }
+
+ return 1;
+}
+
+static void parse_elf_finish(struct elf_info *info)
+{
+ struct pmd_driver *tmp, *idx = info->drivers;
+ release_file(info->hdr, info->size);
+ while (idx) {
+ tmp = idx->next;
+ free(idx);
+ idx = tmp;
+ }
+}
+
+struct opt_tag {
+ const char *suffix;
+ const char *json_id;
+};
+
+static const struct opt_tag opt_tags[] = {
+ {"_param_string_export", "params"},
+ {"_kmod_dep_export", "kmod"},
+};
+
+static int complete_pmd_entry(struct elf_info *info, struct pmd_driver *drv)
+{
+ const char *tname;
+ int i;
+ char tmpsymname[128];
+ Elf_Sym *tmpsym;
+
+ drv->name = get_sym_value(info, drv->name_sym);
+
+ for (i = 0; i < PMD_OPT_MAX; i++) {
+ memset(tmpsymname, 0, 128);
+ sprintf(tmpsymname, "__%s%s", drv->name, opt_tags[i].suffix);
+ tmpsym = find_sym_in_symtab(info, tmpsymname, NULL);
+ if (!tmpsym)
+ continue;
+ drv->opt_vals[i] = get_sym_value(info, tmpsym);
+ }
+
+ memset(tmpsymname, 0, 128);
+ sprintf(tmpsymname, "__%s_pci_tbl_export", drv->name);
+
+ tmpsym = find_sym_in_symtab(info, tmpsymname, NULL);
+
+
+ /*
+ * If this returns NULL, then this is a PMD_VDEV, because
+ * it has no pci table reference
+ */
+ if (!tmpsym) {
+ drv->pci_tbl = NULL;
+ return 0;
+ }
+
+ tname = get_sym_value(info, tmpsym);
+ tmpsym = find_sym_in_symtab(info, tname, NULL);
+ if (!tmpsym)
+ return -ENOENT;
+
+ drv->pci_tbl = (struct rte_pci_id *)get_sym_value(info, tmpsym);
+ if (!drv->pci_tbl)
+ return -ENOENT;
+
+ return 0;
+}
+
+static int locate_pmd_entries(struct elf_info *info)
+{
+ Elf_Sym *last = NULL;
+ struct pmd_driver *new;
+
+ info->drivers = NULL;
+
+ do {
+ new = calloc(1, sizeof(struct pmd_driver));
+ if (!new)
+ return -ENOMEM;
+ new->name_sym = find_sym_in_symtab(info, "this_pmd_name", last);
+ last = new->name_sym;
+ if (!new->name_sym)
+ free(new);
+ else {
+ if (complete_pmd_entry(info, new)) {
+ fprintf(stderr,
+ "Failed to complete pmd entry\n");
+ free(new);
+ } else {
+ new->next = info->drivers;
+ info->drivers = new;
+ }
+ }
+ } while (last);
+
+ return 0;
+}
+
+static void output_pmd_info_string(struct elf_info *info, char *outfile)
+{
+ FILE *ofd;
+ struct pmd_driver *drv;
+ struct rte_pci_id *pci_ids;
+ int idx = 0;
+
+ ofd = fopen(outfile, "w+");
+ if (!ofd) {
+ fprintf(stderr, "Unable to open output file\n");
+ return;
+ }
+
+ drv = info->drivers;
+
+ while (drv) {
+ fprintf(ofd, "const char %s_pmd_info[] __attribute__((used)) = "
+ "\"PMD_INFO_STRING= {",
+ drv->name);
+ fprintf(ofd, "\\\"name\\\" : \\\"%s\\\", ", drv->name);
+
+ for (idx = 0; idx < PMD_OPT_MAX; idx++) {
+ if (drv->opt_vals[idx])
+ fprintf(ofd, "\\\"%s\\\" : \\\"%s\\\", ",
+ opt_tags[idx].json_id,
+ drv->opt_vals[idx]);
+ }
+
+ pci_ids = drv->pci_tbl;
+ fprintf(ofd, "\\\"pci_ids\\\" : [");
+
+ while (pci_ids && pci_ids->device_id) {
+ fprintf(ofd, "[%d, %d, %d, %d]",
+ pci_ids->vendor_id, pci_ids->device_id,
+ pci_ids->subsystem_vendor_id,
+ pci_ids->subsystem_device_id);
+ pci_ids++;
+ if (pci_ids->device_id)
+ fprintf(ofd, ",");
+ else
+ fprintf(ofd, " ");
+ }
+ fprintf(ofd, "]}\";\n");
+ drv = drv->next;
+ }
+
+ fclose(ofd);
+}
+
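+/*
+ * Illustrative only: for a hypothetical driver "net_foo" exporting a
+ * single PCI id (0x8086:0x10fb), the constant emitted above would look
+ * roughly like:
+ *
+ *   const char net_foo_pmd_info[] __attribute__((used)) =
+ *       "PMD_INFO_STRING= {\"name\" : \"net_foo\", \"pci_ids\" : [[32902, 4347, 65535, 65535] ]}";
+ */
+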
+int main(int argc, char **argv)
+{
+ struct elf_info info;
+ int rc = 1;
+
+ if (argc < 3) {
+ fprintf(stderr,
+ "usage: %s <object file> <c output file>\n",
+ basename(argv[0]));
+ exit(127);
+ }
+ if (!parse_elf(&info, argv[1]))
+ exit(1);
+
+ locate_pmd_entries(&info);
+
+ if (info.drivers) {
+ output_pmd_info_string(&info, argv[2]);
+ rc = 0;
+ } else {
+ fprintf(stderr, "No drivers registered\n");
+ }
+
+ parse_elf_finish(&info);
+ exit(rc);
+}
diff --git a/drivers/pmdinfogen/pmdinfogen.h b/drivers/pmdinfogen/pmdinfogen.h
new file mode 100644
index 0000000..27bab30
--- /dev/null
+++ b/drivers/pmdinfogen/pmdinfogen.h
@@ -0,0 +1,125 @@
+
+/* Postprocess pmd object files to export hw support
+ *
+ * Copyright 2016 Neil Horman <nhorman@tuxdriver.com>
+ * Based in part on modpost.c from the linux kernel
+ *
+ * This software may be used and distributed according to the terms
+ * of the GNU General Public License V2, incorporated herein by reference.
+ *
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdarg.h>
+#include <string.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/mman.h>
+#ifdef __linux__
+#include <endian.h>
+#else
+#include <sys/endian.h>
+#endif
+#include <fcntl.h>
+#include <unistd.h>
+#include <libgen.h>
+#include <elf.h>
+#include <rte_config.h>
+#include <rte_pci.h>
+
+/* On BSD-like OSes, elf.h defines these according to the host's word size */
+#undef ELF_ST_BIND
+#undef ELF_ST_TYPE
+#undef ELF_R_SYM
+#undef ELF_R_TYPE
+
+/*
+ * Define the generic Elf_*-style names as the ELF64 flavors here (or the
+ * ELF32 flavors below), both of which elf.h provides. This makes our code a
+ * bit more generic between arches and allows us to support 32 bit code in
+ * the future should we ever want to.
+ */
+#ifdef RTE_ARCH_64
+#define Elf_Ehdr Elf64_Ehdr
+#define Elf_Shdr Elf64_Shdr
+#define Elf_Sym Elf64_Sym
+#define Elf_Addr Elf64_Addr
+#define Elf_Sword Elf64_Sxword
+#define Elf_Section Elf64_Half
+#define ELF_ST_BIND ELF64_ST_BIND
+#define ELF_ST_TYPE ELF64_ST_TYPE
+
+#define Elf_Rel Elf64_Rel
+#define Elf_Rela Elf64_Rela
+#define ELF_R_SYM ELF64_R_SYM
+#define ELF_R_TYPE ELF64_R_TYPE
+#else
+#define Elf_Ehdr Elf32_Ehdr
+#define Elf_Shdr Elf32_Shdr
+#define Elf_Sym Elf32_Sym
+#define Elf_Addr Elf32_Addr
+#define Elf_Sword Elf32_Sxword
+#define Elf_Section Elf32_Half
+#define ELF_ST_BIND ELF32_ST_BIND
+#define ELF_ST_TYPE ELF32_ST_TYPE
+
+#define Elf_Rel Elf32_Rel
+#define Elf_Rela Elf32_Rela
+#define ELF_R_SYM ELF32_R_SYM
+#define ELF_R_TYPE ELF32_R_TYPE
+#endif
+
+
+/*
+ * Note: it may seem odd that we have both a CONVERT_NATIVE and a TO_NATIVE
+ * macro below. We do this because the values passed to TO_NATIVE may
+ * themselves be macros and need the extra level of indirection to get
+ * expanded. Specifically, it's the width argument we are concerned with,
+ * because it needs to be expanded prior to token pasting.
+ */
+#define CONVERT_NATIVE(fend, width, x) ({ \
+	typeof(x) ___x; \
+	if ((fend) == ELFDATA2LSB) \
+		___x = le##width##toh(x); \
+	else \
+		___x = be##width##toh(x); \
+	___x; \
+})
+
+#define TO_NATIVE(fend, width, x) CONVERT_NATIVE(fend, width, x)
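+
+/*
+ * Example of the expansion (illustrative, not from the patch): with
+ * ADDR_SIZE defined elsewhere per arch as e.g. 64, TO_NATIVE(endian,
+ * ADDR_SIZE, v) expands ADDR_SIZE before handing it to CONVERT_NATIVE,
+ * which can then paste le##64##toh into le64toh(v). Calling
+ * CONVERT_NATIVE(endian, ADDR_SIZE, v) directly would paste the
+ * unexpanded token and yield the bogus name leADDR_SIZEtoh.
+ */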
+
+enum opt_params {
+ PMD_PARAM_STRING = 0,
+ PMD_KMOD_DEP,
+ PMD_OPT_MAX
+};
+
+struct pmd_driver {
+ Elf_Sym *name_sym;
+ const char *name;
+ struct rte_pci_id *pci_tbl;
+ struct pmd_driver *next;
+
+ const char *opt_vals[PMD_OPT_MAX];
+};
+
+struct elf_info {
+ unsigned long size;
+ Elf_Ehdr *hdr;
+ Elf_Shdr *sechdrs;
+ Elf_Sym *symtab_start;
+ Elf_Sym *symtab_stop;
+ char *strtab;
+
+ /* support for 32bit section numbers */
+
+ unsigned int num_sections; /* max_secindex + 1 */
+ unsigned int secindex_strings;
+ /* if Nth symbol table entry has .st_shndx = SHN_XINDEX,
+ * take shndx from symtab_shndx_start[N] instead
+ */
+ Elf32_Word *symtab_shndx_start;
+ Elf32_Word *symtab_shndx_stop;
+
+ struct pmd_driver *drivers;
+};
+
--
2.1.4
^ permalink raw reply [relevance 1%]
* Re: [dpdk-dev] PCI domain size
2017-05-24 23:40 3% [dpdk-dev] PCI domain size Stephen Hemminger
@ 2017-06-07 14:23 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2017-06-07 14:23 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev
25/05/2017 01:40, Stephen Hemminger:
> While working on SR-IOV support on Azure, it was discovered that some applications
> and drivers do not support full-size PCI domains. In the Azure environment, the PCI
> pass-through device has a synthetic domain value (i.e. generated by the host) which is > 16 bits.
>
> The common PCI utilities (pciutils) and the Linux kernel both support
> full 32 bits but DPDK does not. FreeBSD also supports 32 bit domains.
>
> Changing the one place in DPDK (rte_pci.h) in source is trivial, but of course
> it is a major ABI breakage and a complete flag day, i.e. no binary compatibility
> is possible.
I guess you are talking about
struct rte_pci_addr {
	uint16_t domain;
	uint8_t bus;
	uint8_t devid;
	uint8_t function;
};
I do not see why we would not change it to comply with the standard.
Do you want to propose a deprecation?
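For illustration, a minimal sketch of the widened layout under discussion
(assuming the domain field simply grows to match the Linux kernel and
pciutils; the definition eventually merged may differ):
struct rte_pci_addr {
	uint32_t domain;	/* was uint16_t */
	uint8_t bus;
	uint8_t devid;
	uint8_t function;
};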
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH] dpdk: remove typos using codespell utility
@ 2017-06-07 5:05 6% Jerin Jacob
0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2017-06-07 5:05 UTC (permalink / raw)
To: dev; +Cc: thomas, john.mcnamara, Jerin Jacob
Fixing typos across dpdk source code using codespell utility.
Skipped the ethdev driver's base code fixes to keep the base
code intact.
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
- This is not a completely automatic process. The tool can do 90% of the job,
but the changes need to be cross-checked manually.
- The patchset does not introduce any new checkpatch errors. This patch only
fixes the typos, not the existing checkpatch issues in the code.
---
app/test-pmd/testpmd.c | 2 +-
devtools/cocci/mtod-offset.cocci | 2 +-
devtools/validate-abi.sh | 4 +--
doc/guides/nics/sfc_efx.rst | 2 +-
doc/guides/tools/cryptoperf.rst | 6 ++--
drivers/bus/fslmc/portal/dpaa2_hw_pvt.h | 2 +-
drivers/bus/fslmc/qbman/qbman_portal.c | 4 +--
drivers/crypto/qat/qat_crypto.c | 2 +-
drivers/net/ark/ark_ethdev.c | 2 +-
drivers/net/bnx2x/bnx2x.c | 8 ++---
drivers/net/bnx2x/bnx2x_stats.c | 10 +++---
drivers/net/bnx2x/bnx2x_vfpf.h | 2 +-
drivers/net/bnx2x/ecore_hsi.h | 12 +++----
drivers/net/bnx2x/ecore_init.h | 2 +-
drivers/net/bnx2x/ecore_sp.c | 4 +--
drivers/net/bnx2x/ecore_sp.h | 22 ++++++-------
drivers/net/bnx2x/elink.c | 16 ++++-----
drivers/net/bonding/rte_eth_bond_8023ad.c | 2 +-
drivers/net/bonding/rte_eth_bond_pmd.c | 4 +--
drivers/net/bonding/rte_eth_bond_private.h | 2 +-
drivers/net/cxgbe/cxgbe_main.c | 2 +-
drivers/net/cxgbe/sge.c | 4 +--
drivers/net/enic/enic_main.c | 2 +-
drivers/net/i40e/i40e_ethdev.c | 10 +++---
drivers/net/i40e/i40e_ethdev.h | 2 +-
drivers/net/i40e/i40e_rxtx.c | 6 ++--
drivers/net/ixgbe/ixgbe_ethdev.h | 2 +-
drivers/net/ixgbe/ixgbe_fdir.c | 2 +-
drivers/net/nfp/nfp_net.c | 2 +-
drivers/net/qede/qede_rxtx.c | 6 ++--
drivers/net/ring/rte_eth_ring.c | 2 +-
drivers/net/sfc/sfc_rx.c | 2 +-
drivers/net/tap/rte_eth_tap.c | 2 +-
drivers/net/thunderx/nicvf_ethdev.c | 2 +-
examples/Makefile | 2 +-
examples/bond/main.c | 2 +-
examples/cmdline/Makefile | 2 +-
examples/distributor/Makefile | 2 +-
examples/ethtool/ethtool-app/main.c | 2 +-
examples/ethtool/lib/rte_ethtool.c | 2 +-
examples/ethtool/lib/rte_ethtool.h | 4 +--
examples/exception_path/Makefile | 2 +-
examples/helloworld/Makefile | 2 +-
examples/ip_fragmentation/Makefile | 2 +-
examples/ip_fragmentation/main.c | 2 +-
examples/ip_reassembly/Makefile | 2 +-
examples/ipv4_multicast/Makefile | 2 +-
examples/kni/Makefile | 2 +-
examples/l2fwd/Makefile | 2 +-
examples/l3fwd-acl/Makefile | 2 +-
examples/l3fwd-power/Makefile | 2 +-
examples/l3fwd-vf/Makefile | 2 +-
examples/l3fwd/Makefile | 2 +-
examples/l3fwd/l3fwd_sse.h | 2 +-
examples/link_status_interrupt/Makefile | 2 +-
examples/load_balancer/Makefile | 2 +-
examples/multi_process/Makefile | 2 +-
examples/multi_process/client_server_mp/Makefile | 2 +-
.../client_server_mp/mp_client/Makefile | 2 +-
.../client_server_mp/mp_server/Makefile | 2 +-
examples/multi_process/l2fwd_fork/Makefile | 2 +-
examples/multi_process/l2fwd_fork/flib.h | 2 +-
examples/multi_process/simple_mp/Makefile | 2 +-
examples/multi_process/symmetric_mp/Makefile | 2 +-
examples/netmap_compat/Makefile | 2 +-
examples/netmap_compat/bridge/Makefile | 2 +-
examples/netmap_compat/lib/compat_netmap.c | 2 +-
examples/performance-thread/common/lthread_mutex.c | 2 +-
examples/performance-thread/l3fwd-thread/main.c | 2 +-
examples/qos_meter/Makefile | 2 +-
examples/qos_sched/Makefile | 2 +-
examples/quota_watermark/Makefile | 2 +-
examples/quota_watermark/qw/Makefile | 2 +-
examples/quota_watermark/qwctl/Makefile | 2 +-
examples/timer/Makefile | 2 +-
examples/vhost/Makefile | 2 +-
examples/vhost_xen/Makefile | 2 +-
examples/vhost_xen/main.c | 2 +-
examples/vhost_xen/xenstore_parse.c | 4 +--
examples/vmdq/Makefile | 2 +-
examples/vmdq_dcb/Makefile | 2 +-
lib/librte_bitratestats/rte_bitrate.c | 2 +-
lib/librte_compat/rte_compat.h | 2 +-
lib/librte_eal/common/eal_common_log.c | 4 +--
lib/librte_eal/common/include/rte_alarm.h | 2 +-
lib/librte_eal/common/include/rte_bus.h | 2 +-
lib/librte_eal/common/include/rte_malloc.h | 2 +-
lib/librte_eal/common/include/rte_time.h | 2 +-
lib/librte_eal/linuxapp/eal/eal_pci_vfio.c | 2 +-
.../linuxapp/kni/ethtool/igb/e1000_82575.c | 2 +-
lib/librte_eal/linuxapp/kni/ethtool/igb/igb_main.c | 2 +-
lib/librte_eal/linuxapp/kni/ethtool/igb/kcompat.h | 2 +-
.../linuxapp/kni/ethtool/ixgbe/ixgbe_api.c | 2 +-
.../linuxapp/kni/ethtool/ixgbe/ixgbe_common.c | 2 +-
.../linuxapp/kni/ethtool/ixgbe/kcompat.h | 2 +-
lib/librte_ether/rte_ethdev.h | 2 +-
lib/librte_hash/rte_cuckoo_hash.c | 12 +++----
lib/librte_ip_frag/rte_ip_frag.h | 4 +--
lib/librte_ip_frag/rte_ipv4_reassembly.c | 2 +-
lib/librte_ip_frag/rte_ipv6_reassembly.c | 2 +-
lib/librte_kni/rte_kni.c | 2 +-
lib/librte_reorder/rte_reorder.h | 2 +-
lib/librte_timer/rte_timer.c | 2 +-
lib/librte_vhost/rte_vhost.h | 2 +-
lib/librte_vhost/virtio_net.c | 2 +-
mk/arch/i686/rte.vars.mk | 10 +++---
mk/arch/x86_64/rte.vars.mk | 10 +++---
mk/exec-env/bsdapp/rte.vars.mk | 6 ++--
mk/exec-env/linuxapp/rte.vars.mk | 6 ++--
mk/machine/atm/rte.vars.mk | 16 ++++-----
mk/machine/default/rte.vars.mk | 16 ++++-----
mk/machine/hsw/rte.vars.mk | 16 ++++-----
mk/machine/ivb/rte.vars.mk | 16 ++++-----
mk/machine/native/rte.vars.mk | 16 ++++-----
mk/machine/nhm/rte.vars.mk | 16 ++++-----
mk/machine/snb/rte.vars.mk | 16 ++++-----
mk/machine/wsm/rte.vars.mk | 16 ++++-----
mk/rte.vars.mk | 2 +-
mk/target/generic/rte.vars.mk | 38 +++++++++++-----------
mk/toolchain/clang/rte.vars.mk | 8 ++---
mk/toolchain/gcc/rte.vars.mk | 8 ++---
mk/toolchain/icc/rte.vars.mk | 8 ++---
test/test/test_cmdline_cirbuf.c | 2 +-
test/test/test_distributor.c | 2 +-
test/test/test_eal_flags.c | 4 +--
test/test/test_func_reentrancy.c | 4 +--
test/test/test_hash.c | 10 +++---
test/test/test_interrupts.c | 6 ++--
test/test/test_link_bonding.c | 28 ++++++++--------
test/test/test_link_bonding_mode4.c | 6 ++--
test/test/test_malloc.c | 2 +-
test/test/test_mbuf.c | 2 +-
test/test/test_spinlock.c | 2 +-
133 files changed, 304 insertions(+), 304 deletions(-)
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index d1041afa5..b878978de 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -389,7 +389,7 @@ static void eth_event_callback(uint8_t port_id,
static int all_ports_started(void);
/*
- * Helper function to check if socket is allready discovered.
+ * Helper function to check if socket is already discovered.
* If yes, return positive value. If not, return zero.
*/
int
diff --git a/devtools/cocci/mtod-offset.cocci b/devtools/cocci/mtod-offset.cocci
index 13134e9df..3f83f3223 100644
--- a/devtools/cocci/mtod-offset.cocci
+++ b/devtools/cocci/mtod-offset.cocci
@@ -55,7 +55,7 @@ expression M, O1, O2;
//
-// Cleanup rules. Fold in double casts, remove unnecessary paranthesis, etc.
+// Cleanup rules. Fold in double casts, remove unnecessary parentheses, etc.
//
@disable paren@
expression M, O;
diff --git a/devtools/validate-abi.sh b/devtools/validate-abi.sh
index 52e4e7ae7..0accc99b1 100755
--- a/devtools/validate-abi.sh
+++ b/devtools/validate-abi.sh
@@ -150,14 +150,14 @@ TAG2="$TAG2 ($HASH2)"
ABICHECK=`which abi-compliance-checker 2>/dev/null`
if [ $? -ne 0 ]
then
- log "INFO" "Cant find abi-compliance-checker utility"
+ log "INFO" "Can't find abi-compliance-checker utility"
cleanup_and_exit 1
fi
ABIDUMP=`which abi-dumper 2>/dev/null`
if [ $? -ne 0 ]
then
- log "INFO" "Cant find abi-dumper utility"
+ log "INFO" "Can't find abi-dumper utility"
cleanup_and_exit 1
fi
diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index 5f825e9a3..7761989c7 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -259,7 +259,7 @@ boolean parameters value.
- ``debug_init`` [bool] (default **n**)
- Enable extra logging during device intialization and startup.
+ Enable extra logging during device initialization and startup.
- ``mcdi_logging`` [bool] (default **n**)
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index 2d225d56a..1acde763b 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -308,9 +308,9 @@ Test Vector File
The test vector file is a text file contain information about test vectors.
The file is made of the sections. The first section doesn't have header.
It contain global information used in each test variant vectors -
-typicaly information about plaintext, ciphertext, cipher key, aut key,
+typically information about plaintext, ciphertext, cipher key, aut key,
initial vector. All other sections begin header.
-The sections contain particular information typicaly digest.
+The sections contain particular information typically digest.
**Format of the file:**
@@ -362,7 +362,7 @@ Examples
Call application for performance throughput test of single Aesni MB PMD
for cipher encryption aes-cbc and auth generation sha1-hmac,
-one milion operations, burst size 32, packet size 64::
+one million operations, burst size 32, packet size 64::
dpdk-test-crypto-perf -l 6-7 --vdev crypto_aesni_mb_pmd -w 0000:00:00.0 --
--ptest throughput --devtype crypto_aesni_mb --optype cipher-then-auth
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
index c0223734b..2eb3b7fb2 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_pvt.h
@@ -231,7 +231,7 @@ static phys_addr_t dpaa2_mem_vtop(uint64_t vaddr)
/**
* When we are using Physical addresses as IO Virtual Addresses,
* Need to call conversion routines dpaa2_mem_vtop & dpaa2_mem_ptov
- * whereever required.
+ * wherever required.
* These routines are called with help of below MACRO's
*/
diff --git a/drivers/bus/fslmc/qbman/qbman_portal.c b/drivers/bus/fslmc/qbman/qbman_portal.c
index 5d407cc01..be4e2e577 100644
--- a/drivers/bus/fslmc/qbman/qbman_portal.c
+++ b/drivers/bus/fslmc/qbman/qbman_portal.c
@@ -288,7 +288,7 @@ void *qbman_swp_mc_result(struct qbman_swp *p)
qbman_cena_invalidate_prefetch(&p->sys,
QBMAN_CENA_SWP_RR(p->mc.valid_bit));
ret = qbman_cena_read(&p->sys, QBMAN_CENA_SWP_RR(p->mc.valid_bit));
- /* Remove the valid-bit - command completed iff the rest is non-zero */
+ /* Remove the valid-bit - command completed if the rest is non-zero */
verb = ret[0] & ~QB_VALID_BIT;
if (!verb)
return NULL;
@@ -769,7 +769,7 @@ const struct qbman_result *qbman_swp_dqrr_next(struct qbman_swp *s)
*/
uint32_t dqpi = qbman_cinh_read(&s->sys, QBMAN_CINH_SWP_DQPI);
uint32_t pi = qb_attr_code_decode(&code_dqpi_pi, &dqpi);
- /* there are new entries iff pi != next_idx */
+ /* there are new entries if pi != next_idx */
if (pi == s->dqrr.next_idx)
return NULL;
/* if next_idx is/was the last ring index, and 'pi' is
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
index 386aa453b..37d8a585b 100644
--- a/drivers/crypto/qat/qat_crypto.c
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -757,7 +757,7 @@ qat_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops,
tmp_qp->stats.enqueue_err_count++;
/*
* This message cannot be enqueued,
- * decrease number of ops that wasnt sent
+ * decrease the number of ops that weren't sent
*/
rte_atomic16_sub(&tmp_qp->inflights16,
nb_ops_possible - nb_ops_sent);
diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c
index 995c93d3d..062c8e29b 100644
--- a/drivers/net/ark/ark_ethdev.c
+++ b/drivers/net/ark/ark_ethdev.c
@@ -636,7 +636,7 @@ eth_ark_dev_stop(struct rte_eth_dev *dev)
}
/* Stop DDM */
- /* Wait up to 0.1 second. each stop is upto 1000 * 10 useconds */
+ /* Wait up to 0.1 second. Each stop is up to 1000 * 10 microseconds */
for (i = 0; i < 10; i++) {
status = ark_ddm_stop(ark->ddm.v, 1);
if (status == 0)
diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index 1a7e1c8e1..3e95664eb 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -887,7 +887,7 @@ storm_memset_eq_prod(struct bnx2x_softc *sc, uint16_t eq_prod, uint16_t pfid)
/*
* Post a slowpath command.
*
- * A slowpath command is used to propogate a configuration change through
+ * A slowpath command is used to propagate a configuration change through
* the controller in a controlled manner, allowing each STORM processor and
* other H/W blocks to phase in the change. The commands sent on the
* slowpath are referred to as ramrods. Depending on the ramrod used the
@@ -2002,7 +2002,7 @@ bnx2x_nic_unload(struct bnx2x_softc *sc, uint32_t unload_mode, uint8_t keep_link
/*
* Nothing to do during unload if previous bnx2x_nic_load()
- * did not completed succesfully - all resourses are released.
+ * did not complete successfully - all resources are released.
*/
if ((sc->state == BNX2X_STATE_CLOSED) || (sc->state == BNX2X_STATE_ERROR)) {
return 0;
@@ -3931,7 +3931,7 @@ static void bnx2x_attn_int_deasserted2(struct bnx2x_softc *sc, uint32_t attn)
mask1 = REG_RD(sc, PXP2_REG_PXP2_INT_MASK_1);
val0 = REG_RD(sc, PXP2_REG_PXP2_INT_STS_0);
/*
- * If the olny PXP2_EOP_ERROR_BIT is set in
+ * If only the PXP2_EOP_ERROR_BIT is set in
* STS0 and STS1 - clear it
*
* probably we lose additional attentions between
@@ -5910,7 +5910,7 @@ static void bnx2x_set_234_gates(struct bnx2x_softc *sc, uint8_t close)
(val | HC_CONFIG_0_REG_BLOCK_DISABLE_0));
} else {
-/* Prevent incomming interrupts in IGU */
+/* Prevent incoming interrupts in IGU */
val = REG_RD(sc, IGU_REG_BLOCK_CONFIGURATION);
if (close)
diff --git a/drivers/net/bnx2x/bnx2x_stats.c b/drivers/net/bnx2x/bnx2x_stats.c
index c489cbeef..6223cfef1 100644
--- a/drivers/net/bnx2x/bnx2x_stats.c
+++ b/drivers/net/bnx2x/bnx2x_stats.c
@@ -1371,9 +1371,9 @@ bnx2x_prep_fw_stats_req(struct bnx2x_softc *sc)
cur_query_entry = &sc->fw_stats_req->query[BNX2X_PORT_QUERY_IDX];
cur_query_entry->kind = STATS_TYPE_PORT;
- /* For port query index is a DONT CARE */
+ /* For port query index is a DON'T CARE */
cur_query_entry->index = SC_PORT(sc);
- /* For port query funcID is a DONT CARE */
+ /* For port query funcID is a DON'T CARE */
cur_query_entry->funcID = htole16(SC_FUNC(sc));
cur_query_entry->address.hi = htole32(U64_HI(cur_data_offset));
cur_query_entry->address.lo = htole32(U64_LO(cur_data_offset));
@@ -1385,7 +1385,7 @@ bnx2x_prep_fw_stats_req(struct bnx2x_softc *sc)
cur_query_entry = &sc->fw_stats_req->query[BNX2X_PF_QUERY_IDX];
cur_query_entry->kind = STATS_TYPE_PF;
- /* For PF query index is a DONT CARE */
+ /* For PF query index is a DON'T CARE */
cur_query_entry->index = SC_PORT(sc);
cur_query_entry->funcID = htole16(SC_FUNC(sc));
cur_query_entry->address.hi = htole32(U64_HI(cur_data_offset));
@@ -1450,7 +1450,7 @@ void bnx2x_memset_stats(struct bnx2x_softc *sc)
if (sc->port.pmf && sc->port.port_stx)
bnx2x_port_stats_base_init(sc);
- /* mark the end of statistics initializiation */
+ /* mark the end of statistics initialization */
sc->stats_init = false;
}
@@ -1536,7 +1536,7 @@ bnx2x_stats_init(struct bnx2x_softc *sc)
bnx2x_port_stats_base_init(sc);
}
- /* mark the end of statistics initializiation */
+ /* mark the end of statistics initialization */
sc->stats_init = FALSE;
}
diff --git a/drivers/net/bnx2x/bnx2x_vfpf.h b/drivers/net/bnx2x/bnx2x_vfpf.h
index 955ea9825..d7cc11be0 100644
--- a/drivers/net/bnx2x/bnx2x_vfpf.h
+++ b/drivers/net/bnx2x/bnx2x_vfpf.h
@@ -282,7 +282,7 @@ struct bnx2x_vf_bulletin {
uint16_t version;
uint16_t length;
- uint64_t valid_bitmap; /* bitmap indicating wich fields
+ uint64_t valid_bitmap; /* bitmap indicating which fields
* hold valid values
*/
diff --git a/drivers/net/bnx2x/ecore_hsi.h b/drivers/net/bnx2x/ecore_hsi.h
index 5808e1ae3..5cce66474 100644
--- a/drivers/net/bnx2x/ecore_hsi.h
+++ b/drivers/net/bnx2x/ecore_hsi.h
@@ -2499,7 +2499,7 @@ struct shmem2_region {
* SHMEM_EEE_XXX_ADV definitions (where XXX is replaced by speed).
* bit 28 when 1'b1 EEE was requested.
* bit 29 when 1'b1 tx lpi was requested.
- * bit 30 when 1'b1 EEE was negotiated. Tx lpi will be asserted iff
+ * bit 30 when 1'b1 EEE was negotiated. Tx lpi will be asserted if
* 30:29 are 2'b11.
* bit 31 when 1'b0 bits 15:0 contain a PORT_FEAT_CFG_EEE_ define as
* value. When 1'b1 those bits contains a value times 16 microseconds.
@@ -3911,7 +3911,7 @@ struct client_init_rx_data
uint16_t bd_pause_thr_high /* number of remaining bds above which, we send un-pause message */;
uint16_t sge_pause_thr_low /* number of remaining sges under which, we send pause message */;
uint16_t sge_pause_thr_high /* number of remaining sges above which, we send un-pause message */;
- uint16_t rx_cos_mask /* the bits that will be set on pfc/ safc paket whith will be genratet when this ring is full. for regular flow control set this to 1 */;
+ uint16_t rx_cos_mask /* the bits that will be set on pfc/safc packet which will be generated when this ring is full. for regular flow control set this to 1 */;
uint16_t silent_vlan_value /* The vlan to compare, in case, silent vlan is set */;
uint16_t silent_vlan_mask /* The vlan mask, in case, silent vlan is set */;
uint32_t reserved6[2];
@@ -4903,7 +4903,7 @@ enum eth_tx_vlan_type
*/
enum eth_vlan_filter_mode
{
- ETH_VLAN_FILTER_ANY_VLAN /* Dont filter by vlan */,
+ ETH_VLAN_FILTER_ANY_VLAN /* Don't filter by vlan */,
ETH_VLAN_FILTER_SPECIFIC_VLAN /* Only the vlan_id is allowed */,
ETH_VLAN_FILTER_CLASSIFY /* Vlan will be added to CAM for classification */,
MAX_ETH_VLAN_FILTER_MODE};
@@ -4937,7 +4937,7 @@ struct mac_configuration_entry
#define MAC_CONFIGURATION_ENTRY_RDMA_MAC_SHIFT 1
#define MAC_CONFIGURATION_ENTRY_VLAN_FILTERING_MODE (0x3<<2) /* BitField flags (use enum eth_vlan_filter_mode) */
#define MAC_CONFIGURATION_ENTRY_VLAN_FILTERING_MODE_SHIFT 2
-#define MAC_CONFIGURATION_ENTRY_OVERRIDE_VLAN_REMOVAL (0x1<<4) /* BitField flags BitField flags 0 - cant remove vlan 1 - can remove vlan. relevant only to everest1 */
+#define MAC_CONFIGURATION_ENTRY_OVERRIDE_VLAN_REMOVAL (0x1<<4) /* BitField flags BitField flags 0 - can't remove vlan 1 - can remove vlan. relevant only to everest1 */
#define MAC_CONFIGURATION_ENTRY_OVERRIDE_VLAN_REMOVAL_SHIFT 4
#define MAC_CONFIGURATION_ENTRY_BROADCAST (0x1<<5) /* BitField flags BitField flags 0 - not broadcast 1 - broadcast. relevant only to everest1 */
#define MAC_CONFIGURATION_ENTRY_BROADCAST_SHIFT 5
@@ -5024,7 +5024,7 @@ struct tstorm_eth_function_common_config
#define TSTORM_ETH_FUNCTION_COMMON_CONFIG_RSS_IPV6_TCP_CAPABILITY_SHIFT 3
#define TSTORM_ETH_FUNCTION_COMMON_CONFIG_RSS_MODE (0x7<<4) /* BitField config_flagsGeneral configuration flags RSS mode of operation (use enum eth_rss_mode) */
#define TSTORM_ETH_FUNCTION_COMMON_CONFIG_RSS_MODE_SHIFT 4
-#define TSTORM_ETH_FUNCTION_COMMON_CONFIG_VLAN_FILTERING_ENABLE (0x1<<7) /* BitField config_flagsGeneral configuration flags 0 - Dont filter by vlan, 1 - Filter according to the vlans specificied in mac_filter_config */
+#define TSTORM_ETH_FUNCTION_COMMON_CONFIG_VLAN_FILTERING_ENABLE (0x1<<7) /* BitField config_flagsGeneral configuration flags 0 - Don't filter by vlan, 1 - Filter according to the vlans specified in mac_filter_config */
#define TSTORM_ETH_FUNCTION_COMMON_CONFIG_VLAN_FILTERING_ENABLE_SHIFT 7
#define __TSTORM_ETH_FUNCTION_COMMON_CONFIG_RESERVED0 (0xFF<<8) /* BitField config_flagsGeneral configuration flags */
#define __TSTORM_ETH_FUNCTION_COMMON_CONFIG_RESERVED0_SHIFT 8
@@ -5634,7 +5634,7 @@ struct flow_control_configuration
struct function_start_data
{
uint8_t function_mode /* the function mode */;
- uint8_t allow_npar_tx_switching /* If set, inter-pf tx switching is allowed in Switch Independant function mode. (E2/E3 Only) */;
+ uint8_t allow_npar_tx_switching /* If set, inter-pf tx switching is allowed in Switch Independent function mode. (E2/E3 Only) */;
uint16_t sd_vlan_tag /* value of Vlan in case of switch depended multi-function mode */;
uint16_t vif_id /* value of VIF id in case of NIV multi-function mode */;
uint8_t path_id;
diff --git a/drivers/net/bnx2x/ecore_init.h b/drivers/net/bnx2x/ecore_init.h
index d25e2803a..4576c5657 100644
--- a/drivers/net/bnx2x/ecore_init.h
+++ b/drivers/net/bnx2x/ecore_init.h
@@ -304,7 +304,7 @@ static inline void ecore_dcb_config_qm(struct bnx2x_softc *sc, enum cos_mode mod
/*
- * congestion managment port init api description
+ * congestion management port init api description
* the api works as follows:
* the driver should pass the cmng_init_input struct, the port_init function
* will prepare the required internal ram structure which will be passed back
diff --git a/drivers/net/bnx2x/ecore_sp.c b/drivers/net/bnx2x/ecore_sp.c
index 22f2dc95f..ef7f9fea4 100644
--- a/drivers/net/bnx2x/ecore_sp.c
+++ b/drivers/net/bnx2x/ecore_sp.c
@@ -2676,7 +2676,7 @@ static void ecore_mcast_hdl_del(struct bnx2x_softc *sc,
* @cmd:
* @start_cnt: first line in the ramrod data that may be used
*
- * This function is called iff there is enough place for the current command in
+ * This function is called if there is enough place for the current command in
* the ramrod data.
* Returns number of lines filled in the ramrod data in total.
*/
@@ -2834,7 +2834,7 @@ static int ecore_mcast_setup_e2(struct bnx2x_softc *sc,
if (ECORE_LIST_IS_EMPTY(&o->pending_cmds_head))
o->clear_sched(o);
- /* The below may be TRUE iff there was enough room in ramrod
+ /* The below may be TRUE if there was enough room in ramrod
* data for all pending commands and for the current
* command. Otherwise the current command would have been added
* to the pending commands and p->mcast_list_len would have been
diff --git a/drivers/net/bnx2x/ecore_sp.h b/drivers/net/bnx2x/ecore_sp.h
index 9c1f55dfd..e7ec96e94 100644
--- a/drivers/net/bnx2x/ecore_sp.h
+++ b/drivers/net/bnx2x/ecore_sp.h
@@ -1116,10 +1116,10 @@ struct ecore_config_rss_params {
/* RSS hash values */
uint32_t rss_key[10];
- /* valid only iff ECORE_RSS_UPDATE_TOE is set */
+ /* valid only if ECORE_RSS_UPDATE_TOE is set */
uint16_t toe_rss_bitmap;
- /* valid iff ECORE_RSS_TUNNELING is set */
+ /* valid if ECORE_RSS_TUNNELING is set */
uint16_t tunnel_value;
uint16_t tunnel_mask;
};
@@ -1286,14 +1286,14 @@ struct rxq_pause_params {
uint16_t bd_th_hi;
uint16_t rcq_th_lo;
uint16_t rcq_th_hi;
- uint16_t sge_th_lo; /* valid iff ECORE_Q_FLG_TPA */
- uint16_t sge_th_hi; /* valid iff ECORE_Q_FLG_TPA */
+ uint16_t sge_th_lo; /* valid if ECORE_Q_FLG_TPA */
+ uint16_t sge_th_hi; /* valid if ECORE_Q_FLG_TPA */
uint16_t pri_map;
};
/* general */
struct ecore_general_setup_params {
- /* valid iff ECORE_Q_FLG_STATS */
+ /* valid if ECORE_Q_FLG_STATS */
uint8_t stat_id;
uint8_t spcl_id;
@@ -1312,19 +1312,19 @@ struct ecore_rxq_setup_params {
uint8_t fw_sb_id;
uint8_t cl_qzone_id;
- /* valid iff ECORE_Q_FLG_TPA */
+ /* valid if ECORE_Q_FLG_TPA */
uint16_t tpa_agg_sz;
uint8_t max_tpa_queues;
uint8_t rss_engine_id;
- /* valid iff ECORE_Q_FLG_MCAST */
+ /* valid if ECORE_Q_FLG_MCAST */
uint8_t mcast_engine_id;
uint8_t cache_line_log;
uint8_t sb_cq_index;
- /* valid iff BXN2X_Q_FLG_SILENT_VLAN_REM */
+ /* valid if BXN2X_Q_FLG_SILENT_VLAN_REM */
uint16_t silent_removal_value;
uint16_t silent_removal_mask;
};
@@ -1335,12 +1335,12 @@ struct ecore_txq_setup_params {
uint8_t fw_sb_id;
uint8_t sb_cq_index;
- uint8_t cos; /* valid iff ECORE_Q_FLG_COS */
+ uint8_t cos; /* valid if ECORE_Q_FLG_COS */
uint16_t traffic_type;
/* equals to the leading rss client id, used for TX classification*/
uint8_t tss_leading_cl_id;
- /* valid iff ECORE_Q_FLG_DEF_VLAN */
+ /* valid if ECORE_Q_FLG_DEF_VLAN */
uint16_t default_vlan;
};
@@ -1733,7 +1733,7 @@ void ecore_init_mcast_obj(struct bnx2x_softc *sc,
* the current command will be enqueued to the tail of the
* pending commands list.
*
- * Return: 0 is operation was successfull and there are no pending completions,
+ * Return: 0 if operation was successful and there are no pending completions,
* negative if there were errors, positive if there are pending
* completions.
*/
diff --git a/drivers/net/bnx2x/elink.c b/drivers/net/bnx2x/elink.c
index 9ffa7dc66..9d0f31364 100644
--- a/drivers/net/bnx2x/elink.c
+++ b/drivers/net/bnx2x/elink.c
@@ -2702,7 +2702,7 @@ static elink_status_t elink_eee_initial_config(struct elink_params *params,
{
vars->eee_status |= ((uint32_t) mode) << SHMEM_EEE_SUPPORTED_SHIFT;
- /* Propogate params' bits --> vars (for migration exposure) */
+ /* Propagate params' bits --> vars (for migration exposure) */
if (params->eee_mode & ELINK_EEE_MODE_ENABLE_LPI)
vars->eee_status |= SHMEM_EEE_LPI_REQUESTED_BIT;
else
@@ -3450,7 +3450,7 @@ static void elink_warpcore_enable_AN_KR(struct elink_phy *phy,
{MDIO_WC_DEVAD, MDIO_WC_REG_CL72_USERB0_CL72_TX_FIR_TAP, 0},
};
PMD_DRV_LOG(DEBUG, "Enable Auto Negotiation for KR");
- /* Set to default registers that may be overriden by 10G force */
+ /* Set to default registers that may be overridden by 10G force */
for (i = 0; i < ARRAY_SIZE(reg_set); i++)
elink_cl45_write(sc, phy, reg_set[i].devad, reg_set[i].reg,
reg_set[i].val);
@@ -4363,14 +4363,14 @@ static void elink_sync_link(struct elink_params *params,
switch (vars->link_status & LINK_STATUS_SPEED_AND_DUPLEX_MASK) {
case ELINK_LINK_10THD:
vars->duplex = DUPLEX_HALF;
- /* Fall thru */
+ /* Fall through */
case ELINK_LINK_10TFD:
vars->line_speed = ELINK_SPEED_10;
break;
case ELINK_LINK_100TXHD:
vars->duplex = DUPLEX_HALF;
- /* Fall thru */
+ /* Fall through */
case ELINK_LINK_100T4:
case ELINK_LINK_100TXFD:
vars->line_speed = ELINK_SPEED_100;
@@ -4378,14 +4378,14 @@ static void elink_sync_link(struct elink_params *params,
case ELINK_LINK_1000THD:
vars->duplex = DUPLEX_HALF;
- /* Fall thru */
+ /* Fall through */
case ELINK_LINK_1000TFD:
vars->line_speed = ELINK_SPEED_1000;
break;
case ELINK_LINK_2500THD:
vars->duplex = DUPLEX_HALF;
- /* Fall thru */
+ /* Fall through */
case ELINK_LINK_2500TFD:
vars->line_speed = ELINK_SPEED_2500;
break;
@@ -6341,7 +6341,7 @@ elink_status_t elink_link_update(struct elink_params * params,
* hence its link is expected to be down
* - SECOND_PHY means that first phy should not be able
* to link up by itself (using configuration)
- * - DEFAULT should be overriden during initialiazation
+ * - DEFAULT should be overridden during initialization
*/
PMD_DRV_LOG(DEBUG, "Invalid link indication"
"mpc=0x%x. DISABLING LINK !!!",
@@ -12680,7 +12680,7 @@ static void elink_check_over_curr(struct elink_params *params,
vars->phy_flags &= ~PHY_OVER_CURRENT_FLAG;
}
-/* Returns 0 if no change occured since last check; 1 otherwise. */
+/* Returns 0 if no change occurred since last check; 1 otherwise. */
static uint8_t elink_analyze_link_error(struct elink_params *params,
struct elink_vars *vars,
uint32_t status, uint32_t phy_flag,
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 7b863d6ed..d2b75927c 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -905,7 +905,7 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev, uint8_t slave_id)
32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
0, element_size, socket_id);
- /* Any memory allocation failure in initalization is critical because
+ /* Any memory allocation failure in initialization is critical because
* resources can't be free, so reinitialization is impossible. */
if (port->mbuf_pool == NULL) {
rte_panic("Slave %u: Failed to create memory pool '%s': %s\n",
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 82959abc3..37f3d43bc 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -793,8 +793,8 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
uint16_t slave_bufs_pkts[RTE_MAX_ETHPORTS + 1] = { 0 };
/*
- * We create separate transmit buffers for update packets as they wont be
- * counted in num_tx_total.
+ * We create separate transmit buffers for update packets as they won't
+ * be counted in num_tx_total.
*/
struct rte_mbuf *update_bufs[RTE_MAX_ETHPORTS][ALB_HASH_TABLE_SIZE];
uint16_t update_bufs_pkts[RTE_MAX_ETHPORTS] = { 0 };
diff --git a/drivers/net/bonding/rte_eth_bond_private.h b/drivers/net/bonding/rte_eth_bond_private.h
index c8db09005..378832e45 100644
--- a/drivers/net/bonding/rte_eth_bond_private.h
+++ b/drivers/net/bonding/rte_eth_bond_private.h
@@ -184,7 +184,7 @@ extern const struct eth_dev_ops default_dev_ops;
int
check_for_bonded_ethdev(const struct rte_eth_dev *eth_dev);
-/* Search given slave array to find possition of given id.
+/* Search given slave array to find position of given id.
* Return slave pos or slaves_count if not found. */
static inline uint8_t
find_slave_by_id(uint8_t *slaves, uint8_t slaves_count, uint8_t slave_id) {
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 1f230cd50..5b88b2811 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -575,7 +575,7 @@ static int adap_init0_config(struct adapter *adapter, int reset)
/*
* Return successfully and note that we're operating with parameters
* not supplied by the driver, rather than from hard-wired
- * initialization constants burried in the driver.
+ * initialization constants buried in the driver.
*/
dev_info(adapter,
"Successfully configured using Firmware Configuration File \"%s\", version %#x, computed checksum %#x\n",
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index 2f9e12c9e..b6aecd605 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -388,7 +388,7 @@ static unsigned int refill_fl_usembufs(struct adapter *adap, struct sge_fl *q,
struct rte_pktmbuf_pool_private *mbp_priv;
u8 jumbo_en = rxq->rspq.eth_dev->data->dev_conf.rxmode.jumbo_frame;
- /* Use jumbo mtu buffers iff mbuf data room size can fit jumbo data. */
+ /* Use jumbo mtu buffers if mbuf data room size can fit jumbo data. */
mbp_priv = rte_mempool_get_priv(rxq->rspq.mb_pool);
if (jumbo_en &&
((mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM) >= 9000))
@@ -591,7 +591,7 @@ static inline unsigned int calc_tx_flits(const struct rte_mbuf *m)
* Write Header (incorporated as part of the cpl_tx_pkt_lso and
* cpl_tx_pkt structures), followed by either a TX Packet Write CPL
* message or, if we're doing a Large Send Offload, an LSO CPL message
- * with an embeded TX Packet Write CPL message.
+ * with an embedded TX Packet Write CPL message.
*/
flits = sgl_len(m->nb_segs);
if (m->tso_segsz)
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index d0262418d..323a4aad2 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -1225,7 +1225,7 @@ int enic_set_mtu(struct enic *enic, uint16_t new_mtu)
}
}
- /* replace Rx funciton with a no-op to avoid getting stale pkts */
+ /* replace Rx function with a no-op to avoid getting stale pkts */
eth_dev->rx_pkt_burst = enic_dummy_recv_pkts;
rte_mb();
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index c18a93b2b..a34d057aa 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -74,7 +74,7 @@
/* Maximun number of capability elements */
#define I40E_MAX_CAP_ELE_NUM 128
-/* Wait count and inteval */
+/* Wait count and interval */
#define I40E_CHK_Q_ENA_COUNT 1000
#define I40E_CHK_Q_ENA_INTERVAL_US 1000
@@ -5710,16 +5710,16 @@ i40e_dev_handle_vfr_event(struct rte_eth_dev *dev)
index = abs_vf_id / I40E_UINT32_BIT_SIZE;
offset = abs_vf_id % I40E_UINT32_BIT_SIZE;
val = I40E_READ_REG(hw, I40E_GLGEN_VFLRSTAT(index));
- /* VFR event occured */
+ /* VFR event occurred */
if (val & (0x1 << offset)) {
int ret;
/* Clear the event first */
I40E_WRITE_REG(hw, I40E_GLGEN_VFLRSTAT(index),
(0x1 << offset));
- PMD_DRV_LOG(INFO, "VF %u reset occured", abs_vf_id);
+ PMD_DRV_LOG(INFO, "VF %u reset occurred", abs_vf_id);
/**
- * Only notify a VF reset event occured,
+ * Only notify that a VF reset event occurred,
* don't trigger another SW reset
*/
ret = i40e_pf_host_vf_reset(&pf->vfs[i], 0);
@@ -7436,7 +7436,7 @@ i40e_pf_config_rss(struct i40e_pf *pf)
/*
* If both VMDQ and RSS enabled, not all of PF queues are configured.
- * It's necessary to calulate the actual PF queues that are configured.
+ * It's necessary to calculate the actual PF queues that are configured.
*/
if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
num = i40e_pf_calc_configured_queues_num(pf);
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index 2ff8282f7..50e3deda6 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -405,7 +405,7 @@ enum I40E_VF_STATE {
struct i40e_pf_vf {
struct i40e_pf *pf;
struct i40e_vsi *vsi;
- enum I40E_VF_STATE state; /* The number of queue pairs availiable */
+ enum I40E_VF_STATE state; /* The number of queue pairs available */
uint16_t vf_idx; /* VF index in pf->vfs */
uint16_t lan_nb_qps; /* Actual queues allocated */
uint16_t reset_cnt; /* Total vf reset times */
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 351cb94dd..26aa6dc57 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1608,7 +1608,7 @@ i40e_dev_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
rxq = dev->data->rx_queues[rx_queue_id];
/*
- * rx_queue_id is queue id aplication refers to, while
+ * rx_queue_id is queue id application refers to, while
* rxq->reg_idx is the real queue index.
*/
err = i40e_switch_rx_queue(hw, rxq->reg_idx, FALSE);
@@ -1639,7 +1639,7 @@ i40e_dev_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
txq = dev->data->tx_queues[tx_queue_id];
/*
- * tx_queue_id is queue id aplication refers to, while
+ * tx_queue_id is queue id application refers to, while
* rxq->reg_idx is the real queue index.
*/
err = i40e_switch_tx_queue(hw, txq->reg_idx, TRUE);
@@ -1664,7 +1664,7 @@ i40e_dev_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
txq = dev->data->tx_queues[tx_queue_id];
/*
- * tx_queue_id is queue id aplication refers to, while
+ * tx_queue_id is queue id application refers to, while
* txq->reg_idx is the real queue index.
*/
err = i40e_switch_tx_queue(hw, txq->reg_idx, FALSE);
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index b576a6f4b..05de4a384 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -72,7 +72,7 @@
#endif
#define IXGBE_HWSTRIP_BITMAP_SIZE (IXGBE_MAX_RX_QUEUE_NUM / (sizeof(uint32_t) * NBBY))
-/* EITR Inteval is in 2048ns uinits for 1G and 10G link */
+/* EITR Interval is in 2048ns units for 1G and 10G link */
#define IXGBE_EITR_INTERVAL_UNIT_NS 2048
#define IXGBE_EITR_ITR_INT_SHIFT 3
#define IXGBE_EITR_INTERVAL_US(us) \
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 7f6c7b58f..ae3035057 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -654,7 +654,7 @@ ixgbe_fdir_configure(struct rte_eth_dev *dev)
/*
* The defaults in the HW for RX PB 1-7 are not zero and so should be
- * intialized to zero for non DCB mode otherwise actual total RX PB
+ * initialized to zero for non DCB mode otherwise actual total RX PB
* would be bigger than programmed and filter space would run into
* the PB 0 region.
*/
diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
index 5479fb366..4b08c71fb 100644
--- a/drivers/net/nfp/nfp_net.c
+++ b/drivers/net/nfp/nfp_net.c
@@ -1173,7 +1173,7 @@ nfp_net_rx_queue_count(struct rte_eth_dev *dev, uint16_t queue_idx)
* Other PMDs are just checking the DD bit in intervals of 4
* descriptors and counting all four if the first has the DD
* bit on. Of course, this is not accurate but can be good for
- * perfomance. But ideally that should be done in descriptors
+ * performance. But ideally that should be done in descriptors
* chunks belonging to the same cache line
*/
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index baea1bb04..3751e6bd0 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -1411,7 +1411,7 @@ qede_xmit_prep_pkts(__rte_unused void *p_txq, struct rte_mbuf **tx_pkts,
break;
}
#endif
- /* TBD: pseudo csum calcuation required iff
+ /* TBD: pseudo csum calculation required if
* ETH_TX_DATA_2ND_BD_L4_PSEUDO_CSUM_MODE not set?
*/
ret = rte_net_intel_cksum_prepare(m);
@@ -1600,7 +1600,7 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
(hdr_size +
rte_mbuf_data_dma_addr(mbuf)),
mbuf->data_len - hdr_size);
- /* TBD: check pseudo csum iff tx_prepare not called? */
+ /* TBD: check pseudo csum if tx_prepare not called? */
if (ipv6_ext_flg) {
bd2->data.bitfields1 |=
ETH_L4_PSEUDO_CSUM_ZERO_LENGTH <<
@@ -1711,7 +1711,7 @@ int qede_dev_start(struct rte_eth_dev *eth_dev)
}
/* Newer SR-IOV PF driver expects RX/TX queues to be started before
- * enabling RSS. Hence RSS configuration is deferred upto this point.
+ * enabling RSS. Hence RSS configuration is deferred up to this point.
* Also, we would like to retain similar behavior in PF case, so we
* don't do PF/VF specific check here.
*/
diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index 87d22581b..ac4143562 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -450,7 +450,7 @@ static int parse_kvlist (const char *key __rte_unused, const char *value, void *
ret = -EINVAL;
if (!name) {
- RTE_LOG(WARNING, PMD, "command line paramter is empty for ring pmd!\n");
+ RTE_LOG(WARNING, PMD, "command line parameter is empty for ring pmd!\n");
goto out;
}
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 2ecd6f26e..97175e5f9 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -292,7 +292,7 @@ sfc_efx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (desc_flags & EFX_PKT_CONT) {
/* The packet is scattered, more fragments to come */
scatter_pkt = m;
- /* Futher fragments have no prefix */
+ /* Further fragments have no prefix */
prefix_size = 0;
continue;
}
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index e44de027d..40f79ced5 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -1296,7 +1296,7 @@ rte_pmd_tap_probe(struct rte_vdev_device *dev)
memset(remote_iface, 0, RTE_ETH_NAME_MAX_LEN);
if (params && (params[0] != '\0')) {
- RTE_LOG(DEBUG, PMD, "paramaters (%s)\n", params);
+ RTE_LOG(DEBUG, PMD, "parameters (%s)\n", params);
kvlist = rte_kvargs_parse(params, valid_arguments);
if (kvlist) {
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index 2152029b5..f472469e9 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -2079,7 +2079,7 @@ nicvf_eth_dev_init(struct rte_eth_dev *eth_dev)
goto fail;
}
- /* Detach port by returning postive error number */
+ /* Detach port by returning positive error number */
return ENOTSUP;
}
diff --git a/examples/Makefile b/examples/Makefile
index 6298626b7..c0e9c3be6 100644
--- a/examples/Makefile
+++ b/examples/Makefile
@@ -32,7 +32,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/bond/main.c b/examples/bond/main.c
index 9a4ec8073..dc6067667 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -550,7 +550,7 @@ static void cmd_help_parsed(__attribute__((unused)) void *parsed_result,
{
cmdline_printf(cl,
"ALB - link bonding mode 6 example\n"
- "send IP - sends one ARPrequest thru bonding for IP.\n"
+ "send IP - sends one ARPrequest through bonding for IP.\n"
"start - starts listening ARPs.\n"
"stop - stops lcore_main.\n"
"show - shows some bond info: ex. active slaves etc.\n"
diff --git a/examples/cmdline/Makefile b/examples/cmdline/Makefile
index 9ebe43558..5155a6c80 100644
--- a/examples/cmdline/Makefile
+++ b/examples/cmdline/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/distributor/Makefile b/examples/distributor/Makefile
index 6a5badaa0..404993ebf 100644
--- a/examples/distributor/Makefile
+++ b/examples/distributor/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/ethtool/ethtool-app/main.c b/examples/ethtool/ethtool-app/main.c
index 6d50d4639..f79b9621b 100644
--- a/examples/ethtool/ethtool-app/main.c
+++ b/examples/ethtool/ethtool-app/main.c
@@ -264,7 +264,7 @@ int main(int argc, char **argv)
uint32_t id_core;
uint32_t cnt_ports;
- /* Init runtime enviornment */
+ /* Init runtime environment */
cnt_args_parsed = rte_eal_init(argc, argv);
if (cnt_args_parsed < 0)
rte_exit(EXIT_FAILURE, "rte_eal_init(): Failed");
diff --git a/examples/ethtool/lib/rte_ethtool.c b/examples/ethtool/lib/rte_ethtool.c
index 7e4652063..fabfcb2ba 100644
--- a/examples/ethtool/lib/rte_ethtool.c
+++ b/examples/ethtool/lib/rte_ethtool.c
@@ -64,7 +64,7 @@ rte_ethtool_get_drvinfo(uint8_t port_id, struct ethtool_drvinfo *drvinfo)
printf("firmware version get error: (%s)\n", strerror(-ret));
else if (ret > 0)
printf("Insufficient fw version buffer size, "
- "the minimun size should be %d\n", ret);
+ "the minimum size should be %d\n", ret);
memset(&dev_info, 0, sizeof(dev_info));
rte_eth_dev_info_get(port_id, &dev_info);
diff --git a/examples/ethtool/lib/rte_ethtool.h b/examples/ethtool/lib/rte_ethtool.h
index 2e79d4535..18f44404b 100644
--- a/examples/ethtool/lib/rte_ethtool.h
+++ b/examples/ethtool/lib/rte_ethtool.h
@@ -365,7 +365,7 @@ int rte_ethtool_net_vlan_rx_kill_vid(uint8_t port_id, uint16_t vid);
int rte_ethtool_net_set_rx_mode(uint8_t port_id);
/**
- * Getting ring paramaters for Ethernet device.
+ * Getting ring parameters for Ethernet device.
*
* @param port_id
* The port identifier of the Ethernet device.
@@ -384,7 +384,7 @@ int rte_ethtool_get_ringparam(uint8_t port_id,
struct ethtool_ringparam *ring_param);
/**
- * Setting ring paramaters for Ethernet device.
+ * Setting ring parameters for Ethernet device.
*
* @param port_id
* The port identifier of the Ethernet device.
diff --git a/examples/exception_path/Makefile b/examples/exception_path/Makefile
index 76706c124..d16f74f6f 100644
--- a/examples/exception_path/Makefile
+++ b/examples/exception_path/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/helloworld/Makefile b/examples/helloworld/Makefile
index d2cca7a70..c83ec01e8 100644
--- a/examples/helloworld/Makefile
+++ b/examples/helloworld/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/ip_fragmentation/Makefile b/examples/ip_fragmentation/Makefile
index c321e6a13..4bc01abb9 100644
--- a/examples/ip_fragmentation/Makefile
+++ b/examples/ip_fragmentation/Makefile
@@ -34,7 +34,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index 71c1d12fc..654b315cf 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -1020,7 +1020,7 @@ main(int argc, char **argv)
if (check_ptype(portid) == 0) {
rte_eth_add_rx_callback(portid, 0, cb_parse_ptype, NULL);
- printf("Add Rx callback funciton to detect L3 packet type by SW :"
+ printf("Add Rx callback function to detect L3 packet type by SW :"
" port = %d\n", portid);
}
}
diff --git a/examples/ip_reassembly/Makefile b/examples/ip_reassembly/Makefile
index d9539a3a9..85c64a38b 100644
--- a/examples/ip_reassembly/Makefile
+++ b/examples/ip_reassembly/Makefile
@@ -34,7 +34,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/ipv4_multicast/Makefile b/examples/ipv4_multicast/Makefile
index 44f0a3bb4..1f7c53af3 100644
--- a/examples/ipv4_multicast/Makefile
+++ b/examples/ipv4_multicast/Makefile
@@ -34,7 +34,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/kni/Makefile b/examples/kni/Makefile
index 6800dd5c6..08a4f0c57 100644
--- a/examples/kni/Makefile
+++ b/examples/kni/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/l2fwd/Makefile b/examples/l2fwd/Makefile
index 78feeeb83..8896ab452 100644
--- a/examples/l2fwd/Makefile
+++ b/examples/l2fwd/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/l3fwd-acl/Makefile b/examples/l3fwd-acl/Makefile
index a3473a83e..3cd299f1b 100644
--- a/examples/l3fwd-acl/Makefile
+++ b/examples/l3fwd-acl/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/l3fwd-power/Makefile b/examples/l3fwd-power/Makefile
index 783772a79..9c4f44300 100644
--- a/examples/l3fwd-power/Makefile
+++ b/examples/l3fwd-power/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/l3fwd-vf/Makefile b/examples/l3fwd-vf/Makefile
index d97611cfc..989faf032 100644
--- a/examples/l3fwd-vf/Makefile
+++ b/examples/l3fwd-vf/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/l3fwd/Makefile b/examples/l3fwd/Makefile
index 5ce0ce05a..d99a43ade 100644
--- a/examples/l3fwd/Makefile
+++ b/examples/l3fwd/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/l3fwd/l3fwd_sse.h b/examples/l3fwd/l3fwd_sse.h
index fa9c4829d..4334be376 100644
--- a/examples/l3fwd/l3fwd_sse.h
+++ b/examples/l3fwd/l3fwd_sse.h
@@ -158,7 +158,7 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
* Suppose we have array of destionation ports:
* dst_port[] = {a, b, c, d,, e, ... }
* dp1 should contain: <a, b, c, d>, dp2: <b, c, d, e>.
- * We doing 4 comparisions at once and the result is 4 bit mask.
+ * We are doing 4 comparisons at once and the result is a 4-bit mask.
* This mask is used as an index into prebuild array of pnum values.
*/
static inline uint16_t *
diff --git a/examples/link_status_interrupt/Makefile b/examples/link_status_interrupt/Makefile
index 9ecc7fc4c..d5ee073a4 100644
--- a/examples/link_status_interrupt/Makefile
+++ b/examples/link_status_interrupt/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/load_balancer/Makefile b/examples/load_balancer/Makefile
index 2c5fd9b06..f656e51ce 100644
--- a/examples/load_balancer/Makefile
+++ b/examples/load_balancer/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/multi_process/Makefile b/examples/multi_process/Makefile
index 6b315cc0e..696633b93 100644
--- a/examples/multi_process/Makefile
+++ b/examples/multi_process/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/multi_process/client_server_mp/Makefile b/examples/multi_process/client_server_mp/Makefile
index 89cc6bf8f..feb508a45 100644
--- a/examples/multi_process/client_server_mp/Makefile
+++ b/examples/multi_process/client_server_mp/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/multi_process/client_server_mp/mp_client/Makefile b/examples/multi_process/client_server_mp/mp_client/Makefile
index 2688fed02..2ee8cd2c7 100644
--- a/examples/multi_process/client_server_mp/mp_client/Makefile
+++ b/examples/multi_process/client_server_mp/mp_client/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
include $(RTE_SDK)/mk/rte.vars.mk
# binary name
diff --git a/examples/multi_process/client_server_mp/mp_server/Makefile b/examples/multi_process/client_server_mp/mp_server/Makefile
index c29e4783e..5552999b5 100644
--- a/examples/multi_process/client_server_mp/mp_server/Makefile
+++ b/examples/multi_process/client_server_mp/mp_server/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/multi_process/l2fwd_fork/Makefile b/examples/multi_process/l2fwd_fork/Makefile
index ff257a35e..11ae8ff42 100644
--- a/examples/multi_process/l2fwd_fork/Makefile
+++ b/examples/multi_process/l2fwd_fork/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/multi_process/l2fwd_fork/flib.h b/examples/multi_process/l2fwd_fork/flib.h
index 711e3b6d0..1064c9bbd 100644
--- a/examples/multi_process/l2fwd_fork/flib.h
+++ b/examples/multi_process/l2fwd_fork/flib.h
@@ -120,7 +120,7 @@ int flib_register_slave_exit_notify(unsigned slave_id,
/**
* Assign a lcore ID to non-slave thread. Non-slave thread refers to thread that
* not created by function rte_eal_remote_launch or rte_eal_mp_remote_launch.
- * These threads can either bind lcore or float among differnt lcores.
+ * These threads can either bind to an lcore or float among different lcores.
* This lcore ID will be unique in multi-thread or multi-process DPDK running
* environment, then it can benefit from using the cache mechanism provided in
* mempool library.
diff --git a/examples/multi_process/simple_mp/Makefile b/examples/multi_process/simple_mp/Makefile
index 31ec0c806..7ac96f2f1 100644
--- a/examples/multi_process/simple_mp/Makefile
+++ b/examples/multi_process/simple_mp/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/multi_process/symmetric_mp/Makefile b/examples/multi_process/symmetric_mp/Makefile
index c789f3c9f..77d90c68d 100644
--- a/examples/multi_process/symmetric_mp/Makefile
+++ b/examples/multi_process/symmetric_mp/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/netmap_compat/Makefile b/examples/netmap_compat/Makefile
index 52d808696..fd4630aff 100644
--- a/examples/netmap_compat/Makefile
+++ b/examples/netmap_compat/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/netmap_compat/bridge/Makefile b/examples/netmap_compat/bridge/Makefile
index 1d4ddfffc..ce38a3455 100644
--- a/examples/netmap_compat/bridge/Makefile
+++ b/examples/netmap_compat/bridge/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define the RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/netmap_compat/lib/compat_netmap.c b/examples/netmap_compat/lib/compat_netmap.c
index 112c551f1..d9b40513d 100644
--- a/examples/netmap_compat/lib/compat_netmap.c
+++ b/examples/netmap_compat/lib/compat_netmap.c
@@ -168,7 +168,7 @@ mbuf_to_slot(struct rte_mbuf *mbuf, struct netmap_ring *r, uint32_t index)
/**
* Given a Netmap ring and a slot index for that ring, construct a dpdk mbuf
* from the data held in the buffer associated with the slot.
- * Allocation/deallocation of the dpdk mbuf are the responsability of the
+ * Allocation/deallocation of the dpdk mbuf are the responsibility of the
* caller.
* Note that mbuf chains are not supported.
*/
diff --git a/examples/performance-thread/common/lthread_mutex.c b/examples/performance-thread/common/lthread_mutex.c
index c1bc62710..c06d3d513 100644
--- a/examples/performance-thread/common/lthread_mutex.c
+++ b/examples/performance-thread/common/lthread_mutex.c
@@ -173,7 +173,7 @@ int lthread_mutex_lock(struct lthread_mutex *m)
return 0;
}
-/* try to lock a mutex but dont block */
+/* try to lock a mutex but don't block */
int lthread_mutex_trylock(struct lthread_mutex *m)
{
struct lthread *lt = THIS_LTHREAD;
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index ac85a369f..f2a0fa90d 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -1604,7 +1604,7 @@ processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])
* Suppose we have array of destionation ports:
* dst_port[] = {a, b, c, d,, e, ... }
* dp1 should contain: <a, b, c, d>, dp2: <b, c, d, e>.
- * We doing 4 comparisions at once and the result is 4 bit mask.
+ * We are doing 4 comparisons at once and the result is a 4-bit mask.
* This mask is used as an index into prebuild array of pnum values.
*/
static inline uint16_t *
diff --git a/examples/qos_meter/Makefile b/examples/qos_meter/Makefile
index 5113a1298..de1f12ce0 100644
--- a/examples/qos_meter/Makefile
+++ b/examples/qos_meter/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/qos_sched/Makefile b/examples/qos_sched/Makefile
index e41ac500f..56829c215 100644
--- a/examples/qos_sched/Makefile
+++ b/examples/qos_sched/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/quota_watermark/Makefile b/examples/quota_watermark/Makefile
index 17fe473b0..40a01fa47 100644
--- a/examples/quota_watermark/Makefile
+++ b/examples/quota_watermark/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/quota_watermark/qw/Makefile b/examples/quota_watermark/qw/Makefile
index fac9328d0..627897ce4 100644
--- a/examples/quota_watermark/qw/Makefile
+++ b/examples/quota_watermark/qw/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/quota_watermark/qwctl/Makefile b/examples/quota_watermark/qwctl/Makefile
index 1ca2f1e96..e0f0083d0 100644
--- a/examples/quota_watermark/qwctl/Makefile
+++ b/examples/quota_watermark/qwctl/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/timer/Makefile b/examples/timer/Makefile
index af12b7ba4..7db48ec6b 100644
--- a/examples/timer/Makefile
+++ b/examples/timer/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/vhost/Makefile b/examples/vhost/Makefile
index af7be99a9..add9f27bb 100644
--- a/examples/vhost/Makefile
+++ b/examples/vhost/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/vhost_xen/Makefile b/examples/vhost_xen/Makefile
index 47e14898a..ad2466aa7 100644
--- a/examples/vhost_xen/Makefile
+++ b/examples/vhost_xen/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/vhost_xen/main.c b/examples/vhost_xen/main.c
index d9ef140f7..979fa1d0f 100644
--- a/examples/vhost_xen/main.c
+++ b/examples/vhost_xen/main.c
@@ -534,7 +534,7 @@ gpa_to_vva(struct virtio_net *dev, uint64_t guest_pa)
/*
* This function adds buffers to the virtio devices RX virtqueue. Buffers can
* be received from the physical port or from another virtio device. A packet
- * count is returned to indicate the number of packets that were succesfully
+ * count is returned to indicate the number of packets that were successfully
* added to the RX queue.
*/
static inline uint32_t __attribute__((always_inline))
diff --git a/examples/vhost_xen/xenstore_parse.c b/examples/vhost_xen/xenstore_parse.c
index 26d243208..ab089f1b7 100644
--- a/examples/vhost_xen/xenstore_parse.c
+++ b/examples/vhost_xen/xenstore_parse.c
@@ -293,7 +293,7 @@ parse_gntnode(int dom_id, char *path)
}
/*
- * This function maps grant node of vring or mbuf pool to a continous virtual address space,
+ * This function maps grant node of vring or mbuf pool to a contiguous virtual address space,
* and returns mapped address, pfn array, index array
* @param gntnode
* Pointer to grant node
@@ -460,7 +460,7 @@ cleanup_mempool(struct xen_mempool *mempool)
/*
* process mempool node idx#_mempool_gref, idx = 0, 1, 2...
- * untill we encounter a node that doesn't exist.
+ * until we encounter a node that doesn't exist.
*/
int
parse_mempoolnode(struct xen_guest *guest)
diff --git a/examples/vmdq/Makefile b/examples/vmdq/Makefile
index 198e3bfec..501728222 100644
--- a/examples/vmdq/Makefile
+++ b/examples/vmdq/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/examples/vmdq_dcb/Makefile b/examples/vmdq_dcb/Makefile
index 8c51131b9..0c200a980 100644
--- a/examples/vmdq_dcb/Makefile
+++ b/examples/vmdq_dcb/Makefile
@@ -33,7 +33,7 @@ ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif
-# Default target, can be overriden by command line or environment
+# Default target, can be overridden by command line or environment
RTE_TARGET ?= x86_64-native-linuxapp-gcc
include $(RTE_SDK)/mk/rte.vars.mk
diff --git a/lib/librte_bitratestats/rte_bitrate.c b/lib/librte_bitratestats/rte_bitrate.c
index 193aa690e..3ceb35166 100644
--- a/lib/librte_bitratestats/rte_bitrate.c
+++ b/lib/librte_bitratestats/rte_bitrate.c
@@ -112,7 +112,7 @@ rte_stats_bitrate_calc(struct rte_stats_bitrates *bitrate_data,
port_data->peak_ibits = cnt_bits;
delta = cnt_bits;
delta -= port_data->ewma_ibits;
- /* The +-50 fixes integer rounding during divison */
+ /* The +-50 fixes integer rounding during division */
if (delta > 0)
delta = (delta * alpha_percent + 50) / 100;
else
diff --git a/lib/librte_compat/rte_compat.h b/lib/librte_compat/rte_compat.h
index 1c3c8d521..41e8032ba 100644
--- a/lib/librte_compat/rte_compat.h
+++ b/lib/librte_compat/rte_compat.h
@@ -39,7 +39,7 @@
* When a symol is exported from a library to provide an API, it also provides a
* calling convention (ABI) that is embodied in its name, return type,
* arguments, etc. On occasion that function may need to change to accommodate
- * new functionality, behavior, etc. When that occurs, it is desireable to
+ * new functionality, behavior, etc. When that occurs, it is desirable to
* allow for backwards compatibility for a time with older binaries that are
* dynamically linked to the dpdk. To support that, the __vsym and
* VERSION_SYMBOL macros are created. They, in conjunction with the
diff --git a/lib/librte_eal/common/eal_common_log.c b/lib/librte_eal/common/eal_common_log.c
index ddf65b7fd..41ea92472 100644
--- a/lib/librte_eal/common/eal_common_log.c
+++ b/lib/librte_eal/common/eal_common_log.c
@@ -173,13 +173,13 @@ rte_log_set_level_regexp(const char *pattern, uint32_t level)
return 0;
}
-/* get the current loglevel for the message beeing processed */
+/* get the current loglevel for the message being processed */
int rte_log_cur_msg_loglevel(void)
{
return RTE_PER_LCORE(log_cur_msg).loglevel;
}
-/* get the current logtype for the message beeing processed */
+/* get the current logtype for the message being processed */
int rte_log_cur_msg_logtype(void)
{
return RTE_PER_LCORE(log_cur_msg).logtype;
diff --git a/lib/librte_eal/common/include/rte_alarm.h b/lib/librte_eal/common/include/rte_alarm.h
index 4012cd67e..c275be18b 100644
--- a/lib/librte_eal/common/include/rte_alarm.h
+++ b/lib/librte_eal/common/include/rte_alarm.h
@@ -91,7 +91,7 @@ int rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb, void *cb_arg);
* the number of canceled alarm callback functions
* - value greater or equal 0 and rte_errno set to EINPROGRESS, at least one
* alarm could not be canceled because cancellation was requested from alarm
- * callback context. Returned value is the number of succesfuly canceled
+ * callback context. Returned value is the number of successfully canceled
* alarm callbacks
* - 0 and rte_errno set to ENOENT - no alarm found
* - -1 and rte_errno set to EINVAL - invalid parameter (NULL callback)
diff --git a/lib/librte_eal/common/include/rte_bus.h b/lib/librte_eal/common/include/rte_bus.h
index 7c3696926..5f47b829b 100644
--- a/lib/librte_eal/common/include/rte_bus.h
+++ b/lib/librte_eal/common/include/rte_bus.h
@@ -58,7 +58,7 @@ TAILQ_HEAD(rte_bus_list, rte_bus);
/**
* Bus specific scan for devices attached on the bus.
- * For each bus object, the scan would be reponsible for finding devices and
+ * For each bus object, the scan would be responsible for finding devices and
* adding them to its private device list.
*
* A bus should mandatorily implement this method.
diff --git a/lib/librte_eal/common/include/rte_malloc.h b/lib/librte_eal/common/include/rte_malloc.h
index 008ce134a..61aac322b 100644
--- a/lib/librte_eal/common/include/rte_malloc.h
+++ b/lib/librte_eal/common/include/rte_malloc.h
@@ -327,7 +327,7 @@ rte_malloc_set_limit(const char *type, size_t max);
* rte_malloc
*
* @param addr
- * Adress obtained from a previous rte_malloc call
+ * Address obtained from a previous rte_malloc call
* @return
* NULL on error
* otherwise return physical address of the buffer
diff --git a/lib/librte_eal/common/include/rte_time.h b/lib/librte_eal/common/include/rte_time.h
index 28c6274c4..373c41acc 100644
--- a/lib/librte_eal/common/include/rte_time.h
+++ b/lib/librte_eal/common/include/rte_time.h
@@ -52,7 +52,7 @@ struct rte_timecounter {
uint64_t nsec_mask;
/** Sub-nanoseconds count. */
uint64_t nsec_frac;
- /** Bitmask for two's complement substraction of non-64 bit counters. */
+ /** Bitmask for two's complement subtraction of non-64 bit counters. */
uint64_t cc_mask;
/** Cycle to nanosecond divisor (power of two). */
uint32_t cc_shift;
diff --git a/lib/librte_eal/linuxapp/eal/eal_pci_vfio.c b/lib/librte_eal/linuxapp/eal/eal_pci_vfio.c
index 2be131959..aa9d96eda 100644
--- a/lib/librte_eal/linuxapp/eal/eal_pci_vfio.c
+++ b/lib/librte_eal/linuxapp/eal/eal_pci_vfio.c
@@ -214,7 +214,7 @@ pci_vfio_setup_interrupts(struct rte_pci_device *dev, int vfio_dev_fd)
intr_idx = VFIO_PCI_NUM_IRQS;
/* get interrupt type from internal config (MSI-X by default, can be
- * overriden from the command line
+ * overridden from the command line
*/
switch (internal_config.vfio_intr_mode) {
case RTE_INTR_MODE_MSIX:
diff --git a/lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_82575.c b/lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_82575.c
index d558af204..1c30d12b0 100644
--- a/lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_82575.c
+++ b/lib/librte_eal/linuxapp/kni/ethtool/igb/e1000_82575.c
@@ -1357,7 +1357,7 @@ static s32 e1000_get_pcs_speed_and_duplex_82575(struct e1000_hw *hw,
* @hw: pointer to the HW structure
*
* In the case of serdes shut down sfp and PCS on driver unload
- * when management pass thru is not enabled.
+ * when management pass through is not enabled.
**/
void e1000_shutdown_serdes_link_82575(struct e1000_hw *hw)
{
diff --git a/lib/librte_eal/linuxapp/kni/ethtool/igb/igb_main.c b/lib/librte_eal/linuxapp/kni/ethtool/igb/igb_main.c
index 5f1f3a6b5..99338c5c8 100644
--- a/lib/librte_eal/linuxapp/kni/ethtool/igb/igb_main.c
+++ b/lib/librte_eal/linuxapp/kni/ethtool/igb/igb_main.c
@@ -1133,7 +1133,7 @@ static int igb_alloc_q_vector(struct igb_adapter *adapter,
/* initialize pointer to rings */
ring = q_vector->ring;
- /* intialize ITR */
+ /* initialize ITR */
if (rxr_count) {
/* rx or rx/tx vector */
if (!adapter->rx_itr_setting || adapter->rx_itr_setting > 3)
diff --git a/lib/librte_eal/linuxapp/kni/ethtool/igb/kcompat.h b/lib/librte_eal/linuxapp/kni/ethtool/igb/kcompat.h
index 4c52da3c1..e0a035423 100644
--- a/lib/librte_eal/linuxapp/kni/ethtool/igb/kcompat.h
+++ b/lib/librte_eal/linuxapp/kni/ethtool/igb/kcompat.h
@@ -1165,7 +1165,7 @@ static inline u32 _kc_netif_msg_init(int debug_value, int default_msg_enable_bit
#define pci_register_driver pci_module_init
/*
- * Most of the dma compat code is copied/modifed from the 2.4.37
+ * Most of the dma compat code is copied/modified from the 2.4.37
* /include/linux/libata-compat.h header file
*/
/* These definitions mirror those in pci.h, so they can be used
diff --git a/lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_api.c b/lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_api.c
index f00fe7969..4808d06ec 100644
--- a/lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_api.c
+++ b/lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_api.c
@@ -718,7 +718,7 @@ s32 ixgbe_update_eeprom_checksum(struct ixgbe_hw *hw)
* @vmdq: VMDq pool to assign
*
* Puts an ethernet address into a receive address register, or
- * finds the rar that it is aleady in; adds to the pool list
+ * finds the rar that it is already in; adds to the pool list
**/
s32 ixgbe_insert_mac_addr(struct ixgbe_hw *hw, u8 *addr, u32 vmdq)
{
diff --git a/lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_common.c b/lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_common.c
index 88b33fa0d..2c861de5d 100644
--- a/lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_common.c
+++ b/lib/librte_eal/linuxapp/kni/ethtool/ixgbe/ixgbe_common.c
@@ -3007,7 +3007,7 @@ u16 ixgbe_get_pcie_msix_count_generic(struct ixgbe_hw *hw)
* @vmdq: VMDq pool to assign
*
* Puts an ethernet address into a receive address register, or
- * finds the rar that it is aleady in; adds to the pool list
+ * finds the rar that it is already in; adds to the pool list
**/
s32 ixgbe_insert_mac_addr_generic(struct ixgbe_hw *hw, u8 *addr, u32 vmdq)
{
diff --git a/lib/librte_eal/linuxapp/kni/ethtool/ixgbe/kcompat.h b/lib/librte_eal/linuxapp/kni/ethtool/ixgbe/kcompat.h
index 4c7a64086..f62a7b56e 100644
--- a/lib/librte_eal/linuxapp/kni/ethtool/ixgbe/kcompat.h
+++ b/lib/librte_eal/linuxapp/kni/ethtool/ixgbe/kcompat.h
@@ -1108,7 +1108,7 @@ static inline u32 _kc_netif_msg_init(int debug_value, int default_msg_enable_bit
#define pci_register_driver pci_module_init
/*
- * Most of the dma compat code is copied/modifed from the 2.4.37
+ * Most of the dma compat code is copied/modified from the 2.4.37
* /include/linux/libata-compat.h header file
*/
/* These definitions mirror those in pci.h, so they can be used
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 0f38b45f8..1bc74ce94 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -2358,7 +2358,7 @@ rte_eth_xstats_get_names_by_id(uint8_t port_id,
* @param port_id
* The port identifier of the Ethernet device.
* @param ids
- * A pointer to an ids array passed by application. This tells wich
+ * A pointer to an ids array passed by application. This tells which
* statistics values function should retrieve. This parameter
* can be set to NULL if n is 0. In this case function will retrieve
* all avalible statistics.
diff --git a/lib/librte_hash/rte_cuckoo_hash.c b/lib/librte_hash/rte_cuckoo_hash.c
index 645c0cfab..1b7a0da9f 100644
--- a/lib/librte_hash/rte_cuckoo_hash.c
+++ b/lib/librte_hash/rte_cuckoo_hash.c
@@ -569,7 +569,7 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key,
k->pdata = data;
/*
* Return index where key is stored,
- * substracting the first dummy index
+ * subtracting the first dummy index
*/
return prim_bkt->key_idx[i] - 1;
}
@@ -589,7 +589,7 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key,
k->pdata = data;
/*
* Return index where key is stored,
- * substracting the first dummy index
+ * subtracting the first dummy index
*/
return sec_bkt->key_idx[i] - 1;
}
@@ -730,7 +730,7 @@ __rte_hash_lookup_with_hash(const struct rte_hash *h, const void *key,
*data = k->pdata;
/*
* Return index where key is stored,
- * substracting the first dummy index
+ * subtracting the first dummy index
*/
return bkt->key_idx[i] - 1;
}
@@ -753,7 +753,7 @@ __rte_hash_lookup_with_hash(const struct rte_hash *h, const void *key,
*data = k->pdata;
/*
* Return index where key is stored,
- * substracting the first dummy index
+ * subtracting the first dummy index
*/
return bkt->key_idx[i] - 1;
}
@@ -847,7 +847,7 @@ __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key,
/*
* Return index where key is stored,
- * substracting the first dummy index
+ * subtracting the first dummy index
*/
ret = bkt->key_idx[i] - 1;
bkt->key_idx[i] = EMPTY_SLOT;
@@ -872,7 +872,7 @@ __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key,
/*
* Return index where key is stored,
- * substracting the first dummy index
+ * subtracting the first dummy index
*/
ret = bkt->key_idx[i] - 1;
bkt->key_idx[i] = EMPTY_SLOT;
diff --git a/lib/librte_ip_frag/rte_ip_frag.h b/lib/librte_ip_frag/rte_ip_frag.h
index 6708906d3..4794587ef 100644
--- a/lib/librte_ip_frag/rte_ip_frag.h
+++ b/lib/librte_ip_frag/rte_ip_frag.h
@@ -233,7 +233,7 @@ rte_ipv6_fragment_packet(struct rte_mbuf *pkt_in,
* Pointer to the IPv6 fragment extension header.
* @return
* Pointer to mbuf for reassembled packet, or NULL if:
- * - an error occured.
+ * - an error occurred.
* - not all fragments of the packet are collected yet.
*/
struct rte_mbuf *rte_ipv6_frag_reassemble_packet(struct rte_ip_frag_tbl *tbl,
@@ -307,7 +307,7 @@ int32_t rte_ipv4_fragment_packet(struct rte_mbuf *pkt_in,
* Pointer to the IPV4 header inside the fragment.
* @return
* Pointer to mbuf for reassebled packet, or NULL if:
- * - an error occured.
+ * - an error occurred.
* - not all fragments of the packet are collected yet.
*/
struct rte_mbuf * rte_ipv4_frag_reassemble_packet(struct rte_ip_frag_tbl *tbl,
diff --git a/lib/librte_ip_frag/rte_ipv4_reassembly.c b/lib/librte_ip_frag/rte_ipv4_reassembly.c
index e084ca59a..b13308966 100644
--- a/lib/librte_ip_frag/rte_ipv4_reassembly.c
+++ b/lib/librte_ip_frag/rte_ipv4_reassembly.c
@@ -118,7 +118,7 @@ ipv4_frag_reassemble(struct ip_frag_pkt *fp)
* Pointer to the IPV4 header inside the fragment.
* @return
* Pointer to mbuf for reassebled packet, or NULL if:
- * - an error occured.
+ * - an error occurred.
* - not all fragments of the packet are collected yet.
*/
struct rte_mbuf *
diff --git a/lib/librte_ip_frag/rte_ipv6_reassembly.c b/lib/librte_ip_frag/rte_ipv6_reassembly.c
index 21a5ef5d3..dde58cb78 100644
--- a/lib/librte_ip_frag/rte_ipv6_reassembly.c
+++ b/lib/librte_ip_frag/rte_ipv6_reassembly.c
@@ -155,7 +155,7 @@ ipv6_frag_reassemble(struct ip_frag_pkt *fp)
* Pointer to the IPV6 fragment extension header.
* @return
* Pointer to mbuf for reassembled packet, or NULL if:
- * - an error occured.
+ * - an error occurred.
* - not all fragments of the packet are collected yet.
*/
#define MORE_FRAGS(x) (((x) & 0x100) >> 8)
diff --git a/lib/librte_kni/rte_kni.c b/lib/librte_kni/rte_kni.c
index c3f9208c1..40288a179 100644
--- a/lib/librte_kni/rte_kni.c
+++ b/lib/librte_kni/rte_kni.c
@@ -119,7 +119,7 @@ struct rte_kni_memzone_pool {
uint32_t max_ifaces; /**< Max. num of KNI ifaces */
struct rte_kni_memzone_slot *slots; /**< Pool slots */
- rte_spinlock_t mutex; /**< alloc/relase mutex */
+ rte_spinlock_t mutex; /**< alloc/release mutex */
/* Free memzone slots linked-list */
struct rte_kni_memzone_slot *free; /**< First empty slot */
diff --git a/lib/librte_reorder/rte_reorder.h b/lib/librte_reorder/rte_reorder.h
index 737e0554c..4cd8de765 100644
--- a/lib/librte_reorder/rte_reorder.h
+++ b/lib/librte_reorder/rte_reorder.h
@@ -147,7 +147,7 @@ rte_reorder_free(struct rte_reorder_buffer *b);
* -1 on error
* On error case, rte_errno will be set appropriately:
* - ENOSPC - Cannot move existing mbufs from reorder buffer to accommodate
- * ealry mbuf, but it can be accomodated by performing drain and then insert.
+ * early mbuf, but it can be accommodated by performing drain and then insert.
* - ERANGE - Too early or late mbuf which is vastly out of range of expected
* window should be ingnored without any handling.
*/
diff --git a/lib/librte_timer/rte_timer.c b/lib/librte_timer/rte_timer.c
index 18782fab0..43e61782d 100644
--- a/lib/librte_timer/rte_timer.c
+++ b/lib/librte_timer/rte_timer.c
@@ -183,7 +183,7 @@ timer_set_running_state(struct rte_timer *tim)
return -1;
/* here, we know that timer is stopped or pending,
- * mark it atomically as beeing configured */
+ * mark it atomically as being configured */
status.state = RTE_TIMER_RUNNING;
status.owner = (int16_t)lcore_id;
success = rte_atomic32_cmpset(&tim->status.u32,
diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h
index 605e47cbf..80e911a8d 100644
--- a/lib/librte_vhost/rte_vhost.h
+++ b/lib/librte_vhost/rte_vhost.h
@@ -365,7 +365,7 @@ struct rte_mempool;
/**
* This function adds buffers to the virtio devices RX virtqueue. Buffers can
* be received from the physical port or from another virtual device. A packet
- * count is returned to indicate the number of packets that were succesfully
+ * count is returned to indicate the number of packets that were successfully
* added to the RX queue.
* @param vid
* vhost device ID
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 48219e050..6c6dde915 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -233,7 +233,7 @@ copy_mbuf_to_desc(struct virtio_net *dev, struct vring_desc *descs,
/**
* This function adds buffers to the virtio devices RX virtqueue. Buffers can
* be received from the physical port or from another virtio device. A packet
- * count is returned to indicate the number of packets that are succesfully
+ * count is returned to indicate the number of packets that are successfully
* added to the RX queue. This function works when the mbuf is scattered, but
* it doesn't support the mergeable feature.
*/
diff --git a/mk/arch/i686/rte.vars.mk b/mk/arch/i686/rte.vars.mk
index 6a25312cc..ec7b7a6bb 100644
--- a/mk/arch/i686/rte.vars.mk
+++ b/mk/arch/i686/rte.vars.mk
@@ -32,15 +32,15 @@
#
# arch:
#
-# - define ARCH variable (overriden by cmdline or by previous
+# - define ARCH variable (overridden by cmdline or by previous
# optional define in machine .mk)
-# - define CROSS variable (overriden by cmdline or previous define
+# - define CROSS variable (overridden by cmdline or previous define
# in machine .mk)
-# - define CPU_CFLAGS variable (overriden by cmdline or previous
+# - define CPU_CFLAGS variable (overridden by cmdline or previous
# define in machine .mk)
-# - define CPU_LDFLAGS variable (overriden by cmdline or previous
+# - define CPU_LDFLAGS variable (overridden by cmdline or previous
# define in machine .mk)
-# - define CPU_ASFLAGS variable (overriden by cmdline or previous
+# - define CPU_ASFLAGS variable (overridden by cmdline or previous
# define in machine .mk)
# - may override any previously defined variable
#
diff --git a/mk/arch/x86_64/rte.vars.mk b/mk/arch/x86_64/rte.vars.mk
index 83723c8e0..617fa655b 100644
--- a/mk/arch/x86_64/rte.vars.mk
+++ b/mk/arch/x86_64/rte.vars.mk
@@ -32,15 +32,15 @@
#
# arch:
#
-# - define ARCH variable (overriden by cmdline or by previous
+# - define ARCH variable (overridden by cmdline or by previous
# optional define in machine .mk)
-# - define CROSS variable (overriden by cmdline or previous define
+# - define CROSS variable (overridden by cmdline or previous define
# in machine .mk)
-# - define CPU_CFLAGS variable (overriden by cmdline or previous
+# - define CPU_CFLAGS variable (overridden by cmdline or previous
# define in machine .mk)
-# - define CPU_LDFLAGS variable (overriden by cmdline or previous
+# - define CPU_LDFLAGS variable (overridden by cmdline or previous
# define in machine .mk)
-# - define CPU_ASFLAGS variable (overriden by cmdline or previous
+# - define CPU_ASFLAGS variable (overridden by cmdline or previous
# define in machine .mk)
# - may override any previously defined variable
#
diff --git a/mk/exec-env/bsdapp/rte.vars.mk b/mk/exec-env/bsdapp/rte.vars.mk
index 47a673e7d..ebae68ccc 100644
--- a/mk/exec-env/bsdapp/rte.vars.mk
+++ b/mk/exec-env/bsdapp/rte.vars.mk
@@ -32,9 +32,9 @@
#
# exec-env:
#
-# - define EXECENV_CFLAGS variable (overriden by cmdline)
-# - define EXECENV_LDFLAGS variable (overriden by cmdline)
-# - define EXECENV_ASFLAGS variable (overriden by cmdline)
+# - define EXECENV_CFLAGS variable (overridden by cmdline)
+# - define EXECENV_LDFLAGS variable (overridden by cmdline)
+# - define EXECENV_ASFLAGS variable (overridden by cmdline)
# - may override any previously defined variable
#
# examples for RTE_EXEC_ENV: linuxapp, bsdapp
diff --git a/mk/exec-env/linuxapp/rte.vars.mk b/mk/exec-env/linuxapp/rte.vars.mk
index a8a1ee4cb..9a7169996 100644
--- a/mk/exec-env/linuxapp/rte.vars.mk
+++ b/mk/exec-env/linuxapp/rte.vars.mk
@@ -32,9 +32,9 @@
#
# exec-env:
#
-# - define EXECENV_CFLAGS variable (overriden by cmdline)
-# - define EXECENV_LDFLAGS variable (overriden by cmdline)
-# - define EXECENV_ASFLAGS variable (overriden by cmdline)
+# - define EXECENV_CFLAGS variable (overridden by cmdline)
+# - define EXECENV_LDFLAGS variable (overridden by cmdline)
+# - define EXECENV_ASFLAGS variable (overridden by cmdline)
# - may override any previously defined variable
#
# examples for RTE_EXEC_ENV: linuxapp, bsdapp
diff --git a/mk/machine/atm/rte.vars.mk b/mk/machine/atm/rte.vars.mk
index d6fbba0f9..cfed1108b 100644
--- a/mk/machine/atm/rte.vars.mk
+++ b/mk/machine/atm/rte.vars.mk
@@ -32,16 +32,16 @@
#
# machine:
#
-# - can define ARCH variable (overriden by cmdline value)
-# - can define CROSS variable (overriden by cmdline value)
-# - define MACHINE_CFLAGS variable (overriden by cmdline value)
-# - define MACHINE_LDFLAGS variable (overriden by cmdline value)
-# - define MACHINE_ASFLAGS variable (overriden by cmdline value)
-# - can define CPU_CFLAGS variable (overriden by cmdline value) that
+# - can define ARCH variable (overridden by cmdline value)
+# - can define CROSS variable (overridden by cmdline value)
+# - define MACHINE_CFLAGS variable (overridden by cmdline value)
+# - define MACHINE_LDFLAGS variable (overridden by cmdline value)
+# - define MACHINE_ASFLAGS variable (overridden by cmdline value)
+# - can define CPU_CFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
-# - can define CPU_LDFLAGS variable (overriden by cmdline value) that
+# - can define CPU_LDFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
-# - can define CPU_ASFLAGS variable (overriden by cmdline value) that
+# - can define CPU_ASFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
# - may override any previously defined variable
#
diff --git a/mk/machine/default/rte.vars.mk b/mk/machine/default/rte.vars.mk
index 53c6af6e4..a6fb84244 100644
--- a/mk/machine/default/rte.vars.mk
+++ b/mk/machine/default/rte.vars.mk
@@ -32,16 +32,16 @@
#
# machine:
#
-# - can define ARCH variable (overriden by cmdline value)
-# - can define CROSS variable (overriden by cmdline value)
-# - define MACHINE_CFLAGS variable (overriden by cmdline value)
-# - define MACHINE_LDFLAGS variable (overriden by cmdline value)
-# - define MACHINE_ASFLAGS variable (overriden by cmdline value)
-# - can define CPU_CFLAGS variable (overriden by cmdline value) that
+# - can define ARCH variable (overridden by cmdline value)
+# - can define CROSS variable (overridden by cmdline value)
+# - define MACHINE_CFLAGS variable (overridden by cmdline value)
+# - define MACHINE_LDFLAGS variable (overridden by cmdline value)
+# - define MACHINE_ASFLAGS variable (overridden by cmdline value)
+# - can define CPU_CFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
-# - can define CPU_LDFLAGS variable (overriden by cmdline value) that
+# - can define CPU_LDFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
-# - can define CPU_ASFLAGS variable (overriden by cmdline value) that
+# - can define CPU_ASFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
# - may override any previously defined variable
#
diff --git a/mk/machine/hsw/rte.vars.mk b/mk/machine/hsw/rte.vars.mk
index eedc5d026..66a562e75 100644
--- a/mk/machine/hsw/rte.vars.mk
+++ b/mk/machine/hsw/rte.vars.mk
@@ -32,16 +32,16 @@
#
# machine:
#
-# - can define ARCH variable (overriden by cmdline value)
-# - can define CROSS variable (overriden by cmdline value)
-# - define MACHINE_CFLAGS variable (overriden by cmdline value)
-# - define MACHINE_LDFLAGS variable (overriden by cmdline value)
-# - define MACHINE_ASFLAGS variable (overriden by cmdline value)
-# - can define CPU_CFLAGS variable (overriden by cmdline value) that
+# - can define ARCH variable (overridden by cmdline value)
+# - can define CROSS variable (overridden by cmdline value)
+# - define MACHINE_CFLAGS variable (overridden by cmdline value)
+# - define MACHINE_LDFLAGS variable (overridden by cmdline value)
+# - define MACHINE_ASFLAGS variable (overridden by cmdline value)
+# - can define CPU_CFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
-# - can define CPU_LDFLAGS variable (overriden by cmdline value) that
+# - can define CPU_LDFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
-# - can define CPU_ASFLAGS variable (overriden by cmdline value) that
+# - can define CPU_ASFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
# - may override any previously defined variable
#
diff --git a/mk/machine/ivb/rte.vars.mk b/mk/machine/ivb/rte.vars.mk
index 932241afa..768a25d23 100644
--- a/mk/machine/ivb/rte.vars.mk
+++ b/mk/machine/ivb/rte.vars.mk
@@ -32,16 +32,16 @@
#
# machine:
#
-# - can define ARCH variable (overriden by cmdline value)
-# - can define CROSS variable (overriden by cmdline value)
-# - define MACHINE_CFLAGS variable (overriden by cmdline value)
-# - define MACHINE_LDFLAGS variable (overriden by cmdline value)
-# - define MACHINE_ASFLAGS variable (overriden by cmdline value)
-# - can define CPU_CFLAGS variable (overriden by cmdline value) that
+# - can define ARCH variable (overridden by cmdline value)
+# - can define CROSS variable (overridden by cmdline value)
+# - define MACHINE_CFLAGS variable (overridden by cmdline value)
+# - define MACHINE_LDFLAGS variable (overridden by cmdline value)
+# - define MACHINE_ASFLAGS variable (overridden by cmdline value)
+# - can define CPU_CFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
-# - can define CPU_LDFLAGS variable (overriden by cmdline value) that
+# - can define CPU_LDFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
-# - can define CPU_ASFLAGS variable (overriden by cmdline value) that
+# - can define CPU_ASFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
# - may override any previously defined variable
#
diff --git a/mk/machine/native/rte.vars.mk b/mk/machine/native/rte.vars.mk
index 6ce0c723b..7f55b54a8 100644
--- a/mk/machine/native/rte.vars.mk
+++ b/mk/machine/native/rte.vars.mk
@@ -32,16 +32,16 @@
#
# machine:
#
-# - can define ARCH variable (overriden by cmdline value)
-# - can define CROSS variable (overriden by cmdline value)
-# - define MACHINE_CFLAGS variable (overriden by cmdline value)
-# - define MACHINE_LDFLAGS variable (overriden by cmdline value)
-# - define MACHINE_ASFLAGS variable (overriden by cmdline value)
-# - can define CPU_CFLAGS variable (overriden by cmdline value) that
+# - can define ARCH variable (overridden by cmdline value)
+# - can define CROSS variable (overridden by cmdline value)
+# - define MACHINE_CFLAGS variable (overridden by cmdline value)
+# - define MACHINE_LDFLAGS variable (overridden by cmdline value)
+# - define MACHINE_ASFLAGS variable (overridden by cmdline value)
+# - can define CPU_CFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
-# - can define CPU_LDFLAGS variable (overriden by cmdline value) that
+# - can define CPU_LDFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
-# - can define CPU_ASFLAGS variable (overriden by cmdline value) that
+# - can define CPU_ASFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
# - may override any previously defined variable
#
diff --git a/mk/machine/nhm/rte.vars.mk b/mk/machine/nhm/rte.vars.mk
index 9566efd7c..8921c328f 100644
--- a/mk/machine/nhm/rte.vars.mk
+++ b/mk/machine/nhm/rte.vars.mk
@@ -32,16 +32,16 @@
#
# machine:
#
-# - can define ARCH variable (overriden by cmdline value)
-# - can define CROSS variable (overriden by cmdline value)
-# - define MACHINE_CFLAGS variable (overriden by cmdline value)
-# - define MACHINE_LDFLAGS variable (overriden by cmdline value)
-# - define MACHINE_ASFLAGS variable (overriden by cmdline value)
-# - can define CPU_CFLAGS variable (overriden by cmdline value) that
+# - can define ARCH variable (overridden by cmdline value)
+# - can define CROSS variable (overridden by cmdline value)
+# - define MACHINE_CFLAGS variable (overridden by cmdline value)
+# - define MACHINE_LDFLAGS variable (overridden by cmdline value)
+# - define MACHINE_ASFLAGS variable (overridden by cmdline value)
+# - can define CPU_CFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
-# - can define CPU_LDFLAGS variable (overriden by cmdline value) that
+# - can define CPU_LDFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
-# - can define CPU_ASFLAGS variable (overriden by cmdline value) that
+# - can define CPU_ASFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
# - may override any previously defined variable
#
diff --git a/mk/machine/snb/rte.vars.mk b/mk/machine/snb/rte.vars.mk
index a9c1b7caf..0709f3d28 100644
--- a/mk/machine/snb/rte.vars.mk
+++ b/mk/machine/snb/rte.vars.mk
@@ -32,16 +32,16 @@
#
# machine:
#
-# - can define ARCH variable (overriden by cmdline value)
-# - can define CROSS variable (overriden by cmdline value)
-# - define MACHINE_CFLAGS variable (overriden by cmdline value)
-# - define MACHINE_LDFLAGS variable (overriden by cmdline value)
-# - define MACHINE_ASFLAGS variable (overriden by cmdline value)
-# - can define CPU_CFLAGS variable (overriden by cmdline value) that
+# - can define ARCH variable (overridden by cmdline value)
+# - can define CROSS variable (overridden by cmdline value)
+# - define MACHINE_CFLAGS variable (overridden by cmdline value)
+# - define MACHINE_LDFLAGS variable (overridden by cmdline value)
+# - define MACHINE_ASFLAGS variable (overridden by cmdline value)
+# - can define CPU_CFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
-# - can define CPU_LDFLAGS variable (overriden by cmdline value) that
+# - can define CPU_LDFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
-# - can define CPU_ASFLAGS variable (overriden by cmdline value) that
+# - can define CPU_ASFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
# - may override any previously defined variable
#
diff --git a/mk/machine/wsm/rte.vars.mk b/mk/machine/wsm/rte.vars.mk
index c8a266cab..3a2fa2b06 100644
--- a/mk/machine/wsm/rte.vars.mk
+++ b/mk/machine/wsm/rte.vars.mk
@@ -32,16 +32,16 @@
#
# machine:
#
-# - can define ARCH variable (overriden by cmdline value)
-# - can define CROSS variable (overriden by cmdline value)
-# - define MACHINE_CFLAGS variable (overriden by cmdline value)
-# - define MACHINE_LDFLAGS variable (overriden by cmdline value)
-# - define MACHINE_ASFLAGS variable (overriden by cmdline value)
-# - can define CPU_CFLAGS variable (overriden by cmdline value) that
+# - can define ARCH variable (overridden by cmdline value)
+# - can define CROSS variable (overridden by cmdline value)
+# - define MACHINE_CFLAGS variable (overridden by cmdline value)
+# - define MACHINE_LDFLAGS variable (overridden by cmdline value)
+# - define MACHINE_ASFLAGS variable (overridden by cmdline value)
+# - can define CPU_CFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
-# - can define CPU_LDFLAGS variable (overriden by cmdline value) that
+# - can define CPU_LDFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
-# - can define CPU_ASFLAGS variable (overriden by cmdline value) that
+# - can define CPU_ASFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
# - may override any previously defined variable
#
diff --git a/mk/rte.vars.mk b/mk/rte.vars.mk
index 418e49a2e..b51006b81 100644
--- a/mk/rte.vars.mk
+++ b/mk/rte.vars.mk
@@ -96,7 +96,7 @@ ifeq ($(RTE_TOOLCHAIN),)
$(error RTE_TOOLCHAIN is not defined)
endif
-# can be overriden by make command line or exported environment variable
+# can be overridden by make command line or exported environment variable
RTE_KERNELDIR ?= /lib/modules/$(shell uname -r)/build
export RTE_TARGET
diff --git a/mk/target/generic/rte.vars.mk b/mk/target/generic/rte.vars.mk
index 5d22a6a67..84649b9b1 100644
--- a/mk/target/generic/rte.vars.mk
+++ b/mk/target/generic/rte.vars.mk
@@ -38,16 +38,16 @@
#
# machine:
#
-# - can define ARCH variable (overriden by cmdline value)
-# - can define CROSS variable (overriden by cmdline value)
-# - define MACHINE_CFLAGS variable (overriden by cmdline value)
-# - define MACHINE_LDFLAGS variable (overriden by cmdline value)
-# - define MACHINE_ASFLAGS variable (overriden by cmdline value)
-# - can define CPU_CFLAGS variable (overriden by cmdline value) that
+# - can define ARCH variable (overridden by cmdline value)
+# - can define CROSS variable (overridden by cmdline value)
+# - define MACHINE_CFLAGS variable (overridden by cmdline value)
+# - define MACHINE_LDFLAGS variable (overridden by cmdline value)
+# - define MACHINE_ASFLAGS variable (overridden by cmdline value)
+# - can define CPU_CFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
-# - can define CPU_LDFLAGS variable (overriden by cmdline value) that
+# - can define CPU_LDFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
-# - can define CPU_ASFLAGS variable (overriden by cmdline value) that
+# - can define CPU_ASFLAGS variable (overridden by cmdline value) that
# overrides the one defined in arch.
#
ifneq ($(wildcard $(RTE_SDK)/mk/machine/$(RTE_MACHINE)/rte.vars.mk),)
@@ -59,15 +59,15 @@ endif
#
# arch:
#
-# - define ARCH variable (overriden by cmdline or by previous
+# - define ARCH variable (overridden by cmdline or by previous
# optional define in machine .mk)
-# - define CROSS variable (overriden by cmdline or previous define
+# - define CROSS variable (overridden by cmdline or previous define
# in machine .mk)
-# - define CPU_CFLAGS variable (overriden by cmdline or previous
+# - define CPU_CFLAGS variable (overridden by cmdline or previous
# define in machine .mk)
-# - define CPU_LDFLAGS variable (overriden by cmdline or previous
+# - define CPU_LDFLAGS variable (overridden by cmdline or previous
# define in machine .mk)
-# - define CPU_ASFLAGS variable (overriden by cmdline or previous
+# - define CPU_ASFLAGS variable (overridden by cmdline or previous
# define in machine .mk)
# - may override any previously defined variable
#
@@ -77,9 +77,9 @@ include $(RTE_SDK)/mk/arch/$(RTE_ARCH)/rte.vars.mk
# toolchain:
#
# - define CC, LD, AR, AS, ...
-# - define TOOLCHAIN_CFLAGS variable (overriden by cmdline value)
-# - define TOOLCHAIN_LDFLAGS variable (overriden by cmdline value)
-# - define TOOLCHAIN_ASFLAGS variable (overriden by cmdline value)
+# - define TOOLCHAIN_CFLAGS variable (overridden by cmdline value)
+# - define TOOLCHAIN_LDFLAGS variable (overridden by cmdline value)
+# - define TOOLCHAIN_ASFLAGS variable (overridden by cmdline value)
# - may override any previously defined variable
#
include $(RTE_SDK)/mk/toolchain/$(RTE_TOOLCHAIN)/rte.vars.mk
@@ -87,9 +87,9 @@ include $(RTE_SDK)/mk/toolchain/$(RTE_TOOLCHAIN)/rte.vars.mk
#
# exec-env:
#
-# - define EXECENV_CFLAGS variable (overriden by cmdline)
-# - define EXECENV_LDFLAGS variable (overriden by cmdline)
-# - define EXECENV_ASFLAGS variable (overriden by cmdline)
+# - define EXECENV_CFLAGS variable (overridden by cmdline)
+# - define EXECENV_LDFLAGS variable (overridden by cmdline)
+# - define EXECENV_ASFLAGS variable (overridden by cmdline)
# - may override any previously defined variable
#
include $(RTE_SDK)/mk/exec-env/$(RTE_EXEC_ENV)/rte.vars.mk
diff --git a/mk/toolchain/clang/rte.vars.mk b/mk/toolchain/clang/rte.vars.mk
index af34c10aa..dde922d22 100644
--- a/mk/toolchain/clang/rte.vars.mk
+++ b/mk/toolchain/clang/rte.vars.mk
@@ -32,10 +32,10 @@
#
# toolchain:
#
-# - define CC, LD, AR, AS, ... (overriden by cmdline value)
-# - define TOOLCHAIN_CFLAGS variable (overriden by cmdline value)
-# - define TOOLCHAIN_LDFLAGS variable (overriden by cmdline value)
-# - define TOOLCHAIN_ASFLAGS variable (overriden by cmdline value)
+# - define CC, LD, AR, AS, ... (overridden by cmdline value)
+# - define TOOLCHAIN_CFLAGS variable (overridden by cmdline value)
+# - define TOOLCHAIN_LDFLAGS variable (overridden by cmdline value)
+# - define TOOLCHAIN_ASFLAGS variable (overridden by cmdline value)
#
CC = $(CROSS)clang
diff --git a/mk/toolchain/gcc/rte.vars.mk b/mk/toolchain/gcc/rte.vars.mk
index 3834e00cf..3b907e201 100644
--- a/mk/toolchain/gcc/rte.vars.mk
+++ b/mk/toolchain/gcc/rte.vars.mk
@@ -32,10 +32,10 @@
#
# toolchain:
#
-# - define CC, LD, AR, AS, ... (overriden by cmdline value)
-# - define TOOLCHAIN_CFLAGS variable (overriden by cmdline value)
-# - define TOOLCHAIN_LDFLAGS variable (overriden by cmdline value)
-# - define TOOLCHAIN_ASFLAGS variable (overriden by cmdline value)
+# - define CC, LD, AR, AS, ... (overridden by cmdline value)
+# - define TOOLCHAIN_CFLAGS variable (overridden by cmdline value)
+# - define TOOLCHAIN_LDFLAGS variable (overridden by cmdline value)
+# - define TOOLCHAIN_ASFLAGS variable (overridden by cmdline value)
#
CC = $(CROSS)gcc
diff --git a/mk/toolchain/icc/rte.vars.mk b/mk/toolchain/icc/rte.vars.mk
index dd3364519..33a8ba79e 100644
--- a/mk/toolchain/icc/rte.vars.mk
+++ b/mk/toolchain/icc/rte.vars.mk
@@ -32,10 +32,10 @@
#
# toolchain:
#
-# - define CC, LD, AR, AS, ... (overriden by cmdline value)
-# - define TOOLCHAIN_CFLAGS variable (overriden by cmdline value)
-# - define TOOLCHAIN_LDFLAGS variable (overriden by cmdline value)
-# - define TOOLCHAIN_ASFLAGS variable (overriden by cmdline value)
+# - define CC, LD, AR, AS, ... (overridden by cmdline value)
+# - define TOOLCHAIN_CFLAGS variable (overridden by cmdline value)
+# - define TOOLCHAIN_LDFLAGS variable (overridden by cmdline value)
+# - define TOOLCHAIN_ASFLAGS variable (overridden by cmdline value)
#
# Warning: we do not use CROSS environment variable as icc is mainly a
diff --git a/test/test/test_cmdline_cirbuf.c b/test/test/test_cmdline_cirbuf.c
index 87f83cc6d..2c321457d 100644
--- a/test/test/test_cmdline_cirbuf.c
+++ b/test/test/test_cmdline_cirbuf.c
@@ -45,7 +45,7 @@
#define CIRBUF_STR_HEAD " HEAD"
#define CIRBUF_STR_TAIL "TAIL"
-/* miscelaneous tests - they make bullseye happy */
+/* miscellaneous tests - they make bullseye happy */
static int
test_cirbuf_string_misc(void)
{
diff --git a/test/test/test_distributor.c b/test/test/test_distributor.c
index 890a8526b..9fae688b8 100644
--- a/test/test/test_distributor.c
+++ b/test/test/test_distributor.c
@@ -112,7 +112,7 @@ handle_work(void *arg)
/* do basic sanity testing of the distributor. This test tests the following:
* - send 32 packets through distributor with the same tag and ensure they
* all go to the one worker
- * - send 32 packets throught the distributor with two different tags and
+ * - send 32 packets through the distributor with two different tags and
* verify that they go equally to two different workers.
* - send 32 packets with different tags through the distributors and
* just verify we get all packets back.
diff --git a/test/test/test_eal_flags.c b/test/test/test_eal_flags.c
index 91b40664b..4afb36432 100644
--- a/test/test/test_eal_flags.c
+++ b/test/test/test_eal_flags.c
@@ -824,7 +824,7 @@ test_dom0_misc_flags(void)
/* check that some general flags don't prevent things from working.
* All cases, apart from the first, app should run.
- * No futher testing of output done.
+ * No further testing of output done.
*/
/* sanity check - failure with invalid option */
const char *argv0[] = {prgname, prefix, mp_flag, "-c", "1", "--invalid-opt"};
@@ -932,7 +932,7 @@ test_misc_flags(void)
/* check that some general flags don't prevent things from working.
* All cases, apart from the first, app should run.
- * No futher testing of output done.
+ * No further testing of output done.
*/
/* sanity check - failure with invalid option */
const char *argv0[] = {prgname, prefix, mp_flag, "-c", "1", "--invalid-opt"};
diff --git a/test/test/test_func_reentrancy.c b/test/test/test_func_reentrancy.c
index baa01ffc2..95440dd10 100644
--- a/test/test/test_func_reentrancy.c
+++ b/test/test/test_func_reentrancy.c
@@ -138,7 +138,7 @@ ring_create_lookup(__attribute__((unused)) void *arg)
return -1;
}
- /* verify all ring created sucessful */
+ /* verify all rings were created successfully */
for (i = 0; i < MAX_ITER_TIMES; i++) {
snprintf(ring_name, sizeof(ring_name), "fr_test_%d_%d", lcore_self, i);
if (rte_ring_lookup(ring_name) == NULL)
@@ -192,7 +192,7 @@ mempool_create_lookup(__attribute__((unused)) void *arg)
return -1;
}
- /* verify all ring created sucessful */
+ /* verify all mempools were created successfully */
for (i = 0; i < MAX_ITER_TIMES; i++) {
snprintf(mempool_name, sizeof(mempool_name), "fr_test_%d_%d", lcore_self, i);
if (rte_mempool_lookup(mempool_name) == NULL)
diff --git a/test/test/test_hash.c b/test/test/test_hash.c
index 2c87efe69..2fafc44b0 100644
--- a/test/test/test_hash.c
+++ b/test/test/test_hash.c
@@ -1030,7 +1030,7 @@ static int test_hash_creation_with_bad_parameters(void)
handle = rte_hash_create(NULL);
if (handle != NULL) {
rte_hash_free(handle);
- printf("Impossible creating hash sucessfully without any parameter\n");
+ printf("Impossible creating hash successfully without any parameter\n");
return -1;
}
@@ -1040,7 +1040,7 @@ static int test_hash_creation_with_bad_parameters(void)
handle = rte_hash_create(¶ms);
if (handle != NULL) {
rte_hash_free(handle);
- printf("Impossible creating hash sucessfully with entries in parameter exceeded\n");
+ printf("Impossible creating hash successfully with entries in parameter exceeded\n");
return -1;
}
@@ -1050,7 +1050,7 @@ static int test_hash_creation_with_bad_parameters(void)
handle = rte_hash_create(¶ms);
if (handle != NULL) {
rte_hash_free(handle);
- printf("Impossible creating hash sucessfully if entries less than bucket_entries in parameter\n");
+ printf("Impossible creating hash successfully if entries less than bucket_entries in parameter\n");
return -1;
}
@@ -1060,7 +1060,7 @@ static int test_hash_creation_with_bad_parameters(void)
handle = rte_hash_create(¶ms);
if (handle != NULL) {
rte_hash_free(handle);
- printf("Impossible creating hash sucessfully if key_len in parameter is zero\n");
+ printf("Impossible creating hash successfully if key_len in parameter is zero\n");
return -1;
}
@@ -1070,7 +1070,7 @@ static int test_hash_creation_with_bad_parameters(void)
handle = rte_hash_create(¶ms);
if (handle != NULL) {
rte_hash_free(handle);
- printf("Impossible creating hash sucessfully with invalid socket\n");
+ printf("Impossible creating hash successfully with invalid socket\n");
return -1;
}
diff --git a/test/test/test_interrupts.c b/test/test/test_interrupts.c
index e0229e5ee..e0cd8a82f 100644
--- a/test/test/test_interrupts.c
+++ b/test/test/test_interrupts.c
@@ -409,7 +409,7 @@ test_interrupt(void)
printf("Check unknown valid interrupt full path\n");
if (test_interrupt_full_path_check(TEST_INTERRUPT_HANDLE_VALID) < 0) {
- printf("failure occured during checking unknown valid "
+ printf("failure occurred during checking unknown valid "
"interrupt full path\n");
goto out;
}
@@ -417,7 +417,7 @@ test_interrupt(void)
printf("Check valid UIO interrupt full path\n");
if (test_interrupt_full_path_check(TEST_INTERRUPT_HANDLE_VALID_UIO)
< 0) {
- printf("failure occured during checking valid UIO interrupt "
+ printf("failure occurred during checking valid UIO interrupt "
"full path\n");
goto out;
}
@@ -425,7 +425,7 @@ test_interrupt(void)
printf("Check valid alarm interrupt full path\n");
if (test_interrupt_full_path_check(TEST_INTERRUPT_HANDLE_VALID_ALARM)
< 0) {
- printf("failure occured during checking valid alarm "
+ printf("failure occurred during checking valid alarm "
"interrupt full path\n");
goto out;
}
diff --git a/test/test/test_link_bonding.c b/test/test/test_link_bonding.c
index 52d2d052f..57830391a 100644
--- a/test/test/test_link_bonding.c
+++ b/test/test/test_link_bonding.c
@@ -1375,7 +1375,7 @@ test_roundrobin_tx_burst(void)
TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
BONDING_MODE_ROUND_ROBIN, 0, 2, 1),
- "Failed to intialise bonded device");
+ "Failed to initialise bonded device");
burst_size = 20 * test_params->bonded_slave_count;
@@ -1464,7 +1464,7 @@ test_roundrobin_tx_burst_slave_tx_fail(void)
TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
BONDING_MODE_ROUND_ROBIN, 0,
TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
- "Failed to intialise bonded device");
+ "Failed to initialise bonded device");
/* Generate test bursts of packets to transmit */
TEST_ASSERT_EQUAL(generate_test_burst(pkt_burst,
@@ -2951,7 +2951,7 @@ test_balance_tx_burst_slave_tx_fail(void)
TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
BONDING_MODE_BALANCE, 0,
TEST_BAL_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
- "Failed to intialise bonded device");
+ "Failed to initialise bonded device");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER2),
@@ -3082,7 +3082,7 @@ test_balance_rx_burst(void)
/* Initialize bonded device with 4 slaves in round robin mode */
TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
BONDING_MODE_BALANCE, 0, 3, 1),
- "Failed to intialise bonded device");
+ "Failed to initialise bonded device");
/* Generate test bursts of packets to transmit */
for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
@@ -3161,7 +3161,7 @@ test_balance_verify_promiscuous_enable_disable(void)
/* Initialize bonded device with 4 slaves in round robin mode */
TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
BONDING_MODE_BALANCE, 0, 4, 1),
- "Failed to intialise bonded device");
+ "Failed to initialise bonded device");
rte_eth_promiscuous_enable(test_params->bonded_port_id);
@@ -3204,7 +3204,7 @@ test_balance_verify_mac_assignment(void)
/* Initialize bonded device with 2 slaves in active backup mode */
TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
BONDING_MODE_BALANCE, 0, 2, 1),
- "Failed to intialise bonded device");
+ "Failed to initialise bonded device");
/* Verify that bonded MACs is that of first slave and that the other slave
* MAC hasn't been changed */
@@ -3321,7 +3321,7 @@ test_balance_verify_slave_link_status_change_behaviour(void)
/* Initialize bonded device with 4 slaves in round robin mode */
TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
BONDING_MODE_BALANCE, 0, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT, 1),
- "Failed to intialise bonded device");
+ "Failed to initialise bonded device");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER2),
@@ -3476,7 +3476,7 @@ test_broadcast_tx_burst(void)
TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
BONDING_MODE_BROADCAST, 0, 2, 1),
- "Failed to intialise bonded device");
+ "Failed to initialise bonded device");
initialize_eth_header(test_params->pkt_eth_hdr,
(struct ether_addr *)src_mac, (struct ether_addr *)dst_mac_0,
@@ -3557,7 +3557,7 @@ test_broadcast_tx_burst_slave_tx_fail(void)
TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
BONDING_MODE_BROADCAST, 0,
TEST_BCAST_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
- "Failed to intialise bonded device");
+ "Failed to initialise bonded device");
/* Generate test bursts for transmission */
TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst,
@@ -3675,7 +3675,7 @@ test_broadcast_rx_burst(void)
/* Initialize bonded device with 4 slaves in round robin mode */
TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
BONDING_MODE_BROADCAST, 0, 3, 1),
- "Failed to intialise bonded device");
+ "Failed to initialise bonded device");
/* Generate test bursts of packets to transmit */
for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
@@ -3754,7 +3754,7 @@ test_broadcast_verify_promiscuous_enable_disable(void)
/* Initialize bonded device with 4 slaves in round robin mode */
TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
BONDING_MODE_BROADCAST, 0, 4, 1),
- "Failed to intialise bonded device");
+ "Failed to initialise bonded device");
rte_eth_promiscuous_enable(test_params->bonded_port_id);
@@ -3800,7 +3800,7 @@ test_broadcast_verify_mac_assignment(void)
/* Initialize bonded device with 4 slaves in round robin mode */
TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
BONDING_MODE_BROADCAST, 0, 4, 1),
- "Failed to intialise bonded device");
+ "Failed to initialise bonded device");
/* Verify that all MACs are the same as first slave added to bonded
* device */
@@ -3890,7 +3890,7 @@ test_broadcast_verify_slave_link_status_change_behaviour(void)
/* Initialize bonded device with 4 slaves in round robin mode */
TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
BONDING_MODE_BROADCAST, 0, BROADCAST_LINK_STATUS_NUM_OF_SLAVES,
- 1), "Failed to intialise bonded device");
+ 1), "Failed to initialise bonded device");
/* Verify Current Slaves Count /Active Slave Count is */
slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
@@ -4372,7 +4372,7 @@ test_tlb_verify_mac_assignment(void)
/* Set explicit MAC address */
TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_set(
test_params->bonded_port_id, (struct ether_addr *)bonded_mac),
- "failed to set MAC addres");
+ "failed to set MAC address");
rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
diff --git a/test/test/test_link_bonding_mode4.c b/test/test/test_link_bonding_mode4.c
index 106ec624f..c6745e891 100644
--- a/test/test/test_link_bonding_mode4.c
+++ b/test/test/test_link_bonding_mode4.c
@@ -821,7 +821,7 @@ test_mode4_rx(void)
TEST_ASSERT(cnt[0] == expected_pkts_cnt / 2 &&
cnt[1] == expected_pkts_cnt / 2,
"Expected %u packets with the same MAC and %u with different but "
- "got %u with the same and %u with diffrent MAC",
+ "got %u with the same and %u with different MAC",
expected_pkts_cnt / 2, expected_pkts_cnt / 2, cnt[1], cnt[0]);
} else if (retval > 0)
free_pkts(pkts, retval);
@@ -925,7 +925,7 @@ test_mode4_rx(void)
break;
}
- TEST_ASSERT(j < 5, "Failed to agregate slave after link up");
+ TEST_ASSERT(j < 5, "Failed to aggregate slave after link up");
}
remove_slaves_and_stop_bonded_device();
@@ -1263,7 +1263,7 @@ test_mode4_expired(void)
retval = bond_handshake_reply(slave);
TEST_ASSERT(retval >= 0, "Handshake failed");
- /* Remove replay for slave that supose to be expired. */
+ /* Remove replay for slave that is supposed to expire. */
if (slave == exp_slave) {
while (rte_ring_count(slave->rx_queue) > 0) {
void *pkt = NULL;
diff --git a/test/test/test_malloc.c b/test/test/test_malloc.c
index 0673d85b7..fcd8b8639 100644
--- a/test/test/test_malloc.c
+++ b/test/test/test_malloc.c
@@ -738,7 +738,7 @@ test_malloc_bad_params(void)
return -1;
}
-/* Check if memory is avilable on a specific socket */
+/* Check if memory is available on a specific socket */
static int
is_mem_on_socket(int32_t socket)
{
diff --git a/test/test/test_mbuf.c b/test/test/test_mbuf.c
index d3ea812e5..758402d6b 100644
--- a/test/test/test_mbuf.c
+++ b/test/test/test_mbuf.c
@@ -674,7 +674,7 @@ test_pktmbuf_free_segment(void)
/*
* Stress test for rte_mbuf atomic refcnt.
* Implies that RTE_MBUF_REFCNT_ATOMIC is defined.
- * For more efficency, recomended to run with RTE_LIBRTE_MBUF_DEBUG defined.
+ * For more efficiency, it is recommended to run with RTE_LIBRTE_MBUF_DEBUG defined.
*/
#ifdef RTE_MBUF_REFCNT_ATOMIC
diff --git a/test/test/test_spinlock.c b/test/test/test_spinlock.c
index 2d94eecc2..b86cd887c 100644
--- a/test/test/test_spinlock.c
+++ b/test/test/test_spinlock.c
@@ -202,7 +202,7 @@ test_spinlock_perf(void)
/*
* Use rte_spinlock_trylock() to trylock a spinlock object,
- * If it could not lock the object sucessfully, it would
+ * If it could not lock the object successfully, it would
* return immediately and the variable of "count" would be
* increased by one per times. the value of "count" could be
* checked as the result later.
--
2.13.0
^ permalink raw reply [relevance 6%]
* Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
2017-06-06 13:19 0% ` Ananyev, Konstantin
@ 2017-06-06 14:56 0% ` Bruce Richardson
2017-06-08 12:45 3% ` Olivier Matz
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2017-06-06 14:56 UTC (permalink / raw)
To: Ananyev, Konstantin; +Cc: Verkamp, Daniel, dev
On Tue, Jun 06, 2017 at 02:19:21PM +0100, Ananyev, Konstantin wrote:
>
>
> > -----Original Message-----
> > From: Richardson, Bruce
> > Sent: Tuesday, June 6, 2017 1:42 PM
> > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > Cc: Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> >
> > On Tue, Jun 06, 2017 at 10:59:59AM +0100, Ananyev, Konstantin wrote:
> > >
> > > > >
> > > > >
> > > > >
> > > > > >
> > > > > > The PROD/CONS_ALIGN values on x86-64 are set to 2 cache lines, so members
> > > > > of struct rte_ring are 128 byte aligned,
> > > > > >and therefore the whole struct needs 128-byte alignment according to the ABI
> > > > > so that the 128-byte alignment of the fields can be guaranteed.
> > > > >
> > > > > Ah ok, missed the fact that rte_ring is 128B aligned these days.
> > > > > BTW, I probably missed the initial discussion, but what was the reason for that?
> > > > > Konstantin
> > > >
> > > > I don't know why PROD_ALIGN/CONS_ALIGN use 128 byte alignment; it seems unnecessary if the cache line is only 64 bytes. An
> > alternate
> > > > fix would be to just use cache line alignment for these fields (since memzones are already cache line aligned).
> > >
> > > Yes, had the same thought.
> > >
> > > > Maybe there is some deeper reason for the >= 128-byte alignment logic in rte_ring.h?
> > >
> > > Might be, would be good to hear the opinion of the author of that change.
> >
> > It gives improved performance for core-2-core transfer.
>
> You mean empty cache-line(s) after prod/cons, correct?
> That's ok, but why can't we keep them and the whole rte_ring aligned on cache-line boundaries?
> Something like that:
> struct rte_ring {
> ...
> struct rte_ring_headtail prod __rte_cache_aligned;
> EMPTY_CACHE_LINE __rte_cache_aligned;
> struct rte_ring_headtail cons __rte_cache_aligned;
> EMPTY_CACHE_LINE __rte_cache_aligned;
> };
>
> Konstantin
Sure. That should probably work too.
/Bruce
^ permalink raw reply [relevance 0%]
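The layout suggested in the exchange above can be written out as compilable C. A minimal sketch for reference, with invented names (headtail_sketch, ring_sketch and the pad members), assuming __rte_cache_aligned and RTE_CACHE_LINE_SIZE are available from rte_memory.h as in DPDK 17.05:

#include <stdint.h>
#include <rte_memory.h>	/* assumed: __rte_cache_aligned, RTE_CACHE_LINE_SIZE */

struct headtail_sketch {
	volatile uint32_t head;	/* producer/consumer head index */
	volatile uint32_t tail;	/* producer/consumer tail index */
};

struct ring_sketch {
	/* ... name, flags, size and mask would precede these ... */
	struct headtail_sketch prod __rte_cache_aligned;
	/* deliberately unused cache line after prod */
	char pad_prod[RTE_CACHE_LINE_SIZE] __rte_cache_aligned;
	struct headtail_sketch cons __rte_cache_aligned;
	/* deliberately unused cache line after cons */
	char pad_cons[RTE_CACHE_LINE_SIZE] __rte_cache_aligned;
};

This keeps one free cache line between prod and cons to avoid false sharing, while the alignment of the struct itself stays at the single cache-line size that rte_memzone_reserve() already guarantees.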
* Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
2017-06-06 12:42 0% ` Bruce Richardson
@ 2017-06-06 13:19 0% ` Ananyev, Konstantin
2017-06-06 14:56 0% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2017-06-06 13:19 UTC (permalink / raw)
To: Richardson, Bruce; +Cc: Verkamp, Daniel, dev
> -----Original Message-----
> From: Richardson, Bruce
> Sent: Tuesday, June 6, 2017 1:42 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Cc: Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
>
> On Tue, Jun 06, 2017 at 10:59:59AM +0100, Ananyev, Konstantin wrote:
> >
> > > >
> > > >
> > > >
> > > > >
> > > > > The PROD/CONS_ALIGN values on x86-64 are set to 2 cache lines, so members
> > > > of struct rte_ring are 128 byte aligned,
> > > > >and therefore the whole struct needs 128-byte alignment according to the ABI
> > > > so that the 128-byte alignment of the fields can be guaranteed.
> > > >
> > > > Ah ok, missed the fact that rte_ring is 128B aligned these days.
> > > > BTW, I probably missed the initial discussion, but what was the reason for that?
> > > > Konstantin
> > >
> > > I don't know why PROD_ALIGN/CONS_ALIGN use 128 byte alignment; it seems unnecessary if the cache line is only 64 bytes. An
> alternate
> > > fix would be to just use cache line alignment for these fields (since memzones are already cache line aligned).
> >
> > Yes, had the same thought.
> >
> > > Maybe there is some deeper reason for the >= 128-byte alignment logic in rte_ring.h?
> >
> > Might be, would be good to hear the opinion of the author of that change.
>
> It gives improved performance for core-2-core transfer.
You mean empty cache-line(s) after prod/cons, correct?
That's ok, but why can't we keep them and the whole rte_ring aligned on cache-line boundaries?
Something like that:
struct rte_ring {
...
struct rte_ring_headtail prod __rte_cache_aligned;
EMPTY_CACHE_LINE __rte_cache_aligned;
struct rte_ring_headtail cons __rte_cache_aligned;
EMPTY_CACHE_LINE __rte_cache_aligned;
};
Konstantin
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
2017-06-06 9:59 0% ` Ananyev, Konstantin
@ 2017-06-06 12:42 0% ` Bruce Richardson
2017-06-06 13:19 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2017-06-06 12:42 UTC (permalink / raw)
To: Ananyev, Konstantin; +Cc: Verkamp, Daniel, dev
On Tue, Jun 06, 2017 at 10:59:59AM +0100, Ananyev, Konstantin wrote:
>
> > >
> > >
> > >
> > > >
> > > > The PROD/CONS_ALIGN values on x86-64 are set to 2 cache lines, so members
> > > of struct rte_ring are 128 byte aligned,
> > > >and therefore the whole struct needs 128-byte alignment according to the ABI
> > > so that the 128-byte alignment of the fields can be guaranteed.
> > >
> > > Ah ok, missed the fact that rte_ring is 128B aligned these days.
> > > BTW, I probably missed the initial discussion, but what was the reason for that?
> > > Konstantin
> >
> > I don't know why PROD_ALIGN/CONS_ALIGN use 128 byte alignment; it seems unnecessary if the cache line is only 64 bytes. An alternate
> > fix would be to just use cache line alignment for these fields (since memzones are already cache line aligned).
>
> Yes, had the same thought.
>
> > Maybe there is some deeper reason for the >= 128-byte alignment logic in rte_ring.h?
>
> > Might be, would be good to hear the opinion of the author of that change.
It gives improved performance for core-2-core transfer.
/Bruce
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
2017-06-05 16:21 0% ` Verkamp, Daniel
@ 2017-06-06 9:59 0% ` Ananyev, Konstantin
2017-06-06 12:42 0% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2017-06-06 9:59 UTC (permalink / raw)
To: Verkamp, Daniel, dev; +Cc: Richardson, Bruce
> >
> >
> >
> > >
> > > The PROD/CONS_ALIGN values on x86-64 are set to 2 cache lines, so members
> > of struct rte_ring are 128 byte aligned,
> > >and therefore the whole struct needs 128-byte alignment according to the ABI
> > so that the 128-byte alignment of the fields can be guaranteed.
> >
> > Ah ok, missed the fact that rte_ring is 128B aligned these days.
> > BTW, I probably missed the initial discussion, but what was the reason for that?
> > Konstantin
>
> I don't know why PROD_ALIGN/CONS_ALIGN use 128 byte alignment; it seems unnecessary if the cache line is only 64 bytes. An alternate
> fix would be to just use cache line alignment for these fields (since memzones are already cache line aligned).
Yes, had the same thought.
> Maybe there is some deeper reason for the >= 128-byte alignment logic in rte_ring.h?
Might be, would be good to hear the opinion of the author of that change.
Thanks
Konstantin
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
2017-06-03 10:00 0% ` Ananyev, Konstantin
@ 2017-06-05 16:21 0% ` Verkamp, Daniel
2017-06-06 9:59 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Verkamp, Daniel @ 2017-06-05 16:21 UTC (permalink / raw)
To: Ananyev, Konstantin, dev; +Cc: Richardson, Bruce
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Saturday, June 3, 2017 3:00 AM
> To: Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> Cc: Richardson, Bruce <bruce.richardson@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
>
>
>
> >
> > The PROD/CONS_ALIGN values on x86-64 are set to 2 cache lines, so members
> of struct rte_ring are 128 byte aligned,
> >and therefore the whole struct needs 128-byte alignment according to the ABI
> so that the 128-byte alignment of the fields can be guaranteed.
>
> Ah ok, missed the fact that rte_ring is 128B aligned these days.
> BTW, I probably missed the initial discussion, but what was the reason for that?
> Konstantin
I don't know why PROD_ALIGN/CONS_ALIGN use 128 byte alignment; it seems unnecessary if the cache line is only 64 bytes. An alternate fix would be to just use cache line alignment for these fields (since memzones are already cache line aligned). Maybe there is some deeper reason for the >= 128-byte alignment logic in rte_ring.h?
Thanks,
-- Daniel
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
2017-06-02 22:24 3% ` Verkamp, Daniel
@ 2017-06-03 10:00 0% ` Ananyev, Konstantin
2017-06-05 16:21 0% ` Verkamp, Daniel
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2017-06-03 10:00 UTC (permalink / raw)
To: Verkamp, Daniel, dev; +Cc: Richardson, Bruce
>
> The PROD/CONS_ALIGN values on x86-64 are set to 2 cache lines, so members of struct rte_ring are 128 byte aligned,
>and therefore the whole struct needs 128-byte alignment according to the ABI so that the 128-byte alignment of the fields can be guaranteed.
Ah ok, missed the fact that rte_ring is 128B aligned these days.
BTW, I probably missed the initial discussion, but what was the reason for that?
Konstantin
>
> If the allocation is only 64-byte aligned, the beginning of the prod and cons fields may not actually be 128-byte aligned (but we've told the
> compiler that they are using the __rte_aligned macro). Accessing these fields when they are misaligned will work in practice on x86 (as long
> as the compiler doesn't use e.g. aligned SSE instructions), but it is undefined behavior according to the C standard, and UBSan (-
> fsanitize=undefined) checks for this.
>
> Thanks,
> -- Daniel Verkamp
>
> > -----Original Message-----
> > From: Ananyev, Konstantin
> > Sent: Friday, June 2, 2017 1:52 PM
> > To: Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> > Cc: Verkamp, Daniel <daniel.verkamp@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> >
> >
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Daniel Verkamp
> > > Sent: Friday, June 2, 2017 9:12 PM
> > > To: dev@dpdk.org
> > > Cc: Verkamp, Daniel <daniel.verkamp@intel.com>
> > > Subject: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> > >
> > > rte_memzone_reserve() provides cache line alignment, but
> > > struct rte_ring may require more than cache line alignment: on x86-64,
> > > it needs 128-byte alignment due to PROD_ALIGN and CONS_ALIGN, which are
> > > 128 bytes, but cache line size is 64 bytes.
> >
> > Hmm but what for?
> > I understand we need our rte_ring cache-line aligned,
> > but why do you want it 2 cache-line aligned?
> > Konstantin
> >
> > >
> > > Fixes runtime warnings with UBSan enabled.
> > >
> > > Fixes: d9f0d3a1ffd4 ("ring: remove split cacheline build setting")
> > >
> > > Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
> > > ---
> > >
> > > v2: fixed checkpatch warnings
> > >
> > > lib/librte_ring/rte_ring.c | 3 ++-
> > > 1 file changed, 2 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c
> > > index 5f98c33..6f58faf 100644
> > > --- a/lib/librte_ring/rte_ring.c
> > > +++ b/lib/librte_ring/rte_ring.c
> > > @@ -189,7 +189,8 @@ rte_ring_create(const char *name, unsigned count, int
> > socket_id,
> > > /* reserve a memory zone for this ring. If we can't get rte_config or
> > > * we are secondary process, the memzone_reserve function will set
> > > * rte_errno for us appropriately - hence no check in this this function */
> > > - mz = rte_memzone_reserve(mz_name, ring_size, socket_id, mz_flags);
> > > + mz = rte_memzone_reserve_aligned(mz_name, ring_size, socket_id,
> > > + mz_flags, __alignof__(*r));
> > > if (mz != NULL) {
> > > r = mz->addr;
> > > /* no need to check return value here, we already checked the
> > > --
> > > 2.9.4
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
@ 2017-06-02 22:24 3% ` Verkamp, Daniel
2017-06-03 10:00 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Verkamp, Daniel @ 2017-06-02 22:24 UTC (permalink / raw)
To: Ananyev, Konstantin, dev
The PROD/CONS_ALIGN values on x86-64 are set to 2 cache lines, so members of struct rte_ring are 128-byte aligned, and therefore the whole struct needs 128-byte alignment according to the ABI so that the 128-byte alignment of the fields can be guaranteed.
If the allocation is only 64-byte aligned, the beginning of the prod and cons fields may not actually be 128-byte aligned (but we've told the compiler that they are using the __rte_aligned macro). Accessing these fields when they are misaligned will work in practice on x86 (as long as the compiler doesn't use e.g. aligned SSE instructions), but it is undefined behavior according to the C standard, and UBSan (-fsanitize=undefined) checks for this.
Thanks,
-- Daniel Verkamp
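The misaligned access described above can be reproduced in a few lines. A minimal sketch with an invented struct (not DPDK code); build with gcc -fsanitize=undefined and run to see the alignment report:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct overaligned {
	uint32_t head __attribute__((aligned(128)));
};

int main(void)
{
	/* 64 spare bytes cover the +64 offset applied below. */
	unsigned char *buf = malloc(sizeof(struct overaligned) + 64);
	if (buf == NULL)
		return 1;
	/* Setting bit 6 guarantees the address is not 128-byte aligned,
	 * mimicking a cache-line-aligned memzone that holds a
	 * 128-byte-aligned type. The store below is the UB UBSan flags. */
	struct overaligned *o = (struct overaligned *)((uintptr_t)buf | 64);
	o->head = 1;
	printf("%u\n", o->head);
	free(buf);
	return 0;
}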
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Friday, June 2, 2017 1:52 PM
> To: Verkamp, Daniel <daniel.verkamp@intel.com>; dev@dpdk.org
> Cc: Verkamp, Daniel <daniel.verkamp@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
>
>
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Daniel Verkamp
> > Sent: Friday, June 2, 2017 9:12 PM
> > To: dev@dpdk.org
> > Cc: Verkamp, Daniel <daniel.verkamp@intel.com>
> > Subject: [dpdk-dev] [PATCH v2] ring: use aligned memzone allocation
> >
> > rte_memzone_reserve() provides cache line alignment, but
> > struct rte_ring may require more than cache line alignment: on x86-64,
> > it needs 128-byte alignment due to PROD_ALIGN and CONS_ALIGN, which are
> > 128 bytes, but cache line size is 64 bytes.
>
> Hmm but what for?
> I understand we need our rte_ring cache-line aligned,
> but why do you want it 2 cache-line aligned?
> Konstantin
>
> >
> > Fixes runtime warnings with UBSan enabled.
> >
> > Fixes: d9f0d3a1ffd4 ("ring: remove split cacheline build setting")
> >
> > Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
> > ---
> >
> > v2: fixed checkpatch warnings
> >
> > lib/librte_ring/rte_ring.c | 3 ++-
> > 1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c
> > index 5f98c33..6f58faf 100644
> > --- a/lib/librte_ring/rte_ring.c
> > +++ b/lib/librte_ring/rte_ring.c
> > @@ -189,7 +189,8 @@ rte_ring_create(const char *name, unsigned count, int
> socket_id,
> > /* reserve a memory zone for this ring. If we can't get rte_config or
> > * we are secondary process, the memzone_reserve function will set
> > * rte_errno for us appropriately - hence no check in this this function */
> > - mz = rte_memzone_reserve(mz_name, ring_size, socket_id, mz_flags);
> > + mz = rte_memzone_reserve_aligned(mz_name, ring_size, socket_id,
> > + mz_flags, __alignof__(*r));
> > if (mz != NULL) {
> > r = mz->addr;
> > /* no need to check return value here, we already checked the
> > --
> > 2.9.4
^ permalink raw reply [relevance 3%]
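The fix quoted in the message above relies on __alignof__(*r) evaluating to the declared alignment of struct rte_ring, because a struct inherits the strictest alignment of its members. A stand-alone sketch with an invented struct (__alignof__ is a GCC/Clang extension):

#include <stdio.h>

struct demo {
	int prod __attribute__((aligned(128)));
	int cons __attribute__((aligned(128)));
};

int main(void)
{
	struct demo d;
	/* Prints 128: the member attributes propagate to the type, so
	 * rte_memzone_reserve_aligned(..., __alignof__(*r)) requests
	 * exactly what the type demands on any architecture. */
	printf("%zu\n", (size_t)__alignof__(d));
	return 0;
}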
* [dpdk-dev] [PATCH 3/8] pmdinfogen: move to drivers subdirectory
@ 2017-06-01 10:14 1% ` Gaetan Rivet
1 sibling, 0 replies; 200+ results
From: Gaetan Rivet @ 2017-06-01 10:14 UTC (permalink / raw)
To: dev; +Cc: Gaetan Rivet
pmdinfogen has a dependency on the PCI bus. The latter must be built
first.
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
---
GNUmakefile | 2 +-
MAINTAINERS | 2 +-
buildtools/Makefile | 36 ----
buildtools/pmdinfogen/Makefile | 47 -----
buildtools/pmdinfogen/pmdinfogen.c | 422 -------------------------------------
buildtools/pmdinfogen/pmdinfogen.h | 125 -----------
drivers/Makefile | 4 +-
drivers/pmdinfogen/Makefile | 47 +++++
drivers/pmdinfogen/pmdinfogen.c | 422 +++++++++++++++++++++++++++++++++++++
drivers/pmdinfogen/pmdinfogen.h | 125 +++++++++++
10 files changed, 599 insertions(+), 633 deletions(-)
delete mode 100644 buildtools/Makefile
delete mode 100644 buildtools/pmdinfogen/Makefile
delete mode 100644 buildtools/pmdinfogen/pmdinfogen.c
delete mode 100644 buildtools/pmdinfogen/pmdinfogen.h
create mode 100644 drivers/pmdinfogen/Makefile
create mode 100644 drivers/pmdinfogen/pmdinfogen.c
create mode 100644 drivers/pmdinfogen/pmdinfogen.h
diff --git a/GNUmakefile b/GNUmakefile
index 45b7fbb..c292646 100644
--- a/GNUmakefile
+++ b/GNUmakefile
@@ -40,7 +40,7 @@ export RTE_SDK
# directory list
#
-ROOTDIRS-y := buildtools lib drivers app
+ROOTDIRS-y := lib drivers app
ROOTDIRS- := test
include $(RTE_SDK)/mk/rte.sdkroot.mk
diff --git a/MAINTAINERS b/MAINTAINERS
index afb4cab..9f9b81b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -72,7 +72,7 @@ F: doc/guides/rel_notes/deprecation.rst
F: devtools/validate-abi.sh
Driver information
-F: buildtools/pmdinfogen/
+F: drivers/pmdinfogen/
F: usertools/dpdk-pmdinfo.py
F: doc/guides/tools/pmdinfo.rst
diff --git a/buildtools/Makefile b/buildtools/Makefile
deleted file mode 100644
index 35a42ff..0000000
--- a/buildtools/Makefile
+++ /dev/null
@@ -1,36 +0,0 @@
-# BSD LICENSE
-#
-# Copyright(c) 2016 Neil Horman. All rights reserved.
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in
-# the documentation and/or other materials provided with the
-# distribution.
-# * Neither the name of Intel Corporation nor the names of its
-# contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-include $(RTE_SDK)/mk/rte.vars.mk
-
-DIRS-y += pmdinfogen
-
-include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/buildtools/pmdinfogen/Makefile b/buildtools/pmdinfogen/Makefile
deleted file mode 100644
index bf07b6f..0000000
--- a/buildtools/pmdinfogen/Makefile
+++ /dev/null
@@ -1,47 +0,0 @@
-# BSD LICENSE
-#
-# Copyright(c) 2016 Neil Horman. All rights reserved.
-# All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions
-# are met:
-#
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in
-# the documentation and/or other materials provided with the
-# distribution.
-# * Neither the name of Intel Corporation nor the names of its
-# contributors may be used to endorse or promote products derived
-# from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-include $(RTE_SDK)/mk/rte.vars.mk
-
-#
-# library name
-#
-HOSTAPP = dpdk-pmdinfogen
-
-#
-# all sources are stored in SRCS-y
-#
-SRCS-y += pmdinfogen.c
-
-HOST_CFLAGS += $(WERROR_FLAGS) -g
-HOST_CFLAGS += -I$(RTE_OUTPUT)/include
-
-include $(RTE_SDK)/mk/rte.hostapp.mk
diff --git a/buildtools/pmdinfogen/pmdinfogen.c b/buildtools/pmdinfogen/pmdinfogen.c
deleted file mode 100644
index ba1a12e..0000000
--- a/buildtools/pmdinfogen/pmdinfogen.c
+++ /dev/null
@@ -1,422 +0,0 @@
-/* Postprocess pmd object files to export hw support
- *
- * Copyright 2016 Neil Horman <nhorman@tuxdriver.com>
- * Based in part on modpost.c from the linux kernel
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License V2, incorporated herein by reference.
- *
- */
-
-#define _GNU_SOURCE
-#include <stdio.h>
-#include <ctype.h>
-#include <string.h>
-#include <limits.h>
-#include <stdbool.h>
-#include <errno.h>
-#include <libgen.h>
-
-#include <rte_common.h>
-#include "pmdinfogen.h"
-
-#ifdef RTE_ARCH_64
-#define ADDR_SIZE 64
-#else
-#define ADDR_SIZE 32
-#endif
-
-
-static const char *sym_name(struct elf_info *elf, Elf_Sym *sym)
-{
- if (sym)
- return elf->strtab + sym->st_name;
- else
- return "(unknown)";
-}
-
-static void *grab_file(const char *filename, unsigned long *size)
-{
- struct stat st;
- void *map = MAP_FAILED;
- int fd;
-
- fd = open(filename, O_RDONLY);
- if (fd < 0)
- return NULL;
- if (fstat(fd, &st))
- goto failed;
-
- *size = st.st_size;
- map = mmap(NULL, *size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
-
-failed:
- close(fd);
- if (map == MAP_FAILED)
- return NULL;
- return map;
-}
-
-/**
- * Return a copy of the next line in a mmap'ed file.
- * spaces in the beginning of the line is trimmed away.
- * Return a pointer to a static buffer.
- **/
-static void release_file(void *file, unsigned long size)
-{
- munmap(file, size);
-}
-
-
-static void *get_sym_value(struct elf_info *info, const Elf_Sym *sym)
-{
- return RTE_PTR_ADD(info->hdr,
- info->sechdrs[sym->st_shndx].sh_offset + sym->st_value);
-}
-
-static Elf_Sym *find_sym_in_symtab(struct elf_info *info,
- const char *name, Elf_Sym *last)
-{
- Elf_Sym *idx;
- if (last)
- idx = last+1;
- else
- idx = info->symtab_start;
-
- for (; idx < info->symtab_stop; idx++) {
- const char *n = sym_name(info, idx);
- if (!strncmp(n, name, strlen(name)))
- return idx;
- }
- return NULL;
-}
-
-static int parse_elf(struct elf_info *info, const char *filename)
-{
- unsigned int i;
- Elf_Ehdr *hdr;
- Elf_Shdr *sechdrs;
- Elf_Sym *sym;
- int endian;
- unsigned int symtab_idx = ~0U, symtab_shndx_idx = ~0U;
-
- hdr = grab_file(filename, &info->size);
- if (!hdr) {
- perror(filename);
- exit(1);
- }
- info->hdr = hdr;
- if (info->size < sizeof(*hdr)) {
- /* file too small, assume this is an empty .o file */
- return 0;
- }
- /* Is this a valid ELF file? */
- if ((hdr->e_ident[EI_MAG0] != ELFMAG0) ||
- (hdr->e_ident[EI_MAG1] != ELFMAG1) ||
- (hdr->e_ident[EI_MAG2] != ELFMAG2) ||
- (hdr->e_ident[EI_MAG3] != ELFMAG3)) {
- /* Not an ELF file - silently ignore it */
- return 0;
- }
-
- if (!hdr->e_ident[EI_DATA]) {
- /* Unknown endian */
- return 0;
- }
-
- endian = hdr->e_ident[EI_DATA];
-
- /* Fix endianness in ELF header */
- hdr->e_type = TO_NATIVE(endian, 16, hdr->e_type);
- hdr->e_machine = TO_NATIVE(endian, 16, hdr->e_machine);
- hdr->e_version = TO_NATIVE(endian, 32, hdr->e_version);
- hdr->e_entry = TO_NATIVE(endian, ADDR_SIZE, hdr->e_entry);
- hdr->e_phoff = TO_NATIVE(endian, ADDR_SIZE, hdr->e_phoff);
- hdr->e_shoff = TO_NATIVE(endian, ADDR_SIZE, hdr->e_shoff);
- hdr->e_flags = TO_NATIVE(endian, 32, hdr->e_flags);
- hdr->e_ehsize = TO_NATIVE(endian, 16, hdr->e_ehsize);
- hdr->e_phentsize = TO_NATIVE(endian, 16, hdr->e_phentsize);
- hdr->e_phnum = TO_NATIVE(endian, 16, hdr->e_phnum);
- hdr->e_shentsize = TO_NATIVE(endian, 16, hdr->e_shentsize);
- hdr->e_shnum = TO_NATIVE(endian, 16, hdr->e_shnum);
- hdr->e_shstrndx = TO_NATIVE(endian, 16, hdr->e_shstrndx);
-
- sechdrs = RTE_PTR_ADD(hdr, hdr->e_shoff);
- info->sechdrs = sechdrs;
-
- /* Check if file offset is correct */
- if (hdr->e_shoff > info->size) {
- fprintf(stderr, "section header offset=%lu in file '%s' "
- "is bigger than filesize=%lu\n",
- (unsigned long)hdr->e_shoff,
- filename, info->size);
- return 0;
- }
-
- if (hdr->e_shnum == SHN_UNDEF) {
- /*
- * There are more than 64k sections,
- * read count from .sh_size.
- */
- info->num_sections = TO_NATIVE(endian, 32, sechdrs[0].sh_size);
- } else {
- info->num_sections = hdr->e_shnum;
- }
- if (hdr->e_shstrndx == SHN_XINDEX)
- info->secindex_strings =
- TO_NATIVE(endian, 32, sechdrs[0].sh_link);
- else
- info->secindex_strings = hdr->e_shstrndx;
-
- /* Fix endianness in section headers */
- for (i = 0; i < info->num_sections; i++) {
- sechdrs[i].sh_name =
- TO_NATIVE(endian, 32, sechdrs[i].sh_name);
- sechdrs[i].sh_type =
- TO_NATIVE(endian, 32, sechdrs[i].sh_type);
- sechdrs[i].sh_flags =
- TO_NATIVE(endian, 32, sechdrs[i].sh_flags);
- sechdrs[i].sh_addr =
- TO_NATIVE(endian, ADDR_SIZE, sechdrs[i].sh_addr);
- sechdrs[i].sh_offset =
- TO_NATIVE(endian, ADDR_SIZE, sechdrs[i].sh_offset);
- sechdrs[i].sh_size =
- TO_NATIVE(endian, 32, sechdrs[i].sh_size);
- sechdrs[i].sh_link =
- TO_NATIVE(endian, 32, sechdrs[i].sh_link);
- sechdrs[i].sh_info =
- TO_NATIVE(endian, 32, sechdrs[i].sh_info);
- sechdrs[i].sh_addralign =
- TO_NATIVE(endian, ADDR_SIZE, sechdrs[i].sh_addralign);
- sechdrs[i].sh_entsize =
- TO_NATIVE(endian, ADDR_SIZE, sechdrs[i].sh_entsize);
- }
- /* Find symbol table. */
- for (i = 1; i < info->num_sections; i++) {
- int nobits = sechdrs[i].sh_type == SHT_NOBITS;
-
- if (!nobits && sechdrs[i].sh_offset > info->size) {
- fprintf(stderr, "%s is truncated. "
- "sechdrs[i].sh_offset=%lu > sizeof(*hrd)=%zu\n",
- filename, (unsigned long)sechdrs[i].sh_offset,
- sizeof(*hdr));
- return 0;
- }
-
- if (sechdrs[i].sh_type == SHT_SYMTAB) {
- unsigned int sh_link_idx;
- symtab_idx = i;
- info->symtab_start = RTE_PTR_ADD(hdr,
- sechdrs[i].sh_offset);
- info->symtab_stop = RTE_PTR_ADD(hdr,
- sechdrs[i].sh_offset + sechdrs[i].sh_size);
- sh_link_idx = sechdrs[i].sh_link;
- info->strtab = RTE_PTR_ADD(hdr,
- sechdrs[sh_link_idx].sh_offset);
- }
-
- /* 32bit section no. table? ("more than 64k sections") */
- if (sechdrs[i].sh_type == SHT_SYMTAB_SHNDX) {
- symtab_shndx_idx = i;
- info->symtab_shndx_start = RTE_PTR_ADD(hdr,
- sechdrs[i].sh_offset);
- info->symtab_shndx_stop = RTE_PTR_ADD(hdr,
- sechdrs[i].sh_offset + sechdrs[i].sh_size);
- }
- }
- if (!info->symtab_start)
- fprintf(stderr, "%s has no symtab?\n", filename);
- else {
- /* Fix endianness in symbols */
- for (sym = info->symtab_start; sym < info->symtab_stop; sym++) {
- sym->st_shndx = TO_NATIVE(endian, 16, sym->st_shndx);
- sym->st_name = TO_NATIVE(endian, 32, sym->st_name);
- sym->st_value = TO_NATIVE(endian, ADDR_SIZE, sym->st_value);
- sym->st_size = TO_NATIVE(endian, ADDR_SIZE, sym->st_size);
- }
- }
-
- if (symtab_shndx_idx != ~0U) {
- Elf32_Word *p;
- if (symtab_idx != sechdrs[symtab_shndx_idx].sh_link)
- fprintf(stderr,
- "%s: SYMTAB_SHNDX has bad sh_link: %u!=%u\n",
- filename, sechdrs[symtab_shndx_idx].sh_link,
- symtab_idx);
- /* Fix endianness */
- for (p = info->symtab_shndx_start; p < info->symtab_shndx_stop;
- p++)
- *p = TO_NATIVE(endian, 32, *p);
- }
-
- return 1;
-}
-
-static void parse_elf_finish(struct elf_info *info)
-{
- struct pmd_driver *tmp, *idx = info->drivers;
- release_file(info->hdr, info->size);
- while (idx) {
- tmp = idx->next;
- free(idx);
- idx = tmp;
- }
-}
-
-struct opt_tag {
- const char *suffix;
- const char *json_id;
-};
-
-static const struct opt_tag opt_tags[] = {
- {"_param_string_export", "params"},
- {"_kmod_dep_export", "kmod"},
-};
-
-static int complete_pmd_entry(struct elf_info *info, struct pmd_driver *drv)
-{
- const char *tname;
- int i;
- char tmpsymname[128];
- Elf_Sym *tmpsym;
-
- drv->name = get_sym_value(info, drv->name_sym);
-
- for (i = 0; i < PMD_OPT_MAX; i++) {
- memset(tmpsymname, 0, 128);
- sprintf(tmpsymname, "__%s%s", drv->name, opt_tags[i].suffix);
- tmpsym = find_sym_in_symtab(info, tmpsymname, NULL);
- if (!tmpsym)
- continue;
- drv->opt_vals[i] = get_sym_value(info, tmpsym);
- }
-
- memset(tmpsymname, 0, 128);
- sprintf(tmpsymname, "__%s_pci_tbl_export", drv->name);
-
- tmpsym = find_sym_in_symtab(info, tmpsymname, NULL);
-
-
- /*
- * If this returns NULL, then this is a PMD_VDEV, because
- * it has no pci table reference
- */
- if (!tmpsym) {
- drv->pci_tbl = NULL;
- return 0;
- }
-
- tname = get_sym_value(info, tmpsym);
- tmpsym = find_sym_in_symtab(info, tname, NULL);
- if (!tmpsym)
- return -ENOENT;
-
- drv->pci_tbl = (struct rte_pci_id *)get_sym_value(info, tmpsym);
- if (!drv->pci_tbl)
- return -ENOENT;
-
- return 0;
-}
-
-static int locate_pmd_entries(struct elf_info *info)
-{
- Elf_Sym *last = NULL;
- struct pmd_driver *new;
-
- info->drivers = NULL;
-
- do {
- new = calloc(sizeof(struct pmd_driver), 1);
- new->name_sym = find_sym_in_symtab(info, "this_pmd_name", last);
- last = new->name_sym;
- if (!new->name_sym)
- free(new);
- else {
- if (complete_pmd_entry(info, new)) {
- fprintf(stderr,
- "Failed to complete pmd entry\n");
- free(new);
- } else {
- new->next = info->drivers;
- info->drivers = new;
- }
- }
- } while (last);
-
- return 0;
-}
-
-static void output_pmd_info_string(struct elf_info *info, char *outfile)
-{
- FILE *ofd;
- struct pmd_driver *drv;
- struct rte_pci_id *pci_ids;
- int idx = 0;
-
- ofd = fopen(outfile, "w+");
- if (!ofd) {
- fprintf(stderr, "Unable to open output file\n");
- return;
- }
-
- drv = info->drivers;
-
- while (drv) {
- fprintf(ofd, "const char %s_pmd_info[] __attribute__((used)) = "
- "\"PMD_INFO_STRING= {",
- drv->name);
- fprintf(ofd, "\\\"name\\\" : \\\"%s\\\", ", drv->name);
-
- for (idx = 0; idx < PMD_OPT_MAX; idx++) {
- if (drv->opt_vals[idx])
- fprintf(ofd, "\\\"%s\\\" : \\\"%s\\\", ",
- opt_tags[idx].json_id,
- drv->opt_vals[idx]);
- }
-
- pci_ids = drv->pci_tbl;
- fprintf(ofd, "\\\"pci_ids\\\" : [");
-
- while (pci_ids && pci_ids->device_id) {
- fprintf(ofd, "[%d, %d, %d, %d]",
- pci_ids->vendor_id, pci_ids->device_id,
- pci_ids->subsystem_vendor_id,
- pci_ids->subsystem_device_id);
- pci_ids++;
- if (pci_ids->device_id)
- fprintf(ofd, ",");
- else
- fprintf(ofd, " ");
- }
- fprintf(ofd, "]}\";\n");
- drv = drv->next;
- }
-
- fclose(ofd);
-}
-
-int main(int argc, char **argv)
-{
- struct elf_info info;
- int rc = 1;
-
- if (argc < 3) {
- fprintf(stderr,
- "usage: %s <object file> <c output file>\n",
- basename(argv[0]));
- exit(127);
- }
- parse_elf(&info, argv[1]);
-
- locate_pmd_entries(&info);
-
- if (info.drivers) {
- output_pmd_info_string(&info, argv[2]);
- rc = 0;
- } else {
- fprintf(stderr, "No drivers registered\n");
- }
-
- parse_elf_finish(&info);
- exit(rc);
-}
diff --git a/buildtools/pmdinfogen/pmdinfogen.h b/buildtools/pmdinfogen/pmdinfogen.h
deleted file mode 100644
index 27bab30..0000000
--- a/buildtools/pmdinfogen/pmdinfogen.h
+++ /dev/null
@@ -1,125 +0,0 @@
-
-/* Postprocess pmd object files to export hw support
- *
- * Copyright 2016 Neil Horman <nhorman@tuxdriver.com>
- * Based in part on modpost.c from the linux kernel
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License V2, incorporated herein by reference.
- *
- */
-
-#include <stdio.h>
-#include <stdlib.h>
-#include <stdarg.h>
-#include <string.h>
-#include <sys/types.h>
-#include <sys/stat.h>
-#include <sys/mman.h>
-#ifdef __linux__
-#include <endian.h>
-#else
-#include <sys/endian.h>
-#endif
-#include <fcntl.h>
-#include <unistd.h>
-#include <elf.h>
-#include <rte_config.h>
-#include <rte_pci.h>
-
-/* On BSD-alike OSes elf.h defines these according to host's word size */
-#undef ELF_ST_BIND
-#undef ELF_ST_TYPE
-#undef ELF_R_SYM
-#undef ELF_R_TYPE
-
-/*
- * Define ELF64_* to ELF_*, the latter being defined in both 32 and 64 bit
- * flavors in elf.h. This makes our code a bit more generic between arches
- * and allows us to support 32 bit code in the future should we ever want to
- */
-#ifdef RTE_ARCH_64
-#define Elf_Ehdr Elf64_Ehdr
-#define Elf_Shdr Elf64_Shdr
-#define Elf_Sym Elf64_Sym
-#define Elf_Addr Elf64_Addr
-#define Elf_Sword Elf64_Sxword
-#define Elf_Section Elf64_Half
-#define ELF_ST_BIND ELF64_ST_BIND
-#define ELF_ST_TYPE ELF64_ST_TYPE
-
-#define Elf_Rel Elf64_Rel
-#define Elf_Rela Elf64_Rela
-#define ELF_R_SYM ELF64_R_SYM
-#define ELF_R_TYPE ELF64_R_TYPE
-#else
-#define Elf_Ehdr Elf32_Ehdr
-#define Elf_Shdr Elf32_Shdr
-#define Elf_Sym Elf32_Sym
-#define Elf_Addr Elf32_Addr
-#define Elf_Sword Elf32_Sxword
-#define Elf_Section Elf32_Half
-#define ELF_ST_BIND ELF32_ST_BIND
-#define ELF_ST_TYPE ELF32_ST_TYPE
-
-#define Elf_Rel Elf32_Rel
-#define Elf_Rela Elf32_Rela
-#define ELF_R_SYM ELF32_R_SYM
-#define ELF_R_TYPE ELF32_R_TYPE
-#endif
-
-
-/*
- * Note, it seems odd that we have both a CONVERT_NATIVE and a TO_NATIVE macro
- * below. We do this because the values passed to TO_NATIVE may themselves be
- * macros and need both macros here to get expanded. Specifically its the width
- * variable we are concerned with, because it needs to get expanded prior to
- * string concatenation
- */
-#define CONVERT_NATIVE(fend, width, x) ({ \
-typeof(x) ___x; \
-if ((fend) == ELFDATA2LSB) \
- ___x = le##width##toh(x); \
-else \
- ___x = be##width##toh(x); \
- ___x; \
-})
-
-#define TO_NATIVE(fend, width, x) CONVERT_NATIVE(fend, width, x)
-
-enum opt_params {
- PMD_PARAM_STRING = 0,
- PMD_KMOD_DEP,
- PMD_OPT_MAX
-};
-
-struct pmd_driver {
- Elf_Sym *name_sym;
- const char *name;
- struct rte_pci_id *pci_tbl;
- struct pmd_driver *next;
-
- const char *opt_vals[PMD_OPT_MAX];
-};
-
-struct elf_info {
- unsigned long size;
- Elf_Ehdr *hdr;
- Elf_Shdr *sechdrs;
- Elf_Sym *symtab_start;
- Elf_Sym *symtab_stop;
- char *strtab;
-
- /* support for 32bit section numbers */
-
- unsigned int num_sections; /* max_secindex + 1 */
- unsigned int secindex_strings;
- /* if Nth symbol table entry has .st_shndx = SHN_XINDEX,
- * take shndx from symtab_shndx_start[N] instead
- */
- Elf32_Word *symtab_shndx_start;
- Elf32_Word *symtab_shndx_stop;
-
- struct pmd_driver *drivers;
-};
-
diff --git a/drivers/Makefile b/drivers/Makefile
index a04a01f..f3f9417 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -32,10 +32,12 @@
include $(RTE_SDK)/mk/rte.vars.mk
DIRS-y += bus
+DIRS-y += pmdinfogen
+DEPDIRS-pmdinfogen := bus
DIRS-y += mempool
DEPDIRS-mempool := bus
DIRS-y += net
-DEPDIRS-net := bus mempool
+DEPDIRS-net := bus pmdinfogen mempool
DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += crypto
DEPDIRS-crypto := mempool
DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += event
diff --git a/drivers/pmdinfogen/Makefile b/drivers/pmdinfogen/Makefile
new file mode 100644
index 0000000..bf07b6f
--- /dev/null
+++ b/drivers/pmdinfogen/Makefile
@@ -0,0 +1,47 @@
+# BSD LICENSE
+#
+# Copyright(c) 2016 Neil Horman. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Intel Corporation nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+HOSTAPP = dpdk-pmdinfogen
+
+#
+# all sources are stored in SRCS-y
+#
+SRCS-y += pmdinfogen.c
+
+HOST_CFLAGS += $(WERROR_FLAGS) -g
+HOST_CFLAGS += -I$(RTE_OUTPUT)/include
+
+include $(RTE_SDK)/mk/rte.hostapp.mk
diff --git a/drivers/pmdinfogen/pmdinfogen.c b/drivers/pmdinfogen/pmdinfogen.c
new file mode 100644
index 0000000..ba1a12e
--- /dev/null
+++ b/drivers/pmdinfogen/pmdinfogen.c
@@ -0,0 +1,422 @@
+/* Postprocess pmd object files to export hw support
+ *
+ * Copyright 2016 Neil Horman <nhorman@tuxdriver.com>
+ * Based in part on modpost.c from the linux kernel
+ *
+ * This software may be used and distributed according to the terms
+ * of the GNU General Public License V2, incorporated herein by reference.
+ *
+ */
+
+#define _GNU_SOURCE
+#include <stdio.h>
+#include <ctype.h>
+#include <string.h>
+#include <limits.h>
+#include <stdbool.h>
+#include <errno.h>
+#include <libgen.h>
+
+#include <rte_common.h>
+#include "pmdinfogen.h"
+
+#ifdef RTE_ARCH_64
+#define ADDR_SIZE 64
+#else
+#define ADDR_SIZE 32
+#endif
+
+
+static const char *sym_name(struct elf_info *elf, Elf_Sym *sym)
+{
+ if (sym)
+ return elf->strtab + sym->st_name;
+ else
+ return "(unknown)";
+}
+
+static void *grab_file(const char *filename, unsigned long *size)
+{
+ struct stat st;
+ void *map = MAP_FAILED;
+ int fd;
+
+ fd = open(filename, O_RDONLY);
+ if (fd < 0)
+ return NULL;
+ if (fstat(fd, &st))
+ goto failed;
+
+ *size = st.st_size;
+ map = mmap(NULL, *size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
+
+failed:
+ close(fd);
+ if (map == MAP_FAILED)
+ return NULL;
+ return map;
+}
+
+/*
+ * Release a file mapping previously created by grab_file().
+ * The whole region of 'size' bytes starting at 'file' is
+ * unmapped; no data is retained after this call.
+ */
+static void release_file(void *file, unsigned long size)
+{
+ munmap(file, size);
+}
+
+
+static void *get_sym_value(struct elf_info *info, const Elf_Sym *sym)
+{
+ return RTE_PTR_ADD(info->hdr,
+ info->sechdrs[sym->st_shndx].sh_offset + sym->st_value);
+}
+
+static Elf_Sym *find_sym_in_symtab(struct elf_info *info,
+ const char *name, Elf_Sym *last)
+{
+ Elf_Sym *idx;
+ if (last)
+ idx = last+1;
+ else
+ idx = info->symtab_start;
+
+ for (; idx < info->symtab_stop; idx++) {
+ const char *n = sym_name(info, idx);
+ if (!strncmp(n, name, strlen(name)))
+ return idx;
+ }
+ return NULL;
+}
+
+static int parse_elf(struct elf_info *info, const char *filename)
+{
+ unsigned int i;
+ Elf_Ehdr *hdr;
+ Elf_Shdr *sechdrs;
+ Elf_Sym *sym;
+ int endian;
+ unsigned int symtab_idx = ~0U, symtab_shndx_idx = ~0U;
+
+ hdr = grab_file(filename, &info->size);
+ if (!hdr) {
+ perror(filename);
+ exit(1);
+ }
+ info->hdr = hdr;
+ if (info->size < sizeof(*hdr)) {
+ /* file too small, assume this is an empty .o file */
+ return 0;
+ }
+ /* Is this a valid ELF file? */
+ if ((hdr->e_ident[EI_MAG0] != ELFMAG0) ||
+ (hdr->e_ident[EI_MAG1] != ELFMAG1) ||
+ (hdr->e_ident[EI_MAG2] != ELFMAG2) ||
+ (hdr->e_ident[EI_MAG3] != ELFMAG3)) {
+ /* Not an ELF file - silently ignore it */
+ return 0;
+ }
+
+ if (!hdr->e_ident[EI_DATA]) {
+ /* Unknown endian */
+ return 0;
+ }
+
+ endian = hdr->e_ident[EI_DATA];
+
+ /* Fix endianness in ELF header */
+ hdr->e_type = TO_NATIVE(endian, 16, hdr->e_type);
+ hdr->e_machine = TO_NATIVE(endian, 16, hdr->e_machine);
+ hdr->e_version = TO_NATIVE(endian, 32, hdr->e_version);
+ hdr->e_entry = TO_NATIVE(endian, ADDR_SIZE, hdr->e_entry);
+ hdr->e_phoff = TO_NATIVE(endian, ADDR_SIZE, hdr->e_phoff);
+ hdr->e_shoff = TO_NATIVE(endian, ADDR_SIZE, hdr->e_shoff);
+ hdr->e_flags = TO_NATIVE(endian, 32, hdr->e_flags);
+ hdr->e_ehsize = TO_NATIVE(endian, 16, hdr->e_ehsize);
+ hdr->e_phentsize = TO_NATIVE(endian, 16, hdr->e_phentsize);
+ hdr->e_phnum = TO_NATIVE(endian, 16, hdr->e_phnum);
+ hdr->e_shentsize = TO_NATIVE(endian, 16, hdr->e_shentsize);
+ hdr->e_shnum = TO_NATIVE(endian, 16, hdr->e_shnum);
+ hdr->e_shstrndx = TO_NATIVE(endian, 16, hdr->e_shstrndx);
+
+ sechdrs = RTE_PTR_ADD(hdr, hdr->e_shoff);
+ info->sechdrs = sechdrs;
+
+ /* Check if file offset is correct */
+ if (hdr->e_shoff > info->size) {
+ fprintf(stderr, "section header offset=%lu in file '%s' "
+ "is bigger than filesize=%lu\n",
+ (unsigned long)hdr->e_shoff,
+ filename, info->size);
+ return 0;
+ }
+
+ if (hdr->e_shnum == SHN_UNDEF) {
+ /*
+ * There are more than 64k sections,
+ * read count from .sh_size.
+ */
+ info->num_sections = TO_NATIVE(endian, 32, sechdrs[0].sh_size);
+ } else {
+ info->num_sections = hdr->e_shnum;
+ }
+ if (hdr->e_shstrndx == SHN_XINDEX)
+ info->secindex_strings =
+ TO_NATIVE(endian, 32, sechdrs[0].sh_link);
+ else
+ info->secindex_strings = hdr->e_shstrndx;
+
+ /* Fix endianness in section headers */
+ for (i = 0; i < info->num_sections; i++) {
+ sechdrs[i].sh_name =
+ TO_NATIVE(endian, 32, sechdrs[i].sh_name);
+ sechdrs[i].sh_type =
+ TO_NATIVE(endian, 32, sechdrs[i].sh_type);
+ sechdrs[i].sh_flags =
+ TO_NATIVE(endian, 32, sechdrs[i].sh_flags);
+ sechdrs[i].sh_addr =
+ TO_NATIVE(endian, ADDR_SIZE, sechdrs[i].sh_addr);
+ sechdrs[i].sh_offset =
+ TO_NATIVE(endian, ADDR_SIZE, sechdrs[i].sh_offset);
+ sechdrs[i].sh_size =
+ TO_NATIVE(endian, 32, sechdrs[i].sh_size);
+ sechdrs[i].sh_link =
+ TO_NATIVE(endian, 32, sechdrs[i].sh_link);
+ sechdrs[i].sh_info =
+ TO_NATIVE(endian, 32, sechdrs[i].sh_info);
+ sechdrs[i].sh_addralign =
+ TO_NATIVE(endian, ADDR_SIZE, sechdrs[i].sh_addralign);
+ sechdrs[i].sh_entsize =
+ TO_NATIVE(endian, ADDR_SIZE, sechdrs[i].sh_entsize);
+ }
+ /* Find symbol table. */
+ for (i = 1; i < info->num_sections; i++) {
+ int nobits = sechdrs[i].sh_type == SHT_NOBITS;
+
+ if (!nobits && sechdrs[i].sh_offset > info->size) {
+ fprintf(stderr, "%s is truncated. "
+ "sechdrs[i].sh_offset=%lu > sizeof(*hrd)=%zu\n",
+ filename, (unsigned long)sechdrs[i].sh_offset,
+ sizeof(*hdr));
+ return 0;
+ }
+
+ if (sechdrs[i].sh_type == SHT_SYMTAB) {
+ unsigned int sh_link_idx;
+ symtab_idx = i;
+ info->symtab_start = RTE_PTR_ADD(hdr,
+ sechdrs[i].sh_offset);
+ info->symtab_stop = RTE_PTR_ADD(hdr,
+ sechdrs[i].sh_offset + sechdrs[i].sh_size);
+ sh_link_idx = sechdrs[i].sh_link;
+ info->strtab = RTE_PTR_ADD(hdr,
+ sechdrs[sh_link_idx].sh_offset);
+ }
+
+ /* 32bit section no. table? ("more than 64k sections") */
+ if (sechdrs[i].sh_type == SHT_SYMTAB_SHNDX) {
+ symtab_shndx_idx = i;
+ info->symtab_shndx_start = RTE_PTR_ADD(hdr,
+ sechdrs[i].sh_offset);
+ info->symtab_shndx_stop = RTE_PTR_ADD(hdr,
+ sechdrs[i].sh_offset + sechdrs[i].sh_size);
+ }
+ }
+ if (!info->symtab_start)
+ fprintf(stderr, "%s has no symtab?\n", filename);
+ else {
+ /* Fix endianness in symbols */
+ for (sym = info->symtab_start; sym < info->symtab_stop; sym++) {
+ sym->st_shndx = TO_NATIVE(endian, 16, sym->st_shndx);
+ sym->st_name = TO_NATIVE(endian, 32, sym->st_name);
+ sym->st_value = TO_NATIVE(endian, ADDR_SIZE, sym->st_value);
+ sym->st_size = TO_NATIVE(endian, ADDR_SIZE, sym->st_size);
+ }
+ }
+
+ if (symtab_shndx_idx != ~0U) {
+ Elf32_Word *p;
+ if (symtab_idx != sechdrs[symtab_shndx_idx].sh_link)
+ fprintf(stderr,
+ "%s: SYMTAB_SHNDX has bad sh_link: %u!=%u\n",
+ filename, sechdrs[symtab_shndx_idx].sh_link,
+ symtab_idx);
+ /* Fix endianness */
+ for (p = info->symtab_shndx_start; p < info->symtab_shndx_stop;
+ p++)
+ *p = TO_NATIVE(endian, 32, *p);
+ }
+
+ return 1;
+}
+
+static void parse_elf_finish(struct elf_info *info)
+{
+ struct pmd_driver *tmp, *idx = info->drivers;
+ release_file(info->hdr, info->size);
+ while (idx) {
+ tmp = idx->next;
+ free(idx);
+ idx = tmp;
+ }
+}
+
+struct opt_tag {
+ const char *suffix;
+ const char *json_id;
+};
+
+static const struct opt_tag opt_tags[] = {
+ {"_param_string_export", "params"},
+ {"_kmod_dep_export", "kmod"},
+};
+
+static int complete_pmd_entry(struct elf_info *info, struct pmd_driver *drv)
+{
+ const char *tname;
+ int i;
+ char tmpsymname[128];
+ Elf_Sym *tmpsym;
+
+ drv->name = get_sym_value(info, drv->name_sym);
+
+ for (i = 0; i < PMD_OPT_MAX; i++) {
+ memset(tmpsymname, 0, 128);
+ sprintf(tmpsymname, "__%s%s", drv->name, opt_tags[i].suffix);
+ tmpsym = find_sym_in_symtab(info, tmpsymname, NULL);
+ if (!tmpsym)
+ continue;
+ drv->opt_vals[i] = get_sym_value(info, tmpsym);
+ }
+
+ memset(tmpsymname, 0, 128);
+ sprintf(tmpsymname, "__%s_pci_tbl_export", drv->name);
+
+ tmpsym = find_sym_in_symtab(info, tmpsymname, NULL);
+
+
+ /*
+ * If this returns NULL, then this is a PMD_VDEV, because
+ * it has no pci table reference
+ */
+ if (!tmpsym) {
+ drv->pci_tbl = NULL;
+ return 0;
+ }
+
+ tname = get_sym_value(info, tmpsym);
+ tmpsym = find_sym_in_symtab(info, tname, NULL);
+ if (!tmpsym)
+ return -ENOENT;
+
+ drv->pci_tbl = (struct rte_pci_id *)get_sym_value(info, tmpsym);
+ if (!drv->pci_tbl)
+ return -ENOENT;
+
+ return 0;
+}
+
+static int locate_pmd_entries(struct elf_info *info)
+{
+ Elf_Sym *last = NULL;
+ struct pmd_driver *new;
+
+ info->drivers = NULL;
+
+ do {
+ new = calloc(sizeof(struct pmd_driver), 1);
+ new->name_sym = find_sym_in_symtab(info, "this_pmd_name", last);
+ last = new->name_sym;
+ if (!new->name_sym)
+ free(new);
+ else {
+ if (complete_pmd_entry(info, new)) {
+ fprintf(stderr,
+ "Failed to complete pmd entry\n");
+ free(new);
+ } else {
+ new->next = info->drivers;
+ info->drivers = new;
+ }
+ }
+ } while (last);
+
+ return 0;
+}
+
+static void output_pmd_info_string(struct elf_info *info, char *outfile)
+{
+ FILE *ofd;
+ struct pmd_driver *drv;
+ struct rte_pci_id *pci_ids;
+ int idx = 0;
+
+ ofd = fopen(outfile, "w+");
+ if (!ofd) {
+ fprintf(stderr, "Unable to open output file\n");
+ return;
+ }
+
+ drv = info->drivers;
+
+ while (drv) {
+ fprintf(ofd, "const char %s_pmd_info[] __attribute__((used)) = "
+ "\"PMD_INFO_STRING= {",
+ drv->name);
+ fprintf(ofd, "\\\"name\\\" : \\\"%s\\\", ", drv->name);
+
+ for (idx = 0; idx < PMD_OPT_MAX; idx++) {
+ if (drv->opt_vals[idx])
+ fprintf(ofd, "\\\"%s\\\" : \\\"%s\\\", ",
+ opt_tags[idx].json_id,
+ drv->opt_vals[idx]);
+ }
+
+ pci_ids = drv->pci_tbl;
+ fprintf(ofd, "\\\"pci_ids\\\" : [");
+
+ while (pci_ids && pci_ids->device_id) {
+ fprintf(ofd, "[%d, %d, %d, %d]",
+ pci_ids->vendor_id, pci_ids->device_id,
+ pci_ids->subsystem_vendor_id,
+ pci_ids->subsystem_device_id);
+ pci_ids++;
+ if (pci_ids->device_id)
+ fprintf(ofd, ",");
+ else
+ fprintf(ofd, " ");
+ }
+ fprintf(ofd, "]}\";\n");
+ drv = drv->next;
+ }
+
+ fclose(ofd);
+}
+
+int main(int argc, char **argv)
+{
+ struct elf_info info;
+ int rc = 1;
+
+ if (argc < 3) {
+ fprintf(stderr,
+ "usage: %s <object file> <c output file>\n",
+ basename(argv[0]));
+ exit(127);
+ }
+ parse_elf(&info, argv[1]);
+
+ locate_pmd_entries(&info);
+
+ if (info.drivers) {
+ output_pmd_info_string(&info, argv[2]);
+ rc = 0;
+ } else {
+ fprintf(stderr, "No drivers registered\n");
+ }
+
+ parse_elf_finish(&info);
+ exit(rc);
+}
diff --git a/drivers/pmdinfogen/pmdinfogen.h b/drivers/pmdinfogen/pmdinfogen.h
new file mode 100644
index 0000000..27bab30
--- /dev/null
+++ b/drivers/pmdinfogen/pmdinfogen.h
@@ -0,0 +1,125 @@
+
+/* Postprocess pmd object files to export hw support
+ *
+ * Copyright 2016 Neil Horman <nhorman@tuxdriver.com>
+ * Based in part on modpost.c from the linux kernel
+ *
+ * This software may be used and distributed according to the terms
+ * of the GNU General Public License V2, incorporated herein by reference.
+ *
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdarg.h>
+#include <string.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/mman.h>
+#ifdef __linux__
+#include <endian.h>
+#else
+#include <sys/endian.h>
+#endif
+#include <fcntl.h>
+#include <unistd.h>
+#include <elf.h>
+#include <rte_config.h>
+#include <rte_pci.h>
+
+/* On BSD-like OSes, elf.h defines these according to the host's word size */
+#undef ELF_ST_BIND
+#undef ELF_ST_TYPE
+#undef ELF_R_SYM
+#undef ELF_R_TYPE
+
+/*
+ * Define the generic Elf_* and ELF_* names in terms of the Elf64_*
+ * (or Elf32_*) flavors from elf.h. This makes our code a bit more generic
+ * between arches and allows us to support 32-bit objects in the future
+ * should we ever want to.
+ */
+#ifdef RTE_ARCH_64
+#define Elf_Ehdr Elf64_Ehdr
+#define Elf_Shdr Elf64_Shdr
+#define Elf_Sym Elf64_Sym
+#define Elf_Addr Elf64_Addr
+#define Elf_Sword Elf64_Sxword
+#define Elf_Section Elf64_Half
+#define ELF_ST_BIND ELF64_ST_BIND
+#define ELF_ST_TYPE ELF64_ST_TYPE
+
+#define Elf_Rel Elf64_Rel
+#define Elf_Rela Elf64_Rela
+#define ELF_R_SYM ELF64_R_SYM
+#define ELF_R_TYPE ELF64_R_TYPE
+#else
+#define Elf_Ehdr Elf32_Ehdr
+#define Elf_Shdr Elf32_Shdr
+#define Elf_Sym Elf32_Sym
+#define Elf_Addr Elf32_Addr
+#define Elf_Sword Elf32_Sxword
+#define Elf_Section Elf32_Half
+#define ELF_ST_BIND ELF32_ST_BIND
+#define ELF_ST_TYPE ELF32_ST_TYPE
+
+#define Elf_Rel Elf32_Rel
+#define Elf_Rela Elf32_Rela
+#define ELF_R_SYM ELF32_R_SYM
+#define ELF_R_TYPE ELF32_R_TYPE
+#endif
+
+
+/*
+ * Note: it may seem odd that we have both a CONVERT_NATIVE and a TO_NATIVE
+ * macro below. We need both because the values passed to TO_NATIVE may
+ * themselves be macros that require an extra level of expansion.
+ * Specifically, it's the width argument we are concerned with, because it
+ * needs to be expanded prior to token pasting.
+ */
+#define CONVERT_NATIVE(fend, width, x) ({ \
+typeof(x) ___x; \
+if ((fend) == ELFDATA2LSB) \
+ ___x = le##width##toh(x); \
+else \
+ ___x = be##width##toh(x); \
+ ___x; \
+})
+
+#define TO_NATIVE(fend, width, x) CONVERT_NATIVE(fend, width, x)
+
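As an illustration of that extra expansion step, assume a hypothetical
ADDR_SIZE macro defined to 64. Going through TO_NATIVE() expands
ADDR_SIZE before CONVERT_NATIVE() performs the token pasting:

	/* resolves to le64toh(x) or be64toh(x) at preprocessing time,
	 * depending on the ELF file's e_ident[EI_DATA] byte */
	Elf_Addr v = TO_NATIVE(hdr->e_ident[EI_DATA], ADDR_SIZE, x);

Calling CONVERT_NATIVE() directly with ADDR_SIZE would paste the
unexpanded token and produce the non-existent leADDR_SIZEtoh().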
+enum opt_params {
+ PMD_PARAM_STRING = 0,
+ PMD_KMOD_DEP,
+ PMD_OPT_MAX
+};
+
+struct pmd_driver {
+ Elf_Sym *name_sym;
+ const char *name;
+ struct rte_pci_id *pci_tbl;
+ struct pmd_driver *next;
+
+ const char *opt_vals[PMD_OPT_MAX];
+};
+
+struct elf_info {
+ unsigned long size;
+ Elf_Ehdr *hdr;
+ Elf_Shdr *sechdrs;
+ Elf_Sym *symtab_start;
+ Elf_Sym *symtab_stop;
+ char *strtab;
+
+ /* support for 32bit section numbers */
+
+ unsigned int num_sections; /* max_secindex + 1 */
+ unsigned int secindex_strings;
+ /* if Nth symbol table entry has .st_shndx = SHN_XINDEX,
+ * take shndx from symtab_shndx_start[N] instead
+ */
+ Elf32_Word *symtab_shndx_start;
+ Elf32_Word *symtab_shndx_stop;
+
+ struct pmd_driver *drivers;
+};
+
--
2.1.4
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH 2/3] timer: add symbol versions for ABI compatibility
2017-05-31 9:16 4% ` [dpdk-dev] [PATCH 0/3] " Bruce Richardson
@ 2017-05-31 9:16 12% ` Bruce Richardson
1 sibling, 0 replies; 200+ results
From: Bruce Richardson @ 2017-05-31 9:16 UTC (permalink / raw)
To: Robert Sanford; +Cc: dev, Bruce Richardson
With the change in parameters to the callback function for timers, ABI
compatibility needs to be managed. Do this by adding symbol version
information and adding back in older copies of some functions.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/librte_timer/rte_timer.c | 79 ++++++++++++++++++++++++++++++----
lib/librte_timer/rte_timer.h | 27 ++++++++++--
lib/librte_timer/rte_timer_version.map | 9 ++++
3 files changed, 104 insertions(+), 11 deletions(-)
diff --git a/lib/librte_timer/rte_timer.c b/lib/librte_timer/rte_timer.c
index 2c5d5f3..c272643 100644
--- a/lib/librte_timer/rte_timer.c
+++ b/lib/librte_timer/rte_timer.c
@@ -365,7 +365,7 @@ timer_del(struct rte_timer *tim, union rte_timer_status prev_status,
static int
__rte_timer_reset(struct rte_timer *tim, uint64_t expire,
uint64_t period, unsigned tim_lcore,
- rte_timer_cb_t fct, void *arg,
+ void *fct, void *arg,
int local_is_locked)
{
union rte_timer_status prev_status, status;
@@ -424,9 +424,9 @@ __rte_timer_reset(struct rte_timer *tim, uint64_t expire,
/* Reset and start the timer associated with the timer handle tim */
int
-rte_timer_reset(struct rte_timer *tim, uint64_t ticks,
+rte_timer_reset_v20(struct rte_timer *tim, uint64_t ticks,
enum rte_timer_type type, unsigned tim_lcore,
- rte_timer_cb_t fct, void *arg)
+ rte_timer_cb_t_v20 fct, void *arg)
{
uint64_t cur_time = rte_get_timer_cycles();
uint64_t period;
@@ -443,17 +443,58 @@ rte_timer_reset(struct rte_timer *tim, uint64_t ticks,
return __rte_timer_reset(tim, cur_time + ticks, period, tim_lcore,
fct, arg, 0);
}
+VERSION_SYMBOL(rte_timer_reset, _v20, 2.0);
/* loop until rte_timer_reset() succeeds */
void
-rte_timer_reset_sync(struct rte_timer *tim, uint64_t ticks,
+rte_timer_reset_sync_v20(struct rte_timer *tim, uint64_t ticks,
enum rte_timer_type type, unsigned tim_lcore,
- rte_timer_cb_t fct, void *arg)
+ rte_timer_cb_t_v20 fct, void *arg)
+{
+ while (rte_timer_reset_v20(tim, ticks, type, tim_lcore,
+ fct, arg) != 0)
+ rte_pause();
+}
+VERSION_SYMBOL(rte_timer_reset_sync, _v20, 2.0);
+
+/* Reset and start the timer associated with the timer handle tim */
+int
+rte_timer_reset_v1708(struct rte_timer *tim, uint64_t ticks,
+ enum rte_timer_type type, unsigned int tim_lcore,
+ rte_timer_cb_t fct, void *arg)
+{
+ uint64_t cur_time = rte_get_timer_cycles();
+ uint64_t period;
+
+ if (unlikely((tim_lcore != (unsigned int)LCORE_ID_ANY) &&
+ !rte_lcore_is_enabled(tim_lcore)))
+ return -1;
+
+ if (type == PERIODICAL)
+ period = ticks;
+ else
+ period = 0;
+
+ return __rte_timer_reset(tim, cur_time + ticks, period, tim_lcore,
+ fct, arg, 0);
+}
+MAP_STATIC_SYMBOL(int rte_timer_reset(struct rte_timer *tim, uint64_t ticks,
+ enum rte_timer_type type, unsigned int tim_lcore,
+ rte_timer_cb_t fct, void *arg), rte_timer_reset_v1708);
+
+/* loop until rte_timer_reset() succeeds */
+void
+rte_timer_reset_sync_v1708(struct rte_timer *tim, uint64_t ticks,
+ enum rte_timer_type type, unsigned int tim_lcore,
+ rte_timer_cb_t fct, void *arg)
{
while (rte_timer_reset(tim, ticks, type, tim_lcore,
fct, arg) != 0)
rte_pause();
}
+MAP_STATIC_SYMBOL(void rte_timer_reset_sync(struct rte_timer *tim,
+ uint64_t ticks, enum rte_timer_type type, unsigned int tim_lcore,
+ rte_timer_cb_t fct, void *arg), rte_timer_reset_sync_v1708);
/* Stop the timer associated with the timer handle tim */
int
@@ -505,8 +546,13 @@ rte_timer_pending(struct rte_timer *tim)
return tim->status.state == RTE_TIMER_PENDING;
}
+enum timer_version {
+ TIMER_VERSION_20,
+ TIMER_VERSION_1708,
+};
/* must be called periodically, run all timers that have expired */
-void rte_timer_manage(void)
+static void
+__timer_manage(enum timer_version ver)
{
union rte_timer_status status;
struct rte_timer *tim, *next_tim;
@@ -590,9 +636,12 @@ void rte_timer_manage(void)
priv_timer[lcore_id].running_tim = tim;
/* execute callback function with list unlocked */
- if (tim->period == 0)
+ if (ver == TIMER_VERSION_20) {
+ rte_timer_cb_t_v20 f = (void *)tim->f;
+ f(tim, tim->arg);
+ } else if (tim->period == 0) {
tim->f(tim, 1, tim->arg);
- else {
+ } else {
/* for periodic check how many expiries we have */
uint64_t over_time = cur_time - tim->expire;
unsigned int extra_expiries = over_time / tim->period;
@@ -630,6 +679,20 @@ void rte_timer_manage(void)
priv_timer[lcore_id].running_tim = NULL;
}
+void
+rte_timer_manage_v20(void)
+{
+ __timer_manage(TIMER_VERSION_20);
+}
+VERSION_SYMBOL(rte_timer_manage, _v20, 2.0);
+
+void
+rte_timer_manage_v1708(void)
+{
+ __timer_manage(TIMER_VERSION_1708);
+}
+MAP_STATIC_SYMBOL(void rte_timer_manage(void), rte_timer_manage_v1708);
+
/* dump statistics about timers */
void rte_timer_dump_stats(FILE *f)
{
diff --git a/lib/librte_timer/rte_timer.h b/lib/librte_timer/rte_timer.h
index bc434ec..049af72 100644
--- a/lib/librte_timer/rte_timer.h
+++ b/lib/librte_timer/rte_timer.h
@@ -1,7 +1,7 @@
/*-
* BSD LICENSE
*
- * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ * Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
@@ -67,6 +67,7 @@
#include <stdint.h>
#include <stddef.h>
#include <rte_common.h>
+#include <rte_compat.h>
#ifdef __cplusplus
extern "C" {
@@ -224,7 +225,7 @@ void rte_timer_init(struct rte_timer *tim);
int rte_timer_reset(struct rte_timer *tim, uint64_t ticks,
enum rte_timer_type type, unsigned tim_lcore,
rte_timer_cb_t fct, void *arg);
-
+BIND_DEFAULT_SYMBOL(rte_timer_reset, _v1708, 17.08);
/**
* Loop until rte_timer_reset() succeeds.
@@ -256,6 +257,7 @@ void
rte_timer_reset_sync(struct rte_timer *tim, uint64_t ticks,
enum rte_timer_type type, unsigned tim_lcore,
rte_timer_cb_t fct, void *arg);
+BIND_DEFAULT_SYMBOL(rte_timer_reset_sync, _v1708, 17.08);
/**
* Stop a timer.
@@ -321,7 +323,7 @@ int rte_timer_pending(struct rte_timer *tim);
* CPU resources it will use.
*/
void rte_timer_manage(void);
-
+BIND_DEFAULT_SYMBOL(rte_timer_manage, _v1708, 17.08);
/**
* Dump statistics about timers.
*
@@ -330,6 +332,25 @@ void rte_timer_manage(void);
*/
void rte_timer_dump_stats(FILE *f);
+/* legacy definition for ABI compatibility */
+typedef void (*rte_timer_cb_t_v20)(struct rte_timer *, void *);
+
+/* prototypes for versioned functions for ABI compatibility */
+int rte_timer_reset_v20(struct rte_timer *tim, uint64_t ticks,
+ enum rte_timer_type type, unsigned int tim_lcore,
+ rte_timer_cb_t_v20 fct, void *arg);
+void rte_timer_reset_sync_v20(struct rte_timer *tim, uint64_t ticks,
+ enum rte_timer_type type, unsigned int tim_lcore,
+ rte_timer_cb_t_v20 fct, void *arg);
+int rte_timer_reset_v1708(struct rte_timer *tim, uint64_t ticks,
+ enum rte_timer_type type, unsigned int tim_lcore,
+ rte_timer_cb_t fct, void *arg);
+void rte_timer_reset_sync_v1708(struct rte_timer *tim, uint64_t ticks,
+ enum rte_timer_type type, unsigned int tim_lcore,
+ rte_timer_cb_t fct, void *arg);
+void rte_timer_manage_v20(void);
+void rte_timer_manage_v1708(void);
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/librte_timer/rte_timer_version.map b/lib/librte_timer/rte_timer_version.map
index 9b2e4b8..06553d1 100644
--- a/lib/librte_timer/rte_timer_version.map
+++ b/lib/librte_timer/rte_timer_version.map
@@ -13,3 +13,12 @@ DPDK_2.0 {
local: *;
};
+
+DPDK_17.08 {
+ global:
+
+ rte_timer_reset;
+ rte_timer_reset_sync;
+ rte_timer_manage;
+
+} DPDK_2.0;
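With these map file entries, binaries linked against DPDK 2.0 keep
resolving to the _v20 symbols while new builds bind to the 17.08
defaults. The visible difference is the callback prototype (a sketch;
the new callback typedef itself comes from patch 1/3 of this series):

	/* DPDK 2.0 applications: two-argument callback, _v20 symbols */
	static void old_cb(struct rte_timer *tim, void *arg);

	/* 17.08 applications: the middle argument reports how many
	 * expiries have occurred since the callback last ran */
	static void new_cb(struct rte_timer *tim, unsigned int n, void *arg);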
--
2.9.4
^ permalink raw reply [relevance 12%]
* [dpdk-dev] [PATCH 0/3] timer: inform periodic timers of multiple expiries
2017-04-28 13:29 4% ` Bruce Richardson
@ 2017-05-31 9:16 4% ` Bruce Richardson
2017-05-31 9:16 12% ` [dpdk-dev] [PATCH 2/3] timer: add symbol versions for ABI compatibility Bruce Richardson
1 sibling, 2 replies; 200+ results
From: Bruce Richardson @ 2017-05-31 9:16 UTC (permalink / raw)
To: Robert Sanford; +Cc: dev, Bruce Richardson
For periodic timers, with the current implementation there is no way to
know whether timer expiries have been missed between calls to
rte_timer_manage(). This patchset adds a new parameter to timer callbacks,
giving the number of expiries since the last invocation. ABI compatibility
with previous releases is kept, and a new unit test for this functionality
is added.
Bruce Richardson (3):
timer: inform periodic timers of multiple expiries
timer: add symbol versions for ABI compatibility
test/test: add test for multiple timer expiries
examples/l2fwd-jobstats/main.c | 7 +-
examples/l2fwd-keepalive/main.c | 8 +-
examples/l3fwd-power/main.c | 5 +-
examples/performance-thread/common/lthread_sched.c | 4 +-
examples/performance-thread/common/lthread_sched.h | 2 +-
examples/timer/main.c | 10 ++-
lib/librte_timer/rte_timer.c | 88 ++++++++++++++++++++--
lib/librte_timer/rte_timer.h | 29 ++++++-
lib/librte_timer/rte_timer_version.map | 9 +++
test/test/test_timer.c | 68 +++++++++++++++--
test/test/test_timer_perf.c | 4 +-
test/test/test_timer_racecond.c | 3 +-
12 files changed, 203 insertions(+), 34 deletions(-)
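A periodic callback can then use the new expiry count to catch up on
missed work instead of silently losing ticks (a minimal sketch, assuming
the three-argument callback type introduced in patch 1/3):

	static void
	tick_cb(struct rte_timer *tim, unsigned int n_expiries, void *arg)
	{
		uint64_t *ticks = arg;

		/* n_expiries > 1 means rte_timer_manage() ran late and
		 * several periods elapsed; account for all of them */
		*ticks += n_expiries;
	}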
--
2.9.4
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [RFC 2/3] ethdev: add new rte_mtr API for traffic metering and policing
@ 2017-05-30 16:44 2% ` Cristian Dumitrescu
0 siblings, 0 replies; 200+ results
From: Cristian Dumitrescu @ 2017-05-30 16:44 UTC (permalink / raw)
To: dev
Cc: thomas, adrien.mazarguil, jerin.jacob, hemant.agrawal,
declan.doherty, keith.wiles
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
MAINTAINERS | 4 +
lib/librte_ether/Makefile | 7 +-
lib/librte_ether/rte_ether_version.map | 8 +
lib/librte_ether/rte_mtr.c | 184 +++++++++++++
lib/librte_ether/rte_mtr.h | 465 +++++++++++++++++++++++++++++++++
lib/librte_ether/rte_mtr_driver.h | 188 +++++++++++++
6 files changed, 854 insertions(+), 2 deletions(-)
create mode 100644 lib/librte_ether/rte_mtr.c
create mode 100644 lib/librte_ether/rte_mtr.h
create mode 100644 lib/librte_ether/rte_mtr_driver.h
diff --git a/MAINTAINERS b/MAINTAINERS
index afb4cab..b025a7b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -240,6 +240,10 @@ Flow API
M: Adrien Mazarguil <adrien.mazarguil@6wind.com>
F: lib/librte_ether/rte_flow*
+Traffic metering and policing API
+M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+F: lib/librte_ether/rte_mtr*
+
Crypto API
M: Declan Doherty <declan.doherty@intel.com>
F: lib/librte_cryptodev/
diff --git a/lib/librte_ether/Makefile b/lib/librte_ether/Makefile
index 93fdde1..48f1098 100644
--- a/lib/librte_ether/Makefile
+++ b/lib/librte_ether/Makefile
@@ -1,6 +1,6 @@
# BSD LICENSE
#
-# Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+# Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
@@ -41,10 +41,11 @@ CFLAGS += $(WERROR_FLAGS)
EXPORT_MAP := rte_ether_version.map
-LIBABIVER := 6
+LIBABIVER := 7
SRCS-y += rte_ethdev.c
SRCS-y += rte_flow.c
+SRCS-y += rte_mtr.c
#
# Export include files
@@ -56,5 +57,7 @@ SYMLINK-y-include += rte_eth_ctrl.h
SYMLINK-y-include += rte_dev_info.h
SYMLINK-y-include += rte_flow.h
SYMLINK-y-include += rte_flow_driver.h
+SYMLINK-y-include += rte_mtr.h
+SYMLINK-y-include += rte_mtr_driver.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map
index 9783aa1..89ea2ed 100644
--- a/lib/librte_ether/rte_ether_version.map
+++ b/lib/librte_ether/rte_ether_version.map
@@ -161,4 +161,12 @@ DPDK_17.08 {
global:
rte_eth_dev_mtr_ops_get;
+ rte_mtr_meter_profile_add;
+ rte_mtr_meter_profile_delete;
+ rte_mtr_create;
+ rte_mtr_destroy;
+ rte_mtr_meter_profile_update;
+ rte_mtr_policer_action_update;
+ rte_mtr_stats_update;
+ rte_mtr_stats_read;
} DPDK_17.05;
diff --git a/lib/librte_ether/rte_mtr.c b/lib/librte_ether/rte_mtr.c
new file mode 100644
index 0000000..efbe7fb
--- /dev/null
+++ b/lib/librte_ether/rte_mtr.c
@@ -0,0 +1,184 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdint.h>
+
+#include <rte_errno.h>
+#include "rte_ethdev.h"
+#include "rte_mtr_driver.h"
+#include "rte_mtr.h"
+
+/* Get generic traffic metering and policing operations structure from a port. */
+const struct rte_mtr_ops *
+rte_mtr_ops_get(uint8_t port_id, struct rte_mtr_error *error)
+{
+ struct rte_eth_dev *dev;
+ const struct rte_mtr_ops *ops;
+
+ if (!rte_eth_dev_is_valid_port(port_id)) {
+ rte_mtr_error_set(error,
+ ENODEV,
+ RTE_MTR_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ rte_strerror(ENODEV));
+ return NULL;
+ }
+
+ /* index the device array only once port_id is known to be valid */
+ dev = &rte_eth_devices[port_id];
+
+ if ((dev->dev_ops->mtr_ops_get == NULL) ||
+ (dev->dev_ops->mtr_ops_get(dev, &ops) != 0) ||
+ (ops == NULL)) {
+ rte_mtr_error_set(error,
+ ENOSYS,
+ RTE_MTR_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ rte_strerror(ENOSYS));
+ return NULL;
+ }
+
+ return ops;
+}
+
+#define RTE_MTR_FUNC(port_id, func) \
+({ \
+ const struct rte_mtr_ops *ops = \
+ rte_mtr_ops_get(port_id, error); \
+ if (ops == NULL) \
+ return -rte_errno; \
+ \
+ if (ops->func == NULL) \
+ return -rte_mtr_error_set(error, \
+ ENOSYS, \
+ RTE_MTR_ERROR_TYPE_UNSPECIFIED, \
+ NULL, \
+ rte_strerror(ENOSYS)); \
+ \
+ ops->func; \
+})
+
+/* MTR meter profile add */
+int
+rte_mtr_meter_profile_add(uint8_t port_id,
+ uint32_t meter_profile_id,
+ struct rte_mtr_meter_profile *profile,
+ struct rte_mtr_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_MTR_FUNC(port_id, meter_profile_add)(dev,
+ meter_profile_id, profile, error);
+}
+
+/** MTR meter profile delete */
+int
+rte_mtr_meter_profile_delete(uint8_t port_id,
+ uint32_t meter_profile_id,
+ struct rte_mtr_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_MTR_FUNC(port_id, meter_profile_delete)(dev,
+ meter_profile_id, error);
+}
+
+/** MTR object create */
+int
+rte_mtr_create(uint8_t port_id,
+ uint32_t mtr_id,
+ struct rte_mtr_params *params,
+ int shared,
+ struct rte_mtr_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_MTR_FUNC(port_id, create)(dev,
+ mtr_id, params, shared, error);
+}
+
+/** MTR object destroy */
+int
+rte_mtr_destroy(uint8_t port_id,
+ uint32_t mtr_id,
+ struct rte_mtr_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_MTR_FUNC(port_id, destroy)(dev,
+ mtr_id, error);
+}
+
+/** MTR object meter profile update */
+int
+rte_mtr_meter_profile_update(uint8_t port_id,
+ uint32_t mtr_id,
+ uint32_t meter_profile_id,
+ struct rte_mtr_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_MTR_FUNC(port_id, meter_profile_update)(dev,
+ mtr_id, meter_profile_id, error);
+}
+
+/** MTR object policer action update */
+int
+rte_mtr_policer_action_update(uint8_t port_id,
+ uint32_t mtr_id,
+ enum rte_mtr_color color,
+ enum rte_mtr_policer_action action,
+ struct rte_mtr_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_MTR_FUNC(port_id, policer_action_update)(dev,
+ mtr_id, color, action, error);
+}
+
+/** MTR object enabled stats update */
+int
+rte_mtr_stats_update(uint8_t port_id,
+ uint32_t mtr_id,
+ uint64_t stats_mask,
+ struct rte_mtr_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_MTR_FUNC(port_id, stats_update)(dev,
+ mtr_id, stats_mask, error);
+}
+
+/** MTR object stats read */
+int
+rte_mtr_stats_read(uint8_t port_id,
+ uint32_t mtr_id,
+ struct rte_mtr_stats *stats,
+ uint64_t *stats_mask,
+ int clear,
+ struct rte_mtr_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_MTR_FUNC(port_id, stats_read)(dev,
+ mtr_id, stats, stats_mask, clear, error);
+}
diff --git a/lib/librte_ether/rte_mtr.h b/lib/librte_ether/rte_mtr.h
new file mode 100644
index 0000000..63179d1
--- /dev/null
+++ b/lib/librte_ether/rte_mtr.h
@@ -0,0 +1,465 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_MTR_H__
+#define __INCLUDE_RTE_MTR_H__
+
+/**
+ * @file
+ * RTE Generic Traffic Metering and Policing API
+ *
+ * This interface provides the ability to configure the traffic metering and
+ * policing (MTR) in a generic way.
+ *
+ * The processing done for each input packet hitting an MTR object is:
+ * A) Traffic metering: The packet is assigned a color (the meter output
+ * color), based on the previous history of the flow reflected in the
+ * current state of the MTR object, according to the specific traffic
+ * metering algorithm. The traffic metering algorithm can typically work
+ * in color aware mode, in which case the input packet already has an
+ * initial color (the input color), or in color blind mode, which is
+ * equivalent to considering all input packets initially colored as green.
+ * B) Policing: There is a separate policer action configured for each meter
+ * output color, which can:
+ * a) Drop the packet.
+ * b) Keep the same packet color: the policer output color matches the
+ * meter output color (essentially a no-op action).
+ * c) Recolor the packet: the policer output color is different than
+ * the meter output color.
+ * The policer output color is the output color of the packet, which is
+ * set in the packet meta-data (i.e. struct rte_mbuf::sched::color).
+ * C) Statistics: The set of counters maintained for each MTR object is
+ * configurable and subject to the implementation support. This set
+ * includes the number of packets and bytes dropped or passed for each
+ * output color.
+ *
+ * Once successfully created, an MTR object is linked to one or several flows
+ * through the meter action of the flow API.
+ * A) Whether an MTR object is private to a flow or potentially shared by
+ * several flows has to be specified at creation time.
+ * B) Several meter actions can be potentially registered for the same flow.
+ */
+#include <stdint.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Color
+ */
+enum rte_mtr_color {
+ RTE_MTR_GREEN = 0, /**< Green */
+ RTE_MTR_YELLOW, /**< Yellow */
+ RTE_MTR_RED, /**< Red */
+ RTE_MTR_COLORS /**< Number of colors */
+};
+
+/**
+ * Statistics counter type
+ */
+enum rte_mtr_stats_type {
+ /**< Number of packets passed as green by the policer. */
+ RTE_MTR_STATS_N_PKTS_GREEN = 1 << 0,
+
+ /**< Number of bytes passed as green by the policer. */
+ RTE_MTR_STATS_N_BYTES_GREEN = 1 << 1,
+
+ /**< Number of packets passed as yellow by the policer. */
+ RTE_MTR_STATS_N_PKTS_YELLOW = 1 << 2,
+
+ /**< Number of bytes passed as yellow by the policer. */
+ RTE_MTR_STATS_N_BYTES_YELLOW = 1 << 3,
+
+ /**< Number of packets passed as red by the policer. */
+ RTE_MTR_STATS_N_PKTS_RED = 1 << 4,
+
+ /**< Number of bytes passed as red by the policer. */
+ RTE_MTR_STATS_N_BYTES_RED = 1 << 5,
+
+ /**< Number of packets dropped by the policer. */
+ RTE_MTR_STATS_N_PKTS_DROPPED = 1 << 6,
+
+ /**< Number of bytes dropped by the policer. */
+ RTE_MTR_STATS_N_BYTES_DROPPED = 1 << 7,
+};
+
+/**
+ * Statistics counters
+ */
+struct rte_mtr_stats {
+ /**< Number of packets passed by the policer (per color). */
+ uint64_t n_pkts[RTE_MTR_COLORS];
+
+ /**< Number of bytes passed by the policer (per color). */
+ uint64_t n_bytes[RTE_MTR_COLORS];
+
+ /**< Number of packets dropped by the policer. */
+ uint64_t n_pkts_dropped;
+
+ /**< Number of bytes dropped by the policer. */
+ uint64_t n_bytes_dropped;
+};
+
+/**
+ * Traffic metering algorithms
+ */
+enum rte_mtr_algorithm {
+ /**< Single Rate Three Color Marker (srTCM) - IETF RFC 2697. */
+ RTE_MTR_SRTCM_RFC2697 = 0,
+
+ /**< Two Rate Three Color Marker (trTCM) - IETF RFC 2698. */
+ RTE_MTR_TRTCM_RFC2698,
+};
+
+/**
+ * Meter profile
+ */
+struct rte_mtr_meter_profile {
+ /**< Traffic metering algorithm. */
+ enum rte_mtr_algorithm alg;
+
+ union {
+ /**< Items only valid when *alg* is set to srTCM - RFC2697. */
+ struct {
+ /**< Committed Information Rate (CIR) (bytes/second). */
+ uint64_t cir;
+
+ /**< Committed Burst Size (CBS) (bytes). */
+ uint64_t cbs;
+
+ /**< Excess Burst Size (EBS) (bytes). */
+ uint64_t ebs;
+
+ /**< Non-zero for color aware mode, zero for color blind
+ * mode. In color aware mode, the packet input color is
+ * read from the IPv4/IPv6 DSCP field, as defined by
+ * IETF RFC 2597 (low/medium/high drop precedence
+ * translates to green/yellow/red color respectively).
+ */
+ int color_aware;
+ } srtcm_rfc2697;
+
+ /**< Items only valid when *alg* is set to trTCM - RFC2698. */
+ struct {
+ /**< Committed Information Rate (CIR) (bytes/second). */
+ uint64_t cir;
+
+ /**< Peak Information Rate (PIR) (bytes/second). */
+ uint64_t pir;
+
+ /**< Committed Burst Size (CBS) (bytes). */
+ uint64_t cbs;
+
+ /**< Peak Burst Size (PBS) (bytes). */
+ uint64_t pbs;
+
+ /**< Non-zero for color aware mode, zero for color blind
+ * mode. In color aware mode, the packet input color is
+ * read from the IPv4/IPv6 DSCP field, as defined by
+ * IETF RFC 2597 (low/medium/high drop precedence
+ * translates to green/yellow/red color respectively).
+ */
+ int color_aware;
+ } trtcm_rfc2698;
+ };
+};
+
+/**
+ * Policer actions
+ */
+enum rte_mtr_policer_action {
+ /**< Recolor the packet as green. */
+ e_MTR_POLICER_ACTION_COLOR_GREEN = 0,
+
+ /**< Recolor the packet as yellow. */
+ e_MTR_POLICER_ACTION_COLOR_YELLOW,
+
+ /**< Recolor the packet as red. */
+ e_MTR_POLICER_ACTION_COLOR_RED,
+
+ /**< Drop the packet. */
+ e_MTR_POLICER_ACTION_DROP,
+};
+
+/**
+ * Parameters for each traffic metering & policing object
+ *
+ * @see enum rte_mtr_stats_type
+ */
+struct rte_mtr_params {
+ /**< Meter profile ID. */
+ uint32_t meter_profile_id;
+
+ /**< Policer actions (per meter output color). */
+ enum rte_mtr_policer_action action[RTE_MTR_COLORS];
+
+ /**< Set of stats counters to be enabled. */
+ uint64_t stats_mask;
+};
+
+/**
+ * Verbose error types.
+ *
+ * Most of them provide the type of the object referenced by struct
+ * rte_mtr_error::cause.
+ */
+enum rte_mtr_error_type {
+ RTE_MTR_ERROR_TYPE_NONE, /**< No error. */
+ RTE_MTR_ERROR_TYPE_UNSPECIFIED, /**< Cause unspecified. */
+ RTE_MTR_ERROR_TYPE_METER_PROFILE_ID,
+ RTE_MTR_ERROR_TYPE_METER_PROFILE,
+ RTE_MTR_ERROR_TYPE_MTR_ID,
+ RTE_MTR_ERROR_TYPE_MTR_PARAMS,
+ RTE_MTR_ERROR_TYPE_POLICER_ACTION_GREEN,
+ RTE_MTR_ERROR_TYPE_POLICER_ACTION_YELLOW,
+ RTE_MTR_ERROR_TYPE_POLICER_ACTION_RED,
+ RTE_MTR_ERROR_TYPE_STATS_MASK,
+ RTE_MTR_ERROR_TYPE_STATS,
+ RTE_MTR_ERROR_TYPE_SHARED,
+};
+
+/**
+ * Verbose error structure definition.
+ *
+ * This object is normally allocated by applications and set by PMDs; the
+ * message points to a constant string which does not need to be freed by
+ * the application, however its pointer can be considered valid only as long
+ * as its associated DPDK port remains configured. Closing the underlying
+ * device or unloading the PMD invalidates it.
+ *
+ * Both cause and message may be NULL regardless of the error type.
+ */
+struct rte_mtr_error {
+ enum rte_mtr_error_type type; /**< Cause field and error type. */
+ const void *cause; /**< Object responsible for the error. */
+ const char *message; /**< Human-readable error message. */
+};
+
+/**
+ * Meter profile add
+ *
+ * Create a new meter profile with ID set to *meter_profile_id*. The new profile
+ * is used to create one or several MTR objects.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] meter_profile_id
+ * ID for the new meter profile. Needs to be unused by any of the existing
+ * meter profiles added for the current port.
+ * @param[in] profile
+ * Meter profile parameters. Needs to be pre-allocated and valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_mtr_meter_profile_add(uint8_t port_id,
+ uint32_t meter_profile_id,
+ struct rte_mtr_meter_profile *profile,
+ struct rte_mtr_error *error);
+
+/**
+ * Meter profile delete
+ *
+ * Delete an existing meter profile. This operation fails when there is
+ * currently at least one user (i.e. MTR object) of this profile.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] meter_profile_id
+ * Meter profile ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_mtr_meter_profile_delete(uint8_t port_id,
+ uint32_t meter_profile_id,
+ struct rte_mtr_error *error);
+
+/**
+ * MTR object create
+ *
+ * Create a new MTR object for the current port.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] mtr_id
+ * MTR object ID. Needs to be unused by any of the existing MTR objects
+ * created for the current port.
+ * @param[in] params
+ * MTR object params. Needs to be pre-allocated and valid.
+ * @param[in] shared
+ * Non-zero when this MTR object can be shared by multiple flows, zero when
+ * this MTR object can be used by a single flow.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_mtr_create(uint8_t port_id,
+ uint32_t mtr_id,
+ struct rte_mtr_params *params,
+ int shared,
+ struct rte_mtr_error *error);
+
+/**
+ * MTR object destroy
+ *
+ * Delete an existing MTR object. This operation fails when there is currently
+ * at least one user (i.e. flow) of this MTR object.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] mtr_id
+ * MTR object ID. Needs to be valid for the current port.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_mtr_destroy(uint8_t port_id,
+ uint32_t mtr_id,
+ struct rte_mtr_error *error);
+
+/**
+ * MTR object meter profile update
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] mtr_id
+ * MTR object ID. Needs to be valid.
+ * @param[in] meter_profile_id
+ * Meter profile ID for the current MTR object. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_mtr_meter_profile_update(uint8_t port_id,
+ uint32_t mtr_id,
+ uint32_t meter_profile_id,
+ struct rte_mtr_error *error);
+
+/**
+ * MTR object policer action update for given color
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] mtr_id
+ * MTR object ID. Needs to be valid.
+ * @param[in] color
+ * Color for which the policer action is updated.
+ * @param[in] action
+ * Policer action for specified color.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_mtr_policer_action_update(uint8_t port_id,
+ uint32_t mtr_id,
+ enum rte_mtr_color color,
+ enum rte_mtr_policer_action action,
+ struct rte_mtr_error *error);
+
+/**
+ * MTR object enabled statistics counters update
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] mtr_id
+ * MTR object ID. Needs to be valid.
+ * @param[in] stats_mask
+ * Mask of statistics counter types to be enabled for the current MTR object.
+ * Any statistics counter type not included in this set is to be disabled for
+ * the current MTR object.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see enum rte_mtr_stats_type
+ */
+int
+rte_mtr_stats_update(uint8_t port_id,
+ uint32_t mtr_id,
+ uint64_t stats_mask,
+ struct rte_mtr_error *error);
+
+/**
+ * MTR object statistics counters read
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] mtr_id
+ * MTR object ID. Needs to be valid.
+ * @param[out] stats
+ * When non-NULL, it contains the current value for the statistics counters
+ * enabled for the current MTR object.
+ * @param[out] stats_mask
+ * When non-NULL, it contains the mask of statistics counter types that are
+ * currently enabled for this MTR object, indicating which of the counters
+ * retrieved with the *stats* structure are valid.
+ * @param[in] clear
+ * When this parameter has a non-zero value, the statistics counters are
+ * cleared (i.e. set to zero) immediately after they have been read,
+ * otherwise the statistics counters are left untouched.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see enum rte_mtr_stats_type
+ */
+int
+rte_mtr_stats_read(uint8_t port_id,
+ uint32_t mtr_id,
+ struct rte_mtr_stats *stats,
+ uint64_t *stats_mask,
+ int clear,
+ struct rte_mtr_error *error);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __INCLUDE_RTE_MTR_H__ */
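As a minimal usage sketch of the API above (port number, IDs and rates
are made-up values):

	uint8_t port_id = 0; /* example port */
	struct rte_mtr_error err;
	struct rte_mtr_meter_profile mp = {
		.alg = RTE_MTR_SRTCM_RFC2697,
		.srtcm_rfc2697 = {
			.cir = 1250000, /* 10 Mbps, expressed in bytes/second */
			.cbs = 2048,
			.ebs = 2048,
			.color_aware = 0, /* color blind mode */
		},
	};
	struct rte_mtr_params params = {
		.meter_profile_id = 1,
		.action = {
			[RTE_MTR_GREEN] = e_MTR_POLICER_ACTION_COLOR_GREEN,
			[RTE_MTR_YELLOW] = e_MTR_POLICER_ACTION_COLOR_YELLOW,
			[RTE_MTR_RED] = e_MTR_POLICER_ACTION_DROP,
		},
		.stats_mask = RTE_MTR_STATS_N_PKTS_DROPPED,
	};

	if (rte_mtr_meter_profile_add(port_id, 1, &mp, &err) != 0 ||
			rte_mtr_create(port_id, 1, &params, 0 /* private */,
				&err) != 0)
		printf("MTR setup failed: %s\n",
			err.message ? err.message : rte_strerror(rte_errno));

The MTR object would then be attached to one or several flows through the
meter action of the flow API, as described in the file comment above.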
diff --git a/lib/librte_ether/rte_mtr_driver.h b/lib/librte_ether/rte_mtr_driver.h
new file mode 100644
index 0000000..3798cf6
--- /dev/null
+++ b/lib/librte_ether/rte_mtr_driver.h
@@ -0,0 +1,188 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_MTR_DRIVER_H__
+#define __INCLUDE_RTE_MTR_DRIVER_H__
+
+/**
+ * @file
+ * RTE Generic Traffic Metering and Policing API (Driver Side)
+ *
+ * This file provides implementation helpers for internal use by PMDs; they
+ * are not intended to be exposed to applications and are not subject to ABI
+ * versioning.
+ */
+
+#include <stdint.h>
+
+#include <rte_errno.h>
+#include "rte_ethdev.h"
+#include "rte_mtr.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+typedef int (*rte_mtr_meter_profile_add_t)(struct rte_eth_dev *dev,
+ uint32_t meter_profile_id,
+ struct rte_mtr_meter_profile *profile,
+ struct rte_mtr_error *error);
+/**< @internal MTR meter profile add */
+
+typedef int (*rte_mtr_meter_profile_delete_t)(struct rte_eth_dev *dev,
+ uint32_t meter_profile_id,
+ struct rte_mtr_error *error);
+/**< @internal MTR meter profile delete */
+
+typedef int (*rte_mtr_create_t)(struct rte_eth_dev *dev,
+ uint32_t mtr_id,
+ struct rte_mtr_params *params,
+ int shared,
+ struct rte_mtr_error *error);
+/**< @internal MTR object create */
+
+typedef int (*rte_mtr_destroy_t)(struct rte_eth_dev *dev,
+ uint32_t mtr_id,
+ struct rte_mtr_error *error);
+/**< @internal MTR object destroy */
+
+typedef int (*rte_mtr_meter_profile_update_t)(struct rte_eth_dev *dev,
+ uint32_t mtr_id,
+ uint32_t meter_profile_id,
+ struct rte_mtr_error *error);
+/**< @internal MTR object meter profile update */
+
+typedef int (*rte_mtr_policer_action_update_t)(struct rte_eth_dev *dev,
+ uint32_t mtr_id,
+ enum rte_mtr_color color,
+ enum rte_mtr_policer_action action,
+ struct rte_mtr_error *error);
+/**< @internal MTR object policer action update */
+
+typedef int (*rte_mtr_stats_update_t)(struct rte_eth_dev *dev,
+ uint32_t mtr_id,
+ uint64_t stats_mask,
+ struct rte_mtr_error *error);
+/**< @internal MTR object enabled stats update */
+
+typedef int (*rte_mtr_stats_read_t)(struct rte_eth_dev *dev,
+ uint32_t mtr_id,
+ struct rte_mtr_stats *stats,
+ uint64_t *stats_mask,
+ int clear,
+ struct rte_mtr_error *error);
+/**< @internal MTR object stats read */
+
+struct rte_mtr_ops {
+ /** MTR meter profile add */
+ rte_mtr_meter_profile_add_t meter_profile_add;
+
+ /** MTR meter profile delete */
+ rte_mtr_meter_profile_delete_t meter_profile_delete;
+
+ /** MTR object create */
+ rte_mtr_create_t create;
+
+ /** MTR object destroy */
+ rte_mtr_destroy_t destroy;
+
+ /** MTR object meter profile update */
+ rte_mtr_meter_profile_update_t meter_profile_update;
+
+ /** MTR object policer action update */
+ rte_mtr_policer_action_update_t policer_action_update;
+
+ /** MTR object enabled stats update */
+ rte_mtr_stats_update_t stats_update;
+
+ /** MTR object stats read */
+ rte_mtr_stats_read_t stats_read;
+};
+
+/**
+ * Initialize generic error structure.
+ *
+ * This function also sets rte_errno to a given value.
+ *
+ * @param[out] error
+ * Pointer to error structure (may be NULL).
+ * @param[in] code
+ * Related error code (rte_errno).
+ * @param[in] type
+ * Cause field and error type.
+ * @param[in] cause
+ * Object responsible for the error.
+ * @param[in] message
+ * Human-readable error message.
+ *
+ * @return
+ * Error code.
+ */
+static inline int
+rte_mtr_error_set(struct rte_mtr_error *error,
+ int code,
+ enum rte_mtr_error_type type,
+ const void *cause,
+ const char *message)
+{
+ if (error) {
+ *error = (struct rte_mtr_error){
+ .type = type,
+ .cause = cause,
+ .message = message,
+ };
+ }
+ rte_errno = code;
+ return code;
+}
+
+/**
+ * Get generic traffic metering and policing operations structure from a port
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[out] error
+ * Error details
+ *
+ * @return
+ * The traffic metering and policing operations structure associated with
+ * port_id on success, NULL otherwise.
+ */
+const struct rte_mtr_ops *
+rte_mtr_ops_get(uint8_t port_id, struct rte_mtr_error *error);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __INCLUDE_RTE_MTR_DRIVER_H__ */
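On the driver side, a PMD fills in an rte_mtr_ops table and returns it
from its ethdev mtr_ops_get hook (a sketch with hypothetical foo_*
callback names; the hook itself is added by the ethdev patch of this
series):

	static const struct rte_mtr_ops foo_mtr_ops = {
		.meter_profile_add = foo_meter_profile_add,
		.meter_profile_delete = foo_meter_profile_delete,
		.create = foo_mtr_create,
		.destroy = foo_mtr_destroy,
		/* callbacks left NULL are reported back to the
		 * application as -ENOSYS by the RTE_MTR_FUNC() wrapper */
	};

Unimplemented callbacks therefore cost the PMD nothing: the generic
layer turns a NULL pointer into a proper rte_mtr_error.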
--
2.7.4
^ permalink raw reply [relevance 2%]
* Re: [dpdk-dev] [PATCH] ethdev: add roughly match pattern
@ 2017-05-30 12:46 4% ` Adrien Mazarguil
2017-06-13 3:07 2% ` [dpdk-dev] [PATCH v3] ethdev: add fuzzy match in flow API Qi Zhang
1 sibling, 0 replies; 200+ results
From: Adrien Mazarguil @ 2017-05-30 12:46 UTC (permalink / raw)
To: Qi Zhang; +Cc: dev, Mcnamara, John
Hi Zhang,
You should cram "flow API" somewhere in the title of such commits.
On Tue, May 23, 2017 at 07:28:54PM -0400, Qi Zhang wrote:
> Add new meta pattern item RTE_FLOW_ITEM_TYPE_ROUGHLY.
>
> This is for devices that support a non-perfect match option.
> Usually a non-perfect match is fast but the cost is accuracy,
> e.g. signature match only matches the pattern's hash value, so it is
> possible for two different patterns to have the same hash value.
>
> The matching accuracy level can be configured via the subfield
> threshold. The driver can divide the threshold range and map it to
> the different accuracy levels that the device supports.
>
> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
While I really like the "roughly" pattern item name since it perfectly
describes its intended purpose in my opinion, perhaps some may not find this
name appropriate. I would like to hear other people's opinion on the matter
and not be the only one to ack this patch.
Several more comments below.
> ---
> app/test-pmd/cmdline_flow.c | 24 ++++++++++++++
> app/test-pmd/config.c | 1 +
> doc/guides/prog_guide/rte_flow.rst | 50 +++++++++++++++++++++++++++++
> doc/guides/testpmd_app_ug/testpmd_funcs.rst | 2 ++
> lib/librte_ether/rte_flow.h | 30 +++++++++++++++++
> 5 files changed, 107 insertions(+)
>
> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> index 0fd69f9..18ffcff 100644
> --- a/app/test-pmd/cmdline_flow.c
> +++ b/app/test-pmd/cmdline_flow.c
> @@ -107,6 +107,8 @@ enum index {
> ITEM_END,
> ITEM_VOID,
> ITEM_INVERT,
> + ITEM_ROUGHLY,
> + ITEM_ROUGHLY_THRESHOLD,
"Threshold" is commonly abbreviated as "thresh", I think it's fine if you
use this shorter form here and in the structure definition.
There is also an issue with the position of these enum entries. They should
come in the same order as rte_flow.h definitions like you did, but you are
supposed to add new entries at the end of the various lists in that file
instead of in the middle, otherwise you're destroying API/ABI compatibility
which is not supposed to happen. More on that topic below.
> ITEM_ANY,
> ITEM_ANY_NUM,
> ITEM_PF,
> @@ -426,6 +428,7 @@ static const enum index next_item[] = {
> ITEM_END,
> ITEM_VOID,
> ITEM_INVERT,
> + ITEM_ROUGHLY,
This will have to be moved at the end of the list.
> ITEM_ANY,
> ITEM_PF,
> ITEM_VF,
> @@ -447,6 +450,12 @@ static const enum index next_item[] = {
> ZERO,
> };
>
> +static const enum index item_roughly[] = {
> + ITEM_ROUGHLY_THRESHOLD,
I suggest "ITEM_ROUGHLY_THRESH".
> + ITEM_NEXT,
> + ZERO,
> +};
> +
> static const enum index item_any[] = {
> ITEM_ANY_NUM,
> ITEM_NEXT,
> @@ -954,6 +963,21 @@ static const struct token token_list[] = {
> .next = NEXT(NEXT_ENTRY(ITEM_NEXT)),
> .call = parse_vc,
> },
> + [ITEM_ROUGHLY] = {
This will have to be moved at the end of the list.
> + .name = "roughly",
> + .help = "match the pattern roughly",
Hehe, the question is who will go out of their way to match traffic roughly
instead of perfectly? They need a better incentive to do so, in a very short
sentence.
> + .priv = PRIV_ITEM(ROUGHLY,
> + sizeof(struct rte_flow_item_roughly)),
> + .next = NEXT(item_roughly),
> + .call = parse_vc,
> + },
> + [ITEM_ROUGHLY_THRESHOLD] = {
> + .name = "threshold",
"thresh" again, I won't comment them all, you get the idea.
> + .help = "match accuracy threshold",
> + .next = NEXT(item_roughly, NEXT_ENTRY(UNSIGNED), item_param),
> + .args = ARGS(ARGS_ENTRY(struct rte_flow_item_roughly,
> + threshold)),
> + },
> [ITEM_ANY] = {
> .name = "any",
> .help = "match any protocol for the current layer",
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index 4d873cd..5b0cd4d 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -954,6 +954,7 @@ static const struct {
> MK_FLOW_ITEM(END, 0),
> MK_FLOW_ITEM(VOID, 0),
> MK_FLOW_ITEM(INVERT, 0),
> + MK_FLOW_ITEM(ROUGHLY, sizeof(struct rte_flow_item_roughly)),
This will have to be moved at the end of the list.
> MK_FLOW_ITEM(ANY, sizeof(struct rte_flow_item_any)),
> MK_FLOW_ITEM(PF, 0),
> MK_FLOW_ITEM(VF, sizeof(struct rte_flow_item_vf)),
> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index b587ba9..4cc1876 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -491,6 +491,7 @@ Usage example, matching non-TCPv4 packets only:
>
> +-------+----------+
> | Index | Item |
> +
This change looks unnecessary.
> +=======+==========+
> | 0 | INVERT |
> +-------+----------+
> @@ -503,6 +504,55 @@ Usage example, matching non-TCPv4 packets only:
> | 4 | END |
> +-------+----------+
>
> +Item: ``ROUGHLY``
> +^^^^^^^^^^^^^^^^^
This will have to be moved at the end of the list (documentation also
follows the same order as rte_flow.h).
> +
> +Roughly matching, not perfect match.
> +
> +This is for devices that support a non-perfect match option.
> +Usually a non-perfect match is fast but the cost is accuracy,
> +e.g. signature match only matches the pattern's hash value, so it is
> +possible for two different patterns to have the same hash value.
> +
> +The matching accuracy level can be configured via the threshold.
> +The driver can divide the threshold range and map it to the different
> +accuracy levels that the device supports.
Please expand these paragraphs to fit 75-79 columns wide like the rest of
the file.
I think a better wording is necessary to provide incentives for applications
to use this mode. I have a few ideas but I'm not familiar enough with the
original signature mode for that. Perhaps John can help?
> +
> +.. _table_rte_flow_item_roughly:
> +
> +.. table:: ROUGHLY
> +
> + +----------+---------------+--------------------------------------------------+
> + | Field | Subfield | Value |
> + +==========+===============+==================================================+
> + | ``spec`` | ``threshold`` | 0 as perfect match, 0xffffffff as roughest match |
> + +----------+---------------+--------------------------------------------------+
> + | ``last`` | ``threshold`` | ignored |
> + +----------+---------------+--------------------------------------------------+
> + | ``mask`` | ``threshold`` | ignored |
> + +----------+---------------+--------------------------------------------------+
Last and mask cannot be ignored. The only items where they are ignored are
those that do not even take a spec definition.
This means that a mask set to 0 is supposed to be the same as no item
provided. PMDs should retrieve the threshold value using something like:
thresh = spec->thresh & mask->thresh;
if (last->thresh && (last->thresh & mask->thresh) < thresh)
	complain_unsupported();
Ranges (last) can be ignored when otherwise valid because what matters is
only the lowest threshold value.
See 8.2.3 Pattern item [1] for more information.
> +
Extra empty line.
> +
> +Usage example, roughly match TCPv4 packets:
> +
> +.. _table_rte_flow_item_roughly_example:
> +
> +.. table:: Roughly matching
How about "Rough matching".
> +
> + +-------+----------+
> + | Index | Item |
> + +=======+==========+
> + | 0 | ROUGHLY |
> + +-------+----------+
> + | 1 | Ethernet |
> + +-------+----------+
> + | 2 | IPv4 |
> + +-------+----------+
> + | 3 | TCP |
> + +-------+----------+
> + | 4 | END |
> + +-------+----------+
> +
> Item: ``PF``
> ^^^^^^^^^^^^
There is a missing change in this file. You must modify the following
statement:
- Signature mode of operation is not defined but could be handled through a
specific item type if needed.
And mention ROUGHLY in index 3 of the table below:
+---+-------------------+----------+-----+ |
| 3 | VF, PF (optional) | ``spec`` | any | |
| | +----------+-----+ |
| | | ``last`` | N/A | |
| | +----------+-----+ |
| | | ``mask`` | any | |
+---+-------------------+----------+-----+ |
> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index 0e50c10..08a88f8 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -2513,6 +2513,8 @@ This section lists supported pattern items and their attributes, if any.
>
> - ``invert``: perform actions when pattern does not match.
>
> +- ``roughly``: pattern is matched roughly.
> +
How about "match pattern roughly"? (remember to keep this in sync with
testpmd's inline help string though)
This will also have to be moved at the end of the list.
> - ``any``: match any protocol for the current layer.
>
> - ``num {unsigned}``: number of layers covered.
> diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
> index c47edbc..4921858 100644
> --- a/lib/librte_ether/rte_flow.h
> +++ b/lib/librte_ether/rte_flow.h
> @@ -148,6 +148,18 @@ enum rte_flow_item_type {
> RTE_FLOW_ITEM_TYPE_INVERT,
>
> /**
> + * [META]
> + *
> + * Roughly matching, not perfect matching
> + *
> + * This is for devices that support a non-perfect matching option.
> + * Usually a non-perfect match is fast but the cost is accuracy.
Perhaps John can help here as well. This one should be kept mostly in sync
with the full description for struct rte_flow_item_roughly and also the
documentation in rte_flow.rst.
> + *
> + * See struct rte_flow_item_roughly
Missing colon.
> + */
> + RTE_FLOW_ITEM_TYPE_ROUGHLY,
> +
This new item type *must* be moved at the end of the list to avoid breaking
API/ABI and existing applications. Remember it's append-only.
You then need to update the rest of the patch accordingly.
> + /**
> * Matches any protocol in place of the current layer, a single ANY
> * may also stand for several protocol layers.
> *
> @@ -300,6 +312,24 @@ enum rte_flow_item_type {
> };
>
> /**
> + * RTE_FLOW_ITEM_TYPE_ROUGHLY
> + *
> + * Roughly matching, not perfect match.
> + *
> + * This is for devices that support a non-perfect match option.
> + * Usually a non-perfect match is fast but the cost is accuracy,
> + * e.g. signature match only matches the pattern's hash value, so it is
> + * possible for two different patterns to have the same hash value.
> + *
> + * The matching accuracy level can be configured via the threshold.
> + * The driver can divide the threshold range and map it to the different
> + * accuracy levels that the device supports.
John?
> + */
> +struct rte_flow_item_roughly {
> + uint32_t threshold; /**< accuracy threshold*/
How about:
uint32_t thresh; /**< Accuracy threshold. */
> +};
Again this structure must be defined after all the others to keep the same
order as enum rte_flow_item_type.
> +
> +/**
> * RTE_FLOW_ITEM_TYPE_ANY
> *
> * Matches any protocol in place of the current layer, a single ANY may also
> --
> 2.7.4
>
[1] http://dpdk.org/doc/guides/prog_guide/rte_flow.html#pattern-item
--
Adrien Mazarguil
6WIND
^ permalink raw reply [relevance 4%]
* [dpdk-dev] PCI domain size
@ 2017-05-24 23:40 3% Stephen Hemminger
2017-06-07 14:23 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2017-05-24 23:40 UTC (permalink / raw)
To: dev
While working on SR-IOV support on Azure, it was discovered that some applications
and drivers do not support full-size PCI domains. In the Azure environment, the PCI
pass-through device has a synthetic domain value (i.e. generated by the host) which
is > 16 bits. The common PCI utilities (pciutils) and the Linux kernel both support
the full 32 bits, but DPDK does not. FreeBSD also supports 32-bit domains.
Changing the one place in DPDK (rte_pci.h) is trivial in source, but of course it
is a major ABI breakage and a complete flag day, i.e. no binary compatibility is
possible.
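For context, the change in question is a single field width. The struct
currently looks roughly like this (sketch based on the 17.05 layout):

struct rte_pci_addr {
	uint16_t domain;   /* today: only 16 bits */
	uint8_t bus;
	uint8_t devid;
	uint8_t function;
};

Widening domain to uint32_t changes the size and layout of every structure
that embeds rte_pci_addr, hence no binary compatibility can be preserved.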
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v4 2/2] ethdev: add traffic management API
2017-05-19 17:12 1% ` [dpdk-dev] [PATCH v4 2/2] ethdev: add traffic management API Cristian Dumitrescu
@ 2017-05-24 11:28 0% ` Hemant Agrawal
0 siblings, 0 replies; 200+ results
From: Hemant Agrawal @ 2017-05-24 11:28 UTC (permalink / raw)
To: Cristian Dumitrescu, dev
Cc: thomas.monjalon, jerin.jacob, balasubramanian.manoharan, shreyansh.jain
On 5/19/2017 10:42 PM, Cristian Dumitrescu wrote:
> This patch introduces the generic ethdev API for the traffic manager
> capability, which includes: hierarchical scheduling, traffic shaping,
> congestion management, packet marking.
>
> Main features:
> - Exposed as ethdev plugin capability (similar to rte_flow)
> - Capability query API per port, per level and per node
> - Scheduling algorithms: Strict Priority (SP), Weighted Fair Queuing (WFQ)
> - Traffic shaping: single/dual rate, private (per node) and shared (by
> multiple nodes) shapers
> - Congestion management for hierarchy leaf nodes: algorithms of tail drop,
> head drop, WRED; private (per node) and shared (by multiple nodes) WRED
> contexts
> - Packet marking: IEEE 802.1q (VLAN DEI), IETF RFC 3168 (IPv4/IPv6 ECN for
> TCP and SCTP), IETF RFC 2597 (IPv4 / IPv6 DSCP)
>
> Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
> ---
> Changes in v4:
> - Implemented feedback from Hemant [6]
> - Capability API: Reworked the port, level and node capability API
> data structure to remove confusion due to "summary across all
> nodes" approach, which made it unclear whether a particular
> capability is supported by all nodes or by at least one node.
> - Capability API: Added flags for "all nodes have identical
> capability set"
> - Suspended state: documented the required behavior in Doxygen
> description
> - Implemented feedback from Jerin [7]
> - Node add: added level parameter (see new API function:
> rte_tm_node_add_check_level())
> - RTE_TM_ETH_FRAMING_OVERHEAD, RTE_TM_ETH_FRAMING_OVERHEAD_FCS:
> documented their usage in their Doxygen description
> - Capability API: for each function, mention the related
> capability field (Doxygen @see)
> - stats_mask, capability_mask: document the enum flags used to
> build each mask (Doxygen @see)
> - Rename rte_tm_get_leaf_nodes() to
> rte_tm_get_number_of_leaf_nodes()
> - Doxygen: add @param[in, out] to the description of all API funcs
> - Doxygen: fix hooks in doc/api/doxy-api-index.md
> - Rename rte_tm_hierarchy_set() to rte_tm_hierarchy_commit(), improved
> Doxygen description
> - Node add, node delete: improved Doxygen description
> - Fixed incorrect design assumption that packet-based weight mode for WFQ
> is identical to WRR. As a result, removed all references to WRR support.
> Renamed the "scheduling mode" node parameters to "wfq_weight_mode".
>
> Changes in v3:
> - Implemented feedback from Jerin [5]
> - Changed naming convention: scheddev -> tm
> - Improvements on the capability API:
> - Specification of marking capabilities per color
> - WFQ/WRR groups: sp_n_children_max ->
> wfq_wrr_n_children_per_group_max, added wfq_wrr_n_groups_max,
> improved description of both, improved description of
> wfq_wrr_weight_max
> - Dynamic updates: added KEEP_LEVEL and CHANGE_LEVEL for parent
> update
> - Enforced/documented restrictions for root node (node_add() and
> update())
> - Enforced/documented shaper profile restrictions on PIR: PIR != 0,
> PIR >= CIR
> - Turned repetitive code in rte_tm.c into macro
> - Removed dependency on rte_red.h file (added RED params to rte_tm.h)
> - Color: removed "e_" from color names enum
> - Fixed small Doxygen style issues
>
> Changes in v2:
> - Implemented feedback from Hemant [4]
> - Improvements on the capability API
> - Added capability API for hierarchy level
> - Merged stats capability into the capability API
> - Added dynamic updates
> - Added non-leaf/leaf union to the node capability structure
> - Renamed sp_priority_min to sp_n_priorities_max, added
> clarifications
> - Fixed description for sp_n_children_max
> - Clarified and enforced rule on node ID range for leaf and non-leaf nodes
> - Added API functions to get node type (i.e. leaf/non-leaf):
> get_leaf_nodes(), node_type_get()
> - Added clarification for the root node: its creation, parent, role
> - Macro NODE_ID_NULL as root node's parent
> - Description of the node_add() and node_parent_update() API funcs
> - Added clarification for the first time add vs. subsequent updates rule
> - Cleaned up the description for the node_add() function
> - Statistics API improvements
> - Merged stats capability into the capability API
> - Added API function node_stats_update()
> - Added more stats per packet color
> - Added more error types
> - Fixed small Doxygen style issues
>
> Changes in v1 (since RFC [1]):
> - Implemented as ethdev plugin (similar to rte_flow) as opposed to more
> monolithic additions to ethdev itself
> - Implemented feedback from Jerin [2] and Hemant [3]. Implemented all the
> suggested items with only one exception, see the long list below,
> hopefully nothing was forgotten.
> - The item not done (hopefully for a good reason): driver-generated
> object IDs. IMO the choice to have application-generated object IDs
> adds marginal complexity to the driver (search ID function
> required), but it provides huge simplification for the application.
> The app does not need to worry about building & managing tree-like
> structure for storing driver-generated object IDs, the app can use
> its own convention for node IDs depending on the specific hierarchy
> that it needs. Trivial example: identify all level-2 nodes with IDs
> like 100, 200, 300, … and the level-3 nodes based on their level-2
> parents: 110, 120, 130, 140, …, 210, 220, 230, 240, …, 310, 320,
> 330, … and level-4 nodes based on their level-3 parents: 111, 112,
> 113, 114, …, 121, 122, 123, 124, … (a short sketch of this node ID
> convention follows the changelog). Moreover, see the change log
> for the other related simplification that was implemented: leaf
> nodes now have predefined IDs that are the same as their Ethernet
> TX queue ID (therefore no translation is required for leaf nodes).
> - Capability API. Done per port and per node as well.
> - Dual rate shapers
> - Added configuration of private shaper (per node) directly from the
> shaper profile as part of node API (no shaper ID needed for private
> shapers), while the shared shapers are configured outside of the node
> API using shaper profile and communicated to the node using shared
> shaper ID. So there is no configuration overhead for shared shapers if
> the app does not use any of them.
> - Leaf nodes now have predefined IDs that are the same as their Ethernet
> TX queue ID (therefore no translation is required for leaf nodes). This
> is also used to differentiate between a leaf node and a non-leaf node.
> - Domain-specific errors to give a precise indication of the error cause
> (same as done by rte_flow)
> - Packet marking API
> - Packet length optional adjustment for shapers, positive (e.g. for adding
> Ethernet framing overhead of 20 bytes) or negative (e.g. for rate
> limiting based on IP packet bytes)
>
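For illustration, the application-side node ID convention mentioned above
can be captured with a few trivial helpers. This is only a sketch of one
possible scheme; the digit-based IDs and helper names are made up for the
example and are not part of the API (leaf IDs stay fixed to 0 .. N-1):

#include <stdint.h>

/* Hypothetical scheme: level-2 nodes get IDs 100, 200, 300, ...;
 * level-3 nodes derive from their level-2 parent (110, 120, ...);
 * level-4 nodes derive from their level-3 parent (111, 112, ...).
 */
static inline uint32_t l2_id(uint32_t n) { return 100 * n; }
static inline uint32_t l3_id(uint32_t parent, uint32_t n) { return parent + 10 * n; }
static inline uint32_t l4_id(uint32_t parent, uint32_t n) { return parent + n; }
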
> [1] RFC: http://dpdk.org/ml/archives/dev/2016-November/050956.html
> [2] Jerin’s feedback on RFC: http://www.dpdk.org/ml/archives/dev/2017-January/054484.html
> [3] Hemant’s feedback on RFC: http://www.dpdk.org/ml/archives/dev/2017-January/054866.html
> [4] Hemant's feedback on v1: http://www.dpdk.org/ml/archives/dev/2017-February/058033.html
> [5] Jerin's feedback on v1: http://www.dpdk.org/ml/archives/dev/2017-March/058895.html
> [6] Hemant's feedback on v3: http://www.dpdk.org/ml/archives/dev/2017-March/062354.html
> [7] Jerin's feedback on v3: http://www.dpdk.org/ml/archives/dev/2017-April/063429.html
>
> MAINTAINERS | 4 +
> lib/librte_ether/Makefile | 5 +-
> lib/librte_ether/rte_ether_version.map | 30 +
> lib/librte_ether/rte_tm.c | 448 ++++++++
> lib/librte_ether/rte_tm.h | 1923 ++++++++++++++++++++++++++++++++
> lib/librte_ether/rte_tm_driver.h | 373 +++++++
> 6 files changed, 2782 insertions(+), 1 deletion(-)
> create mode 100644 lib/librte_ether/rte_tm.c
> create mode 100644 lib/librte_ether/rte_tm.h
> create mode 100644 lib/librte_ether/rte_tm_driver.h
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index afb4cab..cdaf2ac 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -240,6 +240,10 @@ Flow API
> M: Adrien Mazarguil <adrien.mazarguil@6wind.com>
> F: lib/librte_ether/rte_flow*
>
> +Traffic Management API
> +M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
> +F: lib/librte_ether/rte_tm*
> +
> Crypto API
> M: Declan Doherty <declan.doherty@intel.com>
> F: lib/librte_cryptodev/
> diff --git a/lib/librte_ether/Makefile b/lib/librte_ether/Makefile
> index 93fdde1..db692ae 100644
> --- a/lib/librte_ether/Makefile
> +++ b/lib/librte_ether/Makefile
> @@ -1,6 +1,6 @@
> # BSD LICENSE
> #
> -# Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
> +# Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
> # All rights reserved.
> #
> # Redistribution and use in source and binary forms, with or without
> @@ -45,6 +45,7 @@ LIBABIVER := 6
>
> SRCS-y += rte_ethdev.c
> SRCS-y += rte_flow.c
> +SRCS-y += rte_tm.c
>
> #
> # Export include files
> @@ -56,5 +57,7 @@ SYMLINK-y-include += rte_eth_ctrl.h
> SYMLINK-y-include += rte_dev_info.h
> SYMLINK-y-include += rte_flow.h
> SYMLINK-y-include += rte_flow_driver.h
> +SYMLINK-y-include += rte_tm.h
> +SYMLINK-y-include += rte_tm_driver.h
>
> include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map
> index ff056e8..7f39904 100644
> --- a/lib/librte_ether/rte_ether_version.map
> +++ b/lib/librte_ether/rte_ether_version.map
> @@ -161,4 +161,34 @@ DPDK_17.08 {
> global:
>
> rte_eth_dev_tm_ops_get;
> + rte_tm_get_leaf_nodes;
> + rte_tm_node_type_get;
> + rte_tm_capabilities_get;
> + rte_tm_level_capabilities_get;
> + rte_tm_node_capabilities_get;
> + rte_tm_wred_profile_add;
> + rte_tm_wred_profile_delete;
> + rte_tm_shared_wred_context_add_update;
> + rte_tm_shared_wred_context_delete;
> + rte_tm_shaper_profile_add;
> + rte_tm_shaper_profile_delete;
> + rte_tm_shared_shaper_add_update;
> + rte_tm_shared_shaper_delete;
> + rte_tm_node_add;
> + rte_tm_node_delete;
> + rte_tm_node_suspend;
> + rte_tm_node_resume;
> + rte_tm_hierarchy_commit;
> + rte_tm_node_parent_update;
> + rte_tm_node_shaper_update;
> + rte_tm_node_shared_shaper_update;
> + rte_tm_node_stats_update;
> + rte_tm_node_wfq_weight_mode_update;
> + rte_tm_node_cman_update;
> + rte_tm_node_wred_context_update;
> + rte_tm_node_shared_wred_context_update;
> + rte_tm_node_stats_read;
> + rte_tm_mark_vlan_dei;
> + rte_tm_mark_ip_ecn;
> + rte_tm_mark_ip_dscp;
> } DPDK_17.05
> diff --git a/lib/librte_ether/rte_tm.c b/lib/librte_ether/rte_tm.c
> new file mode 100644
> index 0000000..2617a1a
> --- /dev/null
> +++ b/lib/librte_ether/rte_tm.c
> @@ -0,0 +1,448 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright(c) 2017 Intel Corporation. All rights reserved.
> + * All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * * Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * * Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in
> + * the documentation and/or other materials provided with the
> + * distribution.
> + * * Neither the name of Intel Corporation nor the names of its
> + * contributors may be used to endorse or promote products derived
> + * from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <stdint.h>
> +
> +#include <rte_errno.h>
> +#include "rte_ethdev.h"
> +#include "rte_tm_driver.h"
> +#include "rte_tm.h"
> +
> +/* Get generic traffic manager operations structure from a port. */
> +const struct rte_tm_ops *
> +rte_tm_ops_get(uint8_t port_id, struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + const struct rte_tm_ops *ops;
> +
> + if (!rte_eth_dev_is_valid_port(port_id)) {
> + rte_tm_error_set(error,
> + ENODEV,
> + RTE_TM_ERROR_TYPE_UNSPECIFIED,
> + NULL,
> + rte_strerror(ENODEV));
> + return NULL;
> + }
> +
> + if ((dev->dev_ops->tm_ops_get == NULL) ||
> + (dev->dev_ops->tm_ops_get(dev, &ops) != 0) ||
> + (ops == NULL)) {
> + rte_tm_error_set(error,
> + ENOSYS,
> + RTE_TM_ERROR_TYPE_UNSPECIFIED,
> + NULL,
> + rte_strerror(ENOSYS));
> + return NULL;
> + }
> +
> + return ops;
> +}
> +
> +#define RTE_TM_FUNC(port_id, func) \
> +({ \
> + const struct rte_tm_ops *ops = \
> + rte_tm_ops_get(port_id, error); \
> + if (ops == NULL) \
> + return -rte_errno; \
> + \
> + if (ops->func == NULL) \
> + return -rte_tm_error_set(error, \
> + ENOSYS, \
> + RTE_TM_ERROR_TYPE_UNSPECIFIED, \
> + NULL, \
> + rte_strerror(ENOSYS)); \
> + \
> + ops->func; \
> +})
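
As a reading aid, a call such as RTE_TM_FUNC(port_id, node_suspend)(dev,
node_id, error) behaves roughly like the hand-expanded sketch below: the
statement expression resolves the callback first, then the trailing
argument list invokes it.

/* Hand expansion (sketch) of:
 *	return RTE_TM_FUNC(port_id, node_suspend)(dev, node_id, error);
 */
const struct rte_tm_ops *ops = rte_tm_ops_get(port_id, error);

if (ops == NULL)
	return -rte_errno;

if (ops->node_suspend == NULL)
	return -rte_tm_error_set(error, ENOSYS,
		RTE_TM_ERROR_TYPE_UNSPECIFIED, NULL,
		rte_strerror(ENOSYS));

return ops->node_suspend(dev, node_id, error);
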
> +
> +/* Get number of leaf nodes */
> +int
> +rte_tm_get_number_of_leaf_nodes(uint8_t port_id,
> + uint32_t *n_leaf_nodes,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + const struct rte_tm_ops *ops =
> + rte_tm_ops_get(port_id, error);
> +
> + if (ops == NULL)
> + return -rte_errno;
> +
> + if (n_leaf_nodes == NULL) {
> + rte_tm_error_set(error,
> + EINVAL,
> + RTE_TM_ERROR_TYPE_UNSPECIFIED,
> + NULL,
> + rte_strerror(EINVAL));
> + return -rte_errno;
> + }
> +
> + *n_leaf_nodes = dev->data->nb_tx_queues;
> + return 0;
> +}
> +
> +/* Check node type (leaf or non-leaf) */
> +int
> +rte_tm_node_type_get(uint8_t port_id,
> + uint32_t node_id,
> + int *is_leaf,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, node_type_get)(dev,
> + node_id, is_leaf, error);
> +}
> +
> +/* Get node level */
> +int
> +rte_tm_node_level_get(uint8_t port_id,
> + uint32_t node_id,
> + uint32_t *level_id,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, node_level_get)(dev,
> + node_id, level_id, error);
> +}
> +
> +/* Get capabilities */
> +int rte_tm_capabilities_get(uint8_t port_id,
> + struct rte_tm_capabilities *cap,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, capabilities_get)(dev,
> + cap, error);
> +}
> +
> +/* Get level capabilities */
> +int rte_tm_level_capabilities_get(uint8_t port_id,
> + uint32_t level_id,
> + struct rte_tm_level_capabilities *cap,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, level_capabilities_get)(dev,
> + level_id, cap, error);
> +}
> +
> +/* Get node capabilities */
> +int rte_tm_node_capabilities_get(uint8_t port_id,
> + uint32_t node_id,
> + struct rte_tm_node_capabilities *cap,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, node_capabilities_get)(dev,
> + node_id, cap, error);
> +}
> +
> +/* Add WRED profile */
> +int rte_tm_wred_profile_add(uint8_t port_id,
> + uint32_t wred_profile_id,
> + struct rte_tm_wred_params *profile,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, wred_profile_add)(dev,
> + wred_profile_id, profile, error);
> +}
> +
> +/* Delete WRED profile */
> +int rte_tm_wred_profile_delete(uint8_t port_id,
> + uint32_t wred_profile_id,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, wred_profile_delete)(dev,
> + wred_profile_id, error);
> +}
> +
> +/* Add/update shared WRED context */
> +int rte_tm_shared_wred_context_add_update(uint8_t port_id,
> + uint32_t shared_wred_context_id,
> + uint32_t wred_profile_id,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, shared_wred_context_add_update)(dev,
> + shared_wred_context_id, wred_profile_id, error);
> +}
> +
> +/* Delete shared WRED context */
> +int rte_tm_shared_wred_context_delete(uint8_t port_id,
> + uint32_t shared_wred_context_id,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, shared_wred_context_delete)(dev,
> + shared_wred_context_id, error);
> +}
> +
> +/* Add shaper profile */
> +int rte_tm_shaper_profile_add(uint8_t port_id,
> + uint32_t shaper_profile_id,
> + struct rte_tm_shaper_params *profile,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, shaper_profile_add)(dev,
> + shaper_profile_id, profile, error);
> +}
> +
> +/* Delete shaper profile */
> +int rte_tm_shaper_profile_delete(uint8_t port_id,
> + uint32_t shaper_profile_id,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, shaper_profile_delete)(dev,
> + shaper_profile_id, error);
> +}
> +
> +/* Add shared shaper */
> +int rte_tm_shared_shaper_add_update(uint8_t port_id,
> + uint32_t shared_shaper_id,
> + uint32_t shaper_profile_id,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, shared_shaper_add_update)(dev,
> + shared_shaper_id, shaper_profile_id, error);
> +}
> +
> +/* Delete shared shaper */
> +int rte_tm_shared_shaper_delete(uint8_t port_id,
> + uint32_t shared_shaper_id,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, shared_shaper_delete)(dev,
> + shared_shaper_id, error);
> +}
> +
> +/* Add node to port traffic manager hierarchy */
> +int rte_tm_node_add(uint8_t port_id,
> + uint32_t node_id,
> + uint32_t parent_node_id,
> + uint32_t priority,
> + uint32_t weight,
> + struct rte_tm_node_params *params,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, node_add)(dev,
> + node_id, parent_node_id, priority, weight, params, error);
> +}
> +
> +/* Delete node from traffic manager hierarchy */
> +int rte_tm_node_delete(uint8_t port_id,
> + uint32_t node_id,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, node_delete)(dev,
> + node_id, error);
> +}
> +
> +/* Suspend node */
> +int rte_tm_node_suspend(uint8_t port_id,
> + uint32_t node_id,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, node_suspend)(dev,
> + node_id, error);
> +}
> +
> +/* Resume node */
> +int rte_tm_node_resume(uint8_t port_id,
> + uint32_t node_id,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, node_resume)(dev,
> + node_id, error);
> +}
> +
> +/* Commit the initial port traffic manager hierarchy */
> +int rte_tm_hierarchy_commit(uint8_t port_id,
> + int clear_on_fail,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, hierarchy_commit)(dev,
> + clear_on_fail, error);
> +}
> +
> +/* Update node parent */
> +int rte_tm_node_parent_update(uint8_t port_id,
> + uint32_t node_id,
> + uint32_t parent_node_id,
> + uint32_t priority,
> + uint32_t weight,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, node_parent_update)(dev,
> + node_id, parent_node_id, priority, weight, error);
> +}
> +
> +/* Update node private shaper */
> +int rte_tm_node_shaper_update(uint8_t port_id,
> + uint32_t node_id,
> + uint32_t shaper_profile_id,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, node_shaper_update)(dev,
> + node_id, shaper_profile_id, error);
> +}
> +
> +/* Update node shared shapers */
> +int rte_tm_node_shared_shaper_update(uint8_t port_id,
> + uint32_t node_id,
> + uint32_t shared_shaper_id,
> + int add,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, node_shared_shaper_update)(dev,
> + node_id, shared_shaper_id, add, error);
> +}
> +
> +/* Update node stats */
> +int rte_tm_node_stats_update(uint8_t port_id,
> + uint32_t node_id,
> + uint64_t stats_mask,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, node_stats_update)(dev,
> + node_id, stats_mask, error);
> +}
> +
> +/* Update WFQ weight mode */
> +int rte_tm_node_wfq_weight_mode_update(uint8_t port_id,
> + uint32_t node_id,
> + int *wfq_weight_mode,
> + uint32_t n_sp_priorities,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, node_wfq_weight_mode_update)(dev,
> + node_id, wfq_weight_mode, n_sp_priorities, error);
> +}
> +
> +/* Update node congestion management mode */
> +int rte_tm_node_cman_update(uint8_t port_id,
> + uint32_t node_id,
> + enum rte_tm_cman_mode cman,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, node_cman_update)(dev,
> + node_id, cman, error);
> +}
> +
> +/* Update node private WRED context */
> +int rte_tm_node_wred_context_update(uint8_t port_id,
> + uint32_t node_id,
> + uint32_t wred_profile_id,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, node_wred_context_update)(dev,
> + node_id, wred_profile_id, error);
> +}
> +
> +/* Update node shared WRED context */
> +int rte_tm_node_shared_wred_context_update(uint8_t port_id,
> + uint32_t node_id,
> + uint32_t shared_wred_context_id,
> + int add,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, node_shared_wred_context_update)(dev,
> + node_id, shared_wred_context_id, add, error);
> +}
> +
> +/* Read and/or clear stats counters for specific node */
> +int rte_tm_node_stats_read(uint8_t port_id,
> + uint32_t node_id,
> + struct rte_tm_node_stats *stats,
> + uint64_t *stats_mask,
> + int clear,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, node_stats_read)(dev,
> + node_id, stats, stats_mask, clear, error);
> +}
> +
> +/* Packet marking - VLAN DEI */
> +int rte_tm_mark_vlan_dei(uint8_t port_id,
> + int mark_green,
> + int mark_yellow,
> + int mark_red,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, mark_vlan_dei)(dev,
> + mark_green, mark_yellow, mark_red, error);
> +}
> +
> +/* Packet marking - IPv4/IPv6 ECN */
> +int rte_tm_mark_ip_ecn(uint8_t port_id,
> + int mark_green,
> + int mark_yellow,
> + int mark_red,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, mark_ip_ecn)(dev,
> + mark_green, mark_yellow, mark_red, error);
> +}
> +
> +/* Packet marking - IPv4/IPv6 DSCP */
> +int rte_tm_mark_ip_dscp(uint8_t port_id,
> + int mark_green,
> + int mark_yellow,
> + int mark_red,
> + struct rte_tm_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + return RTE_TM_FUNC(port_id, mark_ip_dscp)(dev,
> + mark_green, mark_yellow, mark_red, error);
> +}
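
From the application side, the wrappers above are used like this minimal
sketch (port_id is assumed to be a valid, configured ethdev whose PMD
implements the TM ops; the helper name is made up for the example):

#include <stdio.h>
#include <rte_tm.h>

static int
print_tm_summary(uint8_t port_id)
{
	struct rte_tm_capabilities cap;
	uint32_t n_leaf_nodes;
	struct rte_tm_error error;

	/* Both calls fill *error* only on failure. */
	if (rte_tm_capabilities_get(port_id, &cap, &error) != 0 ||
	    rte_tm_get_number_of_leaf_nodes(port_id, &n_leaf_nodes,
			&error) != 0) {
		printf("TM query failed: %s\n",
			error.message ? error.message : "(no message)");
		return -1;
	}

	printf("port %u: %u levels max, %u nodes max, %u leaf nodes\n",
		port_id, cap.n_levels_max, cap.n_nodes_max, n_leaf_nodes);
	return 0;
}
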
> diff --git a/lib/librte_ether/rte_tm.h b/lib/librte_ether/rte_tm.h
> new file mode 100644
> index 0000000..22167c2
> --- /dev/null
> +++ b/lib/librte_ether/rte_tm.h
> @@ -0,0 +1,1923 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright(c) 2017 Intel Corporation. All rights reserved.
> + * All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * * Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * * Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in
> + * the documentation and/or other materials provided with the
> + * distribution.
> + * * Neither the name of Intel Corporation nor the names of its
> + * contributors may be used to endorse or promote products derived
> + * from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef __INCLUDE_RTE_TM_H__
> +#define __INCLUDE_RTE_TM_H__
> +
> +/**
> + * @file
> + * RTE Generic Traffic Manager API
> + *
> + * This interface provides the ability to configure the traffic manager in a
> + * generic way. It includes features such as: hierarchical scheduling,
> + * traffic shaping, congestion management, packet marking, etc.
> + */
> +
> +#include <stdint.h>
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/**
> + * Ethernet framing overhead.
> + *
> + * Overhead fields per Ethernet frame:
> + * 1. Preamble: 7 bytes;
> + * 2. Start of Frame Delimiter (SFD): 1 byte;
> + * 3. Inter-Frame Gap (IFG): 12 bytes.
> + *
> + * One of the typical values for the *pkt_length_adjust* field of the shaper
> + * profile.
> + *
> + * @see struct rte_tm_shaper_params
> + *
> + */
> +#define RTE_TM_ETH_FRAMING_OVERHEAD 20
> +
> +/**
> + * Ethernet framing overhead including the Frame Check Sequence (FCS) field.
> + * Useful when FCS is generated and added at the end of the Ethernet frame on
> + * TX side without any SW intervention.
> + *
> + * One of the typical values for the pkt_length_adjust field of the shaper
> + * profile.
> + *
> + * @see struct rte_tm_shaper_params
> + */
> +#define RTE_TM_ETH_FRAMING_OVERHEAD_FCS 24
> +
> +/** Invalid WRED profile ID */
> +#define RTE_TM_WRED_PROFILE_ID_NONE UINT32_MAX
> +
> +/** Invalid shaper profile ID */
> +#define RTE_TM_SHAPER_PROFILE_ID_NONE UINT32_MAX
> +
> +/** Node ID for the parent of the root node */
> +#define RTE_TM_NODE_ID_NULL UINT32_MAX
> +
> +/**
> + * Color
> + */
> +enum rte_tm_color {
> + RTE_TM_GREEN = 0, /**< Green */
> + RTE_TM_YELLOW, /**< Yellow */
> + RTE_TM_RED, /**< Red */
> + RTE_TM_COLORS /**< Number of colors */
> +};
> +
> +/**
> + * Node statistics counter type
> + */
> +enum rte_tm_stats_type {
> + /**< Number of packets scheduled from current node. */
> + RTE_TM_STATS_N_PKTS = 1 << 0,
> +
> + /**< Number of bytes scheduled from current node. */
> + RTE_TM_STATS_N_BYTES = 1 << 1,
> +
> + /**< Number of green packets dropped by current leaf node. */
> + RTE_TM_STATS_N_PKTS_GREEN_DROPPED = 1 << 2,
> +
> + /**< Number of yellow packets dropped by current leaf node. */
> + RTE_TM_STATS_N_PKTS_YELLOW_DROPPED = 1 << 3,
> +
> + /**< Number of red packets dropped by current leaf node. */
> + RTE_TM_STATS_N_PKTS_RED_DROPPED = 1 << 4,
> +
> + /**< Number of green bytes dropped by current leaf node. */
> + RTE_TM_STATS_N_BYTES_GREEN_DROPPED = 1 << 5,
> +
> + /**< Number of yellow bytes dropped by current leaf node. */
> + RTE_TM_STATS_N_BYTES_YELLOW_DROPPED = 1 << 6,
> +
> + /**< Number of red bytes dropped by current leaf node. */
> + RTE_TM_STATS_N_BYTES_RED_DROPPED = 1 << 7,
> +
> + /**< Number of packets currently waiting in the packet queue of current
> + * leaf node.
> + */
> + RTE_TM_STATS_N_PKTS_QUEUED = 1 << 8,
> +
> + /**< Number of bytes currently waiting in the packet queue of current
> + * leaf node.
> + */
> + RTE_TM_STATS_N_BYTES_QUEUED = 1 << 9,
> +};
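
The counter types are bit flags, so an application builds the stats_mask
it wants by OR-ing them together, e.g. (illustrative selection):

/* Enable scheduled packet/byte counters plus per-color packet drops. */
uint64_t stats_mask = RTE_TM_STATS_N_PKTS |
	RTE_TM_STATS_N_BYTES |
	RTE_TM_STATS_N_PKTS_GREEN_DROPPED |
	RTE_TM_STATS_N_PKTS_YELLOW_DROPPED |
	RTE_TM_STATS_N_PKTS_RED_DROPPED;
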
> +
> +/**
> + * Node statistics counters
> + */
> +struct rte_tm_node_stats {
> + /**< Number of packets scheduled from current node. */
> + uint64_t n_pkts;
> +
> + /**< Number of bytes scheduled from current node. */
> + uint64_t n_bytes;
> +
> + /**< Statistics counters for leaf nodes only. */
> + struct {
> + /**< Number of packets dropped by current leaf node per each
> + * color.
> + */
> + uint64_t n_pkts_dropped[RTE_TM_COLORS];
> +
> + /**< Number of bytes dropped by current leaf node per each
> + * color.
> + */
> + uint64_t n_bytes_dropped[RTE_TM_COLORS];
> +
> + /**< Number of packets currently waiting in the packet queue of
> + * current leaf node.
> + */
> + uint64_t n_pkts_queued;
> +
> + /**< Number of bytes currently waiting in the packet queue of
> + * current leaf node.
> + */
> + uint64_t n_bytes_queued;
> + } leaf;
> +};
> +
> +/**
> + * Traffic manager dynamic updates
> + */
> +enum rte_tm_dynamic_update_type {
> + /**< Dynamic parent node update. The new parent node is located on same
> + * hierarchy level as the former parent node. Consequently, the node
> + * whose parent is changed preserves its hierarchy level.
> + */
> + RTE_TM_UPDATE_NODE_PARENT_KEEP_LEVEL = 1 << 0,
> +
> + /**< Dynamic parent node update. The new parent node is located on
> + * different hierarchy level than the former parent node. Consequently,
> + * the node whose parent is changed also changes its hierarchy level.
> + */
> + RTE_TM_UPDATE_NODE_PARENT_CHANGE_LEVEL = 1 << 1,
> +
> + /**< Dynamic node add/delete. */
> + RTE_TM_UPDATE_NODE_ADD_DELETE = 1 << 2,
> +
> + /**< Suspend/resume nodes. */
> + RTE_TM_UPDATE_NODE_SUSPEND_RESUME = 1 << 3,
> +
> + /**< Dynamic switch between byte-based and packet-based WFQ weights. */
> + RTE_TM_UPDATE_NODE_WFQ_WEIGHT_MODE = 1 << 4,
> +
> + /**< Dynamic update on number of SP priorities. */
> + RTE_TM_UPDATE_NODE_N_SP_PRIORITIES = 1 << 5,
> +
> + /**< Dynamic update of congestion management mode for leaf nodes. */
> + RTE_TM_UPDATE_NODE_CMAN = 1 << 6,
> +
> + /**< Dynamic update of the set of enabled stats counter types. */
> + RTE_TM_UPDATE_NODE_STATS = 1 << 7,
> +};
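
A sketch of gating a run-time re-parenting operation on the advertised
capability bits (the helper name is made up; port_id assumed valid):

static int
can_reparent_keep_level(uint8_t port_id)
{
	struct rte_tm_capabilities cap;
	struct rte_tm_error error;

	if (rte_tm_capabilities_get(port_id, &cap, &error) != 0)
		return 0;

	/* Non-zero iff rte_tm_node_parent_update() may be used after
	 * hierarchy commit with the new parent on the same level.
	 */
	return (cap.dynamic_update_mask &
		RTE_TM_UPDATE_NODE_PARENT_KEEP_LEVEL) != 0;
}
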
> +
> +/**
> + * Traffic manager capabilities
> + */
> +struct rte_tm_capabilities {
> + /**< Maximum number of nodes. */
> + uint32_t n_nodes_max;
> +
> + /**< Maximum number of levels (i.e. number of nodes connecting the root
> + * node with any leaf node, including the root and the leaf).
> + */
> + uint32_t n_levels_max;
> +
> + /**< When non-zero, this flag indicates that all the non-leaf nodes
> + * (with the exception of the root node) have identical capability set.
> + */
> + int non_leaf_nodes_identical;
> +
> + /**< When non-zero, this flag indicates that all the leaf nodes have
> + * identical capability set.
> + */
> + int leaf_nodes_identical;
> +
> + /**< Maximum number of shapers, either private or shared. In case the
> + * implementation does not share any resources between private and
> + * shared shapers, it is typically equal to the sum of
> + * *shaper_private_n_max* and *shaper_shared_n_max*.
> + */
> + uint32_t shaper_n_max;
> +
> + /**< Maximum number of private shapers. Indicates the maximum number of
> + * nodes that can concurrently have their private shaper enabled.
> + */
> + uint32_t shaper_private_n_max;
> +
> + /**< Maximum number of private shapers that support dual rate shaping.
> + * Indicates the maximum number of nodes that can concurrently have
> + * their private shaper enabled with dual rate support. Only valid when
> + * private shapers are supported. The value of zero indicates that dual
> + * rate shaping is not available for private shapers. The maximum value
> + * is *shaper_private_n_max*.
> + */
> + int shaper_private_dual_rate_n_max;
> +
> + /**< Minimum committed/peak rate (bytes per second) for any private
> + * shaper. Valid only when private shapers are supported.
> + */
> + uint64_t shaper_private_rate_min;
> +
> + /**< Maximum committed/peak rate (bytes per second) for any private
> + * shaper. Valid only when private shapers are supported.
> + */
> + uint64_t shaper_private_rate_max;
> +
> + /**< Maximum number of shared shapers. The value of zero indicates that
> + * shared shapers are not supported.
> + */
> + uint32_t shaper_shared_n_max;
> +
> + /**< Maximum number of nodes that can share the same shared shaper.
> + * Only valid when shared shapers are supported.
> + */
> + uint32_t shaper_shared_n_nodes_per_shaper_max;
> +
> + /**< Maximum number of shared shapers a node can be part of. This
> + * parameter indicates that there is at least one node that can be
> + * configured with this many shared shapers, which might not be true for
> + * all the nodes. Only valid when shared shapers are supported, in which
> + * case it ranges from 1 to *shaper_shared_n_max*.
> + */
> + uint32_t shaper_shared_n_shapers_per_node_max;
> +
> + /**< Maximum number of shared shapers that can be configured with dual
> + * rate shaping. The value of zero indicates that dual rate shaping
> + * support is not available for shared shapers.
> + */
> + uint32_t shaper_shared_dual_rate_n_max;
> +
> + /**< Minimum committed/peak rate (bytes per second) for any shared
> + * shaper. Only valid when shared shapers are supported.
> + */
> + uint64_t shaper_shared_rate_min;
> +
> + /**< Maximum committed/peak rate (bytes per second) for any shared
> + * shaper. Only valid when shared shapers are supported.
> + */
> + uint64_t shaper_shared_rate_max;
> +
> + /**< Minimum value allowed for packet length adjustment for any private
> + * or shared shaper.
> + */
> + int shaper_pkt_length_adjust_min;
> +
> + /**< Maximum value allowed for packet length adjustment for any private
> + * or shared shaper.
> + */
> + int shaper_pkt_length_adjust_max;
> +
> + /**< Maximum number of children nodes. This parameter indicates that
> + * there is at least one non-leaf node that can be configured with this
> + * many children nodes, which might not be true for all the non-leaf
> + * nodes.
> + */
> + uint32_t sched_n_children_max;
> +
> + /**< Maximum number of supported priority levels. This parameter
> + * indicates that there is at least one non-leaf node that can be
> + * configured with this many priority levels for managing its children
> + * nodes, which might not be true for all the non-leaf nodes. The value
> + * of zero is invalid. The value of 1 indicates that only priority 0 is
> + * supported, which essentially means that Strict Priority (SP)
> + * algorithm is not supported.
> + */
> + uint32_t sched_sp_n_priorities_max;
> +
> + /**< Maximum number of sibling nodes that can have the same priority at
> + * any given time, i.e. maximum size of the WFQ sibling node group. This
> + * parameter indicates there is at least one non-leaf node that meets
> + * this condition, which might not be true for all the non-leaf nodes.
> + * The value of zero is invalid. The value of 1 indicates that WFQ
> + * algorithm is not supported. The maximum value is
> + * *sched_n_children_max*.
> + */
> + uint32_t sched_wfq_n_children_per_group_max;
> +
> + /**< Maximum number of priority levels that can have more than one child
> + * node at any given time, i.e. maximum number of WFQ sibling node
> + * groups that have two or more members. This parameter indicates there
> + * is at least one non-leaf node that meets this condition, which might
> + * not be true for all the non-leaf nodes. The value of zero states that
> + * WFQ algorithm is not supported. The value of 1 indicates that
> + * (*sched_sp_n_priorities_max* - 1) priority levels have at most one
> + * child node, so there can be only one priority level with two or
> + * more sibling nodes making up a WFQ group. The maximum value is:
> + * min(floor(*sched_n_children_max* / 2), *sched_sp_n_priorities_max*).
> + */
> + uint32_t sched_wfq_n_groups_max;
> +
> + /**< Maximum WFQ weight. The value of 1 indicates that all sibling nodes
> + * with same priority have the same WFQ weight, so WFQ is reduced to FQ.
> + */
> + uint32_t sched_wfq_weight_max;
> +
> + /**< Head drop algorithm support. When non-zero, this parameter
> + * indicates that there is at least one leaf node that supports the head
> + * drop algorithm, which might not be true for all the leaf nodes.
> + */
> + int cman_head_drop_supported;
> +
> + /**< Maximum number of WRED contexts, either private or shared. In case
> + * the implementation does not share any resources between private and
> + * shared WRED contexts, it is typically equal to the sum of
> + * *cman_wred_context_private_n_max* and
> + * *cman_wred_context_shared_n_max*.
> + */
> + uint32_t cman_wred_context_n_max;
> +
> + /**< Maximum number of private WRED contexts. Indicates the maximum
> + * number of leaf nodes that can concurrently have their private WRED
> + * context enabled.
> + */
> + uint32_t cman_wred_context_private_n_max;
> +
> + /**< Maximum number of shared WRED contexts. The value of zero
> + * indicates that shared WRED contexts are not supported.
> + */
> + uint32_t cman_wred_context_shared_n_max;
> +
> + /**< Maximum number of leaf nodes that can share the same WRED context.
> + * Only valid when shared WRED contexts are supported.
> + */
> + uint32_t cman_wred_context_shared_n_nodes_per_context_max;
> +
> + /**< Maximum number of shared WRED contexts a leaf node can be part of.
> + * This parameter indicates that there is at least one leaf node that
> + * can be configured with this many shared WRED contexts, which might
> + * not be true for all the leaf nodes. Only valid when shared WRED
> + * contexts are supported, in which case it ranges from 1 to
> + * *cman_wred_context_shared_n_max*.
> + */
> + uint32_t cman_wred_context_shared_n_contexts_per_node_max;
> +
> + /**< Support for VLAN DEI packet marking (per color). */
> + int mark_vlan_dei_supported[RTE_TM_COLORS];
> +
> + /**< Support for IPv4/IPv6 ECN marking of TCP packets (per color). */
> + int mark_ip_ecn_tcp_supported[RTE_TM_COLORS];
> +
> + /**< Support for IPv4/IPv6 ECN marking of SCTP packets (per color). */
> + int mark_ip_ecn_sctp_supported[RTE_TM_COLORS];
> +
> + /**< Support for IPv4/IPv6 DSCP packet marking (per color). */
> + int mark_ip_dscp_supported[RTE_TM_COLORS];
> +
> + /**< Set of supported dynamic update operations.
> + * @see enum rte_tm_dynamic_update_type
> + */
> + uint64_t dynamic_update_mask;
> +
> + /**< Set of supported statistics counter types.
> + * @see enum rte_tm_stats_type
> + */
> + uint64_t stats_mask;
> +};
> +
> +/**
> + * Traffic manager level capabilities
> + */
> +struct rte_tm_level_capabilities {
> + /**< Maximum number of nodes for the current hierarchy level. */
> + uint32_t n_nodes_max;
> +
> + /**< Maximum number of non-leaf nodes for the current hierarchy level.
> + * The value of 0 indicates that current level only supports leaf
> + * nodes. The maximum value is *n_nodes_max*.
> + */
> + uint32_t n_nodes_nonleaf_max;
> +
> + /**< Maximum number of leaf nodes for the current hierarchy level. The
> + * value of 0 indicates that current level only supports non-leaf
> + * nodes. The maximum value is *n_nodes_max*.
> + */
> + uint32_t n_nodes_leaf_max;
> +
> + /**< When non-zero, this flag indicates that all the non-leaf nodes on
> + * this level have identical capability set. Valid only when
> + * *n_nodes_nonleaf_max* is non-zero.
> + */
> + int non_leaf_nodes_identical;
> +
> + /**< When non-zero, this flag indicates that all the leaf nodes on this
> + * level have identical capability set. Valid only when
> + * *n_nodes_leaf_max* is non-zero.
> + */
> + int leaf_nodes_identical;
> +
> + union {
> + /**< Items valid only for the non-leaf nodes on this level. */
> + struct {
> + /**< Private shaper support. When non-zero, it indicates
> + * there is at least one non-leaf node on this level
> + * with private shaper support, which may not be the
> + * case for all the non-leaf nodes on this level.
> + */
> + int shaper_private_supported;
> +
> + /**< Dual rate support for private shaper. Valid only
> + * when private shaper is supported for the non-leaf
> + * nodes on the current level. When non-zero, it
> + * indicates there is at least one non-leaf node on this
> + * level with dual rate private shaper support, which
> + * may not be the case for all the non-leaf nodes on
> + * this level.
> + */
> + int shaper_private_dual_rate_supported;
> +
> + /**< Minimum committed/peak rate (bytes per second) for
> + * private shapers of the non-leaf nodes of this level.
> + * Valid only when private shaper is supported on this
> + * level.
> + */
> + uint64_t shaper_private_rate_min;
> +
> + /**< Maximum committed/peak rate (bytes per second) for
> + * private shapers of the non-leaf nodes on this level.
> + * Valid only when private shaper is supported on this
> + * level.
> + */
> + uint64_t shaper_private_rate_max;
> +
> + /**< Maximum number of shared shapers that any non-leaf
> + * node on this level can be part of. The value of zero
> + * indicates that shared shapers are not supported by
> + * the non-leaf nodes on this level. When non-zero, it
> + * indicates there is at least one non-leaf node on this
> + * level that meets this condition, which may not be the
> + * case for all the non-leaf nodes on this level.
> + */
> + uint32_t shaper_shared_n_max;
> +
> + /**< Maximum number of children nodes. This parameter
> + * indicates that there is at least one non-leaf node on
> + * this level that can be configured with this many
> + * children nodes, which might not be true for all the
> + * non-leaf nodes on this level.
> + */
> + uint32_t sched_n_children_max;
> +
> + /**< Maximum number of supported priority levels. This
> + * parameter indicates that there is at least one
> + * non-leaf node on this level that can be configured
> + * with this many priority levels for managing its
> + * children nodes, which might not be true for all the
> + * non-leaf nodes on this level. The value of zero is
> + * invalid. The value of 1 indicates that only priority
> + * 0 is supported, which essentially means that Strict
> + * Priority (SP) algorithm is not supported on this
> + * level.
> + */
> + uint32_t sched_sp_n_priorities_max;
> +
> + /**< Maximum number of sibling nodes that can have the
> + * same priority at any given time, i.e. maximum size of
> + * the WFQ sibling node group. This parameter indicates
> + * there is at least one non-leaf node on this level
> + * that meets this condition, which may not be true for
> + * all the non-leaf nodes on this level. The value of
> + * zero is invalid. The value of 1 indicates that WFQ
> + * algorithm is not supported on this level. The maximum
> + * value is *sched_n_children_max*.
> + */
> + uint32_t sched_wfq_n_children_per_group_max;
> +
> + /**< Maximum number of priority levels that can have
> + * more than one child node at any given time, i.e.
> + * maximum number of WFQ sibling node groups that
> + * have two or more members. This parameter indicates
> + * there is at least one non-leaf node on this level
> + * that meets this condition, which might not be true
> + * for all the non-leaf nodes. The value of zero states
> + * that WFQ algorithm is not supported on this level.
> + * The value of 1 indicates that
> + * (*sched_sp_n_priorities_max* - 1) priority levels on
> + * this level have at most one child node, so there can
> + * be only one priority level with two or more sibling
> + * nodes making up a WFQ group on this level. The
> + * maximum value is:
> + * min(floor(*sched_n_children_max* / 2),
> + * *sched_sp_n_priorities_max*).
> + */
> + uint32_t sched_wfq_n_groups_max;
> +
> + /**< Maximum WFQ weight. The value of 1 indicates that
> + * all sibling nodes on this level with same priority
> + * have the same WFQ weight, so on this level WFQ is
> + * reduced to FQ.
> + */
> + uint32_t sched_wfq_weight_max;
> +
> + /**< Mask of statistics counter types supported by the
> + * non-leaf nodes on this level. Every supported
> + * statistics counter type is supported by at least one
> + * non-leaf node on this level, which may not be true
> + * for all the non-leaf nodes on this level.
> + * @see enum rte_tm_stats_type
> + */
> + uint64_t stats_mask;
> + } nonleaf;
> +
> + /**< Items valid only for the leaf nodes on this level. */
> + struct {
> + /**< Private shaper support. When non-zero, it indicates
> + * there is at least one leaf node on this level with
> + * private shaper support, which may not be the case for
> + * all the leaf nodes on this level.
> + */
> + int shaper_private_supported;
> +
> + /**< Dual rate support for private shaper. Valid only
> + * when private shaper is supported for the leaf nodes
> + * on this level. When non-zero, it indicates there is
> + * at least one leaf node on this level with dual rate
> + * private shaper support, which may not be the case for
> + * all the leaf nodes on this level.
> + */
> + int shaper_private_dual_rate_supported;
> +
> + /**< Minimum committed/peak rate (bytes per second) for
> + * private shapers of the leaf nodes of this level.
> + * Valid only when private shaper is supported for the
> + * leaf nodes on this level.
> + */
> + uint64_t shaper_private_rate_min;
> +
> + /**< Maximum committed/peak rate (bytes per second) for
> + * private shapers of the leaf nodes on this level.
> + * Valid only when private shaper is supported for the
> + * leaf nodes on this level.
> + */
> + uint64_t shaper_private_rate_max;
> +
> + /**< Maximum number of shared shapers that any leaf node
> + * on this level can be part of. The value of zero
> + * indicates that shared shapers are not supported by
> + * the leaf nodes on this level. When non-zero, it
> + * indicates there is at least one leaf node on this
> + * level that meets this condition, which may not be the
> + * case for all the leaf nodes on this level.
> + */
> + uint32_t shaper_shared_n_max;
> +
> + /**< Head drop algorithm support. When non-zero, this
> + * parameter indicates that there is at least one leaf
> + * node on this level that supports the head drop
> + * algorithm, which might not be true for all the leaf
> + * nodes on this level.
> + */
> + int cman_head_drop_supported;
> +
> + /**< Private WRED context support. When non-zero, it
> + * indicates there is at least one leaf node on this level
> + * with private WRED context support, which may not be
> + * true for all the leaf nodes on this level.
> + */
> + int cman_wred_context_private_supported;
> +
> + /**< Maximum number of shared WRED contexts that any
> + * leaf node on this level can be part of. The value of
> + * zero indicates that shared WRED contexts are not
> + * supported by the leaf nodes on this level. When
> + * non-zero, it indicates there is at least one leaf
> + * node on this level that meets this condition, which
> + * may not be the case for all the leaf nodes on this
> + * level.
> + */
> + uint32_t cman_wred_context_shared_n_max;
> +
> + /**< Mask of statistics counter types supported by the
> + * leaf nodes on this level. Every supported statistics
> + * counter type is supported by at least one leaf node
> + * on this level, which may not be true for all the leaf
> + * nodes on this level.
> + * @see enum rte_tm_stats_type
> + */
> + uint64_t stats_mask;
> + } leaf;
> + };
> +};
> +
> +/**
> + * Traffic manager node capabilities
> + */
> +struct rte_tm_node_capabilities {
> + /**< Private shaper support for the current node. */
> + int shaper_private_supported;
> +
> + /**< Dual rate shaping support for private shaper of current node.
> + * Valid only when private shaper is supported by the current node.
> + */
> + int shaper_private_dual_rate_supported;
> +
> + /**< Minimum committed/peak rate (bytes per second) for private
> + * shaper of current node. Valid only when private shaper is supported
> + * by the current node.
> + */
> + uint64_t shaper_private_rate_min;
> +
> + /**< Maximum committed/peak rate (bytes per second) for private
> + * shaper of current node. Valid only when private shaper is supported
> + * by the current node.
> + */
> + uint64_t shaper_private_rate_max;
> +
> + /**< Maximum number of shared shapers the current node can be part of.
> + * The value of zero indicates that shared shapers are not supported by
> + * the current node.
> + */
> + uint32_t shaper_shared_n_max;
> +
> + union {
> + /**< Items valid only for non-leaf nodes. */
> + struct {
> + /**< Maximum number of children nodes. */
> + uint32_t sched_n_children_max;
> +
> + /**< Maximum number of supported priority levels. The
> + * value of zero is invalid. The value of 1 indicates
> + * that only priority 0 is supported, which essentially
> + * means that Strict Priority (SP) algorithm is not
> + * supported.
> + */
> + uint32_t sched_sp_n_priorities_max;
> +
> + /**< Maximum number of sibling nodes that can have the
> + * same priority at any given time, i.e. maximum size
> + * of the WFQ sibling node group. The value of zero
> + * is invalid. The value of 1 indicates that WFQ
> + * algorithm is not supported. The maximum value is
> + * *sched_n_children_max*.
> + */
> + uint32_t sched_wfq_n_children_per_group_max;
> +
> + /**< Maximum number of priority levels that can have
> + * more than one child node at any given time, i.e.
> + * maximum number of WFQ sibling node groups that have
> + * two or more members. The value of zero states that
> + * WFQ algorithm is not supported. The value of 1
> + * indicates that (*sched_sp_n_priorities_max* - 1)
> + * priority levels have at most one child node, so there
> + * can be only one priority level with two or more
> + * sibling nodes making up a WFQ group. The maximum
> + * value is: min(floor(*sched_n_children_max* / 2),
> + * *sched_sp_n_priorities_max*).
> + */
> + uint32_t sched_wfq_n_groups_max;
> +
> + /**< Maximum WFQ weight. The value of 1 indicates that
> + * all sibling nodes with same priority have the same
> + * WFQ weight, so WFQ is reduced to FQ.
> + */
> + uint32_t sched_wfq_weight_max;
> + } nonleaf;
> +
> + /**< Items valid only for leaf nodes. */
> + struct {
> + /**< Head drop algorithm support for current node. */
> + int cman_head_drop_supported;
> +
> + /**< Private WRED context support for current node. */
> + int cman_wred_context_private_supported;
> +
> + /**< Maximum number of shared WRED contexts the current
> + * node can be part of. The value of zero indicates that
> + * shared WRED contexts are not supported by the current
> + * node.
> + */
> + uint32_t cman_wred_context_shared_n_max;
> + } leaf;
> + };
> +
> + /**< Mask of statistics counter types supported by the current node.
> + * @see enum rte_tm_stats_type
> + */
> + uint64_t stats_mask;
> +};
> +
> +/**
> + * Congestion management (CMAN) mode
> + *
> + * This is used for controlling the admission of packets into a packet queue or
> + * group of packet queues on congestion. When a new packet is written to a
> + * queue that is already full, the *tail drop* algorithm drops the new
> + * packet while leaving the queue unmodified, as opposed to the *head drop*
> + * algorithm, which drops the packet at the head of the queue (the oldest
> + * packet waiting in the queue) and admits the new packet at the tail of
> + * the queue.
> + *
> + * The *Random Early Detection (RED)* algorithm works by proactively dropping
> + * more and more input packets as the queue occupancy builds up. When the queue
> + * is full or almost full, RED effectively works as *tail drop*. The *Weighted
> + * RED* algorithm uses a separate set of RED thresholds for each packet color.
> + */
> +enum rte_tm_cman_mode {
> + RTE_TM_CMAN_TAIL_DROP = 0, /**< Tail drop */
> + RTE_TM_CMAN_HEAD_DROP, /**< Head drop */
> + RTE_TM_CMAN_WRED, /**< Weighted Random Early Detection (WRED) */
> +};
> +
> +/**
> + * Random Early Detection (RED) profile
> + */
> +struct rte_tm_red_params {
> + /**< Minimum queue threshold */
> + uint16_t min_th;
> +
> + /**< Maximum queue threshold */
> + uint16_t max_th;
> +
> + /**< Inverse of packet marking probability maximum value (maxp), i.e.
> + * maxp_inv = 1 / maxp
> + */
> + uint16_t maxp_inv;
> +
> + /**< Negated log2 of queue weight (wq), i.e. wq = 1 / (2 ^ wq_log2) */
> + uint16_t wq_log2;
> +};
> +
> +/**
> + * Weighted RED (WRED) profile
> + *
> + * Multiple WRED contexts can share the same WRED profile. Each leaf node with
> + * WRED enabled as its congestion management mode has zero or one private WRED
> + * context (only one leaf node using it) and/or zero, one or several shared
> + * WRED contexts (multiple leaf nodes use the same WRED context). A private
> + * WRED context is used to perform congestion management for a single leaf
> + * node, while a shared WRED context is used to perform congestion management
> + * for a group of leaf nodes.
> + */
> +struct rte_tm_wred_params {
> + /**< One set of RED parameters per packet color */
> + struct rte_tm_red_params red_params[RTE_TM_COLORS];
> +};
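
As a sketch, a WRED profile that drops non-green traffic earlier could be
filled in as below; all threshold values are arbitrary placeholders and the
helper name is made up for the example:

static int
add_example_wred_profile(uint8_t port_id, uint32_t profile_id,
	struct rte_tm_error *error)
{
	/* Progressively tighter thresholds: yellow and red packets are
	 * dropped earlier than green as queue occupancy builds up.
	 */
	struct rte_tm_wred_params wp = {
		.red_params[RTE_TM_GREEN]  = { .min_th = 48, .max_th = 64,
					       .maxp_inv = 10, .wq_log2 = 9 },
		.red_params[RTE_TM_YELLOW] = { .min_th = 32, .max_th = 48,
					       .maxp_inv = 10, .wq_log2 = 9 },
		.red_params[RTE_TM_RED]    = { .min_th = 16, .max_th = 32,
					       .maxp_inv = 10, .wq_log2 = 9 },
	};

	return rte_tm_wred_profile_add(port_id, profile_id, &wp, error);
}
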
> +
> +/**
> + * Token bucket
> + */
> +struct rte_tm_token_bucket {
> + /**< Token bucket rate (bytes per second) */
> + uint64_t rate;
> +
> + /**< Token bucket size (bytes), a.k.a. max burst size */
> + uint64_t size;
> +};
> +
> +/**
> + * Shaper (rate limiter) profile
> + *
> + * Multiple shaper instances can share the same shaper profile. Each node has
> + * zero or one private shaper (only one node using it) and/or zero, one or
> + * several shared shapers (multiple nodes use the same shaper instance).
> + * A private shaper is used to perform traffic shaping for a single node, while
> + * a shared shaper is used to perform traffic shaping for a group of nodes.
> + *
> + * Single rate shapers use a single token bucket. A single rate shaper can be
> + * configured by setting the rate of the committed bucket to zero, which
> + * effectively disables this bucket. The peak bucket is used to limit the rate
> + * and the burst size for the current shaper.
> + *
> + * Dual rate shapers use both the committed and the peak token buckets. The
> + * rate of the peak bucket has to be non-zero and greater than or equal to
> + * the rate of the committed bucket.
> + */
> +struct rte_tm_shaper_params {
> + /**< Committed token bucket */
> + struct rte_tm_token_bucket committed;
> +
> + /**< Peak token bucket */
> + struct rte_tm_token_bucket peak;
> +
> + /**< Signed value to be added to the length of each packet for the
> + * purpose of shaping. Can be used to correct the packet length with
> + * the framing overhead bytes that are also consumed on the wire (e.g.
> + * RTE_TM_ETH_FRAMING_OVERHEAD_FCS).
> + */
> + int32_t pkt_length_adjust;
> +};
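
Per the rules above, a single rate shaper simply disables the committed
bucket. The rate and burst values below (10 Mbit/s, 16 KB) are placeholders
for illustration:

/* Single rate: committed bucket disabled (rate 0); the peak bucket caps
 * the node at 1250000 bytes/s (10 Mbit/s) with a 16 KB max burst. Packet
 * length is adjusted to also count framing overhead including FCS.
 */
static const struct rte_tm_shaper_params example_single_rate_shaper = {
	.committed = { .rate = 0, .size = 0 },
	.peak = { .rate = 1250000, .size = 16384 },
	.pkt_length_adjust = RTE_TM_ETH_FRAMING_OVERHEAD_FCS,
};
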
> +
> +/**
> + * Node parameters
> + *
> + * Each non-leaf node has multiple inputs (its children nodes) and single output
> + * (which is input to its parent node). It arbitrates its inputs using Strict
> + * Priority (SP) and Weighted Fair Queuing (WFQ) algorithms to schedule input
> + * packets to its output while observing its shaping (rate limiting)
> + * constraints.
> + *
> + * Algorithms such as Weighted Round Robin (WRR), Byte-level WRR, Deficit WRR
> + * (DWRR), etc. are considered approximations of the WFQ ideal and are
> + * assimilated to WFQ, although an associated implementation-dependent trade-off
> + * on accuracy, performance and resource usage might exist.
> + *
> + * Children nodes with different priorities are scheduled using the SP algorithm
> + * based on their priority, with zero (0) as the highest priority. Children with
> + * the same priority are scheduled using the WFQ algorithm according to their
> + * weights. The WFQ weight of a given child node is relative to the sum of the
> + * weights of all its sibling nodes that have the same priority, with one (1) as
> + * the lowest weight. For each SP priority, the WFQ weight mode can be set as
> + * either byte-based or packet-based.
> + *
> + * Each leaf node sits on top of a TX queue of the current Ethernet port. Hence,
> + * the leaf nodes are predefined, with their node IDs set to 0 .. (N-1), where N
> + * is the number of TX queues configured for the current Ethernet port. The
> + * non-leaf nodes have their IDs generated by the application.
> + */
> +struct rte_tm_node_params {
> + /**< Shaper profile for the private shaper. The absence of the private
> + * shaper for the current node is indicated by setting this parameter
> + * to RTE_TM_SHAPER_PROFILE_ID_NONE.
> + */
> + uint32_t shaper_profile_id;
> +
> + /**< User allocated array of valid shared shaper IDs. */
> + uint32_t *shared_shaper_id;
> +
> + /**< Number of shared shaper IDs in the *shared_shaper_id* array. */
> + uint32_t n_shared_shapers;
> +
> + union {
> + /**< Parameters only valid for non-leaf nodes. */
> + struct {
> + /**< WFQ weight mode for each SP priority. When NULL, it
> + * indicates that WFQ is to be used for all priorities.
> + * When non-NULL, it points to a pre-allocated array of
> + * *n_sp_priorities* values, with non-zero value for
> + * byte-mode and zero for packet-mode.
> + */
> + int *wfq_weight_mode;
> +
> + /**< Number of SP priorities. */
> + uint32_t n_sp_priorities;
> + } nonleaf;
> +
> + /**< Parameters only valid for leaf nodes. */
> + struct {
> + /**< Congestion management mode */
> + enum rte_tm_cman_mode cman;
> +
> + /**< WRED parameters (only valid when *cman* is set to
> + * WRED).
> + */
> + struct {
> + /**< WRED profile for private WRED context. The
> + * absence of a private WRED context for the
> + * current leaf node is indicated by value
> + * RTE_TM_WRED_PROFILE_ID_NONE.
> + */
> + uint32_t wred_profile_id;
> +
> + /**< User allocated array of shared WRED context
> + * IDs. When set to NULL, it indicates that the
> + * current leaf node should not currently be
> + * part of any shared WRED contexts.
> + */
> + uint32_t *shared_wred_context_id;
> +
> + /**< Number of elements in the
> + * *shared_wred_context_id* array. Only valid
> + * when *shared_wred_context_id* is non-NULL,
> + * in which case it should be non-zero.
> + */
> + uint32_t n_shared_wred_contexts;
> + } wred;
> + } leaf;
> + };
> +
> + /**< Mask of statistics counter types to be enabled for this node. This
> + * needs to be a subset of the statistics counter types available for
> + * the current node. Any statistics counter type not included in this
> + * set is to be disabled for the current node.
> + * @see enum rte_tm_stats_type
> + */
> + uint64_t stats_mask;
> +};
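
Tying the pieces together, adding one non-leaf node under the root might
look like the sketch below; the node ID, profile ID, priority and weight
values are arbitrary and the helper name is made up for the example:

static int
add_example_nonleaf_node(uint8_t port_id, uint32_t root_node_id,
	uint32_t shaper_profile_id, struct rte_tm_error *error)
{
	struct rte_tm_node_params np = {
		.shaper_profile_id = shaper_profile_id,
		.shared_shaper_id = NULL,
		.n_shared_shapers = 0,
		.nonleaf = {
			/* NULL => WFQ used for all priorities, per the
			 * Doxygen description above.
			 */
			.wfq_weight_mode = NULL,
			.n_sp_priorities = 1,
		},
		.stats_mask = RTE_TM_STATS_N_PKTS | RTE_TM_STATS_N_BYTES,
	};

	/* Node ID 100 follows an application-chosen convention;
	 * priority 0 (highest), WFQ weight 1 (lowest).
	 */
	return rte_tm_node_add(port_id, 100, root_node_id, 0, 1,
		&np, error);
}
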
> +
> +/**
> + * Verbose error types.
> + *
> + * Most of them provide the type of the object referenced by struct
> + * rte_tm_error::cause.
> + */
> +enum rte_tm_error_type {
> + RTE_TM_ERROR_TYPE_NONE, /**< No error. */
> + RTE_TM_ERROR_TYPE_UNSPECIFIED, /**< Cause unspecified. */
> + RTE_TM_ERROR_TYPE_CAPABILITIES,
> + RTE_TM_ERROR_TYPE_LEVEL_ID,
> + RTE_TM_ERROR_TYPE_WRED_PROFILE,
> + RTE_TM_ERROR_TYPE_WRED_PROFILE_GREEN,
> + RTE_TM_ERROR_TYPE_WRED_PROFILE_YELLOW,
> + RTE_TM_ERROR_TYPE_WRED_PROFILE_RED,
> + RTE_TM_ERROR_TYPE_WRED_PROFILE_ID,
> + RTE_TM_ERROR_TYPE_SHARED_WRED_CONTEXT_ID,
> + RTE_TM_ERROR_TYPE_SHAPER_PROFILE,
> + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_RATE,
> + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE,
> + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_RATE,
> + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE,
> + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN,
> + RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID,
> + RTE_TM_ERROR_TYPE_SHARED_SHAPER_ID,
> + RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID,
> + RTE_TM_ERROR_TYPE_NODE_PRIORITY,
> + RTE_TM_ERROR_TYPE_NODE_WEIGHT,
> + RTE_TM_ERROR_TYPE_NODE_PARAMS,
> + RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID,
> + RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID,
> + RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS,
> + RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE,
> + RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES,
> + RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN,
> + RTE_TM_ERROR_TYPE_NODE_PARAMS_WRED_PROFILE_ID,
> + RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_WRED_CONTEXT_ID,
> + RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_WRED_CONTEXTS,
> + RTE_TM_ERROR_TYPE_NODE_PARAMS_STATS,
> + RTE_TM_ERROR_TYPE_NODE_ID,
> +};
> +
> +/**
> + * Verbose error structure definition.
> + *
> + * This object is normally allocated by applications and set by PMDs. The
> + * message points to a constant string which does not need to be freed by the
> + * application; however, its pointer is valid only as long as the associated
> + * DPDK port remains configured. Closing the underlying device or unloading
> + * the PMD invalidates it.
> + *
> + * Both cause and message may be NULL regardless of the error type.
> + */
> +struct rte_tm_error {
> + enum rte_tm_error_type type; /**< Cause field and error type. */
> + const void *cause; /**< Object responsible for the error. */
> + const char *message; /**< Human-readable error message. */
> +};
> +
> +/**
> + * Traffic manager get number of leaf nodes
> + *
> + * Each leaf node sits on top of a TX queue of the current Ethernet port.
> + * Therefore, the set of leaf nodes is predefined, their number is always equal
> + * to N (where N is the number of TX queues configured for the current port)
> + * and their IDs are 0 .. (N-1).
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[out] n_leaf_nodes
> + * Number of leaf nodes for the current port.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + */
> +int
> +rte_tm_get_number_of_leaf_nodes(uint8_t port_id,
> + uint32_t *n_leaf_nodes,
> + struct rte_tm_error *error);
> +
> +/**
> + * Traffic manager node type (i.e. leaf or non-leaf) get
> + *
> + * The leaf nodes have predefined IDs in the range of 0 .. (N-1), where N is
> + * the number of TX queues of the current Ethernet port. The non-leaf nodes
> + * have their IDs generated by the application outside of the above range,
> + * which is reserved for leaf nodes.
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] node_id
> + * Node ID value. Needs to be valid.
> + * @param[out] is_leaf
> + * Set to non-zero value when node is leaf and to zero otherwise (non-leaf).
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + */
> +int
> +rte_tm_node_type_get(uint8_t port_id,
> + uint32_t node_id,
> + int *is_leaf,
> + struct rte_tm_error *error);
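A small usage sketch combining the two queries above (*port_id* is assumed to
be a valid, configured port; error handling kept deliberately minimal):

	uint32_t n_leaf;
	int is_leaf;
	struct rte_tm_error err;

	if (rte_tm_get_number_of_leaf_nodes(port_id, &n_leaf, &err) == 0)
		printf("%u leaf nodes (IDs 0 .. %u)\n", n_leaf, n_leaf - 1);

	/* Leaf node IDs are predefined, so ID 0 is a leaf on any port with
	 * at least one configured TX queue.
	 */
	if (rte_tm_node_type_get(port_id, 0, &is_leaf, &err) == 0)
		printf("node 0 is a %s node\n", is_leaf ? "leaf" : "non-leaf");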
> +
> +/**
> + * Traffic manager node level get
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] node_id
> + * Node ID value. Needs to be valid.
> + * @param[out] level_id
> + * Node level ID. Needs to be non-NULL.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + */
> +int
> +rte_tm_node_level_get(uint8_t port_id,
> + uint32_t node_id,
> + uint32_t *level_id,
> + struct rte_tm_error *error);
> +
> +/**
> + * Traffic manager capabilities get
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[out] cap
> + * Traffic manager capabilities. Needs to be pre-allocated and valid.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + */
> +int
> +rte_tm_capabilities_get(uint8_t port_id,
> + struct rte_tm_capabilities *cap,
> + struct rte_tm_error *error);
> +
> +/**
> + * Traffic manager level capabilities get
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] level_id
> + * The hierarchy level identifier. The value of 0 identifies the level of the
> + * root node.
> + * @param[out] cap
> + * Traffic manager level capabilities. Needs to be pre-allocated and valid.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + */
> +int
> +rte_tm_level_capabilities_get(uint8_t port_id,
> + uint32_t level_id,
> + struct rte_tm_level_capabilities *cap,
> + struct rte_tm_error *error);
> +
> +/**
> + * Traffic manager node capabilities get
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] node_id
> + * Node ID. Needs to be valid.
> + * @param[out] cap
> + * Traffic manager node capabilities. Needs to be pre-allocated and valid.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + */
> +int
> +rte_tm_node_capabilities_get(uint8_t port_id,
> + uint32_t node_id,
> + struct rte_tm_node_capabilities *cap,
> + struct rte_tm_error *error);
> +
> +/**
> + * Traffic manager WRED profile add
> + *
> + * Create a new WRED profile with ID set to *wred_profile_id*. The new profile
> + * is used to create one or several WRED contexts.
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] wred_profile_id
> + * WRED profile ID for the new profile. Needs to be unused.
> + * @param[in] profile
> + * WRED profile parameters. Needs to be pre-allocated and valid.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + */
> +int
> +rte_tm_wred_profile_add(uint8_t port_id,
> + uint32_t wred_profile_id,
> + struct rte_tm_wred_params *profile,
> + struct rte_tm_error *error);
> +
> +/**
> + * Traffic manager WRED profile delete
> + *
> + * Delete an existing WRED profile. This operation fails when there is
> + * currently at least one user (i.e. WRED context) of this WRED profile.
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] wred_profile_id
> + * WRED profile ID. Needs to be valid.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + */
> +int
> +rte_tm_wred_profile_delete(uint8_t port_id,
> + uint32_t wred_profile_id,
> + struct rte_tm_error *error);
> +
> +/**
> + * Traffic manager shared WRED context add or update
> + *
> + * When *shared_wred_context_id* is not a valid shared WRED context ID, a new
> + * WRED context with this ID is created using the WRED profile identified by
> + * *wred_profile_id*.
> + *
> + * When *shared_wred_context_id* is a valid shared WRED context ID, this WRED
> + * context is no longer using the profile previously assigned to it and is
> + * updated to use the profile identified by *wred_profile_id*.
> + *
> + * A valid shared WRED context can be assigned to several hierarchy leaf nodes
> + * configured to use WRED as the congestion management mode.
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] shared_wred_context_id
> + * Shared WRED context ID
> + * @param[in] wred_profile_id
> + * WRED profile ID. Needs to be valid.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + */
> +int
> +rte_tm_shared_wred_context_add_update(uint8_t port_id,
> + uint32_t shared_wred_context_id,
> + uint32_t wred_profile_id,
> + struct rte_tm_error *error);
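By way of example only (both IDs below are assumptions), pointing shared WRED
context 7 at an existing WRED profile 3 is a single call:

	struct rte_tm_error err;

	if (rte_tm_shared_wred_context_add_update(port_id, 7, 3, &err))
		printf("shared WRED context update failed: %s\n",
			err.message ? err.message : "unspecified");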
> +
> +/**
> + * Traffic manager shared WRED context delete
> + *
> + * Delete an existing shared WRED context. This operation fails when there is
> + * currently at least one user (i.e. hierarchy leaf node) of this shared WRED
> + * context.
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] shared_wred_context_id
> + * Shared WRED context ID. Needs to be valid.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + */
> +int
> +rte_tm_shared_wred_context_delete(uint8_t port_id,
> + uint32_t shared_wred_context_id,
> + struct rte_tm_error *error);
> +
> +/**
> + * Traffic manager shaper profile add
> + *
> + * Create a new shaper profile with ID set to *shaper_profile_id*. The new
> + * shaper profile is used to create one or several shapers.
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] shaper_profile_id
> + * Shaper profile ID for the new profile. Needs to be unused.
> + * @param[in] profile
> + * Shaper profile parameters. Needs to be pre-allocated and valid.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + */
> +int
> +rte_tm_shaper_profile_add(uint8_t port_id,
> + uint32_t shaper_profile_id,
> + struct rte_tm_shaper_params *profile,
> + struct rte_tm_error *error);
> +
> +/**
> + * Traffic manager shaper profile delete
> + *
> + * Delete an existing shaper profile. This operation fails when there is
> + * currently at least one user (i.e. shaper) of this shaper profile.
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] shaper_profile_id
> + * Shaper profile ID. Needs to be valid.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + */
> +int
> +rte_tm_shaper_profile_delete(uint8_t port_id,
> + uint32_t shaper_profile_id,
> + struct rte_tm_error *error);
> +
> +/**
> + * Traffic manager shared shaper add or update
> + *
> + * When *shared_shaper_id* is not a valid shared shaper ID, a new shared shaper
> + * with this ID is created using the shaper profile identified by
> + * *shaper_profile_id*.
> + *
> + * When *shared_shaper_id* is a valid shared shaper ID, this shared shaper is
> + * no longer using the shaper profile previously assigned to it and is updated
> + * to use the shaper profile identified by *shaper_profile_id*.
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] shared_shaper_id
> + * Shared shaper ID
> + * @param[in] shaper_profile_id
> + * Shaper profile ID. Needs to be valid.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + */
> +int
> +rte_tm_shared_shaper_add_update(uint8_t port_id,
> + uint32_t shared_shaper_id,
> + uint32_t shaper_profile_id,
> + struct rte_tm_error *error);
> +
> +/**
> + * Traffic manager shared shaper delete
> + *
> + * Delete an existing shared shaper. This operation fails when there is
> + * currently at least one user (i.e. hierarchy node) of this shared shaper.
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] shared_shaper_id
> + * Shared shaper ID. Needs to be valid.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + */
> +int
> +rte_tm_shared_shaper_delete(uint8_t port_id,
> + uint32_t shared_shaper_id,
> + struct rte_tm_error *error);
> +
> +/**
> + * Traffic manager node add
> + *
> + * Create a new node and connect it as a child of an existing node. The new node is
> + * further identified by *node_id*, which needs to be unused by any of the
> + * existing nodes. The parent node is identified by *parent_node_id*, which
> + * needs to be the valid ID of an existing non-leaf node. The parent node is
> + * going to use the provided SP *priority* and WFQ *weight* to schedule its new
> + * child node.
> + *
> + * This function has to be called for both leaf and non-leaf nodes. In the case
> + * of leaf nodes (i.e. *node_id* is within the range of 0 .. (N-1), with N as
> + * the number of configured TX queues of the current port), the leaf node is
> + * configured rather than created (as the set of leaf nodes is predefined) and
> + * it is also connected as a child of an existing node.
> + *
> + * The first node that is added becomes the root node and all the nodes that
> + * are subsequently added have to be added as descendants of the root node. The
> + * parent of the root node has to be specified as RTE_TM_NODE_ID_NULL and there
> + * can only be one node with this parent ID (i.e. the root node). Further
> + * restrictions for root node: needs to be non-leaf, its private shaper profile
> + * needs to be valid and single rate, cannot use any shared shapers.
> + *
> + * When called before rte_tm_hierarchy_commit() invocation, this function is
> + * typically used to define the initial start-up hierarchy for the port.
> + * Provided that dynamic hierarchy updates are supported by the current port (as
> + * advertised in the port capability set), this function can be also called
> + * after the rte_tm_hierarchy_commit() invocation.
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] node_id
> + * Node ID. Needs to be unused by any of the existing nodes.
> + * @param[in] parent_node_id
> + * Parent node ID. Needs to be valid.
> + * @param[in] priority
> + * Node priority. The highest node priority is zero. Used by the SP algorithm
> + * running on the parent of the current node for scheduling this child node.
> + * @param[in] weight
> + * Node weight. The node weight is relative to the weight sum of all siblings
> + * that have the same priority. The lowest weight is one. Used by the WFQ
> + * algorithm running on the parent of the current node for scheduling this
> + * child node.
> + * @param[in] params
> + * Node parameters. Needs to be pre-allocated and valid.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + *
> + * @see rte_tm_hierarchy_commit()
> + * @see RTE_TM_UPDATE_NODE_ADD_DELETE
> + */
> +int
> +rte_tm_node_add(uint8_t port_id,
> + uint32_t node_id,
> + uint32_t parent_node_id,
> + uint32_t priority,
> + uint32_t weight,
> + struct rte_tm_node_params *params,
> + struct rte_tm_error *error);
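Putting the rules above together, a hedged sketch of a minimal two-level
start-up hierarchy might read as follows. All IDs are assumptions: the root
node ID merely has to fall outside the leaf range 0 .. (N-1), and shaper
profile 5 is assumed to have been added earlier (single rate, per the root
node restriction) with rte_tm_shaper_profile_add():

	#include <string.h>
	#include <rte_tm.h>

	static int
	setup_minimal_hierarchy(uint8_t port_id)
	{
		struct rte_tm_node_params params;
		struct rte_tm_error err;
		uint32_t root_id = 1000; /* hypothetical ID outside 0 .. N-1 */

		memset(&params, 0, sizeof(params));

		/* Root node: parent is RTE_TM_NODE_ID_NULL; its private
		 * shaper profile (assumed ID 5) is valid and single rate.
		 */
		params.shaper_profile_id = 5;
		if (rte_tm_node_add(port_id, root_id, RTE_TM_NODE_ID_NULL,
				0, 1, &params, &err))
			return -1;

		/* Leaf node 0 (TX queue 0): child of the root, SP priority 0,
		 * WFQ weight 1, no private shaper.
		 */
		params.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
		if (rte_tm_node_add(port_id, 0, root_id, 0, 1, &params, &err))
			return -1;

		return 0;
	}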
> +
> +/**
> + * Traffic manager node add with node level check
> + *
> + * Simple rte_tm_node_add() wrapper that also checks the node level.
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] node_id
> + * Node ID. Needs to be unused by any of the existing nodes.
> + * @param[in] parent_node_id
> + * Parent node ID. Needs to be valid.
> + * @param[in] priority
> + * Node priority. The highest node priority is zero. Used by the SP algorithm
> + * running on the parent of the current node for scheduling this child node.
> + * @param[in] weight
> + * Node weight. The node weight is relative to the weight sum of all siblings
> + * that have the same priority. The lowest weight is one. Used by the WFQ
> + * algorithm running on the parent of the current node for scheduling this
> + * child node.
> + * @param[in] level_id
> + * Level ID that should be met by this node.
> + * @param[in] params
> + * Node parameters. Needs to be pre-allocated and valid.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + */
> +static inline int
> +rte_tm_node_add_check_level(uint8_t port_id,
> + uint32_t node_id,
> + uint32_t parent_node_id,
> + uint32_t priority,
> + uint32_t weight,
> + uint32_t level_id,
> + struct rte_tm_node_params *params,
> + struct rte_tm_error *error)
> +{
> + uint32_t lid;
> + int status;
> +
> + status = rte_tm_node_add(port_id, node_id,
> + parent_node_id, priority, weight, params, error);
> + if (status)
> + return status;
> +
> + status = rte_tm_node_level_get(port_id, node_id, &lid, error);
> + if (status)
> + return status;
> +
> + if (lid != level_id) {
> + if (error) {
> + error->type = RTE_TM_ERROR_TYPE_LEVEL_ID;
> + error->cause = NULL;
> + error->message = rte_strerror(EINVAL);
> + }
> + rte_errno = EINVAL;
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
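The wrapper therefore behaves exactly like rte_tm_node_add() plus one extra
query; a sketch of its use (node ID 42, parent ID 1000 and the expected level
are all hypothetical):

	struct rte_tm_node_params params = {
		.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE };
	struct rte_tm_error err;
	int status;

	/* Add node 42 under parent 1000 and fail unless the driver places
	 * the new node on hierarchy level 1.
	 */
	status = rte_tm_node_add_check_level(port_id, 42, 1000,
			0, 1, 1 /* level_id */, &params, &err);
	if (status)
		printf("node 42 rejected or not on level 1\n");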
> +
> +/**
> + * Traffic manager node delete
> + *
> + * Delete an existing node. This operation fails when this node currently has
> + * at least one user (i.e. child node).
> + *
> + * When called before rte_tm_hierarchy_commit() invocation, this function is
> + * typically used to define the initial start-up hierarchy for the port.
> + * Provided that dynamic hierarchy updates are supported by the current port (as
> + * advertised in the port capability set), this function can be also called
> + * after the rte_tm_hierarchy_commit() invocation.
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] node_id
> + * Node ID. Needs to be valid.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + *
> + * @see RTE_TM_UPDATE_NODE_ADD_DELETE
> + */
> +int
> +rte_tm_node_delete(uint8_t port_id,
> + uint32_t node_id,
> + struct rte_tm_error *error);
> +
> +/**
> + * Traffic manager node suspend
> + *
> + * Suspend an existing node. While the node is in suspended state, no packet is
> + * scheduled from this node and its descendants. The node exits the suspended
> + * state through the node resume operation.
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] node_id
> + * Node ID. Needs to be valid.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + *
> + * @see rte_tm_node_resume()
> + * @see RTE_TM_UPDATE_NODE_SUSPEND_RESUME
> + */
> +int
> +rte_tm_node_suspend(uint8_t port_id,
> + uint32_t node_id,
> + struct rte_tm_error *error);
> +
> +/**
> + * Traffic manager node resume
> + *
> + * Resume an existing node that is currently in suspended state. The node
> + * entered the suspended state as result of a previous node suspend operation.
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] node_id
> + * Node ID. Needs to be valid.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + *
> + * @see rte_tm_node_suspend()
> + * @see RTE_TM_UPDATE_NODE_SUSPEND_RESUME
> + */
> +int
> +rte_tm_node_resume(uint8_t port_id,
> + uint32_t node_id,
> + struct rte_tm_error *error);
> +
> +/**
> + * Traffic manager hierarchy commit
> + *
> + * This function is called during the port initialization phase (before the
> + * Ethernet port is started) to freeze the start-up hierarchy.
> + *
> + * This function typically performs the following steps:
> + * a) It validates the start-up hierarchy that was previously defined for the
> + * current port through successive rte_tm_node_add() invocations;
> + * b) Assuming successful validation, it performs all the necessary port
> + * specific configuration operations to install the specified hierarchy on
> + * the current port, with immediate effect once the port is started.
> + *
> + * This function fails when the currently configured hierarchy is not supported
> + * by the Ethernet port, in which case the user can abort or try out another
> + * hierarchy configuration (e.g. a hierarchy with fewer leaf nodes), which can
> + * be built from scratch (when *clear_on_fail* is enabled) or by modifying the
> + * existing hierarchy configuration (when *clear_on_fail* is disabled).
> + *
> + * Note that this function can still fail due to other causes (e.g. not enough
> + * memory available in the system, etc), even though the specified hierarchy is
> + * supported in principle by the current port.
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] clear_on_fail
> + * On function call failure, hierarchy is cleared when this parameter is
> + * non-zero and preserved when this parameter is equal to zero.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + *
> + * @see rte_tm_node_add()
> + * @see rte_tm_node_delete()
> + */
> +int
> +rte_tm_hierarchy_commit(uint8_t port_id,
> + int clear_on_fail,
> + struct rte_tm_error *error);
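For example, a minimal commit sequence that wipes the hierarchy on failure so
a smaller one can be retried from scratch could look like this (a sketch, not
the only valid pattern):

	struct rte_tm_error err;
	int ret;

	ret = rte_tm_hierarchy_commit(port_id, 1 /* clear_on_fail */, &err);
	if (ret)
		printf("hierarchy commit failed: %s\n",
			err.message ? err.message : "unspecified");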
> +
> +/**
> + * Traffic manager node parent update
> + *
> + * Restriction for the root node: its parent cannot be changed.
> + *
> + * This function can only be called after the rte_tm_hierarchy_commit()
> + * invocation. Its success depends on the port support for this operation, as
> + * advertised through the port capability set.
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] node_id
> + * Node ID. Needs to be valid.
> + * @param[in] parent_node_id
> + * Node ID for the new parent. Needs to be valid.
> + * @param[in] priority
> + * Node priority. The highest node priority is zero. Used by the SP algorithm
> + * running on the parent of the current node for scheduling this child node.
> + * @param[in] weight
> + * Node weight. The node weight is relative to the weight sum of all siblings
> + * that have the same priority. The lowest weight is one. Used by the WFQ
> + * algorithm running on the parent of the current node for scheduling this
> + * child node.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + *
> + * @see RTE_TM_UPDATE_NODE_PARENT_KEEP_LEVEL
> + * @see RTE_TM_UPDATE_NODE_PARENT_CHANGE_LEVEL
> + */
> +int
> +rte_tm_node_parent_update(uint8_t port_id,
> + uint32_t node_id,
> + uint32_t parent_node_id,
> + uint32_t priority,
> + uint32_t weight,
> + struct rte_tm_error *error);
> +
> +/**
> + * Traffic manager node private shaper update
> + *
> + * Restriction for the root node: its private shaper profile needs to be valid
> + * and single rate.
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] node_id
> + * Node ID. Needs to be valid.
> + * @param[in] shaper_profile_id
> + * Shaper profile ID for the private shaper of the current node. Needs to be
> + * either valid shaper profile ID or RTE_TM_SHAPER_PROFILE_ID_NONE, with
> + * the latter disabling the private shaper of the current node.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + */
> +int
> +rte_tm_node_shaper_update(uint8_t port_id,
> + uint32_t node_id,
> + uint32_t shaper_profile_id,
> + struct rte_tm_error *error);
> +
> +/**
> + * Traffic manager node shared shapers update
> + *
> + * Restriction for the root node: it cannot use any shared shapers.
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] node_id
> + * Node ID. Needs to be valid.
> + * @param[in] shared_shaper_id
> + * Shared shaper ID. Needs to be valid.
> + * @param[in] add
> + * Set to non-zero value to add this shared shaper to current node or to zero
> + * to delete this shared shaper from current node.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + */
> +int
> +rte_tm_node_shared_shaper_update(uint8_t port_id,
> + uint32_t node_id,
> + uint32_t shared_shaper_id,
> + int add,
> + struct rte_tm_error *error);
> +
> +/**
> + * Traffic manager node enabled statistics counters update
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] node_id
> + * Node ID. Needs to be valid.
> + * @param[in] stats_mask
> + * Mask of statistics counter types to be enabled for the current node. This
> + * needs to be a subset of the statistics counter types available for the
> + * current node. Any statistics counter type not included in this set is to
> + * be disabled for the current node.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + *
> + * @see enum rte_tm_stats_type
> + * @see RTE_TM_UPDATE_NODE_STATS
> + */
> +int
> +rte_tm_node_stats_update(uint8_t port_id,
> + uint32_t node_id,
> + uint64_t stats_mask,
> + struct rte_tm_error *error);
> +
> +/**
> + * Traffic manager node WFQ weight mode update
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] node_id
> + * Node ID. Needs to be valid leaf node ID.
> + * @param[in] wfq_weight_mode
> + * WFQ weight mode for each SP priority. When NULL, it indicates that WFQ is
> + * to be used for all priorities. When non-NULL, it points to a pre-allocated
> + * array of *n_sp_priorities* values, with non-zero value for byte-mode and
> + * zero for packet-mode.
> + * @param[in] n_sp_priorities
> + * Number of SP priorities.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + *
> + * @see RTE_TM_UPDATE_NODE_WFQ_WEIGHT_MODE
> + * @see RTE_TM_UPDATE_NODE_N_SP_PRIORITIES
> + */
> +int
> +rte_tm_node_wfq_weight_mode_update(uint8_t port_id,
> + uint32_t node_id,
> + int *wfq_weight_mode,
> + uint32_t n_sp_priorities,
> + struct rte_tm_error *error);
> +
> +/**
> + * Traffic manager node congestion management mode update
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] node_id
> + * Node ID. Needs to be valid leaf node ID.
> + * @param[in] cman
> + * Congestion management mode.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + *
> + * @see RTE_TM_UPDATE_NODE_CMAN
> + */
> +int
> +rte_tm_node_cman_update(uint8_t port_id,
> + uint32_t node_id,
> + enum rte_tm_cman_mode cman,
> + struct rte_tm_error *error);
> +
> +/**
> + * Traffic manager node private WRED context update
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] node_id
> + * Node ID. Needs to be valid leaf node ID.
> + * @param[in] wred_profile_id
> + * WRED profile ID for the private WRED context of the current node. Needs to
> + * be either valid WRED profile ID or RTE_TM_WRED_PROFILE_ID_NONE, with the
> + * latter disabling the private WRED context of the current node.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + */
> +int
> +rte_tm_node_wred_context_update(uint8_t port_id,
> + uint32_t node_id,
> + uint32_t wred_profile_id,
> + struct rte_tm_error *error);
> +
> +/**
> + * Traffic manager node shared WRED context update
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] node_id
> + * Node ID. Needs to be valid leaf node ID.
> + * @param[in] shared_wred_context_id
> + * Shared WRED context ID. Needs to be valid.
> + * @param[in] add
> + * Set to non-zero value to add this shared WRED context to current node or
> + * to zero to delete this shared WRED context from current node.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + */
> +int
> +rte_tm_node_shared_wred_context_update(uint8_t port_id,
> + uint32_t node_id,
> + uint32_t shared_wred_context_id,
> + int add,
> + struct rte_tm_error *error);
> +
> +/**
> + * Traffic manager node statistics counters read
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] node_id
> + * Node ID. Needs to be valid.
> + * @param[out] stats
> + * When non-NULL, it contains the current value for the statistics counters
> + * enabled for the current node.
> + * @param[out] stats_mask
> + * When non-NULL, it contains the mask of statistics counter types that are
> + * currently enabled for this node, indicating which of the counters
> + * retrieved with the *stats* structure are valid.
> + * @param[in] clear
> + * When this parameter has a non-zero value, the statistics counters are
> + * cleared (i.e. set to zero) immediately after they have been read,
> + * otherwise the statistics counters are left untouched.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + *
> + * @see enum rte_tm_stats_type
> + */
> +int
> +rte_tm_node_stats_read(uint8_t port_id,
> + uint32_t node_id,
> + struct rte_tm_node_stats *stats,
> + uint64_t *stats_mask,
> + int clear,
> + struct rte_tm_error *error);
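A usage sketch, assuming the port exposes the packet counter (the
RTE_TM_STATS_N_PKTS flag and the *n_pkts* field come from the statistics
definitions earlier in this patch; <inttypes.h> provides PRIu64):

	#include <inttypes.h>

	struct rte_tm_node_stats stats;
	uint64_t mask;
	struct rte_tm_error err;

	/* Read-and-clear the counters of leaf node 0; *mask* reports which
	 * fields of *stats* are meaningful.
	 */
	if (rte_tm_node_stats_read(port_id, 0, &stats, &mask, 1, &err) == 0 &&
			(mask & RTE_TM_STATS_N_PKTS))
		printf("node 0: %" PRIu64 " packets\n", stats.n_pkts);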
> +
> +/**
> + * Traffic manager packet marking - VLAN DEI (IEEE 802.1Q)
> + *
> + * IEEE 802.1p maps the traffic class to the VLAN Priority Code Point (PCP)
> + * field (3 bits), while IEEE 802.1Q maps the drop priority to the VLAN Drop
> + * Eligible Indicator (DEI) field (1 bit), which was previously named Canonical
> + * Format Indicator (CFI).
> + *
> + * All VLAN frames of a given color get their DEI bit set if marking is enabled
> + * for this color; otherwise, their DEI bit is left as is (either set or not).
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] mark_green
> + * Set to non-zero value to enable marking of green packets and to zero to
> + * disable it.
> + * @param[in] mark_yellow
> + * Set to non-zero value to enable marking of yellow packets and to zero to
> + * disable it.
> + * @param[in] mark_red
> + * Set to non-zero value to enable marking of red packets and to zero to
> + * disable it.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + *
> + * @see struct rte_tm_capabilities::mark_vlan_dei_supported
> + */
> +int
> +rte_tm_mark_vlan_dei(uint8_t port_id,
> + int mark_green,
> + int mark_yellow,
> + int mark_red,
> + struct rte_tm_error *error);
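For instance, to mark only the two drop-prone colors while leaving green
frames untouched (a common arrangement, though by no means mandated here):

	struct rte_tm_error err;

	if (rte_tm_mark_vlan_dei(port_id, 0 /* green */, 1 /* yellow */,
			1 /* red */, &err))
		printf("VLAN DEI marking unavailable: %s\n",
			err.message ? err.message : "unspecified");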
> +
> +/**
> + * Traffic manager packet marking - IPv4 / IPv6 ECN (IETF RFC 3168)
> + *
> + * IETF RFCs 2474 and 3168 reorganize the IPv4 Type of Service (TOS) field
> + * (8 bits) and the IPv6 Traffic Class (TC) field (8 bits) into Differentiated
> + * Services Codepoint (DSCP) field (6 bits) and Explicit Congestion
> + * Notification (ECN) field (2 bits). The DSCP field is typically used to
> + * encode the traffic class and/or drop priority (RFC 2597), while the ECN
> + * field is used by RFC 3168 to implement a congestion notification mechanism
> + * to be leveraged by transport layer protocols such as TCP and SCTP that have
> + * congestion control mechanisms.
> + *
> + * When congestion is experienced, as an alternative to dropping the packet,
> + * routers can change the ECN field of input packets from 2'b01 or 2'b10
> + * (values indicating that source endpoint is ECN-capable) to 2'b11 (meaning
> + * that congestion is experienced). The destination endpoint can use the
> + * ECN-Echo (ECE) TCP flag to relay the congestion indication back to the
> + * source endpoint, which acknowledges it back to the destination endpoint with
> + * the Congestion Window Reduced (CWR) TCP flag.
> + *
> + * All IPv4/IPv6 packets of a given color with ECN set to 2'b01 or 2'b10
> + * carrying TCP or SCTP have their ECN set to 2'b11 if the marking feature is
> + * enabled for the current color, otherwise the ECN field is left as is.
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] mark_green
> + * Set to non-zero value to enable marking of green packets and to zero to
> + * disable it.
> + * @param[in] mark_yellow
> + * Set to non-zero value to enable marking of yellow packets and to zero to
> + * disable it.
> + * @param[in] mark_red
> + * Set to non-zero value to enable marking of red packets and to zero to
> + * disable it.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + *
> + * @see struct rte_tm_capabilities::mark_ip_ecn_tcp_supported
> + * @see struct rte_tm_capabilities::mark_ip_ecn_sctp_supported
> + */
> +int
> +rte_tm_mark_ip_ecn(uint8_t port_id,
> + int mark_green,
> + int mark_yellow,
> + int mark_red,
> + struct rte_tm_error *error);
> +
> +/**
> + * Traffic manager packet marking - IPv4 / IPv6 DSCP (IETF RFC 2597)
> + *
> + * IETF RFC 2597 maps the traffic class and the drop priority to the IPv4/IPv6
> + * Differentiated Services Codepoint (DSCP) field (6 bits). Here are the DSCP
> + * values proposed by this RFC:
> + *
> + * Class 1 Class 2 Class 3 Class 4
> + * +----------+----------+----------+----------+
> + * Low Drop Prec | 001010 | 010010 | 011010 | 100010 |
> + * Medium Drop Prec | 001100 | 010100 | 011100 | 100100 |
> + * High Drop Prec | 001110 | 010110 | 011110 | 100110 |
> + * +----------+----------+----------+----------+
> + *
> + * There are 4 traffic classes (classes 1 .. 4) encoded by DSCP bits 1 and 2,
> + * as well as 3 drop priorities (low/medium/high) encoded by DSCP bits 3 and 4.
> + *
> + * All IPv4/IPv6 packets have their color marked into DSCP bits 3 and 4 as
> + * follows: green mapped to Low Drop Precedence (2'b01), yellow to Medium
> + * (2'b10) and red to High (2'b11). Marking needs to be explicitly enabled
> + * for each color; when not enabled for a given color, the DSCP field of all
> + * packets with that color is left as is.
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[in] mark_green
> + * Set to non-zero value to enable marking of green packets and to zero to
> + * disable it.
> + * @param[in] mark_yellow
> + * Set to non-zero value to enable marking of yellow packets and to zero to
> + * disable it.
> + * @param[in] mark_red
> + * Set to non-zero value to enable marking of red packets and to zero to
> + * disable it.
> + * @param[out] error
> + * Error details. Filled in only on error, when not NULL.
> + * @return
> + * 0 on success, non-zero error code otherwise.
> + *
> + * @see struct rte_tm_capabilities::mark_ip_dscp_supported
> + */
> +int
> +rte_tm_mark_ip_dscp(uint8_t port_id,
> + int mark_green,
> + int mark_yellow,
> + int mark_red,
> + struct rte_tm_error *error);
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* __INCLUDE_RTE_TM_H__ */
> diff --git a/lib/librte_ether/rte_tm_driver.h b/lib/librte_ether/rte_tm_driver.h
> new file mode 100644
> index 0000000..c25f102
> --- /dev/null
> +++ b/lib/librte_ether/rte_tm_driver.h
> @@ -0,0 +1,373 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright(c) 2017 Intel Corporation. All rights reserved.
> + * All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * * Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * * Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in
> + * the documentation and/or other materials provided with the
> + * distribution.
> + * * Neither the name of Intel Corporation nor the names of its
> + * contributors may be used to endorse or promote products derived
> + * from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef __INCLUDE_RTE_TM_DRIVER_H__
> +#define __INCLUDE_RTE_TM_DRIVER_H__
> +
> +/**
> + * @file
> + * RTE Generic Traffic Manager API (Driver Side)
> + *
> + * This file provides implementation helpers for internal use by PMDs; they
> + * are not intended to be exposed to applications and are not subject to ABI
> + * versioning.
> + */
> +
> +#include <stdint.h>
> +
> +#include <rte_errno.h>
> +#include "rte_ethdev.h"
> +#include "rte_tm.h"
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +typedef int (*rte_tm_node_type_get_t)(struct rte_eth_dev *dev,
> + uint32_t node_id,
> + int *is_leaf,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager node type get */
> +
> +typedef int (*rte_tm_node_level_get_t)(struct rte_eth_dev *dev,
> + uint32_t node_id,
> + uint32_t *level_id,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager node level get */
> +
> +typedef int (*rte_tm_capabilities_get_t)(struct rte_eth_dev *dev,
> + struct rte_tm_capabilities *cap,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager capabilities get */
> +
> +typedef int (*rte_tm_level_capabilities_get_t)(struct rte_eth_dev *dev,
> + uint32_t level_id,
> + struct rte_tm_level_capabilities *cap,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager level capabilities get */
> +
> +typedef int (*rte_tm_node_capabilities_get_t)(struct rte_eth_dev *dev,
> + uint32_t node_id,
> + struct rte_tm_node_capabilities *cap,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager node capabilities get */
> +
> +typedef int (*rte_tm_wred_profile_add_t)(struct rte_eth_dev *dev,
> + uint32_t wred_profile_id,
> + struct rte_tm_wred_params *profile,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager WRED profile add */
> +
> +typedef int (*rte_tm_wred_profile_delete_t)(struct rte_eth_dev *dev,
> + uint32_t wred_profile_id,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager WRED profile delete */
> +
> +typedef int (*rte_tm_shared_wred_context_add_update_t)(
> + struct rte_eth_dev *dev,
> + uint32_t shared_wred_context_id,
> + uint32_t wred_profile_id,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager shared WRED context add/update */
> +
> +typedef int (*rte_tm_shared_wred_context_delete_t)(
> + struct rte_eth_dev *dev,
> + uint32_t shared_wred_context_id,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager shared WRED context delete */
> +
> +typedef int (*rte_tm_shaper_profile_add_t)(struct rte_eth_dev *dev,
> + uint32_t shaper_profile_id,
> + struct rte_tm_shaper_params *profile,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager shaper profile add */
> +
> +typedef int (*rte_tm_shaper_profile_delete_t)(struct rte_eth_dev *dev,
> + uint32_t shaper_profile_id,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager shaper profile delete */
> +
> +typedef int (*rte_tm_shared_shaper_add_update_t)(struct rte_eth_dev *dev,
> + uint32_t shared_shaper_id,
> + uint32_t shaper_profile_id,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager shared shaper add/update */
> +
> +typedef int (*rte_tm_shared_shaper_delete_t)(struct rte_eth_dev *dev,
> + uint32_t shared_shaper_id,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager shared shaper delete */
> +
> +typedef int (*rte_tm_node_add_t)(struct rte_eth_dev *dev,
> + uint32_t node_id,
> + uint32_t parent_node_id,
> + uint32_t priority,
> + uint32_t weight,
> + struct rte_tm_node_params *params,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager node add */
> +
> +typedef int (*rte_tm_node_delete_t)(struct rte_eth_dev *dev,
> + uint32_t node_id,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager node delete */
> +
> +typedef int (*rte_tm_node_suspend_t)(struct rte_eth_dev *dev,
> + uint32_t node_id,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager node suspend */
> +
> +typedef int (*rte_tm_node_resume_t)(struct rte_eth_dev *dev,
> + uint32_t node_id,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager node resume */
> +
> +typedef int (*rte_tm_hierarchy_commit_t)(struct rte_eth_dev *dev,
> + int clear_on_fail,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager hierarchy commit */
> +
> +typedef int (*rte_tm_node_parent_update_t)(struct rte_eth_dev *dev,
> + uint32_t node_id,
> + uint32_t parent_node_id,
> + uint32_t priority,
> + uint32_t weight,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager node parent update */
> +
> +typedef int (*rte_tm_node_shaper_update_t)(struct rte_eth_dev *dev,
> + uint32_t node_id,
> + uint32_t shaper_profile_id,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager node shaper update */
> +
> +typedef int (*rte_tm_node_shared_shaper_update_t)(struct rte_eth_dev *dev,
> + uint32_t node_id,
> + uint32_t shared_shaper_id,
> + int add,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager node shared shaper update */
> +
> +typedef int (*rte_tm_node_stats_update_t)(struct rte_eth_dev *dev,
> + uint32_t node_id,
> + uint64_t stats_mask,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager node stats update */
> +
> +typedef int (*rte_tm_node_wfq_weight_mode_update_t)(
> + struct rte_eth_dev *dev,
> + uint32_t node_id,
> + int *wfq_weight_mode,
> + uint32_t n_sp_priorities,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager node WFQ weight mode update */
> +
> +typedef int (*rte_tm_node_cman_update_t)(struct rte_eth_dev *dev,
> + uint32_t node_id,
> + enum rte_tm_cman_mode cman,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager node congestion management mode update */
> +
> +typedef int (*rte_tm_node_wred_context_update_t)(
> + struct rte_eth_dev *dev,
> + uint32_t node_id,
> + uint32_t wred_profile_id,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager node WRED context update */
> +
> +typedef int (*rte_tm_node_shared_wred_context_update_t)(
> + struct rte_eth_dev *dev,
> + uint32_t node_id,
> + uint32_t shared_wred_context_id,
> + int add,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager node shared WRED context update */
> +
> +typedef int (*rte_tm_node_stats_read_t)(struct rte_eth_dev *dev,
> + uint32_t node_id,
> + struct rte_tm_node_stats *stats,
> + uint64_t *stats_mask,
> + int clear,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager read stats counters for specific node */
> +
> +typedef int (*rte_tm_mark_vlan_dei_t)(struct rte_eth_dev *dev,
> + int mark_green,
> + int mark_yellow,
> + int mark_red,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager packet marking - VLAN DEI */
> +
> +typedef int (*rte_tm_mark_ip_ecn_t)(struct rte_eth_dev *dev,
> + int mark_green,
> + int mark_yellow,
> + int mark_red,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager packet marking - IPv4/IPv6 ECN */
> +
> +typedef int (*rte_tm_mark_ip_dscp_t)(struct rte_eth_dev *dev,
> + int mark_green,
> + int mark_yellow,
> + int mark_red,
> + struct rte_tm_error *error);
> +/**< @internal Traffic manager packet marking - IPv4/IPv6 DSCP */
> +
> +struct rte_tm_ops {
> + /** Traffic manager node type get */
> + rte_tm_node_type_get_t node_type_get;
> + /** Traffic manager node level get */
> + rte_tm_node_level_get_t node_level_get;
> +
> + /** Traffic manager capabilities get */
> + rte_tm_capabilities_get_t capabilities_get;
> + /** Traffic manager level capabilities get */
> + rte_tm_level_capabilities_get_t level_capabilities_get;
> + /** Traffic manager node capabilities get */
> + rte_tm_node_capabilities_get_t node_capabilities_get;
> +
> + /** Traffic manager WRED profile add */
> + rte_tm_wred_profile_add_t wred_profile_add;
> + /** Traffic manager WRED profile delete */
> + rte_tm_wred_profile_delete_t wred_profile_delete;
> + /** Traffic manager shared WRED context add/update */
> + rte_tm_shared_wred_context_add_update_t
> + shared_wred_context_add_update;
> + /** Traffic manager shared WRED context delete */
> + rte_tm_shared_wred_context_delete_t
> + shared_wred_context_delete;
> +
> + /** Traffic manager shaper profile add */
> + rte_tm_shaper_profile_add_t shaper_profile_add;
> + /** Traffic manager shaper profile delete */
> + rte_tm_shaper_profile_delete_t shaper_profile_delete;
> + /** Traffic manager shared shaper add/update */
> + rte_tm_shared_shaper_add_update_t shared_shaper_add_update;
> + /** Traffic manager shared shaper delete */
> + rte_tm_shared_shaper_delete_t shared_shaper_delete;
> +
> + /** Traffic manager node add */
> + rte_tm_node_add_t node_add;
> + /** Traffic manager node delete */
> + rte_tm_node_delete_t node_delete;
> + /** Traffic manager node suspend */
> + rte_tm_node_suspend_t node_suspend;
> + /** Traffic manager node resume */
> + rte_tm_node_resume_t node_resume;
> + /** Traffic manager hierarchy commit */
> + rte_tm_hierarchy_commit_t hierarchy_commit;
> +
> + /** Traffic manager node parent update */
> + rte_tm_node_parent_update_t node_parent_update;
> + /** Traffic manager node shaper update */
> + rte_tm_node_shaper_update_t node_shaper_update;
> + /** Traffic manager node shared shaper update */
> + rte_tm_node_shared_shaper_update_t node_shared_shaper_update;
> + /** Traffic manager node stats update */
> + rte_tm_node_stats_update_t node_stats_update;
> + /** Traffic manager node WFQ weight mode update */
> + rte_tm_node_wfq_weight_mode_update_t node_wfq_weight_mode_update;
> + /** Traffic manager node congestion management mode update */
> + rte_tm_node_cman_update_t node_cman_update;
> + /** Traffic manager node WRED context update */
> + rte_tm_node_wred_context_update_t node_wred_context_update;
> + /** Traffic manager node shared WRED context update */
> + rte_tm_node_shared_wred_context_update_t
> + node_shared_wred_context_update;
> + /** Traffic manager read statistics counters for current node */
> + rte_tm_node_stats_read_t node_stats_read;
> +
> + /** Traffic manager packet marking - VLAN DEI */
> + rte_tm_mark_vlan_dei_t mark_vlan_dei;
> + /** Traffic manager packet marking - IPv4/IPv6 ECN */
> + rte_tm_mark_ip_ecn_t mark_ip_ecn;
> + /** Traffic manager packet marking - IPv4/IPv6 DSCP */
> + rte_tm_mark_ip_dscp_t mark_ip_dscp;
> +};
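A PMD that implements only part of the API would leave the remaining
callbacks NULL so the corresponding operations report as unsupported; a
sketch (all callback names are hypothetical):

	static const struct rte_tm_ops my_pmd_tm_ops = {
		.node_type_get = my_pmd_node_type_get,
		.capabilities_get = my_pmd_capabilities_get,
		.node_add = my_pmd_node_add,
		.node_delete = my_pmd_node_delete,
		.hierarchy_commit = my_pmd_hierarchy_commit,
		/* everything else left NULL: operation not supported */
	};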
> +
> +/**
> + * Initialize generic error structure.
> + *
> + * This function also sets rte_errno to a given value.
> + *
> + * @param[out] error
> + * Pointer to error structure (may be NULL).
> + * @param[in] code
> + * Related error code (rte_errno).
> + * @param[in] type
> + * Cause field and error type.
> + * @param[in] cause
> + * Object responsible for the error.
> + * @param[in] message
> + * Human-readable error message.
> + *
> + * @return
> + * Error code.
> + */
> +static inline int
> +rte_tm_error_set(struct rte_tm_error *error,
> + int code,
> + enum rte_tm_error_type type,
> + const void *cause,
> + const char *message)
> +{
> + if (error) {
> + *error = (struct rte_tm_error){
> + .type = type,
> + .cause = cause,
> + .message = message,
> + };
> + }
> + rte_errno = code;
> + return code;
> +}
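Inside a PMD callback, this helper collapses error reporting to a single
statement, e.g. (the callback body and the check are hypothetical; assumes
rte_tm_driver.h and <errno.h> are included):

	static int
	my_pmd_node_delete(struct rte_eth_dev *dev __rte_unused,
		uint32_t node_id, struct rte_tm_error *error)
	{
		if (node_id == RTE_TM_NODE_ID_NULL)
			return -rte_tm_error_set(error, EINVAL,
				RTE_TM_ERROR_TYPE_NODE_ID,
				NULL, rte_strerror(EINVAL));
		/* ... perform the actual deletion ... */
		return 0;
	}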
> +
> +/**
> + * Get generic traffic manager operations structure from a port
> + *
> + * @param[in] port_id
> + * The port identifier of the Ethernet device.
> + * @param[out] error
> + * Error details.
> + *
> + * @return
> + * The traffic manager operations structure associated with port_id on
> + * success, NULL otherwise.
> + */
> +const struct rte_tm_ops *
> +rte_tm_ops_get(uint8_t port_id, struct rte_tm_error *error);
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* __INCLUDE_RTE_TM_DRIVER_H__ */
>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 01/25] ethdev: introduce generic flow API
@ 2017-05-23 6:07 0% ` Zhao1, Wei
0 siblings, 0 replies; 200+ results
From: Zhao1, Wei @ 2017-05-23 6:07 UTC (permalink / raw)
To: Adrien Mazarguil, dev; +Cc: Xing, Beilei
Hi, Adrien
> +struct rte_flow_item_raw {
> + uint32_t relative:1; /**< Look for pattern after the previous item. */
> + uint32_t search:1; /**< Search pattern from offset (see also limit). */
> + uint32_t reserved:30; /**< Reserved, must be set to zero. */
> + int32_t offset; /**< Absolute or relative offset for pattern. */
> + uint16_t limit; /**< Search area limit for start of pattern. */
> + uint16_t length; /**< Pattern length. */
> + uint8_t pattern[]; /**< Byte string to look for. */ };
When I use this API to test the igb flex filter, I find that
the *pattern* member of struct rte_flow_item_raw does not behave as I expect.
For example, If I type in " flow create 0 ingress pattern raw relative is 0 pattern is 0123 / end actions queue index 1 / end "
What I get in the NIC layer is pattern[]={ 0x30, 0x31, 0x32, 0x33, 0x0 <repeats 124 times> }.
But what I need is pattern[]={0x01, 0x23, 0x0 <repeats 126 times>}
Regarding the format conversion of the flex filter, I referred to the testpmd function cmd_flex_filter_parsed(),
which shows the details of the conversion from ASCII characters to bytes, for example:
	for (i = 0; i < len; i++) {
		c = bytes_ptr[i];
		if (isxdigit(c) == 0) {
			/* invalid characters. */
			printf("invalid input\n");
			return;
		}
		val = xdigit2val(c);
		if (i % 2) {
			byte |= val;
			filter.bytes[j] = byte;
			printf("bytes[%d]:%02x ", j, filter.bytes[j]);
			j++;
			byte = 0;
		} else
			byte |= val << 4;
	}
and there is also a usage example in the DPDK document testpmd_app_ug-16.11.pdf
(it does not use ASCII-encoded bytes either):
testpmd> flex_filter 0 add len 16 bytes 0x00000000000000000000000008060000 \
mask 000C priority 3 queue 3
So, will the new generic flow API align with the old flex byte filter format in 17.08 or in the future?
At least, will the *pattern* member of struct rte_flow_item_raw keep the same format as the old filter?
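In other words, the conversion being asked for is two ASCII hex digits per
output byte; a minimal sketch of that conversion (the helper name is
hypothetical, not part of testpmd or the flow API):

	#include <stdio.h>
	#include <stdint.h>
	#include <stddef.h>

	/* "0123" -> {0x01, 0x23}: two hex digits per output byte. */
	static void
	hex_string_to_bytes(const char *s, uint8_t *out, size_t n_bytes)
	{
		size_t i;

		for (i = 0; i < n_bytes; i++) {
			unsigned int v;

			sscanf(s + 2 * i, "%2x", &v);
			out[i] = (uint8_t)v;
		}
	}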
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Adrien Mazarguil
> Sent: Tuesday, December 20, 2016 1:49 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v3 01/25] ethdev: introduce generic flow API
>
> This new API supersedes all the legacy filter types described in rte_eth_ctrl.h.
> It is slightly higher level and as a result relies more on PMDs to process and
> validate flow rules.
>
> Benefits:
>
> - A unified API is easier to program for, applications do not have to be
> written for a specific filter type which may or may not be supported by
> the underlying device.
>
> - The behavior of a flow rule is the same regardless of the underlying
> device, applications do not need to be aware of hardware quirks.
>
> - Extensible by design, API/ABI breakage should rarely occur if at all.
>
> - Documentation is self-standing, no need to look up elsewhere.
>
> Existing filter types will be deprecated and removed in the near future.
>
> Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
> Acked-by: Olga Shern <olgas@mellanox.com>
> ---
> MAINTAINERS | 4 +
> doc/api/doxy-api-index.md | 2 +
> lib/librte_ether/Makefile | 3 +
> lib/librte_ether/rte_eth_ctrl.h | 1 +
> lib/librte_ether/rte_ether_version.map | 11 +
> lib/librte_ether/rte_flow.c | 159 +++++
> lib/librte_ether/rte_flow.h | 947 ++++++++++++++++++++++++++++
> lib/librte_ether/rte_flow_driver.h | 182 ++++++
> 8 files changed, 1309 insertions(+)
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 26d9590..5975cff 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -243,6 +243,10 @@ M: Thomas Monjalon
> <thomas.monjalon@6wind.com>
> F: lib/librte_ether/
> F: scripts/test-null.sh
>
> +Generic flow API
> +M: Adrien Mazarguil <adrien.mazarguil@6wind.com>
> +F: lib/librte_ether/rte_flow*
> +
> Crypto API
> M: Declan Doherty <declan.doherty@intel.com>
> F: lib/librte_cryptodev/
> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md index
> de65b4c..4951552 100644
> --- a/doc/api/doxy-api-index.md
> +++ b/doc/api/doxy-api-index.md
> @@ -39,6 +39,8 @@ There are many libraries, so their headers may be
> grouped by topics:
> [dev] (@ref rte_dev.h),
> [ethdev] (@ref rte_ethdev.h),
> [ethctrl] (@ref rte_eth_ctrl.h),
> + [rte_flow] (@ref rte_flow.h),
> + [rte_flow_driver] (@ref rte_flow_driver.h),
> [cryptodev] (@ref rte_cryptodev.h),
> [devargs] (@ref rte_devargs.h),
> [bond] (@ref rte_eth_bond.h),
> diff --git a/lib/librte_ether/Makefile b/lib/librte_ether/Makefile index
> efe1e5f..9335361 100644
> --- a/lib/librte_ether/Makefile
> +++ b/lib/librte_ether/Makefile
> @@ -44,6 +44,7 @@ EXPORT_MAP := rte_ether_version.map LIBABIVER := 5
>
> SRCS-y += rte_ethdev.c
> +SRCS-y += rte_flow.c
>
> #
> # Export include files
> @@ -51,6 +52,8 @@ SRCS-y += rte_ethdev.c SYMLINK-y-include +=
> rte_ethdev.h SYMLINK-y-include += rte_eth_ctrl.h SYMLINK-y-include +=
> rte_dev_info.h
> +SYMLINK-y-include += rte_flow.h
> +SYMLINK-y-include += rte_flow_driver.h
>
> # this lib depends upon:
> DEPDIRS-y += lib/librte_net lib/librte_eal lib/librte_mempool lib/librte_ring
> lib/librte_mbuf diff --git a/lib/librte_ether/rte_eth_ctrl.h
> b/lib/librte_ether/rte_eth_ctrl.h index fe80eb0..8386904 100644
> --- a/lib/librte_ether/rte_eth_ctrl.h
> +++ b/lib/librte_ether/rte_eth_ctrl.h
> @@ -99,6 +99,7 @@ enum rte_filter_type {
> RTE_ETH_FILTER_FDIR,
> RTE_ETH_FILTER_HASH,
> RTE_ETH_FILTER_L2_TUNNEL,
> + RTE_ETH_FILTER_GENERIC,
> RTE_ETH_FILTER_MAX
> };
>
> diff --git a/lib/librte_ether/rte_ether_version.map
> b/lib/librte_ether/rte_ether_version.map
> index 72be66d..384cdee 100644
> --- a/lib/librte_ether/rte_ether_version.map
> +++ b/lib/librte_ether/rte_ether_version.map
> @@ -147,3 +147,14 @@ DPDK_16.11 {
> rte_eth_dev_pci_remove;
>
> } DPDK_16.07;
> +
> +DPDK_17.02 {
> + global:
> +
> + rte_flow_validate;
> + rte_flow_create;
> + rte_flow_destroy;
> + rte_flow_flush;
> + rte_flow_query;
> +
> +} DPDK_16.11;
> diff --git a/lib/librte_ether/rte_flow.c b/lib/librte_ether/rte_flow.c new file
> mode 100644 index 0000000..d98fb1b
> --- /dev/null
> +++ b/lib/librte_ether/rte_flow.c
> @@ -0,0 +1,159 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright 2016 6WIND S.A.
> + * Copyright 2016 Mellanox.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * * Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * * Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in
> + * the documentation and/or other materials provided with the
> + * distribution.
> + * * Neither the name of 6WIND S.A. nor the names of its
> + * contributors may be used to endorse or promote products derived
> + * from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <stdint.h>
> +
> +#include <rte_errno.h>
> +#include <rte_branch_prediction.h>
> +#include "rte_ethdev.h"
> +#include "rte_flow_driver.h"
> +#include "rte_flow.h"
> +
> +/* Get generic flow operations structure from a port. */
> +const struct rte_flow_ops *
> +rte_flow_ops_get(uint8_t port_id, struct rte_flow_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + const struct rte_flow_ops *ops;
> + int code;
> +
> + if (unlikely(!rte_eth_dev_is_valid_port(port_id)))
> + code = ENODEV;
> + else if (unlikely(!dev->dev_ops->filter_ctrl ||
> + dev->dev_ops->filter_ctrl(dev,
> + RTE_ETH_FILTER_GENERIC,
> + RTE_ETH_FILTER_GET,
> + &ops) ||
> + !ops))
> + code = ENOSYS;
> + else
> + return ops;
> + rte_flow_error_set(error, code, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> + NULL, rte_strerror(code));
> + return NULL;
> +}
> +
> +/* Check whether a flow rule can be created on a given port. */
> +int
> +rte_flow_validate(uint8_t port_id,
> + const struct rte_flow_attr *attr,
> + const struct rte_flow_item pattern[],
> + const struct rte_flow_action actions[],
> + struct rte_flow_error *error)
> +{
> + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +
> + if (unlikely(!ops))
> + return -rte_errno;
> + if (likely(!!ops->validate))
> + return ops->validate(dev, attr, pattern, actions, error);
> + rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> + NULL, rte_strerror(ENOSYS));
> + return -rte_errno;
> +}
> +
> +/* Create a flow rule on a given port. */
> +struct rte_flow *
> +rte_flow_create(uint8_t port_id,
> + const struct rte_flow_attr *attr,
> + const struct rte_flow_item pattern[],
> + const struct rte_flow_action actions[],
> + struct rte_flow_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> + if (unlikely(!ops))
> + return NULL;
> + if (likely(!!ops->create))
> + return ops->create(dev, attr, pattern, actions, error);
> + rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> + NULL, rte_strerror(ENOSYS));
> + return NULL;
> +}
> +
> +/* Destroy a flow rule on a given port. */
> +int
> +rte_flow_destroy(uint8_t port_id,
> + struct rte_flow *flow,
> + struct rte_flow_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> + if (unlikely(!ops))
> + return -rte_errno;
> + if (likely(!!ops->destroy))
> + return ops->destroy(dev, flow, error);
> + rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> + NULL, rte_strerror(ENOSYS));
> + return -rte_errno;
> +}
> +
> +/* Destroy all flow rules associated with a port. */
> +int
> +rte_flow_flush(uint8_t port_id,
> + struct rte_flow_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> + if (unlikely(!ops))
> + return -rte_errno;
> + if (likely(!!ops->flush))
> + return ops->flush(dev, error);
> + rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> + NULL, rte_strerror(ENOSYS));
> + return -rte_errno;
> +}
> +
> +/* Query an existing flow rule. */
> +int
> +rte_flow_query(uint8_t port_id,
> + struct rte_flow *flow,
> + enum rte_flow_action_type action,
> + void *data,
> + struct rte_flow_error *error)
> +{
> + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> + if (!ops)
> + return -rte_errno;
> + if (likely(!!ops->query))
> + return ops->query(dev, flow, action, data, error);
> + rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> + NULL, rte_strerror(ENOSYS));
> + return -rte_errno;
> +}
> diff --git a/lib/librte_ether/rte_flow.h b/lib/librte_ether/rte_flow.h
> new file mode 100644
> index 0000000..98084ac
> --- /dev/null
> +++ b/lib/librte_ether/rte_flow.h
> @@ -0,0 +1,947 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright 2016 6WIND S.A.
> + * Copyright 2016 Mellanox.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * * Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * * Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in
> + * the documentation and/or other materials provided with the
> + * distribution.
> + * * Neither the name of 6WIND S.A. nor the names of its
> + * contributors may be used to endorse or promote products derived
> + * from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef RTE_FLOW_H_
> +#define RTE_FLOW_H_
> +
> +/**
> + * @file
> + * RTE generic flow API
> + *
> + * This interface provides the ability to program packet matching and
> + * associated actions in hardware through flow rules.
> + */
> +
> +#include <rte_arp.h>
> +#include <rte_ether.h>
> +#include <rte_icmp.h>
> +#include <rte_ip.h>
> +#include <rte_sctp.h>
> +#include <rte_tcp.h>
> +#include <rte_udp.h>
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/**
> + * Flow rule attributes.
> + *
> + * Priorities are set on two levels: per group and per rule within groups.
> + *
> + * Lower values denote higher priority, the highest priority for both levels
> + * is 0, so that a rule with priority 0 in group 8 is always matched after a
> + * rule with priority 8 in group 0.
> + *
> + * Although optional, applications are encouraged to group similar rules as
> + * much as possible to fully take advantage of hardware capabilities
> + * (e.g. optimized matching) and work around limitations (e.g. a single
> + * pattern type possibly allowed in a given group).
> + *
> + * Group and priority levels are arbitrary and up to the application, they
> + * do not need to be contiguous nor start from 0, however the maximum number
> + * varies between devices and may be affected by existing flow rules.
> + *
> + * If a packet is matched by several rules of a given group for a given
> + * priority level, the outcome is undefined. It can take any path, may be
> + * duplicated or even cause unrecoverable errors.
> + *
> + * Note that support for more than a single group and priority level is not
> + * guaranteed.
> + *
> + * Flow rules can apply to inbound and/or outbound traffic (ingress/egress).
> + *
> + * Several pattern items and actions are valid and can be used in both
> + * directions. Those valid for only one direction are described as such.
> + *
> + * At least one direction must be specified.
> + *
> + * Specifying both directions at once for a given rule is not recommended
> + * but may be valid in a few cases (e.g. shared counter).
> + */
> +struct rte_flow_attr {
> + uint32_t group; /**< Priority group. */
> + uint32_t priority; /**< Priority level within group. */
> + uint32_t ingress:1; /**< Rule applies to ingress traffic. */
> + uint32_t egress:1; /**< Rule applies to egress traffic. */
> + uint32_t reserved:30; /**< Reserved, must be zero. */
> +};
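
To make these attributes concrete, here is a minimal sketch (values are
arbitrary and for illustration only) of an ingress-only rule in the default
group:

    struct rte_flow_attr attr = {
        .group = 0,    /* default group */
        .priority = 0, /* highest priority within the group */
        .ingress = 1,  /* match inbound traffic only */
        /* .egress and .reserved are left at zero */
    };
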
> +
> +/**
> + * Matching pattern item types.
> + *
> + * Pattern items fall in two categories:
> + *
> + * - Matching protocol headers and packet data (ANY, RAW, ETH, VLAN, IPV4,
> + * IPV6, ICMP, UDP, TCP, SCTP, VXLAN and so on), usually associated with a
> + * specification structure. These must be stacked in the same order as the
> + * protocol layers to match, starting from the lowest.
> + *
> + * - Matching meta-data or affecting pattern processing (END, VOID, INVERT,
> + * PF, VF, PORT and so on), often without a specification structure. Since
> + * they do not match packet contents, these can be specified anywhere
> + * within item lists without affecting others.
> + *
> + * See the description of individual types for more information. Those
> + * marked with [META] fall into the second category.
> + */
> +enum rte_flow_item_type {
> + /**
> + * [META]
> + *
> + * End marker for item lists. Prevents further processing of items,
> + * thereby ending the pattern.
> + *
> + * No associated specification structure.
> + */
> + RTE_FLOW_ITEM_TYPE_END,
> +
> + /**
> + * [META]
> + *
> + * Used as a placeholder for convenience. It is ignored and simply
> + * discarded by PMDs.
> + *
> + * No associated specification structure.
> + */
> + RTE_FLOW_ITEM_TYPE_VOID,
> +
> + /**
> + * [META]
> + *
> + * Inverted matching, i.e. process packets that do not match the
> + * pattern.
> + *
> + * No associated specification structure.
> + */
> + RTE_FLOW_ITEM_TYPE_INVERT,
> +
> + /**
> + * Matches any protocol in place of the current layer, a single ANY
> + * may also stand for several protocol layers.
> + *
> + * See struct rte_flow_item_any.
> + */
> + RTE_FLOW_ITEM_TYPE_ANY,
> +
> + /**
> + * [META]
> + *
> + * Matches packets addressed to the physical function of the device.
> + *
> + * If the underlying device function differs from the one that would
> + * normally receive the matched traffic, specifying this item
> + * prevents it from reaching that device unless the flow rule
> + * contains a PF action. Packets are not duplicated between device
> + * instances by default.
> + *
> + * No associated specification structure.
> + */
> + RTE_FLOW_ITEM_TYPE_PF,
> +
> + /**
> + * [META]
> + *
> + * Matches packets addressed to a virtual function ID of the device.
> + *
> + * If the underlying device function differs from the one that would
> + * normally receive the matched traffic, specifying this item
> + * prevents it from reaching that device unless the flow rule
> + * contains a VF action. Packets are not duplicated between device
> + * instances by default.
> + *
> + * See struct rte_flow_item_vf.
> + */
> + RTE_FLOW_ITEM_TYPE_VF,
> +
> + /**
> + * [META]
> + *
> + * Matches packets coming from the specified physical port of the
> + * underlying device.
> + *
> + * The first PORT item overrides the physical port normally
> + * associated with the specified DPDK input port (port_id). This
> + * item can be provided several times to match additional physical
> + * ports.
> + *
> + * See struct rte_flow_item_port.
> + */
> + RTE_FLOW_ITEM_TYPE_PORT,
> +
> + /**
> + * Matches a byte string of a given length at a given offset.
> + *
> + * See struct rte_flow_item_raw.
> + */
> + RTE_FLOW_ITEM_TYPE_RAW,
> +
> + /**
> + * Matches an Ethernet header.
> + *
> + * See struct rte_flow_item_eth.
> + */
> + RTE_FLOW_ITEM_TYPE_ETH,
> +
> + /**
> + * Matches an 802.1Q/ad VLAN tag.
> + *
> + * See struct rte_flow_item_vlan.
> + */
> + RTE_FLOW_ITEM_TYPE_VLAN,
> +
> + /**
> + * Matches an IPv4 header.
> + *
> + * See struct rte_flow_item_ipv4.
> + */
> + RTE_FLOW_ITEM_TYPE_IPV4,
> +
> + /**
> + * Matches an IPv6 header.
> + *
> + * See struct rte_flow_item_ipv6.
> + */
> + RTE_FLOW_ITEM_TYPE_IPV6,
> +
> + /**
> + * Matches an ICMP header.
> + *
> + * See struct rte_flow_item_icmp.
> + */
> + RTE_FLOW_ITEM_TYPE_ICMP,
> +
> + /**
> + * Matches a UDP header.
> + *
> + * See struct rte_flow_item_udp.
> + */
> + RTE_FLOW_ITEM_TYPE_UDP,
> +
> + /**
> + * Matches a TCP header.
> + *
> + * See struct rte_flow_item_tcp.
> + */
> + RTE_FLOW_ITEM_TYPE_TCP,
> +
> + /**
> + * Matches a SCTP header.
> + *
> + * See struct rte_flow_item_sctp.
> + */
> + RTE_FLOW_ITEM_TYPE_SCTP,
> +
> + /**
> + * Matches a VXLAN header.
> + *
> + * See struct rte_flow_item_vxlan.
> + */
> + RTE_FLOW_ITEM_TYPE_VXLAN,
> +};
> +
> +/**
> + * RTE_FLOW_ITEM_TYPE_ANY
> + *
> + * Matches any protocol in place of the current layer, a single ANY may also
> + * stand for several protocol layers.
> + *
> + * This is usually specified as the first pattern item when looking for a
> + * protocol anywhere in a packet.
> + *
> + * A zeroed mask stands for any number of layers.
> + */
> +struct rte_flow_item_any {
> + uint32_t num; /**< Number of layers covered. */
> +};
> +
> +/**
> + * RTE_FLOW_ITEM_TYPE_VF
> + *
> + * Matches packets addressed to a virtual function ID of the device.
> + *
> + * If the underlying device function differs from the one that would
> + * normally receive the matched traffic, specifying this item prevents it
> + * from reaching that device unless the flow rule contains a VF
> + * action. Packets are not duplicated between device instances by default.
> + *
> + * - Likely to return an error or never match any traffic if this causes a
> + * VF device to match traffic addressed to a different VF.
> + * - Can be specified multiple times to match traffic addressed to several
> + * VF IDs.
> + * - Can be combined with a PF item to match both PF and VF traffic.
> + *
> + * A zeroed mask can be used to match any VF ID.
> + */
> +struct rte_flow_item_vf {
> + uint32_t id; /**< Destination VF ID. */
> +};
> +
> +/**
> + * RTE_FLOW_ITEM_TYPE_PORT
> + *
> + * Matches packets coming from the specified physical port of the underlying
> + * device.
> + *
> + * The first PORT item overrides the physical port normally associated with
> + * the specified DPDK input port (port_id). This item can be provided
> + * several times to match additional physical ports.
> + *
> + * Note that physical ports are not necessarily tied to DPDK input ports
> + * (port_id) when those are not under DPDK control. Possible values are
> + * specific to each device, they are not necessarily indexed from zero and
> + * may not be contiguous.
> + *
> + * As a device property, the list of allowed values as well as the value
> + * associated with a port_id should be retrieved by other means.
> + *
> + * A zeroed mask can be used to match any port index.
> + */
> +struct rte_flow_item_port {
> + uint32_t index; /**< Physical port index. */
> +};
> +
> +/**
> + * RTE_FLOW_ITEM_TYPE_RAW
> + *
> + * Matches a byte string of a given length at a given offset.
> + *
> + * Offset is either absolute (using the start of the packet) or relative to
> + * the end of the previous matched item in the stack, in which case negative
> + * values are allowed.
> + *
> + * If search is enabled, offset is used as the starting point. The search
> + * area can be delimited by setting limit to a nonzero value, which is the
> + * maximum number of bytes after offset where the pattern may start.
> + *
> + * Matching a zero-length pattern is allowed, doing so resets the relative
> + * offset for subsequent items.
> + *
> + * This type does not support ranges (struct rte_flow_item.last).
> + */
> +struct rte_flow_item_raw {
> + uint32_t relative:1; /**< Look for pattern after the previous item. */
> + uint32_t search:1; /**< Search pattern from offset (see also limit). */
> + uint32_t reserved:30; /**< Reserved, must be set to zero. */
> + int32_t offset; /**< Absolute or relative offset for pattern. */
> + uint16_t limit; /**< Search area limit for start of pattern. */
> + uint16_t length; /**< Pattern length. */
> + uint8_t pattern[]; /**< Byte string to look for. */
> +};
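
Since pattern[] is a flexible array member, a spec for this item cannot be a
plain automatic variable; a minimal sketch (byte values are hypothetical,
error handling omitted, needs stdlib.h and string.h) allocates the item
together with its pattern storage:

    struct rte_flow_item_raw *raw = calloc(1, sizeof(*raw) + 2);

    raw->relative = 1; /* offset counts from the previous matched item */
    raw->search = 1;   /* scan for the pattern instead of an exact offset */
    raw->limit = 64;   /* pattern must start within 64 bytes of offset */
    raw->length = 2;
    memcpy(raw->pattern, "\xca\xfe", 2); /* hypothetical byte string */
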
> +
> +/**
> + * RTE_FLOW_ITEM_TYPE_ETH
> + *
> + * Matches an Ethernet header.
> + */
> +struct rte_flow_item_eth {
> + struct ether_addr dst; /**< Destination MAC. */
> + struct ether_addr src; /**< Source MAC. */
> + uint16_t type; /**< EtherType. */
> +};
> +
> +/**
> + * RTE_FLOW_ITEM_TYPE_VLAN
> + *
> + * Matches an 802.1Q/ad VLAN tag.
> + *
> + * This type normally follows either RTE_FLOW_ITEM_TYPE_ETH or
> + * RTE_FLOW_ITEM_TYPE_VLAN.
> + */
> +struct rte_flow_item_vlan {
> + uint16_t tpid; /**< Tag protocol identifier. */
> + uint16_t tci; /**< Tag control information. */
> +};
> +
> +/**
> + * RTE_FLOW_ITEM_TYPE_IPV4
> + *
> + * Matches an IPv4 header.
> + *
> + * Note: IPv4 options are handled by dedicated pattern items.
> + */
> +struct rte_flow_item_ipv4 {
> + struct ipv4_hdr hdr; /**< IPv4 header definition. */
> +};
> +
> +/**
> + * RTE_FLOW_ITEM_TYPE_IPV6.
> + *
> + * Matches an IPv6 header.
> + *
> + * Note: IPv6 options are handled by dedicated pattern items.
> + */
> +struct rte_flow_item_ipv6 {
> + struct ipv6_hdr hdr; /**< IPv6 header definition. */
> +};
> +
> +/**
> + * RTE_FLOW_ITEM_TYPE_ICMP.
> + *
> + * Matches an ICMP header.
> + */
> +struct rte_flow_item_icmp {
> + struct icmp_hdr hdr; /**< ICMP header definition. */
> +};
> +
> +/**
> + * RTE_FLOW_ITEM_TYPE_UDP.
> + *
> + * Matches a UDP header.
> + */
> +struct rte_flow_item_udp {
> + struct udp_hdr hdr; /**< UDP header definition. */
> +};
> +
> +/**
> + * RTE_FLOW_ITEM_TYPE_TCP.
> + *
> + * Matches a TCP header.
> + */
> +struct rte_flow_item_tcp {
> + struct tcp_hdr hdr; /**< TCP header definition. */
> +};
> +
> +/**
> + * RTE_FLOW_ITEM_TYPE_SCTP.
> + *
> + * Matches a SCTP header.
> + */
> +struct rte_flow_item_sctp {
> + struct sctp_hdr hdr; /**< SCTP header definition. */
> +};
> +
> +/**
> + * RTE_FLOW_ITEM_TYPE_VXLAN.
> + *
> + * Matches a VXLAN header (RFC 7348).
> + */
> +struct rte_flow_item_vxlan {
> + uint8_t flags; /**< Normally 0x08 (I flag). */
> + uint8_t rsvd0[3]; /**< Reserved, normally 0x000000. */
> + uint8_t vni[3]; /**< VXLAN identifier. */
> + uint8_t rsvd1; /**< Reserved, normally 0x00. */
> +};
> +
> +/**
> + * Matching pattern item definition.
> + *
> + * A pattern is formed by stacking items starting from the lowest protocol
> + * layer to match. This stacking restriction does not apply to meta items
> + * which can be placed anywhere in the stack without affecting the meaning
> + * of the resulting pattern.
> + *
> + * Patterns are terminated by END items.
> + *
> + * The spec field should be a valid pointer to a structure of the related
> + * item type. It may be set to NULL in many cases to use default values.
> + *
> + * Optionally, last can point to a structure of the same type to define an
> + * inclusive range. This is mostly supported by integer and address fields,
> + * may cause errors otherwise. Fields that do not support ranges must be set
> + * to 0 or to the same value as the corresponding fields in spec.
> + *
> + * By default all fields present in spec are considered relevant (see note
> + * below). This behavior can be altered by providing a mask structure of the
> + * same type with applicable bits set to one. It can also be used to
> + * partially filter out specific fields (e.g. as an alternate means to match
> + * ranges of IP addresses).
> + *
> + * Mask is a simple bit-mask applied before interpreting the contents of
> + * spec and last, which may yield unexpected results if not used
> + * carefully. For example, if for an IPv4 address field, spec provides
> + * 10.1.2.3, last provides 10.3.4.5 and mask provides 255.255.0.0, the
> + * effective range becomes 10.1.0.0 to 10.3.255.255.
> + *
> + * Note: the defaults for data-matching items such as IPv4 when mask is not
> + * specified actually depend on the underlying implementation since only
> + * recognized fields can be taken into account.
> + */
> +struct rte_flow_item {
> + enum rte_flow_item_type type; /**< Item type. */
> + const void *spec; /**< Pointer to item specification structure. */
> + const void *last; /**< Defines an inclusive range (spec to last). */
> + const void *mask; /**< Bit-mask applied to spec and last. */
> +};
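
To make the spec/mask interaction concrete, a minimal sketch (addresses are
hypothetical; rte_cpu_to_be_32() comes from rte_byteorder.h) matching any
IPv4 destination in 10.1.2.0/24 behind any Ethernet header:

    struct rte_flow_item_ipv4 ip_spec = {
        .hdr = { .dst_addr = rte_cpu_to_be_32(0x0a010200) }, /* 10.1.2.0 */
    };
    struct rte_flow_item_ipv4 ip_mask = {
        .hdr = { .dst_addr = rte_cpu_to_be_32(0xffffff00) }, /* /24 prefix */
    };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH }, /* NULL spec: any Ethernet header */
        {
            .type = RTE_FLOW_ITEM_TYPE_IPV4,
            .spec = &ip_spec,
            .mask = &ip_mask, /* low address byte is masked out */
        },
        { .type = RTE_FLOW_ITEM_TYPE_END }, /* terminates the pattern */
    };
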
> +
> +/**
> + * Action types.
> + *
> + * Each possible action is represented by a type. Some have associated
> + * configuration structures. Several actions combined in a list can be
> + * assigned to a flow rule. That list is not ordered.
> + *
> + * They fall in three categories:
> + *
> + * - Terminating actions (such as QUEUE, DROP, RSS, PF, VF) that prevent
> + * processing matched packets by subsequent flow rules, unless overridden
> + * with PASSTHRU.
> + *
> + * - Non terminating actions (PASSTHRU, DUP) that leave matched packets up
> + * for additional processing by subsequent flow rules.
> + *
> + * - Other non terminating meta actions that do not affect the fate of
> + * packets (END, VOID, MARK, FLAG, COUNT).
> + *
> + * When several actions are combined in a flow rule, they should all have
> + * different types (e.g. dropping a packet twice is not possible).
> + *
> + * Only the last action of a given type is taken into account. PMDs still
> + * perform error checking on the entire list.
> + *
> + * Note that PASSTHRU is the only action able to override a terminating
> + * rule.
> + */
> +enum rte_flow_action_type {
> + /**
> + * [META]
> + *
> + * End marker for action lists. Prevents further processing of
> + * actions, thereby ending the list.
> + *
> + * No associated configuration structure.
> + */
> + RTE_FLOW_ACTION_TYPE_END,
> +
> + /**
> + * [META]
> + *
> + * Used as a placeholder for convenience. It is ignored and simply
> + * discarded by PMDs.
> + *
> + * No associated configuration structure.
> + */
> + RTE_FLOW_ACTION_TYPE_VOID,
> +
> + /**
> + * Leaves packets up for additional processing by subsequent flow
> + * rules. This is the default when a rule does not contain a
> + * terminating action, but can be specified to force a rule to
> + * become non-terminating.
> + *
> + * No associated configuration structure.
> + */
> + RTE_FLOW_ACTION_TYPE_PASSTHRU,
> +
> + /**
> + * [META]
> + *
> + * Attaches a 32 bit value to packets.
> + *
> + * See struct rte_flow_action_mark.
> + */
> + RTE_FLOW_ACTION_TYPE_MARK,
> +
> + /**
> + * [META]
> + *
> + * Flag packets. Similar to MARK but only affects ol_flags.
> + *
> + * Note: a distinctive flag must be defined for it.
> + *
> + * No associated configuration structure.
> + */
> + RTE_FLOW_ACTION_TYPE_FLAG,
> +
> + /**
> + * Assigns packets to a given queue index.
> + *
> + * See struct rte_flow_action_queue.
> + */
> + RTE_FLOW_ACTION_TYPE_QUEUE,
> +
> + /**
> + * Drops packets.
> + *
> + * PASSTHRU overrides this action if both are specified.
> + *
> + * No associated configuration structure.
> + */
> + RTE_FLOW_ACTION_TYPE_DROP,
> +
> + /**
> + * [META]
> + *
> + * Enables counters for this rule.
> + *
> + * These counters can be retrieved and reset through rte_flow_query(),
> + * see struct rte_flow_query_count.
> + *
> + * No associated configuration structure.
> + */
> + RTE_FLOW_ACTION_TYPE_COUNT,
> +
> + /**
> + * Duplicates packets to a given queue index.
> + *
> + * This is normally combined with QUEUE, however when used alone, it
> + * is actually similar to QUEUE + PASSTHRU.
> + *
> + * See struct rte_flow_action_dup.
> + */
> + RTE_FLOW_ACTION_TYPE_DUP,
> +
> + /**
> + * Similar to QUEUE, except RSS is additionally performed on packets
> + * to spread them among several queues according to the provided
> + * parameters.
> + *
> + * See struct rte_flow_action_rss.
> + */
> + RTE_FLOW_ACTION_TYPE_RSS,
> +
> + /**
> + * Redirects packets to the physical function (PF) of the current
> + * device.
> + *
> + * No associated configuration structure.
> + */
> + RTE_FLOW_ACTION_TYPE_PF,
> +
> + /**
> + * Redirects packets to the virtual function (VF) of the current
> + * device with the specified ID.
> + *
> + * See struct rte_flow_action_vf.
> + */
> + RTE_FLOW_ACTION_TYPE_VF,
> +};
> +
> +/**
> + * RTE_FLOW_ACTION_TYPE_MARK
> + *
> + * Attaches a 32 bit value to packets.
> + *
> + * This value is arbitrary and application-defined. For compatibility with
> + * FDIR it is returned in the hash.fdir.hi mbuf field. PKT_RX_FDIR_ID is
> + * also set in ol_flags.
> + */
> +struct rte_flow_action_mark {
> + uint32_t id; /**< 32 bit value to return with packets. */
> +};
> +
> +/**
> + * RTE_FLOW_ACTION_TYPE_QUEUE
> + *
> + * Assign packets to a given queue index.
> + *
> + * Terminating by default.
> + */
> +struct rte_flow_action_queue {
> + uint16_t index; /**< Queue index to use. */
> +};
> +
> +/**
> + * RTE_FLOW_ACTION_TYPE_COUNT (query)
> + *
> + * Query structure to retrieve and reset flow rule counters.
> + */
> +struct rte_flow_query_count {
> + uint32_t reset:1; /**< Reset counters after query [in]. */
> + uint32_t hits_set:1; /**< hits field is set [out]. */
> + uint32_t bytes_set:1; /**< bytes field is set [out]. */
> + uint32_t reserved:29; /**< Reserved, must be zero [in, out]. */
> + uint64_t hits; /**< Number of hits for this rule [out]. */
> + uint64_t bytes; /**< Number of bytes through this rule [out]. */
> +};
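
A short sketch of how this structure pairs with rte_flow_query(), assuming a
rule that was created with a COUNT action and a valid flow handle (printf and
PRIu64 need stdio.h and inttypes.h):

    struct rte_flow_query_count counters = { .reset = 1 }; /* read, then clear */
    struct rte_flow_error error;

    if (rte_flow_query(port_id, flow, RTE_FLOW_ACTION_TYPE_COUNT,
                       &counters, &error) == 0 && counters.hits_set)
        printf("rule matched %" PRIu64 " packets\n", counters.hits);
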
> +
> +/**
> + * RTE_FLOW_ACTION_TYPE_DUP
> + *
> + * Duplicates packets to a given queue index.
> + *
> + * This is normally combined with QUEUE, however when used alone, it is
> + * actually similar to QUEUE + PASSTHRU.
> + *
> + * Non-terminating by default.
> + */
> +struct rte_flow_action_dup {
> + uint16_t index; /**< Queue index to duplicate packets to. */
> +};
> +
> +/**
> + * RTE_FLOW_ACTION_TYPE_RSS
> + *
> + * Similar to QUEUE, except RSS is additionally performed on packets to
> + * spread them among several queues according to the provided parameters.
> + *
> + * Note: RSS hash result is normally stored in the hash.rss mbuf field,
> + * however it conflicts with the MARK action as they share the same
> + * space. When both actions are specified, the RSS hash is discarded and
> + * PKT_RX_RSS_HASH is not set in ol_flags. MARK has priority. The mbuf
> + * structure should eventually evolve to store both.
> + *
> + * Terminating by default.
> + */
> +struct rte_flow_action_rss {
> + const struct rte_eth_rss_conf *rss_conf; /**< RSS parameters. */
> + uint16_t num; /**< Number of entries in queue[]. */
> + uint16_t queue[]; /**< Queue indices to use. */
> +};
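
As with RAW, the trailing queue[] member means this configuration must be
allocated with room for the queue list; a minimal sketch (queue indices are
arbitrary, and passing a NULL rss_conf to request PMD defaults is an
assumption; needs stdlib.h):

    struct rte_flow_action_rss *rss;
    uint16_t i;

    rss = calloc(1, sizeof(*rss) + 4 * sizeof(uint16_t));
    rss->rss_conf = NULL; /* assumed: PMD falls back to its default hash setup */
    rss->num = 4;
    for (i = 0; i < 4; i++)
        rss->queue[i] = i; /* spread across queues 0-3 */
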
> +
> +/**
> + * RTE_FLOW_ACTION_TYPE_VF
> + *
> + * Redirects packets to a virtual function (VF) of the current device.
> + *
> + * Packets matched by a VF pattern item can be redirected to their original
> + * VF ID instead of the specified one. This parameter may not be available
> + * and is not guaranteed to work properly if the VF part is matched by a
> + * prior flow rule or if packets are not addressed to a VF in the first
> + * place.
> + *
> + * Terminating by default.
> + */
> +struct rte_flow_action_vf {
> + uint32_t original:1; /**< Use original VF ID if possible. */
> + uint32_t reserved:31; /**< Reserved, must be zero. */
> + uint32_t id; /**< VF ID to redirect packets to. */
> +};
> +
> +/**
> + * Definition of a single action.
> + *
> + * A list of actions is terminated by a END action.
> + *
> + * For simple actions without a configuration structure, conf remains NULL.
> + */
> +struct rte_flow_action {
> + enum rte_flow_action_type type; /**< Action type. */
> + const void *conf; /**< Pointer to action configuration structure. */
> +};
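
A minimal action-list sketch (mark id and queue index are arbitrary) that
tags matched packets and steers them to a single queue:

    struct rte_flow_action_mark mark = { .id = 0x2a };  /* surfaced in hash.fdir.hi */
    struct rte_flow_action_queue queue = { .index = 3 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END }, /* conf stays NULL */
    };
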
> +
> +/**
> + * Opaque type returned after successfully creating a flow.
> + *
> + * This handle can be used to manage and query the related flow (e.g. to
> + * destroy it or retrieve counters).
> + */
> +struct rte_flow;
> +
> +/**
> + * Verbose error types.
> + *
> + * Most of them provide the type of the object referenced by struct
> + * rte_flow_error.cause.
> + */
> +enum rte_flow_error_type {
> + RTE_FLOW_ERROR_TYPE_NONE, /**< No error. */
> + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, /**< Cause unspecified. */
> + RTE_FLOW_ERROR_TYPE_HANDLE, /**< Flow rule (handle). */
> + RTE_FLOW_ERROR_TYPE_ATTR_GROUP, /**< Group field. */
> + RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, /**< Priority field. */
> + RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, /**< Ingress field. */
> + RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, /**< Egress field. */
> + RTE_FLOW_ERROR_TYPE_ATTR, /**< Attributes structure. */
> + RTE_FLOW_ERROR_TYPE_ITEM_NUM, /**< Pattern length. */
> + RTE_FLOW_ERROR_TYPE_ITEM, /**< Specific pattern item. */
> + RTE_FLOW_ERROR_TYPE_ACTION_NUM, /**< Number of actions. */
> + RTE_FLOW_ERROR_TYPE_ACTION, /**< Specific action. */
> +};
> +
> +/**
> + * Verbose error structure definition.
> + *
> + * This object is normally allocated by applications and set by PMDs, the
> + * message points to a constant string which does not need to be freed by
> + * the application, however its pointer can be considered valid only as long
> + * as its associated DPDK port remains configured. Closing the underlying
> + * device or unloading the PMD invalidates it.
> + *
> + * Both cause and message may be NULL regardless of the error type.
> + */
> +struct rte_flow_error {
> + enum rte_flow_error_type type; /**< Cause field and error types. */
> + const void *cause; /**< Object responsible for the error. */
> + const char *message; /**< Human-readable error message. */
> +};
> +
> +/**
> + * Check whether a flow rule can be created on a given port.
> + *
> + * While this function has no effect on the target device, the flow rule is
> + * validated against its current configuration state and the returned value
> + * should be considered valid by the caller for that state only.
> + *
> + * The returned value is guaranteed to remain valid only as long as no
> + * successful calls to rte_flow_create() or rte_flow_destroy() are made in
> + * the meantime and no device parameter affecting flow rules in any way are
> + * modified, due to possible collisions or resource limitations (although in
> + * such cases EINVAL should not be returned).
> + *
> + * @param port_id
> + * Port identifier of Ethernet device.
> + * @param[in] attr
> + * Flow rule attributes.
> + * @param[in] pattern
> + * Pattern specification (list terminated by the END pattern item).
> + * @param[in] actions
> + * Associated actions (list terminated by the END action).
> + * @param[out] error
> + * Perform verbose error reporting if not NULL. PMDs initialize this
> + * structure in case of error only.
> + *
> + * @return
> + * 0 if flow rule is valid and can be created. A negative errno value
> + * otherwise (rte_errno is also set), the following errors are defined:
> + *
> + * -ENOSYS: underlying device does not support this functionality.
> + *
> + * -EINVAL: unknown or invalid rule specification.
> + *
> + * -ENOTSUP: valid but unsupported rule specification (e.g. partial
> + * bit-masks are unsupported).
> + *
> + * -EEXIST: collision with an existing rule.
> + *
> + * -ENOMEM: not enough resources.
> + *
> + * -EBUSY: action cannot be performed due to busy device resources, may
> + * succeed if the affected queues or even the entire port are in a stopped
> + * state (see rte_eth_dev_rx_queue_stop() and rte_eth_dev_stop()).
> + */
> +int
> +rte_flow_validate(uint8_t port_id,
> + const struct rte_flow_attr *attr,
> + const struct rte_flow_item pattern[],
> + const struct rte_flow_action actions[],
> + struct rte_flow_error *error);
> +
> +/**
> + * Create a flow rule on a given port.
> + *
> + * @param port_id
> + * Port identifier of Ethernet device.
> + * @param[in] attr
> + * Flow rule attributes.
> + * @param[in] pattern
> + * Pattern specification (list terminated by the END pattern item).
> + * @param[in] actions
> + * Associated actions (list terminated by the END action).
> + * @param[out] error
> + * Perform verbose error reporting if not NULL. PMDs initialize this
> + * structure in case of error only.
> + *
> + * @return
> + * A valid handle in case of success, NULL otherwise and rte_errno is set
> + * to the positive version of one of the error codes defined for
> + * rte_flow_validate().
> + */
> +struct rte_flow *
> +rte_flow_create(uint8_t port_id,
> + const struct rte_flow_attr *attr,
> + const struct rte_flow_item pattern[],
> + const struct rte_flow_action actions[],
> + struct rte_flow_error *error);
> +
> +/**
> + * Destroy a flow rule on a given port.
> + *
> + * Failure to destroy a flow rule handle may occur when other flow rules
> + * depend on it, and destroying it would result in an inconsistent state.
> + *
> + * This function is only guaranteed to succeed if handles are destroyed in
> + * reverse order of their creation.
> + *
> + * @param port_id
> + * Port identifier of Ethernet device.
> + * @param flow
> + * Flow rule handle to destroy.
> + * @param[out] error
> + * Perform verbose error reporting if not NULL. PMDs initialize this
> + * structure in case of error only.
> + *
> + * @return
> + * 0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +int
> +rte_flow_destroy(uint8_t port_id,
> + struct rte_flow *flow,
> + struct rte_flow_error *error);
> +
> +/**
> + * Destroy all flow rules associated with a port.
> + *
> + * In the unlikely event of failure, handles are still considered destroyed
> + * and no longer valid but the port must be assumed to be in an inconsistent
> + * state.
> + *
> + * @param port_id
> + * Port identifier of Ethernet device.
> + * @param[out] error
> + * Perform verbose error reporting if not NULL. PMDs initialize this
> + * structure in case of error only.
> + *
> + * @return
> + * 0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +int
> +rte_flow_flush(uint8_t port_id,
> + struct rte_flow_error *error);
> +
> +/**
> + * Query an existing flow rule.
> + *
> + * This function allows retrieving flow-specific data such as counters.
> + * Data is gathered by special actions which must be present in the flow
> + * rule definition.
> + *
> + * \see RTE_FLOW_ACTION_TYPE_COUNT
> + *
> + * @param port_id
> + * Port identifier of Ethernet device.
> + * @param flow
> + * Flow rule handle to query.
> + * @param action
> + * Action type to query.
> + * @param[in, out] data
> + * Pointer to storage for the associated query data type.
> + * @param[out] error
> + * Perform verbose error reporting if not NULL. PMDs initialize this
> + * structure in case of error only.
> + *
> + * @return
> + * 0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +int
> +rte_flow_query(uint8_t port_id,
> + struct rte_flow *flow,
> + enum rte_flow_action_type action,
> + void *data,
> + struct rte_flow_error *error);
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* RTE_FLOW_H_ */
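
Putting the pieces together, a hedged end-to-end sketch; port_id and the
attr/pattern/actions arrays from the earlier sketches are assumed to be in
scope:

    struct rte_flow_error error;
    struct rte_flow *flow = NULL;

    if (rte_flow_validate(port_id, &attr, pattern, actions, &error) == 0)
        flow = rte_flow_create(port_id, &attr, pattern, actions, &error);
    if (flow == NULL)
        printf("flow rule rejected: %s\n",
               error.message ? error.message : rte_strerror(rte_errno));
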
> diff --git a/lib/librte_ether/rte_flow_driver.h b/lib/librte_ether/rte_flow_driver.h
> new file mode 100644
> index 0000000..274562c
> --- /dev/null
> +++ b/lib/librte_ether/rte_flow_driver.h
> @@ -0,0 +1,182 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright 2016 6WIND S.A.
> + * Copyright 2016 Mellanox.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * * Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * * Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in
> + * the documentation and/or other materials provided with the
> + * distribution.
> + * * Neither the name of 6WIND S.A. nor the names of its
> + * contributors may be used to endorse or promote products derived
> + * from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef RTE_FLOW_DRIVER_H_
> +#define RTE_FLOW_DRIVER_H_
> +
> +/**
> + * @file
> + * RTE generic flow API (driver side)
> + *
> + * This file provides implementation helpers for internal use by PMDs, they
> + * are not intended to be exposed to applications and are not subject to ABI
> + * versioning.
> + */
> +
> +#include <stdint.h>
> +
> +#include <rte_errno.h>
> +#include <rte_ethdev.h>
> +#include "rte_flow.h"
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/**
> + * Generic flow operations structure implemented and returned by PMDs.
> + *
> + * To implement this API, PMDs must handle the RTE_ETH_FILTER_GENERIC filter
> + * type in their .filter_ctrl callback function (struct eth_dev_ops) as well
> + * as the RTE_ETH_FILTER_GET filter operation.
> + *
> + * If successful, this operation must result in a pointer to a PMD-specific
> + * struct rte_flow_ops written to the argument address as described below:
> + *
> + * \code
> + *
> + * // PMD filter_ctrl callback
> + *
> + * static const struct rte_flow_ops pmd_flow_ops = { ... };
> + *
> + * switch (filter_type) {
> + * case RTE_ETH_FILTER_GENERIC:
> + * if (filter_op != RTE_ETH_FILTER_GET)
> + * return -EINVAL;
> + * *(const void **)arg = &pmd_flow_ops;
> + * return 0;
> + * }
> + *
> + * \endcode
> + *
> + * See also rte_flow_ops_get().
> + *
> + * These callback functions are not supposed to be used by applications
> + * directly, which must rely on the API defined in rte_flow.h.
> + *
> + * Public-facing wrapper functions perform a few consistency checks so that
> + * unimplemented (i.e. NULL) callbacks simply return -ENOSYS, as set by the
> + * wrappers in rte_flow.c. These callbacks otherwise only differ by their
> + * first argument (with port ID already resolved to a pointer to struct
> + * rte_eth_dev).
> + */
> +struct rte_flow_ops {
> + /** See rte_flow_validate(). */
> + int (*validate)
> + (struct rte_eth_dev *,
> + const struct rte_flow_attr *,
> + const struct rte_flow_item [],
> + const struct rte_flow_action [],
> + struct rte_flow_error *);
> + /** See rte_flow_create(). */
> + struct rte_flow *(*create)
> + (struct rte_eth_dev *,
> + const struct rte_flow_attr *,
> + const struct rte_flow_item [],
> + const struct rte_flow_action [],
> + struct rte_flow_error *);
> + /** See rte_flow_destroy(). */
> + int (*destroy)
> + (struct rte_eth_dev *,
> + struct rte_flow *,
> + struct rte_flow_error *);
> + /** See rte_flow_flush(). */
> + int (*flush)
> + (struct rte_eth_dev *,
> + struct rte_flow_error *);
> + /** See rte_flow_query(). */
> + int (*query)
> + (struct rte_eth_dev *,
> + struct rte_flow *,
> + enum rte_flow_action_type,
> + void *,
> + struct rte_flow_error *);
> +};
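
For reference, a sketch of what a PMD-side ops table could look like; the
pmd_flow_* callback names are hypothetical:

    static const struct rte_flow_ops pmd_flow_ops = {
        .validate = pmd_flow_validate,
        .create = pmd_flow_create,
        .destroy = pmd_flow_destroy,
        .flush = pmd_flow_flush,
        .query = NULL, /* NULL callbacks make the rte_flow.c wrappers fail with ENOSYS */
    };
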
> +
> +/**
> + * Initialize generic flow error structure.
> + *
> + * This function also sets rte_errno to a given value.
> + *
> + * @param[out] error
> + * Pointer to flow error structure (may be NULL).
> + * @param code
> + * Related error code (rte_errno).
> + * @param type
> + * Cause field and error types.
> + * @param cause
> + * Object responsible for the error.
> + * @param message
> + * Human-readable error message.
> + *
> + * @return
> + * Pointer to flow error structure.
> + */
> +static inline struct rte_flow_error *
> +rte_flow_error_set(struct rte_flow_error *error,
> + int code,
> + enum rte_flow_error_type type,
> + const void *cause,
> + const char *message)
> +{
> + if (error) {
> + *error = (struct rte_flow_error){
> + .type = type,
> + .cause = cause,
> + .message = message,
> + };
> + }
> + rte_errno = code;
> + return error;
> +}
> +
> +/**
> + * Get generic flow operations structure from a port.
> + *
> + * @param port_id
> + * Port identifier to query.
> + * @param[out] error
> + * Pointer to flow error structure.
> + *
> + * @return
> + * The flow operations structure associated with port_id, NULL in case of
> + * error, in which case rte_errno is set and the error structure contains
> + * additional details.
> + */
> +const struct rte_flow_ops *
> +rte_flow_ops_get(uint8_t port_id, struct rte_flow_error *error);
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* RTE_FLOW_DRIVER_H_ */
> --
> 2.1.4
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH] cryptodev: remove crypto device driver name
@ 2017-05-22 13:54 3% Slawomir Mrozowicz
0 siblings, 0 replies; 200+ results
From: Slawomir Mrozowicz @ 2017-05-22 13:54 UTC (permalink / raw)
To: declan.doherty; +Cc: dev, Slawomir Mrozowicz
Remove the crypto device driver name string definitions from librte_cryptodev,
which avoids changing the library every time a new crypto driver is added.
Each driver name is now defined internally in its own PMD.
Applications can still select a crypto device driver by passing
the driver name string as a command-line option.
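
For context, these per-PMD name macros are consumed through RTE_STR() (from
rte_common.h) wherever a string is required, as in the log macros in the diff
below; a minimal sketch:

#define CRYPTODEV_NAME_AESNI_MB_PMD crypto_aesni_mb

const char *drv_name = RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD);
/* drv_name is "crypto_aesni_mb"; the same string an application would
 * pass on the command line, e.g. via --vdev, to select this PMD. */
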
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
---
doc/guides/rel_notes/release_17_08.rst | 2 ++
drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h | 3 +++
drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h | 3 +++
drivers/crypto/armv8/rte_armv8_pmd_private.h | 3 +++
drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 3 +++
drivers/crypto/kasumi/rte_kasumi_pmd_private.h | 3 +++
drivers/crypto/null/null_crypto_pmd_private.h | 3 +++
drivers/crypto/openssl/rte_openssl_pmd_private.h | 2 ++
drivers/crypto/qat/qat_crypto.h | 3 +++
drivers/crypto/scheduler/scheduler_pmd_private.h | 3 +++
drivers/crypto/snow3g/rte_snow3g_pmd_private.h | 3 +++
drivers/crypto/zuc/rte_zuc_pmd_private.h | 3 +++
lib/librte_cryptodev/rte_cryptodev.h | 23 ----------------------
test/test/test_cryptodev.h | 12 +++++++++++
14 files changed, 46 insertions(+), 23 deletions(-)
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index 347828d..542076f 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -123,6 +123,8 @@ API Changes
* Changes device type identification to be based on a unique
driver id uint8_t type replacing the previous device type enumeration.
+* Moved crypto device driver name definitions to the particular PMDs.
+ These names are no longer public.
ABI Changes
-----------
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
index 0496b44..8c43eae 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_private.h
@@ -35,6 +35,9 @@
#include "aesni_gcm_ops.h"
+#define CRYPTODEV_NAME_AESNI_GCM_PMD crypto_aesni_gcm
+/**< AES-NI GCM PMD device name */
+
#define GCM_LOG_ERR(fmt, args...) \
RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD), \
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
index 0d82699..5b61a6c 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_private.h
@@ -35,6 +35,9 @@
#include "aesni_mb_ops.h"
+#define CRYPTODEV_NAME_AESNI_MB_PMD crypto_aesni_mb
+/**< AES-NI Multi buffer PMD device name */
+
#define MB_LOG_ERR(fmt, args...) \
RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD), \
diff --git a/drivers/crypto/armv8/rte_armv8_pmd_private.h b/drivers/crypto/armv8/rte_armv8_pmd_private.h
index b75107f..864e7ad 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd_private.h
+++ b/drivers/crypto/armv8/rte_armv8_pmd_private.h
@@ -33,6 +33,9 @@
#ifndef _RTE_ARMV8_PMD_PRIVATE_H_
#define _RTE_ARMV8_PMD_PRIVATE_H_
+#define CRYPTODEV_NAME_ARMV8_PMD crypto_armv8
+/**< ARMv8 Crypto PMD device name */
+
#define ARMV8_CRYPTO_LOG_ERR(fmt, args...) \
RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
RTE_STR(CRYPTODEV_NAME_ARMV8_CRYPTO_PMD), \
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index f5c6169..84d18e2 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -34,6 +34,9 @@
#ifndef _RTE_DPAA2_SEC_PMD_PRIVATE_H_
#define _RTE_DPAA2_SEC_PMD_PRIVATE_H_
+#define CRYPTODEV_NAME_DPAA2_SEC_PMD crypto_dpaa2_sec_pmd
+/**< NXP DPAA2 - SEC PMD device name */
+
#define MAX_QUEUES 64
#define MAX_DESC_SIZE 64
/** private data structure for each DPAA2_SEC device */
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_private.h b/drivers/crypto/kasumi/rte_kasumi_pmd_private.h
index fb586ca..cd018ae 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd_private.h
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd_private.h
@@ -35,6 +35,9 @@
#include <sso_kasumi.h>
+#define CRYPTODEV_NAME_KASUMI_PMD crypto_kasumi
+/**< KASUMI PMD device name */
+
#define KASUMI_LOG_ERR(fmt, args...) \
RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
RTE_STR(CRYPTODEV_NAME_KASUMI_PMD), \
diff --git a/drivers/crypto/null/null_crypto_pmd_private.h b/drivers/crypto/null/null_crypto_pmd_private.h
index acebc97..4d1c3c9 100644
--- a/drivers/crypto/null/null_crypto_pmd_private.h
+++ b/drivers/crypto/null/null_crypto_pmd_private.h
@@ -35,6 +35,9 @@
#include "rte_config.h"
+#define CRYPTODEV_NAME_NULL_PMD crypto_null
+/**< Null crypto PMD device name */
+
#define NULL_CRYPTO_LOG_ERR(fmt, args...) \
RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
RTE_STR(CRYPTODEV_NAME_NULL_PMD), \
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_private.h b/drivers/crypto/openssl/rte_openssl_pmd_private.h
index 4d820c5..67ae55e 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_private.h
+++ b/drivers/crypto/openssl/rte_openssl_pmd_private.h
@@ -36,6 +36,8 @@
#include <openssl/evp.h>
#include <openssl/des.h>
+#define CRYPTODEV_NAME_OPENSSL_PMD crypto_openssl
+/**< Open SSL Crypto PMD device name */
#define OPENSSL_LOG_ERR(fmt, args...) \
RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
index efcf607..ed27e3b 100644
--- a/drivers/crypto/qat/qat_crypto.h
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -39,6 +39,9 @@
#include "qat_crypto_capabilities.h"
+#define CRYPTODEV_NAME_QAT_SYM_PMD crypto_qat
+/**< Intel QAT Symmetric Crypto PMD device name */
+
/*
* This macro rounds up a number to a be a multiple of
* the alignment when the alignment is a power of 2
diff --git a/drivers/crypto/scheduler/scheduler_pmd_private.h b/drivers/crypto/scheduler/scheduler_pmd_private.h
index 615a207..c41bfb0 100644
--- a/drivers/crypto/scheduler/scheduler_pmd_private.h
+++ b/drivers/crypto/scheduler/scheduler_pmd_private.h
@@ -36,6 +36,9 @@
#include "rte_cryptodev_scheduler.h"
+#define CRYPTODEV_NAME_SCHEDULER_PMD crypto_scheduler
+/**< Scheduler Crypto PMD device name */
+
#define PER_SLAVE_BUFF_SIZE (256)
#define CS_LOG_ERR(fmt, args...) \
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd_private.h b/drivers/crypto/snow3g/rte_snow3g_pmd_private.h
index 03973b9..30cf3f1 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd_private.h
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd_private.h
@@ -35,6 +35,9 @@
#include <sso_snow3g.h>
+#define CRYPTODEV_NAME_SNOW3G_PMD crypto_snow3g
+/**< SNOW 3G PMD device name */
+
#define SNOW3G_LOG_ERR(fmt, args...) \
RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD), \
diff --git a/drivers/crypto/zuc/rte_zuc_pmd_private.h b/drivers/crypto/zuc/rte_zuc_pmd_private.h
index 030f120..92bcfc0 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd_private.h
+++ b/drivers/crypto/zuc/rte_zuc_pmd_private.h
@@ -35,6 +35,9 @@
#include <sso_zuc.h>
+#define CRYPTODEV_NAME_ZUC_PMD crypto_zuc
+/**< ZUC PMD device name */
+
#define ZUC_LOG_ERR(fmt, args...) \
RTE_LOG(ERR, CRYPTODEV, "[%s] %s() line %u: " fmt "\n", \
RTE_STR(CRYPTODEV_NAME_ZUC_PMD), \
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index eaa7d90..27a3a07 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -50,29 +50,6 @@ extern "C" {
#include "rte_dev.h"
#include <rte_common.h>
-#define CRYPTODEV_NAME_NULL_PMD crypto_null
-/**< Null crypto PMD device name */
-#define CRYPTODEV_NAME_AESNI_MB_PMD crypto_aesni_mb
-/**< AES-NI Multi buffer PMD device name */
-#define CRYPTODEV_NAME_AESNI_GCM_PMD crypto_aesni_gcm
-/**< AES-NI GCM PMD device name */
-#define CRYPTODEV_NAME_OPENSSL_PMD crypto_openssl
-/**< Open SSL Crypto PMD device name */
-#define CRYPTODEV_NAME_QAT_SYM_PMD crypto_qat
-/**< Intel QAT Symmetric Crypto PMD device name */
-#define CRYPTODEV_NAME_SNOW3G_PMD crypto_snow3g
-/**< SNOW 3G PMD device name */
-#define CRYPTODEV_NAME_KASUMI_PMD crypto_kasumi
-/**< KASUMI PMD device name */
-#define CRYPTODEV_NAME_ZUC_PMD crypto_zuc
-/**< KASUMI PMD device name */
-#define CRYPTODEV_NAME_ARMV8_PMD crypto_armv8
-/**< ARMv8 Crypto PMD device name */
-#define CRYPTODEV_NAME_SCHEDULER_PMD crypto_scheduler
-/**< Scheduler Crypto PMD device name */
-#define CRYPTODEV_NAME_DPAA2_SEC_PMD cryptodev_dpaa2_sec_pmd
-/**< NXP DPAA2 - SEC PMD device name */
-
extern const char **rte_cyptodev_names;
/* Logging Macros */
diff --git a/test/test/test_cryptodev.h b/test/test/test_cryptodev.h
index 67354a9..8d803fe 100644
--- a/test/test/test_cryptodev.h
+++ b/test/test/test_cryptodev.h
@@ -71,6 +71,18 @@
#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA384 (24)
#define TRUNCATED_DIGEST_BYTE_LENGTH_SHA512 (32)
+#define CRYPTODEV_NAME_NULL_PMD crypto_null
+#define CRYPTODEV_NAME_AESNI_MB_PMD crypto_aesni_mb
+#define CRYPTODEV_NAME_AESNI_GCM_PMD crypto_aesni_gcm
+#define CRYPTODEV_NAME_OPENSSL_PMD crypto_openssl
+#define CRYPTODEV_NAME_QAT_SYM_PMD crypto_qat
+#define CRYPTODEV_NAME_SNOW3G_PMD crypto_snow3g
+#define CRYPTODEV_NAME_KASUMI_PMD crypto_kasumi
+#define CRYPTODEV_NAME_ZUC_PMD crypto_zuc
+#define CRYPTODEV_NAME_ARMV8_PMD crypto_armv8
+#define CRYPTODEV_NAME_DPAA2_SEC_PMD crypto_dpaa2_sec_pmd
+#define CRYPTODEV_NAME_SCHEDULER_PMD crypto_scheduler
+
/**
* Write (spread) data from buffer to mbuf data
*
--
2.5.0
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v2] cryptodev: remove crypto device type enumeration
@ 2017-05-22 11:10 3% ` Slawomir Mrozowicz
2017-06-30 14:10 2% ` [dpdk-dev] [PATCH v3] " Pablo de Lara
0 siblings, 1 reply; 200+ results
From: Slawomir Mrozowicz @ 2017-05-22 11:10 UTC (permalink / raw)
To: declan.doherty; +Cc: dev, Slawomir Mrozowicz
Changes device type identification to be based on a unique
driver id replacing the current device type enumeration, which needed
library changes every time a new crypto driver was added.
The driver id is assigned dynamically during driver registration using
the new macro RTE_PMD_REGISTER_CRYPTO_DRIVER, which assigns a unique
uint8_t identifier to that driver. New APIs are also introduced
to allow retrieval of the driver id using the driver name.
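
To make the lookup concrete, a hedged sketch of resolving and comparing
driver ids under this scheme; the exact signature of
rte_cryptodev_driver_id_get() and the negative return on an unknown name are
assumptions based on this patch:

int drv_id = rte_cryptodev_driver_id_get("crypto_aesni_mb");

if (drv_id < 0) {
	/* assumed: negative return when no such driver is registered */
} else if (sess->driver_id == (uint8_t)drv_id) {
	/* sess is a hypothetical rte_cryptodev_sym_session pointer; the
	 * session was created by a device bound to this driver. */
}
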
Signed-off-by: Slawomir Mrozowicz <slawomirx.mrozowicz@intel.com>
---
v2 changes:
- add release notes information
- reduce some call of rte_cryptodev_driver_id_get
- remove clang compiler error
- add internal mark for function rte_cryptodev_allocate_driver_id
---
doc/guides/prog_guide/cryptodev_lib.rst | 5 +-
doc/guides/rel_notes/release_17_08.rst | 30 +++-
drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 10 +-
drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c | 2 +-
drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c | 10 +-
drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c | 2 +-
drivers/crypto/armv8/rte_armv8_pmd.c | 10 +-
drivers/crypto/armv8/rte_armv8_pmd_ops.c | 2 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 8 +-
drivers/crypto/kasumi/rte_kasumi_pmd.c | 10 +-
drivers/crypto/kasumi/rte_kasumi_pmd_ops.c | 2 +-
drivers/crypto/null/null_crypto_pmd.c | 10 +-
drivers/crypto/null/null_crypto_pmd_ops.c | 2 +-
drivers/crypto/openssl/rte_openssl_pmd.c | 10 +-
drivers/crypto/openssl/rte_openssl_pmd_ops.c | 2 +-
drivers/crypto/qat/qat_crypto.c | 5 +-
drivers/crypto/qat/qat_crypto.h | 2 +
drivers/crypto/qat/rte_qat_cryptodev.c | 7 +-
drivers/crypto/scheduler/rte_cryptodev_scheduler.c | 22 +--
drivers/crypto/scheduler/scheduler_pmd.c | 6 +-
drivers/crypto/scheduler/scheduler_pmd_ops.c | 2 +-
drivers/crypto/scheduler/scheduler_pmd_private.h | 4 +-
drivers/crypto/snow3g/rte_snow3g_pmd.c | 10 +-
drivers/crypto/snow3g/rte_snow3g_pmd_ops.c | 2 +-
drivers/crypto/zuc/rte_zuc_pmd.c | 10 +-
drivers/crypto/zuc/rte_zuc_pmd_ops.c | 2 +-
lib/librte_cryptodev/rte_cryptodev.c | 41 ++++-
lib/librte_cryptodev/rte_cryptodev.h | 69 +++++---
lib/librte_cryptodev/rte_cryptodev_pmd.h | 2 +-
lib/librte_cryptodev/rte_cryptodev_version.map | 11 +-
test/test/test_cryptodev.c | 195 ++++++++++++++-------
test/test/test_cryptodev_blockcipher.c | 68 ++++---
test/test/test_cryptodev_blockcipher.h | 2 +-
test/test/test_cryptodev_perf.c | 124 ++++++++-----
34 files changed, 477 insertions(+), 222 deletions(-)
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index 4f98f28..4644802 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -291,7 +291,7 @@ relevant information for the device.
struct rte_cryptodev_info {
const char *driver_name;
- enum rte_cryptodev_type dev_type;
+ uint8_t driver_id;
struct rte_pci_device *pci_dev;
uint64_t feature_flags;
@@ -451,7 +451,8 @@ functions for the configuration of the session parameters and freeing function
so the PMD can managed the memory on destruction of a session.
**Note**: Sessions created on a particular device can only be used on Crypto
-devices of the same type, and if you try to use a session on a device different
+devices of the same type - i.e. those sharing the same driver id -
+and if you try to use a session on a device different
to that on which it was created then the Crypto operation will fail.
``rte_cryptodev_sym_session_create()`` is used to create a symmetric session on
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
index 74aae10..347828d 100644
--- a/doc/guides/rel_notes/release_17_08.rst
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -42,6 +42,15 @@ New Features
=========================================================
+* **Added helper functions for crypto device driver identification.**
+
+ Added functions to:
+ * Provide a unique crypto device driver identifier.
+ * Provide the crypto device driver name.
+
+ Added a macro to register a crypto driver.
+
+
Resolved Issues
---------------
@@ -111,6 +120,10 @@ API Changes
=========================================================
+* Changed device type identification to be based on a unique
+ uint8_t driver id, replacing the previous device type enumeration.
+
+
ABI Changes
-----------
@@ -125,6 +138,21 @@ ABI Changes
=========================================================
+Removed Items
+-------------
+
+.. This section should contain removed items in this release. Sample format:
+
+ * Add a short 1-2 sentence description of the removed item in the past
+ tense.
+
+ This section is a comment. do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =========================================================
+
+
+* The crypto device type enumeration has been removed from librte_cryptodev.
+
Shared Library Versions
-----------------------
@@ -195,4 +223,4 @@ Tested Platforms
This section is a comment. do not overwrite or remove it.
Also, make sure to start the actual text at the margin.
- =========================================================
+ =========================================================
\ No newline at end of file
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 101ef98..b2a0606 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -42,6 +42,8 @@
#include "aesni_gcm_pmd_private.h"
+static uint8_t cryptodev_driver_id;
+
/** GCM encode functions pointer table */
static const struct aesni_gcm_ops aesni_gcm_enc[] = {
[AESNI_GCM_KEY_128] = {
@@ -143,8 +145,8 @@ aesni_gcm_get_session(struct aesni_gcm_qp *qp, struct rte_crypto_sym_op *op)
struct aesni_gcm_session *sess = NULL;
if (op->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->session->dev_type
- != RTE_CRYPTODEV_AESNI_GCM_PMD))
+ if (unlikely(op->session->driver_id !=
+ cryptodev_driver_id))
return sess;
sess = (struct aesni_gcm_session *)op->session->_private;
@@ -456,7 +458,7 @@ aesni_gcm_create(const char *name,
goto init_error;
}
- dev->dev_type = RTE_CRYPTODEV_AESNI_GCM_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_aesni_gcm_pmd_ops;
/* register rx/tx burst functions for data path */
@@ -539,3 +541,5 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_AESNI_GCM_PMD,
"max_nb_queue_pairs=<int> "
"max_nb_sessions=<int> "
"socket_id=<int>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(CRYPTODEV_NAME_AESNI_GCM_PMD,
+ cryptodev_driver_id);
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
index 1fc047b..2aa0266 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd_ops.c
@@ -181,7 +181,7 @@ aesni_gcm_pmd_info_get(struct rte_cryptodev *dev,
struct aesni_gcm_private *internals = dev->data->dev_private;
if (dev_info != NULL) {
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->feature_flags = dev->feature_flags;
dev_info->capabilities = aesni_gcm_pmd_capabilities;
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index 45b25c9..9a701ce 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -40,6 +40,8 @@
#include "rte_aesni_mb_pmd_private.h"
+static uint8_t cryptodev_driver_id;
+
typedef void (*hash_one_block_t)(const void *data, void *digest);
typedef void (*aes_keyexp_t)(const void *key, void *enc_exp_keys, void *dec_exp_keys);
@@ -345,8 +347,8 @@ get_session(struct aesni_mb_qp *qp, struct rte_crypto_op *op)
struct aesni_mb_session *sess = NULL;
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->sym->session->dev_type !=
- RTE_CRYPTODEV_AESNI_MB_PMD)) {
+ if (unlikely(op->sym->session->driver_id !=
+ cryptodev_driver_id)) {
return NULL;
}
@@ -705,7 +707,7 @@ cryptodev_aesni_mb_create(const char *name,
goto init_error;
}
- dev->dev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_aesni_mb_pmd_ops;
/* register rx/tx burst functions for data path */
@@ -806,3 +808,5 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_AESNI_MB_PMD,
"max_nb_queue_pairs=<int> "
"max_nb_sessions=<int> "
"socket_id=<int>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(CRYPTODEV_NAME_AESNI_MB_PMD,
+ cryptodev_driver_id);
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
index d1bc28e..3a2683b 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd_ops.c
@@ -321,7 +321,7 @@ aesni_mb_pmd_info_get(struct rte_cryptodev *dev,
struct aesni_mb_private *internals = dev->data->dev_private;
if (dev_info != NULL) {
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->feature_flags = dev->feature_flags;
dev_info->capabilities = aesni_mb_pmd_capabilities;
dev_info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
diff --git a/drivers/crypto/armv8/rte_armv8_pmd.c b/drivers/crypto/armv8/rte_armv8_pmd.c
index 3d603a5..17dde0b 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd.c
@@ -44,6 +44,8 @@
#include "rte_armv8_pmd_private.h"
+static uint8_t cryptodev_driver_id;
+
static int cryptodev_armv8_crypto_uninit(struct rte_vdev_device *vdev);
/**
@@ -547,8 +549,8 @@ get_session(struct armv8_crypto_qp *qp, struct rte_crypto_op *op)
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
/* get existing session */
if (likely(op->sym->session != NULL &&
- op->sym->session->dev_type ==
- RTE_CRYPTODEV_ARMV8_PMD)) {
+ op->sym->session->driver_id ==
+ cryptodev_driver_id)) {
sess = (struct armv8_crypto_session *)
op->sym->session->_private;
}
@@ -814,7 +816,7 @@ cryptodev_armv8_crypto_create(const char *name,
goto init_error;
}
- dev->dev_type = RTE_CRYPTODEV_ARMV8_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_armv8_crypto_pmd_ops;
/* register rx/tx burst functions for data path */
@@ -904,3 +906,5 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_ARMV8_PMD,
"max_nb_queue_pairs=<int> "
"max_nb_sessions=<int> "
"socket_id=<int>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(CRYPTODEV_NAME_ARMV8_PMD,
+ cryptodev_driver_id);
diff --git a/drivers/crypto/armv8/rte_armv8_pmd_ops.c b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
index 4d9ccbf..2911417 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd_ops.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd_ops.c
@@ -178,7 +178,7 @@ armv8_crypto_pmd_info_get(struct rte_cryptodev *dev,
struct armv8_crypto_private *internals = dev->data->dev_private;
if (dev_info != NULL) {
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->feature_flags = dev->feature_flags;
dev_info->capabilities = armv8_crypto_pmd_capabilities;
dev_info->max_nb_queue_pairs = internals->max_nb_qpairs;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 4e01fe8..5bb9041 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -73,6 +73,8 @@
#define AES_CBC_IV_LEN 16
enum rta_sec_era rta_sec_era = RTA_SEC_ERA_8;
+static uint8_t cryptodev_driver_id;
+
static inline int
build_authenc_fd(dpaa2_sec_session *sess,
struct rte_crypto_op *op,
@@ -1383,7 +1385,7 @@ dpaa2_sec_dev_infos_get(struct rte_cryptodev *dev,
info->feature_flags = dev->feature_flags;
info->capabilities = dpaa2_sec_capabilities;
info->sym.max_nb_sessions = internals->max_nb_sessions;
- info->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+ info->driver_id = cryptodev_driver_id;
}
}
@@ -1509,7 +1511,7 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
}
hw_id = dpaa2_dev->object_id;
- cryptodev->dev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+ cryptodev->driver_id = cryptodev_driver_id;
cryptodev->dev_ops = &crypto_ops;
cryptodev->enqueue_burst = dpaa2_sec_enqueue_burst;
@@ -1654,3 +1656,5 @@ static struct rte_dpaa2_driver rte_dpaa2_sec_driver = {
};
RTE_PMD_REGISTER_DPAA2(dpaa2_sec_pmd, rte_dpaa2_sec_driver);
+RTE_PMD_REGISTER_CRYPTO_DRIVER(CRYPTODEV_NAME_DPAA2_SEC_PMD,
+ cryptodev_driver_id);
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd.c b/drivers/crypto/kasumi/rte_kasumi_pmd.c
index 9da9e89..aa7eab5 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd.c
@@ -47,6 +47,8 @@
#define KASUMI_MAX_BURST 4
#define BYTE_LEN 8
+static uint8_t cryptodev_driver_id;
+
/** Get xform chain order. */
static enum kasumi_operation
kasumi_get_mode(const struct rte_crypto_sym_xform *xform)
@@ -143,8 +145,8 @@ kasumi_get_session(struct kasumi_qp *qp, struct rte_crypto_op *op)
struct kasumi_session *sess;
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->sym->session->dev_type !=
- RTE_CRYPTODEV_KASUMI_PMD))
+ if (unlikely(op->sym->session->driver_id !=
+ cryptodev_driver_id))
return NULL;
sess = (struct kasumi_session *)op->sym->session->_private;
@@ -580,7 +582,7 @@ cryptodev_kasumi_create(const char *name,
goto init_error;
}
- dev->dev_type = RTE_CRYPTODEV_KASUMI_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_kasumi_pmd_ops;
/* Register RX/TX burst functions for data path. */
@@ -664,3 +666,5 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_KASUMI_PMD,
"max_nb_queue_pairs=<int> "
"max_nb_sessions=<int> "
"socket_id=<int>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(CRYPTODEV_NAME_KASUMI_PMD,
+ cryptodev_driver_id);
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
index 62ebdbd..343c9b3 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd_ops.c
@@ -156,7 +156,7 @@ kasumi_pmd_info_get(struct rte_cryptodev *dev,
struct kasumi_private *internals = dev->data->dev_private;
if (dev_info != NULL) {
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
dev_info->sym.max_nb_sessions = internals->max_nb_sessions;
dev_info->feature_flags = dev->feature_flags;
diff --git a/drivers/crypto/null/null_crypto_pmd.c b/drivers/crypto/null/null_crypto_pmd.c
index 023450a..6788865 100644
--- a/drivers/crypto/null/null_crypto_pmd.c
+++ b/drivers/crypto/null/null_crypto_pmd.c
@@ -38,6 +38,8 @@
#include "null_crypto_pmd_private.h"
+static uint8_t cryptodev_driver_id;
+
/** verify and set session parameters */
int
null_crypto_set_session_parameters(
@@ -94,8 +96,8 @@ get_session(struct null_crypto_qp *qp, struct rte_crypto_sym_op *op)
struct null_crypto_session *sess;
if (op->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->session == NULL ||
- op->session->dev_type != RTE_CRYPTODEV_NULL_PMD))
+ if (unlikely(op->session == NULL || op->session->driver_id !=
+ cryptodev_driver_id))
return NULL;
sess = (struct null_crypto_session *)op->session->_private;
@@ -183,7 +185,7 @@ cryptodev_null_create(const char *name,
goto init_error;
}
- dev->dev_type = RTE_CRYPTODEV_NULL_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = null_crypto_pmd_ops;
/* register rx/tx burst functions for data path */
@@ -268,3 +270,5 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_NULL_PMD,
"max_nb_queue_pairs=<int> "
"max_nb_sessions=<int> "
"socket_id=<int>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(CRYPTODEV_NAME_NULL_PMD,
+ cryptodev_driver_id);
diff --git a/drivers/crypto/null/null_crypto_pmd_ops.c b/drivers/crypto/null/null_crypto_pmd_ops.c
index 12c946c..a7c891e 100644
--- a/drivers/crypto/null/null_crypto_pmd_ops.c
+++ b/drivers/crypto/null/null_crypto_pmd_ops.c
@@ -151,7 +151,7 @@ null_crypto_pmd_info_get(struct rte_cryptodev *dev,
struct null_crypto_private *internals = dev->data->dev_private;
if (dev_info != NULL) {
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->max_nb_queue_pairs = internals->max_nb_qpairs;
dev_info->sym.max_nb_sessions = internals->max_nb_sessions;
dev_info->feature_flags = dev->feature_flags;
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index f0c5ca3..eb5dda3 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -44,6 +44,8 @@
#define DES_BLOCK_SIZE 8
+static uint8_t cryptodev_driver_id;
+
static int cryptodev_openssl_remove(struct rte_vdev_device *vdev);
/*----------------------------------------------------------------------------*/
@@ -448,8 +450,8 @@ get_session(struct openssl_qp *qp, struct rte_crypto_op *op)
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
/* get existing session */
if (likely(op->sym->session != NULL &&
- op->sym->session->dev_type ==
- RTE_CRYPTODEV_OPENSSL_PMD))
+ op->sym->session->driver_id ==
+ cryptodev_driver_id))
sess = (struct openssl_session *)
op->sym->session->_private;
} else {
@@ -1283,7 +1285,7 @@ cryptodev_openssl_create(const char *name,
goto init_error;
}
- dev->dev_type = RTE_CRYPTODEV_OPENSSL_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_openssl_pmd_ops;
/* register rx/tx burst functions for data path */
@@ -1372,3 +1374,5 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_OPENSSL_PMD,
"max_nb_queue_pairs=<int> "
"max_nb_sessions=<int> "
"socket_id=<int>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(CRYPTODEV_NAME_OPENSSL_PMD,
+ cryptodev_driver_id);
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index 22a6873..f65de53 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -536,7 +536,7 @@ openssl_pmd_info_get(struct rte_cryptodev *dev,
struct openssl_private *internals = dev->data->dev_private;
if (dev_info != NULL) {
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->feature_flags = dev->feature_flags;
dev_info->capabilities = openssl_pmd_capabilities;
dev_info->max_nb_queue_pairs = internals->max_nb_qpairs;
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
index 386aa45..9dc3150 100644
--- a/drivers/crypto/qat/qat_crypto.c
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -913,7 +913,8 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
return -EINVAL;
}
- if (unlikely(op->sym->session->dev_type != RTE_CRYPTODEV_QAT_SYM_PMD)) {
+ if (unlikely(op->sym->session->driver_id !=
+ cryptodev_qat_driver_id)) {
PMD_DRV_LOG(ERR, "Session was not created for this device");
return -EINVAL;
}
@@ -1253,7 +1254,7 @@ void qat_dev_info_get(__rte_unused struct rte_cryptodev *dev,
info->feature_flags = dev->feature_flags;
info->capabilities = internals->qat_dev_capabilities;
info->sym.max_nb_sessions = internals->max_nb_sessions;
- info->dev_type = RTE_CRYPTODEV_QAT_SYM_PMD;
+ info->driver_id = cryptodev_qat_driver_id;
}
}
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
index b740d6b..efcf607 100644
--- a/drivers/crypto/qat/qat_crypto.h
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -85,6 +85,8 @@ struct qat_pmd_private {
const struct rte_cryptodev_capabilities *qat_dev_capabilities;
};
+extern uint8_t cryptodev_qat_driver_id;
+
int qat_dev_config(struct rte_cryptodev *dev,
struct rte_cryptodev_config *config);
int qat_dev_start(struct rte_cryptodev *dev);
diff --git a/drivers/crypto/qat/rte_qat_cryptodev.c b/drivers/crypto/qat/rte_qat_cryptodev.c
index 1bdd30d..6bfd2bb 100644
--- a/drivers/crypto/qat/rte_qat_cryptodev.c
+++ b/drivers/crypto/qat/rte_qat_cryptodev.c
@@ -39,6 +39,8 @@
#include "qat_crypto.h"
#include "qat_logs.h"
+uint8_t cryptodev_qat_driver_id;
+
static const struct rte_cryptodev_capabilities qat_cpm16_capabilities[] = {
QAT_BASE_CPM16_SYM_CAPABILITIES,
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
@@ -106,7 +108,7 @@ crypto_qat_dev_init(__attribute__((unused)) struct rte_cryptodev_driver *crypto_
RTE_DEV_TO_PCI(cryptodev->device)->addr.devid,
RTE_DEV_TO_PCI(cryptodev->device)->addr.function);
- cryptodev->dev_type = RTE_CRYPTODEV_QAT_SYM_PMD;
+ cryptodev->driver_id = cryptodev_qat_driver_id;
cryptodev->dev_ops = &crypto_qat_ops;
cryptodev->enqueue_burst = qat_pmd_enqueue_op_burst;
@@ -160,4 +162,5 @@ static struct rte_cryptodev_driver rte_qat_pmd = {
RTE_PMD_REGISTER_PCI(CRYPTODEV_NAME_QAT_SYM_PMD, rte_qat_pmd.pci_drv);
RTE_PMD_REGISTER_PCI_TABLE(CRYPTODEV_NAME_QAT_SYM_PMD, pci_id_qat_map);
-
+RTE_PMD_REGISTER_CRYPTO_DRIVER(CRYPTODEV_NAME_QAT_SYM_PMD,
+ cryptodev_qat_driver_id);
diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
index 319dcf0..c1b43c7 100644
--- a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
+++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
@@ -198,7 +198,7 @@ rte_cryptodev_scheduler_slave_attach(uint8_t scheduler_id, uint8_t slave_id)
return -ENOTSUP;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_scheduler_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
@@ -226,12 +226,12 @@ rte_cryptodev_scheduler_slave_attach(uint8_t scheduler_id, uint8_t slave_id)
rte_cryptodev_info_get(slave_id, &dev_info);
slave->dev_id = slave_id;
- slave->dev_type = dev_info.dev_type;
+ slave->driver_id = dev_info.driver_id;
sched_ctx->nb_slaves++;
if (update_scheduler_capability(sched_ctx) < 0) {
slave->dev_id = 0;
- slave->dev_type = 0;
+ slave->driver_id = 0;
sched_ctx->nb_slaves--;
CS_LOG_ERR("capabilities update failed");
@@ -257,7 +257,7 @@ rte_cryptodev_scheduler_slave_detach(uint8_t scheduler_id, uint8_t slave_id)
return -ENOTSUP;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_scheduler_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
@@ -314,7 +314,7 @@ rte_cryptodev_scheduler_mode_set(uint8_t scheduler_id,
return -ENOTSUP;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_scheduler_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
@@ -377,7 +377,7 @@ rte_cryptodev_scheduler_mode_get(uint8_t scheduler_id)
return -ENOTSUP;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_scheduler_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
@@ -405,7 +405,7 @@ rte_cryptodev_scheduler_ordering_set(uint8_t scheduler_id,
return -ENOTSUP;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_scheduler_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
@@ -433,7 +433,7 @@ rte_cryptodev_scheduler_ordering_get(uint8_t scheduler_id)
return -ENOTSUP;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_scheduler_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
@@ -455,7 +455,7 @@ rte_cryptodev_scheduler_load_user_scheduler(uint8_t scheduler_id,
return -ENOTSUP;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_scheduler_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
@@ -512,7 +512,7 @@ rte_cryptodev_scheduler_slaves_get(uint8_t scheduler_id, uint8_t *slaves)
return -ENOTSUP;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_scheduler_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
@@ -580,7 +580,7 @@ rte_cryptodev_scheduler_option_get(uint8_t scheduler_id,
return -EINVAL;
}
- if (dev->dev_type != RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (dev->driver_id != cryptodev_scheduler_driver_id) {
CS_LOG_ERR("Operation not supported");
return -ENOTSUP;
}
diff --git a/drivers/crypto/scheduler/scheduler_pmd.c b/drivers/crypto/scheduler/scheduler_pmd.c
index 0b63c20..7168491 100644
--- a/drivers/crypto/scheduler/scheduler_pmd.c
+++ b/drivers/crypto/scheduler/scheduler_pmd.c
@@ -41,6 +41,8 @@
#include "rte_cryptodev_scheduler.h"
#include "scheduler_pmd_private.h"
+uint8_t cryptodev_scheduler_driver_id;
+
struct scheduler_init_params {
struct rte_crypto_vdev_init_params def_p;
uint32_t nb_slaves;
@@ -110,7 +112,7 @@ cryptodev_scheduler_create(const char *name,
return -EFAULT;
}
- dev->dev_type = RTE_CRYPTODEV_SCHEDULER_PMD;
+ dev->driver_id = cryptodev_scheduler_driver_id;
dev->dev_ops = rte_crypto_scheduler_pmd_ops;
sched_ctx = dev->data->dev_private;
@@ -454,3 +456,5 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_SCHEDULER_PMD,
"max_nb_sessions=<int> "
"socket_id=<int> "
"slave=<name>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(CRYPTODEV_NAME_SCHEDULER_PMD,
+ cryptodev_scheduler_driver_id);
diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
index 2b5858d..1f91bce 100644
--- a/drivers/crypto/scheduler/scheduler_pmd_ops.c
+++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
@@ -368,7 +368,7 @@ scheduler_pmd_info_get(struct rte_cryptodev *dev,
max_nb_sessions;
}
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->feature_flags = dev->feature_flags;
dev_info->capabilities = sched_ctx->capabilities;
dev_info->max_nb_queue_pairs = sched_ctx->max_nb_queue_pairs;
diff --git a/drivers/crypto/scheduler/scheduler_pmd_private.h b/drivers/crypto/scheduler/scheduler_pmd_private.h
index 421dae3..615a207 100644
--- a/drivers/crypto/scheduler/scheduler_pmd_private.h
+++ b/drivers/crypto/scheduler/scheduler_pmd_private.h
@@ -63,7 +63,7 @@ struct scheduler_slave {
uint16_t qp_id;
uint32_t nb_inflight_cops;
- enum rte_cryptodev_type dev_type;
+ uint8_t driver_id;
};
struct scheduler_ctx {
@@ -105,6 +105,8 @@ struct scheduler_session {
RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES];
};
+extern uint8_t cryptodev_scheduler_driver_id;
+
static inline uint16_t __attribute__((always_inline))
get_max_enqueue_order_count(struct rte_ring *order_ring, uint16_t nb_ops)
{
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd.c b/drivers/crypto/snow3g/rte_snow3g_pmd.c
index 960956c..7db6409 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd.c
@@ -46,6 +46,8 @@
#define SNOW3G_MAX_BURST 8
#define BYTE_LEN 8
+static uint8_t cryptodev_driver_id;
+
/** Get xform chain order. */
static enum snow3g_operation
snow3g_get_mode(const struct rte_crypto_sym_xform *xform)
@@ -143,8 +145,8 @@ snow3g_get_session(struct snow3g_qp *qp, struct rte_crypto_op *op)
struct snow3g_session *sess;
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->sym->session->dev_type !=
- RTE_CRYPTODEV_SNOW3G_PMD))
+ if (unlikely(op->sym->session->driver_id !=
+ cryptodev_driver_id))
return NULL;
sess = (struct snow3g_session *)op->sym->session->_private;
@@ -569,7 +571,7 @@ cryptodev_snow3g_create(const char *name,
goto init_error;
}
- dev->dev_type = RTE_CRYPTODEV_SNOW3G_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_snow3g_pmd_ops;
/* Register RX/TX burst functions for data path. */
@@ -653,3 +655,5 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_SNOW3G_PMD,
"max_nb_queue_pairs=<int> "
"max_nb_sessions=<int> "
"socket_id=<int>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(CRYPTODEV_NAME_SNOW3G_PMD,
+ cryptodev_driver_id);
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
index 7ce96be..26cc3e9 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd_ops.c
@@ -156,7 +156,7 @@ snow3g_pmd_info_get(struct rte_cryptodev *dev,
struct snow3g_private *internals = dev->data->dev_private;
if (dev_info != NULL) {
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
dev_info->sym.max_nb_sessions = internals->max_nb_sessions;
dev_info->feature_flags = dev->feature_flags;
diff --git a/drivers/crypto/zuc/rte_zuc_pmd.c b/drivers/crypto/zuc/rte_zuc_pmd.c
index 1020544..5d50bab 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd.c
@@ -45,6 +45,8 @@
#define ZUC_MAX_BURST 8
#define BYTE_LEN 8
+static uint8_t cryptodev_driver_id;
+
/** Get xform chain order. */
static enum zuc_operation
zuc_get_mode(const struct rte_crypto_sym_xform *xform)
@@ -142,8 +144,8 @@ zuc_get_session(struct zuc_qp *qp, struct rte_crypto_op *op)
struct zuc_session *sess;
if (op->sym->sess_type == RTE_CRYPTO_SYM_OP_WITH_SESSION) {
- if (unlikely(op->sym->session->dev_type !=
- RTE_CRYPTODEV_ZUC_PMD))
+ if (unlikely(op->sym->session->driver_id !=
+ cryptodev_driver_id))
return NULL;
sess = (struct zuc_session *)op->sym->session->_private;
@@ -469,7 +471,7 @@ cryptodev_zuc_create(const char *name,
goto init_error;
}
- dev->dev_type = RTE_CRYPTODEV_ZUC_PMD;
+ dev->driver_id = cryptodev_driver_id;
dev->dev_ops = rte_zuc_pmd_ops;
/* Register RX/TX burst functions for data path. */
@@ -552,3 +554,5 @@ RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_ZUC_PMD,
"max_nb_queue_pairs=<int> "
"max_nb_sessions=<int> "
"socket_id=<int>");
+RTE_PMD_REGISTER_CRYPTO_DRIVER(CRYPTODEV_NAME_ZUC_PMD,
+ cryptodev_driver_id);
diff --git a/drivers/crypto/zuc/rte_zuc_pmd_ops.c b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
index e793459..645b80c 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd_ops.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd_ops.c
@@ -156,7 +156,7 @@ zuc_pmd_info_get(struct rte_cryptodev *dev,
struct zuc_private *internals = dev->data->dev_private;
if (dev_info != NULL) {
- dev_info->dev_type = dev->dev_type;
+ dev_info->driver_id = dev->driver_id;
dev_info->max_nb_queue_pairs = internals->max_nb_queue_pairs;
dev_info->sym.max_nb_sessions = internals->max_nb_sessions;
dev_info->feature_flags = dev->feature_flags;
diff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c
index b65cd9c..b8e1018 100644
--- a/lib/librte_cryptodev/rte_cryptodev.c
+++ b/lib/librte_cryptodev/rte_cryptodev.c
@@ -509,12 +509,12 @@ rte_cryptodev_count(void)
}
uint8_t
-rte_cryptodev_count_devtype(enum rte_cryptodev_type type)
+rte_cryptodev_device_count_by_driver(uint8_t driver_id)
{
uint8_t i, dev_count = 0;
for (i = 0; i < rte_cryptodev_globals->max_devs; i++)
- if (rte_cryptodev_globals->devs[i].dev_type == type &&
+ if (rte_cryptodev_globals->devs[i].driver_id == driver_id &&
rte_cryptodev_globals->devs[i].attached ==
RTE_CRYPTODEV_ATTACHED)
dev_count++;
@@ -1293,7 +1293,7 @@ rte_cryptodev_sym_session_init(struct rte_mempool *mp,
memset(sess, 0, mp->elt_size);
sess->dev_id = dev->data->dev_id;
- sess->dev_type = dev->dev_type;
+ sess->driver_id = dev->driver_id;
sess->mp = mp;
if (dev->dev_ops->session_initialize)
@@ -1460,7 +1460,7 @@ rte_cryptodev_sym_session_free(uint8_t dev_id,
dev = &rte_crypto_devices[dev_id];
/* Check the session belongs to this device type */
- if (sess->dev_type != dev->dev_type)
+ if (sess->driver_id != dev->driver_id)
return sess;
/* Let device implementation clear session material */
@@ -1572,3 +1572,36 @@ rte_cryptodev_pmd_create_dev_name(char *name, const char *dev_name_prefix)
return -1;
}
+
+static struct {
+ char name[RTE_CRYPTODEV_NAME_LEN];
+} rte_cryptodev_registered_drivers[RTE_CRYPTO_MAX_DEVS];
+
+static uint8_t rte_cryptodev_driver_id;
+
+uint8_t
+rte_cryptodev_driver_id_get(const char *name)
+{
+ uint8_t id;
+
+ for (id = 1; id <= rte_cryptodev_driver_id; id++)
+ if (strcmp(name, rte_cryptodev_registered_drivers[id].name)
+ == 0)
+ return id;
+ return 0;
+}
+
+char *
+rte_cryptodev_driver_name_get(uint8_t driver_id)
+{
+ if (driver_id == 0 || driver_id > rte_cryptodev_driver_id)
+ return NULL;
+ return rte_cryptodev_registered_drivers[driver_id].name;
+}
+
+uint8_t
+rte_cryptodev_allocate_driver_id(const char *name)
+{
+ rte_cryptodev_driver_id++;
+ snprintf(rte_cryptodev_registered_drivers
+ [rte_cryptodev_driver_id].name,
+ RTE_CRYPTODEV_NAME_LEN, "%s", name);
+ return rte_cryptodev_driver_id;
+}
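
With identifiers allocated starting at 1, 0 can serve as the "not found"
return value of rte_cryptodev_driver_id_get(). An illustrative fragment
of the lookup, assuming the AESNI MB PMD is linked in and has registered
itself:

    uint8_t id = rte_cryptodev_driver_id_get(
            RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD));

    if (id == 0)
        printf("driver not registered\n");
    else
        printf("driver %s has id %u\n",
                rte_cryptodev_driver_name_get(id), id);
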
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 88aeb87..eaa7d90 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -73,21 +73,6 @@ extern "C" {
#define CRYPTODEV_NAME_DPAA2_SEC_PMD cryptodev_dpaa2_sec_pmd
/**< NXP DPAA2 - SEC PMD device name */
-/** Crypto device type */
-enum rte_cryptodev_type {
- RTE_CRYPTODEV_NULL_PMD = 1, /**< Null crypto PMD */
- RTE_CRYPTODEV_AESNI_GCM_PMD, /**< AES-NI GCM PMD */
- RTE_CRYPTODEV_AESNI_MB_PMD, /**< AES-NI multi buffer PMD */
- RTE_CRYPTODEV_QAT_SYM_PMD, /**< QAT PMD Symmetric Crypto */
- RTE_CRYPTODEV_SNOW3G_PMD, /**< SNOW 3G PMD */
- RTE_CRYPTODEV_KASUMI_PMD, /**< KASUMI PMD */
- RTE_CRYPTODEV_ZUC_PMD, /**< ZUC PMD */
- RTE_CRYPTODEV_OPENSSL_PMD, /**< OpenSSL PMD */
- RTE_CRYPTODEV_ARMV8_PMD, /**< ARMv8 crypto PMD */
- RTE_CRYPTODEV_SCHEDULER_PMD, /**< Crypto Scheduler PMD */
- RTE_CRYPTODEV_DPAA2_SEC_PMD, /**< NXP DPAA2 - SEC PMD */
-};
-
extern const char **rte_cyptodev_names;
/* Logging Macros */
@@ -321,7 +306,7 @@ rte_cryptodev_get_feature_name(uint64_t flag);
/** Crypto device information */
struct rte_cryptodev_info {
const char *driver_name; /**< Driver name. */
- enum rte_cryptodev_type dev_type; /**< Device type */
+ uint8_t driver_id; /**< Driver identifier */
struct rte_pci_device *pci_dev; /**< PCI information. */
uint64_t feature_flags; /**< Feature flags */
@@ -454,13 +439,13 @@ rte_cryptodev_count(void);
/**
* Get the number of crypto devices that use a given driver.
*
- * @param type type of device.
+ * @param driver_id driver identifier.
*
* @return
* Returns the number of crypto devices.
*/
extern uint8_t
-rte_cryptodev_count_devtype(enum rte_cryptodev_type type);
+rte_cryptodev_device_count_by_driver(uint8_t driver_id);
/**
* Get number and identifiers of attached crypto device.
@@ -475,6 +460,7 @@ rte_cryptodev_count_devtype(enum rte_cryptodev_type type);
uint8_t
rte_cryptodev_devices_get(const char *dev_name, uint8_t *devices,
uint8_t nb_devices);
+
/*
* Return the NUMA socket to which a device is connected
*
@@ -732,8 +718,8 @@ struct rte_cryptodev {
struct rte_device *device;
/**< Backing device */
- enum rte_cryptodev_type dev_type;
- /**< Crypto device type */
+ uint8_t driver_id;
+ /**< Crypto driver identifier */
struct rte_cryptodev_cb_list link_intr_cbs;
/**< User application callback for interrupts if present */
@@ -870,8 +856,8 @@ struct rte_cryptodev_sym_session {
struct {
uint8_t dev_id;
/**< Device Id */
- enum rte_cryptodev_type dev_type;
- /** Crypto Device type session created on */
+ uint8_t driver_id;
+ /**< Identifier of the crypto driver the session was created on */
struct rte_mempool *mp;
/**< Mempool session allocated from */
} __rte_aligned(8);
@@ -952,6 +938,45 @@ int
rte_cryptodev_queue_pair_detach_sym_session(uint16_t qp_id,
struct rte_cryptodev_sym_session *session);
+/**
+ * Provide the unique identifier of a driver.
+ *
+ * @param name
+ * The pointer to a driver name.
+ * @return
+ * The driver identifier, or 0 if no matching driver is found.
+ */
+uint8_t rte_cryptodev_driver_id_get(const char *name);
+
+/**
+ * Provide the name of a driver.
+ *
+ * @param driver_id
+ * The driver identifier.
+ * @return
+ * The driver name, or NULL if no matching driver is found.
+ */
+char *rte_cryptodev_driver_name_get(uint8_t driver_id);
+
+/**
+ * @internal
+ * Allocate a new driver identifier.
+ *
+ * @param name
+ * The pointer to the driver name to be registered.
+ * @return
+ * The newly allocated driver identifier (identifiers start at 1).
+ */
+uint8_t rte_cryptodev_allocate_driver_id(const char *name);
+
+
+#define RTE_PMD_REGISTER_CRYPTO_DRIVER(name, driver_id)\
+RTE_INIT(init_ ##driver_id);\
+static void init_ ##driver_id(void)\
+{\
+ driver_id = rte_cryptodev_allocate_driver_id(RTE_STR(name));\
+}
+
#ifdef __cplusplus
}
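
The macro relies on RTE_INIT, so registration runs in a constructor
before main() and every linked PMD obtains its identifier automatically.
For the null PMD, for instance, the RTE_PMD_REGISTER_CRYPTO_DRIVER()
invocation at the end of null_crypto_pmd.c expands roughly to the
following, assigning the file-scope cryptodev_driver_id variable the
PMD already declares:

    RTE_INIT(init_cryptodev_driver_id);
    static void init_cryptodev_driver_id(void)
    {
        cryptodev_driver_id =
            rte_cryptodev_allocate_driver_id(
                RTE_STR(CRYPTODEV_NAME_NULL_PMD));
    }
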
diff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h
index 17ef37c..fb3972b 100644
--- a/lib/librte_cryptodev/rte_cryptodev_pmd.h
+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h
@@ -61,7 +61,7 @@ struct rte_cryptodev_session {
RTE_STD_C11
struct {
uint8_t dev_id;
- enum rte_cryptodev_type type;
+ uint8_t driver_id;
struct rte_mempool *mp;
} __rte_aligned(8);
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 9ac510e..91f2e56 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -6,7 +6,6 @@ DPDK_16.04 {
rte_cryptodev_callback_unregister;
rte_cryptodev_close;
rte_cryptodev_count;
- rte_cryptodev_count_devtype;
rte_cryptodev_configure;
rte_cryptodev_create_vdev;
rte_cryptodev_get_dev_id;
@@ -74,3 +73,13 @@ DPDK_17.05 {
rte_cryptodev_queue_pair_detach_sym_session;
} DPDK_17.02;
+
+DPDK_17.08 {
+ global:
+
+ rte_cryptodev_allocate_driver_id;
+ rte_cryptodev_device_count_by_driver;
+ rte_cryptodev_driver_id_get;
+ rte_cryptodev_driver_name_get;
+
+} DPDK_17.05;
diff --git a/test/test/test_cryptodev.c b/test/test/test_cryptodev.c
index 029ce8a..d1384ef 100644
--- a/test/test/test_cryptodev.c
+++ b/test/test/test_cryptodev.c
@@ -60,7 +60,7 @@
#include "test_cryptodev_gcm_test_vectors.h"
#include "test_cryptodev_hmac_test_vectors.h"
-static enum rte_cryptodev_type gbl_cryptodev_type;
+static uint8_t gbl_driver_id;
struct crypto_testsuite_params {
struct rte_mempool *mbuf_pool;
@@ -210,14 +210,16 @@ testsuite_setup(void)
}
/* Create an AESNI MB device if required */
- if (gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD) {
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD))) {
#ifndef RTE_LIBRTE_PMD_AESNI_MB
RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_AESNI_MB must be"
" enabled in config file to run this testsuite.\n");
return TEST_FAILED;
#endif
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_AESNI_MB_PMD);
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD), NULL);
@@ -230,14 +232,16 @@ testsuite_setup(void)
}
/* Create an AESNI GCM device if required */
- if (gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_GCM_PMD) {
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD))) {
#ifndef RTE_LIBRTE_PMD_AESNI_GCM
RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_AESNI_GCM must be"
" enabled in config file to run this testsuite.\n");
return TEST_FAILED;
#endif
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_AESNI_GCM_PMD);
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD)));
if (nb_devs < 1) {
TEST_ASSERT_SUCCESS(rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD), NULL),
@@ -248,13 +252,16 @@ testsuite_setup(void)
}
/* Create a SNOW 3G device if required */
- if (gbl_cryptodev_type == RTE_CRYPTODEV_SNOW3G_PMD) {
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD))) {
#ifndef RTE_LIBRTE_PMD_SNOW3G
RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_SNOW3G must be"
" enabled in config file to run this testsuite.\n");
return TEST_FAILED;
#endif
- nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_SNOW3G_PMD);
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD)));
if (nb_devs < 1) {
TEST_ASSERT_SUCCESS(rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD), NULL),
@@ -265,13 +272,16 @@ testsuite_setup(void)
}
/* Create a KASUMI device if required */
- if (gbl_cryptodev_type == RTE_CRYPTODEV_KASUMI_PMD) {
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_KASUMI_PMD))) {
#ifndef RTE_LIBRTE_PMD_KASUMI
RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_KASUMI must be"
" enabled in config file to run this testsuite.\n");
return TEST_FAILED;
#endif
- nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_KASUMI_PMD);
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_KASUMI_PMD)));
if (nb_devs < 1) {
TEST_ASSERT_SUCCESS(rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_KASUMI_PMD), NULL),
@@ -282,13 +292,16 @@ testsuite_setup(void)
}
/* Create a ZUC device if required */
- if (gbl_cryptodev_type == RTE_CRYPTODEV_ZUC_PMD) {
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ZUC_PMD))) {
#ifndef RTE_LIBRTE_PMD_ZUC
RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_ZUC must be"
" enabled in config file to run this testsuite.\n");
return TEST_FAILED;
#endif
- nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_ZUC_PMD);
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ZUC_PMD)));
if (nb_devs < 1) {
TEST_ASSERT_SUCCESS(rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_ZUC_PMD), NULL),
@@ -299,14 +312,16 @@ testsuite_setup(void)
}
/* Create a NULL device if required */
- if (gbl_cryptodev_type == RTE_CRYPTODEV_NULL_PMD) {
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_NULL_PMD))) {
#ifndef RTE_LIBRTE_PMD_NULL_CRYPTO
RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO must be"
" enabled in config file to run this testsuite.\n");
return TEST_FAILED;
#endif
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_NULL_PMD);
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_NULL_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_NULL_PMD), NULL);
@@ -319,14 +334,16 @@ testsuite_setup(void)
}
/* Create an OPENSSL device if required */
- if (gbl_cryptodev_type == RTE_CRYPTODEV_OPENSSL_PMD) {
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD))) {
#ifndef RTE_LIBRTE_PMD_OPENSSL
RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_OPENSSL must be"
" enabled in config file to run this testsuite.\n");
return TEST_FAILED;
#endif
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_OPENSSL_PMD);
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD),
@@ -339,14 +356,16 @@ testsuite_setup(void)
}
/* Create a ARMv8 device if required */
- if (gbl_cryptodev_type == RTE_CRYPTODEV_ARMV8_PMD) {
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD))) {
#ifndef RTE_LIBRTE_PMD_ARMV8_CRYPTO
RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO must be"
" enabled in config file to run this testsuite.\n");
return TEST_FAILED;
#endif
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_ARMV8_PMD);
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_ARMV8_PMD),
@@ -359,15 +378,17 @@ testsuite_setup(void)
}
#ifdef RTE_LIBRTE_PMD_CRYPTO_SCHEDULER
- if (gbl_cryptodev_type == RTE_CRYPTODEV_SCHEDULER_PMD) {
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD))) {
#ifndef RTE_LIBRTE_PMD_AESNI_MB
RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_AESNI_MB must be"
" enabled in config file to run this testsuite.\n");
return TEST_FAILED;
#endif
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_SCHEDULER_PMD);
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD),
@@ -382,7 +403,8 @@ testsuite_setup(void)
#endif /* RTE_LIBRTE_PMD_CRYPTO_SCHEDULER */
#ifndef RTE_LIBRTE_PMD_QAT
- if (gbl_cryptodev_type == RTE_CRYPTODEV_QAT_SYM_PMD) {
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD))) {
RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_QAT must be enabled "
"in config file to run this testsuite.\n");
return TEST_FAILED;
@@ -398,7 +420,7 @@ testsuite_setup(void)
/* Create list of valid crypto devs */
for (i = 0; i < nb_devs; i++) {
rte_cryptodev_info_get(i, &info);
- if (info.dev_type == gbl_cryptodev_type)
+ if (info.driver_id == gbl_driver_id)
ts_params->valid_devs[ts_params->valid_dev_count++] = i;
}
@@ -1341,7 +1363,8 @@ test_AES_CBC_HMAC_SHA1_encrypt_digest(void)
TEST_ASSERT_BUFFERS_ARE_EQUAL(digest,
catch_22_quote_2_512_bytes_AES_CBC_HMAC_SHA1_digest,
- gbl_cryptodev_type == RTE_CRYPTODEV_AESNI_MB_PMD ?
+ gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)) ?
TRUNCATED_DIGEST_BYTE_LENGTH_SHA1 :
DIGEST_BYTE_LENGTH_SHA1,
"Generated digest data not as expected");
@@ -1506,7 +1529,8 @@ test_AES_cipheronly_mb_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_AESNI_MB_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)),
BLKCIPHER_AES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1522,7 +1546,8 @@ test_AES_docsis_mb_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_AESNI_MB_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)),
BLKCIPHER_AES_DOCSIS_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1538,7 +1563,8 @@ test_AES_docsis_qat_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_QAT_SYM_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_AES_DOCSIS_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1554,7 +1580,8 @@ test_DES_docsis_qat_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_QAT_SYM_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_DES_DOCSIS_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1570,7 +1597,8 @@ test_authonly_mb_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_AESNI_MB_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)),
BLKCIPHER_AUTHONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1586,7 +1614,8 @@ test_AES_chain_mb_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_AESNI_MB_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)),
BLKCIPHER_AES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1604,7 +1633,8 @@ test_AES_cipheronly_scheduler_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_SCHEDULER_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD)),
BLKCIPHER_AES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1620,7 +1650,8 @@ test_AES_chain_scheduler_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_SCHEDULER_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD)),
BLKCIPHER_AES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1636,7 +1667,8 @@ test_authonly_scheduler_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_SCHEDULER_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD)),
BLKCIPHER_AUTHONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1654,7 +1686,8 @@ test_AES_chain_openssl_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_OPENSSL_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)),
BLKCIPHER_AES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1670,7 +1703,8 @@ test_AES_cipheronly_openssl_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_OPENSSL_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)),
BLKCIPHER_AES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1686,7 +1720,8 @@ test_AES_chain_qat_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_QAT_SYM_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_AES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1702,7 +1737,8 @@ test_AES_cipheronly_qat_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_QAT_SYM_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_AES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1718,7 +1754,8 @@ test_AES_chain_dpaa2_sec_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_DPAA2_SEC_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD)),
BLKCIPHER_AES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1734,7 +1771,8 @@ test_AES_cipheronly_dpaa2_sec_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_DPAA2_SEC_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD)),
BLKCIPHER_AES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1750,7 +1788,8 @@ test_authonly_openssl_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_OPENSSL_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)),
BLKCIPHER_AUTHONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -1766,7 +1805,8 @@ test_AES_chain_armv8_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_ARMV8_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD)),
BLKCIPHER_AES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -4691,7 +4731,8 @@ test_3DES_chain_qat_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_QAT_SYM_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_3DES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -4707,7 +4748,8 @@ test_DES_cipheronly_qat_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_QAT_SYM_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_DES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -4723,7 +4765,8 @@ test_DES_docsis_openssl_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_OPENSSL_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)),
BLKCIPHER_DES_DOCSIS_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -4739,7 +4782,8 @@ test_3DES_chain_dpaa2_sec_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_DPAA2_SEC_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD)),
BLKCIPHER_3DES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -4755,7 +4799,8 @@ test_3DES_cipheronly_dpaa2_sec_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_DPAA2_SEC_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD)),
BLKCIPHER_3DES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -4771,7 +4816,8 @@ test_3DES_cipheronly_qat_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_QAT_SYM_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD)),
BLKCIPHER_3DES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -4787,7 +4833,8 @@ test_3DES_chain_openssl_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_OPENSSL_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)),
BLKCIPHER_3DES_CHAIN_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -4803,7 +4850,8 @@ test_3DES_cipheronly_openssl_all(void)
status = test_blockcipher_all_tests(ts_params->mbuf_pool,
ts_params->op_mpool, ts_params->valid_devs[0],
- RTE_CRYPTODEV_OPENSSL_PMD,
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)),
BLKCIPHER_3DES_CIPHERONLY_TYPE);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -7818,8 +7866,9 @@ test_scheduler_attach_slave_op(void)
char vdev_name[32];
/* create 2 AESNI_MB if necessary */
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_AESNI_MB_PMD);
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)));
if (nb_devs < 2) {
for (i = nb_devs; i < 2; i++) {
snprintf(vdev_name, sizeof(vdev_name), "%s_%u",
@@ -7840,7 +7889,8 @@ test_scheduler_attach_slave_op(void)
struct rte_cryptodev_info info;
rte_cryptodev_info_get(i, &info);
- if (info.dev_type != RTE_CRYPTODEV_AESNI_MB_PMD)
+ if (info.driver_id != rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)))
continue;
ret = rte_cryptodev_scheduler_slave_attach(sched_id,
@@ -8607,14 +8657,16 @@ static struct unit_test_suite cryptodev_armv8_testsuite = {
static int
test_cryptodev_qat(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_QAT_SYM_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
return unit_test_suite_runner(&cryptodev_qat_testsuite);
}
static int
test_cryptodev_aesni_mb(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_AESNI_MB_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD));
return unit_test_suite_runner(&cryptodev_aesni_mb_testsuite);
}
@@ -8622,7 +8674,8 @@ test_cryptodev_aesni_mb(void /*argv __rte_unused, int argc __rte_unused*/)
static int
test_cryptodev_openssl(void)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_OPENSSL_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD));
return unit_test_suite_runner(&cryptodev_openssl_testsuite);
}
@@ -8630,7 +8683,8 @@ test_cryptodev_openssl(void)
static int
test_cryptodev_aesni_gcm(void)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_AESNI_GCM_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD));
return unit_test_suite_runner(&cryptodev_aesni_gcm_testsuite);
}
@@ -8638,7 +8692,8 @@ test_cryptodev_aesni_gcm(void)
static int
test_cryptodev_null(void)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_NULL_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_NULL_PMD));
return unit_test_suite_runner(&cryptodev_null_testsuite);
}
@@ -8646,7 +8701,8 @@ test_cryptodev_null(void)
static int
test_cryptodev_sw_snow3g(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_SNOW3G_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD));
return unit_test_suite_runner(&cryptodev_sw_snow3g_testsuite);
}
@@ -8654,7 +8710,8 @@ test_cryptodev_sw_snow3g(void /*argv __rte_unused, int argc __rte_unused*/)
static int
test_cryptodev_sw_kasumi(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_KASUMI_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_KASUMI_PMD));
return unit_test_suite_runner(&cryptodev_sw_kasumi_testsuite);
}
@@ -8662,7 +8719,8 @@ test_cryptodev_sw_kasumi(void /*argv __rte_unused, int argc __rte_unused*/)
static int
test_cryptodev_sw_zuc(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_ZUC_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ZUC_PMD));
return unit_test_suite_runner(&cryptodev_sw_zuc_testsuite);
}
@@ -8670,7 +8728,8 @@ test_cryptodev_sw_zuc(void /*argv __rte_unused, int argc __rte_unused*/)
static int
test_cryptodev_armv8(void)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_ARMV8_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD));
return unit_test_suite_runner(&cryptodev_armv8_testsuite);
}
@@ -8680,7 +8739,8 @@ test_cryptodev_armv8(void)
static int
test_cryptodev_scheduler(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_SCHEDULER_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD));
return unit_test_suite_runner(&cryptodev_scheduler_testsuite);
}
@@ -8691,7 +8751,8 @@ REGISTER_TEST_COMMAND(cryptodev_scheduler_autotest, test_cryptodev_scheduler);
static int
test_cryptodev_dpaa2_sec(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_type = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD));
return unit_test_suite_runner(&cryptodev_dpaa2_sec_testsuite);
}
diff --git a/test/test/test_cryptodev_blockcipher.c b/test/test/test_cryptodev_blockcipher.c
index 603c776..c2c140a 100644
--- a/test/test/test_cryptodev_blockcipher.c
+++ b/test/test/test_cryptodev_blockcipher.c
@@ -53,7 +53,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
struct rte_mempool *mbuf_pool,
struct rte_mempool *op_mpool,
uint8_t dev_id,
- enum rte_cryptodev_type cryptodev_type,
+ uint8_t driver_id,
char *test_msg)
{
struct rte_mbuf *ibuf = NULL;
@@ -79,6 +79,17 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
uint8_t tmp_src_buf[MBUF_SIZE];
uint8_t tmp_dst_buf[MBUF_SIZE];
+ uint8_t openssl_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD));
+ uint8_t scheduler_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD));
+ uint8_t armv8_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD));
+ uint8_t aesni_mb_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD));
+ uint8_t qat_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
+
int nb_segs = 1;
if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_SG) {
@@ -99,17 +110,14 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t,
memcpy(auth_key, tdata->auth_key.data,
tdata->auth_key.len);
- switch (cryptodev_type) {
- case RTE_CRYPTODEV_QAT_SYM_PMD:
- case RTE_CRYPTODEV_OPENSSL_PMD:
- case RTE_CRYPTODEV_ARMV8_PMD: /* Fall through */
+ if (driver_id == qat_pmd ||
+ driver_id == openssl_pmd ||
+ driver_id == armv8_pmd) {
digest_len = tdata->digest.len;
- break;
- case RTE_CRYPTODEV_AESNI_MB_PMD:
- case RTE_CRYPTODEV_SCHEDULER_PMD:
+ } else if (driver_id == aesni_mb_pmd ||
+ driver_id == scheduler_pmd) {
digest_len = tdata->digest.truncated_len;
- break;
- default:
+ } else {
snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN,
"line %u FAILED: %s",
__LINE__, "Unsupported PMD type");
@@ -592,7 +600,7 @@ int
test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
struct rte_mempool *op_mpool,
uint8_t dev_id,
- enum rte_cryptodev_type cryptodev_type,
+ uint8_t driver_id,
enum blockcipher_test_type test_type)
{
int status, overall_status = TEST_SUCCESS;
@@ -602,6 +610,19 @@ test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
uint32_t target_pmd_mask = 0;
const struct blockcipher_test_case *tcs = NULL;
+ uint8_t openssl_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD));
+ uint8_t dpaa2_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD));
+ uint8_t scheduler_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD));
+ uint8_t armv8_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD));
+ uint8_t aesni_mb_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD));
+ uint8_t qat_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
+
switch (test_type) {
case BLKCIPHER_AES_CHAIN_TYPE:
n_test_cases = sizeof(aes_chain_test_cases) /
@@ -647,29 +668,20 @@ test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
break;
}
- switch (cryptodev_type) {
- case RTE_CRYPTODEV_AESNI_MB_PMD:
+ if (driver_id == aesni_mb_pmd)
target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MB;
- break;
- case RTE_CRYPTODEV_QAT_SYM_PMD:
+ else if (driver_id == qat_pmd)
target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_QAT;
- break;
- case RTE_CRYPTODEV_OPENSSL_PMD:
+ else if (driver_id == openssl_pmd)
target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL;
- break;
- case RTE_CRYPTODEV_ARMV8_PMD:
+ else if (driver_id == armv8_pmd)
target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_ARMV8;
- break;
- case RTE_CRYPTODEV_SCHEDULER_PMD:
+ else if (driver_id == scheduler_pmd)
target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER;
- break;
- case RTE_CRYPTODEV_DPAA2_SEC_PMD:
+ else if (driver_id == dpaa2_pmd)
target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC;
- break;
- default:
+ else
TEST_ASSERT(0, "Unrecognized cryptodev driver");
- break;
- }
for (i = 0; i < n_test_cases; i++) {
const struct blockcipher_test_case *tc = &tcs[i];
@@ -678,7 +690,7 @@ test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
continue;
status = test_blockcipher_one_case(tc, mbuf_pool, op_mpool,
- dev_id, cryptodev_type, test_msg);
+ dev_id, driver_id, test_msg);
printf(" %u) TestCase %s %s\n", test_index ++,
tc->test_descr, test_msg);
diff --git a/test/test/test_cryptodev_blockcipher.h b/test/test/test_cryptodev_blockcipher.h
index 004122f..f5058ef 100644
--- a/test/test/test_cryptodev_blockcipher.h
+++ b/test/test/test_cryptodev_blockcipher.h
@@ -126,7 +126,7 @@ int
test_blockcipher_all_tests(struct rte_mempool *mbuf_pool,
struct rte_mempool *op_mpool,
uint8_t dev_id,
- enum rte_cryptodev_type cryptodev_type,
+ uint8_t driver_id,
enum blockcipher_test_type test_type);
#endif /* TEST_CRYPTODEV_BLOCKCIPHER_H_ */
diff --git a/test/test/test_cryptodev_perf.c b/test/test/test_cryptodev_perf.c
index d60028d..e943e6b 100644
--- a/test/test/test_cryptodev_perf.c
+++ b/test/test/test_cryptodev_perf.c
@@ -195,23 +195,35 @@ static const char *chain_mode_name(enum chain_mode mode)
}
}
-static const char *pmd_name(enum rte_cryptodev_type pmd)
+static const char *pmd_name(uint8_t driver_id)
{
- switch (pmd) {
- case RTE_CRYPTODEV_NULL_PMD: return RTE_STR(CRYPTODEV_NAME_NULL_PMD); break;
- case RTE_CRYPTODEV_AESNI_GCM_PMD:
+ uint8_t null_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_NULL_PMD));
+ uint8_t dpaa2_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD));
+ uint8_t snow3g_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD));
+ uint8_t aesni_gcm_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD));
+ uint8_t aesni_mb_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD));
+ uint8_t qat_pmd = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
+
+ if (driver_id == null_pmd)
+ return RTE_STR(CRYPTODEV_NAME_NULL_PMD);
+ else if (driver_id == aesni_gcm_pmd)
return RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD);
- case RTE_CRYPTODEV_AESNI_MB_PMD:
+ else if (driver_id == aesni_mb_pmd)
return RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD);
- case RTE_CRYPTODEV_QAT_SYM_PMD:
+ else if (driver_id == qat_pmd)
return RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD);
- case RTE_CRYPTODEV_SNOW3G_PMD:
+ else if (driver_id == snow3g_pmd)
return RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD);
- case RTE_CRYPTODEV_DPAA2_SEC_PMD:
+ else if (driver_id == dpaa2_pmd)
return RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD);
- default:
+ else
return "";
- }
}
static const char *cipher_algo_name(enum rte_crypto_cipher_algorithm cipher_algo)
@@ -287,7 +299,7 @@ setup_test_string(struct rte_mempool *mpool,
static struct crypto_testsuite_params testsuite_params = { NULL };
static struct crypto_unittest_params unittest_params;
-static enum rte_cryptodev_type gbl_cryptodev_perftest_devtype;
+static uint8_t gbl_driver_id;
static int
testsuite_setup(void)
@@ -324,13 +336,16 @@ testsuite_setup(void)
}
/* Create an AESNI MB device if required */
- if (gbl_cryptodev_perftest_devtype == RTE_CRYPTODEV_AESNI_MB_PMD) {
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD))) {
#ifndef RTE_LIBRTE_PMD_AESNI_MB
RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_AESNI_MB must be"
" enabled in config file to run this testsuite.\n");
return TEST_FAILED;
#endif
- nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_AESNI_MB_PMD);
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD), NULL);
@@ -342,13 +357,16 @@ testsuite_setup(void)
}
/* Create an AESNI GCM device if required */
- if (gbl_cryptodev_perftest_devtype == RTE_CRYPTODEV_AESNI_GCM_PMD) {
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD))) {
#ifndef RTE_LIBRTE_PMD_AESNI_GCM
RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_AESNI_GCM must be"
" enabled in config file to run this testsuite.\n");
return TEST_FAILED;
#endif
- nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_AESNI_GCM_PMD);
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD), NULL);
@@ -360,13 +378,16 @@ testsuite_setup(void)
}
/* Create a SNOW3G device if required */
- if (gbl_cryptodev_perftest_devtype == RTE_CRYPTODEV_SNOW3G_PMD) {
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD))) {
#ifndef RTE_LIBRTE_PMD_SNOW3G
RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_SNOW3G must be"
" enabled in config file to run this testsuite.\n");
return TEST_FAILED;
#endif
- nb_devs = rte_cryptodev_count_devtype(RTE_CRYPTODEV_SNOW3G_PMD);
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD), NULL);
@@ -378,14 +399,16 @@ testsuite_setup(void)
}
/* Create an OPENSSL device if required */
- if (gbl_cryptodev_perftest_devtype == RTE_CRYPTODEV_OPENSSL_PMD) {
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD))) {
#ifndef RTE_LIBRTE_PMD_OPENSSL
RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_OPENSSL must be"
" enabled in config file to run this testsuite.\n");
return TEST_FAILED;
#endif
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_OPENSSL_PMD);
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD),
@@ -398,14 +421,16 @@ testsuite_setup(void)
}
/* Create an ARMv8 device if required */
- if (gbl_cryptodev_perftest_devtype == RTE_CRYPTODEV_ARMV8_PMD) {
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD))) {
#ifndef RTE_LIBRTE_PMD_ARMV8_CRYPTO
RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_ARMV8_CRYPTO must be"
" enabled in config file to run this testsuite.\n");
return TEST_FAILED;
#endif
- nb_devs = rte_cryptodev_count_devtype(
- RTE_CRYPTODEV_ARMV8_PMD);
+ nb_devs = rte_cryptodev_device_count_by_driver(
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD)));
if (nb_devs < 1) {
ret = rte_vdev_init(
RTE_STR(CRYPTODEV_NAME_ARMV8_PMD),
@@ -418,7 +443,8 @@ testsuite_setup(void)
}
#ifndef RTE_LIBRTE_PMD_QAT
- if (gbl_cryptodev_perftest_devtype == RTE_CRYPTODEV_QAT_SYM_PMD) {
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD))) {
RTE_LOG(ERR, USER1, "CONFIG_RTE_LIBRTE_PMD_QAT must be enabled "
"in config file to run this testsuite.\n");
return TEST_FAILED;
@@ -434,7 +460,7 @@ testsuite_setup(void)
/* Search for the first valid */
for (i = 0; i < nb_devs; i++) {
rte_cryptodev_info_get(i, &info);
- if (info.dev_type == gbl_cryptodev_perftest_devtype) {
+ if (info.driver_id == gbl_driver_id) {
ts_params->dev_id = i;
valid_dev_id = 1;
break;
@@ -2093,8 +2119,9 @@ test_perf_crypto_qp_vary_burst_size(uint16_t dev_num)
}
while (num_received != num_to_submit) {
- if (gbl_cryptodev_perftest_devtype ==
- RTE_CRYPTODEV_AESNI_MB_PMD)
+ if (gbl_driver_id ==
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)))
rte_cryptodev_enqueue_burst(dev_num, 0,
NULL, 0);
@@ -2165,7 +2192,7 @@ test_perf_snow3G_optimise_cyclecount(struct perf_test_params *pparams)
printf("\nOn %s dev%u qp%u, %s, cipher algo:%s, auth_algo:%s, "
"Packet Size %u bytes",
- pmd_name(gbl_cryptodev_perftest_devtype),
+ pmd_name(gbl_driver_id),
ts_params->dev_id, 0,
chain_mode_name(pparams->chain),
cipher_algo_name(pparams->cipher_algo),
@@ -2209,8 +2236,9 @@ test_perf_snow3G_optimise_cyclecount(struct perf_test_params *pparams)
}
while (num_ops_received != num_to_submit) {
- if (gbl_cryptodev_perftest_devtype ==
- RTE_CRYPTODEV_AESNI_MB_PMD)
+ if (gbl_driver_id ==
+ rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)))
rte_cryptodev_enqueue_burst(ts_params->dev_id, 0,
NULL, 0);
start_cycles = rte_rdtsc_precise();
@@ -2360,7 +2388,7 @@ test_perf_openssl_optimise_cyclecount(struct perf_test_params *pparams)
printf("\nOn %s dev%u qp%u, %s, cipher algo:%s, cipher key length:%u, "
"auth_algo:%s, Packet Size %u bytes",
- pmd_name(gbl_cryptodev_perftest_devtype),
+ pmd_name(gbl_driver_id),
ts_params->dev_id, 0,
chain_mode_name(pparams->chain),
cipher_algo_name(pparams->cipher_algo),
@@ -2495,7 +2523,7 @@ test_perf_armv8_optimise_cyclecount(struct perf_test_params *pparams)
printf("\nOn %s dev%u qp%u, %s, cipher algo:%s, cipher key length:%u, "
"auth_algo:%s, Packet Size %u bytes",
- pmd_name(gbl_cryptodev_perftest_devtype),
+ pmd_name(gbl_driver_id),
ts_params->dev_id, 0,
chain_mode_name(pparams->chain),
cipher_algo_name(pparams->cipher_algo),
@@ -3410,7 +3438,8 @@ test_perf_snow3g(uint8_t dev_id, uint16_t queue_id,
double cycles_B = cycles_buff / pparams->buf_size;
double throughput = (ops_s * pparams->buf_size * 8) / 1000000;
- if (gbl_cryptodev_perftest_devtype == RTE_CRYPTODEV_QAT_SYM_PMD) {
+ if (gbl_driver_id == rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD))) {
/* Cycle count misleading on HW devices for this test, so don't print */
printf("%4u\t%6.2f\t%10.2f\t n/a \t\t n/a "
"\t\t n/a \t\t%8"PRIu64"\t%8"PRIu64,
@@ -3845,7 +3874,7 @@ test_perf_snow3G_vary_pkt_size(void)
for (k = 0; k < RTE_DIM(burst_sizes); k++) {
printf("\nOn %s dev%u qp%u, %s, "
"cipher algo:%s, auth algo:%s, burst_size: %d ops",
- pmd_name(gbl_cryptodev_perftest_devtype),
+ pmd_name(gbl_driver_id),
testsuite_params.dev_id, 0,
chain_mode_name(params_set[i].chain),
cipher_algo_name(params_set[i].cipher_algo),
@@ -4726,7 +4755,8 @@ static struct unit_test_suite cryptodev_armv8_testsuite = {
static int
perftest_aesni_gcm_cryptodev(void)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_AESNI_GCM_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_GCM_PMD));
return unit_test_suite_runner(&cryptodev_gcm_testsuite);
}
@@ -4734,7 +4764,8 @@ perftest_aesni_gcm_cryptodev(void)
static int
perftest_aesni_mb_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_AESNI_MB_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD));
return unit_test_suite_runner(&cryptodev_aes_testsuite);
}
@@ -4742,7 +4773,8 @@ perftest_aesni_mb_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
static int
perftest_qat_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_QAT_SYM_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
return unit_test_suite_runner(&cryptodev_testsuite);
}
@@ -4750,7 +4782,8 @@ perftest_qat_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
static int
perftest_sw_snow3g_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_SNOW3G_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_SNOW3G_PMD));
return unit_test_suite_runner(&cryptodev_snow3g_testsuite);
}
@@ -4758,7 +4791,8 @@ perftest_sw_snow3g_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
static int
perftest_qat_snow3g_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_QAT_SYM_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
return unit_test_suite_runner(&cryptodev_snow3g_testsuite);
}
@@ -4766,7 +4800,8 @@ perftest_qat_snow3g_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
static int
perftest_openssl_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_OPENSSL_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_OPENSSL_PMD));
return unit_test_suite_runner(&cryptodev_openssl_testsuite);
}
@@ -4774,7 +4809,8 @@ perftest_openssl_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
static int
perftest_qat_continual_cryptodev(void)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_QAT_SYM_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
return unit_test_suite_runner(&cryptodev_qat_continual_testsuite);
}
@@ -4782,7 +4818,8 @@ perftest_qat_continual_cryptodev(void)
static int
perftest_sw_armv8_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_ARMV8_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_ARMV8_PMD));
return unit_test_suite_runner(&cryptodev_armv8_testsuite);
}
@@ -4790,7 +4827,8 @@ perftest_sw_armv8_cryptodev(void /*argv __rte_unused, int argc __rte_unused*/)
static int
perftest_dpaa2_sec_cryptodev(void)
{
- gbl_cryptodev_perftest_devtype = RTE_CRYPTODEV_DPAA2_SEC_PMD;
+ gbl_driver_id = rte_cryptodev_driver_id_get(
+ RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD));
return unit_test_suite_runner(&cryptodev_dpaa2_sec_testsuite);
}
--
2.5.0
^ permalink raw reply [relevance 3%]
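The conversion above drops the fixed enum rte_cryptodev_type in favor of driver ids resolved at runtime from the PMD name string. A minimal sketch of the resulting lookup-and-compare pattern, using only the rte_cryptodev API shown in the patch (the helper name dev_uses_pmd is hypothetical, and handling of an unregistered driver name is omitted):

	#include <rte_cryptodev.h>

	/* Hypothetical helper: check whether a device was created by the PMD
	 * registered under the given name, mirroring the converted tests. */
	static int
	dev_uses_pmd(uint8_t dev_id, const char *pmd_name)
	{
		struct rte_cryptodev_info info;
		uint8_t driver_id = rte_cryptodev_driver_id_get(pmd_name);

		rte_cryptodev_info_get(dev_id, &info);
		return info.driver_id == driver_id;
	}

	/* e.g. dev_uses_pmd(dev_id, RTE_STR(CRYPTODEV_NAME_AESNI_MB_PMD)) */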
* [dpdk-dev] [PATCH v4 2/2] ethdev: add traffic management API
@ 2017-05-19 17:12 1% ` Cristian Dumitrescu
2017-05-24 11:28 0% ` Hemant Agrawal
0 siblings, 1 reply; 200+ results
From: Cristian Dumitrescu @ 2017-05-19 17:12 UTC (permalink / raw)
To: dev
Cc: thomas.monjalon, jerin.jacob, balasubramanian.manoharan,
hemant.agrawal, shreyansh.jain
This patch introduces the generic ethdev API for the traffic manager
capability, which includes hierarchical scheduling, traffic shaping,
congestion management and packet marking.
Main features:
- Exposed as ethdev plugin capability (similar to rte_flow)
- Capability query API per port, per level and per node
- Scheduling algorithms: Strict Priority (SP), Weighted Fair Queuing (WFQ)
- Traffic shaping: single/dual rate, private (per node) and shared (by
multiple nodes) shapers
- Congestion management for hierarchy leaf nodes: algorithms of tail drop,
head drop, WRED; private (per node) and shared (by multiple nodes) WRED
contexts
- Packet marking: IEEE 802.1q (VLAN DEI), IETF RFC 3168 (IPv4/IPv6 ECN for
TCP and SCTP), IETF RFC 2597 (IPv4 / IPv6 DSCP)
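As a rough illustration of how an application drives this API, here is a minimal
sketch using only the functions declared in this patch (rte_tm_shaper_profile_add,
rte_tm_node_add, rte_tm_hierarchy_commit). The node and profile IDs are arbitrary,
and the zeroed parameter structures are placeholders; their field semantics are
PMD-specific and not shown in this excerpt.

	#include <string.h>
	#include <rte_tm.h>

	static int
	tm_setup_sketch(uint8_t port_id, uint32_t tx_queue_id)
	{
		struct rte_tm_error error;
		struct rte_tm_shaper_params sp;
		struct rte_tm_node_params np;
		int ret;

		memset(&sp, 0, sizeof(sp));
		memset(&np, 0, sizeof(np));

		ret = rte_tm_shaper_profile_add(port_id, 0, &sp, &error);
		if (ret)
			return ret;

		/* Root node: its parent is RTE_TM_NODE_ID_NULL. */
		ret = rte_tm_node_add(port_id, 1000, RTE_TM_NODE_ID_NULL,
				0, 1, &np, &error);
		if (ret)
			return ret;

		/* Leaf node IDs match the Ethernet TX queue IDs. */
		ret = rte_tm_node_add(port_id, tx_queue_id, 1000,
				0, 1, &np, &error);
		if (ret)
			return ret;

		/* Freeze the hierarchy, clearing it on failure. */
		return rte_tm_hierarchy_commit(port_id, 1, &error);
	}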
Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
Changes in v4:
- Implemented feedback from Hemant [6]
- Capability API: Reworked the port, level and node capability API
data structure to remove confusion due to the "summary across all
nodes" approach, which made it unclear whether a particular
capability is supported by all nodes or by at least one node.
- Capability API: Added flags for "all nodes have identical
capability set"
- Suspended state: documented the required behavior in Doxygen
description
- Implemented feedback from Jerin [7]
- Node add: added level parameter (see new API function:
rte_tm_node_add_check_level())
- RTE_TM_ETH_FRAMING_OVERHEAD, RTE_TM_ETH_FRAMING_OVERHEAD_FCS:
documented their usage in their Doxygen description
- Capability API: for each function, mention the related
capability field (Doxygen @see)
- stats_mask, capability_mask: document the enum flags used to
build each mask (Doxygen @see)
- Rename rte_tm_get_leaf_nodes() to
rte_tm_get_number_of_leaf_nodes()
- Doxygen: add @param[in, out] to the description of all API funcs
- Doxygen: fix hooks in doc/api/doxy-api-index.md
- Rename rte_tm_hierarchy_set() to rte_tm_hierarchy_commit(), improved
Doxygen description
- Node add, node delete: improved Doxygen description
- Fixed incorrect design assumption that packet-based weight mode for WFQ
is identical to WRR. As a result, removed all references to WRR support.
Renamed the "scheduling mode" node parameters to "wfq_weight_mode".
Changes in v3:
- Implemented feedback from Jerin [5]
- Changed naming convention: scheddev -> tm
- Improvements on the capability API:
- Specification of marking capabilities per color
- WFQ/WRR groups: sp_n_children_max ->
wfq_wrr_n_children_per_group_max, added wfq_wrr_n_groups_max,
improved description of both, improved description of
wfq_wrr_weight_max
- Dynamic updates: added KEEP_LEVEL and CHANGE_LEVEL for parent
update
- Enforced/documented restrictions for root node (node_add() and
update())
- Enforced/documented shaper profile restrictions on PIR: PIR != 0,
PIR >= CIR
- Turned repetitive code in rte_tm.c into macro
- Removed dependency on rte_red.h file (added RED params to rte_tm.h)
- Color: removed "e_" from color names enum
- Fixed small Doxygen style issues
Changes in v2:
- Implemented feedback from Hemant [4]
- Improvements on the capability API
- Added capability API for hierarchy level
- Merged stats capability into the capability API
- Added dynamic updates
- Added non-leaf/leaf union to the node capability structure
- Renamed sp_priority_min to sp_n_priorities_max, added
clarifications
- Fixed description for sp_n_children_max
- Clarified and enforced rule on node ID range for leaf and non-leaf nodes
- Added API functions to get node type (i.e. leaf/non-leaf):
get_leaf_nodes(), node_type_get()
- Added clarification for the root node: its creation, parent, role
- Macro NODE_ID_NULL as root node's parent
- Description of the node_add() and node_parent_update() API funcs
- Added clarification for the first time add vs. subsequent updates rule
- Cleaned up the description for the node_add() function
- Statistics API improvements
- Merged stats capability into the capability API
- Added API function node_stats_update()
- Added more stats per packet color
- Added more error types
- Fixed small Doxygen style issues
Changes in v1 (since RFC [1]):
- Implemented as ethdev plugin (similar to rte_flow) as opposed to more
monolithic additions to ethdev itself
- Implemented feedback from Jerin [2] and Hemant [3]. Implemented all the
suggested items with only one exception; see the long list below;
hopefully nothing was forgotten.
- The item not done (hopefully for a good reason): driver-generated
object IDs. IMO the choice to have application-generated object IDs
adds marginal complexity to the driver (search ID function
required), but it provides huge simplification for the application.
The app does not need to worry about building & managing tree-like
structure for storing driver-generated object IDs, the app can use
its own convention for node IDs depending on the specific hierarchy
that it needs. Trivial example: identify all level-2 nodes with IDs
like 100, 200, 300, … and the level-3 nodes based on their level-2
parents: 110, 120, 130, 140, …, 210, 220, 230, 240, …, 310, 320,
330, … and level-4 nodes based on their level-3 parents: 111, 112,
113, 114, …, 121, 122, 123, 124, and so on (a tiny numbering helper is
sketched after the reference links below). Moreover, see the change log
for the other related simplification that was implemented: leaf
nodes now have predefined IDs that are the same as their Ethernet
TX queue ID (therefore no translation is required for leaf nodes).
- Capability API. Done per port and per node as well.
- Dual rate shapers
- Added configuration of private shaper (per node) directly from the
shaper profile as part of node API (no shaper ID needed for private
shapers), while the shared shapers are configured outside of the node
API using shaper profile and communicated to the node using shared
shaper ID. So there is no configuration overhead for shared shapers if
the app does not use any of them.
- Leaf nodes now have predefined IDs that are the same with their Ethernet
TX queue ID (therefore no translation is required for leaf nodes). This
is also used to differentiate between a leaf node and a non-leaf node.
- Domain-specific errors to give a precise indication of the error cause
(same as done by rte_flow)
- Packet marking API
- Optional packet length adjustment for shapers, positive (e.g. for adding
Ethernet framing overhead of 20 bytes) or negative (e.g. for rate
limiting based on IP packet bytes)
[1] RFC: http://dpdk.org/ml/archives/dev/2016-November/050956.html
[2] Jerin’s feedback on RFC: http://www.dpdk.org/ml/archives/dev/2017-January/054484.html
[3] Hemant’s feedback on RFC: http://www.dpdk.org/ml/archives/dev/2017-January/054866.html
[4] Hemant's feedback on v1: http://www.dpdk.org/ml/archives/dev/2017-February/058033.html
[5] Jerin's feedback on v1: http://www.dpdk.org/ml/archives/dev/2017-March/058895.html
[6] Hemant's feedback on v3: http://www.dpdk.org/ml/archives/dev/2017-March/062354.html
[7] Jerin's feedback on v3: http://www.dpdk.org/ml/archives/dev/2017-April/063429.html
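A tiny helper sketching the application-side node ID numbering convention
described above (purely illustrative, not part of the proposed API):

	/* Hypothetical scheme: level-2 nodes are 100, 200, ...; their level-3
	 * children are 110, 120, ...; level-4 children are 111, 112, ... */
	static uint32_t
	app_node_id(uint32_t l2, uint32_t l3, uint32_t l4)
	{
		return l2 * 100 + l3 * 10 + l4;	/* e.g. (1, 2, 3) -> 123 */
	}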
MAINTAINERS | 4 +
lib/librte_ether/Makefile | 5 +-
lib/librte_ether/rte_ether_version.map | 30 +
lib/librte_ether/rte_tm.c | 448 ++++++++
lib/librte_ether/rte_tm.h | 1923 ++++++++++++++++++++++++++++++++
lib/librte_ether/rte_tm_driver.h | 373 +++++++
6 files changed, 2782 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_ether/rte_tm.c
create mode 100644 lib/librte_ether/rte_tm.h
create mode 100644 lib/librte_ether/rte_tm_driver.h
diff --git a/MAINTAINERS b/MAINTAINERS
index afb4cab..cdaf2ac 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -240,6 +240,10 @@ Flow API
M: Adrien Mazarguil <adrien.mazarguil@6wind.com>
F: lib/librte_ether/rte_flow*
+Traffic Management API
+M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+F: lib/librte_ether/rte_tm*
+
Crypto API
M: Declan Doherty <declan.doherty@intel.com>
F: lib/librte_cryptodev/
diff --git a/lib/librte_ether/Makefile b/lib/librte_ether/Makefile
index 93fdde1..db692ae 100644
--- a/lib/librte_ether/Makefile
+++ b/lib/librte_ether/Makefile
@@ -1,6 +1,6 @@
# BSD LICENSE
#
-# Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
+# Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
@@ -45,6 +45,7 @@ LIBABIVER := 6
SRCS-y += rte_ethdev.c
SRCS-y += rte_flow.c
+SRCS-y += rte_tm.c
#
# Export include files
@@ -56,5 +57,7 @@ SYMLINK-y-include += rte_eth_ctrl.h
SYMLINK-y-include += rte_dev_info.h
SYMLINK-y-include += rte_flow.h
SYMLINK-y-include += rte_flow_driver.h
+SYMLINK-y-include += rte_tm.h
+SYMLINK-y-include += rte_tm_driver.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map
index ff056e8..7f39904 100644
--- a/lib/librte_ether/rte_ether_version.map
+++ b/lib/librte_ether/rte_ether_version.map
@@ -161,4 +161,34 @@ DPDK_17.08 {
global:
rte_eth_dev_tm_ops_get;
+	rte_tm_get_number_of_leaf_nodes;
+ rte_tm_node_type_get;
+ rte_tm_capabilities_get;
+ rte_tm_level_capabilities_get;
+ rte_tm_node_capabilities_get;
+ rte_tm_wred_profile_add;
+ rte_tm_wred_profile_delete;
+ rte_tm_shared_wred_context_add_update;
+ rte_tm_shared_wred_context_delete;
+ rte_tm_shaper_profile_add;
+ rte_tm_shaper_profile_delete;
+ rte_tm_shared_shaper_add_update;
+ rte_tm_shared_shaper_delete;
+ rte_tm_node_add;
+ rte_tm_node_delete;
+ rte_tm_node_suspend;
+ rte_tm_node_resume;
+ rte_tm_hierarchy_commit;
+ rte_tm_node_parent_update;
+ rte_tm_node_shaper_update;
+ rte_tm_node_shared_shaper_update;
+ rte_tm_node_stats_update;
+ rte_tm_node_wfq_weight_mode_update;
+ rte_tm_node_cman_update;
+ rte_tm_node_wred_context_update;
+ rte_tm_node_shared_wred_context_update;
+ rte_tm_node_stats_read;
+ rte_tm_mark_vlan_dei;
+ rte_tm_mark_ip_ecn;
+ rte_tm_mark_ip_dscp;
} DPDK_17.05
diff --git a/lib/librte_ether/rte_tm.c b/lib/librte_ether/rte_tm.c
new file mode 100644
index 0000000..2617a1a
--- /dev/null
+++ b/lib/librte_ether/rte_tm.c
@@ -0,0 +1,448 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdint.h>
+
+#include <rte_errno.h>
+#include "rte_ethdev.h"
+#include "rte_tm_driver.h"
+#include "rte_tm.h"
+
+/* Get generic traffic manager operations structure from a port. */
+const struct rte_tm_ops *
+rte_tm_ops_get(uint8_t port_id, struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ const struct rte_tm_ops *ops;
+
+ if (!rte_eth_dev_is_valid_port(port_id)) {
+ rte_tm_error_set(error,
+ ENODEV,
+ RTE_TM_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ rte_strerror(ENODEV));
+ return NULL;
+ }
+
+ if ((dev->dev_ops->tm_ops_get == NULL) ||
+ (dev->dev_ops->tm_ops_get(dev, &ops) != 0) ||
+ (ops == NULL)) {
+ rte_tm_error_set(error,
+ ENOSYS,
+ RTE_TM_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ rte_strerror(ENOSYS));
+ return NULL;
+ }
+
+ return ops;
+}
+
+#define RTE_TM_FUNC(port_id, func) \
+({ \
+ const struct rte_tm_ops *ops = \
+ rte_tm_ops_get(port_id, error); \
+ if (ops == NULL) \
+ return -rte_errno; \
+ \
+ if (ops->func == NULL) \
+ return -rte_tm_error_set(error, \
+ ENOSYS, \
+ RTE_TM_ERROR_TYPE_UNSPECIFIED, \
+ NULL, \
+ rte_strerror(ENOSYS)); \
+ \
+ ops->func; \
+})
+
+/* Get number of leaf nodes */
+int
+rte_tm_get_number_of_leaf_nodes(uint8_t port_id,
+ uint32_t *n_leaf_nodes,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ const struct rte_tm_ops *ops =
+ rte_tm_ops_get(port_id, error);
+
+ if (ops == NULL)
+ return -rte_errno;
+
+ if (n_leaf_nodes == NULL) {
+ rte_tm_error_set(error,
+ EINVAL,
+ RTE_TM_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ rte_strerror(EINVAL));
+ return -rte_errno;
+ }
+
+ *n_leaf_nodes = dev->data->nb_tx_queues;
+ return 0;
+}
+
+/* Check node type (leaf or non-leaf) */
+int
+rte_tm_node_type_get(uint8_t port_id,
+ uint32_t node_id,
+ int *is_leaf,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_type_get)(dev,
+ node_id, is_leaf, error);
+}
+
+/* Get node level */
+int
+rte_tm_node_level_get(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t *level_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_level_get)(dev,
+ node_id, level_id, error);
+}
+
+/* Get capabilities */
+int rte_tm_capabilities_get(uint8_t port_id,
+ struct rte_tm_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, capabilities_get)(dev,
+ cap, error);
+}
+
+/* Get level capabilities */
+int rte_tm_level_capabilities_get(uint8_t port_id,
+ uint32_t level_id,
+ struct rte_tm_level_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, level_capabilities_get)(dev,
+ level_id, cap, error);
+}
+
+/* Get node capabilities */
+int rte_tm_node_capabilities_get(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_node_capabilities *cap,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_capabilities_get)(dev,
+ node_id, cap, error);
+}
+
+/* Add WRED profile */
+int rte_tm_wred_profile_add(uint8_t port_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_wred_params *profile,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, wred_profile_add)(dev,
+ wred_profile_id, profile, error);
+}
+
+/* Delete WRED profile */
+int rte_tm_wred_profile_delete(uint8_t port_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, wred_profile_delete)(dev,
+ wred_profile_id, error);
+}
+
+/* Add/update shared WRED context */
+int rte_tm_shared_wred_context_add_update(uint8_t port_id,
+ uint32_t shared_wred_context_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, shared_wred_context_add_update)(dev,
+ shared_wred_context_id, wred_profile_id, error);
+}
+
+/* Delete shared WRED context */
+int rte_tm_shared_wred_context_delete(uint8_t port_id,
+ uint32_t shared_wred_context_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, shared_wred_context_delete)(dev,
+ shared_wred_context_id, error);
+}
+
+/* Add shaper profile */
+int rte_tm_shaper_profile_add(uint8_t port_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, shaper_profile_add)(dev,
+ shaper_profile_id, profile, error);
+}
+
+/* Delete shaper profile */
+int rte_tm_shaper_profile_delete(uint8_t port_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, shaper_profile_delete)(dev,
+ shaper_profile_id, error);
+}
+
+/* Add shared shaper */
+int rte_tm_shared_shaper_add_update(uint8_t port_id,
+ uint32_t shared_shaper_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, shared_shaper_add_update)(dev,
+ shared_shaper_id, shaper_profile_id, error);
+}
+
+/* Delete shared shaper */
+int rte_tm_shared_shaper_delete(uint8_t port_id,
+ uint32_t shared_shaper_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, shared_shaper_delete)(dev,
+ shared_shaper_id, error);
+}
+
+/* Add node to port traffic manager hierarchy */
+int rte_tm_node_add(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t parent_node_id,
+ uint32_t priority,
+ uint32_t weight,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_add)(dev,
+ node_id, parent_node_id, priority, weight, params, error);
+}
+
+/* Delete node from traffic manager hierarchy */
+int rte_tm_node_delete(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_delete)(dev,
+ node_id, error);
+}
+
+/* Suspend node */
+int rte_tm_node_suspend(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_suspend)(dev,
+ node_id, error);
+}
+
+/* Resume node */
+int rte_tm_node_resume(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_resume)(dev,
+ node_id, error);
+}
+
+/* Commit the initial port traffic manager hierarchy */
+int rte_tm_hierarchy_commit(uint8_t port_id,
+ int clear_on_fail,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, hierarchy_commit)(dev,
+ clear_on_fail, error);
+}
+
+/* Update node parent */
+int rte_tm_node_parent_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t parent_node_id,
+ uint32_t priority,
+ uint32_t weight,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_parent_update)(dev,
+ node_id, parent_node_id, priority, weight, error);
+}
+
+/* Update node private shaper */
+int rte_tm_node_shaper_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_shaper_update)(dev,
+ node_id, shaper_profile_id, error);
+}
+
+/* Update node shared shapers */
+int rte_tm_node_shared_shaper_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t shared_shaper_id,
+ int add,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_shared_shaper_update)(dev,
+ node_id, shared_shaper_id, add, error);
+}
+
+/* Update node stats */
+int rte_tm_node_stats_update(uint8_t port_id,
+ uint32_t node_id,
+ uint64_t stats_mask,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_stats_update)(dev,
+ node_id, stats_mask, error);
+}
+
+/* Update WFQ weight mode */
+int rte_tm_node_wfq_weight_mode_update(uint8_t port_id,
+ uint32_t node_id,
+ int *wfq_weight_mode,
+ uint32_t n_sp_priorities,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_wfq_weight_mode_update)(dev,
+ node_id, wfq_weight_mode, n_sp_priorities, error);
+}
+
+/* Update node congestion management mode */
+int rte_tm_node_cman_update(uint8_t port_id,
+ uint32_t node_id,
+ enum rte_tm_cman_mode cman,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_cman_update)(dev,
+ node_id, cman, error);
+}
+
+/* Update node private WRED context */
+int rte_tm_node_wred_context_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_wred_context_update)(dev,
+ node_id, wred_profile_id, error);
+}
+
+/* Update node shared WRED context */
+int rte_tm_node_shared_wred_context_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t shared_wred_context_id,
+ int add,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_shared_wred_context_update)(dev,
+ node_id, shared_wred_context_id, add, error);
+}
+
+/* Read and/or clear stats counters for specific node */
+int rte_tm_node_stats_read(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_node_stats *stats,
+ uint64_t *stats_mask,
+ int clear,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, node_stats_read)(dev,
+ node_id, stats, stats_mask, clear, error);
+}
+
+/* Packet marking - VLAN DEI */
+int rte_tm_mark_vlan_dei(uint8_t port_id,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, mark_vlan_dei)(dev,
+ mark_green, mark_yellow, mark_red, error);
+}
+
+/* Packet marking - IPv4/IPv6 ECN */
+int rte_tm_mark_ip_ecn(uint8_t port_id,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, mark_ip_ecn)(dev,
+ mark_green, mark_yellow, mark_red, error);
+}
+
+/* Packet marking - IPv4/IPv6 DSCP */
+int rte_tm_mark_ip_dscp(uint8_t port_id,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ return RTE_TM_FUNC(port_id, mark_ip_dscp)(dev,
+ mark_green, mark_yellow, mark_red, error);
+}
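Every wrapper above follows the same shape: resolve the driver ops via
rte_tm_ops_get(), fail with -ENOSYS through rte_tm_error_set() when the
callback is absent, otherwise forward to the PMD. On the caller side this
yields a uniform error-handling idiom; a sketch, assuming struct rte_tm_error
carries a message field the same way rte_flow's error struct does (the field
name is an assumption, as the struct definition is outside this excerpt):

	struct rte_tm_error err;
	int ret = rte_tm_node_suspend(port_id, node_id, &err);

	if (ret)
		printf("rte_tm error %d: %s\n", ret,
			err.message ? err.message : rte_strerror(-ret));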
diff --git a/lib/librte_ether/rte_tm.h b/lib/librte_ether/rte_tm.h
new file mode 100644
index 0000000..22167c2
--- /dev/null
+++ b/lib/librte_ether/rte_tm.h
@@ -0,0 +1,1923 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_TM_H__
+#define __INCLUDE_RTE_TM_H__
+
+/**
+ * @file
+ * RTE Generic Traffic Manager API
+ *
+ * This interface provides the ability to configure the traffic manager in a
+ * generic way. It includes features such as: hierarchical scheduling,
+ * traffic shaping, congestion management, packet marking, etc.
+ */
+
+#include <stdint.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Ethernet framing overhead.
+ *
+ * Overhead fields per Ethernet frame:
+ * 1. Preamble: 7 bytes;
+ * 2. Start of Frame Delimiter (SFD): 1 byte;
+ * 3. Inter-Frame Gap (IFG): 12 bytes.
+ *
+ * One of the typical values for the *pkt_length_adjust* field of the shaper
+ * profile.
+ *
+ * @see struct rte_tm_shaper_params
+ *
+ */
+#define RTE_TM_ETH_FRAMING_OVERHEAD 20
+
+/**
+ * Ethernet framing overhead including the Frame Check Sequence (FCS) field.
+ * Useful when FCS is generated and added at the end of the Ethernet frame on
+ * TX side without any SW intervention.
+ *
+ * One of the typical values for the pkt_length_adjust field of the shaper
+ * profile.
+ *
+ * @see struct rte_tm_shaper_params
+ */
+#define RTE_TM_ETH_FRAMING_OVERHEAD_FCS 24
+
+/**< Invalid WRED profile ID */
+#define RTE_TM_WRED_PROFILE_ID_NONE UINT32_MAX
+
+/**< Invalid shaper profile ID */
+#define RTE_TM_SHAPER_PROFILE_ID_NONE UINT32_MAX
+
+/**< Node ID for the parent of the root node */
+#define RTE_TM_NODE_ID_NULL UINT32_MAX
+
+/**
+ * Color
+ */
+enum rte_tm_color {
+ RTE_TM_GREEN = 0, /**< Green */
+ RTE_TM_YELLOW, /**< Yellow */
+ RTE_TM_RED, /**< Red */
+ RTE_TM_COLORS /**< Number of colors */
+};
+
+/**
+ * Node statistics counter type
+ */
+enum rte_tm_stats_type {
+ /**< Number of packets scheduled from current node. */
+ RTE_TM_STATS_N_PKTS = 1 << 0,
+
+ /**< Number of bytes scheduled from current node. */
+ RTE_TM_STATS_N_BYTES = 1 << 1,
+
+ /**< Number of green packets dropped by current leaf node. */
+ RTE_TM_STATS_N_PKTS_GREEN_DROPPED = 1 << 2,
+
+ /**< Number of yellow packets dropped by current leaf node. */
+ RTE_TM_STATS_N_PKTS_YELLOW_DROPPED = 1 << 3,
+
+ /**< Number of red packets dropped by current leaf node. */
+ RTE_TM_STATS_N_PKTS_RED_DROPPED = 1 << 4,
+
+ /**< Number of green bytes dropped by current leaf node. */
+ RTE_TM_STATS_N_BYTES_GREEN_DROPPED = 1 << 5,
+
+ /**< Number of yellow bytes dropped by current leaf node. */
+ RTE_TM_STATS_N_BYTES_YELLOW_DROPPED = 1 << 6,
+
+ /**< Number of red bytes dropped by current leaf node. */
+ RTE_TM_STATS_N_BYTES_RED_DROPPED = 1 << 7,
+
+ /**< Number of packets currently waiting in the packet queue of current
+ * leaf node.
+ */
+ RTE_TM_STATS_N_PKTS_QUEUED = 1 << 8,
+
+ /**< Number of bytes currently waiting in the packet queue of current
+ * leaf node.
+ */
+ RTE_TM_STATS_N_BYTES_QUEUED = 1 << 9,
+};
+
+/**
+ * Node statistics counters
+ */
+struct rte_tm_node_stats {
+ /**< Number of packets scheduled from current node. */
+ uint64_t n_pkts;
+
+ /**< Number of bytes scheduled from current node. */
+ uint64_t n_bytes;
+
+ /**< Statistics counters for leaf nodes only. */
+ struct {
+ /**< Number of packets dropped by current leaf node per each
+ * color.
+ */
+ uint64_t n_pkts_dropped[RTE_TM_COLORS];
+
+ /**< Number of bytes dropped by current leaf node per each
+ * color.
+ */
+ uint64_t n_bytes_dropped[RTE_TM_COLORS];
+
+ /**< Number of packets currently waiting in the packet queue of
+ * current leaf node.
+ */
+ uint64_t n_pkts_queued;
+
+ /**< Number of bytes currently waiting in the packet queue of
+ * current leaf node.
+ */
+ uint64_t n_bytes_queued;
+ } leaf;
+};
+
+/**
+ * Traffic manager dynamic updates
+ */
+enum rte_tm_dynamic_update_type {
+ /**< Dynamic parent node update. The new parent node is located on same
+ * hierarchy level as the former parent node. Consequently, the node
+ * whose parent is changed preserves its hierarchy level.
+ */
+ RTE_TM_UPDATE_NODE_PARENT_KEEP_LEVEL = 1 << 0,
+
+ /**< Dynamic parent node update. The new parent node is located on
+ * different hierarchy level than the former parent node. Consequently,
+ * the node whose parent is changed also changes its hierarchy level.
+ */
+ RTE_TM_UPDATE_NODE_PARENT_CHANGE_LEVEL = 1 << 1,
+
+ /**< Dynamic node add/delete. */
+ RTE_TM_UPDATE_NODE_ADD_DELETE = 1 << 2,
+
+ /**< Suspend/resume nodes. */
+ RTE_TM_UPDATE_NODE_SUSPEND_RESUME = 1 << 3,
+
+ /**< Dynamic switch between byte-based and packet-based WFQ weights. */
+ RTE_TM_UPDATE_NODE_WFQ_WEIGHT_MODE = 1 << 4,
+
+ /**< Dynamic update on number of SP priorities. */
+ RTE_TM_UPDATE_NODE_N_SP_PRIORITIES = 1 << 5,
+
+ /**< Dynamic update of congestion management mode for leaf nodes. */
+ RTE_TM_UPDATE_NODE_CMAN = 1 << 6,
+
+ /**< Dynamic update of the set of enabled stats counter types. */
+ RTE_TM_UPDATE_NODE_STATS = 1 << 7,
+};
+
+/**
+ * Traffic manager capabilities
+ */
+struct rte_tm_capabilities {
+ /**< Maximum number of nodes. */
+ uint32_t n_nodes_max;
+
+ /**< Maximum number of levels (i.e. number of nodes connecting the root
+ * node with any leaf node, including the root and the leaf).
+ */
+ uint32_t n_levels_max;
+
+ /**< When non-zero, this flag indicates that all the non-leaf nodes
+ * (with the exception of the root node) have identical capability set.
+ */
+ int non_leaf_nodes_identical;
+
+ /**< When non-zero, this flag indicates that all the leaf nodes have
+ * identical capability set.
+ */
+ int leaf_nodes_identical;
+
+ /**< Maximum number of shapers, either private or shared. In case the
+ * implementation does not share any resources between private and
+ * shared shapers, it is typically equal to the sum of
+ * *shaper_private_n_max* and *shaper_shared_n_max*.
+ */
+ uint32_t shaper_n_max;
+
+ /**< Maximum number of private shapers. Indicates the maximum number of
+ * nodes that can concurrently have their private shaper enabled.
+ */
+ uint32_t shaper_private_n_max;
+
+ /**< Maximum number of private shapers that support dual rate shaping.
+ * Indicates the maximum number of nodes that can concurrently have
+ * their private shaper enabled with dual rate support. Only valid when
+ * private shapers are supported. The value of zero indicates that dual
+ * rate shaping is not available for private shapers. The maximum value
+ * is *shaper_private_n_max*.
+ */
+ int shaper_private_dual_rate_n_max;
+
+ /**< Minimum committed/peak rate (bytes per second) for any private
+ * shaper. Valid only when private shapers are supported.
+ */
+ uint64_t shaper_private_rate_min;
+
+ /**< Maximum committed/peak rate (bytes per second) for any private
+ * shaper. Valid only when private shapers are supported.
+ */
+ uint64_t shaper_private_rate_max;
+
+ /**< Maximum number of shared shapers. The value of zero indicates that
+ * shared shapers are not supported.
+ */
+ uint32_t shaper_shared_n_max;
+
+ /**< Maximum number of nodes that can share the same shared shaper.
+ * Only valid when shared shapers are supported.
+ */
+ uint32_t shaper_shared_n_nodes_per_shaper_max;
+
+ /**< Maximum number of shared shapers a node can be part of. This
+ * parameter indicates that there is at least one node that can be
+ * configured with this many shared shapers, which might not be true for
+ * all the nodes. Only valid when shared shapers are supported, in which
+ * case it ranges from 1 to *shaper_shared_n_max*.
+ */
+ uint32_t shaper_shared_n_shapers_per_node_max;
+
+ /**< Maximum number of shared shapers that can be configured with dual
+ * rate shaping. The value of zero indicates that dual rate shaping
+ * support is not available for shared shapers.
+ */
+ uint32_t shaper_shared_dual_rate_n_max;
+
+ /**< Minimum committed/peak rate (bytes per second) for any shared
+ * shaper. Only valid when shared shapers are supported.
+ */
+ uint64_t shaper_shared_rate_min;
+
+ /**< Maximum committed/peak rate (bytes per second) for any shared
+ * shaper. Only valid when shared shapers are supported.
+ */
+ uint64_t shaper_shared_rate_max;
+
+ /**< Minimum value allowed for packet length adjustment for any private
+ * or shared shaper.
+ */
+ int shaper_pkt_length_adjust_min;
+
+ /**< Maximum value allowed for packet length adjustment for any private
+ * or shared shaper.
+ */
+ int shaper_pkt_length_adjust_max;
+
+ /**< Maximum number of children nodes. This parameter indicates that
+ * there is at least one non-leaf node that can be configured with this
+ * many children nodes, which might not be true for all the non-leaf
+ * nodes.
+ */
+ uint32_t sched_n_children_max;
+
+ /**< Maximum number of supported priority levels. This parameter
+ * indicates that there is at least one non-leaf node that can be
+ * configured with this many priority levels for managing its children
+ * nodes, which might not be true for all the non-leaf nodes. The value
+ * of zero is invalid. The value of 1 indicates that only priority 0 is
+ * supported, which essentially means that Strict Priority (SP)
+ * algorithm is not supported.
+ */
+ uint32_t sched_sp_n_priorities_max;
+
+ /**< Maximum number of sibling nodes that can have the same priority at
+ * any given time, i.e. maximum size of the WFQ sibling node group. This
+ * parameter indicates there is at least one non-leaf node that meets
+ * this condition, which might not be true for all the non-leaf nodes.
+ * The value of zero is invalid. The value of 1 indicates that WFQ
+ * algorithm is not supported. The maximum value is
+ * *sched_n_children_max*.
+ */
+ uint32_t sched_wfq_n_children_per_group_max;
+
+ /**< Maximum number of priority levels that can have more than one child
+ * node at any given time, i.e. maximum number of WFQ sibling node
+ * groups that have two or more members. This parameter indicates there
+ * is at least one non-leaf node that meets this condition, which might
+ * not be true for all the non-leaf nodes. The value of zero states that
+ * WFQ algorithm is not supported. The value of 1 indicates that
+ * (*sched_sp_n_priorities_max* - 1) priority levels have at most one
+ * child node, so there can be only one priority level with two or
+ * more sibling nodes making up a WFQ group. The maximum value is:
+ * min(floor(*sched_n_children_max* / 2), *sched_sp_n_priorities_max*).
+ */
+ uint32_t sched_wfq_n_groups_max;
+
+ /**< Maximum WFQ weight. The value of 1 indicates that all sibling nodes
+ * with same priority have the same WFQ weight, so WFQ is reduced to FQ.
+ */
+ uint32_t sched_wfq_weight_max;
+
+ /**< Head drop algorithm support. When non-zero, this parameter
+ * indicates that there is at least one leaf node that supports the head
+ * drop algorithm, which might not be true for all the leaf nodes.
+ */
+ int cman_head_drop_supported;
+
+ /**< Maximum number of WRED contexts, either private or shared. In case
+ * the implementation does not share any resources between private and
+ * shared WRED contexts, it is typically equal to the sum of
+ * *cman_wred_context_private_n_max* and
+ * *cman_wred_context_shared_n_max*.
+ */
+ uint32_t cman_wred_context_n_max;
+
+ /**< Maximum number of private WRED contexts. Indicates the maximum
+ * number of leaf nodes that can concurrently have their private WRED
+ * context enabled.
+ */
+ uint32_t cman_wred_context_private_n_max;
+
+ /**< Maximum number of shared WRED contexts. The value of zero
+ * indicates that shared WRED contexts are not supported.
+ */
+ uint32_t cman_wred_context_shared_n_max;
+
+ /**< Maximum number of leaf nodes that can share the same WRED context.
+ * Only valid when shared WRED contexts are supported.
+ */
+ uint32_t cman_wred_context_shared_n_nodes_per_context_max;
+
+ /**< Maximum number of shared WRED contexts a leaf node can be part of.
+ * This parameter indicates that there is at least one leaf node that
+ * can be configured with this many shared WRED contexts, which might
+ * not be true for all the leaf nodes. Only valid when shared WRED
+ * contexts are supported, in which case it ranges from 1 to
+ * *cman_wred_context_shared_n_max*.
+ */
+ uint32_t cman_wred_context_shared_n_contexts_per_node_max;
+
+ /**< Support for VLAN DEI packet marking (per color). */
+ int mark_vlan_dei_supported[RTE_TM_COLORS];
+
+ /**< Support for IPv4/IPv6 ECN marking of TCP packets (per color). */
+ int mark_ip_ecn_tcp_supported[RTE_TM_COLORS];
+
+ /**< Support for IPv4/IPv6 ECN marking of SCTP packets (per color). */
+ int mark_ip_ecn_sctp_supported[RTE_TM_COLORS];
+
+ /**< Support for IPv4/IPv6 DSCP packet marking (per color). */
+ int mark_ip_dscp_supported[RTE_TM_COLORS];
+
+ /**< Set of supported dynamic update operations.
+ * @see enum rte_tm_dynamic_update_type
+ */
+ uint64_t dynamic_update_mask;
+
+ /**< Set of supported statistics counter types.
+ * @see enum rte_tm_stats_type
+ */
+ uint64_t stats_mask;
+};
+
+/**
+ * Traffic manager level capabilities
+ */
+struct rte_tm_level_capabilities {
+ /**< Maximum number of nodes for the current hierarchy level. */
+ uint32_t n_nodes_max;
+
+ /**< Maximum number of non-leaf nodes for the current hierarchy level.
+ * The value of 0 indicates that current level only supports leaf
+ * nodes. The maximum value is *n_nodes_max*.
+ */
+ uint32_t n_nodes_nonleaf_max;
+
+ /**< Maximum number of leaf nodes for the current hierarchy level. The
+ * value of 0 indicates that current level only supports non-leaf
+ * nodes. The maximum value is *n_nodes_max*.
+ */
+ uint32_t n_nodes_leaf_max;
+
+ /**< When non-zero, this flag indicates that all the non-leaf nodes on
+ * this level have identical capability set. Valid only when
+ * *n_nodes_nonleaf_max* is non-zero.
+ */
+ int non_leaf_nodes_identical;
+
+ /**< When non-zero, this flag indicates that all the leaf nodes on this
+ * level have identical capability set. Valid only when
+ * *n_nodes_leaf_max* is non-zero.
+ */
+ int leaf_nodes_identical;
+
+ union {
+ /**< Items valid only for the non-leaf nodes on this level. */
+ struct {
+ /**< Private shaper support. When non-zero, it indicates
+ * there is at least one non-leaf node on this level
+ * with private shaper support, which may not be the
+ * case for all the non-leaf nodes on this level.
+ */
+ int shaper_private_supported;
+
+ /**< Dual rate support for private shaper. Valid only
+ * when private shaper is supported for the non-leaf
+ * nodes on the current level. When non-zero, it
+ * indicates there is at least one non-leaf node on this
+ * level with dual rate private shaper support, which
+ * may not be the case for all the non-leaf nodes on
+ * this level.
+ */
+ int shaper_private_dual_rate_supported;
+
+ /**< Minimum committed/peak rate (bytes per second) for
+ * private shapers of the non-leaf nodes of this level.
+ * Valid only when private shaper is supported on this
+ * level.
+ */
+ uint64_t shaper_private_rate_min;
+
+ /**< Maximum committed/peak rate (bytes per second) for
+ * private shapers of the non-leaf nodes on this level.
+ * Valid only when private shaper is supported on this
+ * level.
+ */
+ uint64_t shaper_private_rate_max;
+
+ /**< Maximum number of shared shapers that any non-leaf
+ * node on this level can be part of. The value of zero
+ * indicates that shared shapers are not supported by
+ * the non-leaf nodes on this level. When non-zero, it
+ * indicates there is at least one non-leaf node on this
+ * level that meets this condition, which may not be the
+ * case for all the non-leaf nodes on this level.
+ */
+ uint32_t shaper_shared_n_max;
+
+ /**< Maximum number of children nodes. This parameter
+ * indicates that there is at least one non-leaf node on
+ * this level that can be configured with this many
+ * children nodes, which might not be true for all the
+ * non-leaf nodes on this level.
+ */
+ uint32_t sched_n_children_max;
+
+ /**< Maximum number of supported priority levels. This
+ * parameter indicates that there is at least one
+ * non-leaf node on this level that can be configured
+ * with this many priority levels for managing its
+ * children nodes, which might not be true for all the
+ * non-leaf nodes on this level. The value of zero is
+ * invalid. The value of 1 indicates that only priority
+ * 0 is supported, which essentially means that Strict
+ * Priority (SP) algorithm is not supported on this
+ * level.
+ */
+ uint32_t sched_sp_n_priorities_max;
+
+ /**< Maximum number of sibling nodes that can have the
+ * same priority at any given time, i.e. maximum size of
+ * the WFQ sibling node group. This parameter indicates
+ * there is at least one non-leaf node on this level
+ * that meets this condition, which may not be true for
+ * all the non-leaf nodes on this level. The value of
+ * zero is invalid. The value of 1 indicates that WFQ
+ * algorithm is not supported on this level. The maximum
+ * value is *sched_n_children_max*.
+ */
+ uint32_t sched_wfq_n_children_per_group_max;
+
+ /**< Maximum number of priority levels that can have
+ * more than one child node at any given time, i.e.
+ * maximum number of WFQ sibling node groups that
+ * have two or more members. This parameter indicates
+ * there is at least one non-leaf node on this level
+ * that meets this condition, which might not be true
+ * for all the non-leaf nodes. The value of zero states
+ * that WFQ algorithm is not supported on this level.
+ * The value of 1 indicates that
+ * (*sched_sp_n_priorities_max* - 1) priority levels on
+ * this level have at most one child node, so there can
+ * be only one priority level with two or more sibling
+ * nodes making up a WFQ group on this level. The
+ * maximum value is:
+ * min(floor(*sched_n_children_max* / 2),
+ * *sched_sp_n_priorities_max*).
+ */
+ uint32_t sched_wfq_n_groups_max;
+
+ /**< Maximum WFQ weight. The value of 1 indicates that
+ * all sibling nodes on this level with same priority
+ * have the same WFQ weight, so on this level WFQ is
+ * reduced to FQ.
+ */
+ uint32_t sched_wfq_weight_max;
+
+ /**< Mask of statistics counter types supported by the
+ * non-leaf nodes on this level. Every supported
+ * statistics counter type is supported by at least one
+ * non-leaf node on this level, which may not be true
+ * for all the non-leaf nodes on this level.
+ * @see enum rte_tm_stats_type
+ */
+ uint64_t stats_mask;
+ } nonleaf;
+
+ /**< Items valid only for the leaf nodes on this level. */
+ struct {
+ /**< Private shaper support. When non-zero, it indicates
+ * there is at least one leaf node on this level with
+ * private shaper support, which may not be the case for
+ * all the leaf nodes on this level.
+ */
+ int shaper_private_supported;
+
+ /**< Dual rate support for private shaper. Valid only
+ * when private shaper is supported for the leaf nodes
+ * on this level. When non-zero, it indicates there is
+ * at least one leaf node on this level with dual rate
+ * private shaper support, which may not be the case for
+ * all the leaf nodes on this level.
+ */
+ int shaper_private_dual_rate_supported;
+
+ /**< Minimum committed/peak rate (bytes per second) for
+ * private shapers of the leaf nodes of this level.
+ * Valid only when private shaper is supported for the
+ * leaf nodes on this level.
+ */
+ uint64_t shaper_private_rate_min;
+
+ /**< Maximum committed/peak rate (bytes per second) for
+ * private shapers of the leaf nodes on this level.
+ * Valid only when private shaper is supported for the
+ * leaf nodes on this level.
+ */
+ uint64_t shaper_private_rate_max;
+
+ /**< Maximum number of shared shapers that any leaf node
+ * on this level can be part of. The value of zero
+ * indicates that shared shapers are not supported by
+ * the leaf nodes on this level. When non-zero, it
+ * indicates there is at least one leaf node on this
+ * level that meets this condition, which may not be the
+ * case for all the leaf nodes on this level.
+ */
+ uint32_t shaper_shared_n_max;
+
+ /**< Head drop algorithm support. When non-zero, this
+ * parameter indicates that there is at least one leaf
+ * node on this level that supports the head drop
+ * algorithm, which might not be true for all the leaf
+ * nodes on this level.
+ */
+ int cman_head_drop_supported;
+
+ /**< Private WRED context support. When non-zero, it
+ * indicates there is at least one leaf node on this level
+ * with private WRED context support, which may not be
+ * true for all the leaf nodes on this level.
+ */
+ int cman_wred_context_private_supported;
+
+ /**< Maximum number of shared WRED contexts that any
+ * leaf node on this level can be part of. The value of
+ * zero indicates that shared WRED contexts are not
+ * supported by the leaf nodes on this level. When
+ * non-zero, it indicates there is at least one leaf
+ * node on this level that meets this condition, which
+ * may not be the case for all the leaf nodes on this
+ * level.
+ */
+ uint32_t cman_wred_context_shared_n_max;
+
+ /**< Mask of statistics counter types supported by the
+ * leaf nodes on this level. Every supported statistics
+ * counter type is supported by at least one leaf node
+ * on this level, which may not be true for all the leaf
+ * nodes on this level.
+ * @see enum rte_tm_stats_type
+ */
+ uint64_t stats_mask;
+ } leaf;
+ };
+};
+
+/**
+ * Traffic manager node capabilities
+ */
+struct rte_tm_node_capabilities {
+ /**< Private shaper support for the current node. */
+ int shaper_private_supported;
+
+ /**< Dual rate shaping support for private shaper of current node.
+ * Valid only when private shaper is supported by the current node.
+ */
+ int shaper_private_dual_rate_supported;
+
+ /**< Minimum committed/peak rate (bytes per second) for private
+ * shaper of current node. Valid only when private shaper is supported
+ * by the current node.
+ */
+ uint64_t shaper_private_rate_min;
+
+ /**< Maximum committed/peak rate (bytes per second) for private
+ * shaper of current node. Valid only when private shaper is supported
+ * by the current node.
+ */
+ uint64_t shaper_private_rate_max;
+
+ /**< Maximum number of shared shapers the current node can be part of.
+ * The value of zero indicates that shared shapers are not supported by
+ * the current node.
+ */
+ uint32_t shaper_shared_n_max;
+
+ union {
+ /**< Items valid only for non-leaf nodes. */
+ struct {
+ /**< Maximum number of children nodes. */
+ uint32_t sched_n_children_max;
+
+ /**< Maximum number of supported priority levels. The
+ * value of zero is invalid. The value of 1 indicates
+ * that only priority 0 is supported, which essentially
+ * means that Strict Priority (SP) algorithm is not
+ * supported.
+ */
+ uint32_t sched_sp_n_priorities_max;
+
+ /**< Maximum number of sibling nodes that can have the
+ * same priority at any given time, i.e. maximum size
+ * of the WFQ sibling node group. The value of zero
+ * is invalid. The value of 1 indicates that WFQ
+ * algorithm is not supported. The maximum value is
+ * *sched_n_children_max*.
+ */
+ uint32_t sched_wfq_n_children_per_group_max;
+
+ /**< Maximum number of priority levels that can have
+ * more than one child node at any given time, i.e.
+ * maximum number of WFQ sibling node groups that have
+ * two or more members. The value of zero states that
+ * WFQ algorithm is not supported. The value of 1
+ * indicates that (*sched_sp_n_priorities_max* - 1)
+ * priority levels have at most one child node, so there
+ * can be only one priority level with two or more
+ * sibling nodes making up a WFQ group. The maximum
+ * value is: min(floor(*sched_n_children_max* / 2),
+ * *sched_sp_n_priorities_max*).
+ */
+ uint32_t sched_wfq_n_groups_max;
+
+ /**< Maximum WFQ weight. The value of 1 indicates that
+ * all sibling nodes with same priority have the same
+ * WFQ weight, so WFQ is reduced to FQ.
+ */
+ uint32_t sched_wfq_weight_max;
+ } nonleaf;
+
+ /**< Items valid only for leaf nodes. */
+ struct {
+ /**< Head drop algorithm support for current node. */
+ int cman_head_drop_supported;
+
+ /**< Private WRED context support for current node. */
+ int cman_wred_context_private_supported;
+
+ /**< Maximum number of shared WRED contexts the current
+ * node can be part of. The value of zero indicates that
+ * shared WRED contexts are not supported by the current
+ * node.
+ */
+ uint32_t cman_wred_context_shared_n_max;
+ } leaf;
+ };
+
+ /**< Mask of statistics counter types supported by the current node.
+ * @see enum rte_tm_stats_type
+ */
+ uint64_t stats_mask;
+};
+
+/**
+ * Congestion management (CMAN) mode
+ *
+ * This is used for controlling the admission of packets into a packet queue or
+ * group of packet queues on congestion. When a new packet needs to be written
+ * into the current queue while the queue is full, the *tail drop* algorithm
+ * drops the new packet while leaving the queue unmodified, as opposed to the
+ * *head drop* algorithm, which drops the packet at the head of the queue (the
+ * oldest packet waiting in the queue) and admits the new packet at the tail of
+ * the queue.
+ *
+ * The *Random Early Detection (RED)* algorithm works by proactively dropping
+ * more and more input packets as the queue occupancy builds up. When the queue
+ * is full or almost full, RED effectively works as *tail drop*. The *Weighted
+ * RED* algorithm uses a separate set of RED thresholds for each packet color.
+ */
+enum rte_tm_cman_mode {
+ RTE_TM_CMAN_TAIL_DROP = 0, /**< Tail drop */
+ RTE_TM_CMAN_HEAD_DROP, /**< Head drop */
+ RTE_TM_CMAN_WRED, /**< Weighted Random Early Detection (WRED) */
+};
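
As a hedged illustration (assuming <stdio.h> is included), an application could switch a leaf node to head drop at runtime with rte_tm_node_cman_update(), declared further down in this header; the port and node IDs below are hypothetical and the call only succeeds where the head drop capability is advertised:

    /* Sketch only: switch leaf node 0 on port 0 to head drop. */
    struct rte_tm_error err;
    int ret = rte_tm_node_cman_update(0, 0, RTE_TM_CMAN_HEAD_DROP, &err);
    if (ret != 0)
        printf("cman update failed: %s\n",
            err.message ? err.message : "unknown");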
+
+/**
+ * Random Early Detection (RED) profile
+ */
+struct rte_tm_red_params {
+ /**< Minimum queue threshold */
+ uint16_t min_th;
+
+ /**< Maximum queue threshold */
+ uint16_t max_th;
+
+ /**< Inverse of packet marking probability maximum value (maxp), i.e.
+ * maxp_inv = 1 / maxp
+ */
+ uint16_t maxp_inv;
+
+ /**< Negated log2 of queue weight (wq), i.e. wq = 1 / (2 ^ wq_log2) */
+ uint16_t wq_log2;
+};
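
To make the two encodings concrete: a maximum marking probability maxp of 1/10 is stored as maxp_inv = 10, and a queue weight wq of 1/512 as wq_log2 = 9, since 2^9 = 512. A minimal sketch with illustrative threshold values:

    /* Illustrative only: drop probability ramps up between queue
     * occupancy 32 and 128. */
    struct rte_tm_red_params red = {
        .min_th = 32,
        .max_th = 128,
        .maxp_inv = 10, /* maxp = 1 / 10 */
        .wq_log2 = 9,   /* wq = 1 / (2 ^ 9) = 1 / 512 */
    };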
+
+/**
+ * Weighted RED (WRED) profile
+ *
+ * Multiple WRED contexts can share the same WRED profile. Each leaf node with
+ * WRED enabled as its congestion management mode has zero or one private WRED
+ * context (only one leaf node using it) and/or zero, one or several shared
+ * WRED contexts (multiple leaf nodes use the same WRED context). A private
+ * WRED context is used to perform congestion management for a single leaf
+ * node, while a shared WRED context is used to perform congestion management
+ * for a group of leaf nodes.
+ */
+struct rte_tm_wred_params {
+ /**< One set of RED parameters per packet color */
+ struct rte_tm_red_params red_params[RTE_TM_COLORS];
+};
+
+/**
+ * Token bucket
+ */
+struct rte_tm_token_bucket {
+ /**< Token bucket rate (bytes per second) */
+ uint64_t rate;
+
+ /**< Token bucket size (bytes), a.k.a. max burst size */
+ uint64_t size;
+};
+
+/**
+ * Shaper (rate limiter) profile
+ *
+ * Multiple shaper instances can share the same shaper profile. Each node has
+ * zero or one private shaper (only one node using it) and/or zero, one or
+ * several shared shapers (multiple nodes use the same shaper instance).
+ * A private shaper is used to perform traffic shaping for a single node, while
+ * a shared shaper is used to perform traffic shaping for a group of nodes.
+ *
+ * Single rate shapers use a single token bucket. A single rate shaper can be
+ * configured by setting the rate of the committed bucket to zero, which
+ * effectively disables this bucket. The peak bucket is used to limit the rate
+ * and the burst size for the current shaper.
+ *
+ * Dual rate shapers use both the committed and the peak token buckets. The
+ * rate of the peak bucket has to be bigger than zero, as well as greater than
+ * or equal to the rate of the committed bucket.
+ */
+struct rte_tm_shaper_params {
+ /**< Committed token bucket */
+ struct rte_tm_token_bucket committed;
+
+ /**< Peak token bucket */
+ struct rte_tm_token_bucket peak;
+
+ /**< Signed value to be added to the length of each packet for the
+ * purpose of shaping. Can be used to correct the packet length with
+ * the framing overhead bytes that are also consumed on the wire (e.g.
+ * RTE_TM_ETH_FRAMING_OVERHEAD_FCS).
+ */
+ int32_t pkt_length_adjust;
+};
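
For example, a single rate shaper limiting a node to about 1 Gbps could be described as below; this is a sketch, the numbers are arbitrary, and the committed bucket is disabled by zeroing its rate as explained above:

    struct rte_tm_shaper_params sp = {
        .committed = { .rate = 0, .size = 0 }, /* disabled: single rate */
        .peak = {
            .rate = 125000000, /* 1 Gbps expressed in bytes per second */
            .size = 4096,      /* max burst of 4 KB */
        },
        /* Account for framing overhead consumed on the wire. */
        .pkt_length_adjust = RTE_TM_ETH_FRAMING_OVERHEAD_FCS,
    };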
+
+/**
+ * Node parameters
+ *
+ * Each non-leaf node has multiple inputs (its children nodes) and single output
+ * (which is input to its parent node). It arbitrates its inputs using Strict
+ * Priority (SP) and Weighted Fair Queuing (WFQ) algorithms to schedule input
+ * packets to its output while observing its shaping (rate limiting)
+ * constraints.
+ *
+ * Algorithms such as Weighted Round Robin (WRR), Byte-level WRR, Deficit WRR
+ * (DWRR), etc. are considered approximations of the WFQ ideal and are
+ * assimilated to WFQ, although an associated implementation-dependent trade-off
+ * on accuracy, performance and resource usage might exist.
+ *
+ * Children nodes with different priorities are scheduled using the SP algorithm
+ * based on their priority, with zero (0) as the highest priority. Children with
+ * the same priority are scheduled using the WFQ algorithm according to their
+ * weights. The WFQ weight of a given child node is relative to the sum of the
+ * weights of all its sibling nodes that have the same priority, with one (1) as
+ * the lowest weight. For each SP priority, the WFQ weight mode can be set as
+ * either byte-based or packet-based.
+ *
+ * Each leaf node sits on top of a TX queue of the current Ethernet port. Hence,
+ * the leaf nodes are predefined, with their node IDs set to 0 .. (N-1), where N
+ * is the number of TX queues configured for the current Ethernet port. The
+ * non-leaf nodes have their IDs generated by the application.
+ */
+struct rte_tm_node_params {
+ /**< Shaper profile for the private shaper. The absence of the private
+ * shaper for the current node is indicated by setting this parameter
+ * to RTE_TM_SHAPER_PROFILE_ID_NONE.
+ */
+ uint32_t shaper_profile_id;
+
+ /**< User allocated array of valid shared shaper IDs. */
+ uint32_t *shared_shaper_id;
+
+ /**< Number of shared shaper IDs in the *shared_shaper_id* array. */
+ uint32_t n_shared_shapers;
+
+ union {
+ /**< Parameters only valid for non-leaf nodes. */
+ struct {
+ /**< WFQ weight mode for each SP priority. When NULL, it
+ * indicates that WFQ is to be used for all priorities.
+ * When non-NULL, it points to a pre-allocated array of
+ * *n_sp_priorities* values, with non-zero value for
+ * byte-mode and zero for packet-mode.
+ */
+ int *wfq_weight_mode;
+
+ /**< Number of SP priorities. */
+ uint32_t n_sp_priorities;
+ } nonleaf;
+
+ /**< Parameters only valid for leaf nodes. */
+ struct {
+ /**< Congestion management mode */
+ enum rte_tm_cman_mode cman;
+
+ /**< WRED parameters (only valid when *cman* is set to
+ * WRED).
+ */
+ struct {
+ /**< WRED profile for private WRED context. The
+ * absence of a private WRED context for the
+ * current leaf node is indicated by value
+ * RTE_TM_WRED_PROFILE_ID_NONE.
+ */
+ uint32_t wred_profile_id;
+
+ /**< User allocated array of shared WRED context
+ * IDs. When set to NULL, it indicates that the
+ * current leaf node should not currently be
+ * part of any shared WRED contexts.
+ */
+ uint32_t *shared_wred_context_id;
+
+ /**< Number of elements in the
+ * *shared_wred_context_id* array. Only valid
+ * when *shared_wred_context_id* is non-NULL,
+ * in which case it should be non-zero.
+ */
+ uint32_t n_shared_wred_contexts;
+ } wred;
+ } leaf;
+ };
+
+ /**< Mask of statistics counter types to be enabled for this node. This
+ * needs to be a subset of the statistics counter types available for
+ * the current node. Any statistics counter type not included in this
+ * set is to be disabled for the current node.
+ * @see enum rte_tm_stats_type
+ */
+ uint64_t stats_mask;
+};
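
A hedged sketch of filling this structure for a leaf node that relies on a private WRED context; every ID below is hypothetical:

    struct rte_tm_node_params np = {
        .shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE, /* no private shaper */
        .shared_shaper_id = NULL,
        .n_shared_shapers = 0,
        .leaf = {
            .cman = RTE_TM_CMAN_WRED,
            .wred = {
                .wred_profile_id = 7, /* made-up WRED profile ID */
                .shared_wred_context_id = NULL,
                .n_shared_wred_contexts = 0,
            },
        },
        .stats_mask = 0, /* no counters enabled */
    };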
+
+/**
+ * Verbose error types.
+ *
+ * Most of them provide the type of the object referenced by struct
+ * rte_tm_error::cause.
+ */
+enum rte_tm_error_type {
+ RTE_TM_ERROR_TYPE_NONE, /**< No error. */
+ RTE_TM_ERROR_TYPE_UNSPECIFIED, /**< Cause unspecified. */
+ RTE_TM_ERROR_TYPE_CAPABILITIES,
+ RTE_TM_ERROR_TYPE_LEVEL_ID,
+ RTE_TM_ERROR_TYPE_WRED_PROFILE,
+ RTE_TM_ERROR_TYPE_WRED_PROFILE_GREEN,
+ RTE_TM_ERROR_TYPE_WRED_PROFILE_YELLOW,
+ RTE_TM_ERROR_TYPE_WRED_PROFILE_RED,
+ RTE_TM_ERROR_TYPE_WRED_PROFILE_ID,
+ RTE_TM_ERROR_TYPE_SHARED_WRED_CONTEXT_ID,
+ RTE_TM_ERROR_TYPE_SHAPER_PROFILE,
+ RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_RATE,
+ RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE,
+ RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_RATE,
+ RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE,
+ RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN,
+ RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID,
+ RTE_TM_ERROR_TYPE_SHARED_SHAPER_ID,
+ RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID,
+ RTE_TM_ERROR_TYPE_NODE_PRIORITY,
+ RTE_TM_ERROR_TYPE_NODE_WEIGHT,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_SHAPERS,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_CMAN,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_WRED_PROFILE_ID,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_WRED_CONTEXT_ID,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SHARED_WRED_CONTEXTS,
+ RTE_TM_ERROR_TYPE_NODE_PARAMS_STATS,
+ RTE_TM_ERROR_TYPE_NODE_ID,
+};
+
+/**
+ * Verbose error structure definition.
+ *
+ * This object is normally allocated by applications and set by PMDs, the
+ * message points to a constant string which does not need to be freed by
+ * the application, however its pointer can be considered valid only as long
+ * as its associated DPDK port remains configured. Closing the underlying
+ * device or unloading the PMD invalidates it.
+ *
+ * Both cause and message may be NULL regardless of the error type.
+ */
+struct rte_tm_error {
+ enum rte_tm_error_type type; /**< Cause field and error type. */
+ const void *cause; /**< Object responsible for the error. */
+ const char *message; /**< Human-readable error message. */
+};
+
+/**
+ * Traffic manager get number of leaf nodes
+ *
+ * Each leaf node sits on top of a TX queue of the current Ethernet port.
+ * Therefore, the set of leaf nodes is predefined, their number is always equal
+ * to N (where N is the number of TX queues configured for the current port)
+ * and their IDs are 0 .. (N-1).
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[out] n_leaf_nodes
+ * Number of leaf nodes for the current port.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_get_number_of_leaf_nodes(uint8_t port_id,
+ uint32_t *n_leaf_nodes,
+ struct rte_tm_error *error);
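
A minimal usage sketch (port ID assumed valid, <stdio.h> assumed included):

    uint32_t n_leaf = 0;
    struct rte_tm_error err;

    if (rte_tm_get_number_of_leaf_nodes(0, &n_leaf, &err) == 0)
        printf("port 0 has %u leaf nodes (one per TX queue)\n", n_leaf);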
+
+/**
+ * Traffic manager node type (i.e. leaf or non-leaf) get
+ *
+ * The leaf nodes have predefined IDs in the range of 0 .. (N-1), where N is
+ * the number of TX queues of the current Ethernet port. The non-leaf nodes
+ * have their IDs generated by the application outside of the above range,
+ * which is reserved for leaf nodes.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID value. Needs to be valid.
+ * @param[out] is_leaf
+ * Set to non-zero value when node is leaf and to zero otherwise (non-leaf).
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_node_type_get(uint8_t port_id,
+ uint32_t node_id,
+ int *is_leaf,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node level get
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID value. Needs to be valid.
+ * @param[out] level_id
+ * Node level ID. Needs to be non-NULL.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_node_level_get(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t *level_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager capabilities get
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[out] cap
+ * Traffic manager capabilities. Needs to be pre-allocated and valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_capabilities_get(uint8_t port_id,
+ struct rte_tm_capabilities *cap,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager level capabilities get
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] level_id
+ * The hierarchy level identifier. The value of 0 identifies the level of the
+ * root node.
+ * @param[out] cap
+ * Traffic manager level capabilities. Needs to be pre-allocated and valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_level_capabilities_get(uint8_t port_id,
+ uint32_t level_id,
+ struct rte_tm_level_capabilities *cap,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node capabilities get
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[out] cap
+ * Traffic manager node capabilities. Needs to be pre-allocated and valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_node_capabilities_get(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_node_capabilities *cap,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager WRED profile add
+ *
+ * Create a new WRED profile with ID set to *wred_profile_id*. The new profile
+ * is used to create one or several WRED contexts.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] wred_profile_id
+ * WRED profile ID for the new profile. Needs to be unused.
+ * @param[in] profile
+ * WRED profile parameters. Needs to be pre-allocated and valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_wred_profile_add(uint8_t port_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_wred_params *profile,
+ struct rte_tm_error *error);
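
A hedged sketch of creating a WRED profile; the profile ID and thresholds are made up, and the same RED parameter set is reused for every color purely for brevity:

    struct rte_tm_wred_params wred;
    struct rte_tm_error err;
    uint32_t i;

    for (i = 0; i < RTE_TM_COLORS; i++)
        wred.red_params[i] = (struct rte_tm_red_params){
            .min_th = 32, .max_th = 128,
            .maxp_inv = 10, .wq_log2 = 9,
        };

    /* Profile ID 7 is arbitrary; it only needs to be unused. */
    if (rte_tm_wred_profile_add(0, 7, &wred, &err) != 0)
        printf("WRED profile add failed\n");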
+
+/**
+ * Traffic manager WRED profile delete
+ *
+ * Delete an existing WRED profile. This operation fails when there is
+ * currently at least one user (i.e. WRED context) of this WRED profile.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] wred_profile_id
+ * WRED profile ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_wred_profile_delete(uint8_t port_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager shared WRED context add or update
+ *
+ * When *shared_wred_context_id* is invalid, a new WRED context with this ID is
+ * created by using the WRED profile identified by *wred_profile_id*.
+ *
+ * When *shared_wred_context_id* is valid, this WRED context is no longer using
+ * the profile previously assigned to it and is updated to use the profile
+ * identified by *wred_profile_id*.
+ *
+ * A valid shared WRED context can be assigned to several hierarchy leaf nodes
+ * configured to use WRED as the congestion management mode.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] shared_wred_context_id
+ * Shared WRED context ID
+ * @param[in] wred_profile_id
+ * WRED profile ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_shared_wred_context_add_update(uint8_t port_id,
+ uint32_t shared_wred_context_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager shared WRED context delete
+ *
+ * Delete an existing shared WRED context. This operation fails when there is
+ * currently at least one user (i.e. hierarchy leaf node) of this shared WRED
+ * context.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] shared_wred_context_id
+ * Shared WRED context ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_shared_wred_context_delete(uint8_t port_id,
+ uint32_t shared_wred_context_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager shaper profile add
+ *
+ * Create a new shaper profile with ID set to *shaper_profile_id*. The new
+ * shaper profile is used to create one or several shapers.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] shaper_profile_id
+ * Shaper profile ID for the new profile. Needs to be unused.
+ * @param[in] profile
+ * Shaper profile parameters. Needs to be pre-allocated and valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_shaper_profile_add(uint8_t port_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager shaper profile delete
+ *
+ * Delete an existing shaper profile. This operation fails when there is
+ * currently at least one user (i.e. shaper) of this shaper profile.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] shaper_profile_id
+ * Shaper profile ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_shaper_profile_delete(uint8_t port_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager shared shaper add or update
+ *
+ * When *shared_shaper_id* is not a valid shared shaper ID, a new shared shaper
+ * with this ID is created using the shaper profile identified by
+ * *shaper_profile_id*.
+ *
+ * When *shared_shaper_id* is a valid shared shaper ID, this shared shaper is
+ * no longer using the shaper profile previously assigned to it and is updated
+ * to use the shaper profile identified by *shaper_profile_id*.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] shared_shaper_id
+ * Shared shaper ID
+ * @param[in] shaper_profile_id
+ * Shaper profile ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_shared_shaper_add_update(uint8_t port_id,
+ uint32_t shared_shaper_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager shared shaper delete
+ *
+ * Delete an existing shared shaper. This operation fails when there is
+ * currently at least one user (i.e. hierarchy node) of this shared shaper.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] shared_shaper_id
+ * Shared shaper ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_shared_shaper_delete(uint8_t port_id,
+ uint32_t shared_shaper_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node add
+ *
+ * Create a new node and connect it as a child of an existing node. The new node
+ * further identified by *node_id*, which needs to be unused by any of the
+ * existing nodes. The parent node is identified by *parent_node_id*, which
+ * needs to be the valid ID of an existing non-leaf node. The parent node is
+ * going to use the provided SP *priority* and WFQ *weight* to schedule its new
+ * child node.
+ *
+ * This function has to be called for both leaf and non-leaf nodes. In the case
+ * of leaf nodes (i.e. *node_id* is within the range of 0 .. (N-1), with N as
+ * the number of configured TX queues of the current port), the leaf node is
+ * configured rather than created (as the set of leaf nodes is predefined) and
+ * it is also connected as child of an existing node.
+ *
+ * The first node that is added becomes the root node and all the nodes that
+ * are subsequently added have to be added as descendants of the root node. The
+ * parent of the root node has to be specified as RTE_TM_NODE_ID_NULL and there
+ * can only be one node with this parent ID (i.e. the root node). Further
+ * restrictions for root node: needs to be non-leaf, its private shaper profile
+ * needs to be valid and single rate, cannot use any shared shapers.
+ *
+ * When called before rte_tm_hierarchy_commit() invocation, this function is
+ * typically used to define the initial start-up hierarchy for the port.
+ * Provided that dynamic hierarchy updates are supported by the current port (as
+ * advertised in the port capability set), this function can be also called
+ * after the rte_tm_hierarchy_commit() invocation.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be unused by any of the existing nodes.
+ * @param[in] parent_node_id
+ * Parent node ID. Needs to be valid.
+ * @param[in] priority
+ * Node priority. The highest node priority is zero. Used by the SP algorithm
+ * running on the parent of the current node for scheduling this child node.
+ * @param[in] weight
+ * Node weight. The node weight is relative to the weight sum of all siblings
+ * that have the same priority. The lowest weight is one. Used by the WFQ
+ * algorithm running on the parent of the current node for scheduling this
+ * child node.
+ * @param[in] params
+ * Node parameters. Needs to be pre-allocated and valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see rte_tm_hierarchy_commit()
+ * @see RTE_TM_UPDATE_NODE_ADD_DELETE
+ */
+int
+rte_tm_node_add(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t parent_node_id,
+ uint32_t priority,
+ uint32_t weight,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error);
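
Putting the pieces together, a hedged sketch of the smallest useful start-up hierarchy: one root node plus leaf node 0 (TX queue 0). All IDs are hypothetical, the root's single rate shaper profile (ID 3 here) is assumed to exist, and error checking is omitted for brevity:

    #define ROOT_NODE_ID 1000 /* made up; must lie outside the 0..(N-1) leaf range */

    struct rte_tm_node_params root_params = {
        .shaper_profile_id = 3, /* hypothetical single rate profile */
        .nonleaf = { .wfq_weight_mode = NULL, .n_sp_priorities = 1 },
    };
    struct rte_tm_node_params leaf_params = {
        .shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE,
        .leaf = { .cman = RTE_TM_CMAN_TAIL_DROP },
    };
    struct rte_tm_error err;

    /* The first node added becomes the root; its parent is RTE_TM_NODE_ID_NULL. */
    rte_tm_node_add(0, ROOT_NODE_ID, RTE_TM_NODE_ID_NULL, 0, 1, &root_params, &err);

    /* Leaf node 0 is predefined (TX queue 0); this call configures it. */
    rte_tm_node_add(0, 0, ROOT_NODE_ID, 0, 1, &leaf_params, &err);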
+
+/**
+ * Traffic manager node add with node level check
+ *
+ * Simple rte_tm_node_add() wrapper that also checks the node level.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be unused by any of the existing nodes.
+ * @param[in] parent_node_id
+ * Parent node ID. Needs to be valid.
+ * @param[in] priority
+ * Node priority. The highest node priority is zero. Used by the SP algorithm
+ * running on the parent of the current node for scheduling this child node.
+ * @param[in] weight
+ * Node weight. The node weight is relative to the weight sum of all siblings
+ * that have the same priority. The lowest weight is one. Used by the WFQ
+ * algorithm running on the parent of the current node for scheduling this
+ * child node.
+ * @param[in] level_id
+ * Level ID that should be met by this node.
+ * @param[in] params
+ * Node parameters. Needs to be pre-allocated and valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+static inline int
+rte_tm_node_add_check_level(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t parent_node_id,
+ uint32_t priority,
+ uint32_t weight,
+ uint32_t level_id,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error)
+{
+ uint32_t lid;
+ int status;
+
+ status = rte_tm_node_add(port_id, node_id,
+ parent_node_id, priority, weight, params, error);
+ if (status)
+ return status;
+
+ status = rte_tm_node_level_get(port_id, node_id, &lid, error);
+ if (status)
+ return status;
+
+ if (lid != level_id) {
+ if (error) {
+ error->type = RTE_TM_ERROR_TYPE_LEVEL_ID;
+ error->cause = NULL;
+ error->message = rte_strerror(EINVAL);
+ }
+ rte_errno = EINVAL;
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+/**
+ * Traffic manager node delete
+ *
+ * Delete an existing node. This operation fails when this node currently has
+ * at least one user (i.e. child node).
+ *
+ * When called before rte_tm_hierarchy_commit() invocation, this function is
+ * typically used to define the initial start-up hierarchy for the port.
+ * Provided that dynamic hierarchy updates are supported by the current port (as
+ * advertised in the port capability set), this function can be also called
+ * after the rte_tm_hierarchy_commit() invocation.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see RTE_TM_UPDATE_NODE_ADD_DELETE
+ */
+int
+rte_tm_node_delete(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node suspend
+ *
+ * Suspend an existing node. While the node is in suspended state, no packet is
+ * scheduled from this node and its descendants. The node exits the suspended
+ * state through the node resume operation.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see rte_tm_node_resume()
+ * @see RTE_TM_UPDATE_NODE_SUSPEND_RESUME
+ */
+int
+rte_tm_node_suspend(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node resume
+ *
+ * Resume an existing node that is currently in suspended state. The node
+ * entered the suspended state as result of a previous node suspend operation.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see rte_tm_node_suspend()
+ * @see RTE_TM_UPDATE_NODE_SUSPEND_RESUME
+ */
+int
+rte_tm_node_resume(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager hierarchy commit
+ *
+ * This function is called during the port initialization phase (before the
+ * Ethernet port is started) to freeze the start-up hierarchy.
+ *
+ * This function typically performs the following steps:
+ * a) It validates the start-up hierarchy that was previously defined for the
+ * current port through successive rte_tm_node_add() invocations;
+ * b) Assuming successful validation, it performs all the necessary port
+ * specific configuration operations to install the specified hierarchy on
+ * the current port, with immediate effect once the port is started.
+ *
+ * This function fails when the currently configured hierarchy is not supported
+ * by the Ethernet port, in which case the user can abort or try out another
+ * hierarchy configuration (e.g. a hierarchy with fewer leaf nodes), which can
+ * be built from scratch (when *clear_on_fail* is enabled) or by modifying the
+ * existing hierarchy configuration (when *clear_on_fail* is disabled).
+ *
+ * Note that this function can still fail due to other causes (e.g. not enough
+ * memory available in the system, etc.), even though the specified hierarchy is
+ * supported in principle by the current port.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] clear_on_fail
+ * On function call failure, hierarchy is cleared when this parameter is
+ * non-zero and preserved when this parameter is equal to zero.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see rte_tm_node_add()
+ * @see rte_tm_node_delete()
+ */
+int
+rte_tm_hierarchy_commit(uint8_t port_id,
+ int clear_on_fail,
+ struct rte_tm_error *error);
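
Continuing the sketch above, the hierarchy would be frozen before the port is started; clear_on_fail = 1 discards a rejected hierarchy so a simpler one can be built from scratch:

    struct rte_tm_error err;

    if (rte_tm_hierarchy_commit(0, 1 /* clear_on_fail */, &err) != 0)
        printf("hierarchy rejected: %s\n",
            err.message ? err.message : "unknown");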
+
+/**
+ * Traffic manager node parent update
+ *
+ * Restriction for root node: its parent cannot be changed.
+ *
+ * This function can only be called after the rte_tm_hierarchy_commit()
+ * invocation. Its success depends on the port support for this operation, as
+ * advertised through the port capability set.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[in] parent_node_id
+ * Node ID for the new parent. Needs to be valid.
+ * @param[in] priority
+ * Node priority. The highest node priority is zero. Used by the SP algorithm
+ * running on the parent of the current node for scheduling this child node.
+ * @param[in] weight
+ * Node weight. The node weight is relative to the weight sum of all siblings
+ * that have the same priority. The lowest weight is one. Used by the WFQ
+ * algorithm running on the parent of the current node for scheduling this
+ * child node.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see RTE_TM_UPDATE_NODE_PARENT_KEEP_LEVEL
+ * @see RTE_TM_UPDATE_NODE_PARENT_CHANGE_LEVEL
+ */
+int
+rte_tm_node_parent_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t parent_node_id,
+ uint32_t priority,
+ uint32_t weight,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node private shaper update
+ *
+ * Restriction for the root node: its private shaper profile needs to be valid
+ * and single rate.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[in] shaper_profile_id
+ * Shaper profile ID for the private shaper of the current node. Needs to be
+ * either valid shaper profile ID or RTE_TM_SHAPER_PROFILE_ID_NONE, with
+ * the latter disabling the private shaper of the current node.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_node_shaper_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node shared shapers update
+ *
+ * Restriction for root node: cannot use any shared rate shapers.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[in] shared_shaper_id
+ * Shared shaper ID. Needs to be valid.
+ * @param[in] add
+ * Set to non-zero value to add this shared shaper to current node or to zero
+ * to delete this shared shaper from current node.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_node_shared_shaper_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t shared_shaper_id,
+ int add,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node enabled statistics counters update
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[in] stats_mask
+ * Mask of statistics counter types to be enabled for the current node. This
+ * needs to be a subset of the statistics counter types available for the
+ * current node. Any statistics counter type not included in this set is to
+ * be disabled for the current node.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see enum rte_tm_stats_type
+ * @see RTE_TM_UPDATE_NODE_STATS
+ */
+int
+rte_tm_node_stats_update(uint8_t port_id,
+ uint32_t node_id,
+ uint64_t stats_mask,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node WFQ weight mode update
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid leaf node ID.
+ * @param[in] wfq_weight_mode
+ * WFQ weight mode for each SP priority. When NULL, it indicates that WFQ is
+ * to be used for all priorities. When non-NULL, it points to a pre-allocated
+ * array of *n_sp_priorities* values, with non-zero value for byte-mode and
+ * zero for packet-mode.
+ * @param[in] n_sp_priorities
+ * Number of SP priorities.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see RTE_TM_UPDATE_NODE_WFQ_WEIGHT_MODE
+ * @see RTE_TM_UPDATE_NODE_N_SP_PRIORITIES
+ */
+int
+rte_tm_node_wfq_weight_mode_update(uint8_t port_id,
+ uint32_t node_id,
+ int *wfq_weight_mode,
+ uint32_t n_sp_priorities,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node congestion management mode update
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid leaf node ID.
+ * @param[in] cman
+ * Congestion management mode.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see RTE_TM_UPDATE_NODE_CMAN
+ */
+int
+rte_tm_node_cman_update(uint8_t port_id,
+ uint32_t node_id,
+ enum rte_tm_cman_mode cman,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node private WRED context update
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid leaf node ID.
+ * @param[in] wred_profile_id
+ * WRED profile ID for the private WRED context of the current node. Needs to
+ * be either valid WRED profile ID or RTE_TM_WRED_PROFILE_ID_NONE, with the
+ * latter disabling the private WRED context of the current node.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_node_wred_context_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node shared WRED context update
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid leaf node ID.
+ * @param[in] shared_wred_context_id
+ * Shared WRED context ID. Needs to be valid.
+ * @param[in] add
+ * Set to non-zero value to add this shared WRED context to current node or
+ * to zero to delete this shared WRED context from current node.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ */
+int
+rte_tm_node_shared_wred_context_update(uint8_t port_id,
+ uint32_t node_id,
+ uint32_t shared_wred_context_id,
+ int add,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager node statistics counters read
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] node_id
+ * Node ID. Needs to be valid.
+ * @param[out] stats
+ * When non-NULL, it contains the current value for the statistics counters
+ * enabled for the current node.
+ * @param[out] stats_mask
+ * When non-NULL, it contains the mask of statistics counter types that are
+ * currently enabled for this node, indicating which of the counters
+ * retrieved with the *stats* structure are valid.
+ * @param[in] clear
+ * When this parameter has a non-zero value, the statistics counters are
+ * cleared (i.e. set to zero) immediately after they have been read,
+ * otherwise the statistics counters are left untouched.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see enum rte_tm_stats_type
+ */
+int
+rte_tm_node_stats_read(uint8_t port_id,
+ uint32_t node_id,
+ struct rte_tm_node_stats *stats,
+ uint64_t *stats_mask,
+ int clear,
+ struct rte_tm_error *error);
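
A hedged usage sketch that reads and clears one node's counters (node ID hypothetical); the returned mask states which members of the stats structure are meaningful:

    struct rte_tm_node_stats stats;
    uint64_t mask = 0;
    struct rte_tm_error err;

    if (rte_tm_node_stats_read(0, 0, &stats, &mask,
            1 /* clear after read */, &err) == 0)
        printf("valid counter mask: 0x%llx\n", (unsigned long long)mask);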
+
+/**
+ * Traffic manager packet marking - VLAN DEI (IEEE 802.1Q)
+ *
+ * IEEE 802.1p maps the traffic class to the VLAN Priority Code Point (PCP)
+ * field (3 bits), while IEEE 802.1Q maps the drop priority to the VLAN Drop
+ * Eligible Indicator (DEI) field (1 bit), which was previously named Canonical
+ * Format Indicator (CFI).
+ *
+ * All VLAN frames of a given color get their DEI bit set if marking is enabled
+ * for this color; otherwise, their DEI bit is left as is (either set or not).
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] mark_green
+ * Set to non-zero value to enable marking of green packets and to zero to
+ * disable it.
+ * @param[in] mark_yellow
+ * Set to non-zero value to enable marking of yellow packets and to zero to
+ * disable it.
+ * @param[in] mark_red
+ * Set to non-zero value to enable marking of red packets and to zero to
+ * disable it.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities::mark_vlan_dei_supported
+ */
+int
+rte_tm_mark_vlan_dei(uint8_t port_id,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error);
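
A minimal sketch enabling DEI marking for yellow and red traffic only, subject to the mark_vlan_dei_supported capability:

    struct rte_tm_error err;

    /* Green frames keep their DEI bit as is; yellow and red get it set. */
    if (rte_tm_mark_vlan_dei(0, 0 /* green */, 1 /* yellow */, 1 /* red */,
            &err) != 0)
        printf("VLAN DEI marking could not be enabled\n");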
+
+/**
+ * Traffic manager packet marking - IPv4 / IPv6 ECN (IETF RFC 3168)
+ *
+ * IETF RFCs 2474 and 3168 reorganize the IPv4 Type of Service (TOS) field
+ * (8 bits) and the IPv6 Traffic Class (TC) field (8 bits) into Differentiated
+ * Services Codepoint (DSCP) field (6 bits) and Explicit Congestion
+ * Notification (ECN) field (2 bits). The DSCP field is typically used to
+ * encode the traffic class and/or drop priority (RFC 2597), while the ECN
+ * field is used by RFC 3168 to implement a congestion notification mechanism
+ * to be leveraged by transport layer protocols such as TCP and SCTP that have
+ * congestion control mechanisms.
+ *
+ * When congestion is experienced, as alternative to dropping the packet,
+ * routers can change the ECN field of input packets from 2'b01 or 2'b10
+ * (values indicating that source endpoint is ECN-capable) to 2'b11 (meaning
+ * that congestion is experienced). The destination endpoint can use the
+ * ECN-Echo (ECE) TCP flag to relay the congestion indication back to the
+ * source endpoint, which acknowledges it back to the destination endpoint with
+ * the Congestion Window Reduced (CWR) TCP flag.
+ *
+ * All IPv4/IPv6 packets of a given color with ECN set to 2'b01 or 2'b10
+ * carrying TCP or SCTP have their ECN set to 2'b11 if the marking feature is
+ * enabled for the current color; otherwise, the ECN field is left as is.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] mark_green
+ * Set to non-zero value to enable marking of green packets and to zero to
+ * disable it.
+ * @param[in] mark_yellow
+ * Set to non-zero value to enable marking of yellow packets and to zero to
+ * disable it.
+ * @param[in] mark_red
+ * Set to non-zero value to enable marking of red packets and to zero to
+ * disable it.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities::mark_ip_ecn_tcp_supported
+ * @see struct rte_tm_capabilities::mark_ip_ecn_sctp_supported
+ */
+int
+rte_tm_mark_ip_ecn(uint8_t port_id,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error);
+
+/**
+ * Traffic manager packet marking - IPv4 / IPv6 DSCP (IETF RFC 2597)
+ *
+ * IETF RFC 2597 maps the traffic class and the drop priority to the IPv4/IPv6
+ * Differentiated Services Codepoint (DSCP) field (6 bits). Here are the DSCP
+ * values proposed by this RFC:
+ *
+ * Class 1 Class 2 Class 3 Class 4
+ * +----------+----------+----------+----------+
+ * Low Drop Prec | 001010 | 010010 | 011010 | 100010 |
+ * Medium Drop Prec | 001100 | 010100 | 011100 | 100100 |
+ * High Drop Prec | 001110 | 010110 | 011110 | 100110 |
+ * +----------+----------+----------+----------+
+ *
+ * There are 4 traffic classes (classes 1 .. 4) encoded by DSCP bits 1 and 2,
+ * as well as 3 drop priorities (low/medium/high) encoded by DSCP bits 3 and 4.
+ *
+ * All IPv4/IPv6 packets have their color marked into DSCP bits 3 and 4 as
+ * follows: green mapped to Low Drop Precedence (2'b01), yellow to Medium
+ * (2'b10) and red to High (2'b11). Marking needs to be explicitly enabled
+ * for each color; when not enabled for a given color, the DSCP field of all
+ * packets with that color is left as is.
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[in] mark_green
+ * Set to non-zero value to enable marking of green packets and to zero to
+ * disable it.
+ * @param[in] mark_yellow
+ * Set to non-zero value to enable marking of yellow packets and to zero to
+ * disable it.
+ * @param[in] mark_red
+ * Set to non-zero value to enable marking of red packets and to zero to
+ * disable it.
+ * @param[out] error
+ * Error details. Filled in only on error, when not NULL.
+ * @return
+ * 0 on success, non-zero error code otherwise.
+ *
+ * @see struct rte_tm_capabilities::mark_ip_dscp_supported
+ */
+int
+rte_tm_mark_ip_dscp(uint8_t port_id,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __INCLUDE_RTE_TM_H__ */
diff --git a/lib/librte_ether/rte_tm_driver.h b/lib/librte_ether/rte_tm_driver.h
new file mode 100644
index 0000000..c25f102
--- /dev/null
+++ b/lib/librte_ether/rte_tm_driver.h
@@ -0,0 +1,373 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __INCLUDE_RTE_TM_DRIVER_H__
+#define __INCLUDE_RTE_TM_DRIVER_H__
+
+/**
+ * @file
+ * RTE Generic Traffic Manager API (Driver Side)
+ *
+ * This file provides implementation helpers for internal use by PMDs, they
+ * are not intended to be exposed to applications and are not subject to ABI
+ * versioning.
+ */
+
+#include <stdint.h>
+
+#include <rte_errno.h>
+#include "rte_ethdev.h"
+#include "rte_tm.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+typedef int (*rte_tm_node_type_get_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ int *is_leaf,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager node type get */
+
+typedef int (*rte_tm_node_level_get_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ uint32_t *level_id,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager node level get */
+
+typedef int (*rte_tm_capabilities_get_t)(struct rte_eth_dev *dev,
+ struct rte_tm_capabilities *cap,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager capabilities get */
+
+typedef int (*rte_tm_level_capabilities_get_t)(struct rte_eth_dev *dev,
+ uint32_t level_id,
+ struct rte_tm_level_capabilities *cap,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager level capabilities get */
+
+typedef int (*rte_tm_node_capabilities_get_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_node_capabilities *cap,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager node capabilities get */
+
+typedef int (*rte_tm_wred_profile_add_t)(struct rte_eth_dev *dev,
+ uint32_t wred_profile_id,
+ struct rte_tm_wred_params *profile,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager WRED profile add */
+
+typedef int (*rte_tm_wred_profile_delete_t)(struct rte_eth_dev *dev,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager WRED profile delete */
+
+typedef int (*rte_tm_shared_wred_context_add_update_t)(
+ struct rte_eth_dev *dev,
+ uint32_t shared_wred_context_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager shared WRED context add */
+
+typedef int (*rte_tm_shared_wred_context_delete_t)(
+ struct rte_eth_dev *dev,
+ uint32_t shared_wred_context_id,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager shared WRED context delete */
+
+typedef int (*rte_tm_shaper_profile_add_t)(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_shaper_params *profile,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager shaper profile add */
+
+typedef int (*rte_tm_shaper_profile_delete_t)(struct rte_eth_dev *dev,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager shaper profile delete */
+
+typedef int (*rte_tm_shared_shaper_add_update_t)(struct rte_eth_dev *dev,
+ uint32_t shared_shaper_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager shared shaper add/update */
+
+typedef int (*rte_tm_shared_shaper_delete_t)(struct rte_eth_dev *dev,
+ uint32_t shared_shaper_id,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager shared shaper delete */
+
+typedef int (*rte_tm_node_add_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ uint32_t parent_node_id,
+ uint32_t priority,
+ uint32_t weight,
+ struct rte_tm_node_params *params,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager node add */
+
+typedef int (*rte_tm_node_delete_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager node delete */
+
+typedef int (*rte_tm_node_suspend_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager node suspend */
+
+typedef int (*rte_tm_node_resume_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager node resume */
+
+typedef int (*rte_tm_hierarchy_commit_t)(struct rte_eth_dev *dev,
+ int clear_on_fail,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager hierarchy commit */
+
+typedef int (*rte_tm_node_parent_update_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ uint32_t parent_node_id,
+ uint32_t priority,
+ uint32_t weight,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager node parent update */
+
+typedef int (*rte_tm_node_shaper_update_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ uint32_t shaper_profile_id,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager node shaper update */
+
+typedef int (*rte_tm_node_shared_shaper_update_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ uint32_t shared_shaper_id,
+ int32_t add,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager node shared shaper update */
+
+typedef int (*rte_tm_node_stats_update_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ uint64_t stats_mask,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager node stats update */
+
+typedef int (*rte_tm_node_wfq_weight_mode_update_t)(
+ struct rte_eth_dev *dev,
+ uint32_t node_id,
+ int *wfq_weight_mode,
+ uint32_t n_sp_priorities,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager node WFQ weight mode update */
+
+typedef int (*rte_tm_node_cman_update_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ enum rte_tm_cman_mode cman,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager node congestion management mode update */
+
+typedef int (*rte_tm_node_wred_context_update_t)(
+ struct rte_eth_dev *dev,
+ uint32_t node_id,
+ uint32_t wred_profile_id,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager node WRED context update */
+
+typedef int (*rte_tm_node_shared_wred_context_update_t)(
+ struct rte_eth_dev *dev,
+ uint32_t node_id,
+ uint32_t shared_wred_context_id,
+ int add,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager node shared WRED context update */
+
+typedef int (*rte_tm_node_stats_read_t)(struct rte_eth_dev *dev,
+ uint32_t node_id,
+ struct rte_tm_node_stats *stats,
+ uint64_t *stats_mask,
+ int clear,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager read stats counters for specific node */
+
+typedef int (*rte_tm_mark_vlan_dei_t)(struct rte_eth_dev *dev,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager packet marking - VLAN DEI */
+
+typedef int (*rte_tm_mark_ip_ecn_t)(struct rte_eth_dev *dev,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager packet marking - IPv4/IPv6 ECN */
+
+typedef int (*rte_tm_mark_ip_dscp_t)(struct rte_eth_dev *dev,
+ int mark_green,
+ int mark_yellow,
+ int mark_red,
+ struct rte_tm_error *error);
+/**< @internal Traffic manager packet marking - IPv4/IPv6 DSCP */
+
+struct rte_tm_ops {
+ /** Traffic manager node type get */
+ rte_tm_node_type_get_t node_type_get;
+ /** Traffic manager node level get */
+ rte_tm_node_level_get_t node_level_get;
+
+ /** Traffic manager capabilities get */
+ rte_tm_capabilities_get_t capabilities_get;
+ /** Traffic manager level capabilities get */
+ rte_tm_level_capabilities_get_t level_capabilities_get;
+ /** Traffic manager node capabilities get */
+ rte_tm_node_capabilities_get_t node_capabilities_get;
+
+ /** Traffic manager WRED profile add */
+ rte_tm_wred_profile_add_t wred_profile_add;
+ /** Traffic manager WRED profile delete */
+ rte_tm_wred_profile_delete_t wred_profile_delete;
+ /** Traffic manager shared WRED context add/update */
+ rte_tm_shared_wred_context_add_update_t
+ shared_wred_context_add_update;
+ /** Traffic manager shared WRED context delete */
+ rte_tm_shared_wred_context_delete_t
+ shared_wred_context_delete;
+
+ /** Traffic manager shaper profile add */
+ rte_tm_shaper_profile_add_t shaper_profile_add;
+ /** Traffic manager shaper profile delete */
+ rte_tm_shaper_profile_delete_t shaper_profile_delete;
+ /** Traffic manager shared shaper add/update */
+ rte_tm_shared_shaper_add_update_t shared_shaper_add_update;
+ /** Traffic manager shared shaper delete */
+ rte_tm_shared_shaper_delete_t shared_shaper_delete;
+
+ /** Traffic manager node add */
+ rte_tm_node_add_t node_add;
+ /** Traffic manager node delete */
+ rte_tm_node_delete_t node_delete;
+ /** Traffic manager node suspend */
+ rte_tm_node_suspend_t node_suspend;
+ /** Traffic manager node resume */
+ rte_tm_node_resume_t node_resume;
+ /** Traffic manager hierarchy commit */
+ rte_tm_hierarchy_commit_t hierarchy_commit;
+
+ /** Traffic manager node parent update */
+ rte_tm_node_parent_update_t node_parent_update;
+ /** Traffic manager node shaper update */
+ rte_tm_node_shaper_update_t node_shaper_update;
+ /** Traffic manager node shared shaper update */
+ rte_tm_node_shared_shaper_update_t node_shared_shaper_update;
+ /** Traffic manager node stats update */
+ rte_tm_node_stats_update_t node_stats_update;
+ /** Traffic manager node WFQ weight mode update */
+ rte_tm_node_wfq_weight_mode_update_t node_wfq_weight_mode_update;
+ /** Traffic manager node congestion management mode update */
+ rte_tm_node_cman_update_t node_cman_update;
+ /** Traffic manager node WRED context update */
+ rte_tm_node_wred_context_update_t node_wred_context_update;
+ /** Traffic manager node shared WRED context update */
+ rte_tm_node_shared_wred_context_update_t
+ node_shared_wred_context_update;
+ /** Traffic manager read statistics counters for current node */
+ rte_tm_node_stats_read_t node_stats_read;
+
+ /** Traffic manager packet marking - VLAN DEI */
+ rte_tm_mark_vlan_dei_t mark_vlan_dei;
+ /** Traffic manager packet marking - IPv4/IPv6 ECN */
+ rte_tm_mark_ip_ecn_t mark_ip_ecn;
+ /** Traffic manager packet marking - IPv4/IPv6 DSCP */
+ rte_tm_mark_ip_dscp_t mark_ip_dscp;
+};
+
+/**
+ * Initialize generic error structure.
+ *
+ * This function also sets rte_errno to a given value.
+ *
+ * @param[out] error
+ * Pointer to error structure (may be NULL).
+ * @param[in] code
+ * Related error code (rte_errno).
+ * @param[in] type
+ * Cause field and error type.
+ * @param[in] cause
+ * Object responsible for the error.
+ * @param[in] message
+ * Human-readable error message.
+ *
+ * @return
+ * Error code.
+ */
+static inline int
+rte_tm_error_set(struct rte_tm_error *error,
+ int code,
+ enum rte_tm_error_type type,
+ const void *cause,
+ const char *message)
+{
+ if (error) {
+ *error = (struct rte_tm_error){
+ .type = type,
+ .cause = cause,
+ .message = message,
+ };
+ }
+ rte_errno = code;
+ return code;
+}
+
+/**
+ * Get generic traffic manager operations structure from a port
+ *
+ * @param[in] port_id
+ * The port identifier of the Ethernet device.
+ * @param[out] error
+ * Error details
+ *
+ * @return
+ * The traffic manager operations structure associated with port_id on
+ * success, NULL otherwise.
+ */
+const struct rte_tm_ops *
+rte_tm_ops_get(uint8_t port_id, struct rte_tm_error *error);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* __INCLUDE_RTE_TM_DRIVER_H__ */
--
2.7.4
^ permalink raw reply [relevance 1%]
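A minimal sketch of how a PMD might wire up one of these callbacks; this
is not part of the patch. Only the rte_tm_mark_vlan_dei_t signature and
rte_tm_error_set() come from the header above. The PMD names, the
negative errno return convention and the RTE_TM_ERROR_TYPE_UNSPECIFIED
error type are assumptions for illustration.

#include <errno.h>
#include <rte_tm_driver.h>

/* Hypothetical PMD without VLAN DEI marking support: fill the caller's
 * error structure (and rte_errno) through rte_tm_error_set(). */
static int
dummy_pmd_mark_vlan_dei(struct rte_eth_dev *dev,
	int mark_green,
	int mark_yellow,
	int mark_red,
	struct rte_tm_error *error)
{
	(void)dev;
	(void)mark_green;
	(void)mark_yellow;
	(void)mark_red;
	return -rte_tm_error_set(error, ENOTSUP,
		RTE_TM_ERROR_TYPE_UNSPECIFIED, /* assumed error type */
		NULL, "VLAN DEI marking not implemented");
}

static const struct rte_tm_ops dummy_pmd_tm_ops = {
	.mark_vlan_dei = dummy_pmd_mark_vlan_dei,
};

The generic layer would then hand dummy_pmd_tm_ops back to applications
through rte_tm_ops_get() for a port owned by this PMD.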
* [dpdk-dev] [RFC 4/4] mk: default to compiling shared libraries
@ 2017-05-19 16:39 4% ` Anatoly Burakov
0 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2017-05-19 16:39 UTC (permalink / raw)
To: dev
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
config/common_base | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/config/common_base b/config/common_base
index 8907bea..1c088e9 100644
--- a/config/common_base
+++ b/config/common_base
@@ -67,7 +67,7 @@ CONFIG_RTE_ARCH_STRICT_ALIGN=n
#
# Compile to share library
#
-CONFIG_RTE_BUILD_SHARED_LIB=n
+CONFIG_RTE_BUILD_SHARED_LIB=y
#
# Use newest code breaking previous ABI
--
2.7.4
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] Issue->Dpdk for arm cortex-a15 compilation
2017-05-16 13:27 0% ` Jimmy Carter
@ 2017-05-16 14:00 0% ` Jan Viktorin
0 siblings, 0 replies; 200+ results
From: Jan Viktorin @ 2017-05-16 14:00 UTC (permalink / raw)
To: Jimmy Carter; +Cc: Neil Horman, users, dev, maintainers, jianbo.liu, kosar
On Tue, 16 May 2017 18:57:41 +0530
Jimmy Carter <jimmycarter256@gmail.com> wrote:
> I assume after git clone https://github.com/RehiveTech/buildroot
> I need to git checkout dpdk-support-v5
Yes, I forgot to mention...
> I get a legacy configuration error when running make
> root@xav101000739:~/Downloads/dpdk/newbuildroot/buildroot# make
> Makefile.legacy:12: *** "You have legacy configuration in your .config!
> Please check your configuration.". Stop.
This is very strange. Did you use qemu_arm_vexpress_defconfig or some
other defconfig?
I didn't have any issue during the build except for a mismatch in the
SHA256 checksum of the dpdk-16.04.tar.gz, which is strange. After
fixing:
diff --git a/package/dpdk/dpdk.hash b/package/dpdk/dpdk.hash
index 3780c665b..c0158e477 100644
--- a/package/dpdk/dpdk.hash
+++ b/package/dpdk/dpdk.hash
@@ -1,2 +1,2 @@
# Locally calculated
-sha256 d631495bc6e8d4c4aec72999ac03c3ce213bb996cb88f3bf14bb980dad1d3f7b dpdk-16.04.tar.gz
+sha256 f917875b1432adaaebb2761c154623bb101e0308153aa011f06a69bd1e9e98fb dpdk-16.04.tar.gz
it works.
$ ls output/images/
rootfs.ext2 vexpress-v2p-ca9.dtb zImage
Regards
Jan
>
>
> Thanks
>
> On Tue, May 16, 2017 at 5:58 PM, Jan Viktorin <viktorin@rehivetech.com>
> wrote:
>
> > On Tue, 16 May 2017 17:25:20 +0530
> > Jimmy Carter <jimmycarter256@gmail.com> wrote:
> >
> > > Hi All
> > >
> > > Attached is the complete env variables file
> > > I have added RTE_KERNELDIR too
> > > Also I am now using gnu-eabi version 5.4.0
> > > [arm-openwrt-linux-muslgnueabi-gcc (LEDE GCC 5.4.0 r3909-6411a12) 5.4.0]
> > > But I am still getting the same error
> > >
> > > Currently I am not using buildroot
> > > Is there any step-by-step guide available for cross-compiling dpdk using
> > > buildroot for target arm cortex-a15 with some external toolchain?
> > > I found this http://dpdk.org/ml/archives/announce/2015-October/000066.
> > html
> >
> > This short tutorial points to some older version of the Buildroot
> > support. That was before the ARM support was merged into DPDK.
> >
> > I've just pushed the branch dpdk-support-v5 (d25ddaadf2) into
> > the RehiveTech repository. It contains the latest patch sent to the
> > Buildroot mailing list [1] and some more. By the way, it cleanly
> > applies to the latest Buildroot master as well.
> >
> > This branch assumes DPDK 16.04 which is quite old but if you drop the
> > 0001-mk-do-not-enforce-any-specific-ARM-ABI.patch, it might work for newer
> > DPDK as well.
> >
> > Steps:
> >
> > $ git clone https://github.com/RehiveTech/buildroot
> > $ cd buildroot
> > $ make qemu_arm_vexpress_defconfig
> > $ make menuconfig
> >
> > * set libc library to glibc
> > * enable DPDK in Target packages/Libraries/Networking/DPDK
> >
> > $ make linux-menuconfig
> >
> > * enable UIO, PCI and MSI-X (if applicable)
> >
> > $ make
> >
> > I didn't test it myself recently but I believe that it should work well.
> > Instead of qemu_arm_vexpress_defconfig, you should select your target
> > board, if applicable.
> >
> > I hope it helps you.
> >
> > Regards
> > Jan
> >
> > [1] https://patchwork.ozlabs.org/patch/611383/
> >
> > >
> > >
> > > Please advise
> > >
> > >
> > >
> > > Thanks
> > >
> > > On Tue, May 16, 2017 at 5:14 PM, Neil Horman <nhorman@tuxdriver.com>
> > wrote:
> > >
> > > > On Tue, May 16, 2017 at 12:51:40PM +0200, Jan Viktorin wrote:
> > > > > Hello Jimmy,
> > > > >
> > > > > On Tue, 16 May 2017 15:38:22 +0530
> > > > > Jimmy Carter <jimmycarter256@gmail.com> wrote:
> > > > >
> > > > > > Hi All
> > > > > >
> > > > > > I am using dpdk16.11.1 and want to use openwrt external toolchain
> > so
> > > > that I
> > > > > > can cross compile for arm cortex 15
> > > > > > neon.(arm_cortex-a15+neon-vfpv4_gcc-5.4.0_musl_eabi)
> > > > >
> > > > > I've never built DPDK with musl-eabi. I don't think that your issue
> > is
> > > > > related but just note that my builds have always been done with
> > gnueabi.
> > > > >
> > > > > > My target board is a TP-Link Archer C2600.
> > > > > > I have assigned these env variables but am still getting a
> > > > > > compilation error
> > > > > >
> > > > > > export
> > > > > > STAGING_DIR=/home/xav-101000739/ovslede/source/
> > > > staging_dir/toolchain-arm_cortex-a15+neon-vfpv4_gcc-5.4.0_musl_eabi
> > > > > > export
> > > > > > PATH=$PATH:/home/xav-101000739/ovslede/source/
> > > > staging_dir/toolchain-arm_cortex-a15+neon-vfpv4_gcc-5.4.
> > 0_musl_eabi/bin
> > > > > >
> > > > > >
> > > > > > export CROSS=arm-openwrt-linux-
> > > > > > export DPDK_TARGET=arm-armv7a-linuxapp-gcc
> > > > > > export DPDK_DIR=$PWD
> > > > > > export DPDK_BUILD=$DPDK_DIR/$DPDK_TARGET
> > > > > > export
> > > > > > CFLAGS+=-I/home/xav-101000739/ovslede/source/staging_dir/
> > > > toolchain-arm_cortex-a15+neon-vfpv4_gcc-5.4.0_musl_eabi
> > > > > > export RTE_SDK=$PWD
> > > > > > export RTE_TARGET=arm-armv7a-linuxapp-gcc
> > > > > > export DPDK_BUILD_DIR=arm-armv7a-linuxapp-gcc
> > > > > >
> > > > >
> > > > > There is a patch to Buildroot that can help you with the setup. See:
> > > > >
> > > > > https://patchwork.ozlabs.org/patch/611383/
> > > > >
> > > > > >
> > > > > > Error:Attached file
> > > > >
> > > > > Your build fails on
> > > > >
> > > > > eal_memory.c:92:
> > > > > /home/xav-101000739/Downloads/dpdk/dpdk-stable-16.11.1/
> > > > build/include/rte_lcore.h:56:10: error: unknown type name 'cpu_set_t'
> > > > > typedef cpu_set_t rte_cpuset_t;
> > > > >
> > > > > This looks like there is some issue with Linux Kernel headers.
> > > > >
> > > > > lib/librte_eal/common/include/rte_lcore.h:
> > > > >
> > > > > 53 #if defined(__linux__)
> > > > > 54 typedef cpu_set_t rte_cpuset_t;
> > > > > 55 #elif defined(__FreeBSD__)
> > > > > 56 #include <pthread_np.h>
> > > > > 57 typedef cpuset_t rte_cpuset_t;
> > > > > 58 #endif
> > > > >
> > > > > Probably, you should set the RTE_KERNELDIR properly.
> > > > >
> > > > I don't think so. cpu_set_t is most recently defined in
> > > > /usr/include/bits/sched.h, which is a glibc header. What version of
> > > > glibc are you building with?
> > > >
> > > > Neil
> > > >
> > > > > >
> > > > > > Please advise
> > > > > > Does dpdk have support for openwrt (arm cortex a15)
> > > > >
> > > > > DPDK does not support OpenWRT because (as far as I know) nobody from
> > > > > the DPDK community is using it in this way. I build DPDK via
> > Buildroot
> > > > > but this is unsupported by the DPDK upstream.
> > > > >
> > > > > I could build DPDK for Cortex-A7, Cortex-A9 and Cortex-A15 in the
> > past.
> > > > >
> > > > > I run regular builds of the master branch and I can see no breakage
> > > > > for the arm-armv7a-linuxapp-gcc configuration.
> > > > >
> > > > > Regards
> > > > > Jan
> > > > >
> > > > > >
> > > > > > Thanks
> > > > > > Akshay
> > > > >
> > > >
> >
> >
> >
> > --
> > Jan Viktorin E-mail: Viktorin@RehiveTech.com
> > System Architect Web: www.RehiveTech.com
> > RehiveTech
> > Brno, Czech Republic
> >
--
Jan Viktorin E-mail: Viktorin@RehiveTech.com
System Architect Web: www.RehiveTech.com
RehiveTech
Brno, Czech Republic
^ permalink raw reply [relevance 0%]
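A standalone check of the cpu_set_t theory above, as a sketch (the
assumption here is that cpu_set_t is a GNU extension which both glibc
and musl only expose from <sched.h> when _GNU_SOURCE is defined):

/* cpu_set_check.c: build it with the same cross compiler, e.g.
 *   arm-openwrt-linux-gcc -o cpu_set_check cpu_set_check.c
 * If this fails with "unknown type name 'cpu_set_t'", the rte_lcore.h
 * error above has the same root cause. */
#define _GNU_SOURCE	/* required for cpu_set_t and the CPU_* macros */
#include <sched.h>

int main(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);
	return CPU_ISSET(0, &set) ? 0 : 1;
}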
* Re: [dpdk-dev] Issue->Dpdk for arm cortex-a15 compilation
2017-05-16 12:28 3% ` Jan Viktorin
@ 2017-05-16 13:27 0% ` Jimmy Carter
2017-05-16 14:00 0% ` Jan Viktorin
0 siblings, 1 reply; 200+ results
From: Jimmy Carter @ 2017-05-16 13:27 UTC (permalink / raw)
To: Jan Viktorin; +Cc: Neil Horman, users, dev, maintainers, jianbo.liu, kosar
I assume after git clone https://github.com/RehiveTech/buildroot
I need to git checkout dpdk-support-v5
I get a legacy configuration error when running make
root@xav101000739:~/Downloads/dpdk/newbuildroot/buildroot# make
Makefile.legacy:12: *** "You have legacy configuration in your .config!
Please check your configuration.". Stop.
Thanks
On Tue, May 16, 2017 at 5:58 PM, Jan Viktorin <viktorin@rehivetech.com>
wrote:
> On Tue, 16 May 2017 17:25:20 +0530
> Jimmy Carter <jimmycarter256@gmail.com> wrote:
>
> > Hi All
> >
> > Attached is the complete env variables file
> > I have added RTE_KERNELDIR too
> > Also I am now using gnu-eabi version 5.4.0
> > [arm-openwrt-linux-muslgnueabi-gcc (LEDE GCC 5.4.0 r3909-6411a12) 5.4.0]
> > But I am still getting the same error
> >
> > Currently I am not using buildroot
> > Is there any step-by-step guide available for cross-compiling dpdk using
> > buildroot for target arm cortex-a15 with some external toolchain?
> > I found this http://dpdk.org/ml/archives/announce/2015-October/000066.
> html
>
> This short tutorial points to some older version of the Buildroot
> support. That was before the ARM support was merged into DPDK.
>
> I've just pushed the branch dpdk-support-v5 (d25ddaadf2) into
> the RehiveTech repository. It contains the latest patch sent to the
> Buildroot mailing list [1] and some more. By the way, it cleanly
> applies to the latest Buildroot master as well.
>
> This branch assumes DPDK 16.04 which is quite old but if you drop the
> 0001-mk-do-not-enforce-any-specific-ARM-ABI.patch, it might work for newer
> DPDK as well.
>
> Steps:
>
> $ git clone https://github.com/RehiveTech/buildroot
> $ cd buildroot
> $ make qemu_arm_vexpress_defconfig
> $ make menuconfig
>
> * set libc library to glibc
> * enable DPDK in Target packages/Libraries/Networking/DPDK
>
> $ make linux-menuconfig
>
> * enable UIO, PCI and MSI-X (if applicable)
>
> $ make
>
> I didn't test it myself recently but I believe that it should work well.
> Instead of qemu_arm_vexpress_defconfig, you should select your target
> board, if applicable.
>
> I hope it helps you.
>
> Regards
> Jan
>
> [1] https://patchwork.ozlabs.org/patch/611383/
>
> >
> >
> > Please advise
> >
> >
> >
> > Thanks
> >
> > On Tue, May 16, 2017 at 5:14 PM, Neil Horman <nhorman@tuxdriver.com>
> wrote:
> >
> > > On Tue, May 16, 2017 at 12:51:40PM +0200, Jan Viktorin wrote:
> > > > Hello Jimmy,
> > > >
> > > > On Tue, 16 May 2017 15:38:22 +0530
> > > > Jimmy Carter <jimmycarter256@gmail.com> wrote:
> > > >
> > > > > Hi All
> > > > >
> > > > > I am using dpdk16.11.1 and want to use openwrt external toolchain
> so
> > > that I
> > > > > can cross compile for arm cortex 15
> > > > > neon.(arm_cortex-a15+neon-vfpv4_gcc-5.4.0_musl_eabi)
> > > >
> > > > I've never built DPDK with musl-eabi. I don't think that your issue
> is
> > > > related but just note that my builds have always been done with
> gnueabi.
> > > >
> > > > > My target board is a TP-Link Archer C2600.
> > > > > I have assigned these env variables but am still getting a
> > > > > compilation error
> > > > >
> > > > > export
> > > > > STAGING_DIR=/home/xav-101000739/ovslede/source/
> > > staging_dir/toolchain-arm_cortex-a15+neon-vfpv4_gcc-5.4.0_musl_eabi
> > > > > export
> > > > > PATH=$PATH:/home/xav-101000739/ovslede/source/
> > > staging_dir/toolchain-arm_cortex-a15+neon-vfpv4_gcc-5.4.
> 0_musl_eabi/bin
> > > > >
> > > > >
> > > > > export CROSS=arm-openwrt-linux-
> > > > > export DPDK_TARGET=arm-armv7a-linuxapp-gcc
> > > > > export DPDK_DIR=$PWD
> > > > > export DPDK_BUILD=$DPDK_DIR/$DPDK_TARGET
> > > > > export
> > > > > CFLAGS+=-I/home/xav-101000739/ovslede/source/staging_dir/
> > > toolchain-arm_cortex-a15+neon-vfpv4_gcc-5.4.0_musl_eabi
> > > > > export RTE_SDK=$PWD
> > > > > export RTE_TARGET=arm-armv7a-linuxapp-gcc
> > > > > export DPDK_BUILD_DIR=arm-armv7a-linuxapp-gcc
> > > > >
> > > >
> > > > There is a patch to Buildroot that can help you with the setup. See:
> > > >
> > > > https://patchwork.ozlabs.org/patch/611383/
> > > >
> > > > >
> > > > > Error:Attached file
> > > >
> > > > Your build fails on
> > > >
> > > > eal_memory.c:92:
> > > > /home/xav-101000739/Downloads/dpdk/dpdk-stable-16.11.1/
> > > build/include/rte_lcore.h:56:10: error: unknown type name 'cpu_set_t'
> > > > typedef cpu_set_t rte_cpuset_t;
> > > >
> > > > This looks like there is some issue with Linux Kernel headers.
> > > >
> > > > lib/librte_eal/common/include/rte_lcore.h:
> > > >
> > > > 53 #if defined(__linux__)
> > > > 54 typedef cpu_set_t rte_cpuset_t;
> > > > 55 #elif defined(__FreeBSD__)
> > > > 56 #include <pthread_np.h>
> > > > 57 typedef cpuset_t rte_cpuset_t;
> > > > 58 #endif
> > > >
> > > > Probably, you should set the RTE_KERNELDIR properly.
> > > >
> > > I don't think so. cpu_set_t is most recently defined in
> > > /usr/include/bits/sched.h, which is a glibc header. What version of
> > > glibc are you building with?
> > >
> > > Neil
> > >
> > > > >
> > > > > Please advise
> > > > > Does dpdk have support for openwrt (arm cortex a15)
> > > >
> > > > DPDK does not support OpenWRT because (as far as I know) nobody from
> > > > the DPDK community is using it in this way. I build DPDK via
> Buildroot
> > > > but this is unsupported by the DPDK upstream.
> > > >
> > > > I could build DPDK for Cortex-A7, Cortex-A9 and Cortex-A15 in the
> past.
> > > >
> > > > I run regular builds of the master branch and I can see no breakage
> > > > for the arm-armv7a-linuxapp-gcc configuration.
> > > >
> > > > Regards
> > > > Jan
> > > >
> > > > >
> > > > > Thanks
> > > > > Akshay
> > > >
> > >
>
>
>
> --
> Jan Viktorin E-mail: Viktorin@RehiveTech.com
> System Architect Web: www.RehiveTech.com
> RehiveTech
> Brno, Czech Republic
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] Issue->Dpdk for arm cortex-a15 compilation
@ 2017-05-16 12:28 3% ` Jan Viktorin
2017-05-16 13:27 0% ` Jimmy Carter
0 siblings, 1 reply; 200+ results
From: Jan Viktorin @ 2017-05-16 12:28 UTC (permalink / raw)
To: Jimmy Carter; +Cc: Neil Horman, users, dev, maintainers, jianbo.liu, kosar
On Tue, 16 May 2017 17:25:20 +0530
Jimmy Carter <jimmycarter256@gmail.com> wrote:
> Hi All
>
> Attached is the complete env variables file
> I have added RTE_KERNELDIR too
> Also I am now using gnu-eabi version 5.4.0
> [arm-openwrt-linux-muslgnueabi-gcc (LEDE GCC 5.4.0 r3909-6411a12) 5.4.0]
> But I am still getting the same error
>
> Currently I am not using buildroot
> Is there any step-by-step guide available for cross-compiling dpdk using
> buildroot for target arm cortex-a15 with some external toolchain?
> I found this http://dpdk.org/ml/archives/announce/2015-October/000066.html
This short tutorial points to some older version of the Buildroot
support. That was before the ARM support was merged into DPDK.
I've just pushed the branch dpdk-support-v5 (d25ddaadf2) into
the RehiveTech repository. It contains the latest patch sent to the
Buildroot mailing list [1] and some more. By the way, it cleanly
applies to the latest Buildroot master as well.
This branch assumes DPDK 16.04 which is quite old but if you drop the
0001-mk-do-not-enforce-any-specific-ARM-ABI.patch, it might work for newer
DPDK as well.
Steps:
$ git clone https://github.com/RehiveTech/buildroot
$ cd buildroot
$ make qemu_arm_vexpress_defconfig
$ make menuconfig
* set libc library to glibc
* enable DPDK in Target packages/Libraries/Networking/DPDK
$ make linux-menuconfig
* enable UIO, PCI and MSI-X (if applicable)
$ make
I didn't test it myself recently but I believe that it should work well.
Instead of qemu_arm_vexpress_defconfig, you should select your target
board, if applicable.
I hope it helps you.
Regards
Jan
[1] https://patchwork.ozlabs.org/patch/611383/
>
>
> Please advise
>
>
>
> Thanks
>
> On Tue, May 16, 2017 at 5:14 PM, Neil Horman <nhorman@tuxdriver.com> wrote:
>
> > On Tue, May 16, 2017 at 12:51:40PM +0200, Jan Viktorin wrote:
> > > Hello Jimmy,
> > >
> > > On Tue, 16 May 2017 15:38:22 +0530
> > > Jimmy Carter <jimmycarter256@gmail.com> wrote:
> > >
> > > > Hi All
> > > >
> > > > I am using dpdk16.11.1 and want to use openwrt external toolchain so
> > that I
> > > > can cross compile for arm cortex 15
> > > > neon.(arm_cortex-a15+neon-vfpv4_gcc-5.4.0_musl_eabi)
> > >
> > > I've never built DPDK with musl-eabi. I don't think that your issue is
> > > related but just note that my builds have always been done with gnueabi.
> > >
> > > > My target board is a TP-Link Archer C2600.
> > > > I have assigned these env variables but am still getting a compilation
> > > > error
> > > >
> > > > export
> > > > STAGING_DIR=/home/xav-101000739/ovslede/source/
> > staging_dir/toolchain-arm_cortex-a15+neon-vfpv4_gcc-5.4.0_musl_eabi
> > > > export
> > > > PATH=$PATH:/home/xav-101000739/ovslede/source/
> > staging_dir/toolchain-arm_cortex-a15+neon-vfpv4_gcc-5.4.0_musl_eabi/bin
> > > >
> > > >
> > > > export CROSS=arm-openwrt-linux-
> > > > export DPDK_TARGET=arm-armv7a-linuxapp-gcc
> > > > export DPDK_DIR=$PWD
> > > > export DPDK_BUILD=$DPDK_DIR/$DPDK_TARGET
> > > > export
> > > > CFLAGS+=-I/home/xav-101000739/ovslede/source/staging_dir/
> > toolchain-arm_cortex-a15+neon-vfpv4_gcc-5.4.0_musl_eabi
> > > > export RTE_SDK=$PWD
> > > > export RTE_TARGET=arm-armv7a-linuxapp-gcc
> > > > export DPDK_BUILD_DIR=arm-armv7a-linuxapp-gcc
> > > >
> > >
> > > There is a patch to Buildroot that can help you with the setup. See:
> > >
> > > https://patchwork.ozlabs.org/patch/611383/
> > >
> > > >
> > > > Error:Attached file
> > >
> > > Your build fails on
> > >
> > > eal_memory.c:92:
> > > /home/xav-101000739/Downloads/dpdk/dpdk-stable-16.11.1/
> > build/include/rte_lcore.h:56:10: error: unknown type name 'cpu_set_t'
> > > typedef cpu_set_t rte_cpuset_t;
> > >
> > > This looks like there is some issue with Linux Kernel headers.
> > >
> > > lib/librte_eal/common/include/rte_lcore.h:
> > >
> > > 53 #if defined(__linux__)
> > > 54 typedef cpu_set_t rte_cpuset_t;
> > > 55 #elif defined(__FreeBSD__)
> > > 56 #include <pthread_np.h>
> > > 57 typedef cpuset_t rte_cpuset_t;
> > > 58 #endif
> > >
> > > Probably, you should set the RTE_KERNELDIR properly.
> > >
> > I don't think so. cpu_set_t is most recently defined in
> > /usr/include/bits/sched.h, which is a glibc header. What version of glibc
> > are you building with?
> >
> > Neil
> >
> > > >
> > > > Please advise
> > > > Does dpdk have support for openwrt (arm cortex a15)
> > >
> > > DPDK does not support OpenWRT because (as far as I know) nobody from
> > > the DPDK community is using it in this way. I build DPDK via Buildroot
> > > but this is unsupported by the DPDK upstream.
> > >
> > > I could build DPDK for Cortex-A7, Cortex-A9 and Cortex-A15 in the past.
> > >
> > > I run regular builds of the master branch and I can see no breakage
> > > for the arm-armv7a-linuxapp-gcc configuration.
> > >
> > > Regards
> > > Jan
> > >
> > > >
> > > > Thanks
> > > > Akshay
> > >
> >
--
Jan Viktorin E-mail: Viktorin@RehiveTech.com
System Architect Web: www.RehiveTech.com
RehiveTech
Brno, Czech Republic
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v1] doc: add template release notes for 17.08
@ 2017-05-11 12:57 6% John McNamara
0 siblings, 0 replies; 200+ results
From: John McNamara @ 2017-05-11 12:57 UTC (permalink / raw)
To: dev; +Cc: John McNamara
Add template release notes for DPDK 17.08 with inline
comments and explanations of the various sections.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
doc/guides/rel_notes/release_17_08.rst | 198 +++++++++++++++++++++++++++++++++
1 file changed, 198 insertions(+)
create mode 100644 doc/guides/rel_notes/release_17_08.rst
diff --git a/doc/guides/rel_notes/release_17_08.rst b/doc/guides/rel_notes/release_17_08.rst
new file mode 100644
index 0000000..74aae10
--- /dev/null
+++ b/doc/guides/rel_notes/release_17_08.rst
@@ -0,0 +1,198 @@
+DPDK Release 17.08
+==================
+
+.. **Read this first.**
+
+ The text in the sections below explains how to update the release notes.
+
+ Use proper spelling, capitalization and punctuation in all sections.
+
+ Variable and config names should be quoted as fixed width text:
+ ``LIKE_THIS``.
+
+ Build the docs and view the output file to ensure the changes are correct::
+
+ make doc-guides-html
+
+ xdg-open build/doc/html/guides/rel_notes/release_17_08.html
+
+
+New Features
+------------
+
+.. This section should contain new features added in this release. Sample
+ format:
+
+ * **Add a title in the past tense with a full stop.**
+
+ Add a short 1-2 sentence description in the past tense. The description
+ should be enough to allow someone scanning the release notes to
+ understand the new feature.
+
+ If the feature adds a lot of sub-features you can use a bullet list like
+ this:
+
+ * Added feature foo to do something.
+ * Enhanced feature bar to do something else.
+
+ Refer to the previous release notes for examples.
+
+ This section is a comment. do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =========================================================
+
+
+Resolved Issues
+---------------
+
+.. This section should contain bug fixes added to the relevant
+ sections. Sample format:
+
+ * **code/section Fixed issue in the past tense with a full stop.**
+
+ Add a short 1-2 sentence description of the resolved issue in the past
+ tense.
+
+ The title should contain the code/lib section like a commit message.
+
+ Add the entries in alphabetic order in the relevant sections below.
+
+ This section is a comment. do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =========================================================
+
+
+EAL
+~~~
+
+
+Drivers
+~~~~~~~
+
+
+Libraries
+~~~~~~~~~
+
+
+Examples
+~~~~~~~~
+
+
+Other
+~~~~~
+
+
+Known Issues
+------------
+
+.. This section should contain new known issues in this release. Sample format:
+
+ * **Add title in present tense with full stop.**
+
+ Add a short 1-2 sentence description of the known issue in the present
+ tense. Add information on any known workarounds.
+
+ This section is a comment. do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =========================================================
+
+
+API Changes
+-----------
+
+.. This section should contain API changes. Sample format:
+
+ * Add a short 1-2 sentence description of the API change. Use fixed width
+ quotes for ``rte_function_names`` or ``rte_struct_names``. Use the past
+ tense.
+
+ This section is a comment. do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =========================================================
+
+
+ABI Changes
+-----------
+
+.. This section should contain ABI changes. Sample format:
+
+ * Add a short 1-2 sentence description of the ABI change that was announced
+ in the previous releases and made in this release. Use fixed width quotes
+ for ``rte_function_names`` or ``rte_struct_names``. Use the past tense.
+
+ This section is a comment. do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =========================================================
+
+
+
+Shared Library Versions
+-----------------------
+
+.. Update any library version updated in this release and prepend with a ``+``
+ sign, like this:
+
+ librte_acl.so.2
+ + librte_cfgfile.so.2
+ librte_cmdline.so.2
+
+ This section is a comment. do not overwrite or remove it.
+ =========================================================
+
+
+The libraries prepended with a plus sign were incremented in this version.
+
+.. code-block:: diff
+
+ librte_acl.so.2
+ librte_bitratestats.so.1
+ librte_cfgfile.so.2
+ librte_cmdline.so.2
+ librte_cryptodev.so.2
+ librte_distributor.so.1
+ librte_eal.so.4
+ librte_ethdev.so.6
+ librte_hash.so.2
+ librte_ip_frag.so.1
+ librte_jobstats.so.1
+ librte_kni.so.2
+ librte_kvargs.so.1
+ librte_latencystats.so.1
+ librte_lpm.so.2
+ librte_mbuf.so.3
+ librte_mempool.so.2
+ librte_meter.so.1
+ librte_metrics.so.1
+ librte_net.so.1
+ librte_pdump.so.1
+ librte_pipeline.so.3
+ librte_pmd_bond.so.1
+ librte_pmd_ring.so.2
+ librte_port.so.3
+ librte_power.so.1
+ librte_reorder.so.1
+ librte_ring.so.1
+ librte_sched.so.1
+ librte_table.so.2
+ librte_timer.so.1
+ librte_vhost.so.3
+
+
+Tested Platforms
+----------------
+
+.. This section should contain a list of platforms that were tested with this
+ release.
+
+ The format is:
+
+ * <vendor> platform with <vendor> <type of devices> combinations
+
+ * List of CPU
+ * List of OS
+ * List of devices
+ * Other relevant details...
+
+ This section is a comment. do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =========================================================
--
2.7.4
^ permalink raw reply [relevance 6%]
* Re: [dpdk-dev] [PATCH 2/2] doc: postpone unaccomplished deprecation notices
2017-05-11 0:25 3% ` [dpdk-dev] [PATCH 2/2] doc: postpone unaccomplished " Thomas Monjalon
@ 2017-05-11 0:29 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2017-05-11 0:29 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, jblunck, olivier.matz
11/05/2017 02:25, Thomas Monjalon:
> Some work remains for the VDEV bus move.
> It is not yet certain whether there will be API or ABI changes.
> The notice is kept and postponed until the end of this rework.
>
> The PCI fields must be removed from cryptodev and the newly
> introduced eventdev, in order to complete the bus rework.
>
> The VLAN flags rework should be completed in 17.08.
>
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Series applied shortly for the release 17.05
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH 2/2] doc: postpone unaccomplished deprecation notices
@ 2017-05-11 0:25 3% ` Thomas Monjalon
2017-05-11 0:29 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2017-05-11 0:25 UTC (permalink / raw)
To: dev
Some work remains for the VDEV bus move.
It is not yet certain whether there will be API or ABI changes.
The notice is kept and postponed until the end of this rework.
The PCI fields must be removed from cryptodev and the newly
introduced eventdev, in order to complete the bus rework.
The VLAN flags rework should be completed in 17.08.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
doc/guides/rel_notes/deprecation.rst | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index f38c13f31..aa9cc57d8 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -29,10 +29,10 @@ Deprecation Notices
targets release 17.05.
* The VDEV subsystem will be converted as driver of the new bus model.
- It will imply some EAL API changes in 17.05.
+ It may imply some EAL API changes in 17.08.
* The struct ``rte_pci_driver`` is planned to be removed from
- ``rte_cryptodev_driver`` in 17.05.
+ ``rte_cryptodev_driver`` and ``rte_eventdev_driver`` in 17.08.
* ethdev: An API change is planned for 17.08 for the function
``_rte_eth_dev_callback_process``. In 17.08 the function will return an ``int``
@@ -64,7 +64,7 @@ Deprecation Notices
* The mbuf flags PKT_RX_VLAN_PKT and PKT_RX_QINQ_PKT are deprecated and
are respectively replaced by PKT_RX_VLAN_STRIPPED and
PKT_RX_QINQ_STRIPPED, that are better described. The old flags and
- their behavior will be kept until 17.02 and will be removed in 17.05.
+ their behavior will be kept until 17.05 and will be removed in 17.08.
* ethdev: Tx offloads will no longer be enabled by default in 17.08.
Instead, the ``rte_eth_txmode`` structure will be extended with
--
2.12.2
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] doc: postpone ABI change in ethdev
2017-04-26 15:02 4% ` Mcnamara, John
@ 2017-05-10 23:27 4% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2017-05-10 23:27 UTC (permalink / raw)
To: Iremonger, Bernard; +Cc: dev, Mcnamara, John, Yigit, Ferruh
> The change of _rte_eth_dev_callback_process has not been done in 17.05.
> > Let's postpone to 17.08.
> >
> > Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
>
> Acked-by: John McNamara <john.mcnamara@intel.com>
Applied, thanks
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH] doc: announce ABI change on ethdev
2017-05-09 17:04 4% ` Jerin Jacob
@ 2017-05-10 23:17 4% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2017-05-10 23:17 UTC (permalink / raw)
To: Shahaf Shuler
Cc: dev, Jerin Jacob, Adrien Mazarguil, Konstantin Ananyev,
Olivier Matz, Tomasz Kulasek
> > > This is an ABI change notice for DPDK 17.08 in librte_ether
> > > about changes in rte_eth_txmode structure.
> > >
> > > Currently Tx offloads are enabled by default, and can be disabled
> > > using ETH_TXQ_FLAGS_NO* flags. This behaviour is not consistent with
> > > the Rx side where the Rx offloads are disabled by default and enabled
> > > according to bit field in rte_eth_rxmode structure.
> > >
> > > The proposal is to disable the Tx offloads by default, and provide
> > > a way for the application to enable them in rte_eth_txmode structure.
> > > Besides of making the Tx configuration API more consistent for
> > > applications, PMDs will be able to provide a better out of the
> > > box performance.
> > > Finally, as part of the work, the ETH_TXQ_FLAGS_NO* will
> > > be superseded as well.
> > >
> > > Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> >
> > Basically, TX mbuf flags like TSO and checksum offloads won't have to be
> > honored by PMDs unless applications request them first while configuring the
> > device, just like RX offloads.
> >
> > Considering more and more TX offloads will be added over time, I do not
> > think expecting them all to be enabled by default is sane. There will always
> > be an associated software cost in PMDs, and this solution allows
> > applications to selectively enable them as needed for maximum performance.
> >
> > Konstantin/Olivier/Tomasz, I do not want to resume the thread about
> > tx_prepare(), however this could provide an alternative means to benefit
> > from improved performance when applications do not need TSO (or any other
> > offload for that matter), while adding consistency to device configuration.
> >
> > What's your opinion?
> >
> > In any case I'm fine with this change:
> >
> > Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
>
> Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Applied, thanks
^ permalink raw reply [relevance 4%]
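For context, a sketch of the 17.05 style configuration asymmetry this
notice addresses (values are illustrative and Rx queue setup is
omitted): Rx offloads must be switched on through rte_eth_rxmode bit
fields, while Tx offloads are on by default and can only be switched
off through the ETH_TXQ_FLAGS_NO* flags.

#include <rte_ethdev.h>

static int
configure_port(uint8_t port_id)
{
	struct rte_eth_conf conf = {
		.rxmode = {
			/* Rx side: offloads are opt-in. */
			.hw_ip_checksum = 1,
			.hw_vlan_strip = 1,
		},
	};
	struct rte_eth_txconf txconf = {
		/* Tx side: offloads are opt-out. */
		.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
			ETH_TXQ_FLAGS_NOOFFLOADS,
	};
	int ret;

	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret < 0)
		return ret;
	return rte_eth_tx_queue_setup(port_id, 0, 512,
			rte_eth_dev_socket_id(port_id), &txconf);
}

The proposal above aligns the Tx side with the Rx side: offloads stay
disabled unless explicitly requested through rte_eth_txmode.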
* Re: [dpdk-dev] [PATCH v2] devargs: announce ABI change for device parameters
2017-05-10 15:46 13% ` [dpdk-dev] [PATCH v2] " Gaetan Rivet
` (3 preceding siblings ...)
2017-05-10 19:54 4% ` Maxime Coquelin
@ 2017-05-10 23:14 4% ` Thomas Monjalon
4 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2017-05-10 23:14 UTC (permalink / raw)
To: Gaetan Rivet; +Cc: dev
10/05/2017 17:46, Gaetan Rivet:
> The PCI and virtual bus are planned to be moved to the generic
> drivers/bus directory in v17.08. For this change to be possible, the EAL
> must be made completely independent.
>
> The rte_devargs structure currently holds device representation internal
> to those two busses. It must be made generic before this work can be
> completed.
>
> Instead of using either a driver name for a vdev or a PCI address for a
> PCI device, a devargs structure will have to be able to describe any
> possible device on all busses, without introducing dependencies on
> any bus-specific device representation. This will break the ABI for this
> structure.
>
> Additionally, an evolution will occur regarding the device parsing
> from the command-line. A user must be able to set which bus will handle
> which device, and this setting is integral to the definition of a
> device.
>
> The format has not yet been formally defined, but a proposition will
> follow soon for a new command line parameter format for all devices.
>
> Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
> ---
> +* devargs: An ABI change is planned for 17.08 for the structure ``rte_devargs``.
> + The current version is dependent on bus-specific device identifier, which will
> + be made generic and abstracted, in order to make the EAL bus-agnostic.
> +
> + Accompanying this evolution, device command line parameters will thus support
> + explicit bus definition in a device declaration.
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Applied, thanks
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v2] devargs: announce ABI change for device parameters
2017-05-10 17:54 4% ` Stephen Hemminger
@ 2017-05-10 21:59 4% ` Gaëtan Rivet
0 siblings, 0 replies; 200+ results
From: Gaëtan Rivet @ 2017-05-10 21:59 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev, Maxime Coquelin
On Wed, May 10, 2017 at 10:54:07AM -0700, Stephen Hemminger wrote:
>On Wed, 10 May 2017 17:46:10 +0200
>Gaetan Rivet <gaetan.rivet@6wind.com> wrote:
>
>> The PCI and virtual bus are planned to be moved to the generic
>> drivers/bus directory in v17.08. For this change to be possible, the EAL
>> must be made completely independent.
>>
>> The rte_devargs structure currently holds device representation internal
>> to those two busses. It must be made generic before this work can be
>> completed.
>>
>> Instead of using either a driver name for a vdev or a PCI address for a
>> PCI device, a devargs structure will have to be able to describe any
>> possible device on all busses, without introducing dependencies on
>> any bus-specific device representation. This will break the ABI for this
>> structure.
>>
>> Additionally, an evolution will occur regarding the device parsing
>> from the command-line. A user must be able to set which bus will handle
>> which device, and this setting is integral to the definition of a
>> device.
>>
>> The format has not yet been formally defined, but a proposition will
>> follow soon for a new command line parameter format for all devices.
>>
>> Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
>
>I understand why having a union of all bus types is an issue, since it
>means that if a new bus type (like VMBUS with a GUID) has a bigger value,
>then the existing representation would break.
>
>Perhaps give an example of what the new model would look like?
Two functionalities from the EAL are currently still dependent on
bus-specific implementation from vdev and PCI: attach/detach and
device argument parsing.
attach/detach is currently being reworked by Jan Blunck. I am
handling the device parsing.
Device parsing consists of two core functionalities: parsing a device
name and storing the resulting information.
The semantics of validating a device, from a bus point of view, is to
report whether the bus would be able to parse the given name and generate
information relevant to its implementation.
The model is thus similar to that of the driver probe within a bus: if a
bus returns success on validate, then we register this bus as the handler
for this device. Otherwise, we try other busses. We only have to keep the
textual representation of the device within the devargs, given that the bus
is the one capable of parsing it later on and regenerating its internal
representation for scanning / probing purposes.
So the model is pretty straightforward: a new rte_bus method and a
purely textual representation of devices within rte_devargs.
One problem that remains is the possible ambiguity regarding device
names. Several separate teams will maintain their busses and nothing
prevents the device syntax of one bus from conflicting with that of
another.
Two possible approaches regarding a generic device name syntax:
* Explicit bus-name header
+ Non-ambiguous, future-proof.
- Forces scripts, tests and users to be updated everywhere.
eg. --dev "virtual:net_ring0" --dev "pci:0000:00:02.0"
* Best effort
+ Maintains backward compatibility for device parameters.
- Possible ambiguities if conflicting device name syntaxes.
eg. --dev "net_ring0" --dev "pci:0000:00:02.0"
The bus name separator must be carefully considered. ':' plays nice in
scripts and complex command lines, but obviously conflicts with the PCI
device syntax. The compromise option seems appealing, but I think that
the probability of conflicts between bus names and device syntax is pretty
high.
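To make the selection model above concrete, here is a rough sketch of
the lookup (every name in it is illustrative; this is not the API being
proposed):

#include <stddef.h>
#include <string.h>

/* Hypothetical per-bus hook: returns 0 if the bus can parse `dev`. */
struct bus_entry {
	const char *name;
	int (*parse)(const char *dev);
};

static const struct bus_entry *
find_bus_for_device(const struct bus_entry *busses, size_t n,
		const char *dev)
{
	const char *sep = strchr(dev, ':');
	size_t i;

	/* Explicit "bus:device" form takes precedence. A bare PCI
	 * address also contains ':' (the ambiguity discussed above),
	 * but its "0000" prefix matches no bus name, so it falls
	 * through to the best-effort pass. */
	if (sep != NULL) {
		for (i = 0; i < n; i++)
			if (strncmp(dev, busses[i].name,
					(size_t)(sep - dev)) == 0 &&
					busses[i].name[sep - dev] == '\0')
				return busses[i].parse(sep + 1) == 0 ?
						&busses[i] : NULL;
	}
	/* Best effort: the first bus that validates the name handles
	 * the device, mirroring driver probe within a bus. */
	for (i = 0; i < n; i++)
		if (busses[i].parse(dev) == 0)
			return &busses[i];
	return NULL;
}

With this loop, "net_ring0" is claimed by whichever bus validates it
first, while "virtual:net_ring0" names its bus explicitly.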
What's missing is the "whitelisted/blacklisted" flag. This is
considered in the design, but this email is starting to get long (using
-w and -b instead of --dev is always possible; the question is whether
an evolution here could be useful).
This new syntax will be proposed alongside the new rte_bus method.
I will send the relevant patches pretty soon. I have a
dependency on the work from Jan Blunck however and I might wait for him
to submit his patches first. Probably next week.
--
Gaëtan Rivet
6WIND
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v2] devargs: announce ABI change for device parameters
2017-05-10 15:46 13% ` [dpdk-dev] [PATCH v2] " Gaetan Rivet
` (2 preceding siblings ...)
2017-05-10 18:50 4% ` David Marchand
@ 2017-05-10 19:54 4% ` Maxime Coquelin
2017-05-10 23:14 4% ` Thomas Monjalon
4 siblings, 0 replies; 200+ results
From: Maxime Coquelin @ 2017-05-10 19:54 UTC (permalink / raw)
To: Gaetan Rivet, dev
On 05/10/2017 05:46 PM, Gaetan Rivet wrote:
> The PCI and virtual bus are planned to be moved to the generic
> drivers/bus directory in v17.08. For this change to be possible, the EAL
> must be made completely independent.
>
> The rte_devargs structure currently holds device representation internal
> to those two busses. It must be made generic before this work can be
> completed.
>
> Instead of using either a driver name for a vdev or a PCI address for a
> PCI device, a devargs structure will have to be able to describe any
> possible device on all busses, without introducing dependencies on
> any bus-specific device representation. This will break the ABI for this
> structure.
>
> Additionally, an evolution will occur regarding the device parsing
> from the command-line. A user must be able to set which bus will handle
> which device, and this setting is integral to the definition of a
> device.
>
> The format has not yet been formally defined, but a proposition will
> follow soon for a new command line parameter format for all devices.
>
> Signed-off-by: Gaetan Rivet<gaetan.rivet@6wind.com>
I understand the change is necessary, so:
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Looking forward for the proposal, I guess you already have some ideas
that you could share?
Thanks,
Maxime
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v3] doc: update release notes for 17.05
2017-05-09 16:56 2% ` [dpdk-dev] [PATCH v2] " John McNamara
@ 2017-05-10 19:01 5% ` John McNamara
0 siblings, 0 replies; 200+ results
From: John McNamara @ 2017-05-10 19:01 UTC (permalink / raw)
To: dev; +Cc: John McNamara
Fix grammar, spelling and formatting of DPDK 17.05 release notes.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
Acked-By: Shreyansh Jain <shreyansh.jain@nxp.com>
---
V3: Fix mailing list comments.
doc/guides/rel_notes/release_17_05.rst | 208 +++++++++++++++------------------
1 file changed, 97 insertions(+), 111 deletions(-)
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index 4b47ae1..33ebc3a 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -41,31 +41,34 @@ New Features
Also, make sure to start the actual text at the margin.
=========================================================
-* **Reorganized the mbuf structure.**
+* **Reorganized mbuf structure.**
+
+ The mbuf structure has been reorganized as follows:
* Align fields to facilitate the writing of ``data_off``, ``refcnt``, and
``nb_segs`` in one operation.
* Use 2 bytes for port and number of segments.
- * Move the sequence number in the second cache line.
+ * Move the sequence number to the second cache line.
* Add a timestamp field.
* Set default value for ``refcnt``, ``next`` and ``nb_segs`` at mbuf free.
-* **Added mbuf raw free API**
+* **Added mbuf raw free API.**
Moved ``rte_mbuf_raw_free()`` and ``rte_pktmbuf_prefree_seg()`` functions to
the public API.
* **Added free Tx mbuf on demand API.**
- Added a new function ``rte_eth_tx_done_cleanup()`` which allows an application
- to request the driver to release mbufs from their Tx ring that are no longer
- in use, independent of whether or not the ``tx_rs_thresh`` has been crossed.
+ Added a new function ``rte_eth_tx_done_cleanup()`` which allows an
+ application to request the driver to release mbufs that are no longer in use
+ from a Tx ring, independent of whether or not the ``tx_rs_thresh`` has been
+ crossed.
* **Added device removal interrupt.**
Added a new ethdev event ``RTE_ETH_DEV_INTR_RMV`` to signify
the sudden removal of a device.
- This event can be advertized by PCI drivers and enabled accordingly.
+ This event can be advertised by PCI drivers and enabled accordingly.
* **Added EAL dynamic log framework.**
@@ -77,25 +80,25 @@ New Features
Added a new API to get the status of a descriptor.
For Rx, it is almost similar to the ``rx_descriptor_done`` API, except
- it differentiates descriptors which are hold by the driver and not
+ it differentiates descriptors which are held by the driver and not
returned to the hardware. For Tx, it is a new API.
* **Increased number of next hops for LPM IPv6 to 2^21.**
- The next_hop field is extended from 8 bits to 21 bits for IPv6.
+ The next_hop field has been extended from 8 bits to 21 bits for IPv6.
* **Added VFIO hotplug support.**
- How hotplug supported with UIO and VFIO drivers.
+ Added hotplug support for VFIO in addition to the existing UIO support.
-* **Added powerpc support in pci probing for vfio-pci devices.**
+* **Added PowerPC support to pci probing for vfio-pci devices.**
- sPAPR IOMMU based pci probing enabled for vfio-pci devices.
+ Enabled sPAPR IOMMU based pci probing for vfio-pci devices.
-* **Kept consistent PMD batching behaviour.**
+* **Kept consistent PMD batching behavior.**
- Removed the limit of fm10k/i40e/ixgbe TX burst size and vhost RX/TX burst size
- in order to support the same policy of "make an best effort to RX/TX pkts"
+ Removed the limit of fm10k/i40e/ixgbe Tx burst size and vhost Rx/Tx burst size
+ in order to support the same policy of "make an best effort to Rx/Tx pkts"
for PMDs.
* **Updated the ixgbe base driver.**
@@ -106,64 +109,62 @@ New Features
* Complete HW initialization even if SFP is not present.
* Add VF xcast promiscuous mode.
-* **Added powerpc support for i40e and its vector PMD .**
-
- i40e PMD and its vector PMD enabled by default in powerpc.
-
-* **Added VF max bandwidth setting on i40e.**
+* **Added PowerPC support for i40e and its vector PMD.**
- i40e HW supports to set the max bandwidth for a VF. Enable this capability.
+ Enabled i40e PMD and its vector PMD by default in PowerPC.
-* **Added VF TC min bandwidth setting on i40e.**
+* **Added VF max bandwidth setting in i40e.**
- i40e HW supports to set the allocated bandwidth for a TC on a VF. Enable this
- capability.
+ Enabled capability to set the max bandwidth for a VF in i40e.
-* **Added VF TC max bandwidth setting on i40e.**
+* **Added VF TC min and max bandwidth setting in i40e.**
- i40e HW supports to set the max bandwidth for a TC on a VF. Enable this
- capability.
+ Enabled capability to set the min and max allocated bandwidth for a TC on a
+ VF in i40e.
* **Added TC strict priority mode setting on i40e.**
- There're 2 TX scheduling modes supported for TCs by i40e HW, round ribon mode
- and strict priority mode. By default it's round robin mode. Enable the
- capability to change the TX scheduling mode for a TC. It's a global setting
- on a physical port.
+ There are 2 Tx scheduling modes supported for TCs by i40e HW: round robin
+ mode and strict priority mode. By default the round robin mode is used. It
+ is now possible to change the Tx scheduling mode for a TC. This is a global
+ setting on a physical port.
* **Added i40e dynamic device personalization support.**
- * Added dynamic device personalization processing to i40e FW.
+ * Added dynamic device personalization processing to i40e firmware.
* **Added Cloud Filter for QinQ steering to i40e.**
* Added a QinQ cloud filter on the i40e PMD, for steering traffic to a VM
- using both VLAN tags.
- * QinQ is not supported in Vector Mode on the i40e PMD.
- * Vector Mode must be disabled when using the QinQ Cloud Filter.
+ using both VLAN tags. Note, this feature is not supported in Vector Mode.
* **Updated mlx5 PMD.**
- * Supported ether type in flow item.
- * Extended IPv6 flow item with Vtc flow, Protocol and Hop limit.
- * Supported flag flow action.
- * Supported RSS action flow rule.
- * Supported TSO for tunneled and non-tunneled packets.
- * Supported hardware checksum offloads for tunneled packets.
- * Supported user space Rx interrupt event.
- * Enhanced multi-packet send function for ConnectX-5.
+ Updated the mlx5 driver, including the following changes:
+
+ * Added Generic flow API support for classification according to ether type.
+ * Extended Generic flow API support for classification of IPv6 flow
+ according to Vtc flow, Protocol and Hop limit.
+ * Added Generic flow API support for FLAG action.
+ * Added Generic flow API support for RSS action.
+ * Added support for TSO for non-tunneled and VXLAN packets.
+ * Added support for hardware Tx checksum offloads for VXLAN packets.
+ * Added support for user space Rx interrupt mode.
+ * Improved ConnectX-5 single core and maximum performance.
* **Updated mlx4 PMD.**
- * Supported basic flow items and actions.
- * Supported device removal event.
+ Updated the mlx4 driver, including the following changes:
+
+ * Added support for Generic flow API basic flow items and actions.
+ * Added support for device removal event.
* **Updated the sfc_efx driver.**
- * Generic flow API support for Ethernet, VLAN, IPv4, IPv6, UDP and TCP
+ * Added Generic Flow API support for Ethernet, VLAN, IPv4, IPv6, UDP and TCP
pattern items with QUEUE action for ingress traffic.
- * Support virtual functions (VFs)
+ * Added support for virtual functions (VFs).
* **Added LiquidIO network PMD.**
@@ -172,19 +173,19 @@ New Features
* **Added Atomic Rules Arkville PMD.**
Added a new poll mode driver for the Arkville family of
- devices from Atomic Rules. The net/ark PMD supports line-rate
+ devices from Atomic Rules. The net/ark PMD supports line-rate
agnostic, multi-queue data movement on Arkville core FPGA instances.
* **Added support for NXP DPAA2 - FSLMC bus.**
Added the new bus "fslmc" driver for NXP DPAA2 devices. See the
- "Network Interface Controller Drivers" document for more details on this new
+ "Network Interface Controller Drivers" document for more details of this new
driver.
* **Added support for NXP DPAA2 Network PMD.**
Added the new "dpaa2" net driver for NXP DPAA2 devices. See the
- "Network Interface Controller Drivers" document for more details on this new
+ "Network Interface Controller Drivers" document for more details of this new
driver.
* **Added support for the Wind River Systems AVP PMD.**
@@ -195,23 +196,26 @@ New Features
* **Added vmxnet3 version 3 support.**
Added support for vmxnet3 version 3 which includes several
- performance enhancements viz. configurable TX data ring, Receive
- Data Ring, ability to register memory regions.
+ performance enhancements such as configurable Tx data ring, Receive
+ Data Ring, and the ability to register memory regions.
-* **Updated the tap driver.**
+* **Updated the TAP driver.**
+
+ Updated the TAP PMD to:
* Support MTU modification.
* Support packet type for Rx.
* Support segmented packets on Rx and Tx.
- * Speed up Rx on tap when no packets are available.
+ * Speed up Rx on TAP when no packets are available.
* Support capturing traffic from another netdevice.
* Dynamically change link status when the underlying interface state changes.
- * Generic flow API support for Ethernet, VLAN, IPv4, IPv6, UDP and TCP pattern
- items with DROP, QUEUE and PASSTHRU actions for ingress traffic.
+ * Added Generic Flow API support for Ethernet, VLAN, IPv4, IPv6, UDP and
+ TCP pattern items with DROP, QUEUE and PASSTHRU actions for ingress
+ traffic.
* **Added MTU feature support to Virtio and Vhost.**
- Implemented new Virtio MTU feature into Vhost and Virtio:
+ Implemented new Virtio MTU feature in Vhost and Virtio:
* Add ``rte_vhost_mtu_get()`` API to Vhost library.
* Enable Vhost PMD's MTU get feature.
@@ -228,21 +232,21 @@ New Features
* **Added event driven programming model library (rte_eventdev).**
- This API introduces event driven programming model.
+ This API introduces an event driven programming model.
In a polling model, lcores poll ethdev ports and associated
- rx queues directly to look for packet. In an event driven model,
- by contrast, lcores call the scheduler that selects packets for
- them based on programmer-specified criteria. Eventdev library
- added support for event driven programming model, which offer
+ Rx queues directly to look for a packet. By contrast in an event
+ driven model, lcores call the scheduler that selects packets for
+ them based on programmer-specified criteria. The Eventdev library
+ adds support for an event driven programming model, which offers
applications automatic multicore scaling, dynamic load balancing,
pipelining, packet ingress order maintenance and
synchronization services to simplify application packet processing.
- By introducing event driven programming model, DPDK can support
+ By introducing an event driven programming model, DPDK can support
both polling and event driven programming models for packet processing,
and applications are free to choose whatever model
- (or combination of the two) that best suits their needs.
+ (or combination of the two) best suits their needs.
* **Added Software Eventdev PMD.**
@@ -256,9 +260,9 @@ New Features
Added the new octeontx ssovf eventdev driver for OCTEONTX devices. See the
"Event Device Drivers" document for more details on this new driver.
-* **Added information metric library.**
+* **Added information metrics library.**
- A library that allows information metrics to be added and updated
+ Added a library that allows information metrics to be added and updated
by producers, typically other libraries, for later retrieval by
consumers such as applications. It is intended to provide a
reporting mechanism that is independent of other libraries such
@@ -266,13 +270,14 @@ New Features
* **Added bit-rate calculation library.**
- A library that can be used to calculate device bit-rates. Calculated
+ Added a library that can be used to calculate device bit-rates. Calculated
bitrates are reported using the metrics library.
* **Added latency stats library.**
- A library that measures packet latency. The collected statistics are jitter
- and latency. For latency the minimum, average, and maximum is measured.
+ Added a library that measures packet latency. The collected statistics are
+ jitter and latency. For latency the minimum, average, and maximum is
+ measured.
* **Added NXP DPAA2 SEC crypto PMD.**
@@ -282,13 +287,13 @@ New Features
* **Updated the Cryptodev Scheduler PMD.**
- * Added packet-size based distribution mode, which distributes the enqueued
+ * Added a packet-size based distribution mode, which distributes the enqueued
crypto operations among two slaves, based on their data lengths.
* Added fail-over scheduling mode, which enqueues crypto operations to a
primary slave first. Then, any operation that cannot be enqueued is
enqueued to a secondary slave.
- * Added mode specific option support, so each scheduleing mode can
- now be configured individually by the new added API.
+ * Added mode specific option support, so each scheduling mode can
+ now be configured individually by the new API.
* **Updated the QAT PMD.**
@@ -331,31 +336,12 @@ Resolved Issues
=========================================================
-EAL
-~~~
-
-
-Drivers
-~~~~~~~
-
-
-Libraries
-~~~~~~~~~
-
-
-Examples
-~~~~~~~~
-
* **l2fwd-keepalive: Fixed unclean shutdowns.**
Added clean shutdown to l2fwd-keepalive so that it can free up
stale resources used for inter-process communication.
-Other
-~~~~~
-
-
Known Issues
------------
@@ -370,7 +356,7 @@ Known Issues
Also, make sure to start the actual text at the margin.
=========================================================
-* **LSC interrupt cannot work for virtio-user + vhost-kernel.**
+* **LSC interrupt doesn't work for virtio-user + vhost-kernel.**
LSC interrupt cannot be detected when setting the backend, tap device,
up/down as we fail to find a way to monitor such event.
@@ -392,22 +378,22 @@ API Changes
* The LPM ``next_hop`` field is extended from 8 bits to 21 bits for IPv6
while keeping ABI compatibility.
-* **Reworked rte_ring library**
+* **Reworked rte_ring library.**
The rte_ring library has been reworked and updated. The following changes
have been made to it:
- * removed the build-time setting ``CONFIG_RTE_RING_SPLIT_PROD_CONS``
- * removed the build-time setting ``CONFIG_RTE_LIBRTE_RING_DEBUG``
- * removed the build-time setting ``CONFIG_RTE_RING_PAUSE_REP_COUNT``
- * removed the function ``rte_ring_set_water_mark`` as part of a general
+ * Removed the build-time setting ``CONFIG_RTE_RING_SPLIT_PROD_CONS``.
+ * Removed the build-time setting ``CONFIG_RTE_LIBRTE_RING_DEBUG``.
+ * Removed the build-time setting ``CONFIG_RTE_RING_PAUSE_REP_COUNT``.
+ * Removed the function ``rte_ring_set_water_mark`` as part of a general
removal of watermarks support in the library.
- * added an extra parameter to the burst/bulk enqueue functions to
+ * Added an extra parameter to the burst/bulk enqueue functions to
return the number of free spaces in the ring after enqueue. This can
be used by an application to implement its own watermark functionality.
- * added an extra parameter to the burst/bulk dequeue functions to return
+ * Added an extra parameter to the burst/bulk dequeue functions to return
the number elements remaining in the ring after dequeue.
- * changed the return value of the enqueue and dequeue bulk functions to
+ * Changed the return value of the enqueue and dequeue bulk functions to
match that of the burst equivalents. In all cases, ring functions which
operate on multiple packets now return the number of elements enqueued
or dequeued, as appropriate. The updated functions are:
@@ -425,11 +411,11 @@ API Changes
flagged by the compiler. The return value usage should be checked
while fixing the compiler error due to the extra parameter.
-* **Reworked rte_vhost library**
+* **Reworked rte_vhost library.**
The rte_vhost library has been reworked to make it generic enough so that
- user could build other vhost-user drivers on top of it. To achieve that,
- following changes have been made:
+ the user could build other vhost-user drivers on top of it. To achieve this,
+ the following changes have been made:
* The following vhost-pmd APIs are removed:
@@ -444,13 +430,13 @@ API Changes
* The vhost API ``rte_vhost_get_queue_num`` is deprecated, instead,
``rte_vhost_get_vring_num`` should be used.
- * Following macros are removed in ``rte_virtio_net.h``
+ * The following macros are removed in ``rte_virtio_net.h``
* ``VIRTIO_RXQ``
* ``VIRTIO_TXQ``
* ``VIRTIO_QNUM``
- * Following net specific header files are removed in ``rte_virtio_net.h``
+ * The following net specific header files are removed in ``rte_virtio_net.h``
* ``linux/virtio_net.h``
* ``sys/socket.h``
@@ -461,8 +447,8 @@ API Changes
``vhost_device_ops``
* The vhost API ``rte_vhost_driver_session_start`` is removed. Instead,
- ``rte_vhost_driver_start`` should be used, and no need to create a
- thread to call it.
+ ``rte_vhost_driver_start`` should be used, and there is no need to create
+ a thread to call it.
* The vhost public header file ``rte_virtio_net.h`` is renamed to
``rte_vhost.h``
@@ -486,8 +472,8 @@ ABI Changes
The order and size of the fields in the ``mbuf`` structure changed,
as described in the `New Features`_ section.
-* The ``rte_cryptodev_info.sym`` structure has new field ``max_nb_sessions_per_qp``
- to support drivers which may support limited number of sessions per queue_pair.
+* The ``rte_cryptodev_info.sym`` structure has a new field ``max_nb_sessions_per_qp``
+ to support drivers which may support a limited number of sessions per queue_pair.
Removed Items
@@ -502,9 +488,9 @@ Removed Items
Also, make sure to start the actual text at the margin.
=========================================================
-* KNI vhost support removed.
+* KNI vhost support has been removed.
-* dpdk_qat sample application removed.
+* The dpdk_qat sample application has been removed.
Shared Library Versions
-----------------------
--
2.7.4
^ permalink raw reply [relevance 5%]
* Re: [dpdk-dev] [PATCH v2] devargs: announce ABI change for device parameters
2017-05-10 15:46 13% ` [dpdk-dev] [PATCH v2] " Gaetan Rivet
2017-05-10 17:28 4% ` Jerin Jacob
2017-05-10 17:54 4% ` Stephen Hemminger
@ 2017-05-10 18:50 4% ` David Marchand
2017-05-10 19:54 4% ` Maxime Coquelin
2017-05-10 23:14 4% ` Thomas Monjalon
4 siblings, 0 replies; 200+ results
From: David Marchand @ 2017-05-10 18:50 UTC (permalink / raw)
To: Gaetan Rivet; +Cc: dev
Hey Gaetan,
On Wed, May 10, 2017 at 5:46 PM, Gaetan Rivet <gaetan.rivet@6wind.com> wrote:
> The PCI and virtual bus are planned to be moved to the generic
> drivers/bus directory in v17.08. For this change to be possible, the EAL
> must be made completely independent.
>
> The rte_devargs structure currently holds device representation internal
> to those two busses. It must be made generic before this work can be
> completed.
>
> Instead of using either a driver name for a vdev or a PCI address for a
> PCI device, a devargs structure will have to be able to describe any
> possible device on all busses, without introducing dependencies on
> any bus-specific device representation. This will break the ABI for this
> structure.
>
> Additionally, an evolution will occur regarding the device parsing
> from the command-line. A user must be able to set which bus will handle
> which device, and this setting is integral to the definition of a
> device.
>
> The format has not yet been formally defined, but a proposition will
> follow soon for a new command line parameter format for all devices.
>
> Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Glad to see people working on this.
Acked-by: David Marchand <david.marchand@6wind.com>
--
David Marchand
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v2] devargs: announce ABI change for device parameters
2017-05-10 15:46 13% ` [dpdk-dev] [PATCH v2] " Gaetan Rivet
2017-05-10 17:28 4% ` Jerin Jacob
@ 2017-05-10 17:54 4% ` Stephen Hemminger
2017-05-10 21:59 4% ` Gaëtan Rivet
2017-05-10 18:50 4% ` David Marchand
` (2 subsequent siblings)
4 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2017-05-10 17:54 UTC (permalink / raw)
To: Gaetan Rivet; +Cc: dev
On Wed, 10 May 2017 17:46:10 +0200
Gaetan Rivet <gaetan.rivet@6wind.com> wrote:
> The PCI and virtual bus are planned to be moved to the generic
> drivers/bus directory in v17.08. For this change to be possible, the EAL
> must be made completely independent.
>
> The rte_devargs structure currently holds device representation internal
> to those two busses. It must be made generic before this work can be
> completed.
>
> Instead of using either a driver name for a vdev or a PCI address for a
> PCI device, a devargs structure will have to be able to describe any
> possible device on all busses, without introducing dependencies on
> any bus-specific device representation. This will break the ABI for this
> structure.
>
> Additionally, an evolution will occur regarding the device parsing
> from the command-line. A user must be able to set which bus will handle
> which device, and this setting is integral to the definition of a
> device.
>
> The format has not yet been formally defined, but a proposition will
> follow soon for a new command line parameter format for all devices.
>
> Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
I understand why having a union of all bus types is an issue, since it
means that if a new bus type (like VMBUS with a GUID) needs a bigger
identifier, the existing representation would break.
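For illustration, here is a simplified sketch of the ABI concern (this is
not the actual rte_devargs layout; the structure and field names are
hypothetical):

    #include <stdint.h>

    /* A public structure embedding a union of bus-specific ids. */
    struct devargs_like {
        union {
            char drv_name[32];              /* vdev: driver name */
            struct {
                uint16_t domain;            /* PCI address */
                uint8_t bus;
                uint8_t devid;
                uint8_t function;
            } pci;
            /* Adding a larger member here, e.g. a 16-byte GUID for
             * VMBUS, would grow the union and therefore
             * sizeof(struct devargs_like), breaking the ABI of any
             * application compiled against the old layout. */
        } id;
        char *args;                         /* bus/driver arguments */
    };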
Perhaps give an example of what the new model would look like?
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v2] devargs: announce ABI change for device parameters
2017-05-10 15:46 13% ` [dpdk-dev] [PATCH v2] " Gaetan Rivet
@ 2017-05-10 17:28 4% ` Jerin Jacob
2017-05-10 17:54 4% ` Stephen Hemminger
` (3 subsequent siblings)
4 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2017-05-10 17:28 UTC (permalink / raw)
To: Gaetan Rivet; +Cc: dev
-----Original Message-----
> Date: Wed, 10 May 2017 17:46:10 +0200
> From: Gaetan Rivet <gaetan.rivet@6wind.com>
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v2] devargs: announce ABI change for device
> parameters
> X-Mailer: git-send-email 2.1.4
>
> The PCI and virtual bus are planned to be moved to the generic
> drivers/bus directory in v17.08. For this change to be possible, the EAL
> must be made completely independent.
>
> The rte_devargs structure currently holds device representation internal
> to those two busses. It must be made generic before this work can be
> completed.
>
> Instead of using either a driver name for a vdev or a PCI address for a
> PCI device, a devargs structure will have to be able to describe any
> possible device on all busses, without introducing dependencies on
> any bus-specific device representation. This will break the ABI for this
> structure.
>
> Additionally, an evolution will occur regarding the device parsing
> from the command-line. A user must be able to set which bus will handle
> which device, and this setting is integral to the definition of a
> device.
>
> The format has not yet been formally defined, but a proposition will
> follow soon for a new command line parameter format for all devices.
>
> Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
Looking forward to seeing vdev and PCI under drivers/bus/
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> ---
> v1 -> v2
> * The first part of this series has been dropped.
> After discussion with Thomas, it was decided to postpone the removal
> of the relevant rte_pci_* functions.
> * Add the parameters evolution in-tree additionally to the commit log.
> ---
> doc/guides/rel_notes/deprecation.rst | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index a3e7c72..8f800dc 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -81,3 +81,10 @@ Deprecation Notices
>
> - ``rte_crpytodev_scheduler_mode_get``, replaced by ``rte_cryptodev_scheduler_mode_get``
> - ``rte_crpytodev_scheduler_mode_set``, replaced by ``rte_cryptodev_scheduler_mode_set``
> +
> +* devargs: An ABI change is planned for 17.08 for the structure ``rte_devargs``.
> + The current version is dependent on bus-specific device identifier, which will
> + be made generic and abstracted, in order to make the EAL bus-agnostic.
> +
> + Accompanying this evolution, device command line parameters will thus support
> + explicit bus definition in a device declaration.
> --
> 2.1.4
>
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH] doc: postpone ABI change in ethdev
2017-04-18 15:48 4% [dpdk-dev] [PATCH] doc: postpone ABI change in ethdev Bernard Iremonger
2017-04-26 15:02 4% ` Mcnamara, John
2017-05-10 15:18 4% ` Pattan, Reshma
@ 2017-05-10 16:31 4% ` Ananyev, Konstantin
2 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2017-05-10 16:31 UTC (permalink / raw)
To: Iremonger, Bernard, dev; +Cc: Mcnamara, John, Yigit, Ferruh, Iremonger, Bernard
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Bernard Iremonger
> Sent: Tuesday, April 18, 2017 4:49 PM
> To: dev@dpdk.org
> Cc: Mcnamara, John <john.mcnamara@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; Iremonger, Bernard
> <bernard.iremonger@intel.com>
> Subject: [dpdk-dev] [PATCH] doc: postpone ABI change in ethdev
>
> The change of _rte_eth_dev_callback_process has not been done in 17.05.
> Let's postpone to 17.08.
>
> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index a3e7c720c..00e379c00 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -36,8 +36,8 @@ Deprecation Notices
> ``eth_driver``. Similarly, ``rte_pci_driver`` is planned to be removed from
> ``rte_cryptodev_driver`` in 17.05.
>
> -* ethdev: An API change is planned for 17.05 for the function
> - ``_rte_eth_dev_callback_process``. In 17.05 the function will return an ``int``
> +* ethdev: An API change is planned for 17.08 for the function
> + ``_rte_eth_dev_callback_process``. In 17.08 the function will return an ``int``
> instead of ``void`` and a fourth parameter ``void *ret_param`` will be added.
>
> * ethdev: for 17.05 it is planned to deprecate the following nine rte_eth_dev_*
> --
> 2.11.0
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v2] devargs: announce ABI change for device parameters
2017-05-10 12:52 13% ` [dpdk-dev] [PATCH 2/2] devargs: announce ABI change for device parameters Gaetan Rivet
@ 2017-05-10 15:46 13% ` Gaetan Rivet
2017-05-10 17:28 4% ` Jerin Jacob
` (4 more replies)
1 sibling, 5 replies; 200+ results
From: Gaetan Rivet @ 2017-05-10 15:46 UTC (permalink / raw)
To: dev
The PCI and virtual bus are planned to be moved to the generic
drivers/bus directory in v17.08. For this change to be possible, the EAL
must be made completely independent.
The rte_devargs structure currently holds device representation internal
to those two busses. It must be made generic before this work can be
completed.
Instead of using either a driver name for a vdev or a PCI address for a
PCI device, a devargs structure will have to be able to describe any
possible device on all busses, without introducing dependencies on
any bus-specific device representation. This will break the ABI for this
structure.
Additionally, an evolution will occur regarding the device parsing
from the command-line. A user must be able to set which bus will handle
which device, and this setting is integral to the definition of a
device.
The format has not yet been formally defined, but a proposition will
follow soon for a new command line parameter format for all devices.
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
---
v1 -> v2
* The first part of this series has been dropped.
After discussion with Thomas, it was decided to postpone the removal
of the relevant rte_pci_* functions.
* Add the parameters evolution in-tree additionally to the commit log.
---
doc/guides/rel_notes/deprecation.rst | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index a3e7c72..8f800dc 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -81,3 +81,10 @@ Deprecation Notices
- ``rte_crpytodev_scheduler_mode_get``, replaced by ``rte_cryptodev_scheduler_mode_get``
- ``rte_crpytodev_scheduler_mode_set``, replaced by ``rte_cryptodev_scheduler_mode_set``
+
+* devargs: An ABI change is planned for 17.08 for the structure ``rte_devargs``.
+ The current version is dependent on bus-specific device identifier, which will
+ be made generic and abstracted, in order to make the EAL bus-agnostic.
+
+ Accompanying this evolution, device command line parameters will thus support
+ explicit bus definition in a device declaration.
--
2.1.4
^ permalink raw reply [relevance 13%]
* Re: [dpdk-dev] [PATCH] doc: postpone ABI change in ethdev
2017-04-18 15:48 4% [dpdk-dev] [PATCH] doc: postpone ABI change in ethdev Bernard Iremonger
2017-04-26 15:02 4% ` Mcnamara, John
@ 2017-05-10 15:18 4% ` Pattan, Reshma
2017-05-10 16:31 4% ` Ananyev, Konstantin
2 siblings, 0 replies; 200+ results
From: Pattan, Reshma @ 2017-05-10 15:18 UTC (permalink / raw)
To: Iremonger, Bernard, dev; +Cc: Mcnamara, John, Yigit, Ferruh, Iremonger, Bernard
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Bernard Iremonger
> Sent: Tuesday, April 18, 2017 4:49 PM
> To: dev@dpdk.org
> Cc: Mcnamara, John <john.mcnamara@intel.com>; Yigit, Ferruh
> <ferruh.yigit@intel.com>; Iremonger, Bernard
> <bernard.iremonger@intel.com>
> Subject: [dpdk-dev] [PATCH] doc: postpone ABI change in ethdev
>
> The change of _rte_eth_dev_callback_process has not been done in 17.05.
> Let's postpone to 17.08.
>
> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Reshma Pattan <reshma.pattan@intel.com>
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] custom align for mempool elements
2017-05-05 17:23 0% ` Gregory Etelson
@ 2017-05-10 15:01 4% ` Olivier Matz
0 siblings, 0 replies; 200+ results
From: Olivier Matz @ 2017-05-10 15:01 UTC (permalink / raw)
To: Gregory Etelson; +Cc: dev
Hi Gregory,
On Fri, 5 May 2017 20:23:15 +0300, Gregory Etelson <gregory@weka.io> wrote:
> Hello Olivier,
>
> Our application writes data from incoming MBUFs to a storage device.
> For performance considerations we use O_DIRECT storage access
> and work in 'zero copy' data mode.
> To achieve the 'zero copy' we MUST arrange the data in all MBUFs to be
> 512-byte aligned.
> With a pre-calculated custom pool element alignment and the right
> RTE_PKTMBUF_HEADROOM value,
> we can set the MBUF data alignment to any value; in our case, 512 bytes.
>
> Current implementation sets custom mempool alignment like this:
>
> struct rte_mempool *mp = rte_mempool_create_empty("MBUF pool",
>         mbufs_count, elt_size, cache_size,
>         sizeof(struct rte_pktmbuf_pool_private), rte_socket_id(), 0);
>
> rte_pktmbuf_pool_init(mp, &mbp_priv);
> mp->elt_align = align;
> rte_mempool_populate_default(mp);
I think we should try to avoid modifying mp->elt_align directly.
A new API could be added for that, maybe something like:
int rte_mempool_set_obj_align(struct rte_mempool *mp, size_t align)
size_t rte_mempool_get_obj_align(struct rte_mempool *mp)
The set() function would have to be called before mempool_populate().
We need to take care of conflicts with the MEMPOOL_F_NO_CACHE_ALIGN
flag. I think we should keep compatibility for users of this flag; it
could be deprecated and removed later. I think there may also be some
conflicts with MEMPOOL_F_NO_SPREAD.
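For instance, usage could look like this (a minimal sketch only:
rte_mempool_set_obj_align() is the API proposed above and does not exist
yet, and the error handling is illustrative):

    struct rte_mempool *mp;

    mp = rte_mempool_create_empty("MBUF pool", mbufs_count, elt_size,
            cache_size, sizeof(struct rte_pktmbuf_pool_private),
            rte_socket_id(), 0);
    if (mp == NULL)
        rte_exit(EXIT_FAILURE, "cannot create mempool\n");

    rte_pktmbuf_pool_init(mp, &mbp_priv);

    /* Proposed API (not yet in the tree): must be called before the
     * pool is populated. */
    if (rte_mempool_set_obj_align(mp, 512) < 0)
        rte_exit(EXIT_FAILURE, "cannot set object alignment\n");

    rte_mempool_populate_default(mp);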
As I said in my previous mail, if the patch breaks the ABI (the mempool
structure), it has to follow the ABI deprecation process (= a notice in
17.05 and the patch for 17.08).
I'm afraid it would be quite late for the notice though.
Regards,
Olivier
>
> Regards,
> Gregory
>
> On Fri, May 5, 2017 at 2:26 PM, Olivier Matz <olivier.matz@6wind.com> wrote:
>
> > Hi Gregory,
> >
> > On Wed, 26 Apr 2017 07:00:49 +0300, Gregory Etelson <gregory@weka.io>
> > wrote:
> > > Signed-off-by: Gregory Etelson <gregory@weka.io>
> > > ---
> > > lib/librte_mempool/rte_mempool.c | 27 ++++++++++++++++++++-------
> > > lib/librte_mempool/rte_mempool.h | 1 +
> > > 2 files changed, 21 insertions(+), 7 deletions(-)
> > >
> > > diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_
> > mempool.c
> > > index f65310f..c780df3 100644
> > > --- a/lib/librte_mempool/rte_mempool.c
> > > +++ b/lib/librte_mempool/rte_mempool.c
> > > @@ -382,7 +382,7 @@ rte_mempool_populate_phys(struct rte_mempool *mp,
> > char *vaddr,
> > > if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
> > > off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
> > > else
> > > - off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) -
> > vaddr;
> > > + off = RTE_PTR_ALIGN_CEIL(vaddr, mp->elt_align) - vaddr;
> > >
> > > while (off + total_elt_sz <= len && mp->populated_size < mp->size)
> > {
> > > off += mp->header_size;
> > > @@ -392,6 +392,7 @@ rte_mempool_populate_phys(struct rte_mempool *mp,
> > char *vaddr,
> > > else
> > > mempool_add_elem(mp, (char *)vaddr + off, paddr +
> > off);
> > > off += mp->elt_size + mp->trailer_size;
> > > + off = RTE_ALIGN_CEIL(off, mp->elt_align);
> > > i++;
> > > }
> > >
> > > @@ -508,6 +509,20 @@ rte_mempool_populate_virt(struct rte_mempool *mp,
> > char *addr,
> > > return ret;
> > > }
> > >
> > > +static uint32_t
> > > +mempool_default_elt_alignment(void)
> > > +{
> > > + uint32_t align;
> > > + if (rte_xen_dom0_supported()) {
> > > + align = RTE_PGSIZE_2M;
> > > + } else if (rte_eal_has_hugepages()) {
> > > + align = RTE_CACHE_LINE_SIZE;
> > > + } else {
> > > + align = getpagesize();
> > > + }
> > > + return align;
> > > +}
> > > +
> > > /* Default function to populate the mempool: allocate memory in
> > memzones,
> > > * and populate them. Return the number of objects added, or a negative
> > > * value on error.
> > > @@ -518,7 +533,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> > > int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
> > > char mz_name[RTE_MEMZONE_NAMESIZE];
> > > const struct rte_memzone *mz;
> > > - size_t size, total_elt_sz, align, pg_sz, pg_shift;
> > > + size_t size, total_elt_sz, pg_sz, pg_shift;
> > > phys_addr_t paddr;
> > > unsigned mz_id, n;
> > > int ret;
> > > @@ -530,15 +545,12 @@ rte_mempool_populate_default(struct rte_mempool
> > *mp)
> > > if (rte_xen_dom0_supported()) {
> > > pg_sz = RTE_PGSIZE_2M;
> > > pg_shift = rte_bsf32(pg_sz);
> > > - align = pg_sz;
> > > } else if (rte_eal_has_hugepages()) {
> > > pg_shift = 0; /* not needed, zone is physically contiguous
> > */
> > > pg_sz = 0;
> > > - align = RTE_CACHE_LINE_SIZE;
> > > } else {
> > > pg_sz = getpagesize();
> > > pg_shift = rte_bsf32(pg_sz);
> > > - align = pg_sz;
> > > }
> > >
> > > total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
> > > @@ -553,11 +565,11 @@ rte_mempool_populate_default(struct rte_mempool
> > *mp)
> > > }
> > >
> > > mz = rte_memzone_reserve_aligned(mz_name, size,
> > > - mp->socket_id, mz_flags, align);
> > > + mp->socket_id, mz_flags, mp->elt_align);
> > > /* not enough memory, retry with the biggest zone we have
> > */
> > > if (mz == NULL)
> > > mz = rte_memzone_reserve_aligned(mz_name, 0,
> > > - mp->socket_id, mz_flags, align);
> > > + mp->socket_id, mz_flags, mp->elt_align);
> > > if (mz == NULL) {
> > > ret = -rte_errno;
> > > goto fail;
> > > @@ -827,6 +839,7 @@ rte_mempool_create_empty(const char *name, unsigned
> > n, unsigned elt_size,
> > > /* Size of default caches, zero means disabled. */
> > > mp->cache_size = cache_size;
> > > mp->private_data_size = private_data_size;
> > > + mp->elt_align = mempool_default_elt_alignment();
> > > STAILQ_INIT(&mp->elt_list);
> > > STAILQ_INIT(&mp->mem_list);
> > >
> > > diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_
> > mempool.h
> > > index 48bc8ea..6631973 100644
> > > --- a/lib/librte_mempool/rte_mempool.h
> > > +++ b/lib/librte_mempool/rte_mempool.h
> > > @@ -245,6 +245,7 @@ struct rte_mempool {
> > > * this mempool.
> > > */
> > > int32_t ops_index;
> > > + uint32_t elt_align;
> > >
> > > struct rte_mempool_cache *local_cache; /**< Per-lcore local cache
> > */
> > >
> >
> > It looks like the patch will break the ABI (the mempool structure), so it
> > has to follow the ABI deprecation process (= a notice in 17.05 and
> > the patch for 17.08).
> >
> > Could you give us some details about why you need such a feature, and how you
> > use it (since no API is added)?
> >
> > Thanks,
> > Olivier
> >
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH] doc: announce ABI change on ethdev
2017-05-01 6:58 13% [dpdk-dev] [PATCH] doc: announce ABI change on ethdev Shahaf Shuler
` (3 preceding siblings ...)
2017-05-09 18:09 4% ` Ananyev, Konstantin
@ 2017-05-10 14:29 4% ` Bruce Richardson
4 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2017-05-10 14:29 UTC (permalink / raw)
To: Shahaf Shuler; +Cc: dev
On Mon, May 01, 2017 at 09:58:12AM +0300, Shahaf Shuler wrote:
> This is an ABI change notice for DPDK 17.08 in librte_ether
> about changes in rte_eth_txmode structure.
>
> Currently Tx offloads are enabled by default, and can be disabled
> using ETH_TXQ_FLAGS_NO* flags. This behaviour is not consistent with
> the Rx side where the Rx offloads are disabled by default and enabled
> according to bit field in rte_eth_rxmode structure.
>
> The proposal is to disable the Tx offloads by default, and provide
> a way for the application to enable them in rte_eth_txmode structure.
> Besides making the Tx configuration API more consistent for
> applications, PMDs will be able to provide a better out of the
> box performance.
> Finally, as part of the work, the ETH_TXQ_FLAGS_NO* will
> be superseded as well.
>
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> ---
Sounds like a great idea to me. I never liked the fact that offloads were on
by default and needed to be explicitly disabled.
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH 2/2] devargs: announce ABI change for device parameters
2017-05-10 12:52 13% ` [dpdk-dev] [PATCH 2/2] devargs: announce ABI change for device parameters Gaetan Rivet
@ 2017-05-10 13:04 4% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2017-05-10 13:04 UTC (permalink / raw)
To: Gaetan Rivet; +Cc: dev
10/05/2017 14:52, Gaetan Rivet:
> The PCI and virtual bus are planned to be moved to the generic
> drivers/bus directory in v17.08. For this change to be possible, the EAL
> must be made completely independent.
>
> The rte_devargs structure currently holds device representation internal
> to those two busses. It must be made generic before this work can be
> completed.
>
> Instead of using either a driver name for a vdev or a PCI address for a
> PCI device, a devargs structure will have to be able to describe any
> possible device on all busses, without introducing dependencies on
> any bus-specific device representation. This will break the ABI for this
> structure.
>
> Additionally, an evolution will occur regarding the device parsing
> from the command-line. A user must be able to set which bus will handle
> which device, and this setting is integral to the definition of a
> device.
This syntax evolution is not announced below.
> The format has not yet been formally defined, but a proposition will
> follow soon for a new command line parameter format for all devices.
>
> Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index c2f58eb..e91fc99 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -87,3 +87,7 @@ Deprecation Notices
>
> - ``rte_eal_pci_detach``, replaced by using the corresponding bus generic
> method ``detach``.
> +
> +* devargs: An ABI change is planned for 17.08 for the structure ``rte_devargs``.
> + The current version is dependent on bus-specific device identifier, which will
> + be made generic and abstracted, in order to make the EAL bus-agnostic.
>
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH 2/2] devargs: announce ABI change for device parameters
@ 2017-05-10 12:52 13% ` Gaetan Rivet
2017-05-10 13:04 4% ` Thomas Monjalon
2017-05-10 15:46 13% ` [dpdk-dev] [PATCH v2] " Gaetan Rivet
1 sibling, 1 reply; 200+ results
From: Gaetan Rivet @ 2017-05-10 12:52 UTC (permalink / raw)
To: dev
The PCI and virtual bus are planned to be moved to the generic
drivers/bus directory in v17.08. For this change to be possible, the EAL
must be made completely independent.
The rte_devargs structure currently holds device representation internal
to those two busses. It must be made generic before this work can be
completed.
Instead of using either a driver name for a vdev or a PCI address for a
PCI device, a devargs structure will have to be able to describe any
possible device on all busses, without introducing dependencies on
any bus-specific device representation. This will break the ABI for this
structure.
Additionally, an evolution will occur regarding the device parsing
from the command-line. A user must be able to set which bus will handle
which device, and this setting is integral to the definition of a
device.
The format has not yet been formally defined, but a proposition will
follow soon for a new command line parameter format for all devices.
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
---
doc/guides/rel_notes/deprecation.rst | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index c2f58eb..e91fc99 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -87,3 +87,7 @@ Deprecation Notices
- ``rte_eal_pci_detach``, replaced by using the corresponding bus generic
method ``detach``.
+
+* devargs: An ABI change is planned for 17.08 for the structure ``rte_devargs``.
+ The current version is dependent on bus-specific device identifier, which will
+ be made generic and abstracted, in order to make the EAL bus-agnostic.
--
2.1.4
^ permalink raw reply [relevance 13%]
* Re: [dpdk-dev] [PATCH] doc: announce ABI change on ethdev
2017-05-01 6:58 13% [dpdk-dev] [PATCH] doc: announce ABI change on ethdev Shahaf Shuler
` (2 preceding siblings ...)
2017-05-09 13:49 4% ` Ferruh Yigit
@ 2017-05-09 18:09 4% ` Ananyev, Konstantin
2017-05-10 14:29 4% ` Bruce Richardson
4 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2017-05-09 18:09 UTC (permalink / raw)
To: Shahaf Shuler, dev
>
> This is an ABI change notice for DPDK 17.08 in librte_ether
> about changes in rte_eth_txmode structure.
>
> Currently Tx offloads are enabled by default, and can be disabled
> using ETH_TXQ_FLAGS_NO* flags. This behaviour is not consistent with
> the Rx side where the Rx offloads are disabled by default and enabled
> according to bit field in rte_eth_rxmode structure.
>
> The proposal is to disable the Tx offloads by default, and provide
> a way for the application to enable them in rte_eth_txmode structure.
> Besides of making the Tx configuration API more consistent for
> applications, PMDs will be able to provide a better out of the
> box performance.
> Finally, as part of the work, the ETH_TXQ_FLAGS_NO* will
> be superseded as well.
>
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> ---
> it looks like this patch reached everyone
> except dev@dpdk.org, so I am resending it. Sorry for
> the noise.
> ---
> doc/guides/rel_notes/deprecation.rst | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index a3e7c720c..0920b4766 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -81,3 +81,11 @@ Deprecation Notices
>
> - ``rte_crpytodev_scheduler_mode_get``, replaced by ``rte_cryptodev_scheduler_mode_get``
> - ``rte_crpytodev_scheduler_mode_set``, replaced by ``rte_cryptodev_scheduler_mode_set``
> +
> +* ethdev: in 17.08 ABI changes are planned:
> + Tx offloads will no longer be enabled by default.
> + Instead, the ``rte_eth_txmode`` structure will be extended with bit field to enable
> + each Tx offload.
> + Besides making the Rx/Tx configuration API more consistent for the
> + application, PMDs will be able to provide a better out of the box performance.
> + As part of the work, ``ETH_TXQ_FLAGS_NO*`` will be superseded as well.
Seems OK to me. The only extra suggestion I have:
instead of introducing new bit-fields, can we make
txmode (and rxmode) use the DEV_TX_OFFLOAD_* (and DEV_RX_OFFLOAD_*) values
to specify the desired Tx (and Rx) offloads?
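For instance, something along these lines (a rough sketch only: the
"offloads" field is hypothetical and is meant purely to illustrate reusing
the existing DEV_TX_OFFLOAD_* capability flags; it is not part of the
current rte_eth_txmode):

    /* Hypothetical: txmode carries a 64-bit bitmask built from the
     * existing DEV_TX_OFFLOAD_* flags instead of new bit-fields. */
    struct rte_eth_conf conf;

    memset(&conf, 0, sizeof(conf));
    conf.txmode.offloads = DEV_TX_OFFLOAD_IPV4_CKSUM |
                           DEV_TX_OFFLOAD_TCP_CKSUM |
                           DEV_TX_OFFLOAD_TCP_TSO;
    rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);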
Konstantin
> --
> 2.12.0
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH] doc: announce ABI change on ethdev
2017-05-09 13:40 4% ` Adrien Mazarguil
@ 2017-05-09 17:04 4% ` Jerin Jacob
2017-05-10 23:17 4% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2017-05-09 17:04 UTC (permalink / raw)
To: Adrien Mazarguil
Cc: Shahaf Shuler, Konstantin Ananyev, Olivier Matz, Tomasz Kulasek, dev
-----Original Message-----
> Date: Tue, 9 May 2017 15:40:04 +0200
> From: Adrien Mazarguil <adrien.mazarguil@6wind.com>
> To: Shahaf Shuler <shahafs@mellanox.com>, Konstantin Ananyev
> <konstantin.ananyev@intel.com>, Olivier Matz <olivier.matz@6wind.com>,
> Tomasz Kulasek <tomaszx.kulasek@intel.com>
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] doc: announce ABI change on ethdev
>
> On Mon, May 01, 2017 at 09:58:12AM +0300, Shahaf Shuler wrote:
> > This is an ABI change notice for DPDK 17.08 in librte_ether
> > about changes in rte_eth_txmode structure.
> >
> > Currently Tx offloads are enabled by default, and can be disabled
> > using ETH_TXQ_FLAGS_NO* flags. This behaviour is not consistent with
> > the Rx side where the Rx offloads are disabled by default and enabled
> > according to bit field in rte_eth_rxmode structure.
> >
> > The proposal is to disable the Tx offloads by default, and provide
> > a way for the application to enable them in rte_eth_txmode structure.
> > Besides of making the Tx configuration API more consistent for
> > applications, PMDs will be able to provide a better out of the
> > box performance.
> > Finally, as part of the work, the ETH_TXQ_FLAGS_NO* will
> > be superseded as well.
> >
> > Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
>
> Basically, TX mbuf flags like TSO and checksum offloads won't have to be
> honored by PMDs unless applications request them first while configuring the
> device, just like RX offloads.
>
> Considering more and more TX offloads will be added over time, I do not
> think expecting them all to be enabled by default is sane. There will always
> be an associated software cost in PMDs, and this solution allows
> applications to selectively enable them as needed for maximum performance.
>
> Konstantin/Olivier/Tomasz, I do not want to resume the thread about
> tx_prepare(), however this could provide an alternative means to benefit
> from improved performance when applications do not need TSO (or any other
> offload for that matter), while adding consistency to device configuration.
>
> What's your opinion?
>
> In any case I'm fine with this change:
>
> Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v2] doc: update release notes for 17.05
2017-05-09 16:25 2% [dpdk-dev] [PATCH v1] doc: update release notes for 17.05 John McNamara
@ 2017-05-09 16:56 2% ` John McNamara
2017-05-10 19:01 5% ` [dpdk-dev] [PATCH v3] " John McNamara
0 siblings, 1 reply; 200+ results
From: John McNamara @ 2017-05-09 16:56 UTC (permalink / raw)
To: dev; +Cc: John McNamara
Fix grammar, spelling and formatting of DPDK 17.05 release notes.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
V2: Added updated MLX4/5 text. This patch supersedes patch 24166.
doc/guides/rel_notes/release_17_05.rst | 197 +++++++++++++++------------------
1 file changed, 91 insertions(+), 106 deletions(-)
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index 4b47ae1..f911711 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -41,16 +41,18 @@ New Features
Also, make sure to start the actual text at the margin.
=========================================================
-* **Reorganized the mbuf structure.**
+* **Reorganized mbuf structure.**
+
+ The mbuf structure has been reorganized as follows:
* Align fields to facilitate the writing of ``data_off``, ``refcnt``, and
``nb_segs`` in one operation.
* Use 2 bytes for port and number of segments.
- * Move the sequence number in the second cache line.
+ * Move the sequence number to the second cache line.
* Add a timestamp field.
* Set default value for ``refcnt``, ``next`` and ``nb_segs`` at mbuf free.
-* **Added mbuf raw free API**
+* **Added mbuf raw free API.**
Moved ``rte_mbuf_raw_free()`` and ``rte_pktmbuf_prefree_seg()`` functions to
the public API.
@@ -58,14 +60,14 @@ New Features
* **Added free Tx mbuf on demand API.**
Added a new function ``rte_eth_tx_done_cleanup()`` which allows an application
- to request the driver to release mbufs from their Tx ring that are no longer
- in use, independent of whether or not the ``tx_rs_thresh`` has been crossed.
+ to request the driver to release mbufs that are no longer in use from a
+ Tx ring, independent of whether or not the ``tx_rs_thresh`` has been crossed.
* **Added device removal interrupt.**
Added a new ethdev event ``RTE_ETH_DEV_INTR_RMV`` to signify
the sudden removal of a device.
- This event can be advertized by PCI drivers and enabled accordingly.
+ This event can be advertised by PCI drivers and enabled accordingly.
* **Added EAL dynamic log framework.**
@@ -77,25 +79,25 @@ New Features
Added a new API to get the status of a descriptor.
For Rx, it is almost similar to the ``rx_descriptor_done`` API, except
- it differentiates descriptors which are hold by the driver and not
+ it differentiates descriptors which are held by the driver and not
returned to the hardware. For Tx, it is a new API.
* **Increased number of next hops for LPM IPv6 to 2^21.**
- The next_hop field is extended from 8 bits to 21 bits for IPv6.
+ The next_hop field has been extended from 8 bits to 21 bits for IPv6.
* **Added VFIO hotplug support.**
- How hotplug supported with UIO and VFIO drivers.
+ Added hotplug support for VFIO in addition to the existing UIO support.
-* **Added powerpc support in pci probing for vfio-pci devices.**
+* **Added PowerPC support to pci probing for vfio-pci devices.**
- sPAPR IOMMU based pci probing enabled for vfio-pci devices.
+ Enabled sPAPR IOMMU based pci probing for vfio-pci devices.
-* **Kept consistent PMD batching behaviour.**
+* **Kept consistent PMD batching behavior.**
- Removed the limit of fm10k/i40e/ixgbe TX burst size and vhost RX/TX burst size
- in order to support the same policy of "make an best effort to RX/TX pkts"
+ Removed the limit of fm10k/i40e/ixgbe Tx burst size and vhost Rx/Tx burst size
+ in order to support the same policy of "make a best effort to Rx/Tx pkts"
for PMDs.
* **Updated the ixgbe base driver.**
@@ -106,64 +108,62 @@ New Features
* Complete HW initialization even if SFP is not present.
* Add VF xcast promiscuous mode.
-* **Added powerpc support for i40e and its vector PMD .**
+* **Added PowerPC support for i40e and its vector PMD.**
- i40e PMD and its vector PMD enabled by default in powerpc.
+ Enabled i40e PMD and its vector PMD by default in PowerPC.
* **Added VF max bandwidth setting on i40e.**
- i40e HW supports to set the max bandwidth for a VF. Enable this capability.
-
-* **Added VF TC min bandwidth setting on i40e.**
-
- i40e HW supports to set the allocated bandwidth for a TC on a VF. Enable this
- capability.
+ Enabled capability to set the max bandwidth for a VF, in i40e.
-* **Added VF TC max bandwidth setting on i40e.**
+* **Added VF TC min and max bandwidth setting on i40e.**
- i40e HW supports to set the max bandwidth for a TC on a VF. Enable this
- capability.
+ Enabled capability to set the min and max allocated bandwidth for a TC on a
+ VF, in i40e.
* **Added TC strict priority mode setting on i40e.**
- There're 2 TX scheduling modes supported for TCs by i40e HW, round ribon mode
- and strict priority mode. By default it's round robin mode. Enable the
- capability to change the TX scheduling mode for a TC. It's a global setting
- on a physical port.
+ There are 2 Tx scheduling modes supported for TCs by i40e HW, round robin
+ mode and strict priority mode. By default the round robin mode is used. It
+ is now possible to change the Tx scheduling mode for a TC. This is a global
+ setting on a physical port.
* **Added i40e dynamic device personalization support.**
- * Added dynamic device personalization processing to i40e FW.
+ * Added dynamic device personalization processing to i40e firmware.
* **Added Cloud Filter for QinQ steering to i40e.**
* Added a QinQ cloud filter on the i40e PMD, for steering traffic to a VM
- using both VLAN tags.
- * QinQ is not supported in Vector Mode on the i40e PMD.
- * Vector Mode must be disabled when using the QinQ Cloud Filter.
+ using both VLAN tags. Note, this feature is not supported in Vector Mode.
* **Updated mlx5 PMD.**
- * Supported ether type in flow item.
- * Extended IPv6 flow item with Vtc flow, Protocol and Hop limit.
- * Supported flag flow action.
- * Supported RSS action flow rule.
- * Supported TSO for tunneled and non-tunneled packets.
- * Supported hardware checksum offloads for tunneled packets.
- * Supported user space Rx interrupt event.
- * Enhanced multi-packet send function for ConnectX-5.
+ Updated the mlx5 driver, including the following changes:
+
+ * Added Generic flow API support for classification according to ether type.
+ * Extended Generic flow API support for classification of IPv6 flow
+ according to Vtc flow, Protocol and Hop limit.
+ * Added Generic flow API support for FLAG action.
+ * Added Generic flow API support for RSS action.
+ * Added support for TSO for non-tunneled and VXLAN packets.
+ * Added support for hardware Tx checksum offloads for VXLAN packets.
+ * Added support for user space Rx interrupt mode.
+ * Improved ConnectX-5 single core and maximum performance.
* **Updated mlx4 PMD.**
- * Supported basic flow items and actions.
- * Supported device removal event.
+ Updated the mlx4 driver, including the following changes:
+
+ * Added support for Generic flow API basic flow items and actions.
+ * Added support for device removal event.
* **Updated the sfc_efx driver.**
- * Generic flow API support for Ethernet, VLAN, IPv4, IPv6, UDP and TCP
+ * Added Generic Flow API support for Ethernet, VLAN, IPv4, IPv6, UDP and TCP
pattern items with QUEUE action for ingress traffic.
- * Support virtual functions (VFs)
+ * Added support for virtual functions (VFs).
* **Added LiquidIO network PMD.**
@@ -172,19 +172,19 @@ New Features
* **Added Atomic Rules Arkville PMD.**
Added a new poll mode driver for the Arkville family of
- devices from Atomic Rules. The net/ark PMD supports line-rate
+ devices from Atomic Rules. The net/ark PMD supports line-rate
agnostic, multi-queue data movement on Arkville core FPGA instances.
* **Added support for NXP DPAA2 - FSLMC bus.**
Added the new bus "fslmc" driver for NXP DPAA2 devices. See the
- "Network Interface Controller Drivers" document for more details on this new
+ "Network Interface Controller Drivers" document for more details of this new
driver.
* **Added support for NXP DPAA2 Network PMD.**
Added the new "dpaa2" net driver for NXP DPAA2 devices. See the
- "Network Interface Controller Drivers" document for more details on this new
+ "Network Interface Controller Drivers" document for more details of this new
driver.
* **Added support for the Wind River Systems AVP PMD.**
@@ -195,23 +195,26 @@ New Features
* **Added vmxnet3 version 3 support.**
Added support for vmxnet3 version 3 which includes several
- performance enhancements viz. configurable TX data ring, Receive
- Data Ring, ability to register memory regions.
+ performance enhancements such as configurable Tx data ring, Receive
+ Data Ring, and the ability to register memory regions.
-* **Updated the tap driver.**
+* **Updated the TAP driver.**
+
+ Updated the TAP PMD to:
* Support MTU modification.
* Support packet type for Rx.
* Support segmented packets on Rx and Tx.
- * Speed up Rx on tap when no packets are available.
+ * Speed up Rx on TAP when no packets are available.
* Support capturing traffic from another netdevice.
* Dynamically change link status when the underlying interface state changes.
- * Generic flow API support for Ethernet, VLAN, IPv4, IPv6, UDP and TCP pattern
- items with DROP, QUEUE and PASSTHRU actions for ingress traffic.
+ * Added Generic Flow API support for Ethernet, VLAN, IPv4, IPv6, UDP and
+ TCP pattern items with DROP, QUEUE and PASSTHRU actions for ingress
+ traffic.
* **Added MTU feature support to Virtio and Vhost.**
- Implemented new Virtio MTU feature into Vhost and Virtio:
+ Implemented new Virtio MTU feature in Vhost and Virtio:
* Add ``rte_vhost_mtu_get()`` API to Vhost library.
* Enable Vhost PMD's MTU get feature.
@@ -228,21 +231,21 @@ New Features
* **Added event driven programming model library (rte_eventdev).**
- This API introduces event driven programming model.
+ This API introduces an event driven programming model.
In a polling model, lcores poll ethdev ports and associated
- rx queues directly to look for packet. In an event driven model,
- by contrast, lcores call the scheduler that selects packets for
- them based on programmer-specified criteria. Eventdev library
- added support for event driven programming model, which offer
+ Rx queues directly to look for a packet. By contrast in an event
+ driven model, lcores call the scheduler that selects packets for
+ them based on programmer-specified criteria. The Eventdev library
+ adds support for an event driven programming model, which offers
applications automatic multicore scaling, dynamic load balancing,
pipelining, packet ingress order maintenance and
synchronization services to simplify application packet processing.
- By introducing event driven programming model, DPDK can support
+ By introducing an event driven programming model, DPDK can support
both polling and event driven programming models for packet processing,
and applications are free to choose whatever model
- (or combination of the two) that best suits their needs.
+ (or combination of the two) best suits their needs.
* **Added Software Eventdev PMD.**
@@ -256,9 +259,9 @@ New Features
Added the new octeontx ssovf eventdev driver for OCTEONTX devices. See the
"Event Device Drivers" document for more details on this new driver.
-* **Added information metric library.**
+* **Added information metrics library.**
- A library that allows information metrics to be added and updated
+ Added a library that allows information metrics to be added and updated
by producers, typically other libraries, for later retrieval by
consumers such as applications. It is intended to provide a
reporting mechanism that is independent of other libraries such
@@ -266,13 +269,14 @@ New Features
* **Added bit-rate calculation library.**
- A library that can be used to calculate device bit-rates. Calculated
+ Added a library that can be used to calculate device bit-rates. Calculated
bitrates are reported using the metrics library.
* **Added latency stats library.**
- A library that measures packet latency. The collected statistics are jitter
- and latency. For latency the minimum, average, and maximum is measured.
+ Added a library that measures packet latency. The collected statistics are
+ jitter and latency. For latency the minimum, average, and maximum are
+ measured.
* **Added NXP DPAA2 SEC crypto PMD.**
@@ -282,13 +286,13 @@ New Features
* **Updated the Cryptodev Scheduler PMD.**
- * Added packet-size based distribution mode, which distributes the enqueued
+ * Added a packet-size based distribution mode, which distributes the enqueued
crypto operations between two slaves, based on their data lengths.
* Added fail-over scheduling mode, which enqueues crypto operations to a
primary slave first. Then, any operation that cannot be enqueued is
enqueued to a secondary slave.
- * Added mode specific option support, so each scheduleing mode can
- now be configured individually by the new added API.
+ * Added mode specific option support, so each scheduling mode can
+ now be configured individually by the new API.
* **Updated the QAT PMD.**
@@ -331,31 +335,12 @@ Resolved Issues
=========================================================
-EAL
-~~~
-
-
-Drivers
-~~~~~~~
-
-
-Libraries
-~~~~~~~~~
-
-
-Examples
-~~~~~~~~
-
* **l2fwd-keepalive: Fixed unclean shutdowns.**
Added clean shutdown to l2fwd-keepalive so that it can free up
stale resources used for inter-process communication.
-Other
-~~~~~
-
-
Known Issues
------------
@@ -397,17 +382,17 @@ API Changes
The rte_ring library has been reworked and updated. The following changes
have been made to it:
- * removed the build-time setting ``CONFIG_RTE_RING_SPLIT_PROD_CONS``
- * removed the build-time setting ``CONFIG_RTE_LIBRTE_RING_DEBUG``
- * removed the build-time setting ``CONFIG_RTE_RING_PAUSE_REP_COUNT``
- * removed the function ``rte_ring_set_water_mark`` as part of a general
+ * Removed the build-time setting ``CONFIG_RTE_RING_SPLIT_PROD_CONS``.
+ * Removed the build-time setting ``CONFIG_RTE_LIBRTE_RING_DEBUG``.
+ * Removed the build-time setting ``CONFIG_RTE_RING_PAUSE_REP_COUNT``.
+ * Removed the function ``rte_ring_set_water_mark`` as part of a general
removal of watermarks support in the library.
- * added an extra parameter to the burst/bulk enqueue functions to
+ * Added an extra parameter to the burst/bulk enqueue functions to
return the number of free spaces in the ring after enqueue. This can
be used by an application to implement its own watermark functionality.
- * added an extra parameter to the burst/bulk dequeue functions to return
+ * Added an extra parameter to the burst/bulk dequeue functions to return
the number of elements remaining in the ring after dequeue.
- * changed the return value of the enqueue and dequeue bulk functions to
+ * Changed the return value of the enqueue and dequeue bulk functions to
match that of the burst equivalents. In all cases, ring functions which
operate on multiple packets now return the number of elements enqueued
or dequeued, as appropriate. The updated functions are:
@@ -428,8 +413,8 @@ API Changes
* **Reworked rte_vhost library**
The rte_vhost library has been reworked to make it generic enough so that
- user could build other vhost-user drivers on top of it. To achieve that,
- following changes have been made:
+ the user could build other vhost-user drivers on top of it. To achieve this,
+ the following changes have been made:
* The following vhost-pmd APIs are removed:
@@ -444,13 +429,13 @@ API Changes
* The vhost API ``rte_vhost_get_queue_num`` is deprecated, instead,
``rte_vhost_get_vring_num`` should be used.
- * Following macros are removed in ``rte_virtio_net.h``
+ * The following macros are removed in ``rte_virtio_net.h``
* ``VIRTIO_RXQ``
* ``VIRTIO_TXQ``
* ``VIRTIO_QNUM``
- * Following net specific header files are removed in ``rte_virtio_net.h``
+ * The following net specific header files are removed in ``rte_virtio_net.h``
* ``linux/virtio_net.h``
* ``sys/socket.h``
@@ -461,8 +446,8 @@ API Changes
``vhost_device_ops``
* The vhost API ``rte_vhost_driver_session_start`` is removed. Instead,
- ``rte_vhost_driver_start`` should be used, and no need to create a
- thread to call it.
+ ``rte_vhost_driver_start`` should be used, and there is no need to create
+ a thread to call it.
* The vhost public header file ``rte_virtio_net.h`` is renamed to
``rte_vhost.h``
@@ -486,8 +471,8 @@ ABI Changes
The order and size of the fields in the ``mbuf`` structure changed,
as described in the `New Features`_ section.
-* The ``rte_cryptodev_info.sym`` structure has new field ``max_nb_sessions_per_qp``
- to support drivers which may support limited number of sessions per queue_pair.
+* The ``rte_cryptodev_info.sym`` structure has a new field ``max_nb_sessions_per_qp``
+ to support drivers which may support a limited number of sessions per queue_pair.
Removed Items
@@ -502,9 +487,9 @@ Removed Items
Also, make sure to start the actual text at the margin.
=========================================================
-* KNI vhost support removed.
+* KNI vhost support has been removed.
-* dpdk_qat sample application removed.
+* The dpdk_qat sample application has been removed.
Shared Library Versions
-----------------------
--
2.7.4
^ permalink raw reply [relevance 2%]
* Re: [dpdk-dev] [PATCH] doc: announce ABI change on ethdev
2017-05-09 13:49 4% ` Ferruh Yigit
@ 2017-05-09 16:55 4% ` Shahaf Shuler
0 siblings, 0 replies; 200+ results
From: Shahaf Shuler @ 2017-05-09 16:55 UTC (permalink / raw)
To: Ferruh Yigit, dev
Tuesday, May 9, 2017 4:49 PM, Ferruh Yigit:
> On 5/1/2017 7:58 AM, Shahaf Shuler wrote:
>
> I understand the consistency part, but why does the PMD perform better when
> Tx offloads are disabled?
>
Well, Adrien pretty much summarized it [1].
Tx offloads consume cycles, for example checksum or TSO.
Since those offloads are enabled by default, the application has to pay for them even if they are not used.
True, it is possible to disable them; however, when new offloads are introduced, the application will need to be constantly modified in order to disable the new ones.
A better approach is to disable all the Tx offloads by default, and enable them according to the application's needs.
This will enable the PMD to provide the Tx burst function that best suits the application's specific needs, with no extra overhead.
[1] http://dpdk.org/ml/archives/dev/2017-May/065509.html
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v1] doc: update release notes for 17.05
@ 2017-05-09 16:25 2% John McNamara
2017-05-09 16:56 2% ` [dpdk-dev] [PATCH v2] " John McNamara
0 siblings, 1 reply; 200+ results
From: John McNamara @ 2017-05-09 16:25 UTC (permalink / raw)
To: dev; +Cc: John McNamara
Fix grammar, spelling and formatting of DPDK 17.05 release notes.
Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
doc/guides/rel_notes/release_17_05.rst | 192 +++++++++++++++------------------
1 file changed, 88 insertions(+), 104 deletions(-)
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index 4b47ae1..266f678 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -41,16 +41,18 @@ New Features
Also, make sure to start the actual text at the margin.
=========================================================
-* **Reorganized the mbuf structure.**
+* **Reorganized mbuf structure.**
+
+ The mbuf structure has been reorganized as follows:
* Align fields to facilitate the writing of ``data_off``, ``refcnt``, and
``nb_segs`` in one operation.
* Use 2 bytes for port and number of segments.
- * Move the sequence number in the second cache line.
+ * Move the sequence number to the second cache line.
* Add a timestamp field.
* Set default value for ``refcnt``, ``next`` and ``nb_segs`` at mbuf free.
-* **Added mbuf raw free API**
+* **Added mbuf raw free API.**
Moved ``rte_mbuf_raw_free()`` and ``rte_pktmbuf_prefree_seg()`` functions to
the public API.
@@ -58,14 +60,14 @@ New Features
* **Added free Tx mbuf on demand API.**
Added a new function ``rte_eth_tx_done_cleanup()`` which allows an application
- to request the driver to release mbufs from their Tx ring that are no longer
- in use, independent of whether or not the ``tx_rs_thresh`` has been crossed.
+ to request the driver to release mbufs that are no longer in use from a
+ Tx ring, independent of whether or not the ``tx_rs_thresh`` has been crossed.
* **Added device removal interrupt.**
Added a new ethdev event ``RTE_ETH_DEV_INTR_RMV`` to signify
the sudden removal of a device.
- This event can be advertized by PCI drivers and enabled accordingly.
+ This event can be advertised by PCI drivers and enabled accordingly.
* **Added EAL dynamic log framework.**
@@ -77,25 +79,25 @@ New Features
Added a new API to get the status of a descriptor.
For Rx, it is almost similar to the ``rx_descriptor_done`` API, except
- it differentiates descriptors which are hold by the driver and not
+ it differentiates descriptors which are held by the driver and not
returned to the hardware. For Tx, it is a new API.
* **Increased number of next hops for LPM IPv6 to 2^21.**
- The next_hop field is extended from 8 bits to 21 bits for IPv6.
+ The next_hop field has been extended from 8 bits to 21 bits for IPv6.
* **Added VFIO hotplug support.**
- How hotplug supported with UIO and VFIO drivers.
+ Added hotplug support for VFIO in addition to the existing UIO support.
-* **Added powerpc support in pci probing for vfio-pci devices.**
+* **Added PowerPC support to pci probing for vfio-pci devices.**
- sPAPR IOMMU based pci probing enabled for vfio-pci devices.
+ Enabled sPAPR IOMMU based pci probing for vfio-pci devices.
-* **Kept consistent PMD batching behaviour.**
+* **Kept consistent PMD batching behavior.**
- Removed the limit of fm10k/i40e/ixgbe TX burst size and vhost RX/TX burst size
- in order to support the same policy of "make an best effort to RX/TX pkts"
+ Removed the limit of fm10k/i40e/ixgbe Tx burst size and vhost Rx/Tx burst size
+ in order to support the same policy of "make a best effort to Rx/Tx pkts"
for PMDs.
* **Updated the ixgbe base driver.**
@@ -106,64 +108,61 @@ New Features
* Complete HW initialization even if SFP is not present.
* Add VF xcast promiscuous mode.
-* **Added powerpc support for i40e and its vector PMD .**
+* **Added PowerPC support for i40e and its vector PMD.**
- i40e PMD and its vector PMD enabled by default in powerpc.
+ Enabled i40e PMD and its vector PMD by default in PowerPC.
* **Added VF max bandwidth setting on i40e.**
- i40e HW supports to set the max bandwidth for a VF. Enable this capability.
-
-* **Added VF TC min bandwidth setting on i40e.**
-
- i40e HW supports to set the allocated bandwidth for a TC on a VF. Enable this
- capability.
+ Enabled capability to set the max bandwidth for a VF, in i40e.
-* **Added VF TC max bandwidth setting on i40e.**
+* **Added VF TC min and max bandwidth setting on i40e.**
- i40e HW supports to set the max bandwidth for a TC on a VF. Enable this
- capability.
+ Enabled capability to set the min and max allocated bandwidth for a TC on a
+ VF, in i40e.
* **Added TC strict priority mode setting on i40e.**
- There're 2 TX scheduling modes supported for TCs by i40e HW, round ribon mode
- and strict priority mode. By default it's round robin mode. Enable the
- capability to change the TX scheduling mode for a TC. It's a global setting
- on a physical port.
+ There are 2 Tx scheduling modes supported for TCs by i40e HW, round robin
+ mode and strict priority mode. By default the round robin mode is used. It
+ is now possible to change the Tx scheduling mode for a TC. This is a global
+ setting on a physical port.
* **Added i40e dynamic device personalization support.**
- * Added dynamic device personalization processing to i40e FW.
+ * Added dynamic device personalization processing to i40e firmware.
* **Added Cloud Filter for QinQ steering to i40e.**
* Added a QinQ cloud filter on the i40e PMD, for steering traffic to a VM
- using both VLAN tags.
- * QinQ is not supported in Vector Mode on the i40e PMD.
- * Vector Mode must be disabled when using the QinQ Cloud Filter.
+ using both VLAN tags. Note, this feature is not supported in Vector Mode.
* **Updated mlx5 PMD.**
- * Supported ether type in flow item.
+ Updated the mlx5 driver, including the following changes:
+
+ * Added support for ether type in flow item.
* Extended IPv6 flow item with Vtc flow, Protocol and Hop limit.
- * Supported flag flow action.
- * Supported RSS action flow rule.
- * Supported TSO for tunneled and non-tunneled packets.
- * Supported hardware checksum offloads for tunneled packets.
- * Supported user space Rx interrupt event.
+ * Added support for flag flow action.
+ * Added support for RSS action flow rule.
+ * Added support for TSO for tunneled and non-tunneled packets.
+ * Added support for hardware checksum offloads for tunneled packets.
+ * Added support for user space Rx interrupt event.
* Enhanced multi-packet send function for ConnectX-5.
* **Updated mlx4 PMD.**
- * Supported basic flow items and actions.
- * Supported device removal event.
+ Updated the mlx4 driver, including the following changes:
+
+ * Added support for basic flow items and actions.
+ * Added support for device removal event.
* **Updated the sfc_efx driver.**
- * Generic flow API support for Ethernet, VLAN, IPv4, IPv6, UDP and TCP
+ * Added Generic Flow API support for Ethernet, VLAN, IPv4, IPv6, UDP and TCP
pattern items with QUEUE action for ingress traffic.
- * Support virtual functions (VFs)
+ * Added support for virtual functions (VFs).
* **Added LiquidIO network PMD.**
@@ -172,19 +171,19 @@ New Features
* **Added Atomic Rules Arkville PMD.**
Added a new poll mode driver for the Arkville family of
- devices from Atomic Rules. The net/ark PMD supports line-rate
+ devices from Atomic Rules. The net/ark PMD supports line-rate
agnostic, multi-queue data movement on Arkville core FPGA instances.
* **Added support for NXP DPAA2 - FSLMC bus.**
Added the new bus "fslmc" driver for NXP DPAA2 devices. See the
- "Network Interface Controller Drivers" document for more details on this new
+ "Network Interface Controller Drivers" document for more details of this new
driver.
* **Added support for NXP DPAA2 Network PMD.**
Added the new "dpaa2" net driver for NXP DPAA2 devices. See the
- "Network Interface Controller Drivers" document for more details on this new
+ "Network Interface Controller Drivers" document for more details of this new
driver.
* **Added support for the Wind River Systems AVP PMD.**
@@ -195,23 +194,26 @@ New Features
* **Added vmxnet3 version 3 support.**
Added support for vmxnet3 version 3 which includes several
- performance enhancements viz. configurable TX data ring, Receive
- Data Ring, ability to register memory regions.
+ performance enhancements such as configurable Tx data ring, Receive
+ Data Ring, and the ability to register memory regions.
-* **Updated the tap driver.**
+* **Updated the TAP driver.**
+
+ Updated the TAP PMD to:
* Support MTU modification.
* Support packet type for Rx.
* Support segmented packets on Rx and Tx.
- * Speed up Rx on tap when no packets are available.
+ * Speed up Rx on TAP when no packets are available.
* Support capturing traffic from another netdevice.
* Dynamically change link status when the underlying interface state changes.
- * Generic flow API support for Ethernet, VLAN, IPv4, IPv6, UDP and TCP pattern
- items with DROP, QUEUE and PASSTHRU actions for ingress traffic.
+ * Added Generic Flow API support for Ethernet, VLAN, IPv4, IPv6, UDP and
+ TCP pattern items with DROP, QUEUE and PASSTHRU actions for ingress
+ traffic.
* **Added MTU feature support to Virtio and Vhost.**
- Implemented new Virtio MTU feature into Vhost and Virtio:
+ Implemented new Virtio MTU feature in Vhost and Virtio:
* Add ``rte_vhost_mtu_get()`` API to Vhost library.
* Enable Vhost PMD's MTU get feature.
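
  A sketch of the new API in use (assuming the rte_vhost_mtu_get()
  signature with an out-parameter, returning 0 on success):

    #include <stdio.h>
    #include <stdint.h>
    #include <rte_vhost.h>

    /* Sketch: query the MTU negotiated with the virtio guest for
     * vhost device 'vid'. */
    static void
    print_guest_mtu(int vid)
    {
            uint16_t mtu;

            if (rte_vhost_mtu_get(vid, &mtu) == 0)
                    printf("vid %d: negotiated MTU %u\n", vid, mtu);
    }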
@@ -228,21 +230,21 @@ New Features
* **Added event driven programming model library (rte_eventdev).**
- This API introduces event driven programming model.
+ This API introduces an event driven programming model.
In a polling model, lcores poll ethdev ports and associated
- rx queues directly to look for packet. In an event driven model,
- by contrast, lcores call the scheduler that selects packets for
- them based on programmer-specified criteria. Eventdev library
- added support for event driven programming model, which offer
+ Rx queues directly to look for a packet. By contrast in an event
+ driven model, lcores call the scheduler that selects packets for
+ them based on programmer-specified criteria. The Eventdev library
+ adds support for an event driven programming model, which offers
applications automatic multicore scaling, dynamic load balancing,
pipelining, packet ingress order maintenance and
synchronization services to simplify application packet processing.
- By introducing event driven programming model, DPDK can support
+ By introducing an event driven programming model, DPDK can support
both polling and event driven programming models for packet processing,
and applications are free to choose whatever model
- (or combination of the two) that best suits their needs.
+ (or combination of the two) best suits their needs.
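
  A minimal sketch of what a worker lcore looks like under this model
  (dev_id and port_id are assumed to come from the usual eventdev setup):

    #include <rte_eventdev.h>

    /* Sketch of a worker lcore under the event driven model: instead
     * of polling Rx queues, the lcore asks the event device for work. */
    static void
    worker_loop(uint8_t dev_id, uint8_t port_id)
    {
            struct rte_event ev;

            for (;;) {
                    if (rte_event_dequeue_burst(dev_id, port_id,
                                                &ev, 1, 0) == 0)
                            continue;
                    /* process ev.mbuf according to ev.queue_id, ... */
                    rte_event_enqueue_burst(dev_id, port_id, &ev, 1);
            }
    }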
* **Added Software Eventdev PMD.**
@@ -256,9 +258,9 @@ New Features
Added the new octeontx ssovf eventdev driver for OCTEONTX devices. See the
"Event Device Drivers" document for more details on this new driver.
-* **Added information metric library.**
+* **Added information metrics library.**
- A library that allows information metrics to be added and updated
+ Added a library that allows information metrics to be added and updated
by producers, typically other libraries, for later retrieval by
consumers such as applications. It is intended to provide a
reporting mechanism that is independent of other libraries such
@@ -266,13 +268,14 @@ New Features
* **Added bit-rate calculation library.**
- A library that can be used to calculate device bit-rates. Calculated
+ Added a library that can be used to calculate device bit-rates. Calculated
bitrates are reported using the metrics library.
* **Added latency stats library.**
- A library that measures packet latency. The collected statistics are jitter
- and latency. For latency the minimum, average, and maximum is measured.
+ Added a library that measures packet latency. The collected statistics are
+ jitter and latency. For latency the minimum, average, and maximum are
+ measured.
* **Added NXP DPAA2 SEC crypto PMD.**
@@ -282,13 +285,13 @@ New Features
* **Updated the Cryptodev Scheduler PMD.**
- * Added packet-size based distribution mode, which distributes the enqueued
+ * Added a packet-size based distribution mode, which distributes the enqueued
crypto operations among two slaves, based on their data lengths.
* Added fail-over scheduling mode, which enqueues crypto operations to a
primary slave first. Then, any operation that cannot be enqueued is
enqueued to a secondary slave.
- * Added mode specific option support, so each scheduleing mode can
- now be configured individually by the new added API.
+ * Added mode specific option support, so each scheduling mode can
+ now be configured individually by the new API.
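
  For example, a sketch of selecting one of the modes listed above
  (assuming the rte_cryptodev_scheduler_mode_set() API and the
  CDEV_SCHED_MODE_PKT_SIZE_DISTR mode id):

    #include <rte_cryptodev_scheduler.h>

    /* Sketch: pick the packet-size based distribution mode on a
     * scheduler PMD instance; scheduler_id is assumed to be the device
     * id of a crypto_scheduler vdev with two slaves already attached. */
    static int
    use_pkt_size_distr(uint8_t scheduler_id)
    {
            return rte_cryptodev_scheduler_mode_set(scheduler_id,
                            CDEV_SCHED_MODE_PKT_SIZE_DISTR);
    }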
* **Updated the QAT PMD.**
@@ -331,31 +334,12 @@ Resolved Issues
=========================================================
-EAL
-~~~
-
-
-Drivers
-~~~~~~~
-
-
-Libraries
-~~~~~~~~~
-
-
-Examples
-~~~~~~~~
-
* **l2fwd-keepalive: Fixed unclean shutdowns.**
Added clean shutdown to l2fwd-keepalive so that it can free up
stale resources used for inter-process communication.
-Other
-~~~~~
-
-
Known Issues
------------
@@ -397,17 +381,17 @@ API Changes
The rte_ring library has been reworked and updated. The following changes
have been made to it:
- * removed the build-time setting ``CONFIG_RTE_RING_SPLIT_PROD_CONS``
- * removed the build-time setting ``CONFIG_RTE_LIBRTE_RING_DEBUG``
- * removed the build-time setting ``CONFIG_RTE_RING_PAUSE_REP_COUNT``
- * removed the function ``rte_ring_set_water_mark`` as part of a general
+ * Removed the build-time setting ``CONFIG_RTE_RING_SPLIT_PROD_CONS``.
+ * Removed the build-time setting ``CONFIG_RTE_LIBRTE_RING_DEBUG``.
+ * Removed the build-time setting ``CONFIG_RTE_RING_PAUSE_REP_COUNT``.
+ * Removed the function ``rte_ring_set_water_mark`` as part of a general
removal of watermarks support in the library.
- * added an extra parameter to the burst/bulk enqueue functions to
+ * Added an extra parameter to the burst/bulk enqueue functions to
return the number of free spaces in the ring after enqueue. This can
be used by an application to implement its own watermark functionality.
- * added an extra parameter to the burst/bulk dequeue functions to return
+ * Added an extra parameter to the burst/bulk dequeue functions to return
+ the number of elements remaining in the ring after dequeue.
- * changed the return value of the enqueue and dequeue bulk functions to
+ * Changed the return value of the enqueue and dequeue bulk functions to
match that of the burst equivalents. In all cases, ring functions which
operate on multiple packets now return the number of elements enqueued
or dequeued, as appropriate. The updated functions are:
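
  As an aside, a sketch of the application-level watermarking mentioned
  in the enqueue item above, using the new free-space out-parameter:

    #include <rte_ring.h>

    /* WATERMARK is an arbitrary application-chosen threshold. */
    #define WATERMARK 32

    static unsigned int
    enqueue_with_watermark(struct rte_ring *r, void **objs, unsigned int n)
    {
            unsigned int free_space;
            unsigned int sent;

            sent = rte_ring_enqueue_burst(r, objs, n, &free_space);
            if (free_space < WATERMARK) {
                    /* ring nearly full: e.g. tell the producer to back off */
            }
            return sent;
    }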
@@ -428,8 +412,8 @@ API Changes
* **Reworked rte_vhost library**
The rte_vhost library has been reworked to make it generic enough so that
- user could build other vhost-user drivers on top of it. To achieve that,
- following changes have been made:
+ users can build other vhost-user drivers on top of it. To achieve this,
+ the following changes have been made:
* The following vhost-pmd APIs are removed:
@@ -444,13 +428,13 @@ API Changes
* The vhost API ``rte_vhost_get_queue_num`` is deprecated, instead,
``rte_vhost_get_vring_num`` should be used.
- * Following macros are removed in ``rte_virtio_net.h``
+ * The following macros are removed from ``rte_virtio_net.h``
* ``VIRTIO_RXQ``
* ``VIRTIO_TXQ``
* ``VIRTIO_QNUM``
- * Following net specific header files are removed in ``rte_virtio_net.h``
+ * The following net specific header files are removed from ``rte_virtio_net.h``
* ``linux/virtio_net.h``
* ``sys/socket.h``
@@ -461,8 +445,8 @@ API Changes
``vhost_device_ops``
* The vhost API ``rte_vhost_driver_session_start`` is removed. Instead,
- ``rte_vhost_driver_start`` should be used, and no need to create a
- thread to call it.
+ ``rte_vhost_driver_start`` should be used, and there is no need to create
+ a thread to call it.
* The vhost public header file ``rte_virtio_net.h`` is renamed to
``rte_vhost.h``
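
  A sketch of the resulting start-up flow (the socket path is an
  example value):

    #include <rte_vhost.h>

    /* Sketch: rte_vhost_driver_start() replaces
     * rte_vhost_driver_session_start() and no longer needs its
     * own thread. */
    static int
    start_vhost_socket(void)
    {
            const char *path = "/tmp/vhost-user.sock";

            if (rte_vhost_driver_register(path, 0) != 0)
                    return -1;
            return rte_vhost_driver_start(path);
    }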
@@ -486,8 +470,8 @@ ABI Changes
The order and size of the fields in the ``mbuf`` structure changed,
as described in the `New Features`_ section.
-* The ``rte_cryptodev_info.sym`` structure has new field ``max_nb_sessions_per_qp``
- to support drivers which may support limited number of sessions per queue_pair.
+* The ``rte_cryptodev_info.sym`` structure has a new field ``max_nb_sessions_per_qp``
+ to support drivers which may support a limited number of sessions per queue_pair.
Removed Items
@@ -502,9 +486,9 @@ Removed Items
Also, make sure to start the actual text at the margin.
=========================================================
-* KNI vhost support removed.
+* KNI vhost support has been removed.
-* dpdk_qat sample application removed.
+* The dpdk_qat sample application has been removed.
Shared Library Versions
-----------------------
--
2.7.4
^ permalink raw reply [relevance 2%]
* Re: [dpdk-dev] [PATCH] doc: announce ABI change on ethdev
2017-05-01 6:58 13% [dpdk-dev] [PATCH] doc: announce ABI change on ethdev Shahaf Shuler
2017-05-09 10:24 4% ` Shahaf Shuler
2017-05-09 13:40 4% ` Adrien Mazarguil
@ 2017-05-09 13:49 4% ` Ferruh Yigit
2017-05-09 16:55 4% ` Shahaf Shuler
2017-05-09 18:09 4% ` Ananyev, Konstantin
2017-05-10 14:29 4% ` Bruce Richardson
4 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2017-05-09 13:49 UTC (permalink / raw)
To: Shahaf Shuler, dev
On 5/1/2017 7:58 AM, Shahaf Shuler wrote:
> This is an ABI change notice for DPDK 17.08 in librte_ether
> about changes in rte_eth_txmode structure.
>
> Currently Tx offloads are enabled by default, and can be disabled
> using ETH_TXQ_FLAGS_NO* flags. This behaviour is not consistent with
> the Rx side where the Rx offloads are disabled by default and enabled
> according to bit field in rte_eth_rxmode structure.
>
> The proposal is to disable the Tx offloads by default, and provide
> a way for the application to enable them in rte_eth_txmode structure.
> Besides making the Tx configuration API more consistent for
> applications, PMDs will be able to provide a better out of the
> box performance.
> Finally, as part of the work, the ETH_TXQ_FLAGS_NO* will
> be superseded as well.
>
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> ---
> looks like this patch has arrived to everyone
> besides dev@dpdk.org resending it again. sorry for
> the noise.
> ---
> doc/guides/rel_notes/deprecation.rst | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index a3e7c720c..0920b4766 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -81,3 +81,11 @@ Deprecation Notices
>
> - ``rte_crpytodev_scheduler_mode_get``, replaced by ``rte_cryptodev_scheduler_mode_get``
> - ``rte_crpytodev_scheduler_mode_set``, replaced by ``rte_cryptodev_scheduler_mode_set``
> +
> +* ethdev: in 17.08 ABI changes are planned:
> + Tx offloads will no longer be enabled by default.
> + Instead, the ``rte_eth_txmode`` structure will be extended with bit field to enable
> + each Tx offload.
> + Besides making the Rx/Tx configuration API more consistent for the
> + application, PMDs will be able to provide a better out of the box performance.
I understand the consistency part, but why does a PMD perform better when Tx
offloads are disabled?
> + As part of the work, ``ETH_TXQ_FLAGS_NO*`` will be superseded as well.
>
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH] doc: notice for changes in kni structures
2017-05-08 9:46 0% ` Hemant Agrawal
@ 2017-05-09 13:42 0% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2017-05-09 13:42 UTC (permalink / raw)
To: Hemant Agrawal; +Cc: dev
On 5/8/2017 10:46 AM, Hemant Agrawal wrote:
> On 5/4/2017 10:20 PM, Ferruh Yigit wrote:
>> On 5/3/2017 12:31 PM, Hemant Agrawal wrote:
>>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>>> ---
>>> doc/guides/rel_notes/deprecation.rst | 7 +++++++
>>> 1 file changed, 7 insertions(+)
>>>
>>> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
>>> index a3e7c72..0c1ef2c 100644
>>> --- a/doc/guides/rel_notes/deprecation.rst
>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>> @@ -81,3 +81,10 @@ Deprecation Notices
>>>
>>> - ``rte_crpytodev_scheduler_mode_get``, replaced by ``rte_cryptodev_scheduler_mode_get``
>>> - ``rte_crpytodev_scheduler_mode_set``, replaced by ``rte_cryptodev_scheduler_mode_set``
>>> +
>>> +* kni: additional functionality is planned to be added in kni to support mtu, macaddr,
>>> + gso_size, promiscusity configuration.
>>> + some of the kni structure will be changed to support additional functionality
>>> + e.g ``rte_kni_request`` to support promiscusity`` and mac_addr,
>>
>> rte_kni_request is between KNI library and KNI kernel module, shouldn't
>> be part of API.
>>
>>> + ``rte_kni_mbu`` to support the configured gso_size,
>>
>> Again, rte_kni_mbuf should only be a concern of the KNI kernel module.
>>
>>> + ``rte_kni_device_info`` and ``rte_kni_conf`` to also support mtu and macaddr.
>>
>> rte_kni_device_info also between KNI library and KNI kernel module.
>>
>> I think deprecation notice not required for above ones.
>>
>> But your KNI patchset updates rte_kni_conf and rte_kni_ops.
>> These are part of the KNI API and changing them causes ABI breakage,
>> but if new fields are appended to these structs, this will not cause an ABI
>> breakage, and I think that is better to do instead of a deprecation
>> notice, what do you think?
>
> I agree.
>>
>>
>> And apart from above ABI issues,
>> adding new fields to "rte_kni_ops" means a DPDK application that uses KNI
>> should implement them, right?
>
> Well, it depends on whether the application is interested in this information
> or not.
>
>> So this suggests that everyone who needs to set promiscuity on a KNI device
>> should implement this.
>
> yes!
>
>> Can't we find another way that all can benefit from a common implementation?
>
> How would you do it differently? Any ideas?
Can having default implementations in librte_kni work? Would
applications be doing something different, let's say to set MTU?
>
>
>>
>> Thanks,
>> ferruh
>>
>
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] doc: announce ABI change on ethdev
2017-05-01 6:58 13% [dpdk-dev] [PATCH] doc: announce ABI change on ethdev Shahaf Shuler
2017-05-09 10:24 4% ` Shahaf Shuler
@ 2017-05-09 13:40 4% ` Adrien Mazarguil
2017-05-09 17:04 4% ` Jerin Jacob
2017-05-09 13:49 4% ` Ferruh Yigit
` (2 subsequent siblings)
4 siblings, 1 reply; 200+ results
From: Adrien Mazarguil @ 2017-05-09 13:40 UTC (permalink / raw)
To: Shahaf Shuler, Konstantin Ananyev, Olivier Matz, Tomasz Kulasek; +Cc: dev
On Mon, May 01, 2017 at 09:58:12AM +0300, Shahaf Shuler wrote:
> This is an ABI change notice for DPDK 17.08 in librte_ether
> about changes in rte_eth_txmode structure.
>
> Currently Tx offloads are enabled by default, and can be disabled
> using ETH_TXQ_FLAGS_NO* flags. This behaviour is not consistent with
> the Rx side where the Rx offloads are disabled by default and enabled
> according to bit field in rte_eth_rxmode structure.
>
> The proposal is to disable the Tx offloads by default, and provide
> a way for the application to enable them in rte_eth_txmode structure.
> Besides making the Tx configuration API more consistent for
> applications, PMDs will be able to provide a better out of the
> box performance.
> Finally, as part of the work, the ETH_TXQ_FLAGS_NO* will
> be superseded as well.
>
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Basically, TX mbuf flags like TSO and checksum offloads won't have to be
honored by PMDs unless applications request them first while configuring the
device, just like RX offloads.
Considering more and more TX offloads will be added over time, I do not
think expecting them all to be enabled by default is sane. There will always
be an associated software cost in PMDs, and this solution allows
applications to selectively enable them as needed for maximum performance.
Konstantin/Olivier/Tomasz, I do not want to resume the thread about
tx_prepare(), however this could provide an alternative means to benefit
from improved performance when applications do not need TSO (or any other
offload for that matter), while adding consistency to device configuration.
What's your opinion?
In any case I'm fine with this change:
Acked-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index a3e7c720c..0920b4766 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -81,3 +81,11 @@ Deprecation Notices
>
> - ``rte_crpytodev_scheduler_mode_get``, replaced by ``rte_cryptodev_scheduler_mode_get``
> - ``rte_crpytodev_scheduler_mode_set``, replaced by ``rte_cryptodev_scheduler_mode_set``
> +
> +* ethdev: in 17.08 ABI changes are planned:
> + Tx offloads will no longer be enabled by default.
> + Instead, the ``rte_eth_txmode`` structure will be extended with bit field to enable
> + each Tx offload.
> + Besides making the Rx/Tx configuration API more consistent for the
> + application, PMDs will be able to provide a better out of the box performance.
> + As part of the work, ``ETH_TXQ_FLAGS_NO*`` will be superseded as well.
> --
> 2.12.0
>
--
Adrien Mazarguil
6WIND
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH] doc: announce ABI change on ethdev
2017-05-01 6:58 13% [dpdk-dev] [PATCH] doc: announce ABI change on ethdev Shahaf Shuler
@ 2017-05-09 10:24 4% ` Shahaf Shuler
2017-05-09 13:40 4% ` Adrien Mazarguil
` (3 subsequent siblings)
4 siblings, 0 replies; 200+ results
From: Shahaf Shuler @ 2017-05-09 10:24 UTC (permalink / raw)
To: dev; +Cc: Thomas Monjalon
Monday, May 1, 2017 9:58 AM, Shahaf Shuler:
>
> This is an ABI change notice for DPDK 17.08 in librte_ether about changes in
> rte_eth_txmode structure.
>
> Currently Tx offloads are enabled by default, and can be disabled using
> ETH_TXQ_FLAGS_NO* flags. This behaviour is not consistent with the Rx side
> where the Rx offloads are disabled by default and enabled according to bit
> field in rte_eth_rxmode structure.
>
> The proposal is to disable the Tx offloads by default, and provide a way for
> the application to enable them in rte_eth_txmode structure.
> Besides making the Tx configuration API more consistent for applications,
> PMDs will be able to provide a better out of the box performance.
> Finally, as part of the work, the ETH_TXQ_FLAGS_NO* will be superseded as
> well.
>
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Hi,
Any comments on this announcement?
Please take a minute to read and comment.
This work can improve the data path code of all PMDs, which will then be
driven by application needs.
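As a sketch of what the opt-in configuration could end up looking like
(nothing here is final; the Tx bit-field names below are purely
illustrative and not defined by this notice):

    #include <string.h>
    #include <rte_ethdev.h>

    static int
    configure_port(uint8_t port_id)
    {
            struct rte_eth_conf conf;

            memset(&conf, 0, sizeof(conf));
            conf.rxmode.hw_ip_checksum = 1; /* Rx offloads are already opt-in */
            /* hypothetical Tx bit fields, names not defined by the notice:
             *   conf.txmode.hw_ip_checksum = 1;
             *   conf.txmode.tso = 1;
             */
            return rte_eth_dev_configure(port_id, 1, 1, &conf);
    }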
> ---
> looks like this patch has arrived to everyone
> besides dev@dpdk.org resending it again. sorry for
> the noise.
> ---
> doc/guides/rel_notes/deprecation.rst | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> index a3e7c720c..0920b4766 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -81,3 +81,11 @@ Deprecation Notices
>
> - ``rte_crpytodev_scheduler_mode_get``, replaced by
> ``rte_cryptodev_scheduler_mode_get``
> - ``rte_crpytodev_scheduler_mode_set``, replaced by
> ``rte_cryptodev_scheduler_mode_set``
> +
> +* ethdev: in 17.08 ABI changes are planned:
> + Tx offloads will no longer be enabled by default.
> + Instead, the ``rte_eth_txmode`` structure will be extended with a bit
> + field to enable each Tx offload.
> + Besides making the Rx/Tx configuration API more consistent for the
> + application, PMDs will be able to provide a better out of the box
> performance.
> + As part of the work, ``ETH_TXQ_FLAGS_NO*`` will be superseded as well.
> --
> 2.12.0
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH] doc: notice for changes in kni structures
2017-05-04 16:50 4% ` Ferruh Yigit
@ 2017-05-08 9:46 0% ` Hemant Agrawal
2017-05-09 13:42 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Hemant Agrawal @ 2017-05-08 9:46 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev
On 5/4/2017 10:20 PM, Ferruh Yigit wrote:
> On 5/3/2017 12:31 PM, Hemant Agrawal wrote:
>> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>> ---
>> doc/guides/rel_notes/deprecation.rst | 7 +++++++
>> 1 file changed, 7 insertions(+)
>>
>> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
>> index a3e7c72..0c1ef2c 100644
>> --- a/doc/guides/rel_notes/deprecation.rst
>> +++ b/doc/guides/rel_notes/deprecation.rst
>> @@ -81,3 +81,10 @@ Deprecation Notices
>>
>> - ``rte_crpytodev_scheduler_mode_get``, replaced by ``rte_cryptodev_scheduler_mode_get``
>> - ``rte_crpytodev_scheduler_mode_set``, replaced by ``rte_cryptodev_scheduler_mode_set``
>> +
>> +* kni: additional functionality is planned to be added in kni to support mtu, macaddr,
>> + gso_size, promiscusity configuration.
>> + some of the kni structure will be changed to support additional functionality
>> + e.g ``rte_kni_request`` to support promiscusity`` and mac_addr,
>
> rte_kni_request is between KNI library and KNI kernel module, shouldn't
> be part of API.
>
>> + ``rte_kni_mbu`` to support the configured gso_size,
>
> Again, rte_kni_mbuf should only be a concern of the KNI kernel module.
>
>> + ``rte_kni_device_info`` and ``rte_kni_conf`` to also support mtu and macaddr.
>
> rte_kni_device_info also between KNI library and KNI kernel module.
>
> I think deprecation notice not required for above ones.
>
> But your KNI patchset updates rte_kni_conf and rte_kni_ops.
> These are part of the KNI API and changing them causes ABI breakage,
> but if new fields are appended to these structs, this will not cause an ABI
> breakage, and I think that is better to do instead of a deprecation
> notice, what do you think?
I agree.
>
>
> And apart from above ABI issues,
> adding new fields to "rte_kni_ops" means a DPDK application that uses KNI
> should implement them, right?
Well, it depends on whether the application is interested in this information
or not.
> So this suggests that everyone who needs to set promiscuity on a KNI device
> should implement this.
yes!
> Can't we find another way that all can benefit from a common implementation?
How would you do it differently? Any ideas?
>
> Thanks,
> ferruh
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] custom align for mempool elements
2017-05-05 11:26 4% ` Olivier Matz
@ 2017-05-05 17:23 0% ` Gregory Etelson
2017-05-10 15:01 4% ` Olivier Matz
0 siblings, 1 reply; 200+ results
From: Gregory Etelson @ 2017-05-05 17:23 UTC (permalink / raw)
To: Olivier Matz; +Cc: dev
Hello Olivier,
Our application writes data from incoming MBUFs to a storage device.
For performance considerations we use O_DIRECT storage access
and work in 'zero copy' data mode.
To achieve the 'zero copy' we MUST arrange the data in all MBUFs to be
512-byte aligned.
With a pre-calculated custom pool element alignment and the right
RTE_PKTMBUF_HEADROOM value,
we can set the MBUF data alignment to any value; in our case, 512 bytes.
Current implementation sets custom mempool alignment like this:
struct rte_mempool *mp = rte_mempool_create_empty("MBUF pool",
        mbufs_count, elt_size, cache_size,
        sizeof(struct rte_pktmbuf_pool_private), rte_socket_id(), 0);
rte_pktmbuf_pool_init(mp, &mbp_priv);
mp->elt_align = align;
rte_mempool_populate_default(mp);
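As a worked illustration of that arithmetic (the sizes below are
assumptions for illustration, not values taken from the patch): if every
pool element starts on a 512-byte boundary, the packet data is 512-byte
aligned whenever everything preceding it inside the element adds up to a
multiple of 512:

    mempool header + sizeof(struct rte_mbuf) + priv_size
                   + RTE_PKTMBUF_HEADROOM  ==  512

which is what the pre-calculated RTE_PKTMBUF_HEADROOM above refers to.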
Regards,
Gregory
On Fri, May 5, 2017 at 2:26 PM, Olivier Matz <olivier.matz@6wind.com> wrote:
> Hi Gregory,
>
> On Wed, 26 Apr 2017 07:00:49 +0300, Gregory Etelson <gregory@weka.io>
> wrote:
> > Signed-off-by: Gregory Etelson <gregory@weka.io>
> > ---
> > lib/librte_mempool/rte_mempool.c | 27 ++++++++++++++++++++-------
> > lib/librte_mempool/rte_mempool.h | 1 +
> > 2 files changed, 21 insertions(+), 7 deletions(-)
> >
> > diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_
> mempool.c
> > index f65310f..c780df3 100644
> > --- a/lib/librte_mempool/rte_mempool.c
> > +++ b/lib/librte_mempool/rte_mempool.c
> > @@ -382,7 +382,7 @@ rte_mempool_populate_phys(struct rte_mempool *mp,
> char *vaddr,
> > if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
> > off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
> > else
> > - off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) -
> vaddr;
> > + off = RTE_PTR_ALIGN_CEIL(vaddr, mp->elt_align) - vaddr;
> >
> > while (off + total_elt_sz <= len && mp->populated_size < mp->size)
> {
> > off += mp->header_size;
> > @@ -392,6 +392,7 @@ rte_mempool_populate_phys(struct rte_mempool *mp,
> char *vaddr,
> > else
> > mempool_add_elem(mp, (char *)vaddr + off, paddr +
> off);
> > off += mp->elt_size + mp->trailer_size;
> > + off = RTE_ALIGN_CEIL(off, mp->elt_align);
> > i++;
> > }
> >
> > @@ -508,6 +509,20 @@ rte_mempool_populate_virt(struct rte_mempool *mp,
> char *addr,
> > return ret;
> > }
> >
> > +static uint32_t
> > +mempool_default_elt_alignment(void)
> > +{
> > + uint32_t align;
> > + if (rte_xen_dom0_supported()) {
> > + align = RTE_PGSIZE_2M;
> > + } else if (rte_eal_has_hugepages()) {
> > + align = RTE_CACHE_LINE_SIZE;
> > + } else {
> > + align = getpagesize();
> > + }
> > + return align;
> > +}
> > +
> > /* Default function to populate the mempool: allocate memory in
> memzones,
> > * and populate them. Return the number of objects added, or a negative
> > * value on error.
> > @@ -518,7 +533,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> > int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
> > char mz_name[RTE_MEMZONE_NAMESIZE];
> > const struct rte_memzone *mz;
> > - size_t size, total_elt_sz, align, pg_sz, pg_shift;
> > + size_t size, total_elt_sz, pg_sz, pg_shift;
> > phys_addr_t paddr;
> > unsigned mz_id, n;
> > int ret;
> > @@ -530,15 +545,12 @@ rte_mempool_populate_default(struct rte_mempool
> *mp)
> > if (rte_xen_dom0_supported()) {
> > pg_sz = RTE_PGSIZE_2M;
> > pg_shift = rte_bsf32(pg_sz);
> > - align = pg_sz;
> > } else if (rte_eal_has_hugepages()) {
> > pg_shift = 0; /* not needed, zone is physically contiguous
> */
> > pg_sz = 0;
> > - align = RTE_CACHE_LINE_SIZE;
> > } else {
> > pg_sz = getpagesize();
> > pg_shift = rte_bsf32(pg_sz);
> > - align = pg_sz;
> > }
> >
> > total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
> > @@ -553,11 +565,11 @@ rte_mempool_populate_default(struct rte_mempool
> *mp)
> > }
> >
> > mz = rte_memzone_reserve_aligned(mz_name, size,
> > - mp->socket_id, mz_flags, align);
> > + mp->socket_id, mz_flags, mp->elt_align);
> > /* not enough memory, retry with the biggest zone we have
> */
> > if (mz == NULL)
> > mz = rte_memzone_reserve_aligned(mz_name, 0,
> > - mp->socket_id, mz_flags, align);
> > + mp->socket_id, mz_flags, mp->elt_align);
> > if (mz == NULL) {
> > ret = -rte_errno;
> > goto fail;
> > @@ -827,6 +839,7 @@ rte_mempool_create_empty(const char *name, unsigned
> n, unsigned elt_size,
> > /* Size of default caches, zero means disabled. */
> > mp->cache_size = cache_size;
> > mp->private_data_size = private_data_size;
> > + mp->elt_align = mempool_default_elt_alignment();
> > STAILQ_INIT(&mp->elt_list);
> > STAILQ_INIT(&mp->mem_list);
> >
> > diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_
> mempool.h
> > index 48bc8ea..6631973 100644
> > --- a/lib/librte_mempool/rte_mempool.h
> > +++ b/lib/librte_mempool/rte_mempool.h
> > @@ -245,6 +245,7 @@ struct rte_mempool {
> > * this mempool.
> > */
> > int32_t ops_index;
> > + uint32_t elt_align;
> >
> > struct rte_mempool_cache *local_cache; /**< Per-lcore local cache
> */
> >
>
> It looks like the patch will break the ABI (the mempool structure), so it
> has to follow the ABI deprecation process (= a notice in 17.05 and
> the patch for 17.08).
>
> Could you give us some details about why you need such a feature and how you
> use it (since no API is added)?
>
> Thanks,
> Olivier
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v7 2/3] doc: change type of return value of adding MAC addr
2017-05-05 0:40 5% ` [dpdk-dev] [PATCH v7 2/3] doc: change type of return value of adding MAC addr Wei Dai
@ 2017-05-05 14:23 3% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2017-05-05 14:23 UTC (permalink / raw)
To: Wei Dai
Cc: dev, wenzhuo.lu, harish.patil, rasesh.mody, stephen.hurd,
ajit.khaparde, helin.zhang, konstantin.ananyev, jingjing.wu,
jing.d.chen, adrien.mazarguil, nelio.laranjeiro,
bruce.richardson, yuanhan.liu, maxime.coquelin, shepard.siegel,
ed.czeck, john.miller
05/05/2017 02:40, Wei Dai:
> Add the following lines in the API change section of the release note.
>
> Without this change, if a MAC address fails to be added, it is still
> stored and may be regarded as a valid one. This may lead to errors
> in the application. The type of the return value of eth_mac_addr_add_t in
> rte_ethdev.h is changed. Any specific NIC driver also follows this change.
>
> Signed-off-by: Wei Dai <wei.dai@intel.com>
> Acked-by: John McNamara <john.mcnamara@intel.com>
This patch is rejected because it is neither an API nor an ABI change.
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] custom align for mempool elements
@ 2017-05-05 11:26 4% ` Olivier Matz
2017-05-05 17:23 0% ` Gregory Etelson
0 siblings, 1 reply; 200+ results
From: Olivier Matz @ 2017-05-05 11:26 UTC (permalink / raw)
To: Gregory Etelson; +Cc: dev
Hi Gregory,
On Wed, 26 Apr 2017 07:00:49 +0300, Gregory Etelson <gregory@weka.io> wrote:
> Signed-off-by: Gregory Etelson <gregory@weka.io>
> ---
> lib/librte_mempool/rte_mempool.c | 27 ++++++++++++++++++++-------
> lib/librte_mempool/rte_mempool.h | 1 +
> 2 files changed, 21 insertions(+), 7 deletions(-)
>
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index f65310f..c780df3 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -382,7 +382,7 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
> if (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)
> off = RTE_PTR_ALIGN_CEIL(vaddr, 8) - vaddr;
> else
> - off = RTE_PTR_ALIGN_CEIL(vaddr, RTE_CACHE_LINE_SIZE) - vaddr;
> + off = RTE_PTR_ALIGN_CEIL(vaddr, mp->elt_align) - vaddr;
>
> while (off + total_elt_sz <= len && mp->populated_size < mp->size) {
> off += mp->header_size;
> @@ -392,6 +392,7 @@ rte_mempool_populate_phys(struct rte_mempool *mp, char *vaddr,
> else
> mempool_add_elem(mp, (char *)vaddr + off, paddr + off);
> off += mp->elt_size + mp->trailer_size;
> + off = RTE_ALIGN_CEIL(off, mp->elt_align);
> i++;
> }
>
> @@ -508,6 +509,20 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
> return ret;
> }
>
> +static uint32_t
> +mempool_default_elt_alignment(void)
> +{
> + uint32_t align;
> + if (rte_xen_dom0_supported()) {
> + align = RTE_PGSIZE_2M;
> + } else if (rte_eal_has_hugepages()) {
> + align = RTE_CACHE_LINE_SIZE;
> + } else {
> + align = getpagesize();
> + }
> + return align;
> +}
> +
> /* Default function to populate the mempool: allocate memory in memzones,
> * and populate them. Return the number of objects added, or a negative
> * value on error.
> @@ -518,7 +533,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
> char mz_name[RTE_MEMZONE_NAMESIZE];
> const struct rte_memzone *mz;
> - size_t size, total_elt_sz, align, pg_sz, pg_shift;
> + size_t size, total_elt_sz, pg_sz, pg_shift;
> phys_addr_t paddr;
> unsigned mz_id, n;
> int ret;
> @@ -530,15 +545,12 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> if (rte_xen_dom0_supported()) {
> pg_sz = RTE_PGSIZE_2M;
> pg_shift = rte_bsf32(pg_sz);
> - align = pg_sz;
> } else if (rte_eal_has_hugepages()) {
> pg_shift = 0; /* not needed, zone is physically contiguous */
> pg_sz = 0;
> - align = RTE_CACHE_LINE_SIZE;
> } else {
> pg_sz = getpagesize();
> pg_shift = rte_bsf32(pg_sz);
> - align = pg_sz;
> }
>
> total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
> @@ -553,11 +565,11 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> }
>
> mz = rte_memzone_reserve_aligned(mz_name, size,
> - mp->socket_id, mz_flags, align);
> + mp->socket_id, mz_flags, mp->elt_align);
> /* not enough memory, retry with the biggest zone we have */
> if (mz == NULL)
> mz = rte_memzone_reserve_aligned(mz_name, 0,
> - mp->socket_id, mz_flags, align);
> + mp->socket_id, mz_flags, mp->elt_align);
> if (mz == NULL) {
> ret = -rte_errno;
> goto fail;
> @@ -827,6 +839,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
> /* Size of default caches, zero means disabled. */
> mp->cache_size = cache_size;
> mp->private_data_size = private_data_size;
> + mp->elt_align = mempool_default_elt_alignment();
> STAILQ_INIT(&mp->elt_list);
> STAILQ_INIT(&mp->mem_list);
>
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 48bc8ea..6631973 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -245,6 +245,7 @@ struct rte_mempool {
> * this mempool.
> */
> int32_t ops_index;
> + uint32_t elt_align;
>
> struct rte_mempool_cache *local_cache; /**< Per-lcore local cache */
>
It looks like the patch will break the ABI (the mempool structure), so it
has to follow the ABI deprecation process (= a notice in 17.05 and
the patch for 17.08).
Could you give us some details about why you need such a feature and how you
use it (since no API is added)?
Thanks,
Olivier
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v7 2/3] doc: change type of return value of adding MAC addr
2017-05-05 0:39 3% ` [dpdk-dev] [PATCH v7 0/3] MAC address fail to be added shouldn't be stored Wei Dai
@ 2017-05-05 0:40 5% ` Wei Dai
2017-05-05 14:23 3% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Wei Dai @ 2017-05-05 0:40 UTC (permalink / raw)
To: wenzhuo.lu, thomas, harish.patil, rasesh.mody, stephen.hurd,
ajit.khaparde, helin.zhang, konstantin.ananyev, jingjing.wu,
jing.d.chen, adrien.mazarguil, nelio.laranjeiro,
bruce.richardson, yuanhan.liu, maxime.coquelin, shepard.siegel,
ed.czeck, john.miller
Cc: dev, Wei Dai
Add the following lines in the API change section of the release note.
Without this change, if a MAC address fails to be added, it is still
stored and may be regarded as a valid one. This may lead to errors
in the application. The type of the return value of eth_mac_addr_add_t in
rte_ethdev.h is changed. Any specific NIC driver also follows this change.
Signed-off-by: Wei Dai <wei.dai@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
doc/guides/rel_notes/release_17_05.rst | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index 4b47ae1..0bd07f1 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -489,6 +489,13 @@ ABI Changes
* The ``rte_cryptodev_info.sym`` structure has new field ``max_nb_sessions_per_qp``
to support drivers which may support limited number of sessions per queue_pair.
+* **Return if the MAC address is added successfully or not.**
+
+ Without this change, if a MAC address fails to be added, it is still stored
+ and may be regarded as a valid one. This may lead to errors in the application.
+ The type of the return value of eth_mac_addr_add_t in rte_ethdev.h is changed.
+ Any specific NIC driver also follows this change.
+
Removed Items
-------------
--
2.7.4
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v7 0/3] MAC address fail to be added shouldn't be stored
2017-05-02 12:44 3% ` [dpdk-dev] [PATCH v6 0/3] MAC address fail to be added shouldn't be stored Wei Dai
2017-05-02 12:44 5% ` [dpdk-dev] [PATCH v6 2/3] doc: change type of return value of adding MAC addr Wei Dai
@ 2017-05-05 0:39 3% ` Wei Dai
2017-05-05 0:40 5% ` [dpdk-dev] [PATCH v7 2/3] doc: change type of return value of adding MAC addr Wei Dai
1 sibling, 1 reply; 200+ results
From: Wei Dai @ 2017-05-05 0:39 UTC (permalink / raw)
To: wenzhuo.lu, thomas, harish.patil, rasesh.mody, stephen.hurd,
ajit.khaparde, helin.zhang, konstantin.ananyev, jingjing.wu,
jing.d.chen, adrien.mazarguil, nelio.laranjeiro,
bruce.richardson, yuanhan.liu, maxime.coquelin, shepard.siegel,
ed.czeck, john.miller
Cc: dev, Wei Dai
Current ethdev always stores the MAC address even if it fails to be added.
Other functions may then regard the failed MAC address as valid, leading to
errors. So there is a need to check whether the address is added
successfully or not and to discard it if it fails.
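A sketch of the application side after the fix (rte_eth_dev_mac_addr_add()
is the existing ethdev call; checking its return value is the point of
the change):

    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Sketch: a rejected address is no longer silently kept by the
     * ethdev layer, so checking the return value is now sufficient. */
    static int
    add_mac_checked(uint8_t port_id, struct ether_addr *mac)
    {
            int ret = rte_eth_dev_mac_addr_add(port_id, mac, 0);

            if (ret != 0)
                    printf("port %u rejected MAC address: %d\n",
                           port_id, ret);
            return ret;
    }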
The 3rd patch adds a command "add_more_mac_addr port_id base_mac_addr count"
to add more than one MAC address at a time.
This command can simplify the test for the first patch.
Normally a MAC address may fail to be added only after many MAC
addresses have already been added.
Without this command, a tester can only trigger a failed MAC address
by running the testpmd command 'mac_addr add' many times.
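A usage sketch, following the syntax given above (the values are
illustrative):

    testpmd> add_more_mac_addr 0 00:AA:BB:CC:DD:00 8

which attempts to add 8 consecutive MAC addresses, starting from the
given base address, to port 0.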
This patch set has got acknowledgements from
Nelio Laranjeiro <nelio.laranjeiro@6wind.com> for mlx changes
Yuanhan Liu <yuanhan.liu@linux.intel.com> for virtio changes
Wenzhuo Lu <wenzhuo.lu@intel.com> for igb, e1000 and ixgbe changes
---
Changes
v7:
1. remove "Cc: stable@dpdk.org" in patch 1/3
2. add "Acked by: Wenzhuo.Lu <wenzhuo.lu@intel.com>" in patch 1/3
v6:
1. rebase master branch to v17.05-rc3
2. not touch e1000 base driver code
3. fix some minor defects
v5:
1. rebase master branch
2. add support to drivers/net/ark
3. fix some minor defects
v4:
1. rebase master branch
2. follow code style
v3:
1. Change return value for some specific NIC according to feedbacks
from the community;
2. Add ABI change in release note;
3. Add more detailed commit message.
v2:
fix warnings and erros from check-git-log.sh and checkpatch.pl
Wei Dai (3):
ethdev: fix adding invalid MAC addr
doc: change type of return value of adding MAC addr
app/testpmd: add a command to add many MAC addrs
app/test-pmd/cmdline.c | 55 ++++++++++++++++++++++++++++++++++
doc/guides/rel_notes/release_17_05.rst | 7 +++++
drivers/net/ark/ark_ethdev.c | 15 ++++++----
drivers/net/bnx2x/bnx2x_ethdev.c | 7 +++--
drivers/net/bnxt/bnxt_ethdev.c | 16 +++++-----
drivers/net/e1000/em_ethdev.c | 8 ++---
drivers/net/e1000/igb_ethdev.c | 9 +++---
drivers/net/enic/enic.h | 2 +-
drivers/net/enic/enic_ethdev.c | 4 +--
drivers/net/enic/enic_main.c | 9 +++---
drivers/net/fm10k/fm10k_ethdev.c | 3 +-
drivers/net/i40e/i40e_ethdev.c | 17 ++++++-----
drivers/net/i40e/i40e_ethdev_vf.c | 14 ++++-----
drivers/net/ixgbe/ixgbe_ethdev.c | 33 ++++++++++++--------
drivers/net/mlx4/mlx4.c | 16 ++++++----
drivers/net/mlx5/mlx5.h | 4 +--
drivers/net/mlx5/mlx5_mac.c | 16 ++++++----
drivers/net/qede/qede_ethdev.c | 6 ++--
drivers/net/ring/rte_eth_ring.c | 3 +-
drivers/net/virtio/virtio_ethdev.c | 13 ++++----
lib/librte_ether/rte_ethdev.c | 15 ++++++----
lib/librte_ether/rte_ethdev.h | 2 +-
22 files changed, 184 insertions(+), 90 deletions(-)
--
2.7.4
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] doc: notice for changes in kni structures
@ 2017-05-04 16:50 4% ` Ferruh Yigit
2017-05-08 9:46 0% ` Hemant Agrawal
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2017-05-04 16:50 UTC (permalink / raw)
To: Hemant Agrawal; +Cc: dev
On 5/3/2017 12:31 PM, Hemant Agrawal wrote:
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index a3e7c72..0c1ef2c 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -81,3 +81,10 @@ Deprecation Notices
>
> - ``rte_crpytodev_scheduler_mode_get``, replaced by ``rte_cryptodev_scheduler_mode_get``
> - ``rte_crpytodev_scheduler_mode_set``, replaced by ``rte_cryptodev_scheduler_mode_set``
> +
> +* kni: additional functionality is planned to be added in kni to support mtu, macaddr,
> + gso_size, promiscusity configuration.
> + some of the kni structure will be changed to support additional functionality
> + e.g ``rte_kni_request`` to support promiscusity`` and mac_addr,
rte_kni_request is between KNI library and KNI kernel module, shouldn't
be part of API.
> + ``rte_kni_mbu`` to support the configured gso_size,
Again, rte_kni_mbuf should only be a concern of the KNI kernel module.
> + ``rte_kni_device_info`` and ``rte_kni_conf`` to also support mtu and macaddr.
rte_kni_device_info also between KNI library and KNI kernel module.
I think deprecation notice not required for above ones.
But your KNI patchset updates rte_kni_conf and rte_kni_ops.
These are part of the KNI API and changing them causes ABI breakage,
but if new fields are appended to these structs, this will not cause an ABI
breakage, and I think that is better to do instead of a deprecation
notice, what do you think?
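For illustration, the append-only extension could look like this (the
first three members are the existing 17.05 layout; the appended
callbacks are hypothetical names, not a committed API):

    /* Sketch: new callbacks are appended at the end of struct
     * rte_kni_ops so existing applications keep working. */
    struct rte_kni_ops {
            uint8_t port_id;  /* Port ID */
            int (*change_mtu)(uint8_t port_id, unsigned int new_mtu);
            int (*config_network_if)(uint8_t port_id, uint8_t if_up);
            /* appended, hypothetical: */
            int (*config_mac_address)(uint8_t port_id, uint8_t mac[6]);
            int (*config_promiscusity)(uint8_t port_id, uint8_t to_on);
    };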
And apart from above ABI issues,
adding new fields to "rte_kni_ops" means a DPDK application that uses KNI
should implement them, right?
So this suggests that everyone who needs to set promiscuity on a KNI device
should implement this.
Can't we find another way that all can benefit from a common implementation?
Thanks,
ferruh
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v6 2/3] doc: change type of return value of adding MAC addr
2017-05-02 12:44 3% ` [dpdk-dev] [PATCH v6 0/3] MAC address fail to be added shouldn't be stored Wei Dai
@ 2017-05-02 12:44 5% ` Wei Dai
2017-05-05 0:39 3% ` [dpdk-dev] [PATCH v7 0/3] MAC address fail to be added shouldn't be stored Wei Dai
1 sibling, 0 replies; 200+ results
From: Wei Dai @ 2017-05-02 12:44 UTC (permalink / raw)
To: wenzhuo.lu, thomas, harish.patil, rasesh.mody, stephen.hurd,
ajit.khaparde, helin.zhang, konstantin.ananyev, jingjing.wu,
jing.d.chen, adrien.mazarguil, nelio.laranjeiro,
bruce.richardson, yuanhan.liu, maxime.coquelin, shepard.siegel,
ed.czeck, john.miller
Cc: dev, Wei Dai
Add the following lines in the API change section of the release note.
Without this change, if a MAC address fails to be added, it is still
stored and may be regarded as a valid one. This may lead to errors
in the application. The type of the return value of eth_mac_addr_add_t in
rte_ethdev.h is changed. Any specific NIC driver also follows this change.
Signed-off-by: Wei Dai <wei.dai@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
doc/guides/rel_notes/release_17_05.rst | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index 4b47ae1..0bd07f1 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -489,6 +489,13 @@ ABI Changes
* The ``rte_cryptodev_info.sym`` structure has new field ``max_nb_sessions_per_qp``
to support drivers which may support limited number of sessions per queue_pair.
+* **Return if the MAC address is added successfully or not.**
+
+ Without this change, if a MAC address fails to be added, it is still stored
+ and may be regarded as a valid one. This may lead to errors in the application.
+ The type of the return value of eth_mac_addr_add_t in rte_ethdev.h is changed.
+ Any specific NIC driver also follows this change.
+
Removed Items
-------------
--
2.7.4
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v6 0/3] MAC address fail to be added shouldn't be stored
2017-04-30 4:11 3% [dpdk-dev] [PATCH v5 0/3] MAC address fail to be added shouldn't be stored Wei Dai
2017-04-30 4:11 5% ` [dpdk-dev] [PATCH v5 2/3] doc: change type of return value of adding MAC addr Wei Dai
@ 2017-05-02 12:44 3% ` Wei Dai
2017-05-02 12:44 5% ` [dpdk-dev] [PATCH v6 2/3] doc: change type of return value of adding MAC addr Wei Dai
2017-05-05 0:39 3% ` [dpdk-dev] [PATCH v7 0/3] MAC address fail to be added shouldn't be stored Wei Dai
1 sibling, 2 replies; 200+ results
From: Wei Dai @ 2017-05-02 12:44 UTC (permalink / raw)
To: wenzhuo.lu, thomas, harish.patil, rasesh.mody, stephen.hurd,
ajit.khaparde, helin.zhang, konstantin.ananyev, jingjing.wu,
jing.d.chen, adrien.mazarguil, nelio.laranjeiro,
bruce.richardson, yuanhan.liu, maxime.coquelin, shepard.siegel,
ed.czeck, john.miller
Cc: dev, Wei Dai
Current ethdev always stores a MAC address even if it fails to be added.
Other functions may then regard the failed MAC address as valid, leading to
errors. So there is a need to check whether the address is added
successfully or not and to discard it if it fails.
The 3rd patch adds a command "add_more_mac_addr port_id base_mac_addr count"
to add more than one MAC address at a time.
This command can simplify the test for the first patch.
Normally a MAC address may fail to be added only after many MAC
addresses have already been added.
Without this command, a tester can only trigger a failed MAC address
by running the testpmd command 'mac_addr add' many times.
The v4 patch set has got acknowledgements from
Nelio Laranjeiro <nelio.laranjeiro@6wind.com> for mlx changes
Yuanhan Liu <yuanhan.liu@linux.intel.com> for virtio changes
---
Changes
v6:
1. rebase master branch to v17.05-rc3
2. not touch e1000 base driver code
3. fix some minor defects
v5:
1. rebase master branch
2. add support to drivers/net/ark
3. fix some minor defects
v4:
1. rebase master branch
2. follow code style
v3:
1. Change return value for some specific NIC according to feedbacks
from the community;
2. Add ABI change in release note;
3. Add more detailed commit message.
v2:
fix warnings and erros from check-git-log.sh and checkpatch.pl
Wei Dai (3):
ethdev: fix adding invalid MAC addr
doc: change type of return value of adding MAC addr
app/testpmd: add a command to add many MAC addrs
app/test-pmd/cmdline.c | 55 ++++++++++++++++++++++++++++++++++
doc/guides/rel_notes/release_17_05.rst | 7 +++++
drivers/net/ark/ark_ethdev.c | 15 ++++++----
drivers/net/bnx2x/bnx2x_ethdev.c | 7 +++--
drivers/net/bnxt/bnxt_ethdev.c | 16 +++++-----
drivers/net/e1000/em_ethdev.c | 8 ++---
drivers/net/e1000/igb_ethdev.c | 9 +++---
drivers/net/enic/enic.h | 2 +-
drivers/net/enic/enic_ethdev.c | 4 +--
drivers/net/enic/enic_main.c | 9 +++---
drivers/net/fm10k/fm10k_ethdev.c | 3 +-
drivers/net/i40e/i40e_ethdev.c | 17 ++++++-----
drivers/net/i40e/i40e_ethdev_vf.c | 14 ++++-----
drivers/net/ixgbe/ixgbe_ethdev.c | 33 ++++++++++++--------
drivers/net/mlx4/mlx4.c | 16 ++++++----
drivers/net/mlx5/mlx5.h | 4 +--
drivers/net/mlx5/mlx5_mac.c | 16 ++++++----
drivers/net/qede/qede_ethdev.c | 6 ++--
drivers/net/ring/rte_eth_ring.c | 3 +-
drivers/net/virtio/virtio_ethdev.c | 13 ++++----
lib/librte_ether/rte_ethdev.c | 15 ++++++----
lib/librte_ether/rte_ethdev.h | 2 +-
22 files changed, 184 insertions(+), 90 deletions(-)
--
2.7.4
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH] doc: announce ABI change on ethdev
@ 2017-05-01 6:58 13% Shahaf Shuler
2017-05-09 10:24 4% ` Shahaf Shuler
` (4 more replies)
0 siblings, 5 replies; 200+ results
From: Shahaf Shuler @ 2017-05-01 6:58 UTC (permalink / raw)
To: dev
This is an ABI change notice for DPDK 17.08 in librte_ether
about changes in rte_eth_txmode structure.
Currently Tx offloads are enabled by default, and can be disabled
using ETH_TXQ_FLAGS_NO* flags. This behaviour is not consistent with
the Rx side where the Rx offloads are disabled by default and enabled
according to a bit field in the rte_eth_rxmode structure.
The proposal is to disable the Tx offloads by default, and provide
a way for the application to enable them in rte_eth_txmode structure.
Besides making the Tx configuration API more consistent for
applications, PMDs will be able to provide a better out of the
box performance.
Finally, as part of the work, the ETH_TXQ_FLAGS_NO* will
be superseded as well.
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
looks like this patch has arrived to everyone
besides dev@dpdk.org resending it again. sorry for
the noise.
---
doc/guides/rel_notes/deprecation.rst | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index a3e7c720c..0920b4766 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -81,3 +81,11 @@ Deprecation Notices
- ``rte_crpytodev_scheduler_mode_get``, replaced by ``rte_cryptodev_scheduler_mode_get``
- ``rte_crpytodev_scheduler_mode_set``, replaced by ``rte_cryptodev_scheduler_mode_set``
+
+* ethdev: in 17.08 ABI changes are planned:
+ Tx offloads will no longer be enabled by default.
+ Instead, the ``rte_eth_txmode`` structure will be extended with a bit field to enable
+ each Tx offload.
+ Besides making the Rx/Tx configuration API more consistent for the
+ application, PMDs will be able to provide a better out of the box performance.
+ As part of the work, ``ETH_TXQ_FLAGS_NO*`` will be superseded as well.
--
2.12.0
^ permalink raw reply [relevance 13%]
* [dpdk-dev] [PATCH v5 2/3] doc: change type of return value of adding MAC addr
2017-04-30 4:11 3% [dpdk-dev] [PATCH v5 0/3] MAC address fail to be added shouldn't be stored Wei Dai
@ 2017-04-30 4:11 5% ` Wei Dai
2017-05-02 12:44 3% ` [dpdk-dev] [PATCH v6 0/3] MAC address fail to be added shouldn't be stored Wei Dai
1 sibling, 0 replies; 200+ results
From: Wei Dai @ 2017-04-30 4:11 UTC (permalink / raw)
To: wenzhuo.lu, thomas, harish.patil, rasesh.mody, stephen.hurd,
ajit.khaparde, helin.zhang, konstantin.ananyev, jingjing.wu,
jing.d.chen, adrien.mazarguil, nelio.laranjeiro,
bruce.richardson, yuanhan.liu, maxime.coquelin, shepard.siegel,
ed.czeck, john.miller
Cc: dev, Wei Dai
Add the following lines in the API change section of the release note.
Without this change, if a MAC address fails to be added, it is still
stored and may be regarded as a valid one. This may lead to errors
in the application. The type of the return value of eth_mac_addr_add_t in
rte_ethdev.h is changed. Any specific NIC driver also follows this change.
Signed-off-by: Wei Dai <wei.dai@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
doc/guides/rel_notes/release_17_05.rst | 7 +++++++
1 file changed, 7 insertions(+)
mode change 100644 => 100755 doc/guides/rel_notes/release_17_05.rst
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
old mode 100644
new mode 100755
index ad20e86..8850fbe
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -481,6 +481,13 @@ ABI Changes
* The ``rte_cryptodev_info.sym`` structure has new field ``max_nb_sessions_per_qp``
to support drivers which may support limited number of sessions per queue_pair.
+* **Return if the MAC address is added successfully or not.**
+
+ Without this change, if a MAC address fails to be added, it is still stored
+ and may be regarded as a valid one. This may lead to errors in the application.
+ The type of the return value of eth_mac_addr_add_t in rte_ethdev.h is changed.
+ Any specific NIC driver also follows this change.
+
Removed Items
-------------
--
2.7.4
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v5 0/3] MAC address fail to be added shouldn't be stored
@ 2017-04-30 4:11 3% Wei Dai
2017-04-30 4:11 5% ` [dpdk-dev] [PATCH v5 2/3] doc: change type of return value of adding MAC addr Wei Dai
2017-05-02 12:44 3% ` [dpdk-dev] [PATCH v6 0/3] MAC address fail to be added shouldn't be stored Wei Dai
0 siblings, 2 replies; 200+ results
From: Wei Dai @ 2017-04-30 4:11 UTC (permalink / raw)
To: wenzhuo.lu, thomas, harish.patil, rasesh.mody, stephen.hurd,
ajit.khaparde, helin.zhang, konstantin.ananyev, jingjing.wu,
jing.d.chen, adrien.mazarguil, nelio.laranjeiro,
bruce.richardson, yuanhan.liu, maxime.coquelin, shepard.siegel,
ed.czeck, john.miller
Cc: dev, Wei Dai
Current ethdev always stores a MAC address even if it fails to be added.
Other functions may then regard the failed MAC address as valid, leading to
errors. So there is a need to check whether the address is added
successfully or not and to discard it if it fails.
The 3rd patch adds a command "add_more_mac_addr port_id base_mac_addr count"
to add more than one MAC address at a time.
This command can simplify the test for the first patch.
Normally a MAC address may fail to be added only after many MAC
addresses have already been added.
Without this command, a tester can only trigger a failed MAC address
by running the testpmd command 'mac_addr add' many times.
The v4 patch set has got acknowledgements from
Nelio Laranjeiro <nelio.laranjeiro@6wind.com> for mlx changes
Yuanhan Liu <yuanhan.liu@linux.intel.com> for virtio changes
---
Changes
v5:
1. rebase master branch
2. add support to drivers/net/ark
3. fix some minor defects
v4:
1. rebase master branch
2. follow code style
v3:
1. Change return value for some specific NIC according to feedbacks
from the community;
2. Add ABI change in release note;
3. Add more detailed commit message.
v2:
fix warnings and erros from check-git-log.sh and checkpatch.pl
Wei Dai (3):
ethdev: fix adding invalid MAC addr
doc: change type of return value of adding MAC addr
app/testpmd: add a command to add many MAC addrs
app/test-pmd/cmdline.c | 55 ++++++++++++++++++++++++++++++++++
doc/guides/rel_notes/release_17_05.rst | 7 +++++
drivers/net/ark/ark_ethdev.c | 15 ++++++----
drivers/net/bnx2x/bnx2x_ethdev.c | 7 +++--
drivers/net/bnxt/bnxt_ethdev.c | 16 +++++-----
drivers/net/e1000/base/e1000_api.c | 2 +-
drivers/net/e1000/em_ethdev.c | 8 ++---
drivers/net/e1000/igb_ethdev.c | 9 +++---
drivers/net/enic/enic.h | 2 +-
drivers/net/enic/enic_ethdev.c | 4 +--
drivers/net/enic/enic_main.c | 9 +++---
drivers/net/fm10k/fm10k_ethdev.c | 3 +-
drivers/net/i40e/i40e_ethdev.c | 17 ++++++-----
drivers/net/i40e/i40e_ethdev_vf.c | 14 ++++-----
drivers/net/ixgbe/ixgbe_ethdev.c | 33 ++++++++++++--------
drivers/net/mlx4/mlx4.c | 16 ++++++----
drivers/net/mlx5/mlx5.h | 4 +--
drivers/net/mlx5/mlx5_mac.c | 16 ++++++----
drivers/net/qede/qede_ethdev.c | 6 ++--
drivers/net/ring/rte_eth_ring.c | 3 +-
drivers/net/virtio/virtio_ethdev.c | 13 ++++----
lib/librte_ether/rte_ethdev.c | 15 ++++++----
lib/librte_ether/rte_ethdev.h | 2 +-
23 files changed, 185 insertions(+), 91 deletions(-)
mode change 100644 => 100755 app/test-pmd/cmdline.c
mode change 100644 => 100755 doc/guides/rel_notes/release_17_05.rst
--
2.7.4
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v5 2/3] doc: change type of return value of adding MAC addr
2017-04-29 6:08 3% [dpdk-dev] [PATCH v5 0/3] MAC address fail to be added shouldn't be stored Wei Dai
@ 2017-04-29 6:08 5% ` Wei Dai
0 siblings, 0 replies; 200+ results
From: Wei Dai @ 2017-04-29 6:08 UTC (permalink / raw)
To: wenzhuo.lu, thomas, harish.patil, rasesh.mody, stephen.hurd,
ajit.khaparde, helin.zhang, konstantin.ananyev, jingjing.wu,
jing.d.chen, adrien.mazarguil, nelio.laranjeiro,
bruce.richardson, yuanhan.liu, maxime.coquelin, shepard.siegel,
ed.czeck, john.miller
Cc: dev, Wei Dai
Add the following lines to the ABI change section of the release note.
Without this change, a MAC address that fails to be added is still
stored and may be regarded as a valid one. This may lead to errors in
the application. The return type of eth_mac_addr_add_t in rte_ethdev.h
is changed, and every NIC-specific driver follows this change.
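For illustration, the driver-op signature change being documented can be
sketched as follows (a simplified sketch; parameter list per the 17.05-era
ethdev headers):

/* Before: the driver cannot report failure to the ethdev layer. */
typedef void (*eth_mac_addr_add_t)(struct rte_eth_dev *dev,
		struct ether_addr *mac_addr,
		uint32_t index, uint32_t vmdq);

/* After: a negative errno lets ethdev discard the failed address. */
typedef int (*eth_mac_addr_add_t)(struct rte_eth_dev *dev,
		struct ether_addr *mac_addr,
		uint32_t index, uint32_t vmdq);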
Signed-off-by: Wei Dai <wei.dai@intel.com>
---
doc/guides/rel_notes/release_17_05.rst | 7 +++++++
1 file changed, 7 insertions(+)
mode change 100644 => 100755 doc/guides/rel_notes/release_17_05.rst
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
old mode 100644
new mode 100755
index ad20e86..8850fbe
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -481,6 +481,13 @@ ABI Changes
* The ``rte_cryptodev_info.sym`` structure has new field ``max_nb_sessions_per_qp``
to support drivers which may support limited number of sessions per queue_pair.
+* **Return whether the MAC address was added successfully or not.**
+
+ If a MAC address fails to be added without this change, it is still stored
+ and may be regarded as a valid one. This may lead to errors in the
+ application. The return type of eth_mac_addr_add_t in rte_ethdev.h is
+ changed, and every NIC-specific driver follows this change.
+
Removed Items
-------------
--
2.7.4
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v5 0/3] MAC address fail to be added shouldn't be stored
@ 2017-04-29 6:08 3% Wei Dai
2017-04-29 6:08 5% ` [dpdk-dev] [PATCH v5 2/3] doc: change type of return value of adding MAC addr Wei Dai
0 siblings, 1 reply; 200+ results
From: Wei Dai @ 2017-04-29 6:08 UTC (permalink / raw)
To: wenzhuo.lu, thomas, harish.patil, rasesh.mody, stephen.hurd,
ajit.khaparde, helin.zhang, konstantin.ananyev, jingjing.wu,
jing.d.chen, adrien.mazarguil, nelio.laranjeiro,
bruce.richardson, yuanhan.liu, maxime.coquelin, shepard.siegel,
ed.czeck, john.miller
Cc: dev, Wei Dai
Currently, ethdev always stores a MAC address even if it fails to be added.
Other functions may then regard the failed MAC address as valid, which can
lead to errors. There is therefore a need to check whether the address was
added successfully, and to discard it if it was not.
The 3rd patch adds a command "add_more_mac_addr port_id base_mac_addr count"
to add more than one MAC address at a time.
This command simplifies testing of the first patch.
Normally a MAC address fails to be added only after many MAC
addresses have already been added.
Without this command, a tester could only trigger a failed MAC address
by running the testpmd command 'mac_addr add' many times.
For the v4 patch set, acknowledgements have been received from
Nelio Laranjeiro <nelio.laranjeiro@6wind.com> for the mlx changes and
Yuanhan Liu <yuanhan.liu@linux.intel.com> for the virtio changes.
---
Changes
v5:
1. rebase onto master branch
2. add support for drivers/net/ark
3. fix some minor defects
v4:
1. rebase onto master branch
2. follow code style
v3:
1. Change return value for some specific NICs according to feedback
from the community;
2. Add ABI change in release note;
3. Add more detailed commit message.
v2:
fix warnings and errors from check-git-log.sh and checkpatch.pl
Wei Dai (3):
ethdev: fix adding invalid MAC addr
doc: change type of return value of adding MAC addr
app/testpmd: add a command to add many MAC addrs
app/test-pmd/cmdline.c | 55 ++++++++++++++++++++++++++++++++++
doc/guides/rel_notes/release_17_05.rst | 7 +++++
drivers/net/ark/ark_ethdev.c | 15 ++++++----
drivers/net/bnx2x/bnx2x_ethdev.c | 7 +++--
drivers/net/bnxt/bnxt_ethdev.c | 16 +++++-----
drivers/net/e1000/base/e1000_api.c | 2 +-
drivers/net/e1000/em_ethdev.c | 8 ++---
drivers/net/e1000/igb_ethdev.c | 9 +++---
drivers/net/enic/enic.h | 2 +-
drivers/net/enic/enic_ethdev.c | 4 +--
drivers/net/enic/enic_main.c | 9 +++---
drivers/net/fm10k/fm10k_ethdev.c | 3 +-
drivers/net/i40e/i40e_ethdev.c | 17 ++++++-----
drivers/net/i40e/i40e_ethdev_vf.c | 14 ++++-----
drivers/net/ixgbe/ixgbe_ethdev.c | 33 ++++++++++++--------
drivers/net/mlx4/mlx4.c | 16 ++++++----
drivers/net/mlx5/mlx5.h | 4 +--
drivers/net/mlx5/mlx5_mac.c | 16 ++++++----
drivers/net/qede/qede_ethdev.c | 6 ++--
drivers/net/ring/rte_eth_ring.c | 3 +-
drivers/net/virtio/virtio_ethdev.c | 13 ++++----
lib/librte_ether/rte_ethdev.c | 15 ++++++----
lib/librte_ether/rte_ethdev.h | 2 +-
23 files changed, 185 insertions(+), 91 deletions(-)
mode change 100644 => 100755 app/test-pmd/cmdline.c
mode change 100644 => 100755 doc/guides/rel_notes/release_17_05.rst
--
2.7.4
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [RFC PATCH] timer: inform periodic timers of multiple expiries
@ 2017-04-28 13:29 4% ` Bruce Richardson
2017-05-31 9:16 4% ` [dpdk-dev] [PATCH 0/3] " Bruce Richardson
1 sibling, 0 replies; 200+ results
From: Bruce Richardson @ 2017-04-28 13:29 UTC (permalink / raw)
To: dev
I'd like some agreement soon on the approach to be taken to fix this issue,
in case we need an ABI change notice in 17.05 - i.e. if we take the
approach given in the patch below.
Also, while the alternative solution of calling a function multiple
times is not an ABI/API change, I view it as more problematic from a
compatibility point of view, as it would be a "silent" functionality
change for existing apps.
Regards,
/Bruce
On Fri, Apr 28, 2017 at 02:25:38PM +0100, Bruce Richardson wrote:
> If rte_timer_manage() is called much less frequently than the period of a
> periodic timer, then timer expiries will be missed. For example, if a timer
> has a period of 300us, but rte_timer_manage() is called every 1ms, then
> there will only be one timer callback call every 1ms instead of 3 within
> that time.
>
> While we could fix this by having each callback function called multiple
> times within rte_timer_manage(), this would lead to out-of-order timeouts,
> and would be slower with all the function call overheads - especially in
> the case of a timeout doing something trivial like incrementing a counter.
> Therefore, we instead modify the callback functions to take a count of the
> number of expiries that have passed since the last time the callback was
> called.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
> examples/l2fwd-jobstats/main.c | 7 +++++--
> examples/l2fwd-keepalive/main.c | 8 ++++----
> examples/l3fwd-power/main.c | 5 +++--
> examples/performance-thread/common/lthread_sched.c | 4 +++-
> examples/performance-thread/common/lthread_sched.h | 2 +-
> examples/timer/main.c | 10 ++++++----
> lib/librte_timer/rte_timer.c | 9 ++++++++-
> lib/librte_timer/rte_timer.h | 2 +-
> test/test/test_timer.c | 14 +++++++++-----
> test/test/test_timer_perf.c | 4 +++-
> test/test/test_timer_racecond.c | 3 ++-
> 11 files changed, 45 insertions(+), 23 deletions(-)
>
> diff --git a/examples/l2fwd-jobstats/main.c b/examples/l2fwd-jobstats/main.c
> index e6e6c22..b264344 100644
> --- a/examples/l2fwd-jobstats/main.c
> +++ b/examples/l2fwd-jobstats/main.c
> @@ -410,7 +410,8 @@ l2fwd_job_update_cb(struct rte_jobstats *job, int64_t result)
> }
>
> static void
> -l2fwd_fwd_job(__rte_unused struct rte_timer *timer, void *arg)
> +l2fwd_fwd_job(__rte_unused struct rte_timer *timer,
> + __rte_unused unsigned int count, void *arg)
> {
> struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
> struct rte_mbuf *m;
> @@ -460,7 +461,9 @@ l2fwd_fwd_job(__rte_unused struct rte_timer *timer, void *arg)
> }
>
> static void
> -l2fwd_flush_job(__rte_unused struct rte_timer *timer, __rte_unused void *arg)
> +l2fwd_flush_job(__rte_unused struct rte_timer *timer,
> + __rte_unused unsigned int count,
> + __rte_unused void *arg)
> {
> uint64_t now;
> unsigned lcore_id;
> diff --git a/examples/l2fwd-keepalive/main.c b/examples/l2fwd-keepalive/main.c
> index 4623d2a..26eba12 100644
> --- a/examples/l2fwd-keepalive/main.c
> +++ b/examples/l2fwd-keepalive/main.c
> @@ -144,8 +144,9 @@ struct rte_keepalive *rte_global_keepalive_info;
>
> /* Print out statistics on packets dropped */
> static void
> -print_stats(__attribute__((unused)) struct rte_timer *ptr_timer,
> - __attribute__((unused)) void *ptr_data)
> +print_stats(__rte_unused struct rte_timer *ptr_timer,
> + __rte_unused unsigned int count,
> + __rte_unused void *ptr_data)
> {
> uint64_t total_packets_dropped, total_packets_tx, total_packets_rx;
> unsigned portid;
> @@ -748,8 +749,7 @@ main(int argc, char **argv)
> (check_period * rte_get_timer_hz()) / 1000,
> PERIODICAL,
> rte_lcore_id(),
> - (void(*)(struct rte_timer*, void*))
> - &rte_keepalive_dispatch_pings,
> + (void *)&rte_keepalive_dispatch_pings,
> rte_global_keepalive_info
> ) != 0 )
> rte_exit(EXIT_FAILURE, "Keepalive setup failure.\n");
> diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
> index ec40a17..318aefd 100644
> --- a/examples/l3fwd-power/main.c
> +++ b/examples/l3fwd-power/main.c
> @@ -410,8 +410,9 @@ signal_exit_now(int sigtype)
>
> /* Freqency scale down timer callback */
> static void
> -power_timer_cb(__attribute__((unused)) struct rte_timer *tim,
> - __attribute__((unused)) void *arg)
> +power_timer_cb(__rte_unused struct rte_timer *tim,
> + __rte_unused unsigned int count,
> + __rte_unused void *arg)
> {
> uint64_t hz;
> float sleep_time_ratio;
> diff --git a/examples/performance-thread/common/lthread_sched.c b/examples/performance-thread/common/lthread_sched.c
> index c64c21f..c4c7d3b 100644
> --- a/examples/performance-thread/common/lthread_sched.c
> +++ b/examples/performance-thread/common/lthread_sched.c
> @@ -437,7 +437,9 @@ static inline void _lthread_resume(struct lthread *lt)
> * Handle sleep timer expiry
> */
> void
> -_sched_timer_cb(struct rte_timer *tim, void *arg)
> +_sched_timer_cb(struct rte_timer *tim,
> + unsigned int count __rte_unused,
> + void *arg)
> {
> struct lthread *lt = (struct lthread *) arg;
> uint64_t state = lt->state;
> diff --git a/examples/performance-thread/common/lthread_sched.h b/examples/performance-thread/common/lthread_sched.h
> index 7cddda9..f2af8b3 100644
> --- a/examples/performance-thread/common/lthread_sched.h
> +++ b/examples/performance-thread/common/lthread_sched.h
> @@ -149,7 +149,7 @@ _reschedule(void)
> }
>
> extern struct lthread_sched *schedcore[];
> -void _sched_timer_cb(struct rte_timer *tim, void *arg);
> +void _sched_timer_cb(struct rte_timer *tim, unsigned int count, void *arg);
> void _sched_shutdown(__rte_unused void *arg);
>
> #ifdef __cplusplus
> diff --git a/examples/timer/main.c b/examples/timer/main.c
> index 37ad559..92a6a1f 100644
> --- a/examples/timer/main.c
> +++ b/examples/timer/main.c
> @@ -55,8 +55,9 @@ static struct rte_timer timer1;
>
> /* timer0 callback */
> static void
> -timer0_cb(__attribute__((unused)) struct rte_timer *tim,
> - __attribute__((unused)) void *arg)
> +timer0_cb(__rte_unused struct rte_timer *tim,
> + __rte_unused unsigned int count,
> + __rte_unused void *arg)
> {
> static unsigned counter = 0;
> unsigned lcore_id = rte_lcore_id();
> @@ -71,8 +72,9 @@ timer0_cb(__attribute__((unused)) struct rte_timer *tim,
>
> /* timer1 callback */
> static void
> -timer1_cb(__attribute__((unused)) struct rte_timer *tim,
> - __attribute__((unused)) void *arg)
> +timer1_cb(__rte_unused struct rte_timer *tim,
> + __rte_unused unsigned int count,
> + __rte_unused void *arg)
> {
> unsigned lcore_id = rte_lcore_id();
> uint64_t hz;
> diff --git a/lib/librte_timer/rte_timer.c b/lib/librte_timer/rte_timer.c
> index 18782fa..d1e2c12 100644
> --- a/lib/librte_timer/rte_timer.c
> +++ b/lib/librte_timer/rte_timer.c
> @@ -590,7 +590,14 @@ void rte_timer_manage(void)
> priv_timer[lcore_id].running_tim = tim;
>
> /* execute callback function with list unlocked */
> - tim->f(tim, tim->arg);
> + if (tim->period == 0)
> + tim->f(tim, 1, tim->arg);
> + else {
> + /* for periodic check how many expiries we have */
> + uint64_t over_time = cur_time - tim->expire;
> + unsigned int extra_expiries = over_time / tim->period;
> + tim->f(tim, 1 + extra_expiries, tim->arg);
> + }
>
> __TIMER_STAT_ADD(pending, -1);
> /* the timer was stopped or reloaded by the callback
> diff --git a/lib/librte_timer/rte_timer.h b/lib/librte_timer/rte_timer.h
> index a276a73..bc434ec 100644
> --- a/lib/librte_timer/rte_timer.h
> +++ b/lib/librte_timer/rte_timer.h
> @@ -117,7 +117,7 @@ struct rte_timer;
> /**
> * Callback function type for timer expiry.
> */
> -typedef void (*rte_timer_cb_t)(struct rte_timer *, void *);
> +typedef void (*rte_timer_cb_t)(struct rte_timer *, unsigned int count, void *);
>
> #define MAX_SKIPLIST_DEPTH 10
>
> diff --git a/test/test/test_timer.c b/test/test/test_timer.c
> index 2f6525a..0b86d3c 100644
> --- a/test/test/test_timer.c
> +++ b/test/test/test_timer.c
> @@ -153,7 +153,8 @@ struct mytimerinfo {
>
> static struct mytimerinfo mytiminfo[NB_TIMER];
>
> -static void timer_basic_cb(struct rte_timer *tim, void *arg);
> +static void
> +timer_basic_cb(struct rte_timer *tim, unsigned int count, void *arg);
>
> static void
> mytimer_reset(struct mytimerinfo *timinfo, uint64_t ticks,
> @@ -167,6 +168,7 @@ mytimer_reset(struct mytimerinfo *timinfo, uint64_t ticks,
> /* timer callback for stress tests */
> static void
> timer_stress_cb(__attribute__((unused)) struct rte_timer *tim,
> + __attribute__((unused)) unsigned int count,
> __attribute__((unused)) void *arg)
> {
> long r;
> @@ -293,9 +295,11 @@ static volatile int cb_count = 0;
> /* callback for second stress test. will only be called
> * on master lcore */
> static void
> -timer_stress2_cb(struct rte_timer *tim __rte_unused, void *arg __rte_unused)
> +timer_stress2_cb(struct rte_timer *tim __rte_unused,
> + unsigned int count,
> + void *arg __rte_unused)
> {
> - cb_count++;
> + cb_count += count;
> }
>
> #define NB_STRESS2_TIMERS 8192
> @@ -430,7 +434,7 @@ timer_stress2_main_loop(__attribute__((unused)) void *arg)
>
> /* timer callback for basic tests */
> static void
> -timer_basic_cb(struct rte_timer *tim, void *arg)
> +timer_basic_cb(struct rte_timer *tim, unsigned int count, void *arg)
> {
> struct mytimerinfo *timinfo = arg;
> uint64_t hz = rte_get_timer_hz();
> @@ -440,7 +444,7 @@ timer_basic_cb(struct rte_timer *tim, void *arg)
> if (rte_timer_pending(tim))
> return;
>
> - timinfo->count ++;
> + timinfo->count += count;
>
> RTE_LOG(INFO, TESTTIMER,
> "%"PRIu64": callback id=%u count=%u on core %u\n",
> diff --git a/test/test/test_timer_perf.c b/test/test/test_timer_perf.c
> index fa77efb..5b3867d 100644
> --- a/test/test/test_timer_perf.c
> +++ b/test/test/test_timer_perf.c
> @@ -48,7 +48,9 @@
> int outstanding_count = 0;
>
> static void
> -timer_cb(struct rte_timer *t __rte_unused, void *param __rte_unused)
> +timer_cb(struct rte_timer *t __rte_unused,
> + unsigned int count __rte_unused,
> + void *param __rte_unused)
> {
> outstanding_count--;
> }
> diff --git a/test/test/test_timer_racecond.c b/test/test/test_timer_racecond.c
> index 7824ec4..b1c5c86 100644
> --- a/test/test/test_timer_racecond.c
> +++ b/test/test/test_timer_racecond.c
> @@ -65,7 +65,8 @@ static volatile unsigned stop_slaves;
> static int reload_timer(struct rte_timer *tim);
>
> static void
> -timer_cb(struct rte_timer *tim, void *arg __rte_unused)
> +timer_cb(struct rte_timer *tim, unsigned int count __rte_unused,
> + void *arg __rte_unused)
> {
> /* Simulate slow callback function, 100 us. */
> rte_delay_us(100);
> --
> 2.9.3
>
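As a usage illustration of the callback change proposed above (a minimal
sketch, not part of the patch; names are illustrative), a periodic counter
stays accurate even when rte_timer_manage() runs less often than the timer
period:

#include <rte_timer.h>

static uint64_t stats_ticks;

/* 'count' is 1 plus the number of expiries missed since
 * rte_timer_manage() last ran this callback. */
static void
stats_timer_cb(struct rte_timer *tim __rte_unused,
		unsigned int count,
		void *arg __rte_unused)
{
	stats_ticks += count;
}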
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v1 3/6] ethdev: get xstats ID by name
2017-04-27 14:42 1% ` [dpdk-dev] [PATCH v1 1/6] ethdev: revert patches extending xstats API in ethdev Michal Jastrzebski
2017-04-27 14:42 2% ` [dpdk-dev] [PATCH v1 2/6] ethdev: retrieve xstats by ID Michal Jastrzebski
@ 2017-04-27 14:42 4% ` Michal Jastrzebski
2 siblings, 0 replies; 200+ results
From: Michal Jastrzebski @ 2017-04-27 14:42 UTC (permalink / raw)
To: dev; +Cc: harry.van.haaren, deepak.k.jain, Kuba Kozak
From: Kuba Kozak <kubax.kozak@intel.com>
Introduced a new function, rte_eth_xstats_get_id_by_name,
to retrieve xstat IDs by their names.
doc: added release note entry
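As a usage illustration (a fragment, not part of the patch; "rx_good_packets"
and port_id are illustrative), the return codes documented in the header
below can be handled as:

uint64_t id;
int ret = rte_eth_xstats_get_id_by_name(port_id, "rx_good_packets", &id);

if (ret == 0)
	printf("id = %" PRIu64 "\n", id);
else if (ret == -ENODEV)
	printf("invalid port_id\n");
else /* -EINVAL */
	printf("no xstat with that name on this port\n");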
Signed-off-by: Kuba Kozak <kubax.kozak@intel.com>
---
doc/guides/rel_notes/release_17_05.rst | 2 ++
lib/librte_ether/rte_ethdev.c | 44 ++++++++++++++++++++++++++++++++++
lib/librte_ether/rte_ethdev.h | 20 ++++++++++++++++
lib/librte_ether/rte_ether_version.map | 1 +
4 files changed, 67 insertions(+)
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index 12d6b49..2e1463f 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -454,6 +454,8 @@ API Changes
* Added new functions ``rte_eth_xstats_get_by_id`` and ``rte_eth_xstats_get_names_by_id``
+ * Added new function ``rte_eth_xstats_get_id_by_name``
+
ABI Changes
-----------
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 331d2e3..8aa6331 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1382,6 +1382,50 @@ get_xstats_count(uint8_t port_id)
}
int
+rte_eth_xstats_get_id_by_name(uint8_t port_id, const char *xstat_name,
+ uint64_t *id)
+{
+ int cnt_xstats, idx_xstat;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+
+ if (!id) {
+ RTE_PMD_DEBUG_TRACE("Error: id pointer is NULL\n");
+ return -ENOMEM;
+ }
+
+ if (!xstat_name) {
+ RTE_PMD_DEBUG_TRACE("Error: xstat_name pointer is NULL\n");
+ return -ENOMEM;
+ }
+
+ /* Get count */
+ cnt_xstats = rte_eth_xstats_get_names_by_id(port_id, NULL, 0, NULL);
+ if (cnt_xstats < 0) {
+ RTE_PMD_DEBUG_TRACE("Error: Cannot get count of xstats\n");
+ return -ENODEV;
+ }
+
+ /* Get id-name lookup table */
+ struct rte_eth_xstat_name xstats_names[cnt_xstats];
+
+ if (cnt_xstats != rte_eth_xstats_get_names_by_id(
+ port_id, xstats_names, cnt_xstats, NULL)) {
+ RTE_PMD_DEBUG_TRACE("Error: Cannot get xstats lookup\n");
+ return -1;
+ }
+
+ for (idx_xstat = 0; idx_xstat < cnt_xstats; idx_xstat++) {
+ if (!strcmp(xstats_names[idx_xstat].name, xstat_name)) {
+ *id = idx_xstat;
+ return 0;
+ }
+ }
+
+ return -EINVAL;
+}
+
+int
rte_eth_xstats_get_names_by_id(uint8_t port_id,
struct rte_eth_xstat_name *xstats_names, unsigned int size,
uint64_t *ids)
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 8594416..782a6a8 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -2389,6 +2389,26 @@ int rte_eth_xstats_get_by_id(uint8_t port_id, const uint64_t *ids,
uint64_t *values, unsigned int n);
/**
+ * Gets the ID of a statistic from its name.
+ *
+ * This function searches for the statistics using string compares, and
+ * as such should not be used on the fast-path. For fast-path retrieval of
+ * specific statistics, store the ID as provided in *id* from this function,
+ * and pass the ID to rte_eth_xstats_get_by_id().
+ *
+ * @param port_id The port to look up statistics from
+ * @param xstat_name The name of the statistic to return
+ * @param[out] id A pointer to an app-supplied uint64_t which should be
+ * set to the ID of the stat if the stat exists.
+ * @return
+ * 0 on success
+ * -ENODEV for invalid port_id,
+ * -EINVAL if the xstat_name doesn't exist in port_id
+ */
+int rte_eth_xstats_get_id_by_name(uint8_t port_id, const char *xstat_name,
+ uint64_t *id);
+
+/**
* Reset extended statistics of an Ethernet device.
*
* @param port_id
diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map
index 1abd717..d6726bb 100644
--- a/lib/librte_ether/rte_ether_version.map
+++ b/lib/librte_ether/rte_ether_version.map
@@ -152,6 +152,7 @@ DPDK_17.05 {
rte_eth_dev_attach_secondary;
rte_eth_find_next;
rte_eth_xstats_get_by_id;
+ rte_eth_xstats_get_id_by_name;
rte_eth_xstats_get_names_by_id;
} DPDK_17.02;
--
2.7.4
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v1 2/6] ethdev: retrieve xstats by ID
2017-04-27 14:42 1% ` [dpdk-dev] [PATCH v1 1/6] ethdev: revert patches extending xstats API in ethdev Michal Jastrzebski
@ 2017-04-27 14:42 2% ` Michal Jastrzebski
2017-04-27 14:42 4% ` [dpdk-dev] [PATCH v1 3/6] ethdev: get xstats ID by name Michal Jastrzebski
2 siblings, 0 replies; 200+ results
From: Michal Jastrzebski @ 2017-04-27 14:42 UTC (permalink / raw)
To: dev; +Cc: harry.van.haaren, deepak.k.jain, Kuba Kozak
From: Kuba Kozak <kubax.kozak@intel.com>
Extended the xstats API in the ethdev library to allow grouping of stats
logically so they can be retrieved per logical grouping managed
by the application.
Added new functions rte_eth_xstats_get_names_by_id and
rte_eth_xstats_get_by_id taking additional arguments (compared
to rte_eth_xstats_get_names and rte_eth_xstats_get) - an array of ids
and an array of values.
doc: add description for modified xstats API
Documentation change for the new extended statistics API functions.
The old API only allows retrieval of *all* of the NIC statistics
at once. Given that this requires an MMIO read PCI transaction per
statistic, it is an inefficient way of retrieving just a few key statistics.
Often a monitoring agent only has an interest in a few key statistics,
and the old API forces wasting CPU time and PCIe bandwidth in retrieving
*all* statistics, even those that the application didn't explicitly
show an interest in.
The new, more flexible API allows retrieval of statistics per ID.
If a PMD wishes, it can be implemented to read just the required
NIC registers. As a result, the monitoring application no longer wastes
PCIe bandwidth and CPU time.
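As a brief illustration of the intended slow-path/fast-path split (a sketch,
not part of the patch; error handling omitted, and port_id and the stat
names are illustrative):

uint64_t ids[2], values[2];

/* Slow path (init time): resolve human-readable names to IDs once.
 * rte_eth_xstats_get_id_by_name() is added by patch 3/6 of this series. */
rte_eth_xstats_get_id_by_name(port_id, "rx_errors", &ids[0]);
rte_eth_xstats_get_id_by_name(port_id, "tx_errors", &ids[1]);

/* Fast path: fetch only those two counters, with no string compares. */
rte_eth_xstats_get_by_id(port_id, ids, values, 2);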
Signed-off-by: Kuba Kozak <kubax.kozak@intel.com>
---
doc/guides/prog_guide/poll_mode_drv.rst | 167 ++++++++++++++++++---
doc/guides/rel_notes/release_17_05.rst | 4 +
lib/librte_ether/rte_ethdev.c | 250 ++++++++++++++++++++++++++++++++
lib/librte_ether/rte_ethdev.h | 70 +++++++++
lib/librte_ether/rte_ether_version.map | 2 +
5 files changed, 476 insertions(+), 17 deletions(-)
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index e48c121..4987f70 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -334,24 +334,21 @@ The Ethernet device API exported by the Ethernet PMDs is described in the *DPDK
Extended Statistics API
~~~~~~~~~~~~~~~~~~~~~~~
-The extended statistics API allows each individual PMD to expose a unique set
-of statistics. Accessing these from application programs is done via two
-functions:
-
-* ``rte_eth_xstats_get``: Fills in an array of ``struct rte_eth_xstat``
- with extended statistics.
-* ``rte_eth_xstats_get_names``: Fills in an array of
- ``struct rte_eth_xstat_name`` with extended statistic name lookup
- information.
-
-Each ``struct rte_eth_xstat`` contains an identifier and value pair, and
-each ``struct rte_eth_xstat_name`` contains a string. Each identifier
-within the ``struct rte_eth_xstat`` lookup array must have a corresponding
-entry in the ``struct rte_eth_xstat_name`` lookup array. Within the latter
-the index of the entry is the identifier the string is associated with.
-These identifiers, as well as the number of extended statistic exposed, must
-remain constant during runtime. Note that extended statistic identifiers are
+The extended statistics API allows a PMD to expose all statistics that are
+available to it, including statistics that are unique to the device.
+Each statistic has three properties ``name``, ``id`` and ``value``:
+
+* ``name``: A human readable string formatted by the scheme detailed below.
+* ``id``: An integer that represents only that statistic.
+* ``value``: An unsigned 64-bit integer that is the value of the statistic.
+
+Note that extended statistic identifiers are
driver-specific, and hence might not be the same for different ports.
+The API consists of various ``rte_eth_xstats_*()`` functions, and allows an
+application to be flexible in how it retrieves statistics.
+
+Scheme for Human Readable Names
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A naming scheme exists for the strings exposed to clients of the API. This is
to allow scraping of the API for statistics of interest. The naming scheme uses
@@ -392,3 +389,139 @@ Some additions in the metadata scheme are as follows:
An example where queue numbers are used is as follows: ``tx_q7_bytes`` which
indicates this statistic applies to queue number 7, and represents the number
of transmitted bytes on that queue.
+
+API Design
+^^^^^^^^^^
+
+The xstats API uses the ``name``, ``id``, and ``value`` to allow performant
+lookup of specific statistics. Performant lookup means two things:
+
+* No string comparisons with the ``name`` of the statistic in fast-path
+* Allow requesting of only the statistics of interest
+
+The API ensures these requirements are met by mapping the ``name`` of the
+statistic to a unique ``id``, which is used as a key for lookup in the fast-path.
+The API allows applications to request an array of ``id`` values, so that the
+PMD only performs the required calculations. Expected usage is that the
+application scans the ``name`` of each statistic, and caches the ``id``
+if it has an interest in that statistic. On the fast-path, the integer can be used
+to retrieve the actual ``value`` of the statistic that the ``id`` represents.
+
+API Functions
+^^^^^^^^^^^^^
+
+The API is built out of a small number of functions, which can be used to
+retrieve the number of statistics and the names, IDs and values of those
+statistics.
+
+* ``rte_eth_xstats_get_names_by_id()``: returns the names of the statistics. When given a
+ ``NULL`` parameter the function returns the number of statistics that are available.
+
+* ``rte_eth_xstats_get_id_by_name()``: Searches for the statistic ID that matches
+ ``xstat_name``. If found, the ``id`` integer is set.
+
+* ``rte_eth_xstats_get_by_id()``: Fills in an array of ``uint64_t`` values
+ with matching the provided ``ids`` array. If the ``ids`` array is NULL, it
+ returns all statistics that are available.
+
+
+Application Usage
+^^^^^^^^^^^^^^^^^
+
+Imagine an application that wants to view the dropped packet count. If no
+packets are dropped, the application does not read any other metrics for
+performance reasons. If packets are dropped, the application has a particular
+set of statistics that it requests. This "set" of statistics allows the app to
+decide what next steps to perform. The following code-snippets show how the
+xstats API can be used to achieve this goal.
+
+First step is to get all statistics names and list them:
+
+.. code-block:: c
+
+ struct rte_eth_xstat_name *xstats_names;
+ uint64_t *values;
+ int len, i;
+
+ /* Get number of stats */
+ len = rte_eth_xstats_get_names_by_id(port_id, NULL, 0, NULL);
+ if (len < 0) {
+ printf("Cannot get xstats count\n");
+ goto err;
+ }
+
+ xstats_names = malloc(sizeof(struct rte_eth_xstat_name) * len);
+ if (xstats_names == NULL) {
+ printf("Cannot allocate memory for xstat names\n");
+ goto err;
+ }
+
+ /* Retrieve xstats names, passing NULL for IDs to return all statistics */
+ if (len != rte_eth_xstats_get_names_by_id(port_id, xstats_names, len, NULL)) {
+ printf("Cannot get xstat names\n");
+ goto err;
+ }
+
+ values = malloc(sizeof(*values) * len);
+ if (values == NULL) {
+ printf("Cannot allocate memory for xstats\n");
+ goto err;
+ }
+
+ /* Getting xstats values */
+ if (len != rte_eth_xstats_get_by_id(port_id, NULL, values, len)) {
+ printf("Cannot get xstat values\n");
+ goto err;
+ }
+
+ /* Print all xstats names and values */
+ for (i = 0; i < len; i++) {
+ printf("%s: %"PRIu64"\n", xstats_names[i].name, values[i]);
+ }
+
+The application has access to the names of all of the statistics that the PMD
+exposes. The application can decide which statistics are of interest, cache the
+ids of those statistics by looking up the name as follows:
+
+.. code-block:: c
+
+ uint64_t id;
+ uint64_t value;
+ const char *xstat_name = "rx_errors";
+
+ if(!rte_eth_xstats_get_id_by_name(port_id, xstat_name, &id)) {
+ rte_eth_xstats_get_by_id(port_id, &id, &value, 1);
+ printf("%s: %"PRIu64"\n", xstat_name, value);
+ }
+ else {
+ printf("Cannot find xstats with a given name\n");
+ goto err;
+ }
+
+The API provides flexibility to the application so that it can look up multiple
+statistics using an array containing multiple ``id`` numbers. This reduces the
+function call overhead of retrieving statistics, and makes lookup of multiple
+statistics simpler for the application.
+
+.. code-block:: c
+
+ #define APP_NUM_STATS 4
+ /* application cached these ids previously; see above */
+ uint64_t ids_array[APP_NUM_STATS] = {3,4,7,21};
+ uint64_t value_array[APP_NUM_STATS];
+
+ /* Getting multiple xstats values from array of IDs */
+ rte_eth_xstats_get_by_id(port_id, ids_array, value_array, APP_NUM_STATS);
+
+ uint32_t i;
+ for(i = 0; i < APP_NUM_STATS; i++) {
+ printf("%d: %"PRIu64"\n", ids_array[i], value_array[i]);
+ }
+
+
+This array lookup API for xstats allows the application to create multiple
+"groups" of statistics, and to look up the values of those IDs using a single
+API call. As an end result, the application is able to achieve its goal of
+monitoring a single statistic ("rx_errors" in this case), and if that shows
+packets being dropped, it can easily retrieve a "set" of statistics using the
+IDs array parameter to the ``rte_eth_xstats_get_by_id`` function.
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index dcd55ff..12d6b49 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -450,6 +450,10 @@ API Changes
* The vhost public header file ``rte_virtio_net.h`` is renamed to
``rte_vhost.h``
+* **Reworked rte_ethdev library**
+
+ * Added new functions ``rte_eth_xstats_get_by_id`` and ``rte_eth_xstats_get_names_by_id``
+
ABI Changes
-----------
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 9922430..331d2e3 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1360,12 +1360,19 @@ get_xstats_count(uint8_t port_id)
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
dev = &rte_eth_devices[port_id];
+ if (dev->dev_ops->xstats_get_names_by_id != NULL) {
+ count = (*dev->dev_ops->xstats_get_names_by_id)(dev, NULL,
+ NULL, 0);
+ if (count < 0)
+ return count;
+ }
if (dev->dev_ops->xstats_get_names != NULL) {
count = (*dev->dev_ops->xstats_get_names)(dev, NULL, 0);
if (count < 0)
return count;
} else
count = 0;
+
count += RTE_NB_STATS;
count += RTE_MIN(dev->data->nb_rx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS) *
RTE_NB_RXQ_STATS;
@@ -1375,6 +1382,123 @@ get_xstats_count(uint8_t port_id)
}
int
+rte_eth_xstats_get_names_by_id(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, unsigned int size,
+ uint64_t *ids)
+{
+ /* Get all xstats */
+ if (!ids) {
+ struct rte_eth_dev *dev;
+ int cnt_used_entries;
+ int cnt_expected_entries;
+ int cnt_driver_entries;
+ uint32_t idx, id_queue;
+ uint16_t num_q;
+
+ cnt_expected_entries = get_xstats_count(port_id);
+ if (xstats_names == NULL || cnt_expected_entries < 0 ||
+ (int)size < cnt_expected_entries)
+ return cnt_expected_entries;
+
+ /* port_id checked in get_xstats_count() */
+ dev = &rte_eth_devices[port_id];
+ cnt_used_entries = 0;
+
+ for (idx = 0; idx < RTE_NB_STATS; idx++) {
+ snprintf(xstats_names[cnt_used_entries].name,
+ sizeof(xstats_names[0].name),
+ "%s", rte_stats_strings[idx].name);
+ cnt_used_entries++;
+ }
+ num_q = RTE_MIN(dev->data->nb_rx_queues,
+ RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ for (id_queue = 0; id_queue < num_q; id_queue++) {
+ for (idx = 0; idx < RTE_NB_RXQ_STATS; idx++) {
+ snprintf(xstats_names[cnt_used_entries].name,
+ sizeof(xstats_names[0].name),
+ "rx_q%u%s",
+ id_queue,
+ rte_rxq_stats_strings[idx].name);
+ cnt_used_entries++;
+ }
+
+ }
+ num_q = RTE_MIN(dev->data->nb_tx_queues,
+ RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ for (id_queue = 0; id_queue < num_q; id_queue++) {
+ for (idx = 0; idx < RTE_NB_TXQ_STATS; idx++) {
+ snprintf(xstats_names[cnt_used_entries].name,
+ sizeof(xstats_names[0].name),
+ "tx_q%u%s",
+ id_queue,
+ rte_txq_stats_strings[idx].name);
+ cnt_used_entries++;
+ }
+ }
+
+ if (dev->dev_ops->xstats_get_names_by_id != NULL) {
+ /* If there are any driver-specific xstats, append them
+ * to end of list.
+ */
+ cnt_driver_entries =
+ (*dev->dev_ops->xstats_get_names_by_id)(
+ dev,
+ xstats_names + cnt_used_entries,
+ NULL,
+ size - cnt_used_entries);
+ if (cnt_driver_entries < 0)
+ return cnt_driver_entries;
+ cnt_used_entries += cnt_driver_entries;
+
+ } else if (dev->dev_ops->xstats_get_names != NULL) {
+ /* If there are any driver-specific xstats, append them
+ * to end of list.
+ */
+ cnt_driver_entries = (*dev->dev_ops->xstats_get_names)(
+ dev,
+ xstats_names + cnt_used_entries,
+ size - cnt_used_entries);
+ if (cnt_driver_entries < 0)
+ return cnt_driver_entries;
+ cnt_used_entries += cnt_driver_entries;
+ }
+
+ return cnt_used_entries;
+ }
+ /* Get only xstats given by IDS */
+ else {
+ uint16_t len, i;
+ struct rte_eth_xstat_name *xstats_names_copy;
+
+ len = rte_eth_xstats_get_names_by_id(port_id, NULL, 0, NULL);
+
+ xstats_names_copy =
+ malloc(sizeof(struct rte_eth_xstat_name) * len);
+ if (!xstats_names_copy) {
+ RTE_PMD_DEBUG_TRACE(
+ "ERROR: can't allocate memory for values_copy\n");
+ free(xstats_names_copy);
+ return -1;
+ }
+
+ rte_eth_xstats_get_names_by_id(port_id, xstats_names_copy,
+ len, NULL);
+
+ for (i = 0; i < size; i++) {
+ if (ids[i] >= len) {
+ RTE_PMD_DEBUG_TRACE(
+ "ERROR: id value isn't valid\n");
+ return -1;
+ }
+ strcpy(xstats_names[i].name,
+ xstats_names_copy[ids[i]].name);
+ }
+ free(xstats_names_copy);
+ return size;
+ }
+}
+
+int
rte_eth_xstats_get_names(uint8_t port_id,
struct rte_eth_xstat_name *xstats_names,
unsigned int size)
@@ -1441,6 +1565,132 @@ rte_eth_xstats_get_names(uint8_t port_id,
/* retrieve ethdev extended statistics */
int
+rte_eth_xstats_get_by_id(uint8_t port_id, const uint64_t *ids, uint64_t *values,
+ unsigned int n)
+{
+ /* If need all xstats */
+ if (!ids) {
+ struct rte_eth_stats eth_stats;
+ struct rte_eth_dev *dev;
+ unsigned int count = 0, i, q;
+ signed int xcount = 0;
+ uint64_t val, *stats_ptr;
+ uint16_t nb_rxqs, nb_txqs;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
+
+ nb_rxqs = RTE_MIN(dev->data->nb_rx_queues,
+ RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ nb_txqs = RTE_MIN(dev->data->nb_tx_queues,
+ RTE_ETHDEV_QUEUE_STAT_CNTRS);
+
+ /* Return generic statistics */
+ count = RTE_NB_STATS + (nb_rxqs * RTE_NB_RXQ_STATS) +
+ (nb_txqs * RTE_NB_TXQ_STATS);
+
+
+ /* implemented by the driver */
+ if (dev->dev_ops->xstats_get_by_id != NULL) {
+ /* Retrieve the xstats from the driver at the end of the
+ * xstats struct. Retrieve all xstats.
+ */
+ xcount = (*dev->dev_ops->xstats_get_by_id)(dev,
+ NULL,
+ values ? values + count : NULL,
+ (n > count) ? n - count : 0);
+
+ if (xcount < 0)
+ return xcount;
+ /* implemented by the driver */
+ } else if (dev->dev_ops->xstats_get != NULL) {
+ /* Retrieve the xstats from the driver at the end of the
+ * xstats struct. Retrieve all xstats.
+ * Compatibility for PMD without xstats_get_by_ids
+ */
+ unsigned int size = (n > count) ? n - count : 1;
+ struct rte_eth_xstat xstats[size];
+
+ xcount = (*dev->dev_ops->xstats_get)(dev,
+ values ? xstats : NULL, size);
+
+ if (xcount < 0)
+ return xcount;
+
+ if (values != NULL)
+ for (i = 0 ; i < (unsigned int)xcount; i++)
+ values[i + count] = xstats[i].value;
+ }
+
+ if (n < count + xcount || values == NULL)
+ return count + xcount;
+
+ /* now fill the xstats structure */
+ count = 0;
+ rte_eth_stats_get(port_id, ð_stats);
+
+ /* global stats */
+ for (i = 0; i < RTE_NB_STATS; i++) {
+ stats_ptr = RTE_PTR_ADD(ð_stats,
+ rte_stats_strings[i].offset);
+ val = *stats_ptr;
+ values[count++] = val;
+ }
+
+ /* per-rxq stats */
+ for (q = 0; q < nb_rxqs; q++) {
+ for (i = 0; i < RTE_NB_RXQ_STATS; i++) {
+ stats_ptr = RTE_PTR_ADD(ð_stats,
+ rte_rxq_stats_strings[i].offset +
+ q * sizeof(uint64_t));
+ val = *stats_ptr;
+ values[count++] = val;
+ }
+ }
+
+ /* per-txq stats */
+ for (q = 0; q < nb_txqs; q++) {
+ for (i = 0; i < RTE_NB_TXQ_STATS; i++) {
+ stats_ptr = RTE_PTR_ADD(ð_stats,
+ rte_txq_stats_strings[i].offset +
+ q * sizeof(uint64_t));
+ val = *stats_ptr;
+ values[count++] = val;
+ }
+ }
+
+ return count + xcount;
+ }
+ /* Need only xstats given by IDS array */
+ else {
+ uint16_t i, size;
+ uint64_t *values_copy;
+
+ size = rte_eth_xstats_get_by_id(port_id, NULL, NULL, 0);
+
+ values_copy = malloc(sizeof(*values_copy) * size);
+ if (!values_copy) {
+ RTE_PMD_DEBUG_TRACE(
+ "ERROR: can't allocate memory for values_copy\n");
+ return -1;
+ }
+
+ rte_eth_xstats_get_by_id(port_id, NULL, values_copy, size);
+
+ for (i = 0; i < n; i++) {
+ if (ids[i] >= size) {
+ RTE_PMD_DEBUG_TRACE(
+ "ERROR: id value isn't valid\n");
+ return -1;
+ }
+ values[i] = values_copy[ids[i]];
+ }
+ free(values_copy);
+ return n;
+ }
+}
+
+int
rte_eth_xstats_get(uint8_t port_id, struct rte_eth_xstat *xstats,
unsigned int n)
{
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index a46290c..8594416 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -185,6 +185,7 @@ extern "C" {
#include "rte_ether.h"
#include "rte_eth_ctrl.h"
#include "rte_dev_info.h"
+#include "rte_compat.h"
struct rte_mbuf;
@@ -1121,6 +1122,12 @@ typedef int (*eth_xstats_get_t)(struct rte_eth_dev *dev,
struct rte_eth_xstat *stats, unsigned n);
/**< @internal Get extended stats of an Ethernet device. */
+typedef int (*eth_xstats_get_by_id_t)(struct rte_eth_dev *dev,
+ const uint64_t *ids,
+ uint64_t *values,
+ unsigned int n);
+/**< @internal Get extended stats of an Ethernet device. */
+
typedef void (*eth_xstats_reset_t)(struct rte_eth_dev *dev);
/**< @internal Reset extended stats of an Ethernet device. */
@@ -1128,6 +1135,11 @@ typedef int (*eth_xstats_get_names_t)(struct rte_eth_dev *dev,
struct rte_eth_xstat_name *xstats_names, unsigned size);
/**< @internal Get names of extended stats of an Ethernet device. */
+typedef int (*eth_xstats_get_names_by_id_t)(struct rte_eth_dev *dev,
+ struct rte_eth_xstat_name *xstats_names, const uint64_t *ids,
+ unsigned int size);
+/**< @internal Get names of extended stats of an Ethernet device. */
+
typedef int (*eth_queue_stats_mapping_set_t)(struct rte_eth_dev *dev,
uint16_t queue_id,
uint8_t stat_idx,
@@ -1566,6 +1578,11 @@ struct eth_dev_ops {
eth_timesync_adjust_time timesync_adjust_time; /** Adjust the device clock. */
eth_timesync_read_time timesync_read_time; /** Get the device clock time. */
eth_timesync_write_time timesync_write_time; /** Set the device clock time. */
+
+ eth_xstats_get_by_id_t xstats_get_by_id;
+ /**< Get extended device statistic values by ID. */
+ eth_xstats_get_names_by_id_t xstats_get_names_by_id;
+ /**< Get name of extended device statistics by ID. */
};
/**
@@ -2319,6 +2336,59 @@ int rte_eth_xstats_get(uint8_t port_id, struct rte_eth_xstat *xstats,
unsigned int n);
/**
+ * Retrieve names of extended statistics of an Ethernet device.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param xstats_names
+ * An rte_eth_xstat_name array of at least *size* elements to
+ * be filled. If set to NULL, the function returns the required number
+ * of elements.
+ * @param ids
+ * IDs array given by app to retrieve specific statistics
+ * @param size
+ * The size of the xstats_names array (number of elements).
+ * @return
+ * - A positive value lower or equal to size: success. The return value
+ * is the number of entries filled in the stats table.
+ * - A positive value higher than size: error, the given statistics table
+ * is too small. The return value corresponds to the size that should
+ * be given to succeed. The entries in the table are not valid and
+ * shall not be used by the caller.
+ * - A negative value on error (invalid port id).
+ */
+int
+rte_eth_xstats_get_names_by_id(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, unsigned int size,
+ uint64_t *ids);
+
+/**
+ * Retrieve extended statistics of an Ethernet device.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param ids
+ * A pointer to an ids array passed by the application. This tells which
+ * statistics values the function should retrieve. This parameter
+ * can be set to NULL if n is 0. In this case the function will retrieve
+ * all available statistics.
+ * @param values
+ * A pointer to a table to be filled with device statistics values.
+ * @param n
+ * The size of the ids array (number of elements).
+ * @return
+ * - A positive value lower or equal to n: success. The return value
+ * is the number of entries filled in the stats table.
+ * - A positive value higher than n: error, the given statistics table
+ * is too small. The return value corresponds to the size that should
+ * be given to succeed. The entries in the table are not valid and
+ * shall not be used by the caller.
+ * - A negative value on error (invalid port id).
+ */
+int rte_eth_xstats_get_by_id(uint8_t port_id, const uint64_t *ids,
+ uint64_t *values, unsigned int n);
+
+/**
* Reset extended statistics of an Ethernet device.
*
* @param port_id
diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map
index 238c2a1..1abd717 100644
--- a/lib/librte_ether/rte_ether_version.map
+++ b/lib/librte_ether/rte_ether_version.map
@@ -151,5 +151,7 @@ DPDK_17.05 {
rte_eth_dev_attach_secondary;
rte_eth_find_next;
+ rte_eth_xstats_get_by_id;
+ rte_eth_xstats_get_names_by_id;
} DPDK_17.02;
--
2.7.4
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v1 1/6] ethdev: revert patches extending xstats API in ethdev
@ 2017-04-27 14:42 1% ` Michal Jastrzebski
2017-04-27 14:42 2% ` [dpdk-dev] [PATCH v1 2/6] ethdev: retrieve xstats by ID Michal Jastrzebski
2017-04-27 14:42 4% ` [dpdk-dev] [PATCH v1 3/6] ethdev: get xstats ID by name Michal Jastrzebski
2 siblings, 0 replies; 200+ results
From: Michal Jastrzebski @ 2017-04-27 14:42 UTC (permalink / raw)
To: dev; +Cc: harry.van.haaren, deepak.k.jain, Kuba Kozak
From: Kuba Kozak <kubax.kozak@intel.com>
Revert patches to provide a clear view for the
upcoming changes. The reverted patches are listed below:
commit ea85e7d711b6 ("ethdev: retrieve xstats by ID")
commit a954495245c4 ("ethdev: get xstats ID by name")
commit 1223608adb9b ("app/proc-info: support xstats by ID")
commit 25e38f09af9c ("net/e1000: support xstats by ID")
commit 923419333f5a ("net/ixgbe: support xstats by ID")
Signed-off-by: Kuba Kozak <kubax.kozak@intel.com>
---
app/proc_info/main.c | 148 +----------
app/test-pmd/config.c | 19 +-
doc/guides/prog_guide/poll_mode_drv.rst | 173 ++-----------
doc/guides/rel_notes/release_17_05.rst | 9 -
drivers/net/e1000/igb_ethdev.c | 92 +------
drivers/net/ixgbe/ixgbe_ethdev.c | 179 -------------
lib/librte_ether/rte_ethdev.c | 430 ++++++++------------------------
lib/librte_ether/rte_ethdev.h | 167 +------------
lib/librte_ether/rte_ether_version.map | 5 -
9 files changed, 151 insertions(+), 1071 deletions(-)
diff --git a/app/proc_info/main.c b/app/proc_info/main.c
index 16b27b2..d576b42 100644
--- a/app/proc_info/main.c
+++ b/app/proc_info/main.c
@@ -86,14 +86,6 @@ static uint32_t reset_stats;
static uint32_t reset_xstats;
/**< Enable memory info. */
static uint32_t mem_info;
-/**< Enable displaying xstat name. */
-static uint32_t enable_xstats_name;
-static char *xstats_name;
-
-/**< Enable xstats by ids. */
-#define MAX_NB_XSTATS_IDS 1024
-static uint32_t nb_xstats_ids;
-static uint64_t xstats_ids[MAX_NB_XSTATS_IDS];
/**< display usage */
static void
@@ -107,9 +99,6 @@ proc_info_usage(const char *prgname)
"default\n"
" --metrics: to display derived metrics of the ports, disabled by "
"default\n"
- " --xstats-name NAME: displays the ID of a single xstats NAME\n"
- " --xstats-ids IDLIST: to display xstat values by id. "
- "The argument is comma-separated list of xstat ids to print out.\n"
" --stats-reset: to reset port statistics\n"
" --xstats-reset: to reset port extended statistics\n"
" --collectd-format: to print statistics to STDOUT in expected by collectd format\n"
@@ -143,33 +132,6 @@ parse_portmask(const char *portmask)
}
-/*
- * Parse ids value list into array
- */
-static int
-parse_xstats_ids(char *list, uint64_t *ids, int limit) {
- int length;
- char *token;
- char *ctx = NULL;
- char *endptr;
-
- length = 0;
- token = strtok_r(list, ",", &ctx);
- while (token != NULL) {
- ids[length] = strtoull(token, &endptr, 10);
- if (*endptr != '\0')
- return -EINVAL;
-
- length++;
- if (length >= limit)
- return -E2BIG;
-
- token = strtok_r(NULL, ",", &ctx);
- }
-
- return length;
-}
-
static int
proc_info_preparse_args(int argc, char **argv)
{
@@ -216,9 +178,7 @@ proc_info_parse_args(int argc, char **argv)
{"xstats", 0, NULL, 0},
{"metrics", 0, NULL, 0},
{"xstats-reset", 0, NULL, 0},
- {"xstats-name", required_argument, NULL, 1},
{"collectd-format", 0, NULL, 0},
- {"xstats-ids", 1, NULL, 1},
{"host-id", 0, NULL, 0},
{NULL, 0, 0, 0}
};
@@ -264,28 +224,7 @@ proc_info_parse_args(int argc, char **argv)
MAX_LONG_OPT_SZ))
reset_xstats = 1;
break;
- case 1:
- /* Print xstat single value given by name*/
- if (!strncmp(long_option[option_index].name,
- "xstats-name", MAX_LONG_OPT_SZ)) {
- enable_xstats_name = 1;
- xstats_name = optarg;
- printf("name:%s:%s\n",
- long_option[option_index].name,
- optarg);
- } else if (!strncmp(long_option[option_index].name,
- "xstats-ids",
- MAX_LONG_OPT_SZ)) {
- nb_xstats_ids = parse_xstats_ids(optarg,
- xstats_ids, MAX_NB_XSTATS_IDS);
-
- if (nb_xstats_ids <= 0) {
- printf("xstats-id list parse error.\n");
- return -1;
- }
- }
- break;
default:
proc_info_usage(prgname);
return -1;
@@ -412,82 +351,20 @@ static void collectd_resolve_cnt_type(char *cnt_type, size_t cnt_type_len,
}
static void
-nic_xstats_by_name_display(uint8_t port_id, char *name)
-{
- uint64_t id;
-
- printf("###### NIC statistics for port %-2d, statistic name '%s':\n",
- port_id, name);
-
- if (rte_eth_xstats_get_id_by_name(port_id, name, &id) == 0)
- printf("%s: %"PRIu64"\n", name, id);
- else
- printf("Statistic not found...\n");
-
-}
-
-static void
-nic_xstats_by_ids_display(uint8_t port_id, uint64_t *ids, int len)
-{
- struct rte_eth_xstat_name *xstats_names;
- uint64_t *values;
- int ret, i;
- static const char *nic_stats_border = "########################";
-
- values = malloc(sizeof(values) * len);
- if (values == NULL) {
- printf("Cannot allocate memory for xstats\n");
- return;
- }
-
- xstats_names = malloc(sizeof(struct rte_eth_xstat_name) * len);
- if (xstats_names == NULL) {
- printf("Cannot allocate memory for xstat names\n");
- free(values);
- return;
- }
-
- if (len != rte_eth_xstats_get_names(
- port_id, xstats_names, len, ids)) {
- printf("Cannot get xstat names\n");
- goto err;
- }
-
- printf("###### NIC extended statistics for port %-2d #########\n",
- port_id);
- printf("%s############################\n", nic_stats_border);
- ret = rte_eth_xstats_get(port_id, ids, values, len);
- if (ret < 0 || ret > len) {
- printf("Cannot get xstats\n");
- goto err;
- }
-
- for (i = 0; i < len; i++)
- printf("%s: %"PRIu64"\n",
- xstats_names[i].name,
- values[i]);
-
- printf("%s############################\n", nic_stats_border);
-err:
- free(values);
- free(xstats_names);
-}
-
-static void
nic_xstats_display(uint8_t port_id)
{
struct rte_eth_xstat_name *xstats_names;
- uint64_t *values;
+ struct rte_eth_xstat *xstats;
int len, ret, i;
static const char *nic_stats_border = "########################";
- len = rte_eth_xstats_get_names(port_id, NULL, 0, NULL);
+ len = rte_eth_xstats_get_names(port_id, NULL, 0);
if (len < 0) {
printf("Cannot get xstats count\n");
return;
}
- values = malloc(sizeof(values) * len);
- if (values == NULL) {
+ xstats = malloc(sizeof(xstats[0]) * len);
+ if (xstats == NULL) {
printf("Cannot allocate memory for xstats\n");
return;
}
@@ -495,11 +372,11 @@ nic_xstats_display(uint8_t port_id)
xstats_names = malloc(sizeof(struct rte_eth_xstat_name) * len);
if (xstats_names == NULL) {
printf("Cannot allocate memory for xstat names\n");
- free(values);
+ free(xstats);
return;
}
if (len != rte_eth_xstats_get_names(
- port_id, xstats_names, len, NULL)) {
+ port_id, xstats_names, len)) {
printf("Cannot get xstat names\n");
goto err;
}
@@ -508,7 +385,7 @@ nic_xstats_display(uint8_t port_id)
port_id);
printf("%s############################\n",
nic_stats_border);
- ret = rte_eth_xstats_get(port_id, NULL, values, len);
+ ret = rte_eth_xstats_get(port_id, xstats, len);
if (ret < 0 || ret > len) {
printf("Cannot get xstats\n");
goto err;
@@ -524,18 +401,18 @@ nic_xstats_display(uint8_t port_id)
xstats_names[i].name);
sprintf(buf, "PUTVAL %s/dpdkstat-port.%u/%s-%s N:%"
PRIu64"\n", host_id, port_id, counter_type,
- xstats_names[i].name, values[i]);
+ xstats_names[i].name, xstats[i].value);
write(stdout_fd, buf, strlen(buf));
} else {
printf("%s: %"PRIu64"\n", xstats_names[i].name,
- values[i]);
+ xstats[i].value);
}
}
printf("%s############################\n",
nic_stats_border);
err:
- free(values);
+ free(xstats);
free(xstats_names);
}
@@ -674,11 +551,6 @@ main(int argc, char **argv)
nic_stats_clear(i);
else if (reset_xstats)
nic_xstats_clear(i);
- else if (enable_xstats_name)
- nic_xstats_by_name_display(i, xstats_name);
- else if (nb_xstats_ids > 0)
- nic_xstats_by_ids_display(i, xstats_ids,
- nb_xstats_ids);
else if (enable_metrics)
metrics_display(i);
}
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index ef07925..4d873cd 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -264,9 +264,9 @@ nic_stats_clear(portid_t port_id)
void
nic_xstats_display(portid_t port_id)
{
+ struct rte_eth_xstat *xstats;
int cnt_xstats, idx_xstat;
struct rte_eth_xstat_name *xstats_names;
- uint64_t *values;
printf("###### NIC extended statistics for port %-2d\n", port_id);
if (!rte_eth_dev_is_valid_port(port_id)) {
@@ -275,7 +275,7 @@ nic_xstats_display(portid_t port_id)
}
/* Get count */
- cnt_xstats = rte_eth_xstats_get_names(port_id, NULL, 0, NULL);
+ cnt_xstats = rte_eth_xstats_get_names(port_id, NULL, 0);
if (cnt_xstats < 0) {
printf("Error: Cannot get count of xstats\n");
return;
@@ -288,24 +288,23 @@ nic_xstats_display(portid_t port_id)
return;
}
if (cnt_xstats != rte_eth_xstats_get_names(
- port_id, xstats_names, cnt_xstats, NULL)) {
+ port_id, xstats_names, cnt_xstats)) {
printf("Error: Cannot get xstats lookup\n");
free(xstats_names);
return;
}
/* Get stats themselves */
- values = malloc(sizeof(values) * cnt_xstats);
- if (values == NULL) {
+ xstats = malloc(sizeof(struct rte_eth_xstat) * cnt_xstats);
+ if (xstats == NULL) {
printf("Cannot allocate memory for xstats\n");
free(xstats_names);
return;
}
- if (cnt_xstats != rte_eth_xstats_get(port_id, NULL, values,
- cnt_xstats)) {
+ if (cnt_xstats != rte_eth_xstats_get(port_id, xstats, cnt_xstats)) {
printf("Error: Unable to get xstats\n");
free(xstats_names);
- free(values);
+ free(xstats);
return;
}
@@ -313,9 +312,9 @@ nic_xstats_display(portid_t port_id)
for (idx_xstat = 0; idx_xstat < cnt_xstats; idx_xstat++)
printf("%s: %"PRIu64"\n",
xstats_names[idx_xstat].name,
- values[idx_xstat]);
+ xstats[idx_xstat].value);
free(xstats_names);
- free(values);
+ free(xstats);
}
void
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index a1a758b..e48c121 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -334,21 +334,24 @@ The Ethernet device API exported by the Ethernet PMDs is described in the *DPDK
Extended Statistics API
~~~~~~~~~~~~~~~~~~~~~~~
-The extended statistics API allows a PMD to expose all statistics that are
-available to it, including statistics that are unique to the device.
-Each statistic has three properties ``name``, ``id`` and ``value``:
-
-* ``name``: A human readable string formatted by the scheme detailed below.
-* ``id``: An integer that represents only that statistic.
-* ``value``: A unsigned 64-bit integer that is the value of the statistic.
-
-Note that extended statistic identifiers are
+The extended statistics API allows each individual PMD to expose a unique set
+of statistics. Accessing these from application programs is done via two
+functions:
+
+* ``rte_eth_xstats_get``: Fills in an array of ``struct rte_eth_xstat``
+ with extended statistics.
+* ``rte_eth_xstats_get_names``: Fills in an array of
+ ``struct rte_eth_xstat_name`` with extended statistic name lookup
+ information.
+
+Each ``struct rte_eth_xstat`` contains an identifier and value pair, and
+each ``struct rte_eth_xstat_name`` contains a string. Each identifier
+within the ``struct rte_eth_xstat`` lookup array must have a corresponding
+entry in the ``struct rte_eth_xstat_name`` lookup array. Within the latter
+the index of the entry is the identifier the string is associated with.
+These identifiers, as well as the number of extended statistic exposed, must
+remain constant during runtime. Note that extended statistic identifiers are
driver-specific, and hence might not be the same for different ports.
-The API consists of various ``rte_eth_xstats_*()`` functions, and allows an
-application to be flexible in how it retrieves statistics.
-
-Scheme for Human Readable Names
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A naming scheme exists for the strings exposed to clients of the API. This is
to allow scraping of the API for statistics of interest. The naming scheme uses
@@ -360,8 +363,8 @@ strings split by a single underscore ``_``. The scheme is as follows:
* detail n
* unit
-Examples of common statistics xstats strings, formatted to comply to the
-above scheme:
+Examples of common statistics xstats strings, formatted to comply to the scheme
+proposed above:
* ``rx_bytes``
* ``rx_crc_errors``
@@ -375,7 +378,7 @@ associated with the receive side of the NIC. The second component ``packets``
indicates that the unit of measure is packets.
A more complicated example: ``tx_size_128_to_255_packets``. In this example,
-``tx`` indicates transmission, ``size`` is the first detail, ``128`` etc. are
+``tx`` indicates transmission, ``size`` is the first detail, ``128`` etc are
more details, and ``packets`` indicates that this is a packet counter.
Some additions in the metadata scheme are as follows:
@@ -389,139 +392,3 @@ Some additions in the metadata scheme are as follows:
An example where queue numbers are used is as follows: ``tx_q7_bytes`` which
indicates this statistic applies to queue number 7, and represents the number
of transmitted bytes on that queue.
-
-API Design
-^^^^^^^^^^
-
-The xstats API uses the ``name``, ``id``, and ``value`` to allow performant
-lookup of specific statistics. Performant lookup means two things;
-
-* No string comparisons with the ``name`` of the statistic in fast-path
-* Allow requesting of only the statistics of interest
-
-The API ensures these requirements are met by mapping the ``name`` of the
-statistic to a unique ``id``, which is used as a key for lookup in the fast-path.
-The API allows applications to request an array of ``id`` values, so that the
-PMD only performs the required calculations. Expected usage is that the
-application scans the ``name`` of each statistic, and caches the ``id``
-if it has an interest in that statistic. On the fast-path, the integer can be used
-to retrieve the actual ``value`` of the statistic that the ``id`` represents.
-
-API Functions
-^^^^^^^^^^^^^
-
-The API is built out of a small number of functions, which can be used to
-retrieve the number of statistics and the names, IDs and values of those
-statistics.
-
-* ``rte_eth_xstats_get_names()``: returns the names of the statistics. When given a
- ``NULL`` parameter the function returns the number of statistics that are available.
-
-* ``rte_eth_xstats_get_id_by_name()``: Searches for the statistic ID that matches
- ``xstat_name``. If found, the ``id`` integer is set.
-
-* ``rte_eth_xstats_get()``: Fills in an array of ``uint64_t`` values
- with matching the provided ``ids`` array. If the ``ids`` array is NULL, it
- returns all statistics that are available.
-
-
-Application Usage
-^^^^^^^^^^^^^^^^^
-
-Imagine an application that wants to view the dropped packet count. If no
-packets are dropped, the application does not read any other metrics for
-performance reasons. If packets are dropped, the application has a particular
-set of statistics that it requests. This "set" of statistics allows the app to
-decide what next steps to perform. The following code-snippets show how the
-xstats API can be used to achieve this goal.
-
-First step is to get all statistics names and list them:
-
-.. code-block:: c
-
- struct rte_eth_xstat_name *xstats_names;
- uint64_t *values;
- int len, i;
-
- /* Get number of stats */
- len = rte_eth_xstats_get_names(port_id, NULL, NULL, 0);
- if (len < 0) {
- printf("Cannot get xstats count\n");
- goto err;
- }
-
- xstats_names = malloc(sizeof(struct rte_eth_xstat_name) * len);
- if (xstats_names == NULL) {
- printf("Cannot allocate memory for xstat names\n");
- goto err;
- }
-
- /* Retrieve xstats names, passing NULL for IDs to return all statistics */
- if (len != rte_eth_xstats_get_names(port_id, xstats_names, NULL, len)) {
- printf("Cannot get xstat names\n");
- goto err;
- }
-
- values = malloc(sizeof(values) * len);
- if (values == NULL) {
- printf("Cannot allocate memory for xstats\n");
- goto err;
- }
-
- /* Getting xstats values */
- if (len != rte_eth_xstats_get(port_id, NULL, values, len)) {
- printf("Cannot get xstat values\n");
- goto err;
- }
-
- /* Print all xstats names and values */
- for (i = 0; i < len; i++) {
- printf("%s: %"PRIu64"\n", xstats_names[i].name, values[i]);
- }
-
-The application has access to the names of all of the statistics that the PMD
-exposes. The application can decide which statistics are of interest, cache the
-ids of those statistics by looking up the name as follows:
-
-.. code-block:: c
-
- uint64_t id;
- uint64_t value;
- const char *xstat_name = "rx_errors";
-
- if(!rte_eth_xstats_get_id_by_name(port_id, xstat_name, &id)) {
- rte_eth_xstats_get(port_id, &id, &value, 1);
- printf("%s: %"PRIu64"\n", xstat_name, value);
- }
- else {
- printf("Cannot find xstats with a given name\n");
- goto err;
- }
-
-The API provides flexibility to the application so that it can look up multiple
-statistics using an array containing multiple ``id`` numbers. This reduces the
-function call overhead of retrieving statistics, and makes lookup of multiple
-statistics simpler for the application.
-
-.. code-block:: c
-
- #define APP_NUM_STATS 4
- /* application cached these ids previously; see above */
- uint64_t ids_array[APP_NUM_STATS] = {3,4,7,21};
- uint64_t value_array[APP_NUM_STATS];
-
- /* Getting multiple xstats values from array of IDs */
- rte_eth_xstats_get(port_id, ids_array, value_array, APP_NUM_STATS);
-
- uint32_t i;
- for(i = 0; i < APP_NUM_STATS; i++) {
- printf("%d: %"PRIu64"\n", ids_array[i], value_array[i]);
- }
-
-
-This array lookup API for xstats allows the application create multiple
-"groups" of statistics, and look up the values of those IDs using a single API
-call. As an end result, the application is able to achieve its goal of
-monitoring a single statistic ("rx_errors" in this case), and if that shows
-packets being dropped, it can easily retrieve a "set" of statistics using the
-IDs array parameter to ``rte_eth_xstats_get`` function.
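(For context, a minimal sketch -- not part of this patch, and with a
hypothetical helper name -- of how an application might interpret the
per-queue naming scheme kept in the documentation above, e.g. pulling the
queue number out of "tx_q7_bytes":)

#include <ctype.h>
#include <stdlib.h>
#include <string.h>

/* Return the queue number encoded in an xstats name such as
 * "tx_q7_bytes" (the "<direction>_q<N><detail>" scheme above),
 * or -1 for non-per-queue statistics such as "rx_bytes".
 */
static int
xstats_name_to_queue(const char *name)
{
	const char *p = strchr(name, '_');

	if (p == NULL || p[1] != 'q' || !isdigit((unsigned char)p[2]))
		return -1;
	return atoi(&p[2]);
}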
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index ad20e86..dcd55ff 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -450,15 +450,6 @@ API Changes
* The vhost public header file ``rte_virtio_net.h`` is renamed to
``rte_vhost.h``
-* **Reworked rte_ethdev library**
-
- * Changed set of input parameters for ``rte_eth_xstats_get`` and ``rte_eth_xstats_get_names`` functions.
-
- * Added new functions ``rte_eth_xstats_get_all`` and ``rte_eth_xstats_get_names_all to provide backward compatibility for
- ``rte_eth_xstats_get`` and ``rte_eth_xstats_get_names``
-
- * Added new function ``rte_eth_xstats_get_id_by_name``
-
ABI Changes
-----------
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index ca9f98c..137780e 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -116,13 +116,9 @@ static void eth_igb_stats_get(struct rte_eth_dev *dev,
struct rte_eth_stats *rte_stats);
static int eth_igb_xstats_get(struct rte_eth_dev *dev,
struct rte_eth_xstat *xstats, unsigned n);
-static int eth_igb_xstats_get_by_ids(struct rte_eth_dev *dev, uint64_t *ids,
- uint64_t *values, unsigned int n);
static int eth_igb_xstats_get_names(struct rte_eth_dev *dev,
- struct rte_eth_xstat_name *xstats_names, unsigned int limit);
-static int eth_igb_xstats_get_names_by_ids(struct rte_eth_dev *dev,
- struct rte_eth_xstat_name *xstats_names, uint64_t *ids,
- unsigned int limit);
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int limit);
static void eth_igb_stats_reset(struct rte_eth_dev *dev);
static void eth_igb_xstats_reset(struct rte_eth_dev *dev);
static int eth_igb_fw_version_get(struct rte_eth_dev *dev,
@@ -393,8 +389,6 @@ static const struct eth_dev_ops eth_igb_ops = {
.link_update = eth_igb_link_update,
.stats_get = eth_igb_stats_get,
.xstats_get = eth_igb_xstats_get,
- .xstats_get_by_ids = eth_igb_xstats_get_by_ids,
- .xstats_get_names_by_ids = eth_igb_xstats_get_names_by_ids,
.xstats_get_names = eth_igb_xstats_get_names,
.stats_reset = eth_igb_stats_reset,
.xstats_reset = eth_igb_xstats_reset,
@@ -1868,41 +1862,6 @@ static int eth_igb_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
return IGB_NB_XSTATS;
}
-static int eth_igb_xstats_get_names_by_ids(struct rte_eth_dev *dev,
- struct rte_eth_xstat_name *xstats_names, uint64_t *ids,
- unsigned int limit)
-{
- unsigned int i;
-
- if (!ids) {
- if (xstats_names == NULL)
- return IGB_NB_XSTATS;
-
- for (i = 0; i < IGB_NB_XSTATS; i++)
- snprintf(xstats_names[i].name,
- sizeof(xstats_names[i].name),
- "%s", rte_igb_stats_strings[i].name);
-
- return IGB_NB_XSTATS;
-
- } else {
- struct rte_eth_xstat_name xstats_names_copy[IGB_NB_XSTATS];
-
- eth_igb_xstats_get_names_by_ids(dev, xstats_names_copy, NULL,
- IGB_NB_XSTATS);
-
- for (i = 0; i < limit; i++) {
- if (ids[i] >= IGB_NB_XSTATS) {
- PMD_INIT_LOG(ERR, "id value isn't valid");
- return -1;
- }
- strcpy(xstats_names[i].name,
- xstats_names_copy[ids[i]].name);
- }
- return limit;
- }
-}
-
static int
eth_igb_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
unsigned n)
@@ -1933,53 +1892,6 @@ eth_igb_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
return IGB_NB_XSTATS;
}
-static int
-eth_igb_xstats_get_by_ids(struct rte_eth_dev *dev, uint64_t *ids,
- uint64_t *values, unsigned int n)
-{
- unsigned int i;
-
- if (!ids) {
- struct e1000_hw *hw =
- E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct e1000_hw_stats *hw_stats =
- E1000_DEV_PRIVATE_TO_STATS(dev->data->dev_private);
-
- if (n < IGB_NB_XSTATS)
- return IGB_NB_XSTATS;
-
- igb_read_stats_registers(hw, hw_stats);
-
- /* If this is a reset xstats is NULL, and we have cleared the
- * registers by reading them.
- */
- if (!values)
- return 0;
-
- /* Extended stats */
- for (i = 0; i < IGB_NB_XSTATS; i++)
- values[i] = *(uint64_t *)(((char *)hw_stats) +
- rte_igb_stats_strings[i].offset);
-
- return IGB_NB_XSTATS;
-
- } else {
- uint64_t values_copy[IGB_NB_XSTATS];
-
- eth_igb_xstats_get_by_ids(dev, NULL, values_copy,
- IGB_NB_XSTATS);
-
- for (i = 0; i < n; i++) {
- if (ids[i] >= IGB_NB_XSTATS) {
- PMD_INIT_LOG(ERR, "id value isn't valid");
- return -1;
- }
- values[i] = values_copy[ids[i]];
- }
- return n;
- }
-}
-
static void
igbvf_read_stats_registers(struct e1000_hw *hw, struct e1000_vf_stats *hw_stats)
{
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 7b856bb..c226e0a 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -182,20 +182,12 @@ static int ixgbe_dev_xstats_get(struct rte_eth_dev *dev,
struct rte_eth_xstat *xstats, unsigned n);
static int ixgbevf_dev_xstats_get(struct rte_eth_dev *dev,
struct rte_eth_xstat *xstats, unsigned n);
-static int
-ixgbe_dev_xstats_get_by_ids(struct rte_eth_dev *dev, uint64_t *ids,
- uint64_t *values, unsigned int n);
static void ixgbe_dev_stats_reset(struct rte_eth_dev *dev);
static void ixgbe_dev_xstats_reset(struct rte_eth_dev *dev);
static int ixgbe_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
struct rte_eth_xstat_name *xstats_names, __rte_unused unsigned limit);
static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
struct rte_eth_xstat_name *xstats_names, __rte_unused unsigned limit);
-static int ixgbe_dev_xstats_get_names_by_ids(
- __rte_unused struct rte_eth_dev *dev,
- struct rte_eth_xstat_name *xstats_names,
- uint64_t *ids,
- unsigned int limit);
static int ixgbe_dev_queue_stats_mapping_set(struct rte_eth_dev *eth_dev,
uint16_t queue_id,
uint8_t stat_idx,
@@ -531,11 +523,9 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
.link_update = ixgbe_dev_link_update,
.stats_get = ixgbe_dev_stats_get,
.xstats_get = ixgbe_dev_xstats_get,
- .xstats_get_by_ids = ixgbe_dev_xstats_get_by_ids,
.stats_reset = ixgbe_dev_stats_reset,
.xstats_reset = ixgbe_dev_xstats_reset,
.xstats_get_names = ixgbe_dev_xstats_get_names,
- .xstats_get_names_by_ids = ixgbe_dev_xstats_get_names_by_ids,
.queue_stats_mapping_set = ixgbe_dev_queue_stats_mapping_set,
.fw_version_get = ixgbe_fw_version_get,
.dev_infos_get = ixgbe_dev_info_get,
@@ -3192,84 +3182,6 @@ static int ixgbe_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
return cnt_stats;
}
-static int ixgbe_dev_xstats_get_names_by_ids(
- __rte_unused struct rte_eth_dev *dev,
- struct rte_eth_xstat_name *xstats_names,
- uint64_t *ids,
- unsigned int limit)
-{
- if (!ids) {
- const unsigned int cnt_stats = ixgbe_xstats_calc_num();
- unsigned int stat, i, count;
-
- if (xstats_names != NULL) {
- count = 0;
-
- /* Note: limit >= cnt_stats checked upstream
- * in rte_eth_xstats_names()
- */
-
- /* Extended stats from ixgbe_hw_stats */
- for (i = 0; i < IXGBE_NB_HW_STATS; i++) {
- snprintf(xstats_names[count].name,
- sizeof(xstats_names[count].name),
- "%s",
- rte_ixgbe_stats_strings[i].name);
- count++;
- }
-
- /* MACsec Stats */
- for (i = 0; i < IXGBE_NB_MACSEC_STATS; i++) {
- snprintf(xstats_names[count].name,
- sizeof(xstats_names[count].name),
- "%s",
- rte_ixgbe_macsec_strings[i].name);
- count++;
- }
-
- /* RX Priority Stats */
- for (stat = 0; stat < IXGBE_NB_RXQ_PRIO_STATS; stat++) {
- for (i = 0; i < IXGBE_NB_RXQ_PRIO_VALUES; i++) {
- snprintf(xstats_names[count].name,
- sizeof(xstats_names[count].name),
- "rx_priority%u_%s", i,
- rte_ixgbe_rxq_strings[stat].name);
- count++;
- }
- }
-
- /* TX Priority Stats */
- for (stat = 0; stat < IXGBE_NB_TXQ_PRIO_STATS; stat++) {
- for (i = 0; i < IXGBE_NB_TXQ_PRIO_VALUES; i++) {
- snprintf(xstats_names[count].name,
- sizeof(xstats_names[count].name),
- "tx_priority%u_%s", i,
- rte_ixgbe_txq_strings[stat].name);
- count++;
- }
- }
- }
- return cnt_stats;
- }
-
- uint16_t i;
- uint16_t size = ixgbe_xstats_calc_num();
- struct rte_eth_xstat_name xstats_names_copy[size];
-
- ixgbe_dev_xstats_get_names_by_ids(dev, xstats_names_copy, NULL,
- size);
-
- for (i = 0; i < limit; i++) {
- if (ids[i] >= size) {
- PMD_INIT_LOG(ERR, "id value isn't valid");
- return -1;
- }
- strcpy(xstats_names[i].name,
- xstats_names_copy[ids[i]].name);
- }
- return limit;
-}
-
static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
struct rte_eth_xstat_name *xstats_names, unsigned limit)
{
@@ -3360,97 +3272,6 @@ ixgbe_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
return count;
}
-static int
-ixgbe_dev_xstats_get_by_ids(struct rte_eth_dev *dev, uint64_t *ids,
- uint64_t *values, unsigned int n)
-{
- if (!ids) {
- struct ixgbe_hw *hw =
- IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct ixgbe_hw_stats *hw_stats =
- IXGBE_DEV_PRIVATE_TO_STATS(
- dev->data->dev_private);
- struct ixgbe_macsec_stats *macsec_stats =
- IXGBE_DEV_PRIVATE_TO_MACSEC_STATS(
- dev->data->dev_private);
- uint64_t total_missed_rx, total_qbrc, total_qprc, total_qprdc;
- unsigned int i, stat, count = 0;
-
- count = ixgbe_xstats_calc_num();
-
- if (!ids && n < count)
- return count;
-
- total_missed_rx = 0;
- total_qbrc = 0;
- total_qprc = 0;
- total_qprdc = 0;
-
- ixgbe_read_stats_registers(hw, hw_stats, macsec_stats,
- &total_missed_rx, &total_qbrc, &total_qprc,
- &total_qprdc);
-
- /* If this is a reset xstats is NULL, and we have cleared the
- * registers by reading them.
- */
- if (!ids && !values)
- return 0;
-
- /* Extended stats from ixgbe_hw_stats */
- count = 0;
- for (i = 0; i < IXGBE_NB_HW_STATS; i++) {
- values[count] = *(uint64_t *)(((char *)hw_stats) +
- rte_ixgbe_stats_strings[i].offset);
- count++;
- }
-
- /* MACsec Stats */
- for (i = 0; i < IXGBE_NB_MACSEC_STATS; i++) {
- values[count] = *(uint64_t *)(((char *)macsec_stats) +
- rte_ixgbe_macsec_strings[i].offset);
- count++;
- }
-
- /* RX Priority Stats */
- for (stat = 0; stat < IXGBE_NB_RXQ_PRIO_STATS; stat++) {
- for (i = 0; i < IXGBE_NB_RXQ_PRIO_VALUES; i++) {
- values[count] =
- *(uint64_t *)(((char *)hw_stats) +
- rte_ixgbe_rxq_strings[stat].offset +
- (sizeof(uint64_t) * i));
- count++;
- }
- }
-
- /* TX Priority Stats */
- for (stat = 0; stat < IXGBE_NB_TXQ_PRIO_STATS; stat++) {
- for (i = 0; i < IXGBE_NB_TXQ_PRIO_VALUES; i++) {
- values[count] =
- *(uint64_t *)(((char *)hw_stats) +
- rte_ixgbe_txq_strings[stat].offset +
- (sizeof(uint64_t) * i));
- count++;
- }
- }
- return count;
- }
-
- uint16_t i;
- uint16_t size = ixgbe_xstats_calc_num();
- uint64_t values_copy[size];
-
- ixgbe_dev_xstats_get_by_ids(dev, NULL, values_copy, size);
-
- for (i = 0; i < n; i++) {
- if (ids[i] >= size) {
- PMD_INIT_LOG(ERR, "id value isn't valid");
- return -1;
- }
- values[i] = values_copy[ids[i]];
- }
- return n;
-}
-
static void
ixgbe_dev_xstats_reset(struct rte_eth_dev *dev)
{
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 89f6514..9922430 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1360,19 +1360,12 @@ get_xstats_count(uint8_t port_id)
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
dev = &rte_eth_devices[port_id];
- if (dev->dev_ops->xstats_get_names_by_ids != NULL) {
- count = (*dev->dev_ops->xstats_get_names_by_ids)(dev, NULL,
- NULL, 0);
- if (count < 0)
- return count;
- }
if (dev->dev_ops->xstats_get_names != NULL) {
count = (*dev->dev_ops->xstats_get_names)(dev, NULL, 0);
if (count < 0)
return count;
} else
count = 0;
-
count += RTE_NB_STATS;
count += RTE_MIN(dev->data->nb_rx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS) *
RTE_NB_RXQ_STATS;
@@ -1382,367 +1375,150 @@ get_xstats_count(uint8_t port_id)
}
int
-rte_eth_xstats_get_id_by_name(uint8_t port_id, const char *xstat_name,
- uint64_t *id)
-{
- int cnt_xstats, idx_xstat;
-
- RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
-
- if (!id) {
- RTE_PMD_DEBUG_TRACE("Error: id pointer is NULL\n");
- return -1;
- }
-
- if (!xstat_name) {
- RTE_PMD_DEBUG_TRACE("Error: xstat_name pointer is NULL\n");
- return -1;
- }
-
- /* Get count */
- cnt_xstats = rte_eth_xstats_get_names(port_id, NULL, 0, NULL);
- if (cnt_xstats < 0) {
- RTE_PMD_DEBUG_TRACE("Error: Cannot get count of xstats\n");
- return -1;
- }
-
- /* Get id-name lookup table */
- struct rte_eth_xstat_name xstats_names[cnt_xstats];
-
- if (cnt_xstats != rte_eth_xstats_get_names(
- port_id, xstats_names, cnt_xstats, NULL)) {
- RTE_PMD_DEBUG_TRACE("Error: Cannot get xstats lookup\n");
- return -1;
- }
-
- for (idx_xstat = 0; idx_xstat < cnt_xstats; idx_xstat++) {
- if (!strcmp(xstats_names[idx_xstat].name, xstat_name)) {
- *id = idx_xstat;
- return 0;
- };
- }
-
- return -EINVAL;
-}
-
-int
-rte_eth_xstats_get_names_v1607(uint8_t port_id,
+rte_eth_xstats_get_names(uint8_t port_id,
struct rte_eth_xstat_name *xstats_names,
unsigned int size)
{
- return rte_eth_xstats_get_names(port_id, xstats_names, size, NULL);
-}
-VERSION_SYMBOL(rte_eth_xstats_get_names, _v1607, 16.07);
-
-int
-rte_eth_xstats_get_names_v1705(uint8_t port_id,
- struct rte_eth_xstat_name *xstats_names, unsigned int size,
- uint64_t *ids)
-{
- /* Get all xstats */
- if (!ids) {
- struct rte_eth_dev *dev;
- int cnt_used_entries;
- int cnt_expected_entries;
- int cnt_driver_entries;
- uint32_t idx, id_queue;
- uint16_t num_q;
+ struct rte_eth_dev *dev;
+ int cnt_used_entries;
+ int cnt_expected_entries;
+ int cnt_driver_entries;
+ uint32_t idx, id_queue;
+ uint16_t num_q;
- cnt_expected_entries = get_xstats_count(port_id);
- if (xstats_names == NULL || cnt_expected_entries < 0 ||
- (int)size < cnt_expected_entries)
- return cnt_expected_entries;
+ cnt_expected_entries = get_xstats_count(port_id);
+ if (xstats_names == NULL || cnt_expected_entries < 0 ||
+ (int)size < cnt_expected_entries)
+ return cnt_expected_entries;
- /* port_id checked in get_xstats_count() */
- dev = &rte_eth_devices[port_id];
- cnt_used_entries = 0;
+ /* port_id checked in get_xstats_count() */
+ dev = &rte_eth_devices[port_id];
+ cnt_used_entries = 0;
- for (idx = 0; idx < RTE_NB_STATS; idx++) {
+ for (idx = 0; idx < RTE_NB_STATS; idx++) {
+ snprintf(xstats_names[cnt_used_entries].name,
+ sizeof(xstats_names[0].name),
+ "%s", rte_stats_strings[idx].name);
+ cnt_used_entries++;
+ }
+ num_q = RTE_MIN(dev->data->nb_rx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ for (id_queue = 0; id_queue < num_q; id_queue++) {
+ for (idx = 0; idx < RTE_NB_RXQ_STATS; idx++) {
snprintf(xstats_names[cnt_used_entries].name,
sizeof(xstats_names[0].name),
- "%s", rte_stats_strings[idx].name);
+ "rx_q%u%s",
+ id_queue, rte_rxq_stats_strings[idx].name);
cnt_used_entries++;
}
- num_q = RTE_MIN(dev->data->nb_rx_queues,
- RTE_ETHDEV_QUEUE_STAT_CNTRS);
- for (id_queue = 0; id_queue < num_q; id_queue++) {
- for (idx = 0; idx < RTE_NB_RXQ_STATS; idx++) {
- snprintf(xstats_names[cnt_used_entries].name,
- sizeof(xstats_names[0].name),
- "rx_q%u%s",
- id_queue,
- rte_rxq_stats_strings[idx].name);
- cnt_used_entries++;
- }
- }
- num_q = RTE_MIN(dev->data->nb_tx_queues,
- RTE_ETHDEV_QUEUE_STAT_CNTRS);
- for (id_queue = 0; id_queue < num_q; id_queue++) {
- for (idx = 0; idx < RTE_NB_TXQ_STATS; idx++) {
- snprintf(xstats_names[cnt_used_entries].name,
- sizeof(xstats_names[0].name),
- "tx_q%u%s",
- id_queue,
- rte_txq_stats_strings[idx].name);
- cnt_used_entries++;
- }
- }
-
- if (dev->dev_ops->xstats_get_names_by_ids != NULL) {
- /* If there are any driver-specific xstats, append them
- * to end of list.
- */
- cnt_driver_entries =
- (*dev->dev_ops->xstats_get_names_by_ids)(
- dev,
- xstats_names + cnt_used_entries,
- NULL,
- size - cnt_used_entries);
- if (cnt_driver_entries < 0)
- return cnt_driver_entries;
- cnt_used_entries += cnt_driver_entries;
-
- } else if (dev->dev_ops->xstats_get_names != NULL) {
- /* If there are any driver-specific xstats, append them
- * to end of list.
- */
- cnt_driver_entries = (*dev->dev_ops->xstats_get_names)(
- dev,
- xstats_names + cnt_used_entries,
- size - cnt_used_entries);
- if (cnt_driver_entries < 0)
- return cnt_driver_entries;
- cnt_used_entries += cnt_driver_entries;
- }
-
- return cnt_used_entries;
}
- /* Get only xstats given by IDS */
- else {
- uint16_t len, i;
- struct rte_eth_xstat_name *xstats_names_copy;
-
- len = rte_eth_xstats_get_names_v1705(port_id, NULL, 0, NULL);
-
- xstats_names_copy =
- malloc(sizeof(struct rte_eth_xstat_name) * len);
- if (!xstats_names_copy) {
- RTE_PMD_DEBUG_TRACE(
- "ERROR: can't allocate memory for values_copy\n");
- free(xstats_names_copy);
- return -1;
- }
-
- rte_eth_xstats_get_names_v1705(port_id, xstats_names_copy,
- len, NULL);
-
- for (i = 0; i < size; i++) {
- if (ids[i] >= len) {
- RTE_PMD_DEBUG_TRACE(
- "ERROR: id value isn't valid\n");
- return -1;
- }
- strcpy(xstats_names[i].name,
- xstats_names_copy[ids[i]].name);
+ num_q = RTE_MIN(dev->data->nb_tx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ for (id_queue = 0; id_queue < num_q; id_queue++) {
+ for (idx = 0; idx < RTE_NB_TXQ_STATS; idx++) {
+ snprintf(xstats_names[cnt_used_entries].name,
+ sizeof(xstats_names[0].name),
+ "tx_q%u%s",
+ id_queue, rte_txq_stats_strings[idx].name);
+ cnt_used_entries++;
}
- free(xstats_names_copy);
- return size;
}
-}
-BIND_DEFAULT_SYMBOL(rte_eth_xstats_get_names, _v1705, 17.05);
-
-MAP_STATIC_SYMBOL(int
- rte_eth_xstats_get_names(uint8_t port_id,
- struct rte_eth_xstat_name *xstats_names,
- unsigned int size,
- uint64_t *ids), rte_eth_xstats_get_names_v1705);
-
-/* retrieve ethdev extended statistics */
-int
-rte_eth_xstats_get_v22(uint8_t port_id, struct rte_eth_xstat *xstats,
- unsigned int n)
-{
- uint64_t *values_copy;
- uint16_t size, i;
- values_copy = malloc(sizeof(values_copy) * n);
- if (!values_copy) {
- RTE_PMD_DEBUG_TRACE(
- "ERROR: Cannot allocate memory for xstats\n");
- return -1;
+ if (dev->dev_ops->xstats_get_names != NULL) {
+ /* If there are any driver-specific xstats, append them
+ * to end of list.
+ */
+ cnt_driver_entries = (*dev->dev_ops->xstats_get_names)(
+ dev,
+ xstats_names + cnt_used_entries,
+ size - cnt_used_entries);
+ if (cnt_driver_entries < 0)
+ return cnt_driver_entries;
+ cnt_used_entries += cnt_driver_entries;
}
- size = rte_eth_xstats_get(port_id, 0, values_copy, n);
- for (i = 0; i < n; i++) {
- xstats[i].id = i;
- xstats[i].value = values_copy[i];
- }
- free(values_copy);
- return size;
+ return cnt_used_entries;
}
-VERSION_SYMBOL(rte_eth_xstats_get, _v22, 2.2);
/* retrieve ethdev extended statistics */
int
-rte_eth_xstats_get_v1705(uint8_t port_id, uint64_t *ids, uint64_t *values,
+rte_eth_xstats_get(uint8_t port_id, struct rte_eth_xstat *xstats,
unsigned int n)
{
- /* If need all xstats */
- if (!ids) {
- struct rte_eth_stats eth_stats;
- struct rte_eth_dev *dev;
- unsigned int count = 0, i, q;
- signed int xcount = 0;
- uint64_t val, *stats_ptr;
- uint16_t nb_rxqs, nb_txqs;
-
- RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
- dev = &rte_eth_devices[port_id];
-
- nb_rxqs = RTE_MIN(dev->data->nb_rx_queues,
- RTE_ETHDEV_QUEUE_STAT_CNTRS);
- nb_txqs = RTE_MIN(dev->data->nb_tx_queues,
- RTE_ETHDEV_QUEUE_STAT_CNTRS);
-
- /* Return generic statistics */
- count = RTE_NB_STATS + (nb_rxqs * RTE_NB_RXQ_STATS) +
- (nb_txqs * RTE_NB_TXQ_STATS);
-
-
- /* implemented by the driver */
- if (dev->dev_ops->xstats_get_by_ids != NULL) {
- /* Retrieve the xstats from the driver at the end of the
- * xstats struct. Retrieve all xstats.
- */
- xcount = (*dev->dev_ops->xstats_get_by_ids)(dev,
- NULL,
- values ? values + count : NULL,
- (n > count) ? n - count : 0);
-
- if (xcount < 0)
- return xcount;
- /* implemented by the driver */
- } else if (dev->dev_ops->xstats_get != NULL) {
- /* Retrieve the xstats from the driver at the end of the
- * xstats struct. Retrieve all xstats.
- * Compatibility for PMD without xstats_get_by_ids
- */
- unsigned int size = (n > count) ? n - count : 1;
- struct rte_eth_xstat xstats[size];
-
- xcount = (*dev->dev_ops->xstats_get)(dev,
- values ? xstats : NULL, size);
-
- if (xcount < 0)
- return xcount;
-
- if (values != NULL)
- for (i = 0 ; i < (unsigned int)xcount; i++)
- values[i + count] = xstats[i].value;
- }
+ struct rte_eth_stats eth_stats;
+ struct rte_eth_dev *dev;
+ unsigned int count = 0, i, q;
+ signed int xcount = 0;
+ uint64_t val, *stats_ptr;
+ uint16_t nb_rxqs, nb_txqs;
- if (n < count + xcount || values == NULL)
- return count + xcount;
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
- /* now fill the xstats structure */
- count = 0;
- rte_eth_stats_get(port_id, &eth_stats);
+ dev = &rte_eth_devices[port_id];
- /* global stats */
- for (i = 0; i < RTE_NB_STATS; i++) {
- stats_ptr = RTE_PTR_ADD(&eth_stats,
- rte_stats_strings[i].offset);
- val = *stats_ptr;
- values[count++] = val;
- }
+ nb_rxqs = RTE_MIN(dev->data->nb_rx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ nb_txqs = RTE_MIN(dev->data->nb_tx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
- /* per-rxq stats */
- for (q = 0; q < nb_rxqs; q++) {
- for (i = 0; i < RTE_NB_RXQ_STATS; i++) {
- stats_ptr = RTE_PTR_ADD(&eth_stats,
- rte_rxq_stats_strings[i].offset +
- q * sizeof(uint64_t));
- val = *stats_ptr;
- values[count++] = val;
- }
- }
+ /* Return generic statistics */
+ count = RTE_NB_STATS + (nb_rxqs * RTE_NB_RXQ_STATS) +
+ (nb_txqs * RTE_NB_TXQ_STATS);
- /* per-txq stats */
- for (q = 0; q < nb_txqs; q++) {
- for (i = 0; i < RTE_NB_TXQ_STATS; i++) {
- stats_ptr = RTE_PTR_ADD(&eth_stats,
- rte_txq_stats_strings[i].offset +
- q * sizeof(uint64_t));
- val = *stats_ptr;
- values[count++] = val;
- }
- }
+ /* implemented by the driver */
+ if (dev->dev_ops->xstats_get != NULL) {
+ /* Retrieve the xstats from the driver at the end of the
+ * xstats struct.
+ */
+ xcount = (*dev->dev_ops->xstats_get)(dev,
+ xstats ? xstats + count : NULL,
+ (n > count) ? n - count : 0);
- return count + xcount;
+ if (xcount < 0)
+ return xcount;
}
- /* Need only xstats given by IDS array */
- else {
- uint16_t i, size;
- uint64_t *values_copy;
- size = rte_eth_xstats_get_v1705(port_id, NULL, NULL, 0);
+ if (n < count + xcount || xstats == NULL)
+ return count + xcount;
- values_copy = malloc(sizeof(values_copy) * size);
- if (!values_copy) {
- RTE_PMD_DEBUG_TRACE(
- "ERROR: can't allocate memory for values_copy\n");
- return -1;
- }
+ /* now fill the xstats structure */
+ count = 0;
+ rte_eth_stats_get(port_id, &eth_stats);
- rte_eth_xstats_get_v1705(port_id, NULL, values_copy, size);
+ /* global stats */
+ for (i = 0; i < RTE_NB_STATS; i++) {
+ stats_ptr = RTE_PTR_ADD(&eth_stats,
+ rte_stats_strings[i].offset);
+ val = *stats_ptr;
+ xstats[count++].value = val;
+ }
- for (i = 0; i < n; i++) {
- if (ids[i] >= size) {
- RTE_PMD_DEBUG_TRACE(
- "ERROR: id value isn't valid\n");
- return -1;
- }
- values[i] = values_copy[ids[i]];
+ /* per-rxq stats */
+ for (q = 0; q < nb_rxqs; q++) {
+ for (i = 0; i < RTE_NB_RXQ_STATS; i++) {
+ stats_ptr = RTE_PTR_ADD(&eth_stats,
+ rte_rxq_stats_strings[i].offset +
+ q * sizeof(uint64_t));
+ val = *stats_ptr;
+ xstats[count++].value = val;
}
- free(values_copy);
- return n;
}
-}
-BIND_DEFAULT_SYMBOL(rte_eth_xstats_get, _v1705, 17.05);
-
-MAP_STATIC_SYMBOL(int
- rte_eth_xstats_get(uint8_t port_id, uint64_t *ids,
- uint64_t *values, unsigned int n), rte_eth_xstats_get_v1705);
-__rte_deprecated int
-rte_eth_xstats_get_all(uint8_t port_id, struct rte_eth_xstat *xstats,
- unsigned int n)
-{
- uint64_t *values_copy;
- uint16_t size, i;
-
- values_copy = malloc(sizeof(values_copy) * n);
- if (!values_copy) {
- RTE_PMD_DEBUG_TRACE(
- "ERROR: Cannot allocate memory for xstats\n");
- return -1;
+ /* per-txq stats */
+ for (q = 0; q < nb_txqs; q++) {
+ for (i = 0; i < RTE_NB_TXQ_STATS; i++) {
+ stats_ptr = RTE_PTR_ADD(&eth_stats,
+ rte_txq_stats_strings[i].offset +
+ q * sizeof(uint64_t));
+ val = *stats_ptr;
+ xstats[count++].value = val;
+ }
}
- size = rte_eth_xstats_get(port_id, 0, values_copy, n);
- for (i = 0; i < n; i++) {
+ for (i = 0; i < count; i++)
xstats[i].id = i;
- xstats[i].value = values_copy[i];
- }
- free(values_copy);
- return size;
-}
+ /* add an offset to driver-specific stats */
+ for ( ; i < count + xcount; i++)
+ xstats[i].id += count;
-__rte_deprecated int
-rte_eth_xstats_get_names_all(uint8_t port_id,
- struct rte_eth_xstat_name *xstats_names, unsigned int n)
-{
- return rte_eth_xstats_get_names(port_id, xstats_names, n, NULL);
+ return count + xcount;
}
/* reset ethdev extended statistics */
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index e0f7ee5..a46290c 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -185,7 +185,6 @@ extern "C" {
#include "rte_ether.h"
#include "rte_eth_ctrl.h"
#include "rte_dev_info.h"
-#include "rte_compat.h"
struct rte_mbuf;
@@ -1122,10 +1121,6 @@ typedef int (*eth_xstats_get_t)(struct rte_eth_dev *dev,
struct rte_eth_xstat *stats, unsigned n);
/**< @internal Get extended stats of an Ethernet device. */
-typedef int (*eth_xstats_get_by_ids_t)(struct rte_eth_dev *dev,
- uint64_t *ids, uint64_t *values, unsigned int n);
-/**< @internal Get extended stats of an Ethernet device. */
-
typedef void (*eth_xstats_reset_t)(struct rte_eth_dev *dev);
/**< @internal Reset extended stats of an Ethernet device. */
@@ -1133,17 +1128,6 @@ typedef int (*eth_xstats_get_names_t)(struct rte_eth_dev *dev,
struct rte_eth_xstat_name *xstats_names, unsigned size);
/**< @internal Get names of extended stats of an Ethernet device. */
-typedef int (*eth_xstats_get_names_by_ids_t)(struct rte_eth_dev *dev,
- struct rte_eth_xstat_name *xstats_names, uint64_t *ids,
- unsigned int size);
-/**< @internal Get names of extended stats of an Ethernet device. */
-
-typedef int (*eth_xstats_get_by_name_t)(struct rte_eth_dev *dev,
- struct rte_eth_xstat_name *xstats_names,
- struct rte_eth_xstat *xstat,
- const char *name);
-/**< @internal Get xstat specified by name of an Ethernet device. */
-
typedef int (*eth_queue_stats_mapping_set_t)(struct rte_eth_dev *dev,
uint16_t queue_id,
uint8_t stat_idx,
@@ -1582,12 +1566,6 @@ struct eth_dev_ops {
eth_timesync_adjust_time timesync_adjust_time; /** Adjust the device clock. */
eth_timesync_read_time timesync_read_time; /** Get the device clock time. */
eth_timesync_write_time timesync_write_time; /** Set the device clock time. */
- eth_xstats_get_by_ids_t xstats_get_by_ids;
- /**< Get extended device statistics by ID. */
- eth_xstats_get_names_by_ids_t xstats_get_names_by_ids;
- /**< Get name of extended device statistics by ID. */
- eth_xstats_get_by_name_t xstats_get_by_name;
- /**< Get extended device statistics by name. */
};
/**
@@ -2291,55 +2269,8 @@ int rte_eth_stats_get(uint8_t port_id, struct rte_eth_stats *stats);
*/
void rte_eth_stats_reset(uint8_t port_id);
-
-/**
- * Gets the ID of a statistic from its name.
- *
- * This function searches for the statistics using string compares, and
- * as such should not be used on the fast-path. For fast-path retrieval of
- * specific statistics, store the ID as provided in *id* from this function,
- * and pass the ID to rte_eth_xstats_get()
- *
- * @param port_id The port to look up statistics from
- * @param xstat_name The name of the statistic to return
- * @param[out] id A pointer to an app-supplied uint64_t which should be
- * set to the ID of the stat if the stat exists.
- * @return
- * 0 on success
- * -ENODEV for invalid port_id,
- * -EINVAL if the xstat_name doesn't exist in port_id
- */
-int rte_eth_xstats_get_id_by_name(uint8_t port_id, const char *xstat_name,
- uint64_t *id);
-
-/**
- * Retrieve all extended statistics of an Ethernet device.
- *
- * @param port_id
- * The port identifier of the Ethernet device.
- * @param xstats
- * A pointer to a table of structure of type *rte_eth_xstat*
- * to be filled with device statistics ids and values: id is the
- * index of the name string in xstats_names (see rte_eth_xstats_get_names()),
- * and value is the statistic counter.
- * This parameter can be set to NULL if n is 0.
- * @param n
- * The size of the xstats array (number of elements).
- * @return
- * - A positive value lower or equal to n: success. The return value
- * is the number of entries filled in the stats table.
- * - A positive value higher than n: error, the given statistics table
- * is too small. The return value corresponds to the size that should
- * be given to succeed. The entries in the table are not valid and
- * shall not be used by the caller.
- * - A negative value on error (invalid port id).
- */
-__rte_deprecated
-int rte_eth_xstats_get_all(uint8_t port_id, struct rte_eth_xstat *xstats,
- unsigned int n);
-
/**
- * Retrieve names of all extended statistics of an Ethernet device.
+ * Retrieve names of extended statistics of an Ethernet device.
*
* @param port_id
* The port identifier of the Ethernet device.
@@ -2347,7 +2278,7 @@ int rte_eth_xstats_get_all(uint8_t port_id, struct rte_eth_xstat *xstats,
* An rte_eth_xstat_name array of at least *size* elements to
* be filled. If set to NULL, the function returns the required number
* of elements.
- * @param n
+ * @param size
* The size of the xstats_names array (number of elements).
* @return
* - A positive value lower or equal to size: success. The return value
@@ -2358,9 +2289,9 @@ int rte_eth_xstats_get_all(uint8_t port_id, struct rte_eth_xstat *xstats,
* shall not be used by the caller.
* - A negative value on error (invalid port id).
*/
-__rte_deprecated
-int rte_eth_xstats_get_names_all(uint8_t port_id,
- struct rte_eth_xstat_name *xstats_names, unsigned int n);
+int rte_eth_xstats_get_names(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int size);
/**
* Retrieve extended statistics of an Ethernet device.
@@ -2384,92 +2315,8 @@ int rte_eth_xstats_get_names_all(uint8_t port_id,
* shall not be used by the caller.
* - A negative value on error (invalid port id).
*/
-int rte_eth_xstats_get_v22(uint8_t port_id, struct rte_eth_xstat *xstats,
- unsigned int n);
-
-/**
- * Retrieve extended statistics of an Ethernet device.
- *
- * @param port_id
- * The port identifier of the Ethernet device.
- * @param ids
- * A pointer to an ids array passed by application. This tells wich
- * statistics values function should retrieve. This parameter
- * can be set to NULL if n is 0. In this case function will retrieve
- * all avalible statistics.
- * @param values
- * A pointer to a table to be filled with device statistics values.
- * @param n
- * The size of the ids array (number of elements).
- * @return
- * - A positive value lower or equal to n: success. The return value
- * is the number of entries filled in the stats table.
- * - A positive value higher than n: error, the given statistics table
- * is too small. The return value corresponds to the size that should
- * be given to succeed. The entries in the table are not valid and
- * shall not be used by the caller.
- * - A negative value on error (invalid port id).
- */
-int rte_eth_xstats_get_v1705(uint8_t port_id, uint64_t *ids, uint64_t *values,
- unsigned int n);
-
-int rte_eth_xstats_get(uint8_t port_id, uint64_t *ids, uint64_t *values,
- unsigned int n);
-
-/**
- * Retrieve extended statistics of an Ethernet device.
- *
- * @param port_id
- * The port identifier of the Ethernet device.
- * @param xstats_names
- * A pointer to a table of structure of type *rte_eth_xstat*
- * to be filled with device statistics ids and values: id is the
- * index of the name string in xstats_names (see rte_eth_xstats_get_names()),
- * and value is the statistic counter.
- * This parameter can be set to NULL if n is 0.
- * @param n
- * The size of the xstats array (number of elements).
- * @return
- * - A positive value lower or equal to n: success. The return value
- * is the number of entries filled in the stats table.
- * - A positive value higher than n: error, the given statistics table
- * is too small. The return value corresponds to the size that should
- * be given to succeed. The entries in the table are not valid and
- * shall not be used by the caller.
- * - A negative value on error (invalid port id).
- */
-int rte_eth_xstats_get_names_v1607(uint8_t port_id,
- struct rte_eth_xstat_name *xstats_names, unsigned int n);
-
-/**
- * Retrieve names of extended statistics of an Ethernet device.
- *
- * @param port_id
- * The port identifier of the Ethernet device.
- * @param xstats_names
- * An rte_eth_xstat_name array of at least *size* elements to
- * be filled. If set to NULL, the function returns the required number
- * of elements.
- * @param ids
- * IDs array given by app to retrieve specific statistics
- * @param size
- * The size of the xstats_names array (number of elements).
- * @return
- * - A positive value lower or equal to size: success. The return value
- * is the number of entries filled in the stats table.
- * - A positive value higher than size: error, the given statistics table
- * is too small. The return value corresponds to the size that should
- * be given to succeed. The entries in the table are not valid and
- * shall not be used by the caller.
- * - A negative value on error (invalid port id).
- */
-int rte_eth_xstats_get_names_v1705(uint8_t port_id,
- struct rte_eth_xstat_name *xstats_names, unsigned int size,
- uint64_t *ids);
-
-int rte_eth_xstats_get_names(uint8_t port_id,
- struct rte_eth_xstat_name *xstats_names, unsigned int size,
- uint64_t *ids);
+int rte_eth_xstats_get(uint8_t port_id, struct rte_eth_xstat *xstats,
+ unsigned int n);
/**
* Reset extended statistics of an Ethernet device.
diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map
index 6e118c4..238c2a1 100644
--- a/lib/librte_ether/rte_ether_version.map
+++ b/lib/librte_ether/rte_ether_version.map
@@ -151,10 +151,5 @@ DPDK_17.05 {
rte_eth_dev_attach_secondary;
rte_eth_find_next;
- rte_eth_xstats_get;
- rte_eth_xstats_get_all;
- rte_eth_xstats_get_id_by_name;
- rte_eth_xstats_get_names;
- rte_eth_xstats_get_names_all;
} DPDK_17.02;
--
2.7.4
^ permalink raw reply [relevance 1%]
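For reference, a sketch (not taken verbatim from the patch) of the calling
convention this revert restores: query the count with a NULL array,
allocate, then fetch names and values, with xstats[i].id indexing into the
name table:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

/* Dump all extended statistics of a port with the restored API. */
static int
dump_xstats(uint8_t port_id)
{
	struct rte_eth_xstat_name *names = NULL;
	struct rte_eth_xstat *xstats = NULL;
	int i, len, ret = -1;

	/* A NULL array queries the number of statistics. */
	len = rte_eth_xstats_get_names(port_id, NULL, 0);
	if (len < 0)
		return len;

	names = malloc(sizeof(*names) * len);
	xstats = malloc(sizeof(*xstats) * len);
	if (names == NULL || xstats == NULL)
		goto out;

	if (rte_eth_xstats_get_names(port_id, names, len) != len ||
	    rte_eth_xstats_get(port_id, xstats, len) != len)
		goto out;

	/* xstats[i].id is the index of the statistic's name. */
	for (i = 0; i < len; i++)
		printf("%s: %"PRIu64"\n",
			names[xstats[i].id].name, xstats[i].value);
	ret = 0;
out:
	free(names);
	free(xstats);
	return ret;
}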
* Re: [dpdk-dev] [PATCH] doc: postpone ABI change in ethdev
2017-04-18 15:48 4% [dpdk-dev] [PATCH] doc: postpone ABI change in ethdev Bernard Iremonger
@ 2017-04-26 15:02 4% ` Mcnamara, John
2017-05-10 23:27 4% ` Thomas Monjalon
2017-05-10 15:18 4% ` Pattan, Reshma
2017-05-10 16:31 4% ` Ananyev, Konstantin
2 siblings, 1 reply; 200+ results
From: Mcnamara, John @ 2017-04-26 15:02 UTC (permalink / raw)
To: Iremonger, Bernard, dev; +Cc: Yigit, Ferruh
> -----Original Message-----
> From: Iremonger, Bernard
> Sent: Tuesday, April 18, 2017 4:49 PM
> To: dev@dpdk.org
> Cc: Mcnamara, John <john.mcnamara@intel.com>; Yigit, Ferruh
> <ferruh.yigit@intel.com>; Iremonger, Bernard <bernard.iremonger@intel.com>
> Subject: [PATCH] doc: postpone ABI change in ethdev
>
> The change of _rte_eth_dev_callback_process has not been done in 17.05.
> Let's postpone to 17.08.
>
> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH 2/3] doc: add device removal event to release note
@ 2017-04-25 10:10 4% ` Gaetan Rivet
0 siblings, 0 replies; 200+ results
From: Gaetan Rivet @ 2017-04-25 10:10 UTC (permalink / raw)
To: dev; +Cc: Ferruh Yigit
Signed-off-by: Gaetan Rivet <gaetan.rivet@6wind.com>
---
doc/guides/rel_notes/release_17_05.rst | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index eb5b30f..293d3fd 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -298,6 +298,11 @@ New Features
* DES DOCSIS BPI algorithm.
+* **Added Device Removal Interrupt.**
+
+ * Added a new ethdev event ``RTE_ETH_DEV_INTR_RMV`` to signify the sudden removal of a device. This
+ event can be advertised by PCI drivers and enabled accordingly.
+
Resolved Issues
---------------
@@ -459,7 +464,6 @@ API Changes
* Added new function ``rte_eth_xstats_get_id_by_name``
-
ABI Changes
-----------
--
2.1.4
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v6 0/5] Extended xstats API in ethdev library to allow grouping of stats
2017-04-24 12:32 3% ` Olivier Matz
@ 2017-04-24 12:41 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2017-04-24 12:41 UTC (permalink / raw)
To: Kuba Kozak
Cc: dev, Olivier Matz, harry.van.haaren, deepak.k.jain, Jacek Piasecki
24/04/2017 14:32, Olivier Matz:
> Hi,
>
> On Thu, 20 Apr 2017 22:31:35 +0200, Thomas Monjalon <thomas@monjalon.net>
wrote:
> > 13/04/2017 16:59, Kuba Kozak:
> > > Extended xstats API in ethdev library to allow grouping of stats
> > > logically
> > > so they can be retrieved per logical grouping managed by the
> > > application.
> > > Changed existing functions rte_eth_xstats_get_names and
> > > rte_eth_xstats_get
> > > to use a new list of arguments: array of ids and array of values.
> > > ABI versioning mechanism was used to support backward compatibility.
> > > Introduced two new functions rte_eth_xstats_get_all and
> > > rte_eth_xstats_get_names_all which keeps functionality of the previous
> > > ones (respectively rte_eth_xstats_get and rte_eth_xstats_get_names)
> > > but use new API inside. Both functions marked as deprecated.
> > > Introduced new function: rte_eth_xstats_get_id_by_name to retrieve
> > > xstats ids by its names.
> > > Extended functionality of proc_info application:
> > > --xstats-name NAME: to display single xstat value by NAME
> > > Updated test-pmd application to use new API.
> >
> > Applied, thanks
>
> I'm adapting my application to the upcoming dpdk 17.05. I see
> several problems with this patchset:
>
> - the API of rte_eth_xstats_get() and rte_eth_xstats_get_names()
> has been modified, and from what I see it was not announced.
> It looks like the ABI is preserved, however.
>
> - the new functions rte_eth_xstats_get_all() and
> rte_eth_xstats_get_names_all() are marked as deprecated, which
> looks strange for new functions.
>
> About the new api:
>
> int rte_eth_xstats_get(uint8_t port_id, uint64_t *ids, uint64_t *values,
> unsigned int n);
>
> int rte_eth_xstats_get_names(uint8_t port_id,
> struct rte_eth_xstat_name *xstats_names, unsigned int size,
> uint64_t *ids);
>
> - the argument "id" is not at the same place
>
> - why is it "size" in one function and "n" in the second (it was
> renamed in the patch)?
>
> - the argument "id" should be const
>
> - a table of uint64_t is returned in place of the struct rte_eth_xstat
> table: if no ids are given, the driver cannot return partial or
> disordered stats anymore. See
> 513c78ae3fd6 ("ethdev: fix extended statistics name index")
>
>
> So, I wonder if it wouldn't be simpler to keep the old
> API intact (it would avoid unannounced breakage). The new feature
> can be implemented in an additional API:
>
> rte_eth_xstats_get_by_id(uint8_t port_id, const uint64_t *ids,
> uint64_t *values, unsigned int size)
> rte_eth_xstats_get_names_by_id(uint8_t port_id, const uint64_t *ids,
> struct rte_eth_xstat_name *xstats_names, unsigned int size)
>
> Or:
>
> rte_eth_xstats_get_by_id(uint8_t port_id, const uint64_t *ids,
> struct rte_eth_xstat *values, unsigned int size)
> rte_eth_xstats_get_names_by_id(uint8_t port_id, const uint64_t *ids,
> struct rte_eth_xstat_name *xstats_names, unsigned int size)
>
> (which would allow to deprecate the old API, but I'm not sure
> we need to)
>
>
> Can we fix that for 17.05?
Bottom line is that I have skipped the complete review of this patchset
because it was not properly split, preventing a good review.
Lesson learned: I won't accept any more patchsets which are not split
enough to allow a proper overview.
Back to the issues: please try to fix them quickly, or we should revert
the patchset for 17.05-rc3.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v6 0/5] Extended xstats API in ethdev library to allow grouping of stats
2017-04-20 20:31 0% ` Thomas Monjalon
@ 2017-04-24 12:32 3% ` Olivier Matz
2017-04-24 12:41 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Olivier Matz @ 2017-04-24 12:32 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: Kuba Kozak, dev, harry.van.haaren, deepak.k.jain
Hi,
On Thu, 20 Apr 2017 22:31:35 +0200, Thomas Monjalon <thomas@monjalon.net> wrote:
> 13/04/2017 16:59, Kuba Kozak:
> > Extended xstats API in ethdev library to allow grouping of stats logically
> > so they can be retrieved per logical grouping managed by the application.
> > Changed existing functions rte_eth_xstats_get_names and rte_eth_xstats_get
> > to use a new list of arguments: array of ids and array of values.
> > ABI versioning mechanism was used to support backward compatibility.
> > Introduced two new functions rte_eth_xstats_get_all and
> > rte_eth_xstats_get_names_all which keeps functionality of the previous
> > ones (respectively rte_eth_xstats_get and rte_eth_xstats_get_names)
> > but use new API inside. Both functions marked as deprecated.
> > Introduced new function: rte_eth_xstats_get_id_by_name to retrieve
> > xstats ids by its names.
> > Extended functionality of proc_info application:
> > --xstats-name NAME: to display single xstat value by NAME
> > Updated test-pmd application to use new API.
>
> Applied, thanks
I'm adapting my application to the upcoming dpdk 17.05. I see
several problems with this patchset:
- the API of rte_eth_xstats_get() and rte_eth_xstats_get_names()
has been modified, and from what I see it was not announced.
It looks like the ABI is preserved, however.
- the new functions rte_eth_xstats_get_all() and
rte_eth_xstats_get_names_all() are marked as deprecated, which
looks strange for new functions.
About the new api:
int rte_eth_xstats_get(uint8_t port_id, uint64_t *ids, uint64_t *values,
unsigned int n);
int rte_eth_xstats_get_names(uint8_t port_id,
struct rte_eth_xstat_name *xstats_names, unsigned int size,
uint64_t *ids);
- the argument "id" is not at the same place
- why is it "size" in one function and "n" in the second (it was
renamed in the patch)?
- the argument "id" should be const
- a table of uint64_t is returned in place of the struct rte_eth_xstat
table: if no ids are given, the driver cannot return partial or
disordered stats anymore. See
513c78ae3fd6 ("ethdev: fix extended statistics name index")
So, I wonder if it wouldn't be simpler to keep the old
API intact (it would avoid unannounced breakage). The new feature
can be implemented in an additional API:
rte_eth_xstats_get_by_id(uint8_t port_id, const uint64_t *ids,
uint64_t *values, unsigned int size)
rte_eth_xstats_get_names_by_id(uint8_t port_id, const uint64_t *ids,
struct rte_eth_xstat_name *xstats_names, unsigned int size)
Or:
rte_eth_xstats_get_by_id(uint8_t port_id, const uint64_t *ids,
struct rte_eth_xstat *values, unsigned int size)
rte_eth_xstats_get_names_by_id(uint8_t port_id, const uint64_t *ids,
struct rte_eth_xstat_name *xstats_names, unsigned int size)
(which would allow to deprecate the old API, but I'm not sure
we need to)
Can we fix that for 17.05?
Regards,
Olivier
^ permalink raw reply [relevance 3%]
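To make the proposal concrete, a sketch of how the first variant suggested
above might be used. The _by_id functions exist only as a proposal in this
thread, and the cached ids below are hypothetical:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Proposed in this thread only -- not a committed DPDK API. */
int rte_eth_xstats_get_by_id(uint8_t port_id, const uint64_t *ids,
		uint64_t *values, unsigned int size);

/* rx_errors_id/tx_errors_id would have been cached earlier by
 * scanning the output of rte_eth_xstats_get_names().
 */
static void
print_error_counters(uint8_t port_id, uint64_t rx_errors_id,
		uint64_t tx_errors_id)
{
	const uint64_t ids[2] = { rx_errors_id, tx_errors_id };
	uint64_t values[2];

	if (rte_eth_xstats_get_by_id(port_id, ids, values, 2) == 2)
		printf("rx_errors=%"PRIu64" tx_errors=%"PRIu64"\n",
			values[0], values[1]);
}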
* Re: [dpdk-dev] [PATCH v7 1/3] lib/librte_ether: add support for port reset
2017-04-20 20:49 3% ` Thomas Monjalon
@ 2017-04-21 3:20 0% ` Zhao1, Wei
0 siblings, 0 replies; 200+ results
From: Zhao1, Wei @ 2017-04-21 3:20 UTC (permalink / raw)
To: Thomas Monjalon, Lu, Wenzhuo; +Cc: dev
Hi, Thomas
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Friday, April 21, 2017 4:50 AM
> To: Zhao1, Wei <wei.zhao1@intel.com>; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v7 1/3] lib/librte_ether: add support for port
> reset
>
> 10/04/2017 05:02, Wei Zhao:
> > --- a/lib/librte_ether/rte_ethdev.h
> > +++ b/lib/librte_ether/rte_ethdev.h
> > @@ -1509,6 +1512,9 @@ struct eth_dev_ops {
> > eth_l2_tunnel_offload_set_t l2_tunnel_offload_set;
> > /** Enable/disable l2 tunnel offload functions. */
> >
> > + /** Reset device. */
> > + eth_dev_reset_t dev_reset;
> > +
> > eth_set_queue_rate_limit_t set_queue_rate_limit; /**< Set queue
> rate
> > limit. */
> >
> > rss_hash_update_t rss_hash_update; /** Configure RSS hash
>
> This new op should be added at the end of the structure to avoid ABI breakage.
OK, I will change it as you suggest in the next version.
>
> > protocols. */ @@ -4413,6 +4419,28 @@ int
> > rte_eth_dev_get_name_by_port(uint8_t port_id, char *name);
> >
> > /**
> > + * Reset an ethernet device when it's not working. One scenario is,
> > + after
> > PF + * port is down and up, the related VF port should be reset.
> > + * The API will stop the port, clear the rx/tx queues, re-setup the
> > + rx/tx
> > + * queues, restart the port.
> > + * Before calling this API, APP should stop the rx/tx. When tx is
> > + being
> > stopped, + * APP can drop the packets and release the buffer instead
> > of sending them. + * This function can also do some restore work for
> > the port, for example, it can + * restore the added parameters of
> > vlan, mac_addrs, promisc_unicast_enabled + * flag and
> promisc_multicast_enabled flag.
> > + *
> > + * @param port_id
> > + * The port identifier of the Ethernet device.
> > + *
> > + * @return
> > + * - (0) if successful.
> > + * - (-ENODEV) if port identifier is invalid.
> > + * - (-ENOTSUP) if hardware doesn't support this function.
> > + */
> > +int
> > +rte_eth_dev_reset(uint8_t port_id);
>
> The declarations and function definitions would be better placed
> after the start and stop functions.
OK, I will change it as you suggest in the next version.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v7 1/3] lib/librte_ether: add support for port reset
@ 2017-04-20 20:49 3% ` Thomas Monjalon
2017-04-21 3:20 0% ` Zhao1, Wei
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2017-04-20 20:49 UTC (permalink / raw)
To: Wei Zhao, Wenzhuo Lu; +Cc: dev
10/04/2017 05:02, Wei Zhao:
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h
> @@ -1509,6 +1512,9 @@ struct eth_dev_ops {
> eth_l2_tunnel_offload_set_t l2_tunnel_offload_set;
> /** Enable/disable l2 tunnel offload functions. */
>
> + /** Reset device. */
> + eth_dev_reset_t dev_reset;
> +
> eth_set_queue_rate_limit_t set_queue_rate_limit; /**< Set queue rate
> limit. */
>
> rss_hash_update_t rss_hash_update; /** Configure RSS hash
This new op should be added at the end of the structure
to avoid ABI breakage.
> protocols. */ @@ -4413,6 +4419,28 @@ int
> rte_eth_dev_get_name_by_port(uint8_t port_id, char *name);
>
> /**
> + * Reset an ethernet device when it's not working. One scenario is, after
> PF + * port is down and up, the related VF port should be reset.
> + * The API will stop the port, clear the rx/tx queues, re-setup the rx/tx
> + * queues, restart the port.
> + * Before calling this API, APP should stop the rx/tx. When tx is being
> stopped, + * APP can drop the packets and release the buffer instead of
> sending them. + * This function can also do some restore work for the port,
> for example, it can + * restore the added parameters of vlan, mac_addrs,
> promisc_unicast_enabled + * flag and promisc_multicast_enabled flag.
> + *
> + * @param port_id
> + * The port identifier of the Ethernet device.
> + *
> + * @return
> + * - (0) if successful.
> + * - (-ENODEV) if port identifier is invalid.
> + * - (-ENOTSUP) if hardware doesn't support this function.
> + */
> +int
> +rte_eth_dev_reset(uint8_t port_id);
The declarations and function definitions would be better placed
after the start and stop functions.
^ permalink raw reply [relevance 3%]
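The layout concern behind "add it at the end" can be illustrated with
hypothetical structs (none of these names are from DPDK). Member offsets
are frozen into compiled applications, so inserting a member in the middle
shifts every later one and breaks old binaries, while appending leaves
them untouched:

struct ops_v1 {
	void (*start)(void);
	void (*stop)(void);	/* offset 8 on a 64-bit ABI */
};

struct ops_v2_bad {
	void (*start)(void);
	void (*reset)(void);	/* inserted: "stop" silently moves to 16 */
	void (*stop)(void);
};

struct ops_v2_good {
	void (*start)(void);
	void (*stop)(void);	/* existing offsets unchanged */
	void (*reset)(void);	/* appended: old callers unaffected */
};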
* Re: [dpdk-dev] [PATCH v6 0/5] Extended xstats API in ethdev library to allow grouping of stats
2017-04-13 14:59 4% ` [dpdk-dev] [PATCH v6 0/5] Extended xstats API in ethdev library to allow grouping of stats Kuba Kozak
` (2 preceding siblings ...)
2017-04-13 16:21 0% ` [dpdk-dev] [PATCH v6 0/5] Extended xstats API in ethdev library to allow grouping of stats Van Haaren, Harry
@ 2017-04-20 20:31 0% ` Thomas Monjalon
2017-04-24 12:32 3% ` Olivier Matz
3 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2017-04-20 20:31 UTC (permalink / raw)
To: Kuba Kozak; +Cc: dev, harry.van.haaren, deepak.k.jain
13/04/2017 16:59, Kuba Kozak:
> Extended xstats API in ethdev library to allow grouping of stats logically
> so they can be retrieved per logical grouping managed by the application.
> Changed existing functions rte_eth_xstats_get_names and rte_eth_xstats_get
> to use a new list of arguments: array of ids and array of values.
> ABI versioning mechanism was used to support backward compatibility.
> Introduced two new functions rte_eth_xstats_get_all and
> rte_eth_xstats_get_names_all which keeps functionality of the previous
> ones (respectively rte_eth_xstats_get and rte_eth_xstats_get_names)
> but use new API inside. Both functions marked as deprecated.
> Introduced new function: rte_eth_xstats_get_id_by_name to retrieve
> xstats ids by its names.
> Extended functionality of proc_info application:
> --xstats-name NAME: to display single xstat value by NAME
> Updated test-pmd application to use new API.
Applied, thanks
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] 17.08 Roadmap
2017-04-14 14:30 4% ` Adrien Mazarguil
@ 2017-04-19 1:23 0% ` Lu, Wenzhuo
0 siblings, 0 replies; 200+ results
From: Lu, Wenzhuo @ 2017-04-19 1:23 UTC (permalink / raw)
To: Adrien Mazarguil, O'Driscoll, Tim; +Cc: dev
Hi Adrien,
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Adrien Mazarguil
> Sent: Friday, April 14, 2017 10:30 PM
> To: O'Driscoll, Tim
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] 17.08 Roadmap
>
> On Fri, Apr 14, 2017 at 01:27:13PM +0000, O'Driscoll, Tim wrote:
> >
> > API to Configure Queue Regions for RSS: This provides support for
> configuration of queue regions for RSS, so that different traffic classes or
> different packet classification types can be separated into different queues.
> This will be implemented for I40E.
>
> About this last topic, do you mean devising a new API is necessary or do you
> plan to implement it through rte_flow? I'm asking as it looks like this is what
> the rte_flow RSS action is defined for, see [1]. The mlx5 PMD adds support
> for it in 17.05 [2].
Yes. We'll use rte_flow where appropriate, and we plan to use this RSS action.
>
> I also intend to submit a few changes to rte_flow:
>
> - VLAN item fix (according to this thread [3]). Impacts PMDs that implement
> the VLAN and associated items. Not sure it can be accepted for 17.08 due to
> ABI breakage, but it will be submitted regardless.
By the way, ixgbe has removed the TPID check: http://dpdk.org/dev/patchwork/patch/23637/
I don't find any other PMD using TPID.
So the PMDs should be ready; we can remove the TPID from the rte_flow VLAN item now.
>
> - A new isolated operation mode for rte_flow, guaranteeing applications can
> expect to receive packets from the flow rules they define *only* for
> complete control. No more "default" RX rules, RSS and so on. It means
> PMDs
> are free to reassign these resources to flow rules. No planned ABI
> breakage.
>
> [1] http://dpdk.org/doc/guides/prog_guide/rte_flow.html#action-rss
> [2] http://dpdk.org/browse/next/dpdk-next-
> net/commit/?id=1bfb7bb4423349ab13decead0af8ffd006e8e398
> [3] http://dpdk.org/ml/archives/dev/2017-March/060231.html
>
> --
> Adrien Mazarguil
> 6WIND
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH] doc: postpone ABI change in ethdev
@ 2017-04-18 15:48 4% Bernard Iremonger
2017-04-26 15:02 4% ` Mcnamara, John
` (2 more replies)
0 siblings, 3 replies; 200+ results
From: Bernard Iremonger @ 2017-04-18 15:48 UTC (permalink / raw)
To: dev; +Cc: john.mcnamara, ferruh.yigit, Bernard Iremonger
The change of _rte_eth_dev_callback_process has not been done in 17.05.
Let's postpone to 17.08.
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index a3e7c720c..00e379c00 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -36,8 +36,8 @@ Deprecation Notices
``eth_driver``. Similarly, ``rte_pci_driver`` is planned to be removed from
``rte_cryptodev_driver`` in 17.05.
-* ethdev: An API change is planned for 17.05 for the function
- ``_rte_eth_dev_callback_process``. In 17.05 the function will return an ``int``
+* ethdev: An API change is planned for 17.08 for the function
+ ``_rte_eth_dev_callback_process``. In 17.08 the function will return an ``int``
instead of ``void`` and a fourth parameter ``void *ret_param`` will be added.
* ethdev: for 17.05 it is planned to deprecate the following nine rte_eth_dev_*
--
2.11.0
^ permalink raw reply [relevance 4%]
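A sketch of the prototype change the notice describes (based only on the
deprecation text; the parameter names are illustrative and the final
signature is up to the 17.08 patch):

/* 17.05: callbacks cannot return a value to the caller */
void _rte_eth_dev_callback_process(struct rte_eth_dev *dev,
		enum rte_eth_event_type event, void *cb_arg);

/* 17.08 (planned): int return plus a fourth "void *ret_param" */
int _rte_eth_dev_callback_process(struct rte_eth_dev *dev,
		enum rte_eth_event_type event, void *cb_arg,
		void *ret_param);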
* Re: [dpdk-dev] 17.08 Roadmap
@ 2017-04-14 14:30 4% ` Adrien Mazarguil
2017-04-19 1:23 0% ` Lu, Wenzhuo
0 siblings, 1 reply; 200+ results
From: Adrien Mazarguil @ 2017-04-14 14:30 UTC (permalink / raw)
To: O'Driscoll, Tim; +Cc: dev
On Fri, Apr 14, 2017 at 01:27:13PM +0000, O'Driscoll, Tim wrote:
> Below are the features that we're planning to submit for the 17.08 release. We'll submit a patch to update the roadmap page with this info.
>
> It would be good if others are also willing to share their plans so that we can build up a complete picture of what's planned for 17.08 and make sure there's no duplication.
>
> Generic QoS API: The proposed API is currently being added to the next-tm repository (http://dpdk.org/browse/next/dpdk-next-tm/). In 17.08, implementations of that API will be added for I40E, IXGBE, and the existing software QoS implementation. The API will move from next-tm to the main DPDK repository.
>
> Support for IPFIX: An RFC will be created in the next few weeks to support IPFIX (IP Flow Information Export - see https://en.wikipedia.org/wiki/IP_Flow_Information_Export for details) within DPDK. The actual implementation of IPFIX will be the responsibility of the application, but a library will be proposed which will enable an application to implement an IPFIX Observation Point, Metering Process and Exporter Process. Depending on the response to this RFC, we'll consider implementing the IPFIX library for 17.08.
>
> GRO (Generic Receive Offload): Generic Receive Offload is a widely used SW-based offloading technique to reduce per-packet processing overhead. It improves performance by reassembling small packets into large ones. A new library will be added to DPDK which will implement GRO.
>
> Generic Flow Enhancements: The rte_flow API was added in 17.02 and implemented for IXGBE and I40E. Support will be added for IGB, and enhancements will also be implemented for IXGBE (NVGRE/L2 Tunnel filters).
>
> Add Packet Type Recognition to IXGBE Vector PMD: The I40E Vector PMD supports packet type recognition but the IXGBE vPMD doesn't. This is a problem for VPP (https://wiki.fd.io/view/VPP) as they have to add a patch on top of DPDK to implement this.
>
> VF Port Reset for IXGBE: This was implemented in 17.05 for I40E (see http://dpdk.org/ml/archives/dev/2017-April/063538.html). Support will be added for IXGBE in 17.08.
>
> Cryptodev Multi-Core SW Scheduler: The cryptodev scheduler was first introduced in 17.02 and further enhanced in 17.05. It will be enhanced again in 17.08 to support the ability to use a pool of cores for software crypto. This will allow sufficient capacity to be provisioned for encrypting/decrypting large flows in software.
>
> API to Configure Queue Regions for RSS: This provides support for configuration of queue regions for RSS, so that different traffic classes or different packet classification types can be separated into different queues. This will be implemented for I40E.
About this last topic, do you mean devising a new API is necessary or do you
plan to implement it through rte_flow? I'm asking as it looks like this is
what the rte_flow RSS action is defined for, see [1]. The mlx5 PMD adds
support for it in 17.05 [2].
I also intend to submit a few changes to rte_flow:
- VLAN item fix (according to this thread [3]). Impacts PMDs that implement
the VLAN and associated items. Not sure it can be accepted for 17.08 due to
ABI breakage, but it will be submitted regardless.
- A new isolated operation mode for rte_flow, guaranteeing applications can
expect to receive packets from the flow rules they define *only* for
complete control. No more "default" RX rules, RSS and so on. It means PMDs
are free to reassign these resources to flow rules. No planned ABI
breakage.
[1] http://dpdk.org/doc/guides/prog_guide/rte_flow.html#action-rss
[2] http://dpdk.org/browse/next/dpdk-next-net/commit/?id=1bfb7bb4423349ab13decead0af8ffd006e8e398
[3] http://dpdk.org/ml/archives/dev/2017-March/060231.html
--
Adrien Mazarguil
6WIND
^ permalink raw reply [relevance 4%]
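For context, a minimal sketch of the rte_flow RSS action Adrien refers to, using the 17.05-era types; the IPv4 pattern and the queue set 0-3 are illustrative assumptions, not taken from any posted patch:

    #include <stdlib.h>
    #include <rte_flow.h>

    /* Spread matching ingress IPv4 traffic over queues 0-3. */
    static struct rte_flow *
    rss_region_create(uint8_t port_id, struct rte_flow_error *error)
    {
            const struct rte_flow_attr attr = { .ingress = 1 };
            const struct rte_flow_item pattern[] = {
                    { .type = RTE_FLOW_ITEM_TYPE_ETH },
                    { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                    { .type = RTE_FLOW_ITEM_TYPE_END },
            };
            struct rte_flow_action_rss *rss;
            struct rte_flow *flow;
            uint16_t q;

            /* rte_flow_action_rss ends in a flexible queue[] array. */
            rss = calloc(1, sizeof(*rss) + 4 * sizeof(rss->queue[0]));
            if (rss == NULL)
                    return NULL;
            rss->rss_conf = NULL;   /* NULL: use the device's RSS defaults */
            rss->num = 4;
            for (q = 0; q < 4; q++)
                    rss->queue[q] = q;

            const struct rte_flow_action actions[] = {
                    { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = rss },
                    { .type = RTE_FLOW_ACTION_TYPE_END },
            };
            flow = rte_flow_create(port_id, &attr, pattern, actions, error);
            free(rss);              /* the PMD keeps its own copy */
            return flow;
    }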
* Re: [dpdk-dev] [PATCH v6 1/5] ethdev: new xstats API add retrieving by ID
2017-04-13 14:59 3% ` [dpdk-dev] [PATCH v6 1/5] ethdev: new xstats API add retrieving by ID Kuba Kozak
@ 2017-04-13 16:23 0% ` Van Haaren, Harry
0 siblings, 0 replies; 200+ results
From: Van Haaren, Harry @ 2017-04-13 16:23 UTC (permalink / raw)
To: Kozak, KubaX, dev; +Cc: Jain, Deepak K, Piasecki, JacekX, Kulasek, TomaszX
> From: Kozak, KubaX
> Sent: Thursday, April 13, 2017 3:59 PM
> To: dev@dpdk.org
> Cc: Van Haaren, Harry <harry.van.haaren@intel.com>; Jain, Deepak K <deepak.k.jain@intel.com>;
> Piasecki, JacekX <jacekx.piasecki@intel.com>; Kozak, KubaX <kubax.kozak@intel.com>; Kulasek,
> TomaszX <tomaszx.kulasek@intel.com>
> Subject: [PATCH v6 1/5] ethdev: new xstats API add retrieving by ID
>
> From: Jacek Piasecki <jacekx.piasecki@intel.com>
>
> Extended xstats API in ethdev library to allow grouping of stats
> logically so they can be retrieved per logical grouping managed
> by the application.
> Changed existing functions rte_eth_xstats_get_names and
> rte_eth_xstats_get to use a new list of arguments: array of ids
> and array of values. ABI versioning mechanism was used to
> support backward compatibility.
> Introduced two new functions rte_eth_xstats_get_all and
> rte_eth_xstats_get_names_all which keep the functionality of the
> previous ones (respectively rte_eth_xstats_get and
> rte_eth_xstats_get_names) but use the new API inside.
>
> test-pmd: add support for new xstats API retrieving by id in
> testpmd application: xstats_get() and
> xstats_get_names() call with modified parameters.
>
> doc: add description for modified xstats API
> Documentation change for modified extended statistics API functions.
> The old API only allows retrieval of *all* of the NIC statistics
> at once. Given this requires a MMIO read PCI transaction per statistic
> it is an inefficient way of retrieving just a few key statistics.
> Often a monitoring agent only has an interest in a few key statistics,
> and the old API forces wasting CPU time and PCIe bandwidth in retrieving
> *all* statistics; even those that the application didn't explicitly
> show an interest in.
> The new, more flexible API allows retrieval of statistics per ID.
> If a PMD wishes, it can be implemented to read just the required
> NIC registers. As a result, the monitoring application no longer wastes
> PCIe bandwidth and CPU time.
>
> Signed-off-by: Jacek Piasecki <jacekx.piasecki@intel.com>
> Signed-off-by: Kuba Kozak <kubax.kozak@intel.com>
> Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v6 0/5] Extended xstats API in ethdev library to allow grouping of stats
2017-04-13 14:59 4% ` [dpdk-dev] [PATCH v6 0/5] Extended xstats API in ethdev library to allow grouping of stats Kuba Kozak
2017-04-13 14:59 3% ` [dpdk-dev] [PATCH v6 1/5] ethdev: new xstats API add retrieving by ID Kuba Kozak
2017-04-13 14:59 4% ` [dpdk-dev] [PATCH v6 2/5] ethdev: added new function for xstats ID Kuba Kozak
@ 2017-04-13 16:21 0% ` Van Haaren, Harry
2017-04-20 20:31 0% ` Thomas Monjalon
3 siblings, 0 replies; 200+ results
From: Van Haaren, Harry @ 2017-04-13 16:21 UTC (permalink / raw)
To: Kozak, KubaX, dev; +Cc: Jain, Deepak K
> From: Kozak, KubaX
> Sent: Thursday, April 13, 2017 3:59 PM
> To: dev@dpdk.org
> Cc: Van Haaren, Harry <harry.van.haaren@intel.com>; Jain, Deepak K <deepak.k.jain@intel.com>;
> Kozak, KubaX <kubax.kozak@intel.com>
> Subject: [PATCH v6 0/5] Extended xstats API in ethdev library to allow grouping of stats
>
> Extended xstats API in ethdev library to allow grouping of stats logically
> so they can be retrieved per logical grouping managed by the application.
> Changed existing functions rte_eth_xstats_get_names and rte_eth_xstats_get
> to use a new list of arguments: array of ids and array of values.
> ABI versioning mechanism was used to support backward compatibility.
> Introduced two new functions rte_eth_xstats_get_all and
> rte_eth_xstats_get_names_all which keep the functionality of the previous
> ones (respectively rte_eth_xstats_get and rte_eth_xstats_get_names)
> but use the new API inside. Both functions are marked as deprecated.
> Introduced new function: rte_eth_xstats_get_id_by_name to retrieve
> xstats ids by their names.
> Extended functionality of proc_info application:
> --xstats-name NAME: to display single xstat value by NAME
> Updated test-pmd application to use new API.
>
> v6 changes:
> * patches arrangement in patchset
> * fixes spelling bugs
> * release notes
>
> v5 changes:
> * fix clang shared build compilation
> * remove wrong versioning macros
> * Makefile LIBABIVER 6 change
>
> v4 changes:
> * documentation change after API modification
> * fix xstats display for PMD without _by_ids() functions
> * fix ABI validator errors
>
> v3 changes:
> * checkpatch fixes
> * removed malloc bug in ethdev
> * add new command to proc_info and IDs parsing
> * merged testpmd and proc_info patch with library patch
Please keep Acks when a new version only re-arranges patches or makes small changes.
I'll re-Ack the patches on patchwork now :)
Thanks, -Harry
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v4 0/3] MAC address fail to be added shouldn't be stored
2017-04-13 8:21 3% ` [dpdk-dev] [PATCH v4 0/3] MAC address fail to be added shouldn't be stored Wei Dai
@ 2017-04-13 13:54 0% ` Ananyev, Konstantin
0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2017-04-13 13:54 UTC (permalink / raw)
To: Dai, Wei, thomas.monjalon, harish.patil, rasesh.mody,
stephen.hurd, ajit.khaparde, Lu, Wenzhuo, Zhang, Helin, Wu,
Jingjing, Chen, Jing D, adrien.mazarguil, nelio.laranjeiro,
Richardson, Bruce, yuanhan.liu, maxime.coquelin
Cc: dev
>
> Current ethdev always stores a MAC address even if it fails to be added.
> Other functions may regard the failed MAC address as valid, which can lead
> to some errors. So there is a need to check whether the address was added
> successfully or not and discard it if it fails.
>
> The 3rd patch adds a command "add_more_mac_addr port_id base_mac_addr count"
> to add more than one MAC address at a time.
> This command simplifies the test for the first patch.
> Normally a MAC address may fail to be added only after many MAC
> addresses have been added.
> Without this command, a tester could only trigger a MAC address failure
> by running the testpmd command 'mac_addr add' many times.
>
> ---
> Changes
> v4:
> 1. rebase master branch
> 2. follow code style
>
> v3:
> 1. Change return value for some specific NIC according to feedbacks
> from the community;
> 2. Add ABI change in release note;
> 3. Add more detailed commit message.
>
> v2:
> fix warnings and errors from check-git-log.sh and checkpatch.pl
>
> Wei Dai (3):
> ethdev: fix adding invalid MAC addr
> doc: change type of return value of adding MAC addr
> app/testpmd: add a command to add many MAC addrs
>
> app/test-pmd/cmdline.c | 55 ++++++++++++++++++++++++++++++++++
> doc/guides/rel_notes/release_17_05.rst | 7 +++++
> drivers/net/bnx2x/bnx2x_ethdev.c | 7 +++--
> drivers/net/bnxt/bnxt_ethdev.c | 12 ++++----
> drivers/net/e1000/base/e1000_api.c | 2 +-
> drivers/net/e1000/em_ethdev.c | 6 ++--
> drivers/net/e1000/igb_ethdev.c | 5 ++--
> drivers/net/enic/enic.h | 2 +-
> drivers/net/enic/enic_ethdev.c | 4 +--
> drivers/net/enic/enic_main.c | 6 ++--
> drivers/net/fm10k/fm10k_ethdev.c | 3 +-
> drivers/net/i40e/i40e_ethdev.c | 11 +++----
> drivers/net/i40e/i40e_ethdev_vf.c | 8 ++---
> drivers/net/ixgbe/ixgbe_ethdev.c | 27 +++++++++++------
> drivers/net/mlx4/mlx4.c | 18 ++++++-----
> drivers/net/mlx5/mlx5.h | 4 +--
> drivers/net/mlx5/mlx5_mac.c | 16 ++++++----
> drivers/net/qede/qede_ethdev.c | 6 ++--
> drivers/net/ring/rte_eth_ring.c | 3 +-
> drivers/net/virtio/virtio_ethdev.c | 13 ++++----
> lib/librte_ether/rte_ethdev.c | 15 ++++++----
> lib/librte_ether/rte_ethdev.h | 2 +-
> 22 files changed, 162 insertions(+), 70 deletions(-)
>
> --
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 2.7.4
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v6 2/5] ethdev: added new function for xstats ID
2017-04-13 14:59 4% ` [dpdk-dev] [PATCH v6 0/5] Extended xstats API in ethdev library to allow grouping of stats Kuba Kozak
2017-04-13 14:59 3% ` [dpdk-dev] [PATCH v6 1/5] ethdev: new xstats API add retrieving by ID Kuba Kozak
@ 2017-04-13 14:59 4% ` Kuba Kozak
2017-04-13 16:21 0% ` [dpdk-dev] [PATCH v6 0/5] Extended xstats API in ethdev library to allow grouping of stats Van Haaren, Harry
2017-04-20 20:31 0% ` Thomas Monjalon
3 siblings, 0 replies; 200+ results
From: Kuba Kozak @ 2017-04-13 14:59 UTC (permalink / raw)
To: dev; +Cc: harry.van.haaren, deepak.k.jain, Kuba Kozak
Introduced new function: rte_eth_xstats_get_id_by_name
to retrieve xstats ids by their names.
doc: added release note
Signed-off-by: Kuba Kozak <kubax.kozak@intel.com>
---
doc/guides/rel_notes/release_17_05.rst | 2 ++
lib/librte_ether/rte_ethdev.c | 44 ++++++++++++++++++++++++++++++++++
lib/librte_ether/rte_ethdev.h | 21 ++++++++++++++++
lib/librte_ether/rte_ether_version.map | 1 +
4 files changed, 68 insertions(+)
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index 5b77226..dae4261 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -423,6 +423,8 @@ API Changes
* Added new functions ``rte_eth_xstats_get_all`` and ``rte_eth_xstats_get_names_all`` to provide backward compatibility for
``rte_eth_xstats_get`` and ``rte_eth_xstats_get_names``
+ * Added new function ``rte_eth_xstats_get_id_by_name``
+
ABI Changes
-----------
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 0adc1d0..ef30883 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1476,6 +1476,50 @@ struct rte_eth_dev *
}
int
+rte_eth_xstats_get_id_by_name(uint8_t port_id, const char *xstat_name,
+ uint64_t *id)
+{
+ int cnt_xstats, idx_xstat;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+
+ if (!id) {
+ RTE_PMD_DEBUG_TRACE("Error: id pointer is NULL\n");
+ return -1;
+ }
+
+ if (!xstat_name) {
+ RTE_PMD_DEBUG_TRACE("Error: xstat_name pointer is NULL\n");
+ return -1;
+ }
+
+ /* Get count */
+ cnt_xstats = rte_eth_xstats_get_names(port_id, NULL, 0, NULL);
+ if (cnt_xstats < 0) {
+ RTE_PMD_DEBUG_TRACE("Error: Cannot get count of xstats\n");
+ return -1;
+ }
+
+ /* Get id-name lookup table */
+ struct rte_eth_xstat_name xstats_names[cnt_xstats];
+
+ if (cnt_xstats != rte_eth_xstats_get_names(
+ port_id, xstats_names, cnt_xstats, NULL)) {
+ RTE_PMD_DEBUG_TRACE("Error: Cannot get xstats lookup\n");
+ return -1;
+ }
+
+ for (idx_xstat = 0; idx_xstat < cnt_xstats; idx_xstat++) {
+ if (!strcmp(xstats_names[idx_xstat].name, xstat_name)) {
+ *id = idx_xstat;
+ return 0;
+ }
+ }
+
+ return -EINVAL;
+}
+
+int
rte_eth_xstats_get_names_v1607(uint8_t port_id,
struct rte_eth_xstat_name *xstats_names,
unsigned int size)
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 8c94b88..058c435 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -2346,6 +2346,27 @@ int rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
*/
void rte_eth_stats_reset(uint8_t port_id);
+
+/**
+ * Gets the ID of a statistic from its name.
+ *
+ * This function searches for the statistics using string compares, and
+ * as such should not be used on the fast-path. For fast-path retrieval of
+ * specific statistics, store the ID as provided in *id* from this function,
+ * and pass the ID to rte_eth_xstats_get()
+ *
+ * @param port_id The port to look up statistics from
+ * @param xstat_name The name of the statistic to return
+ * @param[out] id A pointer to an app-supplied uint64_t which should be
+ * set to the ID of the stat if the stat exists.
+ * @return
+ * 0 on success
+ * -ENODEV for invalid port_id,
+ * -EINVAL if the xstat_name doesn't exist in port_id
+ */
+int rte_eth_xstats_get_id_by_name(uint8_t port_id, const char *xstat_name,
+ uint64_t *id);
+
/**
* Retrieve all extended statistics of an Ethernet device.
*
diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map
index a404434..7c41617 100644
--- a/lib/librte_ether/rte_ether_version.map
+++ b/lib/librte_ether/rte_ether_version.map
@@ -161,6 +161,7 @@ DPDK_17.05 {
rte_eth_find_next;
rte_eth_xstats_get;
rte_eth_xstats_get_all;
+ rte_eth_xstats_get_id_by_name;
rte_eth_xstats_get_names;
rte_eth_xstats_get_names_all;
--
1.9.1
^ permalink raw reply [relevance 4%]
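A short usage sketch for the new function: resolve the statistic name once on the slow path, then poll by ID on the fast path. The name "rx_errors" is illustrative; which xstats exist depends on the PMD.

    uint64_t id, value;

    /* Resolve the name once (string compares: slow path only). */
    if (rte_eth_xstats_get_id_by_name(port_id, "rx_errors", &id) == 0) {
            /* Fast path: fetch a single statistic by its cached ID. */
            rte_eth_xstats_get(port_id, &id, &value, 1);
            printf("rx_errors: %" PRIu64 "\n", value);
    }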
* [dpdk-dev] [PATCH v6 1/5] ethdev: new xstats API add retrieving by ID
2017-04-13 14:59 4% ` [dpdk-dev] [PATCH v6 0/5] Extended xstats API in ethdev library to allow grouping of stats Kuba Kozak
@ 2017-04-13 14:59 3% ` Kuba Kozak
2017-04-13 16:23 0% ` Van Haaren, Harry
2017-04-13 14:59 4% ` [dpdk-dev] [PATCH v6 2/5] ethdev: added new function for xstats ID Kuba Kozak
` (2 subsequent siblings)
3 siblings, 1 reply; 200+ results
From: Kuba Kozak @ 2017-04-13 14:59 UTC (permalink / raw)
To: dev
Cc: harry.van.haaren, deepak.k.jain, Jacek Piasecki, Kuba Kozak,
Tomasz Kulasek
From: Jacek Piasecki <jacekx.piasecki@intel.com>
Extended xstats API in ethdev library to allow grouping of stats
logically so they can be retrieved per logical grouping managed
by the application.
Changed existing functions rte_eth_xstats_get_names and
rte_eth_xstats_get to use a new list of arguments: array of ids
and array of values. ABI versioning mechanism was used to
support backward compatibility.
Introduced two new functions rte_eth_xstats_get_all and
rte_eth_xstats_get_names_all which keep the functionality of the
previous ones (respectively rte_eth_xstats_get and
rte_eth_xstats_get_names) but use the new API inside.
test-pmd: add support for new xstats API retrieving by id in
testpmd application: xstats_get() and
xstats_get_names() call with modified parameters.
doc: add description for modified xstats API
Documentation change for modified extended statistics API functions.
The old API only allows retrieval of *all* of the NIC statistics
at once. Given this requires an MMIO read PCI transaction per statistic,
it is an inefficient way of retrieving just a few key statistics.
Often a monitoring agent only has an interest in a few key statistics,
and the old API forces wasting CPU time and PCIe bandwidth in retrieving
*all* statistics; even those that the application didn't explicitly
show an interest in.
The new, more flexible API allows retrieval of statistics per ID.
If a PMD wishes, it can be implemented to read just the required
NIC registers. As a result, the monitoring application no longer wastes
PCIe bandwidth and CPU time.
Signed-off-by: Jacek Piasecki <jacekx.piasecki@intel.com>
Signed-off-by: Kuba Kozak <kubax.kozak@intel.com>
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
---
app/proc_info/main.c | 6 +-
app/test-pmd/config.c | 19 +-
doc/guides/prog_guide/poll_mode_drv.rst | 173 ++++++++++++--
doc/guides/rel_notes/release_17_05.rst | 6 +
lib/librte_ether/rte_ethdev.c | 386 +++++++++++++++++++++++---------
lib/librte_ether/rte_ethdev.h | 144 +++++++++++-
lib/librte_ether/rte_ether_version.map | 4 +
7 files changed, 596 insertions(+), 142 deletions(-)
diff --git a/app/proc_info/main.c b/app/proc_info/main.c
index d576b42..9f5e219 100644
--- a/app/proc_info/main.c
+++ b/app/proc_info/main.c
@@ -358,7 +358,7 @@ static void collectd_resolve_cnt_type(char *cnt_type, size_t cnt_type_len,
int len, ret, i;
static const char *nic_stats_border = "########################";
- len = rte_eth_xstats_get_names(port_id, NULL, 0);
+ len = rte_eth_xstats_get_names_all(port_id, NULL, 0);
if (len < 0) {
printf("Cannot get xstats count\n");
return;
@@ -375,7 +375,7 @@ static void collectd_resolve_cnt_type(char *cnt_type, size_t cnt_type_len,
free(xstats);
return;
}
- if (len != rte_eth_xstats_get_names(
+ if (len != rte_eth_xstats_get_names_all(
port_id, xstats_names, len)) {
printf("Cannot get xstat names\n");
goto err;
@@ -385,7 +385,7 @@ static void collectd_resolve_cnt_type(char *cnt_type, size_t cnt_type_len,
port_id);
printf("%s############################\n",
nic_stats_border);
- ret = rte_eth_xstats_get(port_id, xstats, len);
+ ret = rte_eth_xstats_get_all(port_id, xstats, len);
if (ret < 0 || ret > len) {
printf("Cannot get xstats\n");
goto err;
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 4d873cd..ef07925 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -264,9 +264,9 @@ struct rss_type_info {
void
nic_xstats_display(portid_t port_id)
{
- struct rte_eth_xstat *xstats;
int cnt_xstats, idx_xstat;
struct rte_eth_xstat_name *xstats_names;
+ uint64_t *values;
printf("###### NIC extended statistics for port %-2d\n", port_id);
if (!rte_eth_dev_is_valid_port(port_id)) {
@@ -275,7 +275,7 @@ struct rss_type_info {
}
/* Get count */
- cnt_xstats = rte_eth_xstats_get_names(port_id, NULL, 0);
+ cnt_xstats = rte_eth_xstats_get_names(port_id, NULL, 0, NULL);
if (cnt_xstats < 0) {
printf("Error: Cannot get count of xstats\n");
return;
@@ -288,23 +288,24 @@ struct rss_type_info {
return;
}
if (cnt_xstats != rte_eth_xstats_get_names(
- port_id, xstats_names, cnt_xstats)) {
+ port_id, xstats_names, cnt_xstats, NULL)) {
printf("Error: Cannot get xstats lookup\n");
free(xstats_names);
return;
}
/* Get stats themselves */
- xstats = malloc(sizeof(struct rte_eth_xstat) * cnt_xstats);
- if (xstats == NULL) {
+ values = malloc(sizeof(*values) * cnt_xstats);
+ if (values == NULL) {
printf("Cannot allocate memory for xstats\n");
free(xstats_names);
return;
}
- if (cnt_xstats != rte_eth_xstats_get(port_id, xstats, cnt_xstats)) {
+ if (cnt_xstats != rte_eth_xstats_get(port_id, NULL, values,
+ cnt_xstats)) {
printf("Error: Unable to get xstats\n");
free(xstats_names);
- free(xstats);
+ free(values);
return;
}
@@ -312,9 +313,9 @@ struct rss_type_info {
for (idx_xstat = 0; idx_xstat < cnt_xstats; idx_xstat++)
printf("%s: %"PRIu64"\n",
xstats_names[idx_xstat].name,
- xstats[idx_xstat].value);
+ values[idx_xstat]);
free(xstats_names);
- free(xstats);
+ free(values);
}
void
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index e48c121..a1a758b 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -334,24 +334,21 @@ The Ethernet device API exported by the Ethernet PMDs is described in the *DPDK
Extended Statistics API
~~~~~~~~~~~~~~~~~~~~~~~
-The extended statistics API allows each individual PMD to expose a unique set
-of statistics. Accessing these from application programs is done via two
-functions:
-
-* ``rte_eth_xstats_get``: Fills in an array of ``struct rte_eth_xstat``
- with extended statistics.
-* ``rte_eth_xstats_get_names``: Fills in an array of
- ``struct rte_eth_xstat_name`` with extended statistic name lookup
- information.
-
-Each ``struct rte_eth_xstat`` contains an identifier and value pair, and
-each ``struct rte_eth_xstat_name`` contains a string. Each identifier
-within the ``struct rte_eth_xstat`` lookup array must have a corresponding
-entry in the ``struct rte_eth_xstat_name`` lookup array. Within the latter
-the index of the entry is the identifier the string is associated with.
-These identifiers, as well as the number of extended statistic exposed, must
-remain constant during runtime. Note that extended statistic identifiers are
+The extended statistics API allows a PMD to expose all statistics that are
+available to it, including statistics that are unique to the device.
+Each statistic has three properties ``name``, ``id`` and ``value``:
+
+* ``name``: A human readable string formatted by the scheme detailed below.
+* ``id``: An integer that represents only that statistic.
+* ``value``: An unsigned 64-bit integer that is the value of the statistic.
+
+Note that extended statistic identifiers are
driver-specific, and hence might not be the same for different ports.
+The API consists of various ``rte_eth_xstats_*()`` functions, and allows an
+application to be flexible in how it retrieves statistics.
+
+Scheme for Human Readable Names
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A naming scheme exists for the strings exposed to clients of the API. This is
to allow scraping of the API for statistics of interest. The naming scheme uses
@@ -363,8 +360,8 @@ strings split by a single underscore ``_``. The scheme is as follows:
* detail n
* unit
-Examples of common statistics xstats strings, formatted to comply to the scheme
-proposed above:
+Examples of common statistics xstats strings, formatted to comply with the
+above scheme:
* ``rx_bytes``
* ``rx_crc_errors``
@@ -378,7 +375,7 @@ associated with the receive side of the NIC. The second component ``packets``
indicates that the unit of measure is packets.
A more complicated example: ``tx_size_128_to_255_packets``. In this example,
-``tx`` indicates transmission, ``size`` is the first detail, ``128`` etc are
+``tx`` indicates transmission, ``size`` is the first detail, ``128`` etc. are
more details, and ``packets`` indicates that this is a packet counter.
Some additions in the metadata scheme are as follows:
@@ -392,3 +389,139 @@ Some additions in the metadata scheme are as follows:
An example where queue numbers are used is as follows: ``tx_q7_bytes`` which
indicates this statistic applies to queue number 7, and represents the number
of transmitted bytes on that queue.
+
+API Design
+^^^^^^^^^^
+
+The xstats API uses the ``name``, ``id``, and ``value`` to allow performant
+lookup of specific statistics. Performant lookup means two things:
+
+* No string comparisons with the ``name`` of the statistic in the fast-path
+* Requesting only the statistics of interest
+
+The API ensures these requirements are met by mapping the ``name`` of the
+statistic to a unique ``id``, which is used as a key for lookup in the fast-path.
+The API allows applications to request an array of ``id`` values, so that the
+PMD only performs the required calculations. Expected usage is that the
+application scans the ``name`` of each statistic, and caches the ``id``
+if it has an interest in that statistic. On the fast-path, the integer can be used
+to retrieve the actual ``value`` of the statistic that the ``id`` represents.
+
+API Functions
+^^^^^^^^^^^^^
+
+The API is built out of a small number of functions, which can be used to
+retrieve the number of statistics and the names, IDs and values of those
+statistics.
+
+* ``rte_eth_xstats_get_names()``: returns the names of the statistics. When given a
+ ``NULL`` parameter the function returns the number of statistics that are available.
+
+* ``rte_eth_xstats_get_id_by_name()``: Searches for the statistic ID that matches
+ ``xstat_name``. If found, the ``id`` integer is set.
+
+* ``rte_eth_xstats_get()``: Fills in an array of ``uint64_t`` values
+ matching the provided ``ids`` array. If the ``ids`` array is NULL, it
+ returns all statistics that are available.
+
+
+Application Usage
+^^^^^^^^^^^^^^^^^
+
+Imagine an application that wants to view the dropped packet count. If no
+packets are dropped, the application does not read any other metrics for
+performance reasons. If packets are dropped, the application has a particular
+set of statistics that it requests. This "set" of statistics allows the app to
+decide what next steps to perform. The following code-snippets show how the
+xstats API can be used to achieve this goal.
+
+The first step is to get the names of all statistics and list them:
+
+.. code-block:: c
+
+ struct rte_eth_xstat_name *xstats_names;
+ uint64_t *values;
+ int len, i;
+
+ /* Get number of stats */
+ len = rte_eth_xstats_get_names(port_id, NULL, 0, NULL);
+ if (len < 0) {
+ printf("Cannot get xstats count\n");
+ goto err;
+ }
+
+ xstats_names = malloc(sizeof(struct rte_eth_xstat_name) * len);
+ if (xstats_names == NULL) {
+ printf("Cannot allocate memory for xstat names\n");
+ goto err;
+ }
+
+ /* Retrieve xstats names, passing NULL for IDs to return all statistics */
+ if (len != rte_eth_xstats_get_names(port_id, xstats_names, len, NULL)) {
+ printf("Cannot get xstat names\n");
+ goto err;
+ }
+
+ values = malloc(sizeof(*values) * len);
+ if (values == NULL) {
+ printf("Cannot allocate memory for xstats\n");
+ goto err;
+ }
+
+ /* Getting xstats values */
+ if (len != rte_eth_xstats_get(port_id, NULL, values, len)) {
+ printf("Cannot get xstat values\n");
+ goto err;
+ }
+
+ /* Print all xstats names and values */
+ for (i = 0; i < len; i++) {
+ printf("%s: %"PRIu64"\n", xstats_names[i].name, values[i]);
+ }
+
+The application has access to the names of all of the statistics that the PMD
+exposes. The application can decide which statistics are of interest, and cache
+the ids of those statistics by looking up their names as follows:
+
+.. code-block:: c
+
+ uint64_t id;
+ uint64_t value;
+ const char *xstat_name = "rx_errors";
+
+ if (rte_eth_xstats_get_id_by_name(port_id, xstat_name, &id) == 0) {
+ rte_eth_xstats_get(port_id, &id, &value, 1);
+ printf("%s: %"PRIu64"\n", xstat_name, value);
+ } else {
+ printf("Cannot find xstat with the given name\n");
+ goto err;
+ }
+
+The API provides flexibility to the application so that it can look up multiple
+statistics using an array containing multiple ``id`` numbers. This reduces the
+function call overhead of retrieving statistics, and makes lookup of multiple
+statistics simpler for the application.
+
+.. code-block:: c
+
+ #define APP_NUM_STATS 4
+ /* application cached these ids previously; see above */
+ uint64_t ids_array[APP_NUM_STATS] = {3, 4, 7, 21};
+ uint64_t value_array[APP_NUM_STATS];
+
+ /* Getting multiple xstats values from array of IDs */
+ rte_eth_xstats_get(port_id, ids_array, value_array, APP_NUM_STATS);
+
+ uint32_t i;
+ for (i = 0; i < APP_NUM_STATS; i++) {
+ printf("%"PRIu64": %"PRIu64"\n", ids_array[i], value_array[i]);
+ }
+
+
+This array lookup API for xstats allows the application to create multiple
+"groups" of statistics, and look up the values of those IDs using a single API
+call. As an end result, the application is able to achieve its goal of
+monitoring a single statistic ("rx_errors" in this case), and if that shows
+packets being dropped, it can easily retrieve a "set" of statistics using the
+IDs array parameter to the ``rte_eth_xstats_get`` function.
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index 4968b8f..5b77226 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -416,6 +416,12 @@ API Changes
* The vhost public header file ``rte_virtio_net.h`` is renamed to
``rte_vhost.h``
+* **Reworked rte_ethdev library**
+
+ * Changed set of input parameters for ``rte_eth_xstats_get`` and ``rte_eth_xstats_get_names`` functions.
+
+ * Added new functions ``rte_eth_xstats_get_all`` and ``rte_eth_xstats_get_names_all`` to provide backward compatibility for
+ ``rte_eth_xstats_get`` and ``rte_eth_xstats_get_names``
ABI Changes
-----------
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 4e1e6dc..0adc1d0 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1454,12 +1454,19 @@ struct rte_eth_dev *
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
dev = &rte_eth_devices[port_id];
+ if (dev->dev_ops->xstats_get_names_by_ids != NULL) {
+ count = (*dev->dev_ops->xstats_get_names_by_ids)(dev, NULL,
+ NULL, 0);
+ if (count < 0)
+ return count;
+ }
if (dev->dev_ops->xstats_get_names != NULL) {
count = (*dev->dev_ops->xstats_get_names)(dev, NULL, 0);
if (count < 0)
return count;
} else
count = 0;
+
count += RTE_NB_STATS;
count += RTE_MIN(dev->data->nb_rx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS) *
RTE_NB_RXQ_STATS;
@@ -1469,150 +1476,323 @@ struct rte_eth_dev *
}
int
-rte_eth_xstats_get_names(uint8_t port_id,
+rte_eth_xstats_get_names_v1607(uint8_t port_id,
struct rte_eth_xstat_name *xstats_names,
- unsigned size)
+ unsigned int size)
{
- struct rte_eth_dev *dev;
- int cnt_used_entries;
- int cnt_expected_entries;
- int cnt_driver_entries;
- uint32_t idx, id_queue;
- uint16_t num_q;
+ return rte_eth_xstats_get_names(port_id, xstats_names, size, NULL);
+}
+VERSION_SYMBOL(rte_eth_xstats_get_names, _v1607, 16.07);
- cnt_expected_entries = get_xstats_count(port_id);
- if (xstats_names == NULL || cnt_expected_entries < 0 ||
- (int)size < cnt_expected_entries)
- return cnt_expected_entries;
+int
+rte_eth_xstats_get_names_v1705(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, unsigned int size,
+ uint64_t *ids)
+{
+ /* Get all xstats */
+ if (!ids) {
+ struct rte_eth_dev *dev;
+ int cnt_used_entries;
+ int cnt_expected_entries;
+ int cnt_driver_entries;
+ uint32_t idx, id_queue;
+ uint16_t num_q;
- /* port_id checked in get_xstats_count() */
- dev = &rte_eth_devices[port_id];
- cnt_used_entries = 0;
+ cnt_expected_entries = get_xstats_count(port_id);
+ if (xstats_names == NULL || cnt_expected_entries < 0 ||
+ (int)size < cnt_expected_entries)
+ return cnt_expected_entries;
- for (idx = 0; idx < RTE_NB_STATS; idx++) {
- snprintf(xstats_names[cnt_used_entries].name,
- sizeof(xstats_names[0].name),
- "%s", rte_stats_strings[idx].name);
- cnt_used_entries++;
- }
- num_q = RTE_MIN(dev->data->nb_rx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
- for (id_queue = 0; id_queue < num_q; id_queue++) {
- for (idx = 0; idx < RTE_NB_RXQ_STATS; idx++) {
+ /* port_id checked in get_xstats_count() */
+ dev = &rte_eth_devices[port_id];
+ cnt_used_entries = 0;
+
+ for (idx = 0; idx < RTE_NB_STATS; idx++) {
snprintf(xstats_names[cnt_used_entries].name,
sizeof(xstats_names[0].name),
- "rx_q%u%s",
- id_queue, rte_rxq_stats_strings[idx].name);
+ "%s", rte_stats_strings[idx].name);
cnt_used_entries++;
}
+ num_q = RTE_MIN(dev->data->nb_rx_queues,
+ RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ for (id_queue = 0; id_queue < num_q; id_queue++) {
+ for (idx = 0; idx < RTE_NB_RXQ_STATS; idx++) {
+ snprintf(xstats_names[cnt_used_entries].name,
+ sizeof(xstats_names[0].name),
+ "rx_q%u%s",
+ id_queue,
+ rte_rxq_stats_strings[idx].name);
+ cnt_used_entries++;
+ }
+ }
+ num_q = RTE_MIN(dev->data->nb_tx_queues,
+ RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ for (id_queue = 0; id_queue < num_q; id_queue++) {
+ for (idx = 0; idx < RTE_NB_TXQ_STATS; idx++) {
+ snprintf(xstats_names[cnt_used_entries].name,
+ sizeof(xstats_names[0].name),
+ "tx_q%u%s",
+ id_queue,
+ rte_txq_stats_strings[idx].name);
+ cnt_used_entries++;
+ }
+ }
+
+ if (dev->dev_ops->xstats_get_names_by_ids != NULL) {
+ /* If there are any driver-specific xstats, append them
+ * to end of list.
+ */
+ cnt_driver_entries =
+ (*dev->dev_ops->xstats_get_names_by_ids)(
+ dev,
+ xstats_names + cnt_used_entries,
+ NULL,
+ size - cnt_used_entries);
+ if (cnt_driver_entries < 0)
+ return cnt_driver_entries;
+ cnt_used_entries += cnt_driver_entries;
+
+ } else if (dev->dev_ops->xstats_get_names != NULL) {
+ /* If there are any driver-specific xstats, append them
+ * to end of list.
+ */
+ cnt_driver_entries = (*dev->dev_ops->xstats_get_names)(
+ dev,
+ xstats_names + cnt_used_entries,
+ size - cnt_used_entries);
+ if (cnt_driver_entries < 0)
+ return cnt_driver_entries;
+ cnt_used_entries += cnt_driver_entries;
+ }
+
+ return cnt_used_entries;
}
- num_q = RTE_MIN(dev->data->nb_tx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
- for (id_queue = 0; id_queue < num_q; id_queue++) {
- for (idx = 0; idx < RTE_NB_TXQ_STATS; idx++) {
- snprintf(xstats_names[cnt_used_entries].name,
- sizeof(xstats_names[0].name),
- "tx_q%u%s",
- id_queue, rte_txq_stats_strings[idx].name);
- cnt_used_entries++;
+ /* Get only xstats given by IDS */
+ else {
+ uint16_t len, i;
+ struct rte_eth_xstat_name *xstats_names_copy;
+
+ len = rte_eth_xstats_get_names_v1705(port_id, NULL, 0, NULL);
+
+ xstats_names_copy =
+ malloc(sizeof(struct rte_eth_xstat_name) * len);
+ if (!xstats_names_copy) {
+ RTE_PMD_DEBUG_TRACE(
+ "ERROR: can't allocate memory for xstats_names_copy\n");
+ return -1;
+ }
+
+ rte_eth_xstats_get_names_v1705(port_id, xstats_names_copy,
+ len, NULL);
+
+ for (i = 0; i < size; i++) {
+ if (ids[i] >= len) {
+ RTE_PMD_DEBUG_TRACE(
+ "ERROR: id value isn't valid\n");
+ free(xstats_names_copy);
+ return -1;
+ }
+ strcpy(xstats_names[i].name,
+ xstats_names_copy[ids[i]].name);
}
+ free(xstats_names_copy);
+ return size;
}
+}
+BIND_DEFAULT_SYMBOL(rte_eth_xstats_get_names, _v1705, 17.05);
- if (dev->dev_ops->xstats_get_names != NULL) {
- /* If there are any driver-specific xstats, append them
- * to end of list.
- */
- cnt_driver_entries = (*dev->dev_ops->xstats_get_names)(
- dev,
- xstats_names + cnt_used_entries,
- size - cnt_used_entries);
- if (cnt_driver_entries < 0)
- return cnt_driver_entries;
- cnt_used_entries += cnt_driver_entries;
+MAP_STATIC_SYMBOL(int
+ rte_eth_xstats_get_names(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int size,
+ uint64_t *ids), rte_eth_xstats_get_names_v1705);
+
+/* retrieve ethdev extended statistics */
+int
+rte_eth_xstats_get_v22(uint8_t port_id, struct rte_eth_xstat *xstats,
+ unsigned int n)
+{
+ uint64_t *values_copy;
+ uint16_t size, i;
+
+ values_copy = malloc(sizeof(*values_copy) * n);
+ if (!values_copy) {
+ RTE_PMD_DEBUG_TRACE(
+ "ERROR: Cannot allocate memory for xstats\n");
+ return -1;
}
+ size = rte_eth_xstats_get(port_id, NULL, values_copy, n);
- return cnt_used_entries;
+ for (i = 0; i < n; i++) {
+ xstats[i].id = i;
+ xstats[i].value = values_copy[i];
+ }
+ free(values_copy);
+ return size;
}
+VERSION_SYMBOL(rte_eth_xstats_get, _v22, 2.2);
/* retrieve ethdev extended statistics */
int
-rte_eth_xstats_get(uint8_t port_id, struct rte_eth_xstat *xstats,
- unsigned n)
+rte_eth_xstats_get_v1705(uint8_t port_id, uint64_t *ids, uint64_t *values,
+ unsigned int n)
{
- struct rte_eth_stats eth_stats;
- struct rte_eth_dev *dev;
- unsigned count = 0, i, q;
- signed xcount = 0;
- uint64_t val, *stats_ptr;
- uint16_t nb_rxqs, nb_txqs;
+ /* If need all xstats */
+ if (!ids) {
+ struct rte_eth_stats eth_stats;
+ struct rte_eth_dev *dev;
+ unsigned int count = 0, i, q;
+ signed int xcount = 0;
+ uint64_t val, *stats_ptr;
+ uint16_t nb_rxqs, nb_txqs;
- RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
- dev = &rte_eth_devices[port_id];
+ nb_rxqs = RTE_MIN(dev->data->nb_rx_queues,
+ RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ nb_txqs = RTE_MIN(dev->data->nb_tx_queues,
+ RTE_ETHDEV_QUEUE_STAT_CNTRS);
- nb_rxqs = RTE_MIN(dev->data->nb_rx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
- nb_txqs = RTE_MIN(dev->data->nb_tx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ /* Return generic statistics */
+ count = RTE_NB_STATS + (nb_rxqs * RTE_NB_RXQ_STATS) +
+ (nb_txqs * RTE_NB_TXQ_STATS);
- /* Return generic statistics */
- count = RTE_NB_STATS + (nb_rxqs * RTE_NB_RXQ_STATS) +
- (nb_txqs * RTE_NB_TXQ_STATS);
- /* implemented by the driver */
- if (dev->dev_ops->xstats_get != NULL) {
- /* Retrieve the xstats from the driver at the end of the
- * xstats struct.
- */
- xcount = (*dev->dev_ops->xstats_get)(dev,
- xstats ? xstats + count : NULL,
- (n > count) ? n - count : 0);
+ /* implemented by the driver */
+ if (dev->dev_ops->xstats_get_by_ids != NULL) {
+ /* Retrieve the xstats from the driver at the end of the
+ * xstats struct. Retrieve all xstats.
+ */
+ xcount = (*dev->dev_ops->xstats_get_by_ids)(dev,
+ NULL,
+ values ? values + count : NULL,
+ (n > count) ? n - count : 0);
+
+ if (xcount < 0)
+ return xcount;
+ /* implemented by the driver */
+ } else if (dev->dev_ops->xstats_get != NULL) {
+ /* Retrieve the xstats from the driver at the end of the
+ * xstats struct. Retrieve all xstats.
+ * Compatibility for PMD without xstats_get_by_ids
+ */
+ unsigned int size = (n > count) ? n - count : 1;
+ struct rte_eth_xstat xstats[size];
- if (xcount < 0)
- return xcount;
- }
+ xcount = (*dev->dev_ops->xstats_get)(dev,
+ values ? xstats : NULL, size);
- if (n < count + xcount || xstats == NULL)
- return count + xcount;
+ if (xcount < 0)
+ return xcount;
- /* now fill the xstats structure */
- count = 0;
- rte_eth_stats_get(port_id, &eth_stats);
+ if (values != NULL)
+ for (i = 0 ; i < (unsigned int)xcount; i++)
+ values[i + count] = xstats[i].value;
+ }
- /* global stats */
- for (i = 0; i < RTE_NB_STATS; i++) {
- stats_ptr = RTE_PTR_ADD(&eth_stats,
- rte_stats_strings[i].offset);
- val = *stats_ptr;
- xstats[count++].value = val;
- }
+ if (n < count + xcount || values == NULL)
+ return count + xcount;
- /* per-rxq stats */
- for (q = 0; q < nb_rxqs; q++) {
- for (i = 0; i < RTE_NB_RXQ_STATS; i++) {
+ /* now fill the xstats structure */
+ count = 0;
+ rte_eth_stats_get(port_id, &eth_stats);
+
+ /* global stats */
+ for (i = 0; i < RTE_NB_STATS; i++) {
stats_ptr = RTE_PTR_ADD(&eth_stats,
- rte_rxq_stats_strings[i].offset +
- q * sizeof(uint64_t));
+ rte_stats_strings[i].offset);
val = *stats_ptr;
- xstats[count++].value = val;
+ values[count++] = val;
+ }
+
+ /* per-rxq stats */
+ for (q = 0; q < nb_rxqs; q++) {
+ for (i = 0; i < RTE_NB_RXQ_STATS; i++) {
+ stats_ptr = RTE_PTR_ADD(&eth_stats,
+ rte_rxq_stats_strings[i].offset +
+ q * sizeof(uint64_t));
+ val = *stats_ptr;
+ values[count++] = val;
+ }
+ }
+
+ /* per-txq stats */
+ for (q = 0; q < nb_txqs; q++) {
+ for (i = 0; i < RTE_NB_TXQ_STATS; i++) {
+ stats_ptr = RTE_PTR_ADD(&eth_stats,
+ rte_txq_stats_strings[i].offset +
+ q * sizeof(uint64_t));
+ val = *stats_ptr;
+ values[count++] = val;
+ }
}
+
+ return count + xcount;
}
+ /* Need only xstats given by IDS array */
+ else {
+ uint16_t i, size;
+ uint64_t *values_copy;
- /* per-txq stats */
- for (q = 0; q < nb_txqs; q++) {
- for (i = 0; i < RTE_NB_TXQ_STATS; i++) {
- stats_ptr = RTE_PTR_ADD(&eth_stats,
- rte_txq_stats_strings[i].offset +
- q * sizeof(uint64_t));
- val = *stats_ptr;
- xstats[count++].value = val;
+ size = rte_eth_xstats_get_v1705(port_id, NULL, NULL, 0);
+
+ values_copy = malloc(sizeof(*values_copy) * size);
+ if (!values_copy) {
+ RTE_PMD_DEBUG_TRACE(
+ "ERROR: can't allocate memory for values_copy\n");
+ return -1;
+ }
+
+ rte_eth_xstats_get_v1705(port_id, NULL, values_copy, size);
+
+ for (i = 0; i < n; i++) {
+ if (ids[i] >= size) {
+ RTE_PMD_DEBUG_TRACE(
+ "ERROR: id value isn't valid\n");
+ free(values_copy);
+ return -1;
+ }
+ values[i] = values_copy[ids[i]];
}
+ free(values_copy);
+ return n;
+ }
+}
+BIND_DEFAULT_SYMBOL(rte_eth_xstats_get, _v1705, 17.05);
+
+MAP_STATIC_SYMBOL(int
+ rte_eth_xstats_get(uint8_t port_id, uint64_t *ids,
+ uint64_t *values, unsigned int n), rte_eth_xstats_get_v1705);
+
+int
+rte_eth_xstats_get_all(uint8_t port_id, struct rte_eth_xstat *xstats,
+ unsigned int n)
+{
+ uint64_t *values_copy;
+ uint16_t size, i;
+
+ values_copy = malloc(sizeof(*values_copy) * n);
+ if (!values_copy) {
+ RTE_PMD_DEBUG_TRACE(
+ "ERROR: Cannot allocate memory for xstats\n");
+ return -1;
}
+ size = rte_eth_xstats_get(port_id, NULL, values_copy, n);
- for (i = 0; i < count; i++)
+ for (i = 0; i < n; i++) {
xstats[i].id = i;
- /* add an offset to driver-specific stats */
- for ( ; i < count + xcount; i++)
- xstats[i].id += count;
+ xstats[i].value = values_copy[i];
+ }
+ free(values_copy);
+ return size;
+}
- return count + xcount;
+int
+rte_eth_xstats_get_names_all(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, unsigned int n)
+{
+ return rte_eth_xstats_get_names(port_id, xstats_names, n, NULL);
}
/* reset ethdev extended statistics */
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index d072538..8c94b88 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -186,6 +186,7 @@
#include "rte_ether.h"
#include "rte_eth_ctrl.h"
#include "rte_dev_info.h"
+#include "rte_compat.h"
struct rte_mbuf;
@@ -1118,6 +1119,10 @@ typedef int (*eth_xstats_get_t)(struct rte_eth_dev *dev,
struct rte_eth_xstat *stats, unsigned n);
/**< @internal Get extended stats of an Ethernet device. */
+typedef int (*eth_xstats_get_by_ids_t)(struct rte_eth_dev *dev,
+ uint64_t *ids, uint64_t *values, unsigned int n);
+/**< @internal Get extended stats of an Ethernet device by ID. */
+
typedef void (*eth_xstats_reset_t)(struct rte_eth_dev *dev);
/**< @internal Reset extended stats of an Ethernet device. */
@@ -1125,6 +1130,17 @@ typedef int (*eth_xstats_get_names_t)(struct rte_eth_dev *dev,
struct rte_eth_xstat_name *xstats_names, unsigned size);
/**< @internal Get names of extended stats of an Ethernet device. */
+typedef int (*eth_xstats_get_names_by_ids_t)(struct rte_eth_dev *dev,
+ struct rte_eth_xstat_name *xstats_names, uint64_t *ids,
+ unsigned int size);
+/**< @internal Get names of extended stats of an Ethernet device by ID. */
+
+typedef int (*eth_xstats_get_by_name_t)(struct rte_eth_dev *dev,
+ struct rte_eth_xstat_name *xstats_names,
+ struct rte_eth_xstat *xstat,
+ const char *name);
+/**< @internal Get xstat specified by name of an Ethernet device. */
+
typedef int (*eth_queue_stats_mapping_set_t)(struct rte_eth_dev *dev,
uint16_t queue_id,
uint8_t stat_idx,
@@ -1563,6 +1579,12 @@ struct eth_dev_ops {
eth_timesync_adjust_time timesync_adjust_time; /** Adjust the device clock. */
eth_timesync_read_time timesync_read_time; /** Get the device clock time. */
eth_timesync_write_time timesync_write_time; /** Set the device clock time. */
+ eth_xstats_get_by_ids_t xstats_get_by_ids;
+ /**< Get extended device statistics by ID. */
+ eth_xstats_get_names_by_ids_t xstats_get_names_by_ids;
+ /**< Get name of extended device statistics by ID. */
+ eth_xstats_get_by_name_t xstats_get_by_name;
+ /**< Get extended device statistics by name. */
};
/**
@@ -2325,7 +2347,32 @@ int rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
void rte_eth_stats_reset(uint8_t port_id);
/**
- * Retrieve names of extended statistics of an Ethernet device.
+ * Retrieve all extended statistics of an Ethernet device.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param xstats
+ * A pointer to a table of structure of type *rte_eth_xstat*
+ * to be filled with device statistics ids and values: id is the
+ * index of the name string in xstats_names (see rte_eth_xstats_get_names()),
+ * and value is the statistic counter.
+ * This parameter can be set to NULL if n is 0.
+ * @param n
+ * The size of the xstats array (number of elements).
+ * @return
+ * - A positive value lower or equal to n: success. The return value
+ * is the number of entries filled in the stats table.
+ * - A positive value higher than n: error, the given statistics table
+ * is too small. The return value corresponds to the size that should
+ * be given to succeed. The entries in the table are not valid and
+ * shall not be used by the caller.
+ * - A negative value on error (invalid port id).
+ */
+int rte_eth_xstats_get_all(uint8_t port_id, struct rte_eth_xstat *xstats,
+ unsigned int n);
+
+/**
+ * Retrieve names of all extended statistics of an Ethernet device.
*
* @param port_id
* The port identifier of the Ethernet device.
@@ -2333,7 +2380,7 @@ int rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
* An rte_eth_xstat_name array of at least *size* elements to
* be filled. If set to NULL, the function returns the required number
* of elements.
- * @param size
+ * @param n
* The size of the xstats_names array (number of elements).
* @return
* - A positive value lower or equal to size: success. The return value
@@ -2344,9 +2391,8 @@ int rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
* shall not be used by the caller.
* - A negative value on error (invalid port id).
*/
-int rte_eth_xstats_get_names(uint8_t port_id,
- struct rte_eth_xstat_name *xstats_names,
- unsigned size);
+int rte_eth_xstats_get_names_all(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, unsigned int n);
/**
* Retrieve extended statistics of an Ethernet device.
@@ -2370,8 +2416,92 @@ int rte_eth_xstats_get_names(uint8_t port_id,
* shall not be used by the caller.
* - A negative value on error (invalid port id).
*/
-int rte_eth_xstats_get(uint8_t port_id, struct rte_eth_xstat *xstats,
- unsigned n);
+int rte_eth_xstats_get_v22(uint8_t port_id, struct rte_eth_xstat *xstats,
+ unsigned int n);
+
+/**
+ * Retrieve extended statistics of an Ethernet device.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param ids
+ * A pointer to an ids array passed by the application. This tells which
+ * statistics values the function should retrieve. This parameter
+ * can be set to NULL if n is 0. In this case the function will retrieve
+ * all available statistics.
+ * @param values
+ * A pointer to a table to be filled with device statistics values.
+ * @param n
+ * The size of the ids array (number of elements).
+ * @return
+ * - A positive value lower or equal to n: success. The return value
+ * is the number of entries filled in the stats table.
+ * - A positive value higher than n: error, the given statistics table
+ * is too small. The return value corresponds to the size that should
+ * be given to succeed. The entries in the table are not valid and
+ * shall not be used by the caller.
+ * - A negative value on error (invalid port id).
+ */
+int rte_eth_xstats_get_v1705(uint8_t port_id, uint64_t *ids, uint64_t *values,
+ unsigned int n);
+
+int rte_eth_xstats_get(uint8_t port_id, uint64_t *ids, uint64_t *values,
+ unsigned int n);
+
+/**
+ * Retrieve names of extended statistics of an Ethernet device.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param xstats_names
+ * An rte_eth_xstat_name array of at least *n* elements to be filled
+ * with the names of the statistics.
+ * This parameter can be set to NULL if n is 0.
+ * @param n
+ * The size of the xstats array (number of elements).
+ * @return
+ * - A positive value lower or equal to n: success. The return value
+ * is the number of entries filled in the stats table.
+ * - A positive value higher than n: error, the given statistics table
+ * is too small. The return value corresponds to the size that should
+ * be given to succeed. The entries in the table are not valid and
+ * shall not be used by the caller.
+ * - A negative value on error (invalid port id).
+ */
+int rte_eth_xstats_get_names_v1607(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, unsigned int n);
+
+/**
+ * Retrieve names of extended statistics of an Ethernet device.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param xstats_names
+ * An rte_eth_xstat_name array of at least *size* elements to
+ * be filled. If set to NULL, the function returns the required number
+ * of elements.
+ * @param ids
+ * IDs array given by the application to retrieve specific statistics
+ * @param size
+ * The size of the xstats_names array (number of elements).
+ * @return
+ * - A positive value lower or equal to size: success. The return value
+ * is the number of entries filled in the stats table.
+ * - A positive value higher than size: error, the given statistics table
+ * is too small. The return value corresponds to the size that should
+ * be given to succeed. The entries in the table are not valid and
+ * shall not be used by the caller.
+ * - A negative value on error (invalid port id).
+ */
+int rte_eth_xstats_get_names_v1705(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, unsigned int size,
+ uint64_t *ids);
+
+int rte_eth_xstats_get_names(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, unsigned int size,
+ uint64_t *ids);
/**
* Reset extended statistics of an Ethernet device.
diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map
index 0ea3856..a404434 100644
--- a/lib/librte_ether/rte_ether_version.map
+++ b/lib/librte_ether/rte_ether_version.map
@@ -159,5 +159,9 @@ DPDK_17.05 {
global:
rte_eth_find_next;
+ rte_eth_xstats_get;
+ rte_eth_xstats_get_all;
+ rte_eth_xstats_get_names;
+ rte_eth_xstats_get_names_all;
} DPDK_17.02;
--
1.9.1
^ permalink raw reply [relevance 3%]
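On the driver side, a minimal sketch of how a PMD might wire up the new xstats_get_by_ids dev op introduced above. The counter count and the mypmd_read_counter() helper are hypothetical; a real driver reads its own registers here:

    #include <errno.h>
    #include <rte_ethdev.h>

    #define MYPMD_NB_XSTATS 8 /* hypothetical number of driver xstats */

    /* Hypothetical helper: read one device counter by index. */
    static uint64_t mypmd_read_counter(struct rte_eth_dev *dev, uint64_t idx);

    static int
    mypmd_xstats_get_by_ids(struct rte_eth_dev *dev, uint64_t *ids,
                    uint64_t *values, unsigned int n)
    {
            unsigned int i;

            /* ids == NULL: the caller wants all driver xstats (or, with
             * insufficient room, just the count). */
            if (ids == NULL) {
                    if (values == NULL || n < MYPMD_NB_XSTATS)
                            return MYPMD_NB_XSTATS;
                    for (i = 0; i < MYPMD_NB_XSTATS; i++)
                            values[i] = mypmd_read_counter(dev, i);
                    return MYPMD_NB_XSTATS;
            }

            /* Otherwise touch only the registers the application asked for. */
            for (i = 0; i < n; i++) {
                    if (ids[i] >= MYPMD_NB_XSTATS)
                            return -EINVAL;
                    values[i] = mypmd_read_counter(dev, ids[i]);
            }
            return n;
    }

    /* Registered alongside the existing ops, e.g.
     *   .xstats_get_by_ids = mypmd_xstats_get_by_ids,
     * in the driver's struct eth_dev_ops initializer. */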
* [dpdk-dev] [PATCH v6 0/5] Extended xstats API in ethdev library to allow grouping of stats
2017-04-11 16:37 1% ` [dpdk-dev] [PATCH v5 1/3] ethdev: new xstats API add retrieving by ID Michal Jastrzebski
2017-04-12 8:56 5% ` Van Haaren, Harry
@ 2017-04-13 14:59 4% ` Kuba Kozak
2017-04-13 14:59 3% ` [dpdk-dev] [PATCH v6 1/5] ethdev: new xstats API add retrieving by ID Kuba Kozak
` (3 more replies)
1 sibling, 4 replies; 200+ results
From: Kuba Kozak @ 2017-04-13 14:59 UTC (permalink / raw)
To: dev; +Cc: harry.van.haaren, deepak.k.jain, Kuba Kozak
Extended xstats API in ethdev library to allow grouping of stats logically
so they can be retrieved per logical grouping managed by the application.
Changed existing functions rte_eth_xstats_get_names and rte_eth_xstats_get
to use a new list of arguments: array of ids and array of values.
ABI versioning mechanism was used to support backward compatibility.
Introduced two new functions rte_eth_xstats_get_all and
rte_eth_xstats_get_names_all which keep the functionality of the previous
ones (respectively rte_eth_xstats_get and rte_eth_xstats_get_names)
but use the new API inside. Both functions are marked as deprecated.
Introduced new function: rte_eth_xstats_get_id_by_name to retrieve
xstats ids by their names.
Extended functionality of proc_info application:
--xstats-name NAME: to display single xstat value by NAME
Updated test-pmd application to use new API.
v6 changes:
* patches arrangement in patchset
* fixes spelling bugs
* release notes
v5 changes:
* fix clang shared build compilation
* remove wrong versioning macros
* Makefile LIBABIVER 6 change
v4 changes:
* documentation change after API modification
* fix xstats display for PMD without _by_ids() functions
* fix ABI validator errors
v3 changes:
* checkpatch fixes
* removed malloc bug in ethdev
* add new command to proc_info and IDs parsing
* merged testpmd and proc_info patch with library patch
Jacek Piasecki (3):
ethdev: new xstats API add retrieving by ID
net/e1000: new xstats API add ID support for e1000
net/ixgbe: new xstats API add ID support for ixgbe
Kuba Kozak (2):
ethdev: added new function for xstats ID
proc-info: add support for new xstats API
app/proc_info/main.c | 148 ++++++++++-
app/test-pmd/config.c | 19 +-
doc/guides/prog_guide/poll_mode_drv.rst | 173 +++++++++++--
doc/guides/rel_notes/release_17_05.rst | 8 +
drivers/net/e1000/igb_ethdev.c | 92 ++++++-
drivers/net/ixgbe/ixgbe_ethdev.c | 179 +++++++++++++
lib/librte_ether/rte_ethdev.c | 430 ++++++++++++++++++++++++--------
lib/librte_ether/rte_ethdev.h | 167 ++++++++++++-
lib/librte_ether/rte_ether_version.map | 5 +
9 files changed, 1070 insertions(+), 151 deletions(-)
--
1.9.1
^ permalink raw reply [relevance 4%]
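The "ABI versioning mechanism" mentioned in the cover letter boils down to the rte_compat pattern below, condensed here from the first patch of the series for readers skimming the thread:

    /* Keep the 16.07 binary interface for already-linked applications. */
    VERSION_SYMBOL(rte_eth_xstats_get_names, _v1607, 16.07);

    /* Make the new, ID-aware interface the default from 17.05 on. */
    BIND_DEFAULT_SYMBOL(rte_eth_xstats_get_names, _v1705, 17.05);

    /* Static builds have no symbol versioning: map the plain name directly. */
    MAP_STATIC_SYMBOL(int rte_eth_xstats_get_names(uint8_t port_id,
                    struct rte_eth_xstat_name *xstats_names, unsigned int size,
                    uint64_t *ids), rte_eth_xstats_get_names_v1705);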
* Re: [dpdk-dev] [PATCH] ethdev: fix compilation issue with strict flags
@ 2017-04-13 9:36 5% ` Van Haaren, Harry
0 siblings, 0 replies; 200+ results
From: Van Haaren, Harry @ 2017-04-13 9:36 UTC (permalink / raw)
To: Shahaf Shuler, thomas.monjalon; +Cc: adrien.mazarguil, nelio.laranjeiro, dev
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Shahaf Shuler
> Sent: Thursday, April 13, 2017 6:29 AM
> To: thomas.monjalon@6wind.com
> Cc: adrien.mazarguil@6wind.com; nelio.laranjeiro@6wind.com; dev@dpdk.org
> Subject: [dpdk-dev] [PATCH] ethdev: fix compilation issue with strict flags
>
> Compilation error seen while compiling mlx5 in debug mode
> under RHEL 7.3:
>
> rte_ethdev.h:1670:7: error: type of bit-field 'state' is a GCC extension
> [-Werror=pedantic]
>
> Address it by removing the unnecessary bit-field width limitation.
>
> Fixes: d52268a8b24b ("ethdev: expose device states")
>
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> ---
> lib/librte_ether/rte_ethdev.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> index d07253874..2d1bc12aa 100644
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h
> @@ -1667,7 +1667,7 @@ struct rte_eth_dev {
> * received packets before passing them to the driver for transmission.
> */
> struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
> - enum rte_eth_dev_state state:8; /**< Flag indicating the port state */
> + enum rte_eth_dev_state state; /**< Flag indicating the port state */
> } __rte_cache_aligned;
>
> struct rte_eth_dev_sriov {
What are the guidelines for changing the ABI of an @internal structure?
If I understand correctly, this @internal structure shouldn't be allocated in the app - so we can extend it at the end without breaking ABI.
Since the state is at the end of the struct, I think this change is safe.
ABI Validate tool with GCC 5.4.0 says ABI compatible, so
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
^ permalink raw reply [relevance 5%]
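For reference, a minimal reproducer of the diagnostic this patch removes; building it with gcc -pedantic triggers "type of bit-field 's' is a GCC extension" on the first struct but not the second (the names here are made up):

    enum dev_state { DEV_UNUSED, DEV_ATTACHED };

    struct with_width {
            enum dev_state s:8;     /* enum bit-field: a GCC extension */
    };

    struct without_width {
            enum dev_state s;       /* plain enum member: strictly conforming */
    };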
* [dpdk-dev] [PATCH v4 2/3] doc: change type of return value of adding MAC addr
2017-04-12 9:02 3% [dpdk-dev] [PATCH v3 0/3] MAC address fail to be added shouldn't be stored Wei Dai
2017-04-12 9:02 5% ` [dpdk-dev] [PATCH v3 2/3] doc: change type of return value of adding MAC addr Wei Dai
2017-04-13 8:21 3% ` [dpdk-dev] [PATCH v4 0/3] MAC address fail to be added shouldn't be stored Wei Dai
@ 2017-04-13 8:21 5% ` Wei Dai
2 siblings, 0 replies; 200+ results
From: Wei Dai @ 2017-04-13 8:21 UTC (permalink / raw)
To: thomas.monjalon, harish.patil, rasesh.mody, stephen.hurd,
ajit.khaparde, wenzhuo.lu, helin.zhang, konstantin.ananyev,
jingjing.wu, jing.d.chen, adrien.mazarguil, nelio.laranjeiro,
bruce.richardson, yuanhan.liu, maxime.coquelin
Cc: dev, Wei Dai
Add the following lines to the API changes section of the release notes.
Without this change, a MAC address that fails to be added is still stored
and may be regarded as a valid one. This may lead to errors in the application.
The return type of eth_mac_addr_add_t in rte_ethdev.h is changed.
Every specific NIC driver follows this change.
Signed-off-by: Wei Dai <wei.dai@intel.com>
---
doc/guides/rel_notes/release_17_05.rst | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index 4968b8f..c9ba484 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -438,6 +438,13 @@ ABI Changes
* The ``rte_cryptodev_info.sym`` structure has new field ``max_nb_sessions_per_qp``
to support drivers which may support limited number of sessions per queue_pair.
+* **Return if the MAC address is added successfully or not.**
+
+ Without this change, a MAC address that fails to be added is still stored
+ and may be regarded as a valid one. This may lead to errors in the application.
+ The return type of eth_mac_addr_add_t in rte_ethdev.h is changed.
+ Every specific NIC driver follows this change.
+
Removed Items
-------------
--
2.7.4
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v4 0/3] MAC address fail to be added shouldn't be stored
2017-04-12 9:02 3% [dpdk-dev] [PATCH v3 0/3] MAC address fail to be added shouldn't be stored Wei Dai
2017-04-12 9:02 5% ` [dpdk-dev] [PATCH v3 2/3] doc: change type of return value of adding MAC addr Wei Dai
@ 2017-04-13 8:21 3% ` Wei Dai
2017-04-13 13:54 0% ` Ananyev, Konstantin
2017-04-13 8:21 5% ` [dpdk-dev] [PATCH v4 2/3] doc: change type of return value of adding MAC addr Wei Dai
2 siblings, 1 reply; 200+ results
From: Wei Dai @ 2017-04-13 8:21 UTC (permalink / raw)
To: thomas.monjalon, harish.patil, rasesh.mody, stephen.hurd,
ajit.khaparde, wenzhuo.lu, helin.zhang, konstantin.ananyev,
jingjing.wu, jing.d.chen, adrien.mazarguil, nelio.laranjeiro,
bruce.richardson, yuanhan.liu, maxime.coquelin
Cc: dev, Wei Dai
Current ethdev always stores a MAC address even if it fails to be added.
Other functions may then regard the failed MAC address as valid, leading
to errors. So there is a need to check whether the address was added
successfully, and to discard it if it was not; a sketch of this flow
follows below.
The 3rd patch adds a command "add_more_mac_addr port_id base_mac_addr count"
to add more than one MAC address at a time.
This command simplifies testing of the first patch.
Normally a MAC address fails to be added only after many MAC
addresses have already been added.
Without this command, a tester could only trigger a failed MAC address
by running the testpmd command 'mac_addr add' many times.
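A minimal sketch of the intended flow in rte_eth_dev_mac_addr_add() (simplified; the real function also manages the pool bitmap, and the callback signature is assumed from eth_mac_addr_add_t):

/* Call the driver first; store the address only if it was accepted. */
ret = (*dev->dev_ops->mac_addr_add)(dev, addr, index, pool);
if (ret == 0)
	ether_addr_copy(addr, &dev->data->mac_addrs[index]);
return ret;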
---
Changes
v4:
1. rebase master branch
2. follow code style
v3:
1. Change return value for some specific NIC according to feedbacks
from the community;
2. Add ABI change in release note;
3. Add more detailed commit message.
v2:
fix warnings and errors from check-git-log.sh and checkpatch.pl
Wei Dai (3):
ethdev: fix adding invalid MAC addr
doc: change type of return value of adding MAC addr
app/testpmd: add a command to add many MAC addrs
app/test-pmd/cmdline.c | 55 ++++++++++++++++++++++++++++++++++
doc/guides/rel_notes/release_17_05.rst | 7 +++++
drivers/net/bnx2x/bnx2x_ethdev.c | 7 +++--
drivers/net/bnxt/bnxt_ethdev.c | 12 ++++----
drivers/net/e1000/base/e1000_api.c | 2 +-
drivers/net/e1000/em_ethdev.c | 6 ++--
drivers/net/e1000/igb_ethdev.c | 5 ++--
drivers/net/enic/enic.h | 2 +-
drivers/net/enic/enic_ethdev.c | 4 +--
drivers/net/enic/enic_main.c | 6 ++--
drivers/net/fm10k/fm10k_ethdev.c | 3 +-
drivers/net/i40e/i40e_ethdev.c | 11 +++----
drivers/net/i40e/i40e_ethdev_vf.c | 8 ++---
drivers/net/ixgbe/ixgbe_ethdev.c | 27 +++++++++++------
drivers/net/mlx4/mlx4.c | 18 ++++++-----
drivers/net/mlx5/mlx5.h | 4 +--
drivers/net/mlx5/mlx5_mac.c | 16 ++++++----
drivers/net/qede/qede_ethdev.c | 6 ++--
drivers/net/ring/rte_eth_ring.c | 3 +-
drivers/net/virtio/virtio_ethdev.c | 13 ++++----
lib/librte_ether/rte_ethdev.c | 15 ++++++----
lib/librte_ether/rte_ethdev.h | 2 +-
22 files changed, 162 insertions(+), 70 deletions(-)
--
2.7.4
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v5 07/14] ring: make bulk and burst fn return vals consistent
@ 2017-04-13 6:42 0% ` Wang, Zhihong
0 siblings, 0 replies; 200+ results
From: Wang, Zhihong @ 2017-04-13 6:42 UTC (permalink / raw)
To: Richardson, Bruce, olivier.matz; +Cc: dev, Richardson, Bruce
Hi Bruce,
This patch changes the behavior and causes some existing code to
malfunction, e.g. bond_ethdev_stop() will get stuck here:
while (rte_ring_dequeue(port->rx_ring, &pkt) != -ENOENT)
rte_pktmbuf_free(pkt);
Another example in test/test/virtual_pmd.c: virtual_ethdev_stop().
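For reference: the quoted patch below changes the single-object rte_ring_dequeue() to return 0 or -ENOBUFS, never -ENOENT, so a loop testing for -ENOENT can no longer terminate. A minimal sketch of the kind of fix such callers need:

/* Loop while dequeue succeeds, instead of testing for the old
 * -ENOENT error code that is no longer returned. */
while (rte_ring_dequeue(port->rx_ring, &pkt) == 0)
	rte_pktmbuf_free(pkt);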
Thanks
Zhihong
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Bruce Richardson
> Sent: Wednesday, March 29, 2017 9:10 PM
> To: olivier.matz@6wind.com
> Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>
> Subject: [dpdk-dev] [PATCH v5 07/14] ring: make bulk and burst fn return
> vals consistent
>
> The bulk fns for rings return 0 for all elements enqueued and negative
> for no space. Change that to make them consistent with the burst functions
> in returning the number of elements enqueued/dequeued, i.e. 0 or N.
> This change also allows the return value from enq/deq to be used directly
> without a branch for error checking.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> Reviewed-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
> Acked-by: Olivier Matz <olivier.matz@6wind.com>
> ---
> doc/guides/rel_notes/release_17_05.rst | 11 +++
> doc/guides/sample_app_ug/server_node_efd.rst | 2 +-
> examples/load_balancer/runtime.c | 16 ++-
> .../client_server_mp/mp_client/client.c | 8 +-
> .../client_server_mp/mp_server/main.c | 2 +-
> examples/qos_sched/app_thread.c | 8 +-
> examples/server_node_efd/node/node.c | 2 +-
> examples/server_node_efd/server/main.c | 2 +-
> lib/librte_mempool/rte_mempool_ring.c | 12 ++-
> lib/librte_ring/rte_ring.h | 109 +++++++--------------
> test/test-pipeline/pipeline_hash.c | 2 +-
> test/test-pipeline/runtime.c | 8 +-
> test/test/test_ring.c | 46 +++++----
> test/test/test_ring_perf.c | 8 +-
> 14 files changed, 106 insertions(+), 130 deletions(-)
>
> diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
> index 084b359..6da2612 100644
> --- a/doc/guides/rel_notes/release_17_05.rst
> +++ b/doc/guides/rel_notes/release_17_05.rst
> @@ -137,6 +137,17 @@ API Changes
> * removed the build-time setting
> ``CONFIG_RTE_RING_PAUSE_REP_COUNT``
> * removed the function ``rte_ring_set_water_mark`` as part of a general
> removal of watermarks support in the library.
> + * changed the return value of the enqueue and dequeue bulk functions to
> + match that of the burst equivalents. In all cases, ring functions which
> + operate on multiple packets now return the number of elements enqueued
> + or dequeued, as appropriate. The updated functions are:
> +
> + - ``rte_ring_mp_enqueue_bulk``
> + - ``rte_ring_sp_enqueue_bulk``
> + - ``rte_ring_enqueue_bulk``
> + - ``rte_ring_mc_dequeue_bulk``
> + - ``rte_ring_sc_dequeue_bulk``
> + - ``rte_ring_dequeue_bulk``
>
> ABI Changes
> -----------
> diff --git a/doc/guides/sample_app_ug/server_node_efd.rst b/doc/guides/sample_app_ug/server_node_efd.rst
> index 9b69cfe..e3a63c8 100644
> --- a/doc/guides/sample_app_ug/server_node_efd.rst
> +++ b/doc/guides/sample_app_ug/server_node_efd.rst
> @@ -286,7 +286,7 @@ repeated infinitely.
>
> cl = &nodes[node];
> if (rte_ring_enqueue_bulk(cl->rx_q, (void **)cl_rx_buf[node].buffer,
> - cl_rx_buf[node].count) != 0){
> + cl_rx_buf[node].count) != cl_rx_buf[node].count){
> for (j = 0; j < cl_rx_buf[node].count; j++)
> rte_pktmbuf_free(cl_rx_buf[node].buffer[j]);
> cl->stats.rx_drop += cl_rx_buf[node].count;
> diff --git a/examples/load_balancer/runtime.c b/examples/load_balancer/runtime.c
> index 6944325..82b10bc 100644
> --- a/examples/load_balancer/runtime.c
> +++ b/examples/load_balancer/runtime.c
> @@ -146,7 +146,7 @@ app_lcore_io_rx_buffer_to_send (
> (void **) lp->rx.mbuf_out[worker].array,
> bsz);
>
> - if (unlikely(ret == -ENOBUFS)) {
> + if (unlikely(ret == 0)) {
> uint32_t k;
> for (k = 0; k < bsz; k ++) {
> struct rte_mbuf *m = lp->rx.mbuf_out[worker].array[k];
> @@ -312,7 +312,7 @@ app_lcore_io_rx_flush(struct app_lcore_params_io *lp, uint32_t n_workers)
> (void **) lp->rx.mbuf_out[worker].array,
> lp->rx.mbuf_out[worker].n_mbufs);
>
> - if (unlikely(ret < 0)) {
> + if (unlikely(ret == 0)) {
> uint32_t k;
> for (k = 0; k < lp->rx.mbuf_out[worker].n_mbufs; k++) {
> struct rte_mbuf *pkt_to_free = lp->rx.mbuf_out[worker].array[k];
> @@ -349,9 +349,8 @@ app_lcore_io_tx(
> (void **) &lp->tx.mbuf_out[port].array[n_mbufs],
> bsz_rd);
>
> - if (unlikely(ret == -ENOENT)) {
> + if (unlikely(ret == 0))
> continue;
> - }
>
> n_mbufs += bsz_rd;
>
> @@ -505,9 +504,8 @@ app_lcore_worker(
> (void **) lp->mbuf_in.array,
> bsz_rd);
>
> - if (unlikely(ret == -ENOENT)) {
> + if (unlikely(ret == 0))
> continue;
> - }
>
> #if APP_WORKER_DROP_ALL_PACKETS
> for (j = 0; j < bsz_rd; j ++) {
> @@ -559,7 +557,7 @@ app_lcore_worker(
>
> #if APP_STATS
> lp->rings_out_iters[port] ++;
> - if (ret == 0) {
> + if (ret > 0) {
> lp->rings_out_count[port] += 1;
> }
> if (lp->rings_out_iters[port] == APP_STATS){
> @@ -572,7 +570,7 @@ app_lcore_worker(
> }
> #endif
>
> - if (unlikely(ret == -ENOBUFS)) {
> + if (unlikely(ret == 0)) {
> uint32_t k;
> for (k = 0; k < bsz_wr; k ++) {
> struct rte_mbuf *pkt_to_free = lp->mbuf_out[port].array[k];
> @@ -609,7 +607,7 @@ app_lcore_worker_flush(struct app_lcore_params_worker *lp)
> (void **) lp->mbuf_out[port].array,
> lp->mbuf_out[port].n_mbufs);
>
> - if (unlikely(ret < 0)) {
> + if (unlikely(ret == 0)) {
> uint32_t k;
> for (k = 0; k < lp->mbuf_out[port].n_mbufs; k ++) {
> struct rte_mbuf *pkt_to_free = lp->mbuf_out[port].array[k];
> diff --git a/examples/multi_process/client_server_mp/mp_client/client.c b/examples/multi_process/client_server_mp/mp_client/client.c
> index d4f9ca3..dca9eb9 100644
> --- a/examples/multi_process/client_server_mp/mp_client/client.c
> +++ b/examples/multi_process/client_server_mp/mp_client/client.c
> @@ -276,14 +276,10 @@ main(int argc, char *argv[])
> printf("[Press Ctrl-C to quit ...]\n");
>
> for (;;) {
> - uint16_t i, rx_pkts = PKT_READ_SIZE;
> + uint16_t i, rx_pkts;
> uint8_t port;
>
> - /* try dequeuing max possible packets first, if that fails, get the
> - * most we can. Loop body should only execute once, maximum */
> - while (rx_pkts > 0 &&
> - unlikely(rte_ring_dequeue_bulk(rx_ring, pkts, rx_pkts) != 0))
> - rx_pkts = (uint16_t)RTE_MIN(rte_ring_count(rx_ring), PKT_READ_SIZE);
> + rx_pkts = rte_ring_dequeue_burst(rx_ring, pkts, PKT_READ_SIZE);
>
> if (unlikely(rx_pkts == 0)){
> if (need_flush)
> diff --git a/examples/multi_process/client_server_mp/mp_server/main.c b/examples/multi_process/client_server_mp/mp_server/main.c
> index a6dc12d..19c95b2 100644
> --- a/examples/multi_process/client_server_mp/mp_server/main.c
> +++ b/examples/multi_process/client_server_mp/mp_server/main.c
> @@ -227,7 +227,7 @@ flush_rx_queue(uint16_t client)
>
> cl = &clients[client];
> if (rte_ring_enqueue_bulk(cl->rx_q, (void **)cl_rx_buf[client].buffer,
> - cl_rx_buf[client].count) != 0){
> + cl_rx_buf[client].count) == 0){
> for (j = 0; j < cl_rx_buf[client].count; j++)
> rte_pktmbuf_free(cl_rx_buf[client].buffer[j]);
> cl->stats.rx_drop += cl_rx_buf[client].count;
> diff --git a/examples/qos_sched/app_thread.c b/examples/qos_sched/app_thread.c
> index 70fdcdb..dab4594 100644
> --- a/examples/qos_sched/app_thread.c
> +++ b/examples/qos_sched/app_thread.c
> @@ -107,7 +107,7 @@ app_rx_thread(struct thread_conf **confs)
> }
>
> if (unlikely(rte_ring_sp_enqueue_bulk(conf->rx_ring,
> - (void **)rx_mbufs, nb_rx) != 0)) {
> + (void **)rx_mbufs, nb_rx) == 0)) {
> for(i = 0; i < nb_rx; i++) {
> rte_pktmbuf_free(rx_mbufs[i]);
>
> @@ -180,7 +180,7 @@ app_tx_thread(struct thread_conf **confs)
> while ((conf = confs[conf_idx])) {
> retval = rte_ring_sc_dequeue_bulk(conf->tx_ring, (void **)mbufs,
> burst_conf.qos_dequeue);
> - if (likely(retval == 0)) {
> + if (likely(retval != 0)) {
> app_send_packets(conf, mbufs,
> burst_conf.qos_dequeue);
>
> conf->counter = 0; /* reset empty read loop counter */
> @@ -230,7 +230,9 @@ app_worker_thread(struct thread_conf **confs)
> nb_pkt = rte_sched_port_dequeue(conf->sched_port, mbufs,
> burst_conf.qos_dequeue);
> if (likely(nb_pkt > 0))
> - while (rte_ring_sp_enqueue_bulk(conf->tx_ring, (void **)mbufs, nb_pkt) != 0);
> + while (rte_ring_sp_enqueue_bulk(conf->tx_ring,
> + (void **)mbufs, nb_pkt) == 0)
> + ; /* empty body */
>
> conf_idx++;
> if (confs[conf_idx] == NULL)
> diff --git a/examples/server_node_efd/node/node.c b/examples/server_node_efd/node/node.c
> index a6c0c70..9ec6a05 100644
> --- a/examples/server_node_efd/node/node.c
> +++ b/examples/server_node_efd/node/node.c
> @@ -392,7 +392,7 @@ main(int argc, char *argv[])
> */
> while (rx_pkts > 0 &&
> unlikely(rte_ring_dequeue_bulk(rx_ring, pkts,
> - rx_pkts) != 0))
> + rx_pkts) == 0))
> rx_pkts = (uint16_t)RTE_MIN(rte_ring_count(rx_ring), PKT_READ_SIZE);
>
> diff --git a/examples/server_node_efd/server/main.c b/examples/server_node_efd/server/main.c
> index 1a54d1b..3eb7fac 100644
> --- a/examples/server_node_efd/server/main.c
> +++ b/examples/server_node_efd/server/main.c
> @@ -247,7 +247,7 @@ flush_rx_queue(uint16_t node)
>
> cl = &nodes[node];
> if (rte_ring_enqueue_bulk(cl->rx_q, (void **)cl_rx_buf[node].buffer,
> - cl_rx_buf[node].count) != 0){
> + cl_rx_buf[node].count) != cl_rx_buf[node].count){
> for (j = 0; j < cl_rx_buf[node].count; j++)
> rte_pktmbuf_free(cl_rx_buf[node].buffer[j]);
> cl->stats.rx_drop += cl_rx_buf[node].count;
> diff --git a/lib/librte_mempool/rte_mempool_ring.c b/lib/librte_mempool/rte_mempool_ring.c
> index b9aa64d..409b860 100644
> --- a/lib/librte_mempool/rte_mempool_ring.c
> +++ b/lib/librte_mempool/rte_mempool_ring.c
> @@ -42,26 +42,30 @@ static int
> common_ring_mp_enqueue(struct rte_mempool *mp, void * const *obj_table,
> unsigned n)
> {
> - return rte_ring_mp_enqueue_bulk(mp->pool_data, obj_table, n);
> + return rte_ring_mp_enqueue_bulk(mp->pool_data,
> + obj_table, n) == 0 ? -ENOBUFS : 0;
> }
>
> static int
> common_ring_sp_enqueue(struct rte_mempool *mp, void * const *obj_table,
> unsigned n)
> {
> - return rte_ring_sp_enqueue_bulk(mp->pool_data, obj_table, n);
> + return rte_ring_sp_enqueue_bulk(mp->pool_data,
> + obj_table, n) == 0 ? -ENOBUFS : 0;
> }
>
> static int
> common_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table,
> unsigned n)
> {
> - return rte_ring_mc_dequeue_bulk(mp->pool_data, obj_table, n);
> + return rte_ring_mc_dequeue_bulk(mp->pool_data,
> + obj_table, n) == 0 ? -ENOBUFS : 0;
> }
>
> static int
> common_ring_sc_dequeue(struct rte_mempool *mp, void **obj_table,
> unsigned n)
> {
> - return rte_ring_sc_dequeue_bulk(mp->pool_data, obj_table, n);
> + return rte_ring_sc_dequeue_bulk(mp->pool_data,
> + obj_table, n) == 0 ? -ENOBUFS : 0;
> }
>
> static unsigned
> diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
> index 906e8ae..34b438c 100644
> --- a/lib/librte_ring/rte_ring.h
> +++ b/lib/librte_ring/rte_ring.h
> @@ -349,14 +349,10 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r);
> * RTE_RING_QUEUE_FIXED: Enqueue a fixed number of items from a ring
> * RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible from ring
> * @return
> - * Depend on the behavior value
> - * if behavior = RTE_RING_QUEUE_FIXED
> - * - 0: Success; objects enqueue.
> - * - -ENOBUFS: Not enough room in the ring to enqueue, no object is
> enqueued.
> - * if behavior = RTE_RING_QUEUE_VARIABLE
> - * - n: Actual number of objects enqueued.
> + * Actual number of objects enqueued.
> + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only.
> */
> -static inline int __attribute__((always_inline))
> +static inline unsigned int __attribute__((always_inline))
> __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
> unsigned n, enum rte_ring_queue_behavior behavior)
> {
> @@ -388,7 +384,7 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
> /* check that we have enough room in ring */
> if (unlikely(n > free_entries)) {
> if (behavior == RTE_RING_QUEUE_FIXED)
> - return -ENOBUFS;
> + return 0;
> else {
> /* No free entry available */
> if (unlikely(free_entries == 0))
> @@ -414,7 +410,7 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
> rte_pause();
>
> r->prod.tail = prod_next;
> - return (behavior == RTE_RING_QUEUE_FIXED) ? 0 : n;
> + return n;
> }
>
> /**
> @@ -430,14 +426,10 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
> * RTE_RING_QUEUE_FIXED: Enqueue a fixed number of items from a ring
> * RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible from ring
> * @return
> - * Depend on the behavior value
> - * if behavior = RTE_RING_QUEUE_FIXED
> - * - 0: Success; objects enqueue.
> - * - -ENOBUFS: Not enough room in the ring to enqueue, no object is
> enqueued.
> - * if behavior = RTE_RING_QUEUE_VARIABLE
> - * - n: Actual number of objects enqueued.
> + * Actual number of objects enqueued.
> + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only.
> */
> -static inline int __attribute__((always_inline))
> +static inline unsigned int __attribute__((always_inline))
> __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
> unsigned n, enum rte_ring_queue_behavior behavior)
> {
> @@ -457,7 +449,7 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
> /* check that we have enough room in ring */
> if (unlikely(n > free_entries)) {
> if (behavior == RTE_RING_QUEUE_FIXED)
> - return -ENOBUFS;
> + return 0;
> else {
> /* No free entry available */
> if (unlikely(free_entries == 0))
> @@ -474,7 +466,7 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
> rte_smp_wmb();
>
> r->prod.tail = prod_next;
> - return (behavior == RTE_RING_QUEUE_FIXED) ? 0 : n;
> + return n;
> }
>
> /**
> @@ -495,16 +487,11 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
> * RTE_RING_QUEUE_FIXED: Dequeue a fixed number of items from a ring
> * RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from ring
> * @return
> - * Depend on the behavior value
> - * if behavior = RTE_RING_QUEUE_FIXED
> - * - 0: Success; objects dequeued.
> - * - -ENOENT: Not enough entries in the ring to dequeue; no object is
> - * dequeued.
> - * if behavior = RTE_RING_QUEUE_VARIABLE
> - * - n: Actual number of objects dequeued.
> + * - Actual number of objects dequeued.
> + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only.
> */
>
> -static inline int __attribute__((always_inline))
> +static inline unsigned int __attribute__((always_inline))
> __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
> unsigned n, enum rte_ring_queue_behavior behavior)
> {
> @@ -536,7 +523,7 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
> /* Set the actual entries for dequeue */
> if (n > entries) {
> if (behavior == RTE_RING_QUEUE_FIXED)
> - return -ENOENT;
> + return 0;
> else {
> if (unlikely(entries == 0))
> return 0;
> @@ -562,7 +549,7 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
>
> r->cons.tail = cons_next;
>
> - return behavior == RTE_RING_QUEUE_FIXED ? 0 : n;
> + return n;
> }
>
> /**
> @@ -580,15 +567,10 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
> * RTE_RING_QUEUE_FIXED: Dequeue a fixed number of items from a ring
> * RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from ring
> * @return
> - * Depend on the behavior value
> - * if behavior = RTE_RING_QUEUE_FIXED
> - * - 0: Success; objects dequeued.
> - * - -ENOENT: Not enough entries in the ring to dequeue; no object is
> - * dequeued.
> - * if behavior = RTE_RING_QUEUE_VARIABLE
> - * - n: Actual number of objects dequeued.
> + * - Actual number of objects dequeued.
> + * If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only.
> */
> -static inline int __attribute__((always_inline))
> +static inline unsigned int __attribute__((always_inline))
> __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
> unsigned n, enum rte_ring_queue_behavior behavior)
> {
> @@ -607,7 +589,7 @@ __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
>
> if (n > entries) {
> if (behavior == RTE_RING_QUEUE_FIXED)
> - return -ENOENT;
> + return 0;
> else {
> if (unlikely(entries == 0))
> return 0;
> @@ -623,7 +605,7 @@ __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
> rte_smp_rmb();
>
> r->cons.tail = cons_next;
> - return behavior == RTE_RING_QUEUE_FIXED ? 0 : n;
> + return n;
> }
>
> /**
> @@ -639,10 +621,9 @@ __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
> * @param n
> * The number of objects to add in the ring from the obj_table.
> * @return
> - * - 0: Success; objects enqueue.
> - -ENOBUFS: Not enough room in the ring to enqueue, no object is enqueued.
> + * The number of objects enqueued, either 0 or n
> */
> -static inline int __attribute__((always_inline))
> +static inline unsigned int __attribute__((always_inline))
> rte_ring_mp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
> unsigned n)
> {
> @@ -659,10 +640,9 @@ rte_ring_mp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
> * @param n
> * The number of objects to add in the ring from the obj_table.
> * @return
> - * - 0: Success; objects enqueued.
> - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
> + * The number of objects enqueued, either 0 or n
> */
> -static inline int __attribute__((always_inline))
> +static inline unsigned int __attribute__((always_inline))
> rte_ring_sp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
> unsigned n)
> {
> @@ -683,10 +663,9 @@ rte_ring_sp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
> * @param n
> * The number of objects to add in the ring from the obj_table.
> * @return
> - * - 0: Success; objects enqueued.
> - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
> + * The number of objects enqueued, either 0 or n
> */
> -static inline int __attribute__((always_inline))
> +static inline unsigned int __attribute__((always_inline))
> rte_ring_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
> unsigned n)
> {
> @@ -713,7 +692,7 @@ rte_ring_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
> static inline int __attribute__((always_inline))
> rte_ring_mp_enqueue(struct rte_ring *r, void *obj)
> {
> - return rte_ring_mp_enqueue_bulk(r, &obj, 1);
> + return rte_ring_mp_enqueue_bulk(r, &obj, 1) ? 0 : -ENOBUFS;
> }
>
> /**
> @@ -730,7 +709,7 @@ rte_ring_mp_enqueue(struct rte_ring *r, void *obj)
> static inline int __attribute__((always_inline))
> rte_ring_sp_enqueue(struct rte_ring *r, void *obj)
> {
> - return rte_ring_sp_enqueue_bulk(r, &obj, 1);
> + return rte_ring_sp_enqueue_bulk(r, &obj, 1) ? 0 : -ENOBUFS;
> }
>
> /**
> @@ -751,10 +730,7 @@ rte_ring_sp_enqueue(struct rte_ring *r, void *obj)
> static inline int __attribute__((always_inline))
> rte_ring_enqueue(struct rte_ring *r, void *obj)
> {
> - if (r->prod.single)
> - return rte_ring_sp_enqueue(r, obj);
> - else
> - return rte_ring_mp_enqueue(r, obj);
> + return rte_ring_enqueue_bulk(r, &obj, 1) ? 0 : -ENOBUFS;
> }
>
> /**
> @@ -770,11 +746,9 @@ rte_ring_enqueue(struct rte_ring *r, void *obj)
> * @param n
> * The number of objects to dequeue from the ring to the obj_table.
> * @return
> - * - 0: Success; objects dequeued.
> - -ENOENT: Not enough entries in the ring to dequeue; no object is dequeued.
> + * The number of objects dequeued, either 0 or n
> */
> -static inline int __attribute__((always_inline))
> +static inline unsigned int __attribute__((always_inline))
> rte_ring_mc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
> {
> return __rte_ring_mc_do_dequeue(r, obj_table, n,
> RTE_RING_QUEUE_FIXED);
> @@ -791,11 +765,9 @@ rte_ring_mc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
> * The number of objects to dequeue from the ring to the obj_table,
> * must be strictly positive.
> * @return
> - * - 0: Success; objects dequeued.
> - -ENOENT: Not enough entries in the ring to dequeue; no object is dequeued.
> + * The number of objects dequeued, either 0 or n
> */
> -static inline int __attribute__((always_inline))
> +static inline unsigned int __attribute__((always_inline))
> rte_ring_sc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
> {
> return __rte_ring_sc_do_dequeue(r, obj_table, n,
> RTE_RING_QUEUE_FIXED);
> @@ -815,11 +787,9 @@ rte_ring_sc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
> * @param n
> * The number of objects to dequeue from the ring to the obj_table.
> * @return
> - * - 0: Success; objects dequeued.
> - -ENOENT: Not enough entries in the ring to dequeue, no object is dequeued.
> + * The number of objects dequeued, either 0 or n
> */
> -static inline int __attribute__((always_inline))
> +static inline unsigned int __attribute__((always_inline))
> rte_ring_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
> {
> if (r->cons.single)
> @@ -846,7 +816,7 @@ rte_ring_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
> static inline int __attribute__((always_inline))
> rte_ring_mc_dequeue(struct rte_ring *r, void **obj_p)
> {
> - return rte_ring_mc_dequeue_bulk(r, obj_p, 1);
> + return rte_ring_mc_dequeue_bulk(r, obj_p, 1) ? 0 : -ENOBUFS;
> }
>
> /**
> @@ -864,7 +834,7 @@ rte_ring_mc_dequeue(struct rte_ring *r, void **obj_p)
> static inline int __attribute__((always_inline))
> rte_ring_sc_dequeue(struct rte_ring *r, void **obj_p)
> {
> - return rte_ring_sc_dequeue_bulk(r, obj_p, 1);
> + return rte_ring_sc_dequeue_bulk(r, obj_p, 1) ? 0 : -ENOBUFS;
> }
>
> /**
> @@ -886,10 +856,7 @@ rte_ring_sc_dequeue(struct rte_ring *r, void **obj_p)
> static inline int __attribute__((always_inline))
> rte_ring_dequeue(struct rte_ring *r, void **obj_p)
> {
> - if (r->cons.single)
> - return rte_ring_sc_dequeue(r, obj_p);
> - else
> - return rte_ring_mc_dequeue(r, obj_p);
> + return rte_ring_dequeue_bulk(r, obj_p, 1) ? 0 : -ENOBUFS;
> }
>
> /**
> diff --git a/test/test-pipeline/pipeline_hash.c b/test/test-pipeline/pipeline_hash.c
> index 10d2869..1ac0aa8 100644
> --- a/test/test-pipeline/pipeline_hash.c
> +++ b/test/test-pipeline/pipeline_hash.c
> @@ -547,6 +547,6 @@ app_main_loop_rx_metadata(void) {
> app.rings_rx[i],
> (void **) app.mbuf_rx.array,
> n_mbufs);
> - } while (ret < 0);
> + } while (ret == 0);
> }
> }
> diff --git a/test/test-pipeline/runtime.c b/test/test-pipeline/runtime.c
> index 42a6142..4e20669 100644
> --- a/test/test-pipeline/runtime.c
> +++ b/test/test-pipeline/runtime.c
> @@ -98,7 +98,7 @@ app_main_loop_rx(void) {
> app.rings_rx[i],
> (void **) app.mbuf_rx.array,
> n_mbufs);
> - } while (ret < 0);
> + } while (ret == 0);
> }
> }
>
> @@ -123,7 +123,7 @@ app_main_loop_worker(void) {
> (void **) worker_mbuf->array,
> app.burst_size_worker_read);
>
> - if (ret == -ENOENT)
> + if (ret == 0)
> continue;
>
> do {
> @@ -131,7 +131,7 @@ app_main_loop_worker(void) {
> app.rings_tx[i ^ 1],
> (void **) worker_mbuf->array,
> app.burst_size_worker_write);
> - } while (ret < 0);
> + } while (ret == 0);
> }
> }
>
> @@ -152,7 +152,7 @@ app_main_loop_tx(void) {
> (void **) &app.mbuf_tx[i].array[n_mbufs],
> app.burst_size_tx_read);
>
> - if (ret == -ENOENT)
> + if (ret == 0)
> continue;
>
> n_mbufs += app.burst_size_tx_read;
> diff --git a/test/test/test_ring.c b/test/test/test_ring.c
> index 666a451..112433b 100644
> --- a/test/test/test_ring.c
> +++ b/test/test/test_ring.c
> @@ -117,20 +117,18 @@ test_ring_basic_full_empty(void * const src[], void *dst[])
> rand = RTE_MAX(rte_rand() % RING_SIZE, 1UL);
> printf("%s: iteration %u, random shift: %u;\n",
> __func__, i, rand);
> - TEST_RING_VERIFY(-ENOBUFS != rte_ring_enqueue_bulk(r, src,
> - rand));
> - TEST_RING_VERIFY(0 == rte_ring_dequeue_bulk(r, dst, rand));
> + TEST_RING_VERIFY(rte_ring_enqueue_bulk(r, src, rand) != 0);
> + TEST_RING_VERIFY(rte_ring_dequeue_bulk(r, dst, rand) == rand);
>
> /* fill the ring */
> - TEST_RING_VERIFY(-ENOBUFS != rte_ring_enqueue_bulk(r, src,
> - rsz));
> + TEST_RING_VERIFY(rte_ring_enqueue_bulk(r, src, rsz) != 0);
> TEST_RING_VERIFY(0 == rte_ring_free_count(r));
> TEST_RING_VERIFY(rsz == rte_ring_count(r));
> TEST_RING_VERIFY(rte_ring_full(r));
> TEST_RING_VERIFY(0 == rte_ring_empty(r));
>
> /* empty the ring */
> - TEST_RING_VERIFY(0 == rte_ring_dequeue_bulk(r, dst, rsz));
> + TEST_RING_VERIFY(rte_ring_dequeue_bulk(r, dst, rsz) == rsz);
> TEST_RING_VERIFY(rsz == rte_ring_free_count(r));
> TEST_RING_VERIFY(0 == rte_ring_count(r));
> TEST_RING_VERIFY(0 == rte_ring_full(r));
> @@ -171,37 +169,37 @@ test_ring_basic(void)
> printf("enqueue 1 obj\n");
> ret = rte_ring_sp_enqueue_bulk(r, cur_src, 1);
> cur_src += 1;
> - if (ret != 0)
> + if (ret == 0)
> goto fail;
>
> printf("enqueue 2 objs\n");
> ret = rte_ring_sp_enqueue_bulk(r, cur_src, 2);
> cur_src += 2;
> - if (ret != 0)
> + if (ret == 0)
> goto fail;
>
> printf("enqueue MAX_BULK objs\n");
> ret = rte_ring_sp_enqueue_bulk(r, cur_src, MAX_BULK);
> cur_src += MAX_BULK;
> - if (ret != 0)
> + if (ret == 0)
> goto fail;
>
> printf("dequeue 1 obj\n");
> ret = rte_ring_sc_dequeue_bulk(r, cur_dst, 1);
> cur_dst += 1;
> - if (ret != 0)
> + if (ret == 0)
> goto fail;
>
> printf("dequeue 2 objs\n");
> ret = rte_ring_sc_dequeue_bulk(r, cur_dst, 2);
> cur_dst += 2;
> - if (ret != 0)
> + if (ret == 0)
> goto fail;
>
> printf("dequeue MAX_BULK objs\n");
> ret = rte_ring_sc_dequeue_bulk(r, cur_dst, MAX_BULK);
> cur_dst += MAX_BULK;
> - if (ret != 0)
> + if (ret == 0)
> goto fail;
>
> /* check data */
> @@ -217,37 +215,37 @@ test_ring_basic(void)
> printf("enqueue 1 obj\n");
> ret = rte_ring_mp_enqueue_bulk(r, cur_src, 1);
> cur_src += 1;
> - if (ret != 0)
> + if (ret == 0)
> goto fail;
>
> printf("enqueue 2 objs\n");
> ret = rte_ring_mp_enqueue_bulk(r, cur_src, 2);
> cur_src += 2;
> - if (ret != 0)
> + if (ret == 0)
> goto fail;
>
> printf("enqueue MAX_BULK objs\n");
> ret = rte_ring_mp_enqueue_bulk(r, cur_src, MAX_BULK);
> cur_src += MAX_BULK;
> - if (ret != 0)
> + if (ret == 0)
> goto fail;
>
> printf("dequeue 1 obj\n");
> ret = rte_ring_mc_dequeue_bulk(r, cur_dst, 1);
> cur_dst += 1;
> - if (ret != 0)
> + if (ret == 0)
> goto fail;
>
> printf("dequeue 2 objs\n");
> ret = rte_ring_mc_dequeue_bulk(r, cur_dst, 2);
> cur_dst += 2;
> - if (ret != 0)
> + if (ret == 0)
> goto fail;
>
> printf("dequeue MAX_BULK objs\n");
> ret = rte_ring_mc_dequeue_bulk(r, cur_dst, MAX_BULK);
> cur_dst += MAX_BULK;
> - if (ret != 0)
> + if (ret == 0)
> goto fail;
>
> /* check data */
> @@ -264,11 +262,11 @@ test_ring_basic(void)
> for (i = 0; i<RING_SIZE/MAX_BULK; i++) {
> ret = rte_ring_mp_enqueue_bulk(r, cur_src, MAX_BULK);
> cur_src += MAX_BULK;
> - if (ret != 0)
> + if (ret == 0)
> goto fail;
> ret = rte_ring_mc_dequeue_bulk(r, cur_dst, MAX_BULK);
> cur_dst += MAX_BULK;
> - if (ret != 0)
> + if (ret == 0)
> goto fail;
> }
>
> @@ -294,25 +292,25 @@ test_ring_basic(void)
>
> ret = rte_ring_enqueue_bulk(r, cur_src, num_elems);
> cur_src += num_elems;
> - if (ret != 0) {
> + if (ret == 0) {
> printf("Cannot enqueue\n");
> goto fail;
> }
> ret = rte_ring_enqueue_bulk(r, cur_src, num_elems);
> cur_src += num_elems;
> - if (ret != 0) {
> + if (ret == 0) {
> printf("Cannot enqueue\n");
> goto fail;
> }
> ret = rte_ring_dequeue_bulk(r, cur_dst, num_elems);
> cur_dst += num_elems;
> - if (ret != 0) {
> + if (ret == 0) {
> printf("Cannot dequeue\n");
> goto fail;
> }
> ret = rte_ring_dequeue_bulk(r, cur_dst, num_elems);
> cur_dst += num_elems;
> - if (ret != 0) {
> + if (ret == 0) {
> printf("Cannot dequeue2\n");
> goto fail;
> }
> diff --git a/test/test/test_ring_perf.c b/test/test/test_ring_perf.c
> index 320c20c..8ccbdef 100644
> --- a/test/test/test_ring_perf.c
> +++ b/test/test/test_ring_perf.c
> @@ -195,13 +195,13 @@ enqueue_bulk(void *p)
>
> const uint64_t sp_start = rte_rdtsc();
> for (i = 0; i < iterations; i++)
> - while (rte_ring_sp_enqueue_bulk(r, burst, size) != 0)
> + while (rte_ring_sp_enqueue_bulk(r, burst, size) == 0)
> rte_pause();
> const uint64_t sp_end = rte_rdtsc();
>
> const uint64_t mp_start = rte_rdtsc();
> for (i = 0; i < iterations; i++)
> - while (rte_ring_mp_enqueue_bulk(r, burst, size) != 0)
> + while (rte_ring_mp_enqueue_bulk(r, burst, size) == 0)
> rte_pause();
> const uint64_t mp_end = rte_rdtsc();
>
> @@ -230,13 +230,13 @@ dequeue_bulk(void *p)
>
> const uint64_t sc_start = rte_rdtsc();
> for (i = 0; i < iterations; i++)
> - while (rte_ring_sc_dequeue_bulk(r, burst, size) != 0)
> + while (rte_ring_sc_dequeue_bulk(r, burst, size) == 0)
> rte_pause();
> const uint64_t sc_end = rte_rdtsc();
>
> const uint64_t mc_start = rte_rdtsc();
> for (i = 0; i < iterations; i++)
> - while (rte_ring_mc_dequeue_bulk(r, burst, size) != 0)
> + while (rte_ring_mc_dequeue_bulk(r, burst, size) == 0)
> rte_pause();
> const uint64_t mc_end = rte_rdtsc();
>
> --
> 2.9.3
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v3 2/3] doc: change type of return value of adding MAC addr
2017-04-12 9:02 3% [dpdk-dev] [PATCH v3 0/3] MAC address fail to be added shouldn't be stored Wei Dai
@ 2017-04-12 9:02 5% ` Wei Dai
2017-04-13 8:21 3% ` [dpdk-dev] [PATCH v4 0/3] MAC address fail to be added shouldn't be stored Wei Dai
2017-04-13 8:21 5% ` [dpdk-dev] [PATCH v4 2/3] doc: change type of return value of adding MAC addr Wei Dai
2 siblings, 0 replies; 200+ results
From: Wei Dai @ 2017-04-12 9:02 UTC (permalink / raw)
To: dev, thomas.monjalon, harish.patil, rasesh.mody, stephen.hurd,
ajit.khaparde, wenzhuo.lu, helin.zhang, konstantin.ananyev,
jingjing.wu, jing.d.chen, adrien.mazarguil, nelio.laranjeiro,
bruce.richardson, yuanhan.liu, maxime.coquelin
Cc: Wei Dai
Add the following lines to the ABI changes section of the release notes.
Without this change, a MAC address that fails to be added is still stored
and may be regarded as valid, which can lead to errors in the application.
The return type of eth_mac_addr_add_t in rte_ethdev.h is changed,
and every NIC driver follows this change.
Signed-off-by: Wei Dai <wei.dai@intel.com>
---
doc/guides/rel_notes/release_17_05.rst | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index aa3e1e0..b3aac0c 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -408,6 +408,12 @@ ABI Changes
The order and size of the fields in the ``mbuf`` structure changed,
as described in the `New Features`_ section.
+* **Return whether the MAC address was added successfully.**
+
+ Without this change, a MAC address that fails to be added is still stored
+ and may be regarded as valid, which can lead to errors in the application.
+ The return type of eth_mac_addr_add_t in rte_ethdev.h is changed,
+ and every NIC driver follows this change.
Removed Items
-------------
--
2.7.4
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v3 0/3] MAC address fail to be added shouldn't be stored
@ 2017-04-12 9:02 3% Wei Dai
2017-04-12 9:02 5% ` [dpdk-dev] [PATCH v3 2/3] doc: change type of return value of adding MAC addr Wei Dai
` (2 more replies)
0 siblings, 3 replies; 200+ results
From: Wei Dai @ 2017-04-12 9:02 UTC (permalink / raw)
To: dev, thomas.monjalon, harish.patil, rasesh.mody, stephen.hurd,
ajit.khaparde, wenzhuo.lu, helin.zhang, konstantin.ananyev,
jingjing.wu, jing.d.chen, adrien.mazarguil, nelio.laranjeiro,
bruce.richardson, yuanhan.liu, maxime.coquelin
Cc: Wei Dai
Current ethdev always stores a MAC address even if it fails to be added.
Other functions may then regard the failed MAC address as valid, leading
to errors. So there is a need to check whether the address was added
successfully, and to discard it if it was not.
The 3rd patch adds a command "add_more_mac_addr port_id base_mac_addr count"
to add more than one MAC address at a time; an example session follows below.
This command simplifies testing of the first patch.
Normally a MAC address fails to be added only after many MAC
addresses have already been added.
Without this command, a tester could only trigger a failed MAC address
by running the testpmd command 'mac_addr add' many times.
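A hypothetical testpmd session using the new command (port number and base address are illustrative):

testpmd> add_more_mac_addr 0 00:AA:BB:CC:DD:00 100

which asks port 0 to add 100 addresses derived from the base, so a failing slot can be reached in a single command rather than many 'mac_addr add' invocations.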
---
Changes
v3:
1. Change return value for some specific NIC according to feedbacks
from the community;
2. Add ABI change in release note;
3. Add more detailed commit message.
v2:
fix warnings and errors from check-git-log.sh and checkpatch.pl
Wei Dai (3):
ethdev: fix adding invalid MAC addr
doc: change type of return value of adding MAC addr
app/testpmd: add a command to add many MAC addrs
app/test-pmd/cmdline.c | 55 ++++++++++++++++++++++++++++++++++
doc/guides/rel_notes/release_17_05.rst | 6 ++++
drivers/net/bnx2x/bnx2x_ethdev.c | 7 +++--
drivers/net/bnxt/bnxt_ethdev.c | 12 ++++----
drivers/net/e1000/base/e1000_api.c | 2 +-
drivers/net/e1000/em_ethdev.c | 6 ++--
drivers/net/e1000/igb_ethdev.c | 5 ++--
drivers/net/enic/enic.h | 2 +-
drivers/net/enic/enic_ethdev.c | 4 +--
drivers/net/enic/enic_main.c | 6 ++--
drivers/net/fm10k/fm10k_ethdev.c | 3 +-
drivers/net/i40e/i40e_ethdev.c | 11 +++----
drivers/net/i40e/i40e_ethdev_vf.c | 8 ++---
drivers/net/ixgbe/ixgbe_ethdev.c | 27 +++++++++++------
drivers/net/mlx4/mlx4.c | 14 +++++----
drivers/net/mlx5/mlx5.h | 4 +--
drivers/net/mlx5/mlx5_mac.c | 12 +++++---
drivers/net/qede/qede_ethdev.c | 6 ++--
drivers/net/ring/rte_eth_ring.c | 3 +-
drivers/net/virtio/virtio_ethdev.c | 13 ++++----
lib/librte_ether/rte_ethdev.c | 15 ++++++----
lib/librte_ether/rte_ethdev.h | 2 +-
22 files changed, 157 insertions(+), 66 deletions(-)
--
2.7.4
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v5 1/3] ethdev: new xstats API add retrieving by ID
2017-04-11 16:37 1% ` [dpdk-dev] [PATCH v5 1/3] ethdev: new xstats API add retrieving by ID Michal Jastrzebski
@ 2017-04-12 8:56 5% ` Van Haaren, Harry
2017-04-13 14:59 4% ` [dpdk-dev] [PATCH v6 0/5] Extended xstats API in ethdev library to allow grouping of stats Kuba Kozak
1 sibling, 0 replies; 200+ results
From: Van Haaren, Harry @ 2017-04-12 8:56 UTC (permalink / raw)
To: Jastrzebski, MichalX K, dev
Cc: Jain, Deepak K, Piasecki, JacekX, Kozak, KubaX, Kulasek, TomaszX
> From: Jastrzebski, MichalX K
> Sent: Tuesday, April 11, 2017 5:37 PM
> To: dev@dpdk.org
> Cc: Jain, Deepak K <deepak.k.jain@intel.com>; Van Haaren, Harry <harry.van.haaren@intel.com>;
> Piasecki, JacekX <jacekx.piasecki@intel.com>; Kozak, KubaX <kubax.kozak@intel.com>; Kulasek,
> TomaszX <tomaszx.kulasek@intel.com>
> Subject: [PATCH v5 1/3] ethdev: new xstats API add retrieving by ID
>
> From: Jacek Piasecki <jacekx.piasecki@intel.com>
>
> Extended xstats API in ethdev library to allow grouping of stats
> logically so they can be retrieved per logical grouping managed
> by the application.
> Changed existing functions rte_eth_xstats_get_names and
> rte_eth_xstats_get to use a new list of arguments: array of ids
> and array of values. ABI versioning mechanism was used to
> support backward compatibility.
> Introduced two new functions rte_eth_xstats_get_all and
> rte_eth_xstats_get_names_all which keeps functionality of the
> previous ones (respectively rte_eth_xstats_get and
> rte_eth_xstats_get_names) but use new API inside.
> Both functions marked as deprecated.
> Introduced new function: rte_eth_xstats_get_id_by_name
> to retrieve xstats ids by its names.
>
> test-pmd: add support for new xstats API retrieving by id in
> testpmd application: xstats_get() and
> xstats_get_names() call with modified parameters.
>
> proc_info: add support for new xstats API retrieving by id to
> proc_info application. There is a new argument --xstats-ids
> in proc_info command line to retrieve statistics given by ids.
> E.g. --xstats-ids="1,3,5,7,8"
>
> doc: add description for modified xstats API
> Documentation change for modified extended statistics API functions.
> The old API only allows retrieval of *all* of the NIC statistics
> at once. Given this requires a MMIO read PCI transaction per statistic
> it is an inefficient way of retrieving just a few key statistics.
> Often a monitoring agent only has an interest in a few key statistics,
> and the old API forces wasting CPU time and PCIe bandwidth in retrieving
> *all* statistics; even those that the application didn't explicitly
> show an interest in.
> The new, more flexible API allow retrieval of statistics per ID.
> If a PMD wishes, it can be implemented to read just the required
> NIC registers. As a result, the monitoring application no longer wastes
> PCIe bandwidth and CPU time.
>
> Signed-off-by: Jacek Piasecki <jacekx.piasecki@intel.com>
> Signed-off-by: Kuba Kozak <kubax.kozak@intel.com>
> Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
As part of this patchset 3 functions were added to struct eth_dev_ops, to provide more flexible xstats() APIs.
This patchset uses symbol versioning to keep ABI stable. I have checked ABI using the ./devtools/validate-abi.sh script for both GCC 5.4.0 and Clang 3.8.0. GCC indicates Compatible, while Clang says there is a single Medium issue, which I believe to be a false positive (details below).
The clang Medium issue is described as follows by the ABI report;
- struct rte_eth_dev :
Change: Size of field dev_ops has been changed from 624 bytes to 648 bytes [HvH: due to adding 3 xstats function pointers to end of struct]
Effect: Previous accesses of applications and library functions to this field and fields at higher positions of the structure definition may be broken.
The reason I believe this is a false positive is that the "dev_ops" field is defined in the rte_eth_dev struct as follows:
const struct eth_dev_ops *dev_ops;
Any accesses made to dev_ops will be by this pointer-dereference, so the *size* of dev_ops *in* rte_eth_dev struct is still a pointer - it hasn't changed. Hence "accesses to this field and fields at higher positions of the structure" will not be changed - the pointer in the rte_eth_dev struct remains a pointer.
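To make that reasoning concrete, a small sketch (types elided, not the real definitions):

struct eth_dev_ops;                        /* grew by 3 function pointers */
struct rte_eth_dev_sketch {
	const struct eth_dev_ops *dev_ops; /* still exactly one pointer wide */
	/* ... fields after dev_ops therefore keep their offsets ... */
};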
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v5 1/3] ethdev: new xstats API add retrieving by ID
2017-04-11 16:37 4% ` [dpdk-dev] [PATCH v5 0/3] Extended xstats API in ethdev library to allow grouping of stats Michal Jastrzebski
@ 2017-04-11 16:37 1% ` Michal Jastrzebski
2017-04-12 8:56 5% ` Van Haaren, Harry
2017-04-13 14:59 4% ` [dpdk-dev] [PATCH v6 0/5] Extended xstats API in ethdev library to allow grouping of stats Kuba Kozak
0 siblings, 2 replies; 200+ results
From: Michal Jastrzebski @ 2017-04-11 16:37 UTC (permalink / raw)
To: dev
Cc: deepak.k.jain, harry.van.haaren, Jacek Piasecki, Kuba Kozak,
Tomasz Kulasek
From: Jacek Piasecki <jacekx.piasecki@intel.com>
Extended xstats API in ethdev library to allow grouping of stats
logically so they can be retrieved per logical grouping managed
by the application.
Changed existing functions rte_eth_xstats_get_names and
rte_eth_xstats_get to use a new list of arguments: array of ids
and array of values. ABI versioning mechanism was used to
support backward compatibility.
Introduced two new functions rte_eth_xstats_get_all and
rte_eth_xstats_get_names_all which keeps functionality of the
previous ones (respectively rte_eth_xstats_get and
rte_eth_xstats_get_names) but use new API inside.
Both functions marked as deprecated.
Introduced new function: rte_eth_xstats_get_id_by_name
to retrieve xstats ids by its names.
test-pmd: add support for new xstats API retrieving by id in
testpmd application: xstats_get() and
xstats_get_names() call with modified parameters.
proc_info: add support for new xstats API retrieving by id to
proc_info application. There is a new argument --xstats-ids
in proc_info command line to retrieve statistics given by ids.
E.g. --xstats-ids="1,3,5,7,8"
doc: add description for modified xstats API
Documentation change for modified extended statistics API functions.
The old API only allows retrieval of *all* of the NIC statistics
at once. Given this requires a MMIO read PCI transaction per statistic
it is an inefficient way of retrieving just a few key statistics.
Often a monitoring agent only has an interest in a few key statistics,
and the old API forces wasting CPU time and PCIe bandwidth in retrieving
*all* statistics; even those that the application didn't explicitly
show an interest in.
The new, more flexible API allows retrieval of statistics per ID.
If a PMD wishes, it can be implemented to read just the required
NIC registers. As a result, the monitoring application no longer wastes
PCIe bandwidth and CPU time.
Signed-off-by: Jacek Piasecki <jacekx.piasecki@intel.com>
Signed-off-by: Kuba Kozak <kubax.kozak@intel.com>
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
---
app/proc_info/main.c | 150 ++++++++++-
app/test-pmd/config.c | 19 +-
doc/guides/prog_guide/poll_mode_drv.rst | 174 +++++++++++--
lib/librte_ether/rte_ethdev.c | 430 ++++++++++++++++++++++++--------
lib/librte_ether/rte_ethdev.h | 170 ++++++++++++-
lib/librte_ether/rte_ether_version.map | 5 +
6 files changed, 794 insertions(+), 154 deletions(-)
diff --git a/app/proc_info/main.c b/app/proc_info/main.c
index d576b42..d0eae4b 100644
--- a/app/proc_info/main.c
+++ b/app/proc_info/main.c
@@ -86,6 +86,14 @@
static uint32_t reset_xstats;
/**< Enable memory info. */
static uint32_t mem_info;
+/**< Enable displaying xstat name. */
+static uint32_t enable_xstats_name;
+static char *xstats_name;
+
+/**< Enable xstats by ids. */
+#define MAX_NB_XSTATS_IDS 1024
+static uint32_t nb_xstats_ids;
+static uint64_t xstats_ids[MAX_NB_XSTATS_IDS];
/**< display usage */
static void
@@ -97,8 +105,9 @@
" --stats: to display port statistics, enabled by default\n"
" --xstats: to display extended port statistics, disabled by "
"default\n"
- " --metrics: to display derived metrics of the ports, disabled by "
- "default\n"
+ " --xstats-name NAME: to display single xstat value by NAME\n"
+ " --xstats-ids IDLIST: to display xstat values by id. "
+ "The argument is comma-separated list of xstat ids to print out.\n"
" --stats-reset: to reset port statistics\n"
" --xstats-reset: to reset port extended statistics\n"
" --collectd-format: to print statistics to STDOUT in expected by collectd format\n"
@@ -132,6 +141,33 @@
}
+/*
+ * Parse ids value list into array
+ */
+static int
+parse_xstats_ids(char *list, uint64_t *ids, int limit) {
+ int length;
+ char *token;
+ char *ctx = NULL;
+ char *endptr;
+
+ length = 0;
+ token = strtok_r(list, ",", &ctx);
+ while (token != NULL) {
+ ids[length] = strtoull(token, &endptr, 10);
+ if (*endptr != '\0')
+ return -EINVAL;
+
+ length++;
+ if (length >= limit)
+ return -E2BIG;
+
+ token = strtok_r(NULL, ",", &ctx);
+ }
+
+ return length;
+}
+
static int
proc_info_preparse_args(int argc, char **argv)
{
@@ -178,7 +214,9 @@
{"xstats", 0, NULL, 0},
{"metrics", 0, NULL, 0},
{"xstats-reset", 0, NULL, 0},
+ {"xstats-name", required_argument, NULL, 1},
{"collectd-format", 0, NULL, 0},
+ {"xstats-ids", 1, NULL, 1},
{"host-id", 0, NULL, 0},
{NULL, 0, 0, 0}
};
@@ -224,7 +262,28 @@
MAX_LONG_OPT_SZ))
reset_xstats = 1;
break;
+ case 1:
+ /* Print xstat single value given by name*/
+ if (!strncmp(long_option[option_index].name,
+ "xstats-name", MAX_LONG_OPT_SZ)) {
+ enable_xstats_name = 1;
+ xstats_name = optarg;
+ printf("name:%s:%s\n",
+ long_option[option_index].name,
+ optarg);
+ } else if (!strncmp(long_option[option_index].name,
+ "xstats-ids",
+ MAX_LONG_OPT_SZ)) {
+ nb_xstats_ids = parse_xstats_ids(optarg,
+ xstats_ids, MAX_NB_XSTATS_IDS);
+
+ if (nb_xstats_ids <= 0) {
+ printf("xstats-id list parse error.\n");
+ return -1;
+ }
+ }
+ break;
default:
proc_info_usage(prgname);
return -1;
@@ -351,20 +410,82 @@ static void collectd_resolve_cnt_type(char *cnt_type, size_t cnt_type_len,
}
static void
+nic_xstats_by_name_display(uint8_t port_id, char *name)
+{
+ uint64_t id;
+
+ printf("###### NIC statistics for port %-2d, statistic name '%s':\n",
+ port_id, name);
+
+ if (rte_eth_xstats_get_id_by_name(port_id, name, &id) == 0)
+ printf("%s: %"PRIu64"\n", name, id);
+ else
+ printf("Statistic not found...\n");
+
+}
+
+static void
+nic_xstats_by_ids_display(uint8_t port_id, uint64_t *ids, int len)
+{
+ struct rte_eth_xstat_name *xstats_names;
+ uint64_t *values;
+ int ret, i;
+ static const char *nic_stats_border = "########################";
+
+ values = malloc(sizeof(values) * len);
+ if (values == NULL) {
+ printf("Cannot allocate memory for xstats\n");
+ return;
+ }
+
+ xstats_names = malloc(sizeof(struct rte_eth_xstat_name) * len);
+ if (xstats_names == NULL) {
+ printf("Cannot allocate memory for xstat names\n");
+ free(values);
+ return;
+ }
+
+ if (len != rte_eth_xstats_get_names(
+ port_id, xstats_names, len, ids)) {
+ printf("Cannot get xstat names\n");
+ goto err;
+ }
+
+ printf("###### NIC extended statistics for port %-2d #########\n",
+ port_id);
+ printf("%s############################\n", nic_stats_border);
+ ret = rte_eth_xstats_get(port_id, ids, values, len);
+ if (ret < 0 || ret > len) {
+ printf("Cannot get xstats\n");
+ goto err;
+ }
+
+ for (i = 0; i < len; i++)
+ printf("%s: %"PRIu64"\n",
+ xstats_names[i].name,
+ values[i]);
+
+ printf("%s############################\n", nic_stats_border);
+err:
+ free(values);
+ free(xstats_names);
+}
+
+static void
nic_xstats_display(uint8_t port_id)
{
struct rte_eth_xstat_name *xstats_names;
- struct rte_eth_xstat *xstats;
+ uint64_t *values;
int len, ret, i;
static const char *nic_stats_border = "########################";
- len = rte_eth_xstats_get_names(port_id, NULL, 0);
+ len = rte_eth_xstats_get_names(port_id, NULL, 0, NULL);
if (len < 0) {
printf("Cannot get xstats count\n");
return;
}
- xstats = malloc(sizeof(xstats[0]) * len);
- if (xstats == NULL) {
+ values = malloc(sizeof(values) * len);
+ if (values == NULL) {
printf("Cannot allocate memory for xstats\n");
return;
}
@@ -372,11 +493,11 @@ static void collectd_resolve_cnt_type(char *cnt_type, size_t cnt_type_len,
xstats_names = malloc(sizeof(struct rte_eth_xstat_name) * len);
if (xstats_names == NULL) {
printf("Cannot allocate memory for xstat names\n");
- free(xstats);
+ free(values);
return;
}
if (len != rte_eth_xstats_get_names(
- port_id, xstats_names, len)) {
+ port_id, xstats_names, len, NULL)) {
printf("Cannot get xstat names\n");
goto err;
}
@@ -385,7 +506,7 @@ static void collectd_resolve_cnt_type(char *cnt_type, size_t cnt_type_len,
port_id);
printf("%s############################\n",
nic_stats_border);
- ret = rte_eth_xstats_get(port_id, xstats, len);
+ ret = rte_eth_xstats_get(port_id, NULL, values, len);
if (ret < 0 || ret > len) {
printf("Cannot get xstats\n");
goto err;
@@ -401,18 +522,18 @@ static void collectd_resolve_cnt_type(char *cnt_type, size_t cnt_type_len,
xstats_names[i].name);
sprintf(buf, "PUTVAL %s/dpdkstat-port.%u/%s-%s N:%"
PRIu64"\n", host_id, port_id, counter_type,
- xstats_names[i].name, xstats[i].value);
+ xstats_names[i].name, values[i]);
write(stdout_fd, buf, strlen(buf));
} else {
printf("%s: %"PRIu64"\n", xstats_names[i].name,
- xstats[i].value);
+ values[i]);
}
}
printf("%s############################\n",
nic_stats_border);
err:
- free(xstats);
+ free(values);
free(xstats_names);
}
@@ -551,6 +672,11 @@ static void collectd_resolve_cnt_type(char *cnt_type, size_t cnt_type_len,
nic_stats_clear(i);
else if (reset_xstats)
nic_xstats_clear(i);
+ else if (enable_xstats_name)
+ nic_xstats_by_name_display(i, xstats_name);
+ else if (nb_xstats_ids > 0)
+ nic_xstats_by_ids_display(i, xstats_ids,
+ nb_xstats_ids);
else if (enable_metrics)
metrics_display(i);
}
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 4d873cd..ef07925 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -264,9 +264,9 @@ struct rss_type_info {
void
nic_xstats_display(portid_t port_id)
{
- struct rte_eth_xstat *xstats;
int cnt_xstats, idx_xstat;
struct rte_eth_xstat_name *xstats_names;
+ uint64_t *values;
printf("###### NIC extended statistics for port %-2d\n", port_id);
if (!rte_eth_dev_is_valid_port(port_id)) {
@@ -275,7 +275,7 @@ struct rss_type_info {
}
/* Get count */
- cnt_xstats = rte_eth_xstats_get_names(port_id, NULL, 0);
+ cnt_xstats = rte_eth_xstats_get_names(port_id, NULL, 0, NULL);
if (cnt_xstats < 0) {
printf("Error: Cannot get count of xstats\n");
return;
@@ -288,23 +288,24 @@ struct rss_type_info {
return;
}
if (cnt_xstats != rte_eth_xstats_get_names(
- port_id, xstats_names, cnt_xstats)) {
+ port_id, xstats_names, cnt_xstats, NULL)) {
printf("Error: Cannot get xstats lookup\n");
free(xstats_names);
return;
}
/* Get stats themselves */
- xstats = malloc(sizeof(struct rte_eth_xstat) * cnt_xstats);
- if (xstats == NULL) {
+ values = malloc(sizeof(values) * cnt_xstats);
+ if (values == NULL) {
printf("Cannot allocate memory for xstats\n");
free(xstats_names);
return;
}
- if (cnt_xstats != rte_eth_xstats_get(port_id, xstats, cnt_xstats)) {
+ if (cnt_xstats != rte_eth_xstats_get(port_id, NULL, values,
+ cnt_xstats)) {
printf("Error: Unable to get xstats\n");
free(xstats_names);
- free(xstats);
+ free(values);
return;
}
@@ -312,9 +313,9 @@ struct rss_type_info {
for (idx_xstat = 0; idx_xstat < cnt_xstats; idx_xstat++)
printf("%s: %"PRIu64"\n",
xstats_names[idx_xstat].name,
- xstats[idx_xstat].value);
+ values[idx_xstat]);
free(xstats_names);
- free(xstats);
+ free(values);
}
void
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index e48c121..3372f30 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -334,24 +334,20 @@ The Ethernet device API exported by the Ethernet PMDs is described in the *DPDK
Extended Statistics API
~~~~~~~~~~~~~~~~~~~~~~~
-The extended statistics API allows each individual PMD to expose a unique set
-of statistics. Accessing these from application programs is done via two
-functions:
-
-* ``rte_eth_xstats_get``: Fills in an array of ``struct rte_eth_xstat``
- with extended statistics.
-* ``rte_eth_xstats_get_names``: Fills in an array of
- ``struct rte_eth_xstat_name`` with extended statistic name lookup
- information.
-
-Each ``struct rte_eth_xstat`` contains an identifier and value pair, and
-each ``struct rte_eth_xstat_name`` contains a string. Each identifier
-within the ``struct rte_eth_xstat`` lookup array must have a corresponding
-entry in the ``struct rte_eth_xstat_name`` lookup array. Within the latter
-the index of the entry is the identifier the string is associated with.
-These identifiers, as well as the number of extended statistic exposed, must
-remain constant during runtime. Note that extended statistic identifiers are
-driver-specific, and hence might not be the same for different ports.
+The extended statistics API allows a PMD to expose all statistics that are
+available to it, including statistics that are unique to the device.
+Each statistic has three properties ``name``, ``id`` and ``value``:
+
+* ``name``: A human readable string formatted by the scheme detailed below.
+* ``id``: An integer that represents only that statistic.
+* ``value``: A unsigned 64-bit integer that is the value of the statistic.
+
+The API consists of various ``rte_eth_xstats_*()`` functions, and allows an
+application to be flexible in how it retrieves statistics.
+
+
+Scheme for Human Readable Names
+_______________________________
A naming scheme exists for the strings exposed to clients of the API. This is
to allow scraping of the API for statistics of interest. The naming scheme uses
@@ -363,8 +359,8 @@ strings split by a single underscore ``_``. The scheme is as follows:
* detail n
* unit
-Examples of common statistics xstats strings, formatted to comply to the scheme
-proposed above:
+Examples of common statistics xstats strings, formatted to comply to the
+above scheme:
* ``rx_bytes``
* ``rx_crc_errors``
@@ -378,7 +374,7 @@ associated with the receive side of the NIC. The second component ``packets``
indicates that the unit of measure is packets.
A more complicated example: ``tx_size_128_to_255_packets``. In this example,
-``tx`` indicates transmission, ``size`` is the first detail, ``128`` etc are
+``tx`` indicates transmission, ``size`` is the first detail, ``128`` etc. are
more details, and ``packets`` indicates that this is a packet counter.
Some additions in the metadata scheme are as follows:
@@ -392,3 +388,139 @@ Some additions in the metadata scheme are as follows:
An example where queue numbers are used is as follows: ``tx_q7_bytes`` which
indicates this statistic applies to queue number 7, and represents the number
of transmitted bytes on that queue.
+
+API Design
+__________
+
+The xstats API uses the ``name``, ``id``, and ``value`` to allow performant
+lookup of specific statistics. Performant lookup means two things;
+
+* No string comparisons with the ``name`` of the statistic in fast-path
+* Allow requesting of only the statistics of interest
+
+The API ensures these requirements are met by mapping the ``name`` of the
+statistic to a unique ``id``, which is used as a key for lookup in the fast-path.
+The API allows applications to request an array of ``id`` values, so that the
+PMD only performs the required calculations. Expected usage is that the
+application scans the ``name`` of each statistic, and caches the ``id``
+if it has an interest in that statistic. On the fast-path, the integer can be used
+to retrieve the actual ``value`` of the statistic that the ``id`` represents.
+
+API Functions
+_____________
+
+The API is built out of a small number of functions, which can be used to
+retrieve the number of statistics and the names, IDs and values of those
+statistics.
+
+* ``rte_eth_xstats_get_names()``: returns the names of the statistics. When given a
+ ``NULL`` parameter the function returns the number of statistics that are available.
+
+* ``rte_eth_xstats_get_id_by_name()``: Searches for the statistic ID that matches
+ ``xstat_name``. If found, the ``id`` integer is set.
+
+* ``rte_eth_xstats_get()``: Fills in an array of ``uint64_t`` values
+  matching the provided ``ids`` array. If the ``ids`` array is ``NULL``, it
+  returns all statistics that are available.
+
+
+Application Usage
+_________________
+
+Imagine an application that wants to view the dropped packet count. If no
+packets are dropped, the application does not read any other metrics for
+performance reasons. If packets are dropped, the application has a particular
+set of statistics that it requests. This "set" of statistics allows the app to
+decide what next steps to perform. The following code-snippets show how the
+xstats API can be used to achieve this goal.
+
+The first step is to get all statistics names and list them:
+
+.. code-block:: c
+
+ struct rte_eth_xstat_name *xstats_names;
+ uint64_t *values;
+ int len, i;
+
+ /* Get number of stats */
+    len = rte_eth_xstats_get_names(port_id, NULL, 0, NULL);
+ if (len < 0) {
+ printf("Cannot get xstats count\n");
+ goto err;
+ }
+
+ xstats_names = malloc(sizeof(struct rte_eth_xstat_name) * len);
+ if (xstats_names == NULL) {
+ printf("Cannot allocate memory for xstat names\n");
+ goto err;
+ }
+
+ /* Retrieve xstats names, passing NULL for IDs to return all statistics */
+    if (len != rte_eth_xstats_get_names(port_id, xstats_names, len, NULL)) {
+ printf("Cannot get xstat names\n");
+ goto err;
+ }
+
+    values = malloc(sizeof(*values) * len);
+ if (values == NULL) {
+ printf("Cannot allocate memory for xstats\n");
+ goto err;
+ }
+
+ /* Getting xstats values */
+ if (len != rte_eth_xstats_get(port_id, NULL, values, len)) {
+ printf("Cannot get xstat values\n");
+ goto err;
+ }
+
+ /* Print all xstats names and values */
+ for (i = 0; i < len; i++) {
+ printf("%s: %"PRIu64"\n", xstats_names[i].name, values[i]);
+ }
+
+The application has access to the names of all of the statistics that the PMD
+exposes. The application can decide which statistics are of interest, and
+cache the ids of those statistics by looking up their names as follows:
+
+.. code-block:: c
+
+ uint64_t id;
+ uint64_t value;
+ const char *xstat_name = "rx_errors";
+
+    if (rte_eth_xstats_get_id_by_name(port_id, xstat_name, &id) == 0) {
+        rte_eth_xstats_get(port_id, &id, &value, 1);
+        printf("%s: %"PRIu64"\n", xstat_name, value);
+    } else {
+        printf("Cannot find xstat with the given name\n");
+        goto err;
+    }
+
+The API provides flexibility to the application so that it can look up multiple
+statistics using an array containing multiple ``id`` numbers. This reduces the
+function call overhead of retrieving statistics, and makes lookup of multiple
+statistics simpler for the application.
+
+.. code-block:: c
+
+ #define APP_NUM_STATS 4
+ /* application cached these ids previously; see above */
+ uint64_t ids_array[APP_NUM_STATS] = {3,4,7,21};
+ uint64_t value_array[APP_NUM_STATS];
+
+ /* Getting multiple xstats values from array of IDs */
+ rte_eth_xstats_get(port_id, ids_array, value_array, APP_NUM_STATS);
+
+    uint32_t i;
+    for (i = 0; i < APP_NUM_STATS; i++) {
+        printf("%"PRIu64": %"PRIu64"\n", ids_array[i], value_array[i]);
+    }
+
+
+This array lookup API for xstats allows the application to create multiple
+"groups" of statistics, and look up the values of those IDs using a single API
+call. As an end result, the application is able to achieve its goal of
+monitoring a single statistic ("rx_errors" in this case), and if that shows
+packets being dropped, it can easily retrieve a "set" of statistics using the
+IDs array parameter to the ``rte_eth_xstats_get()`` function.
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 4e1e6dc..0653cf7 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1454,12 +1454,19 @@ struct rte_eth_dev *
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
dev = &rte_eth_devices[port_id];
+ if (dev->dev_ops->xstats_get_names_by_ids != NULL) {
+ count = (*dev->dev_ops->xstats_get_names_by_ids)(dev, NULL,
+ NULL, 0);
+ if (count < 0)
+ return count;
+ }
if (dev->dev_ops->xstats_get_names != NULL) {
count = (*dev->dev_ops->xstats_get_names)(dev, NULL, 0);
if (count < 0)
return count;
} else
count = 0;
+
count += RTE_NB_STATS;
count += RTE_MIN(dev->data->nb_rx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS) *
RTE_NB_RXQ_STATS;
@@ -1469,150 +1476,367 @@ struct rte_eth_dev *
}
int
-rte_eth_xstats_get_names(uint8_t port_id,
- struct rte_eth_xstat_name *xstats_names,
- unsigned size)
+rte_eth_xstats_get_id_by_name(uint8_t port_id, const char *xstat_name,
+ uint64_t *id)
{
- struct rte_eth_dev *dev;
- int cnt_used_entries;
- int cnt_expected_entries;
- int cnt_driver_entries;
- uint32_t idx, id_queue;
- uint16_t num_q;
+ int cnt_xstats, idx_xstat;
- cnt_expected_entries = get_xstats_count(port_id);
- if (xstats_names == NULL || cnt_expected_entries < 0 ||
- (int)size < cnt_expected_entries)
- return cnt_expected_entries;
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
- /* port_id checked in get_xstats_count() */
- dev = &rte_eth_devices[port_id];
- cnt_used_entries = 0;
+ if (!id) {
+ RTE_PMD_DEBUG_TRACE("Error: id pointer is NULL\n");
+ return -1;
+ }
- for (idx = 0; idx < RTE_NB_STATS; idx++) {
- snprintf(xstats_names[cnt_used_entries].name,
- sizeof(xstats_names[0].name),
- "%s", rte_stats_strings[idx].name);
- cnt_used_entries++;
+ if (!xstat_name) {
+ RTE_PMD_DEBUG_TRACE("Error: xstat_name pointer is NULL\n");
+ return -1;
}
- num_q = RTE_MIN(dev->data->nb_rx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
- for (id_queue = 0; id_queue < num_q; id_queue++) {
- for (idx = 0; idx < RTE_NB_RXQ_STATS; idx++) {
+
+ /* Get count */
+ cnt_xstats = rte_eth_xstats_get_names(port_id, NULL, 0, NULL);
+ if (cnt_xstats < 0) {
+ RTE_PMD_DEBUG_TRACE("Error: Cannot get count of xstats\n");
+ return -1;
+ }
+
+ /* Get id-name lookup table */
+ struct rte_eth_xstat_name xstats_names[cnt_xstats];
+
+ if (cnt_xstats != rte_eth_xstats_get_names(
+ port_id, xstats_names, cnt_xstats, NULL)) {
+ RTE_PMD_DEBUG_TRACE("Error: Cannot get xstats lookup\n");
+ return -1;
+ }
+
+ for (idx_xstat = 0; idx_xstat < cnt_xstats; idx_xstat++) {
+ if (!strcmp(xstats_names[idx_xstat].name, xstat_name)) {
+ *id = idx_xstat;
+ return 0;
+		}
+ }
+
+ return -EINVAL;
+}
+
+int
+rte_eth_xstats_get_names_v1607(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int size)
+{
+ return rte_eth_xstats_get_names(port_id, xstats_names, size, NULL);
+}
+VERSION_SYMBOL(rte_eth_xstats_get_names, _v1607, 16.07);
+
+int
+rte_eth_xstats_get_names_v1705(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, unsigned int size,
+ uint64_t *ids)
+{
+ /* Get all xstats */
+ if (!ids) {
+ struct rte_eth_dev *dev;
+ int cnt_used_entries;
+ int cnt_expected_entries;
+ int cnt_driver_entries;
+ uint32_t idx, id_queue;
+ uint16_t num_q;
+
+ cnt_expected_entries = get_xstats_count(port_id);
+ if (xstats_names == NULL || cnt_expected_entries < 0 ||
+ (int)size < cnt_expected_entries)
+ return cnt_expected_entries;
+
+ /* port_id checked in get_xstats_count() */
+ dev = &rte_eth_devices[port_id];
+ cnt_used_entries = 0;
+
+ for (idx = 0; idx < RTE_NB_STATS; idx++) {
snprintf(xstats_names[cnt_used_entries].name,
sizeof(xstats_names[0].name),
- "rx_q%u%s",
- id_queue, rte_rxq_stats_strings[idx].name);
+ "%s", rte_stats_strings[idx].name);
cnt_used_entries++;
}
+ num_q = RTE_MIN(dev->data->nb_rx_queues,
+ RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ for (id_queue = 0; id_queue < num_q; id_queue++) {
+ for (idx = 0; idx < RTE_NB_RXQ_STATS; idx++) {
+ snprintf(xstats_names[cnt_used_entries].name,
+ sizeof(xstats_names[0].name),
+ "rx_q%u%s",
+ id_queue,
+ rte_rxq_stats_strings[idx].name);
+ cnt_used_entries++;
+ }
+
+ }
+ num_q = RTE_MIN(dev->data->nb_tx_queues,
+ RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ for (id_queue = 0; id_queue < num_q; id_queue++) {
+ for (idx = 0; idx < RTE_NB_TXQ_STATS; idx++) {
+ snprintf(xstats_names[cnt_used_entries].name,
+ sizeof(xstats_names[0].name),
+ "tx_q%u%s",
+ id_queue,
+ rte_txq_stats_strings[idx].name);
+ cnt_used_entries++;
+ }
+ }
+
+ if (dev->dev_ops->xstats_get_names_by_ids != NULL) {
+ /* If there are any driver-specific xstats, append them
+ * to end of list.
+ */
+ cnt_driver_entries =
+ (*dev->dev_ops->xstats_get_names_by_ids)(
+ dev,
+ xstats_names + cnt_used_entries,
+ NULL,
+ size - cnt_used_entries);
+ if (cnt_driver_entries < 0)
+ return cnt_driver_entries;
+ cnt_used_entries += cnt_driver_entries;
+
+ } else if (dev->dev_ops->xstats_get_names != NULL) {
+ /* If there are any driver-specific xstats, append them
+ * to end of list.
+ */
+ cnt_driver_entries = (*dev->dev_ops->xstats_get_names)(
+ dev,
+ xstats_names + cnt_used_entries,
+ size - cnt_used_entries);
+ if (cnt_driver_entries < 0)
+ return cnt_driver_entries;
+ cnt_used_entries += cnt_driver_entries;
+ }
+ return cnt_used_entries;
}
- num_q = RTE_MIN(dev->data->nb_tx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
- for (id_queue = 0; id_queue < num_q; id_queue++) {
- for (idx = 0; idx < RTE_NB_TXQ_STATS; idx++) {
- snprintf(xstats_names[cnt_used_entries].name,
- sizeof(xstats_names[0].name),
- "tx_q%u%s",
- id_queue, rte_txq_stats_strings[idx].name);
- cnt_used_entries++;
+ /* Get only xstats given by IDS */
+ else {
+ uint16_t len, i;
+ struct rte_eth_xstat_name *xstats_names_copy;
+
+ len = rte_eth_xstats_get_names_v1705(port_id, NULL, 0, NULL);
+
+ xstats_names_copy =
+ malloc(sizeof(struct rte_eth_xstat_name) * len);
+		if (!xstats_names_copy) {
+			RTE_PMD_DEBUG_TRACE(
+				"ERROR: can't allocate memory for xstats names\n");
+			return -1;
}
+
+ rte_eth_xstats_get_names_v1705(port_id, xstats_names_copy,
+ len, NULL);
+
+ for (i = 0; i < size; i++) {
+			if (ids[i] >= len) {
+				RTE_PMD_DEBUG_TRACE(
+					"ERROR: id value isn't valid\n");
+				free(xstats_names_copy);
+				return -1;
+			}
+ strcpy(xstats_names[i].name,
+ xstats_names_copy[ids[i]].name);
+ }
+ free(xstats_names_copy);
+ return size;
}
+}
+BIND_DEFAULT_SYMBOL(rte_eth_xstats_get_names, _v1705, 17.05);
- if (dev->dev_ops->xstats_get_names != NULL) {
- /* If there are any driver-specific xstats, append them
- * to end of list.
- */
- cnt_driver_entries = (*dev->dev_ops->xstats_get_names)(
- dev,
- xstats_names + cnt_used_entries,
- size - cnt_used_entries);
- if (cnt_driver_entries < 0)
- return cnt_driver_entries;
- cnt_used_entries += cnt_driver_entries;
+MAP_STATIC_SYMBOL(int
+ rte_eth_xstats_get_names(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int size,
+ uint64_t *ids), rte_eth_xstats_get_names_v1705);
+
+/* retrieve ethdev extended statistics */
+int
+rte_eth_xstats_get_v22(uint8_t port_id, struct rte_eth_xstat *xstats,
+ unsigned int n)
+{
+ uint64_t *values_copy;
+ uint16_t size, i;
+
+	values_copy = malloc(sizeof(*values_copy) * n);
+ if (!values_copy) {
+ RTE_PMD_DEBUG_TRACE(
+ "ERROR: Cannot allocate memory for xstats\n");
+ return -1;
}
+	size = rte_eth_xstats_get(port_id, NULL, values_copy, n);
- return cnt_used_entries;
+ for (i = 0; i < n; i++) {
+ xstats[i].id = i;
+ xstats[i].value = values_copy[i];
+ }
+ free(values_copy);
+ return size;
}
+VERSION_SYMBOL(rte_eth_xstats_get, _v22, 2.2);
/* retrieve ethdev extended statistics */
int
-rte_eth_xstats_get(uint8_t port_id, struct rte_eth_xstat *xstats,
- unsigned n)
+rte_eth_xstats_get_v1705(uint8_t port_id, uint64_t *ids, uint64_t *values,
+ unsigned int n)
{
- struct rte_eth_stats eth_stats;
- struct rte_eth_dev *dev;
- unsigned count = 0, i, q;
- signed xcount = 0;
- uint64_t val, *stats_ptr;
- uint16_t nb_rxqs, nb_txqs;
+ /* If need all xstats */
+ if (!ids) {
+ struct rte_eth_stats eth_stats;
+ struct rte_eth_dev *dev;
+ unsigned int count = 0, i, q;
+ signed int xcount = 0;
+ uint64_t val, *stats_ptr;
+ uint16_t nb_rxqs, nb_txqs;
- RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
- dev = &rte_eth_devices[port_id];
+ nb_rxqs = RTE_MIN(dev->data->nb_rx_queues,
+ RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ nb_txqs = RTE_MIN(dev->data->nb_tx_queues,
+ RTE_ETHDEV_QUEUE_STAT_CNTRS);
- nb_rxqs = RTE_MIN(dev->data->nb_rx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
- nb_txqs = RTE_MIN(dev->data->nb_tx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ /* Return generic statistics */
+ count = RTE_NB_STATS + (nb_rxqs * RTE_NB_RXQ_STATS) +
+ (nb_txqs * RTE_NB_TXQ_STATS);
- /* Return generic statistics */
- count = RTE_NB_STATS + (nb_rxqs * RTE_NB_RXQ_STATS) +
- (nb_txqs * RTE_NB_TXQ_STATS);
- /* implemented by the driver */
- if (dev->dev_ops->xstats_get != NULL) {
- /* Retrieve the xstats from the driver at the end of the
- * xstats struct.
- */
- xcount = (*dev->dev_ops->xstats_get)(dev,
- xstats ? xstats + count : NULL,
- (n > count) ? n - count : 0);
+ /* implemented by the driver */
+ if (dev->dev_ops->xstats_get_by_ids != NULL) {
+ /* Retrieve the xstats from the driver at the end of the
+ * xstats struct. Retrieve all xstats.
+ */
+ xcount = (*dev->dev_ops->xstats_get_by_ids)(dev,
+ NULL,
+ values ? values + count : NULL,
+ (n > count) ? n - count : 0);
+
+ if (xcount < 0)
+ return xcount;
+ /* implemented by the driver */
+ } else if (dev->dev_ops->xstats_get != NULL) {
+ /* Retrieve the xstats from the driver at the end of the
+ * xstats struct. Retrieve all xstats.
+ * Compatibility for PMD without xstats_get_by_ids
+ */
+ unsigned int size = (n > count) ? n - count : 1;
+ struct rte_eth_xstat xstats[size];
- if (xcount < 0)
- return xcount;
- }
+ xcount = (*dev->dev_ops->xstats_get)(dev,
+ values ? xstats : NULL, size);
- if (n < count + xcount || xstats == NULL)
- return count + xcount;
+ if (xcount < 0)
+ return xcount;
+
+ if (values != NULL)
+				for (i = 0; i < (unsigned int)xcount; i++)
+ values[i + count] = xstats[i].value;
+ }
- /* now fill the xstats structure */
- count = 0;
-	rte_eth_stats_get(port_id, &eth_stats);
+ if (n < count + xcount || values == NULL)
+ return count + xcount;
- /* global stats */
- for (i = 0; i < RTE_NB_STATS; i++) {
-		stats_ptr = RTE_PTR_ADD(&eth_stats,
- rte_stats_strings[i].offset);
- val = *stats_ptr;
- xstats[count++].value = val;
- }
+ /* now fill the xstats structure */
+ count = 0;
+		rte_eth_stats_get(port_id, &eth_stats);
- /* per-rxq stats */
- for (q = 0; q < nb_rxqs; q++) {
- for (i = 0; i < RTE_NB_RXQ_STATS; i++) {
+ /* global stats */
+ for (i = 0; i < RTE_NB_STATS; i++) {
			stats_ptr = RTE_PTR_ADD(&eth_stats,
- rte_rxq_stats_strings[i].offset +
- q * sizeof(uint64_t));
+ rte_stats_strings[i].offset);
val = *stats_ptr;
- xstats[count++].value = val;
+ values[count++] = val;
}
+
+ /* per-rxq stats */
+ for (q = 0; q < nb_rxqs; q++) {
+ for (i = 0; i < RTE_NB_RXQ_STATS; i++) {
+				stats_ptr = RTE_PTR_ADD(&eth_stats,
+ rte_rxq_stats_strings[i].offset +
+ q * sizeof(uint64_t));
+ val = *stats_ptr;
+ values[count++] = val;
+ }
+ }
+
+ /* per-txq stats */
+ for (q = 0; q < nb_txqs; q++) {
+ for (i = 0; i < RTE_NB_TXQ_STATS; i++) {
+				stats_ptr = RTE_PTR_ADD(&eth_stats,
+ rte_txq_stats_strings[i].offset +
+ q * sizeof(uint64_t));
+ val = *stats_ptr;
+ values[count++] = val;
+ }
+ }
+
+ return count + xcount;
}
+ /* Need only xstats given by IDS array */
+ else {
+ uint16_t i, size;
+ uint64_t *values_copy;
- /* per-txq stats */
- for (q = 0; q < nb_txqs; q++) {
- for (i = 0; i < RTE_NB_TXQ_STATS; i++) {
-			stats_ptr = RTE_PTR_ADD(&eth_stats,
- rte_txq_stats_strings[i].offset +
- q * sizeof(uint64_t));
- val = *stats_ptr;
- xstats[count++].value = val;
+ size = rte_eth_xstats_get_v1705(port_id, NULL, NULL, 0);
+
+		values_copy = malloc(sizeof(*values_copy) * size);
+ if (!values_copy) {
+ RTE_PMD_DEBUG_TRACE(
+ "ERROR: can't allocate memory for values_copy\n");
+ return -1;
+ }
+
+ rte_eth_xstats_get_v1705(port_id, NULL, values_copy, size);
+
+ for (i = 0; i < n; i++) {
+			if (ids[i] >= size) {
+				RTE_PMD_DEBUG_TRACE(
+					"ERROR: id value isn't valid\n");
+				free(values_copy);
+				return -1;
+			}
+ values[i] = values_copy[ids[i]];
}
+ free(values_copy);
+ return n;
+ }
+}
+BIND_DEFAULT_SYMBOL(rte_eth_xstats_get, _v1705, 17.05);
+
+MAP_STATIC_SYMBOL(int
+ rte_eth_xstats_get(uint8_t port_id, uint64_t *ids,
+ uint64_t *values, unsigned int n), rte_eth_xstats_get_v1705);
+
+__rte_deprecated int
+rte_eth_xstats_get_all(uint8_t port_id, struct rte_eth_xstat *xstats,
+ unsigned int n)
+{
+ uint64_t *values_copy;
+ uint16_t size, i;
+
+	values_copy = malloc(sizeof(*values_copy) * n);
+ if (!values_copy) {
+ RTE_PMD_DEBUG_TRACE(
+ "ERROR: Cannot allocate memory for xstats\n");
+ return -1;
}
+	size = rte_eth_xstats_get(port_id, NULL, values_copy, n);
- for (i = 0; i < count; i++)
+ for (i = 0; i < n; i++) {
xstats[i].id = i;
- /* add an offset to driver-specific stats */
- for ( ; i < count + xcount; i++)
- xstats[i].id += count;
+ xstats[i].value = values_copy[i];
+ }
+ free(values_copy);
+ return size;
+}
- return count + xcount;
+__rte_deprecated int
+rte_eth_xstats_get_names_all(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, unsigned int n)
+{
+ return rte_eth_xstats_get_names(port_id, xstats_names, n, NULL);
}
/* reset ethdev extended statistics */
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index d072538..0ef9d20 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -186,6 +186,7 @@
#include "rte_ether.h"
#include "rte_eth_ctrl.h"
#include "rte_dev_info.h"
+#include "rte_compat.h"
struct rte_mbuf;
@@ -1118,6 +1119,10 @@ typedef int (*eth_xstats_get_t)(struct rte_eth_dev *dev,
struct rte_eth_xstat *stats, unsigned n);
/**< @internal Get extended stats of an Ethernet device. */
+typedef int (*eth_xstats_get_by_ids_t)(struct rte_eth_dev *dev,
+ uint64_t *ids, uint64_t *values, unsigned int n);
+/**< @internal Get extended stats of an Ethernet device. */
+
typedef void (*eth_xstats_reset_t)(struct rte_eth_dev *dev);
/**< @internal Reset extended stats of an Ethernet device. */
@@ -1125,6 +1130,17 @@ typedef int (*eth_xstats_get_names_t)(struct rte_eth_dev *dev,
struct rte_eth_xstat_name *xstats_names, unsigned size);
/**< @internal Get names of extended stats of an Ethernet device. */
+typedef int (*eth_xstats_get_names_by_ids_t)(struct rte_eth_dev *dev,
+ struct rte_eth_xstat_name *xstats_names, uint64_t *ids,
+ unsigned int size);
+/**< @internal Get names of extended stats of an Ethernet device. */
+
+typedef int (*eth_xstats_get_by_name_t)(struct rte_eth_dev *dev,
+ struct rte_eth_xstat_name *xstats_names,
+ struct rte_eth_xstat *xstat,
+ const char *name);
+/**< @internal Get xstat specified by name of an Ethernet device. */
+
typedef int (*eth_queue_stats_mapping_set_t)(struct rte_eth_dev *dev,
uint16_t queue_id,
uint8_t stat_idx,
@@ -1466,8 +1482,8 @@ struct eth_dev_ops {
eth_stats_reset_t stats_reset; /**< Reset generic device statistics. */
eth_xstats_get_t xstats_get; /**< Get extended device statistics. */
eth_xstats_reset_t xstats_reset; /**< Reset extended device statistics. */
- eth_xstats_get_names_t xstats_get_names;
- /**< Get names of extended statistics. */
+ eth_xstats_get_names_t xstats_get_names;
+ /**< Get names of extended device statistics. */
eth_queue_stats_mapping_set_t queue_stats_mapping_set;
/**< Configure per queue stat counter mapping. */
@@ -1563,6 +1579,12 @@ struct eth_dev_ops {
eth_timesync_adjust_time timesync_adjust_time; /** Adjust the device clock. */
eth_timesync_read_time timesync_read_time; /** Get the device clock time. */
eth_timesync_write_time timesync_write_time; /** Set the device clock time. */
+ eth_xstats_get_by_ids_t xstats_get_by_ids;
+ /**< Get extended device statistics by ID. */
+ eth_xstats_get_names_by_ids_t xstats_get_names_by_ids;
+ /**< Get name of extended device statistics by ID. */
+ eth_xstats_get_by_name_t xstats_get_by_name;
+ /**< Get extended device statistics by name. */
};
/**
@@ -2324,8 +2346,55 @@ int rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
*/
void rte_eth_stats_reset(uint8_t port_id);
+
/**
- * Retrieve names of extended statistics of an Ethernet device.
+ * Gets the ID of a statistic from its name.
+ *
+ * Note this function searches for the statistic using string compares, and
+ * as such should not be used on the fast-path. For fast-path retrieval of
+ * specific statistics, store the ID as provided in *id* from this function,
+ * and pass the ID to rte_eth_xstats_get().
+ *
+ * @param port_id The port to look up statistics from
+ * @param xstat_name The name of the statistic to return
+ * @param[out] id A pointer to an app-supplied uint64_t which should be
+ *                set to the ID of the stat if the stat exists.
+ * @return
+ *   0 on success
+ *   -ENODEV for invalid port_id,
+ *   -EINVAL if the xstat_name doesn't exist in port_id
+ */
+int rte_eth_xstats_get_id_by_name(uint8_t port_id, const char *xstat_name,
+ uint64_t *id);
+
+/**
+ * Retrieve all extended statistics of an Ethernet device.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param xstats
+ * A pointer to a table of structure of type *rte_eth_xstat*
+ * to be filled with device statistics ids and values: id is the
+ * index of the name string in xstats_names (see rte_eth_xstats_get_names()),
+ * and value is the statistic counter.
+ * This parameter can be set to NULL if n is 0.
+ * @param n
+ * The size of the xstats array (number of elements).
+ * @return
+ * - A positive value lower or equal to n: success. The return value
+ * is the number of entries filled in the stats table.
+ * - A positive value higher than n: error, the given statistics table
+ * is too small. The return value corresponds to the size that should
+ * be given to succeed. The entries in the table are not valid and
+ * shall not be used by the caller.
+ * - A negative value on error (invalid port id).
+ */
+__rte_deprecated
+int rte_eth_xstats_get_all(uint8_t port_id, struct rte_eth_xstat *xstats,
+ unsigned int n);
+
+/**
+ * Retrieve names of all extended statistics of an Ethernet device.
*
* @param port_id
* The port identifier of the Ethernet device.
@@ -2333,7 +2402,7 @@ int rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
* An rte_eth_xstat_name array of at least *size* elements to
* be filled. If set to NULL, the function returns the required number
* of elements.
- * @param size
+ * @param n
* The size of the xstats_names array (number of elements).
* @return
* - A positive value lower or equal to size: success. The return value
@@ -2344,9 +2413,8 @@ int rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
* shall not be used by the caller.
* - A negative value on error (invalid port id).
*/
-int rte_eth_xstats_get_names(uint8_t port_id,
- struct rte_eth_xstat_name *xstats_names,
- unsigned size);
+__rte_deprecated int rte_eth_xstats_get_names_all(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, unsigned int n);
/**
* Retrieve extended statistics of an Ethernet device.
@@ -2370,8 +2438,92 @@ int rte_eth_xstats_get_names(uint8_t port_id,
* shall not be used by the caller.
* - A negative value on error (invalid port id).
*/
-int rte_eth_xstats_get(uint8_t port_id, struct rte_eth_xstat *xstats,
- unsigned n);
+int rte_eth_xstats_get_v22(uint8_t port_id, struct rte_eth_xstat *xstats,
+ unsigned int n);
+
+/**
+ * Retrieve extended statistics of an Ethernet device.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param ids
+ *   A pointer to an ids array passed by the application. This tells which
+ *   statistics values the function should retrieve. This parameter
+ *   can be set to NULL if n is 0. In this case the function will retrieve
+ *   all available statistics.
+ * @param values
+ * A pointer to a table to be filled with device statistics values.
+ * @param n
+ * The size of the ids array (number of elements).
+ * @return
+ * - A positive value lower or equal to n: success. The return value
+ * is the number of entries filled in the stats table.
+ * - A positive value higher than n: error, the given statistics table
+ * is too small. The return value corresponds to the size that should
+ * be given to succeed. The entries in the table are not valid and
+ * shall not be used by the caller.
+ * - A negative value on error (invalid port id).
+ */
+int rte_eth_xstats_get_v1705(uint8_t port_id, uint64_t *ids, uint64_t *values,
+ unsigned int n);
+
+int rte_eth_xstats_get(uint8_t port_id, uint64_t *ids, uint64_t *values,
+ unsigned int n);
+
+/**
+ * Retrieve names of extended statistics of an Ethernet device.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param xstats_names
+ *   An rte_eth_xstat_name array of at least *n* elements to be filled.
+ *   If set to NULL, the function returns the required number of elements.
+ * @param n
+ *   The size of the xstats_names array (number of elements).
+ * @return
+ * - A positive value lower or equal to n: success. The return value
+ * is the number of entries filled in the stats table.
+ * - A positive value higher than n: error, the given statistics table
+ * is too small. The return value corresponds to the size that should
+ * be given to succeed. The entries in the table are not valid and
+ * shall not be used by the caller.
+ * - A negative value on error (invalid port id).
+ */
+int rte_eth_xstats_get_names_v1607(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, unsigned int n);
+
+/**
+ * Retrieve names of extended statistics of an Ethernet device.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param xstats_names
+ * An rte_eth_xstat_name array of at least *size* elements to
+ * be filled. If set to NULL, the function returns the required number
+ * of elements.
+ * @param ids
+ *   An array of IDs given by the application to retrieve specific
+ *   statistics. Can be NULL to retrieve names of all available statistics.
+ * @param size
+ * The size of the xstats_names array (number of elements).
+ * @return
+ * - A positive value lower or equal to size: success. The return value
+ * is the number of entries filled in the stats table.
+ * - A positive value higher than size: error, the given statistics table
+ * is too small. The return value corresponds to the size that should
+ * be given to succeed. The entries in the table are not valid and
+ * shall not be used by the caller.
+ * - A negative value on error (invalid port id).
+ */
+int rte_eth_xstats_get_names_v1705(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, unsigned int size,
+ uint64_t *ids);
+
+int rte_eth_xstats_get_names(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, unsigned int size,
+ uint64_t *ids);
/**
* Reset extended statistics of an Ethernet device.
diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map
index 0ea3856..f4d0136 100644
--- a/lib/librte_ether/rte_ether_version.map
+++ b/lib/librte_ether/rte_ether_version.map
@@ -159,5 +159,10 @@ DPDK_17.05 {
global:
rte_eth_find_next;
+ rte_eth_xstats_get_names;
+ rte_eth_xstats_get;
+ rte_eth_xstats_get_all;
+ rte_eth_xstats_get_names_all;
+ rte_eth_xstats_get_id_by_name;
} DPDK_17.02;
--
1.9.1
* [dpdk-dev] [PATCH v5 0/3] Extended xstats API in ethdev library to allow grouping of stats
2017-04-10 17:59 1% ` [dpdk-dev] [PATCH v4 1/3] ethdev: new xstats API add retrieving by ID Jacek Piasecki
@ 2017-04-11 16:37 4% ` Michal Jastrzebski
2017-04-11 16:37 1% ` [dpdk-dev] [PATCH v5 1/3] ethdev: new xstats API add retrieving by ID Michal Jastrzebski
0 siblings, 1 reply; 200+ results
From: Michal Jastrzebski @ 2017-04-11 16:37 UTC (permalink / raw)
To: dev; +Cc: deepak.k.jain, harry.van.haaren, Jacek Piasecki
From: Jacek Piasecki <jacekx.piasecki@intel.com>
Extended xstats API in ethdev library to allow grouping of stats logically
so they can be retrieved per logical grouping managed by the application.
Changed existing functions rte_eth_xstats_get_names and rte_eth_xstats_get
to use a new list of arguments: array of ids and array of values.
ABI versioning mechanism was used to support backward compatibility.
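For reference, the versioning pattern used in the ethdev changes looks
like this (condensed from the hunks in patch 1/3; binaries linked against
the old three-argument symbol stay on the 16.07 version, while new builds
bind to the four-argument implementation):

  int
  rte_eth_xstats_get_names_v1607(uint8_t port_id,
          struct rte_eth_xstat_name *xstats_names, unsigned int size)
  {
          return rte_eth_xstats_get_names(port_id, xstats_names, size, NULL);
  }
  VERSION_SYMBOL(rte_eth_xstats_get_names, _v1607, 16.07);

  BIND_DEFAULT_SYMBOL(rte_eth_xstats_get_names, _v1705, 17.05);
  MAP_STATIC_SYMBOL(int
          rte_eth_xstats_get_names(uint8_t port_id,
                  struct rte_eth_xstat_name *xstats_names,
                  unsigned int size, uint64_t *ids),
          rte_eth_xstats_get_names_v1705);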
Introduced two new functions, rte_eth_xstats_get_all and
rte_eth_xstats_get_names_all, which keep the functionality of the previous
ones (respectively rte_eth_xstats_get and rte_eth_xstats_get_names)
but use the new API inside. Both functions are marked as deprecated.
Introduced a new function, rte_eth_xstats_get_id_by_name, to retrieve
an xstat's ID by its name.
Extended functionality of proc_info application:
--xstats-name NAME: to display single xstat value by NAME
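For example (a hypothetical invocation; the binary name depends on the
build): ./dpdk-procinfo -- --xstats-name rx_errors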
Updated test-pmd application to use new API.
v5 changes:
* fix clang shared build compilation
* remove wrong versioning macros
* Makefile LIBABIVER 6 change
v4 changes:
* documentation change after API modification
* fix xstats display for PMD without _by_ids() functions
* fix ABI validator errors
v3 changes:
* checkpatch fixes
* removed malloc bug in ethdev
* add new command to proc_info and IDs parsing
* merged testpmd and proc_info patch with library patch
Jacek Piasecki (3):
ethdev: new xstats API add retrieving by ID
net/e1000: new xstats API add ID support for e1000
net/ixgbe: new xstats API add ID support for ixgbe
app/proc_info/main.c | 150 ++++++++++-
app/test-pmd/config.c | 19 +-
doc/guides/prog_guide/poll_mode_drv.rst | 174 +++++++++++--
drivers/net/e1000/igb_ethdev.c | 92 ++++++-
drivers/net/ixgbe/ixgbe_ethdev.c | 179 +++++++++++++
lib/librte_ether/rte_ethdev.c | 430 ++++++++++++++++++++++++--------
lib/librte_ether/rte_ethdev.h | 170 ++++++++++++-
lib/librte_ether/rte_ether_version.map | 5 +
8 files changed, 1063 insertions(+), 156 deletions(-)
--
1.9.1
* [dpdk-dev] [PATCH v4 1/3] ethdev: new xstats API add retrieving by ID
2017-04-10 17:59 4% ` [dpdk-dev] [PATCH v4 0/3] Extended xstats API in ethdev library to allow grouping of stats Jacek Piasecki
@ 2017-04-10 17:59 1% ` Jacek Piasecki
2017-04-11 16:37 4% ` [dpdk-dev] [PATCH v5 0/3] Extended xstats API in ethdev library to allow grouping of stats Michal Jastrzebski
0 siblings, 1 reply; 200+ results
From: Jacek Piasecki @ 2017-04-10 17:59 UTC (permalink / raw)
To: dev
Cc: harry.van.haaren, deepak.k.jain, Jacek Piasecki, Kuba Kozak,
Tomasz Kulasek
Extended xstats API in ethdev library to allow grouping of stats
logically so they can be retrieved per logical grouping managed
by the application.
Changed existing functions rte_eth_xstats_get_names and
rte_eth_xstats_get to use a new list of arguments: array of ids
and array of values. ABI versioning mechanism was used to
support backward compatibility.
Introduced two new functions, rte_eth_xstats_get_all and
rte_eth_xstats_get_names_all, which keep the functionality of the
previous ones (respectively rte_eth_xstats_get and
rte_eth_xstats_get_names) but use the new API inside.
Both functions are marked as deprecated.
Introduced a new function, rte_eth_xstats_get_id_by_name,
to retrieve an xstat's ID by its name.
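The deprecated wrappers simply forward to the new API; condensed from
the ethdev changes (names wrapper shown, the values wrapper is analogous):

  __rte_deprecated int
  rte_eth_xstats_get_names_all(uint8_t port_id,
          struct rte_eth_xstat_name *xstats_names, unsigned int n)
  {
          return rte_eth_xstats_get_names(port_id, xstats_names, n, NULL);
  }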
test-pmd: add support for the new xstats API retrieving by ID in the
testpmd application: xstats_get() and xstats_get_names() are now
called with the modified parameter lists.
proc_info: add support for the new xstats API retrieving by ID to the
proc_info application. There is a new argument --xstats-ids
on the proc_info command line to retrieve statistics given by IDs.
E.g. --xstats-ids="1,3,5,7,8"
doc: add description for modified xstats API
Documentation change for modified extended statistics API functions.
The old API only allows retrieval of *all* of the NIC statistics
at once. Given that this requires an MMIO read PCI transaction per
statistic, it is an inefficient way of retrieving just a few key
statistics. Often a monitoring agent only has an interest in a few key
statistics, and the old API forces wasting CPU time and PCIe bandwidth
in retrieving *all* statistics, even those that the application didn't
explicitly show an interest in.
The new, more flexible API allows retrieval of statistics per ID.
If a PMD wishes, it can be implemented to read just the required
NIC registers. As a result, the monitoring application no longer wastes
PCIe bandwidth and CPU time.
Signed-off-by: Jacek Piasecki <jacekx.piasecki@intel.com>
Signed-off-by: Kuba Kozak <kubax.kozak@intel.com>
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
---
app/proc_info/main.c | 150 ++++++++++-
app/test-pmd/config.c | 19 +-
doc/guides/prog_guide/poll_mode_drv.rst | 174 +++++++++++--
lib/librte_ether/Makefile | 2 +-
lib/librte_ether/rte_ethdev.c | 426 ++++++++++++++++++++++++--------
lib/librte_ether/rte_ethdev.h | 176 ++++++++++++-
lib/librte_ether/rte_ether_version.map | 5 +
7 files changed, 797 insertions(+), 155 deletions(-)
diff --git a/app/proc_info/main.c b/app/proc_info/main.c
index d576b42..d0eae4b 100644
--- a/app/proc_info/main.c
+++ b/app/proc_info/main.c
@@ -86,6 +86,14 @@
static uint32_t reset_xstats;
/**< Enable memory info. */
static uint32_t mem_info;
+/**< Enable displaying xstat name. */
+static uint32_t enable_xstats_name;
+static char *xstats_name;
+
+/**< Enable xstats by ids. */
+#define MAX_NB_XSTATS_IDS 1024
+static uint32_t nb_xstats_ids;
+static uint64_t xstats_ids[MAX_NB_XSTATS_IDS];
/**< display usage */
static void
@@ -97,8 +105,9 @@
" --stats: to display port statistics, enabled by default\n"
" --xstats: to display extended port statistics, disabled by "
"default\n"
- " --metrics: to display derived metrics of the ports, disabled by "
- "default\n"
+ " --xstats-name NAME: to display single xstat value by NAME\n"
+ " --xstats-ids IDLIST: to display xstat values by id. "
+ "The argument is comma-separated list of xstat ids to print out.\n"
" --stats-reset: to reset port statistics\n"
" --xstats-reset: to reset port extended statistics\n"
" --collectd-format: to print statistics to STDOUT in expected by collectd format\n"
@@ -132,6 +141,33 @@
}
+/*
+ * Parse ids value list into array
+ */
+static int
+parse_xstats_ids(char *list, uint64_t *ids, int limit)
+{
+ int length;
+ char *token;
+ char *ctx = NULL;
+ char *endptr;
+
+ length = 0;
+ token = strtok_r(list, ",", &ctx);
+ while (token != NULL) {
+ ids[length] = strtoull(token, &endptr, 10);
+ if (*endptr != '\0')
+ return -EINVAL;
+
+ length++;
+ if (length >= limit)
+ return -E2BIG;
+
+ token = strtok_r(NULL, ",", &ctx);
+ }
+
+ return length;
+}
+
static int
proc_info_preparse_args(int argc, char **argv)
{
@@ -178,7 +214,9 @@
{"xstats", 0, NULL, 0},
{"metrics", 0, NULL, 0},
{"xstats-reset", 0, NULL, 0},
+ {"xstats-name", required_argument, NULL, 1},
{"collectd-format", 0, NULL, 0},
+		{"xstats-ids", required_argument, NULL, 1},
{"host-id", 0, NULL, 0},
{NULL, 0, 0, 0}
};
@@ -224,7 +262,28 @@
MAX_LONG_OPT_SZ))
reset_xstats = 1;
break;
+ case 1:
+ /* Print xstat single value given by name*/
+ if (!strncmp(long_option[option_index].name,
+ "xstats-name", MAX_LONG_OPT_SZ)) {
+ enable_xstats_name = 1;
+ xstats_name = optarg;
+ printf("name:%s:%s\n",
+ long_option[option_index].name,
+ optarg);
+ } else if (!strncmp(long_option[option_index].name,
+ "xstats-ids",
+ MAX_LONG_OPT_SZ)) {
+				int nb_ids = parse_xstats_ids(optarg,
+					xstats_ids, MAX_NB_XSTATS_IDS);
+
+				if (nb_ids <= 0) {
+					printf("xstats-id list parse error.\n");
+					return -1;
+				}
+				nb_xstats_ids = nb_ids;
+ }
+ break;
default:
proc_info_usage(prgname);
return -1;
@@ -351,20 +410,82 @@ static void collectd_resolve_cnt_type(char *cnt_type, size_t cnt_type_len,
}
static void
+nic_xstats_by_name_display(uint8_t port_id, char *name)
+{
+ uint64_t id;
+
+ printf("###### NIC statistics for port %-2d, statistic name '%s':\n",
+ port_id, name);
+
+	if (rte_eth_xstats_get_id_by_name(port_id, name, &id) == 0) {
+		uint64_t value;
+
+		if (rte_eth_xstats_get(port_id, &id, &value, 1) == 1)
+			printf("%s: %"PRIu64"\n", name, value);
+		else
+			printf("Cannot get xstat value\n");
+	} else
+		printf("Statistic not found...\n");
+
+}
+
+static void
+nic_xstats_by_ids_display(uint8_t port_id, uint64_t *ids, int len)
+{
+ struct rte_eth_xstat_name *xstats_names;
+ uint64_t *values;
+ int ret, i;
+ static const char *nic_stats_border = "########################";
+
+	values = malloc(sizeof(*values) * len);
+ if (values == NULL) {
+ printf("Cannot allocate memory for xstats\n");
+ return;
+ }
+
+ xstats_names = malloc(sizeof(struct rte_eth_xstat_name) * len);
+ if (xstats_names == NULL) {
+ printf("Cannot allocate memory for xstat names\n");
+ free(values);
+ return;
+ }
+
+ if (len != rte_eth_xstats_get_names(
+ port_id, xstats_names, len, ids)) {
+ printf("Cannot get xstat names\n");
+ goto err;
+ }
+
+ printf("###### NIC extended statistics for port %-2d #########\n",
+ port_id);
+ printf("%s############################\n", nic_stats_border);
+ ret = rte_eth_xstats_get(port_id, ids, values, len);
+ if (ret < 0 || ret > len) {
+ printf("Cannot get xstats\n");
+ goto err;
+ }
+
+ for (i = 0; i < len; i++)
+ printf("%s: %"PRIu64"\n",
+ xstats_names[i].name,
+ values[i]);
+
+ printf("%s############################\n", nic_stats_border);
+err:
+ free(values);
+ free(xstats_names);
+}
+
+static void
nic_xstats_display(uint8_t port_id)
{
struct rte_eth_xstat_name *xstats_names;
- struct rte_eth_xstat *xstats;
+ uint64_t *values;
int len, ret, i;
static const char *nic_stats_border = "########################";
- len = rte_eth_xstats_get_names(port_id, NULL, 0);
+ len = rte_eth_xstats_get_names(port_id, NULL, 0, NULL);
if (len < 0) {
printf("Cannot get xstats count\n");
return;
}
- xstats = malloc(sizeof(xstats[0]) * len);
- if (xstats == NULL) {
+	values = malloc(sizeof(*values) * len);
+ if (values == NULL) {
printf("Cannot allocate memory for xstats\n");
return;
}
@@ -372,11 +493,11 @@ static void collectd_resolve_cnt_type(char *cnt_type, size_t cnt_type_len,
xstats_names = malloc(sizeof(struct rte_eth_xstat_name) * len);
if (xstats_names == NULL) {
printf("Cannot allocate memory for xstat names\n");
- free(xstats);
+ free(values);
return;
}
if (len != rte_eth_xstats_get_names(
- port_id, xstats_names, len)) {
+ port_id, xstats_names, len, NULL)) {
printf("Cannot get xstat names\n");
goto err;
}
@@ -385,7 +506,7 @@ static void collectd_resolve_cnt_type(char *cnt_type, size_t cnt_type_len,
port_id);
printf("%s############################\n",
nic_stats_border);
- ret = rte_eth_xstats_get(port_id, xstats, len);
+ ret = rte_eth_xstats_get(port_id, NULL, values, len);
if (ret < 0 || ret > len) {
printf("Cannot get xstats\n");
goto err;
@@ -401,18 +522,18 @@ static void collectd_resolve_cnt_type(char *cnt_type, size_t cnt_type_len,
xstats_names[i].name);
sprintf(buf, "PUTVAL %s/dpdkstat-port.%u/%s-%s N:%"
PRIu64"\n", host_id, port_id, counter_type,
- xstats_names[i].name, xstats[i].value);
+ xstats_names[i].name, values[i]);
write(stdout_fd, buf, strlen(buf));
} else {
printf("%s: %"PRIu64"\n", xstats_names[i].name,
- xstats[i].value);
+ values[i]);
}
}
printf("%s############################\n",
nic_stats_border);
err:
- free(xstats);
+ free(values);
free(xstats_names);
}
@@ -551,6 +672,11 @@ static void collectd_resolve_cnt_type(char *cnt_type, size_t cnt_type_len,
nic_stats_clear(i);
else if (reset_xstats)
nic_xstats_clear(i);
+ else if (enable_xstats_name)
+ nic_xstats_by_name_display(i, xstats_name);
+ else if (nb_xstats_ids > 0)
+ nic_xstats_by_ids_display(i, xstats_ids,
+ nb_xstats_ids);
else if (enable_metrics)
metrics_display(i);
}
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 4d873cd..ef07925 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -264,9 +264,9 @@ struct rss_type_info {
void
nic_xstats_display(portid_t port_id)
{
- struct rte_eth_xstat *xstats;
int cnt_xstats, idx_xstat;
struct rte_eth_xstat_name *xstats_names;
+ uint64_t *values;
printf("###### NIC extended statistics for port %-2d\n", port_id);
if (!rte_eth_dev_is_valid_port(port_id)) {
@@ -275,7 +275,7 @@ struct rss_type_info {
}
/* Get count */
- cnt_xstats = rte_eth_xstats_get_names(port_id, NULL, 0);
+ cnt_xstats = rte_eth_xstats_get_names(port_id, NULL, 0, NULL);
if (cnt_xstats < 0) {
printf("Error: Cannot get count of xstats\n");
return;
@@ -288,23 +288,24 @@ struct rss_type_info {
return;
}
if (cnt_xstats != rte_eth_xstats_get_names(
- port_id, xstats_names, cnt_xstats)) {
+ port_id, xstats_names, cnt_xstats, NULL)) {
printf("Error: Cannot get xstats lookup\n");
free(xstats_names);
return;
}
/* Get stats themselves */
- xstats = malloc(sizeof(struct rte_eth_xstat) * cnt_xstats);
- if (xstats == NULL) {
+	values = malloc(sizeof(*values) * cnt_xstats);
+ if (values == NULL) {
printf("Cannot allocate memory for xstats\n");
free(xstats_names);
return;
}
- if (cnt_xstats != rte_eth_xstats_get(port_id, xstats, cnt_xstats)) {
+ if (cnt_xstats != rte_eth_xstats_get(port_id, NULL, values,
+ cnt_xstats)) {
printf("Error: Unable to get xstats\n");
free(xstats_names);
- free(xstats);
+ free(values);
return;
}
@@ -312,9 +313,9 @@ struct rss_type_info {
for (idx_xstat = 0; idx_xstat < cnt_xstats; idx_xstat++)
printf("%s: %"PRIu64"\n",
xstats_names[idx_xstat].name,
- xstats[idx_xstat].value);
+ values[idx_xstat]);
free(xstats_names);
- free(xstats);
+ free(values);
}
void
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index e48c121..3372f30 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -334,24 +334,20 @@ The Ethernet device API exported by the Ethernet PMDs is described in the *DPDK
Extended Statistics API
~~~~~~~~~~~~~~~~~~~~~~~
-The extended statistics API allows each individual PMD to expose a unique set
-of statistics. Accessing these from application programs is done via two
-functions:
-
-* ``rte_eth_xstats_get``: Fills in an array of ``struct rte_eth_xstat``
- with extended statistics.
-* ``rte_eth_xstats_get_names``: Fills in an array of
- ``struct rte_eth_xstat_name`` with extended statistic name lookup
- information.
-
-Each ``struct rte_eth_xstat`` contains an identifier and value pair, and
-each ``struct rte_eth_xstat_name`` contains a string. Each identifier
-within the ``struct rte_eth_xstat`` lookup array must have a corresponding
-entry in the ``struct rte_eth_xstat_name`` lookup array. Within the latter
-the index of the entry is the identifier the string is associated with.
-These identifiers, as well as the number of extended statistic exposed, must
-remain constant during runtime. Note that extended statistic identifiers are
-driver-specific, and hence might not be the same for different ports.
+The extended statistics API allows a PMD to expose all statistics that are
+available to it, including statistics that are unique to the device.
+Each statistic has three properties, ``name``, ``id`` and ``value``:
+
+* ``name``: A human-readable string formatted by the scheme detailed below.
+* ``id``: An integer that uniquely identifies that statistic.
+* ``value``: An unsigned 64-bit integer that is the value of the statistic.
+
+The API consists of various ``rte_eth_xstats_*()`` functions, and allows an
+application to be flexible in how it retrieves statistics.
+
+
+Scheme for Human Readable Names
+_______________________________
A naming scheme exists for the strings exposed to clients of the API. This is
to allow scraping of the API for statistics of interest. The naming scheme uses
@@ -363,8 +359,8 @@ strings split by a single underscore ``_``. The scheme is as follows:
* detail n
* unit
-Examples of common statistics xstats strings, formatted to comply to the scheme
-proposed above:
+Examples of common statistics xstats strings, formatted to comply with the
+above scheme:
* ``rx_bytes``
* ``rx_crc_errors``
@@ -378,7 +374,7 @@ associated with the receive side of the NIC. The second component ``packets``
indicates that the unit of measure is packets.
A more complicated example: ``tx_size_128_to_255_packets``. In this example,
-``tx`` indicates transmission, ``size`` is the first detail, ``128`` etc are
+``tx`` indicates transmission, ``size`` is the first detail, ``128`` etc. are
more details, and ``packets`` indicates that this is a packet counter.
Some additions in the metadata scheme are as follows:
@@ -392,3 +388,139 @@ Some additions in the metadata scheme are as follows:
An example where queue numbers are used is as follows: ``tx_q7_bytes`` which
indicates this statistic applies to queue number 7, and represents the number
of transmitted bytes on that queue.
+
+API Design
+__________
+
+The xstats API uses the ``name``, ``id``, and ``value`` to allow performant
+lookup of specific statistics. Performant lookup means two things:
+
+* No string comparisons with the ``name`` of the statistic in fast-path
+* Allow requesting of only the statistics of interest
+
+The API ensures these requirements are met by mapping the ``name`` of the
+statistic to a unique ``id``, which is used as a key for lookup in the fast-path.
+The API allows applications to request an array of ``id`` values, so that the
+PMD only performs the required calculations. Expected usage is that the
+application scans the ``name`` of each statistic, and caches the ``id``
+if it has an interest in that statistic. On the fast-path, the integer can be used
+to retrieve the actual ``value`` of the statistic that the ``id`` represents.
+
+API Functions
+_____________
+
+The API is built out of a small number of functions, which can be used to
+retrieve the number of statistics and the names, IDs and values of those
+statistics.
+
+* ``rte_eth_xstats_get_names()``: returns the names of the statistics. When the
+  ``xstats_names`` parameter is ``NULL``, the function returns the number of
+  statistics that are available.
+
+* ``rte_eth_xstats_get_id_by_name()``: Searches for the statistic ID that matches
+ ``xstat_name``. If found, the ``id`` integer is set.
+
+* ``rte_eth_xstats_get()``: Fills in an array of ``uint64_t`` values
+  matching the provided ``ids`` array. If the ``ids`` array is ``NULL``, it
+  returns all statistics that are available.
+
+
+Application Usage
+_________________
+
+Imagine an application that wants to view the dropped packet count. If no
+packets are dropped, the application does not read any other metrics for
+performance reasons. If packets are dropped, the application has a particular
+set of statistics that it requests. This "set" of statistics allows the app to
+decide what next steps to perform. The following code-snippets show how the
+xstats API can be used to achieve this goal.
+
+The first step is to get all statistics names and list them:
+
+.. code-block:: c
+
+ struct rte_eth_xstat_name *xstats_names;
+ uint64_t *values;
+ int len, i;
+
+ /* Get number of stats */
+    len = rte_eth_xstats_get_names(port_id, NULL, 0, NULL);
+ if (len < 0) {
+ printf("Cannot get xstats count\n");
+ goto err;
+ }
+
+ xstats_names = malloc(sizeof(struct rte_eth_xstat_name) * len);
+ if (xstats_names == NULL) {
+ printf("Cannot allocate memory for xstat names\n");
+ goto err;
+ }
+
+ /* Retrieve xstats names, passing NULL for IDs to return all statistics */
+    if (len != rte_eth_xstats_get_names(port_id, xstats_names, len, NULL)) {
+ printf("Cannot get xstat names\n");
+ goto err;
+ }
+
+    values = malloc(sizeof(*values) * len);
+ if (values == NULL) {
+ printf("Cannot allocate memory for xstats\n");
+ goto err;
+ }
+
+ /* Getting xstats values */
+ if (len != rte_eth_xstats_get(port_id, NULL, values, len)) {
+ printf("Cannot get xstat values\n");
+ goto err;
+ }
+
+ /* Print all xstats names and values */
+ for (i = 0; i < len; i++) {
+ printf("%s: %"PRIu64"\n", xstats_names[i].name, values[i]);
+ }
+
+The application has access to the names of all of the statistics that the PMD
+exposes. The application can decide which statistics are of interest, and
+cache the ids of those statistics by looking up their names as follows:
+
+.. code-block:: c
+
+ uint64_t id;
+ uint64_t value;
+ const char *xstat_name = "rx_errors";
+
+    if (rte_eth_xstats_get_id_by_name(port_id, xstat_name, &id) == 0) {
+        rte_eth_xstats_get(port_id, &id, &value, 1);
+        printf("%s: %"PRIu64"\n", xstat_name, value);
+    } else {
+        printf("Cannot find xstat with the given name\n");
+        goto err;
+    }
+
+The API provides flexibility to the application so that it can look up multiple
+statistics using an array containing multiple ``id`` numbers. This reduces the
+function call overhead of retrieving statistics, and makes lookup of multiple
+statistics simpler for the application.
+
+.. code-block:: c
+
+ #define APP_NUM_STATS 4
+ /* application cached these ids previously; see above */
+ uint64_t ids_array[APP_NUM_STATS] = {3,4,7,21};
+ uint64_t value_array[APP_NUM_STATS];
+
+ /* Getting multiple xstats values from array of IDs */
+ rte_eth_xstats_get(port_id, ids_array, value_array, APP_NUM_STATS);
+
+    uint32_t i;
+    for (i = 0; i < APP_NUM_STATS; i++) {
+        printf("%"PRIu64": %"PRIu64"\n", ids_array[i], value_array[i]);
+    }
+
+
+This array lookup API for xstats allows the application to create multiple
+"groups" of statistics, and look up the values of those IDs using a single API
+call. As an end result, the application is able to achieve its goal of
+monitoring a single statistic ("rx_errors" in this case), and if that shows
+packets being dropped, it can easily retrieve a "set" of statistics using the
+IDs array parameter to the ``rte_eth_xstats_get()`` function.
diff --git a/lib/librte_ether/Makefile b/lib/librte_ether/Makefile
index 066114b..5bbf721 100644
--- a/lib/librte_ether/Makefile
+++ b/lib/librte_ether/Makefile
@@ -41,7 +41,7 @@ CFLAGS += $(WERROR_FLAGS)
EXPORT_MAP := rte_ether_version.map
-LIBABIVER := 6
+LIBABIVER := 7
SRCS-y += rte_ethdev.c
SRCS-y += rte_flow.c
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 4e1e6dc..e5bab9c 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1454,12 +1454,19 @@ struct rte_eth_dev *
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
dev = &rte_eth_devices[port_id];
+ if (dev->dev_ops->xstats_get_names_by_ids != NULL) {
+ count = (*dev->dev_ops->xstats_get_names_by_ids)(dev, NULL,
+ NULL, 0);
+ if (count < 0)
+ return count;
+ }
if (dev->dev_ops->xstats_get_names != NULL) {
count = (*dev->dev_ops->xstats_get_names)(dev, NULL, 0);
if (count < 0)
return count;
} else
count = 0;
+
count += RTE_NB_STATS;
count += RTE_MIN(dev->data->nb_rx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS) *
RTE_NB_RXQ_STATS;
@@ -1469,150 +1476,363 @@ struct rte_eth_dev *
}
int
-rte_eth_xstats_get_names(uint8_t port_id,
- struct rte_eth_xstat_name *xstats_names,
- unsigned size)
+rte_eth_xstats_get_id_by_name(uint8_t port_id, const char *xstat_name,
+ uint64_t *id)
{
- struct rte_eth_dev *dev;
- int cnt_used_entries;
- int cnt_expected_entries;
- int cnt_driver_entries;
- uint32_t idx, id_queue;
- uint16_t num_q;
+ int cnt_xstats, idx_xstat;
- cnt_expected_entries = get_xstats_count(port_id);
- if (xstats_names == NULL || cnt_expected_entries < 0 ||
- (int)size < cnt_expected_entries)
- return cnt_expected_entries;
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
- /* port_id checked in get_xstats_count() */
- dev = &rte_eth_devices[port_id];
- cnt_used_entries = 0;
+ if (!id) {
+ RTE_PMD_DEBUG_TRACE("Error: id pointer is NULL\n");
+ return -1;
+ }
+
+ if (!xstat_name) {
+ RTE_PMD_DEBUG_TRACE("Error: xstat_name pointer is NULL\n");
+ return -1;
+ }
+
+ /* Get count */
+ cnt_xstats = rte_eth_xstats_get_names(port_id, NULL, 0, NULL);
+ if (cnt_xstats < 0) {
+ RTE_PMD_DEBUG_TRACE("Error: Cannot get count of xstats\n");
+ return -1;
+ }
+
+ /* Get id-name lookup table */
+ struct rte_eth_xstat_name xstats_names[cnt_xstats];
+
+ if (cnt_xstats != rte_eth_xstats_get_names(
+ port_id, xstats_names, cnt_xstats, NULL)) {
+ RTE_PMD_DEBUG_TRACE("Error: Cannot get xstats lookup\n");
+ return -1;
+ }
- for (idx = 0; idx < RTE_NB_STATS; idx++) {
- snprintf(xstats_names[cnt_used_entries].name,
- sizeof(xstats_names[0].name),
- "%s", rte_stats_strings[idx].name);
- cnt_used_entries++;
+ for (idx_xstat = 0; idx_xstat < cnt_xstats; idx_xstat++) {
+ if (!strcmp(xstats_names[idx_xstat].name, xstat_name)) {
+ *id = idx_xstat;
+ return 0;
+		}
}
- num_q = RTE_MIN(dev->data->nb_rx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
- for (id_queue = 0; id_queue < num_q; id_queue++) {
- for (idx = 0; idx < RTE_NB_RXQ_STATS; idx++) {
+
+ return -EINVAL;
+}
+
+int
+rte_eth_xstats_get_names_v1705(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, unsigned int size,
+ uint64_t *ids)
+{
+ /* Get all xstats */
+ if (!ids) {
+ struct rte_eth_dev *dev;
+ int cnt_used_entries;
+ int cnt_expected_entries;
+ int cnt_driver_entries;
+ uint32_t idx, id_queue;
+ uint16_t num_q;
+
+ cnt_expected_entries = get_xstats_count(port_id);
+ if (xstats_names == NULL || cnt_expected_entries < 0 ||
+ (int)size < cnt_expected_entries)
+ return cnt_expected_entries;
+
+ /* port_id checked in get_xstats_count() */
+ dev = &rte_eth_devices[port_id];
+ cnt_used_entries = 0;
+
+ for (idx = 0; idx < RTE_NB_STATS; idx++) {
snprintf(xstats_names[cnt_used_entries].name,
sizeof(xstats_names[0].name),
- "rx_q%u%s",
- id_queue, rte_rxq_stats_strings[idx].name);
+ "%s", rte_stats_strings[idx].name);
cnt_used_entries++;
}
+ num_q = RTE_MIN(dev->data->nb_rx_queues,
+ RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ for (id_queue = 0; id_queue < num_q; id_queue++) {
+ for (idx = 0; idx < RTE_NB_RXQ_STATS; idx++) {
+ snprintf(xstats_names[cnt_used_entries].name,
+ sizeof(xstats_names[0].name),
+ "rx_q%u%s",
+ id_queue,
+ rte_rxq_stats_strings[idx].name);
+ cnt_used_entries++;
+ }
+
+ }
+ num_q = RTE_MIN(dev->data->nb_tx_queues,
+ RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ for (id_queue = 0; id_queue < num_q; id_queue++) {
+ for (idx = 0; idx < RTE_NB_TXQ_STATS; idx++) {
+ snprintf(xstats_names[cnt_used_entries].name,
+ sizeof(xstats_names[0].name),
+ "tx_q%u%s",
+ id_queue,
+ rte_txq_stats_strings[idx].name);
+ cnt_used_entries++;
+ }
+ }
+
+ if (dev->dev_ops->xstats_get_names_by_ids != NULL) {
+ /* If there are any driver-specific xstats, append them
+ * to end of list.
+ */
+ cnt_driver_entries =
+ (*dev->dev_ops->xstats_get_names_by_ids)(
+ dev,
+ xstats_names + cnt_used_entries,
+ NULL,
+ size - cnt_used_entries);
+ if (cnt_driver_entries < 0)
+ return cnt_driver_entries;
+ cnt_used_entries += cnt_driver_entries;
+
+ } else if (dev->dev_ops->xstats_get_names != NULL) {
+ /* If there are any driver-specific xstats, append them
+ * to end of list.
+ */
+ cnt_driver_entries = (*dev->dev_ops->xstats_get_names)(
+ dev,
+ xstats_names + cnt_used_entries,
+ size - cnt_used_entries);
+ if (cnt_driver_entries < 0)
+ return cnt_driver_entries;
+ cnt_used_entries += cnt_driver_entries;
+ }
+ return cnt_used_entries;
}
- num_q = RTE_MIN(dev->data->nb_tx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
- for (id_queue = 0; id_queue < num_q; id_queue++) {
- for (idx = 0; idx < RTE_NB_TXQ_STATS; idx++) {
- snprintf(xstats_names[cnt_used_entries].name,
- sizeof(xstats_names[0].name),
- "tx_q%u%s",
- id_queue, rte_txq_stats_strings[idx].name);
- cnt_used_entries++;
+ /* Get only xstats given by IDS */
+ else {
+ uint16_t len, i;
+ struct rte_eth_xstat_name *xstats_names_copy;
+
+ len = rte_eth_xstats_get_names_v1705(port_id, NULL, 0, NULL);
+
+ xstats_names_copy =
+ malloc(sizeof(struct rte_eth_xstat_name) * len);
+		if (!xstats_names_copy) {
+			RTE_PMD_DEBUG_TRACE(
+				"ERROR: can't allocate memory for xstats names\n");
+			return -1;
+		}
+
+ rte_eth_xstats_get_names_v1705(port_id, xstats_names_copy,
+ len, NULL);
+
+ for (i = 0; i < size; i++) {
+			if (ids[i] >= len) {
+				RTE_PMD_DEBUG_TRACE(
+					"ERROR: id value isn't valid\n");
+				free(xstats_names_copy);
+				return -1;
+			}
+ strcpy(xstats_names[i].name,
+ xstats_names_copy[ids[i]].name);
}
+ free(xstats_names_copy);
+ return size;
}
+}
+MAP_STATIC_SYMBOL(int
+ rte_eth_xstats_get_names(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int size,
+ uint64_t *ids), rte_eth_xstats_get_names_v1705);
- if (dev->dev_ops->xstats_get_names != NULL) {
- /* If there are any driver-specific xstats, append them
- * to end of list.
- */
- cnt_driver_entries = (*dev->dev_ops->xstats_get_names)(
- dev,
- xstats_names + cnt_used_entries,
- size - cnt_used_entries);
- if (cnt_driver_entries < 0)
- return cnt_driver_entries;
- cnt_used_entries += cnt_driver_entries;
+int
+rte_eth_xstats_get_names_v1607(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int size)
+{
+ return rte_eth_xstats_get_names(port_id, xstats_names, size, NULL);
+}
+VERSION_SYMBOL(rte_eth_xstats_get_names, _v1607, 16.07);
+
+/* retrieve ethdev extended statistics */
+int
+rte_eth_xstats_get_v22(uint8_t port_id, struct rte_eth_xstat *xstats,
+ unsigned int n)
+{
+ uint64_t *values_copy;
+ uint16_t size, i;
+
+	values_copy = malloc(sizeof(*values_copy) * n);
+ if (!values_copy) {
+ RTE_PMD_DEBUG_TRACE(
+ "ERROR: Cannot allocate memory for xstats\n");
+ return -1;
}
+	size = rte_eth_xstats_get(port_id, NULL, values_copy, n);
- return cnt_used_entries;
+ for (i = 0; i < n; i++) {
+ xstats[i].id = i;
+ xstats[i].value = values_copy[i];
+ }
+ free(values_copy);
+ return size;
}
+VERSION_SYMBOL(rte_eth_xstats_get, _v22, 2.2);
/* retrieve ethdev extended statistics */
int
-rte_eth_xstats_get(uint8_t port_id, struct rte_eth_xstat *xstats,
- unsigned n)
+rte_eth_xstats_get_v1705(uint8_t port_id, uint64_t *ids, uint64_t *values,
+ unsigned int n)
{
- struct rte_eth_stats eth_stats;
- struct rte_eth_dev *dev;
- unsigned count = 0, i, q;
- signed xcount = 0;
- uint64_t val, *stats_ptr;
- uint16_t nb_rxqs, nb_txqs;
+	/* If all xstats are needed */
+ if (!ids) {
+ struct rte_eth_stats eth_stats;
+ struct rte_eth_dev *dev;
+ unsigned int count = 0, i, q;
+ signed int xcount = 0;
+ uint64_t val, *stats_ptr;
+ uint16_t nb_rxqs, nb_txqs;
- RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
- dev = &rte_eth_devices[port_id];
+ nb_rxqs = RTE_MIN(dev->data->nb_rx_queues,
+ RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ nb_txqs = RTE_MIN(dev->data->nb_tx_queues,
+ RTE_ETHDEV_QUEUE_STAT_CNTRS);
- nb_rxqs = RTE_MIN(dev->data->nb_rx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
- nb_txqs = RTE_MIN(dev->data->nb_tx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ /* Return generic statistics */
+ count = RTE_NB_STATS + (nb_rxqs * RTE_NB_RXQ_STATS) +
+ (nb_txqs * RTE_NB_TXQ_STATS);
- /* Return generic statistics */
- count = RTE_NB_STATS + (nb_rxqs * RTE_NB_RXQ_STATS) +
- (nb_txqs * RTE_NB_TXQ_STATS);
- /* implemented by the driver */
- if (dev->dev_ops->xstats_get != NULL) {
- /* Retrieve the xstats from the driver at the end of the
- * xstats struct.
- */
- xcount = (*dev->dev_ops->xstats_get)(dev,
- xstats ? xstats + count : NULL,
- (n > count) ? n - count : 0);
+ /* implemented by the driver */
+ if (dev->dev_ops->xstats_get_by_ids != NULL) {
+ /* Retrieve the xstats from the driver at the end of the
+ * xstats struct. Retrieve all xstats.
+ */
+ xcount = (*dev->dev_ops->xstats_get_by_ids)(dev,
+ NULL,
+ values ? values + count : NULL,
+ (n > count) ? n - count : 0);
+
+ if (xcount < 0)
+ return xcount;
+ /* implemented by the driver */
+ } else if (dev->dev_ops->xstats_get != NULL) {
+ /* Retrieve the xstats from the driver at the end of the
+ * xstats struct. Retrieve all xstats.
+ * Compatibility for PMD without xstats_get_by_ids
+ */
+ unsigned int size = (n > count) ? n - count : 1;
+ struct rte_eth_xstat xstats[size];
- if (xcount < 0)
- return xcount;
- }
+ xcount = (*dev->dev_ops->xstats_get)(dev,
+ values ? xstats : NULL, size);
- if (n < count + xcount || xstats == NULL)
- return count + xcount;
+ if (xcount < 0)
+ return xcount;
+
+ if (values != NULL)
+ for (i = 0 ; i < (unsigned int)xcount; i++)
+ values[i + count] = xstats[i].value;
+ }
- /* now fill the xstats structure */
- count = 0;
-	rte_eth_stats_get(port_id, &eth_stats);
+ if (n < count + xcount || values == NULL)
+ return count + xcount;
- /* global stats */
- for (i = 0; i < RTE_NB_STATS; i++) {
-		stats_ptr = RTE_PTR_ADD(&eth_stats,
- rte_stats_strings[i].offset);
- val = *stats_ptr;
- xstats[count++].value = val;
- }
+ /* now fill the xstats structure */
+ count = 0;
+	rte_eth_stats_get(port_id, &eth_stats);
- /* per-rxq stats */
- for (q = 0; q < nb_rxqs; q++) {
- for (i = 0; i < RTE_NB_RXQ_STATS; i++) {
+ /* global stats */
+ for (i = 0; i < RTE_NB_STATS; i++) {
		stats_ptr = RTE_PTR_ADD(&eth_stats,
- rte_rxq_stats_strings[i].offset +
- q * sizeof(uint64_t));
+ rte_stats_strings[i].offset);
val = *stats_ptr;
- xstats[count++].value = val;
+ values[count++] = val;
}
+
+ /* per-rxq stats */
+ for (q = 0; q < nb_rxqs; q++) {
+ for (i = 0; i < RTE_NB_RXQ_STATS; i++) {
+			stats_ptr = RTE_PTR_ADD(&eth_stats,
+ rte_rxq_stats_strings[i].offset +
+ q * sizeof(uint64_t));
+ val = *stats_ptr;
+ values[count++] = val;
+ }
+ }
+
+ /* per-txq stats */
+ for (q = 0; q < nb_txqs; q++) {
+ for (i = 0; i < RTE_NB_TXQ_STATS; i++) {
+			stats_ptr = RTE_PTR_ADD(&eth_stats,
+ rte_txq_stats_strings[i].offset +
+ q * sizeof(uint64_t));
+ val = *stats_ptr;
+ values[count++] = val;
+ }
+ }
+
+ return count + xcount;
}
+	/* Need only xstats given by the IDs array */
+ else {
+ uint16_t i, size;
+ uint64_t *values_copy;
- /* per-txq stats */
- for (q = 0; q < nb_txqs; q++) {
- for (i = 0; i < RTE_NB_TXQ_STATS; i++) {
-			stats_ptr = RTE_PTR_ADD(&eth_stats,
- rte_txq_stats_strings[i].offset +
- q * sizeof(uint64_t));
- val = *stats_ptr;
- xstats[count++].value = val;
+ size = rte_eth_xstats_get_v1705(port_id, NULL, NULL, 0);
+
+	values_copy = malloc(sizeof(*values_copy) * size);
+ if (!values_copy) {
+ RTE_PMD_DEBUG_TRACE(
+ "ERROR: can't allocate memory for values_copy\n");
+ return -1;
}
+
+ rte_eth_xstats_get_v1705(port_id, NULL, values_copy, size);
+
+ for (i = 0; i < n; i++) {
+		if (ids[i] >= size) {
+			RTE_PMD_DEBUG_TRACE(
+				"ERROR: id value isn't valid\n");
+			free(values_copy);
+			return -1;
+		}
+ values[i] = values_copy[ids[i]];
+ }
+ free(values_copy);
+ return n;
}
+}
+MAP_STATIC_SYMBOL(int
+ rte_eth_xstats_get(uint8_t port_id, uint64_t *ids,
+ uint64_t *values, unsigned int n), rte_eth_xstats_get_v1705);
+
+__rte_deprecated int
+rte_eth_xstats_get_all(uint8_t port_id, struct rte_eth_xstat *xstats,
+ unsigned int n)
+{
+ uint64_t *values_copy;
+ uint16_t size, i;
- for (i = 0; i < count; i++)
+	values_copy = malloc(sizeof(*values_copy) * n);
+ if (!values_copy) {
+ RTE_PMD_DEBUG_TRACE(
+ "ERROR: Cannot allocate memory for xstats\n");
+ return -1;
+ }
+	size = rte_eth_xstats_get(port_id, NULL, values_copy, n);
+
+ for (i = 0; i < n; i++) {
xstats[i].id = i;
- /* add an offset to driver-specific stats */
- for ( ; i < count + xcount; i++)
- xstats[i].id += count;
+ xstats[i].value = values_copy[i];
+ }
+ free(values_copy);
+ return size;
+}
- return count + xcount;
+__rte_deprecated int
+rte_eth_xstats_get_names_all(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, unsigned int n)
+{
+ return rte_eth_xstats_get_names(port_id, xstats_names, n, NULL);
}
/* reset ethdev extended statistics */
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index d072538..e4f410a 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -186,6 +186,7 @@
#include "rte_ether.h"
#include "rte_eth_ctrl.h"
#include "rte_dev_info.h"
+#include "rte_compat.h"
struct rte_mbuf;
@@ -1118,6 +1119,10 @@ typedef int (*eth_xstats_get_t)(struct rte_eth_dev *dev,
struct rte_eth_xstat *stats, unsigned n);
/**< @internal Get extended stats of an Ethernet device. */
+typedef int (*eth_xstats_get_by_ids_t)(struct rte_eth_dev *dev,
+ uint64_t *ids, uint64_t *values, unsigned int n);
+/**< @internal Get extended stats of an Ethernet device. */
+
typedef void (*eth_xstats_reset_t)(struct rte_eth_dev *dev);
/**< @internal Reset extended stats of an Ethernet device. */
@@ -1125,6 +1130,17 @@ typedef int (*eth_xstats_get_names_t)(struct rte_eth_dev *dev,
struct rte_eth_xstat_name *xstats_names, unsigned size);
/**< @internal Get names of extended stats of an Ethernet device. */
+typedef int (*eth_xstats_get_names_by_ids_t)(struct rte_eth_dev *dev,
+ struct rte_eth_xstat_name *xstats_names, uint64_t *ids,
+ unsigned int size);
+/**< @internal Get names of extended stats of an Ethernet device. */
+
+typedef int (*eth_xstats_get_by_name_t)(struct rte_eth_dev *dev,
+ struct rte_eth_xstat_name *xstats_names,
+ struct rte_eth_xstat *xstat,
+ const char *name);
+/**< @internal Get xstat specified by name of an Ethernet device. */
+
typedef int (*eth_queue_stats_mapping_set_t)(struct rte_eth_dev *dev,
uint16_t queue_id,
uint8_t stat_idx,
@@ -1466,8 +1482,8 @@ struct eth_dev_ops {
eth_stats_reset_t stats_reset; /**< Reset generic device statistics. */
eth_xstats_get_t xstats_get; /**< Get extended device statistics. */
eth_xstats_reset_t xstats_reset; /**< Reset extended device statistics. */
- eth_xstats_get_names_t xstats_get_names;
- /**< Get names of extended statistics. */
+ eth_xstats_get_names_t xstats_get_names;
+ /**< Get names of extended device statistics. */
eth_queue_stats_mapping_set_t queue_stats_mapping_set;
/**< Configure per queue stat counter mapping. */
@@ -1563,6 +1579,12 @@ struct eth_dev_ops {
eth_timesync_adjust_time timesync_adjust_time; /** Adjust the device clock. */
eth_timesync_read_time timesync_read_time; /** Get the device clock time. */
eth_timesync_write_time timesync_write_time; /** Set the device clock time. */
+ eth_xstats_get_by_ids_t xstats_get_by_ids;
+ /**< Get extended device statistics by ID. */
+ eth_xstats_get_names_by_ids_t xstats_get_names_by_ids;
+ /**< Get name of extended device statistics by ID. */
+ eth_xstats_get_by_name_t xstats_get_by_name;
+ /**< Get extended device statistics by name. */
};
/**
@@ -2324,8 +2346,57 @@ int rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
*/
void rte_eth_stats_reset(uint8_t port_id);
+
/**
- * Retrieve names of extended statistics of an Ethernet device.
+ * Get the ID of a statistic from its name.
+ *
+ * Note this function searches for the statistic using string comparison, and
+ * as such should not be used on the fast-path. For fast-path retrieval of
+ * specific statistics, store the ID as provided in *id* by this function,
+ * and pass that ID to rte_eth_xstats_get().
+ *
+ * @param port_id The port to look up statistics from.
+ * @param xstat_name The name of the statistic to return.
+ * @param[out] id A pointer to an app-supplied uint64_t which should be
+ * set to the ID of the stat if the stat exists.
+ * @return
+ * - 0 on success
+ * - -ENODEV for invalid port_id
+ * - -EINVAL if the xstat_name doesn't exist in port_id
+ */
+int rte_eth_xstats_get_id_by_name(uint8_t port_id, const char *xstat_name,
+ uint64_t *id);
+
+/**
+ * Retrieve all extended statistics of an Ethernet device.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param xstats
+ * A pointer to a table of structure of type *rte_eth_xstat*
+ * to be filled with device statistics ids and values: id is the
+ * index of the name string in xstats_names (see rte_eth_xstats_get_names()),
+ * and value is the statistic counter.
+ * This parameter can be set to NULL if n is 0.
+ * @param n
+ * The size of the xstats array (number of elements).
+ * @return
+ * - A positive value lower or equal to n: success. The return value
+ * is the number of entries filled in the stats table.
+ * - A positive value higher than n: error, the given statistics table
+ * is too small. The return value corresponds to the size that should
+ * be given to succeed. The entries in the table are not valid and
+ * shall not be used by the caller.
+ * - A negative value on error (invalid port id).
+ */
+__rte_deprecated
+int rte_eth_xstats_get_all(uint8_t port_id, struct rte_eth_xstat *xstats,
+ unsigned int n);
+BIND_DEFAULT_SYMBOL(rte_eth_xstats_get_all, _v1705, 17.05);
+
+
+/**
+ * Retrieve names of all extended statistics of an Ethernet device.
*
* @param port_id
* The port identifier of the Ethernet device.
@@ -2333,7 +2404,7 @@ int rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
* An rte_eth_xstat_name array of at least *size* elements to
* be filled. If set to NULL, the function returns the required number
* of elements.
- * @param size
+ * @param n
* The size of the xstats_names array (number of elements).
* @return
* - A positive value lower or equal to size: success. The return value
@@ -2344,9 +2415,10 @@ int rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
* shall not be used by the caller.
* - A negative value on error (invalid port id).
*/
-int rte_eth_xstats_get_names(uint8_t port_id,
- struct rte_eth_xstat_name *xstats_names,
- unsigned size);
+__rte_deprecated int rte_eth_xstats_get_names_all(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, unsigned int n);
+BIND_DEFAULT_SYMBOL(rte_eth_xstats_get_names_all, _v1705, 17.05);
+
/**
* Retrieve extended statistics of an Ethernet device.
@@ -2370,8 +2442,94 @@ int rte_eth_xstats_get_names(uint8_t port_id,
* shall not be used by the caller.
* - A negative value on error (invalid port id).
*/
-int rte_eth_xstats_get(uint8_t port_id, struct rte_eth_xstat *xstats,
- unsigned n);
+int rte_eth_xstats_get_v22(uint8_t port_id, struct rte_eth_xstat *xstats,
+ unsigned int n);
+
+/**
+ * Retrieve extended statistics of an Ethernet device.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param ids
+ * A pointer to an array of IDs passed by the application. This tells which
+ * statistics values the function should retrieve. This parameter
+ * can be set to NULL if n is 0. In this case the function will retrieve
+ * all available statistics.
+ * @param values
+ * A pointer to a table to be filled with device statistics values.
+ * @param n
+ * The size of the ids array (number of elements).
+ * @return
+ * - A positive value lower or equal to n: success. The return value
+ * is the number of entries filled in the stats table.
+ * - A positive value higher than n: error, the given statistics table
+ * is too small. The return value corresponds to the size that should
+ * be given to succeed. The entries in the table are not valid and
+ * shall not be used by the caller.
+ * - A negative value on error (invalid port id).
+ */
+int rte_eth_xstats_get_v1705(uint8_t port_id, uint64_t *ids, uint64_t *values,
+ unsigned int n);
+
+int rte_eth_xstats_get(uint8_t port_id, uint64_t *ids, uint64_t *values,
+ unsigned int n);
+BIND_DEFAULT_SYMBOL(rte_eth_xstats_get, _v1705, 17.05);
+
+/**
+ * Retrieve names of extended statistics of an Ethernet device.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param xstats_names
+ * An rte_eth_xstat_name array of at least *n* elements to be filled.
+ * If set to NULL, the function returns the required number of elements.
+ * @param n
+ * The size of the xstats_names array (number of elements).
+ * @return
+ * - A positive value lower or equal to n: success. The return value
+ * is the number of entries filled in the stats table.
+ * - A positive value higher than n: error, the given statistics table
+ * is too small. The return value corresponds to the size that should
+ * be given to succeed. The entries in the table are not valid and
+ * shall not be used by the caller.
+ * - A negative value on error (invalid port id).
+ */
+int rte_eth_xstats_get_names_v1607(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, unsigned int n);
+
+/**
+ * Retrieve names of extended statistics of an Ethernet device.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param xstats_names
+ * An rte_eth_xstat_name array of at least *size* elements to
+ * be filled. If set to NULL, the function returns the required number
+ * of elements.
+ * @param ids
+ * An array of IDs given by the application to select specific statistics
+ * @param size
+ * The size of the xstats_names array (number of elements).
+ * @return
+ * - A positive value lower or equal to size: success. The return value
+ * is the number of entries filled in the stats table.
+ * - A positive value higher than size: error, the given statistics table
+ * is too small. The return value corresponds to the size that should
+ * be given to succeed. The entries in the table are not valid and
+ * shall not be used by the caller.
+ * - A negative value on error (invalid port id).
+ */
+int rte_eth_xstats_get_names_v1705(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, unsigned int size,
+ uint64_t *ids);
+
+int rte_eth_xstats_get_names(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, unsigned int size,
+ uint64_t *ids);
+BIND_DEFAULT_SYMBOL(rte_eth_xstats_get_names, _v1705, 17.05);
/**
* Reset extended statistics of an Ethernet device.
diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map
index 0ea3856..f4d0136 100644
--- a/lib/librte_ether/rte_ether_version.map
+++ b/lib/librte_ether/rte_ether_version.map
@@ -159,5 +159,10 @@ DPDK_17.05 {
global:
rte_eth_find_next;
+ rte_eth_xstats_get_names;
+ rte_eth_xstats_get;
+ rte_eth_xstats_get_all;
+ rte_eth_xstats_get_names_all;
+ rte_eth_xstats_get_id_by_name;
} DPDK_17.02;
--
1.9.1
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v4 0/3] Extended xstats API in ethdev library to allow grouping of stats
2017-04-03 12:09 1% ` [dpdk-dev] [PATCH v3 1/3] add new xstats API retrieving by id Jacek Piasecki
2017-04-03 12:37 0% ` Van Haaren, Harry
2017-04-04 15:03 0% ` Thomas Monjalon
@ 2017-04-10 17:59 4% ` Jacek Piasecki
2017-04-10 17:59 1% ` [dpdk-dev] [PATCH v4 1/3] ethdev: new xstats API add retrieving by ID Jacek Piasecki
2 siblings, 1 reply; 200+ results
From: Jacek Piasecki @ 2017-04-10 17:59 UTC (permalink / raw)
To: dev; +Cc: harry.van.haaren, deepak.k.jain, Jacek Piasecki
Extended the xstats API in the ethdev library to allow stats to be grouped
logically, so they can be retrieved per logical grouping managed by the
application.
Changed the existing functions rte_eth_xstats_get_names and rte_eth_xstats_get
to use a new argument list: an array of IDs and an array of values.
The ABI versioning mechanism is used to preserve backward compatibility.
Introduced two new functions, rte_eth_xstats_get_all and
rte_eth_xstats_get_names_all, which keep the functionality of the previous
ones (respectively rte_eth_xstats_get and rte_eth_xstats_get_names)
but use the new API internally. Both functions are marked as deprecated.
Introduced a new function, rte_eth_xstats_get_id_by_name, to retrieve
xstat IDs by name.
Extended the functionality of the proc_info application:
--xstats-name NAME: display a single xstat value by NAME
Updated the test-pmd application to use the new API. A usage sketch of the
by-ID retrieval path is shown below.
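A minimal usage sketch of the by-ID retrieval path, assuming a started
port that exposes an xstat named "rx_good_packets" (the port number and
stat name are illustrative, not part of this patchset):

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

static int
print_one_xstat(uint8_t port_id)
{
	uint64_t id, value;

	/* Slow path: resolve the name to an ID once, at setup time. */
	if (rte_eth_xstats_get_id_by_name(port_id, "rx_good_packets", &id) != 0)
		return -1;

	/* Fast path: retrieve only the requested counter by its ID. */
	if (rte_eth_xstats_get(port_id, &id, &value, 1) != 1)
		return -1;

	printf("rx_good_packets: %" PRIu64 "\n", value);
	return 0;
}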
v4 changes:
documentation change after API modification
fix xstats display for PMDs without _by_ids() functions
fix ABI validator errors
v3 changes:
checkpatch fixes
removed malloc bug in ethdev
add new command to proc_info and IDs parsing
merged testpmd and proc_info patch with library patch
Jacek Piasecki (3):
ethdev: new xstats API add retrieving by ID
net/e1000: new xstats API add ID support for e1000
net/ixgbe: new xstats API add ID support for ixgbe
app/proc_info/main.c | 150 ++++++++++-
app/test-pmd/config.c | 19 +-
doc/guides/prog_guide/poll_mode_drv.rst | 174 +++++++++++--
drivers/net/e1000/igb_ethdev.c | 92 ++++++-
drivers/net/ixgbe/ixgbe_ethdev.c | 179 ++++++++++++++
lib/librte_ether/Makefile | 2 +-
lib/librte_ether/rte_ethdev.c | 426 ++++++++++++++++++++++++--------
lib/librte_ether/rte_ethdev.h | 176 ++++++++++++-
lib/librte_ether/rte_ether_version.map | 5 +
9 files changed, 1066 insertions(+), 157 deletions(-)
--
1.9.1
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH 1/2] eal: add rte_cpu_is_supported to map
2017-04-04 15:38 3% ` [dpdk-dev] [PATCH 1/2] eal: add rte_cpu_is_supported to map Aaron Conole
@ 2017-04-06 20:58 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2017-04-06 20:58 UTC (permalink / raw)
To: Aaron Conole; +Cc: dev
2017-04-04 11:38, Aaron Conole:
> This function is now part of the public ABI, so should be
> advertised as such.
>
> Signed-off-by: Aaron Conole <aconole@redhat.com>
Fixes: 37e97ad2c56a ("eal: do not panic when CPU is not supported")
Series applied, thanks
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH] ring: fix build with icc
@ 2017-04-05 15:03 4% Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2017-04-05 15:03 UTC (permalink / raw)
To: Bruce Richardson; +Cc: Thomas Monjalon, dev, Ferruh Yigit
build error:
In file included from .../lib/librte_ring/rte_ring.c(90):
.../lib/librte_ring/rte_ring.h(162):
error #1366: a reduction in alignment without the "packed" attribute
is ignored
} __rte_cache_aligned;
^
The alignment attribute is moved to the first element of the struct, as
sketched below.
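
A minimal sketch of the general pattern, with hypothetical struct names
(the actual change below applies it to struct rte_ring): aligning the
first member raises the alignment of the whole struct, so the
struct-level attribute at the closing brace can be dropped, which
satisfies icc.

#include <rte_memory.h>

/* Before: alignment attribute at the struct's closing brace. */
struct ring_old {
	char name[32];
	int flags;
} __rte_cache_aligned;

/* After: alignment attribute on the first member; the struct
 * inherits cache-line alignment from it, and its size is still
 * padded to a multiple of that alignment.
 */
struct ring_new {
	char name[32] __rte_cache_aligned;
	int flags;
};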
Fixes: a6619414e0a9 ("ring: make struct and macros type agnostic")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
lib/librte_ring/rte_ring.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index 6642e18..28b7b2a 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -147,7 +147,7 @@ struct rte_ring {
* compatibility requirements, it could be changed to RTE_RING_NAMESIZE
* next time the ABI changes
*/
- char name[RTE_MEMZONE_NAMESIZE]; /**< Name of the ring. */
+ char name[RTE_MEMZONE_NAMESIZE] __rte_cache_aligned; /**< Name of the ring. */
int flags; /**< Flags supplied at creation. */
const struct rte_memzone *memzone;
/**< Memzone, if any, containing the rte_ring */
@@ -159,7 +159,7 @@ struct rte_ring {
/** Ring consumer status. */
struct rte_ring_headtail cons __rte_aligned(CONS_ALIGN);
-} __rte_cache_aligned;
+};
#define RING_F_SP_ENQ 0x0001 /**< The default enqueue is "single-producer". */
#define RING_F_SC_DEQ 0x0002 /**< The default dequeue is "single-consumer". */
--
2.9.3
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH] mbuf: bump library version
2017-04-05 10:00 14% [dpdk-dev] [PATCH] mbuf: bump library version Olivier Matz
@ 2017-04-05 11:41 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2017-04-05 11:41 UTC (permalink / raw)
To: Olivier Matz; +Cc: dev
2017-04-05 12:00, Olivier Matz:
> The reorganization of the mbuf structure induces an ABI breakage.
> Bump the library version, and update the documentation accordingly.
>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Applied, thanks
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH] mbuf: bump library version
@ 2017-04-05 10:00 14% Olivier Matz
2017-04-05 11:41 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Olivier Matz @ 2017-04-05 10:00 UTC (permalink / raw)
To: dev
The reorganization of the mbuf structure induces an ABI breakage.
Bump the library version, and update the documentation accordingly.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
doc/guides/rel_notes/deprecation.rst | 7 -------
doc/guides/rel_notes/release_17_05.rst | 21 ++++++++++++++++++++-
lib/librte_mbuf/Makefile | 2 +-
3 files changed, 21 insertions(+), 9 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 9708b3941..fcc2e865d 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -75,13 +75,6 @@ Deprecation Notices
``rte_pmd_ixgbe_bypass_wd_timeout_show``, ``rte_pmd_ixgbe_bypass_ver_show``,
``rte_pmd_ixgbe_bypass_wd_reset``.
-* ABI changes are planned for 17.05 in the ``rte_mbuf`` structure: some fields
- may be reordered to facilitate the writing of ``data_off``, ``refcnt``, and
- ``nb_segs`` in one operation, because some platforms have an overhead if the
- store address is not naturally aligned. Other mbuf fields, such as the
- ``port`` field, may be moved or removed as part of this mbuf work. A
- ``timestamp`` will also be added.
-
* The mbuf flags PKT_RX_VLAN_PKT and PKT_RX_QINQ_PKT are deprecated and
are respectively replaced by PKT_RX_VLAN_STRIPPED and
PKT_RX_QINQ_STRIPPED, that are better described. The old flags and
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index d5d520573..9b078e550 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -41,6 +41,20 @@ New Features
Also, make sure to start the actual text at the margin.
=========================================================
+* **Reorganized the mbuf structure.**
+
+ * Align fields to facilitate the writing of ``data_off``, ``refcnt``, and
+ ``nb_segs`` in one operation.
+ * Use 2 bytes for port and number of segments.
+ * Move the sequence number in the second cache line.
+ * Add a timestamp field.
+ * Set default value for ``refcnt``, ``next`` and ``nb_segs`` at mbuf free.
+
+* **Added mbuf raw free API**
+
+ Moved ``rte_mbuf_raw_free()`` and ``rte_pktmbuf_prefree_seg()`` functions to
+ the public API.
+
* **Added free Tx mbuf on demand API.**
Added a new function ``rte_eth_tx_done_cleanup()`` which allows an application
@@ -366,6 +380,11 @@ ABI Changes
Also, make sure to start the actual text at the margin.
=========================================================
+* **Reorganized the mbuf structure.**
+
+ The order and size of the fields in the ``mbuf`` structure changed,
+ as described in the `New Features`_ section.
+
Removed Items
-------------
@@ -413,7 +432,7 @@ The libraries prepended with a plus sign were incremented in this version.
librte_kni.so.2
librte_kvargs.so.1
librte_lpm.so.2
- librte_mbuf.so.2
+ + librte_mbuf.so.3
librte_mempool.so.2
librte_meter.so.1
librte_net.so.1
diff --git a/lib/librte_mbuf/Makefile b/lib/librte_mbuf/Makefile
index 956902ab4..548273054 100644
--- a/lib/librte_mbuf/Makefile
+++ b/lib/librte_mbuf/Makefile
@@ -38,7 +38,7 @@ CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
EXPORT_MAP := rte_mbuf_version.map
-LIBABIVER := 2
+LIBABIVER := 3
# all source are stored in SRCS-y
SRCS-$(CONFIG_RTE_LIBRTE_MBUF) := rte_mbuf.c rte_mbuf_ptype.c
--
2.11.0
^ permalink raw reply [relevance 14%]
* [dpdk-dev] [PATCH v8 1/7] net/ark: stub PMD for Atomic Rules Arkville
@ 2017-04-04 19:50 3% ` Ed Czeck
0 siblings, 0 replies; 200+ results
From: Ed Czeck @ 2017-04-04 19:50 UTC (permalink / raw)
To: dev; +Cc: john.miller, shepard.siegel, ferruh.yigit, stephen, Ed Czeck
Enable Arkville on supported configurations
Add overview documentation
Minimum driver support for valid compile
Arkville PMD is not supported on ARM or PowerPC at this time
v8:
* Update to allow patch application from dpdk-next-net
v6:
* Address review comments
* Unify messaging, logging and debug macros to ark_logs.h
v5:
* Address comments from Ferruh Yigit <ferruh.yigit@intel.com>
* Added documentation on driver args
* Makefile fixes
* Safe argument processing
* vdev args to dev args
v4:
* Address issues report from review
* Add internal comments on driver arg
* provide a bare-bones dev init to avoid compiler warnings
v3:
* Split large patch into several smaller ones
Signed-off-by: Ed Czeck <ed.czeck@atomicrules.com>
Signed-off-by: John Miller <john.miller@atomicrules.com>
---
MAINTAINERS | 8 +
config/common_base | 10 ++
config/defconfig_arm-armv7a-linuxapp-gcc | 1 +
doc/guides/nics/ark.rst | 296 +++++++++++++++++++++++++++++++
doc/guides/nics/index.rst | 1 +
drivers/net/Makefile | 2 +
drivers/net/ark/Makefile | 55 ++++++
drivers/net/ark/ark_ethdev.c | 288 ++++++++++++++++++++++++++++++
drivers/net/ark/ark_ethdev.h | 41 +++++
drivers/net/ark/ark_global.h | 110 ++++++++++++
drivers/net/ark/ark_logs.h | 119 +++++++++++++
drivers/net/ark/rte_pmd_ark_version.map | 4 +
mk/rte.app.mk | 1 +
13 files changed, 936 insertions(+)
create mode 100644 doc/guides/nics/ark.rst
create mode 100644 drivers/net/ark/Makefile
create mode 100644 drivers/net/ark/ark_ethdev.c
create mode 100644 drivers/net/ark/ark_ethdev.h
create mode 100644 drivers/net/ark/ark_global.h
create mode 100644 drivers/net/ark/ark_logs.h
create mode 100644 drivers/net/ark/rte_pmd_ark_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 7d8d95e..bb92d37 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -281,6 +281,14 @@ M: Evgeny Schemeilin <evgenys@amazon.com>
F: drivers/net/ena/
F: doc/guides/nics/ena.rst
+Atomic Rules ARK
+M: Shepard Siegel <shepard.siegel@atomicrules.com>
+M: Ed Czeck <ed.czeck@atomicrules.com>
+M: John Miller <john.miller@atomicrules.com>
+F: drivers/net/ark/
+F: doc/guides/nics/ark.rst
+F: doc/guides/nics/features/ark.ini
+
Broadcom bnxt
M: Stephen Hurd <stephen.hurd@broadcom.com>
M: Ajit Khaparde <ajit.khaparde@broadcom.com>
diff --git a/config/common_base b/config/common_base
index d0f1a8b..5b749c5 100644
--- a/config/common_base
+++ b/config/common_base
@@ -364,6 +364,16 @@ CONFIG_RTE_LIBRTE_QEDE_FW=""
CONFIG_RTE_LIBRTE_PMD_AF_PACKET=n
#
+# Compile ARK PMD
+#
+CONFIG_RTE_LIBRTE_ARK_PMD=y
+CONFIG_RTE_LIBRTE_ARK_PAD_TX=y
+CONFIG_RTE_LIBRTE_ARK_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_ARK_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_ARK_DEBUG_STATS=n
+CONFIG_RTE_LIBRTE_ARK_DEBUG_TRACE=n
+
+#
# Compile WRS accelerated virtual port (AVP) guest PMD driver
#
CONFIG_RTE_LIBRTE_AVP_PMD=n
diff --git a/config/defconfig_arm-armv7a-linuxapp-gcc b/config/defconfig_arm-armv7a-linuxapp-gcc
index d9bd2a8..6d2b5e0 100644
--- a/config/defconfig_arm-armv7a-linuxapp-gcc
+++ b/config/defconfig_arm-armv7a-linuxapp-gcc
@@ -61,6 +61,7 @@ CONFIG_RTE_SCHED_VECTOR=n
# cannot use those on ARM
CONFIG_RTE_KNI_KMOD=n
+CONFIG_RTE_LIBRTE_ARK_PMD=n
CONFIG_RTE_LIBRTE_EM_PMD=n
CONFIG_RTE_LIBRTE_IGB_PMD=n
CONFIG_RTE_LIBRTE_CXGBE_PMD=n
diff --git a/doc/guides/nics/ark.rst b/doc/guides/nics/ark.rst
new file mode 100644
index 0000000..064ed11
--- /dev/null
+++ b/doc/guides/nics/ark.rst
@@ -0,0 +1,296 @@
+.. BSD LICENSE
+
+ Copyright (c) 2015-2017 Atomic Rules LLC
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Atomic Rules LLC nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ARK Poll Mode Driver
+====================
+
+The ARK PMD is a DPDK poll-mode driver for the Atomic Rules Arkville
+(ARK) family of devices.
+
+More information can be found at the `Atomic Rules website
+<http://atomicrules.com>`_.
+
+Overview
+--------
+
+The Atomic Rules Arkville product is DPDK and AXI compliant product
+that marshals packets across a PCIe conduit between host DPDK mbufs and
+FPGA AXI streams.
+
+The guiding principle of the ARK PMD, and of the overall Arkville product,
+has been to take the DPDK API/ABI as a fixed specification,
+then implement much of the business logic in FPGA RTL circuits.
+The approach of *working backwards* from the DPDK API/ABI and having
+the GPP host software *dictate*, while the FPGA hardware *copes*,
+results in significant performance gains over a naive implementation.
+
+While this document describes the ARK PMD software, it is helpful to
+understand what the FPGA hardware is and is not. The Arkville RTL
+component provides a single PCIe Physical Function (PF) supporting
+some number of RX/Ingress and TX/Egress Queues. The ARK PMD controls
+the Arkville core through a dedicated opaque Core BAR (CBAR).
+To allow users full freedom for their own FPGA application IP,
+an independent FPGA Application BAR (ABAR) is provided.
+
+One popular way to imagine Arkville's FPGA hardware aspect is as the
+FPGA PCIe-facing side of a so-called Smart NIC. The Arkville core does
+not contain any MACs, and is link-speed independent, as well as
+agnostic to the number of physical ports the application chooses to
+use. The ARK driver exposes the familiar PMD interface to allow packet
+movement to and from mbufs across multiple queues.
+
+However, FPGA RTL applications could contain a universe of added
+functionality that an Arkville RTL core does not provide or cannot
+anticipate. To allow for this expectation of user-defined
+innovation, the ARK PMD provides a dynamic mechanism of adding
+capabilities without having to modify the ARK PMD.
+
+The ARK PMD is intended to support all instances of the Arkville
+RTL Core, regardless of configuration, FPGA vendor, or target
+board. While specific capabilities such as number of physical
+hardware queue-pairs are negotiated; the driver is designed to
+remain constant over a broad and extendable feature set.
+
+Intentionally, Arkville by itself DOES NOT provide common NIC
+capabilities such as offload or receive-side scaling (RSS).
+These capabilities would be viewed as a gate-level "tax" on
+Green-box FPGA applications that do not require such function.
+Instead, they can be added as needed with essentially no
+overhead to the FPGA Application.
+
+The ARK PMD also supports optional user extensions, through dynamic linking.
+The ARK PMD user extensions are a feature of Arkville’s DPDK
+net/ark poll mode driver, allowing users to add their
+own code to extend the net/ark functionality without
+having to make source code changes to the driver. One motivation for
+this capability is that while DPDK provides a rich set of functions
+to interact with NIC-like capabilities (e.g. MAC addresses and statistics),
+the Arkville RTL IP does not include a MAC. Users can supply their
+own MAC or custom FPGA applications, which may require control from
+the PMD. The user extension is the means providing the control
+between the user's FPGA application and the existing DPDK features via
+the PMD.
+
+Device Parameters
+-------------------
+
+The ARK PMD supports device parameters that are used for packet
+routing and for internal packet generation and packet checking. This
+section describes the supported parameters. These features are
+primarily used for diagnostics, testing, and performance verification
+under the guidance of an Arkville specialist. The nominal use of
+Arkville does not require any configuration using these parameters.
+
+"Pkt_dir"
+
+The Packet Director controls connectivity between Arkville's internal
+hardware components. The features of the Pkt_dir are only used for
+diagnostics and testing; it is not intended for nominal use. The full
+set of features are not published at this level.
+
+Format:
+Pkt_dir=0x00110F10
+
+"Pkt_gen"
+
+The packet generator parameter takes a file as its argument. The file
+contains configuration parameters used internally for regression
+testing and are not intended to be published at this level. The
+packet generator is an internal Arkville hardware component.
+
+Format:
+Pkt_gen=./config/pg.conf
+
+"Pkt_chkr"
+
+The packet checker parameter takes a file as its argument. The file
+contains configuration parameters used internally for regression
+testing and are not intended to be published at this level. The
+packet checker is an internal Arkville hardware component.
+
+Format:
+Pkt_chkr=./config/pc.conf
+
+
+Data Path Interface
+-------------------
+
+Ingress RX and Egress TX operation uses the nominal DPDK API.
+The driver supports single-port, multi-queue operation for both RX and TX.
+
+Refer to ``ark_ethdev.h`` for the list of supported methods to
+act upon RX and TX Queues.
+
+Configuration Information
+-------------------------
+
+**DPDK Configuration Parameters**
+
+ The following configuration options are available for the ARK PMD:
+
+ * **CONFIG_RTE_LIBRTE_ARK_PMD** (default y): Enables or disables inclusion
+ of the ARK PMD driver in the DPDK compilation.
+
+ * **CONFIG_RTE_LIBRTE_ARK_PAD_TX** (default y): When enabled TX
+ packets are padded to 60 bytes to support downstream MACS.
+
+ * **CONFIG_RTE_LIBRTE_ARK_DEBUG_RX** (default n): Enables or disables debug
+ logging and internal checking of RX ingress logic within the ARK PMD driver.
+
+ * **CONFIG_RTE_LIBRTE_ARK_DEBUG_TX** (default n): Enables or disables debug
+ logging and internal checking of TX egress logic within the ARK PMD driver.
+
+ * **CONFIG_RTE_LIBRTE_ARK_DEBUG_STATS** (default n): Enables or disables debug
+ logging of detailed packet and performance statistics gathered in
+ the PMD and FPGA.
+
+ * **CONFIG_RTE_LIBRTE_ARK_DEBUG_TRACE** (default n): Enables or disables debug
+ logging of detailed PMD events and status.
+
+
+Building DPDK
+-------------
+
+See the :ref:`DPDK Getting Started Guide for Linux <linux_gsg>` for
+instructions on how to build DPDK.
+
+By default the ARK PMD library will be built into the DPDK library.
+
+For configuring and using UIO and VFIO frameworks, please also refer :ref:`the
+documentation that comes with DPDK suite <linux_gsg>`.
+
+Supported ARK RTL PCIe Instances
+--------------------------------
+
+ARK PMD supports the following Arkville RTL PCIe instances including:
+
+* ``1d6c:100d`` - AR-ARKA-FX0 [Arkville 32B DPDK Data Mover]
+* ``1d6c:100e`` - AR-ARKA-FX1 [Arkville 64B DPDK Data Mover]
+
+Supported Operating Systems
+---------------------------
+
+Any Linux distribution fulfilling the conditions described in the ``System Requirements``
+section of :ref:`the DPDK documentation <linux_gsg>`; also refer to the *DPDK
+Release Notes*. ARM and PowerPC architectures are not supported at this time.
+
+
+Supported Features
+------------------
+
+* Dynamic ARK PMD extensions
+* Multiple receive and transmit queues
+* Jumbo frames up to 9K
+* Hardware Statistics
+
+Unsupported Features
+--------------------
+
+Features that may be part of, or become part of, the Arkville RTL IP that are
+not currently supported or exposed by the ARK PMD include:
+
+* PCIe SR-IOV Virtual Functions (VFs)
+* Arkville's Packet Generator Control and Status
+* Arkville's Packet Director Control and Status
+* Arkville's Packet Checker Control and Status
+* Arkville's Timebase Management
+
+Pre-Requisites
+--------------
+
+#. Prepare the system as recommended by the DPDK suite. This includes environment
+ variables, hugepage configuration, tool-chains and configuration.
+
+#. Insert the igb_uio kernel module using the command 'modprobe igb_uio'
+
+#. Bind the intended ARK device to the igb_uio module
+
+At this point the system should be ready to run DPDK applications. Once the
+application runs to completion, the ARK PMD can be detached from igb_uio if necessary.
+
+Usage Example
+-------------
+
+This section demonstrates how to launch **testpmd** with Atomic Rules ARK
+devices managed by librte_pmd_ark.
+
+#. Load the kernel modules:
+
+ .. code-block:: console
+
+ modprobe uio
+ insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
+
+ .. note::
+
+ The ARK PMD driver depends upon the igb_uio user space I/O kernel module.
+
+#. Mount and request huge pages:
+
+ .. code-block:: console
+
+ mount -t hugetlbfs nodev /mnt/huge
+ echo 256 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
+
+#. Bind UIO driver to ARK device at 0000:01:00.0 (using dpdk-devbind.py):
+
+ .. code-block:: console
+
+ ./usertools/dpdk-devbind.py --bind=igb_uio 0000:01:00.0
+
+ .. note::
+
+ The last argument to dpdk-devbind.py is the 4-tuple that identifies a specific PCIe
+ device. You can use lspci -d 1d6c: to identify all Atomic Rules devices in the system,
+ and thus determine the correct 4-tuple argument to dpdk-devbind.py.
+
+#. Start testpmd with basic parameters:
+
+ .. code-block:: console
+
+ ./x86_64-native-linuxapp-gcc/app/testpmd -l 0-3 -n 4 -- -i
+
+ Example output:
+
+ .. code-block:: console
+
+ [...]
+ EAL: PCI device 0000:01:00.0 on NUMA socket -1
+ EAL: probe driver: 1d6c:100e rte_ark_pmd
+ EAL: PCI memory mapped at 0x7f9b6c400000
+ PMD: eth_ark_dev_init(): Initializing 0:2:0.1
+ ARKP PMD CommitID: 378f3a67
+ Configuring Port 0 (socket 0)
+ Port 0: DC:3C:F6:00:00:01
+ Checking link statuses...
+ Port 0 Link Up - speed 100000 Mbps - full-duplex
+ Done
+ testpmd>
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 4537113..3305e80 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -36,6 +36,7 @@ Network Interface Controller Drivers
:numbered:
overview
+ ark
avp
bnx2x
bnxt
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 7a2cb00..2fb7ca5 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -36,6 +36,8 @@ core-libs += librte_net librte_kvargs
DIRS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET) += af_packet
DEPDIRS-af_packet = $(core-libs)
+DIRS-$(CONFIG_RTE_LIBRTE_ARK_PMD) += ark
+DEPDIRS-ark = $(core-libs)
DIRS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += avp
DEPDIRS-avp = $(core-libs)
DIRS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD) += bnx2x
diff --git a/drivers/net/ark/Makefile b/drivers/net/ark/Makefile
new file mode 100644
index 0000000..a4e03ab
--- /dev/null
+++ b/drivers/net/ark/Makefile
@@ -0,0 +1,55 @@
+# BSD LICENSE
+#
+# Copyright (c) 2015-2017 Atomic Rules LLC
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of copyright holder nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_ark.a
+
+CFLAGS += -O3 -I./
+CFLAGS += $(WERROR_FLAGS) -Werror
+
+EXPORT_MAP := rte_pmd_ark_version.map
+
+LIBABIVER := 1
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_ARK_PMD) += ark_ethdev.c
+
+# this lib depends upon:
+LDLIBS += -lpthread
+LDLIBS += -ldl
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c
new file mode 100644
index 0000000..fa5c9aa
--- /dev/null
+++ b/drivers/net/ark/ark_ethdev.c
@@ -0,0 +1,288 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright (c) 2015-2017 Atomic Rules LLC
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of copyright holder nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <unistd.h>
+#include <sys/stat.h>
+#include <dlfcn.h>
+
+#include <rte_kvargs.h>
+
+#include "ark_global.h"
+#include "ark_logs.h"
+#include "ark_ethdev.h"
+
+/* Internal prototypes */
+static int eth_ark_check_args(struct ark_adapter *ark, const char *params);
+static int eth_ark_dev_init(struct rte_eth_dev *dev);
+static int eth_ark_dev_uninit(struct rte_eth_dev *eth_dev);
+static int eth_ark_dev_configure(struct rte_eth_dev *dev);
+static void eth_ark_dev_info_get(struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info);
+
+/*
+ * The packet generator is a functional block used to generate packet
+ * patterns for testing. It is not intended for nominal use.
+ */
+#define ARK_PKTGEN_ARG "Pkt_gen"
+
+/*
+ * The packet checker is a functional block used to verify packet
+ * patterns for testing. It is not intended for nominal use.
+ */
+#define ARK_PKTCHKR_ARG "Pkt_chkr"
+
+/*
+ * The packet director is used to select the internal ingress and
+ * egress packets paths during testing. It is not intended for
+ * nominal use.
+ */
+#define ARK_PKTDIR_ARG "Pkt_dir"
+
+/* Devinfo configurations */
+#define ARK_RX_MAX_QUEUE (4096 * 4)
+#define ARK_RX_MIN_QUEUE (512)
+#define ARK_RX_MAX_PKT_LEN ((16 * 1024) - 128)
+#define ARK_RX_MIN_BUFSIZE (1024)
+
+#define ARK_TX_MAX_QUEUE (4096 * 4)
+#define ARK_TX_MIN_QUEUE (256)
+
+static const char * const valid_arguments[] = {
+ ARK_PKTGEN_ARG,
+ ARK_PKTCHKR_ARG,
+ ARK_PKTDIR_ARG,
+ NULL
+};
+
+static const struct rte_pci_id pci_id_ark_map[] = {
+ {RTE_PCI_DEVICE(0x1d6c, 0x100d)},
+ {RTE_PCI_DEVICE(0x1d6c, 0x100e)},
+ {.vendor_id = 0, /* sentinel */ },
+};
+
+static struct eth_driver rte_ark_pmd = {
+ .pci_drv = {
+ .probe = rte_eth_dev_pci_probe,
+ .remove = rte_eth_dev_pci_remove,
+ .id_table = pci_id_ark_map,
+ .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC
+ },
+ .eth_dev_init = eth_ark_dev_init,
+ .eth_dev_uninit = eth_ark_dev_uninit,
+ .dev_private_size = sizeof(struct ark_adapter),
+};
+
+static const struct eth_dev_ops ark_eth_dev_ops = {
+ .dev_configure = eth_ark_dev_configure,
+ .dev_infos_get = eth_ark_dev_info_get,
+};
+
+static int
+eth_ark_dev_init(struct rte_eth_dev *dev)
+{
+ struct ark_adapter *ark =
+ (struct ark_adapter *)dev->data->dev_private;
+ struct rte_pci_device *pci_dev;
+ int ret = -1;
+
+ ark->eth_dev = dev;
+
+ PMD_FUNC_LOG(DEBUG, "\n");
+
+ pci_dev = ARK_DEV_TO_PCI(dev);
+ rte_eth_copy_pci_info(dev, pci_dev);
+
+ ark->bar0 = (uint8_t *)pci_dev->mem_resource[0].addr;
+ ark->a_bar = (uint8_t *)pci_dev->mem_resource[2].addr;
+
+ dev->dev_ops = &ark_eth_dev_ops;
+ dev->data->dev_flags |= RTE_ETH_DEV_DETACHABLE;
+
+ if (pci_dev->device.devargs)
+ ret = eth_ark_check_args(ark, pci_dev->device.devargs->args);
+ else
+ PMD_DRV_LOG(INFO, "No Device args found\n");
+
+ return ret;
+}
+
+static int
+eth_ark_dev_uninit(struct rte_eth_dev *dev)
+{
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return 0;
+
+ dev->dev_ops = NULL;
+ dev->rx_pkt_burst = NULL;
+ dev->tx_pkt_burst = NULL;
+ return 0;
+}
+
+static int
+eth_ark_dev_configure(struct rte_eth_dev *dev __rte_unused)
+{
+ PMD_FUNC_LOG(DEBUG, "\n");
+ return 0;
+}
+
+static void
+eth_ark_dev_info_get(struct rte_eth_dev *dev,
+ struct rte_eth_dev_info *dev_info)
+{
+ dev_info->max_rx_pktlen = ARK_RX_MAX_PKT_LEN;
+ dev_info->min_rx_bufsize = ARK_RX_MIN_BUFSIZE;
+
+ dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = ARK_RX_MAX_QUEUE,
+ .nb_min = ARK_RX_MIN_QUEUE,
+ .nb_align = ARK_RX_MIN_QUEUE}; /* power of 2 */
+
+ dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+ .nb_max = ARK_TX_MAX_QUEUE,
+ .nb_min = ARK_TX_MIN_QUEUE,
+ .nb_align = ARK_TX_MIN_QUEUE}; /* power of 2 */
+
+ /* ARK PMD supports all line rates, how do we indicate that here ?? */
+ dev_info->speed_capa = (ETH_LINK_SPEED_1G |
+ ETH_LINK_SPEED_10G |
+ ETH_LINK_SPEED_25G |
+ ETH_LINK_SPEED_40G |
+ ETH_LINK_SPEED_50G |
+ ETH_LINK_SPEED_100G);
+ dev_info->pci_dev = ARK_DEV_TO_PCI(dev);
+}
+
+static inline int
+process_pktdir_arg(const char *key, const char *value,
+ void *extra_args)
+{
+ PMD_FUNC_LOG(DEBUG, "key = %s, value = %s\n",
+ key, value);
+ struct ark_adapter *ark =
+ (struct ark_adapter *)extra_args;
+
+ ark->pkt_dir_v = strtol(value, NULL, 16);
+ PMD_FUNC_LOG(DEBUG, "pkt_dir_v = 0x%x\n", ark->pkt_dir_v);
+ return 0;
+}
+
+static inline int
+process_file_args(const char *key, const char *value, void *extra_args)
+{
+ PMD_FUNC_LOG(DEBUG, "key = %s, value = %s\n",
+ key, value);
+ char *args = (char *)extra_args;
+
+ /* Open the configuration file */
+	FILE *file = fopen(value, "r");
+	char line[ARK_MAX_ARG_LEN];
+	int size = 0;
+	int first = 1;
+
+	if (file == NULL) {
+		PMD_DRV_LOG(ERR, "Unable to open config file %s\n", value);
+		return -1;
+	}
+
+	while (fgets(line, sizeof(line), file)) {
+ size += strlen(line);
+ if (size >= ARK_MAX_ARG_LEN) {
+ PMD_DRV_LOG(ERR, "Unable to parse file %s args, "
+ "parameter list is too long\n", value);
+ fclose(file);
+ return -1;
+ }
+ if (first) {
+ strncpy(args, line, ARK_MAX_ARG_LEN);
+ first = 0;
+ } else {
+ strncat(args, line, ARK_MAX_ARG_LEN);
+ }
+ }
+ PMD_FUNC_LOG(DEBUG, "file = %s\n", args);
+ fclose(file);
+ return 0;
+}
+
+static int
+eth_ark_check_args(struct ark_adapter *ark, const char *params)
+{
+ struct rte_kvargs *kvlist;
+ unsigned int k_idx;
+ struct rte_kvargs_pair *pair = NULL;
+
+ kvlist = rte_kvargs_parse(params, valid_arguments);
+ if (kvlist == NULL)
+ return 0;
+
+ ark->pkt_gen_args[0] = 0;
+ ark->pkt_chkr_args[0] = 0;
+
+ for (k_idx = 0; k_idx < kvlist->count; k_idx++) {
+ pair = &kvlist->pairs[k_idx];
+ PMD_FUNC_LOG(DEBUG, "**** Arg passed to PMD = %s:%s\n",
+ pair->key,
+ pair->value);
+ }
+
+ if (rte_kvargs_process(kvlist,
+ ARK_PKTDIR_ARG,
+ &process_pktdir_arg,
+ ark) != 0) {
+ PMD_DRV_LOG(ERR, "Unable to parse arg %s\n", ARK_PKTDIR_ARG);
+ return -1;
+ }
+
+ if (rte_kvargs_process(kvlist,
+ ARK_PKTGEN_ARG,
+ &process_file_args,
+ ark->pkt_gen_args) != 0) {
+ PMD_DRV_LOG(ERR, "Unable to parse arg %s\n", ARK_PKTGEN_ARG);
+ return -1;
+ }
+
+ if (rte_kvargs_process(kvlist,
+ ARK_PKTCHKR_ARG,
+ &process_file_args,
+ ark->pkt_chkr_args) != 0) {
+ PMD_DRV_LOG(ERR, "Unable to parse arg %s\n", ARK_PKTCHKR_ARG);
+ return -1;
+ }
+
+ PMD_DRV_LOG(INFO, "packet director set to 0x%x\n", ark->pkt_dir_v);
+
+ return 0;
+}
+
+RTE_PMD_REGISTER_PCI(net_ark, rte_ark_pmd.pci_drv);
+RTE_PMD_REGISTER_KMOD_DEP(net_ark, "* igb_uio | uio_pci_generic ");
+RTE_PMD_REGISTER_PCI_TABLE(net_ark, pci_id_ark_map);
+RTE_PMD_REGISTER_PARAM_STRING(net_ark,
+ ARK_PKTGEN_ARG "=<filename> "
+ ARK_PKTCHKR_ARG "=<filename> "
+ ARK_PKTDIR_ARG "=<bitmap>");
diff --git a/drivers/net/ark/ark_ethdev.h b/drivers/net/ark/ark_ethdev.h
new file mode 100644
index 0000000..9f8d32f
--- /dev/null
+++ b/drivers/net/ark/ark_ethdev.h
@@ -0,0 +1,41 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright (c) 2015-2017 Atomic Rules LLC
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of copyright holder nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _ARK_ETHDEV_H_
+#define _ARK_ETHDEV_H_
+
+#define ARK_DEV_TO_PCI(eth_dev) \
+ RTE_DEV_TO_PCI((eth_dev)->device)
+
+
+#endif
diff --git a/drivers/net/ark/ark_global.h b/drivers/net/ark/ark_global.h
new file mode 100644
index 0000000..21449c3
--- /dev/null
+++ b/drivers/net/ark/ark_global.h
@@ -0,0 +1,110 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright (c) 2015-2017 Atomic Rules LLC
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of copyright holder nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _ARK_GLOBAL_H_
+#define _ARK_GLOBAL_H_
+
+#include <time.h>
+#include <assert.h>
+
+#include <rte_mbuf.h>
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_string_fns.h>
+#include <rte_cycles.h>
+#include <rte_kvargs.h>
+#include <rte_dev.h>
+#include <rte_version.h>
+
+#define ETH_ARK_ARG_MAXLEN 64
+#define ARK_SYSCTRL_BASE 0x0
+#define ARK_PKTGEN_BASE 0x10000
+#define ARK_MPU_RX_BASE 0x20000
+#define ARK_UDM_BASE 0x30000
+#define ARK_MPU_TX_BASE 0x40000
+#define ARK_DDM_BASE 0x60000
+#define ARK_CMAC_BASE 0x80000
+#define ARK_PKTDIR_BASE 0xa0000
+#define ARK_PKTCHKR_BASE 0x90000
+#define ARK_RCPACING_BASE 0xb0000
+#define ARK_EXTERNAL_BASE 0x100000
+#define ARK_MPU_QOFFSET 0x00100
+#define ARK_MAX_PORTS 8
+
+#define offset8(n) n
+#define offset16(n) ((n) / 2)
+#define offset32(n) ((n) / 4)
+#define offset64(n) ((n) / 8)
+
+/* Maximum length of arg list in bytes */
+#define ARK_MAX_ARG_LEN 256
+
+/*
+ * Structure to store private data for each PF/VF instance.
+ */
+#define def_ptr(type, name) \
+ union type { \
+ uint64_t *t64; \
+ uint32_t *t32; \
+ uint16_t *t16; \
+ uint8_t *t8; \
+ void *v; \
+ } name
+
+struct ark_adapter {
+ /* User extension private data */
+ void *user_data;
+
+ int num_ports;
+
+ /* Packet generator/checker args */
+ char pkt_gen_args[ARK_MAX_ARG_LEN];
+ char pkt_chkr_args[ARK_MAX_ARG_LEN];
+ uint32_t pkt_dir_v;
+
+ /* eth device */
+ struct rte_eth_dev *eth_dev;
+
+ void *d_handle;
+
+ /* Our Bar 0 */
+ uint8_t *bar0;
+
+ /* Application Bar */
+ uint8_t *a_bar;
+};
+
+typedef uint32_t *ark_t;
+
+#endif
diff --git a/drivers/net/ark/ark_logs.h b/drivers/net/ark/ark_logs.h
new file mode 100644
index 0000000..8aff296
--- /dev/null
+++ b/drivers/net/ark/ark_logs.h
@@ -0,0 +1,119 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright (c) 2015-2017 Atomic Rules LLC
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of copyright holder nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _ARK_DEBUG_H_
+#define _ARK_DEBUG_H_
+
+#include <inttypes.h>
+#include <rte_log.h>
+
+
+/* Configuration option to pad TX packets to 60 bytes */
+#ifdef RTE_LIBRTE_ARK_PAD_TX
+#define ARK_TX_PAD_TO_60 1
+#else
+#define ARK_TX_PAD_TO_60 0
+#endif
+
+/* System camel-case definitions changed to upper case */
+#define PRIU32 PRIu32
+#define PRIU64 PRIu64
+
+/* Format specifiers for string data pairs */
+#define ARK_SU32 "\n\t%-20s %'20" PRIU32
+#define ARK_SU64 "\n\t%-20s %'20" PRIU64
+#define ARK_SU64X "\n\t%-20s %#20" PRIx64
+#define ARK_SPTR "\n\t%-20s %20p"
+
+
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+ RTE_LOG(level, PMD, fmt, ## args)
+
+/* Conditional trace definitions */
+#define ARK_TRACE_ON(level, fmt, ...) \
+ RTE_LOG(level, PMD, fmt, ##__VA_ARGS__)
+
+/* This pattern allows the compiler to check arguments even if disabled */
+#define ARK_TRACE_OFF(level, fmt, ...) \
+ do {if (0) RTE_LOG(level, PMD, fmt, ##__VA_ARGS__); } \
+ while (0)
+
+
+/* tracing including the function name */
+#define ARK_FUNC_ON(level, fmt, args...) \
+ RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args)
+
+/* tracing including the function name */
+#define ARK_FUNC_OFF(level, fmt, args...) \
+ do { if (0) RTE_LOG(level, PMD, "%s(): " fmt, __func__, ## args); } \
+ while (0)
+
+
+/* Debug macro for tracing full behavior, function tracing and messages */
+#ifdef RTE_LIBRTE_ARK_DEBUG_TRACE
+#define PMD_FUNC_LOG(level, fmt, ...) ARK_FUNC_ON(level, fmt, ##__VA_ARGS__)
+#define PMD_DEBUG_LOG(level, fmt, ...) ARK_TRACE_ON(level, fmt, ##__VA_ARGS__)
+#else
+#define PMD_FUNC_LOG(level, fmt, ...) ARK_FUNC_OFF(level, fmt, ##__VA_ARGS__)
+#define PMD_DEBUG_LOG(level, fmt, ...) ARK_TRACE_OFF(level, fmt, ##__VA_ARGS__)
+#endif
+
+
+/* Debug macro for reporting FPGA statistics */
+#ifdef RTE_LIBRTE_ARK_DEBUG_STATS
+#define PMD_STATS_LOG(level, fmt, ...) ARK_TRACE_ON(level, fmt, ##__VA_ARGS__)
+#else
+#define PMD_STATS_LOG(level, fmt, ...) ARK_TRACE_OFF(level, fmt, ##__VA_ARGS__)
+#endif
+
+
+/* Debug macro for RX path */
+#ifdef RTE_LIBRTE_ARK_DEBUG_RX
+#define ARK_RX_DEBUG 1
+#define PMD_RX_LOG(level, fmt, ...) ARK_TRACE_ON(level, fmt, ##__VA_ARGS__)
+#else
+#define ARK_RX_DEBUG 0
+#define PMD_RX_LOG(level, fmt, ...) ARK_TRACE_OFF(level, fmt, ##__VA_ARGS__)
+#endif
+
+/* Debug macro for TX path */
+#ifdef RTE_LIBRTE_ARK_DEBUG_TX
+#define ARK_TX_DEBUG 1
+#define PMD_TX_LOG(level, fmt, ...) ARK_TRACE_ON(level, fmt, ##__VA_ARGS__)
+#else
+#define ARK_TX_DEBUG 0
+#define PMD_TX_LOG(level, fmt, ...) ARK_TRACE_OFF(level, fmt, ##__VA_ARGS__)
+#endif
+
+#endif
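The ARK_TRACE_OFF()/ARK_FUNC_OFF() wrappers above rely on the do { if (0) ... } while (0) idiom: the disabled call is still parsed and type-checked, so -Wformat keeps validating the format string against its arguments, yet the dead branch is optimized out and no code is emitted. A self-contained sketch of the same idiom (TRACE_OFF is a hypothetical stand-in, not part of this driver):

#include <stdio.h>

#define TRACE_OFF(fmt, ...) \
	do { if (0) printf(fmt, ##__VA_ARGS__); } while (0)

int main(void)
{
	int pkts = 42;

	/* Emits no code, but passing a mismatched argument type here
	 * would still trigger a format warning at compile time.
	 */
	TRACE_OFF("processed %d packets\n", pkts);
	return 0;
}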
diff --git a/drivers/net/ark/rte_pmd_ark_version.map b/drivers/net/ark/rte_pmd_ark_version.map
new file mode 100644
index 0000000..1062e04
--- /dev/null
+++ b/drivers/net/ark/rte_pmd_ark_version.map
@@ -0,0 +1,4 @@
+DPDK_17.05 {
+ local: *;
+
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 9c3a753..3829c60 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -104,6 +104,7 @@ _LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += -lrte_mempool_ring
_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_STACK) += -lrte_mempool_stack
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AF_PACKET) += -lrte_pmd_af_packet
+_LDLIBS-$(CONFIG_RTE_LIBRTE_ARK_PMD) += -lrte_pmd_ark
_LDLIBS-$(CONFIG_RTE_LIBRTE_AVP_PMD) += -lrte_pmd_avp
_LDLIBS-$(CONFIG_RTE_LIBRTE_BNX2X_PMD) += -lrte_pmd_bnx2x -lz
_LDLIBS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += -lrte_pmd_bnxt
--
1.9.1
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v3 1/3] add new xstats API retrieving by id
2017-04-04 15:45 0% ` Van Haaren, Harry
@ 2017-04-04 16:18 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2017-04-04 16:18 UTC (permalink / raw)
To: Van Haaren, Harry
Cc: dev, Kozak, KubaX, Kulasek, TomaszX, Piasecki, JacekX,
Jastrzebski, MichalX K
2017-04-04 15:45, Van Haaren, Harry:
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > 2017-04-03 14:09, Jacek Piasecki:
> > > From: Michal Jastrzebski <michalx.k.jastrzebski@intel.com>
> > >
> > > Extended xstats API in ethdev library to allow grouping of stats
> > > logically so they can be retrieved per logical grouping – managed
> > > by the application.
> > > Changed existing functions rte_eth_xstats_get_names and
> > > rte_eth_xstats_get to use a new list of arguments: array of ids
> > > and array of values. ABI versioning mechanism was used to
> > > support backward compatibility.
> > > Introduced two new functions rte_eth_xstats_get_all and
> > > rte_eth_xstats_get_names_all which keep functionality of the
> > > previous ones (respectively rte_eth_xstats_get and
> > > rte_eth_xstats_get_names) but use new API inside.
> >
> > Sorry, I still do not understand why we should complicate the API.
> > What is not possible with the existing API?
>
>
> The current API only allows retrieval of *all* of the NIC statistics at once. Given this requires an MMIO read PCI transaction per statistic, it is an inefficient way of retrieving just a few key statistics. My understanding is that often a monitoring agent only has an interest in a few key statistics, and the current API forces wasting CPU time and PCIe bandwidth in retrieving *all* statistics, even those that the application didn't explicitly show an interest in.
>
> The more flexible API as implemented in this patchset allows retrieval of statistics per ID. If a PMD wishes, it can be implemented to read just the required NIC registers. As a result, the monitoring application no longer wastes PCIe bandwidth and CPU time.
Thanks for the explanation.
It has never been explained before.
> > The v1 was submitted in the last days of the proposal deadline,
> > v2 in the last minutes of integration deadline,
> > and v3 is submitted after the deadline.
> >
> > Given it is late and it is still difficult to understand the benefit,
> > I think it won't make the release 17.05.
>
>
> All in all, the value this patchset adds to DPDK is to enable applications to request the statistics of interest to them, and to allow PMDs to implement the statistic functions more efficiently if they wish. As a bonus, the ethdev and eventdev xstats APIs will have a consistent design, as eventdev already uses this optimized ID-based method.
>
> Unless there are serious concerns about the current API (which should have been flagged between a v1 and now), I don't see a reason not to update the API to use this improved method. If there are concerns about how to update applications to the new API, that can be addressed in a documentation patch if the community feels there is value in that.
I have commented on the need of explanation 3 days after the v1.
There was no answer.
So the review stopped at this point.
Then one month later (last Thursday), a v2 appears which
"replaced grouping mechanism to use mechanism based on IDs".
So you cannot say it "should have been flagged between a v1 and now".
Just because of the lack of communication, I do not want to spend these
days reviewing the API. It needs time and it will wait.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 1/3] add new xstats API retrieving by id
2017-04-04 15:03 0% ` Thomas Monjalon
@ 2017-04-04 15:45 0% ` Van Haaren, Harry
2017-04-04 16:18 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Van Haaren, Harry @ 2017-04-04 15:45 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, Kozak, KubaX, Kulasek, TomaszX, Piasecki, JacekX,
Jastrzebski, MichalX K
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Tuesday, April 4, 2017 4:04 PM
> To: Piasecki, JacekX <jacekx.piasecki@intel.com>; Jastrzebski, MichalX K
> <michalx.k.jastrzebski@intel.com>
> Cc: dev@dpdk.org; Van Haaren, Harry <harry.van.haaren@intel.com>; Kozak, KubaX
> <kubax.kozak@intel.com>; Kulasek, TomaszX <tomaszx.kulasek@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v3 1/3] add new xstats API retrieving by id
>
> 2017-04-03 14:09, Jacek Piasecki:
> > From: Michal Jastrzebski <michalx.k.jastrzebski@intel.com>
> >
> > Extended xstats API in ethdev library to allow grouping of stats
> > logically so they can be retrieved per logical grouping – managed
> > by the application.
> > Changed existing functions rte_eth_xstats_get_names and
> > rte_eth_xstats_get to use a new list of arguments: array of ids
> > and array of values. ABI versioning mechanism was used to
> > support backward compatibility.
> > Introduced two new functions rte_eth_xstats_get_all and
> > rte_eth_xstats_get_names_all which keep functionality of the
> > previous ones (respectively rte_eth_xstats_get and
> > rte_eth_xstats_get_names) but use new API inside.
>
> Sorry, I still do not understand why we should complicate the API.
> What is not possible with the existing API?
The current API only allows retrieval of *all* of the NIC statistics at once. Given this requires an MMIO read PCI transaction per statistic, it is an inefficient way of retrieving just a few key statistics. My understanding is that often a monitoring agent only has an interest in a few key statistics, and the current API forces wasting CPU time and PCIe bandwidth in retrieving *all* statistics, even those that the application didn't explicitly show an interest in.
The more flexible API as implemented in this patchset allows retrieval of statistics per ID. If a PMD wishes, it can be implemented to read just the required NIC registers. As a result, the monitoring application no longer wastes PCIe bandwidth and CPU time.
> The v1 was submitted in the last days of the proposal deadline,
> v2 in the last minutes of integration deadline,
> and v3 is submitted after the deadline.
>
> Given it is late and it is still difficult to understand the benefit,
> I think it won't make the release 17.05.
All in all, the value this patchset adds to DPDK is to enable applications to request the statistics of interest to them, and to allow PMDs to implement the statistic functions more efficiently if they wish. As a bonus, the ethdev and eventdev xstats APIs will have a consistent design, as eventdev already uses this optimized ID-based method.
Unless there are serious concerns about the current API (which should have been flagged between a v1 and now), I don't see a reason not to update the API to use this improved method. If there are concerns about how to update applications to the new API, that can be addressed in a documentation patch if the community feels there is value in that. A minimal usage sketch follows.
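To make the split concrete, here is a sketch of the flow described above, based on the signatures proposed in v3 of this patchset (rte_eth_xstats_get_id_by_name() plus the reworked rte_eth_xstats_get()); the statistic name is only an example:

uint64_t id, value;

/* Control path, done once: resolve the name via string compares */
if (rte_eth_xstats_get_id_by_name(port_id, "rx_good_packets", &id) != 0)
	return; /* statistic not exposed by this port */

/* Fast path, done per poll: fetch only the counter of interest */
if (rte_eth_xstats_get(port_id, &id, &value, 1) == 1)
	printf("rx_good_packets: %" PRIu64 "\n", value);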
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH 1/2] eal: add rte_cpu_is_supported to map
@ 2017-04-04 15:38 3% ` Aaron Conole
2017-04-06 20:58 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Aaron Conole @ 2017-04-04 15:38 UTC (permalink / raw)
To: dev; +Cc: Thomas Monjalon
This function is now part of the public ABI, so should be
advertised as such.
Signed-off-by: Aaron Conole <aconole@redhat.com>
---
lib/librte_eal/bsdapp/eal/rte_eal_version.map | 7 +++++++
lib/librte_eal/linuxapp/eal/rte_eal_version.map | 7 +++++++
2 files changed, 14 insertions(+)
diff --git a/lib/librte_eal/bsdapp/eal/rte_eal_version.map b/lib/librte_eal/bsdapp/eal/rte_eal_version.map
index 67f2ffb..82f0f9f 100644
--- a/lib/librte_eal/bsdapp/eal/rte_eal_version.map
+++ b/lib/librte_eal/bsdapp/eal/rte_eal_version.map
@@ -182,3 +182,10 @@ DPDK_17.02 {
rte_bus_unregister;
} DPDK_16.11;
+
+DPDK_17.05 {
+	global:
+
+ rte_cpu_is_supported;
+
+} DPDK_17.02;
diff --git a/lib/librte_eal/linuxapp/eal/rte_eal_version.map b/lib/librte_eal/linuxapp/eal/rte_eal_version.map
index 9c134b4..461f15d 100644
--- a/lib/librte_eal/linuxapp/eal/rte_eal_version.map
+++ b/lib/librte_eal/linuxapp/eal/rte_eal_version.map
@@ -186,3 +186,10 @@ DPDK_17.02 {
rte_bus_unregister;
} DPDK_16.11;
+
+DPDK_17.05 {
+	global:
+
+ rte_cpu_is_supported;
+
+} DPDK_17.02;
--
2.9.3
^ permalink raw reply [relevance 3%]
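For context, rte_cpu_is_supported() checks at run time that the CPU provides every instruction-set extension the binary was compiled to require. With the symbol exported as above, an application can guard its own startup; a minimal sketch, assuming the declaration lives in rte_cpuflags.h as it did in this DPDK era:

#include <stdio.h>
#include <rte_cpuflags.h>

int main(void)
{
	/* Fail gracefully instead of hitting an illegal instruction when,
	 * e.g., an AVX2-enabled build runs on an older CPU.
	 */
	if (!rte_cpu_is_supported()) {
		fprintf(stderr, "CPU lacks required instruction set extensions\n");
		return 1;
	}
	/* ... continue with rte_eal_init() and normal setup ... */
	return 0;
}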
* Re: [dpdk-dev] [PATCH v3 1/3] add new xstats API retrieving by id
2017-04-03 12:09 1% ` [dpdk-dev] [PATCH v3 1/3] add new xstats API retrieving by id Jacek Piasecki
2017-04-03 12:37 0% ` Van Haaren, Harry
@ 2017-04-04 15:03 0% ` Thomas Monjalon
2017-04-04 15:45 0% ` Van Haaren, Harry
2017-04-10 17:59 4% ` [dpdk-dev] [PATCH v4 0/3] Extended xstats API in ethdev library to allow grouping of stats Jacek Piasecki
2 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2017-04-04 15:03 UTC (permalink / raw)
To: Jacek Piasecki, Michal Jastrzebski
Cc: dev, harry.van.haaren, Kuba Kozak, Tomasz Kulasek
2017-04-03 14:09, Jacek Piasecki:
> From: Michal Jastrzebski <michalx.k.jastrzebski@intel.com>
>
> Extended xstats API in ethdev library to allow grouping of stats
> logically so they can be retrieved per logical grouping – managed
> by the application.
> Changed existing functions rte_eth_xstats_get_names and
> rte_eth_xstats_get to use a new list of arguments: array of ids
> and array of values. ABI versioning mechanism was used to
> support backward compatibility.
> Introduced two new functions rte_eth_xstats_get_all and
> rte_eth_xstats_get_names_all which keep functionality of the
> previous ones (respectively rte_eth_xstats_get and
> rte_eth_xstats_get_names) but use new API inside.
Sorry, I still do not understand why we should complicate the API.
What is not possible with the existing API?
The v1 was submitted in the last days of the proposal deadline,
v2 in the last minutes of integration deadline,
and v3 is submitted after the deadline.
Given it is late and it is still difficult to understand the benefit,
I think it won't make the release 17.05.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] doc: update deprecation notice and release_17_05 for ABI change
2017-04-03 15:32 4% ` Mcnamara, John
@ 2017-04-04 9:59 4% ` De Lara Guarch, Pablo
0 siblings, 0 replies; 200+ results
From: De Lara Guarch, Pablo @ 2017-04-04 9:59 UTC (permalink / raw)
To: Mcnamara, John, akhil.goyal, dev; +Cc: nhorman
> -----Original Message-----
> From: Mcnamara, John
> Sent: Monday, April 03, 2017 4:33 PM
> To: akhil.goyal@nxp.com; dev@dpdk.org
> Cc: nhorman@tuxdriver.com; De Lara Guarch, Pablo
> Subject: RE: [dpdk-dev] [PATCH] doc: update deprecation notice and
> release_17_05 for ABI change
>
>
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of
> akhil.goyal@nxp.com
> > Sent: Monday, April 3, 2017 11:52 AM
> > To: dev@dpdk.org
> > Cc: nhorman@tuxdriver.com; De Lara Guarch, Pablo
> > <pablo.de.lara.guarch@intel.com>; Akhil Goyal <akhil.goyal@nxp.com>
> > Subject: [dpdk-dev] [PATCH] doc: update deprecation notice and
> > release_17_05 for ABI change
> >
> > From: Akhil Goyal <akhil.goyal@nxp.com>
> >
> > Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
>
>
> Acked-by: John McNamara <john.mcnamara@intel.com>
Applied to dpdk-next-crypto.
Thanks,
Pablo
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH] doc: update deprecation notice and release_17_05 for ABI change
2017-04-03 10:52 9% [dpdk-dev] [PATCH] doc: update deprecation notice and release_17_05 for ABI change akhil.goyal
@ 2017-04-03 15:32 4% ` Mcnamara, John
2017-04-04 9:59 4% ` De Lara Guarch, Pablo
0 siblings, 1 reply; 200+ results
From: Mcnamara, John @ 2017-04-03 15:32 UTC (permalink / raw)
To: akhil.goyal, dev; +Cc: nhorman, De Lara Guarch, Pablo
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of akhil.goyal@nxp.com
> Sent: Monday, April 3, 2017 11:52 AM
> To: dev@dpdk.org
> Cc: nhorman@tuxdriver.com; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; Akhil Goyal <akhil.goyal@nxp.com>
> Subject: [dpdk-dev] [PATCH] doc: update deprecation notice and
> release_17_05 for ABI change
>
> From: Akhil Goyal <akhil.goyal@nxp.com>
>
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v3 1/3] add new xstats API retrieving by id
2017-04-03 12:09 1% ` [dpdk-dev] [PATCH v3 1/3] add new xstats API retrieving by id Jacek Piasecki
@ 2017-04-03 12:37 0% ` Van Haaren, Harry
2017-04-04 15:03 0% ` Thomas Monjalon
2017-04-10 17:59 4% ` [dpdk-dev] [PATCH v4 0/3] Extended xstats API in ethdev library to allow grouping of stats Jacek Piasecki
2 siblings, 0 replies; 200+ results
From: Van Haaren, Harry @ 2017-04-03 12:37 UTC (permalink / raw)
To: Piasecki, JacekX, dev
Cc: Jastrzebski, MichalX K, Kozak, KubaX, Kulasek, TomaszX
> From: Piasecki, JacekX
> Sent: Monday, April 3, 2017 1:10 PM
> To: dev@dpdk.org
> Cc: Van Haaren, Harry <harry.van.haaren@intel.com>; Jastrzebski, MichalX K
> <michalx.k.jastrzebski@intel.com>; Piasecki, JacekX <jacekx.piasecki@intel.com>; Kozak, KubaX
> <kubax.kozak@intel.com>; Kulasek, TomaszX <tomaszx.kulasek@intel.com>
> Subject: [PATCH v3 1/3] add new xstats API retrieving by id
>
> From: Michal Jastrzebski <michalx.k.jastrzebski@intel.com>
>
> Extended xstats API in ethdev library to allow grouping of stats
> logically so they can be retrieved per logical grouping – managed
> by the application.
> Changed existing functions rte_eth_xstats_get_names and
> rte_eth_xstats_get to use a new list of arguments: array of ids
> and array of values. ABI versioning mechanism was used to
> support backward compatibility.
> Introduced two new functions rte_eth_xstats_get_all and
> rte_eth_xstats_get_names_all which keep functionality of the
> previous ones (respectively rte_eth_xstats_get and
> rte_eth_xstats_get_names) but use new API inside.
> Both functions marked as deprecated.
> Introduced new function: rte_eth_xstats_get_id_by_name
> to retrieve xstats ids by its names.
>
> test-pmd: add support for new xstats API retrieving by id in
> testpmd application: xstats_get() and
> xstats_get_names() call with modified parameters.
>
> proc_info: add support for new xstats API retrieving by id to
> proc_info application. There is a new argument --xstats-ids
> in proc_info command line to retrieve statistics given by ids.
> E.g. --xstats-ids="1,3,5,7,8"
>
> Signed-off-by: Jacek Piasecki <jacekx.piasecki@intel.com>
> Signed-off-by: Kuba Kozak <kubax.kozak@intel.com>
> Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
> ---
> app/proc_info/main.c | 148 +++++++++++-
> app/test-pmd/config.c | 19 +-
> lib/librte_ether/Makefile | 2 +-
> lib/librte_ether/rte_ethdev.c | 422 +++++++++++++++++++++++++--------
> lib/librte_ether/rte_ethdev.h | 176 +++++++++++++-
> lib/librte_ether/rte_ether_version.map | 12 +
> 6 files changed, 646 insertions(+), 133 deletions(-)
I gather this patchset contains various changes at once to avoid breaking the compile due to function changes. All 3 patches of the patchset compiled and are working with ixgbe,
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH] doc: update deprecation notice and release_17_05 for ABI change
@ 2017-04-03 10:52 9% akhil.goyal
2017-04-03 15:32 4% ` Mcnamara, John
0 siblings, 1 reply; 200+ results
From: akhil.goyal @ 2017-04-03 10:52 UTC (permalink / raw)
To: dev; +Cc: nhorman, pablo.de.lara.guarch, Akhil Goyal
From: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
doc/guides/rel_notes/deprecation.rst | 5 -----
doc/guides/rel_notes/release_17_05.rst | 3 +++
2 files changed, 3 insertions(+), 5 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index d6544ed..fd09cf2 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -108,11 +108,6 @@ Deprecation Notices
A pointer to a rte_cryptodev_config structure will be added to the
function prototype ``cryptodev_configure_t``, as a new parameter.
-* cryptodev: A new parameter ``max_nb_sessions_per_qp`` will be added to
- ``rte_cryptodev_info.sym``. Some drivers may support limited number of
- sessions per queue_pair. With this new parameter application will know
- how many sessions can be mapped to each queue_pair of a device.
-
* distributor: library API will be changed to incorporate a burst-oriented
API. This will include a change to ``rte_distributor_create``
to specify which type of instance to create (single or burst), and
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index 9cd5a6f..46d4e80 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -144,6 +144,9 @@ ABI Changes
Also, make sure to start the actual text at the margin.
=========================================================
+* The ``rte_cryptodev_info.sym`` structure has new field ``max_nb_sessions_per_qp``
+ to support drivers which may support limited number of sessions per queue_pair.
+
Removed Items
-------------
--
2.9.3
^ permalink raw reply [relevance 9%]
* [dpdk-dev] [PATCH v3 1/3] add new xstats API retrieving by id
2017-04-03 12:09 3% ` [dpdk-dev] [PATCH v3 0/3] Extended xstats API in ethdev library to allow grouping of stats Jacek Piasecki
@ 2017-04-03 12:09 1% ` Jacek Piasecki
2017-04-03 12:37 0% ` Van Haaren, Harry
` (2 more replies)
0 siblings, 3 replies; 200+ results
From: Jacek Piasecki @ 2017-04-03 12:09 UTC (permalink / raw)
To: dev
Cc: harry.van.haaren, Michal Jastrzebski, Jacek Piasecki, Kuba Kozak,
Tomasz Kulasek
From: Michal Jastrzebski <michalx.k.jastrzebski@intel.com>
Extended xstats API in ethdev library to allow grouping of stats
logically so they can be retrieved per logical grouping – managed
by the application.
Changed existing functions rte_eth_xstats_get_names and
rte_eth_xstats_get to use a new list of arguments: array of ids
and array of values. ABI versioning mechanism was used to
support backward compatibility.
Introduced two new functions rte_eth_xstats_get_all and
rte_eth_xstats_get_names_all which keep functionality of the
previous ones (respectively rte_eth_xstats_get and
rte_eth_xstats_get_names) but use new API inside.
Both functions marked as deprecated.
Introduced new function: rte_eth_xstats_get_id_by_name
to retrieve xstats ids by its names.
test-pmd: add support for new xstats API retrieving by id in
testpmd application: xstats_get() and
xstats_get_names() call with modified parameters.
proc_info: add support for new xstats API retrieving by id to
proc_info application. There is a new argument --xstats-ids
in proc_info command line to retrieve statistics given by ids.
E.g. --xstats-ids="1,3,5,7,8"
Signed-off-by: Jacek Piasecki <jacekx.piasecki@intel.com>
Signed-off-by: Kuba Kozak <kubax.kozak@intel.com>
Signed-off-by: Tomasz Kulasek <tomaszx.kulasek@intel.com>
---
app/proc_info/main.c | 148 +++++++++++-
app/test-pmd/config.c | 19 +-
lib/librte_ether/Makefile | 2 +-
lib/librte_ether/rte_ethdev.c | 422 +++++++++++++++++++++++++--------
lib/librte_ether/rte_ethdev.h | 176 +++++++++++++-
lib/librte_ether/rte_ether_version.map | 12 +
6 files changed, 646 insertions(+), 133 deletions(-)
diff --git a/app/proc_info/main.c b/app/proc_info/main.c
index ef2098d..3c9fe4a 100644
--- a/app/proc_info/main.c
+++ b/app/proc_info/main.c
@@ -83,6 +83,14 @@
static uint32_t reset_xstats;
/**< Enable memory info. */
static uint32_t mem_info;
+/**< Enable displaying xstat name. */
+static uint32_t enable_xstats_name;
+static char *xstats_name;
+
+/**< Enable xstats by ids. */
+#define MAX_NB_XSTATS_IDS 1024
+static uint32_t nb_xstats_ids;
+static uint64_t xstats_ids[MAX_NB_XSTATS_IDS];
/**< display usage */
static void
@@ -94,6 +102,9 @@
" --stats: to display port statistics, enabled by default\n"
" --xstats: to display extended port statistics, disabled by "
"default\n"
+ " --xstats-name NAME: to display single xstat value by NAME\n"
+ " --xstats-ids IDLIST: to display xstat values by id. "
+ "The argument is comma-separated list of xstat ids to print out.\n"
" --stats-reset: to reset port statistics\n"
" --xstats-reset: to reset port extended statistics\n"
" --collectd-format: to print statistics to STDOUT in expected by collectd format\n"
@@ -127,6 +138,33 @@
}
+/*
+ * Parse ids value list into array
+ */
+static int
+parse_xstats_ids(char *list, uint64_t *ids, int limit) {
+ int length;
+ char *token;
+ char *ctx = NULL;
+ char *endptr;
+
+ length = 0;
+ token = strtok_r(list, ",", &ctx);
+ while (token != NULL) {
+ ids[length] = strtoull(token, &endptr, 10);
+ if (*endptr != '\0')
+ return -EINVAL;
+
+ length++;
+ if (length >= limit)
+ return -E2BIG;
+
+ token = strtok_r(NULL, ",", &ctx);
+ }
+
+ return length;
+}
+
static int
proc_info_preparse_args(int argc, char **argv)
{
@@ -172,7 +210,9 @@
{"stats-reset", 0, NULL, 0},
{"xstats", 0, NULL, 0},
{"xstats-reset", 0, NULL, 0},
+ {"xstats-name", required_argument, NULL, 1},
{"collectd-format", 0, NULL, 0},
+ {"xstats-ids", 1, NULL, 1},
{"host-id", 0, NULL, 0},
{NULL, 0, 0, 0}
};
@@ -214,7 +254,28 @@
MAX_LONG_OPT_SZ))
reset_xstats = 1;
break;
+ case 1:
+ /* Print single xstat value given by name */
+ if (!strncmp(long_option[option_index].name,
+ "xstats-name", MAX_LONG_OPT_SZ)) {
+ enable_xstats_name = 1;
+ xstats_name = optarg;
+ printf("name:%s:%s\n",
+ long_option[option_index].name,
+ optarg);
+ } else if (!strncmp(long_option[option_index].name,
+ "xstats-ids",
+ MAX_LONG_OPT_SZ)) {
+ nb_xstats_ids = parse_xstats_ids(optarg,
+ xstats_ids, MAX_NB_XSTATS_IDS);
+
+ if (nb_xstats_ids <= 0) {
+ printf("xstats-id list parse error.\n");
+ return -1;
+ }
+ }
+ break;
default:
proc_info_usage(prgname);
return -1;
@@ -341,20 +402,82 @@ static void collectd_resolve_cnt_type(char *cnt_type, size_t cnt_type_len,
}
static void
+nic_xstats_by_name_display(uint8_t port_id, char *name)
+{
+ uint64_t id;
+
+ printf("###### NIC statistics for port %-2d, statistic name '%s':\n",
+ port_id, name);
+
+ if (rte_eth_xstats_get_id_by_name(port_id, name, &id) == 0)
+ printf("%s: %"PRIu64"\n", name, id);
+ else
+ printf("Statistic not found...\n");
+
+}
+
+static void
+nic_xstats_by_ids_display(uint8_t port_id, uint64_t *ids, int len)
+{
+ struct rte_eth_xstat_name *xstats_names;
+ uint64_t *values;
+ int ret, i;
+ static const char *nic_stats_border = "########################";
+
+ values = malloc(sizeof(values) * len);
+ if (values == NULL) {
+ printf("Cannot allocate memory for xstats\n");
+ return;
+ }
+
+ xstats_names = malloc(sizeof(struct rte_eth_xstat_name) * len);
+ if (xstats_names == NULL) {
+ printf("Cannot allocate memory for xstat names\n");
+ free(values);
+ return;
+ }
+
+ if (len != rte_eth_xstats_get_names(
+ port_id, xstats_names, ids, len)) {
+ printf("Cannot get xstat names\n");
+ goto err;
+ }
+
+ printf("###### NIC extended statistics for port %-2d #########\n",
+ port_id);
+ printf("%s############################\n", nic_stats_border);
+ ret = rte_eth_xstats_get(port_id, ids, values, len);
+ if (ret < 0 || ret > len) {
+ printf("Cannot get xstats\n");
+ goto err;
+ }
+
+ for (i = 0; i < len; i++)
+ printf("%s: %"PRIu64"\n",
+ xstats_names[i].name,
+ values[i]);
+
+ printf("%s############################\n", nic_stats_border);
+err:
+ free(values);
+ free(xstats_names);
+}
+
+static void
nic_xstats_display(uint8_t port_id)
{
struct rte_eth_xstat_name *xstats_names;
- struct rte_eth_xstat *xstats;
+ uint64_t *values;
int len, ret, i;
static const char *nic_stats_border = "########################";
- len = rte_eth_xstats_get_names(port_id, NULL, 0);
+ len = rte_eth_xstats_get_names(port_id, NULL, NULL, 0);
if (len < 0) {
printf("Cannot get xstats count\n");
return;
}
- xstats = malloc(sizeof(xstats[0]) * len);
- if (xstats == NULL) {
+ values = malloc(sizeof(values) * len);
+ if (values == NULL) {
printf("Cannot allocate memory for xstats\n");
return;
}
@@ -362,11 +485,11 @@ static void collectd_resolve_cnt_type(char *cnt_type, size_t cnt_type_len,
xstats_names = malloc(sizeof(struct rte_eth_xstat_name) * len);
if (xstats_names == NULL) {
printf("Cannot allocate memory for xstat names\n");
- free(xstats);
+ free(values);
return;
}
if (len != rte_eth_xstats_get_names(
- port_id, xstats_names, len)) {
+ port_id, xstats_names, NULL, len)) {
printf("Cannot get xstat names\n");
goto err;
}
@@ -375,7 +498,7 @@ static void collectd_resolve_cnt_type(char *cnt_type, size_t cnt_type_len,
port_id);
printf("%s############################\n",
nic_stats_border);
- ret = rte_eth_xstats_get(port_id, xstats, len);
+ ret = rte_eth_xstats_get(port_id, NULL, values, len);
if (ret < 0 || ret > len) {
printf("Cannot get xstats\n");
goto err;
@@ -391,18 +514,18 @@ static void collectd_resolve_cnt_type(char *cnt_type, size_t cnt_type_len,
xstats_names[i].name);
sprintf(buf, "PUTVAL %s/dpdkstat-port.%u/%s-%s N:%"
PRIu64"\n", host_id, port_id, counter_type,
- xstats_names[i].name, xstats[i].value);
+ xstats_names[i].name, values[i]);
write(stdout_fd, buf, strlen(buf));
} else {
printf("%s: %"PRIu64"\n", xstats_names[i].name,
- xstats[i].value);
+ values[i]);
}
}
printf("%s############################\n",
nic_stats_border);
err:
- free(xstats);
+ free(values);
free(xstats_names);
}
@@ -480,6 +603,11 @@ static void collectd_resolve_cnt_type(char *cnt_type, size_t cnt_type_len,
nic_stats_clear(i);
else if (reset_xstats)
nic_xstats_clear(i);
+ else if (enable_xstats_name)
+ nic_xstats_by_name_display(i, xstats_name);
+ else if (nb_xstats_ids > 0)
+ nic_xstats_by_ids_display(i, xstats_ids,
+ nb_xstats_ids);
}
}
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 80491fc..c93e04a 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -264,9 +264,9 @@ struct rss_type_info {
void
nic_xstats_display(portid_t port_id)
{
- struct rte_eth_xstat *xstats;
int cnt_xstats, idx_xstat;
struct rte_eth_xstat_name *xstats_names;
+ uint64_t *values;
printf("###### NIC extended statistics for port %-2d\n", port_id);
if (!rte_eth_dev_is_valid_port(port_id)) {
@@ -275,7 +275,7 @@ struct rss_type_info {
}
/* Get count */
- cnt_xstats = rte_eth_xstats_get_names(port_id, NULL, 0);
+ cnt_xstats = rte_eth_xstats_get_names(port_id, NULL, NULL, 0);
if (cnt_xstats < 0) {
printf("Error: Cannot get count of xstats\n");
return;
@@ -288,23 +288,24 @@ struct rss_type_info {
return;
}
if (cnt_xstats != rte_eth_xstats_get_names(
- port_id, xstats_names, cnt_xstats)) {
+ port_id, xstats_names, NULL, cnt_xstats)) {
printf("Error: Cannot get xstats lookup\n");
free(xstats_names);
return;
}
/* Get stats themselves */
- xstats = malloc(sizeof(struct rte_eth_xstat) * cnt_xstats);
- if (xstats == NULL) {
+ values = malloc(sizeof(values) * cnt_xstats);
+ if (values == NULL) {
printf("Cannot allocate memory for xstats\n");
free(xstats_names);
return;
}
- if (cnt_xstats != rte_eth_xstats_get(port_id, xstats, cnt_xstats)) {
+ if (cnt_xstats != rte_eth_xstats_get(port_id, NULL, values,
+ cnt_xstats)) {
printf("Error: Unable to get xstats\n");
free(xstats_names);
- free(xstats);
+ free(values);
return;
}
@@ -312,9 +313,9 @@ struct rss_type_info {
for (idx_xstat = 0; idx_xstat < cnt_xstats; idx_xstat++)
printf("%s: %"PRIu64"\n",
xstats_names[idx_xstat].name,
- xstats[idx_xstat].value);
+ values[idx_xstat]);
free(xstats_names);
- free(xstats);
+ free(values);
}
void
diff --git a/lib/librte_ether/Makefile b/lib/librte_ether/Makefile
index 066114b..5bbf721 100644
--- a/lib/librte_ether/Makefile
+++ b/lib/librte_ether/Makefile
@@ -41,7 +41,7 @@ CFLAGS += $(WERROR_FLAGS)
EXPORT_MAP := rte_ether_version.map
-LIBABIVER := 6
+LIBABIVER := 7
SRCS-y += rte_ethdev.c
SRCS-y += rte_flow.c
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index b796e7d..91524c4 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1449,7 +1449,8 @@ struct rte_eth_dev *
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
dev = &rte_eth_devices[port_id];
if (dev->dev_ops->xstats_get_names != NULL) {
- count = (*dev->dev_ops->xstats_get_names)(dev, NULL, 0);
+ count = (*dev->dev_ops->xstats_get_names_by_ids)(dev, NULL,
+ NULL, 0);
if (count < 0)
return count;
} else
@@ -1463,150 +1464,363 @@ struct rte_eth_dev *
}
int
-rte_eth_xstats_get_names(uint8_t port_id,
- struct rte_eth_xstat_name *xstats_names,
- unsigned size)
+rte_eth_xstats_get_id_by_name(uint8_t port_id, const char *xstat_name,
+ uint64_t *id)
{
- struct rte_eth_dev *dev;
- int cnt_used_entries;
- int cnt_expected_entries;
- int cnt_driver_entries;
- uint32_t idx, id_queue;
- uint16_t num_q;
+ int cnt_xstats, idx_xstat;
- cnt_expected_entries = get_xstats_count(port_id);
- if (xstats_names == NULL || cnt_expected_entries < 0 ||
- (int)size < cnt_expected_entries)
- return cnt_expected_entries;
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
- /* port_id checked in get_xstats_count() */
- dev = &rte_eth_devices[port_id];
- cnt_used_entries = 0;
+ if (!id) {
+ RTE_PMD_DEBUG_TRACE("Error: id pointer is NULL\n");
+ return -1;
+ }
- for (idx = 0; idx < RTE_NB_STATS; idx++) {
- snprintf(xstats_names[cnt_used_entries].name,
- sizeof(xstats_names[0].name),
- "%s", rte_stats_strings[idx].name);
- cnt_used_entries++;
+ if (!xstat_name) {
+ RTE_PMD_DEBUG_TRACE("Error: xstat_name pointer is NULL\n");
+ return -1;
+ }
+
+ /* Get count */
+ cnt_xstats = rte_eth_xstats_get_names(port_id, NULL, NULL, 0);
+ if (cnt_xstats < 0) {
+ RTE_PMD_DEBUG_TRACE("Error: Cannot get count of xstats\n");
+ return -1;
}
- num_q = RTE_MIN(dev->data->nb_rx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
- for (id_queue = 0; id_queue < num_q; id_queue++) {
- for (idx = 0; idx < RTE_NB_RXQ_STATS; idx++) {
+
+ /* Get id-name lookup table */
+ struct rte_eth_xstat_name xstats_names[cnt_xstats];
+
+ if (cnt_xstats != rte_eth_xstats_get_names(
+ port_id, xstats_names, NULL, cnt_xstats)) {
+ RTE_PMD_DEBUG_TRACE("Error: Cannot get xstats lookup\n");
+ return -1;
+ }
+
+ for (idx_xstat = 0; idx_xstat < cnt_xstats; idx_xstat++) {
+ if (!strcmp(xstats_names[idx_xstat].name, xstat_name)) {
+ *id = idx_xstat;
+ return 0;
+ };
+ }
+
+ return -EINVAL;
+}
+
+int
+rte_eth_xstats_get_names_v1705(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, uint64_t *ids,
+ unsigned int size)
+{
+ /* Get all xstats */
+ if (!ids) {
+ struct rte_eth_dev *dev;
+ int cnt_used_entries;
+ int cnt_expected_entries;
+ int cnt_driver_entries;
+ uint32_t idx, id_queue;
+ uint16_t num_q;
+
+ cnt_expected_entries = get_xstats_count(port_id);
+ if (xstats_names == NULL || cnt_expected_entries < 0 ||
+ (int)size < cnt_expected_entries)
+ return cnt_expected_entries;
+
+ /* port_id checked in get_xstats_count() */
+ dev = &rte_eth_devices[port_id];
+ cnt_used_entries = 0;
+
+ for (idx = 0; idx < RTE_NB_STATS; idx++) {
snprintf(xstats_names[cnt_used_entries].name,
sizeof(xstats_names[0].name),
- "rx_q%u%s",
- id_queue, rte_rxq_stats_strings[idx].name);
+ "%s", rte_stats_strings[idx].name);
cnt_used_entries++;
}
+ num_q = RTE_MIN(dev->data->nb_rx_queues,
+ RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ for (id_queue = 0; id_queue < num_q; id_queue++) {
+ for (idx = 0; idx < RTE_NB_RXQ_STATS; idx++) {
+ snprintf(xstats_names[cnt_used_entries].name,
+ sizeof(xstats_names[0].name),
+ "rx_q%u%s",
+ id_queue,
+ rte_rxq_stats_strings[idx].name);
+ cnt_used_entries++;
+ }
+
+ }
+ num_q = RTE_MIN(dev->data->nb_tx_queues,
+ RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ for (id_queue = 0; id_queue < num_q; id_queue++) {
+ for (idx = 0; idx < RTE_NB_TXQ_STATS; idx++) {
+ snprintf(xstats_names[cnt_used_entries].name,
+ sizeof(xstats_names[0].name),
+ "tx_q%u%s",
+ id_queue,
+ rte_txq_stats_strings[idx].name);
+ cnt_used_entries++;
+ }
+ }
+ if (dev->dev_ops->xstats_get_names_by_ids != NULL) {
+ /* If there are any driver-specific xstats, append them
+ * to end of list.
+ */
+ cnt_driver_entries =
+ (*dev->dev_ops->xstats_get_names_by_ids)(
+ dev,
+ xstats_names + cnt_used_entries,
+ NULL,
+ size - cnt_used_entries);
+ if (cnt_driver_entries < 0)
+ return cnt_driver_entries;
+ cnt_used_entries += cnt_driver_entries;
+
+ } else if (dev->dev_ops->xstats_get_names != NULL) {
+ /* If there are any driver-specific xstats, append them
+ * to end of list.
+ */
+ cnt_driver_entries = (*dev->dev_ops->xstats_get_names)(
+ dev,
+ xstats_names + cnt_used_entries,
+ size - cnt_used_entries);
+ if (cnt_driver_entries < 0)
+ return cnt_driver_entries;
+ cnt_used_entries += cnt_driver_entries;
+ }
+
+ return cnt_used_entries;
}
- num_q = RTE_MIN(dev->data->nb_tx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
- for (id_queue = 0; id_queue < num_q; id_queue++) {
- for (idx = 0; idx < RTE_NB_TXQ_STATS; idx++) {
- snprintf(xstats_names[cnt_used_entries].name,
- sizeof(xstats_names[0].name),
- "tx_q%u%s",
- id_queue, rte_txq_stats_strings[idx].name);
- cnt_used_entries++;
+ /* Get only xstats given by IDS */
+ else {
+ uint16_t len, i;
+ struct rte_eth_xstat_name *xstats_names_copy;
+
+ len = rte_eth_xstats_get_names_v1705(port_id, NULL, NULL, 0);
+
+ xstats_names_copy =
+ malloc(sizeof(struct rte_eth_xstat_name) * len);
+ if (!xstats_names_copy) {
+ RTE_PMD_DEBUG_TRACE(
+ "ERROR: can't allocate memory for values_copy\n");
+ free(xstats_names_copy);
+ return -1;
+ }
+
+ rte_eth_xstats_get_names_v1705(port_id, xstats_names_copy,
+ NULL, len);
+
+ for (i = 0; i < size; i++) {
+ if (ids[i] >= len) {
+ RTE_PMD_DEBUG_TRACE(
+ "ERROR: id value isn't valid\n");
+ return -1;
+ }
+ strcpy(xstats_names[i].name,
+ xstats_names_copy[ids[i]].name);
}
+ free(xstats_names_copy);
+ return size;
}
+}
+MAP_STATIC_SYMBOL(int
+ rte_eth_xstats_get_names(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names,
+ uint64_t *ids,
+ unsigned int size), rte_eth_xstats_get_names_v1705);
- if (dev->dev_ops->xstats_get_names != NULL) {
- /* If there are any driver-specific xstats, append them
- * to end of list.
- */
- cnt_driver_entries = (*dev->dev_ops->xstats_get_names)(
- dev,
- xstats_names + cnt_used_entries,
- size - cnt_used_entries);
- if (cnt_driver_entries < 0)
- return cnt_driver_entries;
- cnt_used_entries += cnt_driver_entries;
+int
+rte_eth_xstats_get_names_v1702(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names,
+ unsigned int size)
+{
+ return rte_eth_xstats_get_names(port_id, xstats_names, NULL, size);
+}
+VERSION_SYMBOL(rte_eth_xstats_get_names, _v1702, 17.02);
+
+/* retrieve ethdev extended statistics */
+int
+rte_eth_xstats_get_v1702(uint8_t port_id, struct rte_eth_xstat *xstats,
+ unsigned int n)
+{
+ uint64_t *values_copy;
+ uint16_t size, i;
+
+ values_copy = malloc(sizeof(values_copy) * n);
+ if (!values_copy) {
+ RTE_PMD_DEBUG_TRACE(
+ "ERROR: Cannot allocate memory for xstats\n");
+ return -1;
}
+ size = rte_eth_xstats_get(port_id, 0, values_copy, n);
- return cnt_used_entries;
+ for (i = 0; i < n; i++) {
+ xstats[i].id = i;
+ xstats[i].value = values_copy[i];
+ }
+ free(values_copy);
+ return size;
}
+VERSION_SYMBOL(rte_eth_xstats_get, _v1702, 17.02);
/* retrieve ethdev extended statistics */
int
-rte_eth_xstats_get(uint8_t port_id, struct rte_eth_xstat *xstats,
- unsigned n)
+rte_eth_xstats_get_v1705(uint8_t port_id, uint64_t *ids, uint64_t *values,
+ unsigned int n)
{
- struct rte_eth_stats eth_stats;
- struct rte_eth_dev *dev;
- unsigned count = 0, i, q;
- signed xcount = 0;
- uint64_t val, *stats_ptr;
- uint16_t nb_rxqs, nb_txqs;
+ /* If need all xstats */
+ if (!ids) {
+ struct rte_eth_stats eth_stats;
+ struct rte_eth_dev *dev;
+ unsigned int count = 0, i, q;
+ signed int xcount = 0;
+ uint64_t val, *stats_ptr;
+ uint16_t nb_rxqs, nb_txqs;
- RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
- dev = &rte_eth_devices[port_id];
+ nb_rxqs = RTE_MIN(dev->data->nb_rx_queues,
+ RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ nb_txqs = RTE_MIN(dev->data->nb_tx_queues,
+ RTE_ETHDEV_QUEUE_STAT_CNTRS);
- nb_rxqs = RTE_MIN(dev->data->nb_rx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
- nb_txqs = RTE_MIN(dev->data->nb_tx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS);
+ /* Return generic statistics */
+ count = RTE_NB_STATS + (nb_rxqs * RTE_NB_RXQ_STATS) +
+ (nb_txqs * RTE_NB_TXQ_STATS);
- /* Return generic statistics */
- count = RTE_NB_STATS + (nb_rxqs * RTE_NB_RXQ_STATS) +
- (nb_txqs * RTE_NB_TXQ_STATS);
- /* implemented by the driver */
- if (dev->dev_ops->xstats_get != NULL) {
- /* Retrieve the xstats from the driver at the end of the
- * xstats struct.
- */
- xcount = (*dev->dev_ops->xstats_get)(dev,
- xstats ? xstats + count : NULL,
- (n > count) ? n - count : 0);
+ /* implemented by the driver */
+ if (dev->dev_ops->xstats_get_by_ids != NULL) {
+ /* Retrieve the xstats from the driver at the end of the
+ * xstats struct. Retrieve all xstats.
+ */
+ xcount = (*dev->dev_ops->xstats_get_by_ids)(dev,
+ NULL,
+ values ? values + count : NULL,
+ (n > count) ? n - count : 0);
+
+ if (xcount < 0)
+ return xcount;
+ /* implemented by the driver */
+ } else if (dev->dev_ops->xstats_get != NULL) {
+ /* Retrieve the xstats from the driver at the end of the
+ * xstats struct. Retrieve all xstats.
+ * Compatibility for PMD without xstats_get_by_ids
+ */
+ unsigned int size = (n > count) ? n - count : 1;
+ struct rte_eth_xstat xstats[size];
- if (xcount < 0)
- return xcount;
- }
+ xcount = (*dev->dev_ops->xstats_get)(dev,
+ values ? xstats : NULL, size);
- if (n < count + xcount || xstats == NULL)
- return count + xcount;
+ if (xcount < 0)
+ return xcount;
- /* now fill the xstats structure */
- count = 0;
- rte_eth_stats_get(port_id, &eth_stats);
+ if (values != NULL)
+ for (i = 0 ; i < (unsigned int)xcount; i++)
+ values[i + count] = xstats[i].value;
+ }
- /* global stats */
- for (i = 0; i < RTE_NB_STATS; i++) {
- stats_ptr = RTE_PTR_ADD(&eth_stats,
- rte_stats_strings[i].offset);
- val = *stats_ptr;
- xstats[count++].value = val;
- }
+ if (n < count + xcount || values == NULL)
+ return count + xcount;
- /* per-rxq stats */
- for (q = 0; q < nb_rxqs; q++) {
- for (i = 0; i < RTE_NB_RXQ_STATS; i++) {
+ /* now fill the xstats structure */
+ count = 0;
+ rte_eth_stats_get(port_id, &eth_stats);
+
+ /* global stats */
+ for (i = 0; i < RTE_NB_STATS; i++) {
stats_ptr = RTE_PTR_ADD(&eth_stats,
- rte_rxq_stats_strings[i].offset +
- q * sizeof(uint64_t));
+ rte_stats_strings[i].offset);
val = *stats_ptr;
- xstats[count++].value = val;
+ values[count++] = val;
}
+
+ /* per-rxq stats */
+ for (q = 0; q < nb_rxqs; q++) {
+ for (i = 0; i < RTE_NB_RXQ_STATS; i++) {
+ stats_ptr = RTE_PTR_ADD(&eth_stats,
+ rte_rxq_stats_strings[i].offset +
+ q * sizeof(uint64_t));
+ val = *stats_ptr;
+ values[count++] = val;
+ }
+ }
+
+ /* per-txq stats */
+ for (q = 0; q < nb_txqs; q++) {
+ for (i = 0; i < RTE_NB_TXQ_STATS; i++) {
+ stats_ptr = RTE_PTR_ADD(&eth_stats,
+ rte_txq_stats_strings[i].offset +
+ q * sizeof(uint64_t));
+ val = *stats_ptr;
+ values[count++] = val;
+ }
+ }
+
+ return count + xcount;
}
+ /* Need only xstats given by IDS array */
+ else {
+ uint16_t i, size;
+ uint64_t *values_copy;
- /* per-txq stats */
- for (q = 0; q < nb_txqs; q++) {
- for (i = 0; i < RTE_NB_TXQ_STATS; i++) {
- stats_ptr = RTE_PTR_ADD(&eth_stats,
- rte_txq_stats_strings[i].offset +
- q * sizeof(uint64_t));
- val = *stats_ptr;
- xstats[count++].value = val;
+ size = rte_eth_xstats_get_v1705(port_id, NULL, NULL, 0);
+
+ values_copy = malloc(sizeof(values_copy) * size);
+ if (!values_copy) {
+ RTE_PMD_DEBUG_TRACE(
+ "ERROR: can't allocate memory for values_copy\n");
+ return -1;
+ }
+
+ rte_eth_xstats_get_v1705(port_id, NULL, values_copy, size);
+
+ for (i = 0; i < n; i++) {
+ if (ids[i] >= size) {
+ RTE_PMD_DEBUG_TRACE(
+ "ERROR: id value isn't valid\n");
+ return -1;
+ }
+ values[i] = values_copy[ids[i]];
}
+ free(values_copy);
+ return n;
}
+}
+MAP_STATIC_SYMBOL(int
+ rte_eth_xstats_get(uint8_t port_id, uint64_t *ids,
+ uint64_t *values, unsigned int n), rte_eth_xstats_get_v1705);
+
+__rte_deprecated int
+rte_eth_xstats_get_all(uint8_t port_id, struct rte_eth_xstat *xstats,
+ unsigned int n)
+{
+ uint64_t *values_copy;
+ uint16_t size, i;
- for (i = 0; i < count; i++)
+ values_copy = malloc(sizeof(values_copy) * n);
+ if (!values_copy) {
+ RTE_PMD_DEBUG_TRACE(
+ "ERROR: Cannot allocate memory for xstats\n");
+ return -1;
+ }
+ size = rte_eth_xstats_get(port_id, 0, values_copy, n);
+
+ for (i = 0; i < n; i++) {
xstats[i].id = i;
- /* add an offset to driver-specific stats */
- for ( ; i < count + xcount; i++)
- xstats[i].id += count;
+ xstats[i].value = values_copy[i];
+ }
+ free(values_copy);
+ return size;
+}
- return count + xcount;
+__rte_deprecated int
+rte_eth_xstats_get_names_all(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, unsigned int n)
+{
+ return rte_eth_xstats_get_names(port_id, xstats_names, NULL, n);
}
/* reset ethdev extended statistics */
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index b3ee872..91c9af6 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -186,6 +186,7 @@
#include "rte_ether.h"
#include "rte_eth_ctrl.h"
#include "rte_dev_info.h"
+#include "rte_compat.h"
struct rte_mbuf;
@@ -1118,6 +1119,10 @@ typedef int (*eth_xstats_get_t)(struct rte_eth_dev *dev,
struct rte_eth_xstat *stats, unsigned n);
/**< @internal Get extended stats of an Ethernet device. */
+typedef int (*eth_xstats_get_by_ids_t)(struct rte_eth_dev *dev,
+ uint64_t *ids, uint64_t *values, unsigned int n);
+/**< @internal Get extended stats of an Ethernet device. */
+
typedef void (*eth_xstats_reset_t)(struct rte_eth_dev *dev);
/**< @internal Reset extended stats of an Ethernet device. */
@@ -1125,6 +1130,17 @@ typedef int (*eth_xstats_get_names_t)(struct rte_eth_dev *dev,
struct rte_eth_xstat_name *xstats_names, unsigned size);
/**< @internal Get names of extended stats of an Ethernet device. */
+typedef int (*eth_xstats_get_names_by_ids_t)(struct rte_eth_dev *dev,
+ struct rte_eth_xstat_name *xstats_names, uint64_t *ids,
+ unsigned int size);
+/**< @internal Get names of extended stats of an Ethernet device. */
+
+typedef int (*eth_xstats_get_by_name_t)(struct rte_eth_dev *dev,
+ struct rte_eth_xstat_name *xstats_names,
+ struct rte_eth_xstat *xstat,
+ const char *name);
+/**< @internal Get xstat specified by name of an Ethernet device. */
+
typedef int (*eth_queue_stats_mapping_set_t)(struct rte_eth_dev *dev,
uint16_t queue_id,
uint8_t stat_idx,
@@ -1459,9 +1475,15 @@ struct eth_dev_ops {
eth_stats_get_t stats_get; /**< Get generic device statistics. */
eth_stats_reset_t stats_reset; /**< Reset generic device statistics. */
eth_xstats_get_t xstats_get; /**< Get extended device statistics. */
+ eth_xstats_get_by_ids_t xstats_get_by_ids;
+ /**< Get extended device statistics by ID. */
eth_xstats_reset_t xstats_reset; /**< Reset extended device statistics. */
- eth_xstats_get_names_t xstats_get_names;
- /**< Get names of extended statistics. */
+ eth_xstats_get_names_t xstats_get_names;
+ /**< Get names of extended device statistics. */
+ eth_xstats_get_names_by_ids_t xstats_get_names_by_ids;
+ /**< Get name of extended device statistics by ID. */
+ eth_xstats_get_by_name_t xstats_get_by_name;
+ /**< Get extended device statistics by name. */
eth_queue_stats_mapping_set_t queue_stats_mapping_set;
/**< Configure per queue stat counter mapping. */
@@ -2287,8 +2309,57 @@ int rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
*/
void rte_eth_stats_reset(uint8_t port_id);
+
/**
- * Retrieve names of extended statistics of an Ethernet device.
+* Gets the ID of a statistic from its name.
+*
+* Note this function searches for the statistics using string compares, and
+* as such should not be used on the fast-path. For fast-path retrieval of
+* specific statistics, store the ID as provided in *id* from this function,
+* and pass the ID to rte_eth_xstats_get()
+*
+* @param port_id The port to look up statistics from
+* @param xstat_name The name of the statistic to return
+* @param[out] id A pointer to an app-supplied uint64_t which should be
+* set to the ID of the stat if the stat exists.
+* @return
+* 0 on success
+* -ENODEV for invalid port_id,
+* -EINVAL if the xstat_name doesn't exist in port_id
+*/
+int rte_eth_xstats_get_id_by_name(uint8_t port_id, const char *xstat_name,
+ uint64_t *id);
+
+/**
+ * Retrieve all extended statistics of an Ethernet device.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param xstats
+ * A pointer to a table of structure of type *rte_eth_xstat*
+ * to be filled with device statistics ids and values: id is the
+ * index of the name string in xstats_names (see rte_eth_xstats_get_names()),
+ * and value is the statistic counter.
+ * This parameter can be set to NULL if n is 0.
+ * @param n
+ * The size of the xstats array (number of elements).
+ * @return
+ * - A positive value lower or equal to n: success. The return value
+ * is the number of entries filled in the stats table.
+ * - A positive value higher than n: error, the given statistics table
+ * is too small. The return value corresponds to the size that should
+ * be given to succeed. The entries in the table are not valid and
+ * shall not be used by the caller.
+ * - A negative value on error (invalid port id).
+ */
+__rte_deprecated
+int rte_eth_xstats_get_all(uint8_t port_id, struct rte_eth_xstat *xstats,
+ unsigned int n);
+BIND_DEFAULT_SYMBOL(rte_eth_xstats_get_all, _v1705, 17.05);
+
+
+/**
+ * Retrieve names of all extended statistics of an Ethernet device.
*
* @param port_id
* The port identifier of the Ethernet device.
@@ -2296,7 +2367,7 @@ int rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
* An rte_eth_xstat_name array of at least *size* elements to
* be filled. If set to NULL, the function returns the required number
* of elements.
- * @param size
+ * @param n
* The size of the xstats_names array (number of elements).
* @return
* - A positive value lower or equal to size: success. The return value
@@ -2307,9 +2378,10 @@ int rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
* shall not be used by the caller.
* - A negative value on error (invalid port id).
*/
-int rte_eth_xstats_get_names(uint8_t port_id,
- struct rte_eth_xstat_name *xstats_names,
- unsigned size);
+__rte_deprecated int rte_eth_xstats_get_names_all(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, unsigned int n);
+BIND_DEFAULT_SYMBOL(rte_eth_xstats_get_names_all, _v1705, 17.05);
+
/**
* Retrieve extended statistics of an Ethernet device.
@@ -2333,8 +2405,94 @@ int rte_eth_xstats_get_names(uint8_t port_id,
* shall not be used by the caller.
* - A negative value on error (invalid port id).
*/
-int rte_eth_xstats_get(uint8_t port_id, struct rte_eth_xstat *xstats,
- unsigned n);
+int rte_eth_xstats_get_v1702(uint8_t port_id, struct rte_eth_xstat *xstats,
+ unsigned int n);
+
+/**
+ * Retrieve extended statistics of an Ethernet device.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param ids
+ * A pointer to an ids array passed by application. This tells which
+ * statistics values the function should retrieve. This parameter
+ * can be set to NULL if n is 0. In this case the function will retrieve
+ * all available statistics.
+ * @param values
+ * A pointer to a table to be filled with device statistics values.
+ * @param n
+ * The size of the ids array (number of elements).
+ * @return
+ * - A positive value lower or equal to n: success. The return value
+ * is the number of entries filled in the stats table.
+ * - A positive value higher than n: error, the given statistics table
+ * is too small. The return value corresponds to the size that should
+ * be given to succeed. The entries in the table are not valid and
+ * shall not be used by the caller.
+ * - A negative value on error (invalid port id).
+ */
+int rte_eth_xstats_get_v1705(uint8_t port_id, uint64_t *ids, uint64_t *values,
+ unsigned int n);
+
+int rte_eth_xstats_get(uint8_t port_id, uint64_t *ids, uint64_t *values,
+ unsigned int n);
+BIND_DEFAULT_SYMBOL(rte_eth_xstats_get, _v1705, 17.05);
+
+/**
+ * Retrieve extended statistics of an Ethernet device.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param xstats_names
+ * A pointer to a table of structure of type *rte_eth_xstat*
+ * to be filled with device statistics ids and values: id is the
+ * index of the name string in xstats_names (see rte_eth_xstats_get_names()),
+ * and value is the statistic counter.
+ * This parameter can be set to NULL if n is 0.
+ * @param n
+ * The size of the xstats array (number of elements).
+ * @return
+ * - A positive value lower or equal to n: success. The return value
+ * is the number of entries filled in the stats table.
+ * - A positive value higher than n: error, the given statistics table
+ * is too small. The return value corresponds to the size that should
+ * be given to succeed. The entries in the table are not valid and
+ * shall not be used by the caller.
+ * - A negative value on error (invalid port id).
+ */
+int rte_eth_xstats_get_names_v1702(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, unsigned int n);
+
+/**
+ * Retrieve names of extended statistics of an Ethernet device.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param xstats_names
+ * An rte_eth_xstat_name array of at least *size* elements to
+ * be filled. If set to NULL, the function returns the required number
+ * of elements.
+ * @param ids
+ * IDs array given by app to retrieve specific statistics
+ * @param size
+ * The size of the xstats_names array (number of elements).
+ * @return
+ * - A positive value lower or equal to size: success. The return value
+ * is the number of entries filled in the stats table.
+ * - A positive value higher than size: error, the given statistics table
+ * is too small. The return value corresponds to the size that should
+ * be given to succeed. The entries in the table are not valid and
+ * shall not be used by the caller.
+ * - A negative value on error (invalid port id).
+ */
+int rte_eth_xstats_get_names_v1705(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, uint64_t *ids,
+ unsigned int size);
+
+int rte_eth_xstats_get_names(uint8_t port_id,
+ struct rte_eth_xstat_name *xstats_names, uint64_t *ids,
+ unsigned int size);
+BIND_DEFAULT_SYMBOL(rte_eth_xstats_get_names, _v1705, 17.05);
/**
* Reset extended statistics of an Ethernet device.
diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map
index c6c9d0d..8a1a330 100644
--- a/lib/librte_ether/rte_ether_version.map
+++ b/lib/librte_ether/rte_ether_version.map
@@ -154,3 +154,15 @@ DPDK_17.02 {
rte_flow_validate;
} DPDK_16.11;
+
+DPDK_17.05 {
+ global:
+
+ rte_eth_xstats_get_names;
+ rte_eth_xstats_get;
+ rte_eth_xstats_get_all;
+ rte_eth_xstats_get_names_all;
+ rte_eth_xstats_get_id_by_name;
+
+} DPDK_17.02;
+
--
1.9.1
^ permalink raw reply [relevance 1%]
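The _v1702/_v1705 function pairs and the VERSION_SYMBOL/BIND_DEFAULT_SYMBOL/MAP_STATIC_SYMBOL annotations in the patch above implement ELF symbol versioning through rte_compat.h: binaries already linked against DPDK_17.02 keep resolving the old implementation, while newly compiled applications bind to the 17.05 one by default (MAP_STATIC_SYMBOL covers static builds, where only one implementation exists). A stripped-down sketch of the pattern (foo is a placeholder, not a real DPDK symbol):

#include <rte_compat.h>

int foo_v1702(int a);
int foo_v1705(int a, int b);

/* Old ABI, kept alive for binaries linked against DPDK_17.02 */
int foo_v1702(int a) { return foo_v1705(a, 0); }
VERSION_SYMBOL(foo, _v1702, 17.02);

/* New ABI, the default for newly compiled applications */
int foo_v1705(int a, int b) { return a + b; }
BIND_DEFAULT_SYMBOL(foo, _v1705, 17.05);

The base symbol name must also be listed in the library's version map under the matching DPDK_17.02 and DPDK_17.05 nodes, as the rte_ether_version.map hunk above does for the reworked xstats functions.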
* [dpdk-dev] [PATCH v3 0/3] Extended xstats API in ethdev library to allow grouping of stats
@ 2017-04-03 12:09 3% ` Jacek Piasecki
2017-04-03 12:09 1% ` [dpdk-dev] [PATCH v3 1/3] add new xstats API retrieving by id Jacek Piasecki
0 siblings, 1 reply; 200+ results
From: Jacek Piasecki @ 2017-04-03 12:09 UTC (permalink / raw)
To: dev; +Cc: harry.van.haaren, Jacek Piasecki
Extended xstats API in ethdev library to allow grouping of stats logically
so they can be retrieved per logical grouping managed by the application.
Changed existing functions rte_eth_xstats_get_names and rte_eth_xstats_get
to use a new list of arguments: array of ids and array of values.
ABI versioning mechanism was used to support backward compatibility.
Introduced two new functions rte_eth_xstats_get_all and
rte_eth_xstats_get_names_all which keep functionality of the previous
ones (respectively rte_eth_xstats_get and rte_eth_xstats_get_names)
but use new API inside. Both functions marked as deprecated.
Introduced new function: rte_eth_xstats_get_id_by_name to retrieve
xstats ids by its names.
Extended functionality of proc_info application:
--xstats-name NAME: to display single xstat value by NAME
Updated test-pmd application to use new API.
v3 changes:
checkpatch fixes
removed malloc bug in ethdev
add new command to proc_info and IDs parsing
merged testpmd and proc_info patch with library patch
Jacek Piasecki (3):
add new xstats API retrieving by id
add new xstats API id support for e1000
add new xstats API id support for ixgbe
app/proc_info/main.c | 148 +++++++++++-
app/test-pmd/config.c | 19 +-
drivers/net/e1000/igb_ethdev.c | 92 ++++++-
drivers/net/ixgbe/ixgbe_ethdev.c | 179 ++++++++++++++
lib/librte_ether/Makefile | 2 +-
lib/librte_ether/rte_ethdev.c | 422 +++++++++++++++++++++++++--------
lib/librte_ether/rte_ethdev.h | 176 +++++++++++++-
lib/librte_ether/rte_ether_version.map | 12 +
8 files changed, 915 insertions(+), 135 deletions(-)
--
1.9.1
^ permalink raw reply [relevance 3%]
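The cover letter's rte_eth_xstats_get_id_by_name enables single-statistic
lookups such as proc_info's --xstats-name. A sketch of that flow; the
(port_id, name, &id) signature and the companion rte_eth_xstats_get() taking
an ids/values pair are assumptions based on the description above:

	#include <stdio.h>
	#include <inttypes.h>
	#include <rte_ethdev.h>

	static void
	print_one_xstat(uint8_t port_id, const char *name)
	{
		uint64_t id, value;

		/* resolve the statistic's id from its name (assumed signature) */
		if (rte_eth_xstats_get_id_by_name(port_id, name, &id) != 0)
			return;

		/* fetch exactly one value, selected by its id (assumed signature) */
		if (rte_eth_xstats_get(port_id, &id, &value, 1) == 1)
			printf("%s: %" PRIu64 "\n", name, value);
	}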
* Re: [dpdk-dev] [PATCH v4 00/22] vhost: generic vhost API
` (6 preceding siblings ...)
2017-04-01 7:22 5% ` [dpdk-dev] [PATCH v4 19/22] vhost: rename header file Yuanhan Liu
@ 2017-04-01 8:44 0% ` Yuanhan Liu
7 siblings, 0 replies; 200+ results
From: Yuanhan Liu @ 2017-04-01 8:44 UTC (permalink / raw)
To: dev; +Cc: Maxime Coquelin, Harris James R, Liu Changpeng
On Sat, Apr 01, 2017 at 03:22:38PM +0800, Yuanhan Liu wrote:
> This patchset makes DPDK vhost library generic enough, so that we could
> build other vhost-user drivers on top of it. For example, SPDK (Storage
> Performance Development Kit) is trying to enable vhost-user SCSI.
>
> The basic idea is, let DPDK vhost be a vhost-user agent. It stores all
> the info about the virtio device (e.g. vring address, negotiated features,
> etc.) and lets the specific vhost-user driver fetch it (via the API
> provided by the DPDK vhost lib). With that info provided, the vhost-user
> driver then could get/put vring entries, thus, it could exchange data
> between the guest and host.
>
> The last patch demonstrates how to use these new APIs to implement a
> very simple vhost-user net driver, without any fancy features enabled.
Series applied to dpdk-next-virtio.
--yliu
>
>
> Change log
> ==========
>
> v2: - rebase
> - updated release note
> - updated API comments
> - renamed rte_vhost_get_vhost_memory to rte_vhost_get_mem_table
>
> - added a new device callback: features_changed(), basically for live
> migration support
> - introduced rte_vhost_driver_start() to start a specific driver
> - misc fixes
>
> v3: - rebase on top of vhost-user socket fix
> - fix reconnect
> - fix shared build
> - fix typos
>
> v4: - rebase
> - let rte_vhost_get.*_features() return features by parameter and
> return -1 on failure
> - Follow the style of ring rework to update the release note: use one
> entry for all vhost changes and add sub items for each change.
>
>
> Major API/ABI Changes summary
> =============================
>
> - some renames
> * "struct virtio_net_device_ops" ==> "struct vhost_device_ops"
> * "rte_virtio_net.h" ==> "rte_vhost.h"
>
> - driver-related APIs are bound to the socket file
> * rte_vhost_driver_set_features(socket_file, features);
> * rte_vhost_driver_get_features(socket_file, features);
> * rte_vhost_driver_enable_features(socket_file, features)
> * rte_vhost_driver_disable_features(socket_file, features)
> * rte_vhost_driver_callback_register(socket_file, notify_ops);
> * rte_vhost_driver_start(socket_file);
> This function replaces rte_vhost_driver_session_start(). Check patch
> 18 for more information.
>
> - new APIs to fetch guest and vring info
> * rte_vhost_get_mem_table(vid, mem);
> * rte_vhost_get_negotiated_features(vid);
> * rte_vhost_get_vhost_vring(vid, vring_idx, vring);
>
> - new exported structures
> * struct rte_vhost_vring
> * struct rte_vhost_mem_region
> * struct rte_vhost_memory
>
> - a new device ops callback: features_changed().
>
>
> --yliu
>
> ---
> Yuanhan Liu (22):
> vhost: introduce driver features related APIs
> net/vhost: remove feature related APIs
> vhost: use new APIs to handle features
> vhost: make notify ops per vhost driver
> vhost: export guest memory regions
> vhost: introduce API to fetch negotiated features
> vhost: export vhost vring info
> vhost: export API to translate gpa to vva
> vhost: turn queue pair to vring
> vhost: export the number of vrings
> vhost: move the device ready check at proper place
> vhost: drop the Rx and Tx queue macro
> vhost: do not include net specific headers
> vhost: rename device ops struct
> vhost: rename virtio-net to vhost
> vhost: add features changed callback
> vhost: export APIs for live migration support
> vhost: introduce API to start a specific driver
> vhost: rename header file
> vhost: workaround the build dependency on mbuf header
> vhost: do not destroy device on repeat mem table message
> examples/vhost: demonstrate the new generic vhost APIs
>
> doc/guides/prog_guide/vhost_lib.rst | 42 +--
> doc/guides/rel_notes/deprecation.rst | 9 -
> doc/guides/rel_notes/release_17_05.rst | 43 +++
> drivers/net/vhost/rte_eth_vhost.c | 101 ++-----
> drivers/net/vhost/rte_eth_vhost.h | 32 +--
> drivers/net/vhost/rte_pmd_vhost_version.map | 3 -
> examples/tep_termination/main.c | 23 +-
> examples/tep_termination/main.h | 2 +
> examples/tep_termination/vxlan_setup.c | 2 +-
> examples/vhost/Makefile | 2 +-
> examples/vhost/main.c | 100 +++++--
> examples/vhost/main.h | 32 ++-
> examples/vhost/virtio_net.c | 405 ++++++++++++++++++++++++++
> lib/librte_vhost/Makefile | 4 +-
> lib/librte_vhost/fd_man.c | 9 +-
> lib/librte_vhost/fd_man.h | 2 +-
> lib/librte_vhost/rte_vhost.h | 427 ++++++++++++++++++++++++++++
> lib/librte_vhost/rte_vhost_version.map | 16 +-
> lib/librte_vhost/rte_virtio_net.h | 208 --------------
> lib/librte_vhost/socket.c | 229 ++++++++++++---
> lib/librte_vhost/vhost.c | 230 ++++++++-------
> lib/librte_vhost/vhost.h | 113 +++++---
> lib/librte_vhost/vhost_user.c | 121 ++++----
> lib/librte_vhost/vhost_user.h | 2 +-
> lib/librte_vhost/virtio_net.c | 71 ++---
> 25 files changed, 1541 insertions(+), 687 deletions(-)
> create mode 100644 examples/vhost/virtio_net.c
> create mode 100644 lib/librte_vhost/rte_vhost.h
> delete mode 100644 lib/librte_vhost/rte_virtio_net.h
>
> --
> 1.9.0
^ permalink raw reply [relevance 0%]
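Putting the socket-file-bound APIs summarized above together, a minimal
sketch of bringing up one vhost-user socket under the v4 API (feature bits
and callback bodies are application specific):

	#include <rte_vhost.h>

	static int
	new_device(int vid)
	{
		/* device 'vid' is ready: set up queues, start processing */
		return 0;
	}

	static void
	destroy_device(int vid)
	{
		/* tear down any per-device state kept for 'vid' */
	}

	static const struct vhost_device_ops ops = {
		.new_device	= new_device,
		.destroy_device	= destroy_device,
	};

	static int
	setup_vhost_user(const char *path, uint64_t features)
	{
		if (rte_vhost_driver_register(path, 0) < 0)
			return -1;
		if (rte_vhost_driver_set_features(path, features) < 0)
			return -1;
		if (rte_vhost_driver_callback_register(path, &ops) < 0)
			return -1;
		/* only now is the socket put into the session loop */
		return rte_vhost_driver_start(path);
	}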
* [dpdk-dev] [PATCH v4 19/22] vhost: rename header file
` (5 preceding siblings ...)
2017-04-01 7:22 3% ` [dpdk-dev] [PATCH v4 18/22] vhost: introduce API to start a specific driver Yuanhan Liu
@ 2017-04-01 7:22 5% ` Yuanhan Liu
2017-04-01 8:44 0% ` [dpdk-dev] [PATCH v4 00/22] vhost: generic vhost API Yuanhan Liu
7 siblings, 0 replies; 200+ results
From: Yuanhan Liu @ 2017-04-01 7:22 UTC (permalink / raw)
To: dev; +Cc: Maxime Coquelin, Harris James R, Liu Changpeng, Yuanhan Liu
Rename "rte_virtio_net.h" to "rte_vhost.h", to not let it be virtio
net specific.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
doc/guides/rel_notes/deprecation.rst | 9 -
doc/guides/rel_notes/release_17_05.rst | 3 +
drivers/net/vhost/rte_eth_vhost.c | 2 +-
drivers/net/vhost/rte_eth_vhost.h | 2 +-
examples/tep_termination/main.c | 2 +-
examples/tep_termination/vxlan_setup.c | 2 +-
examples/vhost/main.c | 2 +-
lib/librte_vhost/Makefile | 2 +-
lib/librte_vhost/rte_vhost.h | 425 +++++++++++++++++++++++++++++++++
lib/librte_vhost/rte_virtio_net.h | 425 ---------------------------------
lib/librte_vhost/vhost.c | 2 +-
lib/librte_vhost/vhost.h | 2 +-
lib/librte_vhost/vhost_user.h | 2 +-
lib/librte_vhost/virtio_net.c | 2 +-
14 files changed, 438 insertions(+), 444 deletions(-)
create mode 100644 lib/librte_vhost/rte_vhost.h
delete mode 100644 lib/librte_vhost/rte_virtio_net.h
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index d6544ed..9708b39 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -95,15 +95,6 @@ Deprecation Notices
Target release for removal of the legacy API will be defined once most
PMDs have switched to rte_flow.
-* vhost: API/ABI changes are planned for 17.05, for making DPDK vhost library
- generic enough so that applications can build different vhost-user drivers
- (instead of vhost-user net only) on top of that.
- Specifically, ``virtio_net_device_ops`` will be renamed to ``vhost_device_ops``.
- Correspondingly, some API's parameter need be changed. Few more functions also
- need be reworked to let it be device aware. For example, different virtio device
- has different feature set, meaning functions like ``rte_vhost_feature_disable``
- need be changed. Last, file rte_virtio_net.h will be renamed to rte_vhost.h.
-
* ABI changes are planned for 17.05 in the ``rte_cryptodev_ops`` structure.
A pointer to a rte_cryptodev_config structure will be added to the
function prototype ``cryptodev_configure_t``, as a new parameter.
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index ed206d2..ab83098 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -231,6 +231,9 @@ API Changes
``rte_vhost_driver_start`` should be used, and there is no need to
create a thread to call it.
+ * The vhost public header file ``rte_virtio_net.h`` is renamed to
+ ``rte_vhost.h``
+
ABI Changes
-----------
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 0a4c476..a4a35be 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -40,7 +40,7 @@
#include <rte_memcpy.h>
#include <rte_vdev.h>
#include <rte_kvargs.h>
-#include <rte_virtio_net.h>
+#include <rte_vhost.h>
#include <rte_spinlock.h>
#include "rte_eth_vhost.h"
diff --git a/drivers/net/vhost/rte_eth_vhost.h b/drivers/net/vhost/rte_eth_vhost.h
index ea4bce4..39ca771 100644
--- a/drivers/net/vhost/rte_eth_vhost.h
+++ b/drivers/net/vhost/rte_eth_vhost.h
@@ -41,7 +41,7 @@
#include <stdint.h>
#include <stdbool.h>
-#include <rte_virtio_net.h>
+#include <rte_vhost.h>
/*
* Event description.
diff --git a/examples/tep_termination/main.c b/examples/tep_termination/main.c
index 24c62cd..cd6e3f1 100644
--- a/examples/tep_termination/main.c
+++ b/examples/tep_termination/main.c
@@ -49,7 +49,7 @@
#include <rte_log.h>
#include <rte_string_fns.h>
#include <rte_malloc.h>
-#include <rte_virtio_net.h>
+#include <rte_vhost.h>
#include "main.h"
#include "vxlan.h"
diff --git a/examples/tep_termination/vxlan_setup.c b/examples/tep_termination/vxlan_setup.c
index 8f1f15b..87de74d 100644
--- a/examples/tep_termination/vxlan_setup.c
+++ b/examples/tep_termination/vxlan_setup.c
@@ -49,7 +49,7 @@
#include <rte_tcp.h>
#include "main.h"
-#include "rte_virtio_net.h"
+#include "rte_vhost.h"
#include "vxlan.h"
#include "vxlan_setup.h"
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 64b3eea..08b82f6 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -49,7 +49,7 @@
#include <rte_log.h>
#include <rte_string_fns.h>
#include <rte_malloc.h>
-#include <rte_virtio_net.h>
+#include <rte_vhost.h>
#include <rte_ip.h>
#include <rte_tcp.h>
diff --git a/lib/librte_vhost/Makefile b/lib/librte_vhost/Makefile
index 1262dcc..4a116fe 100644
--- a/lib/librte_vhost/Makefile
+++ b/lib/librte_vhost/Makefile
@@ -51,6 +51,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_VHOST) := fd_man.c socket.c vhost.c vhost_user.c \
virtio_net.c
# install includes
-SYMLINK-$(CONFIG_RTE_LIBRTE_VHOST)-include += rte_virtio_net.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_VHOST)-include += rte_vhost.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h
new file mode 100644
index 0000000..6681dd7
--- /dev/null
+++ b/lib/librte_vhost/rte_vhost.h
@@ -0,0 +1,425 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_VHOST_H_
+#define _RTE_VHOST_H_
+
+/**
+ * @file
+ * Interface to vhost-user
+ */
+
+#include <stdint.h>
+#include <linux/vhost.h>
+#include <linux/virtio_ring.h>
+#include <sys/eventfd.h>
+
+#include <rte_memory.h>
+#include <rte_mempool.h>
+
+#define RTE_VHOST_USER_CLIENT (1ULL << 0)
+#define RTE_VHOST_USER_NO_RECONNECT (1ULL << 1)
+#define RTE_VHOST_USER_DEQUEUE_ZERO_COPY (1ULL << 2)
+
+/**
+ * Information relating to memory regions including offsets to
+ * addresses in QEMU's memory file.
+ */
+struct rte_vhost_mem_region {
+ uint64_t guest_phys_addr;
+ uint64_t guest_user_addr;
+ uint64_t host_user_addr;
+ uint64_t size;
+ void *mmap_addr;
+ uint64_t mmap_size;
+ int fd;
+};
+
+/**
+ * Memory structure includes region and mapping information.
+ */
+struct rte_vhost_memory {
+ uint32_t nregions;
+ struct rte_vhost_mem_region regions[0];
+};
+
+struct rte_vhost_vring {
+ struct vring_desc *desc;
+ struct vring_avail *avail;
+ struct vring_used *used;
+ uint64_t log_guest_addr;
+
+ int callfd;
+ int kickfd;
+ uint16_t size;
+};
+
+/**
+ * Device and vring operations.
+ */
+struct vhost_device_ops {
+ int (*new_device)(int vid); /**< Add device. */
+ void (*destroy_device)(int vid); /**< Remove device. */
+
+ int (*vring_state_changed)(int vid, uint16_t queue_id, int enable); /**< triggered when a vring is enabled or disabled */
+
+ /**
+ * Features could be changed after the feature negotiation.
+ * For example, VHOST_F_LOG_ALL will be set/cleared at the
+ * start/end of live migration, respectively. This callback
+ * is used to inform the application on such change.
+ */
+ int (*features_changed)(int vid, uint64_t features);
+
+ void *reserved[4]; /**< Reserved for future extension */
+};
+
+/**
+ * Convert guest physical address to host virtual address
+ *
+ * @param mem
+ * the guest memory regions
+ * @param gpa
+ * the guest physical address for querying
+ * @return
+ * the host virtual address on success, 0 on failure
+ */
+static inline uint64_t __attribute__((always_inline))
+rte_vhost_gpa_to_vva(struct rte_vhost_memory *mem, uint64_t gpa)
+{
+ struct rte_vhost_mem_region *reg;
+ uint32_t i;
+
+ for (i = 0; i < mem->nregions; i++) {
+ reg = &mem->regions[i];
+ if (gpa >= reg->guest_phys_addr &&
+ gpa < reg->guest_phys_addr + reg->size) {
+ return gpa - reg->guest_phys_addr +
+ reg->host_user_addr;
+ }
+ }
+
+ return 0;
+}
+
+#define RTE_VHOST_NEED_LOG(features) ((features) & (1ULL << VHOST_F_LOG_ALL))
+
+/**
+ * Log the memory write start with given address.
+ *
+ * This function only needs to be invoked when live migration starts.
+ * Therefore, it rarely needs to be called in practice. To keep the
+ * performance impact minimal, it's suggested to do a
+ * check before calling it:
+ *
+ * if (unlikely(RTE_VHOST_NEED_LOG(features)))
+ * rte_vhost_log_write(vid, addr, len);
+ *
+ * @param vid
+ * vhost device ID
+ * @param addr
+ * the starting address for write
+ * @param len
+ * the length to write
+ */
+void rte_vhost_log_write(int vid, uint64_t addr, uint64_t len);
+
+/**
+ * Log the used ring update start at given offset.
+ *
+ * Same as rte_vhost_log_write, it's suggested to do a check before
+ * calling it:
+ *
+ * if (unlikely(RTE_VHOST_NEED_LOG(features)))
+ * rte_vhost_log_used_vring(vid, vring_idx, offset, len);
+ *
+ * @param vid
+ * vhost device ID
+ * @param vring_idx
+ * the vring index
+ * @param offset
+ * the offset inside the used ring
+ * @param len
+ * the length to write
+ */
+void rte_vhost_log_used_vring(int vid, uint16_t vring_idx,
+ uint64_t offset, uint64_t len);
+
+int rte_vhost_enable_guest_notification(int vid, uint16_t queue_id, int enable);
+
+/**
+ * Register vhost driver. path could be different for multiple
+ * instance support.
+ */
+int rte_vhost_driver_register(const char *path, uint64_t flags);
+
+/* Unregister vhost driver. This is only meaningful to vhost user. */
+int rte_vhost_driver_unregister(const char *path);
+
+/**
+ * Set the feature bits the vhost-user driver supports.
+ *
+ * @param path
+ * The vhost-user socket file path
+ * @return
+ * 0 on success, -1 on failure
+ */
+int rte_vhost_driver_set_features(const char *path, uint64_t features);
+
+/**
+ * Enable vhost-user driver features.
+ *
+ * Note that
+ * - the param @features should be a subset of the feature bits provided
+ * by rte_vhost_driver_set_features().
+ * - it must be invoked before vhost-user negotiation starts.
+ *
+ * @param path
+ * The vhost-user socket file path
+ * @param features
+ * Features to enable
+ * @return
+ * 0 on success, -1 on failure
+ */
+int rte_vhost_driver_enable_features(const char *path, uint64_t features);
+
+/**
+ * Disable vhost-user driver features.
+ *
+ * The two notes at rte_vhost_driver_enable_features() also apply here.
+ *
+ * @param path
+ * The vhost-user socket file path
+ * @param features
+ * Features to disable
+ * @return
+ * 0 on success, -1 on failure
+ */
+int rte_vhost_driver_disable_features(const char *path, uint64_t features);
+
+/**
+ * Get the feature bits before feature negotiation.
+ *
+ * @param path
+ * The vhost-user socket file path
+ * @param features
+ * A pointer to store the queried feature bits
+ * @return
+ * 0 on success, -1 on failure
+ */
+int rte_vhost_driver_get_features(const char *path, uint64_t *features);
+
+/**
+ * Get the feature bits after negotiation
+ *
+ * @param vid
+ * Vhost device ID
+ * @param features
+ * A pointer to store the queried feature bits
+ * @return
+ * 0 on success, -1 on failure
+ */
+int rte_vhost_get_negotiated_features(int vid, uint64_t *features);
+
+/* Register callbacks. */
+int rte_vhost_driver_callback_register(const char *path,
+ struct vhost_device_ops const * const ops);
+
+/**
+ *
+ * Start the vhost-user driver.
+ *
+ * This function triggers the vhost-user negotiation.
+ *
+ * @param path
+ * The vhost-user socket file path
+ * @return
+ * 0 on success, -1 on failure
+ */
+int rte_vhost_driver_start(const char *path);
+
+/**
+ * Get the MTU value of the device if set in QEMU.
+ *
+ * @param vid
+ * virtio-net device ID
+ * @param mtu
+ * The variable to store the MTU value
+ *
+ * @return
+ * 0: success
+ * -EAGAIN: device not yet started
+ * -ENOTSUP: device does not support MTU feature
+ */
+int rte_vhost_get_mtu(int vid, uint16_t *mtu);
+
+/**
+ * Get the numa node from which the virtio net device's memory
+ * is allocated.
+ *
+ * @param vid
+ * vhost device ID
+ *
+ * @return
+ * The numa node, -1 on failure
+ */
+int rte_vhost_get_numa_node(int vid);
+
+/**
+ * @deprecated
+ * Get the number of queues the device supports.
+ *
+ * Note this function is deprecated, as it returns a queue pair number,
+ * which is vhost specific. Instead, rte_vhost_get_vring_num should
+ * be used.
+ *
+ * @param vid
+ * vhost device ID
+ *
+ * @return
+ * The number of queues, 0 on failure
+ */
+__rte_deprecated
+uint32_t rte_vhost_get_queue_num(int vid);
+
+/**
+ * Get the number of vrings the device supports.
+ *
+ * @param vid
+ * vhost device ID
+ *
+ * @return
+ * The number of vrings, 0 on failure
+ */
+uint16_t rte_vhost_get_vring_num(int vid);
+
+/**
+ * Get the virtio net device's ifname, which is the vhost-user socket
+ * file path.
+ *
+ * @param vid
+ * vhost device ID
+ * @param buf
+ * The buffer to store the queried ifname
+ * @param len
+ * The length of buf
+ *
+ * @return
+ * 0 on success, -1 on failure
+ */
+int rte_vhost_get_ifname(int vid, char *buf, size_t len);
+
+/**
+ * Get how many avail entries are left in the queue
+ *
+ * @param vid
+ * vhost device ID
+ * @param queue_id
+ * virtio queue index
+ *
+ * @return
+ * num of avail entries left
+ */
+uint16_t rte_vhost_avail_entries(int vid, uint16_t queue_id);
+
+/**
+ * This function adds buffers to the virtio device's RX virtqueue. Buffers can
+ * be received from the physical port or from another virtual device. A packet
+ * count is returned to indicate the number of packets that were successfully
+ * added to the RX queue.
+ * @param vid
+ * vhost device ID
+ * @param queue_id
+ * virtio queue index in mq case
+ * @param pkts
+ * array to contain packets to be enqueued
+ * @param count
+ * packets num to be enqueued
+ * @return
+ * num of packets enqueued
+ */
+uint16_t rte_vhost_enqueue_burst(int vid, uint16_t queue_id,
+ struct rte_mbuf **pkts, uint16_t count);
+
+/**
+ * This function gets guest buffers from the virtio device TX virtqueue,
+ * constructs host mbufs, copies guest buffer content to host mbufs and
+ * stores them in pkts to be processed.
+ * @param vid
+ * vhost device ID
+ * @param queue_id
+ * virtio queue index in mq case
+ * @param mbuf_pool
+ * mbuf_pool where host mbuf is allocated.
+ * @param pkts
+ * array to contain packets to be dequeued
+ * @param count
+ * packets num to be dequeued
+ * @return
+ * num of packets dequeued
+ */
+uint16_t rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
+ struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count);
+
+/**
+ * Get guest mem table: a list of memory regions.
+ *
+ * An rte_vhost_memory object will be allocated internally, to hold the
+ * guest memory regions. Application should free it at destroy_device()
+ * callback.
+ *
+ * @param vid
+ * vhost device ID
+ * @param mem
+ * To store the returned mem regions
+ * @return
+ * 0 on success, -1 on failure
+ */
+int rte_vhost_get_mem_table(int vid, struct rte_vhost_memory **mem);
+
+/**
+ * Get guest vring info, including the vring address, vring size, etc.
+ *
+ * @param vid
+ * vhost device ID
+ * @param vring_idx
+ * vring index
+ * @param vring
+ * the structure to hold the requested vring info
+ * @return
+ * 0 on success, -1 on failure
+ */
+int rte_vhost_get_vhost_vring(int vid, uint16_t vring_idx,
+ struct rte_vhost_vring *vring);
+
+#endif /* _RTE_VHOST_H_ */
diff --git a/lib/librte_vhost/rte_virtio_net.h b/lib/librte_vhost/rte_virtio_net.h
deleted file mode 100644
index 890f4b2..0000000
--- a/lib/librte_vhost/rte_virtio_net.h
+++ /dev/null
@@ -1,425 +0,0 @@
-/*-
- * BSD LICENSE
- *
- * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#ifndef _VIRTIO_NET_H_
-#define _VIRTIO_NET_H_
-
-/**
- * @file
- * Interface to vhost net
- */
-
-#include <stdint.h>
-#include <linux/vhost.h>
-#include <linux/virtio_ring.h>
-#include <sys/eventfd.h>
-
-#include <rte_memory.h>
-#include <rte_mempool.h>
-
-#define RTE_VHOST_USER_CLIENT (1ULL << 0)
-#define RTE_VHOST_USER_NO_RECONNECT (1ULL << 1)
-#define RTE_VHOST_USER_DEQUEUE_ZERO_COPY (1ULL << 2)
-
-/**
- * Information relating to memory regions including offsets to
- * addresses in QEMU's memory file.
- */
-struct rte_vhost_mem_region {
- uint64_t guest_phys_addr;
- uint64_t guest_user_addr;
- uint64_t host_user_addr;
- uint64_t size;
- void *mmap_addr;
- uint64_t mmap_size;
- int fd;
-};
-
-/**
- * Memory structure includes region and mapping information.
- */
-struct rte_vhost_memory {
- uint32_t nregions;
- struct rte_vhost_mem_region regions[0];
-};
-
-struct rte_vhost_vring {
- struct vring_desc *desc;
- struct vring_avail *avail;
- struct vring_used *used;
- uint64_t log_guest_addr;
-
- int callfd;
- int kickfd;
- uint16_t size;
-};
-
-/**
- * Device and vring operations.
- */
-struct vhost_device_ops {
- int (*new_device)(int vid); /**< Add device. */
- void (*destroy_device)(int vid); /**< Remove device. */
-
- int (*vring_state_changed)(int vid, uint16_t queue_id, int enable); /**< triggered when a vring is enabled or disabled */
-
- /**
- * Features could be changed after the feature negotiation.
- * For example, VHOST_F_LOG_ALL will be set/cleared at the
- * start/end of live migration, respectively. This callback
- * is used to inform the application on such change.
- */
- int (*features_changed)(int vid, uint64_t features);
-
- void *reserved[4]; /**< Reserved for future extension */
-};
-
-/**
- * Convert guest physical address to host virtual address
- *
- * @param mem
- * the guest memory regions
- * @param gpa
- * the guest physical address for querying
- * @return
- * the host virtual address on success, 0 on failure
- */
-static inline uint64_t __attribute__((always_inline))
-rte_vhost_gpa_to_vva(struct rte_vhost_memory *mem, uint64_t gpa)
-{
- struct rte_vhost_mem_region *reg;
- uint32_t i;
-
- for (i = 0; i < mem->nregions; i++) {
- reg = &mem->regions[i];
- if (gpa >= reg->guest_phys_addr &&
- gpa < reg->guest_phys_addr + reg->size) {
- return gpa - reg->guest_phys_addr +
- reg->host_user_addr;
- }
- }
-
- return 0;
-}
-
-#define RTE_VHOST_NEED_LOG(features) ((features) & (1ULL << VHOST_F_LOG_ALL))
-
-/**
- * Log the memory write start with given address.
- *
- * This function only needs to be invoked when live migration starts.
- * Therefore, it rarely needs to be called in practice. To keep the
- * performance impact minimal, it's suggested to do a
- * check before calling it:
- *
- * if (unlikely(RTE_VHOST_NEED_LOG(features)))
- * rte_vhost_log_write(vid, addr, len);
- *
- * @param vid
- * vhost device ID
- * @param addr
- * the starting address for write
- * @param len
- * the length to write
- */
-void rte_vhost_log_write(int vid, uint64_t addr, uint64_t len);
-
-/**
- * Log the used ring update start at given offset.
- *
- * Same as rte_vhost_log_write, it's suggested to do a check before
- * calling it:
- *
- * if (unlikely(RTE_VHOST_NEED_LOG(features)))
- * rte_vhost_log_used_vring(vid, vring_idx, offset, len);
- *
- * @param vid
- * vhost device ID
- * @param vring_idx
- * the vring index
- * @param offset
- * the offset inside the used ring
- * @param len
- * the length to write
- */
-void rte_vhost_log_used_vring(int vid, uint16_t vring_idx,
- uint64_t offset, uint64_t len);
-
-int rte_vhost_enable_guest_notification(int vid, uint16_t queue_id, int enable);
-
-/**
- * Register vhost driver. path could be different for multiple
- * instance support.
- */
-int rte_vhost_driver_register(const char *path, uint64_t flags);
-
-/* Unregister vhost driver. This is only meaningful to vhost user. */
-int rte_vhost_driver_unregister(const char *path);
-
-/**
- * Set the feature bits the vhost-user driver supports.
- *
- * @param path
- * The vhost-user socket file path
- * @return
- * 0 on success, -1 on failure
- */
-int rte_vhost_driver_set_features(const char *path, uint64_t features);
-
-/**
- * Enable vhost-user driver features.
- *
- * Note that
- * - the param @features should be a subset of the feature bits provided
- * by rte_vhost_driver_set_features().
- * - it must be invoked before vhost-user negotiation starts.
- *
- * @param path
- * The vhost-user socket file path
- * @param features
- * Features to enable
- * @return
- * 0 on success, -1 on failure
- */
-int rte_vhost_driver_enable_features(const char *path, uint64_t features);
-
-/**
- * Disable vhost-user driver features.
- *
- * The two notes at rte_vhost_driver_enable_features() also apply here.
- *
- * @param path
- * The vhost-user socket file path
- * @param features
- * Features to disable
- * @return
- * 0 on success, -1 on failure
- */
-int rte_vhost_driver_disable_features(const char *path, uint64_t features);
-
-/**
- * Get the feature bits before feature negotiation.
- *
- * @param path
- * The vhost-user socket file path
- * @param features
- * A pointer to store the queried feature bits
- * @return
- * 0 on success, -1 on failure
- */
-int rte_vhost_driver_get_features(const char *path, uint64_t *features);
-
-/**
- * Get the feature bits after negotiation
- *
- * @param vid
- * Vhost device ID
- * @param features
- * A pointer to store the queried feature bits
- * @return
- * 0 on success, -1 on failure
- */
-int rte_vhost_get_negotiated_features(int vid, uint64_t *features);
-
-/* Register callbacks. */
-int rte_vhost_driver_callback_register(const char *path,
- struct vhost_device_ops const * const ops);
-
-/**
- *
- * Start the vhost-user driver.
- *
- * This function triggers the vhost-user negotiation.
- *
- * @param path
- * The vhost-user socket file path
- * @return
- * 0 on success, -1 on failure
- */
-int rte_vhost_driver_start(const char *path);
-
-/**
- * Get the MTU value of the device if set in QEMU.
- *
- * @param vid
- * virtio-net device ID
- * @param mtu
- * The variable to store the MTU value
- *
- * @return
- * 0: success
- * -EAGAIN: device not yet started
- * -ENOTSUP: device does not support MTU feature
- */
-int rte_vhost_get_mtu(int vid, uint16_t *mtu);
-
-/**
- * Get the numa node from which the virtio net device's memory
- * is allocated.
- *
- * @param vid
- * vhost device ID
- *
- * @return
- * The numa node, -1 on failure
- */
-int rte_vhost_get_numa_node(int vid);
-
-/**
- * @deprecated
- * Get the number of queues the device supports.
- *
- * Note this function is deprecated, as it returns a queue pair number,
- * which is vhost specific. Instead, rte_vhost_get_vring_num should
- * be used.
- *
- * @param vid
- * vhost device ID
- *
- * @return
- * The number of queues, 0 on failure
- */
-__rte_deprecated
-uint32_t rte_vhost_get_queue_num(int vid);
-
-/**
- * Get the number of vrings the device supports.
- *
- * @param vid
- * vhost device ID
- *
- * @return
- * The number of vrings, 0 on failure
- */
-uint16_t rte_vhost_get_vring_num(int vid);
-
-/**
- * Get the virtio net device's ifname, which is the vhost-user socket
- * file path.
- *
- * @param vid
- * vhost device ID
- * @param buf
- * The buffer to store the queried ifname
- * @param len
- * The length of buf
- *
- * @return
- * 0 on success, -1 on failure
- */
-int rte_vhost_get_ifname(int vid, char *buf, size_t len);
-
-/**
- * Get how many avail entries are left in the queue
- *
- * @param vid
- * vhost device ID
- * @param queue_id
- * virtio queue index
- *
- * @return
- * num of avail entries left
- */
-uint16_t rte_vhost_avail_entries(int vid, uint16_t queue_id);
-
-/**
- * This function adds buffers to the virtio device's RX virtqueue. Buffers can
- * be received from the physical port or from another virtual device. A packet
- * count is returned to indicate the number of packets that were successfully
- * added to the RX queue.
- * @param vid
- * vhost device ID
- * @param queue_id
- * virtio queue index in mq case
- * @param pkts
- * array to contain packets to be enqueued
- * @param count
- * packets num to be enqueued
- * @return
- * num of packets enqueued
- */
-uint16_t rte_vhost_enqueue_burst(int vid, uint16_t queue_id,
- struct rte_mbuf **pkts, uint16_t count);
-
-/**
- * This function gets guest buffers from the virtio device TX virtqueue,
- * constructs host mbufs, copies guest buffer content to host mbufs and
- * stores them in pkts to be processed.
- * @param vid
- * vhost device ID
- * @param queue_id
- * virtio queue index in mq case
- * @param mbuf_pool
- * mbuf_pool where host mbuf is allocated.
- * @param pkts
- * array to contain packets to be dequeued
- * @param count
- * packets num to be dequeued
- * @return
- * num of packets dequeued
- */
-uint16_t rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
- struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count);
-
-/**
- * Get guest mem table: a list of memory regions.
- *
- * An rte_vhost_memory object will be allocated internally, to hold the
- * guest memory regions. Application should free it at destroy_device()
- * callback.
- *
- * @param vid
- * vhost device ID
- * @param mem
- * To store the returned mem regions
- * @return
- * 0 on success, -1 on failure
- */
-int rte_vhost_get_mem_table(int vid, struct rte_vhost_memory **mem);
-
-/**
- * Get guest vring info, including the vring address, vring size, etc.
- *
- * @param vid
- * vhost device ID
- * @param vring_idx
- * vring index
- * @param vring
- * the structure to hold the requested vring info
- * @return
- * 0 on success, -1 on failure
- */
-int rte_vhost_get_vhost_vring(int vid, uint16_t vring_idx,
- struct rte_vhost_vring *vring);
-
-#endif /* _VIRTIO_NET_H_ */
diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index 59de2ea..0b19d2e 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -45,7 +45,7 @@
#include <rte_string_fns.h>
#include <rte_memory.h>
#include <rte_malloc.h>
-#include <rte_virtio_net.h>
+#include <rte_vhost.h>
#include "vhost.h"
diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index a199ee6..ddd8a9c 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -46,7 +46,7 @@
#include <rte_log.h>
#include <rte_ether.h>
-#include "rte_virtio_net.h"
+#include "rte_vhost.h"
/* Used to indicate that the device is running on a data core */
#define VIRTIO_DEV_RUNNING 1
diff --git a/lib/librte_vhost/vhost_user.h b/lib/librte_vhost/vhost_user.h
index 838dec8..2ba22db 100644
--- a/lib/librte_vhost/vhost_user.h
+++ b/lib/librte_vhost/vhost_user.h
@@ -37,7 +37,7 @@
#include <stdint.h>
#include <linux/vhost.h>
-#include "rte_virtio_net.h"
+#include "rte_vhost.h"
/* refer to hw/virtio/vhost-user.c */
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index fc336d9..d6b7c7a 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -39,7 +39,7 @@
#include <rte_memcpy.h>
#include <rte_ether.h>
#include <rte_ip.h>
-#include <rte_virtio_net.h>
+#include <rte_vhost.h>
#include <rte_tcp.h>
#include <rte_udp.h>
#include <rte_sctp.h>
--
1.9.0
^ permalink raw reply [relevance 5%]
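To illustrate the generic interface this renamed header exposes, a sketch
of a new_device() callback fetching the guest memory table and vring info.
Per the rte_vhost_get_mem_table() comment the application frees the table
itself; free() is assumed here, and a real driver would keep the table
until destroy_device():

	#include <stdlib.h>
	#include <rte_vhost.h>

	static int
	new_device(int vid)
	{
		struct rte_vhost_memory *mem;
		struct rte_vhost_vring vring;
		uint64_t features;

		if (rte_vhost_get_mem_table(vid, &mem) < 0)
			return -1;

		if (rte_vhost_get_negotiated_features(vid, &features) < 0 ||
				rte_vhost_get_vhost_vring(vid, 0, &vring) < 0) {
			free(mem);
			return -1;
		}

		/*
		 * vring.desc/avail/used are usable now; guest physical
		 * addresses found in descriptors are translated with
		 * rte_vhost_gpa_to_vva(mem, gpa).
		 */
		free(mem);
		return 0;
	}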
* [dpdk-dev] [PATCH v4 18/22] vhost: introduce API to start a specific driver
` (4 preceding siblings ...)
2017-04-01 7:22 4% ` [dpdk-dev] [PATCH v4 14/22] vhost: rename device ops struct Yuanhan Liu
@ 2017-04-01 7:22 3% ` Yuanhan Liu
2017-04-01 7:22 5% ` [dpdk-dev] [PATCH v4 19/22] vhost: rename header file Yuanhan Liu
2017-04-01 8:44 0% ` [dpdk-dev] [PATCH v4 00/22] vhost: generic vhost API Yuanhan Liu
7 siblings, 0 replies; 200+ results
From: Yuanhan Liu @ 2017-04-01 7:22 UTC (permalink / raw)
To: dev; +Cc: Maxime Coquelin, Harris James R, Liu Changpeng, Yuanhan Liu
We used to use rte_vhost_driver_session_start() to trigger the vhost-user
session. It takes no argument, thus it is a global trigger, and that could
be problematic.
The issue is, currently, rte_vhost_driver_register(path, flags) actually
tries to put it into the session loop (by fdset_add). However, it needs
a set of APIs to set up a vhost-user driver properly:
* rte_vhost_driver_register(path, flags);
* rte_vhost_driver_set_features(path, features);
* rte_vhost_driver_callback_register(path, vhost_device_ops);
If a new vhost-user driver is registered after the trigger (think OVS-DPDK
that could add a port dynamically from cmdline), the current code will
effectively start the session for the new driver just after the first
API, rte_vhost_driver_register(), is invoked, leaving the later calls with
no effect at all.
To handle the case properly, this patch introduces a new API,
rte_vhost_driver_start(path), to trigger a specific vhost-user driver.
To do that, the rte_vhost_driver_register(path, flags) is simplified
to create the socket only and let rte_vhost_driver_start(path) to
actually put it into the session loop.
Meanwhile, rte_vhost_driver_session_start() is removed: the session
thread is now hidden internally (created on demand if it does not yet
exist). This also simplifies the application.
NOTE: the API order in the prog guide is slightly adjusted to show the
correct invocation order.
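That is, per socket file, the expected sequence becomes (a sketch; flags,
features and ops are application specific):

	rte_vhost_driver_register(path, flags);
	rte_vhost_driver_set_features(path, features);
	rte_vhost_driver_callback_register(path, &ops);
	rte_vhost_driver_start(path);	/* only now enters the session loop */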
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
v3: - fix broken reconnect
---
doc/guides/prog_guide/vhost_lib.rst | 24 +++++-----
doc/guides/rel_notes/release_17_05.rst | 4 ++
drivers/net/vhost/rte_eth_vhost.c | 50 ++------------------
examples/tep_termination/main.c | 8 +++-
examples/vhost/main.c | 9 +++-
lib/librte_vhost/fd_man.c | 9 ++--
lib/librte_vhost/fd_man.h | 2 +-
lib/librte_vhost/rte_vhost_version.map | 2 +-
lib/librte_vhost/rte_virtio_net.h | 15 +++++-
lib/librte_vhost/socket.c | 84 ++++++++++++++++++++--------------
10 files changed, 104 insertions(+), 103 deletions(-)
diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index a4fb1f1..5979290 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -116,12 +116,6 @@ The following is an overview of some key Vhost API functions:
vhost-user driver could be vhost-user net, yet it could be something else,
say, vhost-user SCSI.
-* ``rte_vhost_driver_session_start()``
-
- This function starts the vhost session loop to handle vhost messages. It
- starts an infinite loop, therefore it should be called in a dedicated
- thread.
-
* ``rte_vhost_driver_callback_register(path, vhost_device_ops)``
This function registers a set of callbacks, to let DPDK applications take
@@ -149,6 +143,17 @@ The following is an overview of some key Vhost API functions:
``VHOST_F_LOG_ALL`` will be set/cleared at the start/end of live
migration, respectively.
+* ``rte_vhost_driver_disable/enable_features(path, features))``
+
+ This function disables/enables some features. For example, it can be used to
+ disable mergeable buffers and TSO features, which both are enabled by
+ default.
+
+* ``rte_vhost_driver_start(path)``
+
+ This function triggers the vhost-user negotiation. It should be invoked at
+ the end of initializing a vhost-user driver.
+
* ``rte_vhost_enqueue_burst(vid, queue_id, pkts, count)``
Transmits (enqueues) ``count`` packets from host to guest.
@@ -157,13 +162,6 @@ The following is an overview of some key Vhost API functions:
Receives (dequeues) ``count`` packets from guest, and stored them at ``pkts``.
-* ``rte_vhost_driver_disable/enable_features(path, features))``
-
- This function disables/enables some features. For example, it can be used to
- disable mergeable buffers and TSO features, which both are enabled by
- default.
-
-
Vhost-user Implementations
--------------------------
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index a400bd0..ed206d2 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -227,6 +227,10 @@ API Changes
* The vhost struct ``virtio_net_device_ops`` is renamed to
``vhost_device_ops``
+ * The vhost API ``rte_vhost_driver_session_start`` is removed. Instead,
+ ``rte_vhost_driver_start`` should be used, and there is no need to
+ create a thread to call it.
+
ABI Changes
-----------
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 0b514cc..0a4c476 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -127,9 +127,6 @@ struct internal_list {
static pthread_mutex_t internal_list_lock = PTHREAD_MUTEX_INITIALIZER;
-static rte_atomic16_t nb_started_ports;
-static pthread_t session_th;
-
static struct rte_eth_link pmd_link = {
.link_speed = 10000,
.link_duplex = ETH_LINK_FULL_DUPLEX,
@@ -743,42 +740,6 @@ struct vhost_xstats_name_off {
return vid;
}
-static void *
-vhost_driver_session(void *param __rte_unused)
-{
- /* start event handling */
- rte_vhost_driver_session_start();
-
- return NULL;
-}
-
-static int
-vhost_driver_session_start(void)
-{
- int ret;
-
- ret = pthread_create(&session_th,
- NULL, vhost_driver_session, NULL);
- if (ret)
- RTE_LOG(ERR, PMD, "Can't create a thread\n");
-
- return ret;
-}
-
-static void
-vhost_driver_session_stop(void)
-{
- int ret;
-
- ret = pthread_cancel(session_th);
- if (ret)
- RTE_LOG(ERR, PMD, "Can't cancel the thread\n");
-
- ret = pthread_join(session_th, NULL);
- if (ret)
- RTE_LOG(ERR, PMD, "Can't join the thread\n");
-}
-
static int
eth_dev_start(struct rte_eth_dev *dev)
{
@@ -1094,10 +1055,10 @@ struct vhost_xstats_name_off {
goto error;
}
- /* We need only one message handling thread */
- if (rte_atomic16_add_return(&nb_started_ports, 1) == 1) {
- if (vhost_driver_session_start())
- goto error;
+ if (rte_vhost_driver_start(iface_name) < 0) {
+ RTE_LOG(ERR, PMD, "Failed to start driver for %s\n",
+ iface_name);
+ goto error;
}
return data->port_id;
@@ -1224,9 +1185,6 @@ struct vhost_xstats_name_off {
eth_dev_close(eth_dev);
- if (rte_atomic16_sub_return(&nb_started_ports, 1) == 0)
- vhost_driver_session_stop();
-
rte_free(vring_states[eth_dev->data->port_id]);
vring_states[eth_dev->data->port_id] = NULL;
diff --git a/examples/tep_termination/main.c b/examples/tep_termination/main.c
index 738f2d2..24c62cd 100644
--- a/examples/tep_termination/main.c
+++ b/examples/tep_termination/main.c
@@ -1263,7 +1263,13 @@ static inline void __attribute__((always_inline))
"failed to register vhost driver callbacks.\n");
}
- rte_vhost_driver_session_start();
+ if (rte_vhost_driver_start(dev_basename) < 0) {
+ rte_exit(EXIT_FAILURE,
+ "failed to start vhost driver.\n");
+ }
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id)
+ rte_eal_wait_lcore(lcore_id);
return 0;
}
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 4395306..64b3eea 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -1545,9 +1545,16 @@ static inline void __attribute__((always_inline))
rte_exit(EXIT_FAILURE,
"failed to register vhost driver callbacks.\n");
}
+
+ if (rte_vhost_driver_start(file) < 0) {
+ rte_exit(EXIT_FAILURE,
+ "failed to start vhost driver.\n");
+ }
}
- rte_vhost_driver_session_start();
+ RTE_LCORE_FOREACH_SLAVE(lcore_id)
+ rte_eal_wait_lcore(lcore_id);
+
return 0;
}
diff --git a/lib/librte_vhost/fd_man.c b/lib/librte_vhost/fd_man.c
index c7a4490..2ceacc9 100644
--- a/lib/librte_vhost/fd_man.c
+++ b/lib/librte_vhost/fd_man.c
@@ -210,8 +210,8 @@
* will wait until the flag is reset to zero(which indicates the callback is
* finished), then it could free the context after fdset_del.
*/
-void
-fdset_event_dispatch(struct fdset *pfdset)
+void *
+fdset_event_dispatch(void *arg)
{
int i;
struct pollfd *pfd;
@@ -221,9 +221,10 @@
int fd, numfds;
int remove1, remove2;
int need_shrink;
+ struct fdset *pfdset = arg;
if (pfdset == NULL)
- return;
+ return NULL;
while (1) {
@@ -294,4 +295,6 @@
if (need_shrink)
fdset_shrink(pfdset);
}
+
+ return NULL;
}
diff --git a/lib/librte_vhost/fd_man.h b/lib/librte_vhost/fd_man.h
index d319cac..90d34db 100644
--- a/lib/librte_vhost/fd_man.h
+++ b/lib/librte_vhost/fd_man.h
@@ -64,6 +64,6 @@ int fdset_add(struct fdset *pfdset, int fd,
void *fdset_del(struct fdset *pfdset, int fd);
-void fdset_event_dispatch(struct fdset *pfdset);
+void *fdset_event_dispatch(void *arg);
#endif
diff --git a/lib/librte_vhost/rte_vhost_version.map b/lib/librte_vhost/rte_vhost_version.map
index f4b74da..0785873 100644
--- a/lib/librte_vhost/rte_vhost_version.map
+++ b/lib/librte_vhost/rte_vhost_version.map
@@ -4,7 +4,6 @@ DPDK_2.0 {
rte_vhost_dequeue_burst;
rte_vhost_driver_callback_register;
rte_vhost_driver_register;
- rte_vhost_driver_session_start;
rte_vhost_enable_guest_notification;
rte_vhost_enqueue_burst;
@@ -35,6 +34,7 @@ DPDK_17.05 {
rte_vhost_driver_enable_features;
rte_vhost_driver_get_features;
rte_vhost_driver_set_features;
+ rte_vhost_driver_start;
rte_vhost_get_mem_table;
rte_vhost_get_mtu;
rte_vhost_get_negotiated_features;
diff --git a/lib/librte_vhost/rte_virtio_net.h b/lib/librte_vhost/rte_virtio_net.h
index 7a08bbf..890f4b2 100644
--- a/lib/librte_vhost/rte_virtio_net.h
+++ b/lib/librte_vhost/rte_virtio_net.h
@@ -254,8 +254,19 @@ void rte_vhost_log_used_vring(int vid, uint16_t vring_idx,
/* Register callbacks. */
int rte_vhost_driver_callback_register(const char *path,
struct vhost_device_ops const * const ops);
-/* Start vhost driver session blocking loop. */
-int rte_vhost_driver_session_start(void);
+
+/**
+ *
+ * Start the vhost-user driver.
+ *
+ * This function triggers the vhost-user negotiation.
+ *
+ * @param path
+ * The vhost-user socket file path
+ * @return
+ * 0 on success, -1 on failure
+ */
+int rte_vhost_driver_start(const char *path);
/**
* Get the MTU value of the device if set in QEMU.
diff --git a/lib/librte_vhost/socket.c b/lib/librte_vhost/socket.c
index 3b68fc9..66fd335 100644
--- a/lib/librte_vhost/socket.c
+++ b/lib/librte_vhost/socket.c
@@ -63,7 +63,8 @@ struct vhost_user_socket {
struct vhost_user_connection_list conn_list;
pthread_mutex_t conn_mutex;
char *path;
- int listenfd;
+ int socket_fd;
+ struct sockaddr_un un;
bool is_server;
bool reconnect;
bool dequeue_zero_copy;
@@ -101,7 +102,8 @@ struct vhost_user {
static void vhost_user_server_new_connection(int fd, void *data, int *remove);
static void vhost_user_read_cb(int fd, void *dat, int *remove);
-static int vhost_user_create_client(struct vhost_user_socket *vsocket);
+static int create_unix_socket(struct vhost_user_socket *vsocket);
+static int vhost_user_start_client(struct vhost_user_socket *vsocket);
static struct vhost_user vhost_user = {
.fdset = {
@@ -280,23 +282,26 @@ struct vhost_user {
free(conn);
- if (vsocket->reconnect)
- vhost_user_create_client(vsocket);
+ if (vsocket->reconnect) {
+ create_unix_socket(vsocket);
+ vhost_user_start_client(vsocket);
+ }
}
}
static int
-create_unix_socket(const char *path, struct sockaddr_un *un, bool is_server)
+create_unix_socket(struct vhost_user_socket *vsocket)
{
int fd;
+ struct sockaddr_un *un = &vsocket->un;
fd = socket(AF_UNIX, SOCK_STREAM, 0);
if (fd < 0)
return -1;
RTE_LOG(INFO, VHOST_CONFIG, "vhost-user %s: socket created, fd: %d\n",
- is_server ? "server" : "client", fd);
+ vsocket->is_server ? "server" : "client", fd);
- if (!is_server && fcntl(fd, F_SETFL, O_NONBLOCK)) {
+ if (!vsocket->is_server && fcntl(fd, F_SETFL, O_NONBLOCK)) {
RTE_LOG(ERR, VHOST_CONFIG,
"vhost-user: can't set nonblocking mode for socket, fd: "
"%d (%s)\n", fd, strerror(errno));
@@ -306,25 +311,21 @@ struct vhost_user {
memset(un, 0, sizeof(*un));
un->sun_family = AF_UNIX;
- strncpy(un->sun_path, path, sizeof(un->sun_path));
+ strncpy(un->sun_path, vsocket->path, sizeof(un->sun_path));
un->sun_path[sizeof(un->sun_path) - 1] = '\0';
- return fd;
+ vsocket->socket_fd = fd;
+ return 0;
}
static int
-vhost_user_create_server(struct vhost_user_socket *vsocket)
+vhost_user_start_server(struct vhost_user_socket *vsocket)
{
- int fd;
int ret;
- struct sockaddr_un un;
+ int fd = vsocket->socket_fd;
const char *path = vsocket->path;
- fd = create_unix_socket(path, &un, vsocket->is_server);
- if (fd < 0)
- return -1;
-
- ret = bind(fd, (struct sockaddr *)&un, sizeof(un));
+ ret = bind(fd, (struct sockaddr *)&vsocket->un, sizeof(vsocket->un));
if (ret < 0) {
RTE_LOG(ERR, VHOST_CONFIG,
"failed to bind to %s: %s; remove it and try again\n",
@@ -337,7 +338,6 @@ struct vhost_user {
if (ret < 0)
goto err;
- vsocket->listenfd = fd;
ret = fdset_add(&vhost_user.fdset, fd, vhost_user_server_new_connection,
NULL, vsocket);
if (ret < 0) {
@@ -456,20 +456,15 @@ struct vhost_user_reconnect_list {
}
static int
-vhost_user_create_client(struct vhost_user_socket *vsocket)
+vhost_user_start_client(struct vhost_user_socket *vsocket)
{
- int fd;
int ret;
- struct sockaddr_un un;
+ int fd = vsocket->socket_fd;
const char *path = vsocket->path;
struct vhost_user_reconnect *reconn;
- fd = create_unix_socket(path, &un, vsocket->is_server);
- if (fd < 0)
- return -1;
-
- ret = vhost_user_connect_nonblock(fd, (struct sockaddr *)&un,
- sizeof(un));
+ ret = vhost_user_connect_nonblock(fd, (struct sockaddr *)&vsocket->un,
+ sizeof(vsocket->un));
if (ret == 0) {
vhost_user_add_connection(fd, vsocket);
return 0;
@@ -492,7 +487,7 @@ struct vhost_user_reconnect_list {
close(fd);
return -1;
}
- reconn->un = un;
+ reconn->un = vsocket->un;
reconn->fd = fd;
reconn->vsocket = vsocket;
pthread_mutex_lock(&reconn_list.mutex);
@@ -645,11 +640,10 @@ struct vhost_user_reconnect_list {
goto out;
}
}
- ret = vhost_user_create_client(vsocket);
} else {
vsocket->is_server = true;
- ret = vhost_user_create_server(vsocket);
}
+ ret = create_unix_socket(vsocket);
if (ret < 0) {
free(vsocket->path);
free(vsocket);
@@ -705,8 +699,8 @@ struct vhost_user_reconnect_list {
if (!strcmp(vsocket->path, path)) {
if (vsocket->is_server) {
- fdset_del(&vhost_user.fdset, vsocket->listenfd);
- close(vsocket->listenfd);
+ fdset_del(&vhost_user.fdset, vsocket->socket_fd);
+ close(vsocket->socket_fd);
unlink(path);
} else if (vsocket->reconnect) {
vhost_user_remove_reconnect(vsocket);
@@ -776,8 +770,28 @@ struct vhost_device_ops const *
}
int
-rte_vhost_driver_session_start(void)
+rte_vhost_driver_start(const char *path)
{
- fdset_event_dispatch(&vhost_user.fdset);
- return 0;
+ struct vhost_user_socket *vsocket;
+ static pthread_t fdset_tid;
+
+ pthread_mutex_lock(&vhost_user.mutex);
+ vsocket = find_vhost_user_socket(path);
+ pthread_mutex_unlock(&vhost_user.mutex);
+
+ if (!vsocket)
+ return -1;
+
+ if (fdset_tid == 0) {
+ int ret = pthread_create(&fdset_tid, NULL, fdset_event_dispatch,
+ &vhost_user.fdset);
+ if (ret < 0)
+ RTE_LOG(ERR, VHOST_CONFIG,
+ "failed to create fdset handling thread");
+ }
+
+ if (vsocket->is_server)
+ return vhost_user_start_server(vsocket);
+ else
+ return vhost_user_start_client(vsocket);
}
--
1.9.0
^ permalink raw reply [relevance 3%]
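For reference, a sketch of how the registration flags interact with the
client/reconnect logic reworked in this patch; that reconnect defaults to
on in client mode unless masked is an assumption suggested by the
RTE_VHOST_USER_NO_RECONNECT flag name:

	/* server mode (default): vhost creates and listens on the socket */
	rte_vhost_driver_register(path, 0);

	/* client mode, reconnecting automatically if the connection drops */
	rte_vhost_driver_register(path, RTE_VHOST_USER_CLIENT);

	/* client mode with reconnect explicitly disabled */
	rte_vhost_driver_register(path,
			RTE_VHOST_USER_CLIENT | RTE_VHOST_USER_NO_RECONNECT);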
* [dpdk-dev] [PATCH v4 14/22] vhost: rename device ops struct
` (3 preceding siblings ...)
2017-04-01 7:22 4% ` [dpdk-dev] [PATCH v4 13/22] vhost: do not include net specific headers Yuanhan Liu
@ 2017-04-01 7:22 4% ` Yuanhan Liu
2017-04-01 7:22 3% ` [dpdk-dev] [PATCH v4 18/22] vhost: introduce API to start a specific driver Yuanhan Liu
` (2 subsequent siblings)
7 siblings, 0 replies; 200+ results
From: Yuanhan Liu @ 2017-04-01 7:22 UTC (permalink / raw)
To: dev; +Cc: Maxime Coquelin, Harris James R, Liu Changpeng, Yuanhan Liu
rename "virtio_net_device_ops" to "vhost_device_ops", to not let it
be virtio-net specific.
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
doc/guides/prog_guide/vhost_lib.rst | 2 +-
doc/guides/rel_notes/release_17_05.rst | 3 +++
drivers/net/vhost/rte_eth_vhost.c | 2 +-
examples/tep_termination/main.c | 2 +-
examples/vhost/main.c | 2 +-
lib/librte_vhost/Makefile | 2 +-
lib/librte_vhost/rte_virtio_net.h | 4 ++--
lib/librte_vhost/socket.c | 6 +++---
lib/librte_vhost/vhost.h | 4 ++--
9 files changed, 15 insertions(+), 12 deletions(-)
diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index 40f3b3b..e6e34f3 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -122,7 +122,7 @@ The following is an overview of some key Vhost API functions:
starts an infinite loop, therefore it should be called in a dedicated
thread.
-* ``rte_vhost_driver_callback_register(path, virtio_net_device_ops)``
+* ``rte_vhost_driver_callback_register(path, vhost_device_ops)``
This function registers a set of callbacks, to let DPDK applications take
the appropriate action when some events happen. The following events are
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index ebc28f5..a400bd0 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -224,6 +224,9 @@ API Changes
* ``linux/if.h``
* ``rte_ether.h``
+ * The vhost struct ``virtio_net_device_ops`` is renamed to
+ ``vhost_device_ops``
+
ABI Changes
-----------
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index f6e49da..0b514cc 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -671,7 +671,7 @@ struct vhost_xstats_name_off {
return 0;
}
-static struct virtio_net_device_ops vhost_ops = {
+static struct vhost_device_ops vhost_ops = {
.new_device = new_device,
.destroy_device = destroy_device,
.vring_state_changed = vring_state_changed,
diff --git a/examples/tep_termination/main.c b/examples/tep_termination/main.c
index 18b977e..738f2d2 100644
--- a/examples/tep_termination/main.c
+++ b/examples/tep_termination/main.c
@@ -1081,7 +1081,7 @@ static inline void __attribute__((always_inline))
* These callback allow devices to be added to the data core when configuration
* has been fully complete.
*/
-static const struct virtio_net_device_ops virtio_net_device_ops = {
+static const struct vhost_device_ops virtio_net_device_ops = {
.new_device = new_device,
.destroy_device = destroy_device,
};
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 72a9d69..4395306 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -1270,7 +1270,7 @@ static inline void __attribute__((always_inline))
* These callback allow devices to be added to the data core when configuration
* has been fully complete.
*/
-static const struct virtio_net_device_ops virtio_net_device_ops =
+static const struct vhost_device_ops virtio_net_device_ops =
{
.new_device = new_device,
.destroy_device = destroy_device,
diff --git a/lib/librte_vhost/Makefile b/lib/librte_vhost/Makefile
index 1b224b3..1262dcc 100644
--- a/lib/librte_vhost/Makefile
+++ b/lib/librte_vhost/Makefile
@@ -36,7 +36,7 @@ LIB = librte_vhost.a
EXPORT_MAP := rte_vhost_version.map
-LIBABIVER := 3
+LIBABIVER := 4
CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3 -D_FILE_OFFSET_BITS=64
CFLAGS += -I vhost_user
diff --git a/lib/librte_vhost/rte_virtio_net.h b/lib/librte_vhost/rte_virtio_net.h
index 9915751..4287c68 100644
--- a/lib/librte_vhost/rte_virtio_net.h
+++ b/lib/librte_vhost/rte_virtio_net.h
@@ -87,7 +87,7 @@ struct rte_vhost_vring {
/**
* Device and vring operations.
*/
-struct virtio_net_device_ops {
+struct vhost_device_ops {
int (*new_device)(int vid); /**< Add device. */
void (*destroy_device)(int vid); /**< Remove device. */
@@ -202,7 +202,7 @@ static inline uint64_t __attribute__((always_inline))
/* Register callbacks. */
int rte_vhost_driver_callback_register(const char *path,
- struct virtio_net_device_ops const * const ops);
+ struct vhost_device_ops const * const ops);
/* Start vhost driver session blocking loop. */
int rte_vhost_driver_session_start(void);
diff --git a/lib/librte_vhost/socket.c b/lib/librte_vhost/socket.c
index aa948b9..3b68fc9 100644
--- a/lib/librte_vhost/socket.c
+++ b/lib/librte_vhost/socket.c
@@ -78,7 +78,7 @@ struct vhost_user_socket {
uint64_t supported_features;
uint64_t features;
- struct virtio_net_device_ops const *notify_ops;
+ struct vhost_device_ops const *notify_ops;
};
struct vhost_user_connection {
@@ -750,7 +750,7 @@ struct vhost_user_reconnect_list {
*/
int
rte_vhost_driver_callback_register(const char *path,
- struct virtio_net_device_ops const * const ops)
+ struct vhost_device_ops const * const ops)
{
struct vhost_user_socket *vsocket;
@@ -763,7 +763,7 @@ struct vhost_user_reconnect_list {
return vsocket ? 0 : -1;
}
-struct virtio_net_device_ops const *
+struct vhost_device_ops const *
vhost_driver_callback_get(const char *path)
{
struct vhost_user_socket *vsocket;
diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 672098b..225ff2e 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -191,7 +191,7 @@ struct virtio_net {
struct ether_addr mac;
uint16_t mtu;
- struct virtio_net_device_ops const *notify_ops;
+ struct vhost_device_ops const *notify_ops;
uint32_t nr_guest_pages;
uint32_t max_guest_pages;
@@ -265,7 +265,7 @@ static inline phys_addr_t __attribute__((always_inline))
void vhost_set_ifname(int, const char *if_name, unsigned int if_len);
void vhost_enable_dequeue_zero_copy(int vid);
-struct virtio_net_device_ops const *vhost_driver_callback_get(const char *path);
+struct vhost_device_ops const *vhost_driver_callback_get(const char *path);
/*
* Backend-specific cleanup.
--
1.9.0
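For applications, the rename above is mechanical; as a minimal sketch
(assuming the three callbacks are defined by the application, as in the
vhost PMD hunk above):

    static const struct vhost_device_ops ops = {
            .new_device = new_device,
            .destroy_device = destroy_device,
            .vring_state_changed = vring_state_changed,
    };

    /* Registration keeps the same per-socket-path signature. */
    rte_vhost_driver_callback_register(path, &ops);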
* [dpdk-dev] [PATCH v4 12/22] vhost: drop the Rx and Tx queue macro
2017-04-01 7:22 3% ` [dpdk-dev] [PATCH v4 04/22] vhost: make notify ops per vhost driver Yuanhan Liu
2017-04-01 7:22 4% ` [dpdk-dev] [PATCH v4 10/22] vhost: export the number of vrings Yuanhan Liu
@ 2017-04-01 7:22 4% ` Yuanhan Liu
2017-04-01 7:22 4% ` [dpdk-dev] [PATCH v4 13/22] vhost: do not include net specific headers Yuanhan Liu
` (4 subsequent siblings)
7 siblings, 0 replies; 200+ results
From: Yuanhan Liu @ 2017-04-01 7:22 UTC (permalink / raw)
To: dev; +Cc: Maxime Coquelin, Harris James R, Liu Changpeng, Yuanhan Liu
These macros are virtio-net specific and should be defined inside the
virtio-net driver.
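An application that still needs these values can carry a private
definition and derive vring indexes itself; a minimal sketch (the
helper below is hypothetical, not part of the patch):

    #include <stdint.h>

    enum {VIRTIO_RXQ, VIRTIO_TXQ, VIRTIO_QNUM};

    /* Vring index of the Tx ring of queue pair qp_id. */
    static inline uint16_t
    txq_vring_idx(uint16_t qp_id)
    {
            return qp_id * VIRTIO_QNUM + VIRTIO_TXQ;
    }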
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
v2: - update release note
---
doc/guides/rel_notes/release_17_05.rst | 6 ++++++
drivers/net/vhost/rte_eth_vhost.c | 2 ++
examples/tep_termination/main.h | 2 ++
examples/vhost/main.h | 2 ++
lib/librte_vhost/rte_virtio_net.h | 3 ---
5 files changed, 12 insertions(+), 3 deletions(-)
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index 5dc5b87..471a509 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -211,6 +211,12 @@ API Changes
* The vhost API ``rte_vhost_get_queue_num`` is deprecated, instead,
``rte_vhost_get_vring_num`` should be used.
+ * Following macros are removed in ``rte_virtio_net.h``
+
+ * ``VIRTIO_RXQ``
+ * ``VIRTIO_TXQ``
+ * ``VIRTIO_QNUM``
+
ABI Changes
-----------
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 5435bd6..f6e49da 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -45,6 +45,8 @@
#include "rte_eth_vhost.h"
+enum {VIRTIO_RXQ, VIRTIO_TXQ, VIRTIO_QNUM};
+
#define ETH_VHOST_IFACE_ARG "iface"
#define ETH_VHOST_QUEUES_ARG "queues"
#define ETH_VHOST_CLIENT_ARG "client"
diff --git a/examples/tep_termination/main.h b/examples/tep_termination/main.h
index c0ea766..8ed817d 100644
--- a/examples/tep_termination/main.h
+++ b/examples/tep_termination/main.h
@@ -54,6 +54,8 @@
/* Max number of devices. Limited by the application. */
#define MAX_DEVICES 64
+enum {VIRTIO_RXQ, VIRTIO_TXQ, VIRTIO_QNUM};
+
/* Per-device statistics struct */
struct device_statistics {
uint64_t tx_total;
diff --git a/examples/vhost/main.h b/examples/vhost/main.h
index 6bb42e8..7a3d251 100644
--- a/examples/vhost/main.h
+++ b/examples/vhost/main.h
@@ -41,6 +41,8 @@
#define RTE_LOGTYPE_VHOST_DATA RTE_LOGTYPE_USER2
#define RTE_LOGTYPE_VHOST_PORT RTE_LOGTYPE_USER3
+enum {VIRTIO_RXQ, VIRTIO_TXQ, VIRTIO_QNUM};
+
struct device_statistics {
uint64_t tx;
uint64_t tx_total;
diff --git a/lib/librte_vhost/rte_virtio_net.h b/lib/librte_vhost/rte_virtio_net.h
index 9c1809e..c6e11e9 100644
--- a/lib/librte_vhost/rte_virtio_net.h
+++ b/lib/librte_vhost/rte_virtio_net.h
@@ -55,9 +55,6 @@
#define RTE_VHOST_USER_NO_RECONNECT (1ULL << 1)
#define RTE_VHOST_USER_DEQUEUE_ZERO_COPY (1ULL << 2)
-/* Enum for virtqueue management. */
-enum {VIRTIO_RXQ, VIRTIO_TXQ, VIRTIO_QNUM};
-
/**
* Information relating to memory regions including offsets to
* addresses in QEMUs memory file.
--
1.9.0
* [dpdk-dev] [PATCH v4 13/22] vhost: do not include net specific headers
` (2 preceding siblings ...)
2017-04-01 7:22 4% ` [dpdk-dev] [PATCH v4 12/22] vhost: drop the Rx and Tx queue macro Yuanhan Liu
@ 2017-04-01 7:22 4% ` Yuanhan Liu
2017-04-01 7:22 4% ` [dpdk-dev] [PATCH v4 14/22] vhost: rename device ops struct Yuanhan Liu
` (3 subsequent siblings)
7 siblings, 0 replies; 200+ results
From: Yuanhan Liu @ 2017-04-01 7:22 UTC (permalink / raw)
To: dev; +Cc: Maxime Coquelin, Harris James R, Liu Changpeng, Yuanhan Liu
Include them internally, in vhost.h.
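For example (a sketch, not taken from the patch), an application that
was getting struct ether_addr transitively through rte_virtio_net.h now
includes the net header itself, as the examples/vhost hunk below does:

    #include <rte_ether.h>          /* no longer pulled in for us */
    #include <rte_virtio_net.h>

    static struct ether_addr peer_mac; /* hypothetical application state */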
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
v2: - update release note
---
doc/guides/rel_notes/release_17_05.rst | 7 +++++++
examples/vhost/main.h | 2 ++
lib/librte_vhost/rte_virtio_net.h | 4 ----
lib/librte_vhost/vhost.h | 4 ++++
4 files changed, 13 insertions(+), 4 deletions(-)
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index 471a509..ebc28f5 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -217,6 +217,13 @@ API Changes
* ``VIRTIO_TXQ``
* ``VIRTIO_QNUM``
+ * Following net specific header files are removed in ``rte_virtio_net.h``
+
+ * ``linux/virtio_net.h``
+ * ``sys/socket.h``
+ * ``linux/if.h``
+ * ``rte_ether.h``
+
ABI Changes
-----------
diff --git a/examples/vhost/main.h b/examples/vhost/main.h
index 7a3d251..ddcd858 100644
--- a/examples/vhost/main.h
+++ b/examples/vhost/main.h
@@ -36,6 +36,8 @@
#include <sys/queue.h>
+#include <rte_ether.h>
+
/* Macros for printing using RTE_LOG */
#define RTE_LOGTYPE_VHOST_CONFIG RTE_LOGTYPE_USER1
#define RTE_LOGTYPE_VHOST_DATA RTE_LOGTYPE_USER2
diff --git a/lib/librte_vhost/rte_virtio_net.h b/lib/librte_vhost/rte_virtio_net.h
index c6e11e9..9915751 100644
--- a/lib/librte_vhost/rte_virtio_net.h
+++ b/lib/librte_vhost/rte_virtio_net.h
@@ -42,14 +42,10 @@
#include <stdint.h>
#include <linux/vhost.h>
#include <linux/virtio_ring.h>
-#include <linux/virtio_net.h>
#include <sys/eventfd.h>
-#include <sys/socket.h>
-#include <linux/if.h>
#include <rte_memory.h>
#include <rte_mempool.h>
-#include <rte_ether.h>
#define RTE_VHOST_USER_CLIENT (1ULL << 0)
#define RTE_VHOST_USER_NO_RECONNECT (1ULL << 1)
diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 84e379a..672098b 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -39,8 +39,12 @@
#include <sys/queue.h>
#include <unistd.h>
#include <linux/vhost.h>
+#include <linux/virtio_net.h>
+#include <sys/socket.h>
+#include <linux/if.h>
#include <rte_log.h>
+#include <rte_ether.h>
#include "rte_virtio_net.h"
--
1.9.0
* [dpdk-dev] [PATCH v4 10/22] vhost: export the number of vrings
2017-04-01 7:22 3% ` [dpdk-dev] [PATCH v4 04/22] vhost: make notify ops per vhost driver Yuanhan Liu
@ 2017-04-01 7:22 4% ` Yuanhan Liu
2017-04-01 7:22 4% ` [dpdk-dev] [PATCH v4 12/22] vhost: drop the Rx and Tx queue macro Yuanhan Liu
` (5 subsequent siblings)
7 siblings, 0 replies; 200+ results
From: Yuanhan Liu @ 2017-04-01 7:22 UTC (permalink / raw)
To: dev; +Cc: Maxime Coquelin, Harris James R, Liu Changpeng, Yuanhan Liu
We used to call rte_vhost_get_queue_num() to tell how many vrings a
device has. However, it returns the number of "queue pairs", which is
very virtio-net specific. To make the API generic, return the number
of vrings instead and let the driver do the proper translation; the
virtio-net driver, for example, can derive the number of queue pairs
by dividing by 2.
Meanwhile, mark rte_vhost_get_queue_num() as deprecated.
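As a standalone sketch of the migration (mirroring the vhost PMD hunk
below; the loop body is only illustrative):

    uint32_t i;

    /* Before: a queue-pair count, a virtio-net notion. */
    for (i = 0; i < rte_vhost_get_queue_num(vid) * VIRTIO_QNUM; i++)
            rte_vhost_enable_guest_notification(vid, i, 0);

    /* After: a generic vring count, no VIRTIO_QNUM needed. */
    for (i = 0; i < rte_vhost_get_vring_num(vid); i++)
            rte_vhost_enable_guest_notification(vid, i, 0);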
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
v2: - update release note
---
doc/guides/rel_notes/release_17_05.rst | 3 +++
drivers/net/vhost/rte_eth_vhost.c | 2 +-
lib/librte_vhost/rte_vhost_version.map | 1 +
lib/librte_vhost/rte_virtio_net.h | 17 +++++++++++++++++
lib/librte_vhost/vhost.c | 11 +++++++++++
5 files changed, 33 insertions(+), 1 deletion(-)
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index 2a4a480..5dc5b87 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -208,6 +208,9 @@ API Changes
be per vhost-user socket file. Thus, it takes one more argument:
``rte_vhost_driver_callback_register(path, ops)``.
+ * The vhost API ``rte_vhost_get_queue_num`` is deprecated, instead,
+ ``rte_vhost_get_vring_num`` should be used.
+
ABI Changes
-----------
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 7504f89..5435bd6 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -569,7 +569,7 @@ struct vhost_xstats_name_off {
vq->port = eth_dev->data->port_id;
}
- for (i = 0; i < rte_vhost_get_queue_num(vid) * VIRTIO_QNUM; i++)
+ for (i = 0; i < rte_vhost_get_vring_num(vid); i++)
rte_vhost_enable_guest_notification(vid, i, 0);
rte_vhost_get_mtu(vid, &eth_dev->data->mtu);
diff --git a/lib/librte_vhost/rte_vhost_version.map b/lib/librte_vhost/rte_vhost_version.map
index 2b309b2..8df14dc 100644
--- a/lib/librte_vhost/rte_vhost_version.map
+++ b/lib/librte_vhost/rte_vhost_version.map
@@ -39,6 +39,7 @@ DPDK_17.05 {
rte_vhost_get_mtu;
rte_vhost_get_negotiated_features;
rte_vhost_get_vhost_vring;
+ rte_vhost_get_vring_num;
rte_vhost_gpa_to_vva;
} DPDK_16.07;
diff --git a/lib/librte_vhost/rte_virtio_net.h b/lib/librte_vhost/rte_virtio_net.h
index e019f98..9c1809e 100644
--- a/lib/librte_vhost/rte_virtio_net.h
+++ b/lib/librte_vhost/rte_virtio_net.h
@@ -241,17 +241,34 @@ int rte_vhost_driver_callback_register(const char *path,
int rte_vhost_get_numa_node(int vid);
/**
+ * @deprecated
* Get the number of queues the device supports.
*
+ * Note this function is deprecated, as it returns a queue pair number,
+ * which is virtio-net specific. Instead, rte_vhost_get_vring_num should
+ * be used.
+ *
* @param vid
* virtio-net device ID
*
* @return
* The number of queues, 0 on failure
*/
+__rte_deprecated
uint32_t rte_vhost_get_queue_num(int vid);
/**
+ * Get the number of vrings the device supports.
+ *
+ * @param vid
+ * vhost device ID
+ *
+ * @return
+ * The number of vrings, 0 on failure
+ */
+uint16_t rte_vhost_get_vring_num(int vid);
+
+/**
* Get the virtio net device's ifname, which is the vhost-user socket
* file path.
*
diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index f0ed729..d57d4b2 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -317,6 +317,17 @@ struct virtio_net *
return dev->nr_vring / 2;
}
+uint16_t
+rte_vhost_get_vring_num(int vid)
+{
+ struct virtio_net *dev = get_device(vid);
+
+ if (dev == NULL)
+ return 0;
+
+ return dev->nr_vring;
+}
+
int
rte_vhost_get_ifname(int vid, char *buf, size_t len)
{
--
1.9.0
* [dpdk-dev] [PATCH v4 04/22] vhost: make notify ops per vhost driver
@ 2017-04-01 7:22 3% ` Yuanhan Liu
2017-04-01 7:22 4% ` [dpdk-dev] [PATCH v4 10/22] vhost: export the number of vrings Yuanhan Liu
` (6 subsequent siblings)
7 siblings, 0 replies; 200+ results
From: Yuanhan Liu @ 2017-04-01 7:22 UTC (permalink / raw)
To: dev; +Cc: Maxime Coquelin, Harris James R, Liu Changpeng, Yuanhan Liu
Assume there is an application that supports both vhost-user net and
vhost-user scsi; the callbacks for the two should be different. Making
the notify ops per vhost driver allows the application to define a
different set of callbacks for each driver.
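A minimal sketch of what this enables, assuming two hypothetical
callback tables (net_ops and scsi_ops) defined by the application:

    /* Each socket file now gets its own set of callbacks. */
    if (rte_vhost_driver_callback_register("/tmp/vhost-net.sock",
                    &net_ops) < 0)
            rte_exit(EXIT_FAILURE, "failed to register net callbacks\n");

    if (rte_vhost_driver_callback_register("/tmp/vhost-scsi.sock",
                    &scsi_ops) < 0)
            rte_exit(EXIT_FAILURE, "failed to register scsi callbacks\n");

(Registration must follow rte_vhost_driver_register() for each path;
both tables use struct virtio_net_device_ops, renamed later in this
series.)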
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
v2: - check the return value of callback_register and callback_get
- update release note
---
doc/guides/prog_guide/vhost_lib.rst | 2 +-
doc/guides/rel_notes/release_17_05.rst | 4 ++++
drivers/net/vhost/rte_eth_vhost.c | 20 +++++++++++---------
examples/tep_termination/main.c | 7 ++++++-
examples/vhost/main.c | 9 +++++++--
lib/librte_vhost/rte_virtio_net.h | 3 ++-
lib/librte_vhost/socket.c | 32 ++++++++++++++++++++++++++++++++
lib/librte_vhost/vhost.c | 16 +---------------
lib/librte_vhost/vhost.h | 5 ++++-
lib/librte_vhost/vhost_user.c | 22 ++++++++++++++++------
10 files changed, 84 insertions(+), 36 deletions(-)
diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index 6a4d206..40f3b3b 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -122,7 +122,7 @@ The following is an overview of some key Vhost API functions:
starts an infinite loop, therefore it should be called in a dedicated
thread.
-* ``rte_vhost_driver_callback_register(virtio_net_device_ops)``
+* ``rte_vhost_driver_callback_register(path, virtio_net_device_ops)``
This function registers a set of callbacks, to let DPDK applications take
the appropriate action when some events happen. The following events are
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index e0432ea..2a4a480 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -204,6 +204,10 @@ API Changes
* ``rte_eth_vhost_feature_enable``
* ``rte_eth_vhost_feature_get``
+ * The vhost API ``rte_vhost_driver_callback_register(ops)`` is reworked to
+ be per vhost-user socket file. Thus, it takes one more argument:
+ ``rte_vhost_driver_callback_register(path, ops)``.
+
ABI Changes
-----------
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 762509b..7504f89 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -669,6 +669,12 @@ struct vhost_xstats_name_off {
return 0;
}
+static struct virtio_net_device_ops vhost_ops = {
+ .new_device = new_device,
+ .destroy_device = destroy_device,
+ .vring_state_changed = vring_state_changed,
+};
+
int
rte_eth_vhost_get_queue_event(uint8_t port_id,
struct rte_eth_vhost_queue_event *event)
@@ -738,15 +744,6 @@ struct vhost_xstats_name_off {
static void *
vhost_driver_session(void *param __rte_unused)
{
- static struct virtio_net_device_ops vhost_ops;
-
- /* set vhost arguments */
- vhost_ops.new_device = new_device;
- vhost_ops.destroy_device = destroy_device;
- vhost_ops.vring_state_changed = vring_state_changed;
- if (rte_vhost_driver_callback_register(&vhost_ops) < 0)
- RTE_LOG(ERR, PMD, "Can't register callbacks\n");
-
/* start event handling */
rte_vhost_driver_session_start();
@@ -1090,6 +1087,11 @@ struct vhost_xstats_name_off {
if (rte_vhost_driver_register(iface_name, flags))
goto error;
+ if (rte_vhost_driver_callback_register(iface_name, &vhost_ops) < 0) {
+ RTE_LOG(ERR, PMD, "Can't register callbacks\n");
+ goto error;
+ }
+
/* We need only one message handling thread */
if (rte_atomic16_add_return(&nb_started_ports, 1) == 1) {
if (vhost_driver_session_start())
diff --git a/examples/tep_termination/main.c b/examples/tep_termination/main.c
index 8097dcd..18b977e 100644
--- a/examples/tep_termination/main.c
+++ b/examples/tep_termination/main.c
@@ -1256,7 +1256,12 @@ static inline void __attribute__((always_inline))
rte_vhost_driver_disable_features(dev_basename,
1ULL << VIRTIO_NET_F_MRG_RXBUF);
- rte_vhost_driver_callback_register(&virtio_net_device_ops);
+ ret = rte_vhost_driver_callback_register(dev_basename,
+ &virtio_net_device_ops);
+ if (ret != 0) {
+ rte_exit(EXIT_FAILURE,
+ "failed to register vhost driver callbacks.\n");
+ }
rte_vhost_driver_session_start();
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 972a6a8..72a9d69 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -1538,9 +1538,14 @@ static inline void __attribute__((always_inline))
rte_vhost_driver_enable_features(file,
1ULL << VIRTIO_NET_F_CTRL_RX);
}
- }
- rte_vhost_driver_callback_register(&virtio_net_device_ops);
+ ret = rte_vhost_driver_callback_register(file,
+ &virtio_net_device_ops);
+ if (ret != 0) {
+ rte_exit(EXIT_FAILURE,
+ "failed to register vhost driver callbacks.\n");
+ }
+ }
rte_vhost_driver_session_start();
return 0;
diff --git a/lib/librte_vhost/rte_virtio_net.h b/lib/librte_vhost/rte_virtio_net.h
index 90db986..8c8e67e 100644
--- a/lib/librte_vhost/rte_virtio_net.h
+++ b/lib/librte_vhost/rte_virtio_net.h
@@ -135,7 +135,8 @@ struct virtio_net_device_ops {
int rte_vhost_driver_get_features(const char *path, uint64_t *features);
/* Register callbacks. */
-int rte_vhost_driver_callback_register(struct virtio_net_device_ops const * const);
+int rte_vhost_driver_callback_register(const char *path,
+ struct virtio_net_device_ops const * const ops);
/* Start vhost driver session blocking loop. */
int rte_vhost_driver_session_start(void);
diff --git a/lib/librte_vhost/socket.c b/lib/librte_vhost/socket.c
index 416b1fd..aa948b9 100644
--- a/lib/librte_vhost/socket.c
+++ b/lib/librte_vhost/socket.c
@@ -77,6 +77,8 @@ struct vhost_user_socket {
*/
uint64_t supported_features;
uint64_t features;
+
+ struct virtio_net_device_ops const *notify_ops;
};
struct vhost_user_connection {
@@ -743,6 +745,36 @@ struct vhost_user_reconnect_list {
return -1;
}
+/*
+ * Register ops so that we can add/remove device to data core.
+ */
+int
+rte_vhost_driver_callback_register(const char *path,
+ struct virtio_net_device_ops const * const ops)
+{
+ struct vhost_user_socket *vsocket;
+
+ pthread_mutex_lock(&vhost_user.mutex);
+ vsocket = find_vhost_user_socket(path);
+ if (vsocket)
+ vsocket->notify_ops = ops;
+ pthread_mutex_unlock(&vhost_user.mutex);
+
+ return vsocket ? 0 : -1;
+}
+
+struct virtio_net_device_ops const *
+vhost_driver_callback_get(const char *path)
+{
+ struct vhost_user_socket *vsocket;
+
+ pthread_mutex_lock(&vhost_user.mutex);
+ vsocket = find_vhost_user_socket(path);
+ pthread_mutex_unlock(&vhost_user.mutex);
+
+ return vsocket ? vsocket->notify_ops : NULL;
+}
+
int
rte_vhost_driver_session_start(void)
{
diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index 7b40a92..7d7bb3c 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -51,9 +51,6 @@
struct virtio_net *vhost_devices[MAX_VHOST_DEVICE];
-/* device ops to add/remove device to/from data core. */
-struct virtio_net_device_ops const *notify_ops;
-
struct virtio_net *
get_device(int vid)
{
@@ -253,7 +250,7 @@ struct virtio_net *
if (dev->flags & VIRTIO_DEV_RUNNING) {
dev->flags &= ~VIRTIO_DEV_RUNNING;
- notify_ops->destroy_device(vid);
+ dev->notify_ops->destroy_device(vid);
}
cleanup_device(dev, 1);
@@ -396,14 +393,3 @@ struct virtio_net *
dev->virtqueue[queue_id]->used->flags = VRING_USED_F_NO_NOTIFY;
return 0;
}
-
-/*
- * Register ops so that we can add/remove device to data core.
- */
-int
-rte_vhost_driver_callback_register(struct virtio_net_device_ops const * const ops)
-{
- notify_ops = ops;
-
- return 0;
-}
diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 692691b..6186216 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -185,6 +185,8 @@ struct virtio_net {
struct ether_addr mac;
uint16_t mtu;
+ struct virtio_net_device_ops const *notify_ops;
+
uint32_t nr_guest_pages;
uint32_t max_guest_pages;
struct guest_page *guest_pages;
@@ -288,7 +290,6 @@ static inline phys_addr_t __attribute__((always_inline))
return 0;
}
-struct virtio_net_device_ops const *notify_ops;
struct virtio_net *get_device(int vid);
int vhost_new_device(void);
@@ -301,6 +302,8 @@ static inline phys_addr_t __attribute__((always_inline))
void vhost_set_ifname(int, const char *if_name, unsigned int if_len);
void vhost_enable_dequeue_zero_copy(int vid);
+struct virtio_net_device_ops const *vhost_driver_callback_get(const char *path);
+
/*
* Backend-specific cleanup.
*
diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
index 72eb368..7fe07bc 100644
--- a/lib/librte_vhost/vhost_user.c
+++ b/lib/librte_vhost/vhost_user.c
@@ -135,7 +135,7 @@
{
if (dev->flags & VIRTIO_DEV_RUNNING) {
dev->flags &= ~VIRTIO_DEV_RUNNING;
- notify_ops->destroy_device(dev->vid);
+ dev->notify_ops->destroy_device(dev->vid);
}
cleanup_device(dev, 0);
@@ -509,7 +509,7 @@
/* Remove from the data plane. */
if (dev->flags & VIRTIO_DEV_RUNNING) {
dev->flags &= ~VIRTIO_DEV_RUNNING;
- notify_ops->destroy_device(dev->vid);
+ dev->notify_ops->destroy_device(dev->vid);
}
if (dev->mem) {
@@ -693,7 +693,7 @@
"dequeue zero copy is enabled\n");
}
- if (notify_ops->new_device(dev->vid) == 0)
+ if (dev->notify_ops->new_device(dev->vid) == 0)
dev->flags |= VIRTIO_DEV_RUNNING;
}
}
@@ -727,7 +727,7 @@
/* We have to stop the queue (virtio) if it is running. */
if (dev->flags & VIRTIO_DEV_RUNNING) {
dev->flags &= ~VIRTIO_DEV_RUNNING;
- notify_ops->destroy_device(dev->vid);
+ dev->notify_ops->destroy_device(dev->vid);
}
dev->flags &= ~VIRTIO_DEV_READY;
@@ -769,8 +769,8 @@
"set queue enable: %d to qp idx: %d\n",
enable, state->index);
- if (notify_ops->vring_state_changed)
- notify_ops->vring_state_changed(dev->vid, state->index, enable);
+ if (dev->notify_ops->vring_state_changed)
+ dev->notify_ops->vring_state_changed(dev->vid, state->index, enable);
dev->virtqueue[state->index]->enabled = enable;
@@ -984,6 +984,16 @@
if (dev == NULL)
return -1;
+ if (!dev->notify_ops) {
+ dev->notify_ops = vhost_driver_callback_get(dev->ifname);
+ if (!dev->notify_ops) {
+ RTE_LOG(ERR, VHOST_CONFIG,
+ "failed to get callback ops for driver %s\n",
+ dev->ifname);
+ return -1;
+ }
+ }
+
ret = read_vhost_message(fd, &msg);
if (ret <= 0 || msg.request >= VHOST_USER_MAX) {
if (ret < 0)
--
1.9.0