From: "Gujjar, Abhinandan S" <abhinandan.gujjar@intel.com>
To: Shijith Thotton <sthotton@marvell.com>,
"dev@dpdk.org" <dev@dpdk.org>,
"jerinj@marvell.com" <jerinj@marvell.com>
Subject: RE: [PATCH v5] app/eventdev: add crypto producer mode
Date: Wed, 16 Feb 2022 04:47:01 +0000 [thread overview]
Message-ID: <PH0PR11MB482417AB9832814EE3F10289E8359@PH0PR11MB4824.namprd11.prod.outlook.com> (raw)
In-Reply-To: <b5e04463920e20fe6f272a2ebb47d8324dad8c97.1644944095.git.sthotton@marvell.com>
Hi Shijith,
> -----Original Message-----
> From: Shijith Thotton <sthotton@marvell.com>
> Sent: Tuesday, February 15, 2022 10:27 PM
> To: dev@dpdk.org; jerinj@marvell.com
> Cc: Shijith Thotton <sthotton@marvell.com>; Gujjar, Abhinandan S
> <abhinandan.gujjar@intel.com>
> Subject: [PATCH v5] app/eventdev: add crypto producer mode
>
> In crypto producer mode, the producer core enqueues software-generated
> crypto ops to the cryptodev, and the worker core dequeues the resulting
> crypto completion events from the eventdev. The event crypto metadata
> used for this processing is pre-populated in each crypto session.
>
> The --prod_type_cryptodev parameter enables crypto producer mode, and
> the --crypto_adptr_mode parameter selects the crypto adapter mode:
> 0 for OP_NEW and 1 for OP_FORWARD.
>
> This mode can be used to measure the performance of the crypto adapter.
>
> Example:
> ./dpdk-test-eventdev -l 0-2 -w <EVENTDEV> -w <CRYPTODEV> -- \
> --prod_type_cryptodev --crypto_adptr_mode 1 --test=perf_atq \
> --stlist=a --wlcores 1 --plcores 2
>
I still see errors with both OP_NEW and OP_FORWARD modes and cannot run the test.
root@xdp-dev:/home/intel/abhi/dpdk-next-eventdev/abhi# ./app/dpdk-test-eventdev -l 0-8 -s 0xf0 --vdev=event_sw0 --vdev="crypto_null" -- --prod_type_cryptodev --crypto_adptr_mode 1 --test=perf_queue --stlist=a --wlcores 1 --plcores 2
EAL: Detected CPU lcores: 96
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
CRYPTODEV: Creating cryptodev crypto_null
CRYPTODEV: Initialisation parameters - name: crypto_null,socket id: 0, max queue pairs: 8
TELEMETRY: No legacy callbacks, legacy socket not created
driver : event_sw
test : perf_queue
dev : 0
verbose_level : 1
socket_id : -1
pool_sz : 16384
main lcore : 0
nb_pkts : 67108864
nb_timers : 100000000
available lcores : {0 1 2 3 8}
nb_flows : 1024
worker deq depth : 16
fwd_latency : false
nb_prod_lcores : 1
producer lcores : {2}
nb_worker_lcores : 1
worker lcores : {1}
nb_stages : 1
nb_evdev_ports : 2
nb_evdev_queues : 1
queue_priority : false
sched_type_list : {A}
crypto adapter mode : OP_FORWARD
nb_cryptodev : 1
prod_type : Event crypto adapter producers
prod_enq_burst_sz : 1
CRYPTODEV: elt_size 0 is expanded to 208
error: perf_event_crypto_adapter_setup() crypto adapter OP_FORWARD mode unsupported
error: main() perf_queue: eventdev setup failed
> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> ---
> v5:
> * Rebased.
>
> v4:
> * Addressed comments on v3 and rebased.
> * Added cryptodev cleanup in signal handler.
>
> v3:
> * Reduce dereference inside loop.
>
> v2:
> * Fix RHEL compilation warning.
>
> app/test-eventdev/evt_common.h | 3 +
> app/test-eventdev/evt_main.c | 17 +-
> app/test-eventdev/evt_options.c | 27 ++
> app/test-eventdev/evt_options.h | 12 +
> app/test-eventdev/evt_test.h | 6 +
> app/test-eventdev/test_perf_atq.c | 49 +++
> app/test-eventdev/test_perf_common.c | 425 ++++++++++++++++++++++++++-
> app/test-eventdev/test_perf_common.h | 18 +-
> app/test-eventdev/test_perf_queue.c | 50 ++++
> doc/guides/tools/testeventdev.rst | 13 +
> 10 files changed, 612 insertions(+), 8 deletions(-)
>
> diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
> index f466434459..2f301a7e79 100644
> --- a/app/test-eventdev/evt_common.h
> +++ b/app/test-eventdev/evt_common.h
> @@ -7,6 +7,7 @@
>
> #include <rte_common.h>
> #include <rte_debug.h>
> +#include <rte_event_crypto_adapter.h>
> #include <rte_eventdev.h>
> #include <rte_service.h>
>
> @@ -39,6 +40,7 @@ enum evt_prod_type {
> 	EVT_PROD_TYPE_SYNT, /* Producer type Synthetic i.e. CPU. */
> 	EVT_PROD_TYPE_ETH_RX_ADPTR, /* Producer type Eth Rx Adapter. */
> 	EVT_PROD_TYPE_EVENT_TIMER_ADPTR, /* Producer type Timer Adapter. */
> +	EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR, /* Producer type Crypto Adapter. */
> 	EVT_PROD_TYPE_MAX,
> };
>
> @@ -77,6 +79,7 @@ struct evt_options {
> uint64_t timer_tick_nsec;
> uint64_t optm_timer_tick_nsec;
> enum evt_prod_type prod_type;
> + enum rte_event_crypto_adapter_mode crypto_adptr_mode;
> };
>
> static inline bool
> diff --git a/app/test-eventdev/evt_main.c b/app/test-eventdev/evt_main.c
> index 194c980c7a..a7d6b0c1cf 100644
> --- a/app/test-eventdev/evt_main.c
> +++ b/app/test-eventdev/evt_main.c
> @@ -35,6 +35,9 @@ signal_handler(int signum)
> if (test->ops.ethdev_destroy)
> test->ops.ethdev_destroy(test, &opt);
>
> + if (test->ops.cryptodev_destroy)
> + test->ops.cryptodev_destroy(test, &opt);
> +
> rte_eal_mp_wait_lcore();
>
> if (test->ops.test_result)
> @@ -162,11 +165,19 @@ main(int argc, char **argv)
> }
> }
>
> + /* Test specific cryptodev setup */
> + if (test->ops.cryptodev_setup) {
> + if (test->ops.cryptodev_setup(test, &opt)) {
> + evt_err("%s: cryptodev setup failed",
> opt.test_name);
> + goto ethdev_destroy;
> + }
> + }
> +
> /* Test specific eventdev setup */
> if (test->ops.eventdev_setup) {
> if (test->ops.eventdev_setup(test, &opt)) {
> evt_err("%s: eventdev setup failed", opt.test_name);
> - goto ethdev_destroy;
> + goto cryptodev_destroy;
> }
> }
>
> @@ -197,6 +208,10 @@ main(int argc, char **argv)
> if (test->ops.eventdev_destroy)
> test->ops.eventdev_destroy(test, &opt);
>
> +cryptodev_destroy:
> + if (test->ops.cryptodev_destroy)
> + test->ops.cryptodev_destroy(test, &opt);
> +
> ethdev_destroy:
> if (test->ops.ethdev_destroy)
> 		test->ops.ethdev_destroy(test, &opt);
> diff --git a/app/test-eventdev/evt_options.c b/app/test-eventdev/evt_options.c
> index 4ae44801da..d3c704d2b3 100644
> --- a/app/test-eventdev/evt_options.c
> +++ b/app/test-eventdev/evt_options.c
> @@ -122,6 +122,26 @@ evt_parse_timer_prod_type_burst(struct evt_options *opt,
> return 0;
> }
>
> +static int
> +evt_parse_crypto_prod_type(struct evt_options *opt,
> + const char *arg __rte_unused)
> +{
> + opt->prod_type = EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR;
> + return 0;
> +}
> +
> +static int
> +evt_parse_crypto_adptr_mode(struct evt_options *opt, const char *arg)
> +{
> +	uint8_t mode;
> +	int ret;
> +
> +	ret = parser_read_uint8(&mode, arg);
> +	opt->crypto_adptr_mode = mode ? RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD :
> +					RTE_EVENT_CRYPTO_ADAPTER_OP_NEW;
> +	return ret;
> +}
> +
> static int
> evt_parse_test_name(struct evt_options *opt, const char *arg)
> {
> @@ -335,6 +355,7 @@ usage(char *program)
> "\t--queue_priority : enable queue priority\n"
> "\t--deq_tmo_nsec : global dequeue timeout\n"
> "\t--prod_type_ethdev : use ethernet device as producer.\n"
> +		"\t--prod_type_cryptodev : use crypto device as producer.\n"
> 		"\t--prod_type_timerdev : use event timer device as producer.\n"
> 		"\t                       expiry_nsec would be the timeout\n"
> 		"\t                       in ns.\n"
> @@ -345,6 +366,8 @@ usage(char *program)
> "\t--timer_tick_nsec : timer tick interval in ns.\n"
> "\t--max_tmo_nsec : max timeout interval in ns.\n"
> "\t--expiry_nsec : event timer expiry ns.\n"
> +		"\t--crypto_adptr_mode : 0 for OP_NEW mode (default) and\n"
> +		"\t                      1 for OP_FORWARD mode.\n"
> "\t--mbuf_sz : packet mbuf size.\n"
> "\t--max_pkt_sz : max packet size.\n"
> "\t--prod_enq_burst_sz : producer enqueue burst size.\n"
> @@ -415,8 +438,10 @@ static struct option lgopts[] = {
> { EVT_QUEUE_PRIORITY, 0, 0, 0 },
> { EVT_DEQ_TMO_NSEC, 1, 0, 0 },
> { EVT_PROD_ETHDEV, 0, 0, 0 },
> + { EVT_PROD_CRYPTODEV, 0, 0, 0 },
> { EVT_PROD_TIMERDEV, 0, 0, 0 },
> { EVT_PROD_TIMERDEV_BURST, 0, 0, 0 },
> + { EVT_CRYPTO_ADPTR_MODE, 1, 0, 0 },
> { EVT_NB_TIMERS, 1, 0, 0 },
> { EVT_NB_TIMER_ADPTRS, 1, 0, 0 },
> { EVT_TIMER_TICK_NSEC, 1, 0, 0 },
> @@ -455,8 +480,10 @@ evt_opts_parse_long(int opt_idx, struct evt_options *opt)
> { EVT_QUEUE_PRIORITY, evt_parse_queue_priority},
> { EVT_DEQ_TMO_NSEC, evt_parse_deq_tmo_nsec},
> { EVT_PROD_ETHDEV, evt_parse_eth_prod_type},
> + { EVT_PROD_CRYPTODEV, evt_parse_crypto_prod_type},
> { EVT_PROD_TIMERDEV, evt_parse_timer_prod_type},
> 	{ EVT_PROD_TIMERDEV_BURST, evt_parse_timer_prod_type_burst},
> +	{ EVT_CRYPTO_ADPTR_MODE, evt_parse_crypto_adptr_mode},
> { EVT_NB_TIMERS, evt_parse_nb_timers},
> { EVT_NB_TIMER_ADPTRS, evt_parse_nb_timer_adptrs},
> 	{ EVT_TIMER_TICK_NSEC, evt_parse_timer_tick_nsec},
> diff --git a/app/test-eventdev/evt_options.h b/app/test-eventdev/evt_options.h
> index 413d7092f0..2231c58801 100644
> --- a/app/test-eventdev/evt_options.h
> +++ b/app/test-eventdev/evt_options.h
> @@ -9,6 +9,7 @@
> #include <stdbool.h>
>
> #include <rte_common.h>
> +#include <rte_cryptodev.h>
> #include <rte_ethdev.h>
> #include <rte_eventdev.h>
> #include <rte_lcore.h>
> @@ -33,8 +34,10 @@
> #define EVT_QUEUE_PRIORITY ("queue_priority")
> #define EVT_DEQ_TMO_NSEC ("deq_tmo_nsec")
> #define EVT_PROD_ETHDEV ("prod_type_ethdev")
> +#define EVT_PROD_CRYPTODEV ("prod_type_cryptodev")
> #define EVT_PROD_TIMERDEV ("prod_type_timerdev")
> #define EVT_PROD_TIMERDEV_BURST ("prod_type_timerdev_burst")
> +#define EVT_CRYPTO_ADPTR_MODE ("crypto_adptr_mode")
> #define EVT_NB_TIMERS ("nb_timers")
> #define EVT_NB_TIMER_ADPTRS ("nb_timer_adptrs")
> #define EVT_TIMER_TICK_NSEC ("timer_tick_nsec")
> @@ -249,6 +252,8 @@ evt_prod_id_to_name(enum evt_prod_type prod_type)
> return "Ethdev Rx Adapter";
> case EVT_PROD_TYPE_EVENT_TIMER_ADPTR:
> return "Event timer adapter";
> + case EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR:
> + return "Event crypto adapter";
> }
>
> return "";
> @@ -288,6 +293,13 @@ evt_dump_producer_type(struct evt_options *opt)
> evt_dump("timer_tick_nsec", "%"PRIu64"",
> opt->timer_tick_nsec);
> break;
> + case EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR:
> + snprintf(name, EVT_PROD_MAX_NAME_LEN,
> + "Event crypto adapter producers");
> +		evt_dump("crypto adapter mode", "%s",
> +			opt->crypto_adptr_mode ? "OP_FORWARD" : "OP_NEW");
> + evt_dump("nb_cryptodev", "%u", rte_cryptodev_count());
> + break;
> }
> evt_dump("prod_type", "%s", name);
> }
> diff --git a/app/test-eventdev/evt_test.h b/app/test-eventdev/evt_test.h
> index f07d2c3336..50fa474ec2 100644
> --- a/app/test-eventdev/evt_test.h
> +++ b/app/test-eventdev/evt_test.h
> @@ -29,6 +29,8 @@ typedef int (*evt_test_mempool_setup_t)
> 		(struct evt_test *test, struct evt_options *opt);
>  typedef int (*evt_test_ethdev_setup_t)
> 		(struct evt_test *test, struct evt_options *opt);
> +typedef int (*evt_test_cryptodev_setup_t)
> +		(struct evt_test *test, struct evt_options *opt);
>  typedef int (*evt_test_eventdev_setup_t)
> 		(struct evt_test *test, struct evt_options *opt);
>  typedef int (*evt_test_launch_lcores_t)
> @@ -39,6 +41,8 @@ typedef void (*evt_test_eventdev_destroy_t)
> 		(struct evt_test *test, struct evt_options *opt);
>  typedef void (*evt_test_ethdev_destroy_t)
> 		(struct evt_test *test, struct evt_options *opt);
> +typedef void (*evt_test_cryptodev_destroy_t)
> +		(struct evt_test *test, struct evt_options *opt);
>  typedef void (*evt_test_mempool_destroy_t)
> 		(struct evt_test *test, struct evt_options *opt);
>  typedef void (*evt_test_destroy_t)
> @@ -52,10 +56,12 @@ struct evt_test_ops {
> evt_test_mempool_setup_t mempool_setup;
> evt_test_ethdev_setup_t ethdev_setup;
> evt_test_eventdev_setup_t eventdev_setup;
> + evt_test_cryptodev_setup_t cryptodev_setup;
> evt_test_launch_lcores_t launch_lcores;
> evt_test_result_t test_result;
> evt_test_eventdev_destroy_t eventdev_destroy;
> evt_test_ethdev_destroy_t ethdev_destroy;
> + evt_test_cryptodev_destroy_t cryptodev_destroy;
> evt_test_mempool_destroy_t mempool_destroy;
> evt_test_destroy_t test_destroy;
> };
> diff --git a/app/test-eventdev/test_perf_atq.c b/app/test-eventdev/test_perf_atq.c
> index 8fd51004ee..67ff681666 100644
> --- a/app/test-eventdev/test_perf_atq.c
> +++ b/app/test-eventdev/test_perf_atq.c
> @@ -48,6 +48,22 @@ perf_atq_worker(void *arg, const int enable_fwd_latency)
> continue;
> }
>
> + if (prod_crypto_type &&
> + (ev.event_type == RTE_EVENT_TYPE_CRYPTODEV)) {
> + struct rte_crypto_op *op = ev.event_ptr;
> +
> +			if (op->status == RTE_CRYPTO_OP_STATUS_SUCCESS) {
> + if (op->sym->m_dst == NULL)
> + ev.event_ptr = op->sym->m_src;
> + else
> + ev.event_ptr = op->sym->m_dst;
> + rte_crypto_op_free(op);
> + } else {
> + rte_crypto_op_free(op);
> + continue;
> + }
> + }
> +
> if (enable_fwd_latency && !prod_timer_type)
> /* first stage in pipeline, mark ts to compute fwd latency */
> atq_mark_fwd_latency(&ev);
> @@ -87,6 +103,25 @@ perf_atq_worker_burst(void *arg, const int enable_fwd_latency)
> }
>
> for (i = 0; i < nb_rx; i++) {
> +			if (prod_crypto_type &&
> +			    (ev[i].event_type == RTE_EVENT_TYPE_CRYPTODEV)) {
> +				struct rte_crypto_op *op = ev[i].event_ptr;
> +
> +				if (op->status == RTE_CRYPTO_OP_STATUS_SUCCESS) {
> +					if (op->sym->m_dst == NULL)
> +						ev[i].event_ptr = op->sym->m_src;
> +					else
> +						ev[i].event_ptr = op->sym->m_dst;
> +					rte_crypto_op_free(op);
> +				} else {
> +					rte_crypto_op_free(op);
> +					continue;
> +				}
> +			}
> +
> if (enable_fwd_latency && !prod_timer_type) {
> rte_prefetch0(ev[i+1].event_ptr);
> /* first stage in pipeline.
> @@ -254,6 +289,18 @@ perf_atq_eventdev_setup(struct evt_test *test, struct evt_options *opt)
> return ret;
> }
> }
> +	} else if (opt->prod_type == EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR) {
> + uint8_t cdev_id, cdev_count;
> +
> + cdev_count = rte_cryptodev_count();
> + for (cdev_id = 0; cdev_id < cdev_count; cdev_id++) {
> + ret = rte_cryptodev_start(cdev_id);
> + if (ret) {
> + evt_err("Failed to start cryptodev %u",
> + cdev_id);
> + return ret;
> + }
> + }
> }
>
> return 0;
> @@ -295,12 +342,14 @@ static const struct evt_test_ops perf_atq = {
> .opt_dump = perf_atq_opt_dump,
> .test_setup = perf_test_setup,
> .ethdev_setup = perf_ethdev_setup,
> + .cryptodev_setup = perf_cryptodev_setup,
> .mempool_setup = perf_mempool_setup,
> .eventdev_setup = perf_atq_eventdev_setup,
> .launch_lcores = perf_atq_launch_lcores,
> .eventdev_destroy = perf_eventdev_destroy,
> .mempool_destroy = perf_mempool_destroy,
> .ethdev_destroy = perf_ethdev_destroy,
> + .cryptodev_destroy = perf_cryptodev_destroy,
> .test_result = perf_test_result,
> .test_destroy = perf_test_destroy,
> };
> diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
> index 9b73874151..7d4eab9b8e 100644
> --- a/app/test-eventdev/test_perf_common.c
> +++ b/app/test-eventdev/test_perf_common.c
> @@ -6,6 +6,8 @@
>
> #include "test_perf_common.h"
>
> +#define NB_CRYPTODEV_DESCRIPTORS 128
> +
> int
> perf_test_result(struct evt_test *test, struct evt_options *opt)
> {
> @@ -272,6 +274,123 @@ perf_event_timer_producer_burst(void *arg)
> return 0;
> }
>
> +static inline void
> +crypto_adapter_enq_op_new(struct prod_data *p)
> +{
> +	struct rte_cryptodev_sym_session **crypto_sess = p->ca.crypto_sess;
> + struct test_perf *t = p->t;
> + const uint32_t nb_flows = t->nb_flows;
> + const uint64_t nb_pkts = t->nb_pkts;
> + struct rte_mempool *pool = t->pool;
> + struct rte_crypto_sym_op *sym_op;
> + struct evt_options *opt = t->opt;
> + uint16_t qp_id = p->ca.cdev_qp_id;
> + uint8_t cdev_id = p->ca.cdev_id;
> + uint32_t flow_counter = 0;
> + struct rte_crypto_op *op;
> + struct rte_mbuf *m;
> + uint64_t count = 0;
> + uint16_t len;
> +
> + if (opt->verbose_level > 1)
> +		printf("%s(): lcore %d queue %d cdev_id %u cdev_qp_id %u\n",
> +		       __func__, rte_lcore_id(), p->queue_id, p->ca.cdev_id,
> +		       p->ca.cdev_qp_id);
> +
> + len = opt->mbuf_sz ? opt->mbuf_sz : RTE_ETHER_MIN_LEN;
> +
> + while (count < nb_pkts && t->done == false) {
> + m = rte_pktmbuf_alloc(pool);
> + if (m == NULL)
> + continue;
> +
> + rte_pktmbuf_append(m, len);
> +		op = rte_crypto_op_alloc(t->ca_op_pool,
> +					 RTE_CRYPTO_OP_TYPE_SYMMETRIC);
> + sym_op = op->sym;
> + sym_op->m_src = m;
> + sym_op->cipher.data.offset = 0;
> + sym_op->cipher.data.length = len;
> + rte_crypto_op_attach_sym_session(
> + op, crypto_sess[flow_counter++ % nb_flows]);
> +
> +		while (rte_cryptodev_enqueue_burst(cdev_id, qp_id, &op, 1) != 1 &&
> +		       t->done == false)
> +			rte_pause();
> +
> + count++;
> + }
> +}
> +
> +static inline void
> +crypto_adapter_enq_op_fwd(struct prod_data *p)
> +{
> +	struct rte_cryptodev_sym_session **crypto_sess = p->ca.crypto_sess;
> + const uint8_t dev_id = p->dev_id;
> + const uint8_t port = p->port_id;
> + struct test_perf *t = p->t;
> + const uint32_t nb_flows = t->nb_flows;
> + const uint64_t nb_pkts = t->nb_pkts;
> + struct rte_mempool *pool = t->pool;
> + struct evt_options *opt = t->opt;
> + struct rte_crypto_sym_op *sym_op;
> + uint32_t flow_counter = 0;
> + struct rte_crypto_op *op;
> + struct rte_event ev;
> + struct rte_mbuf *m;
> + uint64_t count = 0;
> + uint16_t len;
> +
> + if (opt->verbose_level > 1)
> +		printf("%s(): lcore %d port %d queue %d cdev_id %u cdev_qp_id %u\n",
> +		       __func__, rte_lcore_id(), port, p->queue_id,
> +		       p->ca.cdev_id, p->ca.cdev_qp_id);
> +
> + ev.event = 0;
> + ev.op = RTE_EVENT_OP_FORWARD;
> + ev.queue_id = p->queue_id;
> + ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
> + ev.event_type = RTE_EVENT_TYPE_CPU;
> + len = opt->mbuf_sz ? opt->mbuf_sz : RTE_ETHER_MIN_LEN;
> +
> + while (count < nb_pkts && t->done == false) {
> + m = rte_pktmbuf_alloc(pool);
> + if (m == NULL)
> + continue;
> +
> + rte_pktmbuf_append(m, len);
> +		op = rte_crypto_op_alloc(t->ca_op_pool,
> +					 RTE_CRYPTO_OP_TYPE_SYMMETRIC);
> + sym_op = op->sym;
> + sym_op->m_src = m;
> + sym_op->cipher.data.offset = 0;
> + sym_op->cipher.data.length = len;
> + rte_crypto_op_attach_sym_session(
> + op, crypto_sess[flow_counter++ % nb_flows]);
> + ev.event_ptr = op;
> +
> +		while (rte_event_crypto_adapter_enqueue(dev_id, port, &ev, 1) != 1 &&
> +		       t->done == false)
> +			rte_pause();
> +
> + count++;
> + }
> +}
> +
> +static inline int
> +perf_event_crypto_producer(void *arg)
> +{
> + struct prod_data *p = arg;
> + struct evt_options *opt = p->t->opt;
> +
> +	if (opt->crypto_adptr_mode == RTE_EVENT_CRYPTO_ADAPTER_OP_NEW)
> + crypto_adapter_enq_op_new(p);
> + else
> + crypto_adapter_enq_op_fwd(p);
> +
> + return 0;
> +}
> +
> static int
> perf_producer_wrapper(void *arg)
> {
> @@ -298,6 +417,8 @@ perf_producer_wrapper(void *arg)
> 	else if (t->opt->prod_type == EVT_PROD_TYPE_EVENT_TIMER_ADPTR &&
> 			t->opt->timdev_use_burst)
> 		return perf_event_timer_producer_burst(arg);
> +	else if (t->opt->prod_type == EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR)
> +		return perf_event_crypto_producer(arg);
> return 0;
> }
>
> @@ -405,8 +526,10 @@ perf_launch_lcores(struct evt_test *test, struct evt_options *opt,
> 		if (remaining <= 0) {
> 			t->result = EVT_TEST_SUCCESS;
> 			if (opt->prod_type == EVT_PROD_TYPE_SYNT ||
> -			    opt->prod_type == EVT_PROD_TYPE_EVENT_TIMER_ADPTR) {
> +			    opt->prod_type == EVT_PROD_TYPE_EVENT_TIMER_ADPTR ||
> +			    opt->prod_type == EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR) {
> 				t->done = true;
> 				break;
> 			}
> @@ -415,7 +538,8 @@ perf_launch_lcores(struct evt_test *test, struct evt_options *opt,
>
> 		if (new_cycles - dead_lock_cycles > dead_lock_sample &&
> 		    (opt->prod_type == EVT_PROD_TYPE_SYNT ||
> -		     opt->prod_type == EVT_PROD_TYPE_EVENT_TIMER_ADPTR)) {
> +		     opt->prod_type == EVT_PROD_TYPE_EVENT_TIMER_ADPTR ||
> +		     opt->prod_type == EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR)) {
> 			remaining = t->outstand_pkts - processed_pkts(t);
> 			if (dead_lock_remaining == remaining) {
> 				rte_event_dev_dump(opt->dev_id, stdout);
> @@ -537,6 +661,80 @@ perf_event_timer_adapter_setup(struct test_perf *t)
> return 0;
> }
>
> +static int
> +perf_event_crypto_adapter_setup(struct test_perf *t, struct prod_data *p)
> +{
> + struct evt_options *opt = t->opt;
> + uint32_t cap;
> + int ret;
> +
> +	ret = rte_event_crypto_adapter_caps_get(p->dev_id, p->ca.cdev_id, &cap);
> + if (ret) {
> + evt_err("Failed to get crypto adapter capabilities");
> + return ret;
> + }
> +
> +	if (((opt->crypto_adptr_mode == RTE_EVENT_CRYPTO_ADAPTER_OP_NEW) &&
> +	     !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW)) ||
> +	    ((opt->crypto_adptr_mode == RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD) &&
> +	     !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD))) {
> +		evt_err("crypto adapter %s mode unsupported\n",
> +			opt->crypto_adptr_mode ? "OP_FORWARD" : "OP_NEW");
> +		return -EINVAL;
> +	} else if (!(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_SESSION_PRIVATE_DATA)) {
> +		evt_err("Storing crypto session not supported");
> +		return -EINVAL;
> +	}
> +
> +	if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND) {
> +		struct rte_event response_info;
> +
> +		response_info.event = 0;
> +		response_info.op = opt->crypto_adptr_mode ==
> +				RTE_EVENT_CRYPTO_ADAPTER_OP_NEW ?
> +				RTE_EVENT_OP_NEW : RTE_EVENT_OP_FORWARD;
> +		response_info.sched_type = RTE_SCHED_TYPE_ATOMIC;
> +		response_info.event_type = RTE_EVENT_TYPE_CRYPTODEV;
> + response_info.queue_id = p->queue_id;
> + ret = rte_event_crypto_adapter_queue_pair_add(
> + TEST_PERF_CA_ID, p->ca.cdev_id, p->ca.cdev_qp_id,
> + &response_info);
> + } else {
> +		ret = rte_event_crypto_adapter_queue_pair_add(
> +			TEST_PERF_CA_ID, p->ca.cdev_id, p->ca.cdev_qp_id, NULL);
> + }
> +
> + return ret;
> +}
> +
> +static struct rte_cryptodev_sym_session *
> +cryptodev_sym_sess_create(struct prod_data *p, struct test_perf *t)
> +{
> + struct rte_crypto_sym_xform cipher_xform;
> + struct rte_cryptodev_sym_session *sess;
> +
> + cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
> + cipher_xform.cipher.algo = RTE_CRYPTO_CIPHER_NULL;
> + cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT;
> + cipher_xform.next = NULL;
> +
> + sess = rte_cryptodev_sym_session_create(t->ca_sess_pool);
> + if (sess == NULL) {
> + evt_err("Failed to create sym session");
> + return NULL;
> + }
> +
> +	if (rte_cryptodev_sym_session_init(p->ca.cdev_id, sess, &cipher_xform,
> +					   t->ca_sess_priv_pool)) {
> + evt_err("Failed to init session");
> + return NULL;
> + }
> +
> + return sess;
> +}
> +
> int
> perf_event_dev_port_setup(struct evt_test *test, struct evt_options *opt,
> uint8_t stride, uint8_t nb_queues,
> @@ -598,6 +796,80 @@ perf_event_dev_port_setup(struct evt_test *test, struct evt_options *opt,
> ret = perf_event_timer_adapter_setup(t);
> if (ret)
> return ret;
> +	} else if (opt->prod_type == EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR) {
> + struct rte_event_port_conf conf = *port_conf;
> + uint8_t cdev_id = 0;
> + uint16_t qp_id = 0;
> +
> + ret = rte_event_crypto_adapter_create(TEST_PERF_CA_ID,
> + opt->dev_id, &conf, 0);
> + if (ret) {
> + evt_err("Failed to create crypto adapter");
> + return ret;
> + }
> +
> + prod = 0;
> + for (; port < perf_nb_event_ports(opt); port++) {
> + struct rte_cryptodev_sym_session *crypto_sess;
> + union rte_event_crypto_metadata m_data;
> + struct prod_data *p = &t->prod[port];
> + uint32_t flow_id;
> +
> +			if (qp_id == rte_cryptodev_queue_pair_count(cdev_id)) {
> + cdev_id++;
> + qp_id = 0;
> + }
> +
> + p->dev_id = opt->dev_id;
> + p->port_id = port;
> + p->queue_id = prod * stride;
> + p->ca.cdev_id = cdev_id;
> + p->ca.cdev_qp_id = qp_id;
> + p->ca.crypto_sess = rte_zmalloc_socket(
> + NULL, sizeof(crypto_sess) * t->nb_flows,
> + RTE_CACHE_LINE_SIZE, opt->socket_id);
> + p->t = t;
> +
> +			m_data.request_info.cdev_id = p->ca.cdev_id;
> +			m_data.request_info.queue_pair_id = p->ca.cdev_qp_id;
> +			m_data.response_info.op = opt->crypto_adptr_mode ==
> +					RTE_EVENT_CRYPTO_ADAPTER_OP_NEW ?
> +					RTE_EVENT_OP_NEW : RTE_EVENT_OP_FORWARD;
> +			m_data.response_info.sched_type = RTE_SCHED_TYPE_ATOMIC;
> +			m_data.response_info.event_type = RTE_EVENT_TYPE_CRYPTODEV;
> +			m_data.response_info.queue_id = p->queue_id;
> +
> + for (flow_id = 0; flow_id < t->nb_flows; flow_id++) {
> +				crypto_sess = cryptodev_sym_sess_create(p, t);
> + if (crypto_sess == NULL)
> + return -ENOMEM;
> +
> + m_data.response_info.flow_id = flow_id;
> +				rte_cryptodev_sym_session_set_user_data(
> +					crypto_sess, &m_data, sizeof(m_data));
> + p->ca.crypto_sess[flow_id] = crypto_sess;
> + }
> +
> + conf.event_port_cfg |=
> + RTE_EVENT_PORT_CFG_HINT_PRODUCER |
> + RTE_EVENT_PORT_CFG_HINT_CONSUMER;
> +
> +			ret = rte_event_port_setup(opt->dev_id, port, &conf);
> + if (ret) {
> + evt_err("failed to setup port %d", port);
> + return ret;
> + }
> +
> + ret = perf_event_crypto_adapter_setup(t, p);
> + if (ret)
> + return ret;
> +
> + qp_id++;
> + prod++;
> + }
> } else {
> prod = 0;
> 		for ( ; port < perf_nb_event_ports(opt); port++) {
> @@ -659,7 +931,8 @@ perf_opt_check(struct evt_options *opt, uint64_t nb_queues)
> }
>
> 	if (opt->prod_type == EVT_PROD_TYPE_SYNT ||
> -	    opt->prod_type == EVT_PROD_TYPE_EVENT_TIMER_ADPTR) {
> +	    opt->prod_type == EVT_PROD_TYPE_EVENT_TIMER_ADPTR ||
> +	    opt->prod_type == EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR) {
> 		/* Validate producer lcores */
> 		if (evt_lcores_has_overlap(opt->plcores, rte_get_main_lcore())) {
> @@ -767,8 +1040,7 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt)
> 		},
> 	};
>
> -	if (opt->prod_type == EVT_PROD_TYPE_SYNT ||
> -	    opt->prod_type == EVT_PROD_TYPE_EVENT_TIMER_ADPTR)
> +	if (opt->prod_type != EVT_PROD_TYPE_ETH_RX_ADPTR)
> 		return 0;
>
> if (!rte_eth_dev_count_avail()) {
> @@ -841,6 +1113,147 @@ void perf_ethdev_destroy(struct evt_test *test, struct evt_options *opt)
> }
> }
>
> +int
> +perf_cryptodev_setup(struct evt_test *test, struct evt_options *opt)
> +{
> + uint8_t cdev_count, cdev_id, nb_plcores, nb_qps;
> + struct test_perf *t = evt_test_priv(test);
> + unsigned int max_session_size;
> + uint32_t nb_sessions;
> + int ret;
> +
> + if (opt->prod_type != EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR)
> + return 0;
> +
> + cdev_count = rte_cryptodev_count();
> + if (cdev_count == 0) {
> + evt_err("No crypto devices available\n");
> + return -ENODEV;
> + }
> +
> +	t->ca_op_pool = rte_crypto_op_pool_create(
> +		"crypto_op_pool", RTE_CRYPTO_OP_TYPE_SYMMETRIC, opt->pool_sz,
> +		128, 0, rte_socket_id());
> + if (t->ca_op_pool == NULL) {
> + evt_err("Failed to create crypto op pool");
> + return -ENOMEM;
> + }
> +
> + nb_sessions = evt_nr_active_lcores(opt->plcores) * t->nb_flows;
> +	t->ca_sess_pool = rte_cryptodev_sym_session_pool_create(
> +		"ca_sess_pool", nb_sessions, 0, 0,
> +		sizeof(union rte_event_crypto_metadata), SOCKET_ID_ANY);
> + if (t->ca_sess_pool == NULL) {
> + evt_err("Failed to create sym session pool");
> + ret = -ENOMEM;
> + goto err;
> + }
> +
> + max_session_size = 0;
> + for (cdev_id = 0; cdev_id < cdev_count; cdev_id++) {
> + unsigned int session_size;
> +
> +		session_size = rte_cryptodev_sym_get_private_session_size(cdev_id);
> + if (session_size > max_session_size)
> + max_session_size = session_size;
> + }
> +
> + max_session_size += sizeof(union rte_event_crypto_metadata);
> +	t->ca_sess_priv_pool = rte_mempool_create(
> +		"ca_sess_priv_pool", nb_sessions, max_session_size, 0, 0,
> +		NULL, NULL, NULL, NULL, SOCKET_ID_ANY, 0);
> + if (t->ca_sess_priv_pool == NULL) {
> + evt_err("failed to create sym session private pool");
> + ret = -ENOMEM;
> + goto err;
> + }
> +
> + /*
> + * Calculate number of needed queue pairs, based on the amount of
> + * available number of logical cores and crypto devices. For instance,
> + * if there are 4 cores and 2 crypto devices, 2 queue pairs will be set
> + * up per device.
> + */
> + nb_plcores = evt_nr_active_lcores(opt->plcores);
> +	nb_qps = (nb_plcores % cdev_count) ? (nb_plcores / cdev_count) + 1 :
> +		 nb_plcores / cdev_count;
> + for (cdev_id = 0; cdev_id < cdev_count; cdev_id++) {
> + struct rte_cryptodev_qp_conf qp_conf;
> + struct rte_cryptodev_config conf;
> + struct rte_cryptodev_info info;
> + int qp_id;
> +
> + rte_cryptodev_info_get(cdev_id, &info);
> +		if (nb_qps > info.max_nb_queue_pairs) {
> +			evt_err("Not enough queue pairs per cryptodev (%u)",
> +				nb_qps);
> + ret = -EINVAL;
> + goto err;
> + }
> +
> + conf.nb_queue_pairs = nb_qps;
> + conf.socket_id = SOCKET_ID_ANY;
> + conf.ff_disable = RTE_CRYPTODEV_FF_SECURITY;
> +
> + ret = rte_cryptodev_configure(cdev_id, &conf);
> + if (ret) {
> +			evt_err("Failed to configure cryptodev (%u)", cdev_id);
> + goto err;
> + }
> +
> + qp_conf.nb_descriptors = NB_CRYPTODEV_DESCRIPTORS;
> + qp_conf.mp_session = t->ca_sess_pool;
> + qp_conf.mp_session_private = t->ca_sess_priv_pool;
> +
> + for (qp_id = 0; qp_id < conf.nb_queue_pairs; qp_id++) {
> + ret = rte_cryptodev_queue_pair_setup(
> + cdev_id, qp_id, &qp_conf,
> + rte_cryptodev_socket_id(cdev_id));
> + if (ret) {
> +				evt_err("Failed to setup queue pairs on cryptodev %u\n",
> +					cdev_id);
> + goto err;
> + }
> + }
> + }
> +
> + return 0;
> +err:
> + rte_mempool_free(t->ca_op_pool);
> + rte_mempool_free(t->ca_sess_pool);
> + rte_mempool_free(t->ca_sess_priv_pool);
> +
> + return ret;
> +}
> +
> +void
> +perf_cryptodev_destroy(struct evt_test *test, struct evt_options *opt)
> +{
> + uint8_t cdev_id, cdev_count = rte_cryptodev_count();
> + struct test_perf *t = evt_test_priv(test);
> + uint16_t port;
> +
> + if (opt->prod_type != EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR)
> + return;
> +
> + for (port = t->nb_workers; port < perf_nb_event_ports(opt); port++) {
> + struct prod_data *p = &t->prod[port];
> +
> +		rte_event_crypto_adapter_queue_pair_del(
> +			TEST_PERF_CA_ID, p->ca.cdev_id, p->ca.cdev_qp_id);
> + }
> +
> + rte_event_crypto_adapter_free(TEST_PERF_CA_ID);
> +
> + for (cdev_id = 0; cdev_id < cdev_count; cdev_id++)
> + rte_cryptodev_stop(cdev_id);
> +
> + rte_mempool_free(t->ca_op_pool);
> + rte_mempool_free(t->ca_sess_pool);
> + rte_mempool_free(t->ca_sess_priv_pool);
> +}
> +
> int
> perf_mempool_setup(struct evt_test *test, struct evt_options *opt)
> {
> diff --git a/app/test-eventdev/test_perf_common.h b/app/test-eventdev/test_perf_common.h
> index 14dcf80429..ea0907d61a 100644
> --- a/app/test-eventdev/test_perf_common.h
> +++ b/app/test-eventdev/test_perf_common.h
> @@ -9,9 +9,11 @@
> #include <stdbool.h>
> #include <unistd.h>
>
> +#include <rte_cryptodev.h>
> #include <rte_cycles.h>
> #include <rte_ethdev.h>
> #include <rte_eventdev.h>
> +#include <rte_event_crypto_adapter.h>
> #include <rte_event_eth_rx_adapter.h>
> #include <rte_event_timer_adapter.h>
> #include <rte_lcore.h>
> @@ -23,6 +25,8 @@
> #include "evt_options.h"
> #include "evt_test.h"
>
> +#define TEST_PERF_CA_ID 0
> +
> struct test_perf;
>
> struct worker_data {
> @@ -33,14 +37,19 @@ struct worker_data {
> struct test_perf *t;
> } __rte_cache_aligned;
>
> +struct crypto_adptr_data {
> +	uint8_t cdev_id;
> +	uint16_t cdev_qp_id;
> +	struct rte_cryptodev_sym_session **crypto_sess;
> +};
> struct prod_data {
> uint8_t dev_id;
> uint8_t port_id;
> uint8_t queue_id;
> + struct crypto_adptr_data ca;
> struct test_perf *t;
> } __rte_cache_aligned;
>
> -
> struct test_perf {
> /* Don't change the offset of "done". Signal handler use this memory
> * to terminate all lcores work.
> @@ -58,6 +67,9 @@ struct test_perf {
> uint8_t sched_type_list[EVT_MAX_STAGES] __rte_cache_aligned;
> struct rte_event_timer_adapter *timer_adptr[
> RTE_EVENT_TIMER_ADAPTER_NUM_MAX]
> __rte_cache_aligned;
> + struct rte_mempool *ca_op_pool;
> + struct rte_mempool *ca_sess_pool;
> + struct rte_mempool *ca_sess_priv_pool;
> } __rte_cache_aligned;
>
> struct perf_elt {
> @@ -81,6 +93,8 @@ struct perf_elt {
> const uint8_t port = w->port_id;\
> 	const uint8_t prod_timer_type = \
> 		opt->prod_type == EVT_PROD_TYPE_EVENT_TIMER_ADPTR;\
> +	const uint8_t prod_crypto_type = \
> +		opt->prod_type == EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR;\
> uint8_t *const sched_type_list = &t->sched_type_list[0];\
> struct rte_mempool *const pool = t->pool;\
> 	const uint8_t nb_stages = t->opt->nb_stages;\
> @@ -154,6 +168,7 @@ int perf_test_result(struct evt_test *test, struct evt_options *opt);
>  int perf_opt_check(struct evt_options *opt, uint64_t nb_queues);
>  int perf_test_setup(struct evt_test *test, struct evt_options *opt);
>  int perf_ethdev_setup(struct evt_test *test, struct evt_options *opt);
> +int perf_cryptodev_setup(struct evt_test *test, struct evt_options *opt);
>  int perf_mempool_setup(struct evt_test *test, struct evt_options *opt);
>  int perf_event_dev_port_setup(struct evt_test *test, struct evt_options *opt,
> 		uint8_t stride, uint8_t nb_queues,
> @@ -164,6 +179,7 @@ int perf_launch_lcores(struct evt_test *test, struct evt_options *opt,
> void perf_opt_dump(struct evt_options *opt, uint8_t nb_queues);
> void perf_test_destroy(struct evt_test *test, struct evt_options *opt);
> void perf_eventdev_destroy(struct evt_test *test, struct evt_options *opt);
> +void perf_cryptodev_destroy(struct evt_test *test, struct evt_options *opt);
> void perf_ethdev_destroy(struct evt_test *test, struct evt_options *opt);
> void perf_mempool_destroy(struct evt_test *test, struct evt_options *opt);
>
> diff --git a/app/test-eventdev/test_perf_queue.c b/app/test-eventdev/test_perf_queue.c
> index f4ea3a795f..dcf6d82947 100644
> --- a/app/test-eventdev/test_perf_queue.c
> +++ b/app/test-eventdev/test_perf_queue.c
> @@ -49,6 +49,23 @@ perf_queue_worker(void *arg, const int enable_fwd_latency)
> rte_pause();
> continue;
> }
> +
> +		if (prod_crypto_type &&
> +		    (ev.event_type == RTE_EVENT_TYPE_CRYPTODEV)) {
> +			struct rte_crypto_op *op = ev.event_ptr;
> +
> +			if (op->status == RTE_CRYPTO_OP_STATUS_SUCCESS) {
> +				if (op->sym->m_dst == NULL)
> +					ev.event_ptr = op->sym->m_src;
> +				else
> +					ev.event_ptr = op->sym->m_dst;
> +				rte_crypto_op_free(op);
> +			} else {
> +				rte_crypto_op_free(op);
> +				continue;
> +			}
> +		}
> +
> 		if (enable_fwd_latency && !prod_timer_type)
> 			/* first q in pipeline, mark timestamp to compute fwd latency */
> 			mark_fwd_latency(&ev, nb_stages);
> @@ -88,6 +105,25 @@ perf_queue_worker_burst(void *arg, const int enable_fwd_latency)
> }
>
> for (i = 0; i < nb_rx; i++) {
> +			if (prod_crypto_type &&
> +			    (ev[i].event_type == RTE_EVENT_TYPE_CRYPTODEV)) {
> +				struct rte_crypto_op *op = ev[i].event_ptr;
> +
> +				if (op->status ==
> +				    RTE_CRYPTO_OP_STATUS_SUCCESS) {
> +					if (op->sym->m_dst == NULL)
> +						ev[i].event_ptr =
> +							op->sym->m_src;
> +					else
> +						ev[i].event_ptr =
> +							op->sym->m_dst;
> +					rte_crypto_op_free(op);
> +				} else {
> +					rte_crypto_op_free(op);
> +					continue;
> +				}
> +			}
> +
> if (enable_fwd_latency && !prod_timer_type) {
> rte_prefetch0(ev[i+1].event_ptr);
> /* first queue in pipeline.
> @@ -269,6 +305,18 @@ perf_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
> return ret;
> }
> }
> +	} else if (opt->prod_type == EVT_PROD_TYPE_EVENT_CRYPTO_ADPTR) {
> +		uint8_t cdev_id, cdev_count;
> +
> +		cdev_count = rte_cryptodev_count();
> +		for (cdev_id = 0; cdev_id < cdev_count; cdev_id++) {
> +			ret = rte_cryptodev_start(cdev_id);
> +			if (ret) {
> +				evt_err("Failed to start cryptodev %u",
> +					cdev_id);
> +				return ret;
> +			}
> +		}
> 	}
>
> return 0;
> @@ -311,11 +359,13 @@ static const struct evt_test_ops perf_queue = {
> .test_setup = perf_test_setup,
> .mempool_setup = perf_mempool_setup,
> .ethdev_setup = perf_ethdev_setup,
> + .cryptodev_setup = perf_cryptodev_setup,
> .eventdev_setup = perf_queue_eventdev_setup,
> .launch_lcores = perf_queue_launch_lcores,
> .eventdev_destroy = perf_eventdev_destroy,
> .mempool_destroy = perf_mempool_destroy,
> .ethdev_destroy = perf_ethdev_destroy,
> + .cryptodev_destroy = perf_cryptodev_destroy,
> .test_result = perf_test_result,
> .test_destroy = perf_test_destroy,
> };
> diff --git a/doc/guides/tools/testeventdev.rst b/doc/guides/tools/testeventdev.rst
> index 48efb9ea6e..f7d813226d 100644
> --- a/doc/guides/tools/testeventdev.rst
> +++ b/doc/guides/tools/testeventdev.rst
> @@ -120,6 +120,10 @@ The following are the application command-line options:
>
> Use burst mode event timer adapter as producer.
>
> +* ``--prod_type_cryptodev``
> +
> + Use crypto device as producer.
> +
> * ``--timer_tick_nsec``
>
> 	Used to dictate number of nano seconds between bucket traversal of the
> @@ -148,6 +152,11 @@ The following are the application command-line options:
> timeout is out of the supported range of event device it will be
> adjusted to the highest/lowest supported dequeue timeout supported.
>
> +* ``--crypto_adptr_mode``
> +
> + Set crypto adapter mode. Use 0 for OP_NEW (default) and 1 for
> + OP_FORWARD mode.
> +
> * ``--mbuf_sz``
>
> 	Set packet mbuf size. Can be used to configure Jumbo Frames. Only
> @@ -420,6 +429,7 @@ Supported application command line options are following::
> --prod_type_ethdev
> --prod_type_timerdev_burst
> --prod_type_timerdev
> + --prod_type_cryptodev
> --prod_enq_burst_sz
> --timer_tick_nsec
> --max_tmo_nsec
> @@ -427,6 +437,7 @@ Supported application command line options are following::
> --nb_timers
> --nb_timer_adptrs
> --deq_tmo_nsec
> + --crypto_adptr_mode
>
> Example
> ^^^^^^^
> @@ -529,12 +540,14 @@ Supported application command line options are following::
> --prod_type_ethdev
> --prod_type_timerdev_burst
> --prod_type_timerdev
> + --prod_type_cryptodev
> --timer_tick_nsec
> --max_tmo_nsec
> --expiry_nsec
> --nb_timers
> --nb_timer_adptrs
> --deq_tmo_nsec
> + --crypto_adptr_mode
>
> Example
> ^^^^^^^
> --
> 2.25.1