* [dpdk-dev] [PATCH 0/8] remove mbuf seqn @ 2020-10-27 22:13 David Marchand 2020-10-27 22:13 ` [dpdk-dev] [PATCH 1/8] event/dpaa2: remove dead code David Marchand ` (8 more replies) 0 siblings, 9 replies; 29+ messages in thread From: David Marchand @ 2020-10-27 22:13 UTC (permalink / raw) To: dev Followup on the work started by Thomas to free some space in the mbuf structure. This series applies on top of Thomas v3 series [1] and drops the seqn field that had been deprecated. 1: https://patchwork.dpdk.org/project/dpdk/list/?series=13393 -- David Marchand David Marchand (8): event/dpaa2: remove dead code crypto/scheduler: remove unused internal seqn net/ark: remove use of seqn for debug reorder: switch sequence number to dynamic mbuf field dpaa: switch sequence number to dynamic field fslmc: switch sequence number to dynamic field event: switch sequence number to dynamic field mbuf: remove seqn field app/test-eventdev/evt_main.c | 3 ++ app/test-eventdev/test_order_common.c | 2 +- app/test-eventdev/test_order_common.h | 5 ++- app/test/test_reorder.c | 8 ++-- doc/guides/rel_notes/deprecation.rst | 1 - doc/guides/rel_notes/release_20_11.rst | 3 ++ drivers/bus/dpaa/dpaa_bus.c | 16 ++++++++ drivers/bus/dpaa/rte_dpaa_bus.h | 28 ++++++++++++++ drivers/bus/dpaa/version.map | 1 + drivers/bus/fslmc/fslmc_bus.c | 17 +++++++++ drivers/bus/fslmc/rte_fslmc.h | 23 +++++++++++ drivers/bus/fslmc/version.map | 1 + drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 18 +++++---- drivers/crypto/dpaa_sec/dpaa_sec.c | 6 +-- .../crypto/scheduler/scheduler_pmd_private.h | 1 - drivers/event/dpaa/dpaa_eventdev.c | 6 +-- drivers/event/dpaa2/dpaa2_eventdev.c | 9 +++-- drivers/event/dpaa2/dpaa2_eventdev_selftest.c | 17 ++------- drivers/event/octeontx/ssovf_evdev_selftest.c | 32 ++++++++-------- drivers/event/octeontx2/otx2_evdev_selftest.c | 31 +++++++-------- drivers/event/opdl/opdl_test.c | 8 ++-- drivers/event/sw/sw_evdev_selftest.c | 34 +++++++++-------- drivers/mempool/dpaa2/dpaa2_hw_mempool.h | 2 - drivers/net/ark/ark_ethdev_rx.c | 8 +--- drivers/net/dpaa/dpaa_ethdev.h | 7 ---- drivers/net/dpaa/dpaa_rxtx.c | 6 +-- drivers/net/dpaa2/dpaa2_rxtx.c | 28 +++++++------- examples/packet_ordering/main.c | 2 +- lib/librte_eventdev/rte_eventdev.c | 21 +++++++++- lib/librte_eventdev/rte_eventdev.h | 38 ++++++++++++++++--- lib/librte_eventdev/version.map | 2 + lib/librte_mbuf/rte_mbuf_core.h | 15 +++----- lib/librte_reorder/rte_reorder.c | 23 +++++++++-- lib/librte_reorder/rte_reorder.h | 21 ++++++++++ lib/librte_reorder/version.map | 6 +++ 35 files changed, 307 insertions(+), 142 deletions(-) -- 2.23.0 ^ permalink raw reply [flat|nested] 29+ messages in thread
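For readers less familiar with dynamic mbuf fields, the pattern this series switches to is roughly the following. This is a minimal sketch built only on the public rte_mbuf_dyn API; the "my_seqn" names are illustrative and do not appear in the series.

#include <rte_errno.h>
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

typedef uint32_t my_seqn_t;
static int my_seqn_offset = -1; /* byte offset of the field inside struct rte_mbuf */

static int
my_seqn_register(void)
{
        static const struct rte_mbuf_dynfield desc = {
                .name = "my_seqn_dynfield", /* illustrative name */
                .size = sizeof(my_seqn_t),
                .align = __alignof__(my_seqn_t),
        };

        /* Registering again with identical parameters returns the same offset. */
        my_seqn_offset = rte_mbuf_dynfield_register(&desc);
        return my_seqn_offset < 0 ? -rte_errno : 0;
}

static inline my_seqn_t *
my_seqn(struct rte_mbuf *m)
{
        /* Resolves to a location inside the mbuf, much like &m->seqn used to. */
        return RTE_MBUF_DYNFIELD(m, my_seqn_offset, my_seqn_t *);
}

Each of patches 4 to 7 instantiates this pattern for one consumer of the old field, which is what allows patch 8 to reclaim the static seqn slot.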
* [dpdk-dev] [PATCH 1/8] event/dpaa2: remove dead code 2020-10-27 22:13 [dpdk-dev] [PATCH 0/8] remove mbuf seqn David Marchand @ 2020-10-27 22:13 ` David Marchand 2020-10-27 22:13 ` [dpdk-dev] [PATCH 2/8] crypto/scheduler: remove unused internal seqn David Marchand ` (7 subsequent siblings) 8 siblings, 0 replies; 29+ messages in thread From: David Marchand @ 2020-10-27 22:13 UTC (permalink / raw) To: dev; +Cc: Hemant Agrawal, Nipun Gupta This code has never been used since introduction. Fixes: 653242c3375a ("event/dpaa2: add self test") Signed-off-by: David Marchand <david.marchand@redhat.com> --- drivers/event/dpaa2/dpaa2_eventdev_selftest.c | 13 +------------ 1 file changed, 1 insertion(+), 12 deletions(-) diff --git a/drivers/event/dpaa2/dpaa2_eventdev_selftest.c b/drivers/event/dpaa2/dpaa2_eventdev_selftest.c index b1f3891484..5447db8a8a 100644 --- a/drivers/event/dpaa2/dpaa2_eventdev_selftest.c +++ b/drivers/event/dpaa2/dpaa2_eventdev_selftest.c @@ -47,17 +47,6 @@ struct event_attr { uint8_t seq; }; -static uint32_t seqn_list_index; -static int seqn_list[NUM_PACKETS]; - -static void -seqn_list_init(void) -{ - RTE_BUILD_BUG_ON(NUM_PACKETS < MAX_EVENTS); - memset(seqn_list, 0, sizeof(seqn_list)); - seqn_list_index = 0; -} - struct test_core_param { rte_atomic32_t *total_events; uint64_t dequeue_tmo_ticks; @@ -516,7 +505,7 @@ launch_workers_and_wait(int (*main_worker)(void *), return 0; rte_atomic32_set(&atomic_total_events, total_events); - seqn_list_init(); + RTE_BUILD_BUG_ON(NUM_PACKETS < MAX_EVENTS); param = malloc(sizeof(struct test_core_param) * nb_workers); if (!param) -- 2.23.0 ^ permalink raw reply [flat|nested] 29+ messages in thread
* [dpdk-dev] [PATCH 2/8] crypto/scheduler: remove unused internal seqn 2020-10-27 22:13 [dpdk-dev] [PATCH 0/8] remove mbuf seqn David Marchand 2020-10-27 22:13 ` [dpdk-dev] [PATCH 1/8] event/dpaa2: remove dead code David Marchand @ 2020-10-27 22:13 ` David Marchand 2020-10-27 22:13 ` [dpdk-dev] [PATCH 3/8] net/ark: remove use of seqn for debug David Marchand ` (6 subsequent siblings) 8 siblings, 0 replies; 29+ messages in thread From: David Marchand @ 2020-10-27 22:13 UTC (permalink / raw) To: dev; +Cc: Fan Zhang, Sergio Gonzalez Monroy, Declan Doherty This field has been left behind after dropping its use. Fixes: 8a48e039432b ("crypto/scheduler: optimize crypto op ordering") Signed-off-by: David Marchand <david.marchand@redhat.com> --- drivers/crypto/scheduler/scheduler_pmd_private.h | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/crypto/scheduler/scheduler_pmd_private.h b/drivers/crypto/scheduler/scheduler_pmd_private.h index adb4eb0632..4d33b9ab44 100644 --- a/drivers/crypto/scheduler/scheduler_pmd_private.h +++ b/drivers/crypto/scheduler/scheduler_pmd_private.h @@ -59,7 +59,6 @@ struct scheduler_qp_ctx { uint32_t max_nb_objs; struct rte_ring *order_ring; - uint32_t seqn; } __rte_cache_aligned; -- 2.23.0 ^ permalink raw reply [flat|nested] 29+ messages in thread
* [dpdk-dev] [PATCH 3/8] net/ark: remove use of seqn for debug 2020-10-27 22:13 [dpdk-dev] [PATCH 0/8] remove mbuf seqn David Marchand 2020-10-27 22:13 ` [dpdk-dev] [PATCH 1/8] event/dpaa2: remove dead code David Marchand 2020-10-27 22:13 ` [dpdk-dev] [PATCH 2/8] crypto/scheduler: remove unused internal seqn David Marchand @ 2020-10-27 22:13 ` David Marchand 2020-10-28 12:19 ` Ed Czeck 2020-10-27 22:13 ` [dpdk-dev] [PATCH 4/8] reorder: switch sequence number to dynamic mbuf field David Marchand ` (5 subsequent siblings) 8 siblings, 1 reply; 29+ messages in thread From: David Marchand @ 2020-10-27 22:13 UTC (permalink / raw) To: dev; +Cc: Shepard Siegel, Ed Czeck, John Miller The seqn mbuf field is deprecated. It is currently hacked for debug purpose, it could be changed to a dynamic field but I see little value in the debug info it offers. Signed-off-by: David Marchand <david.marchand@redhat.com> --- drivers/net/ark/ark_ethdev_rx.c | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c index c5788498b3..cc0881b77c 100644 --- a/drivers/net/ark/ark_ethdev_rx.c +++ b/drivers/net/ark/ark_ethdev_rx.c @@ -302,8 +302,6 @@ eth_ark_recv_pkts(void *rx_queue, mbuf->pkt_len = 63; meta->pkt_len = 63; } - /* seqn is only set under debug */ - mbuf->seqn = cons_index; } if (unlikely(meta->pkt_len > ARK_RX_MAX_NOCHAIN)) @@ -360,8 +358,6 @@ eth_ark_rx_jumbo(struct ark_rx_queue *queue, mbuf_prev = mbuf; mbuf->data_len = data_len; mbuf->data_off = 0; - if (ARK_DEBUG_CORE) - mbuf->seqn = cons_index; /* for debug only */ cons_index += 1; } @@ -667,8 +663,8 @@ dump_mbuf_data(struct rte_mbuf *mbuf, uint16_t lo, uint16_t hi) { uint16_t i, j; - ARK_PMD_LOG(DEBUG, " MBUF: %p len %d, off: %d, seq: %" PRIU32 "\n", - mbuf, mbuf->pkt_len, mbuf->data_off, mbuf->seqn); + ARK_PMD_LOG(DEBUG, " MBUF: %p len %d, off: %d\n", + mbuf, mbuf->pkt_len, mbuf->data_off); for (i = lo; i < hi; i += 16) { uint8_t *dp = RTE_PTR_ADD(mbuf->buf_addr, i); -- 2.23.0 ^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [dpdk-dev] [PATCH 3/8] net/ark: remove use of seqn for debug 2020-10-27 22:13 ` [dpdk-dev] [PATCH 3/8] net/ark: remove use of seqn for debug David Marchand @ 2020-10-28 12:19 ` Ed Czeck 0 siblings, 0 replies; 29+ messages in thread From: Ed Czeck @ 2020-10-28 12:19 UTC (permalink / raw) To: David Marchand; +Cc: dev, Shepard Siegel, John Miller Thank you Acked-by: Ed Czeck <ed.czeck@atomicrules.com> On Tue, Oct 27, 2020 at 6:13 PM David Marchand <david.marchand@redhat.com> wrote: > > The seqn mbuf field is deprecated. > It is currently hacked for debug purpose, it could be changed to a > dynamic field but I see little value in the debug info it offers. > > Signed-off-by: David Marchand <david.marchand@redhat.com> > --- > drivers/net/ark/ark_ethdev_rx.c | 8 ++------ > 1 file changed, 2 insertions(+), 6 deletions(-) > > diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c > index c5788498b3..cc0881b77c 100644 > --- a/drivers/net/ark/ark_ethdev_rx.c > +++ b/drivers/net/ark/ark_ethdev_rx.c > @@ -302,8 +302,6 @@ eth_ark_recv_pkts(void *rx_queue, > mbuf->pkt_len = 63; > meta->pkt_len = 63; > } > - /* seqn is only set under debug */ > - mbuf->seqn = cons_index; > } > > if (unlikely(meta->pkt_len > ARK_RX_MAX_NOCHAIN)) > @@ -360,8 +358,6 @@ eth_ark_rx_jumbo(struct ark_rx_queue *queue, > mbuf_prev = mbuf; > mbuf->data_len = data_len; > mbuf->data_off = 0; > - if (ARK_DEBUG_CORE) > - mbuf->seqn = cons_index; /* for debug only */ > > cons_index += 1; > } > @@ -667,8 +663,8 @@ dump_mbuf_data(struct rte_mbuf *mbuf, uint16_t lo, uint16_t hi) > { > uint16_t i, j; > > - ARK_PMD_LOG(DEBUG, " MBUF: %p len %d, off: %d, seq: %" PRIU32 "\n", > - mbuf, mbuf->pkt_len, mbuf->data_off, mbuf->seqn); > + ARK_PMD_LOG(DEBUG, " MBUF: %p len %d, off: %d\n", > + mbuf, mbuf->pkt_len, mbuf->data_off); > for (i = lo; i < hi; i += 16) { > uint8_t *dp = RTE_PTR_ADD(mbuf->buf_addr, i); > > -- > 2.23.0 > ^ permalink raw reply [flat|nested] 29+ messages in thread
* [dpdk-dev] [PATCH 4/8] reorder: switch sequence number to dynamic mbuf field 2020-10-27 22:13 [dpdk-dev] [PATCH 0/8] remove mbuf seqn David Marchand ` (2 preceding siblings ...) 2020-10-27 22:13 ` [dpdk-dev] [PATCH 3/8] net/ark: remove use of seqn for debug David Marchand @ 2020-10-27 22:13 ` David Marchand 2020-10-27 22:13 ` [dpdk-dev] [PATCH 5/8] dpaa: switch sequence number to dynamic field David Marchand ` (4 subsequent siblings) 8 siblings, 0 replies; 29+ messages in thread From: David Marchand @ 2020-10-27 22:13 UTC (permalink / raw) To: dev; +Cc: Reshma Pattan, Ray Kinsella, Neil Horman The reorder library used sequence numbers stored in the deprecated field seqn. It is moved to a dynamic mbuf field in order to allow removal of seqn. Signed-off-by: David Marchand <david.marchand@redhat.com> --- app/test/test_reorder.c | 8 ++++---- examples/packet_ordering/main.c | 2 +- lib/librte_reorder/rte_reorder.c | 23 ++++++++++++++++++++--- lib/librte_reorder/rte_reorder.h | 21 +++++++++++++++++++++ lib/librte_reorder/version.map | 6 ++++++ 5 files changed, 52 insertions(+), 8 deletions(-) diff --git a/app/test/test_reorder.c b/app/test/test_reorder.c index 58fa9c71b5..1c4226da65 100644 --- a/app/test/test_reorder.c +++ b/app/test/test_reorder.c @@ -149,7 +149,7 @@ test_reorder_insert(void) for (i = 0; i < num_bufs; i++) { bufs[i] = rte_pktmbuf_alloc(p); TEST_ASSERT_NOT_NULL(bufs[i], "Packet allocation failed\n"); - bufs[i]->seqn = i; + *rte_reorder_seqn(bufs[i]) = i; } /* This should fill up order buffer: @@ -183,7 +183,7 @@ test_reorder_insert(void) bufs[4] = NULL; /* early packet from current sequence window - full ready buffer */ - bufs[5]->seqn = 2 * size; + *rte_reorder_seqn(bufs[5]) = 2 * size; ret = rte_reorder_insert(b, bufs[5]); if (!((ret == -1) && (rte_errno == ENOSPC))) { printf("%s:%d: No error inserting early packet with full ready buffer\n", @@ -194,7 +194,7 @@ test_reorder_insert(void) bufs[5] = NULL; /* late packet */ - bufs[6]->seqn = 3 * size; + *rte_reorder_seqn(bufs[6]) = 3 * size; ret = rte_reorder_insert(b, bufs[6]); if (!((ret == -1) && (rte_errno == ERANGE))) { printf("%s:%d: No error inserting late packet with seqn:" @@ -250,7 +250,7 @@ test_reorder_drain(void) for (i = 0; i < num_bufs; i++) { bufs[i] = rte_pktmbuf_alloc(p); TEST_ASSERT_NOT_NULL(bufs[i], "Packet allocation failed\n"); - bufs[i]->seqn = i; + *rte_reorder_seqn(bufs[i]) = i; } /* Insert packet with seqn 1: diff --git a/examples/packet_ordering/main.c b/examples/packet_ordering/main.c index a79d77a321..4bea1982d5 100644 --- a/examples/packet_ordering/main.c +++ b/examples/packet_ordering/main.c @@ -451,7 +451,7 @@ rx_thread(struct rte_ring *ring_out) /* mark sequence number */ for (i = 0; i < nb_rx_pkts; ) - pkts[i++]->seqn = seqn++; + *rte_reorder_seqn(pkts[i++]) = seqn++; /* enqueue to rx_to_workers ring */ ret = rte_ring_enqueue_burst(ring_out, diff --git a/lib/librte_reorder/rte_reorder.c b/lib/librte_reorder/rte_reorder.c index 3c9f0e2d08..9445853b79 100644 --- a/lib/librte_reorder/rte_reorder.c +++ b/lib/librte_reorder/rte_reorder.c @@ -8,6 +8,7 @@ #include <rte_string_fns.h> #include <rte_log.h> #include <rte_mbuf.h> +#include <rte_mbuf_dyn.h> #include <rte_eal_memconfig.h> #include <rte_errno.h> #include <rte_malloc.h> @@ -29,6 +30,9 @@ EAL_REGISTER_TAILQ(rte_reorder_tailq) /* Macros for printing using RTE_LOG */ #define RTE_LOGTYPE_REORDER RTE_LOGTYPE_USER1 +#define RTE_REORDER_SEQN_DYNFIELD_NAME "rte_reorder_seqn_dynfield" +int rte_reorder_seqn_dynfield_offset = -1; + /* A generic circular 
buffer */ struct cir_buffer { unsigned int size; /**< Number of entries that can be stored */ @@ -103,6 +107,11 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size) struct rte_reorder_list *reorder_list; const unsigned int bufsize = sizeof(struct rte_reorder_buffer) + (2 * size * sizeof(struct rte_mbuf *)); + static const struct rte_mbuf_dynfield reorder_seqn_dynfield_desc = { + .name = RTE_REORDER_SEQN_DYNFIELD_NAME, + .size = sizeof(rte_reorder_seqn_t), + .align = __alignof__(rte_reorder_seqn_t), + }; reorder_list = RTE_TAILQ_CAST(rte_reorder_tailq.head, rte_reorder_list); @@ -120,6 +129,14 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size) return NULL; } + rte_reorder_seqn_dynfield_offset = + rte_mbuf_dynfield_register(&reorder_seqn_dynfield_desc); + if (rte_reorder_seqn_dynfield_offset < 0) { + RTE_LOG(ERR, REORDER, "Failed to register mbuf field for reorder sequence number\n"); + rte_errno = ENOMEM; + return NULL; + } + rte_mcfg_tailq_write_lock(); /* guarantee there's no existing */ @@ -310,7 +327,7 @@ rte_reorder_insert(struct rte_reorder_buffer *b, struct rte_mbuf *mbuf) order_buf = &b->order_buf; if (!b->is_initialized) { - b->min_seqn = mbuf->seqn; + b->min_seqn = *rte_reorder_seqn(mbuf); b->is_initialized = 1; } @@ -322,7 +339,7 @@ rte_reorder_insert(struct rte_reorder_buffer *b, struct rte_mbuf *mbuf) * mbuf_seqn = 0x0010 * offset = 0x0010 - 0xFFFD = 0x13 */ - offset = mbuf->seqn - b->min_seqn; + offset = *rte_reorder_seqn(mbuf) - b->min_seqn; /* * action to take depends on offset. @@ -352,7 +369,7 @@ rte_reorder_insert(struct rte_reorder_buffer *b, struct rte_mbuf *mbuf) rte_errno = ENOSPC; return -1; } - offset = mbuf->seqn - b->min_seqn; + offset = *rte_reorder_seqn(mbuf) - b->min_seqn; position = (order_buf->head + offset) & order_buf->mask; order_buf->entries[position] = mbuf; } else { diff --git a/lib/librte_reorder/rte_reorder.h b/lib/librte_reorder/rte_reorder.h index 6d39710088..ce6cd598e9 100644 --- a/lib/librte_reorder/rte_reorder.h +++ b/lib/librte_reorder/rte_reorder.h @@ -16,6 +16,7 @@ */ #include <rte_mbuf.h> +#include <rte_mbuf_dyn.h> #ifdef __cplusplus extern "C" { @@ -23,6 +24,26 @@ extern "C" { struct rte_reorder_buffer; +typedef uint32_t rte_reorder_seqn_t; +extern int rte_reorder_seqn_dynfield_offset; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Read reorder sequence number from mbuf. + * + * @param mbuf Structure to read from. + * @return pointer to reorder sequence number. + */ +__rte_experimental +static inline rte_reorder_seqn_t * +rte_reorder_seqn(const struct rte_mbuf *mbuf) +{ + return RTE_MBUF_DYNFIELD(mbuf, rte_reorder_seqn_dynfield_offset, + rte_reorder_seqn_t *); +} + /** * Create a new reorder buffer instance * diff --git a/lib/librte_reorder/version.map b/lib/librte_reorder/version.map index 8c0220d324..d902a7fa12 100644 --- a/lib/librte_reorder/version.map +++ b/lib/librte_reorder/version.map @@ -11,3 +11,9 @@ DPDK_21 { local: *; }; + +EXPERIMENTAL { + global: + + rte_reorder_seqn_dynfield_offset; +}; -- 2.23.0 ^ permalink raw reply [flat|nested] 29+ messages in thread
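For context, the application-side flow after this patch looks roughly as follows. This is a condensed sketch loosely following examples/packet_ordering; setup, the ring plumbing and most error handling are omitted, and buffer (created with rte_reorder_create()), the pkts/pkt mbufs and the local counters are assumed to exist in the surrounding code.

/* RX side: stamp packets with increasing sequence numbers through the
 * new accessor. rte_reorder_create() must have run first, since that is
 * where the dynamic field gets registered. */
for (i = 0; i < nb_rx; i++)
        *rte_reorder_seqn(pkts[i]) = seqn++;

/* TX side: insert packets that may arrive out of order, then drain them
 * back in sequence order. On insert failure rte_errno is ERANGE or
 * ENOSPC, meaning the packet falls outside the reorder window; a real
 * application decides whether to drop it or send it out as-is. */
if (rte_reorder_insert(buffer, pkt) < 0)
        rte_pktmbuf_free(pkt);

nb_out = rte_reorder_drain(buffer, out_pkts, MAX_BURST);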
* [dpdk-dev] [PATCH 5/8] dpaa: switch sequence number to dynamic field 2020-10-27 22:13 [dpdk-dev] [PATCH 0/8] remove mbuf seqn David Marchand ` (3 preceding siblings ...) 2020-10-27 22:13 ` [dpdk-dev] [PATCH 4/8] reorder: switch sequence number to dynamic mbuf field David Marchand @ 2020-10-27 22:13 ` David Marchand 2020-10-27 22:13 ` [dpdk-dev] [PATCH 6/8] fslmc: " David Marchand ` (3 subsequent siblings) 8 siblings, 0 replies; 29+ messages in thread From: David Marchand @ 2020-10-27 22:13 UTC (permalink / raw) To: dev Cc: Hemant Agrawal, Sachin Saxena, Ray Kinsella, Neil Horman, Akhil Goyal, Nipun Gupta The dpaa drivers have been hacking the deprecated field seqn for internal features. It is moved to a dynamic mbuf field in order to allow removal of seqn. Signed-off-by: David Marchand <david.marchand@redhat.com> --- drivers/bus/dpaa/dpaa_bus.c | 16 ++++++++++++++++ drivers/bus/dpaa/rte_dpaa_bus.h | 28 ++++++++++++++++++++++++++++ drivers/bus/dpaa/version.map | 1 + drivers/crypto/dpaa_sec/dpaa_sec.c | 6 +++--- drivers/event/dpaa/dpaa_eventdev.c | 6 +++--- drivers/net/dpaa/dpaa_ethdev.h | 7 ------- drivers/net/dpaa/dpaa_rxtx.c | 6 +++--- 7 files changed, 54 insertions(+), 16 deletions(-) diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c index c94c72106f..ece6a4c424 100644 --- a/drivers/bus/dpaa/dpaa_bus.c +++ b/drivers/bus/dpaa/dpaa_bus.c @@ -32,6 +32,7 @@ #include <rte_ring.h> #include <rte_bus.h> #include <rte_mbuf_pool_ops.h> +#include <rte_mbuf_dyn.h> #include <dpaa_of.h> #include <rte_dpaa_bus.h> @@ -55,6 +56,9 @@ unsigned int dpaa_svr_family; RTE_DEFINE_PER_LCORE(struct dpaa_portal *, dpaa_io); +#define DPAA_SEQN_DYNFIELD_NAME "dpaa_seqn_dynfield" +int dpaa_seqn_dynfield_offset = -1; + struct fm_eth_port_cfg * dpaa_get_eth_port_cfg(int dev_id) { @@ -251,6 +255,11 @@ dpaa_clean_device_list(void) int rte_dpaa_portal_init(void *arg) { + static const struct rte_mbuf_dynfield dpaa_seqn_dynfield_desc = { + .name = DPAA_SEQN_DYNFIELD_NAME, + .size = sizeof(dpaa_seqn_t), + .align = __alignof__(dpaa_seqn_t), + }; unsigned int cpu, lcore = rte_lcore_id(); int ret; @@ -264,6 +273,13 @@ int rte_dpaa_portal_init(void *arg) cpu = rte_lcore_to_cpu_id(lcore); + dpaa_seqn_dynfield_offset = + rte_mbuf_dynfield_register(&dpaa_seqn_dynfield_desc); + if (dpaa_seqn_dynfield_offset < 0) { + DPAA_BUS_LOG(ERR, "Failed to register mbuf field for dpaa sequence number\n"); + return -rte_errno; + } + /* Initialise bman thread portals */ ret = bman_thread_init(); if (ret) { diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h index fdaa63a09b..5b8c44df72 100644 --- a/drivers/bus/dpaa/rte_dpaa_bus.h +++ b/drivers/bus/dpaa/rte_dpaa_bus.h @@ -7,6 +7,7 @@ #define __RTE_DPAA_BUS_H__ #include <rte_bus.h> +#include <rte_mbuf_dyn.h> #include <rte_mempool.h> #include <dpaax_iova_table.h> @@ -16,6 +17,33 @@ #include <fsl_bman.h> #include <netcfg.h> +/* This sequence number field is used to store event entry index for + * driver specific usage. For parallel mode queues, invalid + * index will be set and for atomic mode queues, valid value + * ranging from 1 to 16. + */ +#define DPAA_INVALID_MBUF_SEQN 0 + +typedef uint32_t dpaa_seqn_t; +extern int dpaa_seqn_dynfield_offset; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Read dpaa sequence number from mbuf. + * + * @param mbuf Structure to read from. + * @return pointer to dpaa sequence number. 
+ */ +__rte_experimental +static inline dpaa_seqn_t * +dpaa_seqn(const struct rte_mbuf *mbuf) +{ + return RTE_MBUF_DYNFIELD(mbuf, dpaa_seqn_dynfield_offset, + dpaa_seqn_t *); +} + #define DPAA_MEMPOOL_OPS_NAME "dpaa" #define DEV_TO_DPAA_DEVICE(ptr) \ diff --git a/drivers/bus/dpaa/version.map b/drivers/bus/dpaa/version.map index 9bd2601213..fe4f9ac5aa 100644 --- a/drivers/bus/dpaa/version.map +++ b/drivers/bus/dpaa/version.map @@ -14,6 +14,7 @@ INTERNAL { dpaa_get_qm_channel_pool; dpaa_get_link_status; dpaa_restart_link_autoneg; + dpaa_seqn_dynfield_offset; dpaa_update_link_speed; dpaa_intr_disable; dpaa_intr_enable; diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c index 55f457ac9a..44c742738f 100644 --- a/drivers/crypto/dpaa_sec/dpaa_sec.c +++ b/drivers/crypto/dpaa_sec/dpaa_sec.c @@ -1721,8 +1721,8 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops, DPAA_SEC_BURST : nb_ops; for (loop = 0; loop < frames_to_send; loop++) { op = *(ops++); - if (op->sym->m_src->seqn != 0) { - index = op->sym->m_src->seqn - 1; + if (*dpaa_seqn(op->sym->m_src) != 0) { + index = *dpaa_seqn(op->sym->m_src) - 1; if (DPAA_PER_LCORE_DQRR_HELD & (1 << index)) { /* QM_EQCR_DCA_IDXMASK = 0x0f */ flags[loop] = ((index & 0x0f) << 8); @@ -3212,7 +3212,7 @@ dpaa_sec_process_atomic_event(void *event, DPAA_PER_LCORE_DQRR_HELD |= 1 << index; DPAA_PER_LCORE_DQRR_MBUF(index) = ctx->op->sym->m_src; ev->impl_opaque = index + 1; - ctx->op->sym->m_src->seqn = (uint32_t)index + 1; + *dpaa_seqn(ctx->op->sym->m_src) = (uint32_t)index + 1; *bufs = (void *)ctx->op; rte_mempool_put(ctx->ctx_pool, (void *)ctx); diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c index 07cd079768..01ddd0eb63 100644 --- a/drivers/event/dpaa/dpaa_eventdev.c +++ b/drivers/event/dpaa/dpaa_eventdev.c @@ -99,7 +99,7 @@ dpaa_event_enqueue_burst(void *port, const struct rte_event ev[], case RTE_EVENT_OP_RELEASE: qman_dca_index(ev[i].impl_opaque, 0); mbuf = DPAA_PER_LCORE_DQRR_MBUF(i); - mbuf->seqn = DPAA_INVALID_MBUF_SEQN; + *dpaa_seqn(mbuf) = DPAA_INVALID_MBUF_SEQN; DPAA_PER_LCORE_DQRR_HELD &= ~(1 << i); DPAA_PER_LCORE_DQRR_SIZE--; break; @@ -206,7 +206,7 @@ dpaa_event_dequeue_burst(void *port, struct rte_event ev[], if (DPAA_PER_LCORE_DQRR_HELD & (1 << i)) { qman_dca_index(i, 0); mbuf = DPAA_PER_LCORE_DQRR_MBUF(i); - mbuf->seqn = DPAA_INVALID_MBUF_SEQN; + *dpaa_seqn(mbuf) = DPAA_INVALID_MBUF_SEQN; DPAA_PER_LCORE_DQRR_HELD &= ~(1 << i); DPAA_PER_LCORE_DQRR_SIZE--; } @@ -276,7 +276,7 @@ dpaa_event_dequeue_burst_intr(void *port, struct rte_event ev[], if (DPAA_PER_LCORE_DQRR_HELD & (1 << i)) { qman_dca_index(i, 0); mbuf = DPAA_PER_LCORE_DQRR_MBUF(i); - mbuf->seqn = DPAA_INVALID_MBUF_SEQN; + *dpaa_seqn(mbuf) = DPAA_INVALID_MBUF_SEQN; DPAA_PER_LCORE_DQRR_HELD &= ~(1 << i); DPAA_PER_LCORE_DQRR_SIZE--; } diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h index 1b8e120e8f..659bceb467 100644 --- a/drivers/net/dpaa/dpaa_ethdev.h +++ b/drivers/net/dpaa/dpaa_ethdev.h @@ -22,13 +22,6 @@ #define DPAA_MBUF_HW_ANNOTATION 64 #define DPAA_FD_PTA_SIZE 64 -/* mbuf->seqn will be used to store event entry index for - * driver specific usage. For parallel mode queues, invalid - * index will be set and for atomic mode queues, valid value - * ranging from 1 to 16. 
- */ -#define DPAA_INVALID_MBUF_SEQN 0 - /* we will re-use the HEADROOM for annotation in RX */ #define DPAA_HW_BUF_RESERVE 0 #define DPAA_PACKET_LAYOUT_ALIGN 64 diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c index e4f012c233..e2459d9b99 100644 --- a/drivers/net/dpaa/dpaa_rxtx.c +++ b/drivers/net/dpaa/dpaa_rxtx.c @@ -649,7 +649,7 @@ dpaa_rx_cb_parallel(void *event, ev->queue_id = fq->ev.queue_id; ev->priority = fq->ev.priority; ev->impl_opaque = (uint8_t)DPAA_INVALID_MBUF_SEQN; - mbuf->seqn = DPAA_INVALID_MBUF_SEQN; + *dpaa_seqn(mbuf) = DPAA_INVALID_MBUF_SEQN; *bufs = mbuf; return qman_cb_dqrr_consume; @@ -683,7 +683,7 @@ dpaa_rx_cb_atomic(void *event, DPAA_PER_LCORE_DQRR_HELD |= 1 << index; DPAA_PER_LCORE_DQRR_MBUF(index) = mbuf; ev->impl_opaque = index + 1; - mbuf->seqn = (uint32_t)index + 1; + *dpaa_seqn(mbuf) = (uint32_t)index + 1; *bufs = mbuf; return qman_cb_dqrr_defer; @@ -1078,7 +1078,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) if (dpaa_svr_family == SVR_LS1043A_FAMILY && (mbuf->data_off & 0x7F) != 0x0) realloc_mbuf = 1; - seqn = mbuf->seqn; + seqn = *dpaa_seqn(mbuf); if (seqn != DPAA_INVALID_MBUF_SEQN) { index = seqn - 1; if (DPAA_PER_LCORE_DQRR_HELD & (1 << index)) { -- 2.23.0 ^ permalink raw reply [flat|nested] 29+ messages in thread
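As a side note (not part of the patch): code outside the dpaa bus that wants to reuse this field can resolve the offset by name instead of linking against dpaa_seqn_dynfield_offset, for example with the hypothetical helper below.

#include <rte_mbuf_dyn.h>

/* Hypothetical helper: look up the offset registered by the dpaa bus in
 * rte_dpaa_portal_init(). Returns a negative value (rte_errno = ENOENT)
 * if the field has not been registered yet. */
static int
find_dpaa_seqn_offset(void)
{
        return rte_mbuf_dynfield_lookup("dpaa_seqn_dynfield", NULL);
}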
* [dpdk-dev] [PATCH 6/8] fslmc: switch sequence number to dynamic field 2020-10-27 22:13 [dpdk-dev] [PATCH 0/8] remove mbuf seqn David Marchand ` (4 preceding siblings ...) 2020-10-27 22:13 ` [dpdk-dev] [PATCH 5/8] dpaa: switch sequence number to dynamic field David Marchand @ 2020-10-27 22:13 ` David Marchand 2020-10-27 22:13 ` [dpdk-dev] [PATCH 7/8] event: " David Marchand ` (2 subsequent siblings) 8 siblings, 0 replies; 29+ messages in thread From: David Marchand @ 2020-10-27 22:13 UTC (permalink / raw) To: dev Cc: Hemant Agrawal, Sachin Saxena, Ray Kinsella, Neil Horman, Akhil Goyal, Nipun Gupta The dpaa2 drivers have been hacking the deprecated field seqn for internal features. It is moved to a dynamic mbuf field in order to allow removal of seqn. Signed-off-by: David Marchand <david.marchand@redhat.com> --- drivers/bus/fslmc/fslmc_bus.c | 17 +++++++++++ drivers/bus/fslmc/rte_fslmc.h | 23 +++++++++++++++ drivers/bus/fslmc/version.map | 1 + drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 18 ++++++------ drivers/event/dpaa2/dpaa2_eventdev.c | 9 +++--- drivers/event/dpaa2/dpaa2_eventdev_selftest.c | 4 ++- drivers/mempool/dpaa2/dpaa2_hw_mempool.h | 2 -- drivers/net/dpaa2/dpaa2_rxtx.c | 28 +++++++++---------- 8 files changed, 73 insertions(+), 29 deletions(-) diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c index beb3dd008f..db93669628 100644 --- a/drivers/bus/fslmc/fslmc_bus.c +++ b/drivers/bus/fslmc/fslmc_bus.c @@ -14,6 +14,7 @@ #include <rte_devargs.h> #include <rte_memcpy.h> #include <rte_ethdev_driver.h> +#include <rte_mbuf_dyn.h> #include <rte_fslmc.h> #include <fslmc_vfio.h> @@ -27,6 +28,9 @@ struct rte_fslmc_bus rte_fslmc_bus; uint8_t dpaa2_virt_mode; +#define DPAA2_SEQN_DYNFIELD_NAME "dpaa2_seqn_dynfield" +int dpaa2_seqn_dynfield_offset = -1; + uint32_t rte_fslmc_get_device_count(enum rte_dpaa2_dev_type device_type) { @@ -374,9 +378,22 @@ rte_fslmc_probe(void) struct rte_dpaa2_device *dev; struct rte_dpaa2_driver *drv; + static const struct rte_mbuf_dynfield dpaa2_seqn_dynfield_desc = { + .name = DPAA2_SEQN_DYNFIELD_NAME, + .size = sizeof(dpaa2_seqn_t), + .align = __alignof__(dpaa2_seqn_t), + }; + if (TAILQ_EMPTY(&rte_fslmc_bus.device_list)) return 0; + dpaa2_seqn_dynfield_offset = + rte_mbuf_dynfield_register(&dpaa2_seqn_dynfield_desc); + if (dpaa2_seqn_dynfield_offset < 0) { + DPAA2_BUS_ERR("Failed to register mbuf field for dpaa sequence number"); + return 0; + } + ret = fslmc_vfio_setup_group(); if (ret) { DPAA2_BUS_ERR("Unable to setup VFIO %d", ret); diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h index 80873fffc9..37ca6886f2 100644 --- a/drivers/bus/fslmc/rte_fslmc.h +++ b/drivers/bus/fslmc/rte_fslmc.h @@ -32,11 +32,34 @@ extern "C" { #include <rte_bus.h> #include <rte_tailq.h> #include <rte_devargs.h> +#include <rte_mbuf.h> +#include <rte_mbuf_dyn.h> #include <fslmc_vfio.h> #define FSLMC_OBJECT_MAX_LEN 32 /**< Length of each device on bus */ +#define DPAA2_INVALID_MBUF_SEQN 0 + +typedef uint32_t dpaa2_seqn_t; +extern int dpaa2_seqn_dynfield_offset; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Read dpaa2 sequence number from mbuf. + * + * @param mbuf Structure to read from. + * @return pointer to dpaa2 sequence number. 
+ */ +__rte_experimental +static inline dpaa2_seqn_t * +dpaa2_seqn(const struct rte_mbuf *mbuf) +{ + return RTE_MBUF_DYNFIELD(mbuf, dpaa2_seqn_dynfield_offset, + dpaa2_seqn_t *); +} /** Device driver supports link state interrupt */ #define RTE_DPAA2_DRV_INTR_LSC 0x0008 diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map index b169f5228a..f44c1a7988 100644 --- a/drivers/bus/fslmc/version.map +++ b/drivers/bus/fslmc/version.map @@ -19,6 +19,7 @@ INTERNAL { dpaa2_free_eq_descriptors; dpaa2_get_mcp_ptr; dpaa2_io_portal; + dpaa2_seqn_dynfield_offset; dpaa2_svr_family; dpaa2_virt_mode; dpbp_disable; diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c index afcd6bd063..ce1d50ce77 100644 --- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c +++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c @@ -1472,13 +1472,15 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops, dpaa2_eqcr_size : nb_ops; for (loop = 0; loop < frames_to_send; loop++) { - if ((*ops)->sym->m_src->seqn) { - uint8_t dqrr_index = (*ops)->sym->m_src->seqn - 1; - - flags[loop] = QBMAN_ENQUEUE_FLAG_DCA | dqrr_index; - DPAA2_PER_LCORE_DQRR_SIZE--; - DPAA2_PER_LCORE_DQRR_HELD &= ~(1 << dqrr_index); - (*ops)->sym->m_src->seqn = DPAA2_INVALID_MBUF_SEQN; + if (*dpaa2_seqn((*ops)->sym->m_src)) { + uint8_t dqrr_index = + *dpaa2_seqn((*ops)->sym->m_src) - 1; + + flags[loop] = QBMAN_ENQUEUE_FLAG_DCA | dqrr_index; + DPAA2_PER_LCORE_DQRR_SIZE--; + DPAA2_PER_LCORE_DQRR_HELD &= ~(1 << dqrr_index); + *dpaa2_seqn((*ops)->sym->m_src) = + DPAA2_INVALID_MBUF_SEQN; } /*Clear the unused FD fields before sending*/ @@ -3714,7 +3716,7 @@ dpaa2_sec_process_atomic_event(struct qbman_swp *swp __rte_unused, ev->event_ptr = sec_fd_to_mbuf(fd); dqrr_index = qbman_get_dqrr_idx(dq); - crypto_op->sym->m_src->seqn = dqrr_index + 1; + *dpaa2_seqn(crypto_op->sym->m_src) = dqrr_index + 1; DPAA2_PER_LCORE_DQRR_SIZE++; DPAA2_PER_LCORE_DQRR_HELD |= 1 << dqrr_index; DPAA2_PER_LCORE_DQRR_MBUF(dqrr_index) = crypto_op->sym->m_src; diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c index 95f03c8b9e..eeb2494bd0 100644 --- a/drivers/event/dpaa2/dpaa2_eventdev.c +++ b/drivers/event/dpaa2/dpaa2_eventdev.c @@ -131,8 +131,9 @@ dpaa2_eventdev_enqueue_burst(void *port, const struct rte_event ev[], qbman_eq_desc_set_response(&eqdesc[loop], 0, 0); if (event->sched_type == RTE_SCHED_TYPE_ATOMIC - && event->mbuf->seqn) { - uint8_t dqrr_index = event->mbuf->seqn - 1; + && *dpaa2_seqn(event->mbuf)) { + uint8_t dqrr_index = + *dpaa2_seqn(event->mbuf) - 1; qbman_eq_desc_set_dca(&eqdesc[loop], 1, dqrr_index, 0); @@ -249,7 +250,7 @@ static void dpaa2_eventdev_process_atomic(struct qbman_swp *swp, rte_memcpy(ev, ev_temp, sizeof(struct rte_event)); rte_free(ev_temp); - ev->mbuf->seqn = dqrr_index + 1; + *dpaa2_seqn(ev->mbuf) = dqrr_index + 1; DPAA2_PER_LCORE_DQRR_SIZE++; DPAA2_PER_LCORE_DQRR_HELD |= 1 << dqrr_index; DPAA2_PER_LCORE_DQRR_MBUF(dqrr_index) = ev->mbuf; @@ -314,7 +315,7 @@ dpaa2_eventdev_dequeue_burst(void *port, struct rte_event ev[], if (DPAA2_PER_LCORE_DQRR_HELD & (1 << i)) { qbman_swp_dqrr_idx_consume(swp, i); DPAA2_PER_LCORE_DQRR_SIZE--; - DPAA2_PER_LCORE_DQRR_MBUF(i)->seqn = + *dpaa2_seqn(DPAA2_PER_LCORE_DQRR_MBUF(i)) = DPAA2_INVALID_MBUF_SEQN; } i++; diff --git a/drivers/event/dpaa2/dpaa2_eventdev_selftest.c b/drivers/event/dpaa2/dpaa2_eventdev_selftest.c index 5447db8a8a..cd7311a94d 100644 --- a/drivers/event/dpaa2/dpaa2_eventdev_selftest.c +++ 
b/drivers/event/dpaa2/dpaa2_eventdev_selftest.c @@ -19,6 +19,7 @@ #include <rte_random.h> #include <rte_bus_vdev.h> #include <rte_test.h> +#include <rte_fslmc.h> #include "dpaa2_eventdev.h" #include "dpaa2_eventdev_logs.h" @@ -274,7 +275,8 @@ check_excess_events(uint8_t port) valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0); RTE_TEST_ASSERT_SUCCESS(valid_event, - "Unexpected valid event=%d", ev.mbuf->seqn); + "Unexpected valid event=%d", + *dpaa2_seqn(ev.mbuf)); } return 0; } diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.h b/drivers/mempool/dpaa2/dpaa2_hw_mempool.h index 53fa1552d1..7c493b28e7 100644 --- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.h +++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.h @@ -10,8 +10,6 @@ #define DPAA2_MAX_BUF_POOLS 8 -#define DPAA2_INVALID_MBUF_SEQN 0 - struct buf_pool_cfg { void *addr; /**< The address from where DPAA2 will carve out the buffers */ diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c index 4dd1d5f578..6201de4606 100644 --- a/drivers/net/dpaa2/dpaa2_rxtx.c +++ b/drivers/net/dpaa2/dpaa2_rxtx.c @@ -710,7 +710,7 @@ dpaa2_dev_process_atomic_event(struct qbman_swp *swp __rte_unused, ev->mbuf = eth_fd_to_mbuf(fd, rxq->eth_data->port_id); dqrr_index = qbman_get_dqrr_idx(dq); - ev->mbuf->seqn = dqrr_index + 1; + *dpaa2_seqn(ev->mbuf) = dqrr_index + 1; DPAA2_PER_LCORE_DQRR_SIZE++; DPAA2_PER_LCORE_DQRR_HELD |= 1 << dqrr_index; DPAA2_PER_LCORE_DQRR_MBUF(dqrr_index) = ev->mbuf; @@ -736,9 +736,9 @@ dpaa2_dev_process_ordered_event(struct qbman_swp *swp, ev->mbuf = eth_fd_to_mbuf(fd, rxq->eth_data->port_id); - ev->mbuf->seqn = DPAA2_ENQUEUE_FLAG_ORP; - ev->mbuf->seqn |= qbman_result_DQ_odpid(dq) << DPAA2_EQCR_OPRID_SHIFT; - ev->mbuf->seqn |= qbman_result_DQ_seqnum(dq) << DPAA2_EQCR_SEQNUM_SHIFT; + *dpaa2_seqn(ev->mbuf) = DPAA2_ENQUEUE_FLAG_ORP; + *dpaa2_seqn(ev->mbuf) |= qbman_result_DQ_odpid(dq) << DPAA2_EQCR_OPRID_SHIFT; + *dpaa2_seqn(ev->mbuf) |= qbman_result_DQ_seqnum(dq) << DPAA2_EQCR_SEQNUM_SHIFT; qbman_swp_dqrr_consume(swp, dq); } @@ -1063,14 +1063,14 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) dpaa2_eqcr_size : nb_pkts; for (loop = 0; loop < frames_to_send; loop++) { - if ((*bufs)->seqn) { - uint8_t dqrr_index = (*bufs)->seqn - 1; + if (*dpaa2_seqn(*bufs)) { + uint8_t dqrr_index = *dpaa2_seqn(*bufs) - 1; flags[loop] = QBMAN_ENQUEUE_FLAG_DCA | dqrr_index; DPAA2_PER_LCORE_DQRR_SIZE--; DPAA2_PER_LCORE_DQRR_HELD &= ~(1 << dqrr_index); - (*bufs)->seqn = DPAA2_INVALID_MBUF_SEQN; + *dpaa2_seqn(*bufs) = DPAA2_INVALID_MBUF_SEQN; } if (likely(RTE_MBUF_DIRECT(*bufs))) { @@ -1230,10 +1230,10 @@ dpaa2_set_enqueue_descriptor(struct dpaa2_queue *dpaa2_q, qbman_eq_desc_set_fq(eqdesc, dpaa2_q->fqid); - if (m->seqn & DPAA2_ENQUEUE_FLAG_ORP) { - orpid = (m->seqn & DPAA2_EQCR_OPRID_MASK) >> + if (*dpaa2_seqn(m) & DPAA2_ENQUEUE_FLAG_ORP) { + orpid = (*dpaa2_seqn(m) & DPAA2_EQCR_OPRID_MASK) >> DPAA2_EQCR_OPRID_SHIFT; - seqnum = (m->seqn & DPAA2_EQCR_SEQNUM_MASK) >> + seqnum = (*dpaa2_seqn(m) & DPAA2_EQCR_SEQNUM_MASK) >> DPAA2_EQCR_SEQNUM_SHIFT; if (!priv->en_loose_ordered) { @@ -1255,12 +1255,12 @@ dpaa2_set_enqueue_descriptor(struct dpaa2_queue *dpaa2_q, qbman_eq_desc_set_orp(eqdesc, 0, orpid, seqnum, 0); } } else { - dq_idx = m->seqn - 1; + dq_idx = *dpaa2_seqn(m) - 1; qbman_eq_desc_set_dca(eqdesc, 1, dq_idx, 0); DPAA2_PER_LCORE_DQRR_SIZE--; DPAA2_PER_LCORE_DQRR_HELD &= ~(1 << dq_idx); } - m->seqn = DPAA2_INVALID_MBUF_SEQN; + *dpaa2_seqn(m) = DPAA2_INVALID_MBUF_SEQN; } /* Callback to handle sending ordered 
packets through WRIOP based interface */ @@ -1314,7 +1314,7 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) dpaa2_eqcr_size : nb_pkts; if (!priv->en_loose_ordered) { - if ((*bufs)->seqn & DPAA2_ENQUEUE_FLAG_ORP) { + if (*dpaa2_seqn(*bufs) & DPAA2_ENQUEUE_FLAG_ORP) { num_free_eq_desc = dpaa2_free_eq_descriptors(); if (num_free_eq_desc < frames_to_send) frames_to_send = num_free_eq_desc; @@ -1325,7 +1325,7 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) /*Prepare enqueue descriptor*/ qbman_eq_desc_clear(&eqdesc[loop]); - if ((*bufs)->seqn) { + if (*dpaa2_seqn(*bufs)) { /* Use only queue 0 for Tx in case of atomic/ * ordered packets as packets can get unordered * when being tranmitted out from the interface -- 2.23.0 ^ permalink raw reply [flat|nested] 29+ messages in thread
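For ordered queues, the dpaa2 driver packs several values into the same 32-bit field (see dpaa2_dev_process_ordered_event() and dpaa2_set_enqueue_descriptor() in the diff above). A decode helper would look roughly like the sketch below; the mask/shift macros come from the existing dpaa2 headers, while the helper name and signature are only illustrative.

/* Valid only when the DPAA2_ENQUEUE_FLAG_ORP bit is set in the field. */
static inline void
dpaa2_decode_seqn(const struct rte_mbuf *m, uint16_t *orpid, uint16_t *seqnum)
{
        uint32_t v = *dpaa2_seqn(m);

        *orpid = (v & DPAA2_EQCR_OPRID_MASK) >> DPAA2_EQCR_OPRID_SHIFT;
        *seqnum = (v & DPAA2_EQCR_SEQNUM_MASK) >> DPAA2_EQCR_SEQNUM_SHIFT;
}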
* [dpdk-dev] [PATCH 7/8] event: switch sequence number to dynamic field 2020-10-27 22:13 [dpdk-dev] [PATCH 0/8] remove mbuf seqn David Marchand ` (5 preceding siblings ...) 2020-10-27 22:13 ` [dpdk-dev] [PATCH 6/8] fslmc: " David Marchand @ 2020-10-27 22:13 ` David Marchand 2020-10-27 22:18 ` David Marchand 2020-10-28 7:27 ` Jerin Jacob 2020-10-27 22:13 ` [dpdk-dev] [PATCH 8/8] mbuf: remove seqn field David Marchand 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 0/9] remove mbuf seqn David Marchand 8 siblings, 2 replies; 29+ messages in thread From: David Marchand @ 2020-10-27 22:13 UTC (permalink / raw) To: dev Cc: Jerin Jacob, Pavan Nikhilesh, Liang Ma, Peter Mccarthy, Harry van Haaren, Ray Kinsella, Neil Horman The eventdev drivers have been hacking the deprecated field seqn for internal test usage. It is moved to a dynamic mbuf field in order to allow removal of seqn. Signed-off-by: David Marchand <david.marchand@redhat.com> --- app/test-eventdev/evt_main.c | 3 ++ app/test-eventdev/test_order_common.c | 2 +- app/test-eventdev/test_order_common.h | 5 ++- drivers/event/octeontx/ssovf_evdev_selftest.c | 32 ++++++++-------- drivers/event/octeontx2/otx2_evdev_selftest.c | 31 +++++++-------- drivers/event/opdl/opdl_test.c | 8 ++-- drivers/event/sw/sw_evdev_selftest.c | 34 +++++++++-------- lib/librte_eventdev/rte_eventdev.c | 21 +++++++++- lib/librte_eventdev/rte_eventdev.h | 38 ++++++++++++++++--- lib/librte_eventdev/version.map | 2 + 10 files changed, 116 insertions(+), 60 deletions(-) diff --git a/app/test-eventdev/evt_main.c b/app/test-eventdev/evt_main.c index a8d304bab3..832bb21d7c 100644 --- a/app/test-eventdev/evt_main.c +++ b/app/test-eventdev/evt_main.c @@ -89,6 +89,9 @@ main(int argc, char **argv) if (!evdevs) rte_panic("no eventdev devices found\n"); + if (rte_event_test_seqn_dynfield_register() < 0) + rte_panic("failed to register event dev sequence number\n"); + /* Populate the default values of the options */ evt_options_default(&opt); diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c index c5f7317440..d15ff80273 100644 --- a/app/test-eventdev/test_order_common.c +++ b/app/test-eventdev/test_order_common.c @@ -50,7 +50,7 @@ order_producer(void *arg) const flow_id_t flow = (uintptr_t)m % nb_flows; /* Maintain seq number per flow */ - m->seqn = producer_flow_seq[flow]++; + *rte_event_test_seqn(m) = producer_flow_seq[flow]++; flow_id_save(flow, m, &ev); while (rte_event_enqueue_burst(dev_id, port, &ev, 1) != 1) { diff --git a/app/test-eventdev/test_order_common.h b/app/test-eventdev/test_order_common.h index 9e3415e421..d4ad31da46 100644 --- a/app/test-eventdev/test_order_common.h +++ b/app/test-eventdev/test_order_common.h @@ -89,9 +89,10 @@ order_process_stage_1(struct test_order *const t, { const uint32_t flow = (uintptr_t)ev->mbuf % nb_flows; /* compare the seqn against expected value */ - if (ev->mbuf->seqn != expected_flow_seq[flow]) { + if (*rte_event_test_seqn(ev->mbuf) != expected_flow_seq[flow]) { evt_err("flow=%x seqn mismatch got=%x expected=%x", - flow, ev->mbuf->seqn, expected_flow_seq[flow]); + flow, *rte_event_test_seqn(ev->mbuf), + expected_flow_seq[flow]); t->err = true; rte_smp_wmb(); } diff --git a/drivers/event/octeontx/ssovf_evdev_selftest.c b/drivers/event/octeontx/ssovf_evdev_selftest.c index 7a2b7ded25..b99889e2cc 100644 --- a/drivers/event/octeontx/ssovf_evdev_selftest.c +++ b/drivers/event/octeontx/ssovf_evdev_selftest.c @@ -300,7 +300,7 @@ inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type, m = 
rte_pktmbuf_alloc(eventdev_test_mempool); RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed"); - m->seqn = i; + *rte_event_test_seqn(m) = i; update_event_and_validation_attr(m, &ev, flow_id, event_type, sub_event_type, sched_type, queue, port); rte_event_enqueue_burst(evdev, port, &ev, 1); @@ -320,7 +320,8 @@ check_excess_events(uint8_t port) valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0); RTE_TEST_ASSERT_SUCCESS(valid_event, - "Unexpected valid event=%d", ev.mbuf->seqn); + "Unexpected valid event=%d", + *rte_event_test_seqn(ev.mbuf)); } return 0; } @@ -425,8 +426,9 @@ static int validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev) { RTE_SET_USED(port); - RTE_TEST_ASSERT_EQUAL(index, ev->mbuf->seqn, "index=%d != seqn=%d", - index, ev->mbuf->seqn); + RTE_TEST_ASSERT_EQUAL(index, *rte_event_test_seqn(ev->mbuf), + "index=%d != seqn=%d", index, + *rte_event_test_seqn(ev->mbuf)); return 0; } @@ -509,10 +511,10 @@ validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev) expected_val += ev->queue_id; RTE_SET_USED(port); - RTE_TEST_ASSERT_EQUAL(ev->mbuf->seqn, expected_val, - "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d", - ev->mbuf->seqn, index, expected_val, range, - queue_count, MAX_EVENTS); + RTE_TEST_ASSERT_EQUAL(*rte_event_test_seqn(ev->mbuf), expected_val, + "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d", + *rte_event_test_seqn(ev->mbuf), index, expected_val, range, + queue_count, MAX_EVENTS); return 0; } @@ -537,7 +539,7 @@ test_multi_queue_priority(void) m = rte_pktmbuf_alloc(eventdev_test_mempool); RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed"); - m->seqn = i; + *rte_event_test_seqn(m) = i; queue = i % queue_count; update_event_and_validation_attr(m, &ev, 0, RTE_EVENT_TYPE_CPU, 0, RTE_SCHED_TYPE_PARALLEL, queue, 0); @@ -904,7 +906,7 @@ worker_flow_based_pipeline(void *arg) ev.op = RTE_EVENT_OP_FORWARD; rte_event_enqueue_burst(evdev, port, &ev, 1); } else if (ev.sub_event_type == 1) { /* Events from stage 1*/ - if (seqn_list_update(ev.mbuf->seqn) == 0) { + if (seqn_list_update(*rte_event_test_seqn(ev.mbuf)) == 0) { rte_pktmbuf_free(ev.mbuf); rte_atomic32_sub(total_events, 1); } else { @@ -939,7 +941,7 @@ test_multiport_flow_sched_type_test(uint8_t in_sched_type, return 0; } - /* Injects events with m->seqn=0 to total_events */ + /* Injects events with a 0 sequence number to total_events */ ret = inject_events( 0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */, @@ -1059,7 +1061,7 @@ worker_group_based_pipeline(void *arg) ev.op = RTE_EVENT_OP_FORWARD; rte_event_enqueue_burst(evdev, port, &ev, 1); } else if (ev.queue_id == 1) { /* Events from stage 1(group 1)*/ - if (seqn_list_update(ev.mbuf->seqn) == 0) { + if (seqn_list_update(*rte_event_test_seqn(ev.mbuf)) == 0) { rte_pktmbuf_free(ev.mbuf); rte_atomic32_sub(total_events, 1); } else { @@ -1101,7 +1103,7 @@ test_multiport_queue_sched_type_test(uint8_t in_sched_type, return 0; } - /* Injects events with m->seqn=0 to total_events */ + /* Injects events with a 0 sequence number to total_events */ ret = inject_events( 0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */, @@ -1238,7 +1240,7 @@ launch_multi_port_max_stages_random_sched_type(int (*fn)(void *)) return 0; } - /* Injects events with m->seqn=0 to total_events */ + /* Injects events with a 0 sequence number to total_events */ ret = inject_events( 0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */, @@ -1360,7 +1362,7 @@ worker_ordered_flow_producer(void *arg) if (m == NULL) 
continue; - m->seqn = counter++; + *rte_event_test_seqn(m) = counter++; struct rte_event ev = {.event = 0, .u64 = 0}; diff --git a/drivers/event/octeontx2/otx2_evdev_selftest.c b/drivers/event/octeontx2/otx2_evdev_selftest.c index 334a9ccb7c..c6381ac785 100644 --- a/drivers/event/octeontx2/otx2_evdev_selftest.c +++ b/drivers/event/octeontx2/otx2_evdev_selftest.c @@ -279,7 +279,7 @@ inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type, m = rte_pktmbuf_alloc(eventdev_test_mempool); RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed"); - m->seqn = i; + *rte_event_test_seqn(m) = i; update_event_and_validation_attr(m, &ev, flow_id, event_type, sub_event_type, sched_type, queue, port); @@ -301,7 +301,7 @@ check_excess_events(uint8_t port) RTE_TEST_ASSERT_SUCCESS(valid_event, "Unexpected valid event=%d", - ev.mbuf->seqn); + *rte_event_test_seqn(ev.mbuf)); } return 0; } @@ -406,8 +406,9 @@ static int validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev) { RTE_SET_USED(port); - RTE_TEST_ASSERT_EQUAL(index, ev->mbuf->seqn, "index=%d != seqn=%d", - index, ev->mbuf->seqn); + RTE_TEST_ASSERT_EQUAL(index, *rte_event_test_seqn(ev->mbuf), + "index=%d != seqn=%d", + index, *rte_event_test_seqn(ev->mbuf)); return 0; } @@ -493,10 +494,10 @@ validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev) expected_val += ev->queue_id; RTE_SET_USED(port); - RTE_TEST_ASSERT_EQUAL(ev->mbuf->seqn, expected_val, - "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d", - ev->mbuf->seqn, index, expected_val, range, - queue_count, MAX_EVENTS); + RTE_TEST_ASSERT_EQUAL(*rte_event_test_seqn(ev->mbuf), expected_val, + "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d", + *rte_event_test_seqn(ev->mbuf), index, expected_val, range, + queue_count, MAX_EVENTS); return 0; } @@ -523,7 +524,7 @@ test_multi_queue_priority(void) m = rte_pktmbuf_alloc(eventdev_test_mempool); RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed"); - m->seqn = i; + *rte_event_test_seqn(m) = i; queue = i % queue_count; update_event_and_validation_attr(m, &ev, 0, RTE_EVENT_TYPE_CPU, 0, RTE_SCHED_TYPE_PARALLEL, @@ -888,7 +889,7 @@ worker_flow_based_pipeline(void *arg) ev.op = RTE_EVENT_OP_FORWARD; rte_event_enqueue_burst(evdev, port, &ev, 1); } else if (ev.sub_event_type == 1) { /* Events from stage 1*/ - if (seqn_list_update(ev.mbuf->seqn) == 0) { + if (seqn_list_update(*rte_event_test_seqn(ev.mbuf)) == 0) { rte_pktmbuf_free(ev.mbuf); rte_atomic32_sub(total_events, 1); } else { @@ -923,7 +924,7 @@ test_multiport_flow_sched_type_test(uint8_t in_sched_type, return 0; } - /* Injects events with m->seqn=0 to total_events */ + /* Injects events with a 0 sequence number to total_events */ ret = inject_events(0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */, 0 /* sub_event_type (stage 0) */, @@ -1043,7 +1044,7 @@ worker_group_based_pipeline(void *arg) ev.op = RTE_EVENT_OP_FORWARD; rte_event_enqueue_burst(evdev, port, &ev, 1); } else if (ev.queue_id == 1) { /* Events from stage 1(group 1)*/ - if (seqn_list_update(ev.mbuf->seqn) == 0) { + if (seqn_list_update(*rte_event_test_seqn(ev.mbuf)) == 0) { rte_pktmbuf_free(ev.mbuf); rte_atomic32_sub(total_events, 1); } else { @@ -1084,7 +1085,7 @@ test_multiport_queue_sched_type_test(uint8_t in_sched_type, return 0; } - /* Injects events with m->seqn=0 to total_events */ + /* Injects events with a 0 sequence number to total_events */ ret = inject_events(0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */, 0 /* sub_event_type (stage 0) 
*/, @@ -1222,7 +1223,7 @@ launch_multi_port_max_stages_random_sched_type(int (*fn)(void *)) return 0; } - /* Injects events with m->seqn=0 to total_events */ + /* Injects events with a 0 sequence number to total_events */ ret = inject_events(0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */, 0 /* sub_event_type (stage 0) */, @@ -1348,7 +1349,7 @@ worker_ordered_flow_producer(void *arg) if (m == NULL) continue; - m->seqn = counter++; + *rte_event_test_seqn(m) = counter++; struct rte_event ev = {.event = 0, .u64 = 0}; diff --git a/drivers/event/opdl/opdl_test.c b/drivers/event/opdl/opdl_test.c index e7a32fbd31..cbf33d38f7 100644 --- a/drivers/event/opdl/opdl_test.c +++ b/drivers/event/opdl/opdl_test.c @@ -256,7 +256,7 @@ ordered_basic(struct test *t) ev.queue_id = t->qid[0]; ev.op = RTE_EVENT_OP_NEW; ev.mbuf = mbufs[i]; - mbufs[i]->seqn = MAGIC_SEQN + i; + *rte_event_test_seqn(mbufs[i]) = MAGIC_SEQN + i; /* generate pkt and enqueue */ err = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1); @@ -281,7 +281,7 @@ ordered_basic(struct test *t) rte_event_dev_dump(evdev, stdout); return -1; } - seq = deq_ev[i].mbuf->seqn - MAGIC_SEQN; + seq = *rte_event_test_seqn(deq_ev[i].mbuf) - MAGIC_SEQN; if (seq != (i-1)) { PMD_DRV_LOG(ERR, " seq test failed ! eq is %d , " @@ -396,7 +396,7 @@ atomic_basic(struct test *t) ev.op = RTE_EVENT_OP_NEW; ev.flow_id = 1; ev.mbuf = mbufs[i]; - mbufs[i]->seqn = MAGIC_SEQN + i; + *rte_event_test_seqn(mbufs[i]) = MAGIC_SEQN + i; /* generate pkt and enqueue */ err = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1); @@ -625,7 +625,7 @@ single_link_w_stats(struct test *t) ev.queue_id = t->qid[0]; ev.op = RTE_EVENT_OP_NEW; ev.mbuf = mbufs[i]; - mbufs[i]->seqn = 1234 + i; + *rte_event_test_seqn(mbufs[i]) = 1234 + i; /* generate pkt and enqueue */ err = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1); diff --git a/drivers/event/sw/sw_evdev_selftest.c b/drivers/event/sw/sw_evdev_selftest.c index ad4fc0eed7..47f5b55651 100644 --- a/drivers/event/sw/sw_evdev_selftest.c +++ b/drivers/event/sw/sw_evdev_selftest.c @@ -380,7 +380,7 @@ run_prio_packet_test(struct test *t) printf("%d: gen of pkt failed\n", __LINE__); return -1; } - arp->seqn = MAGIC_SEQN[i]; + *rte_event_test_seqn(arp) = MAGIC_SEQN[i]; ev = (struct rte_event){ .priority = PRIORITY[i], @@ -419,7 +419,7 @@ run_prio_packet_test(struct test *t) rte_event_dev_dump(evdev, stdout); return -1; } - if (ev.mbuf->seqn != MAGIC_SEQN[1]) { + if (*rte_event_test_seqn(ev.mbuf) != MAGIC_SEQN[1]) { printf("%d: first packet out not highest priority\n", __LINE__); rte_event_dev_dump(evdev, stdout); @@ -433,7 +433,7 @@ run_prio_packet_test(struct test *t) rte_event_dev_dump(evdev, stdout); return -1; } - if (ev2.mbuf->seqn != MAGIC_SEQN[0]) { + if (*rte_event_test_seqn(ev2.mbuf) != MAGIC_SEQN[0]) { printf("%d: second packet out not lower priority\n", __LINE__); rte_event_dev_dump(evdev, stdout); @@ -477,7 +477,7 @@ test_single_directed_packet(struct test *t) } const uint32_t MAGIC_SEQN = 4711; - arp->seqn = MAGIC_SEQN; + *rte_event_test_seqn(arp) = MAGIC_SEQN; /* generate pkt and enqueue */ err = rte_event_enqueue_burst(evdev, rx_enq, &ev, 1); @@ -516,7 +516,7 @@ test_single_directed_packet(struct test *t) return -1; } - if (ev.mbuf->seqn != MAGIC_SEQN) { + if (*rte_event_test_seqn(ev.mbuf) != MAGIC_SEQN) { printf("%d: error magic sequence number not dequeued\n", __LINE__); return -1; @@ -934,7 +934,7 @@ xstats_tests(struct test *t) ev.op = RTE_EVENT_OP_NEW; ev.mbuf = arp; ev.flow_id = 7; - arp->seqn = i; + 
*rte_event_test_seqn(arp) = i; int err = rte_event_enqueue_burst(evdev, t->port[0], &ev, 1); if (err != 1) { @@ -1485,7 +1485,7 @@ xstats_id_reset_tests(struct test *t) ev.queue_id = t->qid[i]; ev.op = RTE_EVENT_OP_NEW; ev.mbuf = arp; - arp->seqn = i; + *rte_event_test_seqn(arp) = i; int err = rte_event_enqueue_burst(evdev, t->port[0], &ev, 1); if (err != 1) { @@ -1873,7 +1873,7 @@ qid_priorities(struct test *t) ev.queue_id = t->qid[i]; ev.op = RTE_EVENT_OP_NEW; ev.mbuf = arp; - arp->seqn = i; + *rte_event_test_seqn(arp) = i; int err = rte_event_enqueue_burst(evdev, t->port[0], &ev, 1); if (err != 1) { @@ -1894,7 +1894,7 @@ qid_priorities(struct test *t) return -1; } for (i = 0; i < 3; i++) { - if (ev[i].mbuf->seqn != 2-i) { + if (*rte_event_test_seqn(ev[i].mbuf) != 2-i) { printf( "%d: qid priority test: seqn %d incorrectly prioritized\n", __LINE__, i); @@ -2371,7 +2371,7 @@ single_packet(struct test *t) ev.mbuf = arp; ev.queue_id = 0; ev.flow_id = 3; - arp->seqn = MAGIC_SEQN; + *rte_event_test_seqn(arp) = MAGIC_SEQN; err = rte_event_enqueue_burst(evdev, t->port[rx_enq], &ev, 1); if (err != 1) { @@ -2411,7 +2411,7 @@ single_packet(struct test *t) } err = test_event_dev_stats_get(evdev, &stats); - if (ev.mbuf->seqn != MAGIC_SEQN) { + if (*rte_event_test_seqn(ev.mbuf) != MAGIC_SEQN) { printf("%d: magic sequence number not dequeued\n", __LINE__); return -1; } @@ -2684,7 +2684,7 @@ parallel_basic(struct test *t, int check_order) ev.queue_id = t->qid[0]; ev.op = RTE_EVENT_OP_NEW; ev.mbuf = mbufs[i]; - mbufs[i]->seqn = MAGIC_SEQN + i; + *rte_event_test_seqn(mbufs[i]) = MAGIC_SEQN + i; /* generate pkt and enqueue */ err = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1); @@ -2739,10 +2739,12 @@ parallel_basic(struct test *t, int check_order) /* Check to see if the sequence numbers are in expected order */ if (check_order) { for (j = 0 ; j < deq_pkts ; j++) { - if (deq_ev[j].mbuf->seqn != MAGIC_SEQN + j) { - printf( - "%d: Incorrect sequence number(%d) from port %d\n", - __LINE__, mbufs_out[j]->seqn, tx_port); + if (*rte_event_test_seqn(deq_ev[j].mbuf) != + MAGIC_SEQN + j) { + printf("%d: Incorrect sequence number(%d) from port %d\n", + __LINE__, + *rte_event_test_seqn(mbufs_out[j]), + tx_port); return -1; } } diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c index 322453c532..61ff6d3404 100644 --- a/lib/librte_eventdev/rte_eventdev.c +++ b/lib/librte_eventdev/rte_eventdev.c @@ -109,6 +109,22 @@ rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info) return 0; } +#define RTE_EVENT_TEST_SEQN_DYNFIELD_NAME "rte_event_test_seqn_dynfield" +int rte_event_test_seqn_dynfield_offset = -1; + +int +rte_event_test_seqn_dynfield_register(void) +{ + static const struct rte_mbuf_dynfield event_test_seqn_dynfield_desc = { + .name = RTE_EVENT_TEST_SEQN_DYNFIELD_NAME, + .size = sizeof(rte_event_test_seqn_t), + .align = __alignof__(rte_event_test_seqn_t), + }; + rte_event_test_seqn_dynfield_offset = + rte_mbuf_dynfield_register(&event_test_seqn_dynfield_desc); + return rte_event_test_seqn_dynfield_offset; +} + int rte_event_eth_rx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id, uint32_t *caps) @@ -1247,8 +1263,11 @@ int rte_event_dev_selftest(uint8_t dev_id) RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); struct rte_eventdev *dev = &rte_eventdevs[dev_id]; - if (dev->dev_ops->dev_selftest != NULL) + if (dev->dev_ops->dev_selftest != NULL) { + if (rte_event_test_seqn_dynfield_register() < 0) + return -ENOMEM; return 
(*dev->dev_ops->dev_selftest)(); + } return -ENOTSUP; } diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h index ce1fc2ce0f..1656ff8dce 100644 --- a/lib/librte_eventdev/rte_eventdev.h +++ b/lib/librte_eventdev/rte_eventdev.h @@ -211,13 +211,15 @@ extern "C" { #endif #include <rte_common.h> +#include <rte_compat.h> #include <rte_config.h> +#include <rte_mbuf.h> +#include <rte_mbuf_dyn.h> #include <rte_memory.h> #include <rte_errno.h> #include "rte_eventdev_trace_fp.h" -struct rte_mbuf; /* we just use mbuf pointers; no need to include rte_mbuf.h */ struct rte_event; /* Event device capability bitmap flags */ @@ -570,9 +572,9 @@ struct rte_event_queue_conf { */ uint32_t nb_atomic_order_sequences; /**< The maximum number of outstanding events waiting to be - * reordered by this queue. In other words, the number of entries in - * this queue’s reorder buffer.When the number of events in the - * reorder buffer reaches to *nb_atomic_order_sequences* then the + * event_tested by this queue. In other words, the number of entries in + * this queue’s event_test buffer.When the number of events in the + * event_test buffer reaches to *nb_atomic_order_sequences* then the * scheduler cannot schedule the events from this queue and invalid * event will be returned from dequeue until one or more entries are * freed up/released. @@ -935,7 +937,7 @@ rte_event_dev_close(uint8_t dev_id); * Event ordering is based on the received event(s), but also other * (newly allocated or stored) events are ordered when enqueued within the same * ordered context. Events not enqueued (e.g. released or stored) within the - * context are considered missing from reordering and are skipped at this time + * context are considered missing from event_testing and are skipped at this time * (but can be ordered again within another context). * * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE @@ -1021,7 +1023,7 @@ rte_event_dev_close(uint8_t dev_id); * then this function hints the scheduler that the user has done all that need * to maintain event order in the current ordered context. * The scheduler is allowed to release the ordered context of this port and - * avoid reordering any following enqueues. + * avoid event_testing any following enqueues. * * Early ordered context release may increase parallelism and thus system * performance. @@ -1111,6 +1113,30 @@ struct rte_event { }; }; +typedef uint32_t rte_event_test_seqn_t; +extern int rte_event_test_seqn_dynfield_offset; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Read test sequence number from mbuf. + * + * @param mbuf Structure to read from. + * @return pointer to test sequence number. + */ +__rte_experimental +static inline rte_event_test_seqn_t * +rte_event_test_seqn(const struct rte_mbuf *mbuf) +{ + return RTE_MBUF_DYNFIELD(mbuf, rte_event_test_seqn_dynfield_offset, + rte_event_test_seqn_t *); +} + +__rte_experimental +int +rte_event_test_seqn_dynfield_register(void); + /* Ethdev Rx adapter capability bitmap flags */ #define RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT 0x1 /**< This flag is sent when the packet transfer mechanism is in HW. 
diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map index 8ae8420f9b..e49382ba99 100644 --- a/lib/librte_eventdev/version.map +++ b/lib/librte_eventdev/version.map @@ -138,4 +138,6 @@ EXPERIMENTAL { __rte_eventdev_trace_port_setup; # added in 20.11 rte_event_pmd_pci_probe_named; + rte_event_test_seqn_dynfield_offset; + rte_event_test_seqn_dynfield_register; }; -- 2.23.0 ^ permalink raw reply [flat|nested] 29+ messages in thread
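For reference, the calling convention this patch establishes for the tests is roughly the following; the helper names are only for illustration and error handling is trimmed.

#include <errno.h>
#include <stdint.h>
#include <rte_eventdev.h>

/* Register the field once at start-up, before any event mbuf is tagged
 * (the patch does this in evt_main.c and in rte_event_dev_selftest()). */
static int
test_seqn_setup(void)
{
        return rte_event_test_seqn_dynfield_register() < 0 ? -ENOMEM : 0;
}

/* Producer side: tag the mbuf carried in the event. */
static void
test_seqn_set(struct rte_event *ev, uint32_t seqn)
{
        *rte_event_test_seqn(ev->mbuf) = seqn;
}

/* Consumer side: compare against the expected per-flow value. */
static int
test_seqn_check(const struct rte_event *ev, uint32_t expected)
{
        return *rte_event_test_seqn(ev->mbuf) == expected ? 0 : -1;
}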
* Re: [dpdk-dev] [PATCH 7/8] event: switch sequence number to dynamic field 2020-10-27 22:13 ` [dpdk-dev] [PATCH 7/8] event: " David Marchand @ 2020-10-27 22:18 ` David Marchand 2020-10-28 7:27 ` Jerin Jacob 1 sibling, 0 replies; 29+ messages in thread From: David Marchand @ 2020-10-27 22:18 UTC (permalink / raw) To: dev Cc: Jerin Jacob, Pavan Nikhilesh, Liang Ma, Peter Mccarthy, Harry van Haaren, Ray Kinsella, Neil Horman On Tue, Oct 27, 2020 at 11:16 PM David Marchand <david.marchand@redhat.com> wrote: > diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h > index ce1fc2ce0f..1656ff8dce 100644 > --- a/lib/librte_eventdev/rte_eventdev.h > +++ b/lib/librte_eventdev/rte_eventdev.h > @@ -211,13 +211,15 @@ extern "C" { > #endif > > #include <rte_common.h> > +#include <rte_compat.h> > #include <rte_config.h> > +#include <rte_mbuf.h> > +#include <rte_mbuf_dyn.h> > #include <rte_memory.h> > #include <rte_errno.h> > > #include "rte_eventdev_trace_fp.h" > > -struct rte_mbuf; /* we just use mbuf pointers; no need to include rte_mbuf.h */ > struct rte_event; > > /* Event device capability bitmap flags */ > @@ -570,9 +572,9 @@ struct rte_event_queue_conf { > */ > uint32_t nb_atomic_order_sequences; > /**< The maximum number of outstanding events waiting to be > - * reordered by this queue. In other words, the number of entries in > - * this queue’s reorder buffer.When the number of events in the > - * reorder buffer reaches to *nb_atomic_order_sequences* then the > + * event_tested by this queue. In other words, the number of entries in > + * this queue’s event_test buffer.When the number of events in the > + * event_test buffer reaches to *nb_atomic_order_sequences* then the v2 tomorrow, without this %s/reorder/test_event/g... > * scheduler cannot schedule the events from this queue and invalid > * event will be returned from dequeue until one or more entries are > * freed up/released. > @@ -935,7 +937,7 @@ rte_event_dev_close(uint8_t dev_id); > * Event ordering is based on the received event(s), but also other > * (newly allocated or stored) events are ordered when enqueued within the same > * ordered context. Events not enqueued (e.g. released or stored) within the > - * context are considered missing from reordering and are skipped at this time > + * context are considered missing from event_testing and are skipped at this time > * (but can be ordered again within another context). > * > * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE > @@ -1021,7 +1023,7 @@ rte_event_dev_close(uint8_t dev_id); > * then this function hints the scheduler that the user has done all that need > * to maintain event order in the current ordered context. > * The scheduler is allowed to release the ordered context of this port and > - * avoid reordering any following enqueues. > + * avoid event_testing any following enqueues. > * > * Early ordered context release may increase parallelism and thus system > * performance. -- David Marchand ^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [dpdk-dev] [PATCH 7/8] event: switch sequence number to dynamic field 2020-10-27 22:13 ` [dpdk-dev] [PATCH 7/8] event: " David Marchand 2020-10-27 22:18 ` David Marchand @ 2020-10-28 7:27 ` Jerin Jacob 2020-10-28 8:55 ` David Marchand 1 sibling, 1 reply; 29+ messages in thread From: Jerin Jacob @ 2020-10-28 7:27 UTC (permalink / raw) To: David Marchand, McDaniel, Timothy Cc: dpdk-dev, Jerin Jacob, Pavan Nikhilesh, Liang Ma, Peter Mccarthy, Harry van Haaren, Ray Kinsella, Neil Horman On Wed, Oct 28, 2020 at 3:46 AM David Marchand <david.marchand@redhat.com> wrote: > > The eventdev drivers have been hacking the deprecated field seqn for > internal test usage. > It is moved to a dynamic mbuf field in order to allow removal of seqn. > > Signed-off-by: David Marchand <david.marchand@redhat.com> Overall looks good some minor comments: # use eventdev: instead of event: in git commit > --- > app/test-eventdev/evt_main.c | 3 ++ > app/test-eventdev/test_order_common.c | 2 +- > app/test-eventdev/test_order_common.h | 5 ++- > drivers/event/octeontx/ssovf_evdev_selftest.c | 32 ++++++++-------- > drivers/event/octeontx2/otx2_evdev_selftest.c | 31 +++++++-------- > drivers/event/opdl/opdl_test.c | 8 ++-- > drivers/event/sw/sw_evdev_selftest.c | 34 +++++++++-------- > lib/librte_eventdev/rte_eventdev.c | 21 +++++++++- > lib/librte_eventdev/rte_eventdev.h | 38 ++++++++++++++++--- > lib/librte_eventdev/version.map | 2 + > 10 files changed, 116 insertions(+), 60 deletions(-) > > diff --git a/app/test-eventdev/evt_main.c b/app/test-eventdev/evt_main.c > index a8d304bab3..832bb21d7c 100644 > --- a/app/test-eventdev/evt_main.c > +++ b/app/test-eventdev/evt_main.c > @@ -89,6 +89,9 @@ main(int argc, char **argv) > if (!evdevs) > rte_panic("no eventdev devices found\n"); > > + if (rte_event_test_seqn_dynfield_register() < 0) > + rte_panic("failed to register event dev sequence number\n"); > + > /* Populate the default values of the options */ > evt_options_default(&opt); > > diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c > index c5f7317440..d15ff80273 100644 > --- a/app/test-eventdev/test_order_common.c > +++ b/app/test-eventdev/test_order_common.c > @@ -50,7 +50,7 @@ order_producer(void *arg) > > const flow_id_t flow = (uintptr_t)m % nb_flows; > /* Maintain seq number per flow */ > - m->seqn = producer_flow_seq[flow]++; > + *rte_event_test_seqn(m) = producer_flow_seq[flow]++; > flow_id_save(flow, m, &ev); > > while (rte_event_enqueue_burst(dev_id, port, &ev, 1) != 1) { > diff --git a/app/test-eventdev/test_order_common.h b/app/test-eventdev/test_order_common.h > index 9e3415e421..d4ad31da46 100644 > --- a/app/test-eventdev/test_order_common.h > +++ b/app/test-eventdev/test_order_common.h > @@ -89,9 +89,10 @@ order_process_stage_1(struct test_order *const t, > { > const uint32_t flow = (uintptr_t)ev->mbuf % nb_flows; > /* compare the seqn against expected value */ > - if (ev->mbuf->seqn != expected_flow_seq[flow]) { > + if (*rte_event_test_seqn(ev->mbuf) != expected_flow_seq[flow]) { > evt_err("flow=%x seqn mismatch got=%x expected=%x", > - flow, ev->mbuf->seqn, expected_flow_seq[flow]); > + flow, *rte_event_test_seqn(ev->mbuf), > + expected_flow_seq[flow]); > t->err = true; > rte_smp_wmb(); > } # Since rte_event_test_seqn_dynfield_register() is the public API, I would like to limit the scope only to self test so that rte_event_test_seqn_dynfield_register() need not be exposed. Could you have a separate application-specific dynamic field here? 
# Also this patch used in fastpath, better to have offset stored in fastpath cache line. see http://mails.dpdk.org/archives/dev/2020-October/189588.html > +#define RTE_EVENT_TEST_SEQN_DYNFIELD_NAME "rte_event_test_seqn_dynfield" > +int rte_event_test_seqn_dynfield_offset = -1; > + > +int > +rte_event_test_seqn_dynfield_register(void) > +{ > + static const struct rte_mbuf_dynfield event_test_seqn_dynfield_desc = { > + .name = RTE_EVENT_TEST_SEQN_DYNFIELD_NAME, > + .size = sizeof(rte_event_test_seqn_t), > + .align = __alignof__(rte_event_test_seqn_t), > + }; > + rte_event_test_seqn_dynfield_offset = > + rte_mbuf_dynfield_register(&event_test_seqn_dynfield_desc); > + return rte_event_test_seqn_dynfield_offset; > +} > + > int > rte_event_eth_rx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id, > uint32_t *caps) > @@ -1247,8 +1263,11 @@ int rte_event_dev_selftest(uint8_t dev_id) > RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); > struct rte_eventdev *dev = &rte_eventdevs[dev_id]; > > - if (dev->dev_ops->dev_selftest != NULL) > + if (dev->dev_ops->dev_selftest != NULL) { > + if (rte_event_test_seqn_dynfield_register() < 0) > + return -ENOMEM; > return (*dev->dev_ops->dev_selftest)(); > + } > return -ENOTSUP; > } > > diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h > index ce1fc2ce0f..1656ff8dce 100644 > --- a/lib/librte_eventdev/rte_eventdev.h > +++ b/lib/librte_eventdev/rte_eventdev.h > @@ -211,13 +211,15 @@ extern "C" { > #endif > > #include <rte_common.h> > +#include <rte_compat.h> > #include <rte_config.h> > +#include <rte_mbuf.h> > +#include <rte_mbuf_dyn.h> > #include <rte_memory.h> > #include <rte_errno.h> > > #include "rte_eventdev_trace_fp.h" > > -struct rte_mbuf; /* we just use mbuf pointers; no need to include rte_mbuf.h */ > struct rte_event; > > /* Event device capability bitmap flags */ > @@ -570,9 +572,9 @@ struct rte_event_queue_conf { > */ > uint32_t nb_atomic_order_sequences; > /**< The maximum number of outstanding events waiting to be > - * reordered by this queue. In other words, the number of entries in > - * this queue’s reorder buffer.When the number of events in the > - * reorder buffer reaches to *nb_atomic_order_sequences* then the > + * event_tested by this queue. In other words, the number of entries in > + * this queue’s event_test buffer.When the number of events in the > + * event_test buffer reaches to *nb_atomic_order_sequences* then the > * scheduler cannot schedule the events from this queue and invalid > * event will be returned from dequeue until one or more entries are > * freed up/released. > @@ -935,7 +937,7 @@ rte_event_dev_close(uint8_t dev_id); > * Event ordering is based on the received event(s), but also other > * (newly allocated or stored) events are ordered when enqueued within the same > * ordered context. Events not enqueued (e.g. released or stored) within the > - * context are considered missing from reordering and are skipped at this time > + * context are considered missing from event_testing and are skipped at this time > * (but can be ordered again within another context). > * > * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE > @@ -1021,7 +1023,7 @@ rte_event_dev_close(uint8_t dev_id); > * then this function hints the scheduler that the user has done all that need > * to maintain event order in the current ordered context. > * The scheduler is allowed to release the ordered context of this port and > - * avoid reordering any following enqueues. 
> + * avoid event_testing any following enqueues. > * > * Early ordered context release may increase parallelism and thus system > * performance. > @@ -1111,6 +1113,30 @@ struct rte_event { > }; > }; > > +typedef uint32_t rte_event_test_seqn_t; > +extern int rte_event_test_seqn_dynfield_offset; > + > +/** > + * @warning > + * @b EXPERIMENTAL: this API may change without prior notice > + * > + * Read test sequence number from mbuf. > + * > + * @param mbuf Structure to read from. > + * @return pointer to test sequence number. > + */ > +__rte_experimental > +static inline rte_event_test_seqn_t * > +rte_event_test_seqn(const struct rte_mbuf *mbuf) > +{ > + return RTE_MBUF_DYNFIELD(mbuf, rte_event_test_seqn_dynfield_offset, > + rte_event_test_seqn_t *); > +} > + > +__rte_experimental > +int > +rte_event_test_seqn_dynfield_register(void); > + > /* Ethdev Rx adapter capability bitmap flags */ > #define RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT 0x1 > /**< This flag is sent when the packet transfer mechanism is in HW. > diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map > index 8ae8420f9b..e49382ba99 100644 > --- a/lib/librte_eventdev/version.map > +++ b/lib/librte_eventdev/version.map > @@ -138,4 +138,6 @@ EXPERIMENTAL { > __rte_eventdev_trace_port_setup; > # added in 20.11 > rte_event_pmd_pci_probe_named; > + rte_event_test_seqn_dynfield_offset; > + rte_event_test_seqn_dynfield_register; Could you change symbol name to rte_event_dev_selftest_seqn_dynfield_offset() to limit the scope only to self test. also, it is not required to expose rte_event_test_seqn_dynfield_register() in that case. > }; > -- > 2.23.0 > ^ permalink raw reply [flat|nested] 29+ messages in thread
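To illustrate the "offset stored in a fastpath cache line" suggestion above, here is a minimal sketch, not taken from this series: the idea is to cache the registered dynamic field offset next to per-queue hot data instead of reading a global variable on every packet. Names such as app_rx_ctx and app_mbuf_seqn are hypothetical.

#include <rte_common.h>
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

struct app_rx_ctx {
	/* other per-queue hot data used in the rx burst loop ... */
	int seqn_dynfield_offset; /* cached copy of the registered offset */
} __rte_cache_aligned;

static inline uint32_t *
app_mbuf_seqn(const struct app_rx_ctx *ctx, struct rte_mbuf *m)
{
	/* same RTE_MBUF_DYNFIELD() access, but through the cached offset */
	return RTE_MBUF_DYNFIELD(m, ctx->seqn_dynfield_offset, uint32_t *);
}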
* Re: [dpdk-dev] [PATCH 7/8] event: switch sequence number to dynamic field 2020-10-28 7:27 ` Jerin Jacob @ 2020-10-28 8:55 ` David Marchand 2020-10-28 9:09 ` Jerin Jacob 0 siblings, 1 reply; 29+ messages in thread From: David Marchand @ 2020-10-28 8:55 UTC (permalink / raw) To: Jerin Jacob Cc: McDaniel, Timothy, dpdk-dev, Jerin Jacob, Pavan Nikhilesh, Liang Ma, Peter Mccarthy, Harry van Haaren, Ray Kinsella, Neil Horman On Wed, Oct 28, 2020 at 8:27 AM Jerin Jacob <jerinjacobk@gmail.com> wrote: > > diff --git a/app/test-eventdev/test_order_common.h b/app/test-eventdev/test_order_common.h > > index 9e3415e421..d4ad31da46 100644 > > --- a/app/test-eventdev/test_order_common.h > > +++ b/app/test-eventdev/test_order_common.h > > @@ -89,9 +89,10 @@ order_process_stage_1(struct test_order *const t, > > { > > const uint32_t flow = (uintptr_t)ev->mbuf % nb_flows; > > /* compare the seqn against expected value */ > > - if (ev->mbuf->seqn != expected_flow_seq[flow]) { > > + if (*rte_event_test_seqn(ev->mbuf) != expected_flow_seq[flow]) { > > evt_err("flow=%x seqn mismatch got=%x expected=%x", > > - flow, ev->mbuf->seqn, expected_flow_seq[flow]); > > + flow, *rte_event_test_seqn(ev->mbuf), > > + expected_flow_seq[flow]); > > t->err = true; > > rte_smp_wmb(); > > } > > # Since rte_event_test_seqn_dynfield_register() is the public API, I > would like to limit the scope > only to self test so that rte_event_test_seqn_dynfield_register() > need not be exposed. > Could you have a separate application-specific dynamic field here? > # Also this patch used in fastpath, better to have offset stored in > fastpath cache line. > see http://mails.dpdk.org/archives/dev/2020-October/189588.html Ack for a test app dynamic field. On the second comment, I'll wait for Thomas respin. > > diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map > > index 8ae8420f9b..e49382ba99 100644 > > --- a/lib/librte_eventdev/version.map > > +++ b/lib/librte_eventdev/version.map > > @@ -138,4 +138,6 @@ EXPERIMENTAL { > > __rte_eventdev_trace_port_setup; > > # added in 20.11 > > rte_event_pmd_pci_probe_named; > > + rte_event_test_seqn_dynfield_offset; > > + rte_event_test_seqn_dynfield_register; > > Could you change symbol name to rte_event_dev_selftest_seqn_dynfield_offset() > to limit the scope only to self test. also, it is not required to expose > rte_event_test_seqn_dynfield_register() in that case. How about moving this to rte_eventdev_pmd.h and make it a pmd only API? => rte_event_pmd_ prefix I would mark it rte_internal too. -- David Marchand ^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [dpdk-dev] [PATCH 7/8] event: switch sequence number to dynamic field 2020-10-28 8:55 ` David Marchand @ 2020-10-28 9:09 ` Jerin Jacob 0 siblings, 0 replies; 29+ messages in thread From: Jerin Jacob @ 2020-10-28 9:09 UTC (permalink / raw) To: David Marchand Cc: McDaniel, Timothy, dpdk-dev, Jerin Jacob, Pavan Nikhilesh, Liang Ma, Peter Mccarthy, Harry van Haaren, Ray Kinsella, Neil Horman On Wed, Oct 28, 2020 at 2:25 PM David Marchand <david.marchand@redhat.com> wrote: > > On Wed, Oct 28, 2020 at 8:27 AM Jerin Jacob <jerinjacobk@gmail.com> wrote: > > > diff --git a/app/test-eventdev/test_order_common.h b/app/test-eventdev/test_order_common.h > > > index 9e3415e421..d4ad31da46 100644 > > > --- a/app/test-eventdev/test_order_common.h > > > +++ b/app/test-eventdev/test_order_common.h > > > > Could you change symbol name to rte_event_dev_selftest_seqn_dynfield_offset() > > to limit the scope only to self test. also, it is not required to expose > > rte_event_test_seqn_dynfield_register() in that case. > > How about moving this to rte_eventdev_pmd.h and make it a pmd only API? > => rte_event_pmd_ prefix +1 > > I would mark it rte_internal too. Ack. > > > -- > David Marchand > ^ permalink raw reply [flat|nested] 29+ messages in thread
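Based on the direction agreed above (PMD-only helper in rte_eventdev_pmd.h, rte_event_pmd_ prefix, internal symbol), a sketch of what the v2 header addition could look like; the exact hunk is in the v2 patch below and may differ in detail. It follows the same dynamic field accessor pattern used elsewhere in this series.

#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

typedef uint32_t rte_event_pmd_selftest_seqn_t;
extern int rte_event_pmd_selftest_seqn_dynfield_offset;

/* Read selftest sequence number from mbuf (driver selftests only);
 * the real declaration is expected to carry __rte_internal.
 */
static inline rte_event_pmd_selftest_seqn_t *
rte_event_pmd_selftest_seqn(struct rte_mbuf *mbuf)
{
	return RTE_MBUF_DYNFIELD(mbuf,
			rte_event_pmd_selftest_seqn_dynfield_offset,
			rte_event_pmd_selftest_seqn_t *);
}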
* [dpdk-dev] [PATCH 8/8] mbuf: remove seqn field 2020-10-27 22:13 [dpdk-dev] [PATCH 0/8] remove mbuf seqn David Marchand ` (6 preceding siblings ...) 2020-10-27 22:13 ` [dpdk-dev] [PATCH 7/8] event: " David Marchand @ 2020-10-27 22:13 ` David Marchand 2020-10-28 10:27 ` Andrew Rybchenko 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 0/9] remove mbuf seqn David Marchand 8 siblings, 1 reply; 29+ messages in thread From: David Marchand @ 2020-10-27 22:13 UTC (permalink / raw) To: dev; +Cc: Ray Kinsella, Neil Horman, Olivier Matz As announced in the deprecation note, the field seqn is removed to give more space to the dynamic fields. This is how the mbuf layout looks like (pahole-style): word type name byte size 0 void * buf_addr; /* 0 + 8 */ 1 rte_iova_t buf_iova /* 8 + 8 */ /* --- RTE_MARKER64 rearm_data; */ 2 uint16_t data_off; /* 16 + 2 */ uint16_t refcnt; /* 18 + 2 */ uint16_t nb_segs; /* 20 + 2 */ uint16_t port; /* 22 + 2 */ 3 uint64_t ol_flags; /* 24 + 8 */ /* --- RTE_MARKER rx_descriptor_fields1; */ 4 uint32_t union packet_type; /* 32 + 4 */ uint32_t pkt_len; /* 36 + 4 */ 5 uint16_t data_len; /* 40 + 2 */ uint16_t vlan_tci; /* 42 + 2 */ 5.5 uint64_t union hash; /* 44 + 8 */ 6.5 uint16_t vlan_tci_outer; /* 52 + 2 */ uint16_t buf_len; /* 54 + 2 */ 7 uint64_t timestamp; /* 56 + 8 */ /* --- RTE_MARKER cacheline1; */ 8 struct rte_mempool * pool; /* 64 + 8 */ 9 struct rte_mbuf * next; /* 72 + 8 */ 10 uint64_t union tx_offload; /* 80 + 8 */ 11 struct rte_mbuf_ext_shared_info * shinfo; /* 88 + 8 */ 12 uint16_t priv_size; /* 96 + 2 */ uint16_t timesync; /* 98 + 2 */ 12.5 uint32_t dynfield1[7]; /* 100 + 28 */ 16 /* --- END 128 */ Signed-off-by: David Marchand <david.marchand@redhat.com> --- doc/guides/rel_notes/deprecation.rst | 1 - doc/guides/rel_notes/release_20_11.rst | 3 +++ lib/librte_mbuf/rte_mbuf_core.h | 15 ++++++--------- 3 files changed, 9 insertions(+), 10 deletions(-) diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst index 0f6f1df12a..fe3fd3956c 100644 --- a/doc/guides/rel_notes/deprecation.rst +++ b/doc/guides/rel_notes/deprecation.rst @@ -87,7 +87,6 @@ Deprecation Notices The following static fields will be moved as dynamic: - ``timestamp`` - - ``seqn`` As a consequence, the layout of the ``struct rte_mbuf`` will be re-arranged, avoiding impact on vectorized implementation of the driver datapaths, diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst index 3d7edbfdbb..c0a9fc96aa 100644 --- a/doc/guides/rel_notes/release_20_11.rst +++ b/doc/guides/rel_notes/release_20_11.rst @@ -429,6 +429,9 @@ API Changes * mbuf: Removed the unioned fields ``userdata`` and ``udata64`` from the structure ``rte_mbuf``. It is replaced with dynamic fields. +* mbuf: Removed the field ``seqn`` from the structure ``rte_mbuf``. + It is replaced with dynamic fields. + * pci: Removed the ``rte_kernel_driver`` enum defined in rte_dev.h and replaced with a private enum in the PCI subsystem. diff --git a/lib/librte_mbuf/rte_mbuf_core.h b/lib/librte_mbuf/rte_mbuf_core.h index a65eaaf692..3fb5abda3c 100644 --- a/lib/librte_mbuf/rte_mbuf_core.h +++ b/lib/librte_mbuf/rte_mbuf_core.h @@ -640,6 +640,11 @@ struct rte_mbuf { }; }; + /** Shared data for external buffer attached to mbuf. See + * rte_pktmbuf_attach_extbuf(). + */ + struct rte_mbuf_ext_shared_info *shinfo; + /** Size of the application private data. In case of an indirect * mbuf, it stores the direct mbuf private data size. 
*/ @@ -648,15 +653,7 @@ struct rte_mbuf { /** Timesync flags for use with IEEE1588. */ uint16_t timesync; - /** Sequence number. See also rte_reorder_insert(). */ - uint32_t seqn; - - /** Shared data for external buffer attached to mbuf. See - * rte_pktmbuf_attach_extbuf(). - */ - struct rte_mbuf_ext_shared_info *shinfo; - - uint64_t dynfield1[3]; /**< Reserved for dynamic fields. */ + uint32_t dynfield1[7]; /**< Reserved for dynamic fields. */ } __rte_cache_aligned; /** -- 2.23.0 ^ permalink raw reply [flat|nested] 29+ messages in thread
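For applications that stored their own values in the removed seqn field (outside of the reorder, dpaa/dpaa2 and eventdev selftest users converted in this series), a minimal migration sketch, assuming a private dynamic field; the field name app_seqn_dynfield and the helper names are hypothetical.

#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

static int app_seqn_offset = -1;

static int
app_seqn_init(void)
{
	static const struct rte_mbuf_dynfield desc = {
		.name = "app_seqn_dynfield",
		.size = sizeof(uint32_t),
		.align = __alignof__(uint32_t),
	};

	/* register once at startup, before the first packet is tagged */
	app_seqn_offset = rte_mbuf_dynfield_register(&desc);
	return app_seqn_offset < 0 ? -1 : 0;
}

static inline uint32_t *
app_seqn(struct rte_mbuf *m)
{
	return RTE_MBUF_DYNFIELD(m, app_seqn_offset, uint32_t *);
}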
* Re: [dpdk-dev] [PATCH 8/8] mbuf: remove seqn field 2020-10-27 22:13 ` [dpdk-dev] [PATCH 8/8] mbuf: remove seqn field David Marchand @ 2020-10-28 10:27 ` Andrew Rybchenko 0 siblings, 0 replies; 29+ messages in thread From: Andrew Rybchenko @ 2020-10-28 10:27 UTC (permalink / raw) To: David Marchand, dev; +Cc: Ray Kinsella, Neil Horman, Olivier Matz On 10/28/20 1:13 AM, David Marchand wrote: > As announced in the deprecation note, the field seqn is removed to give > more space to the dynamic fields. > > This is how the mbuf layout looks like (pahole-style): > > word type name byte size > 0 void * buf_addr; /* 0 + 8 */ > 1 rte_iova_t buf_iova /* 8 + 8 */ > /* --- RTE_MARKER64 rearm_data; */ > 2 uint16_t data_off; /* 16 + 2 */ > uint16_t refcnt; /* 18 + 2 */ > uint16_t nb_segs; /* 20 + 2 */ > uint16_t port; /* 22 + 2 */ > 3 uint64_t ol_flags; /* 24 + 8 */ > /* --- RTE_MARKER rx_descriptor_fields1; */ > 4 uint32_t union packet_type; /* 32 + 4 */ > uint32_t pkt_len; /* 36 + 4 */ > 5 uint16_t data_len; /* 40 + 2 */ > uint16_t vlan_tci; /* 42 + 2 */ > 5.5 uint64_t union hash; /* 44 + 8 */ > 6.5 uint16_t vlan_tci_outer; /* 52 + 2 */ > uint16_t buf_len; /* 54 + 2 */ > 7 uint64_t timestamp; /* 56 + 8 */ > /* --- RTE_MARKER cacheline1; */ > 8 struct rte_mempool * pool; /* 64 + 8 */ > 9 struct rte_mbuf * next; /* 72 + 8 */ > 10 uint64_t union tx_offload; /* 80 + 8 */ > 11 struct rte_mbuf_ext_shared_info * shinfo; /* 88 + 8 */ > 12 uint16_t priv_size; /* 96 + 2 */ > uint16_t timesync; /* 98 + 2 */ > 12.5 uint32_t dynfield1[7]; /* 100 + 28 */ > 16 /* --- END 128 */ > > Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> ^ permalink raw reply [flat|nested] 29+ messages in thread
* [dpdk-dev] [PATCH v2 0/9] remove mbuf seqn 2020-10-27 22:13 [dpdk-dev] [PATCH 0/8] remove mbuf seqn David Marchand ` (7 preceding siblings ...) 2020-10-27 22:13 ` [dpdk-dev] [PATCH 8/8] mbuf: remove seqn field David Marchand @ 2020-10-28 12:20 ` David Marchand 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 1/9] event/dpaa2: remove dead code David Marchand ` (9 more replies) 8 siblings, 10 replies; 29+ messages in thread From: David Marchand @ 2020-10-28 12:20 UTC (permalink / raw) To: dev Followup on the work started by Thomas to free some space in the mbuf structure. This series applies on top of Thomas v4 series [1] and drops the seqn field that had been deprecated. 1: https://patchwork.dpdk.org/project/dpdk/list/?series=13416 -- David Marchand Changelog since v1: - rebased on latest Thomas series, - introduced a specific seqn for the eventdev test app, - moved eventdev seqn as a pmd only thing, David Marchand (9): event/dpaa2: remove dead code crypto/scheduler: remove unused internal seqn net/ark: remove use of seqn for debug reorder: switch sequence number to dynamic mbuf field dpaa: switch sequence number to dynamic mbuf field fslmc: switch sequence number to dynamic mbuf field eventdev: switch sequence number to dynamic mbuf field app/eventdev: switch sequence number to dynamic mbuf field mbuf: remove seqn field app/test-eventdev/test_order_common.c | 14 +++++++- app/test-eventdev/test_order_common.h | 13 +++++-- app/test/test_reorder.c | 8 ++--- doc/guides/rel_notes/deprecation.rst | 1 - doc/guides/rel_notes/release_20_11.rst | 3 ++ drivers/bus/dpaa/dpaa_bus.c | 16 +++++++++ drivers/bus/dpaa/rte_dpaa_bus.h | 28 +++++++++++++++ drivers/bus/dpaa/version.map | 1 + drivers/bus/fslmc/fslmc_bus.c | 17 +++++++++ drivers/bus/fslmc/rte_fslmc.h | 23 ++++++++++++ drivers/bus/fslmc/version.map | 1 + drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 18 +++++----- drivers/crypto/dpaa_sec/dpaa_sec.c | 6 ++-- .../crypto/scheduler/scheduler_pmd_private.h | 1 - drivers/event/dpaa/dpaa_eventdev.c | 6 ++-- drivers/event/dpaa2/dpaa2_eventdev.c | 9 ++--- drivers/event/dpaa2/dpaa2_eventdev_selftest.c | 17 +++------ drivers/event/octeontx/ssovf_evdev_selftest.c | 32 +++++++++-------- drivers/event/octeontx2/otx2_evdev_selftest.c | 36 +++++++++++-------- drivers/event/opdl/opdl_test.c | 8 ++--- drivers/event/sw/sw_evdev_selftest.c | 34 +++++++++--------- drivers/mempool/dpaa2/dpaa2_hw_mempool.h | 2 -- drivers/net/ark/ark_ethdev_rx.c | 8 ++--- drivers/net/dpaa/dpaa_ethdev.h | 7 ---- drivers/net/dpaa/dpaa_rxtx.c | 6 ++-- drivers/net/dpaa2/dpaa2_rxtx.c | 28 +++++++-------- examples/packet_ordering/main.c | 2 +- lib/librte_eventdev/rte_eventdev.c | 14 +++++++- lib/librte_eventdev/rte_eventdev_pmd.h | 20 +++++++++++ lib/librte_eventdev/version.map | 6 ++++ lib/librte_mbuf/rte_mbuf_core.h | 15 ++++---- lib/librte_reorder/rte_reorder.c | 23 ++++++++++-- lib/librte_reorder/rte_reorder.h | 21 +++++++++++ lib/librte_reorder/version.map | 6 ++++ 34 files changed, 314 insertions(+), 136 deletions(-) -- 2.23.0 ^ permalink raw reply [flat|nested] 29+ messages in thread
* [dpdk-dev] [PATCH v2 1/9] event/dpaa2: remove dead code 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 0/9] remove mbuf seqn David Marchand @ 2020-10-28 12:20 ` David Marchand 2020-10-31 18:28 ` Nipun Gupta 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 2/9] crypto/scheduler: remove unused internal seqn David Marchand ` (8 subsequent siblings) 9 siblings, 1 reply; 29+ messages in thread From: David Marchand @ 2020-10-28 12:20 UTC (permalink / raw) To: dev; +Cc: Hemant Agrawal, Nipun Gupta This code has never been used since introduction. Fixes: 653242c3375a ("event/dpaa2: add self test") Signed-off-by: David Marchand <david.marchand@redhat.com> --- drivers/event/dpaa2/dpaa2_eventdev_selftest.c | 13 +------------ 1 file changed, 1 insertion(+), 12 deletions(-) diff --git a/drivers/event/dpaa2/dpaa2_eventdev_selftest.c b/drivers/event/dpaa2/dpaa2_eventdev_selftest.c index b1f3891484..5447db8a8a 100644 --- a/drivers/event/dpaa2/dpaa2_eventdev_selftest.c +++ b/drivers/event/dpaa2/dpaa2_eventdev_selftest.c @@ -47,17 +47,6 @@ struct event_attr { uint8_t seq; }; -static uint32_t seqn_list_index; -static int seqn_list[NUM_PACKETS]; - -static void -seqn_list_init(void) -{ - RTE_BUILD_BUG_ON(NUM_PACKETS < MAX_EVENTS); - memset(seqn_list, 0, sizeof(seqn_list)); - seqn_list_index = 0; -} - struct test_core_param { rte_atomic32_t *total_events; uint64_t dequeue_tmo_ticks; @@ -516,7 +505,7 @@ launch_workers_and_wait(int (*main_worker)(void *), return 0; rte_atomic32_set(&atomic_total_events, total_events); - seqn_list_init(); + RTE_BUILD_BUG_ON(NUM_PACKETS < MAX_EVENTS); param = malloc(sizeof(struct test_core_param) * nb_workers); if (!param) -- 2.23.0 ^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/9] event/dpaa2: remove dead code 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 1/9] event/dpaa2: remove dead code David Marchand @ 2020-10-31 18:28 ` Nipun Gupta 0 siblings, 0 replies; 29+ messages in thread From: Nipun Gupta @ 2020-10-31 18:28 UTC (permalink / raw) To: David Marchand, dev; +Cc: Hemant Agrawal Acked-by: Nipun Gupta <nipun.gupta@nxp.com> > -----Original Message----- > From: David Marchand <david.marchand@redhat.com> > Sent: Wednesday, October 28, 2020 5:50 PM > To: dev@dpdk.org > Cc: Hemant Agrawal <hemant.agrawal@nxp.com>; Nipun Gupta > <nipun.gupta@nxp.com> > Subject: [PATCH v2 1/9] event/dpaa2: remove dead code > > This code has never been used since introduction. > > Fixes: 653242c3375a ("event/dpaa2: add self test") > > Signed-off-by: David Marchand <david.marchand@redhat.com> > --- > drivers/event/dpaa2/dpaa2_eventdev_selftest.c | 13 +------------ > 1 file changed, 1 insertion(+), 12 deletions(-) > > diff --git a/drivers/event/dpaa2/dpaa2_eventdev_selftest.c > b/drivers/event/dpaa2/dpaa2_eventdev_selftest.c > index b1f3891484..5447db8a8a 100644 > --- a/drivers/event/dpaa2/dpaa2_eventdev_selftest.c > +++ b/drivers/event/dpaa2/dpaa2_eventdev_selftest.c > @@ -47,17 +47,6 @@ struct event_attr { > uint8_t seq; > }; > > -static uint32_t seqn_list_index; > -static int seqn_list[NUM_PACKETS]; > - > -static void > -seqn_list_init(void) > -{ > - RTE_BUILD_BUG_ON(NUM_PACKETS < MAX_EVENTS); > - memset(seqn_list, 0, sizeof(seqn_list)); > - seqn_list_index = 0; > -} > - > struct test_core_param { > rte_atomic32_t *total_events; > uint64_t dequeue_tmo_ticks; > @@ -516,7 +505,7 @@ launch_workers_and_wait(int (*main_worker)(void *), > return 0; > > rte_atomic32_set(&atomic_total_events, total_events); > - seqn_list_init(); > + RTE_BUILD_BUG_ON(NUM_PACKETS < MAX_EVENTS); > > param = malloc(sizeof(struct test_core_param) * nb_workers); > if (!param) > -- > 2.23.0 ^ permalink raw reply [flat|nested] 29+ messages in thread
* [dpdk-dev] [PATCH v2 2/9] crypto/scheduler: remove unused internal seqn 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 0/9] remove mbuf seqn David Marchand 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 1/9] event/dpaa2: remove dead code David Marchand @ 2020-10-28 12:20 ` David Marchand 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 3/9] net/ark: remove use of seqn for debug David Marchand ` (7 subsequent siblings) 9 siblings, 0 replies; 29+ messages in thread From: David Marchand @ 2020-10-28 12:20 UTC (permalink / raw) To: dev; +Cc: Fan Zhang, Sergio Gonzalez Monroy, Declan Doherty This field has been left behind after dropping its use. Fixes: 8a48e039432b ("crypto/scheduler: optimize crypto op ordering") Signed-off-by: David Marchand <david.marchand@redhat.com> --- drivers/crypto/scheduler/scheduler_pmd_private.h | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/crypto/scheduler/scheduler_pmd_private.h b/drivers/crypto/scheduler/scheduler_pmd_private.h index adb4eb0632..4d33b9ab44 100644 --- a/drivers/crypto/scheduler/scheduler_pmd_private.h +++ b/drivers/crypto/scheduler/scheduler_pmd_private.h @@ -59,7 +59,6 @@ struct scheduler_qp_ctx { uint32_t max_nb_objs; struct rte_ring *order_ring; - uint32_t seqn; } __rte_cache_aligned; -- 2.23.0 ^ permalink raw reply [flat|nested] 29+ messages in thread
* [dpdk-dev] [PATCH v2 3/9] net/ark: remove use of seqn for debug 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 0/9] remove mbuf seqn David Marchand 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 1/9] event/dpaa2: remove dead code David Marchand 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 2/9] crypto/scheduler: remove unused internal seqn David Marchand @ 2020-10-28 12:20 ` David Marchand 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 4/9] reorder: switch sequence number to dynamic mbuf field David Marchand ` (6 subsequent siblings) 9 siblings, 0 replies; 29+ messages in thread From: David Marchand @ 2020-10-28 12:20 UTC (permalink / raw) To: dev; +Cc: Shepard Siegel, Ed Czeck, John Miller The seqn mbuf field is deprecated. It is currently hacked for debug purpose, it could be changed to a dynamic field but I see little value in the debug info it offers. Signed-off-by: David Marchand <david.marchand@redhat.com> --- drivers/net/ark/ark_ethdev_rx.c | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/drivers/net/ark/ark_ethdev_rx.c b/drivers/net/ark/ark_ethdev_rx.c index 825b4791be..c24cc00e2f 100644 --- a/drivers/net/ark/ark_ethdev_rx.c +++ b/drivers/net/ark/ark_ethdev_rx.c @@ -302,8 +302,6 @@ eth_ark_recv_pkts(void *rx_queue, mbuf->pkt_len = 63; meta->pkt_len = 63; } - /* seqn is only set under debug */ - mbuf->seqn = cons_index; } if (unlikely(meta->pkt_len > ARK_RX_MAX_NOCHAIN)) @@ -360,8 +358,6 @@ eth_ark_rx_jumbo(struct ark_rx_queue *queue, mbuf_prev = mbuf; mbuf->data_len = data_len; mbuf->data_off = 0; - if (ARK_DEBUG_CORE) - mbuf->seqn = cons_index; /* for debug only */ cons_index += 1; } @@ -667,8 +663,8 @@ dump_mbuf_data(struct rte_mbuf *mbuf, uint16_t lo, uint16_t hi) { uint16_t i, j; - ARK_PMD_LOG(DEBUG, " MBUF: %p len %d, off: %d, seq: %" PRIU32 "\n", - mbuf, mbuf->pkt_len, mbuf->data_off, mbuf->seqn); + ARK_PMD_LOG(DEBUG, " MBUF: %p len %d, off: %d\n", + mbuf, mbuf->pkt_len, mbuf->data_off); for (i = lo; i < hi; i += 16) { uint8_t *dp = RTE_PTR_ADD(mbuf->buf_addr, i); -- 2.23.0 ^ permalink raw reply [flat|nested] 29+ messages in thread
* [dpdk-dev] [PATCH v2 4/9] reorder: switch sequence number to dynamic mbuf field 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 0/9] remove mbuf seqn David Marchand ` (2 preceding siblings ...) 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 3/9] net/ark: remove use of seqn for debug David Marchand @ 2020-10-28 12:20 ` David Marchand 2020-10-28 12:54 ` Andrew Rybchenko 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 5/9] dpaa: " David Marchand ` (5 subsequent siblings) 9 siblings, 1 reply; 29+ messages in thread From: David Marchand @ 2020-10-28 12:20 UTC (permalink / raw) To: dev; +Cc: Reshma Pattan, Ray Kinsella, Neil Horman The reorder library used sequence numbers stored in the deprecated field seqn. It is moved to a dynamic mbuf field in order to allow removal of seqn. Signed-off-by: David Marchand <david.marchand@redhat.com> --- app/test/test_reorder.c | 8 ++++---- examples/packet_ordering/main.c | 2 +- lib/librte_reorder/rte_reorder.c | 23 ++++++++++++++++++++--- lib/librte_reorder/rte_reorder.h | 21 +++++++++++++++++++++ lib/librte_reorder/version.map | 6 ++++++ 5 files changed, 52 insertions(+), 8 deletions(-) diff --git a/app/test/test_reorder.c b/app/test/test_reorder.c index 58fa9c71b5..1c4226da65 100644 --- a/app/test/test_reorder.c +++ b/app/test/test_reorder.c @@ -149,7 +149,7 @@ test_reorder_insert(void) for (i = 0; i < num_bufs; i++) { bufs[i] = rte_pktmbuf_alloc(p); TEST_ASSERT_NOT_NULL(bufs[i], "Packet allocation failed\n"); - bufs[i]->seqn = i; + *rte_reorder_seqn(bufs[i]) = i; } /* This should fill up order buffer: @@ -183,7 +183,7 @@ test_reorder_insert(void) bufs[4] = NULL; /* early packet from current sequence window - full ready buffer */ - bufs[5]->seqn = 2 * size; + *rte_reorder_seqn(bufs[5]) = 2 * size; ret = rte_reorder_insert(b, bufs[5]); if (!((ret == -1) && (rte_errno == ENOSPC))) { printf("%s:%d: No error inserting early packet with full ready buffer\n", @@ -194,7 +194,7 @@ test_reorder_insert(void) bufs[5] = NULL; /* late packet */ - bufs[6]->seqn = 3 * size; + *rte_reorder_seqn(bufs[6]) = 3 * size; ret = rte_reorder_insert(b, bufs[6]); if (!((ret == -1) && (rte_errno == ERANGE))) { printf("%s:%d: No error inserting late packet with seqn:" @@ -250,7 +250,7 @@ test_reorder_drain(void) for (i = 0; i < num_bufs; i++) { bufs[i] = rte_pktmbuf_alloc(p); TEST_ASSERT_NOT_NULL(bufs[i], "Packet allocation failed\n"); - bufs[i]->seqn = i; + *rte_reorder_seqn(bufs[i]) = i; } /* Insert packet with seqn 1: diff --git a/examples/packet_ordering/main.c b/examples/packet_ordering/main.c index a79d77a321..4bea1982d5 100644 --- a/examples/packet_ordering/main.c +++ b/examples/packet_ordering/main.c @@ -451,7 +451,7 @@ rx_thread(struct rte_ring *ring_out) /* mark sequence number */ for (i = 0; i < nb_rx_pkts; ) - pkts[i++]->seqn = seqn++; + *rte_reorder_seqn(pkts[i++]) = seqn++; /* enqueue to rx_to_workers ring */ ret = rte_ring_enqueue_burst(ring_out, diff --git a/lib/librte_reorder/rte_reorder.c b/lib/librte_reorder/rte_reorder.c index 3c9f0e2d08..9445853b79 100644 --- a/lib/librte_reorder/rte_reorder.c +++ b/lib/librte_reorder/rte_reorder.c @@ -8,6 +8,7 @@ #include <rte_string_fns.h> #include <rte_log.h> #include <rte_mbuf.h> +#include <rte_mbuf_dyn.h> #include <rte_eal_memconfig.h> #include <rte_errno.h> #include <rte_malloc.h> @@ -29,6 +30,9 @@ EAL_REGISTER_TAILQ(rte_reorder_tailq) /* Macros for printing using RTE_LOG */ #define RTE_LOGTYPE_REORDER RTE_LOGTYPE_USER1 +#define RTE_REORDER_SEQN_DYNFIELD_NAME "rte_reorder_seqn_dynfield" +int rte_reorder_seqn_dynfield_offset = -1; + /* A generic 
circular buffer */ struct cir_buffer { unsigned int size; /**< Number of entries that can be stored */ @@ -103,6 +107,11 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size) struct rte_reorder_list *reorder_list; const unsigned int bufsize = sizeof(struct rte_reorder_buffer) + (2 * size * sizeof(struct rte_mbuf *)); + static const struct rte_mbuf_dynfield reorder_seqn_dynfield_desc = { + .name = RTE_REORDER_SEQN_DYNFIELD_NAME, + .size = sizeof(rte_reorder_seqn_t), + .align = __alignof__(rte_reorder_seqn_t), + }; reorder_list = RTE_TAILQ_CAST(rte_reorder_tailq.head, rte_reorder_list); @@ -120,6 +129,14 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size) return NULL; } + rte_reorder_seqn_dynfield_offset = + rte_mbuf_dynfield_register(&reorder_seqn_dynfield_desc); + if (rte_reorder_seqn_dynfield_offset < 0) { + RTE_LOG(ERR, REORDER, "Failed to register mbuf field for reorder sequence number\n"); + rte_errno = ENOMEM; + return NULL; + } + rte_mcfg_tailq_write_lock(); /* guarantee there's no existing */ @@ -310,7 +327,7 @@ rte_reorder_insert(struct rte_reorder_buffer *b, struct rte_mbuf *mbuf) order_buf = &b->order_buf; if (!b->is_initialized) { - b->min_seqn = mbuf->seqn; + b->min_seqn = *rte_reorder_seqn(mbuf); b->is_initialized = 1; } @@ -322,7 +339,7 @@ rte_reorder_insert(struct rte_reorder_buffer *b, struct rte_mbuf *mbuf) * mbuf_seqn = 0x0010 * offset = 0x0010 - 0xFFFD = 0x13 */ - offset = mbuf->seqn - b->min_seqn; + offset = *rte_reorder_seqn(mbuf) - b->min_seqn; /* * action to take depends on offset. @@ -352,7 +369,7 @@ rte_reorder_insert(struct rte_reorder_buffer *b, struct rte_mbuf *mbuf) rte_errno = ENOSPC; return -1; } - offset = mbuf->seqn - b->min_seqn; + offset = *rte_reorder_seqn(mbuf) - b->min_seqn; position = (order_buf->head + offset) & order_buf->mask; order_buf->entries[position] = mbuf; } else { diff --git a/lib/librte_reorder/rte_reorder.h b/lib/librte_reorder/rte_reorder.h index 6d39710088..9de0240374 100644 --- a/lib/librte_reorder/rte_reorder.h +++ b/lib/librte_reorder/rte_reorder.h @@ -16,6 +16,7 @@ */ #include <rte_mbuf.h> +#include <rte_mbuf_dyn.h> #ifdef __cplusplus extern "C" { @@ -23,6 +24,26 @@ extern "C" { struct rte_reorder_buffer; +typedef uint32_t rte_reorder_seqn_t; +extern int rte_reorder_seqn_dynfield_offset; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Read reorder sequence number from mbuf. + * + * @param mbuf Structure to read from. + * @return pointer to reorder sequence number. + */ +__rte_experimental +static inline rte_reorder_seqn_t * +rte_reorder_seqn(struct rte_mbuf *mbuf) +{ + return RTE_MBUF_DYNFIELD(mbuf, rte_reorder_seqn_dynfield_offset, + rte_reorder_seqn_t *); +} + /** * Create a new reorder buffer instance * diff --git a/lib/librte_reorder/version.map b/lib/librte_reorder/version.map index 8c0220d324..d902a7fa12 100644 --- a/lib/librte_reorder/version.map +++ b/lib/librte_reorder/version.map @@ -11,3 +11,9 @@ DPDK_21 { local: *; }; + +EXPERIMENTAL { + global: + + rte_reorder_seqn_dynfield_offset; +}; -- 2.23.0 ^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [dpdk-dev] [PATCH v2 4/9] reorder: switch sequence number to dynamic mbuf field 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 4/9] reorder: switch sequence number to dynamic mbuf field David Marchand @ 2020-10-28 12:54 ` Andrew Rybchenko 0 siblings, 0 replies; 29+ messages in thread From: Andrew Rybchenko @ 2020-10-28 12:54 UTC (permalink / raw) To: David Marchand, dev; +Cc: Reshma Pattan, Ray Kinsella, Neil Horman On 10/28/20 3:20 PM, David Marchand wrote: > The reorder library used sequence numbers stored in the deprecated field > seqn. > It is moved to a dynamic mbuf field in order to allow removal of seqn. > > Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> ^ permalink raw reply [flat|nested] 29+ messages in thread
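A short usage sketch for the new accessor (an assumption for illustration, not part of the patch): since the dynamic field is registered inside rte_reorder_create(), the reorder buffer must be created before any mbuf is tagged through rte_reorder_seqn(). The names and window size below are hypothetical.

#include <rte_mbuf.h>
#include <rte_reorder.h>

#define APP_REORDER_SIZE 8192

static struct rte_reorder_buffer *app_reorder;
static uint32_t app_next_seqn;

static int
app_reorder_init(unsigned int socket_id)
{
	/* registers the reorder seqn dynamic field as a side effect */
	app_reorder = rte_reorder_create("app_reorder", socket_id,
			APP_REORDER_SIZE);
	return app_reorder == NULL ? -1 : 0;
}

static void
app_reorder_rx(struct rte_mbuf **pkts, uint16_t nb_pkts)
{
	uint16_t i;

	for (i = 0; i < nb_pkts; i++) {
		*rte_reorder_seqn(pkts[i]) = app_next_seqn++;
		if (rte_reorder_insert(app_reorder, pkts[i]) != 0)
			rte_pktmbuf_free(pkts[i]);
	}
}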
* [dpdk-dev] [PATCH v2 5/9] dpaa: switch sequence number to dynamic mbuf field 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 0/9] remove mbuf seqn David Marchand ` (3 preceding siblings ...) 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 4/9] reorder: switch sequence number to dynamic mbuf field David Marchand @ 2020-10-28 12:20 ` David Marchand 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 6/9] fslmc: " David Marchand ` (4 subsequent siblings) 9 siblings, 0 replies; 29+ messages in thread From: David Marchand @ 2020-10-28 12:20 UTC (permalink / raw) To: dev Cc: Hemant Agrawal, Sachin Saxena, Ray Kinsella, Neil Horman, Akhil Goyal, Nipun Gupta The dpaa drivers have been hacking the deprecated field seqn for internal features. It is moved to a dynamic mbuf field in order to allow removal of seqn. Signed-off-by: David Marchand <david.marchand@redhat.com> --- drivers/bus/dpaa/dpaa_bus.c | 16 ++++++++++++++++ drivers/bus/dpaa/rte_dpaa_bus.h | 28 ++++++++++++++++++++++++++++ drivers/bus/dpaa/version.map | 1 + drivers/crypto/dpaa_sec/dpaa_sec.c | 6 +++--- drivers/event/dpaa/dpaa_eventdev.c | 6 +++--- drivers/net/dpaa/dpaa_ethdev.h | 7 ------- drivers/net/dpaa/dpaa_rxtx.c | 6 +++--- 7 files changed, 54 insertions(+), 16 deletions(-) diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c index c94c72106f..ece6a4c424 100644 --- a/drivers/bus/dpaa/dpaa_bus.c +++ b/drivers/bus/dpaa/dpaa_bus.c @@ -32,6 +32,7 @@ #include <rte_ring.h> #include <rte_bus.h> #include <rte_mbuf_pool_ops.h> +#include <rte_mbuf_dyn.h> #include <dpaa_of.h> #include <rte_dpaa_bus.h> @@ -55,6 +56,9 @@ unsigned int dpaa_svr_family; RTE_DEFINE_PER_LCORE(struct dpaa_portal *, dpaa_io); +#define DPAA_SEQN_DYNFIELD_NAME "dpaa_seqn_dynfield" +int dpaa_seqn_dynfield_offset = -1; + struct fm_eth_port_cfg * dpaa_get_eth_port_cfg(int dev_id) { @@ -251,6 +255,11 @@ dpaa_clean_device_list(void) int rte_dpaa_portal_init(void *arg) { + static const struct rte_mbuf_dynfield dpaa_seqn_dynfield_desc = { + .name = DPAA_SEQN_DYNFIELD_NAME, + .size = sizeof(dpaa_seqn_t), + .align = __alignof__(dpaa_seqn_t), + }; unsigned int cpu, lcore = rte_lcore_id(); int ret; @@ -264,6 +273,13 @@ int rte_dpaa_portal_init(void *arg) cpu = rte_lcore_to_cpu_id(lcore); + dpaa_seqn_dynfield_offset = + rte_mbuf_dynfield_register(&dpaa_seqn_dynfield_desc); + if (dpaa_seqn_dynfield_offset < 0) { + DPAA_BUS_LOG(ERR, "Failed to register mbuf field for dpaa sequence number\n"); + return -rte_errno; + } + /* Initialise bman thread portals */ ret = bman_thread_init(); if (ret) { diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h index fdaa63a09b..48d5cf4625 100644 --- a/drivers/bus/dpaa/rte_dpaa_bus.h +++ b/drivers/bus/dpaa/rte_dpaa_bus.h @@ -7,6 +7,7 @@ #define __RTE_DPAA_BUS_H__ #include <rte_bus.h> +#include <rte_mbuf_dyn.h> #include <rte_mempool.h> #include <dpaax_iova_table.h> @@ -16,6 +17,33 @@ #include <fsl_bman.h> #include <netcfg.h> +/* This sequence number field is used to store event entry index for + * driver specific usage. For parallel mode queues, invalid + * index will be set and for atomic mode queues, valid value + * ranging from 1 to 16. + */ +#define DPAA_INVALID_MBUF_SEQN 0 + +typedef uint32_t dpaa_seqn_t; +extern int dpaa_seqn_dynfield_offset; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Read dpaa sequence number from mbuf. + * + * @param mbuf Structure to read from. + * @return pointer to dpaa sequence number. 
+ */ +__rte_experimental +static inline dpaa_seqn_t * +dpaa_seqn(struct rte_mbuf *mbuf) +{ + return RTE_MBUF_DYNFIELD(mbuf, dpaa_seqn_dynfield_offset, + dpaa_seqn_t *); +} + #define DPAA_MEMPOOL_OPS_NAME "dpaa" #define DEV_TO_DPAA_DEVICE(ptr) \ diff --git a/drivers/bus/dpaa/version.map b/drivers/bus/dpaa/version.map index 9bd2601213..fe4f9ac5aa 100644 --- a/drivers/bus/dpaa/version.map +++ b/drivers/bus/dpaa/version.map @@ -14,6 +14,7 @@ INTERNAL { dpaa_get_qm_channel_pool; dpaa_get_link_status; dpaa_restart_link_autoneg; + dpaa_seqn_dynfield_offset; dpaa_update_link_speed; dpaa_intr_disable; dpaa_intr_enable; diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c index 55f457ac9a..44c742738f 100644 --- a/drivers/crypto/dpaa_sec/dpaa_sec.c +++ b/drivers/crypto/dpaa_sec/dpaa_sec.c @@ -1721,8 +1721,8 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops, DPAA_SEC_BURST : nb_ops; for (loop = 0; loop < frames_to_send; loop++) { op = *(ops++); - if (op->sym->m_src->seqn != 0) { - index = op->sym->m_src->seqn - 1; + if (*dpaa_seqn(op->sym->m_src) != 0) { + index = *dpaa_seqn(op->sym->m_src) - 1; if (DPAA_PER_LCORE_DQRR_HELD & (1 << index)) { /* QM_EQCR_DCA_IDXMASK = 0x0f */ flags[loop] = ((index & 0x0f) << 8); @@ -3212,7 +3212,7 @@ dpaa_sec_process_atomic_event(void *event, DPAA_PER_LCORE_DQRR_HELD |= 1 << index; DPAA_PER_LCORE_DQRR_MBUF(index) = ctx->op->sym->m_src; ev->impl_opaque = index + 1; - ctx->op->sym->m_src->seqn = (uint32_t)index + 1; + *dpaa_seqn(ctx->op->sym->m_src) = (uint32_t)index + 1; *bufs = (void *)ctx->op; rte_mempool_put(ctx->ctx_pool, (void *)ctx); diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c index 07cd079768..01ddd0eb63 100644 --- a/drivers/event/dpaa/dpaa_eventdev.c +++ b/drivers/event/dpaa/dpaa_eventdev.c @@ -99,7 +99,7 @@ dpaa_event_enqueue_burst(void *port, const struct rte_event ev[], case RTE_EVENT_OP_RELEASE: qman_dca_index(ev[i].impl_opaque, 0); mbuf = DPAA_PER_LCORE_DQRR_MBUF(i); - mbuf->seqn = DPAA_INVALID_MBUF_SEQN; + *dpaa_seqn(mbuf) = DPAA_INVALID_MBUF_SEQN; DPAA_PER_LCORE_DQRR_HELD &= ~(1 << i); DPAA_PER_LCORE_DQRR_SIZE--; break; @@ -206,7 +206,7 @@ dpaa_event_dequeue_burst(void *port, struct rte_event ev[], if (DPAA_PER_LCORE_DQRR_HELD & (1 << i)) { qman_dca_index(i, 0); mbuf = DPAA_PER_LCORE_DQRR_MBUF(i); - mbuf->seqn = DPAA_INVALID_MBUF_SEQN; + *dpaa_seqn(mbuf) = DPAA_INVALID_MBUF_SEQN; DPAA_PER_LCORE_DQRR_HELD &= ~(1 << i); DPAA_PER_LCORE_DQRR_SIZE--; } @@ -276,7 +276,7 @@ dpaa_event_dequeue_burst_intr(void *port, struct rte_event ev[], if (DPAA_PER_LCORE_DQRR_HELD & (1 << i)) { qman_dca_index(i, 0); mbuf = DPAA_PER_LCORE_DQRR_MBUF(i); - mbuf->seqn = DPAA_INVALID_MBUF_SEQN; + *dpaa_seqn(mbuf) = DPAA_INVALID_MBUF_SEQN; DPAA_PER_LCORE_DQRR_HELD &= ~(1 << i); DPAA_PER_LCORE_DQRR_SIZE--; } diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h index 1b8e120e8f..659bceb467 100644 --- a/drivers/net/dpaa/dpaa_ethdev.h +++ b/drivers/net/dpaa/dpaa_ethdev.h @@ -22,13 +22,6 @@ #define DPAA_MBUF_HW_ANNOTATION 64 #define DPAA_FD_PTA_SIZE 64 -/* mbuf->seqn will be used to store event entry index for - * driver specific usage. For parallel mode queues, invalid - * index will be set and for atomic mode queues, valid value - * ranging from 1 to 16. 
- */ -#define DPAA_INVALID_MBUF_SEQN 0 - /* we will re-use the HEADROOM for annotation in RX */ #define DPAA_HW_BUF_RESERVE 0 #define DPAA_PACKET_LAYOUT_ALIGN 64 diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c index e4f012c233..e2459d9b99 100644 --- a/drivers/net/dpaa/dpaa_rxtx.c +++ b/drivers/net/dpaa/dpaa_rxtx.c @@ -649,7 +649,7 @@ dpaa_rx_cb_parallel(void *event, ev->queue_id = fq->ev.queue_id; ev->priority = fq->ev.priority; ev->impl_opaque = (uint8_t)DPAA_INVALID_MBUF_SEQN; - mbuf->seqn = DPAA_INVALID_MBUF_SEQN; + *dpaa_seqn(mbuf) = DPAA_INVALID_MBUF_SEQN; *bufs = mbuf; return qman_cb_dqrr_consume; @@ -683,7 +683,7 @@ dpaa_rx_cb_atomic(void *event, DPAA_PER_LCORE_DQRR_HELD |= 1 << index; DPAA_PER_LCORE_DQRR_MBUF(index) = mbuf; ev->impl_opaque = index + 1; - mbuf->seqn = (uint32_t)index + 1; + *dpaa_seqn(mbuf) = (uint32_t)index + 1; *bufs = mbuf; return qman_cb_dqrr_defer; @@ -1078,7 +1078,7 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs) if (dpaa_svr_family == SVR_LS1043A_FAMILY && (mbuf->data_off & 0x7F) != 0x0) realloc_mbuf = 1; - seqn = mbuf->seqn; + seqn = *dpaa_seqn(mbuf); if (seqn != DPAA_INVALID_MBUF_SEQN) { index = seqn - 1; if (DPAA_PER_LCORE_DQRR_HELD & (1 << index)) { -- 2.23.0 ^ permalink raw reply [flat|nested] 29+ messages in thread
* [dpdk-dev] [PATCH v2 6/9] fslmc: switch sequence number to dynamic mbuf field 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 0/9] remove mbuf seqn David Marchand ` (4 preceding siblings ...) 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 5/9] dpaa: " David Marchand @ 2020-10-28 12:20 ` David Marchand 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 7/9] eventdev: " David Marchand ` (3 subsequent siblings) 9 siblings, 0 replies; 29+ messages in thread From: David Marchand @ 2020-10-28 12:20 UTC (permalink / raw) To: dev Cc: Hemant Agrawal, Sachin Saxena, Ray Kinsella, Neil Horman, Akhil Goyal, Nipun Gupta The dpaa2 drivers have been hacking the deprecated field seqn for internal features. It is moved to a dynamic mbuf field in order to allow removal of seqn. Signed-off-by: David Marchand <david.marchand@redhat.com> --- drivers/bus/fslmc/fslmc_bus.c | 17 +++++++++++ drivers/bus/fslmc/rte_fslmc.h | 23 +++++++++++++++ drivers/bus/fslmc/version.map | 1 + drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 18 ++++++------ drivers/event/dpaa2/dpaa2_eventdev.c | 9 +++--- drivers/event/dpaa2/dpaa2_eventdev_selftest.c | 4 ++- drivers/mempool/dpaa2/dpaa2_hw_mempool.h | 2 -- drivers/net/dpaa2/dpaa2_rxtx.c | 28 +++++++++---------- 8 files changed, 73 insertions(+), 29 deletions(-) diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c index beb3dd008f..db93669628 100644 --- a/drivers/bus/fslmc/fslmc_bus.c +++ b/drivers/bus/fslmc/fslmc_bus.c @@ -14,6 +14,7 @@ #include <rte_devargs.h> #include <rte_memcpy.h> #include <rte_ethdev_driver.h> +#include <rte_mbuf_dyn.h> #include <rte_fslmc.h> #include <fslmc_vfio.h> @@ -27,6 +28,9 @@ struct rte_fslmc_bus rte_fslmc_bus; uint8_t dpaa2_virt_mode; +#define DPAA2_SEQN_DYNFIELD_NAME "dpaa2_seqn_dynfield" +int dpaa2_seqn_dynfield_offset = -1; + uint32_t rte_fslmc_get_device_count(enum rte_dpaa2_dev_type device_type) { @@ -374,9 +378,22 @@ rte_fslmc_probe(void) struct rte_dpaa2_device *dev; struct rte_dpaa2_driver *drv; + static const struct rte_mbuf_dynfield dpaa2_seqn_dynfield_desc = { + .name = DPAA2_SEQN_DYNFIELD_NAME, + .size = sizeof(dpaa2_seqn_t), + .align = __alignof__(dpaa2_seqn_t), + }; + if (TAILQ_EMPTY(&rte_fslmc_bus.device_list)) return 0; + dpaa2_seqn_dynfield_offset = + rte_mbuf_dynfield_register(&dpaa2_seqn_dynfield_desc); + if (dpaa2_seqn_dynfield_offset < 0) { + DPAA2_BUS_ERR("Failed to register mbuf field for dpaa sequence number"); + return 0; + } + ret = fslmc_vfio_setup_group(); if (ret) { DPAA2_BUS_ERR("Unable to setup VFIO %d", ret); diff --git a/drivers/bus/fslmc/rte_fslmc.h b/drivers/bus/fslmc/rte_fslmc.h index 80873fffc9..37d45dffe5 100644 --- a/drivers/bus/fslmc/rte_fslmc.h +++ b/drivers/bus/fslmc/rte_fslmc.h @@ -32,11 +32,34 @@ extern "C" { #include <rte_bus.h> #include <rte_tailq.h> #include <rte_devargs.h> +#include <rte_mbuf.h> +#include <rte_mbuf_dyn.h> #include <fslmc_vfio.h> #define FSLMC_OBJECT_MAX_LEN 32 /**< Length of each device on bus */ +#define DPAA2_INVALID_MBUF_SEQN 0 + +typedef uint32_t dpaa2_seqn_t; +extern int dpaa2_seqn_dynfield_offset; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice + * + * Read dpaa2 sequence number from mbuf. + * + * @param mbuf Structure to read from. + * @return pointer to dpaa2 sequence number. 
+ */ +__rte_experimental +static inline dpaa2_seqn_t * +dpaa2_seqn(struct rte_mbuf *mbuf) +{ + return RTE_MBUF_DYNFIELD(mbuf, dpaa2_seqn_dynfield_offset, + dpaa2_seqn_t *); +} /** Device driver supports link state interrupt */ #define RTE_DPAA2_DRV_INTR_LSC 0x0008 diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map index b169f5228a..f44c1a7988 100644 --- a/drivers/bus/fslmc/version.map +++ b/drivers/bus/fslmc/version.map @@ -19,6 +19,7 @@ INTERNAL { dpaa2_free_eq_descriptors; dpaa2_get_mcp_ptr; dpaa2_io_portal; + dpaa2_seqn_dynfield_offset; dpaa2_svr_family; dpaa2_virt_mode; dpbp_disable; diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c index afcd6bd063..ce1d50ce77 100644 --- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c +++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c @@ -1472,13 +1472,15 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops, dpaa2_eqcr_size : nb_ops; for (loop = 0; loop < frames_to_send; loop++) { - if ((*ops)->sym->m_src->seqn) { - uint8_t dqrr_index = (*ops)->sym->m_src->seqn - 1; - - flags[loop] = QBMAN_ENQUEUE_FLAG_DCA | dqrr_index; - DPAA2_PER_LCORE_DQRR_SIZE--; - DPAA2_PER_LCORE_DQRR_HELD &= ~(1 << dqrr_index); - (*ops)->sym->m_src->seqn = DPAA2_INVALID_MBUF_SEQN; + if (*dpaa2_seqn((*ops)->sym->m_src)) { + uint8_t dqrr_index = + *dpaa2_seqn((*ops)->sym->m_src) - 1; + + flags[loop] = QBMAN_ENQUEUE_FLAG_DCA | dqrr_index; + DPAA2_PER_LCORE_DQRR_SIZE--; + DPAA2_PER_LCORE_DQRR_HELD &= ~(1 << dqrr_index); + *dpaa2_seqn((*ops)->sym->m_src) = + DPAA2_INVALID_MBUF_SEQN; } /*Clear the unused FD fields before sending*/ @@ -3714,7 +3716,7 @@ dpaa2_sec_process_atomic_event(struct qbman_swp *swp __rte_unused, ev->event_ptr = sec_fd_to_mbuf(fd); dqrr_index = qbman_get_dqrr_idx(dq); - crypto_op->sym->m_src->seqn = dqrr_index + 1; + *dpaa2_seqn(crypto_op->sym->m_src) = dqrr_index + 1; DPAA2_PER_LCORE_DQRR_SIZE++; DPAA2_PER_LCORE_DQRR_HELD |= 1 << dqrr_index; DPAA2_PER_LCORE_DQRR_MBUF(dqrr_index) = crypto_op->sym->m_src; diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c index 95f03c8b9e..eeb2494bd0 100644 --- a/drivers/event/dpaa2/dpaa2_eventdev.c +++ b/drivers/event/dpaa2/dpaa2_eventdev.c @@ -131,8 +131,9 @@ dpaa2_eventdev_enqueue_burst(void *port, const struct rte_event ev[], qbman_eq_desc_set_response(&eqdesc[loop], 0, 0); if (event->sched_type == RTE_SCHED_TYPE_ATOMIC - && event->mbuf->seqn) { - uint8_t dqrr_index = event->mbuf->seqn - 1; + && *dpaa2_seqn(event->mbuf)) { + uint8_t dqrr_index = + *dpaa2_seqn(event->mbuf) - 1; qbman_eq_desc_set_dca(&eqdesc[loop], 1, dqrr_index, 0); @@ -249,7 +250,7 @@ static void dpaa2_eventdev_process_atomic(struct qbman_swp *swp, rte_memcpy(ev, ev_temp, sizeof(struct rte_event)); rte_free(ev_temp); - ev->mbuf->seqn = dqrr_index + 1; + *dpaa2_seqn(ev->mbuf) = dqrr_index + 1; DPAA2_PER_LCORE_DQRR_SIZE++; DPAA2_PER_LCORE_DQRR_HELD |= 1 << dqrr_index; DPAA2_PER_LCORE_DQRR_MBUF(dqrr_index) = ev->mbuf; @@ -314,7 +315,7 @@ dpaa2_eventdev_dequeue_burst(void *port, struct rte_event ev[], if (DPAA2_PER_LCORE_DQRR_HELD & (1 << i)) { qbman_swp_dqrr_idx_consume(swp, i); DPAA2_PER_LCORE_DQRR_SIZE--; - DPAA2_PER_LCORE_DQRR_MBUF(i)->seqn = + *dpaa2_seqn(DPAA2_PER_LCORE_DQRR_MBUF(i)) = DPAA2_INVALID_MBUF_SEQN; } i++; diff --git a/drivers/event/dpaa2/dpaa2_eventdev_selftest.c b/drivers/event/dpaa2/dpaa2_eventdev_selftest.c index 5447db8a8a..cd7311a94d 100644 --- a/drivers/event/dpaa2/dpaa2_eventdev_selftest.c +++ 
b/drivers/event/dpaa2/dpaa2_eventdev_selftest.c @@ -19,6 +19,7 @@ #include <rte_random.h> #include <rte_bus_vdev.h> #include <rte_test.h> +#include <rte_fslmc.h> #include "dpaa2_eventdev.h" #include "dpaa2_eventdev_logs.h" @@ -274,7 +275,8 @@ check_excess_events(uint8_t port) valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0); RTE_TEST_ASSERT_SUCCESS(valid_event, - "Unexpected valid event=%d", ev.mbuf->seqn); + "Unexpected valid event=%d", + *dpaa2_seqn(ev.mbuf)); } return 0; } diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.h b/drivers/mempool/dpaa2/dpaa2_hw_mempool.h index 53fa1552d1..7c493b28e7 100644 --- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.h +++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.h @@ -10,8 +10,6 @@ #define DPAA2_MAX_BUF_POOLS 8 -#define DPAA2_INVALID_MBUF_SEQN 0 - struct buf_pool_cfg { void *addr; /**< The address from where DPAA2 will carve out the buffers */ diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c index 4dd1d5f578..6201de4606 100644 --- a/drivers/net/dpaa2/dpaa2_rxtx.c +++ b/drivers/net/dpaa2/dpaa2_rxtx.c @@ -710,7 +710,7 @@ dpaa2_dev_process_atomic_event(struct qbman_swp *swp __rte_unused, ev->mbuf = eth_fd_to_mbuf(fd, rxq->eth_data->port_id); dqrr_index = qbman_get_dqrr_idx(dq); - ev->mbuf->seqn = dqrr_index + 1; + *dpaa2_seqn(ev->mbuf) = dqrr_index + 1; DPAA2_PER_LCORE_DQRR_SIZE++; DPAA2_PER_LCORE_DQRR_HELD |= 1 << dqrr_index; DPAA2_PER_LCORE_DQRR_MBUF(dqrr_index) = ev->mbuf; @@ -736,9 +736,9 @@ dpaa2_dev_process_ordered_event(struct qbman_swp *swp, ev->mbuf = eth_fd_to_mbuf(fd, rxq->eth_data->port_id); - ev->mbuf->seqn = DPAA2_ENQUEUE_FLAG_ORP; - ev->mbuf->seqn |= qbman_result_DQ_odpid(dq) << DPAA2_EQCR_OPRID_SHIFT; - ev->mbuf->seqn |= qbman_result_DQ_seqnum(dq) << DPAA2_EQCR_SEQNUM_SHIFT; + *dpaa2_seqn(ev->mbuf) = DPAA2_ENQUEUE_FLAG_ORP; + *dpaa2_seqn(ev->mbuf) |= qbman_result_DQ_odpid(dq) << DPAA2_EQCR_OPRID_SHIFT; + *dpaa2_seqn(ev->mbuf) |= qbman_result_DQ_seqnum(dq) << DPAA2_EQCR_SEQNUM_SHIFT; qbman_swp_dqrr_consume(swp, dq); } @@ -1063,14 +1063,14 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) dpaa2_eqcr_size : nb_pkts; for (loop = 0; loop < frames_to_send; loop++) { - if ((*bufs)->seqn) { - uint8_t dqrr_index = (*bufs)->seqn - 1; + if (*dpaa2_seqn(*bufs)) { + uint8_t dqrr_index = *dpaa2_seqn(*bufs) - 1; flags[loop] = QBMAN_ENQUEUE_FLAG_DCA | dqrr_index; DPAA2_PER_LCORE_DQRR_SIZE--; DPAA2_PER_LCORE_DQRR_HELD &= ~(1 << dqrr_index); - (*bufs)->seqn = DPAA2_INVALID_MBUF_SEQN; + *dpaa2_seqn(*bufs) = DPAA2_INVALID_MBUF_SEQN; } if (likely(RTE_MBUF_DIRECT(*bufs))) { @@ -1230,10 +1230,10 @@ dpaa2_set_enqueue_descriptor(struct dpaa2_queue *dpaa2_q, qbman_eq_desc_set_fq(eqdesc, dpaa2_q->fqid); - if (m->seqn & DPAA2_ENQUEUE_FLAG_ORP) { - orpid = (m->seqn & DPAA2_EQCR_OPRID_MASK) >> + if (*dpaa2_seqn(m) & DPAA2_ENQUEUE_FLAG_ORP) { + orpid = (*dpaa2_seqn(m) & DPAA2_EQCR_OPRID_MASK) >> DPAA2_EQCR_OPRID_SHIFT; - seqnum = (m->seqn & DPAA2_EQCR_SEQNUM_MASK) >> + seqnum = (*dpaa2_seqn(m) & DPAA2_EQCR_SEQNUM_MASK) >> DPAA2_EQCR_SEQNUM_SHIFT; if (!priv->en_loose_ordered) { @@ -1255,12 +1255,12 @@ dpaa2_set_enqueue_descriptor(struct dpaa2_queue *dpaa2_q, qbman_eq_desc_set_orp(eqdesc, 0, orpid, seqnum, 0); } } else { - dq_idx = m->seqn - 1; + dq_idx = *dpaa2_seqn(m) - 1; qbman_eq_desc_set_dca(eqdesc, 1, dq_idx, 0); DPAA2_PER_LCORE_DQRR_SIZE--; DPAA2_PER_LCORE_DQRR_HELD &= ~(1 << dq_idx); } - m->seqn = DPAA2_INVALID_MBUF_SEQN; + *dpaa2_seqn(m) = DPAA2_INVALID_MBUF_SEQN; } /* Callback to handle sending ordered 
packets through WRIOP based interface */ @@ -1314,7 +1314,7 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) dpaa2_eqcr_size : nb_pkts; if (!priv->en_loose_ordered) { - if ((*bufs)->seqn & DPAA2_ENQUEUE_FLAG_ORP) { + if (*dpaa2_seqn(*bufs) & DPAA2_ENQUEUE_FLAG_ORP) { num_free_eq_desc = dpaa2_free_eq_descriptors(); if (num_free_eq_desc < frames_to_send) frames_to_send = num_free_eq_desc; @@ -1325,7 +1325,7 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts) /*Prepare enqueue descriptor*/ qbman_eq_desc_clear(&eqdesc[loop]); - if ((*bufs)->seqn) { + if (*dpaa2_seqn(*bufs)) { /* Use only queue 0 for Tx in case of atomic/ * ordered packets as packets can get unordered * when being tranmitted out from the interface -- 2.23.0 ^ permalink raw reply [flat|nested] 29+ messages in thread
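[Note on the hunks above: they show only the datapath side of the conversion, where every former read or write of mbuf->seqn goes through the dpaa2_seqn() accessor and the dpaa2_seqn_dynfield_offset exported in the fslmc version.map. The registration side is not quoted in this excerpt; the sketch below shows the usual rte_mbuf_dynfield_register() pattern it relies on. The field name, the field width and the probe-time hook are illustrative assumptions, not a copy of the patch.]

    /* Sketch of the registration backing dpaa2_seqn(); the field name and
     * the place it would be called from (bus/device probe) are assumptions
     * for illustration only.
     */
    #include <rte_errno.h>
    #include <rte_mbuf_dyn.h>

    typedef uint32_t dpaa2_seqn_t;          /* assumed width of the field */
    int dpaa2_seqn_dynfield_offset = -1;    /* exported for the datapath */

    static int
    dpaa2_seqn_dynfield_register(void)
    {
            static const struct rte_mbuf_dynfield seqn_dynfield_desc = {
                    .name = "dpaa2_seqn_dynfield",  /* illustrative name */
                    .size = sizeof(dpaa2_seqn_t),
                    .align = __alignof__(dpaa2_seqn_t),
            };

            dpaa2_seqn_dynfield_offset =
                    rte_mbuf_dynfield_register(&seqn_dynfield_desc);
            if (dpaa2_seqn_dynfield_offset < 0)
                    return -rte_errno;      /* e.g. no dynamic field space left */
            return 0;
    }

Once the offset is registered, the driver code writes *dpaa2_seqn(mbuf) exactly where it previously wrote mbuf->seqn; the DPAA2_INVALID_MBUF_SEQN define removed from dpaa2_hw_mempool.h is presumably relocated next to the new accessor, since the converted datapath still uses it.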
* [dpdk-dev] [PATCH v2 7/9] eventdev: switch sequence number to dynamic mbuf field 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 0/9] remove mbuf seqn David Marchand ` (5 preceding siblings ...) 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 6/9] fslmc: " David Marchand @ 2020-10-28 12:20 ` David Marchand 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 8/9] app/eventdev: " David Marchand ` (2 subsequent siblings) 9 siblings, 0 replies; 29+ messages in thread From: David Marchand @ 2020-10-28 12:20 UTC (permalink / raw) To: dev Cc: Jerin Jacob, Pavan Nikhilesh, Liang Ma, Peter Mccarthy, Harry van Haaren, Ray Kinsella, Neil Horman The eventdev drivers have been hacking the deprecated field seqn for internal test usage. It is moved to a dynamic mbuf field in order to allow removal of seqn. Signed-off-by: David Marchand <david.marchand@redhat.com> --- Changelog since v1: - split eventdev test app changes from this patch (moved to next patch), - moved seqn as a pmd only API, fixed namespace and marked as internal, --- drivers/event/octeontx/ssovf_evdev_selftest.c | 32 +++++++++-------- drivers/event/octeontx2/otx2_evdev_selftest.c | 36 +++++++++++-------- drivers/event/opdl/opdl_test.c | 8 ++--- drivers/event/sw/sw_evdev_selftest.c | 34 +++++++++--------- lib/librte_eventdev/rte_eventdev.c | 14 +++++++- lib/librte_eventdev/rte_eventdev_pmd.h | 20 +++++++++++ lib/librte_eventdev/version.map | 6 ++++ 7 files changed, 99 insertions(+), 51 deletions(-) diff --git a/drivers/event/octeontx/ssovf_evdev_selftest.c b/drivers/event/octeontx/ssovf_evdev_selftest.c index 7a2b7ded25..528f99dd84 100644 --- a/drivers/event/octeontx/ssovf_evdev_selftest.c +++ b/drivers/event/octeontx/ssovf_evdev_selftest.c @@ -300,7 +300,7 @@ inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type, m = rte_pktmbuf_alloc(eventdev_test_mempool); RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed"); - m->seqn = i; + *rte_event_pmd_selftest_seqn(m) = i; update_event_and_validation_attr(m, &ev, flow_id, event_type, sub_event_type, sched_type, queue, port); rte_event_enqueue_burst(evdev, port, &ev, 1); @@ -320,7 +320,8 @@ check_excess_events(uint8_t port) valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0); RTE_TEST_ASSERT_SUCCESS(valid_event, - "Unexpected valid event=%d", ev.mbuf->seqn); + "Unexpected valid event=%d", + *rte_event_pmd_selftest_seqn(ev.mbuf)); } return 0; } @@ -425,8 +426,9 @@ static int validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev) { RTE_SET_USED(port); - RTE_TEST_ASSERT_EQUAL(index, ev->mbuf->seqn, "index=%d != seqn=%d", - index, ev->mbuf->seqn); + RTE_TEST_ASSERT_EQUAL(index, *rte_event_pmd_selftest_seqn(ev->mbuf), + "index=%d != seqn=%d", index, + *rte_event_pmd_selftest_seqn(ev->mbuf)); return 0; } @@ -509,10 +511,10 @@ validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev) expected_val += ev->queue_id; RTE_SET_USED(port); - RTE_TEST_ASSERT_EQUAL(ev->mbuf->seqn, expected_val, - "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d", - ev->mbuf->seqn, index, expected_val, range, - queue_count, MAX_EVENTS); + RTE_TEST_ASSERT_EQUAL(*rte_event_pmd_selftest_seqn(ev->mbuf), expected_val, + "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d", + *rte_event_pmd_selftest_seqn(ev->mbuf), index, expected_val, range, + queue_count, MAX_EVENTS); return 0; } @@ -537,7 +539,7 @@ test_multi_queue_priority(void) m = rte_pktmbuf_alloc(eventdev_test_mempool); RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed"); - m->seqn = i; + 
*rte_event_pmd_selftest_seqn(m) = i; queue = i % queue_count; update_event_and_validation_attr(m, &ev, 0, RTE_EVENT_TYPE_CPU, 0, RTE_SCHED_TYPE_PARALLEL, queue, 0); @@ -904,7 +906,7 @@ worker_flow_based_pipeline(void *arg) ev.op = RTE_EVENT_OP_FORWARD; rte_event_enqueue_burst(evdev, port, &ev, 1); } else if (ev.sub_event_type == 1) { /* Events from stage 1*/ - if (seqn_list_update(ev.mbuf->seqn) == 0) { + if (seqn_list_update(*rte_event_pmd_selftest_seqn(ev.mbuf)) == 0) { rte_pktmbuf_free(ev.mbuf); rte_atomic32_sub(total_events, 1); } else { @@ -939,7 +941,7 @@ test_multiport_flow_sched_type_test(uint8_t in_sched_type, return 0; } - /* Injects events with m->seqn=0 to total_events */ + /* Injects events with a 0 sequence number to total_events */ ret = inject_events( 0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */, @@ -1059,7 +1061,7 @@ worker_group_based_pipeline(void *arg) ev.op = RTE_EVENT_OP_FORWARD; rte_event_enqueue_burst(evdev, port, &ev, 1); } else if (ev.queue_id == 1) { /* Events from stage 1(group 1)*/ - if (seqn_list_update(ev.mbuf->seqn) == 0) { + if (seqn_list_update(*rte_event_pmd_selftest_seqn(ev.mbuf)) == 0) { rte_pktmbuf_free(ev.mbuf); rte_atomic32_sub(total_events, 1); } else { @@ -1101,7 +1103,7 @@ test_multiport_queue_sched_type_test(uint8_t in_sched_type, return 0; } - /* Injects events with m->seqn=0 to total_events */ + /* Injects events with a 0 sequence number to total_events */ ret = inject_events( 0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */, @@ -1238,7 +1240,7 @@ launch_multi_port_max_stages_random_sched_type(int (*fn)(void *)) return 0; } - /* Injects events with m->seqn=0 to total_events */ + /* Injects events with a 0 sequence number to total_events */ ret = inject_events( 0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */, @@ -1360,7 +1362,7 @@ worker_ordered_flow_producer(void *arg) if (m == NULL) continue; - m->seqn = counter++; + *rte_event_pmd_selftest_seqn(m) = counter++; struct rte_event ev = {.event = 0, .u64 = 0}; diff --git a/drivers/event/octeontx2/otx2_evdev_selftest.c b/drivers/event/octeontx2/otx2_evdev_selftest.c index 334a9ccb7c..48bfaf893d 100644 --- a/drivers/event/octeontx2/otx2_evdev_selftest.c +++ b/drivers/event/octeontx2/otx2_evdev_selftest.c @@ -279,7 +279,7 @@ inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type, m = rte_pktmbuf_alloc(eventdev_test_mempool); RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed"); - m->seqn = i; + *rte_event_pmd_selftest_seqn(m) = i; update_event_and_validation_attr(m, &ev, flow_id, event_type, sub_event_type, sched_type, queue, port); @@ -301,7 +301,7 @@ check_excess_events(uint8_t port) RTE_TEST_ASSERT_SUCCESS(valid_event, "Unexpected valid event=%d", - ev.mbuf->seqn); + *rte_event_pmd_selftest_seqn(ev.mbuf)); } return 0; } @@ -406,8 +406,9 @@ static int validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev) { RTE_SET_USED(port); - RTE_TEST_ASSERT_EQUAL(index, ev->mbuf->seqn, "index=%d != seqn=%d", - index, ev->mbuf->seqn); + RTE_TEST_ASSERT_EQUAL(index, *rte_event_pmd_selftest_seqn(ev->mbuf), + "index=%d != seqn=%d", + index, *rte_event_pmd_selftest_seqn(ev->mbuf)); return 0; } @@ -493,10 +494,11 @@ validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev) expected_val += ev->queue_id; RTE_SET_USED(port); - RTE_TEST_ASSERT_EQUAL(ev->mbuf->seqn, expected_val, - "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d", - ev->mbuf->seqn, index, expected_val, range, - queue_count, MAX_EVENTS); + RTE_TEST_ASSERT_EQUAL( + 
*rte_event_pmd_selftest_seqn(ev->mbuf), expected_val, + "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d", + *rte_event_pmd_selftest_seqn(ev->mbuf), index, expected_val, + range, queue_count, MAX_EVENTS); return 0; } @@ -523,7 +525,7 @@ test_multi_queue_priority(void) m = rte_pktmbuf_alloc(eventdev_test_mempool); RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed"); - m->seqn = i; + *rte_event_pmd_selftest_seqn(m) = i; queue = i % queue_count; update_event_and_validation_attr(m, &ev, 0, RTE_EVENT_TYPE_CPU, 0, RTE_SCHED_TYPE_PARALLEL, @@ -888,7 +890,9 @@ worker_flow_based_pipeline(void *arg) ev.op = RTE_EVENT_OP_FORWARD; rte_event_enqueue_burst(evdev, port, &ev, 1); } else if (ev.sub_event_type == 1) { /* Events from stage 1*/ - if (seqn_list_update(ev.mbuf->seqn) == 0) { + uint32_t seqn = *rte_event_pmd_selftest_seqn(ev.mbuf); + + if (seqn_list_update(seqn) == 0) { rte_pktmbuf_free(ev.mbuf); rte_atomic32_sub(total_events, 1); } else { @@ -923,7 +927,7 @@ test_multiport_flow_sched_type_test(uint8_t in_sched_type, return 0; } - /* Injects events with m->seqn=0 to total_events */ + /* Injects events with a 0 sequence number to total_events */ ret = inject_events(0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */, 0 /* sub_event_type (stage 0) */, @@ -1043,7 +1047,9 @@ worker_group_based_pipeline(void *arg) ev.op = RTE_EVENT_OP_FORWARD; rte_event_enqueue_burst(evdev, port, &ev, 1); } else if (ev.queue_id == 1) { /* Events from stage 1(group 1)*/ - if (seqn_list_update(ev.mbuf->seqn) == 0) { + uint32_t seqn = *rte_event_pmd_selftest_seqn(ev.mbuf); + + if (seqn_list_update(seqn) == 0) { rte_pktmbuf_free(ev.mbuf); rte_atomic32_sub(total_events, 1); } else { @@ -1084,7 +1090,7 @@ test_multiport_queue_sched_type_test(uint8_t in_sched_type, return 0; } - /* Injects events with m->seqn=0 to total_events */ + /* Injects events with a 0 sequence number to total_events */ ret = inject_events(0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */, 0 /* sub_event_type (stage 0) */, @@ -1222,7 +1228,7 @@ launch_multi_port_max_stages_random_sched_type(int (*fn)(void *)) return 0; } - /* Injects events with m->seqn=0 to total_events */ + /* Injects events with a 0 sequence number to total_events */ ret = inject_events(0x1 /*flow_id */, RTE_EVENT_TYPE_CPU /* event_type */, 0 /* sub_event_type (stage 0) */, @@ -1348,7 +1354,7 @@ worker_ordered_flow_producer(void *arg) if (m == NULL) continue; - m->seqn = counter++; + *rte_event_pmd_selftest_seqn(m) = counter++; struct rte_event ev = {.event = 0, .u64 = 0}; diff --git a/drivers/event/opdl/opdl_test.c b/drivers/event/opdl/opdl_test.c index e7a32fbd31..e4fc70a440 100644 --- a/drivers/event/opdl/opdl_test.c +++ b/drivers/event/opdl/opdl_test.c @@ -256,7 +256,7 @@ ordered_basic(struct test *t) ev.queue_id = t->qid[0]; ev.op = RTE_EVENT_OP_NEW; ev.mbuf = mbufs[i]; - mbufs[i]->seqn = MAGIC_SEQN + i; + *rte_event_pmd_selftest_seqn(mbufs[i]) = MAGIC_SEQN + i; /* generate pkt and enqueue */ err = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1); @@ -281,7 +281,7 @@ ordered_basic(struct test *t) rte_event_dev_dump(evdev, stdout); return -1; } - seq = deq_ev[i].mbuf->seqn - MAGIC_SEQN; + seq = *rte_event_pmd_selftest_seqn(deq_ev[i].mbuf) - MAGIC_SEQN; if (seq != (i-1)) { PMD_DRV_LOG(ERR, " seq test failed ! 
eq is %d , " @@ -396,7 +396,7 @@ atomic_basic(struct test *t) ev.op = RTE_EVENT_OP_NEW; ev.flow_id = 1; ev.mbuf = mbufs[i]; - mbufs[i]->seqn = MAGIC_SEQN + i; + *rte_event_pmd_selftest_seqn(mbufs[i]) = MAGIC_SEQN + i; /* generate pkt and enqueue */ err = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1); @@ -625,7 +625,7 @@ single_link_w_stats(struct test *t) ev.queue_id = t->qid[0]; ev.op = RTE_EVENT_OP_NEW; ev.mbuf = mbufs[i]; - mbufs[i]->seqn = 1234 + i; + *rte_event_pmd_selftest_seqn(mbufs[i]) = 1234 + i; /* generate pkt and enqueue */ err = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1); diff --git a/drivers/event/sw/sw_evdev_selftest.c b/drivers/event/sw/sw_evdev_selftest.c index ad4fc0eed7..4555f799df 100644 --- a/drivers/event/sw/sw_evdev_selftest.c +++ b/drivers/event/sw/sw_evdev_selftest.c @@ -380,7 +380,7 @@ run_prio_packet_test(struct test *t) printf("%d: gen of pkt failed\n", __LINE__); return -1; } - arp->seqn = MAGIC_SEQN[i]; + *rte_event_pmd_selftest_seqn(arp) = MAGIC_SEQN[i]; ev = (struct rte_event){ .priority = PRIORITY[i], @@ -419,7 +419,7 @@ run_prio_packet_test(struct test *t) rte_event_dev_dump(evdev, stdout); return -1; } - if (ev.mbuf->seqn != MAGIC_SEQN[1]) { + if (*rte_event_pmd_selftest_seqn(ev.mbuf) != MAGIC_SEQN[1]) { printf("%d: first packet out not highest priority\n", __LINE__); rte_event_dev_dump(evdev, stdout); @@ -433,7 +433,7 @@ run_prio_packet_test(struct test *t) rte_event_dev_dump(evdev, stdout); return -1; } - if (ev2.mbuf->seqn != MAGIC_SEQN[0]) { + if (*rte_event_pmd_selftest_seqn(ev2.mbuf) != MAGIC_SEQN[0]) { printf("%d: second packet out not lower priority\n", __LINE__); rte_event_dev_dump(evdev, stdout); @@ -477,7 +477,7 @@ test_single_directed_packet(struct test *t) } const uint32_t MAGIC_SEQN = 4711; - arp->seqn = MAGIC_SEQN; + *rte_event_pmd_selftest_seqn(arp) = MAGIC_SEQN; /* generate pkt and enqueue */ err = rte_event_enqueue_burst(evdev, rx_enq, &ev, 1); @@ -516,7 +516,7 @@ test_single_directed_packet(struct test *t) return -1; } - if (ev.mbuf->seqn != MAGIC_SEQN) { + if (*rte_event_pmd_selftest_seqn(ev.mbuf) != MAGIC_SEQN) { printf("%d: error magic sequence number not dequeued\n", __LINE__); return -1; @@ -934,7 +934,7 @@ xstats_tests(struct test *t) ev.op = RTE_EVENT_OP_NEW; ev.mbuf = arp; ev.flow_id = 7; - arp->seqn = i; + *rte_event_pmd_selftest_seqn(arp) = i; int err = rte_event_enqueue_burst(evdev, t->port[0], &ev, 1); if (err != 1) { @@ -1485,7 +1485,7 @@ xstats_id_reset_tests(struct test *t) ev.queue_id = t->qid[i]; ev.op = RTE_EVENT_OP_NEW; ev.mbuf = arp; - arp->seqn = i; + *rte_event_pmd_selftest_seqn(arp) = i; int err = rte_event_enqueue_burst(evdev, t->port[0], &ev, 1); if (err != 1) { @@ -1873,7 +1873,7 @@ qid_priorities(struct test *t) ev.queue_id = t->qid[i]; ev.op = RTE_EVENT_OP_NEW; ev.mbuf = arp; - arp->seqn = i; + *rte_event_pmd_selftest_seqn(arp) = i; int err = rte_event_enqueue_burst(evdev, t->port[0], &ev, 1); if (err != 1) { @@ -1894,7 +1894,7 @@ qid_priorities(struct test *t) return -1; } for (i = 0; i < 3; i++) { - if (ev[i].mbuf->seqn != 2-i) { + if (*rte_event_pmd_selftest_seqn(ev[i].mbuf) != 2-i) { printf( "%d: qid priority test: seqn %d incorrectly prioritized\n", __LINE__, i); @@ -2371,7 +2371,7 @@ single_packet(struct test *t) ev.mbuf = arp; ev.queue_id = 0; ev.flow_id = 3; - arp->seqn = MAGIC_SEQN; + *rte_event_pmd_selftest_seqn(arp) = MAGIC_SEQN; err = rte_event_enqueue_burst(evdev, t->port[rx_enq], &ev, 1); if (err != 1) { @@ -2411,7 +2411,7 @@ single_packet(struct test *t) } err = 
test_event_dev_stats_get(evdev, &stats); - if (ev.mbuf->seqn != MAGIC_SEQN) { + if (*rte_event_pmd_selftest_seqn(ev.mbuf) != MAGIC_SEQN) { printf("%d: magic sequence number not dequeued\n", __LINE__); return -1; } @@ -2684,7 +2684,7 @@ parallel_basic(struct test *t, int check_order) ev.queue_id = t->qid[0]; ev.op = RTE_EVENT_OP_NEW; ev.mbuf = mbufs[i]; - mbufs[i]->seqn = MAGIC_SEQN + i; + *rte_event_pmd_selftest_seqn(mbufs[i]) = MAGIC_SEQN + i; /* generate pkt and enqueue */ err = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1); @@ -2739,10 +2739,12 @@ parallel_basic(struct test *t, int check_order) /* Check to see if the sequence numbers are in expected order */ if (check_order) { for (j = 0 ; j < deq_pkts ; j++) { - if (deq_ev[j].mbuf->seqn != MAGIC_SEQN + j) { - printf( - "%d: Incorrect sequence number(%d) from port %d\n", - __LINE__, mbufs_out[j]->seqn, tx_port); + if (*rte_event_pmd_selftest_seqn(deq_ev[j].mbuf) != + MAGIC_SEQN + j) { + printf("%d: Incorrect sequence number(%d) from port %d\n", + __LINE__, + *rte_event_pmd_selftest_seqn(mbufs_out[j]), + tx_port); return -1; } } diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c index 322453c532..994bd1eaa9 100644 --- a/lib/librte_eventdev/rte_eventdev.c +++ b/lib/librte_eventdev/rte_eventdev.c @@ -1242,13 +1242,25 @@ int rte_event_dev_xstats_reset(uint8_t dev_id, return -ENOTSUP; } +int rte_event_pmd_selftest_seqn_dynfield_offset = -1; + int rte_event_dev_selftest(uint8_t dev_id) { RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + static const struct rte_mbuf_dynfield test_seqn_dynfield_desc = { + .name = "rte_event_pmd_selftest_seqn_dynfield", + .size = sizeof(rte_event_pmd_selftest_seqn_t), + .align = __alignof__(rte_event_pmd_selftest_seqn_t), + }; struct rte_eventdev *dev = &rte_eventdevs[dev_id]; - if (dev->dev_ops->dev_selftest != NULL) + if (dev->dev_ops->dev_selftest != NULL) { + rte_event_pmd_selftest_seqn_dynfield_offset = + rte_mbuf_dynfield_register(&test_seqn_dynfield_desc); + if (rte_event_pmd_selftest_seqn_dynfield_offset < 0) + return -ENOMEM; return (*dev->dev_ops->dev_selftest)(); + } return -ENOTSUP; } diff --git a/lib/librte_eventdev/rte_eventdev_pmd.h b/lib/librte_eventdev/rte_eventdev_pmd.h index d118b9e5ba..27be376ed1 100644 --- a/lib/librte_eventdev/rte_eventdev_pmd.h +++ b/lib/librte_eventdev/rte_eventdev_pmd.h @@ -20,10 +20,13 @@ extern "C" { #include <string.h> #include <rte_common.h> +#include <rte_compat.h> #include <rte_config.h> #include <rte_dev.h> #include <rte_log.h> #include <rte_malloc.h> +#include <rte_mbuf.h> +#include <rte_mbuf_dyn.h> #include "rte_eventdev.h" #include "rte_event_timer_adapter_pmd.h" @@ -635,6 +638,23 @@ typedef int (*eventdev_eth_rx_adapter_stats_reset) */ typedef int (*eventdev_selftest)(void); +typedef uint32_t rte_event_pmd_selftest_seqn_t; +extern int rte_event_pmd_selftest_seqn_dynfield_offset; + +/** + * Read test sequence number from mbuf. + * + * @param mbuf Structure to read from. + * @return pointer to test sequence number. 
+ */ +__rte_internal +static inline rte_event_pmd_selftest_seqn_t * +rte_event_pmd_selftest_seqn(struct rte_mbuf *mbuf) +{ + return RTE_MBUF_DYNFIELD(mbuf, + rte_event_pmd_selftest_seqn_dynfield_offset, + rte_event_pmd_selftest_seqn_t *); +} struct rte_cryptodev; diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map index 8ae8420f9b..3e5c09cfdb 100644 --- a/lib/librte_eventdev/version.map +++ b/lib/librte_eventdev/version.map @@ -139,3 +139,9 @@ EXPERIMENTAL { # added in 20.11 rte_event_pmd_pci_probe_named; }; + +INTERNAL { + global: + + rte_event_pmd_selftest_seqn_dynfield_offset; +}; -- 2.23.0 ^ permalink raw reply [flat|nested] 29+ messages in thread
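[Note: rte_event_pmd_selftest_seqn_dynfield_offset is registered only inside rte_event_dev_selftest(), so the new accessor is valid strictly while a driver's dev_selftest callback is running. The following is a condensed, hedged sketch of the stamp-and-verify loop that the converted selftests above implement in expanded form; the helper name and the minimal error handling are not taken from any one driver, and the real tests retry on empty dequeues instead of failing immediately.]

    /* Minimal stamp-and-verify loop, only meaningful when executed from a
     * PMD's dev_selftest callback (i.e. under rte_event_dev_selftest()).
     */
    #include <rte_eventdev.h>
    #include <rte_eventdev_pmd.h>
    #include <rte_mbuf.h>

    static int
    selftest_inject_and_check(uint8_t dev_id, uint8_t port,
                              struct rte_mempool *pool, uint32_t nb_pkts)
    {
            uint32_t i;

            for (i = 0; i < nb_pkts; i++) {
                    struct rte_event ev = { .event = 0, .u64 = 0 };
                    struct rte_mbuf *m = rte_pktmbuf_alloc(pool);

                    if (m == NULL)
                            return -1;
                    *rte_event_pmd_selftest_seqn(m) = i;    /* was m->seqn = i */
                    ev.op = RTE_EVENT_OP_NEW;
                    ev.event_type = RTE_EVENT_TYPE_CPU;
                    ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
                    ev.mbuf = m;
                    if (rte_event_enqueue_burst(dev_id, port, &ev, 1) != 1)
                            return -1;
            }

            for (i = 0; i < nb_pkts; i++) {
                    struct rte_event ev;

                    /* simplification: no retry loop on an empty dequeue */
                    if (rte_event_dequeue_burst(dev_id, port, &ev, 1, 0) != 1)
                            return -1;
                    if (*rte_event_pmd_selftest_seqn(ev.mbuf) != i)
                            return -1;      /* sequence broken */
                    rte_pktmbuf_free(ev.mbuf);
            }
            return 0;
    }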
* [dpdk-dev] [PATCH v2 8/9] app/eventdev: switch sequence number to dynamic mbuf field 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 0/9] remove mbuf seqn David Marchand ` (6 preceding siblings ...) 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 7/9] eventdev: " David Marchand @ 2020-10-28 12:20 ` David Marchand 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 9/9] mbuf: remove seqn field David Marchand 2020-10-31 21:11 ` [dpdk-dev] [PATCH v2 0/9] remove mbuf seqn Thomas Monjalon 9 siblings, 0 replies; 29+ messages in thread From: David Marchand @ 2020-10-28 12:20 UTC (permalink / raw) To: dev; +Cc: Jerin Jacob The order test stored a sequence number in the deprecated mbuf field seqn. It is moved to a dynamic field in order to allow removal of seqn. Signed-off-by: David Marchand <david.marchand@redhat.com> --- app/test-eventdev/test_order_common.c | 14 +++++++++++++- app/test-eventdev/test_order_common.h | 13 +++++++++++-- 2 files changed, 24 insertions(+), 3 deletions(-) diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c index 01a44bcd75..04456d56db 100644 --- a/app/test-eventdev/test_order_common.c +++ b/app/test-eventdev/test_order_common.c @@ -48,7 +48,7 @@ order_producer(void *arg) const flow_id_t flow = (uintptr_t)m % nb_flows; /* Maintain seq number per flow */ - m->seqn = producer_flow_seq[flow]++; + *order_mbuf_seqn(t, m) = producer_flow_seq[flow]++; order_flow_id_save(t, flow, m, &ev); while (rte_event_enqueue_burst(dev_id, port, &ev, 1) != 1) { @@ -141,6 +141,11 @@ order_test_setup(struct evt_test *test, struct evt_options *opt) .size = sizeof(flow_id_t), .align = __alignof__(flow_id_t), }; + static const struct rte_mbuf_dynfield seqn_dynfield_desc = { + .name = "test_event_dynfield_seqn", + .size = sizeof(seqn_t), + .align = __alignof__(seqn_t), + }; test_order = rte_zmalloc_socket(test->name, sizeof(struct test_order), RTE_CACHE_LINE_SIZE, opt->socket_id); @@ -158,6 +163,13 @@ order_test_setup(struct evt_test *test, struct evt_options *opt) return -rte_errno; } + t->seqn_dynfield_offset = + rte_mbuf_dynfield_register(&seqn_dynfield_desc); + if (t->seqn_dynfield_offset < 0) { + evt_err("failed to register mbuf field"); + return -rte_errno; + } + t->producer_flow_seq = rte_zmalloc_socket("test_producer_flow_seq", sizeof(*t->producer_flow_seq) * opt->nb_flows, RTE_CACHE_LINE_SIZE, opt->socket_id); diff --git a/app/test-eventdev/test_order_common.h b/app/test-eventdev/test_order_common.h index 90eac96fc8..5ef8404938 100644 --- a/app/test-eventdev/test_order_common.h +++ b/app/test-eventdev/test_order_common.h @@ -22,6 +22,7 @@ #define BURST_SIZE 16 typedef uint32_t flow_id_t; +typedef uint32_t seqn_t; struct test_order; @@ -53,6 +54,7 @@ struct test_order { uint64_t nb_pkts; struct rte_mempool *pool; int flow_id_dynfield_offset; + int seqn_dynfield_offset; struct prod_data prod; struct worker_data worker[EVT_MAX_PORTS]; uint32_t *producer_flow_seq; @@ -77,6 +79,12 @@ order_flow_id_save(struct test_order *t, flow_id_t flow_id, event->mbuf = mbuf; } +static inline seqn_t * +order_mbuf_seqn(struct test_order *t, struct rte_mbuf *mbuf) +{ + return RTE_MBUF_DYNFIELD(mbuf, t->seqn_dynfield_offset, seqn_t *); +} + static inline int order_nb_event_ports(struct evt_options *opt) { @@ -91,9 +99,10 @@ order_process_stage_1(struct test_order *const t, { const uint32_t flow = (uintptr_t)ev->mbuf % nb_flows; /* compare the seqn against expected value */ - if (ev->mbuf->seqn != expected_flow_seq[flow]) { + if (*order_mbuf_seqn(t, ev->mbuf) != expected_flow_seq[flow]) { 
evt_err("flow=%x seqn mismatch got=%x expected=%x", - flow, ev->mbuf->seqn, expected_flow_seq[flow]); + flow, *order_mbuf_seqn(t, ev->mbuf), + expected_flow_seq[flow]); t->err = true; rte_smp_wmb(); } -- 2.23.0 ^ permalink raw reply [flat|nested] 29+ messages in thread
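[Note: dynamic mbuf fields are keyed by name, so a second component, or a secondary process, that needs the same per-mbuf value does not have to re-declare it: registering again with identical parameters returns the already assigned offset, and a plain lookup by name also works. The short sketch below uses the field name registered by the order test above; the lookup helper itself is hypothetical.]

    /* Retrieve the offset of the sequence-number field registered by the
     * order test, without registering a new field.
     */
    #include <rte_errno.h>
    #include <rte_mbuf_dyn.h>

    static int
    order_seqn_offset_lookup(void)
    {
            int offset;

            offset = rte_mbuf_dynfield_lookup("test_event_dynfield_seqn", NULL);
            if (offset < 0)
                    return -rte_errno;      /* ENOENT: field not registered yet */
            return offset;
    }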
* [dpdk-dev] [PATCH v2 9/9] mbuf: remove seqn field 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 0/9] remove mbuf seqn David Marchand ` (7 preceding siblings ...) 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 8/9] app/eventdev: " David Marchand @ 2020-10-28 12:20 ` David Marchand 2020-10-31 21:09 ` Thomas Monjalon 2020-10-31 21:11 ` [dpdk-dev] [PATCH v2 0/9] remove mbuf seqn Thomas Monjalon 9 siblings, 1 reply; 29+ messages in thread From: David Marchand @ 2020-10-28 12:20 UTC (permalink / raw) To: dev; +Cc: Andrew Rybchenko, Ray Kinsella, Neil Horman, Olivier Matz As announced in the deprecation note, the field seqn is removed to give more space to the dynamic fields. This is how the mbuf layout looks like (pahole-style): word type name byte size 0 void * buf_addr; /* 0 + 8 */ 1 rte_iova_t buf_iova /* 8 + 8 */ /* --- RTE_MARKER64 rearm_data; */ 2 uint16_t data_off; /* 16 + 2 */ uint16_t refcnt; /* 18 + 2 */ uint16_t nb_segs; /* 20 + 2 */ uint16_t port; /* 22 + 2 */ 3 uint64_t ol_flags; /* 24 + 8 */ /* --- RTE_MARKER rx_descriptor_fields1; */ 4 uint32_t union packet_type; /* 32 + 4 */ uint32_t pkt_len; /* 36 + 4 */ 5 uint16_t data_len; /* 40 + 2 */ uint16_t vlan_tci; /* 42 + 2 */ 5.5 uint64_t union hash; /* 44 + 8 */ 6.5 uint16_t vlan_tci_outer; /* 52 + 2 */ uint16_t buf_len; /* 54 + 2 */ 7 uint64_t timestamp; /* 56 + 8 */ /* --- RTE_MARKER cacheline1; */ 8 struct rte_mempool * pool; /* 64 + 8 */ 9 struct rte_mbuf * next; /* 72 + 8 */ 10 uint64_t union tx_offload; /* 80 + 8 */ 11 struct rte_mbuf_ext_shared_info * shinfo; /* 88 + 8 */ 12 uint16_t priv_size; /* 96 + 2 */ uint16_t timesync; /* 98 + 2 */ 12.5 uint32_t dynfield1[7]; /* 100 + 28 */ 16 /* --- END 128 */ Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> --- doc/guides/rel_notes/deprecation.rst | 1 - doc/guides/rel_notes/release_20_11.rst | 3 +++ lib/librte_mbuf/rte_mbuf_core.h | 15 ++++++--------- 3 files changed, 9 insertions(+), 10 deletions(-) diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst index 0f6f1df12a..fe3fd3956c 100644 --- a/doc/guides/rel_notes/deprecation.rst +++ b/doc/guides/rel_notes/deprecation.rst @@ -87,7 +87,6 @@ Deprecation Notices The following static fields will be moved as dynamic: - ``timestamp`` - - ``seqn`` As a consequence, the layout of the ``struct rte_mbuf`` will be re-arranged, avoiding impact on vectorized implementation of the driver datapaths, diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst index 3d7edbfdbb..c0a9fc96aa 100644 --- a/doc/guides/rel_notes/release_20_11.rst +++ b/doc/guides/rel_notes/release_20_11.rst @@ -429,6 +429,9 @@ API Changes * mbuf: Removed the unioned fields ``userdata`` and ``udata64`` from the structure ``rte_mbuf``. It is replaced with dynamic fields. +* mbuf: Removed the field ``seqn`` from the structure ``rte_mbuf``. + It is replaced with dynamic fields. + * pci: Removed the ``rte_kernel_driver`` enum defined in rte_dev.h and replaced with a private enum in the PCI subsystem. diff --git a/lib/librte_mbuf/rte_mbuf_core.h b/lib/librte_mbuf/rte_mbuf_core.h index a65eaaf692..3fb5abda3c 100644 --- a/lib/librte_mbuf/rte_mbuf_core.h +++ b/lib/librte_mbuf/rte_mbuf_core.h @@ -640,6 +640,11 @@ struct rte_mbuf { }; }; + /** Shared data for external buffer attached to mbuf. See + * rte_pktmbuf_attach_extbuf(). + */ + struct rte_mbuf_ext_shared_info *shinfo; + /** Size of the application private data. 
In case of an indirect * mbuf, it stores the direct mbuf private data size. */ @@ -648,15 +653,7 @@ struct rte_mbuf { /** Timesync flags for use with IEEE1588. */ uint16_t timesync; - /** Sequence number. See also rte_reorder_insert(). */ - uint32_t seqn; - - /** Shared data for external buffer attached to mbuf. See - * rte_pktmbuf_attach_extbuf(). - */ - struct rte_mbuf_ext_shared_info *shinfo; - - uint64_t dynfield1[3]; /**< Reserved for dynamic fields. */ + uint32_t dynfield1[7]; /**< Reserved for dynamic fields. */ } __rte_cache_aligned; /** -- 2.23.0 ^ permalink raw reply [flat|nested] 29+ messages in thread
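[Note: for applications, the practical consequence of this removal is that any remaining m->seqn user has to switch to one of the replacements introduced earlier in the series: the reorder library exposes its own accessor (patch 4/9, not quoted in this excerpt), and other users register a private dynamic field. Below is a hedged sketch of a reorder-side producer path, assuming rte_reorder_create() has already run so the field is registered, and that the experimental API is enabled at build time; the helper name and the static counter are illustrative only.]

    /* Stamp sequence numbers through the reorder library's dynamic field
     * instead of the removed m->seqn, then hand the packets to the reorder
     * buffer. Single-producer counter kept static for the sketch.
     */
    #include <rte_mbuf.h>
    #include <rte_reorder.h>

    static void
    stamp_and_insert(struct rte_reorder_buffer *b,
                     struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
            static uint32_t seqn;   /* starts at 0 */
            uint16_t i;

            for (i = 0; i < nb_pkts; i++) {
                    *rte_reorder_seqn(pkts[i]) = seqn++;    /* was pkts[i]->seqn */
                    if (rte_reorder_insert(b, pkts[i]) < 0)
                            rte_pktmbuf_free(pkts[i]);      /* out-of-window packet */
            }
    }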
* Re: [dpdk-dev] [PATCH v2 9/9] mbuf: remove seqn field 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 9/9] mbuf: remove seqn field David Marchand @ 2020-10-31 21:09 ` Thomas Monjalon 0 siblings, 0 replies; 29+ messages in thread From: Thomas Monjalon @ 2020-10-31 21:09 UTC (permalink / raw) To: David Marchand Cc: dev, Andrew Rybchenko, Ray Kinsella, Neil Horman, Olivier Matz 28/10/2020 13:20, David Marchand: > As announced in the deprecation note, the field seqn is removed to give > more space to the dynamic fields. > > This is how the mbuf layout looks like (pahole-style): > > word type name byte size > 0 void * buf_addr; /* 0 + 8 */ > 1 rte_iova_t buf_iova /* 8 + 8 */ > /* --- RTE_MARKER64 rearm_data; */ > 2 uint16_t data_off; /* 16 + 2 */ > uint16_t refcnt; /* 18 + 2 */ > uint16_t nb_segs; /* 20 + 2 */ > uint16_t port; /* 22 + 2 */ > 3 uint64_t ol_flags; /* 24 + 8 */ > /* --- RTE_MARKER rx_descriptor_fields1; */ > 4 uint32_t union packet_type; /* 32 + 4 */ > uint32_t pkt_len; /* 36 + 4 */ > 5 uint16_t data_len; /* 40 + 2 */ > uint16_t vlan_tci; /* 42 + 2 */ > 5.5 uint64_t union hash; /* 44 + 8 */ > 6.5 uint16_t vlan_tci_outer; /* 52 + 2 */ > uint16_t buf_len; /* 54 + 2 */ > 7 uint64_t timestamp; /* 56 + 8 */ > /* --- RTE_MARKER cacheline1; */ > 8 struct rte_mempool * pool; /* 64 + 8 */ > 9 struct rte_mbuf * next; /* 72 + 8 */ > 10 uint64_t union tx_offload; /* 80 + 8 */ > 11 struct rte_mbuf_ext_shared_info * shinfo; /* 88 + 8 */ > 12 uint16_t priv_size; /* 96 + 2 */ > uint16_t timesync; /* 98 + 2 */ > 12.5 uint32_t dynfield1[7]; /* 100 + 28 */ > 16 /* --- END 128 */ > > Signed-off-by: David Marchand <david.marchand@redhat.com> > Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> Acked-by: Thomas Monjalon <thomas@monjalon.net> ^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [dpdk-dev] [PATCH v2 0/9] remove mbuf seqn 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 0/9] remove mbuf seqn David Marchand ` (8 preceding siblings ...) 2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 9/9] mbuf: remove seqn field David Marchand @ 2020-10-31 21:11 ` Thomas Monjalon 9 siblings, 0 replies; 29+ messages in thread From: Thomas Monjalon @ 2020-10-31 21:11 UTC (permalink / raw) To: David Marchand; +Cc: dev, olivier.matz, andrew.rybchenko 28/10/2020 13:20, David Marchand: > Followup on the work started by Thomas to free some space in the mbuf > structure. > This series applies on top of Thomas v4 series [1] and drops the seqn > field that had been deprecated. Thanks for the follow-up. Applied, thanks. Next step: removing timestamp and deciding what to get instead. ^ permalink raw reply [flat|nested] 29+ messages in thread
end of thread

Thread overview: 29+ messages

2020-10-27 22:13 [dpdk-dev] [PATCH 0/8] remove mbuf seqn David Marchand
2020-10-27 22:13 ` [dpdk-dev] [PATCH 1/8] event/dpaa2: remove dead code David Marchand
2020-10-27 22:13 ` [dpdk-dev] [PATCH 2/8] crypto/scheduler: remove unused internal seqn David Marchand
2020-10-27 22:13 ` [dpdk-dev] [PATCH 3/8] net/ark: remove use of seqn for debug David Marchand
2020-10-28 12:19 ` Ed Czeck
2020-10-27 22:13 ` [dpdk-dev] [PATCH 4/8] reorder: switch sequence number to dynamic mbuf field David Marchand
2020-10-27 22:13 ` [dpdk-dev] [PATCH 5/8] dpaa: switch sequence number to dynamic field David Marchand
2020-10-27 22:13 ` [dpdk-dev] [PATCH 6/8] fslmc: " David Marchand
2020-10-27 22:13 ` [dpdk-dev] [PATCH 7/8] event: " David Marchand
2020-10-27 22:18 ` David Marchand
2020-10-28 7:27 ` Jerin Jacob
2020-10-28 8:55 ` David Marchand
2020-10-28 9:09 ` Jerin Jacob
2020-10-27 22:13 ` [dpdk-dev] [PATCH 8/8] mbuf: remove seqn field David Marchand
2020-10-28 10:27 ` Andrew Rybchenko
2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 0/9] remove mbuf seqn David Marchand
2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 1/9] event/dpaa2: remove dead code David Marchand
2020-10-31 18:28 ` Nipun Gupta
2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 2/9] crypto/scheduler: remove unused internal seqn David Marchand
2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 3/9] net/ark: remove use of seqn for debug David Marchand
2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 4/9] reorder: switch sequence number to dynamic mbuf field David Marchand
2020-10-28 12:54 ` Andrew Rybchenko
2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 5/9] dpaa: " David Marchand
2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 6/9] fslmc: " David Marchand
2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 7/9] eventdev: " David Marchand
2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 8/9] app/eventdev: " David Marchand
2020-10-28 12:20 ` [dpdk-dev] [PATCH v2 9/9] mbuf: remove seqn field David Marchand
2020-10-31 21:09 ` Thomas Monjalon
2020-10-31 21:11 ` [dpdk-dev] [PATCH v2 0/9] remove mbuf seqn Thomas Monjalon