* [dpdk-dev] [PATCH 1/5] ring: allow rings with non power-of-2 sizes
2017-06-07 13:36 [dpdk-dev] [PATCH 0/5] create event rings type Bruce Richardson
@ 2017-06-07 13:36 ` Bruce Richardson
2017-06-30 9:40 ` Olivier Matz
2017-06-07 13:36 ` [dpdk-dev] [PATCH 2/5] test/test: add unit tests for exact size rings Bruce Richardson
From: Bruce Richardson @ 2017-06-07 13:36 UTC (permalink / raw)
To: dev; +Cc: olivier.matz, jerin.jacob, Bruce Richardson
The rte_rings traditionally have only supported having ring sizes as powers
of 2, with the actual usable space being the size - 1. In some cases, for
example, with an eventdev where we want to precisely control queue depths
for latency, we need to allow ring sizes which are not powers of two so we
add in an additional ring capacity value to allow that. For existing rings,
this value will be size-1, i.e. the same as the mask, but if the new
EXACT_SZ flag is passed on ring creation, the ring will have exactly the
usable space requested, although the underlying memory size may be bigger.
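For illustration, a minimal usage sketch (hypothetical names, error handling
kept minimal): requesting 1000 usable entries with the new flag gives a ring
whose storage is rounded up to 1024 slots but whose usable capacity is
exactly 1000.

#include <stdio.h>
#include <rte_ring.h>
#include <rte_lcore.h>

static struct rte_ring *
make_depth_1000_ring(void)
{
	/* without RING_F_EXACT_SZ, a count of 1000 would be rejected,
	 * as it is not a power of two */
	struct rte_ring *r = rte_ring_create("evq", 1000, rte_socket_id(),
			RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);

	if (r != NULL)
		printf("storage=%u usable=%u\n",
				rte_ring_get_size(r),      /* 1024 */
				rte_ring_get_capacity(r)); /* 1000 */
	return r;
}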
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/librte_ring/rte_ring.c | 26 ++++++++++++--
lib/librte_ring/rte_ring.h | 88 +++++++++++++++++++++++++++++-----------------
2 files changed, 78 insertions(+), 36 deletions(-)
diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c
index 5f98c33..b8047ee 100644
--- a/lib/librte_ring/rte_ring.c
+++ b/lib/librte_ring/rte_ring.c
@@ -140,8 +140,22 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned count,
r->flags = flags;
r->prod.single = (flags & RING_F_SP_ENQ) ? __IS_SP : __IS_MP;
r->cons.single = (flags & RING_F_SC_DEQ) ? __IS_SC : __IS_MC;
- r->size = count;
- r->mask = count - 1;
+
+ if (flags & RING_F_EXACT_SZ) {
+ r->size = rte_align32pow2(count + 1);
+ r->mask = r->size - 1;
+ r->capacity = count;
+ } else {
+ if ((!POWEROF2(count)) || (count > RTE_RING_SZ_MASK)) {
+ RTE_LOG(ERR, RING,
+ "Requested size is invalid, must be power of 2, and not exceed the size limit %u\n",
+ RTE_RING_SZ_MASK);
+ return -EINVAL;
+ }
+ r->size = count;
+ r->mask = count - 1;
+ r->capacity = r->mask;
+ }
r->prod.head = r->cons.head = 0;
r->prod.tail = r->cons.tail = 0;
@@ -160,10 +174,15 @@ rte_ring_create(const char *name, unsigned count, int socket_id,
ssize_t ring_size;
int mz_flags = 0;
struct rte_ring_list* ring_list = NULL;
+ const unsigned int requested_count = count;
int ret;
ring_list = RTE_TAILQ_CAST(rte_ring_tailq.head, rte_ring_list);
+ /* for an exact size ring, round up from count to a power of two */
+ if (flags & RING_F_EXACT_SZ)
+ count = rte_align32pow2(count + 1);
+
ring_size = rte_ring_get_memsize(count);
if (ring_size < 0) {
rte_errno = ring_size;
@@ -194,7 +213,7 @@ rte_ring_create(const char *name, unsigned count, int socket_id,
r = mz->addr;
/* no need to check return value here, we already checked the
* arguments above */
- rte_ring_init(r, name, count, flags);
+ rte_ring_init(r, name, requested_count, flags);
te->data = (void *) r;
r->memzone = mz;
@@ -262,6 +281,7 @@ rte_ring_dump(FILE *f, const struct rte_ring *r)
fprintf(f, "ring <%s>@%p\n", r->name, r);
fprintf(f, " flags=%x\n", r->flags);
fprintf(f, " size=%"PRIu32"\n", r->size);
+ fprintf(f, " capacity=%"PRIu32"\n", r->capacity);
fprintf(f, " ct=%"PRIu32"\n", r->cons.tail);
fprintf(f, " ch=%"PRIu32"\n", r->cons.head);
fprintf(f, " pt=%"PRIu32"\n", r->prod.tail);
diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index 97f025a..494d31f 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -153,6 +153,7 @@ struct rte_ring {
/**< Memzone, if any, containing the rte_ring */
uint32_t size; /**< Size of ring. */
uint32_t mask; /**< Mask (size-1) of ring. */
+ uint32_t capacity; /**< Usable size of ring */
/** Ring producer status. */
struct rte_ring_headtail prod __rte_aligned(PROD_ALIGN);
@@ -163,6 +164,15 @@ struct rte_ring {
#define RING_F_SP_ENQ 0x0001 /**< The default enqueue is "single-producer". */
#define RING_F_SC_DEQ 0x0002 /**< The default dequeue is "single-consumer". */
+/**
+ * Ring is to hold exactly requested number of entries.
+ * Without this flag set, the ring size requested must be a power of 2, and the
+ * usable space will be that size - 1. With the flag, the requested size will
+ * be rounded up to the next power of two, but the usable space will be exactly
+ * that requested. Worst case, if a power-of-2 size is requested, half the
+ * ring space will be wasted.
+ */
+#define RING_F_EXACT_SZ 0x0004
#define RTE_RING_SZ_MASK (unsigned)(0x0fffffff) /**< Ring size mask */
/* @internal defines for passing to the enqueue dequeue worker functions */
@@ -389,7 +399,7 @@ __rte_ring_move_prod_head(struct rte_ring *r, int is_sp,
uint32_t *old_head, uint32_t *new_head,
uint32_t *free_entries)
{
- const uint32_t mask = r->mask;
+ const uint32_t capacity = r->capacity;
unsigned int max = n;
int success;
@@ -399,11 +409,13 @@ __rte_ring_move_prod_head(struct rte_ring *r, int is_sp,
*old_head = r->prod.head;
const uint32_t cons_tail = r->cons.tail;
- /* The subtraction is done between two unsigned 32bits value
+ /*
+ * The subtraction is done between two unsigned 32bits value
* (the result is always modulo 32 bits even if we have
* *old_head > cons_tail). So 'free_entries' is always between 0
- * and size(ring)-1. */
- *free_entries = (mask + cons_tail - *old_head);
+ * and capacity (which is < size).
+ */
+ *free_entries = (capacity + cons_tail - *old_head);
/* check that we have enough room in ring */
if (unlikely(n > *free_entries))
@@ -845,69 +857,63 @@ rte_ring_dequeue(struct rte_ring *r, void **obj_p)
}
/**
- * Test if a ring is full.
+ * Return the number of entries in a ring.
*
* @param r
* A pointer to the ring structure.
* @return
- * - 1: The ring is full.
- * - 0: The ring is not full.
+ * The number of entries in the ring.
*/
-static inline int
-rte_ring_full(const struct rte_ring *r)
+static inline unsigned
+rte_ring_count(const struct rte_ring *r)
{
uint32_t prod_tail = r->prod.tail;
uint32_t cons_tail = r->cons.tail;
- return ((cons_tail - prod_tail - 1) & r->mask) == 0;
+ return (prod_tail - cons_tail) & r->mask;
}
/**
- * Test if a ring is empty.
+ * Return the number of free entries in a ring.
*
* @param r
* A pointer to the ring structure.
* @return
- * - 1: The ring is empty.
- * - 0: The ring is not empty.
+ * The number of free entries in the ring.
*/
-static inline int
-rte_ring_empty(const struct rte_ring *r)
+static inline unsigned
+rte_ring_free_count(const struct rte_ring *r)
{
- uint32_t prod_tail = r->prod.tail;
- uint32_t cons_tail = r->cons.tail;
- return !!(cons_tail == prod_tail);
+ return r->capacity - rte_ring_count(r);
}
/**
- * Return the number of entries in a ring.
+ * Test if a ring is full.
*
* @param r
* A pointer to the ring structure.
* @return
- * The number of entries in the ring.
+ * - 1: The ring is full.
+ * - 0: The ring is not full.
*/
-static inline unsigned
-rte_ring_count(const struct rte_ring *r)
+static inline int
+rte_ring_full(const struct rte_ring *r)
{
- uint32_t prod_tail = r->prod.tail;
- uint32_t cons_tail = r->cons.tail;
- return (prod_tail - cons_tail) & r->mask;
+ return rte_ring_free_count(r) == 0;
}
/**
- * Return the number of free entries in a ring.
+ * Test if a ring is empty.
*
* @param r
* A pointer to the ring structure.
* @return
- * The number of free entries in the ring.
+ * - 1: The ring is empty.
+ * - 0: The ring is not empty.
*/
-static inline unsigned
-rte_ring_free_count(const struct rte_ring *r)
+static inline int
+rte_ring_empty(const struct rte_ring *r)
{
- uint32_t prod_tail = r->prod.tail;
- uint32_t cons_tail = r->cons.tail;
- return (cons_tail - prod_tail - 1) & r->mask;
+ return rte_ring_count(r) == 0;
}
/**
@@ -916,7 +922,9 @@ rte_ring_free_count(const struct rte_ring *r)
* @param r
* A pointer to the ring structure.
* @return
- * The number of elements which can be stored in the ring.
+ * The size of the data store used by the ring.
+ * NOTE: this is not the same as the usable space in the ring. To query that
+ * use ``rte_ring_get_capacity()``.
*/
static inline unsigned int
rte_ring_get_size(const struct rte_ring *r)
@@ -925,6 +933,20 @@ rte_ring_get_size(const struct rte_ring *r)
}
/**
+ * Return the number of elements which can be stored in the ring.
+ *
+ * @param r
+ * A pointer to the ring structure.
+ * @return
+ * The usable size of the ring.
+ */
+static inline unsigned int
+rte_ring_get_capacity(const struct rte_ring *r)
+{
+ return r->capacity;
+}
+
+/**
* Dump the status of all rings on the console
*
* @param f
--
2.9.4
* Re: [dpdk-dev] [PATCH 1/5] ring: allow rings with non power-of-2 sizes
2017-06-07 13:36 ` [dpdk-dev] [PATCH 1/5] ring: allow rings with non power-of-2 sizes Bruce Richardson
@ 2017-06-30 9:40 ` Olivier Matz
2017-06-30 11:32 ` Bruce Richardson
From: Olivier Matz @ 2017-06-30 9:40 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, jerin.jacob
Hi Bruce,
Few comments inline.
On Wed, 7 Jun 2017 14:36:16 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> The rte_rings traditionally have only supported having ring sizes as powers
> of 2, with the actual usable space being the size - 1. In some cases, for
> example, with an eventdev where we want to precisely control queue depths
> for latency, we need to allow ring sizes which are not powers of two so we
> add in an additional ring capacity value to allow that. For existing rings,
> this value will be size-1, i.e. the same as the mask, but if the new
> EXACT_SZ flag is passed on ring creation, the ring will have exactly the
> usable space requested, although the underlying memory size may be bigger.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Specifying the exact wanted size looks to be a better API than the current
one, which is to give the power of two above the wanted value, knowing there
will be only size-1 elements available.
What would you think about deprecating ring init/creation without this
flag in a few versions? Then, later, we could remove this flag and
the new behavior would become the default.
> ---
> lib/librte_ring/rte_ring.c | 26 ++++++++++++--
> lib/librte_ring/rte_ring.h | 88 +++++++++++++++++++++++++++++-----------------
> 2 files changed, 78 insertions(+), 36 deletions(-)
>
> diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c
> index 5f98c33..b8047ee 100644
> --- a/lib/librte_ring/rte_ring.c
> +++ b/lib/librte_ring/rte_ring.c
> @@ -140,8 +140,22 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned count,
> r->flags = flags;
> r->prod.single = (flags & RING_F_SP_ENQ) ? __IS_SP : __IS_MP;
> r->cons.single = (flags & RING_F_SC_DEQ) ? __IS_SC : __IS_MC;
> - r->size = count;
> - r->mask = count - 1;
> +
> + if (flags & RING_F_EXACT_SZ) {
> + r->size = rte_align32pow2(count + 1);
> + r->mask = r->size - 1;
> + r->capacity = count;
> + } else {
> + if ((!POWEROF2(count)) || (count > RTE_RING_SZ_MASK)) {
> + RTE_LOG(ERR, RING,
> + "Requested size is invalid, must be power of 2, and not exceed the size limit %u\n",
> + RTE_RING_SZ_MASK);
> + return -EINVAL;
> + }
> + r->size = count;
> + r->mask = count - 1;
> + r->capacity = r->mask;
> + }
> r->prod.head = r->cons.head = 0;
> r->prod.tail = r->cons.tail = 0;
>
> @@ -160,10 +174,15 @@ rte_ring_create(const char *name, unsigned count, int socket_id,
> ssize_t ring_size;
> int mz_flags = 0;
> struct rte_ring_list* ring_list = NULL;
> + const unsigned int requested_count = count;
> int ret;
>
> ring_list = RTE_TAILQ_CAST(rte_ring_tailq.head, rte_ring_list);
>
> + /* for an exact size ring, round up from count to a power of two */
> + if (flags & RING_F_EXACT_SZ)
> + count = rte_align32pow2(count + 1);
> +
> ring_size = rte_ring_get_memsize(count);
> if (ring_size < 0) {
> rte_errno = ring_size;
> @@ -194,7 +213,7 @@ rte_ring_create(const char *name, unsigned count, int socket_id,
> r = mz->addr;
> /* no need to check return value here, we already checked the
> * arguments above */
> - rte_ring_init(r, name, count, flags);
> + rte_ring_init(r, name, requested_count, flags);
>
> te->data = (void *) r;
> r->memzone = mz;
> @@ -262,6 +281,7 @@ rte_ring_dump(FILE *f, const struct rte_ring *r)
> fprintf(f, "ring <%s>@%p\n", r->name, r);
> fprintf(f, " flags=%x\n", r->flags);
> fprintf(f, " size=%"PRIu32"\n", r->size);
> + fprintf(f, " capacity=%"PRIu32"\n", r->capacity);
> fprintf(f, " ct=%"PRIu32"\n", r->cons.tail);
> fprintf(f, " ch=%"PRIu32"\n", r->cons.head);
> fprintf(f, " pt=%"PRIu32"\n", r->prod.tail);
> diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
> index 97f025a..494d31f 100644
> --- a/lib/librte_ring/rte_ring.h
> +++ b/lib/librte_ring/rte_ring.h
> @@ -153,6 +153,7 @@ struct rte_ring {
> /**< Memzone, if any, containing the rte_ring */
> uint32_t size; /**< Size of ring. */
> uint32_t mask; /**< Mask (size-1) of ring. */
> + uint32_t capacity; /**< Usable size of ring */
>
> /** Ring producer status. */
> struct rte_ring_headtail prod __rte_aligned(PROD_ALIGN);
> @@ -163,6 +164,15 @@ struct rte_ring {
>
> #define RING_F_SP_ENQ 0x0001 /**< The default enqueue is "single-producer". */
> #define RING_F_SC_DEQ 0x0002 /**< The default dequeue is "single-consumer". */
> +/**
> + * Ring is to hold exactly requested number of entries.
> + * Without this flag set, the ring size requested must be a power of 2, and the
> + * usable space will be that size - 1. With the flag, the requested size will
> + * be rounded up to the next power of two, but the usable space will be exactly
> + * that requested. Worst case, if a power-of-2 size is requested, half the
> + * ring space will be wasted.
> + */
> +#define RING_F_EXACT_SZ 0x0004
> #define RTE_RING_SZ_MASK (unsigned)(0x0fffffff) /**< Ring size mask */
>
> /* @internal defines for passing to the enqueue dequeue worker functions */
> @@ -389,7 +399,7 @@ __rte_ring_move_prod_head(struct rte_ring *r, int is_sp,
> uint32_t *old_head, uint32_t *new_head,
> uint32_t *free_entries)
> {
> - const uint32_t mask = r->mask;
> + const uint32_t capacity = r->capacity;
> unsigned int max = n;
> int success;
>
> @@ -399,11 +409,13 @@ __rte_ring_move_prod_head(struct rte_ring *r, int is_sp,
>
> *old_head = r->prod.head;
> const uint32_t cons_tail = r->cons.tail;
> - /* The subtraction is done between two unsigned 32bits value
> + /*
> + * The subtraction is done between two unsigned 32bits value
> * (the result is always modulo 32 bits even if we have
> * *old_head > cons_tail). So 'free_entries' is always between 0
> - * and size(ring)-1. */
> - *free_entries = (mask + cons_tail - *old_head);
> + * and capacity (which is < size).
> + */
> + *free_entries = (capacity + cons_tail - *old_head);
>
> /* check that we have enough room in ring */
> if (unlikely(n > *free_entries))
> @@ -845,69 +857,63 @@ rte_ring_dequeue(struct rte_ring *r, void **obj_p)
> }
>
> /**
> - * Test if a ring is full.
> + * Return the number of entries in a ring.
> *
> * @param r
> * A pointer to the ring structure.
> * @return
> - * - 1: The ring is full.
> - * - 0: The ring is not full.
> + * The number of entries in the ring.
> */
> -static inline int
> -rte_ring_full(const struct rte_ring *r)
> +static inline unsigned
> +rte_ring_count(const struct rte_ring *r)
> {
> uint32_t prod_tail = r->prod.tail;
> uint32_t cons_tail = r->cons.tail;
> - return ((cons_tail - prod_tail - 1) & r->mask) == 0;
> + return (prod_tail - cons_tail) & r->mask;
> }
When used on a mc/mp ring, this function can now return a value
which is higher than the ring capacity. Even if this function is
not really accurate when used while the ring is in use, I think we
should cap the result at r->capacity.
This will also prevent rte_ring_free_count() from returning an
overflowed value.
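Something like the following, maybe (just a sketch of the idea using the
new capacity field, untested):

static inline unsigned
rte_ring_count(const struct rte_ring *r)
{
	uint32_t prod_tail = r->prod.tail;
	uint32_t cons_tail = r->cons.tail;
	uint32_t count = (prod_tail - cons_tail) & r->mask;

	/* an inconsistent snapshot of the two tails can make the masked
	 * difference exceed capacity; clamping keeps
	 * rte_ring_free_count() = capacity - count from going negative */
	return (count > r->capacity) ? r->capacity : count;
}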
>
> /**
> - * Test if a ring is empty.
> + * Return the number of free entries in a ring.
> *
> * @param r
> * A pointer to the ring structure.
> * @return
> - * - 1: The ring is empty.
> - * - 0: The ring is not empty.
> + * The number of free entries in the ring.
> */
> -static inline int
> -rte_ring_empty(const struct rte_ring *r)
> +static inline unsigned
> +rte_ring_free_count(const struct rte_ring *r)
> {
> - uint32_t prod_tail = r->prod.tail;
> - uint32_t cons_tail = r->cons.tail;
> - return !!(cons_tail == prod_tail);
> + return r->capacity - rte_ring_count(r);
> }
>
> /**
> - * Return the number of entries in a ring.
> + * Test if a ring is full.
> *
> * @param r
> * A pointer to the ring structure.
> * @return
> - * The number of entries in the ring.
> + * - 1: The ring is full.
> + * - 0: The ring is not full.
> */
> -static inline unsigned
> -rte_ring_count(const struct rte_ring *r)
> +static inline int
> +rte_ring_full(const struct rte_ring *r)
> {
> - uint32_t prod_tail = r->prod.tail;
> - uint32_t cons_tail = r->cons.tail;
> - return (prod_tail - cons_tail) & r->mask;
> + return rte_ring_free_count(r) == 0;
> }
>
> /**
> - * Return the number of free entries in a ring.
> + * Test if a ring is empty.
> *
> * @param r
> * A pointer to the ring structure.
> * @return
> - * The number of free entries in the ring.
> + * - 1: The ring is empty.
> + * - 0: The ring is not empty.
> */
> -static inline unsigned
> -rte_ring_free_count(const struct rte_ring *r)
> +static inline int
> +rte_ring_empty(const struct rte_ring *r)
> {
> - uint32_t prod_tail = r->prod.tail;
> - uint32_t cons_tail = r->cons.tail;
> - return (cons_tail - prod_tail - 1) & r->mask;
> + return rte_ring_count(r) == 0;
> }
>
> /**
> @@ -916,7 +922,9 @@ rte_ring_free_count(const struct rte_ring *r)
> * @param r
> * A pointer to the ring structure.
> * @return
> - * The number of elements which can be stored in the ring.
> + * The size of the data store used by the ring.
> + * NOTE: this is not the same as the usable space in the ring. To query that
> + * use ``rte_ring_get_capacity()``.
> */
> static inline unsigned int
> rte_ring_get_size(const struct rte_ring *r)
> @@ -925,6 +933,20 @@ rte_ring_get_size(const struct rte_ring *r)
> }
>
> /**
> + * Return the number of elements which can be stored in the ring.
> + *
> + * @param r
> + * A pointer to the ring structure.
> + * @return
> + * The usable size of the ring.
> + */
> +static inline unsigned int
> +rte_ring_get_capacity(const struct rte_ring *r)
> +{
> + return r->capacity;
> +}
> +
> +/**
> * Dump the status of all rings on the console
> *
> * @param f
I think the users of rte_ring_get_size() use this API to get the
number of elements that can fit in the ring. The patch changes the
semantic of this function (even if the result is the same when
EXACT_SZ is not passed). This is the same for ring->size.
Wouldn't it be better to rename all current occurrences of "size" as
"storage_size", and introduce a new "size", which is what you call
"capacity".
I'm asking the question but I'm not that convinced myself...
nevertheless I find storage_size/size less confusing than
size/capacity.
Olivier
* Re: [dpdk-dev] [PATCH 1/5] ring: allow rings with non power-of-2 sizes
2017-06-30 9:40 ` Olivier Matz
@ 2017-06-30 11:32 ` Bruce Richardson
2017-06-30 12:24 ` Olivier Matz
From: Bruce Richardson @ 2017-06-30 11:32 UTC (permalink / raw)
To: Olivier Matz; +Cc: dev, jerin.jacob
On Fri, Jun 30, 2017 at 11:40:46AM +0200, Olivier Matz wrote:
> Hi Bruce,
>
> Few comments inline.
>
> On Wed, 7 Jun 2017 14:36:16 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> > The rte_rings traditionally have only supported having ring sizes as powers
> > of 2, with the actual usable space being the size - 1. In some cases, for
> > example, with an eventdev where we want to precisely control queue depths
> > for latency, we need to allow ring sizes which are not powers of two so we
> > add in an additional ring capacity value to allow that. For existing rings,
> > this value will be size-1, i.e. the same as the mask, but if the new
> > EXACT_SZ flag is passed on ring creation, the ring will have exactly the
> > usable space requested, although the underlying memory size may be bigger.
> >
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
>
> Specifying the exact wanted size looks to be a better API than the current
> one, which is to give the power of two above the wanted value, knowing there
> will be only size-1 elements available.
>
> What would you think about deprecating ring init/creation without this
> flag in a few versions? Then, later, we could remove this flag and
> the new behavior would become the default.
>
I'm not really sure it's necessary. For the vast majority of cases what
we have now is fine, and it ensures that we maximise the ring space. I'd
tend to keep the new behaviour for those cases where we really need it.
It's a trade off between what we hide and expose:
* current scheme hides the fact that you get one entry less than you ask
for, but the memory space is as expected.
* new scheme gives you exactly the entries you ask for, but hides the
fact that you could be using up to double the memory space for the
ring.
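To put hypothetical numbers on that worst case:

  request 1024 entries:
    without RING_F_EXACT_SZ: size = 1024 slots, capacity = 1023
    with    RING_F_EXACT_SZ: size = rte_align32pow2(1024 + 1) = 2048 slots,
                             capacity = 1024

i.e. one extra usable entry costs roughly double the ring memory.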
>
> > ---
> > lib/librte_ring/rte_ring.c | 26 ++++++++++++--
> > lib/librte_ring/rte_ring.h | 88 +++++++++++++++++++++++++++++-----------------
> > 2 files changed, 78 insertions(+), 36 deletions(-)
> >
> > diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c
> > index 5f98c33..b8047ee 100644
> > --- a/lib/librte_ring/rte_ring.c
> > +++ b/lib/librte_ring/rte_ring.c
> > @@ -140,8 +140,22 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned count,
> > r->flags = flags;
> > r->prod.single = (flags & RING_F_SP_ENQ) ? __IS_SP : __IS_MP;
> > r->cons.single = (flags & RING_F_SC_DEQ) ? __IS_SC : __IS_MC;
> > - r->size = count;
> > - r->mask = count - 1;
> > +
> > + if (flags & RING_F_EXACT_SZ) {
> > + r->size = rte_align32pow2(count + 1);
> > + r->mask = r->size - 1;
> > + r->capacity = count;
> > + } else {
> > + if ((!POWEROF2(count)) || (count > RTE_RING_SZ_MASK)) {
> > + RTE_LOG(ERR, RING,
> > + "Requested size is invalid, must be power of 2, and not exceed the size limit %u\n",
> > + RTE_RING_SZ_MASK);
> > + return -EINVAL;
> > + }
> > + r->size = count;
> > + r->mask = count - 1;
> > + r->capacity = r->mask;
> > + }
> > r->prod.head = r->cons.head = 0;
> > r->prod.tail = r->cons.tail = 0;
> >
> > @@ -160,10 +174,15 @@ rte_ring_create(const char *name, unsigned count, int socket_id,
> > ssize_t ring_size;
> > int mz_flags = 0;
> > struct rte_ring_list* ring_list = NULL;
> > + const unsigned int requested_count = count;
> > int ret;
> >
> > ring_list = RTE_TAILQ_CAST(rte_ring_tailq.head, rte_ring_list);
> >
> > + /* for an exact size ring, round up from count to a power of two */
> > + if (flags & RING_F_EXACT_SZ)
> > + count = rte_align32pow2(count + 1);
> > +
> > ring_size = rte_ring_get_memsize(count);
> > if (ring_size < 0) {
> > rte_errno = ring_size;
> > @@ -194,7 +213,7 @@ rte_ring_create(const char *name, unsigned count, int socket_id,
> > r = mz->addr;
> > /* no need to check return value here, we already checked the
> > * arguments above */
> > - rte_ring_init(r, name, count, flags);
> > + rte_ring_init(r, name, requested_count, flags);
> >
> > te->data = (void *) r;
> > r->memzone = mz;
> > @@ -262,6 +281,7 @@ rte_ring_dump(FILE *f, const struct rte_ring *r)
> > fprintf(f, "ring <%s>@%p\n", r->name, r);
> > fprintf(f, " flags=%x\n", r->flags);
> > fprintf(f, " size=%"PRIu32"\n", r->size);
> > + fprintf(f, " capacity=%"PRIu32"\n", r->capacity);
> > fprintf(f, " ct=%"PRIu32"\n", r->cons.tail);
> > fprintf(f, " ch=%"PRIu32"\n", r->cons.head);
> > fprintf(f, " pt=%"PRIu32"\n", r->prod.tail);
> > diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
> > index 97f025a..494d31f 100644
> > --- a/lib/librte_ring/rte_ring.h
> > +++ b/lib/librte_ring/rte_ring.h
> > @@ -153,6 +153,7 @@ struct rte_ring {
> > /**< Memzone, if any, containing the rte_ring */
> > uint32_t size; /**< Size of ring. */
> > uint32_t mask; /**< Mask (size-1) of ring. */
> > + uint32_t capacity; /**< Usable size of ring */
> >
> > /** Ring producer status. */
> > struct rte_ring_headtail prod __rte_aligned(PROD_ALIGN);
> > @@ -163,6 +164,15 @@ struct rte_ring {
> >
> > #define RING_F_SP_ENQ 0x0001 /**< The default enqueue is "single-producer". */
> > #define RING_F_SC_DEQ 0x0002 /**< The default dequeue is "single-consumer". */
> > +/**
> > + * Ring is to hold exactly requested number of entries.
> > + * Without this flag set, the ring size requested must be a power of 2, and the
> > + * usable space will be that size - 1. With the flag, the requested size will
> > + * be rounded up to the next power of two, but the usable space will be exactly
> > + * that requested. Worst case, if a power-of-2 size is requested, half the
> > + * ring space will be wasted.
> > + */
> > +#define RING_F_EXACT_SZ 0x0004
> > #define RTE_RING_SZ_MASK (unsigned)(0x0fffffff) /**< Ring size mask */
> >
> > /* @internal defines for passing to the enqueue dequeue worker functions */
> > @@ -389,7 +399,7 @@ __rte_ring_move_prod_head(struct rte_ring *r, int is_sp,
> > uint32_t *old_head, uint32_t *new_head,
> > uint32_t *free_entries)
> > {
> > - const uint32_t mask = r->mask;
> > + const uint32_t capacity = r->capacity;
> > unsigned int max = n;
> > int success;
> >
> > @@ -399,11 +409,13 @@ __rte_ring_move_prod_head(struct rte_ring *r, int is_sp,
> >
> > *old_head = r->prod.head;
> > const uint32_t cons_tail = r->cons.tail;
> > - /* The subtraction is done between two unsigned 32bits value
> > + /*
> > + * The subtraction is done between two unsigned 32bits value
> > * (the result is always modulo 32 bits even if we have
> > * *old_head > cons_tail). So 'free_entries' is always between 0
> > - * and size(ring)-1. */
> > - *free_entries = (mask + cons_tail - *old_head);
> > + * and capacity (which is < size).
> > + */
> > + *free_entries = (capacity + cons_tail - *old_head);
> >
> > /* check that we have enough room in ring */
> > if (unlikely(n > *free_entries))
> > @@ -845,69 +857,63 @@ rte_ring_dequeue(struct rte_ring *r, void **obj_p)
> > }
> >
> > /**
> > - * Test if a ring is full.
> > + * Return the number of entries in a ring.
> > *
> > * @param r
> > * A pointer to the ring structure.
> > * @return
> > - * - 1: The ring is full.
> > - * - 0: The ring is not full.
> > + * The number of entries in the ring.
> > */
> > -static inline int
> > -rte_ring_full(const struct rte_ring *r)
> > +static inline unsigned
> > +rte_ring_count(const struct rte_ring *r)
> > {
> > uint32_t prod_tail = r->prod.tail;
> > uint32_t cons_tail = r->cons.tail;
> > - return ((cons_tail - prod_tail - 1) & r->mask) == 0;
> > + return (prod_tail - cons_tail) & r->mask;
> > }
>
> When used on a mc/mp ring, this function can now return a value
> which is higher than the ring capacity. Even if this function is
> not really accurate when used while the ring is in use, I think we
> should cap the result at r->capacity.
>
> This will also prevent rte_ring_free_count() from returning an
> overflowed value.
>
How does it return an overflowing value? I think in the MP/MC case the
tests made each time around the loop for cmpset should ensure we never
overflow.
>
> >
> > /**
> > - * Test if a ring is empty.
> > + * Return the number of free entries in a ring.
> > *
> > * @param r
> > * A pointer to the ring structure.
> > * @return
> > - * - 1: The ring is empty.
> > - * - 0: The ring is not empty.
> > + * The number of free entries in the ring.
> > */
> > -static inline int
> > -rte_ring_empty(const struct rte_ring *r)
> > +static inline unsigned
> > +rte_ring_free_count(const struct rte_ring *r)
> > {
> > - uint32_t prod_tail = r->prod.tail;
> > - uint32_t cons_tail = r->cons.tail;
> > - return !!(cons_tail == prod_tail);
> > + return r->capacity - rte_ring_count(r);
> > }
> >
> > /**
> > - * Return the number of entries in a ring.
> > + * Test if a ring is full.
> > *
> > * @param r
> > * A pointer to the ring structure.
> > * @return
> > - * The number of entries in the ring.
> > + * - 1: The ring is full.
> > + * - 0: The ring is not full.
> > */
> > -static inline unsigned
> > -rte_ring_count(const struct rte_ring *r)
> > +static inline int
> > +rte_ring_full(const struct rte_ring *r)
> > {
> > - uint32_t prod_tail = r->prod.tail;
> > - uint32_t cons_tail = r->cons.tail;
> > - return (prod_tail - cons_tail) & r->mask;
> > + return rte_ring_free_count(r) == 0;
> > }
> >
> > /**
> > - * Return the number of free entries in a ring.
> > + * Test if a ring is empty.
> > *
> > * @param r
> > * A pointer to the ring structure.
> > * @return
> > - * The number of free entries in the ring.
> > + * - 1: The ring is empty.
> > + * - 0: The ring is not empty.
> > */
> > -static inline unsigned
> > -rte_ring_free_count(const struct rte_ring *r)
> > +static inline int
> > +rte_ring_empty(const struct rte_ring *r)
> > {
> > - uint32_t prod_tail = r->prod.tail;
> > - uint32_t cons_tail = r->cons.tail;
> > - return (cons_tail - prod_tail - 1) & r->mask;
> > + return rte_ring_count(r) == 0;
> > }
> >
> > /**
> > @@ -916,7 +922,9 @@ rte_ring_free_count(const struct rte_ring *r)
> > * @param r
> > * A pointer to the ring structure.
> > * @return
> > - * The number of elements which can be stored in the ring.
> > + * The size of the data store used by the ring.
> > + * NOTE: this is not the same as the usable space in the ring. To query that
> > + * use ``rte_ring_get_capacity()``.
> > */
> > static inline unsigned int
> > rte_ring_get_size(const struct rte_ring *r)
> > @@ -925,6 +933,20 @@ rte_ring_get_size(const struct rte_ring *r)
> > }
> >
> > /**
> > + * Return the number of elements which can be stored in the ring.
> > + *
> > + * @param r
> > + * A pointer to the ring structure.
> > + * @return
> > + * The usable size of the ring.
> > + */
> > +static inline unsigned int
> > +rte_ring_get_capacity(const struct rte_ring *r)
> > +{
> > + return r->capacity;
> > +}
> > +
> > +/**
> > * Dump the status of all rings on the console
> > *
> > * @param f
>
> I think the users of rte_ring_get_size() use this API to get the
> number of elements that can fit in the ring. The patch changes the
> semantic of this function (even if the result is the same when
> EXACT_SZ is not passed). This is the same for ring->size.
>
> Wouldn't it be better to rename all current occurrences of "size" as
> "storage_size", and introduce a new "size", which is what you call
> "capacity".
>
> I'm asking the question but I'm not that convinced myself...
> nevertheless I find storage_size/size less confusing than
> size/capacity.
>
I went back and forth on this myself a bit too. However, the reason I
went for this approach is purely one of backwards compatibility. For
legacy apps unaware of the new API, there is no change in behaviour or
return value for the size function. For apps that are aware of this
change there is the new function provided.
If we did change size to be storage size, then you could end up getting
off-by-one errors in apps that haven't been updated.
Regards,
/Bruce
* Re: [dpdk-dev] [PATCH 1/5] ring: allow rings with non power-of-2 sizes
2017-06-30 11:32 ` Bruce Richardson
@ 2017-06-30 12:24 ` Olivier Matz
2017-06-30 13:59 ` Bruce Richardson
From: Olivier Matz @ 2017-06-30 12:24 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, jerin.jacob
On Fri, 30 Jun 2017 12:32:27 +0100, Bruce Richardson
<bruce.richardson@intel.com> wrote:
> On Fri, Jun 30, 2017 at 11:40:46AM +0200, Olivier Matz wrote:
> > Hi Bruce,
> >
> > Few comments inline.
> >
> > On Wed, 7 Jun 2017 14:36:16 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> > > The rte_rings traditionally have only supported having ring sizes as powers
> > > of 2, with the actual usable space being the size - 1. In some cases, for
> > > example, with an eventdev where we want to precisely control queue depths
> > > for latency, we need to allow ring sizes which are not powers of two so we
> > > add in an additional ring capacity value to allow that. For existing rings,
> > > this value will be size-1, i.e. the same as the mask, but if the new
> > > EXACT_SZ flag is passed on ring creation, the ring will have exactly the
> > > usable space requested, although the underlying memory size may be bigger.
> > >
> > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> >
> > Specifying the exact wanted size looks to be a better API than the current
> > one, which is to give the power of two above the wanted value, knowing there
> > will be only size-1 elements available.
> >
> > What would you think about deprecating ring init/creation without this
> > flag in a few versions? Then, later, we could remove this flag and
> > the new behavior would become the default.
> >
>
> I'm not really sure it's necessary. For the vast majority of cases what
> we have now is fine, and it ensures that we maximise the ring space. I'd
> tend to keep the new behaviour for those cases where we really need it.
>
> It's a trade off between what we hide and expose:
> * current scheme hides the fact that you get one entry less than you ask
> for, but the memory space is as expected.
> * new scheme gives you exactly the entries you ask for, but hides the
> fact that you could be using up to double the memory space for the
> ring.
Yes, having 2 different behaviors is precisely what I don't really like.
Giving the number of entries the user asks for is straightforward, which
is a good thing for an API. And hiding the extra consumed memory looks
acceptable; this can be documented.
That said, if we decide to deprecate it, there's no need to do it right
now.
> > > @@ -845,69 +857,63 @@ rte_ring_dequeue(struct rte_ring *r, void **obj_p)
> > > }
> > >
> > > /**
> > > - * Test if a ring is full.
> > > + * Return the number of entries in a ring.
> > > *
> > > * @param r
> > > * A pointer to the ring structure.
> > > * @return
> > > - * - 1: The ring is full.
> > > - * - 0: The ring is not full.
> > > + * The number of entries in the ring.
> > > */
> > > -static inline int
> > > -rte_ring_full(const struct rte_ring *r)
> > > +static inline unsigned
> > > +rte_ring_count(const struct rte_ring *r)
> > > {
> > > uint32_t prod_tail = r->prod.tail;
> > > uint32_t cons_tail = r->cons.tail;
> > > - return ((cons_tail - prod_tail - 1) & r->mask) == 0;
> > > + return (prod_tail - cons_tail) & r->mask;
> > > }
> >
> > When used on a mc/mp ring, this function can now return a value
> > which is higher than the ring capacity. Even if this function is
> > not really accurate when used while the ring is in use, I think we
> > should cap the result at r->capacity.
> >
> > This will also prevent rte_ring_free_count() from returning an
> > overflowed value.
> >
>
> How does it return an overflowing value? I think in the MP/MC case the
> tests made each time around the loop for cmpset should ensure we never
> overflow.
Here we are reading r->prod.tail and r->cons.tail without
synchronization or memory barriers. So basically, there is no
guarantee that the values are consistent.
Before, the returned value could be wrong, but always in the
interval [0, mask].
Now, rte_ring_count() is still in the interval [0, mask], but the
capacity of the ring is lower than mask. If rte_ring_count() returns
mask, it would cause rte_ring_free_count() to return a value lower than
0 (overflowed since the result is unsigned).
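For instance, with hypothetical numbers for an exact-size ring of capacity
1000 (so size = 1024, mask = 1023), a torn read could give:

  rte_ring_count(r)      == 1023                       (> capacity)
  rte_ring_free_count(r) == 1000u - 1023 == 4294967273 (wrapped around)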
> > > @@ -916,7 +922,9 @@ rte_ring_free_count(const struct rte_ring *r)
> > > * @param r
> > > * A pointer to the ring structure.
> > > * @return
> > > - * The number of elements which can be stored in the ring.
> > > + * The size of the data store used by the ring.
> > > + * NOTE: this is not the same as the usable space in the ring. To query that
> > > + * use ``rte_ring_get_capacity()``.
> > > */
> > > static inline unsigned int
> > > rte_ring_get_size(const struct rte_ring *r)
> > > @@ -925,6 +933,20 @@ rte_ring_get_size(const struct rte_ring *r)
> > > }
> > >
> > > /**
> > > + * Return the number of elements which can be stored in the ring.
> > > + *
> > > + * @param r
> > > + * A pointer to the ring structure.
> > > + * @return
> > > + * The usable size of the ring.
> > > + */
> > > +static inline unsigned int
> > > +rte_ring_get_capacity(const struct rte_ring *r)
> > > +{
> > > + return r->capacity;
> > > +}
> > > +
> > > +/**
> > > * Dump the status of all rings on the console
> > > *
> > > * @param f
> >
> > I think the users of rte_ring_get_size() use this API to get the
> > number of elements that can fit in the ring. The patch changes the
> > semantic of this function (even if the result is the same when
> > EXACT_SZ is not passed). This is the same for ring->size.
> >
> > Wouldn't it be better to rename all current occurrences of "size" as
> > "storage_size", and introduce a new "size", which is what you call
> > "capacity".
> >
> > I'm asking the question but I'm not that convinced myself...
> > nevertheless I find storage_size/size less confusing than
> > size/capacity.
> >
> I went back and forth on this myself a bit too. However, the reason I
> went for this approach is purely one of backwards compatibility. For
> legacy apps unaware of the new API, there is no change in behaviour or
> return value for the size function. For apps that are aware of this
> change there is the new function provided.
> If we did change size to be storage size, then you could end up getting
> off-by-one errors in apps that haven't been updated.
OK.
Olivier
* Re: [dpdk-dev] [PATCH 1/5] ring: allow rings with non power-of-2 sizes
2017-06-30 12:24 ` Olivier Matz
@ 2017-06-30 13:59 ` Bruce Richardson
From: Bruce Richardson @ 2017-06-30 13:59 UTC (permalink / raw)
To: Olivier Matz; +Cc: dev, jerin.jacob
On Fri, Jun 30, 2017 at 02:24:38PM +0200, Olivier Matz wrote:
> On Fri, 30 Jun 2017 12:32:27 +0100, Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> > On Fri, Jun 30, 2017 at 11:40:46AM +0200, Olivier Matz wrote:
> > > Hi Bruce,
> > >
> > > Few comments inline.
> > >
> > > On Wed, 7 Jun 2017 14:36:16 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> > > > The rte_rings traditionally have only supported having ring sizes as powers
> > > > of 2, with the actual usable space being the size - 1. In some cases, for
> > > > example, with an eventdev where we want to precisely control queue depths
> > > > for latency, we need to allow ring sizes which are not powers of two so we
> > > > add in an additional ring capacity value to allow that. For existing rings,
> > > > this value will be size-1, i.e. the same as the mask, but if the new
> > > > EXACT_SZ flag is passed on ring creation, the ring will have exactly the
> > > > usable space requested, although the underlying memory size may be bigger.
> > > >
> > > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > >
> > > Specifying the exact wanted size looks to be a better API than the current
> > > one, which is to give the power of two above the wanted value, knowing there
> > > will be only size-1 elements available.
> > >
> > > What would you think about deprecating ring init/creation without this
> > > flag in a few versions? Then, later, we could remove this flag and
> > > the new behavior would become the default.
> > >
> >
> > I'm not really sure it's necessary. For the vast majority of cases what
> > we have now is fine, and it ensures that we maximise the ring space. I'd
> > tend to keep the new behaviour for those cases where we really need it.
> >
> > It's a trade off between what we hide and expose:
> > * current scheme hides the fact that you get one entry less than you ask
> > for, but the memory space is as expected.
> > * new scheme gives you exactly the entries you ask for, but hides the
> > fact that you could be using up to double the memory space for the
> > ring.
>
> Yes, having 2 different behaviors is precisely what I don't really like.
> Giving the number of entries the user asks for is straightforward, which
> is a good thing for an API. And hiding the extra consumed memory looks
> acceptable; this can be documented.
>
> That said, if we decide to deprecate it, there's no need to do it right
> now.
>
>
> > > > @@ -845,69 +857,63 @@ rte_ring_dequeue(struct rte_ring *r, void **obj_p)
> > > > }
> > > >
> > > > /**
> > > > - * Test if a ring is full.
> > > > + * Return the number of entries in a ring.
> > > > *
> > > > * @param r
> > > > * A pointer to the ring structure.
> > > > * @return
> > > > - * - 1: The ring is full.
> > > > - * - 0: The ring is not full.
> > > > + * The number of entries in the ring.
> > > > */
> > > > -static inline int
> > > > -rte_ring_full(const struct rte_ring *r)
> > > > +static inline unsigned
> > > > +rte_ring_count(const struct rte_ring *r)
> > > > {
> > > > uint32_t prod_tail = r->prod.tail;
> > > > uint32_t cons_tail = r->cons.tail;
> > > > - return ((cons_tail - prod_tail - 1) & r->mask) == 0;
> > > > + return (prod_tail - cons_tail) & r->mask;
> > > > }
> > >
> > > When used on a mc/mp ring, this function can now return a value
> > > which is higher than the ring capacity. Even if this function is
> > > not really accurate when used while the ring is in use, I think we
> > > should cap the result at r->capacity.
> > >
> > > This will also prevent rte_ring_free_count() from returning an
> > > overflowed value.
> > >
> >
> > How does it return an overflowing value? I think in the MP/MC case the
> > tests made each time around the loop for cmpset should ensure we never
> > overflow.
>
> Here we are reading r->prod.tail and r->cons.tail without
> synchronization or memory barriers. So basically, there is no
> guarantee that the values are consistent.
>
> Before, the returned value could be wrong, but always in the
> interval [0, mask].
>
> Now, rte_ring_count() is still in the interval [0, mask], but the
> capacity of the ring is lower than mask. If rte_ring_count() returns
> mask, it would cause rte_ring_free_count() to return a value lower than
> 0 (overflowed since the result is unsigned).
>
>
I'm still not convinced that this can actually happen, but just in case,
I'll add in the extra check in V2.
/Bruce
* [dpdk-dev] [PATCH 2/5] test/test: add unit tests for exact size rings
2017-06-07 13:36 [dpdk-dev] [PATCH 0/5] create event rings type Bruce Richardson
2017-06-07 13:36 ` [dpdk-dev] [PATCH 1/5] ring: allow rings with non power-of-2 sizes Bruce Richardson
@ 2017-06-07 13:36 ` Bruce Richardson
2017-06-30 9:42 ` Olivier Matz
2017-06-07 13:36 ` [dpdk-dev] [PATCH 3/5] eventdev: add ring structure for events Bruce Richardson
From: Bruce Richardson @ 2017-06-07 13:36 UTC (permalink / raw)
To: dev; +Cc: olivier.matz, jerin.jacob, Bruce Richardson
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
test/test/test_ring.c | 71 +++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 71 insertions(+)
diff --git a/test/test/test_ring.c b/test/test/test_ring.c
index 858ebc1..7f3b00d 100644
--- a/test/test/test_ring.c
+++ b/test/test/test_ring.c
@@ -770,6 +770,74 @@ test_ring_basic_ex(void)
}
static int
+test_ring_with_exact_size(void)
+{
+ struct rte_ring *std_ring, *exact_sz_ring;
+ void *ptr_array[16];
+ static const unsigned int ring_sz = RTE_DIM(ptr_array);
+ unsigned int i;
+
+ std_ring = rte_ring_create("std", ring_sz, rte_socket_id(),
+ RING_F_SP_ENQ | RING_F_SC_DEQ);
+ if (std_ring == NULL) {
+ printf("%s: error, can't create std ring\n", __func__);
+ return -1;
+ }
+ exact_sz_ring = rte_ring_create("exact sz", ring_sz, rte_socket_id(),
+ RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
+ if (exact_sz_ring == NULL) {
+ printf("%s: error, can't create exact size ring\n", __func__);
+ return -1;
+ }
+
+ /*
+ * Check that the exact size ring is bigger than the standard ring
+ */
+ if (rte_ring_get_size(std_ring) >= rte_ring_get_size(exact_sz_ring)) {
+ printf("%s: error, std ring (size: %u) is not smaller than exact size one (size %u)\n",
+ __func__,
+ rte_ring_get_size(std_ring),
+ rte_ring_get_size(exact_sz_ring));
+ return -1;
+ }
+ /*
+ * check that the exact_sz_ring can hold one more element than the
+ * standard ring. (16 vs 15 elements)
+ */
+ for (i = 0; i < ring_sz - 1; i++) {
+ rte_ring_enqueue(std_ring, NULL);
+ rte_ring_enqueue(exact_sz_ring, NULL);
+ }
+ if (rte_ring_enqueue(std_ring, NULL) != -ENOBUFS) {
+ printf("%s: error, unexpected successful enqueue\n", __func__);
+ return -1;
+ }
+ if (rte_ring_enqueue(exact_sz_ring, NULL) == -ENOBUFS) {
+ printf("%s: error, enqueue failed\n", __func__);
+ return -1;
+ }
+
+ /* check that dequeue returns the expected number of elements */
+ if (rte_ring_dequeue_burst(exact_sz_ring, ptr_array,
+ RTE_DIM(ptr_array), NULL) != ring_sz) {
+ printf("%s: error, failed to dequeue expected nb of elements\n",
+ __func__);
+ return -1;
+ }
+
+ /* check that the capacity function returns expected value */
+ if (rte_ring_get_capacity(exact_sz_ring) != ring_sz) {
+ printf("%s: error, incorrect ring capacity reported\n",
+ __func__);
+ return -1;
+ }
+
+ rte_ring_free(std_ring);
+ rte_ring_free(exact_sz_ring);
+ return 0;
+}
+
+static int
test_ring(void)
{
/* some more basic operations */
@@ -820,6 +888,9 @@ test_ring(void)
if (test_ring_creation_with_an_used_name() < 0)
return -1;
+ if (test_ring_with_exact_size() < 0)
+ return -1;
+
/* dump the ring status */
rte_ring_list_dump(stdout);
--
2.9.4
* Re: [dpdk-dev] [PATCH 2/5] test/test: add unit tests for exact size rings
2017-06-07 13:36 ` [dpdk-dev] [PATCH 2/5] test/test: add unit tests for exact size rings Bruce Richardson
@ 2017-06-30 9:42 ` Olivier Matz
From: Olivier Matz @ 2017-06-30 9:42 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, jerin.jacob
Hi Bruce,
On Wed, 7 Jun 2017 14:36:17 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
> test/test/test_ring.c | 71 +++++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 71 insertions(+)
>
> diff --git a/test/test/test_ring.c b/test/test/test_ring.c
> index 858ebc1..7f3b00d 100644
> --- a/test/test/test_ring.c
> +++ b/test/test/test_ring.c
> @@ -770,6 +770,74 @@ test_ring_basic_ex(void)
> }
>
> static int
> +test_ring_with_exact_size(void)
> +{
> + struct rte_ring *std_ring, *exact_sz_ring;
> + void *ptr_array[16];
> + static const unsigned int ring_sz = RTE_DIM(ptr_array);
> + unsigned int i;
> +
> + std_ring = rte_ring_create("std", ring_sz, rte_socket_id(),
> + RING_F_SP_ENQ | RING_F_SC_DEQ);
> + if (std_ring == NULL) {
> + printf("%s: error, can't create std ring\n", __func__);
> + return -1;
> + }
> + exact_sz_ring = rte_ring_create("exact sz", ring_sz, rte_socket_id(),
> + RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
> + if (exact_sz_ring == NULL) {
> + printf("%s: error, can't create exact size ring\n", __func__);
> + return -1;
> + }
I think you should use a "goto fail" to free the ring in error
cases.
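i.e. something along these lines (just a sketch, untested; the checks in
the middle stay as in your patch):

static int
test_ring_with_exact_size(void)
{
	struct rte_ring *std_ring = NULL, *exact_sz_ring = NULL;
	static const unsigned int ring_sz = 16;

	std_ring = rte_ring_create("std", ring_sz, rte_socket_id(),
			RING_F_SP_ENQ | RING_F_SC_DEQ);
	if (std_ring == NULL) {
		printf("%s: error, can't create std ring\n", __func__);
		goto fail;
	}
	exact_sz_ring = rte_ring_create("exact sz", ring_sz, rte_socket_id(),
			RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
	if (exact_sz_ring == NULL) {
		printf("%s: error, can't create exact size ring\n", __func__);
		goto fail;
	}

	/* ... remaining checks, each failing path doing goto fail ... */

	rte_ring_free(std_ring);
	rte_ring_free(exact_sz_ring);
	return 0;
fail:
	/* rte_ring_free() returns early on NULL, so this is safe here */
	rte_ring_free(std_ring);
	rte_ring_free(exact_sz_ring);
	return -1;
}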
> +
> + /*
> + * Check that the exact size ring is bigger than the standard ring
> + */
> + if (rte_ring_get_size(std_ring) >= rte_ring_get_size(exact_sz_ring)) {
> + printf("%s: error, std ring (size: %u) is not smaller than exact size one (size %u)\n",
> + __func__,
> + rte_ring_get_size(std_ring),
> + rte_ring_get_size(exact_sz_ring));
> + return -1;
> + }
> + /*
> + * check that the exact_sz_ring can hold one more element than the
> + * standard ring. (16 vs 15 elements)
> + */
> + for (i = 0; i < ring_sz - 1; i++) {
> + rte_ring_enqueue(std_ring, NULL);
> + rte_ring_enqueue(exact_sz_ring, NULL);
> + }
> + if (rte_ring_enqueue(std_ring, NULL) != -ENOBUFS) {
> + printf("%s: error, unexpected successful enqueue\n", __func__);
> + return -1;
> + }
> + if (rte_ring_enqueue(exact_sz_ring, NULL) == -ENOBUFS) {
> + printf("%s: error, enqueue failed\n", __func__);
> + return -1;
> + }
> +
> + /* check that dequeue returns the expected number of elements */
> + if (rte_ring_dequeue_burst(exact_sz_ring, ptr_array,
> + RTE_DIM(ptr_array), NULL) != ring_sz) {
> + printf("%s: error, failed to dequeue expected nb of elements\n",
> + __func__);
> + return -1;
> + }
> +
> + /* check that the capacity function returns expected value */
> + if (rte_ring_get_capacity(exact_sz_ring) != ring_sz) {
> + printf("%s: error, incorrect ring capacity reported\n",
> + __func__);
> + return -1;
> + }
> +
> + rte_ring_free(std_ring);
> + rte_ring_free(exact_sz_ring);
> + return 0;
> +}
> +
> +static int
> test_ring(void)
> {
> /* some more basic operations */
> @@ -820,6 +888,9 @@ test_ring(void)
> if (test_ring_creation_with_an_used_name() < 0)
> return -1;
>
> + if (test_ring_with_exact_size() < 0)
> + return -1;
> +
> /* dump the ring status */
> rte_ring_list_dump(stdout);
>
* [dpdk-dev] [PATCH 3/5] eventdev: add ring structure for events
2017-06-07 13:36 [dpdk-dev] [PATCH 0/5] create event rings type Bruce Richardson
2017-06-07 13:36 ` [dpdk-dev] [PATCH 1/5] ring: allow rings with non power-of-2 sizes Bruce Richardson
2017-06-07 13:36 ` [dpdk-dev] [PATCH 2/5] test/test: add unit tests for exact size rings Bruce Richardson
@ 2017-06-07 13:36 ` Bruce Richardson
2017-06-12 5:15 ` Jerin Jacob
2017-06-07 13:36 ` [dpdk-dev] [PATCH 4/5] test/test: add auto-tests for event ring functions Bruce Richardson
From: Bruce Richardson @ 2017-06-07 13:36 UTC (permalink / raw)
To: dev; +Cc: olivier.matz, jerin.jacob, Bruce Richardson
Add in a new rte_event_ring structure type and functions to allow events to
be passed core to core. This is needed because the standard rte_ring type
only works on pointers, while for events, we want to copy the entire 16B
events themselves - not just pointers to them. The code makes extensive use
of the functions already defined in rte_ring.h
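A minimal sketch of the intended usage, based only on the functions added
in this patch (enqueue/dequeue helpers omitted; the names used are
illustrative):

#include <rte_lcore.h>
#include <rte_event_ring.h>

/* producer core: create an exact-size ring able to hold 1000 events */
static struct rte_event_ring *
create_worker_evq(void)
{
	return rte_event_ring_create("worker_evq", 1000, rte_socket_id(),
			RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
}

/* consumer core: attach to the same ring by name */
static struct rte_event_ring *
attach_worker_evq(void)
{
	/* returns NULL and sets rte_errno if the ring does not exist */
	return rte_event_ring_lookup("worker_evq");
}

/* teardown: remove the tailq entry and free the backing memzone */
static void
destroy_worker_evq(struct rte_event_ring *r)
{
	rte_event_ring_free(r);
}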
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/Makefile | 2 +-
lib/librte_eventdev/Makefile | 2 +
lib/librte_eventdev/rte_event_ring.c | 207 ++++++++++++++++++
lib/librte_eventdev/rte_event_ring.h | 312 +++++++++++++++++++++++++++
lib/librte_eventdev/rte_eventdev_version.map | 9 +
5 files changed, 531 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_eventdev/rte_event_ring.c
create mode 100644 lib/librte_eventdev/rte_event_ring.h
diff --git a/lib/Makefile b/lib/Makefile
index 07e1fd0..aef584e 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -52,7 +52,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
DEPDIRS-librte_cryptodev := librte_eal librte_mempool librte_ring librte_mbuf
DEPDIRS-librte_cryptodev += librte_kvargs
DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
-DEPDIRS-librte_eventdev := librte_eal
+DEPDIRS-librte_eventdev := librte_eal librte_ring
DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
DEPDIRS-librte_vhost := librte_eal librte_mempool librte_mbuf librte_ether
DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
diff --git a/lib/librte_eventdev/Makefile b/lib/librte_eventdev/Makefile
index e06346a..0313277 100644
--- a/lib/librte_eventdev/Makefile
+++ b/lib/librte_eventdev/Makefile
@@ -42,10 +42,12 @@ CFLAGS += $(WERROR_FLAGS)
# library source files
SRCS-y += rte_eventdev.c
+SRCS-y += rte_event_ring.c
# export include files
SYMLINK-y-include += rte_eventdev.h
SYMLINK-y-include += rte_eventdev_pmd.h
+SYMLINK-y-include += rte_event_ring.h
# versioning export map
EXPORT_MAP := rte_eventdev_version.map
diff --git a/lib/librte_eventdev/rte_event_ring.c b/lib/librte_eventdev/rte_event_ring.c
new file mode 100644
index 0000000..b14c212
--- /dev/null
+++ b/lib/librte_eventdev/rte_event_ring.c
@@ -0,0 +1,207 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/queue.h>
+#include <string.h>
+
+#include <rte_tailq.h>
+#include <rte_memzone.h>
+#include <rte_rwlock.h>
+#include <rte_eal_memconfig.h>
+#include "rte_event_ring.h"
+
+TAILQ_HEAD(rte_event_ring_list, rte_tailq_entry);
+
+static struct rte_tailq_elem rte_event_ring_tailq = {
+ .name = RTE_TAILQ_EVENT_RING_NAME,
+};
+EAL_REGISTER_TAILQ(rte_event_ring_tailq)
+
+int
+rte_event_ring_init(struct rte_event_ring *r, const char *name,
+ unsigned int count, unsigned int flags)
+{
+ /* compilation-time checks */
+ RTE_BUILD_BUG_ON((sizeof(struct rte_event_ring) &
+ RTE_CACHE_LINE_MASK) != 0);
+
+ /* init the ring structure */
+ return rte_ring_init(&r->r, name, count, flags);
+}
+
+/* create the ring */
+struct rte_event_ring *
+rte_event_ring_create(const char *name, unsigned int count, int socket_id,
+ unsigned int flags)
+{
+ char mz_name[RTE_MEMZONE_NAMESIZE];
+ struct rte_event_ring *r;
+ struct rte_tailq_entry *te;
+ const struct rte_memzone *mz;
+ ssize_t ring_size;
+ int mz_flags = 0;
+ struct rte_event_ring_list *ring_list = NULL;
+ const unsigned int requested_count = count;
+ int ret;
+
+ ring_list = RTE_TAILQ_CAST(rte_event_ring_tailq.head,
+ rte_event_ring_list);
+
+ /* for an exact size ring, round up from count to a power of two */
+ if (flags & RING_F_EXACT_SZ)
+ count = rte_align32pow2(count + 1);
+ else if (!rte_is_power_of_2(count)) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ ring_size = sizeof(*r) + (count * sizeof(struct rte_event));
+
+ ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
+ RTE_RING_MZ_PREFIX, name);
+ if (ret < 0 || ret >= (int)sizeof(mz_name)) {
+ rte_errno = ENAMETOOLONG;
+ return NULL;
+ }
+
+ te = rte_zmalloc("RING_TAILQ_ENTRY", sizeof(*te), 0);
+ if (te == NULL) {
+ RTE_LOG(ERR, RING, "Cannot reserve memory for tailq\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ /*
+ * reserve a memory zone for this ring. If we can't get rte_config or
+ * we are secondary process, the memzone_reserve function will set
+ * rte_errno for us appropriately - hence no check in this function
+ */
+ mz = rte_memzone_reserve(mz_name, ring_size, socket_id, mz_flags);
+ if (mz != NULL) {
+ r = mz->addr;
+ /*
+ * no need to check return value here, we already checked the
+ * arguments above
+ */
+ rte_event_ring_init(r, name, requested_count, flags);
+
+ te->data = (void *) r;
+ r->r.memzone = mz;
+
+ TAILQ_INSERT_TAIL(ring_list, te, next);
+ } else {
+ r = NULL;
+ RTE_LOG(ERR, RING, "Cannot reserve memory\n");
+ rte_free(te);
+ }
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ return r;
+}
+
+
+struct rte_event_ring *
+rte_event_ring_lookup(const char *name)
+{
+ struct rte_tailq_entry *te;
+ struct rte_event_ring *r = NULL;
+ struct rte_event_ring_list *ring_list;
+
+ ring_list = RTE_TAILQ_CAST(rte_event_ring_tailq.head,
+ rte_event_ring_list);
+
+ rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ TAILQ_FOREACH(te, ring_list, next) {
+ r = (struct rte_event_ring *) te->data;
+ if (strncmp(name, r->r.name, RTE_RING_NAMESIZE) == 0)
+ break;
+ }
+
+ rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ if (te == NULL) {
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ return r;
+}
+
+/* free the ring */
+void
+rte_event_ring_free(struct rte_event_ring *r)
+{
+ struct rte_event_ring_list *ring_list = NULL;
+ struct rte_tailq_entry *te;
+
+ if (r == NULL)
+ return;
+
+ /*
+ * Ring was not created with rte_event_ring_create,
+ * therefore, there is no memzone to free.
+ */
+ if (r->r.memzone == NULL) {
+ RTE_LOG(ERR, RING,
+ "Cannot free ring (not created with rte_event_ring_create()");
+ return;
+ }
+
+ if (rte_memzone_free(r->r.memzone) != 0) {
+ RTE_LOG(ERR, RING, "Cannot free memory\n");
+ return;
+ }
+
+ ring_list = RTE_TAILQ_CAST(rte_event_ring_tailq.head,
+ rte_event_ring_list);
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ /* find out tailq entry */
+ TAILQ_FOREACH(te, ring_list, next) {
+ if (te->data == (void *) r)
+ break;
+ }
+
+ if (te == NULL) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ return;
+ }
+
+ TAILQ_REMOVE(ring_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_free(te);
+}
diff --git a/lib/librte_eventdev/rte_event_ring.h b/lib/librte_eventdev/rte_event_ring.h
new file mode 100644
index 0000000..f7eae58
--- /dev/null
+++ b/lib/librte_eventdev/rte_event_ring.h
@@ -0,0 +1,312 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/**
+ * @file
+ * RTE Event Ring
+ *
+ * This provides a ring implementation for passing rte_event structures
+ * from one core to another.
+ */
+
+#ifndef _RTE_EVENT_RING_
+#define _RTE_EVENT_RING_
+
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+#include "rte_eventdev.h"
+
+#define RTE_TAILQ_EVENT_RING_NAME "RTE_EVENT_RING"
+
+#ifndef force_inline
+#define force_inline inline __attribute__((always_inline))
+#endif
+
+/**
+ * Generic ring structure for passing rte_event objects from core to core.
+ *
+ * Based on the primitives given in the rte_ring library. Designed to be
+ * used inside software eventdev implementations and by applications
+ * directly as needed.
+ */
+struct rte_event_ring {
+ struct rte_ring r;
+};
+
+/**
+ * Returns the number of events in the ring
+ *
+ * @param r
+ * pointer to the event ring
+ * @return
+ * the number of events in the ring
+ */
+static force_inline unsigned int
+rte_event_ring_count(const struct rte_event_ring *r)
+{
+ return rte_ring_count(&r->r);
+}
+
+/**
+ * Returns the amount of free space in the ring
+ *
+ * @param r
+ * pointer to the event ring
+ * @return
+ * the number of free slots in the ring, i.e. the number of events that
+ * can be successfully enqueued before dequeue must be called
+ */
+static force_inline unsigned int
+rte_event_ring_free_count(const struct rte_event_ring *r)
+{
+ return rte_ring_free_count(&r->r);
+}
+
+/**
+ * Enqueue a set of events onto a ring
+ *
+ * Note: this API enqueues by copying the events themselves onto the ring,
+ * rather than just placing a pointer to each event onto the ring. This
+ * means that statically-allocated events can safely be enqueued by this
+ * API.
+ *
+ * @param r
+ * pointer to the event ring
+ * @param events
+ * pointer to an array of struct rte_event objects
+ * @param n
+ * number of events in the array to enqueue
+ * @param free_space
+ * if non-null, is updated to indicate the amount of free space in the
+ * ring once the enqueue has completed.
+ * @return
+ * the number of elements, n', enqueued to the ring, 0 <= n' <= n
+ */
+static force_inline unsigned int
+rte_event_ring_enqueue_burst(struct rte_event_ring *r,
+ const struct rte_event *events,
+ unsigned int n, uint16_t *free_space)
+{
+ uint32_t prod_head, prod_next;
+ uint32_t free_entries;
+
+ n = __rte_ring_move_prod_head(&r->r, r->r.prod.single, n,
+ RTE_RING_QUEUE_VARIABLE,
+ &prod_head, &prod_next, &free_entries);
+ if (n == 0)
+ goto end;
+
+ ENQUEUE_PTRS(&r->r, &r[1], prod_head, events, n, struct rte_event);
+ rte_smp_wmb();
+
+ update_tail(&r->r.prod, prod_head, prod_next, 1);
+end:
+ if (free_space != NULL)
+ *free_space = free_entries - n;
+ return n;
+}
+
+/**
+ * Dequeue a set of events from a ring
+ *
+ * Note: this API does not work with pointers to events, rather it copies
+ * the events themselves to the destination ``events`` buffer.
+ *
+ * @param r
+ * pointer to the event ring
+ * @param events
+ * pointer to an array to hold the struct rte_event objects
+ * @param n
+ * number of events that can be held in the ``events`` array
+ * @param available
+ * if non-null, is updated to indicate the number of events remaining in
+ * the ring once the dequeue has completed
+ * @return
+ * the number of elements, n', dequeued from the ring, 0 <= n' <= n
+ */
+static force_inline unsigned int
+rte_event_ring_dequeue_burst(struct rte_event_ring *r,
+ struct rte_event *events,
+ unsigned int n, uint16_t *available)
+{
+ uint32_t cons_head, cons_next;
+ uint32_t entries;
+
+ n = __rte_ring_move_cons_head(&r->r, r->r.cons.single, n,
+ RTE_RING_QUEUE_VARIABLE,
+ &cons_head, &cons_next, &entries);
+ if (n == 0)
+ goto end;
+
+ DEQUEUE_PTRS(&r->r, &r[1], cons_head, events, n, struct rte_event);
+ rte_smp_rmb();
+
+ update_tail(&r->r.cons, cons_head, cons_next, 1);
+
+end:
+ if (available != NULL)
+ *available = entries - n;
+ return n;
+}
+
+/**
+ * Initializes an already-allocated ring structure
+ *
+ * @param r
+ * pointer to the ring memory to be initialized
+ * @param name
+ * name to be given to the ring
+ * @param count
+ * the number of elements to be stored in the ring. If the flag
+ * ``RING_F_EXACT_SZ`` is not set, this must be a power of 2, and the actual
+ * usable space in the ring will be ``count - 1`` entries. If the flag
+ * ``RING_F_EXACT_SZ`` is set, this can be any value up to the ring size
+ * limit - 1, and the usable space will be exactly that requested.
+ * @param flags
+ * An OR of the following:
+ * - RING_F_SP_ENQ: If this flag is set, the default behavior when
+ * using ``rte_ring_enqueue()`` or ``rte_ring_enqueue_bulk()``
+ * is "single-producer". Otherwise, it is "multi-producers".
+ * - RING_F_SC_DEQ: If this flag is set, the default behavior when
+ * using ``rte_ring_dequeue()`` or ``rte_ring_dequeue_bulk()``
+ * is "single-consumer". Otherwise, it is "multi-consumers".
+ * - RING_F_EXACT_SZ: If this flag is set, the ``count`` parameter is to
+ * be taken as the exact usable size of the ring, and as such does not
+ * need to be a power of 2. The underlying ring memory will be a
+ * power-of-2 size greater than the count value.
+ * @return
+ * 0 on success, or a negative value on error.
+ */
+int
+rte_event_ring_init(struct rte_event_ring *r, const char *name,
+ unsigned int count, unsigned int flags);
+
+/**
+ * Create an event ring structure
+ *
+ * This function allocates memory and initializes an event ring inside that
+ * memory.
+ *
+ * @param name
+ * name to be given to the ring
+ * @param count
+ * the number of elements to be stored in the ring. If the flag
+ * ``RING_F_EXACT_SZ`` is not set, this must be a power of 2, and the actual
+ * usable space in the ring will be ``count - 1`` entries. If the flag
+ * ``RING_F_EXACT_SZ`` is set, this can be any value up to the ring size
+ * limit - 1, and the usable space will be exactly that requested.
+ * @param socket_id
+ * The *socket_id* argument is the socket identifier in case of
+ * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ * constraint for the reserved zone.
+ * @param flags
+ * An OR of the following:
+ * - RING_F_SP_ENQ: If this flag is set, the default behavior when
+ * using ``rte_ring_enqueue()`` or ``rte_ring_enqueue_bulk()``
+ * is "single-producer". Otherwise, it is "multi-producers".
+ * - RING_F_SC_DEQ: If this flag is set, the default behavior when
+ * using ``rte_ring_dequeue()`` or ``rte_ring_dequeue_bulk()``
+ * is "single-consumer". Otherwise, it is "multi-consumers".
+ * - RING_F_EXACT_SZ: If this flag is set, the ``count`` parameter is to
+ * be taken as the exact usable size of the ring, and as such does not
+ * need to be a power of 2. The underlying ring memory will be a
+ * power-of-2 size greater than the count value.
+ * @return
+ * On success, the pointer to the new allocated ring. NULL on error with
+ * rte_errno set appropriately. Possible errno values include:
+ * - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ * - E_RTE_SECONDARY - function was called from a secondary process instance
+ * - EINVAL - count provided is not a power of 2
+ * - ENOSPC - the maximum number of memzones has already been allocated
+ * - EEXIST - a memzone with the same name already exists
+ * - ENOMEM - no appropriate memory area found in which to create memzone
+ */
+struct rte_event_ring *
+rte_event_ring_create(const char *name, unsigned int count, int socket_id,
+ unsigned int flags);
+
+/**
+ * Search for an event ring based on its name
+ *
+ * @param name
+ * The name of the ring.
+ * @return
+ * The pointer to the ring matching the name, or NULL if not found,
+ * with rte_errno set appropriately. Possible rte_errno values include:
+ * - ENOENT - required entry not available to return.
+ */
+struct rte_event_ring *
+rte_event_ring_lookup(const char *name);
+
+/**
+ * De-allocate all memory used by the ring.
+ *
+ * @param r
+ * Ring to free
+ */
+void
+rte_event_ring_free(struct rte_event_ring *r);
+
+/**
+ * Return the size of the event ring.
+ *
+ * @param r
+ * A pointer to the ring structure.
+ * @return
+ * The size of the data store used by the ring.
+ * NOTE: this is not the same as the usable space in the ring. To query that
+ * use ``rte_ring_get_capacity()``.
+ */
+static inline unsigned int
+rte_event_ring_get_size(const struct rte_event_ring *r)
+{
+ return rte_ring_get_size(&r->r);
+}
+
+/**
+ * Return the number of elements which can be stored in the event ring.
+ *
+ * @param r
+ * A pointer to the ring structure.
+ * @return
+ * The usable size of the ring.
+ */
+static inline unsigned int
+rte_event_ring_get_capacity(const struct rte_event_ring *r)
+{
+ return rte_ring_get_capacity(&r->r);
+}
+#endif
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index 1fa6b33..4c48e5f 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -42,3 +42,12 @@ DPDK_17.05 {
local: *;
};
+
+DPDK_17.08 {
+ global:
+
+ rte_event_ring_create;
+ rte_event_ring_free;
+ rte_event_ring_init;
+ rte_event_ring_lookup;
+} DPDK_17.05;
--
2.9.4
* Re: [dpdk-dev] [PATCH 3/5] eventdev: add ring structure for events
2017-06-07 13:36 ` [dpdk-dev] [PATCH 3/5] eventdev: add ring structure for events Bruce Richardson
@ 2017-06-12 5:15 ` Jerin Jacob
2017-06-12 8:53 ` Bruce Richardson
2017-06-30 13:24 ` Bruce Richardson
0 siblings, 2 replies; 28+ messages in thread
From: Jerin Jacob @ 2017-06-12 5:15 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, olivier.matz
-----Original Message-----
> Date: Wed, 7 Jun 2017 14:36:18 +0100
> From: Bruce Richardson <bruce.richardson@intel.com>
> To: dev@dpdk.org
> CC: olivier.matz@6wind.com, jerin.jacob@caviumnetworks.com, Bruce
> Richardson <bruce.richardson@intel.com>
> Subject: [PATCH 3/5] eventdev: add ring structure for events
> X-Mailer: git-send-email 2.9.4
>
> Add in a new rte_event_ring structure type and functions to allow events to
> be passed core to core. This is needed because the standard rte_ring type
> only works on pointers, while for events, we want to copy the entire, 16B
> events themselves - not just pointers to them. The code makes extensive use
> of the functions already defined in rte_ring.h
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Thomas,
Should this series applied through dpdk/master or dpdk-next-eventdev/master?
Bruce,
__rte_always_inline macro is now part of master. Good to replace force_inline with it.
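A minimal sketch of that replacement, assuming __rte_always_inline is picked up
from rte_common.h on master (illustration only, not a diff from this series):

#include <rte_common.h>		/* assumed to provide __rte_always_inline on master */
#include <rte_ring.h>
#include <rte_event_ring.h>	/* header added in patch 3/5 */

/*
 * before:
 *   #define force_inline inline __attribute__((always_inline))
 *   static force_inline unsigned int
 *   rte_event_ring_count(const struct rte_event_ring *r);
 *
 * after: drop the local macro and use the common attribute directly
 */
static __rte_always_inline unsigned int
event_ring_count_sketch(const struct rte_event_ring *r)
{
	return rte_ring_count(&r->r);
}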
* Re: [dpdk-dev] [PATCH 3/5] eventdev: add ring structure for events
2017-06-12 5:15 ` Jerin Jacob
@ 2017-06-12 8:53 ` Bruce Richardson
2017-06-30 13:24 ` Bruce Richardson
1 sibling, 0 replies; 28+ messages in thread
From: Bruce Richardson @ 2017-06-12 8:53 UTC (permalink / raw)
To: Jerin Jacob; +Cc: dev, olivier.matz
On Mon, Jun 12, 2017 at 10:45:47AM +0530, Jerin Jacob wrote:
> -----Original Message-----
> > Date: Wed, 7 Jun 2017 14:36:18 +0100
> > From: Bruce Richardson <bruce.richardson@intel.com>
> > To: dev@dpdk.org
> > CC: olivier.matz@6wind.com, jerin.jacob@caviumnetworks.com, Bruce
> > Richardson <bruce.richardson@intel.com>
> > Subject: [PATCH 3/5] eventdev: add ring structure for events
> > X-Mailer: git-send-email 2.9.4
> >
> > Add in a new rte_event_ring structure type and functions to allow events to
> > be passed core to core. This is needed because the standard rte_ring type
> > only works on pointers, while for events, we want to copy the entire, 16B
> > events themselves - not just pointers to them. The code makes extensive use
> > of the functions already defined in rte_ring.h
> >
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
>
>
> Thomas,
>
> Should this series applied through dpdk/master or dpdk-next-eventdev/master?
>
> Bruce,
> __rte_always_inline macro is now part of master. Good to replace force_inline with it.
Yes, good point. Thanks.
/Bruce
* Re: [dpdk-dev] [PATCH 3/5] eventdev: add ring structure for events
2017-06-12 5:15 ` Jerin Jacob
2017-06-12 8:53 ` Bruce Richardson
@ 2017-06-30 13:24 ` Bruce Richardson
1 sibling, 0 replies; 28+ messages in thread
From: Bruce Richardson @ 2017-06-30 13:24 UTC (permalink / raw)
To: Jerin Jacob; +Cc: dev, olivier.matz
On Mon, Jun 12, 2017 at 10:45:47AM +0530, Jerin Jacob wrote:
> -----Original Message-----
> > Date: Wed, 7 Jun 2017 14:36:18 +0100
> > From: Bruce Richardson <bruce.richardson@intel.com>
> > To: dev@dpdk.org
> > CC: olivier.matz@6wind.com, jerin.jacob@caviumnetworks.com, Bruce
> > Richardson <bruce.richardson@intel.com>
> > Subject: [PATCH 3/5] eventdev: add ring structure for events
> > X-Mailer: git-send-email 2.9.4
> >
> > Add in a new rte_event_ring structure type and functions to allow events to
> > be passed core to core. This is needed because the standard rte_ring type
> > only works on pointers, while for events, we want to copy the entire, 16B
> > events themselves - not just pointers to them. The code makes extensive use
> > of the functions already defined in rte_ring.h
> >
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
>
>
> Thomas,
>
> Should this series applied through dpdk/master or dpdk-next-eventdev/master?
I'll leave that up to yourself and Thomas to sort out. My 2c. is that it
probably belongs better in the eventdev tree, especially since the rings
code doesn't see many, if any, changes on mainline anyway to conflict
with this.
I'm going to do up a V2 based on all the feedback this afternoon.
>
> Bruce,
> __rte_always_inline macro is now part of master. Good to replace force_inline with it.
* [dpdk-dev] [PATCH 4/5] test/test: add auto-tests for event ring functions
2017-06-07 13:36 [dpdk-dev] [PATCH 0/5] create event rings type Bruce Richardson
` (2 preceding siblings ...)
2017-06-07 13:36 ` [dpdk-dev] [PATCH 3/5] eventdev: add ring structure for events Bruce Richardson
@ 2017-06-07 13:36 ` Bruce Richardson
2017-06-07 13:36 ` [dpdk-dev] [PATCH 5/5] event/sw: change worker rings to standard event rings Bruce Richardson
2017-06-30 15:06 ` [dpdk-dev] [PATCH v2 0/5] create event rings type Bruce Richardson
5 siblings, 0 replies; 28+ messages in thread
From: Bruce Richardson @ 2017-06-07 13:36 UTC (permalink / raw)
To: dev; +Cc: olivier.matz, jerin.jacob, Bruce Richardson
Add some basic tests for the event ring functions. Not everything needs
testing as there is so much code re-use from the rte_ring code.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
test/test/Makefile | 1 +
test/test/test_event_ring.c | 275 ++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 276 insertions(+)
create mode 100644 test/test/test_event_ring.c
diff --git a/test/test/Makefile b/test/test/Makefile
index ee240be..e797c20 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -201,6 +201,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev.c
ifeq ($(CONFIG_RTE_LIBRTE_EVENTDEV),y)
SRCS-y += test_eventdev.c
+SRCS-y += test_event_ring.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_SW_EVENTDEV) += test_eventdev_sw.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF) += test_eventdev_octeontx.c
endif
diff --git a/test/test/test_event_ring.c b/test/test/test_event_ring.c
new file mode 100644
index 0000000..f1a768f
--- /dev/null
+++ b/test/test/test_event_ring.c
@@ -0,0 +1,275 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+
+#include <rte_event_ring.h>
+
+#include "test.h"
+
+/*
+ * Event Ring
+ * ===========
+ *
+ * Test some basic ops for the event rings.
+ * Does not fully test everything, since most code is reused from rte_ring
+ * library and tested as part of the normal ring autotests.
+ */
+
+#define RING_SIZE 4096
+#define MAX_BULK 32
+
+static struct rte_event_ring *r;
+
+/*
+ * ensure failure to create ring with a bad ring size
+ */
+static int
+test_event_ring_creation_with_wrong_size(void)
+{
+ struct rte_event_ring *rp = NULL;
+
+ /* Test if ring size is not power of 2 */
+ rp = rte_event_ring_create("test_bad_ring_size", RING_SIZE + 1,
+ SOCKET_ID_ANY, 0);
+ if (rp != NULL)
+ return -1;
+
+ /* Test if ring size is exceeding the limit */
+ rp = rte_event_ring_create("test_bad_ring_size", (RTE_RING_SZ_MASK + 1),
+ SOCKET_ID_ANY, 0);
+ if (rp != NULL)
+ return -1;
+ return 0;
+}
+
+/*
+ * Test to check if a non-power-of-2 count causes the create
+ * function to fail correctly
+ */
+static int
+test_create_count_odd(void)
+{
+ struct rte_event_ring *r = rte_event_ring_create("test_event_ring_count",
+ 4097, SOCKET_ID_ANY, 0);
+ if (r != NULL)
+ return -1;
+ return 0;
+}
+
+static int
+test_lookup_null(void)
+{
+ struct rte_event_ring *rlp = rte_event_ring_lookup("ring_not_found");
+ if (rlp == NULL && rte_errno != ENOENT) {
+ printf("test failed to return error on null pointer\n");
+ return -1;
+ }
+ return 0;
+}
+
+static int
+test_basic_event_enqueue_dequeue(void)
+{
+ struct rte_event_ring *sr = NULL;
+ struct rte_event evs[16];
+ uint16_t ret, free_count, used_count;
+
+ memset(evs, 0, sizeof(evs));
+ sr = rte_event_ring_create("spsc_ring", 32, rte_socket_id(),
+ RING_F_SP_ENQ | RING_F_SC_DEQ);
+ if (sr == NULL) {
+ printf("Failed to create sp/sc ring\n");
+ return -1;
+ }
+ if (rte_event_ring_get_capacity(sr) != 31) {
+ printf("Error, invalid capacity\n");
+ goto error;
+ }
+
+ /* test sp/sc ring */
+ if (rte_event_ring_count(sr) != 0) {
+ printf("Error, ring not empty as expected\n");
+ goto error;
+ }
+ if (rte_event_ring_free_count(sr) != rte_event_ring_get_capacity(sr)) {
+ printf("Error, ring free count not as expected\n");
+ goto error;
+ }
+
+ ret = rte_event_ring_enqueue_burst(sr, evs, RTE_DIM(evs), &free_count);
+ if (ret != RTE_DIM(evs) ||
+ free_count != rte_event_ring_get_capacity(sr) - ret) {
+ printf("Error, status after enqueue is unexpected\n");
+ goto error;
+ }
+
+ ret = rte_event_ring_enqueue_burst(sr, evs, RTE_DIM(evs), &free_count);
+ if (ret != RTE_DIM(evs) - 1 ||
+ free_count != 0) {
+ printf("Error, status after enqueue is unexpected\n");
+ goto error;
+ }
+
+ ret = rte_event_ring_dequeue_burst(sr, evs, RTE_DIM(evs), &used_count);
+ if (ret != RTE_DIM(evs) ||
+ used_count != rte_event_ring_get_capacity(sr) - ret) {
+ printf("Error, status after enqueue is unexpected\n");
+ goto error;
+ }
+ ret = rte_event_ring_dequeue_burst(sr, evs, RTE_DIM(evs), &used_count);
+ if (ret != RTE_DIM(evs) - 1 ||
+ used_count != 0) {
+ printf("Error, status after enqueue is unexpected\n");
+ goto error;
+ }
+
+ rte_event_ring_free(sr);
+ return 0;
+error:
+ rte_event_ring_free(sr);
+ return -1;
+}
+
+static int
+test_event_ring_with_exact_size(void)
+{
+ struct rte_event_ring *std_ring, *exact_sz_ring;
+ struct rte_event ev = { .mbuf = NULL };
+ struct rte_event ev_array[16];
+ static const unsigned int ring_sz = RTE_DIM(ev_array);
+ unsigned int i;
+
+ std_ring = rte_event_ring_create("std", ring_sz, rte_socket_id(),
+ RING_F_SP_ENQ | RING_F_SC_DEQ);
+ if (std_ring == NULL) {
+ printf("%s: error, can't create std ring\n", __func__);
+ return -1;
+ }
+ exact_sz_ring = rte_event_ring_create("exact sz",
+ ring_sz, rte_socket_id(),
+ RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
+ if (exact_sz_ring == NULL) {
+ printf("%s: error, can't create exact size ring\n", __func__);
+ return -1;
+ }
+
+ /*
+ * Check that the exact size ring is bigger than the standard ring
+ */
+ if (rte_event_ring_get_size(std_ring) >=
+ rte_event_ring_get_size(exact_sz_ring)) {
+ printf("%s: error, std ring (size: %u) is not smaller than exact size one (size %u)\n",
+ __func__,
+ rte_event_ring_get_size(std_ring),
+ rte_event_ring_get_size(exact_sz_ring));
+ return -1;
+ }
+ /*
+ * check that the exact_sz_ring can hold one more element than the
+ * standard ring. (16 vs 15 elements)
+ */
+ for (i = 0; i < ring_sz - 1; i++) {
+ rte_event_ring_enqueue_burst(std_ring, &ev, 1, NULL);
+ rte_event_ring_enqueue_burst(exact_sz_ring, &ev, 1, NULL);
+ }
+ if (rte_event_ring_enqueue_burst(std_ring, &ev, 1, NULL) != 0) {
+ printf("%s: error, unexpected successful enqueue\n", __func__);
+ return -1;
+ }
+ if (rte_event_ring_enqueue_burst(exact_sz_ring, &ev, 1, NULL) != 1) {
+ printf("%s: error, enqueue failed\n", __func__);
+ return -1;
+ }
+
+ /* check that dequeue returns the expected number of elements */
+ if (rte_event_ring_dequeue_burst(exact_sz_ring, ev_array,
+ RTE_DIM(ev_array), NULL) != ring_sz) {
+ printf("%s: error, failed to dequeue expected nb of elements\n",
+ __func__);
+ return -1;
+ }
+
+ /* check that the capacity function returns expected value */
+ if (rte_event_ring_get_capacity(exact_sz_ring) != ring_sz) {
+ printf("%s: error, incorrect ring capacity reported\n",
+ __func__);
+ return -1;
+ }
+
+ rte_event_ring_free(std_ring);
+ rte_event_ring_free(exact_sz_ring);
+ return 0;
+}
+
+static int
+test_event_ring(void)
+{
+ if (r == NULL)
+ r = rte_event_ring_create("test", RING_SIZE, SOCKET_ID_ANY, 0);
+ if (r == NULL)
+ return -1;
+
+ /* retrieve the ring from its name */
+ if (rte_event_ring_lookup("test") != r) {
+ printf("Cannot lookup ring from its name\n");
+ return -1;
+ }
+
+ /* basic operations */
+ if (test_create_count_odd() < 0) {
+ printf("Test failed to detect odd count\n");
+ return -1;
+ }
+ printf("Test detected odd count\n");
+
+ if (test_lookup_null() < 0) {
+ printf("Test failed to detect NULL ring lookup\n");
+ return -1;
+ }
+ printf("Test detected NULL ring lookup\n");
+
+ /* test of creating ring with wrong size */
+ if (test_event_ring_creation_with_wrong_size() < 0)
+ return -1;
+
+ if (test_basic_event_enqueue_dequeue() < 0)
+ return -1;
+
+ if (test_event_ring_with_exact_size() < 0)
+ return -1;
+
+ return 0;
+}
+
+REGISTER_TEST_COMMAND(event_ring_autotest, test_event_ring);
--
2.9.4
* [dpdk-dev] [PATCH 5/5] event/sw: change worker rings to standard event rings
2017-06-07 13:36 [dpdk-dev] [PATCH 0/5] create event rings type Bruce Richardson
` (3 preceding siblings ...)
2017-06-07 13:36 ` [dpdk-dev] [PATCH 4/5] test/test: add auto-tests for event ring functions Bruce Richardson
@ 2017-06-07 13:36 ` Bruce Richardson
2017-06-30 15:06 ` [dpdk-dev] [PATCH v2 0/5] create event rings type Bruce Richardson
5 siblings, 0 replies; 28+ messages in thread
From: Bruce Richardson @ 2017-06-07 13:36 UTC (permalink / raw)
To: dev; +Cc: olivier.matz, jerin.jacob, Bruce Richardson
Now that we have a standard event ring implementation for passing events
core-to-core, use that in place of the custom event rings in the software
eventdev.
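The change is essentially a one-for-one swap of the private qe_ring calls for
the new public API; a sketch of the port ring creation using names from the
diff below (RING_F_EXACT_SZ keeps the old qe_ring behaviour of the usable
depth being exactly the requested depth):

#include <rte_event_ring.h>

/*
 * before (private helper from drivers/event/sw/event_ring.h):
 *   p->rx_worker_ring = qe_ring_create(buf, MAX_SW_PROD_Q_DEPTH,
 *                                      dev->data->socket_id);
 * after (public API):
 */
static struct rte_event_ring *
make_worker_ring_sketch(const char *name, unsigned int depth, int socket_id)
{
	return rte_event_ring_create(name, depth, socket_id,
			RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
}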
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/event/sw/event_ring.h | 185 ----------------------------------
drivers/event/sw/sw_evdev.c | 38 +++----
drivers/event/sw/sw_evdev.h | 4 +-
drivers/event/sw/sw_evdev_scheduler.c | 19 ++--
drivers/event/sw/sw_evdev_worker.c | 28 ++++-
drivers/event/sw/sw_evdev_xstats.c | 15 +--
6 files changed, 64 insertions(+), 225 deletions(-)
delete mode 100644 drivers/event/sw/event_ring.h
diff --git a/drivers/event/sw/event_ring.h b/drivers/event/sw/event_ring.h
deleted file mode 100644
index cdaee95..0000000
--- a/drivers/event/sw/event_ring.h
+++ /dev/null
@@ -1,185 +0,0 @@
-/*-
- * BSD LICENSE
- *
- * Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-/*
- * Generic ring structure for passing events from one core to another.
- *
- * Used by the software scheduler for the producer and consumer rings for
- * each port, i.e. for passing events from worker cores to scheduler and
- * vice-versa. Designed for single-producer, single-consumer use with two
- * cores working on each ring.
- */
-
-#ifndef _EVENT_RING_
-#define _EVENT_RING_
-
-#include <stdint.h>
-
-#include <rte_common.h>
-#include <rte_memory.h>
-#include <rte_malloc.h>
-
-#define QE_RING_NAMESIZE 32
-
-struct qe_ring {
- char name[QE_RING_NAMESIZE] __rte_cache_aligned;
- uint32_t ring_size; /* size of memory block allocated to the ring */
- uint32_t mask; /* mask for read/write values == ring_size -1 */
- uint32_t size; /* actual usable space in the ring */
- volatile uint32_t write_idx __rte_cache_aligned;
- volatile uint32_t read_idx __rte_cache_aligned;
-
- struct rte_event ring[0] __rte_cache_aligned;
-};
-
-#ifndef force_inline
-#define force_inline inline __attribute__((always_inline))
-#endif
-
-static inline struct qe_ring *
-qe_ring_create(const char *name, unsigned int size, unsigned int socket_id)
-{
- struct qe_ring *retval;
- const uint32_t ring_size = rte_align32pow2(size + 1);
- size_t memsize = sizeof(*retval) +
- (ring_size * sizeof(retval->ring[0]));
-
- retval = rte_zmalloc_socket(NULL, memsize, 0, socket_id);
- if (retval == NULL)
- goto end;
-
- snprintf(retval->name, sizeof(retval->name), "EVDEV_RG_%s", name);
- retval->ring_size = ring_size;
- retval->mask = ring_size - 1;
- retval->size = size;
-end:
- return retval;
-}
-
-static inline void
-qe_ring_destroy(struct qe_ring *r)
-{
- rte_free(r);
-}
-
-static force_inline unsigned int
-qe_ring_count(const struct qe_ring *r)
-{
- return r->write_idx - r->read_idx;
-}
-
-static force_inline unsigned int
-qe_ring_free_count(const struct qe_ring *r)
-{
- return r->size - qe_ring_count(r);
-}
-
-static force_inline unsigned int
-qe_ring_enqueue_burst(struct qe_ring *r, const struct rte_event *qes,
- unsigned int nb_qes, uint16_t *free_count)
-{
- const uint32_t size = r->size;
- const uint32_t mask = r->mask;
- const uint32_t read = r->read_idx;
- uint32_t write = r->write_idx;
- const uint32_t space = read + size - write;
- uint32_t i;
-
- if (space < nb_qes)
- nb_qes = space;
-
- for (i = 0; i < nb_qes; i++, write++)
- r->ring[write & mask] = qes[i];
-
- rte_smp_wmb();
-
- if (nb_qes != 0)
- r->write_idx = write;
-
- *free_count = space - nb_qes;
-
- return nb_qes;
-}
-
-static force_inline unsigned int
-qe_ring_enqueue_burst_with_ops(struct qe_ring *r, const struct rte_event *qes,
- unsigned int nb_qes, uint8_t *ops)
-{
- const uint32_t size = r->size;
- const uint32_t mask = r->mask;
- const uint32_t read = r->read_idx;
- uint32_t write = r->write_idx;
- const uint32_t space = read + size - write;
- uint32_t i;
-
- if (space < nb_qes)
- nb_qes = space;
-
- for (i = 0; i < nb_qes; i++, write++) {
- r->ring[write & mask] = qes[i];
- r->ring[write & mask].op = ops[i];
- }
-
- rte_smp_wmb();
-
- if (nb_qes != 0)
- r->write_idx = write;
-
- return nb_qes;
-}
-
-static force_inline unsigned int
-qe_ring_dequeue_burst(struct qe_ring *r, struct rte_event *qes,
- unsigned int nb_qes)
-{
- const uint32_t mask = r->mask;
- uint32_t read = r->read_idx;
- const uint32_t write = r->write_idx;
- const uint32_t items = write - read;
- uint32_t i;
-
- if (items < nb_qes)
- nb_qes = items;
-
-
- for (i = 0; i < nb_qes; i++, read++)
- qes[i] = r->ring[read & mask];
-
- rte_smp_rmb();
-
- if (nb_qes != 0)
- r->read_idx += nb_qes;
-
- return nb_qes;
-}
-
-#endif
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index a31aaa6..2e9d907 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -37,10 +37,10 @@
#include <rte_kvargs.h>
#include <rte_ring.h>
#include <rte_errno.h>
+#include <rte_event_ring.h>
#include "sw_evdev.h"
#include "iq_ring.h"
-#include "event_ring.h"
#define EVENTDEV_NAME_SW_PMD event_sw
#define NUMA_NODE_ARG "numa_node"
@@ -138,7 +138,7 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
{
struct sw_evdev *sw = sw_pmd_priv(dev);
struct sw_port *p = &sw->ports[port_id];
- char buf[QE_RING_NAMESIZE];
+ char buf[RTE_RING_NAMESIZE];
unsigned int i;
struct rte_event_dev_info info;
@@ -159,10 +159,11 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
p->id = port_id;
p->sw = sw;
- snprintf(buf, sizeof(buf), "sw%d_%s", dev->data->dev_id,
- "rx_worker_ring");
- p->rx_worker_ring = qe_ring_create(buf, MAX_SW_PROD_Q_DEPTH,
- dev->data->socket_id);
+ snprintf(buf, sizeof(buf), "sw%d_p%u_%s", dev->data->dev_id,
+ port_id, "rx_worker_ring");
+ p->rx_worker_ring = rte_event_ring_create(buf, MAX_SW_PROD_Q_DEPTH,
+ dev->data->socket_id,
+ RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
if (p->rx_worker_ring == NULL) {
SW_LOG_ERR("Error creating RX worker ring for port %d\n",
port_id);
@@ -171,12 +172,13 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
p->inflight_max = conf->new_event_threshold;
- snprintf(buf, sizeof(buf), "sw%d_%s", dev->data->dev_id,
- "cq_worker_ring");
- p->cq_worker_ring = qe_ring_create(buf, conf->dequeue_depth,
- dev->data->socket_id);
+ snprintf(buf, sizeof(buf), "sw%d_p%u, %s", dev->data->dev_id,
+ port_id, "cq_worker_ring");
+ p->cq_worker_ring = rte_event_ring_create(buf, conf->dequeue_depth,
+ dev->data->socket_id,
+ RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
if (p->cq_worker_ring == NULL) {
- qe_ring_destroy(p->rx_worker_ring);
+ rte_event_ring_free(p->rx_worker_ring);
SW_LOG_ERR("Error creating CQ worker ring for port %d\n",
port_id);
return -1;
@@ -202,8 +204,8 @@ sw_port_release(void *port)
if (p == NULL)
return;
- qe_ring_destroy(p->rx_worker_ring);
- qe_ring_destroy(p->cq_worker_ring);
+ rte_event_ring_free(p->rx_worker_ring);
+ rte_event_ring_free(p->cq_worker_ring);
memset(p, 0, sizeof(*p));
}
@@ -509,8 +511,9 @@ sw_dump(struct rte_eventdev *dev, FILE *f)
fprintf(f, "\n");
if (p->rx_worker_ring) {
- uint64_t used = qe_ring_count(p->rx_worker_ring);
- uint64_t space = qe_ring_free_count(p->rx_worker_ring);
+ uint64_t used = rte_event_ring_count(p->rx_worker_ring);
+ uint64_t space = rte_event_ring_free_count(
+ p->rx_worker_ring);
const char *col = (space == 0) ? COL_RED : COL_RESET;
fprintf(f, "\t%srx ring used: %4"PRIu64"\tfree: %4"
PRIu64 COL_RESET"\n", col, used, space);
@@ -518,8 +521,9 @@ sw_dump(struct rte_eventdev *dev, FILE *f)
fprintf(f, "\trx ring not initialized.\n");
if (p->cq_worker_ring) {
- uint64_t used = qe_ring_count(p->cq_worker_ring);
- uint64_t space = qe_ring_free_count(p->cq_worker_ring);
+ uint64_t used = rte_event_ring_count(p->cq_worker_ring);
+ uint64_t space = rte_event_ring_free_count(
+ p->cq_worker_ring);
const char *col = (space == 0) ? COL_RED : COL_RESET;
fprintf(f, "\t%scq ring used: %4"PRIu64"\tfree: %4"
PRIu64 COL_RESET"\n", col, used, space);
diff --git a/drivers/event/sw/sw_evdev.h b/drivers/event/sw/sw_evdev.h
index 61c671d..1695352 100644
--- a/drivers/event/sw/sw_evdev.h
+++ b/drivers/event/sw/sw_evdev.h
@@ -189,9 +189,9 @@ struct sw_port {
int16_t num_ordered_qids;
/** Ring and buffer for pulling events from workers for scheduling */
- struct qe_ring *rx_worker_ring __rte_cache_aligned;
+ struct rte_event_ring *rx_worker_ring __rte_cache_aligned;
/** Ring and buffer for pushing packets to workers after scheduling */
- struct qe_ring *cq_worker_ring;
+ struct rte_event_ring *cq_worker_ring;
/* hole */
diff --git a/drivers/event/sw/sw_evdev_scheduler.c b/drivers/event/sw/sw_evdev_scheduler.c
index a333a6f..0778c80 100644
--- a/drivers/event/sw/sw_evdev_scheduler.c
+++ b/drivers/event/sw/sw_evdev_scheduler.c
@@ -32,9 +32,9 @@
#include <rte_ring.h>
#include <rte_hash_crc.h>
+#include <rte_event_ring.h>
#include "sw_evdev.h"
#include "iq_ring.h"
-#include "event_ring.h"
#define SW_IQS_MASK (SW_IQS_MAX-1)
@@ -122,8 +122,8 @@ sw_schedule_atomic_to_cq(struct sw_evdev *sw, struct sw_qid * const qid,
/* if we just filled in the last slot, flush the buffer */
if (sw->cq_ring_space[cq] == 0) {
- struct qe_ring *worker = p->cq_worker_ring;
- qe_ring_enqueue_burst(worker, p->cq_buf,
+ struct rte_event_ring *worker = p->cq_worker_ring;
+ rte_event_ring_enqueue_burst(worker, p->cq_buf,
p->cq_buf_count,
&sw->cq_ring_space[cq]);
p->cq_buf_count = 0;
@@ -170,7 +170,8 @@ sw_schedule_parallel_to_cq(struct sw_evdev *sw, struct sw_qid * const qid,
cq = qid->cq_map[cq_idx];
if (++cq_idx == qid->cq_num_mapped_cqs)
cq_idx = 0;
- } while (qe_ring_free_count(sw->ports[cq].cq_worker_ring) == 0 ||
+ } while (rte_event_ring_free_count(
+ sw->ports[cq].cq_worker_ring) == 0 ||
sw->ports[cq].inflights == SW_PORT_HIST_LIST);
struct sw_port *p = &sw->ports[cq];
@@ -366,10 +367,10 @@ static inline void __attribute__((always_inline))
sw_refill_pp_buf(struct sw_evdev *sw, struct sw_port *port)
{
RTE_SET_USED(sw);
- struct qe_ring *worker = port->rx_worker_ring;
+ struct rte_event_ring *worker = port->rx_worker_ring;
port->pp_buf_start = 0;
- port->pp_buf_count = qe_ring_dequeue_burst(worker, port->pp_buf,
- RTE_DIM(port->pp_buf));
+ port->pp_buf_count = rte_event_ring_dequeue_burst(worker, port->pp_buf,
+ RTE_DIM(port->pp_buf), NULL);
}
static inline uint32_t __attribute__((always_inline))
@@ -585,8 +586,8 @@ sw_event_schedule(struct rte_eventdev *dev)
* worker cores: aka, do the ring transfers batched.
*/
for (i = 0; i < sw->port_count; i++) {
- struct qe_ring *worker = sw->ports[i].cq_worker_ring;
- qe_ring_enqueue_burst(worker, sw->ports[i].cq_buf,
+ struct rte_event_ring *worker = sw->ports[i].cq_worker_ring;
+ rte_event_ring_enqueue_burst(worker, sw->ports[i].cq_buf,
sw->ports[i].cq_buf_count,
&sw->cq_ring_space[i]);
sw->ports[i].cq_buf_count = 0;
diff --git a/drivers/event/sw/sw_evdev_worker.c b/drivers/event/sw/sw_evdev_worker.c
index 9cb6bef..c9431a3 100644
--- a/drivers/event/sw/sw_evdev_worker.c
+++ b/drivers/event/sw/sw_evdev_worker.c
@@ -32,9 +32,9 @@
#include <rte_atomic.h>
#include <rte_cycles.h>
+#include <rte_event_ring.h>
#include "sw_evdev.h"
-#include "event_ring.h"
#define PORT_ENQUEUE_MAX_BURST_SIZE 64
@@ -52,13 +52,31 @@ sw_event_release(struct sw_port *p, uint8_t index)
ev.op = sw_qe_flag_map[RTE_EVENT_OP_RELEASE];
uint16_t free_count;
- qe_ring_enqueue_burst(p->rx_worker_ring, &ev, 1, &free_count);
+ rte_event_ring_enqueue_burst(p->rx_worker_ring, &ev, 1, &free_count);
/* each release returns one credit */
p->outstanding_releases--;
p->inflight_credits++;
}
+/*
+ * special case of rte_event_ring enqueue, overriding the op field of each
+ * event that gets written to the ring.
+ */
+static inline unsigned int
+enqueue_burst_with_ops(struct rte_event_ring *r, const struct rte_event *events,
+ unsigned int n, uint8_t *ops)
+{
+ struct rte_event tmp_evs[PORT_ENQUEUE_MAX_BURST_SIZE];
+ unsigned int i;
+
+ memcpy(tmp_evs, events, n * sizeof(events[0]));
+ for (i = 0; i < n; i++)
+ tmp_evs[i].op = ops[i];
+
+ return rte_event_ring_enqueue_burst(r, tmp_evs, n, NULL);
+}
+
uint16_t
sw_event_enqueue_burst(void *port, const struct rte_event ev[], uint16_t num)
{
@@ -114,7 +132,7 @@ sw_event_enqueue_burst(void *port, const struct rte_event ev[], uint16_t num)
}
/* returns number of events actually enqueued */
- uint32_t enq = qe_ring_enqueue_burst_with_ops(p->rx_worker_ring, ev, i,
+ uint32_t enq = enqueue_burst_with_ops(p->rx_worker_ring, ev, i,
new_ops);
if (p->outstanding_releases == 0 && p->last_dequeue_burst_sz != 0) {
uint64_t burst_ticks = rte_get_timer_cycles() -
@@ -141,7 +159,7 @@ sw_event_dequeue_burst(void *port, struct rte_event *ev, uint16_t num,
RTE_SET_USED(wait);
struct sw_port *p = (void *)port;
struct sw_evdev *sw = (void *)p->sw;
- struct qe_ring *ring = p->cq_worker_ring;
+ struct rte_event_ring *ring = p->cq_worker_ring;
uint32_t credit_update_quanta = sw->credit_update_quanta;
/* check that all previous dequeues have been released */
@@ -153,7 +171,7 @@ sw_event_dequeue_burst(void *port, struct rte_event *ev, uint16_t num,
}
/* returns number of events actually dequeued */
- uint16_t ndeq = qe_ring_dequeue_burst(ring, ev, num);
+ uint16_t ndeq = rte_event_ring_dequeue_burst(ring, ev, num, NULL);
if (unlikely(ndeq == 0)) {
p->outstanding_releases = 0;
p->zero_polls++;
diff --git a/drivers/event/sw/sw_evdev_xstats.c b/drivers/event/sw/sw_evdev_xstats.c
index c7b1abe..4e13509 100644
--- a/drivers/event/sw/sw_evdev_xstats.c
+++ b/drivers/event/sw/sw_evdev_xstats.c
@@ -30,9 +30,9 @@
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
+#include <rte_event_ring.h>
#include "sw_evdev.h"
#include "iq_ring.h"
-#include "event_ring.h"
enum xstats_type {
/* common stats */
@@ -104,10 +104,10 @@ get_port_stat(const struct sw_evdev *sw, uint16_t obj_idx,
case calls: return p->total_polls;
case credits: return p->inflight_credits;
case poll_return: return p->zero_polls;
- case rx_used: return qe_ring_count(p->rx_worker_ring);
- case rx_free: return qe_ring_free_count(p->rx_worker_ring);
- case tx_used: return qe_ring_count(p->cq_worker_ring);
- case tx_free: return qe_ring_free_count(p->cq_worker_ring);
+ case rx_used: return rte_event_ring_count(p->rx_worker_ring);
+ case rx_free: return rte_event_ring_free_count(p->rx_worker_ring);
+ case tx_used: return rte_event_ring_count(p->cq_worker_ring);
+ case tx_free: return rte_event_ring_free_count(p->cq_worker_ring);
default: return -1;
}
}
@@ -312,8 +312,9 @@ sw_xstats_init(struct sw_evdev *sw)
port, port_stats[i]);
}
- for (bkt = 0; bkt < (sw->ports[port].cq_worker_ring->size >>
- SW_DEQ_STAT_BUCKET_SHIFT) + 1; bkt++) {
+ for (bkt = 0; bkt < (rte_event_ring_get_capacity(
+ sw->ports[port].cq_worker_ring) >>
+ SW_DEQ_STAT_BUCKET_SHIFT) + 1; bkt++) {
for (i = 0; i < RTE_DIM(port_bucket_stats); i++) {
sw->xstats[stat] = (struct sw_xstats_entry){
.fn = get_port_bucket_stat,
--
2.9.4
* [dpdk-dev] [PATCH v2 0/5] create event rings type
2017-06-07 13:36 [dpdk-dev] [PATCH 0/5] create event rings type Bruce Richardson
` (4 preceding siblings ...)
2017-06-07 13:36 ` [dpdk-dev] [PATCH 5/5] event/sw: change worker rings to standard event rings Bruce Richardson
@ 2017-06-30 15:06 ` Bruce Richardson
2017-06-30 15:06 ` [dpdk-dev] [PATCH v2 1/5] ring: allow rings with non power-of-2 sizes Bruce Richardson
` (4 more replies)
5 siblings, 5 replies; 28+ messages in thread
From: Bruce Richardson @ 2017-06-30 15:06 UTC (permalink / raw)
To: dev; +Cc: olivier.matz, jerin.jacob, Bruce Richardson
Following on from the refactoring of the rte_rings code done in the 17.05
release, it becomes relatively easy to create new ring implementations for
data types other than "void *". The first candidate for this is the
rte_event type which, being 16B in size, is small enough to be passed
around directly rather than via pointer.
The SW eventdev implementation already defines an event ring implementation
and this can be replaced by a more general implementation done in the
eventdev library itself. However, feature parity between the SW eventdev
implementation and a general rte_ring implementation is required, so
support for rings of a defined size is added to the rte_ring library first.
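For orientation, a minimal usage sketch of the resulting API (illustration
only, not part of the patches; error handling trimmed):

#include <string.h>
#include <rte_common.h>
#include <rte_eventdev.h>
#include <rte_event_ring.h>

/* create a ring holding exactly 100 events and pass events by value */
static void
event_ring_usage_sketch(void)
{
	struct rte_event evs[16];
	uint16_t space, avail;
	unsigned int n;

	struct rte_event_ring *r = rte_event_ring_create("demo", 100,
			SOCKET_ID_ANY,
			RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
	if (r == NULL)
		return;

	memset(evs, 0, sizeof(evs));
	/* events are copied into and out of the ring, not passed by pointer */
	n = rte_event_ring_enqueue_burst(r, evs, RTE_DIM(evs), &space);
	n = rte_event_ring_dequeue_burst(r, evs, n, &avail);
	(void)n;

	rte_event_ring_free(r);
}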
v2: minor adjustments made following review feedback.
Bruce Richardson (5):
ring: allow rings with non power-of-2 sizes
test/test: add unit tests for exact size rings
eventdev: add ring structure for events
test/test: add auto-tests for event ring functions
event/sw: change worker rings to standard event rings
drivers/event/sw/sw_evdev.c | 38 ++--
drivers/event/sw/sw_evdev.h | 4 +-
drivers/event/sw/sw_evdev_scheduler.c | 19 +-
drivers/event/sw/sw_evdev_worker.c | 28 ++-
drivers/event/sw/sw_evdev_xstats.c | 15 +-
lib/Makefile | 2 +-
lib/librte_eventdev/Makefile | 2 +
lib/librte_eventdev/rte_event_ring.c | 207 ++++++++++++++++++
lib/librte_eventdev/rte_event_ring.h | 308 +++++++++++++++++++++++++++
lib/librte_eventdev/rte_eventdev_version.map | 9 +
lib/librte_ring/rte_ring.c | 26 ++-
lib/librte_ring/rte_ring.h | 89 +++++---
test/test/Makefile | 1 +
test/test/test_event_ring.c | 275 ++++++++++++++++++++++++
test/test/test_ring.c | 74 +++++++
15 files changed, 1020 insertions(+), 77 deletions(-)
create mode 100644 lib/librte_eventdev/rte_event_ring.c
create mode 100644 lib/librte_eventdev/rte_event_ring.h
create mode 100644 test/test/test_event_ring.c
--
2.13.0
* [dpdk-dev] [PATCH v2 1/5] ring: allow rings with non power-of-2 sizes
2017-06-30 15:06 ` [dpdk-dev] [PATCH v2 0/5] create event rings type Bruce Richardson
@ 2017-06-30 15:06 ` Bruce Richardson
2017-07-03 8:46 ` Olivier Matz
2017-06-30 15:06 ` [dpdk-dev] [PATCH v2 2/5] test/test: add unit tests for exact size rings Bruce Richardson
` (3 subsequent siblings)
4 siblings, 1 reply; 28+ messages in thread
From: Bruce Richardson @ 2017-06-30 15:06 UTC (permalink / raw)
To: dev; +Cc: olivier.matz, jerin.jacob, Bruce Richardson
The rte_rings traditionally have only supported having ring sizes as powers
of 2, with the actual usable space being the size - 1. In some cases, for
example, with an eventdev where we want to precisely control queue depths
for latency, we need to allow ring sizes which are not powers of two so we
add in an additional ring capacity value to allow that. For existing rings,
this value will be size-1, i.e. the same as the mask, but if the new
EXACT_SZ flag is passed on ring creation, the ring will have exactly the
usable space requested, although the underlying memory size may be bigger.
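A quick illustration of the two modes (sketch only, not part of the patch):

#include <stdio.h>
#include <rte_memory.h>
#include <rte_ring.h>

/* compare a standard power-of-2 ring with an exact-size ring */
static void
exact_sz_sketch(void)
{
	struct rte_ring *std = rte_ring_create("std", 16, SOCKET_ID_ANY,
			RING_F_SP_ENQ | RING_F_SC_DEQ);
	struct rte_ring *exact = rte_ring_create("exact", 16, SOCKET_ID_ANY,
			RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);

	if (std == NULL || exact == NULL) {
		rte_ring_free(std);
		rte_ring_free(exact);
		return;
	}

	/*
	 * std:   size 16, capacity 15 (one slot lost to the full/empty check)
	 * exact: size 32 (rounded up internally), capacity exactly 16
	 */
	printf("std: size %u cap %u, exact: size %u cap %u\n",
			rte_ring_get_size(std), rte_ring_get_capacity(std),
			rte_ring_get_size(exact), rte_ring_get_capacity(exact));

	rte_ring_free(std);
	rte_ring_free(exact);
}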
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
V2: added in bounds check on rte_ring_count
---
lib/librte_ring/rte_ring.c | 26 ++++++++++++--
lib/librte_ring/rte_ring.h | 89 +++++++++++++++++++++++++++++-----------------
2 files changed, 79 insertions(+), 36 deletions(-)
diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c
index 5f98c33f9..b8047eef3 100644
--- a/lib/librte_ring/rte_ring.c
+++ b/lib/librte_ring/rte_ring.c
@@ -140,8 +140,22 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned count,
r->flags = flags;
r->prod.single = (flags & RING_F_SP_ENQ) ? __IS_SP : __IS_MP;
r->cons.single = (flags & RING_F_SC_DEQ) ? __IS_SC : __IS_MC;
- r->size = count;
- r->mask = count - 1;
+
+ if (flags & RING_F_EXACT_SZ) {
+ r->size = rte_align32pow2(count + 1);
+ r->mask = r->size - 1;
+ r->capacity = count;
+ } else {
+ if ((!POWEROF2(count)) || (count > RTE_RING_SZ_MASK)) {
+ RTE_LOG(ERR, RING,
+ "Requested size is invalid, must be power of 2, and not exceed the size limit %u\n",
+ RTE_RING_SZ_MASK);
+ return -EINVAL;
+ }
+ r->size = count;
+ r->mask = count - 1;
+ r->capacity = r->mask;
+ }
r->prod.head = r->cons.head = 0;
r->prod.tail = r->cons.tail = 0;
@@ -160,10 +174,15 @@ rte_ring_create(const char *name, unsigned count, int socket_id,
ssize_t ring_size;
int mz_flags = 0;
struct rte_ring_list* ring_list = NULL;
+ const unsigned int requested_count = count;
int ret;
ring_list = RTE_TAILQ_CAST(rte_ring_tailq.head, rte_ring_list);
+ /* for an exact size ring, round up from count to a power of two */
+ if (flags & RING_F_EXACT_SZ)
+ count = rte_align32pow2(count + 1);
+
ring_size = rte_ring_get_memsize(count);
if (ring_size < 0) {
rte_errno = ring_size;
@@ -194,7 +213,7 @@ rte_ring_create(const char *name, unsigned count, int socket_id,
r = mz->addr;
/* no need to check return value here, we already checked the
* arguments above */
- rte_ring_init(r, name, count, flags);
+ rte_ring_init(r, name, requested_count, flags);
te->data = (void *) r;
r->memzone = mz;
@@ -262,6 +281,7 @@ rte_ring_dump(FILE *f, const struct rte_ring *r)
fprintf(f, "ring <%s>@%p\n", r->name, r);
fprintf(f, " flags=%x\n", r->flags);
fprintf(f, " size=%"PRIu32"\n", r->size);
+ fprintf(f, " capacity=%"PRIu32"\n", r->capacity);
fprintf(f, " ct=%"PRIu32"\n", r->cons.tail);
fprintf(f, " ch=%"PRIu32"\n", r->cons.head);
fprintf(f, " pt=%"PRIu32"\n", r->prod.tail);
diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index 18684a53d..be186266f 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -153,6 +153,7 @@ struct rte_ring {
/**< Memzone, if any, containing the rte_ring */
uint32_t size; /**< Size of ring. */
uint32_t mask; /**< Mask (size-1) of ring. */
+ uint32_t capacity; /**< Usable size of ring */
/** Ring producer status. */
struct rte_ring_headtail prod __rte_aligned(PROD_ALIGN);
@@ -163,6 +164,15 @@ struct rte_ring {
#define RING_F_SP_ENQ 0x0001 /**< The default enqueue is "single-producer". */
#define RING_F_SC_DEQ 0x0002 /**< The default dequeue is "single-consumer". */
+/**
+ * Ring is to hold exactly requested number of entries.
+ * Without this flag set, the ring size requested must be a power of 2, and the
+ * usable space will be that size - 1. With the flag, the requested size will
+ * be rounded up to the next power of two, but the usable space will be exactly
+ * that requested. Worst case, if a power-of-2 size is requested, half the
+ * ring space will be wasted.
+ */
+#define RING_F_EXACT_SZ 0x0004
#define RTE_RING_SZ_MASK (unsigned)(0x0fffffff) /**< Ring size mask */
/* @internal defines for passing to the enqueue dequeue worker functions */
@@ -389,7 +399,7 @@ __rte_ring_move_prod_head(struct rte_ring *r, int is_sp,
uint32_t *old_head, uint32_t *new_head,
uint32_t *free_entries)
{
- const uint32_t mask = r->mask;
+ const uint32_t capacity = r->capacity;
unsigned int max = n;
int success;
@@ -399,11 +409,13 @@ __rte_ring_move_prod_head(struct rte_ring *r, int is_sp,
*old_head = r->prod.head;
const uint32_t cons_tail = r->cons.tail;
- /* The subtraction is done between two unsigned 32bits value
+ /*
+ * The subtraction is done between two unsigned 32bits value
* (the result is always modulo 32 bits even if we have
* *old_head > cons_tail). So 'free_entries' is always between 0
- * and size(ring)-1. */
- *free_entries = (mask + cons_tail - *old_head);
+ * and capacity (which is < size).
+ */
+ *free_entries = (capacity + cons_tail - *old_head);
/* check that we have enough room in ring */
if (unlikely(n > *free_entries))
@@ -845,69 +857,64 @@ rte_ring_dequeue(struct rte_ring *r, void **obj_p)
}
/**
- * Test if a ring is full.
+ * Return the number of entries in a ring.
*
* @param r
* A pointer to the ring structure.
* @return
- * - 1: The ring is full.
- * - 0: The ring is not full.
+ * The number of entries in the ring.
*/
-static inline int
-rte_ring_full(const struct rte_ring *r)
+static inline unsigned
+rte_ring_count(const struct rte_ring *r)
{
uint32_t prod_tail = r->prod.tail;
uint32_t cons_tail = r->cons.tail;
- return ((cons_tail - prod_tail - 1) & r->mask) == 0;
+ uint32_t count = (prod_tail - cons_tail) & r->mask;
+ return (count > r->capacity) ? r->capacity : count;
}
/**
- * Test if a ring is empty.
+ * Return the number of free entries in a ring.
*
* @param r
* A pointer to the ring structure.
* @return
- * - 1: The ring is empty.
- * - 0: The ring is not empty.
+ * The number of free entries in the ring.
*/
-static inline int
-rte_ring_empty(const struct rte_ring *r)
+static inline unsigned
+rte_ring_free_count(const struct rte_ring *r)
{
- uint32_t prod_tail = r->prod.tail;
- uint32_t cons_tail = r->cons.tail;
- return !!(cons_tail == prod_tail);
+ return r->capacity - rte_ring_count(r);
}
/**
- * Return the number of entries in a ring.
+ * Test if a ring is full.
*
* @param r
* A pointer to the ring structure.
* @return
- * The number of entries in the ring.
+ * - 1: The ring is full.
+ * - 0: The ring is not full.
*/
-static inline unsigned
-rte_ring_count(const struct rte_ring *r)
+static inline int
+rte_ring_full(const struct rte_ring *r)
{
- uint32_t prod_tail = r->prod.tail;
- uint32_t cons_tail = r->cons.tail;
- return (prod_tail - cons_tail) & r->mask;
+ return rte_ring_free_count(r) == 0;
}
/**
- * Return the number of free entries in a ring.
+ * Test if a ring is empty.
*
* @param r
* A pointer to the ring structure.
* @return
- * The number of free entries in the ring.
+ * - 1: The ring is empty.
+ * - 0: The ring is not empty.
*/
-static inline unsigned
-rte_ring_free_count(const struct rte_ring *r)
+static inline int
+rte_ring_empty(const struct rte_ring *r)
{
- uint32_t prod_tail = r->prod.tail;
- uint32_t cons_tail = r->cons.tail;
- return (cons_tail - prod_tail - 1) & r->mask;
+ return rte_ring_count(r) == 0;
}
/**
@@ -916,7 +923,9 @@ rte_ring_free_count(const struct rte_ring *r)
* @param r
* A pointer to the ring structure.
* @return
- * The number of elements which can be stored in the ring.
+ * The size of the data store used by the ring.
+ * NOTE: this is not the same as the usable space in the ring. To query that
+ * use ``rte_ring_get_capacity()``.
*/
static inline unsigned int
rte_ring_get_size(const struct rte_ring *r)
@@ -925,6 +934,20 @@ rte_ring_get_size(const struct rte_ring *r)
}
/**
+ * Return the number of elements which can be stored in the ring.
+ *
+ * @param r
+ * A pointer to the ring structure.
+ * @return
+ * The usable size of the ring.
+ */
+static inline unsigned int
+rte_ring_get_capacity(const struct rte_ring *r)
+{
+ return r->capacity;
+}
+
+/**
* Dump the status of all rings on the console
*
* @param f
--
2.13.0
* Re: [dpdk-dev] [PATCH v2 1/5] ring: allow rings with non power-of-2 sizes
2017-06-30 15:06 ` [dpdk-dev] [PATCH v2 1/5] ring: allow rings with non power-of-2 sizes Bruce Richardson
@ 2017-07-03 8:46 ` Olivier Matz
0 siblings, 0 replies; 28+ messages in thread
From: Olivier Matz @ 2017-07-03 8:46 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, jerin.jacob
On Fri, 30 Jun 2017 16:06:17 +0100, Bruce Richardson <bruce.richardson@intel.com> wrote:
> The rte_rings traditionally have only supported having ring sizes as powers
> of 2, with the actual usable space being the size - 1. In some cases, for
> example, with an eventdev where we want to precisely control queue depths
> for latency, we need to allow ring sizes which are not powers of two so we
> add in an additional ring capacity value to allow that. For existing rings,
> this value will be size-1, i.e. the same as the mask, but if the new
> EXACT_SZ flag is passed on ring creation, the ring will have exactly the
> usable space requested, although the underlying memory size may be bigger.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
* [dpdk-dev] [PATCH v2 2/5] test/test: add unit tests for exact size rings
2017-06-30 15:06 ` [dpdk-dev] [PATCH v2 0/5] create event rings type Bruce Richardson
2017-06-30 15:06 ` [dpdk-dev] [PATCH v2 1/5] ring: allow rings with non power-of-2 sizes Bruce Richardson
@ 2017-06-30 15:06 ` Bruce Richardson
2017-07-03 8:47 ` Olivier Matz
2017-06-30 15:06 ` [dpdk-dev] [PATCH v2 3/5] eventdev: add ring structure for events Bruce Richardson
` (2 subsequent siblings)
4 siblings, 1 reply; 28+ messages in thread
From: Bruce Richardson @ 2017-06-30 15:06 UTC (permalink / raw)
To: dev; +Cc: olivier.matz, jerin.jacob, Bruce Richardson
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
V2: tear down allocated rings on error
---
test/test/test_ring.c | 74 +++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 74 insertions(+)
diff --git a/test/test/test_ring.c b/test/test/test_ring.c
index 858ebc1da..d68654eb8 100644
--- a/test/test/test_ring.c
+++ b/test/test/test_ring.c
@@ -770,6 +770,77 @@ test_ring_basic_ex(void)
}
static int
+test_ring_with_exact_size(void)
+{
+ struct rte_ring *std_ring = NULL, *exact_sz_ring = NULL;
+ void *ptr_array[16];
+ static const unsigned int ring_sz = RTE_DIM(ptr_array);
+ unsigned int i;
+ int ret = -1;
+
+ std_ring = rte_ring_create("std", ring_sz, rte_socket_id(),
+ RING_F_SP_ENQ | RING_F_SC_DEQ);
+ if (std_ring == NULL) {
+ printf("%s: error, can't create std ring\n", __func__);
+ goto end;
+ }
+ exact_sz_ring = rte_ring_create("exact sz", ring_sz, rte_socket_id(),
+ RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
+ if (exact_sz_ring == NULL) {
+ printf("%s: error, can't create exact size ring\n", __func__);
+ goto end;
+ }
+
+ /*
+ * Check that the exact size ring is bigger than the standard ring
+ */
+ if (rte_ring_get_size(std_ring) >= rte_ring_get_size(exact_sz_ring)) {
+ printf("%s: error, std ring (size: %u) is not smaller than exact size one (size %u)\n",
+ __func__,
+ rte_ring_get_size(std_ring),
+ rte_ring_get_size(exact_sz_ring));
+ goto end;
+ }
+ /*
+ * check that the exact_sz_ring can hold one more element than the
+ * standard ring. (16 vs 15 elements)
+ */
+ for (i = 0; i < ring_sz - 1; i++) {
+ rte_ring_enqueue(std_ring, NULL);
+ rte_ring_enqueue(exact_sz_ring, NULL);
+ }
+ if (rte_ring_enqueue(std_ring, NULL) != -ENOBUFS) {
+ printf("%s: error, unexpected successful enqueue\n", __func__);
+ goto end;
+ }
+ if (rte_ring_enqueue(exact_sz_ring, NULL) == -ENOBUFS) {
+ printf("%s: error, enqueue failed\n", __func__);
+ goto end;
+ }
+
+ /* check that dequeue returns the expected number of elements */
+ if (rte_ring_dequeue_burst(exact_sz_ring, ptr_array,
+ RTE_DIM(ptr_array), NULL) != ring_sz) {
+ printf("%s: error, failed to dequeue expected nb of elements\n",
+ __func__);
+ goto end;
+ }
+
+ /* check that the capacity function returns expected value */
+ if (rte_ring_get_capacity(exact_sz_ring) != ring_sz) {
+ printf("%s: error, incorrect ring capacity reported\n",
+ __func__);
+ goto end;
+ }
+
+ ret = 0; /* all ok if we get here */
+end:
+ rte_ring_free(std_ring);
+ rte_ring_free(exact_sz_ring);
+ return ret;
+}
+
+static int
test_ring(void)
{
/* some more basic operations */
@@ -820,6 +891,9 @@ test_ring(void)
if (test_ring_creation_with_an_used_name() < 0)
return -1;
+ if (test_ring_with_exact_size() < 0)
+ return -1;
+
/* dump the ring status */
rte_ring_list_dump(stdout);
--
2.13.0
* [dpdk-dev] [PATCH v2 3/5] eventdev: add ring structure for events
2017-06-30 15:06 ` [dpdk-dev] [PATCH v2 0/5] create event rings type Bruce Richardson
2017-06-30 15:06 ` [dpdk-dev] [PATCH v2 1/5] ring: allow rings with non power-of-2 sizes Bruce Richardson
2017-06-30 15:06 ` [dpdk-dev] [PATCH v2 2/5] test/test: add unit tests for exact size rings Bruce Richardson
@ 2017-06-30 15:06 ` Bruce Richardson
2017-07-03 9:52 ` Van Haaren, Harry
2017-06-30 15:06 ` [dpdk-dev] [PATCH v2 4/5] test/test: add auto-tests for event ring functions Bruce Richardson
2017-06-30 15:06 ` [dpdk-dev] [PATCH v2 5/5] event/sw: change worker rings to standard event rings Bruce Richardson
4 siblings, 1 reply; 28+ messages in thread
From: Bruce Richardson @ 2017-06-30 15:06 UTC (permalink / raw)
To: dev; +Cc: olivier.matz, jerin.jacob, Bruce Richardson
Add in a new rte_event_ring structure type and functions to allow events to
be passed core to core. This is needed because the standard rte_ring type
only works on pointers, while for events we want to copy the entire 16B
events themselves - not just pointers to them. The code makes extensive use
of the functions already defined in rte_ring.h.
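For orientation, a minimal usage sketch of the API added below (a sketch only:
the ring name, sizes and flags here are arbitrary, and EAL initialisation plus
error handling are omitted):

    #include <string.h>
    #include <rte_lcore.h>
    #include <rte_event_ring.h>

    /* sketch: pass whole events from a producer core to a consumer core */
    static void
    event_ring_usage_sketch(void)
    {
    	struct rte_event_ring *r;
    	struct rte_event evs[32];
    	uint16_t space, avail;
    	unsigned int n;

    	memset(evs, 0, sizeof(evs)); /* events would be filled in by the app */

    	r = rte_event_ring_create("core2core", 4096, rte_socket_id(),
    			RING_F_SP_ENQ | RING_F_SC_DEQ);
    	if (r == NULL)
    		return;

    	/* producer side: the 16B events themselves are copied into the ring */
    	n = rte_event_ring_enqueue_burst(r, evs, RTE_DIM(evs), &space);

    	/* consumer side: events are copied back out into a local array */
    	n = rte_event_ring_dequeue_burst(r, evs, RTE_DIM(evs), &avail);
    	RTE_SET_USED(n);

    	rte_event_ring_free(r);
    }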
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
V2: use __rte_always_inline instead of local macro
---
lib/Makefile | 2 +-
lib/librte_eventdev/Makefile | 2 +
lib/librte_eventdev/rte_event_ring.c | 207 ++++++++++++++++++
lib/librte_eventdev/rte_event_ring.h | 308 +++++++++++++++++++++++++++
lib/librte_eventdev/rte_eventdev_version.map | 9 +
5 files changed, 527 insertions(+), 1 deletion(-)
create mode 100644 lib/librte_eventdev/rte_event_ring.c
create mode 100644 lib/librte_eventdev/rte_event_ring.h
diff --git a/lib/Makefile b/lib/Makefile
index 07e1fd0c5..aef584e14 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -52,7 +52,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
DEPDIRS-librte_cryptodev := librte_eal librte_mempool librte_ring librte_mbuf
DEPDIRS-librte_cryptodev += librte_kvargs
DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
-DEPDIRS-librte_eventdev := librte_eal
+DEPDIRS-librte_eventdev := librte_eal librte_ring
DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
DEPDIRS-librte_vhost := librte_eal librte_mempool librte_mbuf librte_ether
DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
diff --git a/lib/librte_eventdev/Makefile b/lib/librte_eventdev/Makefile
index 629069ad6..0ed777d61 100644
--- a/lib/librte_eventdev/Makefile
+++ b/lib/librte_eventdev/Makefile
@@ -42,12 +42,14 @@ CFLAGS += $(WERROR_FLAGS)
# library source files
SRCS-y += rte_eventdev.c
+SRCS-y += rte_event_ring.c
# export include files
SYMLINK-y-include += rte_eventdev.h
SYMLINK-y-include += rte_eventdev_pmd.h
SYMLINK-y-include += rte_eventdev_pmd_pci.h
SYMLINK-y-include += rte_eventdev_pmd_vdev.h
+SYMLINK-y-include += rte_event_ring.h
# versioning export map
EXPORT_MAP := rte_eventdev_version.map
diff --git a/lib/librte_eventdev/rte_event_ring.c b/lib/librte_eventdev/rte_event_ring.c
new file mode 100644
index 000000000..b14c2127f
--- /dev/null
+++ b/lib/librte_eventdev/rte_event_ring.c
@@ -0,0 +1,207 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/queue.h>
+#include <string.h>
+
+#include <rte_tailq.h>
+#include <rte_memzone.h>
+#include <rte_rwlock.h>
+#include <rte_eal_memconfig.h>
+#include "rte_event_ring.h"
+
+TAILQ_HEAD(rte_event_ring_list, rte_tailq_entry);
+
+static struct rte_tailq_elem rte_event_ring_tailq = {
+ .name = RTE_TAILQ_EVENT_RING_NAME,
+};
+EAL_REGISTER_TAILQ(rte_event_ring_tailq)
+
+int
+rte_event_ring_init(struct rte_event_ring *r, const char *name,
+ unsigned int count, unsigned int flags)
+{
+ /* compilation-time checks */
+ RTE_BUILD_BUG_ON((sizeof(struct rte_event_ring) &
+ RTE_CACHE_LINE_MASK) != 0);
+
+ /* init the ring structure */
+ return rte_ring_init(&r->r, name, count, flags);
+}
+
+/* create the ring */
+struct rte_event_ring *
+rte_event_ring_create(const char *name, unsigned int count, int socket_id,
+ unsigned int flags)
+{
+ char mz_name[RTE_MEMZONE_NAMESIZE];
+ struct rte_event_ring *r;
+ struct rte_tailq_entry *te;
+ const struct rte_memzone *mz;
+ ssize_t ring_size;
+ int mz_flags = 0;
+ struct rte_event_ring_list *ring_list = NULL;
+ const unsigned int requested_count = count;
+ int ret;
+
+ ring_list = RTE_TAILQ_CAST(rte_event_ring_tailq.head,
+ rte_event_ring_list);
+
+ /* for an exact size ring, round up from count to a power of two */
+ if (flags & RING_F_EXACT_SZ)
+ count = rte_align32pow2(count + 1);
+ else if (!rte_is_power_of_2(count)) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ ring_size = sizeof(*r) + (count * sizeof(struct rte_event));
+
+ ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
+ RTE_RING_MZ_PREFIX, name);
+ if (ret < 0 || ret >= (int)sizeof(mz_name)) {
+ rte_errno = ENAMETOOLONG;
+ return NULL;
+ }
+
+ te = rte_zmalloc("RING_TAILQ_ENTRY", sizeof(*te), 0);
+ if (te == NULL) {
+ RTE_LOG(ERR, RING, "Cannot reserve memory for tailq\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ /*
+ * reserve a memory zone for this ring. If we can't get rte_config or
+ * we are secondary process, the memzone_reserve function will set
+ * rte_errno for us appropriately - hence no check in this function
+ */
+ mz = rte_memzone_reserve(mz_name, ring_size, socket_id, mz_flags);
+ if (mz != NULL) {
+ r = mz->addr;
+ /*
+ * no need to check return value here, we already checked the
+ * arguments above
+ */
+ rte_event_ring_init(r, name, requested_count, flags);
+
+ te->data = (void *) r;
+ r->r.memzone = mz;
+
+ TAILQ_INSERT_TAIL(ring_list, te, next);
+ } else {
+ r = NULL;
+ RTE_LOG(ERR, RING, "Cannot reserve memory\n");
+ rte_free(te);
+ }
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ return r;
+}
+
+
+struct rte_event_ring *
+rte_event_ring_lookup(const char *name)
+{
+ struct rte_tailq_entry *te;
+ struct rte_event_ring *r = NULL;
+ struct rte_event_ring_list *ring_list;
+
+ ring_list = RTE_TAILQ_CAST(rte_event_ring_tailq.head,
+ rte_event_ring_list);
+
+ rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ TAILQ_FOREACH(te, ring_list, next) {
+ r = (struct rte_event_ring *) te->data;
+ if (strncmp(name, r->r.name, RTE_RING_NAMESIZE) == 0)
+ break;
+ }
+
+ rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ if (te == NULL) {
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ return r;
+}
+
+/* free the ring */
+void
+rte_event_ring_free(struct rte_event_ring *r)
+{
+ struct rte_event_ring_list *ring_list = NULL;
+ struct rte_tailq_entry *te;
+
+ if (r == NULL)
+ return;
+
+ /*
+ * Ring was not created with rte_event_ring_create,
+ * therefore, there is no memzone to free.
+ */
+ if (r->r.memzone == NULL) {
+ RTE_LOG(ERR, RING,
+ "Cannot free ring (not created with rte_event_ring_create()");
+ return;
+ }
+
+ if (rte_memzone_free(r->r.memzone) != 0) {
+ RTE_LOG(ERR, RING, "Cannot free memory\n");
+ return;
+ }
+
+ ring_list = RTE_TAILQ_CAST(rte_event_ring_tailq.head,
+ rte_event_ring_list);
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ /* find out tailq entry */
+ TAILQ_FOREACH(te, ring_list, next) {
+ if (te->data == (void *) r)
+ break;
+ }
+
+ if (te == NULL) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ return;
+ }
+
+ TAILQ_REMOVE(ring_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_free(te);
+}
diff --git a/lib/librte_eventdev/rte_event_ring.h b/lib/librte_eventdev/rte_event_ring.h
new file mode 100644
index 000000000..ea9b68854
--- /dev/null
+++ b/lib/librte_eventdev/rte_event_ring.h
@@ -0,0 +1,308 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/**
+ * @file
+ * RTE Event Ring
+ *
+ * This provides a ring implementation for passing rte_event structures
+ * from one core to another.
+ */
+
+#ifndef _RTE_EVENT_RING_
+#define _RTE_EVENT_RING_
+
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_memory.h>
+#include <rte_malloc.h>
+#include <rte_ring.h>
+#include "rte_eventdev.h"
+
+#define RTE_TAILQ_EVENT_RING_NAME "RTE_EVENT_RING"
+
+/**
+ * Generic ring structure for passing rte_event objects from core to core.
+ *
+ * Based on the primitives given in the rte_ring library. Designed to be
+ * used inside software eventdev implementations and by applications
+ * directly as needed.
+ */
+struct rte_event_ring {
+ struct rte_ring r;
+};
+
+/**
+ * Returns the number of events in the ring
+ *
+ * @param r
+ * pointer to the event ring
+ * @return
+ * the number of events in the ring
+ */
+static __rte_always_inline unsigned int
+rte_event_ring_count(const struct rte_event_ring *r)
+{
+ return rte_ring_count(&r->r);
+}
+
+/**
+ * Returns the amount of free space in the ring
+ *
+ * @param r
+ * pointer to the event ring
+ * @return
+ * the number of free slots in the ring, i.e. the number of events that
+ * can be successfully enqueued before dequeue must be called
+ */
+static __rte_always_inline unsigned int
+rte_event_ring_free_count(const struct rte_event_ring *r)
+{
+ return rte_ring_free_count(&r->r);
+}
+
+/**
+ * Enqueue a set of events onto a ring
+ *
+ * Note: this API enqueues by copying the events themselves onto the ring,
+ * rather than just placing a pointer to each event onto the ring. This
+ * means that statically-allocated events can safely be enqueued by this
+ * API.
+ *
+ * @param r
+ * pointer to the event ring
+ * @param events
+ * pointer to an array of struct rte_event objects
+ * @param n
+ * number of events in the array to enqueue
+ * @param free_space
+ * if non-null, is updated to indicate the amount of free space in the
+ * ring once the enqueue has completed.
+ * @return
+ * the number of elements, n', enqueued to the ring, 0 <= n' <= n
+ */
+static __rte_always_inline unsigned int
+rte_event_ring_enqueue_burst(struct rte_event_ring *r,
+ const struct rte_event *events,
+ unsigned int n, uint16_t *free_space)
+{
+ uint32_t prod_head, prod_next;
+ uint32_t free_entries;
+
+ n = __rte_ring_move_prod_head(&r->r, r->r.prod.single, n,
+ RTE_RING_QUEUE_VARIABLE,
+ &prod_head, &prod_next, &free_entries);
+ if (n == 0)
+ goto end;
+
+ ENQUEUE_PTRS(&r->r, &r[1], prod_head, events, n, struct rte_event);
+ rte_smp_wmb();
+
+ update_tail(&r->r.prod, prod_head, prod_next, 1);
+end:
+ if (free_space != NULL)
+ *free_space = free_entries - n;
+ return n;
+}
+
+/**
+ * Dequeue a set of events from a ring
+ *
+ * Note: this API does not work with pointers to events, rather it copies
+ * the events themselves to the destination ``events`` buffer.
+ *
+ * @param r
+ * pointer to the event ring
+ * @param events
+ * pointer to an array to hold the struct rte_event objects
+ * @param n
+ * number of events that can be held in the ``events`` array
+ * @param available
+ * if non-null, is updated to indicate the number of events remaining in
+ * the ring once the dequeue has completed
+ * @return
+ * the number of elements, n', dequeued from the ring, 0 <= n' <= n
+ */
+static __rte_always_inline unsigned int
+rte_event_ring_dequeue_burst(struct rte_event_ring *r,
+ struct rte_event *events,
+ unsigned int n, uint16_t *available)
+{
+ uint32_t cons_head, cons_next;
+ uint32_t entries;
+
+ n = __rte_ring_move_cons_head(&r->r, r->r.cons.single, n,
+ RTE_RING_QUEUE_VARIABLE,
+ &cons_head, &cons_next, &entries);
+ if (n == 0)
+ goto end;
+
+ DEQUEUE_PTRS(&r->r, &r[1], cons_head, events, n, struct rte_event);
+ rte_smp_rmb();
+
+ update_tail(&r->r.cons, cons_head, cons_next, 1);
+
+end:
+ if (available != NULL)
+ *available = entries - n;
+ return n;
+}
+
+/**
+ * Initializes an already-allocated ring structure
+ *
+ * @param r
+ * pointer to the ring memory to be initialized
+ * @param name
+ * name to be given to the ring
+ * @param count
+ * the number of elements to be stored in the ring. If the flag
+ * ``RING_F_EXACT_SZ`` is not set, this must be a power of 2, and the actual
+ * usable space in the ring will be ``count - 1`` entries. If the flag
+ * ``RING_F_EXACT_SZ`` is set, then this can be any value up to the ring size
+ * limit - 1, and the usable space will be exactly that requested.
+ * @param flags
+ * An OR of the following:
+ * - RING_F_SP_ENQ: If this flag is set, the default behavior when
+ * using ``rte_ring_enqueue()`` or ``rte_ring_enqueue_bulk()``
+ * is "single-producer". Otherwise, it is "multi-producers".
+ * - RING_F_SC_DEQ: If this flag is set, the default behavior when
+ * using ``rte_ring_dequeue()`` or ``rte_ring_dequeue_bulk()``
+ * is "single-consumer". Otherwise, it is "multi-consumers".
+ * - RING_F_EXACT_SZ: If this flag is set, the ``count`` parameter is to
+ * be taken as the exact usable size of the ring, and as such does not
+ * need to be a power of 2. The underlying ring memory should be a
+ * power-of-2 size greater than the count value.
+ * @return
+ * 0 on success, or a negative value on error.
+ */
+int
+rte_event_ring_init(struct rte_event_ring *r, const char *name,
+ unsigned int count, unsigned int flags);
+
+/**
+ * Create an event ring structure
+ *
+ * This function allocates memory and initializes an event ring inside that
+ * memory.
+ *
+ * @param name
+ * name to be given to the ring
+ * @param count
+ * the number of elements to be stored in the ring. If the flag
+ * ``RING_F_EXACT_SZ`` is not set, this must be a power of 2, and the actual
+ * usable space in the ring will be ``count - 1`` entries. If the flag
+ * ``RING_F_EXACT_SZ`` is set, then this can be any value up to the ring size
+ * limit - 1, and the usable space will be exactly that requested.
+ * @param socket_id
+ * The *socket_id* argument is the socket identifier in case of
+ * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ * constraint for the reserved zone.
+ * @param flags
+ * An OR of the following:
+ * - RING_F_SP_ENQ: If this flag is set, the default behavior when
+ * using ``rte_ring_enqueue()`` or ``rte_ring_enqueue_bulk()``
+ * is "single-producer". Otherwise, it is "multi-producers".
+ * - RING_F_SC_DEQ: If this flag is set, the default behavior when
+ * using ``rte_ring_dequeue()`` or ``rte_ring_dequeue_bulk()``
+ * is "single-consumer". Otherwise, it is "multi-consumers".
+ * - RING_F_EXACT_SZ: If this flag is set, the ``count`` parameter is to
+ * be taken as the exact usable size of the ring, and as such does not
+ * need to be a power of 2. The underlying ring memory should be a
+ * power-of-2 size greater than the count value.
+ * @return
+ * On success, the pointer to the new allocated ring. NULL on error with
+ * rte_errno set appropriately. Possible errno values include:
+ * - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ * - E_RTE_SECONDARY - function was called from a secondary process instance
+ * - EINVAL - count provided is not a power of 2
+ * - ENOSPC - the maximum number of memzones has already been allocated
+ * - EEXIST - a memzone with the same name already exists
+ * - ENOMEM - no appropriate memory area found in which to create memzone
+ */
+struct rte_event_ring *
+rte_event_ring_create(const char *name, unsigned int count, int socket_id,
+ unsigned int flags);
+
+/**
+ * Search for an event ring based on its name
+ *
+ * @param name
+ * The name of the ring.
+ * @return
+ * The pointer to the ring matching the name, or NULL if not found,
+ * with rte_errno set appropriately. Possible rte_errno values include:
+ * - ENOENT - required entry not available to return.
+ */
+struct rte_event_ring *
+rte_event_ring_lookup(const char *name);
+
+/**
+ * De-allocate all memory used by the ring.
+ *
+ * @param r
+ * Ring to free
+ */
+void
+rte_event_ring_free(struct rte_event_ring *r);
+
+/**
+ * Return the size of the event ring.
+ *
+ * @param r
+ * A pointer to the ring structure.
+ * @return
+ * The size of the data store used by the ring.
+ * NOTE: this is not the same as the usable space in the ring. To query that
+ * use ``rte_event_ring_get_capacity()``.
+ */
+static inline unsigned int
+rte_event_ring_get_size(const struct rte_event_ring *r)
+{
+ return rte_ring_get_size(&r->r);
+}
+
+/**
+ * Return the number of elements which can be stored in the event ring.
+ *
+ * @param r
+ * A pointer to the ring structure.
+ * @return
+ * The usable size of the ring.
+ */
+static inline unsigned int
+rte_event_ring_get_capacity(const struct rte_event_ring *r)
+{
+ return rte_ring_get_capacity(&r->r);
+}
+#endif
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index 1fa6b333f..4c48e5f0a 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -42,3 +42,12 @@ DPDK_17.05 {
local: *;
};
+
+DPDK_17.08 {
+ global:
+
+ rte_event_ring_create;
+ rte_event_ring_free;
+ rte_event_ring_init;
+ rte_event_ring_lookup;
+} DPDK_17.05;
--
2.13.0
* Re: [dpdk-dev] [PATCH v2 3/5] eventdev: add ring structure for events
2017-06-30 15:06 ` [dpdk-dev] [PATCH v2 3/5] eventdev: add ring structure for events Bruce Richardson
@ 2017-07-03 9:52 ` Van Haaren, Harry
0 siblings, 0 replies; 28+ messages in thread
From: Van Haaren, Harry @ 2017-07-03 9:52 UTC (permalink / raw)
To: Richardson, Bruce, dev; +Cc: olivier.matz, jerin.jacob, Richardson, Bruce
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Bruce Richardson
> Sent: Friday, June 30, 2017 4:06 PM
> To: dev@dpdk.org
> Cc: olivier.matz@6wind.com; jerin.jacob@caviumnetworks.com; Richardson, Bruce
> <bruce.richardson@intel.com>
> Subject: [dpdk-dev] [PATCH v2 3/5] eventdev: add ring structure for events
>
> Add in a new rte_event_ring structure type and functions to allow events to
> be passed core to core. This is needed because the standard rte_ring type
> only works on pointers, while for events, we want to copy the entire, 16B
> events themselves - not just pointers to them. The code makes extensive use
> of the functions already defined in rte_ring.h
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
* [dpdk-dev] [PATCH v2 4/5] test/test: add auto-tests for event ring functions
2017-06-30 15:06 ` [dpdk-dev] [PATCH v2 0/5] create event rings type Bruce Richardson
` (2 preceding siblings ...)
2017-06-30 15:06 ` [dpdk-dev] [PATCH v2 3/5] eventdev: add ring structure for events Bruce Richardson
@ 2017-06-30 15:06 ` Bruce Richardson
2017-07-03 12:30 ` Van Haaren, Harry
2017-06-30 15:06 ` [dpdk-dev] [PATCH v2 5/5] event/sw: change worker rings to standard event rings Bruce Richardson
4 siblings, 1 reply; 28+ messages in thread
From: Bruce Richardson @ 2017-06-30 15:06 UTC (permalink / raw)
To: dev; +Cc: olivier.matz, jerin.jacob, Bruce Richardson
Add some basic tests for the event ring functions. Not everything needs
testing as there is so much code re-use from the rte_ring code.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
V2: change name of static ring to avoid conflict with ring_autotest
---
test/test/Makefile | 1 +
test/test/test_event_ring.c | 276 ++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 277 insertions(+)
create mode 100644 test/test/test_event_ring.c
diff --git a/test/test/Makefile b/test/test/Makefile
index ee240be46..e797c2094 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -201,6 +201,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev.c
ifeq ($(CONFIG_RTE_LIBRTE_EVENTDEV),y)
SRCS-y += test_eventdev.c
+SRCS-y += test_event_ring.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_SW_EVENTDEV) += test_eventdev_sw.c
SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF) += test_eventdev_octeontx.c
endif
diff --git a/test/test/test_event_ring.c b/test/test/test_event_ring.c
new file mode 100644
index 000000000..c158c9b09
--- /dev/null
+++ b/test/test/test_event_ring.c
@@ -0,0 +1,276 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <string.h>
+
+#include <rte_event_ring.h>
+
+#include "test.h"
+
+/*
+ * Event Ring
+ * ===========
+ *
+ * Test some basic ops for the event rings.
+ * Does not fully test everything, since most code is reused from rte_ring
+ * library and tested as part of the normal ring autotests.
+ */
+
+#define RING_SIZE 4096
+#define MAX_BULK 32
+
+static struct rte_event_ring *r;
+
+/*
+ * ensure failure to create ring with a bad ring size
+ */
+static int
+test_event_ring_creation_with_wrong_size(void)
+{
+ struct rte_event_ring *rp = NULL;
+
+ /* Test if ring size is not power of 2 */
+ rp = rte_event_ring_create("test_bad_ring_size", RING_SIZE + 1,
+ SOCKET_ID_ANY, 0);
+ if (rp != NULL)
+ return -1;
+
+ /* Test if ring size is exceeding the limit */
+ rp = rte_event_ring_create("test_bad_ring_size", (RTE_RING_SZ_MASK + 1),
+ SOCKET_ID_ANY, 0);
+ if (rp != NULL)
+ return -1;
+ return 0;
+}
+
+/*
+ * Test to check if a non-power-of-2 count causes the create
+ * function to fail correctly
+ */
+static int
+test_create_count_odd(void)
+{
+ struct rte_event_ring *r = rte_event_ring_create("test_event_ring_count",
+ 4097, SOCKET_ID_ANY, 0);
+ if (r != NULL)
+ return -1;
+ return 0;
+}
+
+static int
+test_lookup_null(void)
+{
+ struct rte_event_ring *rlp = rte_event_ring_lookup("ring_not_found");
+ if (rlp == NULL && rte_errno != ENOENT) {
+ printf("test failed to return error on null pointer\n");
+ return -1;
+ }
+ return 0;
+}
+
+static int
+test_basic_event_enqueue_dequeue(void)
+{
+ struct rte_event_ring *sr = NULL;
+ struct rte_event evs[16];
+ uint16_t ret, free_count, used_count;
+
+ memset(evs, 0, sizeof(evs));
+ sr = rte_event_ring_create("spsc_ring", 32, rte_socket_id(),
+ RING_F_SP_ENQ | RING_F_SC_DEQ);
+ if (sr == NULL) {
+ printf("Failed to create sp/sc ring\n");
+ return -1;
+ }
+ if (rte_event_ring_get_capacity(sr) != 31) {
+ printf("Error, invalid capacity\n");
+ goto error;
+ }
+
+ /* test sp/sc ring */
+ if (rte_event_ring_count(sr) != 0) {
+ printf("Error, ring not empty as expected\n");
+ goto error;
+ }
+ if (rte_event_ring_free_count(sr) != rte_event_ring_get_capacity(sr)) {
+ printf("Error, ring free count not as expected\n");
+ goto error;
+ }
+
+ ret = rte_event_ring_enqueue_burst(sr, evs, RTE_DIM(evs), &free_count);
+ if (ret != RTE_DIM(evs) ||
+ free_count != rte_event_ring_get_capacity(sr) - ret) {
+ printf("Error, status after enqueue is unexpected\n");
+ goto error;
+ }
+
+ ret = rte_event_ring_enqueue_burst(sr, evs, RTE_DIM(evs), &free_count);
+ if (ret != RTE_DIM(evs) - 1 ||
+ free_count != 0) {
+ printf("Error, status after enqueue is unexpected\n");
+ goto error;
+ }
+
+ ret = rte_event_ring_dequeue_burst(sr, evs, RTE_DIM(evs), &used_count);
+ if (ret != RTE_DIM(evs) ||
+ used_count != rte_event_ring_get_capacity(sr) - ret) {
+ printf("Error, status after enqueue is unexpected\n");
+ goto error;
+ }
+ ret = rte_event_ring_dequeue_burst(sr, evs, RTE_DIM(evs), &used_count);
+ if (ret != RTE_DIM(evs) - 1 ||
+ used_count != 0) {
+ printf("Error, status after enqueue is unexpected\n");
+ goto error;
+ }
+
+ rte_event_ring_free(sr);
+ return 0;
+error:
+ rte_event_ring_free(sr);
+ return -1;
+}
+
+static int
+test_event_ring_with_exact_size(void)
+{
+ struct rte_event_ring *std_ring, *exact_sz_ring;
+ struct rte_event ev = { .mbuf = NULL };
+ struct rte_event ev_array[16];
+ static const unsigned int ring_sz = RTE_DIM(ev_array);
+ unsigned int i;
+
+ std_ring = rte_event_ring_create("std", ring_sz, rte_socket_id(),
+ RING_F_SP_ENQ | RING_F_SC_DEQ);
+ if (std_ring == NULL) {
+ printf("%s: error, can't create std ring\n", __func__);
+ return -1;
+ }
+ exact_sz_ring = rte_event_ring_create("exact sz",
+ ring_sz, rte_socket_id(),
+ RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
+ if (exact_sz_ring == NULL) {
+ printf("%s: error, can't create exact size ring\n", __func__);
+ return -1;
+ }
+
+ /*
+ * Check that the exact size ring is bigger than the standard ring
+ */
+ if (rte_event_ring_get_size(std_ring) >=
+ rte_event_ring_get_size(exact_sz_ring)) {
+ printf("%s: error, std ring (size: %u) is not smaller than exact size one (size %u)\n",
+ __func__,
+ rte_event_ring_get_size(std_ring),
+ rte_event_ring_get_size(exact_sz_ring));
+ return -1;
+ }
+ /*
+ * check that the exact_sz_ring can hold one more element than the
+ * standard ring. (16 vs 15 elements)
+ */
+ for (i = 0; i < ring_sz - 1; i++) {
+ rte_event_ring_enqueue_burst(std_ring, &ev, 1, NULL);
+ rte_event_ring_enqueue_burst(exact_sz_ring, &ev, 1, NULL);
+ }
+ if (rte_event_ring_enqueue_burst(std_ring, &ev, 1, NULL) != 0) {
+ printf("%s: error, unexpected successful enqueue\n", __func__);
+ return -1;
+ }
+ if (rte_event_ring_enqueue_burst(exact_sz_ring, &ev, 1, NULL) != 1) {
+ printf("%s: error, enqueue failed\n", __func__);
+ return -1;
+ }
+
+ /* check that dequeue returns the expected number of elements */
+ if (rte_event_ring_dequeue_burst(exact_sz_ring, ev_array,
+ RTE_DIM(ev_array), NULL) != ring_sz) {
+ printf("%s: error, failed to dequeue expected nb of elements\n",
+ __func__);
+ return -1;
+ }
+
+ /* check that the capacity function returns expected value */
+ if (rte_event_ring_get_capacity(exact_sz_ring) != ring_sz) {
+ printf("%s: error, incorrect ring capacity reported\n",
+ __func__);
+ return -1;
+ }
+
+ rte_event_ring_free(std_ring);
+ rte_event_ring_free(exact_sz_ring);
+ return 0;
+}
+
+static int
+test_event_ring(void)
+{
+ if (r == NULL)
+ r = rte_event_ring_create("ev_test", RING_SIZE,
+ SOCKET_ID_ANY, 0);
+ if (r == NULL)
+ return -1;
+
+ /* retrieve the ring from its name */
+ if (rte_event_ring_lookup("ev_test") != r) {
+ printf("Cannot lookup ring from its name\n");
+ return -1;
+ }
+
+ /* basic operations */
+ if (test_create_count_odd() < 0) {
+ printf("Test failed to detect odd count\n");
+ return -1;
+ }
+ printf("Test detected odd count\n");
+
+ if (test_lookup_null() < 0) {
+ printf("Test failed to detect NULL ring lookup\n");
+ return -1;
+ }
+ printf("Test detected NULL ring lookup\n");
+
+ /* test of creating ring with wrong size */
+ if (test_event_ring_creation_with_wrong_size() < 0)
+ return -1;
+
+ if (test_basic_event_enqueue_dequeue() < 0)
+ return -1;
+
+ if (test_event_ring_with_exact_size() < 0)
+ return -1;
+
+ return 0;
+}
+
+REGISTER_TEST_COMMAND(event_ring_autotest, test_event_ring);
--
2.13.0
* Re: [dpdk-dev] [PATCH v2 4/5] test/test: add auto-tests for event ring functions
2017-06-30 15:06 ` [dpdk-dev] [PATCH v2 4/5] test/test: add auto-tests for event ring functions Bruce Richardson
@ 2017-07-03 12:30 ` Van Haaren, Harry
0 siblings, 0 replies; 28+ messages in thread
From: Van Haaren, Harry @ 2017-07-03 12:30 UTC (permalink / raw)
To: Richardson, Bruce, dev; +Cc: olivier.matz, jerin.jacob, Richardson, Bruce
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Bruce Richardson
> Sent: Friday, June 30, 2017 4:06 PM
> To: dev@dpdk.org
> Cc: olivier.matz@6wind.com; jerin.jacob@caviumnetworks.com; Richardson, Bruce
> <bruce.richardson@intel.com>
> Subject: [dpdk-dev] [PATCH v2 4/5] test/test: add auto-tests for event ring functions
>
> Add some basic tests for the event ring functions. Not everything needs
> testing as there is so much code re-use from the rte_ring code.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
* [dpdk-dev] [PATCH v2 5/5] event/sw: change worker rings to standard event rings
2017-06-30 15:06 ` [dpdk-dev] [PATCH v2 0/5] create event rings type Bruce Richardson
` (3 preceding siblings ...)
2017-06-30 15:06 ` [dpdk-dev] [PATCH v2 4/5] test/test: add auto-tests for event ring functions Bruce Richardson
@ 2017-06-30 15:06 ` Bruce Richardson
2017-07-03 12:28 ` Van Haaren, Harry
4 siblings, 1 reply; 28+ messages in thread
From: Bruce Richardson @ 2017-06-30 15:06 UTC (permalink / raw)
To: dev; +Cc: olivier.matz, jerin.jacob, Bruce Richardson
Now that we have a standard event ring implementation for passing events
core-to-core, use that in place of the custom event rings in the software
eventdev.
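As a rough illustration of the mapping applied at each call site (a sketch,
not part of the patch: the burst calls gain an optional final argument
reporting free or remaining slots, and EXACT_SZ keeps the usable depth equal
to the requested depth):

    #include <rte_event_ring.h>

    /*
     * Sketch of the conversion pattern: the former
     * qe_ring_dequeue_burst(r, evs, max) becomes an rte_event_ring call
     * with one extra, optional argument.
     */
    static uint16_t
    pull_from_worker_ring(struct rte_event_ring *r, struct rte_event *evs,
    		uint16_t max)
    {
    	return rte_event_ring_dequeue_burst(r, evs, max, NULL);
    }

    static struct rte_event_ring *
    make_worker_ring(const char *name, unsigned int depth, int socket_id)
    {
    	/* exact-size ring so the usable depth matches the requested depth */
    	return rte_event_ring_create(name, depth, socket_id,
    			RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
    }

The single-producer/single-consumer flags reflect that each worker ring has
one port on one side and the scheduling core on the other, as in the diff
below.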
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/event/sw/sw_evdev.c | 38 +++++++++++++++++++----------------
drivers/event/sw/sw_evdev.h | 4 ++--
drivers/event/sw/sw_evdev_scheduler.c | 19 +++++++++---------
drivers/event/sw/sw_evdev_worker.c | 28 +++++++++++++++++++++-----
drivers/event/sw/sw_evdev_xstats.c | 15 +++++++-------
5 files changed, 64 insertions(+), 40 deletions(-)
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index fe2a61e2f..31880aa5c 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -38,10 +38,10 @@
#include <rte_kvargs.h>
#include <rte_ring.h>
#include <rte_errno.h>
+#include <rte_event_ring.h>
#include "sw_evdev.h"
#include "iq_ring.h"
-#include "event_ring.h"
#define EVENTDEV_NAME_SW_PMD event_sw
#define NUMA_NODE_ARG "numa_node"
@@ -140,7 +140,7 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
{
struct sw_evdev *sw = sw_pmd_priv(dev);
struct sw_port *p = &sw->ports[port_id];
- char buf[QE_RING_NAMESIZE];
+ char buf[RTE_RING_NAMESIZE];
unsigned int i;
struct rte_event_dev_info info;
@@ -161,10 +161,11 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
p->id = port_id;
p->sw = sw;
- snprintf(buf, sizeof(buf), "sw%d_%s", dev->data->dev_id,
- "rx_worker_ring");
- p->rx_worker_ring = qe_ring_create(buf, MAX_SW_PROD_Q_DEPTH,
- dev->data->socket_id);
+ snprintf(buf, sizeof(buf), "sw%d_p%u_%s", dev->data->dev_id,
+ port_id, "rx_worker_ring");
+ p->rx_worker_ring = rte_event_ring_create(buf, MAX_SW_PROD_Q_DEPTH,
+ dev->data->socket_id,
+ RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
if (p->rx_worker_ring == NULL) {
SW_LOG_ERR("Error creating RX worker ring for port %d\n",
port_id);
@@ -173,12 +174,13 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
p->inflight_max = conf->new_event_threshold;
- snprintf(buf, sizeof(buf), "sw%d_%s", dev->data->dev_id,
- "cq_worker_ring");
- p->cq_worker_ring = qe_ring_create(buf, conf->dequeue_depth,
- dev->data->socket_id);
+ snprintf(buf, sizeof(buf), "sw%d_p%u, %s", dev->data->dev_id,
+ port_id, "cq_worker_ring");
+ p->cq_worker_ring = rte_event_ring_create(buf, conf->dequeue_depth,
+ dev->data->socket_id,
+ RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
if (p->cq_worker_ring == NULL) {
- qe_ring_destroy(p->rx_worker_ring);
+ rte_event_ring_free(p->rx_worker_ring);
SW_LOG_ERR("Error creating CQ worker ring for port %d\n",
port_id);
return -1;
@@ -204,8 +206,8 @@ sw_port_release(void *port)
if (p == NULL)
return;
- qe_ring_destroy(p->rx_worker_ring);
- qe_ring_destroy(p->cq_worker_ring);
+ rte_event_ring_free(p->rx_worker_ring);
+ rte_event_ring_free(p->cq_worker_ring);
memset(p, 0, sizeof(*p));
}
@@ -512,8 +514,9 @@ sw_dump(struct rte_eventdev *dev, FILE *f)
fprintf(f, "\n");
if (p->rx_worker_ring) {
- uint64_t used = qe_ring_count(p->rx_worker_ring);
- uint64_t space = qe_ring_free_count(p->rx_worker_ring);
+ uint64_t used = rte_event_ring_count(p->rx_worker_ring);
+ uint64_t space = rte_event_ring_free_count(
+ p->rx_worker_ring);
const char *col = (space == 0) ? COL_RED : COL_RESET;
fprintf(f, "\t%srx ring used: %4"PRIu64"\tfree: %4"
PRIu64 COL_RESET"\n", col, used, space);
@@ -521,8 +524,9 @@ sw_dump(struct rte_eventdev *dev, FILE *f)
fprintf(f, "\trx ring not initialized.\n");
if (p->cq_worker_ring) {
- uint64_t used = qe_ring_count(p->cq_worker_ring);
- uint64_t space = qe_ring_free_count(p->cq_worker_ring);
+ uint64_t used = rte_event_ring_count(p->cq_worker_ring);
+ uint64_t space = rte_event_ring_free_count(
+ p->cq_worker_ring);
const char *col = (space == 0) ? COL_RED : COL_RESET;
fprintf(f, "\t%scq ring used: %4"PRIu64"\tfree: %4"
PRIu64 COL_RESET"\n", col, used, space);
diff --git a/drivers/event/sw/sw_evdev.h b/drivers/event/sw/sw_evdev.h
index 0d7f94f3b..6ef03ceb8 100644
--- a/drivers/event/sw/sw_evdev.h
+++ b/drivers/event/sw/sw_evdev.h
@@ -190,9 +190,9 @@ struct sw_port {
int16_t num_ordered_qids;
/** Ring and buffer for pulling events from workers for scheduling */
- struct qe_ring *rx_worker_ring __rte_cache_aligned;
+ struct rte_event_ring *rx_worker_ring __rte_cache_aligned;
/** Ring and buffer for pushing packets to workers after scheduling */
- struct qe_ring *cq_worker_ring;
+ struct rte_event_ring *cq_worker_ring;
/* hole */
diff --git a/drivers/event/sw/sw_evdev_scheduler.c b/drivers/event/sw/sw_evdev_scheduler.c
index fe1551706..8a2c9d4f9 100644
--- a/drivers/event/sw/sw_evdev_scheduler.c
+++ b/drivers/event/sw/sw_evdev_scheduler.c
@@ -32,9 +32,9 @@
#include <rte_ring.h>
#include <rte_hash_crc.h>
+#include <rte_event_ring.h>
#include "sw_evdev.h"
#include "iq_ring.h"
-#include "event_ring.h"
#define SW_IQS_MASK (SW_IQS_MAX-1)
@@ -123,8 +123,8 @@ sw_schedule_atomic_to_cq(struct sw_evdev *sw, struct sw_qid * const qid,
/* if we just filled in the last slot, flush the buffer */
if (sw->cq_ring_space[cq] == 0) {
- struct qe_ring *worker = p->cq_worker_ring;
- qe_ring_enqueue_burst(worker, p->cq_buf,
+ struct rte_event_ring *worker = p->cq_worker_ring;
+ rte_event_ring_enqueue_burst(worker, p->cq_buf,
p->cq_buf_count,
&sw->cq_ring_space[cq]);
p->cq_buf_count = 0;
@@ -171,7 +171,8 @@ sw_schedule_parallel_to_cq(struct sw_evdev *sw, struct sw_qid * const qid,
cq = qid->cq_map[cq_idx];
if (++cq_idx == qid->cq_num_mapped_cqs)
cq_idx = 0;
- } while (qe_ring_free_count(sw->ports[cq].cq_worker_ring) == 0 ||
+ } while (rte_event_ring_free_count(
+ sw->ports[cq].cq_worker_ring) == 0 ||
sw->ports[cq].inflights == SW_PORT_HIST_LIST);
struct sw_port *p = &sw->ports[cq];
@@ -367,10 +368,10 @@ static __rte_always_inline void
sw_refill_pp_buf(struct sw_evdev *sw, struct sw_port *port)
{
RTE_SET_USED(sw);
- struct qe_ring *worker = port->rx_worker_ring;
+ struct rte_event_ring *worker = port->rx_worker_ring;
port->pp_buf_start = 0;
- port->pp_buf_count = qe_ring_dequeue_burst(worker, port->pp_buf,
- RTE_DIM(port->pp_buf));
+ port->pp_buf_count = rte_event_ring_dequeue_burst(worker, port->pp_buf,
+ RTE_DIM(port->pp_buf), NULL);
}
static __rte_always_inline uint32_t
@@ -586,8 +587,8 @@ sw_event_schedule(struct rte_eventdev *dev)
* worker cores: aka, do the ring transfers batched.
*/
for (i = 0; i < sw->port_count; i++) {
- struct qe_ring *worker = sw->ports[i].cq_worker_ring;
- qe_ring_enqueue_burst(worker, sw->ports[i].cq_buf,
+ struct rte_event_ring *worker = sw->ports[i].cq_worker_ring;
+ rte_event_ring_enqueue_burst(worker, sw->ports[i].cq_buf,
sw->ports[i].cq_buf_count,
&sw->cq_ring_space[i]);
sw->ports[i].cq_buf_count = 0;
diff --git a/drivers/event/sw/sw_evdev_worker.c b/drivers/event/sw/sw_evdev_worker.c
index b738506ac..d76d3d5c8 100644
--- a/drivers/event/sw/sw_evdev_worker.c
+++ b/drivers/event/sw/sw_evdev_worker.c
@@ -32,9 +32,9 @@
#include <rte_atomic.h>
#include <rte_cycles.h>
+#include <rte_event_ring.h>
#include "sw_evdev.h"
-#include "event_ring.h"
#define PORT_ENQUEUE_MAX_BURST_SIZE 64
@@ -52,13 +52,31 @@ sw_event_release(struct sw_port *p, uint8_t index)
ev.op = sw_qe_flag_map[RTE_EVENT_OP_RELEASE];
uint16_t free_count;
- qe_ring_enqueue_burst(p->rx_worker_ring, &ev, 1, &free_count);
+ rte_event_ring_enqueue_burst(p->rx_worker_ring, &ev, 1, &free_count);
/* each release returns one credit */
p->outstanding_releases--;
p->inflight_credits++;
}
+/*
+ * Special case of rte_event_ring enqueue, overriding the op member of
+ * the events that get written to the ring.
+ */
+static inline unsigned int
+enqueue_burst_with_ops(struct rte_event_ring *r, const struct rte_event *events,
+ unsigned int n, uint8_t *ops)
+{
+ struct rte_event tmp_evs[PORT_ENQUEUE_MAX_BURST_SIZE];
+ unsigned int i;
+
+ memcpy(tmp_evs, events, n * sizeof(events[0]));
+ for (i = 0; i < n; i++)
+ tmp_evs[i].op = ops[i];
+
+ return rte_event_ring_enqueue_burst(r, tmp_evs, n, NULL);
+}
+
uint16_t
sw_event_enqueue_burst(void *port, const struct rte_event ev[], uint16_t num)
{
@@ -119,7 +137,7 @@ sw_event_enqueue_burst(void *port, const struct rte_event ev[], uint16_t num)
p->inflight_credits -= forwards * p->is_directed;
/* returns number of events actually enqueued */
- uint32_t enq = qe_ring_enqueue_burst_with_ops(p->rx_worker_ring, ev, i,
+ uint32_t enq = enqueue_burst_with_ops(p->rx_worker_ring, ev, i,
new_ops);
if (p->outstanding_releases == 0 && p->last_dequeue_burst_sz != 0) {
uint64_t burst_ticks = rte_get_timer_cycles() -
@@ -146,7 +164,7 @@ sw_event_dequeue_burst(void *port, struct rte_event *ev, uint16_t num,
RTE_SET_USED(wait);
struct sw_port *p = (void *)port;
struct sw_evdev *sw = (void *)p->sw;
- struct qe_ring *ring = p->cq_worker_ring;
+ struct rte_event_ring *ring = p->cq_worker_ring;
uint32_t credit_update_quanta = sw->credit_update_quanta;
/* check that all previous dequeues have been released */
@@ -158,7 +176,7 @@ sw_event_dequeue_burst(void *port, struct rte_event *ev, uint16_t num,
}
/* returns number of events actually dequeued */
- uint16_t ndeq = qe_ring_dequeue_burst(ring, ev, num);
+ uint16_t ndeq = rte_event_ring_dequeue_burst(ring, ev, num, NULL);
if (unlikely(ndeq == 0)) {
p->outstanding_releases = 0;
p->zero_polls++;
diff --git a/drivers/event/sw/sw_evdev_xstats.c b/drivers/event/sw/sw_evdev_xstats.c
index 7b66fbb15..8cb6d88d1 100644
--- a/drivers/event/sw/sw_evdev_xstats.c
+++ b/drivers/event/sw/sw_evdev_xstats.c
@@ -30,9 +30,9 @@
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
+#include <rte_event_ring.h>
#include "sw_evdev.h"
#include "iq_ring.h"
-#include "event_ring.h"
enum xstats_type {
/* common stats */
@@ -105,10 +105,10 @@ get_port_stat(const struct sw_evdev *sw, uint16_t obj_idx,
case calls: return p->total_polls;
case credits: return p->inflight_credits;
case poll_return: return p->zero_polls;
- case rx_used: return qe_ring_count(p->rx_worker_ring);
- case rx_free: return qe_ring_free_count(p->rx_worker_ring);
- case tx_used: return qe_ring_count(p->cq_worker_ring);
- case tx_free: return qe_ring_free_count(p->cq_worker_ring);
+ case rx_used: return rte_event_ring_count(p->rx_worker_ring);
+ case rx_free: return rte_event_ring_free_count(p->rx_worker_ring);
+ case tx_used: return rte_event_ring_count(p->cq_worker_ring);
+ case tx_free: return rte_event_ring_free_count(p->cq_worker_ring);
default: return -1;
}
}
@@ -318,8 +318,9 @@ sw_xstats_init(struct sw_evdev *sw)
port, port_stats[i]);
}
- for (bkt = 0; bkt < (sw->ports[port].cq_worker_ring->size >>
- SW_DEQ_STAT_BUCKET_SHIFT) + 1; bkt++) {
+ for (bkt = 0; bkt < (rte_event_ring_get_capacity(
+ sw->ports[port].cq_worker_ring) >>
+ SW_DEQ_STAT_BUCKET_SHIFT) + 1; bkt++) {
for (i = 0; i < RTE_DIM(port_bucket_stats); i++) {
sw->xstats[stat] = (struct sw_xstats_entry){
.fn = get_port_bucket_stat,
--
2.13.0
* Re: [dpdk-dev] [PATCH v2 5/5] event/sw: change worker rings to standard event rings
2017-06-30 15:06 ` [dpdk-dev] [PATCH v2 5/5] event/sw: change worker rings to standard event rings Bruce Richardson
@ 2017-07-03 12:28 ` Van Haaren, Harry
2017-07-03 12:44 ` Jerin Jacob
0 siblings, 1 reply; 28+ messages in thread
From: Van Haaren, Harry @ 2017-07-03 12:28 UTC (permalink / raw)
To: jerin.jacob; +Cc: olivier.matz, Richardson, Bruce, dev, Richardson, Bruce
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Bruce Richardson
> Sent: Friday, June 30, 2017 4:06 PM
> To: dev@dpdk.org
> Cc: olivier.matz@6wind.com; jerin.jacob@caviumnetworks.com; Richardson, Bruce
> <bruce.richardson@intel.com>
> Subject: [dpdk-dev] [PATCH v2 5/5] event/sw: change worker rings to standard event rings
>
> Now that we have a standard event ring implementation for passing events
> core-to-core, use that in place of the custom event rings in the software
> eventdev.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Agree with 99% of this patch, but due to the implementation (with memzone lookup),
we need to change one part of the sw_port_setup() function.
The change is required to allow port_setup() to be called multiple times on the same
port, which is required to re-configure a port that has already been configured once.
I can send a separate fix, or I could re-spin Bruce's 5 patches, and include the fix.
Given this is a small, non-datapath modification to the SW PMD, my preference is to
ack this patch once I've posted a separate patch fix for the SW PMD.
@Jerin, any preference?
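Purely as a hypothetical sketch of the kind of adjustment being discussed
(the actual fix is the separately posted patch applied later in the thread;
buf, p, MAX_SW_PROD_Q_DEPTH and dev are taken from the quoted diff):

    	/*
    	 * Hypothetical sketch only: rte_event_ring_create() reserves a
    	 * memzone, so a second sw_port_setup() on the same port would fail
    	 * with EEXIST unless any ring left from an earlier configuration is
    	 * released (or looked up and reused) first.
    	 */
    	struct rte_event_ring *existing = rte_event_ring_lookup(buf);

    	if (existing != NULL)
    		rte_event_ring_free(existing);

    	p->rx_worker_ring = rte_event_ring_create(buf, MAX_SW_PROD_Q_DEPTH,
    			dev->data->socket_id,
    			RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);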
> ---
> drivers/event/sw/sw_evdev.c | 38 +++++++++++++++++++----------------
> drivers/event/sw/sw_evdev.h | 4 ++--
> drivers/event/sw/sw_evdev_scheduler.c | 19 +++++++++---------
> drivers/event/sw/sw_evdev_worker.c | 28 +++++++++++++++++++++-----
> drivers/event/sw/sw_evdev_xstats.c | 15 +++++++-------
> 5 files changed, 64 insertions(+), 40 deletions(-)
>
> diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
> index fe2a61e2f..31880aa5c 100644
> --- a/drivers/event/sw/sw_evdev.c
> +++ b/drivers/event/sw/sw_evdev.c
> @@ -38,10 +38,10 @@
> #include <rte_kvargs.h>
> #include <rte_ring.h>
> #include <rte_errno.h>
> +#include <rte_event_ring.h>
>
> #include "sw_evdev.h"
> #include "iq_ring.h"
> -#include "event_ring.h"
>
> #define EVENTDEV_NAME_SW_PMD event_sw
> #define NUMA_NODE_ARG "numa_node"
> @@ -140,7 +140,7 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
> {
> struct sw_evdev *sw = sw_pmd_priv(dev);
> struct sw_port *p = &sw->ports[port_id];
> - char buf[QE_RING_NAMESIZE];
> + char buf[RTE_RING_NAMESIZE];
> unsigned int i;
>
> struct rte_event_dev_info info;
> @@ -161,10 +161,11 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
> p->id = port_id;
> p->sw = sw;
>
> - snprintf(buf, sizeof(buf), "sw%d_%s", dev->data->dev_id,
> - "rx_worker_ring");
> - p->rx_worker_ring = qe_ring_create(buf, MAX_SW_PROD_Q_DEPTH,
> - dev->data->socket_id);
> + snprintf(buf, sizeof(buf), "sw%d_p%u_%s", dev->data->dev_id,
> + port_id, "rx_worker_ring");
> + p->rx_worker_ring = rte_event_ring_create(buf, MAX_SW_PROD_Q_DEPTH,
> + dev->data->socket_id,
> + RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
* Re: [dpdk-dev] [PATCH v2 5/5] event/sw: change worker rings to standard event rings
2017-07-03 12:28 ` Van Haaren, Harry
@ 2017-07-03 12:44 ` Jerin Jacob
2017-07-03 13:01 ` Van Haaren, Harry
0 siblings, 1 reply; 28+ messages in thread
From: Jerin Jacob @ 2017-07-03 12:44 UTC (permalink / raw)
To: Van Haaren, Harry; +Cc: olivier.matz, Richardson, Bruce, dev
-----Original Message-----
> Date: Mon, 3 Jul 2017 12:28:32 +0000
> From: "Van Haaren, Harry" <harry.van.haaren@intel.com>
> To: "jerin.jacob@caviumnetworks.com" <jerin.jacob@caviumnetworks.com>
> CC: "olivier.matz@6wind.com" <olivier.matz@6wind.com>, "Richardson, Bruce"
> <bruce.richardson@intel.com>, "dev@dpdk.org" <dev@dpdk.org>, "Richardson,
> Bruce" <bruce.richardson@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 5/5] event/sw: change worker rings to
> standard event rings
>
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Bruce Richardson
> > Sent: Friday, June 30, 2017 4:06 PM
> > To: dev@dpdk.org
> > Cc: olivier.matz@6wind.com; jerin.jacob@caviumnetworks.com; Richardson, Bruce
> > <bruce.richardson@intel.com>
> > Subject: [dpdk-dev] [PATCH v2 5/5] event/sw: change worker rings to standard event rings
> >
> > Now that we have a standard event ring implementation for passing events
> > core-to-core, use that in place of the custom event rings in the software
> > eventdev.
> >
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
>
> Agree with 99% of this patch, but due to the implementation (with memzone lookup),
> we need to change one part of the sw_port_setup() function.
>
> The change is required to allow port_setup() to be called multiple times on the same
> port, which is required to re-configure a port that has already been configured once.
>
> I can send a separate fix, or I could re-spin Bruce's 5 patches, and include the fix.
>
> Given this is a small, non-datapath modification to the SW PMD, my preference is to
> ack this patch once I've posted a separate patch fix for the SW PMD.
>
> @Jerin, any preference?
I think you can send it as a separate patch. I can squash the fix patch with this
patch or apply it as a separate one if you are not concerned about
breaking when we do "git bisect". Let me know.
* Re: [dpdk-dev] [PATCH v2 5/5] event/sw: change worker rings to standard event rings
2017-07-03 12:44 ` Jerin Jacob
@ 2017-07-03 13:01 ` Van Haaren, Harry
2017-07-04 5:36 ` Jerin Jacob
0 siblings, 1 reply; 28+ messages in thread
From: Van Haaren, Harry @ 2017-07-03 13:01 UTC (permalink / raw)
To: Jerin Jacob; +Cc: olivier.matz, Richardson, Bruce, dev
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Monday, July 3, 2017 1:45 PM
> To: Van Haaren, Harry <harry.van.haaren@intel.com>
> Cc: olivier.matz@6wind.com; Richardson, Bruce <bruce.richardson@intel.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v2 5/5] event/sw: change worker rings to standard event
> rings
>
> -----Original Message-----
> > Date: Mon, 3 Jul 2017 12:28:32 +0000
> > From: "Van Haaren, Harry" <harry.van.haaren@intel.com>
> > To: "jerin.jacob@caviumnetworks.com" <jerin.jacob@caviumnetworks.com>
> > CC: "olivier.matz@6wind.com" <olivier.matz@6wind.com>, "Richardson, Bruce"
> > <bruce.richardson@intel.com>, "dev@dpdk.org" <dev@dpdk.org>, "Richardson,
> > Bruce" <bruce.richardson@intel.com>
> > Subject: RE: [dpdk-dev] [PATCH v2 5/5] event/sw: change worker rings to
> > standard event rings
> >
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Bruce Richardson
> > > Sent: Friday, June 30, 2017 4:06 PM
> > > To: dev@dpdk.org
> > > Cc: olivier.matz@6wind.com; jerin.jacob@caviumnetworks.com; Richardson, Bruce
> > > <bruce.richardson@intel.com>
> > > Subject: [dpdk-dev] [PATCH v2 5/5] event/sw: change worker rings to standard event
> rings
> > >
> > > Now that we have a standard event ring implementation for passing events
> > > core-to-core, use that in place of the custom event rings in the software
> > > eventdev.
> > >
> > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> >
> > Agree with 99% of this patch, but due to the implementation (with memzone lookup),
> > we need to change one part of the sw_port_setup() function.
> >
> > The change is required to allow port_setup() to be called multiple times on the same
> > port, which is required to re-configure a port that has already been configured once.
> >
> > I can send a separate fix, or I could re-spin Bruce's 5 patches, and include the fix.
> >
> > Given this is a small, non-datapath modification to the SW PMD, my preference is to
> > ack this patch once I've posted a separate patch fix for the SW PMD.
> >
> > @Jerin, any preference?
>
> I think, you can send it as a separate patch. I can squash the fix patch with this
> patch or apply it as separate one if you are not concerned about
> breaking when we do "git bisect". Let me know.
Can be squashed then, please and thanks!
Then this patch itself (5/5) is
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
* Re: [dpdk-dev] [PATCH v2 5/5] event/sw: change worker rings to standard event rings
2017-07-03 13:01 ` Van Haaren, Harry
@ 2017-07-04 5:36 ` Jerin Jacob
0 siblings, 0 replies; 28+ messages in thread
From: Jerin Jacob @ 2017-07-04 5:36 UTC (permalink / raw)
To: Van Haaren, Harry; +Cc: olivier.matz, Richardson, Bruce, dev
-----Original Message-----
> Date: Mon, 3 Jul 2017 13:01:47 +0000
> From: "Van Haaren, Harry" <harry.van.haaren@intel.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> CC: "olivier.matz@6wind.com" <olivier.matz@6wind.com>, "Richardson, Bruce"
> <bruce.richardson@intel.com>, "dev@dpdk.org" <dev@dpdk.org>
> Subject: RE: [dpdk-dev] [PATCH v2 5/5] event/sw: change worker rings to
> standard event rings
>
> > rings
> > > >
> > > > Now that we have a standard event ring implementation for passing events
> > > > core-to-core, use that in place of the custom event rings in the software
> > > > eventdev.
> > > >
> > > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > >
> > > Agree with 99% of this patch, but due to the implementation (with memzone lookup),
> > > we need to change one part of the sw_port_setup() function.
> > >
> > > The change is required to allow port_setup() to be called multiple times on the same
> > > port, which is required to re-configure a port that has already been configured once.
> > >
> > > I can send a separate fix, or I could re-spin Bruce's 5 patches, and include the fix.
> > >
> > > Given this is a small, non-datapath modification to the SW PMD, my preference is to
> > > ack this patch once I've posted a separate patch fix for the SW PMD.
> > >
> > > @Jerin, any preference?
> >
> > I think, you can send it as a separate patch. I can squash the fix patch with this
> > patch or apply it as separate one if you are not concerned about
> > breaking when we do "git bisect". Let me know.
>
> Can be squashed then, please and thanks!
>
> Then this patch itself (5/5) is
>
> Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
Applied this series to dpdk-next-eventdev/master after squashing
http://dpdk.org/dev/patchwork/patch/26241/
Thanks