Hello Julien,
On Tue, Aug 22, 2023 at 08:34:53AM +0200, jhascoet wrote:
> From: Julien Hascoet <ju.hascoet@gmail.com>
>
> In case of ring full state, we retry the enqueue
> operation in order to avoid mbuf loss.
>
> Fixes: af75078fece ("first public release")
>
> Signed-off-by: Julien Hascoet <ju.hascoet@gmail.com>
> ---
> app/test/test_mbuf.c | 15 ++++++++++++---
> 1 file changed, 12 insertions(+), 3 deletions(-)
>
> diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
> index efac01806b..ad18bf6378 100644
> --- a/app/test/test_mbuf.c
> +++ b/app/test/test_mbuf.c
> @@ -1033,12 +1033,21 @@ test_refcnt_iter(unsigned int lcore, unsigned int iter,
> tref += ref;
> if ((ref & 1) != 0) {
> rte_pktmbuf_refcnt_update(m, ref);
> - while (ref-- != 0)
> - rte_ring_enqueue(refcnt_mbuf_ring, m);
> + while (ref-- != 0) {
> + /* retry in case of failure */
> + while (rte_ring_enqueue(refcnt_mbuf_ring, m) != 0) {
> + /* let others consume */
> + rte_pause();
> + }
> + }
> } else {
> while (ref-- != 0) {
> rte_pktmbuf_refcnt_update(m, 1);
> - rte_ring_enqueue(refcnt_mbuf_ring, m);
> + /* retry in case of failure */
> + while (rte_ring_enqueue(refcnt_mbuf_ring, m) != 0) {
> + /* let others consume */
> + rte_pause();
> + }
> }
> }
> rte_pktmbuf_free(m);
> --
> 2.34.1
>
Can you give some more details about how to reproduce the issue?
From what I see, the code does the following:
main core:
  create a ring with at least (REFCNT_MBUF_NUM * REFCNT_MAX_REF) entries
  create an mbuf pool with REFCNT_MBUF_NUM entries
  start worker cores
  do REFCNT_MAX_ITER times:
    for each mbuf of the pool (REFCNT_MBUF_NUM entries):
      let r be a random number between 1 and REFCNT_MAX_REF
      increase the mbuf's reference count by r, and enqueue it r times in the ring
    wait until the ring is empty (the worker cores are dequeuing mbufs)
  stop worker cores

worker cores:
  dequeue mbufs from the ring and free them until asked to stop
I may be mistaken, but I don't see how the number of mbufs in the ring
could exceed REFCNT_MBUF_NUM * REFCNT_MAX_REF.
Regards,
Olivier
Note: removing CC maintainers@dpdk.org