From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <hejianet@gmail.com>
Received: from mail-pl0-f42.google.com (mail-pl0-f42.google.com
 [209.85.160.42]) by dpdk.org (Postfix) with ESMTP id E62F91B3B0
 for <dev@dpdk.org>; Wed,  8 Nov 2017 03:31:54 +0100 (CET)
Received: by mail-pl0-f42.google.com with SMTP id 14so322476plf.4
 for <dev@dpdk.org>; Tue, 07 Nov 2017 18:31:54 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=subject:to:cc:references:from:message-id:date:user-agent
 :mime-version:in-reply-to:content-transfer-encoding;
 bh=b0fnuo9XQl819hwHnUFipQqXmrXJnkjGV4BnXuXsFyE=;
 b=chvDyU5yxptX1CaSimO22JpH9IUeCFzIUTTHFA4CMfnoRw3ku0Ufe5zC/5LpvssZM7
 QPJd+qaHiYR6YnAbIDQXLFEeDN49sFWNyE2n8EOMFsOIq6e3Cj4fVNL5GIg7L3PMlVVO
 XhA7/JVwTdqAJ4zW8d/J3N4dXpBuf5qHFYQaink+2d14S6kzR84HhS0C++Wh7212oAY4
 h4UFTCduIaBsXY/iMvNHhl1lk4QGiNj5Kzd+L5/M223y2ATEN6BApcgFkXB9/kmau+Sw
 dKDM78+CtnnQI9P3WVUzCLhkja0mTu3y9coe9SfNGekzWUXyc+uSDRFQUWva6kHl0kMa
 GjJg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:to:cc:references:from:message-id:date
 :user-agent:mime-version:in-reply-to:content-transfer-encoding;
 bh=b0fnuo9XQl819hwHnUFipQqXmrXJnkjGV4BnXuXsFyE=;
 b=Oakk62iokB8vnY7MZsHuSKlVBHQ0XWjHG7GYTGhIO7t6lT/rgUof1izuBUaFOvtvOg
 WQrOxAym3AFLydr1+qWiPs6Z8ahdp7Zify4pxhZS5GZ70pODQs53WSG/NUu6xD4vOe3q
 X8bgIZdAsr4Q/z775CFUtZrRTvb0ajiVk3NRs1LAtuTxz4o7MT8wQKnD85Ovfs56ceJ5
 4VB15AQ4HhgRwlBacQdgDPIMpkF7ob1dPx27w4A4Vs2aJvCKzDrE0+9JEdnKhndM20Ny
 oERGjAObXsAD0KL2/4I/lZREYI7iy/jRUIIjy2PDyMVXD5N0dcUMaCtDTYZuz6rHrmoI
 Zf7A==
X-Gm-Message-State: AJaThX5OWcepKFXdo2iZiKYZAerX5aNlImgCxXjqySGbUYO1DNw/uIWR
 8cMWLNa5z60poG72K5UesIk=
X-Google-Smtp-Source: ABhQp+Q00ExGJdKf605abhmeSXREaUWeFu8tTx6/JAiECdYiNhatNwMyB9k7fPOcHDIE5CkmqgBYcg==
X-Received: by 10.84.224.76 with SMTP id a12mr727655plt.207.1510108313975;
 Tue, 07 Nov 2017 18:31:53 -0800 (PST)
Received: from [0.0.0.0] (67.209.179.165.16clouds.com. [67.209.179.165])
 by smtp.gmail.com with ESMTPSA id u6sm5441236pfg.175.2017.11.07.18.31.41
 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 07 Nov 2017 18:31:53 -0800 (PST)
To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Cc: dev@dpdk.org, olivier.matz@6wind.com, konstantin.ananyev@intel.com,
 bruce.richardson@intel.com, jianbo.liu@arm.com, hemant.agrawal@nxp.com,
 jie2.liu@hxt-semitech.com, bing.zhao@hxt-semitech.com,
 jia.he@hxt-semitech.com
References: <1509612210-5499-1-git-send-email-hejianet@gmail.com>
 <20171102172337.GB1478@jerin>
 <25192429-8369-ac3d-44b0-c1b1d7182ef0@gmail.com>
 <20171103125616.GB20326@jerin>
 <7b7f3677-8313-9a2f-868f-b3a6231548d6@gmail.com>
 <20171107043655.GA3244@jerin>
 <c2ce8774-a1b6-edf6-444e-ee0981df7497@gmail.com>
 <20171107095727.GA23010@jerin>
From: Jia He <hejianet@gmail.com>
Message-ID: <06fb4de9-5348-01e2-f9bb-1aecb90c6adc@gmail.com>
Date: Wed, 8 Nov 2017 10:31:32 +0800
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101
 Thunderbird/52.4.0
MIME-Version: 1.0
In-Reply-To: <20171107095727.GA23010@jerin>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Subject: Re: [dpdk-dev] [PATCH v2] ring: guarantee ordering of cons/prod
 loading when doing
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <http://dpdk.org/ml/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://dpdk.org/ml/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <http://dpdk.org/ml/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
X-List-Received-Date: Wed, 08 Nov 2017 02:31:55 -0000

Hi Jerin

Thank you,

I mistakenly thought x86 doesn't need rte_smp_rmb().

Since rte_smp_rmb() only affects compiler optimization on x86 (it emits no
barrier instruction there), I will simplify the code as you suggested.
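
For my own reference, the reason it is harmless on x86 is roughly the
following (a sketch from memory of the per-arch rte_atomic.h definitions,
not a verbatim copy):

/* x86: loads are not reordered with other loads, so rte_smp_rmb() only
 * has to stop compiler reordering -- no barrier instruction is emitted.
 */
#define rte_smp_rmb() rte_compiler_barrier()  /* asm volatile("" ::: "memory") */

/* arm64: a real barrier instruction is required. */
#define rte_smp_rmb() asm volatile("dmb ishld" : : : "memory")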

Cheers,

Jia


On 11/7/2017 5:57 PM, Jerin Jacob Wrote:
> -----Original Message-----
>> Date: Tue, 7 Nov 2017 16:34:30 +0800
>> From: Jia He <hejianet@gmail.com>
>> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>> Cc: dev@dpdk.org, olivier.matz@6wind.com, konstantin.ananyev@intel.com,
>>   bruce.richardson@intel.com, jianbo.liu@arm.com, hemant.agrawal@nxp.com,
>>   jie2.liu@hxt-semitech.com, bing.zhao@hxt-semitech.com,
>>   jia.he@hxt-semitech.com
>> Subject: Re: [PATCH v2] ring: guarantee ordering of cons/prod loading when
>>   doing
>> User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101
>>   Thunderbird/52.4.0
>>
>>
>>
>> On 11/7/2017 12:36 PM, Jerin Jacob Wrote:
>>> -----Original Message-----
>>>
>>> One option could be to change the prototype of update_tail() and let the
>>> compiler accommodate it at zero cost for arm64 (which I think is the
>>> case, but you can check the generated instructions).
>>> If not, move __rte_ring_do_dequeue() and __rte_ring_do_enqueue() instead of
>>> __rte_ring_move_prod_head()/__rte_ring_move_cons_head()/update_tail()
>>>
>>>
>>> ➜ [master][dpdk.org] $ git diff
>>> diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
>>> index 5e9b3b7b4..b32648825 100644
>>> --- a/lib/librte_ring/rte_ring.h
>>> +++ b/lib/librte_ring/rte_ring.h
>>> @@ -358,8 +358,12 @@ void rte_ring_dump(FILE *f, const struct rte_ring
>>> *r);
>>>    static __rte_always_inline void
>>>    update_tail(struct rte_ring_headtail *ht, uint32_t old_val, uint32_t
>>> new_val,
>>> -               uint32_t single)
>>> +               uint32_t single, const uint32_t enqueue)
>>>    {
>>> +       if (enqueue)
>>> +               rte_smp_wmb();
>>> +       else
>>> +               rte_smp_rmb();
>>>           /*
>>>            * If there are other enqueues/dequeues in progress that
>>>            * preceded us,
>>>            * we need to wait for them to complete
>>> @@ -470,9 +474,8 @@ __rte_ring_do_enqueue(struct rte_ring *r, void *
>>> const *obj_table,
>>>                   goto end;
>>>           ENQUEUE_PTRS(r, &r[1], prod_head, obj_table, n, void *);
>>> -       rte_smp_wmb();
>>> -       update_tail(&r->prod, prod_head, prod_next, is_sp);
>>> +       update_tail(&r->prod, prod_head, prod_next, is_sp, 1);
>>>    end:
>>>           if (free_space != NULL)
>>>                   *free_space = free_entries - n;
>>> @@ -575,9 +578,8 @@ __rte_ring_do_dequeue(struct rte_ring *r, void
>>> **obj_table,
>>>                   goto end;
>>>           DEQUEUE_PTRS(r, &r[1], cons_head, obj_table, n, void *);
>>> -       rte_smp_rmb();
>>> -       update_tail(&r->cons, cons_head, cons_next, is_sc);
>>> +       update_tail(&r->cons, cons_head, cons_next, is_sc, 0);
>>>    end:
>>>           if (available != NULL)
>>>
>>>
>>>
>> Hi Jerin, yes, I am aware of this suggestion for update_tail.
>> But what I mean is the rte_smp_rmb() in __rte_ring_move_cons_head and
>> __rte_ring_move_prod_head:
>> [option 1]
>> +        *old_head = r->cons.head;
>> +        rte_smp_rmb();
>> +        const uint32_t prod_tail = r->prod.tail;
>>
>> [option 2]
>> +        *old_head = __atomic_load_n(&r->cons.head,
>> +                    __ATOMIC_ACQUIRE);
>>
>> i.e. I wonder what would be a suitable new config name to distinguish the
>> above 2 options?
> Why?
> If you fix the generic version with rte_smp_rmb(), then we need only one
> config to differentiate between the c11 and generic versions. See comments below.
>
>> Thanks for your patience :-)
>>
>> see my drafted patch below, with the macro "PREFER":
>> + */
>> +
>> +#ifndef _RTE_RING_C11_MEM_H_
>> +#define _RTE_RING_C11_MEM_H_
>> +
>> +static __rte_always_inline void
>> +update_tail(struct rte_ring_headtail *ht, uint32_t old_val, uint32_t
>> new_val,
>> +        uint32_t single, uint32_t enqueue)
>> +{
>> +    /* Don't need wmb/rmb when we prefer to use load_acquire/
>> +     * store_release barrier */
>> +#ifndef PREFER
>> +    if (enqueue)
>> +        rte_smp_wmb();
>> +    else
>> +        rte_smp_rmb();
>> +#endif
> You can remove PREFER and let the "generic" version have this. For x86,
> rte_smp_?mb() will be a NOOP, so no issue.
>
>> +
>> +    /*
>> +     * If there are other enqueues/dequeues in progress that preceded us,
>> +     * we need to wait for them to complete
>> +     */
>> +    if (!single)
>> +        while (unlikely(ht->tail != old_val))
>> +            rte_pause();
>> +
>> +#ifdef PREFER
>> +    __atomic_store_n(&ht->tail, new_val, __ATOMIC_RELEASE);
> For the c11 mem model version, only the __atomic_store_n variant is needed.
>
>> +#else
>> +    ht->tail = new_val;
>> +#endif
>> +}
>> +
>> +/**
>> + * @internal This function updates the producer head for enqueue
>> + *
>> + * @param r
>> + *   A pointer to the ring structure
>> + * @param is_sp
>> + *   Indicates whether multi-producer path is needed or not
>> + * @param n
>> + *   The number of elements we will want to enqueue, i.e. how far should
>> the
>> + *   head be moved
>> + * @param behavior
>> + *   RTE_RING_QUEUE_FIXED:    Enqueue a fixed number of items from a ring
>> + *   RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible from ring
>> + * @param old_head
>> + *   Returns head value as it was before the move, i.e. where enqueue
>> starts
>> + * @param new_head
>> + *   Returns the current/new head value i.e. where enqueue finishes
>> + * @param free_entries
>> + *   Returns the amount of free space in the ring BEFORE head was moved
>> + * @return
>> + *   Actual number of objects enqueued.
>> + *   If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only.
>> + */
>> +static __rte_always_inline unsigned int
>> +__rte_ring_move_prod_head(struct rte_ring *r, int is_sp,
>> +        unsigned int n, enum rte_ring_queue_behavior behavior,
>> +        uint32_t *old_head, uint32_t *new_head,
>> +        uint32_t *free_entries)
>> +{
>> +    const uint32_t capacity = r->capacity;
>> +    unsigned int max = n;
>> +    int success;
>> +
>> +    do {
>> +        /* Reset n to the initial burst count */
>> +        n = max;
>> +
>> +#ifdef PREFER
>> +        *old_head = __atomic_load_n(&r->prod.head,
>> +                    __ATOMIC_ACQUIRE);
>> +#else
>> +        *old_head = r->prod.head;
>> +        /* prevent reorder of load/load */
>> +        rte_smp_rmb();
>> +#endif
> Same as above comment.
>
>> +        const uint32_t cons_tail = r->cons.tail;
>> +        /*
>> +         *  The subtraction is done between two unsigned 32bits value
>> +         * (the result is always modulo 32 bits even if we have
>> +         * *old_head > cons_tail). So 'free_entries' is always between 0
>> +         * and capacity (which is < size).
>> +         */
>> +        *free_entries = (capacity + cons_tail - *old_head);
>> +
>> +        /* check that we have enough room in ring */
>> +        if (unlikely(n > *free_entries))
>> +static __rte_always_inline unsigned int
>> +__rte_ring_do_enqueue(struct rte_ring *r, void * const *obj_table,
>> +         unsigned int n, enum rte_ring_queue_behavior behavior,
>> +         int is_sp, unsigned int *free_space)
>> +{
>
> Duplicate function, no need to replicate it in both versions.
>
>
>> +static __rte_always_inline unsigned int
>> +__rte_ring_move_cons_head(struct rte_ring *r, int is_sc,
>> +        unsigned int n, enum rte_ring_queue_behavior behavior,
>> +        uint32_t *old_head, uint32_t *new_head,
>> +        uint32_t *entries)
>> +{
>> +    unsigned int max = n;
>> +    int success;
>> +
>> +    /* move cons.head atomically */
>> +    do {
>> +        /* Restore n as it may change every loop */
>> +        n = max;
>> +#ifdef PREFER
>> +        *old_head = __atomic_load_n(&r->cons.head,
>> +                    __ATOMIC_ACQUIRE);
>> +#else
>> +        *old_head = r->cons.head;
>> +        /*  prevent reorder of load/load */
>> +        rte_smp_rmb();
>> +#endif
> Same as above comment
>
>> +
>> +        const uint32_t prod_tail = r->prod.tail;
>> +        /* The subtraction is done between two unsigned 32bits value
>> +         * (the result is always modulo 32 bits even if we have
>> +         * cons_head > prod_tail). So 'entries' is always between 0
>> +         * and size(ring)-1. */
>> +        *entries = (prod_tail - *old_head);
>> +
>> +        /* Set the actual entries for dequeue */
>> +        if (n > *entries)
>> +            n = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : *entries;
>> +
>> +        if (unlikely(n == 0))
>> +            return 0;
>> +
>> +        *new_head = *old_head + n;
>> +        if (is_sc)
>> +            r->cons.head = *new_head, success = 1;
>> +        else
>> +#ifdef PREFER
>> +            success = arch_rte_atomic32_cmpset(&r->cons.head,
>> +                            old_head, *new_head,
>> +                            0, __ATOMIC_ACQUIRE,
>> +                            __ATOMIC_RELAXED);
>> +#else
>> +            success = rte_atomic32_cmpset(&r->cons.head, *old_head,
>> +                    *new_head);
>> +#endif
> Same as above comment
>
>> +    } while (unlikely(success == 0));
>> +    return n;
>> +}
>> +
>> +/**
>> + * @internal Dequeue several objects from the ring
>> + *
>> + * @param r
>> + *   A pointer to the ring structure.
>> + * @param obj_table
>> + *   A pointer to a table of void * pointers (objects).
>> + * @param n
>> + *   The number of objects to pull from the ring.
>> + * @param behavior
>> + *   RTE_RING_QUEUE_FIXED:    Dequeue a fixed number of items from a ring
>> + *   RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from ring
>> + * @param is_sc
>> + *   Indicates whether to use single consumer or multi-consumer head update
>> + * @param available
>> + *   returns the number of remaining ring entries after the dequeue has
>> finished
>> + * @return
>> + *   - Actual number of objects dequeued.
>> + *     If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only.
>> + */
>> +static __rte_always_inline unsigned int
>> +__rte_ring_do_dequeue(struct rte_ring *r, void **obj_table,
>> +         unsigned int n, enum rte_ring_queue_behavior behavior,
>> +         int is_sc, unsigned int *available)
>> +{
> Duplicate function, no need to replicate it in both versions.
>
>
>> +    uint32_t cons_head, cons_next;
>> +    uint32_t entries;
>> +
>> +    n = __rte_ring_move_cons_head(r, is_sc, n, behavior,
>> +            &cons_head, &cons_next, &entries);
>> +    if (n == 0)
>> +        goto end;
>> +
>> +    DEQUEUE_PTRS(r, &r[1], cons_head, obj_table, n, void *);
>> +
>> +    update_tail(&r->cons, cons_head, cons_next, is_sc, 0);
>> +
>> +end:
>> +    if (available != NULL)
>> +        *available = entries - n;
>> +    return n;
>> +}
>> +
>> +#endif /* _RTE_RING_C11_MEM_H_ */
>> +
>> diff --git a/lib/librte_ring/rte_ring_generic.h
>> b/lib/librte_ring/rte_ring_generic.h
>> new file mode 100644
>> index 0000000..0ce6d57
>> --- /dev/null
>> +++ b/lib/librte_ring/rte_ring_generic.h
>> @@ -0,0 +1,268 @@
>> +/*-
>> + *   BSD LICENSE
>> + *
>> + *   Copyright(c) 2017 hxt-semitech. All rights reserved.
>> + *
>> + *   Redistribution and use in source and binary forms, with or without
>> + *   modification, are permitted provided that the following conditions
>> + *   are met:
>> + *
>> + *     * Redistributions of source code must retain the above copyright
>> + *       notice, this list of conditions and the following disclaimer.
>> + *     * Redistributions in binary form must reproduce the above copyright
>> + *       notice, this list of conditions and the following disclaimer in
>> + *       the documentation and/or other materials provided with the
>> + *       distribution.
>> + *     * Neither the name of hxt-semitech nor the names of its
>> + *       contributors may be used to endorse or promote products derived
>> + *       from this software without specific prior written permission.
>> + *
>> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
>> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
>> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
>> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
>> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
>> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
>> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
>> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
>> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
>> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
>> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
>> + */
>> +
>> +#ifndef _RTE_RING_GENERIC_H_
>> +#define _RTE_RING_GENERIC_H_
>> +
>> +static __rte_always_inline void
>> +update_tail(struct rte_ring_headtail *ht, uint32_t old_val, uint32_t
>> new_val,
>> +        uint32_t single, uint32_t enqueue)
>> +{
>> +    if (enqueue)
>> +        rte_smp_wmb();
>> +    else
>> +        rte_smp_rmb();
>> +    /*
>> +     * If there are other enqueues/dequeues in progress that preceded us,
>> +     * we need to wait for them to complete
>> +     */
>> +    if (!single)
>> +        while (unlikely(ht->tail != old_val))
>> +            rte_pause();
>> +
>> +    ht->tail = new_val;
>> +}
>> +
>> +/**
>> + * @internal This function updates the producer head for enqueue
>> + *
>> + * @param r
>> + *   A pointer to the ring structure
>> + * @param is_sp
>> + *   Indicates whether multi-producer path is needed or not
>> + * @param n
>> + *   The number of elements we will want to enqueue, i.e. how far should
>> the
>> + *   head be moved
>> + * @param behavior
>> + *   RTE_RING_QUEUE_FIXED:    Enqueue a fixed number of items from a ring
>> + *   RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible from ring
>> + * @param old_head
>> + *   Returns head value as it was before the move, i.e. where enqueue
>> starts
>> + * @param new_head
>> + *   Returns the current/new head value i.e. where enqueue finishes
>> + * @param free_entries
>> + *   Returns the amount of free space in the ring BEFORE head was moved
>> + * @return
>> + *   Actual number of objects enqueued.
>> + *   If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only.
>> + */
>> +static __rte_always_inline unsigned int
>> +__rte_ring_move_prod_head(struct rte_ring *r, int is_sp,
>> +        unsigned int n, enum rte_ring_queue_behavior behavior,
>> +        uint32_t *old_head, uint32_t *new_head,
>> +        uint32_t *free_entries)
>> +{
>> +    const uint32_t capacity = r->capacity;
>> +    unsigned int max = n;
>> +    int success;
>> +
>> +    do {
>> +        /* Reset n to the initial burst count */
>> +        n = max;
>> +
>> +        *old_head = r->prod.head;
> Adding rte_smp_rmb() does no harm here, as it is a NOOP for x86, and it is
> semantically correct too.
>
>
>> +        const uint32_t cons_tail = r->cons.tail;
>> +        /*
>> +         *  The subtraction is done between two unsigned 32bits value
>> +         * (the result is always modulo 32 bits even if we have
>> +         * *old_head > cons_tail). So 'free_entries' is always between 0
>> +         * and capacity (which is < size).
>> +         */
>> +        *free_entries = (capacity + cons_tail - *old_head);
>> +
>> +        /* check that we have enough room in ring */
>> +        if (unlikely(n > *free_entries))
>> +            n = (behavior == RTE_RING_QUEUE_FIXED) ?
>> +                    0 : *free_entries;
>> +
>> +        if (n == 0)
>> +            return 0;
>> +
>> +        *new_head = *old_head + n;
>> +        if (is_sp)
>> +            r->prod.head = *new_head, success = 1;
>> +        else
>> +            success = rte_atomic32_cmpset(&r->prod.head,
>> +                    *old_head, *new_head);
>> +    } while (unlikely(success == 0));
>> +    return n;
>> +}
>> +
>> +/**
>> + * @internal Enqueue several objects on the ring
>> + *
>> +  * @param r
>> + *   A pointer to the ring structure.
>> + * @param obj_table
>> + *   A pointer to a table of void * pointers (objects).
>> + * @param n
>> + *   The number of objects to add in the ring from the obj_table.
>> + * @param behavior
>> + *   RTE_RING_QUEUE_FIXED:    Enqueue a fixed number of items from a ring
>> + *   RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible from ring
>> + * @param is_sp
>> + *   Indicates whether to use single producer or multi-producer head update
>> + * @param free_space
>> + *   returns the amount of space after the enqueue operation has finished
>> + * @return
>> + *   Actual number of objects enqueued.
>> + *   If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only.
>> + */
>> +static __rte_always_inline unsigned int
>> +__rte_ring_do_enqueue(struct rte_ring *r, void * const *obj_table,
>> +         unsigned int n, enum rte_ring_queue_behavior behavior,
>> +         int is_sp, unsigned int *free_space)
>> +{
> Duplicate function, no need to replicate it in both versions.
>
>> +
>> +/**
>> + * @internal This function updates the consumer head for dequeue
>> + *
>> + * @param r
>> + *   A pointer to the ring structure
>> + * @param is_sc
>> + *   Indicates whether multi-consumer path is needed or not
>> + * @param n
>> + *   The number of elements we will want to enqueue, i.e. how far should
>> the
>> + *   head be moved
>> + * @param behavior
>> + *   RTE_RING_QUEUE_FIXED:    Dequeue a fixed number of items from a ring
>> + *   RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from ring
>> + * @param old_head
>> + *   Returns head value as it was before the move, i.e. where dequeue
>> starts
>> + * @param new_head
>> + *   Returns the current/new head value i.e. where dequeue finishes
>> + * @param entries
>> + *   Returns the number of entries in the ring BEFORE head was moved
>> + * @return
>> + *   - Actual number of objects dequeued.
>> + *     If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only.
>> + */
>> +static __rte_always_inline unsigned int
>> +__rte_ring_move_cons_head(struct rte_ring *r, int is_sc,
>> +        unsigned int n, enum rte_ring_queue_behavior behavior,
>> +        uint32_t *old_head, uint32_t *new_head,
>> +        uint32_t *entries)
>> +{
>> +    unsigned int max = n;
>> +    int success;
>> +
>> +    /* move cons.head atomically */
>> +    do {
>> +        /* Restore n as it may change every loop */
>> +        n = max;
>> +
>> +        *old_head = r->cons.head;
>> +        const uint32_t prod_tail = r->prod.tail;
> Same as above comment.
>
>> +        /* The subtraction is done between two unsigned 32bits value
>> +         * (the result is always modulo 32 bits even if we have
>> +         * cons_head > prod_tail). So 'entries' is always between 0
>> +         * and size(ring)-1. */
>> +        *entries = (prod_tail - *old_head);
>> +
>> +        /* Set the actual entries for dequeue */
>> +        if (n > *entries)
>> +            n = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : *entries;
>> +
>> +        if (unlikely(n == 0))
>> +            return 0;
>> +
>> +        *new_head = *old_head + n;
>> +        if (is_sc)
>> +            r->cons.head = *new_head, success = 1;
>> +        else
>> +            success = rte_atomic32_cmpset(&r->cons.head, *old_head,
>> +                    *new_head);
>> +    } while (unlikely(success == 0));
>> +    return n;
>> +}
>> +
>> +/**
>> + * @internal Dequeue several objects from the ring
>> + *
>> + * @param r
>> + *   A pointer to the ring structure.
>> + * @param obj_table
>> + *   A pointer to a table of void * pointers (objects).
>> + * @param n
>> + *   The number of objects to pull from the ring.
>> + * @param behavior
>> + *   RTE_RING_QUEUE_FIXED:    Dequeue a fixed number of items from a ring
>> + *   RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from ring
>> + * @param is_sc
>> + *   Indicates whether to use single consumer or multi-consumer head update
>> + * @param available
>> + *   returns the number of remaining ring entries after the dequeue has
>> finished
>> + * @return
>> + *   - Actual number of objects dequeued.
>> + *     If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only.
>> + */
>> +static __rte_always_inline unsigned int
>> +__rte_ring_do_dequeue(struct rte_ring *r, void **obj_table,
>> +         unsigned int n, enum rte_ring_queue_behavior behavior,
>> +         int is_sc, unsigned int *available)
>> +{
> Duplicate function, no need to replicate it in both versions.
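
Just to confirm the direction: with a single config to choose between the
two memory models, and the duplicated __rte_ring_do_enqueue()/
__rte_ring_do_dequeue() kept only in rte_ring.h, I understand the split
would look roughly like this (the config name below is just a placeholder,
not a final proposal):

/* rte_ring.h: only the ordering primitives differ per memory model;
 * __rte_ring_do_enqueue()/__rte_ring_do_dequeue() stay here, shared.
 */
#ifdef RTE_RING_USE_C11_MEM_MODEL
#include "rte_ring_c11_mem.h"   /* __atomic_load_n()/__atomic_store_n() based */
#else
#include "rte_ring_generic.h"   /* rte_smp_rmb()/rte_smp_wmb() based */
#endif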

-- 
Cheers,
Jia