Date: Mon, 24 Jan 2022 18:04:22 +0100
From: Olivier Matz
To: Zhiheng Chen
Cc: Andrew Rybchenko, dev@dpdk.org
Subject: Re: [PATCH v3] mempool: fix the description of some function return values
References: <20211222082550.436-1-chenzhiheng0227@gmail.com>
 <20211223100741.21292-1-chenzhiheng0227@gmail.com>
In-Reply-To: <20211223100741.21292-1-chenzhiheng0227@gmail.com>

Hi Zhiheng,

Thank you for your patch proposal.

On Thu, Dec 23, 2021 at 10:07:41AM +0000, Zhiheng Chen wrote:
> In rte_mempool_ring.c, the committer uses the symbol ENOBUFS to
> describe the return value of function common_ring_sc_dequeue,
> but in rte_mempool.h, the symbol ENOENT is used to describe
> the return value of function rte_mempool_get.
> If the user of dpdk uses the symbol ENOENT as the judgment
> condition of the return value, it may cause some abnormal phenomena
> in their own programs, such as when the mempool space is exhausted.

The issue I see with this approach is that currently there is no
standard error code across the mempool drivers' dequeue callbacks:

  bucket:    -ENOBUFS
  cn10k:     -ENOENT
  cn9k:      -ENOENT
  dpaa:      -1, -ENOBUFS
  dpaa2:     -1, -ENOENT, -ENOBUFS
  octeontx:  -ENOMEM
  ring:      -ENOBUFS
  stack:     -ENOBUFS

After your patch, the drivers do not match the documentation.

I agree it would be better to return the same code for the same error,
whichever driver is used. But I think we should keep the possibility
for a driver to return another code: for instance, it could be a
hardware error in the case of a hardware mempool driver.

I see 2 possibilities:

1/ simplest one: relax the documentation and do not talk about -ENOENT
   or -ENOBUFS, just say that a negative value is an error

2/ fix drivers and doc: the mempool drivers should be modified first,
   knowing that changing them is an ABI modification (which I think is
   acceptable, because the error code varies depending on the driver).
   Then this patch could be applied.

For reference, note that the documentation was probably right initially,
but the behavior changed in commit cfa7c9e6fc1f ("ring: make bulk and
burst return values consistent"), which returns -ENOBUFS instead of
-ENOENT on dequeue error.
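To illustrate possibility 1/ from the application side, here is a
minimal sketch of a caller that treats any non-zero return from
rte_mempool_get_bulk() as a failure, instead of comparing against a
specific errno. BURST_SIZE and refill_burst() are made-up names for
the example, not part of the patch:

#include <rte_mempool.h>

#define BURST_SIZE 32

static int
refill_burst(struct rte_mempool *mp)
{
	void *objs[BURST_SIZE];
	int ret;

	ret = rte_mempool_get_bulk(mp, objs, BURST_SIZE);
	if (ret != 0) {
		/*
		 * Whatever negative code the driver returned (-ENOENT,
		 * -ENOBUFS or a driver-specific one), no object has been
		 * retrieved, so there is nothing to put back.
		 */
		return ret;
	}

	/* ... use the objects, then return them to the pool ... */
	rte_mempool_put_bulk(mp, objs, BURST_SIZE);
	return 0;
}

A check written this way keeps working whichever of the two options
above is chosen for the documentation.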
> v2:
> * Update the descriptions of underlying functions.
>
> v3:
> * Correct the description that the return value cannot be greater than 0
> * Update the description of the dequeue function prototype
>
> Signed-off-by: Zhiheng Chen
> ---
>  lib/mempool/rte_mempool.h | 34 ++++++++++++++++++++++------------
>  1 file changed, 22 insertions(+), 12 deletions(-)
>
> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> index 1e7a3c1527..cae81d8a32 100644
> --- a/lib/mempool/rte_mempool.h
> +++ b/lib/mempool/rte_mempool.h
> @@ -447,6 +447,16 @@ typedef int (*rte_mempool_enqueue_t)(struct rte_mempool *mp,
>
>  /**
>   * Dequeue an object from the external pool.
> + *
> + * @param mp
> + *   Pointer to the memory pool.
> + * @param obj_table
> + *   Pointer to a table of void * pointers (objects).
> + * @param n
> + *   Number of objects to get.
> + * @return
> + *   - 0: Success; got n objects.
> + *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.

Also, we should have, in addition to -ENOBUFS:

  - <0: Another driver-specific error code (-errno)

This comment applies to the other functions below.

>   */
>  typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
>  		void **obj_table, unsigned int n);
> @@ -738,7 +748,7 @@ rte_mempool_ops_alloc(struct rte_mempool *mp);
>   *   Number of objects to get.
>   * @return
>   *   - 0: Success; got n objects.
> - *   - <0: Error; code of dequeue function.
> + *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
>   */
>  static inline int
>  rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
> @@ -1452,8 +1462,8 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
>   * @param cache
>   *   A pointer to a mempool cache structure. May be NULL if not needed.
>   * @return
> - *   - >=0: Success; number of objects supplied.
> - *   - <0: Error; code of ring dequeue function.
> + *   - 0: Success; got n objects.
> + *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
>   */
>  static __rte_always_inline int
>  rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
> @@ -1521,7 +1531,7 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
>   * Get several objects from the mempool.
>   *
>   * If cache is enabled, objects will be retrieved first from cache,
> - * subsequently from the common pool. Note that it can return -ENOENT when
> + * subsequently from the common pool. Note that it can return -ENOBUFS when
>   * the local cache and common pool are empty, even if cache from other
>   * lcores are full.
>   *
> @@ -1534,8 +1544,8 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
>   * @param cache
>   *   A pointer to a mempool cache structure. May be NULL if not needed.
>   * @return
> - *   - 0: Success; objects taken.
> - *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
> + *   - 0: Success; got n objects.
> + *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
>   */
>  static __rte_always_inline int
>  rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
> @@ -1557,7 +1567,7 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
>   * mempool creation time (see flags).
>   *
>   * If cache is enabled, objects will be retrieved first from cache,
> - * subsequently from the common pool. Note that it can return -ENOENT when
> + * subsequently from the common pool. Note that it can return -ENOBUFS when
>   * the local cache and common pool are empty, even if cache from other
>   * lcores are full.
>   *
> @@ -1568,8 +1578,8 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
>   * @param n
>   *   The number of objects to get from the mempool to obj_table.
>   * @return
> - *   - 0: Success; objects taken
> - *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
> + *   - 0: Success; got n objects.
> + *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
>   */
>  static __rte_always_inline int
>  rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int n)
> @@ -1588,7 +1598,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int n)
>   * mempool creation (see flags).
>   *
>   * If cache is enabled, objects will be retrieved first from cache,
> - * subsequently from the common pool. Note that it can return -ENOENT when
> + * subsequently from the common pool. Note that it can return -ENOBUFS when
>   * the local cache and common pool are empty, even if cache from other
>   * lcores are full.
>   *
> @@ -1597,8 +1607,8 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int n)
>   * @param obj_p
>   *   A pointer to a void * pointer (object) that will be filled.
>   * @return
> - *   - 0: Success; objects taken.
> - *   - -ENOENT: Not enough entries in the mempool; no object is retrieved.
> + *   - 0: Success; got n objects.
> + *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
>   */
>  static __rte_always_inline int
>  rte_mempool_get(struct rte_mempool *mp, void **obj_p)
> --
> 2.32.0
>

Thanks,
Olivier