From: Jerin Jacob
Date: Mon, 8 Mar 2021 22:19:47 +0530
Subject: Re: [dpdk-dev] [PATCH 1/7] eventdev: introduce event vector capability
To: Pavan Nikhilesh
Cc: Jerin Jacob, "Jayatheerthan, Jay", Erik Gabriel Carrillo,
 "Gujjar, Abhinandan S", "McDaniel, Timothy", Hemant Agrawal,
 "Van Haaren, Harry", Mattias Rönnblom, Liang Ma, Ray Kinsella,
 Neil Horman, dpdk-dev
In-Reply-To: <20210220220957.4583-2-pbhagavatula@marvell.com>
References: <20210220220957.4583-1-pbhagavatula@marvell.com>
 <20210220220957.4583-2-pbhagavatula@marvell.com>

On Sun, Feb 21, 2021 at 3:40 AM wrote:
>
> From: Pavan Nikhilesh
>
> Introduce rte_event_vector datastructure which is capable of holding
> multiple uintptr_t of the same flow thereby allowing applications
> to vectorize their pipeline and reducing the complexity of pipelining
> the events across multiple stages.
> This approach also reduces the scheduling overhead on a event device.
>
> Add a event vector mempool create handler to create mempools based on
> the best mempool ops available on a given platform.
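
For context while reviewing, a rough and untested sketch of the worker-side
usage this enables, based on the struct and event type added below
(process_pkt(), dev_id and port_id are placeholders, not part of this patch):

#include <rte_eventdev.h>
#include <rte_mbuf.h>

/* Placeholder for the application's per-packet processing. */
static void
process_pkt(struct rte_mbuf *m)
{
	rte_pktmbuf_free(m);
}

static void
worker(uint8_t dev_id, uint8_t port_id)
{
	struct rte_event ev;
	uint16_t i;

	while (rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0)) {
		if (ev.event_type & RTE_EVENT_TYPE_VECTOR) {
			/* One event carries nb_elem packets of the same flow
			 * instead of a single mbuf.
			 */
			for (i = 0; i < ev.vec->nb_elem; i++)
				process_pkt(ev.vec->mbufs[i]);
		} else {
			process_pkt(ev.mbuf);
		}
		/* Forwarding/releasing the event is omitted for brevity. */
	}
}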
>
> Signed-off-by: Pavan Nikhilesh
> ---
>  doc/guides/prog_guide/eventdev.rst |  36 ++++++++-
>  lib/librte_eventdev/rte_eventdev.h | 113 ++++++++++++++++++++++++++++-
>  lib/librte_eventdev/version.map    |   3 +
>  3 files changed, 149 insertions(+), 3 deletions(-)
>
> diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
> index ccde086f6..d19c91ab0 100644
> --- a/doc/guides/prog_guide/eventdev.rst
> +++ b/doc/guides/prog_guide/eventdev.rst
> @@ -63,13 +63,45 @@ the actual event being scheduled is. The payload is a union of the following:
>  * ``uint64_t u64``
>  * ``void *event_ptr``
>  * ``struct rte_mbuf *mbuf``
> +* ``struct rte_event_vector *vec``
>
> -These three items in a union occupy the same 64 bits at the end of the rte_event
> +These four items in a union occupy the same 64 bits at the end of the rte_event
>  structure. The application can utilize the 64 bits directly by accessing the
> -u64 variable, while the event_ptr and mbuf are provided as convenience
> +u64 variable, while the event_ptr, mbuf, vec are provided as convenience
>  variables. For example the mbuf pointer in the union can used to schedule a
>  DPDK packet.
>
> +Event Vector
> +~~~~~~~~~~~~
> +
> +The rte_event_vector struct contains a vector of elements defined by the event
> +type specified in the ``rte_event``. The event_vector structure contains the
> +following data:
> +
> +* ``nb_elem`` - The number of elements held within the vector.
> +
> +Similar to ``rte_event`` the payload of event vector is also a union, allowing
> +flexibility in what the actual vector is.
> +
> +* ``struct rte_mbuf *mbufs[0]`` - An array of mbufs.
> +* ``void *ptrs[0]`` - An array of pointers.
> +* ``uint64_t *u64s[0]`` - An array of uint64_t elements.
> +
> +The size of the event vector is related to the total number of elements it is
> +configured to hold, this is achieved by making `rte_event_vector` a variable
> +length structure.
> +A helper function is provided to create a mempool that holds event vector, which
> +takes name of the pool, total number of required ``rte_event_vector``,
> +cache size, number of elements in each ``rte_event_vector`` and socket id.
> +
> +.. code-block:: c
> +
> +        rte_event_vector_pool_create("vector_pool", nb_event_vectors, cache_sz,
> +                                     nb_elements_per_vector, socket_id);
> +
> +The function ``rte_event_vector_pool_create`` creates mempool with the best
> +platform mempool ops.
> +
>  Queues
>  ~~~~~~
>
> diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
> index ce1fc2ce0..ff6cb3e6a 100644
> --- a/lib/librte_eventdev/rte_eventdev.h
> +++ b/lib/librte_eventdev/rte_eventdev.h
> @@ -212,8 +212,10 @@ extern "C" {
>
>  #include
>  #include
> -#include
>  #include
> +#include
> +#include
> +#include
>
>  #include "rte_eventdev_trace_fp.h"
>
> @@ -913,6 +915,25 @@ rte_event_dev_stop_flush_callback_register(uint8_t dev_id,
>  int
>  rte_event_dev_close(uint8_t dev_id);
>
> +/**
> + * Event vector structure.
> + */
> +struct rte_event_vector {
> +	uint64_t nb_elem : 16;
> +	/**< Number of elements in this event vector. */
> +	uint64_t rsvd : 48;
> +	uint64_t impl_opaque;
> +	union {
> +		struct rte_mbuf *mbufs[0];
> +		void *ptrs[0];
> +		uint64_t *u64s[0];
> +	} __rte_aligned(16);
> +	/**< Start of the vector array union. Depending upon the event type the
> +	 * vector array can be an array of mbufs or pointers or opaque u64
> +	 * values.
> +	 */
> +};
> +
>  /* Scheduler type definitions */
>  #define RTE_SCHED_TYPE_ORDERED          0
>  /**< Ordered scheduling
> @@ -986,6 +1007,21 @@ rte_event_dev_close(uint8_t dev_id);
>   */
>  #define RTE_EVENT_TYPE_ETH_RX_ADAPTER   0x4
>  /**< The event generated from event eth Rx adapter */
> +#define RTE_EVENT_TYPE_VECTOR           0x8
> +/**< Indicates that event is a vector.
> + * All vector event types should be an logical OR of EVENT_TYPE_VECTOR.
> + * This simplifies the pipeline design as we can split processing the events
> + * between vector events and normal event across event types.
> + * Example:
> + * if (ev.event_type & RTE_EVENT_TYPE_VECTOR) {
> + *      // Classify and handle vector event.

I think we can remove the C++-style comments from the documentation,
i.e. change from // to /* */.

> + * } else {
> + *      // Classify and handle event.
> + * }
> + */
> +#define RTE_EVENT_TYPE_CPU_VECTOR (RTE_EVENT_TYPE_VECTOR | RTE_EVENT_TYPE_CPU)
> +/**< The event vector generated from cpu for pipelining. */
> +
>  #define RTE_EVENT_TYPE_MAX              0x10
>  /**< Maximum number of event types */
>
> @@ -1108,6 +1144,8 @@ struct rte_event {
>  		/**< Opaque event pointer */
>  		struct rte_mbuf *mbuf;
>  		/**< mbuf pointer if dequeued event is associated with mbuf */
> +		struct rte_event_vector *vec;
> +		/**< Event vector pointer. */
>  	};
>  };
>
> @@ -2023,6 +2061,79 @@ rte_event_dev_xstats_reset(uint8_t dev_id,
>   */
>  int rte_event_dev_selftest(uint8_t dev_id);
>
> +/**
> + * Get the memory required per event vector based on the number of elements per
> + * vector.
> + * This should be used to create the mempool that holds the event vectors.
> + *
> + * @param name
> + *   The name of the vector pool.
> + * @param n
> + *   The number of elements in the mbuf pool.
> + * @param cache_size
> + *   Size of the per-core object cache. See rte_mempool_create() for
> + *   details.
> + * @param nb_elem
> + *   The number of elements then a single event vector should be able to hold.
> + * @param socket_id
> + *   The socket identifier where the memory should be allocated. The
> + *   value can be *SOCKET_ID_ANY* if there is no NUMA constraint for the
> + *   reserved zone
> + *
> + * @return
> + *   The pointer to the new allocated mempool, on success. NULL on error

s/new/newly

> + *   with rte_errno set appropriately. Possible rte_errno values include:
> + *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
> + *    - E_RTE_SECONDARY - function was called from a secondary process instance
> + *    - EINVAL - cache size provided is too large, or priv_size is not aligned.
> + *    - ENOSPC - the maximum number of memzones has already been allocated
> + *    - EEXIST - a memzone with the same name already exists
> + *    - ENOMEM - no appropriate memory area found in which to create memzone
> + */
> +__rte_experimental
> +static inline struct rte_mempool *
> +rte_event_vector_pool_create(const char *name, unsigned int n,
> +			     unsigned int cache_size, uint16_t nb_elem,
> +			     int socket_id)
> +{
> +	const char *mp_ops_name;
> +	struct rte_mempool *mp;
> +	unsigned int elt_sz;
> +	int ret;
> +
> +	if (!nb_elem) {
> +		RTE_LOG(ERR, EVENTDEV,
> +			"Invalid number of elements=%d requested\n", nb_elem);
> +		rte_errno = -EINVAL;
> +		return NULL;
> +	}
> +
> +	elt_sz =
> +		sizeof(struct rte_event_vector) + (nb_elem * sizeof(uintptr_t));
> +	mp = rte_mempool_create_empty(name, n, elt_sz, cache_size, 0, socket_id,
> +				      0);
> +	if (mp == NULL)
> +		return NULL;
> +
> +	mp_ops_name = rte_mbuf_best_mempool_ops();
> +	ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL);
> +	if (ret != 0) {
> +		RTE_LOG(ERR, EVENTDEV, "error setting mempool handler\n");
> +		rte_mempool_free(mp);
> +		rte_errno = -ret;

See below.

> +		return NULL;
> +	}
> +
> +	ret = rte_mempool_populate_default(mp);
> +	if (ret < 0) {
> +		rte_mempool_free(mp);
> +		rte_errno = -ret;
> +		return NULL;

Make it a goto err: kind of structure to avoid the code duplication
(see above). A rough sketch of what I mean is below, after the diff.

> +	}
> +
> +	return mp;
> +}
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map
> index 3e5c09cfd..a070ef56e 100644
> --- a/lib/librte_eventdev/version.map
> +++ b/lib/librte_eventdev/version.map
> @@ -138,6 +138,9 @@ EXPERIMENTAL {
>  	__rte_eventdev_trace_port_setup;
>  	# added in 20.11
>  	rte_event_pmd_pci_probe_named;
> +
> +	#added in 21.05
> +	rte_event_vector_pool_create;
>  };
>
>  INTERNAL {
> --
> 2.17.1
>
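
Also, since RTE_EVENT_TYPE_CPU_VECTOR is introduced here, a rough and
untested sketch of how a CPU-side producer could use it together with the
new pool helper; the pool/cache sizes, queue_id, sched_type and error codes
are made-up values for illustration, not from this patch:

#include <errno.h>
#include <rte_eventdev.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

static struct rte_mempool *vector_pool;

/* Init time: a pool of 16K vectors, each able to hold 32 mbufs (made up). */
static int
vector_pool_init(void)
{
	vector_pool = rte_event_vector_pool_create("vector_pool", 16 * 1024,
						   128, 32, rte_socket_id());
	return vector_pool == NULL ? -rte_errno : 0;
}

/* Fast path: wrap a burst of mbufs of the same flow into one vector event. */
static int
enqueue_cpu_vector(uint8_t dev_id, uint8_t port_id, uint8_t queue_id,
		   struct rte_mbuf **pkts, uint16_t nb_pkts)
{
	struct rte_event_vector *vec;
	struct rte_event ev = {0};
	uint16_t i;

	if (rte_mempool_get(vector_pool, (void **)&vec) < 0)
		return -ENOBUFS;

	/* nb_pkts must not exceed the nb_elem the pool was created with. */
	vec->nb_elem = nb_pkts;
	for (i = 0; i < nb_pkts; i++)
		vec->mbufs[i] = pkts[i];

	ev.op = RTE_EVENT_OP_NEW;
	ev.event_type = RTE_EVENT_TYPE_CPU_VECTOR;
	ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
	ev.queue_id = queue_id;
	ev.vec = vec;

	if (rte_event_enqueue_burst(dev_id, port_id, &ev, 1) != 1) {
		rte_mempool_put(vector_pool, vec);
		return -ENOBUFS;
	}

	return 0;
}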
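
And this is roughly what I meant above by the goto err: structure —
untested, intended to behave the same as the posted version, just without
duplicating the cleanup:

static inline struct rte_mempool *
rte_event_vector_pool_create(const char *name, unsigned int n,
			     unsigned int cache_size, uint16_t nb_elem,
			     int socket_id)
{
	const char *mp_ops_name;
	struct rte_mempool *mp;
	unsigned int elt_sz;
	int ret;

	if (!nb_elem) {
		RTE_LOG(ERR, EVENTDEV,
			"Invalid number of elements=%d requested\n", nb_elem);
		rte_errno = -EINVAL;
		return NULL;
	}

	elt_sz =
		sizeof(struct rte_event_vector) + (nb_elem * sizeof(uintptr_t));
	mp = rte_mempool_create_empty(name, n, elt_sz, cache_size, 0, socket_id,
				      0);
	if (mp == NULL)
		return NULL;

	mp_ops_name = rte_mbuf_best_mempool_ops();
	ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL);
	if (ret != 0) {
		RTE_LOG(ERR, EVENTDEV, "error setting mempool handler\n");
		goto err;
	}

	ret = rte_mempool_populate_default(mp);
	if (ret < 0)
		goto err;

	return mp;

err:
	/* Single cleanup path for both failures after the pool is allocated. */
	rte_mempool_free(mp);
	rte_errno = -ret;
	return NULL;
}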