From: Jerin Jacob
Date: Tue, 23 Mar 2021 16:42:09 +0530
To: Pavan Nikhilesh
Cc: Jerin Jacob, "Jayatheerthan, Jay", Erik Gabriel Carrillo,
 "Gujjar, Abhinandan S", "McDaniel, Timothy", Hemant Agrawal,
 "Van Haaren, Harry", Mattias Rönnblom, Liang Ma, Ray Kinsella,
 Neil Horman, dpdk-dev
In-Reply-To: <20210319205718.1436-2-pbhagavatula@marvell.com>
References: <20210316200156.252-1-pbhagavatula@marvell.com>
 <20210319205718.1436-1-pbhagavatula@marvell.com>
 <20210319205718.1436-2-pbhagavatula@marvell.com>
Subject: Re: [dpdk-dev] [PATCH v4 1/8] eventdev: introduce event vector capability

On Sat, Mar 20, 2021 at 2:27 AM wrote:
>
> From: Pavan Nikhilesh
>
> Introduce rte_event_vector datastructure which is capable of holding
> multiple uintptr_t of the same flow thereby allowing applications
> to vectorize their pipeline and reducing the complexity of pipelining
> the events across multiple stages.
> This approach also reduces the scheduling overhead on a event device.
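
Just to anchor the discussion for the archive, a rough worker-side
sketch of how I read the intent (process_pkt() and process_event()
below are placeholder handlers, not part of this patch):

	struct rte_event ev;

	if (rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0)) {
		if (ev.event_type & RTE_EVENT_TYPE_VECTOR) {
			/* One dequeue now hands back a burst of
			 * same-flow objects instead of a single one.
			 */
			for (uint16_t i = 0; i < ev.vec->nb_elem; i++)
				process_pkt(ev.vec->mbufs[i]);
		} else {
			process_event(&ev); /* single-object event */
		}
	}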
>
> Add a event vector mempool create handler to create mempools based on
> the best mempool ops available on a given platform.
>
> Signed-off-by: Pavan Nikhilesh

Some minor comments below. Feel free to add Acked-by: Jerin Jacob
after fixing the comments and Ray's suggestion.

> ---
>  doc/guides/prog_guide/eventdev.rst | 36 +++++++++-
>  lib/librte_eventdev/rte_eventdev.h | 112 ++++++++++++++++++++++++++++-
>  lib/librte_eventdev/version.map    |   3 +
>  3 files changed, 148 insertions(+), 3 deletions(-)
>
> diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
> index ccde086f6..fda9c3743 100644
> --- a/doc/guides/prog_guide/eventdev.rst
> +++ b/doc/guides/prog_guide/eventdev.rst
> @@ -63,13 +63,45 @@ the actual event being scheduled is. The payload is a union of the following:
>  * ``uint64_t u64``
>  * ``void *event_ptr``
>  * ``struct rte_mbuf *mbuf``
> +* ``struct rte_event_vector *vec``
>
> -These three items in a union occupy the same 64 bits at the end of the rte_event
> +These four items in a union occupy the same 64 bits at the end of the rte_event
>  structure. The application can utilize the 64 bits directly by accessing the
> -u64 variable, while the event_ptr and mbuf are provided as convenience
> +u64 variable, while the event_ptr, mbuf, vec are provided as a convenience
>  variables. For example the mbuf pointer in the union can used to schedule a
>  DPDK packet.
>
> +Event Vector
> +~~~~~~~~~~~~
> +
> +The rte_event_vector struct contains a vector of elements defined by the event
> +type specified in the ``rte_event``. The event_vector structure contains the
> +following data:
> +
> +* ``nb_elem`` - The number of elements held within the vector.
> +
> +Similar to ``rte_event`` the payload of event vector is also a union, allowing
> +flexibility in what the actual vector is.
> +
> +* ``struct rte_mbuf *mbufs[0]`` - An array of mbufs.
> +* ``void *ptrs[0]`` - An array of pointers.
> +* ``uint64_t *u64s[0]`` - An array of uint64_t elements.
> +
> +The size of the event vector is related to the total number of elements it is
> +configured to hold, this is achieved by making `rte_event_vector` a variable
> +length structure.
> +A helper function is provided to create a mempool that holds event vector, which
> +takes name of the pool, total number of required ``rte_event_vector``,
> +cache size, number of elements in each ``rte_event_vector`` and socket id.
> +
> +.. code-block:: c
> +
> +        rte_event_vector_pool_create("vector_pool", nb_event_vectors, cache_sz,
> +                                     nb_elements_per_vector, socket_id);
> +
> +The function ``rte_event_vector_pool_create`` creates mempool with the best
> +platform mempool ops.
> +
>  Queues
>  ~~~~~~
>
> diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
> index ce1fc2ce0..5586a3f15 100644
> --- a/lib/librte_eventdev/rte_eventdev.h
> +++ b/lib/librte_eventdev/rte_eventdev.h
> @@ -212,8 +212,10 @@ extern "C" {
>
>  #include <rte_common.h>
>  #include <rte_config.h>
> -#include <rte_memory.h>
>  #include <rte_errno.h>
> +#include <rte_mbuf_pool_ops.h>
> +#include <rte_memory.h>
> +#include <rte_mempool.h>
>
>  #include "rte_eventdev_trace_fp.h"
>
> @@ -913,6 +915,25 @@ rte_event_dev_stop_flush_callback_register(uint8_t dev_id,
>  int
>  rte_event_dev_close(uint8_t dev_id);
>
> +/**
> + * Event vector structure.
> + */
> +struct rte_event_vector {
> +	uint64_t nb_elem : 16;
> +	/**< Number of elements in this event vector. */
> +	uint64_t rsvd : 48;

Please add a comment here so the Doxygen output renders correctly.

> +	uint64_t impl_opaque;

Please add a comment here so the Doxygen output renders correctly.
> +	union {
> +		struct rte_mbuf *mbufs[0];
> +		void *ptrs[0];
> +		uint64_t *u64s[0];
> +	} __rte_aligned(16);
> +	/**< Start of the vector array union. Depending upon the event type the
> +	 * vector array can be an array of mbufs or pointers or opaque u64
> +	 * values.
> +	 */
> +};
> +
>  /* Scheduler type definitions */
>  #define RTE_SCHED_TYPE_ORDERED          0
>  /**< Ordered scheduling
> @@ -986,6 +1007,21 @@ rte_event_dev_close(uint8_t dev_id);
>   */
>  #define RTE_EVENT_TYPE_ETH_RX_ADAPTER   0x4
>  /**< The event generated from event eth Rx adapter */
> +#define RTE_EVENT_TYPE_VECTOR           0x8
> +/**< Indicates that event is a vector.
> + * All vector event types should be an logical OR of EVENT_TYPE_VECTOR.

an logical -> a logical?

> + * This simplifies the pipeline design as we can split processing the events

we -> one

> + * between vector events and normal event across event types.
> + * Example:
> + *	if (ev.event_type & RTE_EVENT_TYPE_VECTOR) {
> + *		// Classify and handle vector event.
> + *	} else {
> + *		// Classify and handle event.
> + *	}
> + */
> +#define RTE_EVENT_TYPE_CPU_VECTOR (RTE_EVENT_TYPE_VECTOR | RTE_EVENT_TYPE_CPU)
> +/**< The event vector generated from cpu for pipelining. */
> +
>  #define RTE_EVENT_TYPE_MAX              0x10
>  /**< Maximum number of event types */
>
> @@ -1108,6 +1144,8 @@ struct rte_event {
>  		/**< Opaque event pointer */
>  		struct rte_mbuf *mbuf;
>  		/**< mbuf pointer if dequeued event is associated with mbuf */
> +		struct rte_event_vector *vec;
> +		/**< Event vector pointer. */
>  	};
>  };
>
> @@ -2023,6 +2061,78 @@ rte_event_dev_xstats_reset(uint8_t dev_id,
>   */
>  int rte_event_dev_selftest(uint8_t dev_id);
>
> +/**
> + * Get the memory required per event vector based on the number of elements per
> + * vector.
> + * This should be used to create the mempool that holds the event vectors.
> + *
> + * @param name
> + *   The name of the vector pool.
> + * @param n
> + *   The number of elements in the mbuf pool.
> + * @param cache_size
> + *   Size of the per-core object cache. See rte_mempool_create() for
> + *   details.
> + * @param nb_elem
> + *   The number of elements then a single event vector should be able to hold.
> + * @param socket_id
> + *   The socket identifier where the memory should be allocated. The
> + *   value can be *SOCKET_ID_ANY* if there is no NUMA constraint for the
> + *   reserved zone
> + *
> + * @return
> + *   The pointer to the newly allocated mempool, on success. NULL on error
> + *   with rte_errno set appropriately. Possible rte_errno values include:
> + *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
> + *    - E_RTE_SECONDARY - function was called from a secondary process instance
> + *    - EINVAL - cache size provided is too large, or priv_size is not aligned.
> + *    - ENOSPC - the maximum number of memzones has already been allocated
> + *    - EEXIST - a memzone with the same name already exists
> + *    - ENOMEM - no appropriate memory area found in which to create memzone
> + */
> +__rte_experimental
> +static inline struct rte_mempool *
> +rte_event_vector_pool_create(const char *name, unsigned int n,
> +			     unsigned int cache_size, uint16_t nb_elem,
> +			     int socket_id)
> +{
> +	const char *mp_ops_name;
> +	struct rte_mempool *mp;
> +	unsigned int elt_sz;
> +	int ret;
> +
> +	if (!nb_elem) {
> +		RTE_LOG(ERR, EVENTDEV,
> +			"Invalid number of elements=%d requested\n", nb_elem);
> +		rte_errno = -EINVAL;
> +		return NULL;
> +	}
> +
> +	elt_sz =
> +		sizeof(struct rte_event_vector) + (nb_elem * sizeof(uintptr_t));
> +	mp = rte_mempool_create_empty(name, n, elt_sz, cache_size, 0, socket_id,
> +				      0);
> +	if (mp == NULL)
> +		return NULL;
> +
> +	mp_ops_name = rte_mbuf_best_mempool_ops();
> +	ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL);
> +	if (ret != 0) {
> +		RTE_LOG(ERR, EVENTDEV, "error setting mempool handler\n");
> +		goto err;
> +	}
> +
> +	ret = rte_mempool_populate_default(mp);
> +	if (ret < 0)
> +		goto err;
> +
> +	return mp;
> +err:
> +	rte_mempool_free(mp);
> +	rte_errno = -ret;
> +	return NULL;
> +}
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map
> index 3e5c09cfd..a070ef56e 100644
> --- a/lib/librte_eventdev/version.map
> +++ b/lib/librte_eventdev/version.map
> @@ -138,6 +138,9 @@ EXPERIMENTAL {
>  	__rte_eventdev_trace_port_setup;
>  	# added in 20.11
>  	rte_event_pmd_pci_probe_named;
> +
> +	#added in 21.05
> +	rte_event_vector_pool_create;
>  };
>
>  INTERNAL {
> --
> 2.17.1
>
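
For completeness, a rough usage sketch of the new helper on the
producer side (dev_id, port_id, pkts[] and nb_pkts are assumed from
the surrounding application; queue setup and error paths trimmed):

	struct rte_event_vector *vec;
	struct rte_mempool *vp;
	struct rte_event ev = {0};

	/* 8K vectors, cache of 128, each vector holding up to 32 mbufs. */
	vp = rte_event_vector_pool_create("vector_pool", 8192, 128, 32,
					  rte_socket_id());
	if (vp == NULL)
		return -rte_errno;

	if (rte_mempool_get(vp, (void **)&vec) < 0)
		return -ENOENT;

	vec->nb_elem = nb_pkts;
	memcpy(vec->mbufs, pkts, nb_pkts * sizeof(pkts[0])); /* one flow */

	ev.op = RTE_EVENT_OP_NEW;
	ev.event_type = RTE_EVENT_TYPE_CPU_VECTOR;
	ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
	ev.vec = vec;
	rte_event_enqueue_burst(dev_id, port_id, &ev, 1);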