Subject: Re: [dpdk-dev] [PATCH v4 1/8] eventdev: introduce event vector capability
From: "Kinsella, Ray" <mdr@ashroe.eu>
To: pbhagavatula@marvell.com, jerinj@marvell.com, jay.jayatheerthan@intel.com,
 erik.g.carrillo@intel.com, abhinandan.gujjar@intel.com,
 timothy.mcdaniel@intel.com, hemant.agrawal@nxp.com,
 harry.van.haaren@intel.com, mattias.ronnblom@ericsson.com,
 liang.j.ma@intel.com, Neil Horman
Cc: dev@dpdk.org
Date: Mon, 22 Mar 2021 09:06:51 +0000
Message-ID: <94200d6a-c2c9-6b09-c7b1-eb67ab8b1315@ashroe.eu>
In-Reply-To: <20210319205718.1436-2-pbhagavatula@marvell.com>
References: <20210316200156.252-1-pbhagavatula@marvell.com>
 <20210319205718.1436-1-pbhagavatula@marvell.com>
 <20210319205718.1436-2-pbhagavatula@marvell.com>

On 19/03/2021 20:57, pbhagavatula@marvell.com wrote:
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> 
> Introduce the rte_event_vector data structure, which is capable of
> holding multiple uintptr_t values belonging to the same flow, thereby
> allowing applications to vectorize their pipeline and reduce the
> complexity of pipelining events across multiple stages.
> This approach also reduces the scheduling overhead on an event device.
> 
> Add an event vector mempool create handler to create mempools based on
> the best mempool ops available on a given platform.
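
To make the "vectorize their pipeline" point concrete, here is a rough
sketch of a consumer stage under the proposed API. Illustrative only:
process_packet() is a stand-in for real application logic, and the
application is assumed to own the vector once it has been dequeued.

static void
worker_loop(uint8_t dev_id, uint8_t port_id)
{
	struct rte_event ev;
	uint16_t i;

	while (rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0)) {
		if (ev.event_type & RTE_EVENT_TYPE_VECTOR) {
			/* One pass over every packet of the flow. */
			for (i = 0; i < ev.vec->nb_elem; i++)
				process_packet(ev.vec->mbufs[i]);
			/* Hand the vector back to the pool it came from. */
			rte_mempool_put(rte_mempool_from_obj(ev.vec), ev.vec);
		} else {
			process_packet(ev.mbuf);
		}
	}
}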
> 
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
>  doc/guides/prog_guide/eventdev.rst |  36 +++++++++-
>  lib/librte_eventdev/rte_eventdev.h | 112 ++++++++++++++++++++++++++++-
>  lib/librte_eventdev/version.map    |   3 +
>  3 files changed, 148 insertions(+), 3 deletions(-)
> 
[SNIP]
> 
> diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
> index ce1fc2ce0..5586a3f15 100644
> --- a/lib/librte_eventdev/rte_eventdev.h
> +++ b/lib/librte_eventdev/rte_eventdev.h
> @@ -212,8 +212,10 @@ extern "C" {
>  
>  #include <rte_common.h>
>  #include <rte_config.h>
> -#include <rte_memory.h>
>  #include <rte_errno.h>
> +#include <rte_mbuf_pool_ops.h>
> +#include <rte_memory.h>
> +#include <rte_mempool.h>
>  
>  #include "rte_eventdev_trace_fp.h"
>  
> @@ -913,6 +915,25 @@ rte_event_dev_stop_flush_callback_register(uint8_t dev_id,
>  int
>  rte_event_dev_close(uint8_t dev_id);
>  
> +/**
> + * Event vector structure.
> + */
> +struct rte_event_vector {
> +	uint64_t nb_elem : 16;
> +	/**< Number of elements in this event vector. */
> +	uint64_t rsvd : 48;
> +	uint64_t impl_opaque;
> +	union {
> +		struct rte_mbuf *mbufs[0];
> +		void *ptrs[0];
> +		uint64_t *u64s[0];
> +	} __rte_aligned(16);
> +	/**< Start of the vector array union. Depending upon the event type the
> +	 * vector array can be an array of mbufs or pointers or opaque u64
> +	 * values.
> +	 */
> +};
> +
>  /* Scheduler type definitions */
>  #define RTE_SCHED_TYPE_ORDERED 0
>  /**< Ordered scheduling
> @@ -986,6 +1007,21 @@ rte_event_dev_close(uint8_t dev_id);
>   */
>  #define RTE_EVENT_TYPE_ETH_RX_ADAPTER 0x4
>  /**< The event generated from event eth Rx adapter */
> +#define RTE_EVENT_TYPE_VECTOR 0x8
> +/**< Indicates that the event is a vector.
> + * All vector event types should be a logical OR of EVENT_TYPE_VECTOR.
> + * This simplifies the pipeline design as we can split processing the events
> + * between vector events and normal events across event types.
> + * Example:
> + *	if (ev.event_type & RTE_EVENT_TYPE_VECTOR) {
> + *		// Classify and handle vector event.
> + *	} else {
> + *		// Classify and handle event.
> + *	}
> + */
> +#define RTE_EVENT_TYPE_CPU_VECTOR (RTE_EVENT_TYPE_VECTOR | RTE_EVENT_TYPE_CPU)
> +/**< The event vector generated from cpu for pipelining. */
> +
>  #define RTE_EVENT_TYPE_MAX 0x10
>  /**< Maximum number of event types */
>  
> @@ -1108,6 +1144,8 @@ struct rte_event {
>  		/**< Opaque event pointer */
>  		struct rte_mbuf *mbuf;
>  		/**< mbuf pointer if dequeued event is associated with mbuf */
> +		struct rte_event_vector *vec;
> +		/**< Event vector pointer. */
>  	};
>  };
>  
> @@ -2023,6 +2061,78 @@ rte_event_dev_xstats_reset(uint8_t dev_id,
>   */
>  int rte_event_dev_selftest(uint8_t dev_id);
>  
> +/**
> + * Get the memory required per event vector based on the number of elements per
> + * vector.
> + * This should be used to create the mempool that holds the event vectors.
> + *
> + * @param name
> + *   The name of the vector pool.
> + * @param n
> + *   The number of elements in the vector pool.
> + * @param cache_size
> + *   Size of the per-core object cache. See rte_mempool_create() for
> + *   details.
> + * @param nb_elem
> + *   The number of elements that a single event vector should be able to hold.
> + * @param socket_id
> + *   The socket identifier where the memory should be allocated. The
> + *   value can be *SOCKET_ID_ANY* if there is no NUMA constraint for the
> + *   reserved zone.
> + *
> + * @return
> + *   The pointer to the newly allocated mempool, on success. NULL on error
> + *   with rte_errno set appropriately. Possible rte_errno values include:
> + *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
> + *    - E_RTE_SECONDARY - function was called from a secondary process instance
> + *    - EINVAL - cache size provided is too large, or priv_size is not aligned.
> + *    - ENOSPC - the maximum number of memzones has already been allocated
> + *    - EEXIST - a memzone with the same name already exists
> + *    - ENOMEM - no appropriate memory area found in which to create memzone
> + */
> +__rte_experimental
> +static inline struct rte_mempool *
> +rte_event_vector_pool_create(const char *name, unsigned int n,
> +			     unsigned int cache_size, uint16_t nb_elem,
> +			     int socket_id)

Handling inlined functions is tricky at best from an ABI stability PoV.
Since this function is used at initialization time, performance is not an
issue here, so I would suggest there is no need for this function to be
inline.

> +{
> +	const char *mp_ops_name;
> +	struct rte_mempool *mp;
> +	unsigned int elt_sz;
> +	int ret;
> +
> +	if (!nb_elem) {
> +		RTE_LOG(ERR, EVENTDEV,
> +			"Invalid number of elements=%d requested\n", nb_elem);
> +		rte_errno = -EINVAL;
> +		return NULL;
> +	}
> +
> +	elt_sz =
> +		sizeof(struct rte_event_vector) + (nb_elem * sizeof(uintptr_t));
> +	mp = rte_mempool_create_empty(name, n, elt_sz, cache_size, 0, socket_id,
> +				      0);
> +	if (mp == NULL)
> +		return NULL;
> +
> +	mp_ops_name = rte_mbuf_best_mempool_ops();
> +	ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL);
> +	if (ret != 0) {
> +		RTE_LOG(ERR, EVENTDEV, "error setting mempool handler\n");
> +		goto err;
> +	}
> +
> +	ret = rte_mempool_populate_default(mp);
> +	if (ret < 0)
> +		goto err;
> +
> +	return mp;
> +err:
> +	rte_mempool_free(mp);
> +	rte_errno = -ret;
> +	return NULL;
> +}
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map
> index 3e5c09cfd..a070ef56e 100644
> --- a/lib/librte_eventdev/version.map
> +++ b/lib/librte_eventdev/version.map
> @@ -138,6 +138,9 @@ EXPERIMENTAL {
>  	__rte_eventdev_trace_port_setup;
>  	# added in 20.11
>  	rte_event_pmd_pci_probe_named;
> +
> +	# added in 21.05
> +	rte_event_vector_pool_create;
>  };
>  
>  INTERNAL {
> 
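
For what it's worth, the producer side of the new API is easy to
exercise end to end. A rough sketch of an application creating a vector
pool and enqueuing a CPU vector (illustrative only: the pool name, the
sizing constants and enqueue_pkts_as_vector() are made up, error
handling is trimmed, and device/port/queue setup is assumed to have
happened elsewhere):

#define VEC_POOL_SZ  8192	/* number of vectors in the pool */
#define VEC_CACHE_SZ 256	/* per-core cache, see rte_mempool_create() */
#define VEC_NB_ELEM  32		/* mbufs carried per vector */

static struct rte_mempool *vec_pool;

static int
setup_vector_pool(int socket_id)
{
	vec_pool = rte_event_vector_pool_create("evt_vec_pool", VEC_POOL_SZ,
						VEC_CACHE_SZ, VEC_NB_ELEM,
						socket_id);
	return vec_pool == NULL ? -rte_errno : 0;
}

static int
enqueue_pkts_as_vector(uint8_t dev_id, uint8_t port_id, uint8_t queue_id,
		       struct rte_mbuf **pkts, uint16_t nb_pkts)
{
	struct rte_event_vector *vec;
	struct rte_event ev = {0};
	uint16_t i;

	if (rte_mempool_get(vec_pool, (void **)&vec) < 0)
		return -ENOENT;

	/* Gather the packets of one flow into a single vector event. */
	for (i = 0; i < nb_pkts && i < VEC_NB_ELEM; i++)
		vec->mbufs[i] = pkts[i];
	vec->nb_elem = i;

	ev.op = RTE_EVENT_OP_NEW;
	ev.event_type = RTE_EVENT_TYPE_CPU_VECTOR;
	ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
	ev.queue_id = queue_id;
	ev.vec = vec;

	return rte_event_enqueue_burst(dev_id, port_id, &ev, 1) == 1 ?
	       0 : -EBUSY;
}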