From: Pavan Nikhilesh Bhagavatula
To: "Jayatheerthan, Jay", Jerin Jacob Kollanukkaran, "Carrillo, Erik G",
 "Gujjar, Abhinandan S", "McDaniel, Timothy", "hemant.agrawal@nxp.com",
 "Van Haaren, Harry", mattias.ronnblom, "Ma, Liang J", Ray Kinsella,
 Neil Horman
Cc: "dev@dpdk.org"
Date: Wed, 24 Mar 2021 18:20:17 +0000
Subject: Re: [dpdk-dev] [PATCH v5 1/8] eventdev: introduce event vector capability

>> -----Original Message-----
>> From: pbhagavatula@marvell.com
>> Sent: Wednesday, March 24, 2021 10:35 AM
>> To: jerinj@marvell.com; Jayatheerthan, Jay; Carrillo, Erik G; Gujjar,
>> Abhinandan S; McDaniel, Timothy; hemant.agrawal@nxp.com; Van
>> Haaren, Harry; mattias.ronnblom; Ma, Liang J; Ray Kinsella; Neil Horman
>> Cc: dev@dpdk.org; Pavan Nikhilesh
>> Subject: [dpdk-dev] [PATCH v5 1/8] eventdev: introduce event vector capability
>>
>> From: Pavan Nikhilesh
>>
>> Introduce rte_event_vector datastructure which is capable of holding
>> multiple uintptr_t of the same flow thereby allowing applications
>> to vectorize their pipeline and reducing the complexity of pipelining
>> the events across multiple stages.
>> This approach also reduces the scheduling overhead on a event device.
>>
>> Add a event vector mempool create handler to create mempools based on
>> the best mempool ops available on a given platform.
>>
>> Signed-off-by: Pavan Nikhilesh
>> Acked-by: Jerin Jacob
>> ---
>>  doc/guides/prog_guide/eventdev.rst     | 36 +++++++++++-
>>  doc/guides/rel_notes/release_21_05.rst |  8 +++
>>  lib/librte_eventdev/rte_eventdev.c     | 42 +++++++++++++
>>  lib/librte_eventdev/rte_eventdev.h     | 81 +++++++++++++++++++++++++-
>>  lib/librte_eventdev/version.map        |  3 +
>>  5 files changed, 167 insertions(+), 3 deletions(-)
>>
>> diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
>> index ccde086f6..fda9c3743 100644
>> --- a/doc/guides/prog_guide/eventdev.rst
>> +++ b/doc/guides/prog_guide/eventdev.rst
>> @@ -63,13 +63,45 @@ the actual event being scheduled is. The payload is a union of the following:
>>
>>  * ``uint64_t u64``
>>  * ``void *event_ptr``
>>  * ``struct rte_mbuf *mbuf``
>> +* ``struct rte_event_vector *vec``
>>
>> -These three items in a union occupy the same 64 bits at the end of the rte_event
>> +These four items in a union occupy the same 64 bits at the end of the rte_event
>>  structure. The application can utilize the 64 bits directly by accessing the
>> -u64 variable, while the event_ptr and mbuf are provided as convenience
>> +u64 variable, while the event_ptr, mbuf, vec are provided as a convenience
>>  variables. For example the mbuf pointer in the union can used to schedule a
>>  DPDK packet.
>>
>> +Event Vector
>> +~~~~~~~~~~~~
>> +
>> +The rte_event_vector struct contains a vector of elements defined by the event
>> +type specified in the ``rte_event``. The event_vector structure contains the
>> +following data:
>> +
>> +* ``nb_elem`` - The number of elements held within the vector.
>> +
>> +Similar to ``rte_event`` the payload of event vector is also a union, allowing
>> +flexibility in what the actual vector is.
>> +
>> +* ``struct rte_mbuf *mbufs[0]`` - An array of mbufs.
>> +* ``void *ptrs[0]`` - An array of pointers.
>> +* ``uint64_t *u64s[0]`` - An array of uint64_t elements.
>> +
>> +The size of the event vector is related to the total number of elements it is
>> +configured to hold, this is achieved by making `rte_event_vector` a variable
>> +length structure.
>> +A helper function is provided to create a mempool that holds event vector, which
>> +takes name of the pool, total number of required ``rte_event_vector``,
>> +cache size, number of elements in each ``rte_event_vector`` and socket id.
>> +
>> +.. code-block:: c
>> +
>> +        rte_event_vector_pool_create("vector_pool", nb_event_vectors, cache_sz,
>> +                                     nb_elements_per_vector, socket_id);
>> +
>> +The function ``rte_event_vector_pool_create`` creates mempool with the best
>> +platform mempool ops.
>> +
>>  Queues
>>  ~~~~~~
>>
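As an aside for anyone following the thread, below is a rough producer-side
sketch of how the new helper and RTE_EVENT_TYPE_CPU_VECTOR are meant to fit
together. It is only an illustration, not part of this patch: dev_id,
port_id, queue_id, the mbuf array and the pool parameters are made-up
placeholders, and the pool itself would be created once at setup time with
something like rte_event_vector_pool_create("vector_pool", 8192, 128, 32,
SOCKET_ID_ANY).

    #include <errno.h>
    #include <string.h>

    #include <rte_eventdev.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Build one vector event out of a burst of mbufs belonging to the same
     * flow and enqueue it. vec_pool must have been created with
     * rte_event_vector_pool_create() and nb_mbufs must not exceed the
     * nb_elem the pool was created with.
     */
    static int
    enqueue_mbufs_as_vector(uint8_t dev_id, uint8_t port_id, uint8_t queue_id,
                            struct rte_mempool *vec_pool,
                            struct rte_mbuf **mbufs, uint16_t nb_mbufs)
    {
            struct rte_event_vector *vec;
            struct rte_event ev;
            uint16_t i;

            if (rte_mempool_get(vec_pool, (void **)&vec) != 0)
                    return -ENOENT;            /* vector pool exhausted */

            vec->nb_elem = nb_mbufs;
            for (i = 0; i < nb_mbufs; i++)
                    vec->mbufs[i] = mbufs[i];

            memset(&ev, 0, sizeof(ev));
            ev.event_type = RTE_EVENT_TYPE_CPU_VECTOR; /* CPU formed vector */
            ev.op = RTE_EVENT_OP_NEW;
            ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
            ev.queue_id = queue_id;
            ev.vec = vec;                      /* new union member */

            if (rte_event_enqueue_burst(dev_id, port_id, &ev, 1) != 1) {
                    rte_mempool_put(vec_pool, vec);
                    return -ENOSPC;
            }
            return 0;
    }
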
>> diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
>> index 8e686cc62..358623f2f 100644
>> --- a/doc/guides/rel_notes/release_21_05.rst
>> +++ b/doc/guides/rel_notes/release_21_05.rst
>> @@ -101,6 +101,14 @@ New Features
>>    * Added command to display Rx queue used descriptor count.
>>      ``show port (port_id) rxq (queue_id) desc used count``
>>
>> +* **Add Event device vector capability.**
>> +
>> +  * Added ``rte_event_vector`` data structure which is capable of holding
>> +    multiple ``uintptr_t`` of the same flow thereby allowing applications
>> +    to vectorize their pipelines and also reduce the complexity of pipelining
>> +    the events across multiple stages.
>> +  * This also reduces the scheduling overhead on a event device.
>> +
>>
>>  Removed Items
>>  -------------
>> diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
>> index b57363f80..f95edc075 100644
>> --- a/lib/librte_eventdev/rte_eventdev.c
>> +++ b/lib/librte_eventdev/rte_eventdev.c
>> @@ -1266,6 +1266,48 @@ int rte_event_dev_selftest(uint8_t dev_id)
>>          return -ENOTSUP;
>>  }
>>
>> +struct rte_mempool *
>> +rte_event_vector_pool_create(const char *name, unsigned int n,
>> +                             unsigned int cache_size, uint16_t nb_elem,
>> +                             int socket_id)
>> +{
>> +        const char *mp_ops_name;
>> +        struct rte_mempool *mp;
>> +        unsigned int elt_sz;
>> +        int ret;
>> +
>> +        if (!nb_elem) {
>> +                RTE_LOG(ERR, EVENTDEV,
>> +                        "Invalid number of elements=%d requested\n", nb_elem);
>> +                rte_errno = -EINVAL;
>
> rte_mempool_create_empty() call below returns non-negative EINVAL.
> Should we maintain consistency within same API call?
>
>> +                return NULL;
>> +        }
>> +
>> +        elt_sz =
>> +                sizeof(struct rte_event_vector) + (nb_elem * sizeof(uintptr_t));
>> +        mp = rte_mempool_create_empty(name, n, elt_sz, cache_size, 0, socket_id,
>> +                                      0);
>> +        if (mp == NULL)
>> +                return NULL;
>> +
>> +        mp_ops_name = rte_mbuf_best_mempool_ops();
>> +        ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL);
>> +        if (ret != 0) {
>> +                RTE_LOG(ERR, EVENTDEV, "error setting mempool handler\n");
>> +                goto err;
>> +        }
>> +
>> +        ret = rte_mempool_populate_default(mp);
>> +        if (ret < 0)
>> +                goto err;
>> +
>> +        return mp;
>> +err:
>> +        rte_mempool_free(mp);
>> +        rte_errno = -ret;
>
> rte_mempool_set_ops_byname() API already returns negative ret and
> we are making it positive. DPDK has many instances of error/ret being
> negative and positive. Probably a larger effort to make it consistent
> would help in general.
>

Since rte_eventdev uses positive rte_errno, I will use the same here for
consistency.

>> +        return NULL;
>> +}
>> +
>>  int
>>  rte_event_dev_start(uint8_t dev_id)
>>  {
>> diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
>> index ce1fc2ce0..aa4dd3959 100644
>> --- a/lib/librte_eventdev/rte_eventdev.h
>> +++ b/lib/librte_eventdev/rte_eventdev.h
>> @@ -212,8 +212,10 @@ extern "C" {
>>
>>  #include <rte_common.h>
>>  #include <rte_config.h>
>> -#include <rte_memory.h>
>>  #include <rte_errno.h>
>> +#include <rte_mbuf_pool_ops.h>
>> +#include <rte_memory.h>
>> +#include <rte_mempool.h>
>>
>>  #include "rte_eventdev_trace_fp.h"
>>
>> @@ -913,6 +915,31 @@ rte_event_dev_stop_flush_callback_register(uint8_t dev_id,
>>  int
>>  rte_event_dev_close(uint8_t dev_id);
>>
>> +/**
>> + * Event vector structure.
>> + */
>> +struct rte_event_vector {
>> +        uint64_t nb_elem : 16;
>> +        /**< Number of elements in this event vector. */
>> +        uint64_t rsvd : 48;
>> +        /**< Reserved for future use */
>> +        uint64_t impl_opaque;
>> +        /**< Implementation specific opaque value.
>> +         * An implementation may use this field to hold implementation specific
>> +         * value to share between dequeue and enqueue operation.
>> +         * The application should not modify this field.
>> +         */
>> +        union {
>> +                struct rte_mbuf *mbufs[0];
>> +                void *ptrs[0];
>> +                uint64_t *u64s[0];
>> +        } __rte_aligned(16);
>> +        /**< Start of the vector array union. Depending upon the event type the
>> +         * vector array can be an array of mbufs or pointers or opaque u64
>> +         * values.
>> +         */
>> +};
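To make the new structure concrete, here is a small consumer-side sketch
(again illustrative only, not part of the patch). It assumes the application
owns the vector after dequeue and returns it to its mempool once the mbufs
are processed; the actual ownership rules will be whatever the adapter
patches later in this series document. process_one_mbuf() merely stands in
for the application's per-packet work.

    #include <rte_eventdev.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Stand-in for the application's per-packet processing. */
    static void
    process_one_mbuf(struct rte_mbuf *m)
    {
            rte_pktmbuf_free(m);
    }

    /* Branch on the vector bit, walk the mbuf array and recycle the vector. */
    static void
    handle_event(struct rte_event *ev, struct rte_mempool *vec_pool)
    {
            if (ev->event_type & RTE_EVENT_TYPE_VECTOR) {
                    struct rte_event_vector *vec = ev->vec;
                    uint16_t i;

                    for (i = 0; i < vec->nb_elem; i++)
                            process_one_mbuf(vec->mbufs[i]);

                    rte_mempool_put(vec_pool, vec);
            } else {
                    process_one_mbuf(ev->mbuf);
            }
    }
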
>>
>>  /* Scheduler type definitions */
>>  #define RTE_SCHED_TYPE_ORDERED          0
>>  /**< Ordered scheduling
>> @@ -986,6 +1013,21 @@ rte_event_dev_close(uint8_t dev_id);
>>   */
>>  #define RTE_EVENT_TYPE_ETH_RX_ADAPTER   0x4
>>  /**< The event generated from event eth Rx adapter */
>> +#define RTE_EVENT_TYPE_VECTOR           0x8
>> +/**< Indicates that event is a vector.
>> + * All vector event types should be a logical OR of EVENT_TYPE_VECTOR.
>> + * This simplifies the pipeline design as one can split processing the events
>> + * between vector events and normal event across event types.
>> + * Example:
>> + *      if (ev.event_type & RTE_EVENT_TYPE_VECTOR) {
>> + *              // Classify and handle vector event.
>> + *      } else {
>> + *              // Classify and handle event.
>> + *      }
>> + */
>> +#define RTE_EVENT_TYPE_CPU_VECTOR (RTE_EVENT_TYPE_VECTOR | RTE_EVENT_TYPE_CPU)
>> +/**< The event vector generated from cpu for pipelining. */
>> +
>>  #define RTE_EVENT_TYPE_MAX              0x10
>>  /**< Maximum number of event types */
>>
>> @@ -1108,6 +1150,8 @@ struct rte_event {
>>                  /**< Opaque event pointer */
>>                  struct rte_mbuf *mbuf;
>>                  /**< mbuf pointer if dequeued event is associated with mbuf */
>> +                struct rte_event_vector *vec;
>> +                /**< Event vector pointer. */
>>          };
>>  };
>>
>> @@ -2023,6 +2067,41 @@ rte_event_dev_xstats_reset(uint8_t dev_id,
>>   */
>>  int rte_event_dev_selftest(uint8_t dev_id);
>>
>> +/**
>> + * Get the memory required per event vector based on the number of elements per
>> + * vector.
>> + * This should be used to create the mempool that holds the event vectors.
>> + *
>> + * @param name
>> + *   The name of the vector pool.
>> + * @param n
>> + *   The number of elements in the mbuf pool.
>> + * @param cache_size
>> + *   Size of the per-core object cache. See rte_mempool_create() for
>> + *   details.
>> + * @param nb_elem
>> + *   The number of elements then a single event vector should be able to hold.
>
> Typo: that instead of then.
>
>> + * @param socket_id
>> + *   The socket identifier where the memory should be allocated. The
>> + *   value can be *SOCKET_ID_ANY* if there is no NUMA constraint for the
>> + *   reserved zone
>> + *
>> + * @return
>> + *   The pointer to the newly allocated mempool, on success. NULL on error
>> + *   with rte_errno set appropriately. Possible rte_errno values include:
>> + *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
>> + *    - E_RTE_SECONDARY - function was called from a secondary process instance
>> + *    - EINVAL - cache size provided is too large, or priv_size is not aligned.
>> + *    - ENOSPC - the maximum number of memzones has already been allocated
>> + *    - EEXIST - a memzone with the same name already exists
>> + *    - ENOMEM - no appropriate memory area found in which to create memzone
>
> rte_mempool_create_empty() can return ENAMETOOLONG if name is
> too long.
>
>> + */
>> +__rte_experimental
>> +struct rte_mempool *
>> +rte_event_vector_pool_create(const char *name, unsigned int n,
>> +                             unsigned int cache_size, uint16_t nb_elem,
>> +                             int socket_id);
>> +
>>  #ifdef __cplusplus
>>  }
>>  #endif
>> diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map
>> index 3e5c09cfd..a070ef56e 100644
>> --- a/lib/librte_eventdev/version.map
>> +++ b/lib/librte_eventdev/version.map
>> @@ -138,6 +138,9 @@ EXPERIMENTAL {
>>          __rte_eventdev_trace_port_setup;
>>          # added in 20.11
>>          rte_event_pmd_pci_probe_named;
>> +
>> +        #added in 21.05
>> +        rte_event_vector_pool_create;
>>  };
>>
>>  INTERNAL {
>> --
>> 2.17.1