From: "Jayatheerthan, Jay" <jay.jayatheerthan@intel.com>
To: "pbhagavatula@marvell.com" <pbhagavatula@marvell.com>,
 "jerinj@marvell.com" <jerinj@marvell.com>, "Carrillo, Erik G"
 <erik.g.carrillo@intel.com>, "Gujjar, Abhinandan S"
 <abhinandan.gujjar@intel.com>, "McDaniel, Timothy"
 <timothy.mcdaniel@intel.com>, "hemant.agrawal@nxp.com"
 <hemant.agrawal@nxp.com>, "Van Haaren, Harry" <harry.van.haaren@intel.com>,
 mattias.ronnblom <mattias.ronnblom@ericsson.com>, "Ma, Liang J"
 <liang.j.ma@intel.com>, Ray Kinsella <mdr@ashroe.eu>, Neil Horman
 <nhorman@tuxdriver.com>
CC: "dev@dpdk.org" <dev@dpdk.org>
Date: Wed, 24 Mar 2021 06:48:58 +0000
Message-ID: <SN6PR11MB3117B31AEE3E3C1637C87D29FD639@SN6PR11MB3117.namprd11.prod.outlook.com>
References: <20210319205718.1436-1-pbhagavatula@marvell.com>
 <20210324050525.4489-1-pbhagavatula@marvell.com>
 <20210324050525.4489-2-pbhagavatula@marvell.com>
In-Reply-To: <20210324050525.4489-2-pbhagavatula@marvell.com>
Subject: Re: [dpdk-dev] [PATCH v5 1/8] eventdev: introduce event vector
 capability

> -----Original Message-----
> From: pbhagavatula@marvell.com <pbhagavatula@marvell.com>
> Sent: Wednesday, March 24, 2021 10:35 AM
> To: jerinj@marvell.com; Jayatheerthan, Jay <jay.jayatheerthan@intel.com>; Carrillo, Erik G <erik.g.carrillo@intel.com>;
> Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>; McDaniel, Timothy <timothy.mcdaniel@intel.com>; hemant.agrawal@nxp.com;
> Van Haaren, Harry <harry.van.haaren@intel.com>; mattias.ronnblom <mattias.ronnblom@ericsson.com>; Ma, Liang J <liang.j.ma@intel.com>;
> Ray Kinsella <mdr@ashroe.eu>; Neil Horman <nhorman@tuxdriver.com>
> Cc: dev@dpdk.org; Pavan Nikhilesh <pbhagavatula@marvell.com>
> Subject: [dpdk-dev] [PATCH v5 1/8] eventdev: introduce event vector capability
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Introduce the rte_event_vector data structure, which is capable of holding
> multiple uintptr_t values of the same flow, thereby allowing applications
> to vectorize their pipeline and reducing the complexity of pipelining
> the events across multiple stages.
> This approach also reduces the scheduling overhead on an event device.
>
> Add an event vector mempool create handler to create mempools based on
> the best mempool ops available on a given platform.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> Acked-by: Jerin Jacob <jerinj@marvell.com>
> ---
>  doc/guides/prog_guide/eventdev.rst     | 36 +++++++++++-
>  doc/guides/rel_notes/release_21_05.rst |  8 +++
>  lib/librte_eventdev/rte_eventdev.c     | 42 +++++++++++++
>  lib/librte_eventdev/rte_eventdev.h     | 81 +++++++++++++++++++++++++-
>  lib/librte_eventdev/version.map        |  3 +
>  5 files changed, 167 insertions(+), 3 deletions(-)
>
> diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
> index ccde086f6..fda9c3743 100644
> --- a/doc/guides/prog_guide/eventdev.rst
> +++ b/doc/guides/prog_guide/eventdev.rst
> @@ -63,13 +63,45 @@ the actual event being scheduled is. The payload is a union of the following:
>  * ``uint64_t u64``
>  * ``void *event_ptr``
>  * ``struct rte_mbuf *mbuf``
> +* ``struct rte_event_vector *vec``
>
> -These three items in a union occupy the same 64 bits at the end of the rte_event
> +These four items in a union occupy the same 64 bits at the end of the rte_event
>  structure. The application can utilize the 64 bits directly by accessing the
> -u64 variable, while the event_ptr and mbuf are provided as convenience
> +u64 variable, while the event_ptr, mbuf, vec are provided as a convenience
>  variables.  For example the mbuf pointer in the union can used to schedule a
>  DPDK packet.
>
> +Event Vector
> +~~~~~~~~~~~~
> +
> +The rte_event_vector struct contains a vector of elements defined by the event
> +type specified in the ``rte_event``. The event_vector structure contains the
> +following data:
> +
> +* ``nb_elem`` - The number of elements held within the vector.
> +
> +Similar to ``rte_event`` the payload of event vector is also a union, allowing
> +flexibility in what the actual vector is.
> +
> +* ``struct rte_mbuf *mbufs[0]`` - An array of mbufs.
> +* ``void *ptrs[0]`` - An array of pointers.
> +* ``uint64_t *u64s[0]`` - An array of uint64_t elements.
> +
> +The size of the event vector is related to the total number of elements it is
> +configured to hold, this is achieved by making `rte_event_vector` a variable
> +length structure.
> +A helper function is provided to create a mempool that holds event vector, which
> +takes name of the pool, total number of required ``rte_event_vector``,
> +cache size, number of elements in each ``rte_event_vector`` and socket id.
> +
> +.. code-block:: c
> +
> +        rte_event_vector_pool_create("vector_pool", nb_event_vectors, cache_sz,
> +                                     nb_elements_per_vector, socket_id);
> +
> +The function ``rte_event_vector_pool_create`` creates mempool with the best
> +platform mempool ops.
> +
>  Queues
>  ~~~~~~
>
> diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
> index 8e686cc62..358623f2f 100644
> --- a/doc/guides/rel_notes/release_21_05.rst
> +++ b/doc/guides/rel_notes/release_21_05.rst
> @@ -101,6 +101,14 @@ New Features
>    * Added command to display Rx queue used descriptor count.
>      ``show port (port_id) rxq (queue_id) desc used count``
>
> +* **Add Event device vector capability.**
> +
> +  * Added ``rte_event_vector`` data structure which is capable of holding
> +    multiple ``uintptr_t`` of the same flow thereby allowing applications
> +    to vectorize their pipelines and also reduce the complexity of pipelining
> +    the events across multiple stages.
> +  * This also reduces the scheduling overhead on a event device.
> +
>
>  Removed Items
>  -------------
> diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
> index b57363f80..f95edc075 100644
> --- a/lib/librte_eventdev/rte_eventdev.c
> +++ b/lib/librte_eventdev/rte_eventdev.c
> @@ -1266,6 +1266,48 @@ int rte_event_dev_selftest(uint8_t dev_id)
>  	return -ENOTSUP;
>  }
>
> +struct rte_mempool *
> +rte_event_vector_pool_create(const char *name, unsigned int n,
> +			     unsigned int cache_size, uint16_t nb_elem,
> +			     int socket_id)
> +{
> +	const char *mp_ops_name;
> +	struct rte_mempool *mp;
> +	unsigned int elt_sz;
> +	int ret;
> +
> +	if (!nb_elem) {
> +		RTE_LOG(ERR, EVENTDEV,
> +			"Invalid number of elements=%d requested\n", nb_elem);
> +		rte_errno = -EINVAL;

rte_mempool_create_empty() call below returns non-negative EINVAL. Should we maintain consistency within the same API call?

> +		return NULL;
> +	}
> +
> +	elt_sz =
> +		sizeof(struct rte_event_vector) + (nb_elem * sizeof(uintptr_t));
> +	mp = rte_mempool_create_empty(name, n, elt_sz, cache_size, 0, socket_id,
> +				      0);
> +	if (mp == NULL)
> +		return NULL;
> +
> +	mp_ops_name = rte_mbuf_best_mempool_ops();
> +	ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL);
> +	if (ret != 0) {
> +		RTE_LOG(ERR, EVENTDEV, "error setting mempool handler\n");
> +		goto err;
> +	}
> +
> +	ret = rte_mempool_populate_default(mp);
> +	if (ret < 0)
> +		goto err;
> +
> +	return mp;
> +err:
> +	rte_mempool_free(mp);
> +	rte_errno = -ret;

rte_mempool_set_ops_byname() already returns a negative ret and we are making it positive here. DPDK has many instances of error/ret being negative and positive. Probably a larger effort to make it consistent would help in general.

> +	return NULL;
> +}
> +
>  int
>  rte_event_dev_start(uint8_t dev_id)
>  {
> diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
> index ce1fc2ce0..aa4dd3959 100644
> --- a/lib/librte_eventdev/rte_eventdev.h
> +++ b/lib/librte_eventdev/rte_eventdev.h
> @@ -212,8 +212,10 @@ extern "C" {
>
>  #include <rte_common.h>
>  #include <rte_config.h>
> -#include <rte_memory.h>
>  #include <rte_errno.h>
> +#include <rte_mbuf_pool_ops.h>
> +#include <rte_memory.h>
> +#include <rte_mempool.h>
>
>  #include "rte_eventdev_trace_fp.h"
>
> @@ -913,6 +915,31 @@ rte_event_dev_stop_flush_callback_register(uint8_t dev_id,
>  int
>  rte_event_dev_close(uint8_t dev_id);
>
> +/**
> + * Event vector structure.
> + */
> +struct rte_event_vector {
> +	uint64_t nb_elem : 16;
> +	/**< Number of elements in this event vector. */
> +	uint64_t rsvd : 48;
> +	/**< Reserved for future use */
> +	uint64_t impl_opaque;
> +	/**< Implementation specific opaque value.
> +	 * An implementation may use this field to hold implementation specific
> +	 * value to share between dequeue and enqueue operation.
> +	 * The application should not modify this field.
> +	 */
> +	union {
> +		struct rte_mbuf *mbufs[0];
> +		void *ptrs[0];
> +		uint64_t *u64s[0];
> +	} __rte_aligned(16);
> +	/**< Start of the vector array union. Depending upon the event type the
> +	 * vector array can be an array of mbufs or pointers or opaque u64
> +	 * values.
> +	 */
> +};
> +
>  /* Scheduler type definitions */
>  #define RTE_SCHED_TYPE_ORDERED          0
>  /**< Ordered scheduling
> @@ -986,6 +1013,21 @@ rte_event_dev_close(uint8_t dev_id);
>   */
>  #define RTE_EVENT_TYPE_ETH_RX_ADAPTER   0x4
>  /**< The event generated from event eth Rx adapter */
> +#define RTE_EVENT_TYPE_VECTOR           0x8
> +/**< Indicates that event is a vector.
> + * All vector event types should be a logical OR of EVENT_TYPE_VECTOR.
> + * This simplifies the pipeline design as one can split processing the events
> + * between vector events and normal event across event types.
> + * Example:
> + *	if (ev.event_type & RTE_EVENT_TYPE_VECTOR) {
> + *		// Classify and handle vector event.
> + *	} else {
> + *		// Classify and handle event.
> + *	}
> + */
> +#define RTE_EVENT_TYPE_CPU_VECTOR (RTE_EVENT_TYPE_VECTOR | RTE_EVENT_TYPE_CPU)
> +/**< The event vector generated from cpu for pipelining. */
> +
>  #define RTE_EVENT_TYPE_MAX              0x10
>  /**< Maximum number of event types */
>
> @@ -1108,6 +1150,8 @@ struct rte_event {
>  		/**< Opaque event pointer */
>  		struct rte_mbuf *mbuf;
>  		/**< mbuf pointer if dequeued event is associated with mbuf */
> +		struct rte_event_vector *vec;
> +		/**< Event vector pointer. */
>  	};
>  };
>
> @@ -2023,6 +2067,41 @@ rte_event_dev_xstats_reset(uint8_t dev_id,
>   */
>  int rte_event_dev_selftest(uint8_t dev_id);
>
> +/**
> + * Get the memory required per event vector based on the number of elements per
> + * vector.
> + * This should be used to create the mempool that holds the event vectors.
> + *
> + * @param name
> + *   The name of the vector pool.
> + * @param n
> + *   The number of elements in the mbuf pool.
> + * @param cache_size
> + *   Size of the per-core object cache. See rte_mempool_create() for
> + *   details.
> + * @param nb_elem
> + *   The number of elements then a single event vector should be able to hold.

Typo: "that" instead of "then".

> + * @param socket_id
> + *   The socket identifier where the memory should be allocated. The
> + *   value can be *SOCKET_ID_ANY* if there is no NUMA constraint for the
> + *   reserved zone
> + *
> + * @return
> + *   The pointer to the newly allocated mempool, on success. NULL on error
> + *   with rte_errno set appropriately. Possible rte_errno values include:
> + *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
> + *    - E_RTE_SECONDARY - function was called from a secondary process instance
> + *    - EINVAL - cache size provided is too large, or priv_size is not aligned.
> + *    - ENOSPC - the maximum number of memzones has already been allocated
> + *    - EEXIST - a memzone with the same name already exists
> + *    - ENOMEM - no appropriate memory area found in which to create memzone

rte_mempool_create_empty() can return ENAMETOOLONG if name is too long.

> + */
> +__rte_experimental
> +struct rte_mempool *
> +rte_event_vector_pool_create(const char *name, unsigned int n,
> +			     unsigned int cache_size, uint16_t nb_elem,
> +			     int socket_id);
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map
> index 3e5c09cfd..a070ef56e 100644
> --- a/lib/librte_eventdev/version.map
> +++ b/lib/librte_eventdev/version.map
> @@ -138,6 +138,9 @@ EXPERIMENTAL {
>  	__rte_eventdev_trace_port_setup;
>  	# added in 20.11
>  	rte_event_pmd_pci_probe_named;
> +
> +	#added in 21.05
> +	rte_event_vector_pool_create;
>  };
>
>  INTERNAL {
> --
> 2.17.1