From: "Jayatheerthan, Jay"
To: "pbhagavatula@marvell.com", "jerinj@marvell.com", "Carrillo, Erik G", "Gujjar, Abhinandan S", "McDaniel, Timothy", "hemant.agrawal@nxp.com", "Van Haaren, Harry", mattias.ronnblom, "Ma, Liang J"
CC: "dev@dpdk.org"
Date: Thu, 25 Mar 2021 10:37:10 +0000
References: <20210319205718.1436-1-pbhagavatula@marvell.com> <20210324050525.4489-1-pbhagavatula@marvell.com> <20210324050525.4489-5-pbhagavatula@marvell.com>
In-Reply-To: <20210324050525.4489-5-pbhagavatula@marvell.com>
Subject: Re: [dpdk-dev] [PATCH v5 4/8] eventdev: add Rx adapter event vector support

> -----Original Message-----
> From: pbhagavatula@marvell.com
> Sent: Wednesday, March 24, 2021 10:35 AM
> To: jerinj@marvell.com; Jayatheerthan, Jay; Carrillo, Erik G; Gujjar, Abhinandan S; McDaniel, Timothy; hemant.agrawal@nxp.com; Van Haaren, Harry; mattias.ronnblom; Ma, Liang J
> Cc: dev@dpdk.org; Pavan Nikhilesh
> Subject: [dpdk-dev] [PATCH v5 4/8] eventdev: add Rx adapter event vector support
>
> From: Pavan Nikhilesh
>
> Add event vector support for the event eth Rx adapter. The implementation
> creates vector flows based on the port and queue identifier of the
> received mbufs.
>
> Signed-off-by: Pavan Nikhilesh
> ---
>  lib/librte_eventdev/eventdev_pmd.h |   7 +-
>  .../rte_event_eth_rx_adapter.c     | 257 ++++++++++++++++--
>  lib/librte_eventdev/rte_eventdev.c |   6 +-
>  3 files changed, 250 insertions(+), 20 deletions(-)
>
> diff --git a/lib/librte_eventdev/eventdev_pmd.h b/lib/librte_eventdev/eventdev_pmd.h
> index 9297f1433..0f724ac85 100644
> --- a/lib/librte_eventdev/eventdev_pmd.h
> +++ b/lib/librte_eventdev/eventdev_pmd.h
> @@ -69,9 +69,10 @@ extern "C" {
>      } \
>  } while (0)
>
> -#define RTE_EVENT_ETH_RX_ADAPTER_SW_CAP \
> -    ((RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID) | \
> -    (RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ))
> +#define RTE_EVENT_ETH_RX_ADAPTER_SW_CAP \
> +    ((RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID) | \
> +    (RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ) | \
> +    (RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR))
>
>  #define RTE_EVENT_CRYPTO_ADAPTER_SW_CAP \
>      RTE_EVENT_CRYPTO_ADAPTER_CAP_SESSION_PRIVATE_DATA
>
> diff --git a/lib/librte_eventdev/rte_event_eth_rx_adapter.c b/lib/librte_eventdev/rte_event_eth_rx_adapter.c
> index ac8ba5bf0..c71990078 100644
> --- a/lib/librte_eventdev/rte_event_eth_rx_adapter.c
> +++ b/lib/librte_eventdev/rte_event_eth_rx_adapter.c
> @@ -26,6 +26,10 @@
>  #define BATCH_SIZE 32
>  #define BLOCK_CNT_THRESHOLD 10
>  #define ETH_EVENT_BUFFER_SIZE (4*BATCH_SIZE)
> +#define MAX_VECTOR_SIZE 1024
> +#define MIN_VECTOR_SIZE 4
> +#define MAX_VECTOR_NS 1E9
> +#define MIN_VECTOR_NS 1E5
>
>  #define ETH_RX_ADAPTER_SERVICE_NAME_LEN 32
>  #define ETH_RX_ADAPTER_MEM_NAME_LEN 32
> @@ -59,6 +63,20 @@ struct eth_rx_poll_entry {
>      uint16_t eth_rx_qid;
>  };
>
> +struct eth_rx_vector_data {
> +    TAILQ_ENTRY(eth_rx_vector_data) next;
> +    uint16_t port;
> +    uint16_t queue;
> +    uint16_t max_vector_count;
> +    uint64_t event;
> +    uint64_t ts;
> +    uint64_t vector_timeout_ticks;
> +    struct rte_mempool *vector_pool;
> +    struct rte_event_vector *vector_ev;
> +} __rte_cache_aligned;
> +
> +TAILQ_HEAD(eth_rx_vector_data_list, eth_rx_vector_data);
> +
>  /* Instance per adapter */
>  struct rte_eth_event_enqueue_buffer {
>      /* Count of events in this buffer */
> @@ -92,6 +110,14 @@ struct rte_event_eth_rx_adapter {
>      uint32_t wrr_pos;
>      /* Event burst buffer */
>      struct rte_eth_event_enqueue_buffer event_enqueue_buffer;
> +    /* Vector enable flag */
> +    uint8_t ena_vector;
> +    /* Timestamp of previous vector expiry list traversal */
> +    uint64_t prev_expiry_ts;
> +    /* Minimum ticks to wait before traversing expiry list */
> +    uint64_t vector_tmo_ticks;
> +    /* vector list */
> +    struct eth_rx_vector_data_list vector_list;
>      /* Per adapter stats */
>      struct rte_event_eth_rx_adapter_stats stats;
>      /* Block count, counts up to BLOCK_CNT_THRESHOLD */
> @@ -198,9 +224,11 @@ struct eth_device_info {
>  struct eth_rx_queue_info {
>      int queue_enabled;  /* True if added */
>      int intr_enabled;
> +    uint8_t ena_vector;
>      uint16_t wt;        /* Polling weight */
>      uint32_t flow_id_mask;  /* Set to ~0 if app provides flow id else 0 */
>      uint64_t event;
> +    struct eth_rx_vector_data vector_data;
>  };
>
>  static struct rte_event_eth_rx_adapter **event_eth_rx_adapter;
> @@ -722,6 +750,9 @@ rxa_flush_event_buffer(struct rte_event_eth_rx_adapter *rx_adapter)
>          &rx_adapter->event_enqueue_buffer;
>      struct rte_event_eth_rx_adapter_stats *stats = &rx_adapter->stats;
>
> +    if (!buf->count)
> +        return 0;
> +
>      uint16_t n = rte_event_enqueue_new_burst(rx_adapter->eventdev_id,
>                      rx_adapter->event_port_id,
>                      buf->events,
> @@ -742,6 +773,72 @@ rxa_flush_event_buffer(struct rte_event_eth_rx_adapter *rx_adapter)
>      return n;
>  }
>
> +static inline uint16_t
> +rxa_create_event_vector(struct rte_event_eth_rx_adapter *rx_adapter,
> +            struct eth_rx_queue_info *queue_info,
> +            struct rte_eth_event_enqueue_buffer *buf,
> +            struct rte_mbuf **mbufs, uint16_t num)
> +{
> +    struct rte_event *ev = &buf->events[buf->count];
> +    struct eth_rx_vector_data *vec;
> +    uint16_t filled, space, sz;
> +
> +    filled = 0;
> +    vec = &queue_info->vector_data;
> +    while (num) {
> +        if (vec->vector_ev == NULL) {
> +            if (rte_mempool_get(vec->vector_pool,
> +                        (void **)&vec->vector_ev) < 0) {
> +                rte_pktmbuf_free_bulk(mbufs, num);
> +                return 0;
> +            }
> +            vec->vector_ev->nb_elem = 0;
> +            vec->vector_ev->port = vec->port;
> +            vec->vector_ev->queue = vec->queue;
> +            vec->vector_ev->attr_valid = true;
> +            TAILQ_INSERT_TAIL(&rx_adapter->vector_list, vec, next);
> +        } else if (vec->vector_ev->nb_elem == vec->max_vector_count) {

Is there a case where nb_elem can end up greater than max_vector_count, given that we keep accumulating sz into it?
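Spelling the fill step out for myself (an illustrative sketch only, using this patch's names), it looks like sz is clamped to the remaining space, so nb_elem can reach the limit but not cross it — please confirm that is the intended invariant:

    #include <stdint.h>

    /* Sketch of the per-iteration fill step: sz is clamped to the
     * remaining space, so the updated count can equal
     * max_vector_count but should never exceed it. */
    static inline uint16_t
    vec_fill_step(uint16_t nb_elem, uint16_t max_vector_count, uint16_t num)
    {
        uint16_t space = max_vector_count - nb_elem;
        uint16_t sz = num > space ? space : num;

        return nb_elem + sz; /* <= max_vector_count by construction */
    }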
> +            /* Event ready. */
> +            ev->event = vec->event;
> +            ev->vec = vec->vector_ev;
> +            ev++;
> +            filled++;
> +            vec->vector_ev = NULL;
> +            TAILQ_REMOVE(&rx_adapter->vector_list, vec, next);
> +            if (rte_mempool_get(vec->vector_pool,
> +                        (void **)&vec->vector_ev) < 0) {
> +                rte_pktmbuf_free_bulk(mbufs, num);
> +                return 0;
> +            }
> +            vec->vector_ev->nb_elem = 0;
> +            vec->vector_ev->port = vec->port;
> +            vec->vector_ev->queue = vec->queue;
> +            vec->vector_ev->attr_valid = true;
> +            TAILQ_INSERT_TAIL(&rx_adapter->vector_list, vec, next);
> +        }
> +
> +        space = vec->max_vector_count - vec->vector_ev->nb_elem;
> +        sz = num > space ? space : num;
> +        memcpy(vec->vector_ev->mbufs + vec->vector_ev->nb_elem, mbufs,
> +               sizeof(void *) * sz);
> +        vec->vector_ev->nb_elem += sz;
> +        num -= sz;
> +        mbufs += sz;
> +        vec->ts = rte_rdtsc();
> +    }
> +
> +    if (vec->vector_ev->nb_elem == vec->max_vector_count) {

Same question here.

> +        ev->event = vec->event;
> +        ev->vec = vec->vector_ev;
> +        ev++;
> +        filled++;
> +        vec->vector_ev = NULL;
> +        TAILQ_REMOVE(&rx_adapter->vector_list, vec, next);
> +    }
> +
> +    return filled;
> +}

I see more than one repeated code chunk in this function. Perhaps you can try to factor them out; we can drop the idea if it affects performance.
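For instance, the get-and-init sequence appears twice above. Something along these lines could fold it into one place — an untested sketch using this patch's structures, with a hypothetical helper name rxa_vec_alloc():

    static inline int
    rxa_vec_alloc(struct rte_event_eth_rx_adapter *rx_adapter,
                  struct eth_rx_vector_data *vec)
    {
        /* Grab a fresh rte_event_vector and queue it for timeout tracking. */
        if (rte_mempool_get(vec->vector_pool, (void **)&vec->vector_ev) < 0)
            return -1;

        vec->vector_ev->nb_elem = 0;
        vec->vector_ev->port = vec->port;
        vec->vector_ev->queue = vec->queue;
        vec->vector_ev->attr_valid = true;
        TAILQ_INSERT_TAIL(&rx_adapter->vector_list, vec, next);

        return 0;
    }

Both call sites would then reduce to checking the return value and doing the rte_pktmbuf_free_bulk(mbufs, num) cleanup on failure.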
> +
>  static inline void
>  rxa_buffer_mbufs(struct rte_event_eth_rx_adapter *rx_adapter,
>          uint16_t eth_dev_id,
> @@ -770,25 +867,30 @@ rxa_buffer_mbufs(struct rte_event_eth_rx_adapter *rx_adapter,
>      rss_mask = ~(((m->ol_flags & PKT_RX_RSS_HASH) != 0) - 1);
>      do_rss = !rss_mask && !eth_rx_queue_info->flow_id_mask;

The RSS-related code is executed for the vector case as well. Can it be moved inside the ena_vector if condition?
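Something like the below is what I have in mind — a sketch only, with rss_mask/do_rss and the loop body lifted from this patch and scoped to the non-vector branch:

    if (!eth_rx_queue_info->ena_vector) {
        /* RSS inputs are consumed only by the per-mbuf path. */
        rss_mask = ~(((m->ol_flags & PKT_RX_RSS_HASH) != 0) - 1);
        do_rss = !rss_mask && !eth_rx_queue_info->flow_id_mask;
        for (i = 0; i < num; i++) {
            m = mbufs[i];

            rss = do_rss ? rxa_do_softrss(m, rx_adapter->rss_key_be)
                         : m->hash.rss;
            ev->event = event;
            ev->flow_id = (rss & ~flow_id_mask) |
                          (ev->flow_id & flow_id_mask);
            ev->mbuf = m;
            ev++;
        }
    } else {
        num = rxa_create_event_vector(rx_adapter, eth_rx_queue_info,
                                      buf, mbufs, num);
    }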
>
> -    for (i = 0; i < num; i++) {
> -        m = mbufs[i];
> -
> -        rss = do_rss ?
> -            rxa_do_softrss(m, rx_adapter->rss_key_be) :
> -            m->hash.rss;
> -        ev->event = event;
> -        ev->flow_id = (rss & ~flow_id_mask) |
> -                (ev->flow_id & flow_id_mask);
> -        ev->mbuf = m;
> -        ev++;
> +    if (!eth_rx_queue_info->ena_vector) {
> +        for (i = 0; i < num; i++) {
> +            m = mbufs[i];
> +
> +            rss = do_rss ? rxa_do_softrss(m, rx_adapter->rss_key_be)
> +                         : m->hash.rss;
> +            ev->event = event;
> +            ev->flow_id = (rss & ~flow_id_mask) |
> +                          (ev->flow_id & flow_id_mask);
> +            ev->mbuf = m;
> +            ev++;
> +        }
> +    } else {
> +        num = rxa_create_event_vector(rx_adapter, eth_rx_queue_info,
> +                          buf, mbufs, num);
>      }
>
> -    if (dev_info->cb_fn) {
> +    if (num && dev_info->cb_fn) {
>
>          dropped = 0;
>          nb_cb = dev_info->cb_fn(eth_dev_id, rx_queue_id,
> -                    ETH_EVENT_BUFFER_SIZE, buf->count, ev,
> -                    num, dev_info->cb_arg, &dropped);
> +                    ETH_EVENT_BUFFER_SIZE, buf->count,
> +                    &buf->events[buf->count], num,
> +                    dev_info->cb_arg, &dropped);

Before this patch, we passed ev, which is &buf->events[buf->count] + num, as the fifth param when calling cb_fn. Now we pass &buf->events[buf->count] for the non-vector case. Do you see this as an issue? Also, for the vector case, would it make sense to pass &buf->events[buf->count] + num?

>          if (unlikely(nb_cb > num))
>              RTE_EDEV_LOG_ERR("Rx CB returned %d (> %d) events",
>                  nb_cb, num);
> @@ -1124,6 +1226,30 @@ rxa_poll(struct rte_event_eth_rx_adapter *rx_adapter)
>      return nb_rx;
>  }
>
> +static void
> +rxa_vector_expire(struct eth_rx_vector_data *vec, void *arg)
> +{
> +    struct rte_event_eth_rx_adapter *rx_adapter = arg;
> +    struct rte_eth_event_enqueue_buffer *buf =
> +        &rx_adapter->event_enqueue_buffer;
> +    struct rte_event *ev;
> +
> +    if (buf->count)
> +        rxa_flush_event_buffer(rx_adapter);
> +
> +    if (vec->vector_ev->nb_elem == 0)
> +        return;
> +    ev = &buf->events[buf->count];
> +
> +    /* Event ready. */
> +    ev->event = vec->event;
> +    ev->vec = vec->vector_ev;
> +    buf->count++;
> +
> +    vec->vector_ev = NULL;
> +    vec->ts = 0;
> +}
> +
>  static int
>  rxa_service_func(void *args)
>  {
> @@ -1137,6 +1263,24 @@ rxa_service_func(void *args)
>          return 0;
>      }
>
> +    if (rx_adapter->ena_vector) {
> +        if ((rte_rdtsc() - rx_adapter->prev_expiry_ts) >=
> +            rx_adapter->vector_tmo_ticks) {
> +            struct eth_rx_vector_data *vec;
> +
> +            TAILQ_FOREACH(vec, &rx_adapter->vector_list, next) {
> +                uint64_t elapsed_time = rte_rdtsc() - vec->ts;
> +
> +                if (elapsed_time >= vec->vector_timeout_ticks) {
> +                    rxa_vector_expire(vec, rx_adapter);
> +                    TAILQ_REMOVE(&rx_adapter->vector_list,
> +                             vec, next);
> +                }
> +            }
> +            rx_adapter->prev_expiry_ts = rte_rdtsc();
> +        }
> +    }
> +
>      stats = &rx_adapter->stats;
>      stats->rx_packets += rxa_intr_ring_dequeue(rx_adapter);
>      stats->rx_packets += rxa_poll(rx_adapter);
> @@ -1640,6 +1784,28 @@ rxa_update_queue(struct rte_event_eth_rx_adapter *rx_adapter,
>      }
>  }
>
> +static void
> +rxa_set_vector_data(struct eth_rx_queue_info *queue_info, uint16_t vector_count,
> +            uint64_t vector_ns, struct rte_mempool *mp, int32_t qid,
> +            uint16_t port_id)
> +{
> +#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
> +    struct eth_rx_vector_data *vector_data;
> +    uint32_t flow_id;
> +
> +    vector_data = &queue_info->vector_data;
> +    vector_data->max_vector_count = vector_count;
> +    vector_data->port = port_id;
> +    vector_data->queue = qid;
> +    vector_data->vector_pool = mp;
> +    vector_data->vector_timeout_ticks =
> +        NSEC2TICK(vector_ns, rte_get_timer_hz());
> +    vector_data->ts = 0;
> +    flow_id = queue_info->event & 0xFFFFF;
> +    flow_id = flow_id == 0 ? (qid & 0xFF) | (port_id & 0xFFFF) : flow_id;

Maybe I am missing something here, but looking at the code it seems qid and port_id can overlap. For example, if qid = 0x10 and port_id = 0x11, flow_id ends up being 0x11. Is this the expectation? It may also be useful to document the flow_id format.

Comparing this format with the existing RSS-hash-based method: are we saying that all mbufs received in an Rx burst are part of the same flow when vectorization is used?
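If distinct (port, queue) pairs are meant to produce distinct vector flows, a non-overlapping split of the 20-bit flow id field would avoid the collision. A hypothetical layout, purely to illustrate the point rather than to request this exact format:

    /* Hypothetical: bits 19..8 carry the port, bits 7..0 the queue,
     * so qid = 0x10, port_id = 0x11 yields 0x1110 instead of 0x11. */
    flow_id = ((uint32_t)(port_id & 0xFFF) << 8) | (qid & 0xFF);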
> +    vector_data->event = (queue_info->event & ~0xFFFFF) | flow_id;
> +}
> +
>  static void
>  rxa_sw_del(struct rte_event_eth_rx_adapter *rx_adapter,
>         struct eth_device_info *dev_info,
> @@ -1741,6 +1907,44 @@ rxa_add_queue(struct rte_event_eth_rx_adapter *rx_adapter,
>      }
>  }
>
> +static void
> +rxa_sw_event_vector_configure(
> +    struct rte_event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id,
> +    int rx_queue_id,
> +    const struct rte_event_eth_rx_adapter_event_vector_config *config)
> +{
> +    struct eth_device_info *dev_info = &rx_adapter->eth_devices[eth_dev_id];
> +    struct eth_rx_queue_info *queue_info;
> +    struct rte_event *qi_ev;
> +
> +    if (rx_queue_id == -1) {
> +        uint16_t nb_rx_queues;
> +        uint16_t i;
> +
> +        nb_rx_queues = dev_info->dev->data->nb_rx_queues;
> +        for (i = 0; i < nb_rx_queues; i++)
> +            rxa_sw_event_vector_configure(rx_adapter, eth_dev_id, i,
> +                              config);
> +        return;
> +    }
> +
> +    queue_info = &dev_info->rx_queue[rx_queue_id];
> +    qi_ev = (struct rte_event *)&queue_info->event;
> +    queue_info->ena_vector = 1;
> +    qi_ev->event_type = RTE_EVENT_TYPE_ETH_RX_ADAPTER_VECTOR;
> +    rxa_set_vector_data(queue_info, config->vector_sz,
> +                config->vector_timeout_ns, config->vector_mp,
> +                rx_queue_id, dev_info->dev->data->port_id);
> +    rx_adapter->ena_vector = 1;
> +    rx_adapter->vector_tmo_ticks =
> +        rx_adapter->vector_tmo_ticks ?
> +            RTE_MIN(config->vector_timeout_ns << 1,
> +                rx_adapter->vector_tmo_ticks) :
> +            config->vector_timeout_ns << 1;
> +    rx_adapter->prev_expiry_ts = 0;
> +    TAILQ_INIT(&rx_adapter->vector_list);
> +}
> +
>  static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter,
>                uint16_t eth_dev_id,
>                int rx_queue_id,
> @@ -2081,6 +2285,15 @@ rte_event_eth_rx_adapter_queue_add(uint8_t id,
>          return -EINVAL;
>      }
>
> +    if ((cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR) == 0 &&
> +        (queue_conf->rx_queue_flags &
> +         RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR)) {
> +        RTE_EDEV_LOG_ERR("Event vectorization is not supported,"
> +                 " eth port: %" PRIu16 " adapter id: %" PRIu8,
> +                 eth_dev_id, id);
> +        return -EINVAL;
> +    }
> +
>      if ((cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ) == 0 &&
>          (rx_queue_id != -1)) {
>          RTE_EDEV_LOG_ERR("Rx queues can only be connected to single "
> @@ -2143,6 +2356,17 @@ rte_event_eth_rx_adapter_queue_add(uint8_t id,
>      return 0;
>  }
>
> +static int
> +rxa_sw_vector_limits(struct rte_event_eth_rx_adapter_vector_limits *limits)
> +{
> +    limits->max_sz = MAX_VECTOR_SIZE;
> +    limits->min_sz = MIN_VECTOR_SIZE;
> +    limits->max_timeout_ns = MAX_VECTOR_NS;
> +    limits->min_timeout_ns = MIN_VECTOR_NS;
> +
> +    return 0;
> +}
> +
>  int
>  rte_event_eth_rx_adapter_queue_del(uint8_t id, uint16_t eth_dev_id,
>                     int32_t rx_queue_id)
> @@ -2333,7 +2557,8 @@ rte_event_eth_rx_adapter_queue_event_vector_config(
>          ret = dev->dev_ops->eth_rx_adapter_event_vector_config(
>              dev, &rte_eth_devices[eth_dev_id], rx_queue_id, config);
>      } else {
> -        ret = -ENOTSUP;
> +        rxa_sw_event_vector_configure(rx_adapter, eth_dev_id,
> +                          rx_queue_id, config);
>      }
>
>      return ret;
> @@ -2371,7 +2596,7 @@ rte_event_eth_rx_adapter_vector_limits_get(
>          ret = dev->dev_ops->eth_rx_adapter_vector_limits_get(
>              dev, &rte_eth_devices[eth_port_id], limits);
>      } else {
> -        ret = -ENOTSUP;
> +        ret = rxa_sw_vector_limits(limits);
>      }
>
>      return ret;
> diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
> index f95edc075..254a31b1f 100644
> --- a/lib/librte_eventdev/rte_eventdev.c
> +++ b/lib/librte_eventdev/rte_eventdev.c
> @@ -122,7 +122,11 @@ rte_event_eth_rx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
>
>      if (caps == NULL)
>          return -EINVAL;
> -    *caps = 0;
> +
> +    if (dev->dev_ops->eth_rx_adapter_caps_get == NULL)
> +        *caps = RTE_EVENT_ETH_RX_ADAPTER_SW_CAP;
> +    else
> +        *caps = 0;

Any reason why we have to set a default caps value here? I would think that if the sw event device is used, it would set the caps anyway.

>
>      return dev->dev_ops->eth_rx_adapter_caps_get ?
>          (*dev->dev_ops->eth_rx_adapter_caps_get)(dev,
> --
> 2.17.1
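Re the default caps question above: the way I read the resulting control flow (a simplified sketch, with the tail of the call completed from the surrounding code), the SW caps become the default for every eventdev that leaves the callback NULL, not only for the sw PMD:

    if (dev->dev_ops->eth_rx_adapter_caps_get == NULL) {
        /* No PMD callback: report the SW adapter capabilities. */
        *caps = RTE_EVENT_ETH_RX_ADAPTER_SW_CAP;
        return 0;
    }

    *caps = 0;
    return (*dev->dev_ops->eth_rx_adapter_caps_get)(dev,
            &rte_eth_devices[eth_port_id], caps);

Is that widening of the default intentional?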