From: "Jayatheerthan, Jay"
To: Pavan Nikhilesh Bhagavatula, Jerin Jacob Kollanukkaran,
	"Carrillo, Erik G", "Gujjar, Abhinandan S", "McDaniel, Timothy",
	hemant.agrawal@nxp.com, "Van Haaren, Harry", mattias.ronnblom,
	"Ma, Liang J"
Cc: dev@dpdk.org
Date: Fri, 26 Mar 2021 06:26:14 +0000
References: <20210319205718.1436-1-pbhagavatula@marvell.com>
	<20210324050525.4489-1-pbhagavatula@marvell.com>
	<20210324050525.4489-5-pbhagavatula@marvell.com>
Subject: Re: [dpdk-dev] [PATCH v5 4/8] eventdev: add Rx adapter event vector support
List-Id: DPDK patches and discussions

> -----Original Message-----
> From: Pavan Nikhilesh Bhagavatula
> Sent: Thursday, March 25, 2021 6:44 PM
> To: Jayatheerthan, Jay; Jerin Jacob Kollanukkaran; Carrillo, Erik G;
> Gujjar, Abhinandan S; McDaniel, Timothy; hemant.agrawal@nxp.com;
> Van Haaren, Harry <harry.van.haaren@intel.com>; mattias.ronnblom;
> Ma, Liang J
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v5 4/8] eventdev: add Rx adapter event
> vector support
>
>
>
> >-----Original Message-----
> >From: Jayatheerthan, Jay
> >Sent: Thursday, March 25, 2021 4:07 PM
> >To: Pavan Nikhilesh Bhagavatula; Jerin Jacob Kollanukkaran;
> >Carrillo, Erik G; Gujjar, Abhinandan S; McDaniel, Timothy;
> >hemant.agrawal@nxp.com; Van Haaren, Harry; mattias.ronnblom;
> >Ma, Liang J
> >Cc: dev@dpdk.org
> >Subject: [EXT] RE: [dpdk-dev] [PATCH v5 4/8] eventdev: add Rx adapter
> >event vector support
> >
> >External Email
> >
> >----------------------------------------------------------------------
> >> -----Original Message-----
> >> From: pbhagavatula@marvell.com
> >> Sent: Wednesday, March 24, 2021 10:35 AM
> >> To: jerinj@marvell.com; Jayatheerthan, Jay; Carrillo, Erik G;
> >> Gujjar, Abhinandan S; McDaniel, Timothy; hemant.agrawal@nxp.com;
> >> Van Haaren, Harry; mattias.ronnblom; Ma, Liang J
> >> Cc: dev@dpdk.org; Pavan Nikhilesh
> >> Subject: [dpdk-dev] [PATCH v5 4/8] eventdev: add Rx adapter event
> >vector support
> >>
> >> From: Pavan Nikhilesh
> >>
> >> Add event vector support for the event eth Rx adapter. The
> >> implementation creates vector flows based on the port and queue
> >> identifier of the received mbufs.
> >>
> >> Signed-off-by: Pavan Nikhilesh
> >> ---
> >>  lib/librte_eventdev/eventdev_pmd.h |   7 +-
> >>  .../rte_event_eth_rx_adapter.c     | 257 ++++++++++++++++--
> >>  lib/librte_eventdev/rte_eventdev.c |   6 +-
> >>  3 files changed, 250 insertions(+), 20 deletions(-)
> >>
> >> diff --git a/lib/librte_eventdev/eventdev_pmd.h b/lib/librte_eventdev/eventdev_pmd.h
> >> index 9297f1433..0f724ac85 100644
> >> --- a/lib/librte_eventdev/eventdev_pmd.h
> >> +++ b/lib/librte_eventdev/eventdev_pmd.h
> >> @@ -69,9 +69,10 @@ extern "C" {
> >>  	} \
> >>  } while (0)
> >>
> >> -#define RTE_EVENT_ETH_RX_ADAPTER_SW_CAP \
> >> -	((RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID) | \
> >> -	 (RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ))
> >> +#define RTE_EVENT_ETH_RX_ADAPTER_SW_CAP \
> >> +	((RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID) | \
> >> +	 (RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ) | \
> >> +	 (RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR))
> >>
> >>  #define RTE_EVENT_CRYPTO_ADAPTER_SW_CAP \
> >>  		RTE_EVENT_CRYPTO_ADAPTER_CAP_SESSION_PRIVATE_DATA
> >> diff --git a/lib/librte_eventdev/rte_event_eth_rx_adapter.c b/lib/librte_eventdev/rte_event_eth_rx_adapter.c
> >> index ac8ba5bf0..c71990078 100644
> >> --- a/lib/librte_eventdev/rte_event_eth_rx_adapter.c
> >> +++ b/lib/librte_eventdev/rte_event_eth_rx_adapter.c
> >> @@ -26,6 +26,10 @@
> >>  #define BATCH_SIZE 32
> >>  #define BLOCK_CNT_THRESHOLD 10
> >>  #define ETH_EVENT_BUFFER_SIZE (4*BATCH_SIZE)
> >> +#define MAX_VECTOR_SIZE 1024
> >> +#define MIN_VECTOR_SIZE 4
> >> +#define MAX_VECTOR_NS 1E9
> >> +#define MIN_VECTOR_NS 1E5
> >>
> >>  #define ETH_RX_ADAPTER_SERVICE_NAME_LEN 32
> >>  #define ETH_RX_ADAPTER_MEM_NAME_LEN 32
> >> @@ -59,6 +63,20 @@ struct eth_rx_poll_entry {
> >>  	uint16_t eth_rx_qid;
> >>  };
> >>
> >> +struct eth_rx_vector_data {
> >> +	TAILQ_ENTRY(eth_rx_vector_data) next;
> >> +	uint16_t port;
> >> +	uint16_t queue;
> >> +	uint16_t max_vector_count;
> >> +	uint64_t event;
> >> +	uint64_t ts;
> >> +	uint64_t vector_timeout_ticks;
> >> +	struct rte_mempool *vector_pool;
> >> +	struct rte_event_vector *vector_ev;
> >> +} __rte_cache_aligned;
> >> +
> >> +TAILQ_HEAD(eth_rx_vector_data_list, eth_rx_vector_data);
> >> +
> >>  /* Instance per adapter */
> >>  struct rte_eth_event_enqueue_buffer {
> >>  	/* Count of events in this buffer */
> >> @@ -92,6 +110,14 @@ struct rte_event_eth_rx_adapter {
> >>  	uint32_t wrr_pos;
> >>  	/* Event burst buffer */
> >>  	struct rte_eth_event_enqueue_buffer event_enqueue_buffer;
> >> +	/* Vector enable flag */
> >> +	uint8_t ena_vector;
> >> +	/* Timestamp of previous vector expiry list traversal */
> >> +	uint64_t prev_expiry_ts;
> >> +	/* Minimum ticks to wait before traversing expiry list */
> >> +	uint64_t vector_tmo_ticks;
> >> +	/* vector list */
> >> +	struct eth_rx_vector_data_list vector_list;
> >>  	/* Per adapter stats */
> >>  	struct rte_event_eth_rx_adapter_stats stats;
> >>  	/* Block count, counts up to BLOCK_CNT_THRESHOLD */
> >> @@ -198,9 +224,11 @@ struct eth_device_info {
> >>  struct eth_rx_queue_info {
> >>  	int queue_enabled;	/* True if added */
> >>  	int intr_enabled;
> >> +	uint8_t ena_vector;
> >>  	uint16_t wt;		/* Polling weight */
> >>  	uint32_t flow_id_mask;	/* Set to ~0 if app provides flow id else 0 */
> >>  	uint64_t event;
> >> +	struct eth_rx_vector_data vector_data;
> >>  };
> >>
> >>  static struct rte_event_eth_rx_adapter **event_eth_rx_adapter;
> >> @@ -722,6 +750,9 @@ rxa_flush_event_buffer(struct rte_event_eth_rx_adapter *rx_adapter)
> >>  		&rx_adapter->event_enqueue_buffer;
> >>  	struct rte_event_eth_rx_adapter_stats *stats = &rx_adapter->stats;
> >>
> >> +	if (!buf->count)
> >> +		return 0;
> >> +
> >>  	uint16_t n = rte_event_enqueue_new_burst(rx_adapter->eventdev_id,
> >>  					rx_adapter->event_port_id,
> >>  					buf->events,
> >> @@ -742,6 +773,72 @@ rxa_flush_event_buffer(struct rte_event_eth_rx_adapter *rx_adapter)
> >>  	return n;
> >>  }
> >>
> >> +static inline uint16_t
> >> +rxa_create_event_vector(struct rte_event_eth_rx_adapter *rx_adapter,
> >> +			struct eth_rx_queue_info *queue_info,
> >> +			struct rte_eth_event_enqueue_buffer *buf,
> >> +			struct rte_mbuf **mbufs, uint16_t num)
> >> +{
> >> +	struct rte_event *ev = &buf->events[buf->count];
> >> +	struct eth_rx_vector_data *vec;
> >> +	uint16_t filled, space, sz;
> >> +
> >> +	filled = 0;
> >> +	vec = &queue_info->vector_data;
> >> +	while (num) {
> >> +		if (vec->vector_ev == NULL) {
> >> +			if (rte_mempool_get(vec->vector_pool,
> >> +					    (void **)&vec->vector_ev) < 0) {
> >> +				rte_pktmbuf_free_bulk(mbufs, num);
> >> +				return 0;
> >> +			}
> >> +			vec->vector_ev->nb_elem = 0;
> >> +			vec->vector_ev->port = vec->port;
> >> +			vec->vector_ev->queue = vec->queue;
> >> +			vec->vector_ev->attr_valid = true;
> >> +			TAILQ_INSERT_TAIL(&rx_adapter->vector_list, vec, next);
> >> +		} else if (vec->vector_ev->nb_elem == vec->max_vector_count) {
> >
> >Is there a case where nb_elem > max_vector_count as we accumulate
> >sz to it ?

> I don't think so, that would overflow the vector event.

> >> +			/* Event ready. */
> >> +			ev->event = vec->event;
> >> +			ev->vec = vec->vector_ev;
> >> +			ev++;
> >> +			filled++;
> >> +			vec->vector_ev = NULL;
> >> +			TAILQ_REMOVE(&rx_adapter->vector_list, vec, next);
> >> +			if (rte_mempool_get(vec->vector_pool,
> >> +					    (void **)&vec->vector_ev) < 0) {
> >> +				rte_pktmbuf_free_bulk(mbufs, num);
> >> +				return 0;
> >> +			}
> >> +			vec->vector_ev->nb_elem = 0;
> >> +			vec->vector_ev->port = vec->port;
> >> +			vec->vector_ev->queue = vec->queue;
> >> +			vec->vector_ev->attr_valid = true;
> >> +			TAILQ_INSERT_TAIL(&rx_adapter->vector_list, vec, next);
> >> +		}
> >> +
> >> +		space = vec->max_vector_count - vec->vector_ev->nb_elem;
> >> +		sz = num > space ? space : num;
> >> +		memcpy(vec->vector_ev->mbufs + vec->vector_ev->nb_elem, mbufs,
> >> +		       sizeof(void *) * sz);
> >> +		vec->vector_ev->nb_elem += sz;
> >> +		num -= sz;
> >> +		mbufs += sz;
> >> +		vec->ts = rte_rdtsc();
> >> +	}
> >> +
> >> +	if (vec->vector_ev->nb_elem == vec->max_vector_count) {
> >
> >Same here.
> >
> >> +		ev->event = vec->event;
> >> +		ev->vec = vec->vector_ev;
> >> +		ev++;
> >> +		filled++;
> >> +		vec->vector_ev = NULL;
> >> +		TAILQ_REMOVE(&rx_adapter->vector_list, vec, next);
> >> +	}
> >> +
> >> +	return filled;
> >> +}
> >
> >I am seeing more than one repeating code chunk in this function.
> >Perhaps, you can give it a try to not repeat. We can drop it if it's
> >performance affecting.

> I will try to move them to inline functions and test.
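
Sounds good. For reference, the repeated allocate-and-init chunk could
collapse into one helper along these lines (untested sketch; the helper
name rxa_init_vector is just a placeholder, not part of this patch):

static inline int
rxa_init_vector(struct rte_event_eth_rx_adapter *rx_adapter,
		struct eth_rx_vector_data *vec)
{
	/* Grab a fresh vector event from the pool and seed its header. */
	if (rte_mempool_get(vec->vector_pool, (void **)&vec->vector_ev) < 0)
		return -ENOMEM;
	vec->vector_ev->nb_elem = 0;
	vec->vector_ev->port = vec->port;
	vec->vector_ev->queue = vec->queue;
	vec->vector_ev->attr_valid = true;
	TAILQ_INSERT_TAIL(&rx_adapter->vector_list, vec, next);
	return 0;
}

Both call sites would then just free the mbufs and return 0 on failure.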

> >> +
> >>  static inline void
> >>  rxa_buffer_mbufs(struct rte_event_eth_rx_adapter *rx_adapter,
> >>  		uint16_t eth_dev_id,
> >> @@ -770,25 +867,30 @@ rxa_buffer_mbufs(struct rte_event_eth_rx_adapter *rx_adapter,
> >>  	rss_mask = ~(((m->ol_flags & PKT_RX_RSS_HASH) != 0) - 1);
> >>  	do_rss = !rss_mask && !eth_rx_queue_info->flow_id_mask;
> >
> >The RSS related code is executed for the vector case as well. Can this be
> >moved inside the ena_vector if condition ?

> RSS is used to generate the event flow id; in the vector case the flow id
> will be a combination of port and queue id. The idea is that flows having
> the same RSS LSB will end up in the same queue.

I meant to say, rss_mask and do_rss are used only when ena_vector is false.
Could they be moved inside the appropriate condition ?
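
Something like this is what I had in mind (sketch against this patch,
untested):

	if (!eth_rx_queue_info->ena_vector) {
		/* rss_mask/do_rss feed only the per-mbuf flow id path. */
		rss_mask = ~(((m->ol_flags & PKT_RX_RSS_HASH) != 0) - 1);
		do_rss = !rss_mask && !eth_rx_queue_info->flow_id_mask;
		for (i = 0; i < num; i++) {
			m = mbufs[i];
			rss = do_rss ? rxa_do_softrss(m, rx_adapter->rss_key_be)
				     : m->hash.rss;
			ev->event = event;
			ev->flow_id = (rss & ~flow_id_mask) |
				      (ev->flow_id & flow_id_mask);
			ev->mbuf = m;
			ev++;
		}
	} else {
		num = rxa_create_event_vector(rx_adapter, eth_rx_queue_info,
					      buf, mbufs, num);
	}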

> >>
> >> -	for (i = 0; i < num; i++) {
> >> -		m = mbufs[i];
> >> -
> >> -		rss = do_rss ?
> >> -			rxa_do_softrss(m, rx_adapter->rss_key_be) :
> >> -			m->hash.rss;
> >> -		ev->event = event;
> >> -		ev->flow_id = (rss & ~flow_id_mask) |
> >> -				(ev->flow_id & flow_id_mask);
> >> -		ev->mbuf = m;
> >> -		ev++;
> >> +	if (!eth_rx_queue_info->ena_vector) {
> >> +		for (i = 0; i < num; i++) {
> >> +			m = mbufs[i];
> >> +
> >> +			rss = do_rss ? rxa_do_softrss(m, rx_adapter->rss_key_be)
> >> +				     : m->hash.rss;
> >> +			ev->event = event;
> >> +			ev->flow_id = (rss & ~flow_id_mask) |
> >> +				      (ev->flow_id & flow_id_mask);
> >> +			ev->mbuf = m;
> >> +			ev++;
> >> +		}
> >> +	} else {
> >> +		num = rxa_create_event_vector(rx_adapter, eth_rx_queue_info,
> >> +					      buf, mbufs, num);
> >>  	}
> >>
> >> -	if (dev_info->cb_fn) {
> >> +	if (num && dev_info->cb_fn) {
> >>
> >>  		dropped = 0;
> >>  		nb_cb = dev_info->cb_fn(eth_dev_id, rx_queue_id,
> >> -					ETH_EVENT_BUFFER_SIZE, buf->count, ev,
> >> -					num, dev_info->cb_arg, &dropped);
> >> +					ETH_EVENT_BUFFER_SIZE, buf->count,
> >> +					&buf->events[buf->count], num,
> >> +					dev_info->cb_arg, &dropped);
> >
> >Before this patch, we pass ev, which is &buf->events[buf->count] + num,
> >as the fifth param when calling cb_fn. Now, we are passing
> >&buf->events[buf->count] for the non-vector case. Do you see this as an issue?

> The callback function takes in the array of newly formed events, i.e. we
> need to pass the start of the array and the count.
>
> The previous code had a bug where it passed the end of the event list.

ok, that makes sense.

> >Also, for the vector case would it make sense to pass
> >&buf->events[buf->count] + num ?
> >
> >>  		if (unlikely(nb_cb > num))
> >>  			RTE_EDEV_LOG_ERR("Rx CB returned %d (> %d) events",
> >>  					 nb_cb, num);
> >> @@ -1124,6 +1226,30 @@ rxa_poll(struct rte_event_eth_rx_adapter *rx_adapter)
> >>  	return nb_rx;
> >>  }
> >>
> >> +static void
> >> +rxa_vector_expire(struct eth_rx_vector_data *vec, void *arg)
> >> +{
> >> +	struct rte_event_eth_rx_adapter *rx_adapter = arg;
> >> +	struct rte_eth_event_enqueue_buffer *buf =
> >> +		&rx_adapter->event_enqueue_buffer;
> >> +	struct rte_event *ev;
> >> +
> >> +	if (buf->count)
> >> +		rxa_flush_event_buffer(rx_adapter);
> >> +
> >> +	if (vec->vector_ev->nb_elem == 0)
> >> +		return;
> >> +	ev = &buf->events[buf->count];
> >> +
> >> +	/* Event ready. */
> >> +	ev->event = vec->event;
> >> +	ev->vec = vec->vector_ev;
> >> +	buf->count++;
> >> +
> >> +	vec->vector_ev = NULL;
> >> +	vec->ts = 0;
> >> +}
> >> +
> >>  static int
> >>  rxa_service_func(void *args)
> >>  {
> >> @@ -1137,6 +1263,24 @@ rxa_service_func(void *args)
> >>  		return 0;
> >>  	}
> >>
> >> +	if (rx_adapter->ena_vector) {
> >> +		if ((rte_rdtsc() - rx_adapter->prev_expiry_ts) >=
> >> +		    rx_adapter->vector_tmo_ticks) {
> >> +			struct eth_rx_vector_data *vec;
> >> +
> >> +			TAILQ_FOREACH(vec, &rx_adapter->vector_list, next) {
> >> +				uint64_t elapsed_time = rte_rdtsc() - vec->ts;
> >> +
> >> +				if (elapsed_time >= vec->vector_timeout_ticks) {
> >> +					rxa_vector_expire(vec, rx_adapter);
> >> +					TAILQ_REMOVE(&rx_adapter->vector_list,
> >> +						     vec, next);
> >> +				}
> >> +			}
> >> +			rx_adapter->prev_expiry_ts = rte_rdtsc();
> >> +		}
> >> +	}
> >> +
> >>  	stats = &rx_adapter->stats;
> >>  	stats->rx_packets += rxa_intr_ring_dequeue(rx_adapter);
> >>  	stats->rx_packets += rxa_poll(rx_adapter);
> >> @@ -1640,6 +1784,28 @@ rxa_update_queue(struct rte_event_eth_rx_adapter *rx_adapter,
> >>  	}
> >>  }
> >>
> >> +static void
> >> +rxa_set_vector_data(struct eth_rx_queue_info *queue_info, uint16_t vector_count,
> >> +		    uint64_t vector_ns, struct rte_mempool *mp, int32_t qid,
> >> +		    uint16_t port_id)
> >> +{
> >> +#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
> >> +	struct eth_rx_vector_data *vector_data;
> >> +	uint32_t flow_id;
> >> +
> >> +	vector_data = &queue_info->vector_data;
> >> +	vector_data->max_vector_count = vector_count;
> >> +	vector_data->port = port_id;
> >> +	vector_data->queue = qid;
> >> +	vector_data->vector_pool = mp;
> >> +	vector_data->vector_timeout_ticks =
> >> +		NSEC2TICK(vector_ns, rte_get_timer_hz());
> >> +	vector_data->ts = 0;
> >> +	flow_id = queue_info->event & 0xFFFFF;
> >> +	flow_id = flow_id == 0 ? (qid & 0xFF) | (port_id & 0xFFFF) : flow_id;
> >
> >Maybe I am missing something here. Looking at the code, it looks like
> >qid and port_id may overlap. For e.g., if qid = 0x10 and port_id = 0x11,
> >flow_id would end up being 0x11. Is this the expectation? Also, it may
> >be useful to document the flow_id format.

> The flow_id is 20 bit, I guess we could do 12-bit queue_id and 8-bit port
> as a flow.

This sounds reasonable to me. It would be useful to have the flow_id format,
and how it is used for vectorization, in the Rx/Tx adapter documentation.
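
For the record, my reading of the 20-bit layout proposed above would be
roughly this (sketch only; the masks and shifts are my interpretation,
not from this patch):

	/* flow_id[19:8] = queue id (12 bits), flow_id[7:0] = port id (8 bits) */
	flow_id = queue_info->event & 0xFFFFF;
	if (flow_id == 0)
		flow_id = ((qid & 0xFFF) << 8) | (port_id & 0xFF);

That would keep the queue and port bits from overlapping.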

> >Comparing this format with the existing RSS hash based method, are we
> >saying that all mbufs received in an rx burst are part of the same flow
> >when vectorization is used?

> Yes, the hard way to do this is to use a hash table and treat each
> mbuf as having a unique flow.
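
Understood. To capture the alternative for the archives: the "hard way"
would key one open vector per flow off the RSS hash, e.g. via a hash
table (rough sketch only; the rte_hash usage is illustrative and
rxa_vector_alloc is a hypothetical helper, none of this is in the patch):

static struct eth_rx_vector_data *
rxa_flow_vector_lookup(struct rte_hash *h, uint32_t rss)
{
	struct eth_rx_vector_data *vec;

	/* Reuse the open vector for this flow, or start a new one. */
	if (rte_hash_lookup_data(h, &rss, (void **)&vec) < 0) {
		vec = rxa_vector_alloc();	/* hypothetical */
		if (vec != NULL)
			rte_hash_add_key_data(h, &rss, vec);
	}
	return vec;
}

Given the per-burst lookup cost, the per (port, queue) aggregation in
this patch looks like a sensible default.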

> >> +	vector_data->event = (queue_info->event & ~0xFFFFF) | flow_id;
> >> +}
> >> +
> >>  static void
> >>  rxa_sw_del(struct rte_event_eth_rx_adapter *rx_adapter,
> >>  	   struct eth_device_info *dev_info,
> >> @@ -1741,6 +1907,44 @@ rxa_add_queue(struct rte_event_eth_rx_adapter *rx_adapter,
> >>  	}
> >>  }
> >>
> >> +static void
> >> +rxa_sw_event_vector_configure(
> >> +	struct rte_event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id,
> >> +	int rx_queue_id,
> >> +	const struct rte_event_eth_rx_adapter_event_vector_config *config)
> >> +{
> >> +	struct eth_device_info *dev_info = &rx_adapter->eth_devices[eth_dev_id];
> >> +	struct eth_rx_queue_info *queue_info;
> >> +	struct rte_event *qi_ev;
> >> +
> >> +	if (rx_queue_id == -1) {
> >> +		uint16_t nb_rx_queues;
> >> +		uint16_t i;
> >> +
> >> +		nb_rx_queues = dev_info->dev->data->nb_rx_queues;
> >> +		for (i = 0; i < nb_rx_queues; i++)
> >> +			rxa_sw_event_vector_configure(rx_adapter, eth_dev_id, i,
> >> +						      config);
> >> +		return;
> >> +	}
> >> +
> >> +	queue_info = &dev_info->rx_queue[rx_queue_id];
> >> +	qi_ev = (struct rte_event *)&queue_info->event;
> >> +	queue_info->ena_vector = 1;
> >> +	qi_ev->event_type = RTE_EVENT_TYPE_ETH_RX_ADAPTER_VECTOR;
> >> +	rxa_set_vector_data(queue_info, config->vector_sz,
> >> +			    config->vector_timeout_ns, config->vector_mp,
> >> +			    rx_queue_id, dev_info->dev->data->port_id);
> >> +	rx_adapter->ena_vector = 1;
> >> +	rx_adapter->vector_tmo_ticks =
> >> +		rx_adapter->vector_tmo_ticks ?
> >> +			      RTE_MIN(config->vector_timeout_ns << 1,
> >> +				      rx_adapter->vector_tmo_ticks) :
> >> +			      config->vector_timeout_ns << 1;
> >> +	rx_adapter->prev_expiry_ts = 0;
> >> +	TAILQ_INIT(&rx_adapter->vector_list);
> >> +}
> >> +
> >>  static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter,
> >>  		      uint16_t eth_dev_id,
> >>  		      int rx_queue_id,
> >> @@ -2081,6 +2285,15 @@ rte_event_eth_rx_adapter_queue_add(uint8_t id,
> >>  		return -EINVAL;
> >>  	}
> >>
> >> +	if ((cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR) == 0 &&
> >> +	    (queue_conf->rx_queue_flags &
> >> +	     RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR)) {
> >> +		RTE_EDEV_LOG_ERR("Event vectorization is not supported,"
> >> +				 " eth port: %" PRIu16 " adapter id: %" PRIu8,
> >> +				 eth_dev_id, id);
> >> +		return -EINVAL;
> >> +	}
> >> +
> >>  	if ((cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ) == 0 &&
> >>  	    (rx_queue_id != -1)) {
> >>  		RTE_EDEV_LOG_ERR("Rx queues can only be connected to single "
> >> @@ -2143,6 +2356,17 @@ rte_event_eth_rx_adapter_queue_add(uint8_t id,
> >>  	return 0;
> >>  }
> >>
> >> +static int
> >> +rxa_sw_vector_limits(struct rte_event_eth_rx_adapter_vector_limits *limits)
> >> +{
> >> +	limits->max_sz = MAX_VECTOR_SIZE;
> >> +	limits->min_sz = MIN_VECTOR_SIZE;
> >> +	limits->max_timeout_ns = MAX_VECTOR_NS;
> >> +	limits->min_timeout_ns = MIN_VECTOR_NS;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >>  int
> >>  rte_event_eth_rx_adapter_queue_del(uint8_t id, uint16_t eth_dev_id,
> >>  				   int32_t rx_queue_id)
> >> @@ -2333,7 +2557,8 @@ rte_event_eth_rx_adapter_queue_event_vector_config(
> >>  		ret = dev->dev_ops->eth_rx_adapter_event_vector_config(
> >>  			dev, &rte_eth_devices[eth_dev_id], rx_queue_id, config);
> >>  	} else {
> >> -		ret = -ENOTSUP;
> >> +		rxa_sw_event_vector_configure(rx_adapter, eth_dev_id,
> >> +					      rx_queue_id, config);
> >>  	}
> >>
> >>  	return ret;
> >> @@ -2371,7 +2596,7 @@ rte_event_eth_rx_adapter_vector_limits_get(
> >>  		ret = dev->dev_ops->eth_rx_adapter_vector_limits_get(
> >>  			dev, &rte_eth_devices[eth_port_id], limits);
> >>  	} else {
> >> -		ret = -ENOTSUP;
> >> +		ret = rxa_sw_vector_limits(limits);
> >>  	}
> >>
> >>  	return ret;
> >> diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
> >> index f95edc075..254a31b1f 100644
> >> --- a/lib/librte_eventdev/rte_eventdev.c
> >> +++ b/lib/librte_eventdev/rte_eventdev.c
> >> @@ -122,7 +122,11 @@ rte_event_eth_rx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
> >>
> >>  	if (caps == NULL)
> >>  		return -EINVAL;
> >> -	*caps = 0;
> >> +
> >> +	if (dev->dev_ops->eth_rx_adapter_caps_get == NULL)
> >> +		*caps = RTE_EVENT_ETH_RX_ADAPTER_SW_CAP;
> >> +	else
> >> +		*caps = 0;
> >
> >Any reason why we had to set a default caps value? I am thinking if a sw
> >event device is used, it would set it anyway.

> There are multiple sw event devices which don't implement the caps_get
> function; this change solves that.

> >>
> >>  	return dev->dev_ops->eth_rx_adapter_caps_get ?
> >>  			(*dev->dev_ops->eth_rx_adapter_caps_get)(dev,
> >> --
> >> 2.17.1