From: "Jayatheerthan, Jay"
To: Pavan Nikhilesh Bhagavatula, Jerin Jacob Kollanukkaran, "Carrillo, Erik G", "Gujjar, Abhinandan S", "McDaniel, Timothy", hemant.agrawal@nxp.com, "Van Haaren, Harry", mattias.ronnblom, "Ma, Liang J", Ray Kinsella, Neil Horman
CC: dev@dpdk.org
Date: Fri, 26 Mar 2021 07:09:01 +0000
References: <20210319205718.1436-1-pbhagavatula@marvell.com> <20210324050525.4489-1-pbhagavatula@marvell.com> <20210324050525.4489-9-pbhagavatula@marvell.com>
Subject: Re: [dpdk-dev] [dpdk-dev v21.11] [PATCH v5 8/8] eventdev: simplify Rx adapter event vector config
> -----Original Message-----
> From: Pavan Nikhilesh Bhagavatula
> Sent: Thursday, March 25, 2021 7:25 PM
> To: Jayatheerthan, Jay; Jerin Jacob Kollanukkaran; Carrillo, Erik G;
> Gujjar, Abhinandan S; McDaniel, Timothy; hemant.agrawal@nxp.com;
> Van Haaren, Harry <harry.van.haaren@intel.com>; mattias.ronnblom;
> Ma, Liang J; Ray Kinsella; Neil Horman
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev v21.11] [PATCH v5 8/8] eventdev: simplify Rx adapter event vector config
>
> >> From: pbhagavatula@marvell.com
> >> Sent: Wednesday, March 24, 2021 10:35 AM
> >> To: jerinj@marvell.com; Jayatheerthan, Jay; Carrillo, Erik G;
> >> Gujjar, Abhinandan S; McDaniel, Timothy; hemant.agrawal@nxp.com;
> >> Van Haaren, Harry; mattias.ronnblom; Ma, Liang J; Ray Kinsella;
> >> Neil Horman
> >> Cc: dev@dpdk.org; Pavan Nikhilesh
> >> Subject: [dpdk-dev v21.11] [PATCH v5 8/8] eventdev: simplify Rx adapter event vector config
> >>
> >> From: Pavan Nikhilesh
> >>
> >> Include the vector configuration in the structure
> >> ``rte_event_eth_rx_adapter_queue_conf`` used when configuring the rest
> >> of the Rx adapter's ethernet device Rx queue parameters. This
> >> simplifies event vector configuration, as it avoids splitting the
> >> configuration per Rx queue.
> >> > >> Signed-off-by: Pavan Nikhilesh > >> --- > >> app/test-eventdev/test_pipeline_common.c | 16 +- > >> lib/librte_eventdev/eventdev_pmd.h | 29 --- > >> .../rte_event_eth_rx_adapter.c | 168 ++++++-----------= - > >> .../rte_event_eth_rx_adapter.h | 27 --- > >> lib/librte_eventdev/version.map | 1 - > >> 5 files changed, 57 insertions(+), 184 deletions(-) > >> > >> diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test- > >eventdev/test_pipeline_common.c > >> index d5ef90500..76aee254b 100644 > >> --- a/app/test-eventdev/test_pipeline_common.c > >> +++ b/app/test-eventdev/test_pipeline_common.c > >> @@ -331,7 +331,6 @@ pipeline_event_rx_adapter_setup(struct > >evt_options *opt, uint8_t stride, > >> uint16_t prod; > >> struct rte_mempool *vector_pool =3D NULL; > >> struct rte_event_eth_rx_adapter_queue_conf queue_conf; > >> - struct rte_event_eth_rx_adapter_event_vector_config > >vec_conf; > >> > >> memset(&queue_conf, 0, > >> sizeof(struct > >rte_event_eth_rx_adapter_queue_conf)); > >> @@ -397,8 +396,12 @@ pipeline_event_rx_adapter_setup(struct > >evt_options *opt, uint8_t stride, > >> } > >> > >> if (cap & > >RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR) { > >> + queue_conf.vector_sz =3D opt- > >>vector_size; > >> + queue_conf.vector_timeout_ns =3D > >> + opt->vector_tmo_nsec; > >> queue_conf.rx_queue_flags |=3D > >> > > RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR; > >> + queue_conf.vector_mp =3D vector_pool; > >> } else { > >> evt_err("Rx adapter doesn't support > >event vector"); > >> return -EINVAL; > >> @@ -418,17 +421,6 @@ pipeline_event_rx_adapter_setup(struct > >evt_options *opt, uint8_t stride, > >> return ret; > >> } > >> > >> - if (opt->ena_vector) { > >> - vec_conf.vector_sz =3D opt->vector_size; > >> - vec_conf.vector_timeout_ns =3D opt- > >>vector_tmo_nsec; > >> - vec_conf.vector_mp =3D vector_pool; > >> - if > >(rte_event_eth_rx_adapter_queue_event_vector_config( > >> - prod, prod, -1, &vec_conf) < 0) { > >> - evt_err("Failed to configure 
event > >vectorization for Rx adapter"); > >> - return -EINVAL; > >> - } > >> - } > >> - > >> if (!(cap & > >RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT)) { > >> uint32_t service_id =3D -1U; > >> > >> diff --git a/lib/librte_eventdev/eventdev_pmd.h > >b/lib/librte_eventdev/eventdev_pmd.h > >> index 0f724ac85..63b3bc4b5 100644 > >> --- a/lib/librte_eventdev/eventdev_pmd.h > >> +++ b/lib/librte_eventdev/eventdev_pmd.h > >> @@ -667,32 +667,6 @@ typedef int > >(*eventdev_eth_rx_adapter_vector_limits_get_t)( > >> const struct rte_eventdev *dev, const struct rte_eth_dev > >*eth_dev, > >> struct rte_event_eth_rx_adapter_vector_limits *limits); > >> > >> -struct rte_event_eth_rx_adapter_event_vector_config; > >> -/** > >> - * Enable event vector on an given Rx queue of a ethernet devices > >belonging to > >> - * the Rx adapter. > >> - * > >> - * @param dev > >> - * Event device pointer > >> - * > >> - * @param eth_dev > >> - * Ethernet device pointer > >> - * > >> - * @param rx_queue_id > >> - * The Rx queue identifier > >> - * > >> - * @param config > >> - * Pointer to the event vector configuration structure. > >> - * > >> - * @return > >> - * - 0: Success. > >> - * - <0: Error code returned by the driver function. 
> >> - */ > >> -typedef int (*eventdev_eth_rx_adapter_event_vector_config_t)( > >> - const struct rte_eventdev *dev, const struct rte_eth_dev > >*eth_dev, > >> - int32_t rx_queue_id, > >> - const struct rte_event_eth_rx_adapter_event_vector_config > >*config); > >> - > >> typedef uint32_t rte_event_pmd_selftest_seqn_t; > >> extern int rte_event_pmd_selftest_seqn_dynfield_offset; > >> > >> @@ -1118,9 +1092,6 @@ struct rte_eventdev_ops { > >> eventdev_eth_rx_adapter_vector_limits_get_t > >> eth_rx_adapter_vector_limits_get; > >> /**< Get event vector limits for the Rx adapter */ > >> - eventdev_eth_rx_adapter_event_vector_config_t > >> - eth_rx_adapter_event_vector_config; > >> - /**< Configure Rx adapter with event vector */ > >> > >> eventdev_timer_adapter_caps_get_t timer_adapter_caps_get; > >> /**< Get timer adapter capabilities */ > >> diff --git a/lib/librte_eventdev/rte_event_eth_rx_adapter.c > >b/lib/librte_eventdev/rte_event_eth_rx_adapter.c > >> index c71990078..a1990637f 100644 > >> --- a/lib/librte_eventdev/rte_event_eth_rx_adapter.c > >> +++ b/lib/librte_eventdev/rte_event_eth_rx_adapter.c > >> @@ -1882,6 +1882,25 @@ rxa_add_queue(struct > >rte_event_eth_rx_adapter *rx_adapter, > >> } else > >> qi_ev->flow_id =3D 0; > >> > >> + if (conf->rx_queue_flags & > >> + RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR) { > >> + queue_info->ena_vector =3D 1; > >> + qi_ev->event_type =3D > >RTE_EVENT_TYPE_ETH_RX_ADAPTER_VECTOR; > >> + rxa_set_vector_data(queue_info, conf->vector_sz, > >> + conf->vector_timeout_ns, conf- > >>vector_mp, > >> + rx_queue_id, dev_info->dev->data- > >>port_id); > >> + rx_adapter->ena_vector =3D 1; > >> + rx_adapter->vector_tmo_ticks =3D > >> + rx_adapter->vector_tmo_ticks > >> + ? RTE_MIN(queue_info->vector_data > >> + .vector_timeout_ticks, > >> + rx_adapter- > >>vector_tmo_ticks) > >> + : queue_info- > >>vector_data.vector_timeout_ticks; > >> + rx_adapter->vector_tmo_ticks <<=3D 1; > > > >Any reason why we left shift here ? 
Applicable in patch 4/8 as well.
>
> Just so that we have half the precision of the lowest timeout, which helps
> to maintain accuracy.

Maybe I am missing something here. E.g., if the lowest timeout is 20 ticks, would you want 10 or 40? Currently, it's set to 40.

>
> >
> >> +		TAILQ_INIT(&rx_adapter->vector_list);
> >
> >Can doing TAILQ_INIT every time a queue is added cause existing
> >elements to be wiped out? Applicable in patch 4/8 as well.
>
> I don't think queues can be added when the adapter is already started.
> rxa_sw_add isn't thread safe.

Yes, it takes the lock before calling rxa_sw_add. For the internal_port implementation, I don't see any lock being taken. Besides, since it's adapter-related, it would be more relevant in adapter create than in queue add.

>
> >
> >> +		rx_adapter->prev_expiry_ts = 0;
> >
> >Can setting this every time a queue is added affect existing queues
> >created with vector support and passing traffic? Applicable in patch 4/8
> >as well.
>
> Same as above.

Same reasoning as above.
> > > >> + } > >> + > >> rxa_update_queue(rx_adapter, dev_info, rx_queue_id, 1); > >> if (rxa_polled_queue(dev_info, rx_queue_id)) { > >> rx_adapter->num_rx_polled +=3D !pollq; > >> @@ -1907,44 +1926,6 @@ rxa_add_queue(struct > >rte_event_eth_rx_adapter *rx_adapter, > >> } > >> } > >> > >> -static void > >> -rxa_sw_event_vector_configure( > >> - struct rte_event_eth_rx_adapter *rx_adapter, uint16_t > >eth_dev_id, > >> - int rx_queue_id, > >> - const struct rte_event_eth_rx_adapter_event_vector_config > >*config) > >> -{ > >> - struct eth_device_info *dev_info =3D &rx_adapter- > >>eth_devices[eth_dev_id]; > >> - struct eth_rx_queue_info *queue_info; > >> - struct rte_event *qi_ev; > >> - > >> - if (rx_queue_id =3D=3D -1) { > >> - uint16_t nb_rx_queues; > >> - uint16_t i; > >> - > >> - nb_rx_queues =3D dev_info->dev->data->nb_rx_queues; > >> - for (i =3D 0; i < nb_rx_queues; i++) > >> - rxa_sw_event_vector_configure(rx_adapter, > >eth_dev_id, i, > >> - config); > >> - return; > >> - } > >> - > >> - queue_info =3D &dev_info->rx_queue[rx_queue_id]; > >> - qi_ev =3D (struct rte_event *)&queue_info->event; > >> - queue_info->ena_vector =3D 1; > >> - qi_ev->event_type =3D > >RTE_EVENT_TYPE_ETH_RX_ADAPTER_VECTOR; > >> - rxa_set_vector_data(queue_info, config->vector_sz, > >> - config->vector_timeout_ns, config- > >>vector_mp, > >> - rx_queue_id, dev_info->dev->data->port_id); > >> - rx_adapter->ena_vector =3D 1; > >> - rx_adapter->vector_tmo_ticks =3D > >> - rx_adapter->vector_tmo_ticks ? 
> >> -			RTE_MIN(config->vector_timeout_ns << 1,
> >> -				rx_adapter->vector_tmo_ticks) :
> >> -			config->vector_timeout_ns << 1;
> >> -	rx_adapter->prev_expiry_ts = 0;
> >> -	TAILQ_INIT(&rx_adapter->vector_list);
> >> -}
> >> -
> >> static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter,
> >> 		uint16_t eth_dev_id,
> >> 		int rx_queue_id,
> >> @@ -2258,6 +2239,7 @@ rte_event_eth_rx_adapter_queue_add(uint8_t id,
> >> 	struct rte_event_eth_rx_adapter *rx_adapter;
> >> 	struct rte_eventdev *dev;
> >> 	struct eth_device_info *dev_info;
> >> +	struct rte_event_eth_rx_adapter_vector_limits limits;
> >>
> >> 	RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
> >> 	RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_dev_id, -EINVAL);
> >> @@ -2294,6 +2276,39 @@ rte_event_eth_rx_adapter_queue_add(uint8_t id,
> >> 		return -EINVAL;
> >> 	}
> >>
> >> +	if (queue_conf->rx_queue_flags &
> >> +	    RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR) {
> >
> >Perhaps, you could move the previous if condition here and just check
> >for (cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR)
>
> No, the above check is not a capability check; it is to check whether the
> application requested to enable vectorization for this queue or not.

In the current code, the same condition check (queue_conf->rx_queue_flags & RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR) is done twice next to each other, which could be avoided:

if ((cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR) == 0 &&
---->	(queue_conf->rx_queue_flags &
	 RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR)) {
	RTE_EDEV_LOG_ERR("Event vectorization is not supported,"
			 " eth port: %" PRIu16 " adapter id: %" PRIu8,
			 eth_dev_id, id);
	return -EINVAL;
}

---->	if (queue_conf->rx_queue_flags &
	    RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR) {
	...
} >=20 > > > >> + ret =3D rte_event_eth_rx_adapter_vector_limits_get( > >> + rx_adapter->eventdev_id, eth_dev_id, &limits); > >> + if (ret < 0) { > >> + RTE_EDEV_LOG_ERR("Failed to get event > >device vector limits," > >> + " eth port: %" PRIu16 > >> + " adapter id: %" PRIu8, > >> + eth_dev_id, id); > >> + return -EINVAL; > >> + } > >> + if (queue_conf->vector_sz < limits.min_sz || > >> + queue_conf->vector_sz > limits.max_sz || > >> + queue_conf->vector_timeout_ns < > >limits.min_timeout_ns || > >> + queue_conf->vector_timeout_ns > > >limits.max_timeout_ns || > >> + queue_conf->vector_mp =3D=3D NULL) { > >> + RTE_EDEV_LOG_ERR("Invalid event vector > >configuration," > >> + " eth port: %" PRIu16 > >> + " adapter id: %" PRIu8, > >> + eth_dev_id, id); > >> + return -EINVAL; > >> + } > >> + if (queue_conf->vector_mp->elt_size < > >> + (sizeof(struct rte_event_vector) + > >> + (sizeof(uintptr_t) * queue_conf->vector_sz))) { > >> + RTE_EDEV_LOG_ERR("Invalid event vector > >configuration," > >> + " eth port: %" PRIu16 > >> + " adapter id: %" PRIu8, > >> + eth_dev_id, id); > >> + return -EINVAL; > >> + } > >> + } > >> + > >> if ((cap & > >RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ) =3D=3D 0 && > >> (rx_queue_id !=3D -1)) { > >> RTE_EDEV_LOG_ERR("Rx queues can only be connected > >to single " > >> @@ -2487,83 +2502,6 @@ > >rte_event_eth_rx_adapter_queue_del(uint8_t id, uint16_t eth_dev_id, > >> return ret; > >> } > >> > >> -int > >> -rte_event_eth_rx_adapter_queue_event_vector_config( > >> - uint8_t id, uint16_t eth_dev_id, int32_t rx_queue_id, > >> - struct rte_event_eth_rx_adapter_event_vector_config *config) > >> -{ > >> - struct rte_event_eth_rx_adapter_vector_limits limits; > >> - struct rte_event_eth_rx_adapter *rx_adapter; > >> - struct rte_eventdev *dev; > >> - uint32_t cap; > >> - int ret; > >> - > >> - RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, - > >EINVAL); > >> - RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_dev_id, -EINVAL); > >> - > >> - rx_adapter =3D 
rxa_id_to_adapter(id); > >> - if ((rx_adapter =3D=3D NULL) || (config =3D=3D NULL)) > >> - return -EINVAL; > >> - > >> - dev =3D &rte_eventdevs[rx_adapter->eventdev_id]; > >> - ret =3D rte_event_eth_rx_adapter_caps_get(rx_adapter- > >>eventdev_id, > >> - eth_dev_id, &cap); > >> - if (ret) { > >> - RTE_EDEV_LOG_ERR("Failed to get adapter caps edev > >%" PRIu8 > >> - "eth port %" PRIu16, > >> - id, eth_dev_id); > >> - return ret; > >> - } > >> - > >> - if (!(cap & > >RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR)) { > >> - RTE_EDEV_LOG_ERR("Event vectorization is not > >supported," > >> - " eth port: %" PRIu16 " adapter id: %" > >PRIu8, > >> - eth_dev_id, id); > >> - return -EINVAL; > >> - } > >> - > >> - ret =3D rte_event_eth_rx_adapter_vector_limits_get( > >> - rx_adapter->eventdev_id, eth_dev_id, &limits); > >> - if (ret) { > >> - RTE_EDEV_LOG_ERR("Failed to get vector limits edev > >%" PRIu8 > >> - "eth port %" PRIu16, > >> - rx_adapter->eventdev_id, eth_dev_id); > >> - return ret; > >> - } > >> - > >> - if (config->vector_sz < limits.min_sz || > >> - config->vector_sz > limits.max_sz || > >> - config->vector_timeout_ns < limits.min_timeout_ns || > >> - config->vector_timeout_ns > limits.max_timeout_ns || > >> - config->vector_mp =3D=3D NULL) { > >> - RTE_EDEV_LOG_ERR("Invalid event vector > >configuration," > >> - " eth port: %" PRIu16 " adapter id: %" > >PRIu8, > >> - eth_dev_id, id); > >> - return -EINVAL; > >> - } > >> - if (config->vector_mp->elt_size < > >> - (sizeof(struct rte_event_vector) + > >> - (sizeof(uintptr_t) * config->vector_sz))) { > >> - RTE_EDEV_LOG_ERR("Invalid event vector > >configuration," > >> - " eth port: %" PRIu16 " adapter id: %" > >PRIu8, > >> - eth_dev_id, id); > >> - return -EINVAL; > >> - } > >> - > >> - if (cap & > >RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT) { > >> - RTE_FUNC_PTR_OR_ERR_RET( > >> - *dev->dev_ops- > >>eth_rx_adapter_event_vector_config, > >> - -ENOTSUP); > >> - ret =3D dev->dev_ops- > 
>>eth_rx_adapter_event_vector_config( > >> - dev, &rte_eth_devices[eth_dev_id], > >rx_queue_id, config); > >> - } else { > >> - rxa_sw_event_vector_configure(rx_adapter, > >eth_dev_id, > >> - rx_queue_id, config); > >> - } > >> - > >> - return ret; > >> -} > >> - > >> int > >> rte_event_eth_rx_adapter_vector_limits_get( > >> uint8_t dev_id, uint16_t eth_port_id, > >> diff --git a/lib/librte_eventdev/rte_event_eth_rx_adapter.h > >b/lib/librte_eventdev/rte_event_eth_rx_adapter.h > >> index 7407cde00..3f8b36229 100644 > >> --- a/lib/librte_eventdev/rte_event_eth_rx_adapter.h > >> +++ b/lib/librte_eventdev/rte_event_eth_rx_adapter.h > >> @@ -171,9 +171,6 @@ struct rte_event_eth_rx_adapter_queue_conf > >{ > >> * The event adapter sets ev.event_type to > >RTE_EVENT_TYPE_ETHDEV in the > >> * enqueued event. > >> */ > >> -}; > >> - > >> -struct rte_event_eth_rx_adapter_event_vector_config { > >> uint16_t vector_sz; > >> /**< > >> * Indicates the maximum number for mbufs to combine and > >form a vector. > >> @@ -548,30 +545,6 @@ int > >rte_event_eth_rx_adapter_vector_limits_get( > >> uint8_t dev_id, uint16_t eth_port_id, > >> struct rte_event_eth_rx_adapter_vector_limits *limits); > >> > >> -/** > >> - * Configure event vectorization for a given ethernet device queue, > >that has > >> - * been added to a event eth Rx adapter. > >> - * > >> - * @param id > >> - * The identifier of the ethernet Rx event adapter. > >> - * > >> - * @param eth_dev_id > >> - * The identifier of the ethernet device. > >> - * > >> - * @param rx_queue_id > >> - * Ethernet device receive queue index. > >> - * If rx_queue_id is -1, then all Rx queues configured for the ether= net > >device > >> - * are configured with event vectorization. > >> - * > >> - * @return > >> - * - 0: Success, Receive queue configured correctly. > >> - * - <0: Error code on failure. 
> >> - */ > >> -__rte_experimental > >> -int rte_event_eth_rx_adapter_queue_event_vector_config( > >> - uint8_t id, uint16_t eth_dev_id, int32_t rx_queue_id, > >> - struct rte_event_eth_rx_adapter_event_vector_config > >*config); > >> - > >> #ifdef __cplusplus > >> } > >> #endif > >> diff --git a/lib/librte_eventdev/version.map > >b/lib/librte_eventdev/version.map > >> index 902df0ae3..34c1c830e 100644 > >> --- a/lib/librte_eventdev/version.map > >> +++ b/lib/librte_eventdev/version.map > >> @@ -142,7 +142,6 @@ EXPERIMENTAL { > >> #added in 21.05 > >> rte_event_vector_pool_create; > >> rte_event_eth_rx_adapter_vector_limits_get; > >> - rte_event_eth_rx_adapter_queue_event_vector_config; > >> }; > >> > >> INTERNAL { > >> -- > >> 2.17.1