From: "Jayatheerthan, Jay"
To: "pbhagavatula@marvell.com", "jerinj@marvell.com", "Carrillo, Erik G", "Gujjar, Abhinandan S", "McDaniel, Timothy", "hemant.agrawal@nxp.com", "Van Haaren, Harry", mattias.ronnblom, "Ma, Liang J", Ray Kinsella, Neil Horman
Cc: "dev@dpdk.org"
Date: Thu, 25 Mar 2021 12:27:31 +0000
References: <20210319205718.1436-1-pbhagavatula@marvell.com> <20210324050525.4489-1-pbhagavatula@marvell.com> <20210324050525.4489-9-pbhagavatula@marvell.com>
In-Reply-To: <20210324050525.4489-9-pbhagavatula@marvell.com>
Subject: Re: [dpdk-dev] [dpdk-dev v21.11] [PATCH v5 8/8] eventdev: simplify Rx adapter event vector config
List-Id: DPDK patches and discussions

> -----Original Message-----
> From: pbhagavatula@marvell.com
> Sent: Wednesday, March 24, 2021 10:35 AM
> To: jerinj@marvell.com; Jayatheerthan, Jay; Carrillo, Erik G; Gujjar, Abhinandan S; McDaniel, Timothy; hemant.agrawal@nxp.com; Van Haaren, Harry; mattias.ronnblom; Ma, Liang J; Ray Kinsella; Neil Horman
> Cc: dev@dpdk.org; Pavan Nikhilesh
> Subject: [dpdk-dev v21.11] [PATCH v5 8/8] eventdev: simplify Rx adapter event vector config
>
> From: Pavan Nikhilesh
>
> Include vector configuration into the structure
> ``rte_event_eth_rx_adapter_queue_conf`` used when configuring rest
> of the Rx adapter ethernet device Rx queue parameters.
> This simplifies event vector configuration as it avoids splitting
> configuration per Rx queue.
>
> Signed-off-by: Pavan Nikhilesh
> ---
>  app/test-eventdev/test_pipeline_common.c |  16 +-
>  lib/librte_eventdev/eventdev_pmd.h       |  29 ---
>  .../rte_event_eth_rx_adapter.c            | 168 ++++++------------
>  .../rte_event_eth_rx_adapter.h            |  27 ---
>  lib/librte_eventdev/version.map           |   1 -
>  5 files changed, 57 insertions(+), 184 deletions(-)
>
> diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
> index d5ef90500..76aee254b 100644
> --- a/app/test-eventdev/test_pipeline_common.c
> +++ b/app/test-eventdev/test_pipeline_common.c
> @@ -331,7 +331,6 @@ pipeline_event_rx_adapter_setup(struct evt_options *opt, uint8_t stride,
>  	uint16_t prod;
>  	struct rte_mempool *vector_pool = NULL;
>  	struct rte_event_eth_rx_adapter_queue_conf queue_conf;
> -	struct rte_event_eth_rx_adapter_event_vector_config vec_conf;
>
>  	memset(&queue_conf, 0,
>  			sizeof(struct rte_event_eth_rx_adapter_queue_conf));
> @@ -397,8 +396,12 @@ pipeline_event_rx_adapter_setup(struct evt_options *opt, uint8_t stride,
>  		}
>
>  		if (cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR) {
> +			queue_conf.vector_sz = opt->vector_size;
> +			queue_conf.vector_timeout_ns =
> +				opt->vector_tmo_nsec;
>  			queue_conf.rx_queue_flags |=
>  				RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR;
> +			queue_conf.vector_mp = vector_pool;
>  		} else {
>  			evt_err("Rx adapter doesn't support event vector");
>  			return -EINVAL;
> @@ -418,17 +421,6 @@ pipeline_event_rx_adapter_setup(struct evt_options *opt, uint8_t stride,
>  			return ret;
>  		}
>
> -		if (opt->ena_vector) {
> -			vec_conf.vector_sz = opt->vector_size;
> -			vec_conf.vector_timeout_ns = opt->vector_tmo_nsec;
> -			vec_conf.vector_mp = vector_pool;
> -			if (rte_event_eth_rx_adapter_queue_event_vector_config(
> -				    prod, prod, -1, &vec_conf) < 0) {
> -				evt_err("Failed to configure event vectorization for Rx adapter");
> -				return -EINVAL;
> -			}
> -		}
> -
>  		if (!(cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT)) {
>  			uint32_t service_id = -1U;
>
> diff --git a/lib/librte_eventdev/eventdev_pmd.h b/lib/librte_eventdev/eventdev_pmd.h
> index 0f724ac85..63b3bc4b5 100644
> --- a/lib/librte_eventdev/eventdev_pmd.h
> +++ b/lib/librte_eventdev/eventdev_pmd.h
> @@ -667,32 +667,6 @@ typedef int (*eventdev_eth_rx_adapter_vector_limits_get_t)(
>  	const struct rte_eventdev *dev, const struct rte_eth_dev *eth_dev,
>  	struct rte_event_eth_rx_adapter_vector_limits *limits);
>
> -struct rte_event_eth_rx_adapter_event_vector_config;
> -/**
> - * Enable event vector on an given Rx queue of a ethernet devices belonging to
> - * the Rx adapter.
> - *
> - * @param dev
> - *   Event device pointer
> - *
> - * @param eth_dev
> - *   Ethernet device pointer
> - *
> - * @param rx_queue_id
> - *   The Rx queue identifier
> - *
> - * @param config
> - *   Pointer to the event vector configuration structure.
> - *
> - * @return
> - *   - 0: Success.
> - *   - <0: Error code returned by the driver function.
> - */
> -typedef int (*eventdev_eth_rx_adapter_event_vector_config_t)(
> -	const struct rte_eventdev *dev, const struct rte_eth_dev *eth_dev,
> -	int32_t rx_queue_id,
> -	const struct rte_event_eth_rx_adapter_event_vector_config *config);
> -
>  typedef uint32_t rte_event_pmd_selftest_seqn_t;
>  extern int rte_event_pmd_selftest_seqn_dynfield_offset;
>
> @@ -1118,9 +1092,6 @@ struct rte_eventdev_ops {
>  	eventdev_eth_rx_adapter_vector_limits_get_t
>  		eth_rx_adapter_vector_limits_get;
>  	/**< Get event vector limits for the Rx adapter */
> -	eventdev_eth_rx_adapter_event_vector_config_t
> -		eth_rx_adapter_event_vector_config;
> -	/**< Configure Rx adapter with event vector */
>
>  	eventdev_timer_adapter_caps_get_t timer_adapter_caps_get;
>  	/**< Get timer adapter capabilities */
> diff --git a/lib/librte_eventdev/rte_event_eth_rx_adapter.c b/lib/librte_eventdev/rte_event_eth_rx_adapter.c
> index c71990078..a1990637f 100644
> --- a/lib/librte_eventdev/rte_event_eth_rx_adapter.c
> +++ b/lib/librte_eventdev/rte_event_eth_rx_adapter.c
> @@ -1882,6 +1882,25 @@ rxa_add_queue(struct rte_event_eth_rx_adapter *rx_adapter,
>  	} else
>  		qi_ev->flow_id = 0;
>
> +	if (conf->rx_queue_flags &
> +	    RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR) {
> +		queue_info->ena_vector = 1;
> +		qi_ev->event_type = RTE_EVENT_TYPE_ETH_RX_ADAPTER_VECTOR;
> +		rxa_set_vector_data(queue_info, conf->vector_sz,
> +				    conf->vector_timeout_ns, conf->vector_mp,
> +				    rx_queue_id, dev_info->dev->data->port_id);
> +		rx_adapter->ena_vector = 1;
> +		rx_adapter->vector_tmo_ticks =
> +			rx_adapter->vector_tmo_ticks
> +				? RTE_MIN(queue_info->vector_data
> +						  .vector_timeout_ticks,
> +					  rx_adapter->vector_tmo_ticks)
> +				: queue_info->vector_data.vector_timeout_ticks;
> +		rx_adapter->vector_tmo_ticks <<= 1;

Any reason why we left shift here? Applicable in patch 4/8 as well.

> +		TAILQ_INIT(&rx_adapter->vector_list);

Can doing TAILQ_INIT every time a queue is added cause existing elements to be wiped out? Applicable in patch 4/8 as well.

> +		rx_adapter->prev_expiry_ts = 0;

Can setting this every time a queue is added affect existing queues created with vector support and passing traffic? Applicable in patch 4/8 as well.
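To illustrate the concern, here is a rough, untested sketch of what I had in mind: initialize the adapter-wide vector state only for the first vector-enabled queue, using the fields already introduced by this patch (the guard itself is only a suggestion, and I have left out the extra left shift pending the question above):

	/* Only the first vector-enabled queue initializes adapter-wide
	 * state, so adding another queue does not wipe out vectors already
	 * linked on vector_list or reset the expiry bookkeeping of queues
	 * that are passing traffic.
	 */
	if (!rx_adapter->ena_vector) {
		TAILQ_INIT(&rx_adapter->vector_list);
		rx_adapter->prev_expiry_ts = 0;
		rx_adapter->ena_vector = 1;
	}
	rx_adapter->vector_tmo_ticks = rx_adapter->vector_tmo_ticks ?
		RTE_MIN(queue_info->vector_data.vector_timeout_ticks,
			rx_adapter->vector_tmo_ticks) :
		queue_info->vector_data.vector_timeout_ticks;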
> +	}
> +
>  	rxa_update_queue(rx_adapter, dev_info, rx_queue_id, 1);
>  	if (rxa_polled_queue(dev_info, rx_queue_id)) {
>  		rx_adapter->num_rx_polled += !pollq;
> @@ -1907,44 +1926,6 @@ rxa_add_queue(struct rte_event_eth_rx_adapter *rx_adapter,
>  	}
>  }
>
> -static void
> -rxa_sw_event_vector_configure(
> -	struct rte_event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id,
> -	int rx_queue_id,
> -	const struct rte_event_eth_rx_adapter_event_vector_config *config)
> -{
> -	struct eth_device_info *dev_info = &rx_adapter->eth_devices[eth_dev_id];
> -	struct eth_rx_queue_info *queue_info;
> -	struct rte_event *qi_ev;
> -
> -	if (rx_queue_id == -1) {
> -		uint16_t nb_rx_queues;
> -		uint16_t i;
> -
> -		nb_rx_queues = dev_info->dev->data->nb_rx_queues;
> -		for (i = 0; i < nb_rx_queues; i++)
> -			rxa_sw_event_vector_configure(rx_adapter, eth_dev_id, i,
> -						      config);
> -		return;
> -	}
> -
> -	queue_info = &dev_info->rx_queue[rx_queue_id];
> -	qi_ev = (struct rte_event *)&queue_info->event;
> -	queue_info->ena_vector = 1;
> -	qi_ev->event_type = RTE_EVENT_TYPE_ETH_RX_ADAPTER_VECTOR;
> -	rxa_set_vector_data(queue_info, config->vector_sz,
> -			    config->vector_timeout_ns, config->vector_mp,
> -			    rx_queue_id, dev_info->dev->data->port_id);
> -	rx_adapter->ena_vector = 1;
> -	rx_adapter->vector_tmo_ticks =
> -		rx_adapter->vector_tmo_ticks ?
> -			      RTE_MIN(config->vector_timeout_ns << 1,
> -				      rx_adapter->vector_tmo_ticks) :
> -			      config->vector_timeout_ns << 1;
> -	rx_adapter->prev_expiry_ts = 0;
> -	TAILQ_INIT(&rx_adapter->vector_list);
> -}
> -
>  static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter,
>  		      uint16_t eth_dev_id,
>  		      int rx_queue_id,
> @@ -2258,6 +2239,7 @@ rte_event_eth_rx_adapter_queue_add(uint8_t id,
>  	struct rte_event_eth_rx_adapter *rx_adapter;
>  	struct rte_eventdev *dev;
>  	struct eth_device_info *dev_info;
> +	struct rte_event_eth_rx_adapter_vector_limits limits;
>
>  	RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
>  	RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_dev_id, -EINVAL);
> @@ -2294,6 +2276,39 @@ rte_event_eth_rx_adapter_queue_add(uint8_t id,
>  		return -EINVAL;
>  	}
>
> +	if (queue_conf->rx_queue_flags &
> +	    RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR) {

Perhaps you could move the previous if condition here and just check for (cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR).

> +		ret = rte_event_eth_rx_adapter_vector_limits_get(
> +			rx_adapter->eventdev_id, eth_dev_id, &limits);
> +		if (ret < 0) {
> +			RTE_EDEV_LOG_ERR("Failed to get event device vector limits,"
> +					 " eth port: %" PRIu16
> +					 " adapter id: %" PRIu8,
> +					 eth_dev_id, id);
> +			return -EINVAL;
> +		}
> +		if (queue_conf->vector_sz < limits.min_sz ||
> +		    queue_conf->vector_sz > limits.max_sz ||
> +		    queue_conf->vector_timeout_ns < limits.min_timeout_ns ||
> +		    queue_conf->vector_timeout_ns > limits.max_timeout_ns ||
> +		    queue_conf->vector_mp == NULL) {
> +			RTE_EDEV_LOG_ERR("Invalid event vector configuration,"
> +					 " eth port: %" PRIu16
> +					 " adapter id: %" PRIu8,
> +					 eth_dev_id, id);
> +			return -EINVAL;
> +		}
> +		if (queue_conf->vector_mp->elt_size <
> +		    (sizeof(struct rte_event_vector) +
> +		     (sizeof(uintptr_t) * queue_conf->vector_sz))) {
> +			RTE_EDEV_LOG_ERR("Invalid event vector configuration,"
> +					 " eth port: %" PRIu16
> +					 " adapter id: %" PRIu8,
> +					 eth_dev_id, id);
> +			return -EINVAL;
> +		}
> +	}
> +
>  	if ((cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ) == 0 &&
>  	    (rx_queue_id != -1)) {
>  		RTE_EDEV_LOG_ERR("Rx queues can only be connected to single "
> @@ -2487,83 +2502,6 @@ rte_event_eth_rx_adapter_queue_del(uint8_t id, uint16_t eth_dev_id,
>  	return ret;
>  }
>
> -int
> -rte_event_eth_rx_adapter_queue_event_vector_config(
> -	uint8_t id, uint16_t eth_dev_id, int32_t rx_queue_id,
> -	struct rte_event_eth_rx_adapter_event_vector_config *config)
> -{
> -	struct rte_event_eth_rx_adapter_vector_limits limits;
> -	struct rte_event_eth_rx_adapter *rx_adapter;
> -	struct rte_eventdev *dev;
> -	uint32_t cap;
> -	int ret;
> -
> -	RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
> -	RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_dev_id, -EINVAL);
> -
> -	rx_adapter = rxa_id_to_adapter(id);
> -	if ((rx_adapter == NULL) || (config == NULL))
> -		return -EINVAL;
> -
> -	dev = &rte_eventdevs[rx_adapter->eventdev_id];
> -	ret = rte_event_eth_rx_adapter_caps_get(rx_adapter->eventdev_id,
> -						eth_dev_id, &cap);
> -	if (ret) {
> -		RTE_EDEV_LOG_ERR("Failed to get adapter caps edev %" PRIu8
> -				 "eth port %" PRIu16,
> -				 id, eth_dev_id);
> -		return ret;
> -	}
> -
> -	if (!(cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR)) {
> -		RTE_EDEV_LOG_ERR("Event vectorization is not supported,"
> -				 " eth port: %" PRIu16 " adapter id: %" PRIu8,
> -				 eth_dev_id, id);
> -		return -EINVAL;
> -	}
> -
> -	ret = rte_event_eth_rx_adapter_vector_limits_get(
> -		rx_adapter->eventdev_id, eth_dev_id, &limits);
> -	if (ret) {
> -		RTE_EDEV_LOG_ERR("Failed to get vector limits edev %" PRIu8
> -				 "eth port %" PRIu16,
> -				 rx_adapter->eventdev_id, eth_dev_id);
> -		return ret;
> -	}
> -
> -	if (config->vector_sz < limits.min_sz ||
> -	    config->vector_sz > limits.max_sz ||
> -	    config->vector_timeout_ns < limits.min_timeout_ns ||
> -	    config->vector_timeout_ns > limits.max_timeout_ns ||
> -	    config->vector_mp == NULL) {
> -		RTE_EDEV_LOG_ERR("Invalid event vector configuration,"
> -				 " eth port: %" PRIu16 " adapter id: %" PRIu8,
> -				 eth_dev_id, id);
> -		return -EINVAL;
> -	}
> -	if (config->vector_mp->elt_size <
> -	    (sizeof(struct rte_event_vector) +
> -	     (sizeof(uintptr_t) * config->vector_sz))) {
> -		RTE_EDEV_LOG_ERR("Invalid event vector configuration,"
> -				 " eth port: %" PRIu16 " adapter id: %" PRIu8,
> -				 eth_dev_id, id);
> -		return -EINVAL;
> -	}
> -
> -	if (cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT) {
> -		RTE_FUNC_PTR_OR_ERR_RET(
> -			*dev->dev_ops->eth_rx_adapter_event_vector_config,
> -			-ENOTSUP);
> -		ret = dev->dev_ops->eth_rx_adapter_event_vector_config(
> -			dev, &rte_eth_devices[eth_dev_id], rx_queue_id, config);
> -	} else {
> -		rxa_sw_event_vector_configure(rx_adapter, eth_dev_id,
> -					      rx_queue_id, config);
> -	}
> -
> -	return ret;
> -}
> -
>  int
>  rte_event_eth_rx_adapter_vector_limits_get(
>  	uint8_t dev_id, uint16_t eth_port_id,
> diff --git a/lib/librte_eventdev/rte_event_eth_rx_adapter.h b/lib/librte_eventdev/rte_event_eth_rx_adapter.h
> index 7407cde00..3f8b36229 100644
> --- a/lib/librte_eventdev/rte_event_eth_rx_adapter.h
> +++ b/lib/librte_eventdev/rte_event_eth_rx_adapter.h
> @@ -171,9 +171,6 @@ struct rte_event_eth_rx_adapter_queue_conf {
>  	 * The event adapter sets ev.event_type to RTE_EVENT_TYPE_ETHDEV in the
>  	 * enqueued event.
>  	 */
> -};
> -
> -struct rte_event_eth_rx_adapter_event_vector_config {
>  	uint16_t vector_sz;
>  	/**<
>  	 * Indicates the maximum number for mbufs to combine and form a vector.
> @@ -548,30 +545,6 @@ int rte_event_eth_rx_adapter_vector_limits_get(
>  	uint8_t dev_id, uint16_t eth_port_id,
>  	struct rte_event_eth_rx_adapter_vector_limits *limits);
>
> -/**
> - * Configure event vectorization for a given ethernet device queue, that has
> - * been added to a event eth Rx adapter.
> - *
> - * @param id
> - *  The identifier of the ethernet Rx event adapter.
> - *
> - * @param eth_dev_id
> - *  The identifier of the ethernet device.
> - *
> - * @param rx_queue_id
> - *  Ethernet device receive queue index.
> - *  If rx_queue_id is -1, then all Rx queues configured for the ethernet device
> - *  are configured with event vectorization.
> - *
> - * @return
> - *  - 0: Success, Receive queue configured correctly.
> - *  - <0: Error code on failure.
> - */
> -__rte_experimental
> -int rte_event_eth_rx_adapter_queue_event_vector_config(
> -	uint8_t id, uint16_t eth_dev_id, int32_t rx_queue_id,
> -	struct rte_event_eth_rx_adapter_event_vector_config *config);
> -
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map
> index 902df0ae3..34c1c830e 100644
> --- a/lib/librte_eventdev/version.map
> +++ b/lib/librte_eventdev/version.map
> @@ -142,7 +142,6 @@ EXPERIMENTAL {
>  	#added in 21.05
>  	rte_event_vector_pool_create;
>  	rte_event_eth_rx_adapter_vector_limits_get;
> -	rte_event_eth_rx_adapter_queue_event_vector_config;
>  };
>
>  INTERNAL {
> --
> 2.17.1
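For reference, with this change the whole vector setup happens at queue add time instead of through a separate per-queue call afterwards, roughly along the lines of the test-eventdev hunk above. A sketch only, not tested; adapter_id, eth_port_id, ev_queue_id and vector_pool are placeholders, the size/timeout values are arbitrary (they must fall within the limits reported by rte_event_eth_rx_adapter_vector_limits_get()), and error handling is omitted:

	struct rte_event_eth_rx_adapter_queue_conf qconf;
	int ret;

	memset(&qconf, 0, sizeof(qconf));
	qconf.ev.queue_id = ev_queue_id;       /* placeholder event queue */
	qconf.ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
	qconf.rx_queue_flags |= RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR;
	qconf.vector_sz = 64;                  /* within limits.min_sz..max_sz */
	qconf.vector_timeout_ns = 100 * 1000;  /* within limits timeout range */
	qconf.vector_mp = vector_pool;         /* e.g. from rte_event_vector_pool_create() */

	/* rx_queue_id of -1 applies the same configuration to all Rx queues
	 * of the port, as before.
	 */
	ret = rte_event_eth_rx_adapter_queue_add(adapter_id, eth_port_id, -1, &qconf);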