From mboxrd@z Thu Jan  1 00:00:00 1970
From: Pavan Nikhilesh Bhagavatula
To: "Jayatheerthan, Jay", Jerin Jacob Kollanukkaran, "Carrillo, Erik G",
 "Gujjar, Abhinandan S", "McDaniel, Timothy", "hemant.agrawal@nxp.com",
 "Van Haaren, Harry", mattias.ronnblom, "Ma, Liang J", Ray Kinsella,
 Neil Horman
CC: "dev@dpdk.org"
Thread-Topic: [dpdk-dev v21.11] [PATCH v5 8/8] eventdev: simplify Rx adapter
 event vector config
Date: Fri, 26 Mar 2021 09:44:23 +0000
References: <20210319205718.1436-1-pbhagavatula@marvell.com>
 <20210324050525.4489-1-pbhagavatula@marvell.com>
 <20210324050525.4489-9-pbhagavatula@marvell.com>
Subject: Re: [dpdk-dev] [dpdk-dev v21.11] [PATCH v5 8/8] eventdev: simplify
 Rx adapter event vector config
List-Id: DPDK patches and discussions
Sender: "dev"

>> -----Original Message-----
>> From: Pavan Nikhilesh Bhagavatula
>> Sent: Thursday, March 25, 2021 7:25 PM
>> To: Jayatheerthan, Jay; Jerin Jacob Kollanukkaran; Carrillo, Erik G;
>> Gujjar, Abhinandan S; McDaniel, Timothy; hemant.agrawal@nxp.com;
>> Van Haaren, Harry; mattias.ronnblom; Ma, Liang J; Ray Kinsella;
>> Neil Horman
>> Cc: dev@dpdk.org
>> Subject: RE: [dpdk-dev v21.11] [PATCH v5 8/8]
>> eventdev: simplify Rx adapter event vector config
>>
>> >> From: pbhagavatula@marvell.com
>> >> Sent: Wednesday, March 24, 2021 10:35 AM
>> >> To: jerinj@marvell.com; Jayatheerthan, Jay; Carrillo, Erik G;
>> >> Gujjar, Abhinandan S; McDaniel, Timothy; hemant.agrawal@nxp.com;
>> >> Van Haaren, Harry; mattias.ronnblom; Ma, Liang J; Ray Kinsella;
>> >> Neil Horman
>> >> Cc: dev@dpdk.org; Pavan Nikhilesh
>> >> Subject: [dpdk-dev v21.11] [PATCH v5 8/8] eventdev: simplify Rx
>> >> adapter event vector config
>> >>
>> >> From: Pavan Nikhilesh
>> >>
>> >> Include the vector configuration in the structure
>> >> ``rte_event_eth_rx_adapter_queue_conf``, which is used when
>> >> configuring the rest of the Rx adapter ethernet device Rx queue
>> >> parameters. This simplifies event vector configuration as it avoids
>> >> splitting configuration per Rx queue.
>> >>
>> >> Signed-off-by: Pavan Nikhilesh
>> >> ---
>> >>  app/test-eventdev/test_pipeline_common.c |  16 +-
>> >>  lib/librte_eventdev/eventdev_pmd.h       |  29 ---
>> >>  .../rte_event_eth_rx_adapter.c           | 168 ++++++------
>> >>  .../rte_event_eth_rx_adapter.h           |  27 ---
>> >>  lib/librte_eventdev/version.map          |   1 -
>> >>  5 files changed, 57 insertions(+), 184 deletions(-)
>> >>
>> >> diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
>> >> index d5ef90500..76aee254b 100644
>> >> --- a/app/test-eventdev/test_pipeline_common.c
>> >> +++ b/app/test-eventdev/test_pipeline_common.c
>> >> @@ -331,7 +331,6 @@ pipeline_event_rx_adapter_setup(struct evt_options *opt, uint8_t stride,
>> >>  	uint16_t prod;
>> >>  	struct rte_mempool *vector_pool = NULL;
>> >>  	struct rte_event_eth_rx_adapter_queue_conf queue_conf;
>> >> -	struct rte_event_eth_rx_adapter_event_vector_config vec_conf;
>> >>
>> >>  	memset(&queue_conf, 0,
>> >>  			sizeof(struct rte_event_eth_rx_adapter_queue_conf));
>> >> @@ -397,8 +396,12 @@ pipeline_event_rx_adapter_setup(struct evt_options *opt, uint8_t stride,
>> >>  		}
>> >>
>> >>  		if (cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR) {
>> >> +			queue_conf.vector_sz = opt->vector_size;
>> >> +			queue_conf.vector_timeout_ns =
>> >> +				opt->vector_tmo_nsec;
>> >>  			queue_conf.rx_queue_flags |=
>> >>  				RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR;
>> >> +			queue_conf.vector_mp = vector_pool;
>> >>  		} else {
>> >>  			evt_err("Rx adapter doesn't support event vector");
>> >>  			return -EINVAL;
>> >> @@ -418,17 +421,6 @@ pipeline_event_rx_adapter_setup(struct evt_options *opt, uint8_t stride,
>> >>  		return ret;
>> >>  	}
>> >>
>> >> -	if (opt->ena_vector) {
>> >> -		vec_conf.vector_sz = opt->vector_size;
>> >> -		vec_conf.vector_timeout_ns = opt->vector_tmo_nsec;
>> >> -		vec_conf.vector_mp = vector_pool;
>> >> -		if (rte_event_eth_rx_adapter_queue_event_vector_config(
>> >> -			    prod, prod, -1, &vec_conf) < 0) {
>> >> -			evt_err("Failed to configure event vectorization for Rx adapter");
>> >> -			return -EINVAL;
>> >> -		}
>> >> -	}
>> >> -
>> >>  	if (!(cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT)) {
>> >>  		uint32_t service_id = -1U;
>> >>
>> >> diff --git a/lib/librte_eventdev/eventdev_pmd.h b/lib/librte_eventdev/eventdev_pmd.h
>> >> index 0f724ac85..63b3bc4b5 100644
>> >> --- a/lib/librte_eventdev/eventdev_pmd.h
>> >> +++ b/lib/librte_eventdev/eventdev_pmd.h
>> >> @@ -667,32 +667,6 @@ typedef int (*eventdev_eth_rx_adapter_vector_limits_get_t)(
>> >>  	const struct rte_eventdev *dev, const struct rte_eth_dev *eth_dev,
>> >>  	struct rte_event_eth_rx_adapter_vector_limits *limits);
>> >>
>> >> -struct rte_event_eth_rx_adapter_event_vector_config;
>> >> -/**
>> >> - * Enable event vector on an given Rx queue of a ethernet devices belonging to
>> >> - * the Rx adapter.
>> >> - *
>> >> - * @param dev
>> >> - *   Event device pointer
>> >> - *
>> >> - * @param eth_dev
>> >> - *   Ethernet device pointer
>> >> - *
>> >> - * @param rx_queue_id
>> >> - *   The Rx queue identifier
>> >> - *
>> >> - * @param config
>> >> - *   Pointer to the event vector configuration structure.
>> >> - *
>> >> - * @return
>> >> - *   - 0: Success.
>> >> - *   - <0: Error code returned by the driver function.
>> >> - */
>> >> -typedef int (*eventdev_eth_rx_adapter_event_vector_config_t)(
>> >> -	const struct rte_eventdev *dev, const struct rte_eth_dev *eth_dev,
>> >> -	int32_t rx_queue_id,
>> >> -	const struct rte_event_eth_rx_adapter_event_vector_config *config);
>> >> -
>> >>  typedef uint32_t rte_event_pmd_selftest_seqn_t;
>> >>  extern int rte_event_pmd_selftest_seqn_dynfield_offset;
>> >>
>> >> @@ -1118,9 +1092,6 @@ struct rte_eventdev_ops {
>> >>  	eventdev_eth_rx_adapter_vector_limits_get_t
>> >>  		eth_rx_adapter_vector_limits_get;
>> >>  	/**< Get event vector limits for the Rx adapter */
>> >> -	eventdev_eth_rx_adapter_event_vector_config_t
>> >> -		eth_rx_adapter_event_vector_config;
>> >> -	/**< Configure Rx adapter with event vector */
>> >>
>> >>  	eventdev_timer_adapter_caps_get_t timer_adapter_caps_get;
>> >>  	/**< Get timer adapter capabilities */
>> >> diff --git a/lib/librte_eventdev/rte_event_eth_rx_adapter.c b/lib/librte_eventdev/rte_event_eth_rx_adapter.c
>> >> index c71990078..a1990637f 100644
>> >> --- a/lib/librte_eventdev/rte_event_eth_rx_adapter.c
>> >> +++ b/lib/librte_eventdev/rte_event_eth_rx_adapter.c
>> >> @@ -1882,6 +1882,25 @@ rxa_add_queue(struct rte_event_eth_rx_adapter *rx_adapter,
>> >>  	} else
>> >>  		qi_ev->flow_id = 0;
>> >>
>> >> +	if (conf->rx_queue_flags &
>> >> +	    RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR) {
>> >> +		queue_info->ena_vector = 1;
>> >> +		qi_ev->event_type = RTE_EVENT_TYPE_ETH_RX_ADAPTER_VECTOR;
>> >> +		rxa_set_vector_data(queue_info, conf->vector_sz,
>> >> +				    conf->vector_timeout_ns, conf->vector_mp,
>> >> +				    rx_queue_id, dev_info->dev->data->port_id);
>> >> +		rx_adapter->ena_vector = 1;
>> >> +		rx_adapter->vector_tmo_ticks =
>> >> +			rx_adapter->vector_tmo_ticks
>> >> +				? RTE_MIN(queue_info->vector_data
>> >> +						  .vector_timeout_ticks,
>> >> +					  rx_adapter->vector_tmo_ticks)
>> >> +				: queue_info->vector_data.vector_timeout_ticks;
>> >> +		rx_adapter->vector_tmo_ticks <<= 1;
>> >
>> >Any reason why we left shift here ? Applicable in patch 4/8 as well.
>>
>> Just so that we have half the precision of the lowest timeout, helps to
>> maintain accuracy.
>
>Maybe I am missing something here. For e.g. if the lowest timeout is 20
>ticks, would you want 10 or 40 ? Currently, it's set to 40.

My bad, this should be a right shift; I will fix it in 4/8 too.

>
>>
>> >
>> >> +		TAILQ_INIT(&rx_adapter->vector_list);
>> >
>> >Can doing TAILQ_INIT every time a queue is added cause existing
>> >elements to be wiped out ? Applicable in patch 4/8 as well.
>>
>> I don't think queues can be added when the adapter is already started;
>> rxa_sw_add isn't thread safe.
>
>Yes, it takes the lock before calling rxa_sw_add. For the internal_port
>implementation, I don't see any lock being taken. Besides, since it is
>adapter related, it would be more relevant in adapter create than in
>queue add.

Sure, I will move the init code to adapter create.

>
>>
>> >
>> >> +		rx_adapter->prev_expiry_ts = 0;
>> >
>> >Can setting this every time a queue is added affect existing queues
>> >created with vector support and passing traffic ? Applicable in patch
>> >4/8 as well.
>>
>> Same as above.
>>
>
>Same reasoning as above.
>
>> >
>> >> +	}
>> >> +
>> >>  	rxa_update_queue(rx_adapter, dev_info, rx_queue_id, 1);
>> >>  	if (rxa_polled_queue(dev_info, rx_queue_id)) {
>> >>  		rx_adapter->num_rx_polled += !pollq;
>> >> @@ -1907,44 +1926,6 @@ rxa_add_queue(struct rte_event_eth_rx_adapter *rx_adapter,
>> >>  	}
>> >>  }
>> >>
>> >> -static void
>> >> -rxa_sw_event_vector_configure(
>> >> -	struct rte_event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id,
>> >> -	int rx_queue_id,
>> >> -	const struct rte_event_eth_rx_adapter_event_vector_config *config)
>> >> -{
>> >> -	struct eth_device_info *dev_info = &rx_adapter->eth_devices[eth_dev_id];
>> >> -	struct eth_rx_queue_info *queue_info;
>> >> -	struct rte_event *qi_ev;
>> >> -
>> >> -	if (rx_queue_id == -1) {
>> >> -		uint16_t nb_rx_queues;
>> >> -		uint16_t i;
>> >> -
>> >> -		nb_rx_queues = dev_info->dev->data->nb_rx_queues;
>> >> -		for (i = 0; i < nb_rx_queues; i++)
>> >> -			rxa_sw_event_vector_configure(rx_adapter, eth_dev_id, i,
>> >> -						      config);
>> >> -		return;
>> >> -	}
>> >> -
>> >> -	queue_info = &dev_info->rx_queue[rx_queue_id];
>> >> -	qi_ev = (struct rte_event *)&queue_info->event;
>> >> -	queue_info->ena_vector = 1;
>> >> -	qi_ev->event_type = RTE_EVENT_TYPE_ETH_RX_ADAPTER_VECTOR;
>> >> -	rxa_set_vector_data(queue_info, config->vector_sz,
>> >> -			    config->vector_timeout_ns, config->vector_mp,
>> >> -			    rx_queue_id, dev_info->dev->data->port_id);
>> >> -	rx_adapter->ena_vector = 1;
>> >> -	rx_adapter->vector_tmo_ticks =
>> >> -		rx_adapter->vector_tmo_ticks ?
>> >> -			      RTE_MIN(config->vector_timeout_ns << 1,
>> >> -				      rx_adapter->vector_tmo_ticks) :
>> >> -			      config->vector_timeout_ns << 1;
>> >> -	rx_adapter->prev_expiry_ts = 0;
>> >> -	TAILQ_INIT(&rx_adapter->vector_list);
>> >> -}
>> >> -
>> >>  static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter,
>> >> 		      uint16_t eth_dev_id,
>> >> 		      int rx_queue_id,
>> >> @@ -2258,6 +2239,7 @@ rte_event_eth_rx_adapter_queue_add(uint8_t id,
>> >>  	struct rte_event_eth_rx_adapter *rx_adapter;
>> >>  	struct rte_eventdev *dev;
>> >>  	struct eth_device_info *dev_info;
>> >> +	struct rte_event_eth_rx_adapter_vector_limits limits;
>> >>
>> >>  	RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
>> >>  	RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_dev_id, -EINVAL);
>> >> @@ -2294,6 +2276,39 @@ rte_event_eth_rx_adapter_queue_add(uint8_t id,
>> >>  		return -EINVAL;
>> >>  	}
>> >>
>> >> +	if (queue_conf->rx_queue_flags &
>> >> +	    RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR) {
>> >
>> >Perhaps, you could move the previous if condition here and just check
>> >for (cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR)
>>
>> No, the above check is not a capability check, it is to check whether the
>> application requested vectorization for this queue or not.
>
>In the current code, the same condition check (queue_conf->rx_queue_flags &
>RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR) is done twice, back to back,
>which could be avoided.
>
>	if ((cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR) == 0 &&
>---->	    (queue_conf->rx_queue_flags &
>	     RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR)) {
>		RTE_EDEV_LOG_ERR("Event vectorization is not supported,"
>				 " eth port: %" PRIu16 " adapter id: %" PRIu8,
>				 eth_dev_id, id);
>		return -EINVAL;
>	}
>
>---->	if (queue_conf->rx_queue_flags &
>	    RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR) {
>		...
>	}
>

Ah, I see it now, will fix it in next version.
>
>> >
>> >> +		ret = rte_event_eth_rx_adapter_vector_limits_get(
>> >> +			rx_adapter->eventdev_id, eth_dev_id, &limits);
>> >> +		if (ret < 0) {
>> >> +			RTE_EDEV_LOG_ERR("Failed to get event device vector limits,"
>> >> +					 " eth port: %" PRIu16
>> >> +					 " adapter id: %" PRIu8,
>> >> +					 eth_dev_id, id);
>> >> +			return -EINVAL;
>> >> +		}
>> >> +		if (queue_conf->vector_sz < limits.min_sz ||
>> >> +		    queue_conf->vector_sz > limits.max_sz ||
>> >> +		    queue_conf->vector_timeout_ns < limits.min_timeout_ns ||
>> >> +		    queue_conf->vector_timeout_ns > limits.max_timeout_ns ||
>> >> +		    queue_conf->vector_mp == NULL) {
>> >> +			RTE_EDEV_LOG_ERR("Invalid event vector configuration,"
>> >> +					 " eth port: %" PRIu16
>> >> +					 " adapter id: %" PRIu8,
>> >> +					 eth_dev_id, id);
>> >> +			return -EINVAL;
>> >> +		}
>> >> +		if (queue_conf->vector_mp->elt_size <
>> >> +		    (sizeof(struct rte_event_vector) +
>> >> +		     (sizeof(uintptr_t) * queue_conf->vector_sz))) {
>> >> +			RTE_EDEV_LOG_ERR("Invalid event vector configuration,"
>> >> +					 " eth port: %" PRIu16
>> >> +					 " adapter id: %" PRIu8,
>> >> +					 eth_dev_id, id);
>> >> +			return -EINVAL;
>> >> +		}
>> >> +	}
>> >> +
>> >>  	if ((cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ) == 0 &&
>> >>  	    (rx_queue_id != -1)) {
>> >>  		RTE_EDEV_LOG_ERR("Rx queues can only be connected to single "
>> >> @@ -2487,83 +2502,6 @@ rte_event_eth_rx_adapter_queue_del(uint8_t id, uint16_t eth_dev_id,
>> >>  	return ret;
>> >>  }
>> >>
>> >> -int
>> >> -rte_event_eth_rx_adapter_queue_event_vector_config(
>> >> -	uint8_t id, uint16_t eth_dev_id, int32_t rx_queue_id,
>> >> -	struct rte_event_eth_rx_adapter_event_vector_config *config)
>> >> -{
>> >> -	struct rte_event_eth_rx_adapter_vector_limits limits;
>> >> -	struct rte_event_eth_rx_adapter *rx_adapter;
>> >> -	struct rte_eventdev *dev;
>> >> -	uint32_t cap;
>> >> -	int ret;
>> >> -
>> >> -	RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
>> >> -	RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_dev_id, -EINVAL);
>> >> -
>> >> -	rx_adapter = rxa_id_to_adapter(id);
>> >> -	if ((rx_adapter == NULL) || (config == NULL))
>> >> -		return -EINVAL;
>> >> -
>> >> -	dev = &rte_eventdevs[rx_adapter->eventdev_id];
>> >> -	ret = rte_event_eth_rx_adapter_caps_get(rx_adapter->eventdev_id,
>> >> -						eth_dev_id, &cap);
>> >> -	if (ret) {
>> >> -		RTE_EDEV_LOG_ERR("Failed to get adapter caps edev %" PRIu8
>> >> -				 "eth port %" PRIu16,
>> >> -				 id, eth_dev_id);
>> >> -		return ret;
>> >> -	}
>> >> -
>> >> -	if (!(cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR)) {
>> >> -		RTE_EDEV_LOG_ERR("Event vectorization is not supported,"
>> >> -				 " eth port: %" PRIu16 " adapter id: %" PRIu8,
>> >> -				 eth_dev_id, id);
>> >> -		return -EINVAL;
>> >> -	}
>> >> -
>> >> -	ret = rte_event_eth_rx_adapter_vector_limits_get(
>> >> -		rx_adapter->eventdev_id, eth_dev_id, &limits);
>> >> -	if (ret) {
>> >> -		RTE_EDEV_LOG_ERR("Failed to get vector limits edev %" PRIu8
>> >> -				 "eth port %" PRIu16,
>> >> -				 rx_adapter->eventdev_id, eth_dev_id);
>> >> -		return ret;
>> >> -	}
>> >> -
>> >> -	if (config->vector_sz < limits.min_sz ||
>> >> -	    config->vector_sz > limits.max_sz ||
>> >> -	    config->vector_timeout_ns < limits.min_timeout_ns ||
>> >> -	    config->vector_timeout_ns > limits.max_timeout_ns ||
>> >> -	    config->vector_mp == NULL) {
>> >> -		RTE_EDEV_LOG_ERR("Invalid event vector configuration,"
>> >> -				 " eth port: %" PRIu16 " adapter id: %" PRIu8,
>> >> -				 eth_dev_id, id);
>> >> -		return -EINVAL;
>> >> -	}
>> >> -	if (config->vector_mp->elt_size <
>> >> -	    (sizeof(struct rte_event_vector) +
>> >> -	     (sizeof(uintptr_t) * config->vector_sz))) {
>> >> -		RTE_EDEV_LOG_ERR("Invalid event vector configuration,"
>> >> -				 " eth port: %" PRIu16 " adapter id: %" PRIu8,
>> >> -				 eth_dev_id, id);
>> >> -		return -EINVAL;
>> >> -	}
>> >> -
>> >> -	if (cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT) {
>> >> -		RTE_FUNC_PTR_OR_ERR_RET(
>> >> -			*dev->dev_ops->eth_rx_adapter_event_vector_config,
>> >> -			-ENOTSUP);
>> >> -		ret = dev->dev_ops->eth_rx_adapter_event_vector_config(
>> >> -			dev, &rte_eth_devices[eth_dev_id], rx_queue_id, config);
>> >> -	} else {
>> >> -		rxa_sw_event_vector_configure(rx_adapter, eth_dev_id,
>> >> -					      rx_queue_id, config);
>> >> -	}
>> >> -
>> >> -	return ret;
>> >> -}
>> >> -
>> >>  int
>> >>  rte_event_eth_rx_adapter_vector_limits_get(
>> >>  	uint8_t dev_id, uint16_t eth_port_id,
>> >> diff --git a/lib/librte_eventdev/rte_event_eth_rx_adapter.h b/lib/librte_eventdev/rte_event_eth_rx_adapter.h
>> >> index 7407cde00..3f8b36229 100644
>> >> --- a/lib/librte_eventdev/rte_event_eth_rx_adapter.h
>> >> +++ b/lib/librte_eventdev/rte_event_eth_rx_adapter.h
>> >> @@ -171,9 +171,6 @@ struct rte_event_eth_rx_adapter_queue_conf {
>> >>  	 * The event adapter sets ev.event_type to RTE_EVENT_TYPE_ETHDEV in the
>> >>  	 * enqueued event.
>> >>  	 */
>> >> -};
>> >> -
>> >> -struct rte_event_eth_rx_adapter_event_vector_config {
>> >>  	uint16_t vector_sz;
>> >>  	/**<
>> >>  	 * Indicates the maximum number for mbufs to combine and form a vector.
>> >> @@ -548,30 +545,6 @@ int rte_event_eth_rx_adapter_vector_limits_get(
>> >>  	uint8_t dev_id, uint16_t eth_port_id,
>> >>  	struct rte_event_eth_rx_adapter_vector_limits *limits);
>> >>
>> >> -/**
>> >> - * Configure event vectorization for a given ethernet device queue, that has
>> >> - * been added to a event eth Rx adapter.
>> >> - *
>> >> - * @param id
>> >> - *   The identifier of the ethernet Rx event adapter.
>> >> - *
>> >> - * @param eth_dev_id
>> >> - *   The identifier of the ethernet device.
>> >> - *
>> >> - * @param rx_queue_id
>> >> - *   Ethernet device receive queue index.
>> >> - *   If rx_queue_id is -1, then all Rx queues configured for the ethernet device
>> >> - *   are configured with event vectorization.
>> >> - *
>> >> - * @return
>> >> - *   - 0: Success, Receive queue configured correctly.
>> >> - *   - <0: Error code on failure.
>> >> - */
>> >> -__rte_experimental
>> >> -int rte_event_eth_rx_adapter_queue_event_vector_config(
>> >> -	uint8_t id, uint16_t eth_dev_id, int32_t rx_queue_id,
>> >> -	struct rte_event_eth_rx_adapter_event_vector_config *config);
>> >> -
>> >>  #ifdef __cplusplus
>> >>  }
>> >>  #endif
>> >> diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map
>> >> index 902df0ae3..34c1c830e 100644
>> >> --- a/lib/librte_eventdev/version.map
>> >> +++ b/lib/librte_eventdev/version.map
>> >> @@ -142,7 +142,6 @@ EXPERIMENTAL {
>> >>  	#added in 21.05
>> >>  	rte_event_vector_pool_create;
>> >>  	rte_event_eth_rx_adapter_vector_limits_get;
>> >> -	rte_event_eth_rx_adapter_queue_event_vector_config;
>> >>  };
>> >>
>> >>  INTERNAL {
>> >> --
>> >> 2.17.1