From: "Jayatheerthan, Jay"
To: Pavan Nikhilesh Bhagavatula, "Jerin Jacob Kollanukkaran", "Carrillo, Erik G", "Gujjar, Abhinandan S", "McDaniel, Timothy", "hemant.agrawal@nxp.com", "Van Haaren, Harry", mattias.ronnblom, "Ma, Liang J"
CC: "dev@dpdk.org"
Date: Wed, 31 Mar 2021 06:55:00 +0000
References: <20210326140850.7332-1-pbhagavatula@marvell.com> <20210330082212.707-1-pbhagavatula@marvell.com> <20210330082212.707-5-pbhagavatula@marvell.com>
Subject: Re: [dpdk-dev] [PATCH v9 4/8] eventdev: add Rx adapter event vector support

> -----Original Message-----
> From: Pavan Nikhilesh Bhagavatula
> Sent: Wednesday, March 31, 2021 12:10 PM
> To: Jayatheerthan, Jay; Jerin Jacob Kollanukkaran; Carrillo, Erik G; Gujjar, Abhinandan S; McDaniel, Timothy; hemant.agrawal@nxp.com; Van Haaren, Harry <harry.van.haaren@intel.com>; mattias.ronnblom; Ma, Liang J
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v9 4/8] eventdev: add Rx adapter event vector support
>
> >> -----Original Message-----
> >> From: pbhagavatula@marvell.com
> >> Sent: Tuesday, March 30, 2021 1:52 PM
> >> To: jerinj@marvell.com; Jayatheerthan, Jay; Carrillo, Erik G; Gujjar, Abhinandan S; McDaniel, Timothy; hemant.agrawal@nxp.com; Van Haaren,
> >>     Harry; mattias.ronnblom; Ma, Liang J
> >> Cc: dev@dpdk.org; Pavan Nikhilesh
> >> Subject: [dpdk-dev] [PATCH v9 4/8] eventdev: add Rx adapter event vector support
> >>
> >> From: Pavan Nikhilesh
> >>
> >> Add event vector support for event eth Rx adapter, the implementation
> >> creates vector flows based on port and queue identifier of the received
> >> mbufs.
> >> The flow id for SW Rx event vectorization will use 12-bits of queue
> >> identifier and 8-bits port identifier when custom flow id is not set
> >> for simplicity.
> >>
> >> Signed-off-by: Pavan Nikhilesh
> >> ---
> >>  .../prog_guide/event_ethernet_rx_adapter.rst |  11 +
> >>  lib/librte_eventdev/eventdev_pmd.h           |   7 +-
> >>  .../rte_event_eth_rx_adapter.c               | 278 ++++++++++++++++--
> >>  lib/librte_eventdev/rte_eventdev.c           |   6 +-
> >>  4 files changed, 278 insertions(+), 24 deletions(-)
> >>
> >> diff --git a/doc/guides/prog_guide/event_ethernet_rx_adapter.rst b/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
> >> index 5eefef355..06fa864fa 100644
> >> --- a/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
> >> +++ b/doc/guides/prog_guide/event_ethernet_rx_adapter.rst
> >> @@ -224,3 +224,14 @@ A loop processing ``rte_event_vector`` containing mbufs is shown below.
> >>          case ...
> >>          ...
> >>          }
> >> +
> >> +Rx event vectorization for SW Rx adapter
> >> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> >> +
> >> +For SW based event vectorization, i.e., when the
> >> +``RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT`` is not set in the adapter's
> >> +capabilities flags for a particular ethernet device, the service function
> >> +creates a single event vector flow for all the mbufs arriving on the given
> >> +Rx queue.
> >> +The 20-bit event flow identifier is set to 12-bits of Rx queue identifier
> >> +and 8-bits of ethernet device identifier.

Could you add a format like below?

    19          12 11           0
    +-------------+--------------+
    |   port_id   |   queue_id   |
    +-------------+--------------+
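
For illustration, the composition described above corresponds to a helper along these lines (a sketch only; the helper name is hypothetical and not part of the patch):

#include <stdint.h>

/* Build the 20-bit SW Rx adapter flow id described above: ethdev port id
 * in bits 19..12, Rx queue id in bits 11..0. Used only when the
 * application has not supplied a custom flow id.
 */
static inline uint32_t
sw_rx_vector_flow_id(uint16_t port_id, uint16_t queue_id)
{
	return ((uint32_t)(port_id & 0xFF) << 12) | (queue_id & 0xFFF);
}
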
> >> diff --git a/lib/librte_eventdev/eventdev_pmd.h b/lib/librte_eventdev/eventdev_pmd.h
> >> index 9297f1433..0f724ac85 100644
> >> --- a/lib/librte_eventdev/eventdev_pmd.h
> >> +++ b/lib/librte_eventdev/eventdev_pmd.h
> >> @@ -69,9 +69,10 @@ extern "C" {
> >>  	} \
> >>  } while (0)
> >>
> >> -#define RTE_EVENT_ETH_RX_ADAPTER_SW_CAP \
> >> -		((RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID) | \
> >> -		(RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ))
> >> +#define RTE_EVENT_ETH_RX_ADAPTER_SW_CAP \
> >> +	((RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID) | \
> >> +	 (RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ) | \
> >> +	 (RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR))
> >>
> >>  #define RTE_EVENT_CRYPTO_ADAPTER_SW_CAP \
> >>  		RTE_EVENT_CRYPTO_ADAPTER_CAP_SESSION_PRIVATE_DATA
> >> diff --git a/lib/librte_eventdev/rte_event_eth_rx_adapter.c b/lib/librte_eventdev/rte_event_eth_rx_adapter.c
> >> index ac8ba5bf0..e273b3acf 100644
> >> --- a/lib/librte_eventdev/rte_event_eth_rx_adapter.c
> >> +++ b/lib/librte_eventdev/rte_event_eth_rx_adapter.c
> >> @@ -26,6 +26,10 @@
> >>  #define BATCH_SIZE 32
> >>  #define BLOCK_CNT_THRESHOLD 10
> >>  #define ETH_EVENT_BUFFER_SIZE (4*BATCH_SIZE)
> >> +#define MAX_VECTOR_SIZE 1024
> >> +#define MIN_VECTOR_SIZE 4
> >> +#define MAX_VECTOR_NS 1E9
> >> +#define MIN_VECTOR_NS 1E5
> >>
> >>  #define ETH_RX_ADAPTER_SERVICE_NAME_LEN 32
> >>  #define ETH_RX_ADAPTER_MEM_NAME_LEN 32
> >> @@ -59,6 +63,20 @@ struct eth_rx_poll_entry {
> >>  	uint16_t eth_rx_qid;
> >>  };
> >>
> >> +struct eth_rx_vector_data {
> >> +	TAILQ_ENTRY(eth_rx_vector_data) next;
> >> +	uint16_t port;
> >> +	uint16_t queue;
> >> +	uint16_t max_vector_count;
> >> +	uint64_t event;
> >> +	uint64_t ts;
> >> +	uint64_t vector_timeout_ticks;
> >> +	struct rte_mempool *vector_pool;
> >> +	struct rte_event_vector *vector_ev;
> >> +} __rte_cache_aligned;
> >> +
> >> +TAILQ_HEAD(eth_rx_vector_data_list, eth_rx_vector_data);
> >> +
> >>  /* Instance per adapter */
> >>  struct rte_eth_event_enqueue_buffer {
> >>  	/* Count of events in this buffer */
> >> @@ -92,6 +110,14 @@ struct rte_event_eth_rx_adapter {
> >>  	uint32_t wrr_pos;
> >>  	/* Event burst buffer */
> >>  	struct rte_eth_event_enqueue_buffer event_enqueue_buffer;
> >> +	/* Vector enable flag */
> >> +	uint8_t ena_vector;
> >> +	/* Timestamp of previous vector expiry list traversal */
> >> +	uint64_t prev_expiry_ts;
> >> +	/* Minimum ticks to wait before traversing expiry list */
> >> +	uint64_t vector_tmo_ticks;
> >> +	/* vector list */
> >> +	struct eth_rx_vector_data_list vector_list;
> >>  	/* Per adapter stats */
> >>  	struct rte_event_eth_rx_adapter_stats stats;
> >>  	/* Block count, counts up to BLOCK_CNT_THRESHOLD */
> >> @@ -198,9 +224,11 @@ struct eth_device_info {
> >>  struct eth_rx_queue_info {
> >>  	int queue_enabled;	/* True if added */
> >>  	int intr_enabled;
> >> +	uint8_t ena_vector;
> >>  	uint16_t wt;		/* Polling weight */
> >>  	uint32_t flow_id_mask;	/* Set to ~0 if app provides flow id else 0 */
> >>  	uint64_t event;
> >> +	struct eth_rx_vector_data vector_data;
> >>  };
> >>
> >>  static struct rte_event_eth_rx_adapter **event_eth_rx_adapter;
> >> @@ -722,6 +750,9 @@ rxa_flush_event_buffer(struct rte_event_eth_rx_adapter *rx_adapter)
> >>  		&rx_adapter->event_enqueue_buffer;
> >>  	struct rte_event_eth_rx_adapter_stats *stats = &rx_adapter->stats;
> >>
> >> +	if (!buf->count)
> >> +		return 0;
> >> +
> >>  	uint16_t n = rte_event_enqueue_new_burst(rx_adapter->eventdev_id,
> >>  					rx_adapter->event_port_id,
> >>  					buf->events,
> >> @@ -742,6 +773,77 @@ rxa_flush_event_buffer(struct rte_event_eth_rx_adapter *rx_adapter)
> >>  	return n;
> >>  }
> >>
> >> +static inline void
> >> +rxa_init_vector(struct rte_event_eth_rx_adapter *rx_adapter,
> >> +		struct eth_rx_vector_data *vec)
> >> +{
> >> +	vec->vector_ev->nb_elem = 0;
> >> +	vec->vector_ev->port = vec->port;
> >> +	vec->vector_ev->queue = vec->queue;
> >> +	vec->vector_ev->attr_valid = true;
> >> +	TAILQ_INSERT_TAIL(&rx_adapter->vector_list, vec, next);
> >> +}
> >> +
> >> +static inline uint16_t
> >> +rxa_create_event_vector(struct rte_event_eth_rx_adapter *rx_adapter,
> >> +			struct eth_rx_queue_info *queue_info,
> >> +			struct rte_eth_event_enqueue_buffer *buf,
> >> +			struct rte_mbuf **mbufs, uint16_t num)
> >> +{
> >> +	struct rte_event *ev = &buf->events[buf->count];
> >> +	struct eth_rx_vector_data *vec;
> >> +	uint16_t filled, space, sz;
> >> +
> >> +	filled = 0;
> >> +	vec = &queue_info->vector_data;
> >> +
> >> +	if (vec->vector_ev == NULL) {
> >> +		if (rte_mempool_get(vec->vector_pool,
> >> +				    (void **)&vec->vector_ev) < 0) {
> >> +			rte_pktmbuf_free_bulk(mbufs, num);
> >> +			return 0;
> >> +		}
> >> +		rxa_init_vector(rx_adapter, vec);
> >> +	}
> >> +	while (num) {
> >> +		if (vec->vector_ev->nb_elem == vec->max_vector_count) {
> >> +			/* Event ready. */
> >> +			ev->event = vec->event;
> >> +			ev->vec = vec->vector_ev;
> >> +			ev++;
> >> +			filled++;
> >> +			vec->vector_ev = NULL;
> >> +			TAILQ_REMOVE(&rx_adapter->vector_list, vec, next);
> >> +			if (rte_mempool_get(vec->vector_pool,
> >> +					    (void **)&vec->vector_ev) < 0) {
> >> +				rte_pktmbuf_free_bulk(mbufs, num);
> >> +				return 0;
> >> +			}
> >> +			rxa_init_vector(rx_adapter, vec);
> >> +		}
> >> +
> >> +		space = vec->max_vector_count - vec->vector_ev->nb_elem;
> >> +		sz = num > space ? space : num;
> >> +		memcpy(vec->vector_ev->mbufs + vec->vector_ev->nb_elem, mbufs,
> >> +		       sizeof(void *) * sz);
> >> +		vec->vector_ev->nb_elem += sz;
> >> +		num -= sz;
> >> +		mbufs += sz;
> >> +		vec->ts = rte_rdtsc();
> >> +	}
> >> +
> >> +	if (vec->vector_ev->nb_elem == vec->max_vector_count) {
> >> +		ev->event = vec->event;
> >> +		ev->vec = vec->vector_ev;
> >> +		ev++;
> >> +		filled++;
> >> +		vec->vector_ev = NULL;
> >> +		TAILQ_REMOVE(&rx_adapter->vector_list, vec, next);
> >> +	}
> >> +
> >> +	return filled;
> >> +}
> >> +
> >>  static inline void
> >>  rxa_buffer_mbufs(struct rte_event_eth_rx_adapter *rx_adapter,
> >>  		uint16_t eth_dev_id,
> >> @@ -766,29 +868,33 @@ rxa_buffer_mbufs(struct rte_event_eth_rx_adapter *rx_adapter,
> >>  	uint16_t nb_cb;
> >>  	uint16_t dropped;
> >>
> >> -	/* 0xffff ffff if PKT_RX_RSS_HASH is set, otherwise 0 */
> >> -	rss_mask = ~(((m->ol_flags & PKT_RX_RSS_HASH) != 0) - 1);
> >> -	do_rss = !rss_mask && !eth_rx_queue_info->flow_id_mask;
> >> -
> >> -	for (i = 0; i < num; i++) {
> >> -		m = mbufs[i];
> >> -
> >> -		rss = do_rss ?
> >> -			rxa_do_softrss(m, rx_adapter->rss_key_be) :
> >> -			m->hash.rss;
> >> -		ev->event = event;
> >> -		ev->flow_id = (rss & ~flow_id_mask) |
> >> -				(ev->flow_id & flow_id_mask);
> >> -		ev->mbuf = m;
> >> -		ev++;
> >> +	if (!eth_rx_queue_info->ena_vector) {
> >> +		/* 0xffff ffff if PKT_RX_RSS_HASH is set, otherwise 0 */
> >> +		rss_mask = ~(((m->ol_flags & PKT_RX_RSS_HASH) != 0) - 1);
> >> +		do_rss = !rss_mask && !eth_rx_queue_info->flow_id_mask;
> >> +		for (i = 0; i < num; i++) {
> >> +			m = mbufs[i];
> >> +
> >> +			rss = do_rss ? rxa_do_softrss(m, rx_adapter->rss_key_be)
> >> +				     : m->hash.rss;
> >> +			ev->event = event;
> >> +			ev->flow_id = (rss & ~flow_id_mask) |
> >> +				      (ev->flow_id & flow_id_mask);
> >> +			ev->mbuf = m;
> >> +			ev++;
> >> +		}
> >> +	} else {
> >> +		num = rxa_create_event_vector(rx_adapter, eth_rx_queue_info,
> >> +					      buf, mbufs, num);
> >>  	}
> >>
> >> -	if (dev_info->cb_fn) {
> >> +	if (num && dev_info->cb_fn) {
> >>
> >>  		dropped = 0;
> >>  		nb_cb = dev_info->cb_fn(eth_dev_id, rx_queue_id,
> >> -					ETH_EVENT_BUFFER_SIZE, buf->count, ev,
> >> -					num, dev_info->cb_arg, &dropped);
> >> +					ETH_EVENT_BUFFER_SIZE, buf->count,
> >> +					&buf->events[buf->count], num,
> >> +					dev_info->cb_arg, &dropped);
> >>  		if (unlikely(nb_cb > num))
> >>  			RTE_EDEV_LOG_ERR("Rx CB returned %d (> %d) events",
> >>  					 nb_cb, num);
> >> @@ -1124,6 +1230,30 @@ rxa_poll(struct rte_event_eth_rx_adapter *rx_adapter)
> >>  	return nb_rx;
> >>  }
> >>
> >> +static void
> >> +rxa_vector_expire(struct eth_rx_vector_data *vec, void *arg)
> >> +{
> >> +	struct rte_event_eth_rx_adapter *rx_adapter = arg;
> >> +	struct rte_eth_event_enqueue_buffer *buf =
> >> +		&rx_adapter->event_enqueue_buffer;
> >> +	struct rte_event *ev;
> >> +
> >> +	if (buf->count)
> >> +		rxa_flush_event_buffer(rx_adapter);
> >> +
> >> +	if (vec->vector_ev->nb_elem == 0)
> >> +		return;
> >> +	ev = &buf->events[buf->count];
> >> +
> >> +	/* Event ready. */
> >> +	ev->event = vec->event;
> >> +	ev->vec = vec->vector_ev;
> >> +	buf->count++;
> >> +
> >> +	vec->vector_ev = NULL;
> >> +	vec->ts = 0;
> >> +}
> >> +
> >>  static int
> >>  rxa_service_func(void *args)
> >>  {
> >> @@ -1137,6 +1267,24 @@ rxa_service_func(void *args)
> >>  		return 0;
> >>  	}
> >>
> >> +	if (rx_adapter->ena_vector) {
> >> +		if ((rte_rdtsc() - rx_adapter->prev_expiry_ts) >=
> >> +		    rx_adapter->vector_tmo_ticks) {
> >> +			struct eth_rx_vector_data *vec;
> >> +
> >> +			TAILQ_FOREACH(vec, &rx_adapter->vector_list, next) {
> >> +				uint64_t elapsed_time = rte_rdtsc() - vec->ts;
> >> +
> >> +				if (elapsed_time >= vec->vector_timeout_ticks) {
> >> +					rxa_vector_expire(vec, rx_adapter);
> >> +					TAILQ_REMOVE(&rx_adapter->vector_list,
> >> +						     vec, next);
> >> +				}
> >> +			}
> >> +			rx_adapter->prev_expiry_ts = rte_rdtsc();
> >> +		}
> >> +	}
> >> +
> >>  	stats = &rx_adapter->stats;
> >>  	stats->rx_packets += rxa_intr_ring_dequeue(rx_adapter);
> >>  	stats->rx_packets += rxa_poll(rx_adapter);
> >> @@ -1640,11 +1788,35 @@ rxa_update_queue(struct rte_event_eth_rx_adapter *rx_adapter,
> >>  	}
> >>  }
> >>
> >> +static void
> >> +rxa_set_vector_data(struct eth_rx_queue_info *queue_info, uint16_t vector_count,
> >> +		    uint64_t vector_ns, struct rte_mempool *mp, int32_t qid,
> >> +		    uint16_t port_id)
> >> +{
> >> +#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
> >> +	struct eth_rx_vector_data *vector_data;
> >> +	uint32_t flow_id;
> >> +
> >> +	vector_data = &queue_info->vector_data;
> >> +	vector_data->max_vector_count = vector_count;
> >> +	vector_data->port = port_id;
> >> +	vector_data->queue = qid;
> >> +	vector_data->vector_pool = mp;
> >> +	vector_data->vector_timeout_ticks =
> >> +		NSEC2TICK(vector_ns, rte_get_timer_hz());
> >> +	vector_data->ts = 0;
> >> +	flow_id = queue_info->event & 0xFFFFF;
> >> +	flow_id =
> >> +		flow_id == 0 ? (qid & 0xFFF) | (port_id & 0xFF) << 12 : flow_id;
> >> +	vector_data->event = (queue_info->event & ~0xFFFFF) | flow_id;
> >> +}
> >> +
> >>  static void
> >>  rxa_sw_del(struct rte_event_eth_rx_adapter *rx_adapter,
> >>  	struct eth_device_info *dev_info,
> >>  	int32_t rx_queue_id)
> >>  {
> >> +	struct eth_rx_vector_data *vec;
> >>  	int pollq;
> >>  	int intrq;
> >>  	int sintrq;
> >> @@ -1663,6 +1835,14 @@ rxa_sw_del(struct rte_event_eth_rx_adapter *rx_adapter,
> >>  		return;
> >>  	}
> >>
> >> +	/* Push all the partial event vectors to event device. */
> >> +	TAILQ_FOREACH(vec, &rx_adapter->vector_list, next) {
> >> +		if (vec->queue != rx_queue_id)
> >> +			continue;
> >> +		rxa_vector_expire(vec, rx_adapter);
> >> +		TAILQ_REMOVE(&rx_adapter->vector_list, vec, next);
> >> +	}
> >> +
> >
> >We are doing packet related activity (rxa_flush_event_buffer()) outside
> >of rxa_service_func() although it wouldn't be running since queue del
> >code has the lock. It would also be done in the context of a control
> >thread. I don't know if there is a precedence for this. What do you think
> >of just freeing the vector data and mbufs ?
>
> Since we are just enqueueing to event device it should work fine.
> The teardown sequence for event devices is to quiesce all producers first
> and finally the event device.
> This will make sure that the packets get handled properly with the stop
> callback registered through `rte_event_dev_stop_flush_callback_register`.
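
For context, a minimal sketch of such a stop-flush callback is shown below. It is illustrative only and not part of the patch; the callback signature is taken from the eventdev stop-flush registration API as I understand it for this period and should be verified against the headers in use.

#include <rte_eventdev.h>
#include <rte_mbuf.h>

/* Invoked for each event still held by the event device when
 * rte_event_dev_stop() is called, so that queued mbufs (single or
 * vectorized) are not leaked.
 */
static void
stop_flush_cb(uint8_t dev_id, struct rte_event ev, void *arg)
{
	(void)dev_id;
	(void)arg;

	if (ev.event_type == RTE_EVENT_TYPE_ETH_RX_ADAPTER_VECTOR)
		rte_pktmbuf_free_bulk(ev.vec->mbufs, ev.vec->nb_elem);
	else
		rte_pktmbuf_free(ev.mbuf);
	/* The rte_event_vector object itself should also be returned to the
	 * mempool it was allocated from. */
}

/* Typically registered once after rte_event_dev_configure():
 * rte_event_dev_stop_flush_callback_register(dev_id, stop_flush_cb, NULL);
 */
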
>
> >
> >>  	pollq = rxa_polled_queue(dev_info, rx_queue_id);
> >>  	intrq = rxa_intr_queue(dev_info, rx_queue_id);
> >>  	sintrq = rxa_shared_intr(dev_info, rx_queue_id);
> >> @@ -1741,6 +1921,42 @@ rxa_add_queue(struct rte_event_eth_rx_adapter *rx_adapter,
> >>  	}
> >>  }
> >>
> >> +static void
> >> +rxa_sw_event_vector_configure(
> >> +	struct rte_event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id,
> >> +	int rx_queue_id,
> >> +	const struct rte_event_eth_rx_adapter_event_vector_config *config)
> >> +{
> >> +	struct eth_device_info *dev_info = &rx_adapter->eth_devices[eth_dev_id];
> >> +	struct eth_rx_queue_info *queue_info;
> >> +	struct rte_event *qi_ev;
> >> +
> >> +	if (rx_queue_id == -1) {
> >> +		uint16_t nb_rx_queues;
> >> +		uint16_t i;
> >> +
> >> +		nb_rx_queues = dev_info->dev->data->nb_rx_queues;
> >> +		for (i = 0; i < nb_rx_queues; i++)
> >> +			rxa_sw_event_vector_configure(rx_adapter, eth_dev_id, i,
> >> +						      config);
> >> +		return;
> >> +	}
> >> +
> >> +	queue_info = &dev_info->rx_queue[rx_queue_id];
> >> +	qi_ev = (struct rte_event *)&queue_info->event;
> >> +	queue_info->ena_vector = 1;
> >> +	qi_ev->event_type = RTE_EVENT_TYPE_ETH_RX_ADAPTER_VECTOR;
> >> +	rxa_set_vector_data(queue_info, config->vector_sz,
> >> +			    config->vector_timeout_ns, config->vector_mp,
> >> +			    rx_queue_id, dev_info->dev->data->port_id);
> >> +	rx_adapter->ena_vector = 1;
> >> +	rx_adapter->vector_tmo_ticks =
> >> +		rx_adapter->vector_tmo_ticks ?
> >> +			      RTE_MIN(config->vector_timeout_ns >> 1,
> >> +				      rx_adapter->vector_tmo_ticks) :
> >> +			      config->vector_timeout_ns >> 1;
> >> +}
> >> +
> >>  static int rxa_sw_add(struct rte_event_eth_rx_adapter *rx_adapter,
> >>  		uint16_t eth_dev_id,
> >>  		int rx_queue_id,
> >> @@ -1967,6 +2183,7 @@ rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id,
> >>  	rx_adapter->conf_cb = conf_cb;
> >>  	rx_adapter->conf_arg = conf_arg;
> >>  	rx_adapter->id = id;
> >> +	TAILQ_INIT(&rx_adapter->vector_list);
> >>  	strcpy(rx_adapter->mem_name, mem_name);
> >>  	rx_adapter->eth_devices = rte_zmalloc_socket(rx_adapter->mem_name,
> >>  					RTE_MAX_ETHPORTS *
> >> @@ -2081,6 +2298,15 @@ rte_event_eth_rx_adapter_queue_add(uint8_t id,
> >>  		return -EINVAL;
> >>  	}
> >>
> >> +	if ((cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR) == 0 &&
> >> +	    (queue_conf->rx_queue_flags &
> >> +	     RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR)) {
> >> +		RTE_EDEV_LOG_ERR("Event vectorization is not supported,"
> >> +				 " eth port: %" PRIu16 " adapter id: %" PRIu8,
> >> +				 eth_dev_id, id);
> >> +		return -EINVAL;
> >> +	}
> >> +
> >>  	if ((cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ) == 0 &&
> >>  		(rx_queue_id != -1)) {
> >>  		RTE_EDEV_LOG_ERR("Rx queues can only be connected to single "
> >> @@ -2143,6 +2369,17 @@ rte_event_eth_rx_adapter_queue_add(uint8_t id,
> >>  	return 0;
> >>  }
> >>
> >> +static int
> >> +rxa_sw_vector_limits(struct rte_event_eth_rx_adapter_vector_limits *limits)
> >> +{
> >> +	limits->max_sz = MAX_VECTOR_SIZE;
> >> +	limits->min_sz = MIN_VECTOR_SIZE;
> >> +	limits->max_timeout_ns = MAX_VECTOR_NS;
> >> +	limits->min_timeout_ns = MIN_VECTOR_NS;
> >> +
> >> +	return 0;
> >> +}
> >> +
> >>  int
> >>  rte_event_eth_rx_adapter_queue_del(uint8_t id, uint16_t eth_dev_id,
> >>  				int32_t rx_queue_id)
> >> @@ -2333,7 +2570,8 @@ rte_event_eth_rx_adapter_queue_event_vector_config(
> >>  		ret = dev->dev_ops->eth_rx_adapter_event_vector_config(
> >> 			dev, &rte_eth_devices[eth_dev_id], rx_queue_id, config);
> >>  	} else {
> >> -		ret = -ENOTSUP;
> >> +		rxa_sw_event_vector_configure(rx_adapter, eth_dev_id,
> >> +					      rx_queue_id, config);
> >>  	}
> >>
> >>  	return ret;
> >> @@ -2371,7 +2609,7 @@ rte_event_eth_rx_adapter_vector_limits_get(
> >>  		ret = dev->dev_ops->eth_rx_adapter_vector_limits_get(
> >>  			dev, &rte_eth_devices[eth_port_id], limits);
> >>  	} else {
> >> -		ret = -ENOTSUP;
> >> +		ret = rxa_sw_vector_limits(limits);
> >>  	}
> >>
> >>  	return ret;
> >> diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
> >> index be0499c52..62824654b 100644
> >> --- a/lib/librte_eventdev/rte_eventdev.c
> >> +++ b/lib/librte_eventdev/rte_eventdev.c
> >> @@ -122,7 +122,11 @@ rte_event_eth_rx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,
> >>
> >>  	if (caps == NULL)
> >>  		return -EINVAL;
> >> -	*caps = 0;
> >> +
> >> +	if (dev->dev_ops->eth_rx_adapter_caps_get == NULL)
> >> +		*caps = RTE_EVENT_ETH_RX_ADAPTER_SW_CAP;
> >> +	else
> >> +		*caps = 0;
> >>
> >>  	return dev->dev_ops->eth_rx_adapter_caps_get ?
> >>  			(*dev->dev_ops->eth_rx_adapter_caps_get)(dev,
> >> --
> >> 2.17.1

With the above change, you can add Acked-by: Jay Jayatheerthan.
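
For readers following the API discussed in this thread, a minimal usage sketch based on the structures and functions visible in this patch is below. It is illustrative only: field and function names are taken from this version of the series (the interface may differ in later DPDK releases), and the mempool, device ids and queue configuration passed in are assumed to be set up elsewhere in the application.

#include <errno.h>
#include <rte_eventdev.h>
#include <rte_event_eth_rx_adapter.h>

/* Sketch: enable Rx event vectorization on one queue of an Rx adapter,
 * using the capability flag, limits query and per-queue vector config
 * added by this patch series.
 */
static int
enable_rx_vectorization(uint8_t adapter_id, uint8_t dev_id, uint16_t eth_dev_id,
			uint16_t rx_queue_id, struct rte_mempool *vector_mp,
			struct rte_event_eth_rx_adapter_queue_conf *qconf)
{
	struct rte_event_eth_rx_adapter_vector_limits limits;
	struct rte_event_eth_rx_adapter_event_vector_config vconf;
	uint32_t caps;
	int ret;

	ret = rte_event_eth_rx_adapter_caps_get(dev_id, eth_dev_id, &caps);
	if (ret < 0 || !(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR))
		return -ENOTSUP;

	ret = rte_event_eth_rx_adapter_vector_limits_get(dev_id, eth_dev_id,
							 &limits);
	if (ret < 0)
		return ret;

	/* Request vectorization when adding the queue. */
	qconf->rx_queue_flags |= RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR;
	ret = rte_event_eth_rx_adapter_queue_add(adapter_id, eth_dev_id,
						 rx_queue_id, qconf);
	if (ret < 0)
		return ret;

	/* Choose a vector size and timeout within the reported limits and
	 * pass the mempool that holds the rte_event_vector objects. */
	vconf.vector_sz = limits.max_sz;
	vconf.vector_timeout_ns = limits.min_timeout_ns;
	vconf.vector_mp = vector_mp;
	return rte_event_eth_rx_adapter_queue_event_vector_config(
		adapter_id, eth_dev_id, rx_queue_id, &vconf);
}
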