From: Piotr Krzewinski
To: Mattias Rönnblom, Piotr Krzewinski, harry.van.haaren@intel.com
Cc: dev@dpdk.org, Luka Jankovic, Stephen Hemminger, Jerin Jacob Kollanukkaran
Subject: Re: [PATCH v2] service: add service maintenance callback
Date: Wed, 22 Jan 2025 15:49:35 +0100
References: <20241231100208.1105045-1-piotr.krzewinski@ericsson.com>
List-Id: DPDK patches and discussions

On 1/7/2025 11:01 AM, Mattias Rönnblom wrote:
> On 2024-12-31 11:02, Piotr Krzewinski wrote:
>> Add option to register a callback running on service lcores
>> along regular services, which gets information about the service
>> loop.
>> It enables doing maintenance work or power saving during periods when
>> all registered services are idling.
>>
>> As an example, an application that is doing dequeues from multiple
>> event ports using a single service lcore (e.g. using the rte dispatcher
>> library) may want to wait for new events inside the maintenance callback
>> when there is no work available on ANY of the ports.
>> This is not possible using a non-zero dequeue timeout without increasing
>> the latency of work that is scheduled to other event ports.
>>
>
> If the purpose of this mechanism is to allow user-defined power
> management, we should try to find a more specific name. In a UNIX
> kernel, this kind of thing happens in the "idle loop" (or "idle task").
> The user would be responsible for implementing the "idle governor"
> (to use Linux terminology).
>
> "idle hook", "idle callback", or "idle handler" maybe.
>

My initial idea, apart from the power management aspects, was that such
a hook could allow some more complex but not time-sensitive maintenance
work to be done in periods of low traffic / low service core usage.
Though that may be a bit far-fetched and not a real use case. The
'idle hook/callback' name would fit this intention as well.

> For an app using both eventdev+dispatcher lib and *other* non-trivial
> RTE services, the issue is really that the work scheduler (i.e., the
> event device) does not know about all work being performed.
>
> That said, a solution to that larger issue likely involves some
> extensive rework of such an app, and potentially DPDK changes as well.
> The kind of callback suggested in this RFC may well serve as a stopgap
> solution which allows the implementation of some basic power management
> support.
>

Well, we have a deployment using the discussed mechanism currently, due
to the limitations you point out, so I figured there may be other users
that would benefit from that option.
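To make the intended behaviour concrete, here is a minimal, self-contained
sketch of the idea outside of DPDK (all names are illustrative and do not
match the actual patch or any existing DPDK API): each iteration of a
service lcore's loop runs the registered services and then invokes the
callback with a flag saying whether every service reported it had no work.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_SERVICES 8

/* Illustrative only: a service returns 0 if it did work, nonzero if idle. */
typedef int (*service_fn)(void *args);
/* The proposed per-lcore callback; 'all_idle' tells it whether any
 * service found work during the last loop iteration. */
typedef void (*idle_cb_fn)(bool all_idle, void *args);

static service_fn services[MAX_SERVICES];
static size_t num_services;
static idle_cb_fn idle_cb;
static void *idle_cb_args;

static int
idle_cb_register(idle_cb_fn cb, void *args)
{
	if (cb == NULL)
		return -1; /* the patch returns -EINVAL for a NULL callback */
	idle_cb = cb;
	idle_cb_args = args;
	return 0;
}

/* One iteration of a service lcore's run loop. */
static void
service_loop_iteration(void)
{
	bool all_idle = true;
	size_t i;

	for (i = 0; i < num_services; i++)
		if (services[i](NULL) == 0)
			all_idle = false;
	/* After all services ran, tell the callback whether the whole
	 * iteration was idle, so it can e.g. sleep or do low-priority
	 * maintenance work. */
	if (idle_cb != NULL)
		idle_cb(all_idle, idle_cb_args);
}

/* --- tiny demo: a service that has work for its first 3 calls --- */
static int demo_budget = 3;

static int
demo_service(void *args)
{
	(void)args;
	return demo_budget-- > 0 ? 0 : 1;
}

static void
count_idle(bool all_idle, void *args)
{
	if (all_idle)
		(*(int *)args)++;
}

static int
run_demo(void)
{
	int idle_iterations = 0;
	int i;

	services[num_services++] = demo_service;
	idle_cb_register(count_idle, &idle_iterations);
	for (i = 0; i < 5; i++)
		service_loop_iteration();
	return idle_iterations; /* 2 of the 5 iterations were fully idle */
}
```

In this sketch the "wait for new events" use case above would simply be a
callback that blocks (or sleeps) only when all_idle is true.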
> In light of the fact that we (or at least I) don't really know what we
> are doing here, maybe it's better to have this as a pure "iteration
> hook/callback", without any particular opinion on how it should be used.
>
> Such a solution, with arrays of service call result codes and service
> ids, would come with a little bit more complexity/overhead.
>
> Stephen and Jerin, your input would be greatly appreciated on this
> matter. Especially the "bigger picture" question.
>

I am a bit afraid of the amount of refactoring in the service framework
required for this approach, and that it would perhaps introduce
significant overhead. I feel that tracking return codes from all the
various services inside the hook would be a bit more troublesome from
the application perspective, and it does not enable many more use cases.
But if there is general agreement that it is the better option, I can
try to do some prototyping in this direction.

>
> The existence of this new API should probably be touched upon in the
> user guide as well. And the release change log.
>

Good idea; I will fix this in the next version, once the naming/purpose
and the general idea are agreed upon.

>
> It should be made clear which thread (the service lcore's) runs this
> callback, and when (after each iteration).
>
> It should be clear if multiple callbacks are allowed per lcore.
>
> What happens if a callback is already registered?
>

Thanks, I will try to clarify this in v3.

>> + * @param callback Function callback to register
>> + * @param lcore Id of the service core.
>
> It could be useful to have shorthand for "all current service cores".
> Either a separate function, or a special lcore id for the above
> function.
>
> LCORE_ID_ANY could be used, but would make it look like you registered
> the hook on any *one* service lcore, which wouldn't be the case.
>
> Maybe not worth the trouble.
>

Hard to say if there is any similar notion of SERVICE_LCORE_ALL
anywhere, and I didn't really see a need for it.

>> + * @retval 0 Successfully registered the callback.
>> + *         -EINVAL Attempted to register an invalid callback or the
>
> What is an "invalid callback"? NULL?
>

Yes, NULL is the only invalid case.

Best Regards,
Piotr
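P.S. For concreteness, the more general per-iteration hook discussed
above, one that receives the individual service results instead of a
single idle flag, could have a shape like the following. These are
purely hypothetical signatures; nothing here exists in DPDK, and the
single-flag policy shown is just one thing an application could build
on top of such a hook.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical: after each service lcore iteration, the hook would
 * receive the ids of the services that ran and their return codes,
 * leaving the "was this iteration idle?" policy to the application. */
typedef void (*iteration_hook_fn)(const uint32_t *service_ids,
				  const int *return_codes,
				  size_t n_services, void *userdata);

/* Example policy: treat the iteration as idle only if every service
 * reported "no work" (here, any nonzero return code means idle). */
static void
all_idle_policy(const uint32_t *ids, const int *rcs, size_t n, void *ud)
{
	bool *all_idle = ud;
	size_t i;

	(void)ids;
	*all_idle = true;
	for (i = 0; i < n; i++)
		if (rcs[i] == 0)
			*all_idle = false;
}
```

The extra arrays are where the complexity/overhead Mattias mentions
would come from: the framework would have to collect and pass per-service
results on every iteration, whereas the single-flag variant only needs
one boolean.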