Date: Wed, 27 May 2020 13:57:39 -0700
From: Stephen Hemminger
To: Jerin Jacob
Cc: Anatoly Burakov, dpdk-dev, David Hunt, Liang Ma
Message-ID: <20200527135739.5e77ca35@hermes.lan>
Subject: Re: [dpdk-dev] [RFC 0/6] Power-optimized RX for Ethernet devices

On Wed, 27 May 2020 23:03:59 +0530
Jerin Jacob wrote:

> On Wed, May 27, 2020 at 10:32 PM Anatoly Burakov wrote:
> >
> > This patchset proposes a simple API for Ethernet drivers
> > to cause the CPU to enter a power-optimized state while
> > waiting for packets to arrive, along with a set of
> > (hopefully generic) intrinsics that facilitate that. This
> > is achieved through cooperation with the NIC driver, which
> > allows us to know the address of the next NIC RX ring
> > packet descriptor and wait for writes to it.
> >
> > On IA, this is achieved by using the UMONITOR/UMWAIT
> > instructions. They are used in their raw opcode form
> > because there is no widespread compiler support for
> > them yet. Still, the API is made generic enough to
> > hopefully support other architectures, if they happen
> > to implement similar instructions.
> >
> > To achieve power savings, a very simple mechanism is
> > used: we count empty polls, and once a certain threshold
> > is reached, we get the address of the next RX ring
> > descriptor from the NIC driver, arm the monitoring
> > hardware, and enter a power-optimized state. We then
> > wake up when either a timeout expires or a write happens
> > (or, generally, whenever the CPU feels like waking up -
> > this is platform-specific), and proceed as normal. The
> > empty poll counter is reset whenever we actually get
> > packets, so we only go to sleep when we know nothing is
> > going on.
> >
> > Why are we putting this into ethdev as opposed to leaving
> > it up to the application? Our customers specifically
> > requested a way to do it with minimal changes to the
> > application code. The current approach lets them just
> > flip a switch and automagically get power savings.
> >
> > There are certain limitations in this patchset right now:
> > - Currently, only a 1:1 core-to-queue mapping is
> >   supported, meaning that each lcore must handle RX on
> >   at most a single queue
> > - Currently, power management is enabled per-port, not
> >   per-queue
> > - There is potential to greatly increase TX latency if we
> >   are buffering things and go to sleep before sending
> >   packets
> > - The API is not perfect and could use some improvement
> >   and discussion
> > - The API doesn't extend to other device types
> > - The intrinsics are platform-specific, so ethdev has
> >   some platform-specific code in it
> > - Support was only implemented for devices using the
> >   net/ixgbe, net/i40e and net/ice drivers
> >
> > Hopefully this will generate enough feedback to clear
> > a path forward!
>
> Just for my understanding:
>
> How is this solution superior to the Rx queue interrupt-based
> scheme applied in l3fwd-power?
>
> What I mean by superior here, for example:
> a) Are there any power savings in milliwatts vs. the
>    interrupt scheme?
> b) Is there any improvement in state-transition time
>    (i.e. how fast it can move from a low-power state to
>    full power) vs. the interrupt scheme?
> etc.
>
> Or is this just about pushing all the logic into ethdev so
> that it is transparent to applications?
>

The interrupt scheme is going to get better power management,
since the core can go to WAIT.

This scheme does look interesting in theory since it will have
lower latency, but it has a number of issues:
  * it requires changing drivers
  * it cannot multiplex multiple queues per core; you are
    assuming a certain threading model
  * what if the thread is preempted?
  * what about a thread in a VM?
  * it is platform-specific: ARM and x86 have different
    semantics here
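
For illustration only, below is a minimal sketch of the empty-poll mechanism
the cover letter describes: count empty polls, and past a threshold ask the
driver for the address of the next RX descriptor, arm the monitor on it, and
sleep until the NIC writes to it. This is not the API proposed in the RFC; the
names get_next_rx_desc_addr(), monitor_and_umwait(), rx_burst() and
EMPTY_POLL_THRESHOLD are hypothetical stand-ins.

/*
 * Sketch of the empty-poll / monitor-and-sleep mechanism under the RFC's
 * stated 1:1 core-to-queue limitation. All external helpers are hypothetical.
 */
#include <stdint.h>

#define EMPTY_POLL_THRESHOLD 512 /* empty polls before arming the monitor */
#define WAKEUP_TIMEOUT       0   /* 0 = no timeout; units are platform-specific */

/* Hypothetical driver hook: address of the next RX descriptor to be written. */
extern volatile void *get_next_rx_desc_addr(uint16_t port_id, uint16_t queue_id);

/* Hypothetical wrapper around UMONITOR/UMWAIT (or an equivalent on other
 * architectures): arm the monitor on 'addr' and sleep until a write to it,
 * a timeout, or a spurious platform wakeup. */
extern void monitor_and_umwait(volatile void *addr, uint64_t timeout);

/* Stand-in for the normal RX burst call. */
extern uint16_t rx_burst(uint16_t port_id, uint16_t queue_id,
			 void **pkts, uint16_t nb_pkts);

/* Per-lcore empty-poll counter; valid only with a 1:1 core-to-queue mapping. */
static __thread uint32_t empty_polls;

static uint16_t
rx_burst_power_managed(uint16_t port_id, uint16_t queue_id,
		       void **pkts, uint16_t nb_pkts)
{
	uint16_t nb_rx = rx_burst(port_id, queue_id, pkts, nb_pkts);

	if (nb_rx != 0) {
		/* Traffic is flowing: reset the counter and stay awake. */
		empty_polls = 0;
		return nb_rx;
	}

	if (++empty_polls < EMPTY_POLL_THRESHOLD)
		return 0;

	/*
	 * Nothing has arrived for a while: ask the driver where the NIC will
	 * write the next descriptor, arm the monitor on that address and
	 * enter a power-optimized state until a write (or timeout) occurs.
	 */
	monitor_and_umwait(get_next_rx_desc_addr(port_id, queue_id),
			   WAKEUP_TIMEOUT);
	empty_polls = 0;
	return 0;
}

Resetting the counter whenever packets arrive means the sleep path is only
taken when the queue has been idle for a sustained period, which is what keeps
the latency cost confined to low-traffic phases.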