Date: Thu, 2 May 2024 09:16:58 -0700
From: Stephen Hemminger
To: Ferruh Yigit
Cc: dev@dpdk.org, "John W. Linville", Mattias Rönnblom, Morten Brørup
Linville" , Mattias =?UTF-8?B?UsO2bm5ibG9t?= , Morten =?UTF-8?B?QnI=?= =?UTF-8?B?w7hydXA=?= Subject: Re: [PATCH] net/af_packet: fix statistics Message-ID: <20240502091658.2d51ca75@hermes.local> In-Reply-To: References: <20240430154129.7347-1-stephen@networkplumber.org> <9025199c-585c-4779-9f4e-360845707088@amd.com> <20240501094303.751e95b4@hermes.local> MIME-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org On Thu, 2 May 2024 15:12:43 +0100 Ferruh Yigit wrote: > I am not referring multiple core sharing a queue, this is wrong for DPDK. > > For single core case, if a variable doesn't have 'volatile' qualifier > and load/stores are not atomic, as compiler is not aware that there may > be other threads will access this value, what prevents compiler to hold > stats value in a register and store it memory at later stage, let say at > the end of the application forwarding loop? No other driver does this; it is not a problem since once rx/tx burst function returns, there is no state the compiler is aware of. > For this case keeping 'volatile' qualifier works. But Mattias has a > suggestion to use "normal load + normal add + atomic store" in datapath. > > > For getting stats, which will be in different thread, you are already > casting it to 'volatile' to enforce compiler read from memory each time. > > > Additionally when we take stats reset into account, which can happen in > different thread, 'volatile' is not enough. > For reliable stats reset, either we need to use full atomics, or offset > logic which I am trying to in other patch. If you want to attack the problem in the general case, there should be standard set of functions for handling per-core stats. If you look at Linux kernel you will see macros around this function: netdev_core_stats_inc((struct net_device *dev, u32 offset) { struct net_device_core_stats __percpu *p = READ_ONCE(dev->core_stats); unsigned long __percpu *field; ... field = (__force unsigned long __percpu *)((__force void *)p + offset); this_cpu_inc(*field); } The expansion of this_cpu_inc is nested but ends up being: *field += 1; No volatile is needed. The conclusion I reach from this is: - atomic is not needed for per-cpu data. See "volatile considered harmful" using a form of compiler barrier is useful but should be hidden in generic code. - what is the state of the better per-lcore patches? that should be used here. - need a more generic percpu stats infrastructure - ethedev needs to have a generic percpu infra and not reinvented in xdp, tap, packet, etc.