From: Michał Krawczyk
To: Anatoly Burakov, dev@dpdk.org
Cc: Marcin Wojtas, Guy Tzalik, Evgeny Schemeilin, stephen@networkplumber.org, thomas@monjalon.net, david.marchand@redhat.com
Date: Mon, 3 Jun 2019 09:33:33 +0200
Message-ID: <2f73f49d-e13b-ac1f-9e32-80b9d39b1166@semihalf.com>
In-Reply-To: <5f6e26e27ad524f85ee9a911aeebae69f1ec0c1a.1559147228.git.anatoly.burakov@intel.com>
Subject: Re: [dpdk-dev] [PATCH 24/25] net/ena: fix direct access to shared memory config

On 29.05.2019 18:31, Anatoly Burakov wrote:
> The ENA driver calculates a ring's NUMA node affinity by directly
> accessing the memzone list. Fix it to do it through the public
> API's instead.
>
> Signed-off-by: Anatoly Burakov
> ---
>  drivers/net/ena/ena_ethdev.c | 18 +++---------------
>  1 file changed, 3 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
> index b6651fc0f..e745e9e92 100644
> --- a/drivers/net/ena/ena_ethdev.c
> +++ b/drivers/net/ena/ena_ethdev.c
> @@ -274,20 +274,6 @@ static const struct eth_dev_ops ena_dev_ops = {
>
>  #define NUMA_NO_NODE	SOCKET_ID_ANY
>
> -static inline int ena_cpu_to_node(int cpu)
> -{
> -	struct rte_config *config = rte_eal_get_configuration();
> -	struct rte_fbarray *arr = &config->mem_config->memzones;
> -	const struct rte_memzone *mz;
> -
> -	if (unlikely(cpu >= RTE_MAX_MEMZONE))
> -		return NUMA_NO_NODE;
> -
> -	mz = rte_fbarray_get(arr, cpu);
> -
> -	return mz->socket_id;
> -}
> -
>  static inline void ena_rx_mbuf_prepare(struct rte_mbuf *mbuf,
>  					struct ena_com_rx_ctx *ena_rx_ctx)
>  {
> @@ -1099,6 +1085,7 @@ static int ena_create_io_queue(struct ena_ring *ring)
>  {
>  	struct ena_adapter *adapter;
>  	struct ena_com_dev *ena_dev;
> +	struct rte_memseg_list *msl;
>  	struct ena_com_create_io_ctx ctx =
>  		/* policy set to _HOST just to satisfy icc compiler */
>  		{ ENA_ADMIN_PLACEMENT_POLICY_HOST,
> @@ -1126,7 +1113,8 @@ static int ena_create_io_queue(struct ena_ring *ring)
>  	}
>  	ctx.qid = ena_qid;
>  	ctx.msix_vector = -1; /* interrupts not used */
> -	ctx.numa_node = ena_cpu_to_node(ring->id);
> +	msl = rte_mem_virt2memseg_list(ring);
> +	ctx.numa_node = msl->socket_id;
>
>  	rc = ena_com_create_io_queue(ena_dev, &ctx);
>  	if (rc) {
>

Hi Anatoly,

I'm not sure why the previous maintainers implemented it this way; I can only guess. I think they assumed that each queue would be serviced by the lcore whose id equals the ring id. They probably also misunderstood how memzones work, thinking that each lcore has exactly one memzone assigned to it, mapped 1:1. Their goal was to prevent cross-NUMA data access, where the CPU operates in one NUMA zone while the IO queue's memory resides in another.
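If that "lcore id == ring id" assumption had actually held, the socket of the servicing core could have been read from EAL directly instead of indexing the memzone list. Just to illustrate the distinction (a hypothetical sketch, not a proposed fix; ena_ring_to_lcore_node() is a made-up name):

    #include <rte_lcore.h>	/* rte_lcore_to_socket_id() */

    /* Hypothetical helper: NUMA node of the lcore assumed to service
     * the ring. Only meaningful under the (unverified) assumption that
     * ring->id is also the id of the lcore servicing this queue. */
    static inline int ena_ring_to_lcore_node(struct ena_ring *ring)
    {
    	if (ring->id >= RTE_MAX_LCORE)
    		return NUMA_NO_NODE;

    	return rte_lcore_to_socket_id(ring->id);
    }

Even that would only cover the lcore side of the problem, though; it says nothing about where the queue's memory itself ends up.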
Coming back to your patch: I don't think it prevents cross-NUMA access either. You are passing the ring's address, which is allocated together with struct ena_adapter (the rings are just an array inside it), so all of the rings will probably reside in a single NUMA zone. I'm currently thinking about a solution that would let us determine both the NUMA zone where the queue descriptors are allocated and the one where the lcore assigned to the queue runs, but I have no ideas for now :)

Anyway, your fix won't break anything, since the previous solution wasn't working as intended, so until I fix it properly we can keep this patch to prevent direct usage of the memzone list.

Thanks,
Michal