From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by dpdk.space (Postfix) with ESMTP id CCAD2A0096
	for <public@inbox.dpdk.org>; Mon,  3 Jun 2019 15:36:13 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id 9E5951B96E;
	Mon,  3 Jun 2019 15:36:13 +0200 (CEST)
Received: from mail-lf1-f65.google.com (mail-lf1-f65.google.com
 [209.85.167.65]) by dpdk.org (Postfix) with ESMTP id 2EDE11B96B
 for <dev@dpdk.org>; Mon,  3 Jun 2019 15:36:12 +0200 (CEST)
Received: by mail-lf1-f65.google.com with SMTP id l26so13595478lfh.13
 for <dev@dpdk.org>; Mon, 03 Jun 2019 06:36:12 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=semihalf-com.20150623.gappssmtp.com; s=20150623;
 h=subject:from:to:cc:references:message-id:date:user-agent
 :mime-version:in-reply-to:content-language:content-transfer-encoding;
 bh=022d9Lmwopv2G7zIC7DzByo8YdDy8FjXNStb+08+XwU=;
 b=OxV/BJ1fegfIQvuVL5fWsFEBR3x25jfA7JYM+ABCLXrKv+feudk7cQiUcZUxDBt6/6
 E/O565Rcfm3tXb79293+2ncJd1ArQS2zaDgVnVfg4OHa70lyqHB7I3uQDakSK+39InO7
 vA2Xp4ux1ZVR3aHTVWC83OnGaRgEKzFBccRvD6oAWj9e6pekg6T0d1VO8e/ac6uIMAcY
 nqw8RrG0VIxt2SaKz1DtosZE7BlB98HCH2Cns3wD+24gdIz3Ja8Hm5hg8XfRl9yFOtLC
 fjp/VPOfJwHHtXzDybHkZXBqPMmJs32P/kbduQpFF0H0EHCdM//DixBiGJxLI0BrgzcU
 WMuA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:from:to:cc:references:message-id:date
 :user-agent:mime-version:in-reply-to:content-language
 :content-transfer-encoding;
 bh=022d9Lmwopv2G7zIC7DzByo8YdDy8FjXNStb+08+XwU=;
 b=oiz2SwwgpXYW0p6PShukiz6Owwbiy2cwxSpCGFPa2lPGjAmSGGMrfQuPlF5qrJVE2e
 x8sdwz/ZDHSCWR/VrSq2SfDUL9RHDg455yhHVtHfafwfkGPdk+w1qzHBz0s7CE27PCE1
 YTV+NaExPKwLBZJ+Cq1kCxvhw/EiIcHHnkRhQt+3mNplrO8P2C6fhRaExHoglwOkJVv0
 T6yL4il5gcraVpJmMqtIuTqrEoEVZbPjCSXJAy7reEU+sEijP4p4uttwu3edqfjw7EtA
 viBXy8Rv7tRP5iWsvekyWqBOZRvtoj4UURDAaQMbmLqDUXb+AEYWD756mCV0etASncXa
 MfSA==
X-Gm-Message-State: APjAAAXfMc0+J0LGAxB6CVg2RdcZreg4SX9Hw9NEHn/J7i4sXow/BHgm
 RId2NNZeZ1J5Mn/oRgXCJHbP9Q==
X-Google-Smtp-Source: APXvYqyAA7zEgXNnSIkivh73RZa8lrbAp9NcxJWVGPFJOfDzCfpCDl/s+gxJx7DWzhFj5BkpHXkWzA==
X-Received: by 2002:a19:e34e:: with SMTP id c14mr13551957lfk.47.1559568971734; 
 Mon, 03 Jun 2019 06:36:11 -0700 (PDT)
Received: from [10.0.0.49] (31-172-191-173.noc.fibertech.net.pl.
 [31.172.191.173])
 by smtp.gmail.com with ESMTPSA id w3sm3122261lji.19.2019.06.03.06.36.10
 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 03 Jun 2019 06:36:11 -0700 (PDT)
From: =?UTF-8?Q?Micha=c5=82_Krawczyk?= <mk@semihalf.com>
To: Anatoly Burakov <anatoly.burakov@intel.com>, dev@dpdk.org
Cc: Marcin Wojtas <mw@semihalf.com>, Guy Tzalik <gtzalik@amazon.com>,
 Evgeny Schemeilin <evgenys@amazon.com>, stephen@networkplumber.org,
 thomas@monjalon.net, david.marchand@redhat.com
References: <cover.1559147228.git.anatoly.burakov@intel.com>
 <5f6e26e27ad524f85ee9a911aeebae69f1ec0c1a.1559147228.git.anatoly.burakov@intel.com>
 <2f73f49d-e13b-ac1f-9e32-80b9d39b1166@semihalf.com>
Message-ID: <4faeb0be-fe10-d866-1027-0f3ef351cd3a@semihalf.com>
Date: Mon, 3 Jun 2019 15:36:10 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101
 Thunderbird/52.9.1
MIME-Version: 1.0
In-Reply-To: <2f73f49d-e13b-ac1f-9e32-80b9d39b1166@semihalf.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
Subject: Re: [dpdk-dev] [PATCH 24/25] net/ena: fix direct access to shared
	memory config
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>

On 03.06.2019 09:33, Michał Krawczyk wrote:
> On 29.05.2019 18:31, Anatoly Burakov wrote:
>> The ENA driver calculates a ring's NUMA node affinity by directly
>> accessing the memzone list. Fix it to do so through the public
>> APIs instead.
>>
>> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
>> ---
>>   drivers/net/ena/ena_ethdev.c | 18 +++---------------
>>   1 file changed, 3 insertions(+), 15 deletions(-)
>>
>> diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
>> index b6651fc0f..e745e9e92 100644
>> --- a/drivers/net/ena/ena_ethdev.c
>> +++ b/drivers/net/ena/ena_ethdev.c
>> @@ -274,20 +274,6 @@ static const struct eth_dev_ops ena_dev_ops = {
>>   #define NUMA_NO_NODE    SOCKET_ID_ANY
>> -static inline int ena_cpu_to_node(int cpu)
>> -{
>> -    struct rte_config *config = rte_eal_get_configuration();
>> -    struct rte_fbarray *arr = &config->mem_config->memzones;
>> -    const struct rte_memzone *mz;
>> -
>> -    if (unlikely(cpu >= RTE_MAX_MEMZONE))
>> -        return NUMA_NO_NODE;
>> -
>> -    mz = rte_fbarray_get(arr, cpu);
>> -
>> -    return mz->socket_id;
>> -}
>> -
>>   static inline void ena_rx_mbuf_prepare(struct rte_mbuf *mbuf,
>>                          struct ena_com_rx_ctx *ena_rx_ctx)
>>   {
>> @@ -1099,6 +1085,7 @@ static int ena_create_io_queue(struct ena_ring 
>> *ring)
>>   {
>>       struct ena_adapter *adapter;
>>       struct ena_com_dev *ena_dev;
>> +    struct rte_memseg_list *msl;
>>       struct ena_com_create_io_ctx ctx =
>>           /* policy set to _HOST just to satisfy icc compiler */
>>           { ENA_ADMIN_PLACEMENT_POLICY_HOST,
>> @@ -1126,7 +1113,8 @@ static int ena_create_io_queue(struct ena_ring 
>> *ring)
>>       }
>>       ctx.qid = ena_qid;
>>       ctx.msix_vector = -1; /* interrupts not used */
>> -    ctx.numa_node = ena_cpu_to_node(ring->id);
>> +    msl = rte_mem_virt2memseg_list(ring);
>> +    ctx.numa_node = msl->socket_id;
>>       rc = ena_com_create_io_queue(ena_dev, &ctx);
>>       if (rc) {
>>
> 
> Hi Anatoly,
> 
> I'm not sure why the previous maintainers implemented it that way; I 
> can only guess. I think they assumed that each queue would be assigned 
> to the lcore whose id equals the ring id. They probably also 
> misunderstood how memzones work and thought that each lcore has 
> exactly one memzone assigned, mapped 1 to 1.
> 
> They wanted to prevent cross-NUMA data access, i.e. the case where the 
> CPU operates in one NUMA zone while the IO queue memory resides in 
> another. I don't think the above solution prevents that either, as you 
> are using the ring address, which is allocated together with
> struct ena_adapter (it is just an array), so all the rings will 
> probably reside in a single NUMA zone.
> 
> I'm currently thinking about a solution that would let us determine on 
> which NUMA zone the queue descriptors will be allocated and on which 
> one the lcore assigned to the queue will be running, but I have no 
> ideas for now :)
> 
> Anyway, your fix won't break anything, as the previous solution wasn't 
> working as intended, so until I fix it properly we can keep this patch 
> to prevent direct usage of the memzone list.
> 
> Thanks,
> Michal

After further investigation I think we should use the socket_id provided 
by the tx/rx queue setup functions.
Could you please abandon this patch? I will send the proper fix soon.

Thanks,
Michal