Date: Thu, 7 Jul 2022 03:26:49 +0300
From: Dmitry Kozlyuk
To: Antonio Di Bacco
Cc: users@dpdk.org
Subject: Re: Shared memory between two primary DPDK processes

2022-07-07 00:14 (UTC+0200), Antonio Di Bacco:
> Dear Dmitry,
>
> I tried to follow this approach, and if I allocate 1 GB in primary
> process number 1, I can mmap that memory in primary process number 2.
> I also converted the virt addr of the allocation made in primary 1
> to phys, converted the virt addr returned by mmap in primary 2,
> and got the same phys addr.
>
> Unfortunately, if I try to allocate only 10 MB, for example, in
> primary 1, the mmap in primary 2 succeeds, but this virt addr does
> not seem to correspond to the same phys memory as in primary 1.
>
> In primary 2, mmap is used like this:
>
> int flags = MAP_SHARED | MAP_HUGETLB;
>
> uint64_t *addr = (uint64_t *)mmap(NULL, sz, PROT_READ | PROT_WRITE,
>                                   flags, my_mem_fd, off);

Hi Antonio,

From `man 2 mmap`:

   Huge page (Huge TLB) mappings
       For mappings that employ huge pages, the requirements for the
       arguments of mmap() and munmap() differ somewhat from the
       requirements for mappings that use the native system page size.

       For mmap(), offset must be a multiple of the underlying huge
       page size.  The system automatically aligns length to be a
       multiple of the underlying huge page size.

       For munmap(), addr, and length must both be a multiple of the
       underlying huge page size.

Probably process 1 maps a 1 GB hugepage: DPDK does so if 1 GB hugepages
are used, even if you only allocate 10 MB.
You can examine the memseg (not the memzone!) to see what size it is.

Hugepage size is a property of each mounted HugeTLB filesystem.
It determines which kernel pool the pages come from,
and the pools are backed by disjoint sets of physical pages.
This means the kernel does not allow mapping the same page frames
as 1 GB and as 2 MB hugepages at the same time via hugetlbfs.
I'm surprised mmap() works at all in your case and suspect
that it is mapping 2 MB hugepages in process 2.

The solution may be, in process 2:

    base_offset = RTE_ALIGN_FLOOR(offset, hugepage_size);
    map_addr = mmap(NULL, hugepage_size, PROT_READ | PROT_WRITE,
                    MAP_SHARED | MAP_HUGETLB, my_mem_fd, base_offset);
    addr = RTE_PTR_ADD(map_addr, offset - base_offset);

Note that if [offset, offset + size) crosses a hugepage boundary,
you have to map more than one page.
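
Putting this together, here is a rough sketch of the mapping side
that also covers the boundary crossing (untested; the function name
map_shared and the parameter names are just for illustration, and
hugepage_size must be the page size reported by the memseg):

    #include <stdint.h>
    #include <sys/mman.h>
    #include <rte_common.h>

    /* Map [off, off + size) of the hugetlbfs file into process 2,
     * honoring the alignment rules quoted above. */
    static void *
    map_shared(int my_mem_fd, off_t off, size_t size,
               uint64_t hugepage_size)
    {
        /* mmap() requires the file offset to be a multiple of the
         * hugepage size, so round it down to the containing page... */
        off_t base_off = RTE_ALIGN_FLOOR(off, hugepage_size);
        /* ...and round the length up so the mapping still covers the
         * whole region even when it crosses a hugepage boundary. */
        size_t map_size = RTE_ALIGN_CEIL(off - base_off + size,
                                         hugepage_size);
        void *map_addr = mmap(NULL, map_size, PROT_READ | PROT_WRITE,
                              MAP_SHARED | MAP_HUGETLB, my_mem_fd,
                              base_off);

        if (map_addr == MAP_FAILED)
            return NULL;
        /* The requested data starts this many bytes into the mapping. */
        return RTE_PTR_ADD(map_addr, off - base_off);
    }

Since map_size is itself hugepage-aligned, the eventual
munmap(map_addr, map_size) also satisfies the rules above.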
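
On the other side, a sketch of how process 1 could discover the fd,
the file offset, and the hugepage size to hand over to process 2
(assuming the memory comes from rte_malloc() and the hugepage files
are not unlinked, i.e. EAL runs without --huge-unlink; the function
name describe_region is hypothetical):

    #include <stdint.h>
    #include <sys/types.h>
    #include <rte_common.h>
    #include <rte_memory.h>

    /* Describe the hugetlbfs backing of a DPDK-allocated address. */
    static int
    describe_region(const void *addr, int *fd, off_t *off,
                    uint64_t *hugepage_size)
    {
        struct rte_memseg *ms = rte_mem_virt2memseg(addr, NULL);
        size_t seg_off;

        if (ms == NULL)
            return -1;
        *hugepage_size = ms->hugepage_sz; /* the pool's page size */
        *fd = rte_memseg_get_fd(ms);
        if (*fd < 0 || rte_memseg_get_fd_offset(ms, &seg_off) < 0)
            return -1;
        /* The segment starts at seg_off within the file;
         * addr lies RTE_PTR_DIFF(addr, ms->addr) bytes into it. */
        *off = seg_off + RTE_PTR_DIFF(addr, ms->addr);
        return 0;
    }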