MIME-Version: 1.0
References: <CAO8pfFmiSYf=z5kK4EBwJkyJEpSCUrCqZspbNb3dC8nEPipUBw@mail.gmail.com>
 <20220408162629.372dfd0d@sovereign>
 <CAO8pfFmnzJmYUd7i0tvZmCB6xjeNbWKRrL8-ecZDui8Q15EwfA@mail.gmail.com>
 <20220707032649.481da02d@sovereign>
In-Reply-To: <20220707032649.481da02d@sovereign>
From: Antonio Di Bacco <a.dibacco.ks@gmail.com>
Date: Thu, 7 Jul 2022 10:48:25 +0200
Message-ID: <CAO8pfFm_3Huz7Wi+=TiJgS-JyMySoT6BZNp-cAeahhKfHwpkVw@mail.gmail.com>
Subject: Re: Shared memory between two primary DPDK processes
To: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Cc: users@dpdk.org
Content-Type: text/plain; charset="UTF-8"

You are right: process 1 always allocates 1 GB even if I request
only 10 MB, and memseg->hugepage_sz is 1 GB.
When I use rte_memseg_get_fd() and rte_memseg_get_fd_offset() I get an
FD and an offset of 0, which is correct, because that is the offset of
the memseg, not of the memzone.
To access the memory allocated by process 1, I also have to take into
account the offset of the memzone within the memseg, i.e. add
(memzone->iova - memseg->iova) to the address returned by mmap().
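
On the process 1 side, what I have in mind is roughly the following
(simplified and untested sketch; "shared_mz", the 10 MB size and the
variable names are just examples, and the IPC used to hand the FD and
offsets over to process 2 is omitted):

    #include <rte_memzone.h>
    #include <rte_memory.h>

    const struct rte_memzone *mz =
        rte_memzone_reserve("shared_mz", 10 * 1024 * 1024, SOCKET_ID_ANY, 0);

    /* memseg (one 1 GB hugepage here) backing the memzone */
    struct rte_memseg *ms = rte_mem_virt2memseg(mz->addr, NULL);

    int fd = rte_memseg_get_fd(ms);          /* hugepage file of the memseg */
    size_t seg_off = 0;
    rte_memseg_get_fd_offset(ms, &seg_off);  /* 0 here: memseg offset in that file */

    /* offset of the memzone inside the memseg */
    size_t mz_off = mz->iova - ms->iova;

    /* pass fd (e.g. via SCM_RIGHTS), seg_off + mz_off, mz->len and
     * ms->hugepage_sz to process 2 */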

On Thu, Jul 7, 2022 at 2:26 AM Dmitry Kozlyuk <dmitry.kozliuk@gmail.com> wrote:
>
> 2022-07-07 00:14 (UTC+0200), Antonio Di Bacco:
> > Dear Dmitry,
> >
> > I tried to follow this approach: if I allocate 1 GB in primary
> > process number 1, then I can mmap that memory in primary process
> > number 2.
> > I also tried converting the virt addr of the allocation made in
> > primary 1 to phys, then converting the virt addr returned by mmap
> > in primary 2, and I got the same phys addr.
> >
> > Unfortunately, if I try to allocate only 10 MB for example in primary
> > 1, then mmap in primary 2 succeeds but it seems that this virt addr
> > doesn't correspond to the same phys memory as in primary 1.
> >
> > In the primary 2, the mmap is used like this:
> >
> >     int flags = MAP_SHARED | MAP_HUGETLB ;
> >
> >     uint64_t* addr = (uint64_t*) mmap(NULL, sz, PROT_READ|PROT_WRITE,
> > flags, my_mem_fd, off);
>
> Hi Antonio,
>
> From `man 2 mmap`:
>
>    Huge page (Huge TLB) mappings
>        For  mappings that employ huge pages, the requirements for the
>        arguments of mmap() and munmap() differ somewhat from the requirements
>        for mappings that use the native system page size.
>
>        For mmap(), offset must be a multiple of the underlying huge page
>        size.  The system automatically aligns length to be a  multiple  of
>        the underlying huge page size.
>
>        For munmap(), addr, and length must both be a multiple of the
>        underlying huge page size.
>
> Probably process 1 maps a 1 GB hugepage:
> DPDK does so if 1 GB hugepages are used even if you only allocate 10 MB.
> You can examine memseg to see what size it is (not memzone!).
> Hugepage size is a property of each mounted HugeTLB filesystem.
> It determines which kernel pool to use.
> Different pools are backed by different sets of physical pages.
> This means that the kernel doesn't allow mapping a given set of page
> frames as 1 GB and 2 MB hugepages at the same time via hugetlbfs.
> I'm surprised mmap() works at all in your case
> and suspect that it is mapping 2 MB hugepages in process 2.
>
> The solution may be, in process 2:
>
> base_offset = RTE_ALIGN_FLOOR(offset, hugepage_size)
> map_addr = mmap(fd, size=hugepage_size, offset=base_offset)
> addr = RTE_PTR_ADD(map_addr, offset - base_offset)
>
> Note that if [offset; offset+size) crosses a hugepage boundary,
> you have to map more than one page.
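
Putting your sketch together with the offsets above, the process 2 side
would look roughly like this (untested; my_mem_fd is the descriptor
received from process 1, off = seg_off + mz_off is the memzone offset
within that file, sz is the memzone length, and hugepage_sz is taken
from the memseg):

    #include <stddef.h>
    #include <sys/mman.h>
    #include <rte_common.h>

    static void *
    map_shared_zone(int my_mem_fd, size_t off, size_t sz, size_t hugepage_sz)
    {
        size_t base_off = RTE_ALIGN_FLOOR(off, hugepage_sz);
        /* round the length up so that [off; off + sz) is fully covered,
         * even when it crosses a hugepage boundary */
        size_t map_len = RTE_ALIGN_CEIL(off - base_off + sz, hugepage_sz);

        void *map_addr = mmap(NULL, map_len, PROT_READ | PROT_WRITE,
                              MAP_SHARED | MAP_HUGETLB, my_mem_fd, base_off);
        if (map_addr == MAP_FAILED)
            return NULL;

        /* the memzone starts off - base_off bytes into the mapping */
        return RTE_PTR_ADD(map_addr, off - base_off);
    }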