From: Antonio Di Bacco <a.dibacco.ks@gmail.com>
Date: Fri, 8 Apr 2022 23:14:02 +0200
Subject: Re: Shared memory between two primary DPDK processes
To: Ferruh Yigit <ferruh.yigit@xilinx.com>
Cc: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>, users@dpdk.org
List-Id: DPDK usage discussions <users.dpdk.org>

On Fri, 8 Apr 2022 at 16:36, Ferruh Yigit <ferruh.yigit@xilinx.com> wrote:

> On 4/8/2022 2:26 PM, Dmitry Kozlyuk wrote:
> > > > > > 2022-04-08 14:31 (UTC+0200), Antonio Di Bacco: > >> I know that it is possible to share memory between a primary and > secondary > >> process using rte_memzone_reserve_aligned to allocate memory in primary > >> that is "seen" also by the secondary. If we have two primary processes > >> (started with different file-prefix) the same approach is not feasible. > I > >> wonder how to share a chunk of memory hosted on a hugepage between two > >> primaries. > >> > >> Regards. > > > > Hi Antonio, > > > > Correction: all hugepages allocated by DPDK are shared > > between primary and secondary processes, not only memzones. > > > > I assume we're talking about processes within one host, > > because your previous similar question was about sharing memory between > hosts > > (as we have discussed offline), which is out of scope for DPDK. > > > > As for the question directly, you need to map the same part of the same > file > > in the second primary as the hugepage is mapped from in the first > primary. > > I don't recommend to work with file paths, because their management > > is not straightforward (--single-file-segments, for one) and is > undocumented. > > > > There is a way to share DPDK memory segment file descriptors. > > Although public, this DPDK API is dangerous in the sense that you must > > clearly understand what you're doing and how DPDK works. > > Hence the question: what is the task you need this sharing for? > > Maybe there is a simpler way. > > > > 1. In the first primary: > > > > mz = rte_memzone_reserve() > > ms = rte_mem_virt2memseg(mz->addr) > > fd = rte_memseg_get_fd(ms) > > offset = rte_memseg_get_fd_offset(ms) > > > > 2. Use Unix domain sockets with SCM_RIGHTS > > to send "fd" and "offset" to the second primary. > > > > 3. 
In the second primary, after receiving "fd" and "offset":
> >
> >         flags = MAP_SHARED | MAP_HUGETLB | (30 << MAP_HUGE_SHIFT)
> >         addr = mmap(fd, offset, flags)
> >
> > Note that "mz" may consist of multiple "ms" depending on the sizes
> > of the zone and hugepages, and on the zone alignment.
> > Also "addr" may (and probably will) differ from "mz->addr".
> > It is possible to pass "mz->addr" and try to force it,
> > like DPDK does for primary/secondary.
>
> Also the 'net/memif' driver can be used:
> https://doc.dpdk.org/guides/nics/memif.html

Yes, I know about memif. Our application currently uses a chunk of shared memory: a primary process writes to it and a secondary reads from it. Now the secondary will become a primary, sort of a promotion. memif would be fine, but the paradigm would have to change a little compared to the shared-memory approach: memif is an interface over shared memory, whereas we need the opposite, shared memory over a network interface.

Thank you.