From: changfengnan
Date: Mon, 24 Oct 2022 06:32:32 -0500
Subject: Question about the logic of remap_segment during memory init
To: dev@dpdk.org

While testing with SPDK, we found a problem: when we try to allocate 20480 2MB hugepages for DPDK, memory initialization sometimes fails. Here is the relevant log:

EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
...
EAL: Trying to obtain current memory policy.
EAL: Requesting 11264 pages of size 2MB from socket 0
EAL: Requesting 9216 pages of size 2MB from socket 1
EAL: Attempting to map 22528M on socket 0
EAL: Could not find space for memseg. Please increase 32768 and/or 65536 in configuration.
EAL: Couldn't remap hugepage files into memseg lists
EAL: FATAL: Cannot init memory

And the hugepage counts per NUMA node:

cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
11264
9216

These are the relevant memseg configuration values:

#define RTE_MAX_MEMSEG_LISTS 128
#define RTE_MAX_MEMSEG_PER_LIST 8192
#define RTE_MAX_MEM_MB_PER_LIST 32768
#define RTE_MAX_MEMSEG_PER_TYPE 32768
#define RTE_MAX_MEM_MB_PER_TYPE 65536

We found that if a run of contiguous hugepages is larger than RTE_MAX_MEMSEG_PER_LIST, remap_segment fails even though other memseg lists still have room. As the log shows, we have 4 memseg lists, each able to hold 8192 segments, but with 11264 contiguous hugepages remap_segment cannot find a single memseg list large enough to hold them all. So my question is: why does remap_segment have to map a chunk of contiguous hugepages into a single memseg list? Can we split it across two memseg lists? We tried exactly that and it seems to work in our environment (see the sketches below). Is there any potential risk?
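
To make the failure mode concrete, here is a minimal C model (my sketch, not the actual DPDK source; the list count and per-list capacity are taken from the log and config above) of the single-list placement that remap_segment effectively performs:

/* single_list_model.c: simplified model of the failing placement.
 * Not the actual DPDK code; names are illustrative. */
#include <stdio.h>

#define N_LISTS       4      /* "Creating 4 segment lists" in the log */
#define SEGS_PER_LIST 8192   /* RTE_MAX_MEMSEG_PER_LIST */

/* contiguous free segments remaining in each memseg list */
static int free_segs[N_LISTS] = {
	SEGS_PER_LIST, SEGS_PER_LIST, SEGS_PER_LIST, SEGS_PER_LIST
};

/* Current behavior: the whole contiguous run must fit in ONE list. */
static int find_single_list(int n_segs)
{
	for (int i = 0; i < N_LISTS; i++)
		if (free_segs[i] >= n_segs)
			return i;
	return -1;	/* -> "Could not find space for memseg" */
}

int main(void)
{
	int n_segs = 11264;	/* contiguous 2MB pages on socket 0 */

	if (find_single_list(n_segs) < 0)
		printf("no single list holds %d segs (max %d per list, %d total)\n",
		       n_segs, SEGS_PER_LIST, N_LISTS * SEGS_PER_LIST);
	return 0;
}

So the total capacity (4 x 8192 segments) is more than enough for 11264 pages, but no single list can take the whole run, which is exactly the failure in the log.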
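
And, continuing the same sketch, the split we tried amounts to something like the following (again illustrative only; a real change would also have to keep the per-page metadata consistent when a contiguous run crosses a list boundary):

/* Proposed behavior: chop the run into chunks, each small enough
 * for one list, instead of requiring a single list for all of it. */
static int place_split(int n_segs)
{
	for (int i = 0; i < N_LISTS && n_segs > 0; i++) {
		int take = n_segs < free_segs[i] ? n_segs : free_segs[i];

		if (take == 0)
			continue;
		/* ... map 'take' pages into list i here ... */
		free_segs[i] -= take;
		n_segs -= take;
	}
	return n_segs == 0 ? 0 : -1;	/* 0 on success */
}

With n_segs = 11264 this model places 8192 segments in list 0 and the remaining 3072 in list 1, which is the kind of split that appeared to work in our environment.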