DPDK patches and discussions
From: Alejandro Lucero <alejandro.lucero@netronome.com>
To: Thomas Monjalon <thomas@monjalon.net>
Cc: lei.a.yao@intel.com, dev <dev@dpdk.org>,
	"Xu, Qian Q" <qian.q.xu@intel.com>,
	xueqin.lin@intel.com, "Burakov,
	Anatoly" <anatoly.burakov@intel.com>,
	 Ferruh Yigit <ferruh.yigit@intel.com>
Subject: Re: [dpdk-dev] [PATCH v3 0/6] use IOVAs check based on DMA mask
Date: Mon, 29 Oct 2018 10:11:09 +0000	[thread overview]
Message-ID: <CAD+H990PQAvQc+HHE+6fv9JLJbVKVwTpViLotnE72=8RHM8JEA@mail.gmail.com> (raw)
In-Reply-To: <1651382.pnTT7vZl36@xps>

I know what is going on.

In patchset version 3 I forgot to remove some old code. Anatoly spotted that,
and I was going to send another version fixing it. Before sending the new
version I saw the report about a problem with dma_mask, and I'm afraid I
never sent the version with the fix ...

Yao, can you try the following patch?

diff --git a/lib/librte_eal/common/eal_common_memory.c b/lib/librte_eal/common/eal_common_memory.c
index ef656bbad..26adf46c0 100644
--- a/lib/librte_eal/common/eal_common_memory.c
+++ b/lib/librte_eal/common/eal_common_memory.c
@@ -458,10 +458,6 @@ rte_eal_check_dma_mask(uint8_t maskbits)
                return -1;
        }

-       /* keep the more restricted maskbit */
-       if (!mcfg->dma_maskbits || maskbits < mcfg->dma_maskbits)
-               mcfg->dma_maskbits = maskbits;
-
        /* create dma mask */
        mask = ~((1ULL << maskbits) - 1);
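
For context, here is a minimal, self-contained sketch of how a DMA mask check
along these lines rejects an IOVA range, reusing the mask construction from the
hunk above. The helper name and the exact range test are illustrative
assumptions, not the actual rte_eal_check_dma_mask() body. Note that with this
construction maskbits = 0 yields the all-ones mask seen in the failure log
quoted below, which would reject any non-zero IOVA.

#include <stdint.h>
#include <stdio.h>

/* Illustrative helper (not DPDK code): returns 0 if [iova, iova + len)
 * is addressable with 'maskbits' address bits, -1 otherwise.
 * Assumes 0 < maskbits < 64. Uses the same mask construction as the
 * patch above: mask = ~((1ULL << maskbits) - 1). */
static int
check_iova_range(uint64_t iova, uint64_t len, uint8_t maskbits)
{
	uint64_t mask = ~((1ULL << maskbits) - 1);

	/* any bit above maskbits set in the first or the last address
	 * means the segment is not reachable by the device */
	if ((iova & mask) != 0 || ((iova + len - 1) & mask) != 0)
		return -1;
	return 0;
}

int
main(void)
{
	/* values taken from the failure log quoted below */
	uint64_t iova = 0x140000000ULL;
	uint64_t len  = 0x40000000ULL;

	printf("32-bit mask: %d\n", check_iova_range(iova, len, 32)); /* -1, out of range */
	printf("44-bit mask: %d\n", check_iova_range(iova, len, 44)); /*  0, fits */
	return 0;
}

Compiled with any C compiler, this prints -1 for the 32-bit mask and 0 for the
44-bit one: a segment starting at 5 GB simply cannot fit under a 32-bit mask.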

On Mon, Oct 29, 2018 at 9:48 AM Thomas Monjalon <thomas@monjalon.net> wrote:

> 29/10/2018 10:36, Yao, Lei A:
> > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > 29/10/2018 09:23, Yao, Lei A:
> > > > Hi, Lucero, Thomas
> > > >
> > > > This patch set causes a deadlock during memory initialization:
> > > > rte_memseg_walk and try_expand_heap both take the lock
> > > > &mcfg->memory_hotplug_lock, so a deadlock occurs.
> > > >
> > > > #0       rte_memseg_walk
> > > > #1  <-rte_eal_check_dma_mask
> > > > #2  <-alloc_pages_on_heap
> > > > #3  <-try_expand_heap_primary
> > > > #4  <-try_expand_heap
> > > >
> > > > Log as follows:
> > > > EAL: TSC frequency is ~2494156 KHz
> > > > EAL: Master lcore 0 is ready (tid=7ffff7fe3c00;cpuset=[0])
> > > > [New Thread 0x7ffff5e0d700 (LWP 330350)]
> > > > EAL: lcore 1 is ready (tid=7ffff5e0d700;cpuset=[1])
> > > > EAL: Trying to obtain current memory policy.
> > > > EAL: Setting policy MPOL_PREFERRED for socket 0
> > > > EAL: Restoring previous memory policy: 0
> > > >
> > > > Could you take a look at this? A lot of test cases in our validation
> > > > team fail because of it. Thanks a lot!
> > >
> > > Can we just call rte_memseg_walk_thread_unsafe()?
> > >
> > > +Cc Anatoly
> >
> > Hi, Thomas
> >
> > I changed to rte_memseg_walk_thread_unsafe(), but it still
> > doesn't work.
> >
> > EAL: Setting policy MPOL_PREFERRED for socket 0
> > EAL: Restoring previous memory policy: 0
> > EAL: memseg iova 140000000, len 40000000, out of range
> > EAL:    using dma mask ffffffffffffffff
> > EAL: alloc_pages_on_heap(): couldn't allocate memory due to DMA mask
> > EAL: Trying to obtain current memory policy.
> > EAL: Setting policy MPOL_PREFERRED for socket 1
> > EAL: Restoring previous memory policy: 0
> > EAL: memseg iova 1bc0000000, len 40000000, out of range
> > EAL:    using dma mask ffffffffffffffff
> > EAL: alloc_pages_on_heap(): couldn't allocate memory due to DMA mask
> > error allocating rte services array
> > EAL: FATAL: rte_service_init() failed
> > EAL: rte_service_init() failed
> > PANIC in main():
>
> I think this shows there are at least 2 issues:
>         1/ deadlock
>         2/ allocation does not comply with mask check (out of range)
>
>
>
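
To make the deadlock in the quoted backtrace concrete, below is a standalone
pthreads sketch of the pattern (plain C, not DPDK code; the function and lock
names merely mirror the trace): a thread that already holds a non-recursive
lock calls a helper that tries to take the same lock again, and so waits on
itself forever. In the report, try_expand_heap holds
mcfg->memory_hotplug_lock when rte_eal_check_dma_mask() ends up in
rte_memseg_walk(), which takes that lock again.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t hotplug_lock;     /* stands in for mcfg->memory_hotplug_lock */

static void
memseg_walk(void)                        /* stands in for rte_memseg_walk() */
{
	pthread_mutex_lock(&hotplug_lock);   /* already held by this thread: blocks forever */
	/* ... walk memory segments ... */
	pthread_mutex_unlock(&hotplug_lock);
}

static void
try_expand_heap(void)                    /* stands in for try_expand_heap_primary() */
{
	pthread_mutex_lock(&hotplug_lock);   /* first acquisition succeeds */
	memseg_walk();                       /* second acquisition never returns */
	pthread_mutex_unlock(&hotplug_lock);
}

int
main(void)
{
	pthread_mutexattr_t attr;

	pthread_mutexattr_init(&attr);
	/* a NORMAL (non-recursive) mutex is specified to deadlock on relock */
	pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_NORMAL);
	pthread_mutex_init(&hotplug_lock, &attr);

	try_expand_heap();                   /* never reaches the next line */
	puts("unreachable");
	return 0;
}

Build with -pthread. The rte_memseg_walk_thread_unsafe() variant suggested
above avoids the second acquisition; the remaining "out of range" failures in
the log are the second, separate issue listed at the end of the quote.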


Thread overview: 49+ messages
2018-10-05 12:45 Alejandro Lucero
2018-10-05 12:45 ` [dpdk-dev] [PATCH v3 1/6] mem: add function for checking memsegs IOVAs addresses Alejandro Lucero
2018-10-10  8:56   ` Tu, Lijuan
2018-10-11  9:26     ` Alejandro Lucero
2018-10-28 21:03   ` Thomas Monjalon
2018-10-05 12:45 ` [dpdk-dev] [PATCH v3 2/6] mem: use address hint for mapping hugepages Alejandro Lucero
2018-10-29 16:08   ` Dariusz Stojaczyk
2018-10-29 16:40     ` Alejandro Lucero
2018-10-05 12:45 ` [dpdk-dev] [PATCH v3 3/6] bus/pci: check iommu addressing limitation just once Alejandro Lucero
2018-10-05 12:45 ` [dpdk-dev] [PATCH v3 4/6] bus/pci: use IOVAs dmak mask check when setting IOVA mode Alejandro Lucero
2018-10-05 12:45 ` [dpdk-dev] [PATCH v3 5/6] net/nfp: check hugepages IOVAs based on DMA mask Alejandro Lucero
2018-10-05 12:45 ` [dpdk-dev] [PATCH v3 6/6] net/nfp: support IOVA VA mode Alejandro Lucero
2018-10-28 21:04 ` [dpdk-dev] [PATCH v3 0/6] use IOVAs check based on DMA mask Thomas Monjalon
2018-10-29  8:23   ` Yao, Lei A
2018-10-29  8:42     ` Thomas Monjalon
2018-10-29  9:07       ` Thomas Monjalon
2018-10-29  9:25         ` Alejandro Lucero
2018-10-29  9:44           ` Yao, Lei A
2018-10-29  9:36       ` Yao, Lei A
2018-10-29  9:48         ` Thomas Monjalon
2018-10-29 10:11           ` Alejandro Lucero [this message]
2018-10-29 10:15             ` Alejandro Lucero
2018-10-29 11:39               ` Alejandro Lucero
2018-10-29 11:46                 ` Thomas Monjalon
2018-10-29 12:55                   ` Alejandro Lucero
2018-10-29 13:18                     ` Yao, Lei A
2018-10-29 13:40                       ` Alejandro Lucero
2018-10-29 14:18                         ` Thomas Monjalon
2018-10-29 14:35                           ` Alejandro Lucero
2018-10-29 18:54                           ` Yongseok Koh
2018-10-29 19:37                             ` Alejandro Lucero
2018-10-30 10:10                               ` Burakov, Anatoly
2018-10-30 10:11                           ` Burakov, Anatoly
2018-10-30 10:19                             ` Alejandro Lucero
2018-10-30  3:20                         ` Lin, Xueqin
2018-10-30  9:41                           ` Alejandro Lucero
2018-10-30 10:33                             ` Lin, Xueqin
2018-10-30 10:38                               ` Alejandro Lucero
2018-10-30 12:21                                 ` Lin, Xueqin
2018-10-30 12:37                                   ` Alejandro Lucero
2018-10-30 14:04                                     ` Alejandro Lucero
2018-10-30 14:14                                       ` Burakov, Anatoly
2018-10-30 14:45                                         ` Alejandro Lucero
2018-10-30 14:45                                       ` Lin, Xueqin
2018-10-30 14:57                                         ` Alejandro Lucero
2018-10-30 15:09                                           ` Lin, Xueqin
2018-10-30 10:18                 ` Burakov, Anatoly
2018-10-30 10:23                   ` Alejandro Lucero
  -- strict thread matches above, loose matches on Subject: below --
2018-07-04 12:53 Alejandro Lucero
