From mboxrd@z Thu Jan  1 00:00:00 1970
MIME-Version: 1.0
References: <1538743527-8285-1-git-send-email-alejandro.lucero@netronome.com>
  <1538743527-8285-2-git-send-email-alejandro.lucero@netronome.com>
  <8CE3E05A3F976642AAB0F4675D0AD20E0B974AA8@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <8CE3E05A3F976642AAB0F4675D0AD20E0B974AA8@SHSMSX101.ccr.corp.intel.com>
From: Alejandro Lucero
Date: Thu, 11 Oct 2018 10:26:00 +0100
To: lijuan.tu@intel.com
Cc: dev
Content-Type: text/plain; charset="UTF-8"
Subject: Re: [dpdk-dev] [PATCH v3 1/6] mem: add function for checking
 memsegs IOVAs addresses

On Wed, Oct 10, 2018 at 10:00 AM Tu, Lijuan wrote:

> Hi
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Alejandro Lucero
> > Sent: Friday, October 5, 2018 8:45 PM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] [PATCH v3 1/6] mem: add function for checking
> > memsegs IOVAs addresses
> >
> > A device can suffer addressing limitations. This function checks memsegs
> > have iovas within the supported range based on dma mask.
> >
> > PMDs should use this function during initialization if device suffers
> > addressing limitations, returning an error if this function returns
> > memsegs out of range.
> >
> > Another usage is for emulated IOMMU hardware with addressing limitations.
> >
> > It is necessary to save the most restricted dma mask for checking out
> > memory allocated dynamically after initialization.
> >
> > Signed-off-by: Alejandro Lucero
> > Reviewed-by: Anatoly Burakov
> > ---
> >  doc/guides/rel_notes/release_18_11.rst            | 10 ++++
> >  lib/librte_eal/common/eal_common_memory.c         | 60 +++++++++++++++++++++++
> >  lib/librte_eal/common/include/rte_eal_memconfig.h |  3 ++
> >  lib/librte_eal/common/include/rte_memory.h        |  3 ++
> >  lib/librte_eal/common/malloc_heap.c               | 12 +++++
> >  lib/librte_eal/linuxapp/eal/eal.c                 |  2 +
> >  lib/librte_eal/rte_eal_version.map                |  1 +
> >  7 files changed, 91 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/release_18_11.rst
> > b/doc/guides/rel_notes/release_18_11.rst
> > index 2133a5b..c806dc6 100644
> > --- a/doc/guides/rel_notes/release_18_11.rst
> > +++ b/doc/guides/rel_notes/release_18_11.rst
> > @@ -104,6 +104,14 @@ New Features
> >     the specified port. The port must be stopped before the command call
> >     in order to reconfigure queues.
> >
> > +* **Added check for ensuring allocated memory addressable by devices.**
> > +
> > +  Some devices can have addressing limitations so a new function,
> > +  ``rte_eal_check_dma_mask``, has been added for checking allocated
> > +  memory is not out of the device range. Because now memory can be
> > +  dynamically allocated after initialization, a dma mask is kept and
> > +  any new allocated memory will be checked out against that dma mask
> > +  and rejected if out of range. If more than one device has addressing
> > +  limitations, the dma mask is the more restricted one.
> >
> >  API Changes
> >  -----------
> > @@ -156,6 +164,8 @@ ABI Changes
> >    ``rte_config`` structure on account of improving DPDK usability when
> >    using either ``--legacy-mem`` or ``--single-file-segments`` flags.
> >
> > +* eal: added ``dma_maskbits`` to ``rte_mem_config`` for keeping more restricted
> > +  dma mask based on devices addressing limitations.
> >
> >  Removed Items
> >  -------------
> >
> > diff --git a/lib/librte_eal/common/eal_common_memory.c
> > b/lib/librte_eal/common/eal_common_memory.c
> > index 0b69804..c482f0d 100644
> > --- a/lib/librte_eal/common/eal_common_memory.c
> > +++ b/lib/librte_eal/common/eal_common_memory.c
> > @@ -385,6 +385,66 @@ struct virtiova {
> >  	rte_memseg_walk(dump_memseg, f);
> >  }
> >
> > +static int
> > +check_iova(const struct rte_memseg_list *msl __rte_unused,
> > +		const struct rte_memseg *ms, void *arg)
> > +{
> > +	uint64_t *mask = arg;
> > +	rte_iova_t iova;
> > +
> > +	/* higher address within segment */
> > +	iova = (ms->iova + ms->len) - 1;
> > +	if (!(iova & *mask))
> > +		return 0;
> > +
> > +	RTE_LOG(DEBUG, EAL, "memseg iova %"PRIx64", len %zx, out of range\n",
> > +			ms->iova, ms->len);
> > +
> > +	RTE_LOG(DEBUG, EAL, "\tusing dma mask %"PRIx64"\n", *mask);
> > +	return 1;
> > +}
> > +
> > +#if defined(RTE_ARCH_64)
> > +#define MAX_DMA_MASK_BITS 63
> > +#else
> > +#define MAX_DMA_MASK_BITS 31
> > +#endif
> > +
> > +/* check memseg iovas are within the required range based on dma mask */
> > +int __rte_experimental
> > +rte_eal_check_dma_mask(uint8_t maskbits)
> > +{
> > +	struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
> > +	uint64_t mask;
> > +
> > +	/* sanity check */
> > +	if (maskbits > MAX_DMA_MASK_BITS) {
> > +		RTE_LOG(ERR, EAL, "wrong dma mask size %u (Max: %u)\n",
> > +				maskbits, MAX_DMA_MASK_BITS);
> > +		return -1;
> > +	}
> > +
> > +	/* create dma mask */
> > +	mask = ~((1ULL << maskbits) - 1);
> > +
> > +	if (rte_memseg_walk(check_iova, &mask))
>
> [Lijuan] In my environment, testpmd halts at rte_memseg_walk() when
> maskbits is 0.

Can you explain this further? Who is calling rte_eal_check_dma_mask() with
maskbits 0? Is this an x86_64 system? The only explanation I can find is
the IOMMU hardware reporting mgaw=0, which I would say is completely
wrong.

> > +		/*
> > +		 * Dma mask precludes hugepage usage.
> > +		 * This device can not be used and we do not need to keep
> > +		 * the dma mask.
> > +		 */
> > +		return 1;
> > +
> > +	/*
> > +	 * we need to keep the more restricted maskbit for checking
> > +	 * potential dynamic memory allocation in the future.
> > +	 */
> > +	mcfg->dma_maskbits = mcfg->dma_maskbits == 0 ? maskbits :
> > +			     RTE_MIN(mcfg->dma_maskbits, maskbits);
> > +
> > +	return 0;
> > +}
> > +
> >  /* return the number of memory channels */
> >  unsigned rte_memory_get_nchannel(void)
> >  {
> > diff --git a/lib/librte_eal/common/include/rte_eal_memconfig.h
> > b/lib/librte_eal/common/include/rte_eal_memconfig.h
> > index 62a21c2..b5dff70 100644
> > --- a/lib/librte_eal/common/include/rte_eal_memconfig.h
> > +++ b/lib/librte_eal/common/include/rte_eal_memconfig.h
> > @@ -81,6 +81,9 @@ struct rte_mem_config {
> >  	/* legacy mem and single file segments options are shared */
> >  	uint32_t legacy_mem;
> >  	uint32_t single_file_segments;
> > +
> > +	/* keeps the more restricted dma mask */
> > +	uint8_t dma_maskbits;
> >  } __attribute__((__packed__));
> >
> > diff --git a/lib/librte_eal/common/include/rte_memory.h
> > b/lib/librte_eal/common/include/rte_memory.h
> > index 14bd277..c349d6c 100644
> > --- a/lib/librte_eal/common/include/rte_memory.h
> > +++ b/lib/librte_eal/common/include/rte_memory.h
> > @@ -454,6 +454,9 @@ typedef int (*rte_memseg_list_walk_t)(const struct rte_memseg_list *msl,
> >   */
> >  unsigned rte_memory_get_nrank(void);
> >
> > +/* check memsegs iovas are within a range based on dma mask */
> > +int rte_eal_check_dma_mask(uint8_t maskbits);
> > +
> >  /**
> >   * Drivers based on uio will not load unless physical
> >   * addresses are obtainable. It is only possible to get
> > diff --git a/lib/librte_eal/common/malloc_heap.c
> > b/lib/librte_eal/common/malloc_heap.c
> > index ac7bbb3..3b5b2b6 100644
> > --- a/lib/librte_eal/common/malloc_heap.c
> > +++ b/lib/librte_eal/common/malloc_heap.c
> > @@ -259,11 +259,13 @@ struct malloc_elem *
> >  		int socket, unsigned int flags, size_t align, size_t bound,
> >  		bool contig, struct rte_memseg **ms, int n_segs)
> >  {
> > +	struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
> >  	struct rte_memseg_list *msl;
> >  	struct malloc_elem *elem = NULL;
> >  	size_t alloc_sz;
> >  	int allocd_pages;
> >  	void *ret, *map_addr;
> > +	uint64_t mask;
> >
> >  	alloc_sz = (size_t)pg_sz * n_segs;
> >
> > @@ -291,6 +293,16 @@ struct malloc_elem *
> >  		goto fail;
> >  	}
> >
> > +	if (mcfg->dma_maskbits) {
> > +		mask = ~((1ULL << mcfg->dma_maskbits) - 1);
> > +		if (rte_eal_check_dma_mask(mask)) {
> > +			RTE_LOG(ERR, EAL,
> > +				"%s(): couldn't allocate memory due to DMA mask\n",
> > +				__func__);
> > +			goto fail;
> > +		}
> > +	}
> > +
> >  	/* add newly minted memsegs to malloc heap */
> >  	elem = malloc_heap_add_memory(heap, msl, map_addr, alloc_sz);
> >
> > diff --git a/lib/librte_eal/linuxapp/eal/eal.c
> > b/lib/librte_eal/linuxapp/eal/eal.c
> > index 4a55d3b..dfe1b8c 100644
> > --- a/lib/librte_eal/linuxapp/eal/eal.c
> > +++ b/lib/librte_eal/linuxapp/eal/eal.c
> > @@ -263,6 +263,8 @@ enum rte_iova_mode
> >  	 * processes could later map the config into this exact location */
> >  	rte_config.mem_config->mem_cfg_addr = (uintptr_t) rte_mem_cfg_addr;
> >
> > +	rte_config.mem_config->dma_maskbits = 0;
> > +
> >  	}
> >
> >  	/* attach to an existing shared memory config */
> > diff --git a/lib/librte_eal/rte_eal_version.map
> > b/lib/librte_eal/rte_eal_version.map
> > index 73282bb..2baefce 100644
> > --- a/lib/librte_eal/rte_eal_version.map
> > +++ b/lib/librte_eal/rte_eal_version.map
> > @@ -291,6 +291,7 @@ EXPERIMENTAL {
> >  	rte_devargs_parsef;
> >  	rte_devargs_remove;
> >  	rte_devargs_type_count;
> > +	rte_eal_check_dma_mask;
> >  	rte_eal_cleanup;
> >  	rte_eal_hotplug_add;
> >  	rte_eal_hotplug_remove;
> > --
> > 1.9.1
> >