From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from mga09.intel.com (mga09.intel.com [134.134.136.24])
	by dpdk.org (Postfix) with ESMTP id 8A8253798;
	Fri, 16 Jun 2017 10:59:39 +0200 (CEST)
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;
	16 Jun 2017 01:59:38 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.39,346,1493708400"; d="scan'208";a="1183262462"
Received: from bricha3-mobl3.ger.corp.intel.com ([10.237.221.28])
	by fmsmga002.fm.intel.com with SMTP; 16 Jun 2017 01:59:36 -0700
Received: by (sSMTP sendmail emulation); Fri, 16 Jun 2017 09:59:35 +0100
Date: Fri, 16 Jun 2017 09:59:35 +0100
From: Bruce Richardson
To: Sam
Cc: dev@dpdk.org, users@dpdk.org
Message-ID: <20170616085935.GC82628@bricha3-MOBL3.ger.corp.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Organization: Intel Research and Development Ireland Ltd.
User-Agent: Mutt/1.8.1 (2017-04-11)
Subject: Re: [dpdk-dev] [DPDK-memory] how qemu waste such long time under
 dpdk huge page envriment?
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
X-List-Received-Date: Fri, 16 Jun 2017 08:59:40 -0000

On Fri, Jun 16, 2017 at 04:26:40PM +0800, Sam wrote:
> BTW, while running ovs-dpdk, this log line also takes a long time. Does
> that mean DPDK's request for a large amount of memory takes a long time?
>
> EAL: Setting up physically contiguous memory...
>
When running with 1G pages, I found that the mmap system call takes a
considerable amount of time to execute. I think this is due to the kernel
zeroing out the 1G pages. IIRC, on one system I measured it as taking about
0.4 seconds per 1G page.
/Bruce

> 2017-06-16 16:11 GMT+08:00 Sam:
>
> > Hi all,
> >
> > I'm running `QEMU_CMD ...` to create a VM in a DPDK huge page
> > environment (with the huge page size set to 1G). And I have enabled all
> > trace events in qemu.
> >
> > For the qemu and ovs-dpdk (ovs-2.4.9 with dpdk-2.2.0) environment, the
> > detailed log is:
> >
> > > 30012@1497443246.678304:object_dynamic_cast_assert
> > > qemu:memory-region->qemu:memory-region
> > > (/home/huanghuai/cloud/contrib/qemu-2.6.0/memory.c:1076:memory_region_initfn)
> > > 30012@1497443256.274866:object_dynamic_cast_assert
> > > qio-channel-socket->qio-channel-socket
> > > (io/channel-socket.c:389:qio_channel_socket_init)
> >
> > I don't know why qemu is running the 'memory_region_initfn' function
> > during these 10 seconds; does anyone know?
> >
> >> static void memory_region_initfn(Object *obj)
> >> {
> >>     MemoryRegion *mr = MEMORY_REGION(obj);
> >>     ObjectProperty *op;
> >>
> >>     mr->ops = &unassigned_mem_ops;
> >>     mr->enabled = true;
> >>     mr->romd_mode = true;
> >>     mr->global_locking = true;
> >>     mr->destructor = memory_region_destructor_none;
> >>     QTAILQ_INIT(&mr->subregions);
> >>     QTAILQ_INIT(&mr->coalesced);
> >>
> >>     op = object_property_add(OBJECT(mr), "container",
> >>                              "link<" TYPE_MEMORY_REGION ">",
> >>                              memory_region_get_container,
> >>                              NULL, /* memory_region_set_container */
> >>                              NULL, NULL, &error_abort);
> >>     op->resolve = memory_region_resolve_container;
> >>
> >>     object_property_add(OBJECT(mr), "addr", "uint64",
> >>                         memory_region_get_addr,
> >>                         NULL, /* memory_region_set_addr */
> >>                         NULL, NULL, &error_abort);
> >>     object_property_add(OBJECT(mr), "priority", "uint32",
> >>                         memory_region_get_priority,
> >>                         NULL, /* memory_region_set_priority */
> >>                         NULL, NULL, &error_abort);
> >>     object_property_add_bool(OBJECT(mr), "may-overlap",
> >>                              memory_region_get_may_overlap,
> >>                              NULL, /* memory_region_set_may_overlap */
> >>                              &error_abort);
> >>     object_property_add(OBJECT(mr), "size", "uint64",
> >>                         memory_region_get_size,
> >>                         NULL, /* memory_region_set_size, */
> >>                         NULL, NULL, &error_abort);
> >> }
> >
> > Thank you~