From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Burakov, Anatoly"
To: Tiwei Bie
Cc: maxime.coquelin@redhat.com, zhihong.wang@intel.com, dev@dpdk.org, seanbh@gmail.com
Subject: Re: [dpdk-dev] [PATCH 2/3] net/virtio-user: avoid parsing process mappings
Date: Mon, 17 Sep 2018 14:06:13 +0100
Message-ID: <5459512e-8114-92c9-ad10-5b60bcb956f9@intel.com>
In-Reply-To: <20180917115751.GA7807@debian>
References: <20180905042852.6212-1-tiwei.bie@intel.com> <20180905042852.6212-3-tiwei.bie@intel.com> <5bf3a5ef-790c-40af-d4ca-1fc748dd6c04@intel.com> <20180907113559.GA22407@debian> <4b106cd0-6ccf-f80c-90fa-1421cf47f9a8@intel.com> <20180910035929.GA23854@debian> <20180917115751.GA7807@debian>
List-Id: DPDK patches and discussions

On 17-Sep-18 12:57 PM, Tiwei Bie wrote:
> On Mon, Sep 17, 2018 at 11:17:42AM +0100, Burakov, Anatoly wrote:
>> On 10-Sep-18 4:59 AM, Tiwei Bie wrote:
>>> On Fri, Sep 07, 2018 at 01:21:35PM +0100, Burakov, Anatoly wrote:
>>>> On 07-Sep-18 12:35 PM, Tiwei Bie wrote:
>>>>> On Fri, Sep 07, 2018 at 10:39:16AM +0100, Burakov, Anatoly wrote:
>>>>>> On 05-Sep-18 5:28 AM, Tiwei Bie wrote:
>>>>>>> Recently some memory APIs were introduced to allow users to
>>>>>>> get the file descriptor and offset for each memory segment.
>>>>>>> We can leverage those APIs to get rid of the /proc magic on
>>>>>>> memory table preparation in vhost-user backend.
>>>>>>>
>>>>>>> Signed-off-by: Tiwei Bie
>>>>>>> ---
>>>>>>
>>>>>>
>>>>>>
>>>>>>> -	for (i = 0; i < num; ++i) {
>>>>>>> -		mr = &msg->payload.memory.regions[i];
>>>>>>> -		mr->guest_phys_addr = huges[i].addr; /* use vaddr! */
>>>>>>> -		mr->userspace_addr = huges[i].addr;
>>>>>>> -		mr->memory_size = huges[i].size;
>>>>>>> -		mr->mmap_offset = 0;
>>>>>>> -		fds[i] = open(huges[i].path, O_RDWR);
>>>>>>> +	if (rte_memseg_get_fd_offset_thread_unsafe(ms, &offset) < 0) {
>>>>>>> +		PMD_DRV_LOG(ERR, "Failed to get offset, ms=%p rte_errno=%d",
>>>>>>> +			ms, rte_errno);
>>>>>>> +		return -1;
>>>>>>> +	}
>>>>>>> +
>>>>>>> +	start_addr = (uint64_t)(uintptr_t)ms->addr;
>>>>>>> +	end_addr = start_addr + ms->len;
>>>>>>> +
>>>>>>> +	for (i = 0; i < wa->region_nr; i++) {
>>>>>>
>>>>>> There has to be a better way than to run this loop on every segment. Maybe
>>>>>> store the last-used region, and only do a region lookup if there's a mismatch?
>>>>>> Generally, in single-file segments mode, you'll get lots of segments from
>>>>>> one memseg list one after the other, so you will need to do a lookup once
>>>>>> the memseg list changes, instead of on each segment.
>>>>>
>>>>> We may have holes in one memseg list due to dynamic free.
>>>>> Do you mean we just need to do rte_memseg_contig_walk()
>>>>> and we can assume that fds of the contiguous memsegs will
>>>>> be the same?
>>>>
>>>> No, I didn't mean that.
>>>>
>>>> Whether or not you are in single-file segments mode, you still need to scan
>>>> each segment.
>>>> However, you lose your state when you exit this function, and
>>>> thus have to look up the appropriate memory region (that matches your fd)
>>>> again, over and over. It would be good if you could store the last-used memory
>>>> region somewhere, so that next time you come back into this function, if the
>>>> memseg has the same fd, you will not have to look it up.
>>>>
>>>> Something like this:
>>>>
>>>> struct walk_arg {
>>>>         *last_used;
>>>>
>>>> }
>>>>
>>>> int walk_func() {
>>>>
>>>>         cur_region = wa->last_used; // check if it matches
>>>>         if (cur_region->fd != fd) {
>>>>                 // fd is different - we've changed the segment
>>>>
>>>>                 wa->last_used = cur_region;
>>>>         }
>>>> }
>>>
>>> Thanks for the code. :)
>>>
>>>>
>>>> So, cache the last-used region to not have to look it up again, because chances
>>>> are, you won't have to. That way, you will still do region lookups, but only
>>>> if you have to - not every time.
>>>
>>> I can do it, but I'm not sure this optimization is really
>>> necessary. Because this loop should be quite fast, as the
>>> max number of regions permitted by vhost-user is quite
>>> small. And actually we need to do that loop at least once
>>> for each packet in vhost-user's dequeue and enqueue path,
>>> i.e. the data path.
>>
>> The number of regions is small, but the number of segments may be in the
>> thousands. Think worst case - 8K segments in the 16th region
>
> The number of regions permitted by vhost-user is 8.
> And most likely, we just have two regions as the
> single-file-segment mode is mandatory when using
> 2MB pages.
>
>> - with my code,
>> you will execute only 16 iterations on the first segment and use "last used" for
>> the rest of the segments,
>
> We still need to do 8K iterations on the segments.

Yes, but not 8K times (up to) 8 regions. Anyway, it's your driver - up to you. :)

>
>> while with your code, it'll be 8K times 16 :)
>
> IMO, what we really need is a way to reduce "8K",
> i.e.
> reduce the number of segments (which could
> be thousands currently) we need to parse.
>
> And the loop should be faster than the function
> calls to rte_memseg_get_fd_thread_unsafe() and
> rte_memseg_get_fd_offset_thread_unsafe() (which
> are also called for each segment).

Unfortunately, we cannot do that, because we cannot make any assumptions
about the underlying structure of fds. Technically, I could introduce an
fd-centric walk function (i.e. so that, instead of a per-memseg or
per-contiguous-area walk, you'd get a per-fd walk), but I really don't
want to pollute the API with another walk function.

>
>>
>> You'll have to clarify the "for each packet" part, not sure I follow.
>
> Take the vhost-PMD as an example: when doing Rx burst
> and Tx burst, for each mbuf (i.e. packet), we need to
> do that loop at least once.

Ah, OK, I get you - if it's fast enough to use on the data path, it's
fast enough for the memory mapping code. :) Fair enough.

>
>>
>> --
>> Thanks,
>> Anatoly
>

--
Thanks,
Anatoly
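[Editor's note] The "cache the last-used region" idea sketched in the thread above can be written out as a minimal, self-contained C illustration. The types and names below (mock_region, walk_arg fields, the scans counter) are invented stand-ins for this sketch, not the actual virtio-user driver or DPDK structures, and the rte_memseg walk plumbing is omitted:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_REGIONS 8 /* vhost-user permits at most 8 regions, per the thread */

/* Illustrative stand-in for a vhost-user memory region table entry. */
struct mock_region {
	int fd;        /* fd backing this region */
	uint64_t size; /* region length */
};

/* Illustrative stand-in for the walk argument carried across callbacks. */
struct walk_arg {
	struct mock_region regions[MAX_REGIONS];
	int region_nr;
	struct mock_region *last_used; /* region matched on the previous segment */
	int scans;                     /* full-table scans performed (for illustration) */
};

/*
 * Find the region backing a segment with the given fd. Consecutive
 * segments usually share an fd, so try the cached region first and
 * fall back to a linear scan only on a mismatch.
 */
static struct mock_region *
find_region(struct walk_arg *wa, int fd)
{
	int i;

	if (wa->last_used != NULL && wa->last_used->fd == fd)
		return wa->last_used; /* cache hit: no table scan needed */

	wa->scans++;
	for (i = 0; i < wa->region_nr; i++) {
		if (wa->regions[i].fd == fd) {
			wa->last_used = &wa->regions[i];
			return wa->last_used;
		}
	}
	return NULL; /* no region covers this fd */
}
```

With 8K segments spread over two fds, this does two table scans instead of 8K, which is the "16 iterations on the first segment" behavior described above; the worst case is unchanged, only the common case improves.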