From: "Liu, Yong" <yong.liu@intel.com>
To: Maxime Coquelin <maxime.coquelin@redhat.com>,
	"Ye, Xiaolong" <xiaolong.ye@intel.com>,
	"Wang, Zhihong" <zhihong.wang@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v3 2/2] vhost: binary search address mapping table
Date: Tue, 28 Apr 2020 15:38:35 +0000	[thread overview]
Message-ID: <86228AFD5BCD8E4EBFD2B90117B5E81E6354779B@SHSMSX103.ccr.corp.intel.com> (raw)
In-Reply-To: <0328ee15-d26a-3a21-035b-077361417191@redhat.com>



> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Tuesday, April 28, 2020 11:28 PM
> To: Liu, Yong <yong.liu@intel.com>; Ye, Xiaolong <xiaolong.ye@intel.com>;
> Wang, Zhihong <zhihong.wang@intel.com>
> Cc: dev@dpdk.org
> Subject: Re: [PATCH v3 2/2] vhost: binary search address mapping table
> 
> 
> 
> On 4/28/20 11:13 AM, Marvin Liu wrote:
> > If Tx zero copy is enabled, the gpa to hpa mapping table is updated
> > one entry at a time, which hurts performance when the guest memory
> > backend uses 2M hugepages. Use binary search to find the entry in the
> > mapping table, and keep linear search for tables below a threshold of
> > 256 entries.
> >
> > Signed-off-by: Marvin Liu <yong.liu@intel.com>
> >
> > diff --git a/lib/librte_vhost/Makefile b/lib/librte_vhost/Makefile
> > index e592795f2..8769afaad 100644
> > --- a/lib/librte_vhost/Makefile
> > +++ b/lib/librte_vhost/Makefile
> > @@ -10,7 +10,7 @@ EXPORT_MAP := rte_vhost_version.map
> >
> >  CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
> >  CFLAGS += -I vhost_user
> > -CFLAGS += -fno-strict-aliasing
> > +CFLAGS += -fno-strict-aliasing -Wno-maybe-uninitialized
> >  LDLIBS += -lpthread
> >
> >  ifeq ($(RTE_TOOLCHAIN), gcc)
> > diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> > index 507dbf214..a0fee39d5 100644
> > --- a/lib/librte_vhost/vhost.h
> > +++ b/lib/librte_vhost/vhost.h
> > @@ -546,20 +546,46 @@ extern int vhost_data_log_level;
> >  #define MAX_VHOST_DEVICE	1024
> >  extern struct virtio_net *vhost_devices[MAX_VHOST_DEVICE];
> >
> > +#define VHOST_BINARY_SEARCH_THRESH 256
> > +static int guest_page_addrcmp(const void *p1, const void *p2)
> > +{
> > +	const struct guest_page *page1 = (const struct guest_page *)p1;
> > +	const struct guest_page *page2 = (const struct guest_page *)p2;
> > +
> > +	if (page1->guest_phys_addr > page2->guest_phys_addr)
> > +		return 1;
> > +	if (page1->guest_phys_addr < page2->guest_phys_addr)
> > +		return -1;
> > +
> > +	return 0;
> > +}
> > +
> >  /* Convert guest physical address to host physical address */
> >  static __rte_always_inline rte_iova_t
> >  gpa_to_hpa(struct virtio_net *dev, uint64_t gpa, uint64_t size)
> >  {
> >  	uint32_t i;
> >  	struct guest_page *page;
> > -
> > -	for (i = 0; i < dev->nr_guest_pages; i++) {
> > -		page = &dev->guest_pages[i];
> > -
> > -		if (gpa >= page->guest_phys_addr &&
> > -		    gpa + size < page->guest_phys_addr + page->size) {
> > -			return gpa - page->guest_phys_addr +
> > -			       page->host_phys_addr;
> > +	struct guest_page key;
> > +
> > +	if (dev->nr_guest_pages >= VHOST_BINARY_SEARCH_THRESH) {
> 
> I would have expected the binary search to be more efficient for a much
> smaller number of pages. Have you done some tests to define this
> threshold value?
> 
Maxime,
In my unit test, binary search starts to win once the table size is over 16 entries, but that does not hold for a real VM.
I tested with 128 to 1024 pages, and the benefit only shows up around 256 pages, so the threshold is set to that value.
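
For anyone who wants to reproduce the measurement, below is a standalone
micro-benchmark sketch (a hypothetical harness, not part of the patch; the
struct is reduced to the three fields used here). In a synthetic loop like
this the crossover shows up far below 256 entries, much as in the unit
test; the higher threshold accounts for real-VM behavior, where the linear
scan usually terminates early on hot pages.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>

struct guest_page {
	uint64_t guest_phys_addr;
	uint64_t host_phys_addr;
	uint64_t size;
};

static int
addrcmp(const void *p1, const void *p2)
{
	const struct guest_page *a = p1;
	const struct guest_page *b = p2;

	if (a->guest_phys_addr > b->guest_phys_addr)
		return 1;
	if (a->guest_phys_addr < b->guest_phys_addr)
		return -1;
	return 0;
}

int
main(void)
{
	enum { MAX_PAGES = 1024, LOOKUPS = 1000000 };
	static struct guest_page pages[MAX_PAGES];
	uint64_t page_sz = 2UL << 20;	/* 2M hugepages */
	unsigned int n, i;

	/* array is sorted by construction */
	for (n = 0; n < MAX_PAGES; n++) {
		pages[n].guest_phys_addr = n * page_sz;
		pages[n].host_phys_addr = n * page_sz;
		pages[n].size = page_sz;
	}

	for (n = 128; n <= MAX_PAGES; n *= 2) {
		struct guest_page key, *hit;
		volatile uint64_t sink = 0;
		clock_t t0, t1, t2;
		unsigned int j;

		t0 = clock();
		for (i = 0; i < LOOKUPS; i++) {
			uint64_t gpa = (uint64_t)(rand() % n) * page_sz;

			for (j = 0; j < n; j++) {
				if (gpa >= pages[j].guest_phys_addr &&
				    gpa < pages[j].guest_phys_addr +
				    pages[j].size) {
					sink += pages[j].host_phys_addr;
					break;
				}
			}
		}
		t1 = clock();
		for (i = 0; i < LOOKUPS; i++) {
			key.guest_phys_addr =
				(uint64_t)(rand() % n) * page_sz;
			hit = bsearch(&key, pages, n, sizeof(key), addrcmp);
			if (hit)
				sink += hit->host_phys_addr;
		}
		t2 = clock();

		printf("%4u pages: linear %.2fs, binary %.2fs\n", n,
		       (double)(t1 - t0) / CLOCKS_PER_SEC,
		       (double)(t2 - t1) / CLOCKS_PER_SEC);
	}

	return 0;
}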

Thanks,
Marvin

> > +		key.guest_phys_addr = gpa;
> > +		page = bsearch(&key, dev->guest_pages, dev->nr_guest_pages,
> > +			       sizeof(struct guest_page), guest_page_addrcmp);
> > +		if (page) {
> > +			if (gpa + size < page->guest_phys_addr + page->size)
> > +				return gpa - page->guest_phys_addr +
> > +					page->host_phys_addr;
> > +		}
> 
> Is all the generated code inlined?
> 
The compare function hasn't been inlined. I will inline it in the next version.

> I see that in the elf file:
> 2386: 0000000000874f70    16 FUNC    LOCAL  DEFAULT   13 guest_page_addrcmp
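
Worth noting: bsearch() receives the comparator as a function pointer, so
marking guest_page_addrcmp inline may not be enough on its own to remove
the call. An open-coded binary search lets the compiler inline the
comparison, and using a range check also lifts a limitation of the
exact-match comparator above: as posted, bsearch() only returns a hit when
gpa equals a page's guest_phys_addr exactly, not when gpa falls inside a
page. A minimal sketch (hypothetical, not the posted patch), assuming
dev->guest_pages is kept sorted by guest_phys_addr:

/* Hypothetical open-coded lookup, not the posted patch. Assumes
 * dev->guest_pages is sorted by guest_phys_addr.
 */
static __rte_always_inline rte_iova_t
gpa_to_hpa_sorted(struct virtio_net *dev, uint64_t gpa, uint64_t size)
{
	uint32_t lo = 0;
	uint32_t hi = dev->nr_guest_pages;

	while (lo < hi) {
		uint32_t mid = lo + (hi - lo) / 2;
		struct guest_page *page = &dev->guest_pages[mid];

		if (gpa < page->guest_phys_addr) {
			hi = mid;
		} else if (gpa >= page->guest_phys_addr + page->size) {
			lo = mid + 1;
		} else {
			/* gpa falls inside this page; keep the patch's
			 * range check for the full [gpa, gpa + size) span.
			 */
			if (gpa + size < page->guest_phys_addr + page->size)
				return gpa - page->guest_phys_addr +
				       page->host_phys_addr;
			break;
		}
	}

	return 0;
}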
> 
> > +	} else {
> > +		for (i = 0; i < dev->nr_guest_pages; i++) {
> > +			page = &dev->guest_pages[i];
> > +
> > +			if (gpa >= page->guest_phys_addr &&
> > +			    gpa + size < page->guest_phys_addr +
> > +			    page->size)
> > +				return gpa - page->guest_phys_addr +
> > +				       page->host_phys_addr;
> >  		}
> >  	}
> >
> > diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
> > index 79fcb9d19..15e50d27d 100644
> > --- a/lib/librte_vhost/vhost_user.c
> > +++ b/lib/librte_vhost/vhost_user.c
> > @@ -965,6 +965,12 @@ add_guest_pages(struct virtio_net *dev, struct rte_vhost_mem_region *reg,
> >  		reg_size -= size;
> >  	}
> >
> > +	/* sort guest page array if over binary search threshold */
> > +	if (dev->nr_guest_pages >= VHOST_BINARY_SEARCH_THRESH) {
> > +		qsort((void *)dev->guest_pages, dev->nr_guest_pages,
> > +			sizeof(struct guest_page), guest_page_addrcmp);
> > +	}
> > +
> >  	return 0;
> >  }
> >
> >
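
One more note on the qsort hunk above: add_guest_pages() appears to run
once per memory region, so the array must be fully re-sorted before any
lookup can take the bsearch() path. A cheap debug guard for that sorted
invariant could look like the sketch below (hypothetical check, not in
the patch; RTE_ASSERT comes from rte_debug.h and is compiled in only when
RTE_ENABLE_ASSERT is defined).

/* Hypothetical debug check, not in the patch: verify the sorted
 * invariant that the bsearch() path relies on.
 */
static void
assert_guest_pages_sorted(struct virtio_net *dev)
{
	uint32_t i;

	for (i = 1; i < dev->nr_guest_pages; i++)
		RTE_ASSERT(dev->guest_pages[i - 1].guest_phys_addr <=
			   dev->guest_pages[i].guest_phys_addr);
}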

