DPDK patches and discussions
From: "Xie, Huawei" <huawei.xie@intel.com>
To: Tetsuya.Mukawa <mukawa@igel.co.jp>,
	"Ouyang, Changchun" <changchun.ouyang@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Cc: Katsuya MATSUBARA <matsu@igel.co.jp>,
	"nakajima.yoshihiro@lab.ntt.co.jp"
	<nakajima.yoshihiro@lab.ntt.co.jp>,
	Hitoshi Masutani <masutani.hitoshi@lab.ntt.co.jp>
Subject: Re: [dpdk-dev] [RFC] lib/librte_vhost: qemu vhost-user support into DPDK vhost library
Date: Wed, 27 Aug 2014 05:56:31 +0000	[thread overview]
Message-ID: <C37D651A908B024F974696C65296B57B0F27C711@SHSMSX101.ccr.corp.intel.com> (raw)
In-Reply-To: <53FD6C4E.5040907@igel.co.jp>



> -----Original Message-----
> From: Tetsuya.Mukawa [mailto:mukawa@igel.co.jp]
> Sent: Wednesday, August 27, 2014 1:28 PM
> To: Ouyang, Changchun; dev@dpdk.org
> Cc: Xie, Huawei; Katsuya MATSUBARA; nakajima.yoshihiro@lab.ntt.co.jp;
> Hitoshi Masutani
> Subject: Re: [dpdk-dev] [RFC] lib/librte_vhost: qemu vhost-user support into
> DPDK vhost library
> 
> Hi Changchun,
> 
> (2014/08/27 14:01), Ouyang, Changchun wrote:
> > Agree with you, the performance should be the same, as the data path (RX/TX)
> > is not affected. The difference between the implementations only exists in
> > the virtio device creation and destruction stage.
> Yes, I agree. Also, there may be a difference if the virtio-net driver
> on the guest isn't poll mode, like the virtio-net device driver in the
> kernel. In that case, the existing vhost implementation uses the eventfd
> kernel module, and the vhost-user implementation uses an eventfd to kick
> the driver. So I guess there will be a difference.
For the virtio-net device driver there is still no difference. The existing
solution uses an eventfd proxy kernel module to install an fd in the DPDK
process that points to the eventfd in the qemu process. In vhost-user, the
UNIX domain socket does that work: a new fd is created, installed in the
target DPDK server process, and made to point to the same eventfd in the
qemu process.
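To make the vhost-user side concrete, below is a minimal sketch (not the
actual librte_vhost code; the function and variable names are made up) of how
a backend can pick up an fd that qemu attached to a message on the UNIX
domain socket, using SCM_RIGHTS ancillary data. The fd received this way
points at the same eventfd object as the one in the qemu process, which is
exactly the end state the eventfd proxy module provides today.

#include <string.h>
#include <sys/socket.h>

/* Receive one file descriptor attached as SCM_RIGHTS ancillary data to
 * a message on a connected UNIX domain socket.  Returns the fd, or -1. */
static int recv_fd(int sock)
{
        struct msghdr msgh;
        struct iovec iov;
        char payload;                           /* message body, ignored here */
        char control[CMSG_SPACE(sizeof(int))];
        struct cmsghdr *cmsg;
        int fd = -1;

        memset(&msgh, 0, sizeof(msgh));
        iov.iov_base = &payload;
        iov.iov_len = sizeof(payload);
        msgh.msg_iov = &iov;
        msgh.msg_iovlen = 1;
        msgh.msg_control = control;
        msgh.msg_controllen = sizeof(control);

        if (recvmsg(sock, &msgh, 0) <= 0)
                return -1;

        for (cmsg = CMSG_FIRSTHDR(&msgh); cmsg != NULL;
             cmsg = CMSG_NXTHDR(&msgh, cmsg)) {
                if (cmsg->cmsg_level == SOL_SOCKET &&
                    cmsg->cmsg_type == SCM_RIGHTS) {
                        memcpy(&fd, CMSG_DATA(cmsg), sizeof(fd));
                        break;
                }
        }
        return fd;      /* e.g. a kick or call eventfd sent by qemu */
}

In the existing solution the same result, an fd in the DPDK process referring
to qemu's eventfd, comes from the eventfd proxy kernel module instead, so the
data path itself is unchanged.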
> 
> Anyway, for device creation and destruction, the difference will come
> from the transmission speed of the unix domain socket versus CUSE. I am
> not sure which is faster.
> 
> Thanks,
> Tetsuya
> 
> 
> >
> > Regards,
> > Changchun
> >
> >> -----Original Message-----
> >> From: Tetsuya.Mukawa [mailto:mukawa@igel.co.jp]
> >> Sent: Wednesday, August 27, 2014 12:39 PM
> >> To: Ouyang, Changchun; dev@dpdk.org
> >> Cc: Xie, Huawei; Katsuya MATSUBARA; nakajima.yoshihiro@lab.ntt.co.jp;
> >> Hitoshi Masutani
> >> Subject: Re: [dpdk-dev] [RFC] lib/librte_vhost: qemu vhost-user support into
> >> DPDK vhost library
> >>
> >>
> >> (2014/08/27 9:43), Ouyang, Changchun wrote:
> >>> Do we have a performance comparison between the two implementations?
> >> Hi Changchun,
> >>
> >> If DPDK applications are running on both the guest and host side, the
> >> performance should be almost the same, because while transmitting data the
> >> virt queues are accessed by the virtio-net PMD and libvhost. In libvhost,
> >> the existing vhost implementation and the vhost-user implementation will
> >> share the same code to access the virt queues. So I guess the performance
> >> will be almost the same.
> >>
> >> Thanks,
> >> Tetsuya
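
Right, and only the control path differs. Purely as an illustration, a
conceptual sketch of the backend's virtqueue processing loop is below
(hypothetical structure and function names, not the actual librte_vhost
types; address translation and memory barriers are omitted). Nothing in it
depends on whether the device was set up through CUSE ioctls or vhost-user
socket messages.

#include <stdint.h>
#include <linux/virtio_ring.h>  /* struct vring_desc / vring_avail / vring_used */

/* Illustrative only: a stripped-down virtqueue as seen by the backend. */
struct demo_virtqueue {
        unsigned int        size;        /* number of descriptors           */
        struct vring_desc  *desc;        /* descriptor table (guest memory) */
        struct vring_avail *avail;       /* buffers offered by the guest    */
        struct vring_used  *used;        /* buffers returned by the backend */
        uint16_t            last_avail;  /* next avail entry to process     */
};

/* Consume every buffer the guest driver has made available. */
static void demo_drain(struct demo_virtqueue *vq)
{
        while (vq->last_avail != vq->avail->idx) {
                uint16_t head = vq->avail->ring[vq->last_avail % vq->size];

                /* ... copy or map the buffer chain starting at
                 *     vq->desc[head]; guest physical addresses would
                 *     have to be translated here ... */

                vq->used->ring[vq->used->idx % vq->size].id  = head;
                vq->used->ring[vq->used->idx % vq->size].len = vq->desc[head].len;
                vq->used->idx++;
                vq->last_avail++;
        }
}

Only the setup that fills in those pointers, and the eventfds used for kick
and call notifications, differ between the two control paths.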
> >>
> >>
> >>> Thanks
> >>> Changchun
> >>>
> >>>
> >>> -----Original Message-----
> >>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Xie, Huawei
> >>> Sent: Tuesday, August 26, 2014 7:06 PM
> >>> To: dev@dpdk.org
> >>> Subject: Re: [dpdk-dev] [RFC] lib/librte_vhost: qemu vhost-user
> >>> support into DPDK vhost library
> >>>
> >>> Hi all:
> >>> We are implementing the qemu official vhost-user interface into the DPDK
> >>> vhost library, so there would be two coexisting implementations for the
> >>> user space vhost backend.
> >>> Pros and cons in my mind:
> >>>
> >>> Existing solution:
> >>>   Pros: works with qemu versions before 2.1
> >>>   Cons: depends on the eventfd proxy kernel module and extra maintenance
> >>>         effort
> >>>
> >>> Qemu vhost-user:
> >>>   Pros: qemu official us-vhost interface
> >>>   Cons: only available after qemu 2.1
> >>>
> >>> BR.
> >>> huawei
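
To make the comparison concrete: the vhost-user control channel is simply a
UNIX domain socket that the backend listens on and a qemu >= 2.1 connects to
(via its -chardev socket,path=... plus -netdev type=vhost-user options), with
no kernel module in between. A minimal, purely illustrative sketch of creating
that listening socket, with a made-up function name and no claim that this is
what the library will do, could be:

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Create a listening UNIX domain socket for the vhost-user control
 * channel; qemu connects to this path and then sends the vhost
 * messages (and eventfds) over the connection. */
static int vhost_user_listen(const char *path)
{
        struct sockaddr_un un;
        int fd;

        fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
                return -1;

        memset(&un, 0, sizeof(un));
        un.sun_family = AF_UNIX;
        strncpy(un.sun_path, path, sizeof(un.sun_path) - 1);
        unlink(path);                   /* remove a stale socket file */

        if (bind(fd, (struct sockaddr *)&un, sizeof(un)) < 0 ||
            listen(fd, 1) < 0) {
                close(fd);
                return -1;
        }
        return fd;                      /* accept() qemu's connection later */
}

The existing CUSE-based path keeps working for older qemu versions, at the
cost of the eventfd proxy kernel module noted above.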


Thread overview: 12+ messages
2014-08-26 11:05 Xie, Huawei
2014-08-27  0:43 ` Ouyang, Changchun
2014-08-27  4:39   ` Tetsuya.Mukawa
2014-08-27  5:01     ` Ouyang, Changchun
2014-08-27  5:27       ` Tetsuya.Mukawa
2014-08-27  5:56         ` Xie, Huawei [this message]
2014-08-27  6:07           ` Tetsuya.Mukawa
2014-08-27  5:58         ` Tetsuya.Mukawa
2014-08-27  6:00         ` Ouyang, Changchun
2014-08-27  6:09           ` Tetsuya.Mukawa
2014-09-13  5:27 ` Linhaifeng
2014-09-16  1:36   ` Xie, Huawei
