From: 王志克 <wangzhike@jd.com>
To: "Tan, Jianfeng" <jianfeng.tan@intel.com>,
"avi.cohen@huawei.com" <avi.cohen@huawei.com>,
"users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] VIRTIO for containers
Date: Thu, 26 Oct 2017 12:53:43 +0000
Message-ID: <6DAF063A35010343823807B082E5681F41D2DF53@mbx05.360buyAD.local>
In-Reply-To: <030b706c-1566-36de-79cb-74af834f6a65@intel.com>
Hi,
Thanks for the reply.
Actually, we might avoid that by putting tcp/ip rx into the app thread, with a small change to the tap driver. Currently the tap driver receives with netif_rx()/netif_receive_skb(), which can end up running all the way up the tcp/ip stack in the vhost kthread. Instead, we could backlog the packets onto another cpu (the application thread's cpu?).
[Wang Zhike] Then in this case, another kthread like ksoftirqd will be kicked, right?
In my understanding, the advantage is that rx performance can be further improved, while the disadvantage is that more CPU resources and an extra queue are needed. If that can be done in a smart way, e.g. using this path when the system has idle CPUs and falling back to a single kernel thread otherwise, that would be best. Just my 2 cents.
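For reference, a rough pseudocode sketch of the tap-driver change being discussed (not a buildable patch: enqueue_to_backlog() is a static function in net/core/dev.c today, and tun_rx_steer() is a hypothetical name):

```c
/* Pseudocode sketch only: steer tap rx processing onto another CPU's
 * softnet backlog instead of running the tcp/ip stack inline in the
 * vhost kthread.  enqueue_to_backlog() is kernel-internal, so a real
 * patch would need an exported helper (or use RPS, see below). */
static int tun_rx_steer(struct sk_buff *skb, int target_cpu)
{
        unsigned int qtail;

        if (target_cpu == smp_processor_id())
                return netif_rx(skb);   /* current behaviour: local backlog */

        /* Queue onto the remote CPU's backlog; NET_RX_SOFTIRQ then runs
         * tcp/ip rx on the application thread's CPU, not the vhost one. */
        return enqueue_to_backlog(skb, target_cpu, &qtail);
}
```

Note that RPS can already achieve a similar steering without driver changes, e.g. `echo <cpumask> > /sys/class/net/tap0/queues/rx-0/rps_cpus` (device name illustrative).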
Br,
Wang Zhike
From: Tan, Jianfeng [mailto:jianfeng.tan@intel.com]
Sent: Thursday, October 26, 2017 4:53 PM
To: 王志克; avi.cohen@huawei.com; users@dpdk.org
Subject: Re: VIRTIO for containers
Hi,
[Wang Zhike] I once saw you mention that something like a packet mmap solution might be used. Is it still on your roadmap? I am not sure whether it is the same as “vhost tx zero copy”.
Can you estimate when the optimization might be done? Would upstream Linux kernel modules be updated, or DPDK modules? I just want to know which modules would be touched.
Yes, I was planning to do that, but found that it helps on the user->kernel path; it is not so easy for the kernel->user path. It is not the same as “vhost tx zero copy” (which has some restrictions, BTW). Packet mmap shares a block of memory between user and kernel space, so that we do not need to copy (the effect is the same as “vhost tx zero copy”). As for the date, it still lacks a detailed design and feasibility analysis.
1) Yes, we have done some initial tests internally, with testpmd as the vswitch instead of OVS-DPDK; and we were comparing with KNI for the exception path.
[Wang Zhike] Can you please kindly indicate how to configure KNI mode? I would like to compare it as well.
KNI is now available as a vdev. You can refer to this link: http://dpdk.org/doc/guides/nics/kni.html
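For comparison testing, a typical invocation looks roughly like the following (paths, core list, and interface name are illustrative; see the linked guide for the authoritative options):

```shell
# Load the KNI kernel module first (path depends on your DPDK build tree)
insmod ./kmod/rte_kni.ko

# Start testpmd with a KNI virtual device; this creates a kernel netdev
# backed by the PMD for exception-path traffic
./testpmd -l 0-1 -n 4 --vdev=net_kni0 -- -i

# Then configure the resulting interface like any other kernel netdev
ip link set net_kni0 up
```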
2) We also see a similar asymmetric result. On the user->kernel path, it not only copies data from mbuf to skb, but may also go up into the tcp stack (you can check using perf).
[Wang Zhike] Yes, indeed. On the user->kernel path, tcp/ip-related work is done by the vhost thread, while on the kernel->user path, tcp/ip-related work is done by the app (in my case netperf) in a syscall.
Actually, we might avoid that by putting tcp/ip rx into the app thread, with a small change to the tap driver. Currently the tap driver receives with netif_rx()/netif_receive_skb(), which can end up running all the way up the tcp/ip stack in the vhost kthread. Instead, we could backlog the packets onto another cpu (the application thread's cpu?).
Thanks,
Jianfeng