DPDK usage discussions
 help / color / mirror / Atom feed
* [dpdk-users] VIRTIO  for containers
@ 2017-06-25 15:13 Avi Cohen (A)
  2017-06-26  3:14 ` Tan, Jianfeng
  0 siblings, 1 reply; 22+ messages in thread
From: Avi Cohen (A) @ 2017-06-25 15:13 UTC (permalink / raw)
  To: dpdk-ovs, users

Hello,
Does anyone know the status of this project: http://dpdk.org/ml/archives/dev/2015-November/027732.html
(implementing a virtio device for containers)?

Best Regards
avi

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [dpdk-users] VIRTIO  for containers
  2017-06-25 15:13 [dpdk-users] VIRTIO for containers Avi Cohen (A)
@ 2017-06-26  3:14 ` Tan, Jianfeng
  2017-06-26  6:16   ` Avi Cohen (A)
  0 siblings, 1 reply; 22+ messages in thread
From: Tan, Jianfeng @ 2017-06-26  3:14 UTC (permalink / raw)
  To: Avi Cohen (A), dpdk-ovs, users

Hi Avi,

> -----Original Message-----
> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Avi Cohen (A)
> Sent: Sunday, June 25, 2017 11:13 PM
> To: dpdk-ovs@lists.01.org; users@dpdk.org
> Subject: [dpdk-users] VIRTIO for containers
> 
> Hello,
> Does  anyone know the status of this project
> http://dpdk.org/ml/archives/dev/2015-November/027732.html  -
> Implementing a virtio device for containers ?

It has been upstreamed since v16.07. Here is a howto doc: http://dpdk.org/doc/guides/howto/virtio_user_for_container_networking.html
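As a rough sketch of what that howto describes (the socket path, core masks, image name and vdev parameters below are illustrative, not taken from the doc), the container networking setup pairs a vhost-user port on the host with a virtio-user port inside the container:

```shell
# Host side: a vswitch (testpmd here for illustration) exposing a
# vhost-user socket that the container will connect to
testpmd -l 0-1 -n 4 --socket-mem 1024 --no-pci \
    --vdev=eth_vhost0,iface=/tmp/sock0 -- -i

# Container side: a DPDK app with a virtio-user vdev pointing at that
# socket; hugepages and the socket must be mounted into the container
docker run -it --privileged \
    -v /dev/hugepages:/dev/hugepages -v /tmp/sock0:/tmp/sock0 \
    dpdk-app testpmd -l 2-3 -n 4 -m 1024 --no-pci \
    --vdev=virtio_user0,path=/tmp/sock0 -- -i
```

The key point is that virtio-user reuses the virtio PMD as a frontend in a normal process, so no VM or hypervisor is involved.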


Thanks,
Jianfeng

> 
> Best Regards
> avi

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [dpdk-users] VIRTIO  for containers
  2017-06-26  3:14 ` Tan, Jianfeng
@ 2017-06-26  6:16   ` Avi Cohen (A)
  2017-06-26 11:58     ` Tan, Jianfeng
  0 siblings, 1 reply; 22+ messages in thread
From: Avi Cohen (A) @ 2017-06-26  6:16 UTC (permalink / raw)
  To: Tan, Jianfeng, dpdk-ovs, users

Thanks Jianfeng,
For containers that are *not running* DPDK - are there any thoughts about developing a 'virtio-like' device, for example to connect the container to OVS-DPDK?
I've tested the performance of a container connected to OVS-DPDK via an af_packet vdev processed by the virtual PMD, and its performance is good (it uses a zero-copy RX/TX ring buffer mmap'ed into userspace),
but not as good as that of a VM connected to OVS-DPDK (on the host) via vhost-user/virtio.
Best Regards
avi

> -----Original Message-----
> From: Tan, Jianfeng [mailto:jianfeng.tan@intel.com]
> Sent: Monday, 26 June, 2017 6:15 AM
> To: Avi Cohen (A); dpdk-ovs@lists.01.org; users@dpdk.org
> Subject: RE: VIRTIO for containers
> 
> Hi Avi,
> 
> > -----Original Message-----
> > From: users [mailto:users-bounces@dpdk.org] On Behalf Of Avi Cohen (A)
> > Sent: Sunday, June 25, 2017 11:13 PM
> > To: dpdk-ovs@lists.01.org; users@dpdk.org
> > Subject: [dpdk-users] VIRTIO for containers
> >
> > Hello,
> > Does  anyone know the status of this project
> > http://dpdk.org/ml/archives/dev/2015-November/027732.html  -
> > Implementing a virtio device for containers ?
> 
> It has been upstreamed since v16.07. Here is a howto doc:
> http://dpdk.org/doc/guides/howto/virtio_user_for_container_networking.h
> tml
> 
> 
> Thanks,
> Jianfeng
> 
> >
> > Best Regards
> > avi

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [dpdk-users] VIRTIO  for containers
  2017-06-26  6:16   ` Avi Cohen (A)
@ 2017-06-26 11:58     ` Tan, Jianfeng
  2017-06-26 12:06       ` Avi Cohen (A)
  0 siblings, 1 reply; 22+ messages in thread
From: Tan, Jianfeng @ 2017-06-26 11:58 UTC (permalink / raw)
  To: Avi Cohen (A), dpdk-ovs, users

Avi,

> -----Original Message-----
> From: Avi Cohen (A) [mailto:avi.cohen@huawei.com]
> Sent: Monday, June 26, 2017 2:17 PM
> To: Tan, Jianfeng; dpdk-ovs@lists.01.org; users@dpdk.org
> Subject: RE: VIRTIO for containers
> 
> Thanks Jianfeng,
> For containers that are *not running*  DPDK  -  are there any thoughts to
> develop a 'virtio like' device? , for example to connect the container to OVS-
> DPDK ?

We have developed virtio-user with vhost-kernel as the backend. In that scenario, you can add the tap interface into a container's network namespace, and a vhost kthread pushes the data out to user space.

I cannot guarantee the performance, though, as the model is the reverse of the VM (virtio) - OVS-DPDK (vhost) case.
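As a minimal sketch of this scenario (assuming the vhost-net kernel module is loaded so /dev/vhost-net exists; the core list and queue_size are illustrative), a virtio-user port backed by the kernel vhost can be created like this:

```shell
# Launch testpmd with a virtio-user vdev whose backend is the kernel
# vhost-net module; the kernel side appears as a tap interface that
# can later be moved into a container's network namespace
testpmd -l 0-1 -n 4 --socket-mem 1024 \
    --vdev=virtio_user0,path=/dev/vhost-net,queue_size=1024 \
    -- -i
```

Here the virtio frontend lives in the userspace DPDK process while the vhost backend lives in the kernel, which is why the performance model differs from the VM case.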


> I've tested the performance of a container connected to OVS-DPDK  via
> vdev-af_packet  and processed by virtual PMD, and its performance is good
> [uses mmap'ed to userspace  - zero copy RX/TX ring buffer]
> but not good as  the performance  of a  VM connected  to OVS-DPDK (@host)
> via vhost-user virtio.
> Best Regards
> avi
> 
> > -----Original Message-----
> > From: Tan, Jianfeng [mailto:jianfeng.tan@intel.com]
> > Sent: Monday, 26 June, 2017 6:15 AM
> > To: Avi Cohen (A); dpdk-ovs@lists.01.org; users@dpdk.org
> > Subject: RE: VIRTIO for containers
> >
> > Hi Avi,
> >
> > > -----Original Message-----
> > > From: users [mailto:users-bounces@dpdk.org] On Behalf Of Avi Cohen (A)
> > > Sent: Sunday, June 25, 2017 11:13 PM
> > > To: dpdk-ovs@lists.01.org; users@dpdk.org
> > > Subject: [dpdk-users] VIRTIO for containers
> > >
> > > Hello,
> > > Does  anyone know the status of this project
> > > http://dpdk.org/ml/archives/dev/2015-November/027732.html  -
> > > Implementing a virtio device for containers ?
> >
> > It has been upstreamed since v16.07. Here is a howto doc:
> >
> http://dpdk.org/doc/guides/howto/virtio_user_for_container_networking.
> h
> > tml
> >
> >
> > Thanks,
> > Jianfeng
> >
> > >
> > > Best Regards
> > > avi

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [dpdk-users] VIRTIO  for containers
  2017-06-26 11:58     ` Tan, Jianfeng
@ 2017-06-26 12:06       ` Avi Cohen (A)
  2017-06-27 14:22         ` Tan, Jianfeng
  0 siblings, 1 reply; 22+ messages in thread
From: Avi Cohen (A) @ 2017-06-26 12:06 UTC (permalink / raw)
  To: Tan, Jianfeng, dpdk-ovs, users

Thank You Jianfeng

> We have developed virtio-user + vhost-kernel as the backend. In that
> scenario, you can add the tap interface into a container network namespace.
> And there's a vhost kthread to push the data out to user space.
> 
> And I cannot guarantee the performance as it has diametric model in VM
> (virtio) - OVS-DPDK (vhost).
> 
[Avi Cohen (A)] 
Can you point me to a document describing how to run this setup?
Best Regards
avi
> 
> > I've tested the performance of a container connected to OVS-DPDK  via
> > vdev-af_packet  and processed by virtual PMD, and its performance is
> > good [uses mmap'ed to userspace  - zero copy RX/TX ring buffer] but
> > not good as  the performance  of a  VM connected  to OVS-DPDK (@host)
> > via vhost-user virtio.
> > Best Regards
> > avi
> >
> > > -----Original Message-----
> > > From: Tan, Jianfeng [mailto:jianfeng.tan@intel.com]
> > > Sent: Monday, 26 June, 2017 6:15 AM
> > > To: Avi Cohen (A); dpdk-ovs@lists.01.org; users@dpdk.org
> > > Subject: RE: VIRTIO for containers
> > >
> > > Hi Avi,
> > >
> > > > -----Original Message-----
> > > > From: users [mailto:users-bounces@dpdk.org] On Behalf Of Avi Cohen
> > > > (A)
> > > > Sent: Sunday, June 25, 2017 11:13 PM
> > > > To: dpdk-ovs@lists.01.org; users@dpdk.org
> > > > Subject: [dpdk-users] VIRTIO for containers
> > > >
> > > > Hello,
> > > > Does  anyone know the status of this project
> > > > http://dpdk.org/ml/archives/dev/2015-November/027732.html  -
> > > > Implementing a virtio device for containers ?
> > >
> > > It has been upstreamed since v16.07. Here is a howto doc:
> > >
> > http://dpdk.org/doc/guides/howto/virtio_user_for_container_networking.
> > h
> > > tml
> > >
> > >
> > > Thanks,
> > > Jianfeng
> > >
> > > >
> > > > Best Regards
> > > > avi

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [dpdk-users] VIRTIO for containers
  2017-06-26 12:06       ` Avi Cohen (A)
@ 2017-06-27 14:22         ` Tan, Jianfeng
  2017-06-28  6:45           ` Avi Cohen (A)
  0 siblings, 1 reply; 22+ messages in thread
From: Tan, Jianfeng @ 2017-06-27 14:22 UTC (permalink / raw)
  To: Avi Cohen (A), dpdk-ovs, users



On 6/26/2017 8:06 PM, Avi Cohen (A) wrote:
> Thank You Jianfeng
>
>> We have developed virtio-user + vhost-kernel as the backend. In that
>> scenario, you can add the tap interface into a container network namespace.
>> And there's a vhost kthread to push the data out to user space.
>>
>> And I cannot guarantee the performance as it has diametric model in VM
>> (virtio) - OVS-DPDK (vhost).
>>
> [Avi Cohen (A)]
> Can you refer to a document how to run this setup?

Please refer to 
http://dpdk.org/doc/guides/howto/virtio_user_as_exceptional_path.html

Thanks,
Jianfeng

> Best Regards
> avi
>>> I've tested the performance of a container connected to OVS-DPDK  via
>>> vdev-af_packet  and processed by virtual PMD, and its performance is
>>> good [uses mmap'ed to userspace  - zero copy RX/TX ring buffer] but
>>> not good as  the performance  of a  VM connected  to OVS-DPDK (@host)
>>> via vhost-user virtio.
>>> Best Regards
>>> avi
>>>
>>>> -----Original Message-----
>>>> From: Tan, Jianfeng [mailto:jianfeng.tan@intel.com]
>>>> Sent: Monday, 26 June, 2017 6:15 AM
>>>> To: Avi Cohen (A); dpdk-ovs@lists.01.org; users@dpdk.org
>>>> Subject: RE: VIRTIO for containers
>>>>
>>>> Hi Avi,
>>>>
>>>>> -----Original Message-----
>>>>> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Avi Cohen
>>>>> (A)
>>>>> Sent: Sunday, June 25, 2017 11:13 PM
>>>>> To: dpdk-ovs@lists.01.org; users@dpdk.org
>>>>> Subject: [dpdk-users] VIRTIO for containers
>>>>>
>>>>> Hello,
>>>>> Does  anyone know the status of this project
>>>>> http://dpdk.org/ml/archives/dev/2015-November/027732.html  -
>>>>> Implementing a virtio device for containers ?
>>>> It has been upstreamed since v16.07. Here is a howto doc:
>>>>
>>> http://dpdk.org/doc/guides/howto/virtio_user_for_container_networking.
>>> h
>>>> tml
>>>>
>>>>
>>>> Thanks,
>>>> Jianfeng
>>>>
>>>>> Best Regards
>>>>> avi

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [dpdk-users] VIRTIO for containers
  2017-06-27 14:22         ` Tan, Jianfeng
@ 2017-06-28  6:45           ` Avi Cohen (A)
  2017-07-03  7:21             ` Tan, Jianfeng
  0 siblings, 1 reply; 22+ messages in thread
From: Avi Cohen (A) @ 2017-06-28  6:45 UTC (permalink / raw)
  To: Tan, Jianfeng, dpdk-ovs, users

Thank you Jianfeng

> -----Original Message-----
> From: Tan, Jianfeng [mailto:jianfeng.tan@intel.com]
> Sent: Tuesday, 27 June, 2017 5:22 PM
> To: Avi Cohen (A); dpdk-ovs@lists.01.org; users@dpdk.org
> Subject: Re: VIRTIO for containers
> 
> 
> 
> On 6/26/2017 8:06 PM, Avi Cohen (A) wrote:
> > Thank You Jianfeng
> >
> >> We have developed virtio-user + vhost-kernel as the backend. In that
> >> scenario, you can add the tap interface into a container network
> namespace.
> >> And there's a vhost kthread to push the data out to user space.
> >>
> >> And I cannot guarantee the performance as it has diametric model in
> >> VM
> >> (virtio) - OVS-DPDK (vhost).
> >>
> > [Avi Cohen (A)]
> > Can you refer to a document how to run this setup?
> 
> Please refer to
> http://dpdk.org/doc/guides/howto/virtio_user_as_exceptional_path.html
> 
[Avi Cohen (A)] 
My setup includes a container and OVS-DPDK; I am still not sure about:
 - How to set up the virtio backend port in OVS-DPDK?
 - How to set up the container with the virtio frontend?
Best Regards
avi

> Thanks,
> Jianfeng
> 
> > Best Regards
> > avi
> >>> I've tested the performance of a container connected to OVS-DPDK
> >>> via vdev-af_packet  and processed by virtual PMD, and its
> >>> performance is good [uses mmap'ed to userspace  - zero copy RX/TX
> >>> ring buffer] but not good as  the performance  of a  VM connected
> >>> to OVS-DPDK (@host) via vhost-user virtio.
> >>> Best Regards
> >>> avi
> >>>
> >>>> -----Original Message-----
> >>>> From: Tan, Jianfeng [mailto:jianfeng.tan@intel.com]
> >>>> Sent: Monday, 26 June, 2017 6:15 AM
> >>>> To: Avi Cohen (A); dpdk-ovs@lists.01.org; users@dpdk.org
> >>>> Subject: RE: VIRTIO for containers
> >>>>
> >>>> Hi Avi,
> >>>>
> >>>>> -----Original Message-----
> >>>>> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Avi Cohen
> >>>>> (A)
> >>>>> Sent: Sunday, June 25, 2017 11:13 PM
> >>>>> To: dpdk-ovs@lists.01.org; users@dpdk.org
> >>>>> Subject: [dpdk-users] VIRTIO for containers
> >>>>>
> >>>>> Hello,
> >>>>> Does  anyone know the status of this project
> >>>>> http://dpdk.org/ml/archives/dev/2015-November/027732.html  -
> >>>>> Implementing a virtio device for containers ?
> >>>> It has been upstreamed since v16.07. Here is a howto doc:
> >>>>
> >>>
> http://dpdk.org/doc/guides/howto/virtio_user_for_container_networking.
> >>> h
> >>>> tml
> >>>>
> >>>>
> >>>> Thanks,
> >>>> Jianfeng
> >>>>
> >>>>> Best Regards
> >>>>> avi

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [dpdk-users] VIRTIO for containers
  2017-06-28  6:45           ` Avi Cohen (A)
@ 2017-07-03  7:21             ` Tan, Jianfeng
  2017-07-09 15:32               ` Avi Cohen (A)
  0 siblings, 1 reply; 22+ messages in thread
From: Tan, Jianfeng @ 2017-07-03  7:21 UTC (permalink / raw)
  To: Avi Cohen (A), dpdk-ovs, users



> -----Original Message-----
> From: Avi Cohen (A) [mailto:avi.cohen@huawei.com]
> Sent: Wednesday, June 28, 2017 2:45 PM
> To: Tan, Jianfeng; dpdk-ovs@lists.01.org; users@dpdk.org
> Subject: RE: VIRTIO for containers
> 
> Thank you Jianfeng
> 
> > -----Original Message-----
> > From: Tan, Jianfeng [mailto:jianfeng.tan@intel.com]
> > Sent: Tuesday, 27 June, 2017 5:22 PM
> > To: Avi Cohen (A); dpdk-ovs@lists.01.org; users@dpdk.org
> > Subject: Re: VIRTIO for containers
> >
> >
> >
> > On 6/26/2017 8:06 PM, Avi Cohen (A) wrote:
> > > Thank You Jianfeng
> > >
> > >> We have developed virtio-user + vhost-kernel as the backend. In that
> > >> scenario, you can add the tap interface into a container network
> > namespace.
> > >> And there's a vhost kthread to push the data out to user space.
> > >>
> > >> And I cannot guarantee the performance as it has diametric model in
> > >> VM
> > >> (virtio) - OVS-DPDK (vhost).
> > >>
> > > [Avi Cohen (A)]
> > > Can you refer to a document how to run this setup?
> >
> > Please refer to
> > http://dpdk.org/doc/guides/howto/virtio_user_as_exceptional_path.html
> >
> [Avi Cohen (A)]
> My setup includes a container and ovs-dpdk , i still not sure about:
>  - How to set the virtio backend port in the ovs-dpdk ?

For OVS-DPDK, you need a version above 2.7.0. The command below creates a virtio-user port with a vhost-kernel backend:
# ovs-vsctl add-port br0 virtiouser0 -- set Interface virtiouser0 type=dpdk options:dpdk-devargs=virtio_user0,path=/dev/vhost-net

>  - How to set the container with the virtio frontend ?

No, the container does not hold the virtio frontend in this case. The ovs-vsctl command above generates a virtio-user port in OVS and a tap interface in the kernel; you can assign the tap interface to a container's network namespace so that the container's traffic goes through OVS-DPDK on its way out.
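As an illustration of the last step (the tap interface name, container name and address below are assumptions; the actual tap name depends on how the virtio-user port was created), moving the kernel tap into a container's network namespace might look like:

```shell
# Find the PID of the running container (Docker assumed for illustration)
CPID=$(docker inspect -f '{{.State.Pid}}' my_container)

# Move the tap interface created by the virtio-user port into the
# container's network namespace, then configure and bring it up there
ip link set tap0 netns "$CPID"
nsenter -t "$CPID" -n ip addr add 172.16.0.2/24 dev tap0
nsenter -t "$CPID" -n ip link set tap0 up
```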

Thanks,
Jianfeng

> Best Regards
> avi
> 
> > Thanks,
> > Jianfeng
> >
> > > Best Regards
> > > avi
> > >>> I've tested the performance of a container connected to OVS-DPDK
> > >>> via vdev-af_packet  and processed by virtual PMD, and its
> > >>> performance is good [uses mmap'ed to userspace  - zero copy RX/TX
> > >>> ring buffer] but not good as  the performance  of a  VM connected
> > >>> to OVS-DPDK (@host) via vhost-user virtio.
> > >>> Best Regards
> > >>> avi
> > >>>
> > >>>> -----Original Message-----
> > >>>> From: Tan, Jianfeng [mailto:jianfeng.tan@intel.com]
> > >>>> Sent: Monday, 26 June, 2017 6:15 AM
> > >>>> To: Avi Cohen (A); dpdk-ovs@lists.01.org; users@dpdk.org
> > >>>> Subject: RE: VIRTIO for containers
> > >>>>
> > >>>> Hi Avi,
> > >>>>
> > >>>>> -----Original Message-----
> > >>>>> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Avi
> Cohen
> > >>>>> (A)
> > >>>>> Sent: Sunday, June 25, 2017 11:13 PM
> > >>>>> To: dpdk-ovs@lists.01.org; users@dpdk.org
> > >>>>> Subject: [dpdk-users] VIRTIO for containers
> > >>>>>
> > >>>>> Hello,
> > >>>>> Does  anyone know the status of this project
> > >>>>> http://dpdk.org/ml/archives/dev/2015-November/027732.html  -
> > >>>>> Implementing a virtio device for containers ?
> > >>>> It has been upstreamed since v16.07. Here is a howto doc:
> > >>>>
> > >>>
> > http://dpdk.org/doc/guides/howto/virtio_user_for_container_networking.
> > >>> h
> > >>>> tml
> > >>>>
> > >>>>
> > >>>> Thanks,
> > >>>> Jianfeng
> > >>>>
> > >>>>> Best Regards
> > >>>>> avi

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [dpdk-users] VIRTIO for containers
  2017-07-03  7:21             ` Tan, Jianfeng
@ 2017-07-09 15:32               ` Avi Cohen (A)
  2017-07-10  3:28                 ` Tan, Jianfeng
  0 siblings, 1 reply; 22+ messages in thread
From: Avi Cohen (A) @ 2017-07-09 15:32 UTC (permalink / raw)
  To: Tan, Jianfeng, dpdk-ovs, users

Thank you Jianfeng; please see my replies inline, marked [Avi Cohen (A)]

> -----Original Message-----
> From: Tan, Jianfeng [mailto:jianfeng.tan@intel.com]
> Sent: Monday, 03 July, 2017 10:22 AM
> To: Avi Cohen (A); dpdk-ovs@lists.01.org; users@dpdk.org
> Subject: RE: VIRTIO for containers
> 
> 
> 
> > -----Original Message-----
> > From: Avi Cohen (A) [mailto:avi.cohen@huawei.com]
> > Sent: Wednesday, June 28, 2017 2:45 PM
> > To: Tan, Jianfeng; dpdk-ovs@lists.01.org; users@dpdk.org
> > Subject: RE: VIRTIO for containers
> >
> > Thank you Jianfeng
> >
> > > -----Original Message-----
> > > From: Tan, Jianfeng [mailto:jianfeng.tan@intel.com]
> > > Sent: Tuesday, 27 June, 2017 5:22 PM
> > > To: Avi Cohen (A); dpdk-ovs@lists.01.org; users@dpdk.org
> > > Subject: Re: VIRTIO for containers
> > >
> > >
> > >
> > > On 6/26/2017 8:06 PM, Avi Cohen (A) wrote:
> > > > Thank You Jianfeng
> > > >
> > > >> We have developed virtio-user + vhost-kernel as the backend. In
> > > >> that scenario, you can add the tap interface into a container
> > > >> network
> > > namespace.
> > > >> And there's a vhost kthread to push the data out to user space.
> > > >>
> > > >> And I cannot guarantee the performance as it has diametric model
> > > >> in VM
> > > >> (virtio) - OVS-DPDK (vhost).
> > > >>
> > > > [Avi Cohen (A)]
> > > > Can you refer to a document how to run this setup?
> > >
> > > Please refer to
> > > http://dpdk.org/doc/guides/howto/virtio_user_as_exceptional_path.htm
> > > l
> > >
> > [Avi Cohen (A)]
> > My setup includes a container and ovs-dpdk , i still not sure about:
> >  - How to set the virtio backend port in the ovs-dpdk ?
> 
> For OVS-DPDK, you need a version above 2.7.0. Below command is used to
> create a virtio-user port with vhost-kernel backend:
> # ovs-vsctl add-port br0 virtiouser0 -- set Interface virtiouser0 type=dpdk
> options:dpdk-devargs=virtio_user0,path=/dev/vhost-net
> 
> >  - How to set the container with the virtio frontend ?
> 
> No, containers will not hold the virtio frontend in this case. Above ovs-vsctl
> command will generate a virtio-user port in OVS, and a tap interface in
> kernel, you can assign the tap interface into a net namespace of some
> container so that its networking flow will go through OVS-DPDK then to
> outside.
[Avi Cohen (A)] 
Thank you Jianfeng
I've tested it, and the performance looks very good compared to native OVS.
I have one more question:
You wrote "there's a vhost kthread to push the data out to user space" -
Does that mean a copy from userspace to kernel (and vice versa), or is there a zero-copy mmap as in AF_PACKET, which handles the TX/RX rings in userspace?
Best Regards
avi
> 
> Thanks,
> Jianfeng
> 
> > Best Regards
> > avi
> >
> > > Thanks,
> > > Jianfeng
> > >
> > > > Best Regards
> > > > avi
> > > >>> I've tested the performance of a container connected to OVS-DPDK
> > > >>> via vdev-af_packet  and processed by virtual PMD, and its
> > > >>> performance is good [uses mmap'ed to userspace  - zero copy
> > > >>> RX/TX ring buffer] but not good as  the performance  of a  VM
> > > >>> connected to OVS-DPDK (@host) via vhost-user virtio.
> > > >>> Best Regards
> > > >>> avi
> > > >>>
> > > >>>> -----Original Message-----
> > > >>>> From: Tan, Jianfeng [mailto:jianfeng.tan@intel.com]
> > > >>>> Sent: Monday, 26 June, 2017 6:15 AM
> > > >>>> To: Avi Cohen (A); dpdk-ovs@lists.01.org; users@dpdk.org
> > > >>>> Subject: RE: VIRTIO for containers
> > > >>>>
> > > >>>> Hi Avi,
> > > >>>>
> > > >>>>> -----Original Message-----
> > > >>>>> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Avi
> > Cohen
> > > >>>>> (A)
> > > >>>>> Sent: Sunday, June 25, 2017 11:13 PM
> > > >>>>> To: dpdk-ovs@lists.01.org; users@dpdk.org
> > > >>>>> Subject: [dpdk-users] VIRTIO for containers
> > > >>>>>
> > > >>>>> Hello,
> > > >>>>> Does  anyone know the status of this project
> > > >>>>> http://dpdk.org/ml/archives/dev/2015-November/027732.html  -
> > > >>>>> Implementing a virtio device for containers ?
> > > >>>> It has been upstreamed since v16.07. Here is a howto doc:
> > > >>>>
> > > >>>
> > >
> http://dpdk.org/doc/guides/howto/virtio_user_for_container_networking.
> > > >>> h
> > > >>>> tml
> > > >>>>
> > > >>>>
> > > >>>> Thanks,
> > > >>>> Jianfeng
> > > >>>>
> > > >>>>> Best Regards
> > > >>>>> avi

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [dpdk-users] VIRTIO for containers
  2017-07-09 15:32               ` Avi Cohen (A)
@ 2017-07-10  3:28                 ` Tan, Jianfeng
  2017-07-10  6:49                   ` Avi Cohen (A)
  0 siblings, 1 reply; 22+ messages in thread
From: Tan, Jianfeng @ 2017-07-10  3:28 UTC (permalink / raw)
  To: Avi Cohen (A), dpdk-ovs, users



> -----Original Message-----
> From: Avi Cohen (A) [mailto:avi.cohen@huawei.com]
> Sent: Sunday, July 9, 2017 11:32 PM
> To: Tan, Jianfeng; dpdk-ovs@lists.01.org; users@dpdk.org
> Subject: RE: VIRTIO for containers
> 
> Thanks you Jianfeng - pls see inline marked  [Avi Cohen (A)]
> 
> > -----Original Message-----
> > From: Tan, Jianfeng [mailto:jianfeng.tan@intel.com]
> > Sent: Monday, 03 July, 2017 10:22 AM
> > To: Avi Cohen (A); dpdk-ovs@lists.01.org; users@dpdk.org
> > Subject: RE: VIRTIO for containers
> >
> >
> >
> > > -----Original Message-----
> > > From: Avi Cohen (A) [mailto:avi.cohen@huawei.com]
> > > Sent: Wednesday, June 28, 2017 2:45 PM
> > > To: Tan, Jianfeng; dpdk-ovs@lists.01.org; users@dpdk.org
> > > Subject: RE: VIRTIO for containers
> > >
> > > Thank you Jianfeng
> > >
> > > > -----Original Message-----
> > > > From: Tan, Jianfeng [mailto:jianfeng.tan@intel.com]
> > > > Sent: Tuesday, 27 June, 2017 5:22 PM
> > > > To: Avi Cohen (A); dpdk-ovs@lists.01.org; users@dpdk.org
> > > > Subject: Re: VIRTIO for containers
> > > >
> > > >
> > > >
> > > > On 6/26/2017 8:06 PM, Avi Cohen (A) wrote:
> > > > > Thank You Jianfeng
> > > > >
> > > > >> We have developed virtio-user + vhost-kernel as the backend. In
> > > > >> that scenario, you can add the tap interface into a container
> > > > >> network
> > > > namespace.
> > > > >> And there's a vhost kthread to push the data out to user space.
> > > > >>
> > > > >> And I cannot guarantee the performance as it has diametric model
> > > > >> in VM
> > > > >> (virtio) - OVS-DPDK (vhost).
> > > > >>
> > > > > [Avi Cohen (A)]
> > > > > Can you refer to a document how to run this setup?
> > > >
> > > > Please refer to
> > > >
> http://dpdk.org/doc/guides/howto/virtio_user_as_exceptional_path.htm
> > > > l
> > > >
> > > [Avi Cohen (A)]
> > > My setup includes a container and ovs-dpdk , i still not sure about:
> > >  - How to set the virtio backend port in the ovs-dpdk ?
> >
> > For OVS-DPDK, you need a version above 2.7.0. Below command is used to
> > create a virtio-user port with vhost-kernel backend:
> > # ovs-vsctl add-port br0 virtiouser0 -- set Interface virtiouser0 type=dpdk
> > options:dpdk-devargs=virtio_user0,path=/dev/vhost-net
> >
> > >  - How to set the container with the virtio frontend ?
> >
> > No, containers will not hold the virtio frontend in this case. Above ovs-vsctl
> > command will generate a virtio-user port in OVS, and a tap interface in
> > kernel, you can assign the tap interface into a net namespace of some
> > container so that its networking flow will go through OVS-DPDK then to
> > outside.
> [Avi Cohen (A)]
> Thanks you Jianfeng
> I've tested it and the performance looks very good compared to native ovs.
> I have 1 more question:
> You wrote " there's a vhost kthread to push the data out to user space " -
> Is that mean a copy from userspace to kernel (and viceversa) or there is a
> zero-copy mmap  like in AF_PACKET which handles TX/RX rings in userspace ?
> Best Regards
> avi

So far it needs a data copy, at least on the kernel-to-user path. There is an experimental feature, named experimental_zcopytx, to avoid the copy, but it is not very useful due to implementation limitations.

Packet mmap (similar to AF_PACKET) is exactly the direction we were discussing for further optimization. In addition, an optimized vhost thread model (the current model is one thread per rx-tx queue pair) is also being considered.

Thanks,
Jianfeng

> >
> > Thanks,
> > Jianfeng
> >
> > > Best Regards
> > > avi
> > >
> > > > Thanks,
> > > > Jianfeng
> > > >
> > > > > Best Regards
> > > > > avi
> > > > >>> I've tested the performance of a container connected to OVS-
> DPDK
> > > > >>> via vdev-af_packet  and processed by virtual PMD, and its
> > > > >>> performance is good [uses mmap'ed to userspace  - zero copy
> > > > >>> RX/TX ring buffer] but not good as  the performance  of a  VM
> > > > >>> connected to OVS-DPDK (@host) via vhost-user virtio.
> > > > >>> Best Regards
> > > > >>> avi
> > > > >>>
> > > > >>>> -----Original Message-----
> > > > >>>> From: Tan, Jianfeng [mailto:jianfeng.tan@intel.com]
> > > > >>>> Sent: Monday, 26 June, 2017 6:15 AM
> > > > >>>> To: Avi Cohen (A); dpdk-ovs@lists.01.org; users@dpdk.org
> > > > >>>> Subject: RE: VIRTIO for containers
> > > > >>>>
> > > > >>>> Hi Avi,
> > > > >>>>
> > > > >>>>> -----Original Message-----
> > > > >>>>> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Avi
> > > Cohen
> > > > >>>>> (A)
> > > > >>>>> Sent: Sunday, June 25, 2017 11:13 PM
> > > > >>>>> To: dpdk-ovs@lists.01.org; users@dpdk.org
> > > > >>>>> Subject: [dpdk-users] VIRTIO for containers
> > > > >>>>>
> > > > >>>>> Hello,
> > > > >>>>> Does  anyone know the status of this project
> > > > >>>>> http://dpdk.org/ml/archives/dev/2015-November/027732.html
> -
> > > > >>>>> Implementing a virtio device for containers ?
> > > > >>>> It has been upstreamed since v16.07. Here is a howto doc:
> > > > >>>>
> > > > >>>
> > > >
> > http://dpdk.org/doc/guides/howto/virtio_user_for_container_networking.
> > > > >>> h
> > > > >>>> tml
> > > > >>>>
> > > > >>>>
> > > > >>>> Thanks,
> > > > >>>> Jianfeng
> > > > >>>>
> > > > >>>>> Best Regards
> > > > >>>>> avi

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [dpdk-users] VIRTIO for containers
  2017-07-10  3:28                 ` Tan, Jianfeng
@ 2017-07-10  6:49                   ` Avi Cohen (A)
  0 siblings, 0 replies; 22+ messages in thread
From: Avi Cohen (A) @ 2017-07-10  6:49 UTC (permalink / raw)
  To: Tan, Jianfeng, dpdk-ovs, users



> -----Original Message-----
> From: Tan, Jianfeng [mailto:jianfeng.tan@intel.com]
> Sent: Monday, 10 July, 2017 6:28 AM
> To: Avi Cohen (A); dpdk-ovs@lists.01.org; users@dpdk.org
> Subject: RE: VIRTIO for containers
> > [Avi Cohen (A)]
> > Thanks you Jianfeng
> > I've tested it and the performance looks very good compared to native ovs.
> > I have 1 more question:
> > You wrote " there's a vhost kthread to push the data out to user space
> > " - Is that mean a copy from userspace to kernel (and viceversa) or
> > there is a zero-copy mmap  like in AF_PACKET which handles TX/RX rings in
> userspace ?
> > Best Regards
> > avi
> 
> So far it needs data copy at least from kernel to user path; there's an
> experimental feature, named experimental_zcopytx, to avoid data copy, but
> not very useful due to the implementation limitation.
[Avi Cohen (A)] 
Thanks Jianfeng
The penalty here is that the vhost kthread consumes a lot of CPU in high-throughput scenarios: more than 80% of a CPU at ~10 Gbps throughput,
and this is in addition to the 100% CPU of the PMD threads.
Also, while the PMD threads can be shared between multiple containers, the vhost kthread is per container.
Best Regards
avi
> 
> Packet mmap (similar to AF_PACKET) is exactly the direction we were
> discussing for the further optimization. Plus, an optimized vhost thread
> model (current thread model is: one thread for one rx-tx queue pair) is also
> considered.
> 
> Thanks,
> Jianfeng
> 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [dpdk-users] VIRTIO for containers
  2017-10-31  4:25           ` 王志克
@ 2017-11-01  2:58             ` Tan, Jianfeng
  0 siblings, 0 replies; 22+ messages in thread
From: Tan, Jianfeng @ 2017-11-01  2:58 UTC (permalink / raw)
  To: 王志克, users

Hi Zhike,

On 10/31/2017 12:25 PM, 王志克 wrote:
> Hi,
>
> I tested KNI, and compared with virtio-user. The result is beyond my 
> expectation:
>
> The KNI performance is better (+30%) in simpe netperf test with TCP 
> and different size UDP. I though they have similar performance, but it 
> proved that KNI performed better in my test. Not sure why.

This is expected. KNI has a better thread model: its kthread only
processes the user->kernel path, while the kernel->user path is
processed in the ksoftirq thread.


>
> Note in my test, I did not enable checksum/gso/… offloading and 
> multi-queue, since we need do vxLan encapsulation using SW. I am using 
> ovs2.8.1 and dpdk 17.05.2.


And below is the feature table. Note that OVS (mainstream) so far does
not integrate LRO/TSO etc.

                               KNI    virtio-user
Multi-seg (user->kernel)       Y      Y
Multi-seg (kernel->user)       N      Y
Multi-queue                    N      Y
Csum offload (user->kernel)    Y      Y
Csum offload (kernel->user)    N      Y
Zero copy (user->kernel)       N      Experimental
Zero copy (kernel->user)       N      N


>
> In addition, one queue pair on virtio-user would create one vhost 
> thread. If we have many containters, it seems hard to manage the CPU 
> usage. Is there any proposal/practice to limit the vhost kthread CPU 
> resource?

Yes, this is another thread model problem.

There is a proposal from Red Hat and IBM on this:
http://events.linuxfoundation.org/sites/events/files/slides/kvm_forum_2015_vhost_sharing_is_better.pdf.
But it is not clear when it will be ready.

Thanks,
Jianfeng

>
> Br,
> Wang Zhike

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [dpdk-users] VIRTIO for containers
  2017-10-26  8:53         ` Tan, Jianfeng
  2017-10-26 12:53           ` 王志克
@ 2017-10-31  4:25           ` 王志克
  2017-11-01  2:58             ` Tan, Jianfeng
  1 sibling, 1 reply; 22+ messages in thread
From: 王志克 @ 2017-10-31  4:25 UTC (permalink / raw)
  To: Tan, Jianfeng, users

Hi,

I tested KNI and compared it with virtio-user. The result was beyond my expectation:

KNI performance is better (+30%) in simple netperf tests with TCP and different-size UDP. I thought they would have similar performance, but KNI performed better in my test. Not sure why.

Note that in my test I did not enable checksum/GSO/… offloading or multi-queue, since we need to do VXLAN encapsulation in software. I am using OVS 2.8.1 and DPDK 17.05.2.

In addition, one queue pair on virtio-user creates one vhost kthread. If we have many containers, it seems hard to manage the CPU usage. Is there any proposal/practice for limiting the vhost kthread's CPU resource?

Br,
Wang Zhike

From: Tan, Jianfeng [mailto:jianfeng.tan@intel.com]
Sent: Thursday, October 26, 2017 4:53 PM
To: 王志克; avi.cohen@huawei.com; users@dpdk.org
Subject: Re: VIRTIO for containers

Hi,


[Wang Zhike] I once saw you mention that something like an mmap solution may be used. Is it still on your roadmap? I am not sure whether it is the same as the "vhost tx zero copy".
Can I know the forecast date for completing the optimization? Will some upstream Linux kernel module be updated, or a DPDK module? I just want to know which modules will be touched.

Yes, I was planning to do that. But I found out it helps on the user->kernel path; not so easy for the kernel->user path. It’s not the same as “vhost tx zero copy” (there are some restrictions, BTW). The packet mmap would share a bulk of memory between user and kernel space, so that we don’t need to copy (the effect is the same as with “vhost tx zero copy”). As for the date, it still lacks a detailed design and feasibility analysis.



1) Yes, we have done some initial tests internally, with testpmd as the vswitch instead of OVS-DPDK; and we were comparing with KNI for exceptional path.
[Wang Zhike]Can you please kindly indicate how to configure for KNI mode? I would like to also compare it.

KNI is a vdev now. You can refer to this link: http://dpdk.org/doc/guides/nics/kni.html
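For reference, a minimal KNI vdev setup with testpmd might look like the following (the module path, core/channel counts, interface name, and addressing are assumptions for illustration):

```shell
# Load the KNI kernel module first (path depends on your DPDK build).
insmod ./build/kmod/rte_kni.ko

# Start testpmd with a KNI virtual device; a kernel network interface
# backed by the vdev is created alongside it.
./testpmd -l 0-1 -n 4 --vdev=net_kni0 -- -i

# From another shell, configure the kernel side of the KNI interface.
ip addr add 192.168.1.1/24 dev net_kni0
ip link set dev net_kni0 up
```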




2) We also see a similar asymmetric result. For the user->kernel path, it not only copies data from mbuf to skb, but also might go up to the TCP stack (you can check using perf).
[Wang Zhike] Yes, indeed. On the user->kernel path, tcp/ip related work is done by the vhost thread, while on the kernel->user path, tcp/ip related work is done by the app (in my case netperf) in a syscall.


To put tcp/ip rx into app thread, actually, might avoid that with a little change on tap driver. Currently, we use netif_rx/netif_receive_skb() to rx in tap, which could result in going up to the tcp/ip stack in the vhost kthread. Instead, we could backlog the packets into other cpu (application thread's cpu?).

Thanks,
Jianfeng




^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [dpdk-users] VIRTIO for containers
  2017-10-26 12:53           ` 王志克
@ 2017-10-27  1:58             ` Tan, Jianfeng
  0 siblings, 0 replies; 22+ messages in thread
From: Tan, Jianfeng @ 2017-10-27  1:58 UTC (permalink / raw)
  To: 王志克, avi.cohen, users

Hi Zhike,


On 10/26/2017 8:53 PM, 王志克 wrote:
> Hi,
>
> Thanks for reply.
>
> To put tcp/ip rx into app thread, actually, might avoid that with a 
> little change on tap driver. Currently, we use 
> netif_rx/netif_receive_skb() to rx in tap, which could result in going 
> up to the tcp/ip stack in the vhost kthread. Instead, we could backlog 
> the packets into other cpu (application thread's cpu?).
>
> [Wang Zhike] Then in this case, another kthread like ksoftirq will be 
> kicked, right?
>
> In my understanding, the advantage is that the rx performance can be 
> even improvement, while disadvantage is that more cpu resource is used 
> and another queue is needed. If that can be done in a smart way, like 
> system has idle CPUs, we can use this way, else fall back to only use 
> one kernel thread. Just my 2 cents.

Yes, that makes sense. We need a smart mechanism to decide whether it is 
handled in the vhost kthread or a ksoftirqd kthread. And also, we could even 
avoid forking a vhost kthread, to avoid too many context switches.

Thanks,
Jianfeng

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [dpdk-users] VIRTIO for containers
  2017-10-26  8:53         ` Tan, Jianfeng
@ 2017-10-26 12:53           ` 王志克
  2017-10-27  1:58             ` Tan, Jianfeng
  2017-10-31  4:25           ` 王志克
  1 sibling, 1 reply; 22+ messages in thread
From: 王志克 @ 2017-10-26 12:53 UTC (permalink / raw)
  To: Tan, Jianfeng, avi.cohen, users

Hi,

Thanks for reply.

To put tcp/ip rx into app thread, actually, might avoid that with a little change on tap driver. Currently, we use netif_rx/netif_receive_skb() to rx in tap, which could result in going up to the tcp/ip stack in the vhost kthread. Instead, we could backlog the packets into other cpu (application thread's cpu?).
[Wang Zhike] Then in this case, another kthread like ksoftirq will be kicked, right?
In my understanding, the advantage is that the rx performance can be even improvement, while disadvantage is that more cpu resource is used and another queue is needed. If that can be done in a smart way, like system has idle CPUs, we can use this way, else fall back to only use one kernel thread. Just my 2 cents.

Br,
Wang Zhike

From: Tan, Jianfeng [mailto:jianfeng.tan@intel.com]
Sent: Thursday, October 26, 2017 4:53 PM
To: 王志克; avi.cohen@huawei.com; users@dpdk.org
Subject: Re: VIRTIO for containers

Hi,


[Wang Zhike] I once saw you mentioned that something like mmap solution may be used. Is it still on your roadmap? I am not sure whether it is same as the “vhost tx zero copy”.
Can I know the forecasted day that the optimization can be done? Some Linux kernel upstream module would be updated, or DPDK module? Just want to know which modules will be touched.

Yes, I was planning to do that. But I found out it helps on the user->kernel path; not so easy for the kernel->user path. It’s not the same as “vhost tx zero copy” (there are some restrictions, BTW). The packet mmap would share a bulk of memory between user and kernel space, so that we don’t need to copy (the effect is the same as with “vhost tx zero copy”). As for the date, it still lacks a detailed design and feasibility analysis.



1) Yes, we have done some initial tests internally, with testpmd as the vswitch instead of OVS-DPDK; and we were comparing with KNI for exceptional path.
[Wang Zhike]Can you please kindly indicate how to configure for KNI mode? I would like to also compare it.

KNI is a vdev now. You can refer to this link: http://dpdk.org/doc/guides/nics/kni.html




2) We also see similar asymmetric result. For user->kernel path, it not only copies data from mbuf to skb, but also might go above to tcp stack (you can check using perf).
[Wang Zhike] Yes, indeed.  User->kernel path, tcp/ip related work is done by vhost thread, while kernel to user  thread, tcp/ip related work is done by the app (my case netperf) in syscall.


To put tcp/ip rx into app thread, actually, might avoid that with a little change on tap driver. Currently, we use netif_rx/netif_receive_skb() to rx in tap, which could result in going up to the tcp/ip stack in the vhost kthread. Instead, we could backlog the packets into other cpu (application thread's cpu?).

Thanks,
Jianfeng




^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [dpdk-users] VIRTIO for containers
  2017-10-25  9:58       ` 王志克
@ 2017-10-26  8:53         ` Tan, Jianfeng
  2017-10-26 12:53           ` 王志克
  2017-10-31  4:25           ` 王志克
  0 siblings, 2 replies; 22+ messages in thread
From: Tan, Jianfeng @ 2017-10-26  8:53 UTC (permalink / raw)
  To: 王志克, avi.cohen, users

Hi,

> [Wang Zhike] I once saw you mentioned that something like mmap 
> solution may be used. Is it still on your roadmap? I am not sure 
> whether it is same as the “vhost tx zero copy”.
>
> Can I know the forecasted day that the optimization can be done? Some 
> Linux kernel upstream module would be updated, or DPDK module? Just 
> want to know which modules will be touched.
>

Yes, I was planning to do that. But I found out it helps on the user->kernel 
path; not so easy for the kernel->user path. It’s not the same as “vhost tx 
zero copy” (there are some restrictions, BTW). The packet mmap would 
share a bulk of memory between user and kernel space, so that we don’t need 
to copy (the effect is the same as with “vhost tx zero copy”). As for the 
date, it still lacks a detailed design and feasibility analysis.

> 1) Yes, we have done some initial tests internally, with testpmd as 
> the vswitch instead of OVS-DPDK; and we were comparing with KNI for 
> exceptional path.
>
> [Wang Zhike]Can you please kindly indicate how to configure for KNI 
> mode? I would like to also compare it.
>

KNI is a vdev now. You can refer to this link: 
http://dpdk.org/doc/guides/nics/kni.html


> 2) We also see similar asymmetric result. For user->kernel path, it 
> not only copies data from mbuf to skb, but also might go above to tcp 
> stack (you can check using perf).
>
> [Wang Zhike] Yes, indeed. User->kernel path, tcp/ip related work is 
> done by vhost thread, while kernel to userthread, tcp/ip related work 
> is done by the app (my case netperf) in syscall.
>
>

To put tcp/ip rx into app thread, actually, might avoid that with a 
little change on tap driver. Currently, we use 
netif_rx/netif_receive_skb() to rx in tap, which could result in going 
up to the tcp/ip stack in the vhost kthread. Instead, we could backlog 
the packets into other cpu (application thread's cpu?).

Thanks,
Jianfeng

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [dpdk-users] VIRTIO for containers
  2017-10-25  7:34     ` Tan, Jianfeng
@ 2017-10-25  9:58       ` 王志克
  2017-10-26  8:53         ` Tan, Jianfeng
  0 siblings, 1 reply; 22+ messages in thread
From: 王志克 @ 2017-10-25  9:58 UTC (permalink / raw)
  To: Tan, Jianfeng, avi.cohen, users

Hi Jianfeng,

Thanks for your reply. Some feedback in line.

BR,
Wang Zhike

From: Tan, Jianfeng [mailto:jianfeng.tan@intel.com]
Sent: Wednesday, October 25, 2017 3:34 PM
To: 王志克; avi.cohen@huawei.com; users@dpdk.org
Subject: RE: VIRTIO for containers

Hi Zhike,

Welcome to submit a patch to fix the bug.

From the sender's view, there are still some tricky configurations to make it faster, like enabling LRO, so that we can defer TCP fragmentation to the NIC when sending out; similarly for checksum.
From the receiver's view, you might want to increase the ring size to get better performance. We are also looking at how to make “vhost tx zero copy” efficient.

[Wang Zhike] I once saw you mentioned that something like mmap solution may be used. Is it still on your roadmap? I am not sure whether it is same as the “vhost tx zero copy”.
Can I know the forecasted day that the optimization can be done? Some Linux kernel upstream module would be updated, or DPDK module? Just want to know which modules will be touched.

1) Yes, we have done some initial tests internally, with testpmd as the vswitch instead of OVS-DPDK; and we were comparing with KNI for exceptional path.
[Wang Zhike]Can you please kindly indicate how to configure for KNI mode? I would like to also compare it.

2) We also see similar asymmetric result. For user->kernel path, it not only copies data from mbuf to skb, but also might go above to tcp stack (you can check using perf).
[Wang Zhike] Yes, indeed.  User->kernel path, tcp/ip related work is done by vhost thread, while kernel to user  thread, tcp/ip related work is done by the app (my case netperf) in syscall.

Thanks,
Jianfeng

From: 王志克 [mailto:wangzhike@jd.com]
Sent: Tuesday, October 24, 2017 5:46 PM
To: Tan, Jianfeng <jianfeng.tan@intel.com<mailto:jianfeng.tan@intel.com>>; avi.cohen@huawei.com<mailto:avi.cohen@huawei.com>; users@dpdk.org<mailto:users@dpdk.org>
Subject: RE: VIRTIO for containers

Hi  Jianfeng,

It is proven that there is a SW bug in DPDK 17.05.2, which leads to this issue. I will submit a patch later this week.

Now I can succeed in configuring the container. I did some simple tests with netperf (one server and one client on different hosts), and some results comparing the container with kernel OVS:
1. From the netperf sender's view, the tx throughput increases by about 100%. On the sender, there is almost no packet loss. That means the kernel vhost thread transfers data from the kernel to userspace OVS+DPDK in a timely manner.
2. From the netperf receiver's view, the rx throughput increases by about 50%. On the receiver, packet loss happens on virtiouserx. There is no packet loss on the tap port. That means the kernel vhost thread is slow to transfer data from userspace OVS+DPDK to the kernel.

May I ask some questions? Thanks.

1) Did you have some benchmark data for performance?
2) Is there any explanation why the kernel vhost thread speed is different for two direction (from kernel to user, and vice versa)?

Welcome feedback if someone has such data.

Br,
Wang Zhike
From: 王志克
Sent: Tuesday, October 24, 2017 11:16 AM
To: 'Tan, Jianfeng'; avi.cohen@huawei.com<mailto:avi.cohen@huawei.com>; users@dpdk.org<mailto:users@dpdk.org>
Subject: RE: VIRTIO for containers

Thanks Jianfeng.

I finally realized that I used DPDK16.11 which does NOT support this function.

Then I use latest DPDK (17.05.2) and OVS (2.8.1), but still does not work.

ovs-vsctl add-port br0 virtiouser0 -- set Interface virtiouser0 type=dpdk options:dpdk-devargs=virtio_user0,path=/dev/vhost-net
ovs-vsctl: Error detected while setting up 'virtiouser0': could not add network device virtiouser0 to ofproto (No such device).  See ovs-vswitchd log for details.
ovs-vsctl: The default log directory is "/var/log/openvswitch".

lsmod |grep vhost
vhost_net              18152  0
vhost                  33338  1 vhost_net
macvtap                22363  1 vhost_net
tun                    27141  6 vhost_net

2017-10-23T19:00:42.743Z|00163|netdev_dpdk|INFO|Device 'virtio_user0,path=/dev/vhost-net' attached to DPDK
2017-10-23T19:00:42.743Z|00164|netdev_dpdk|WARN|Rx checksum offload is not supported on port 2
2017-10-23T19:00:42.743Z|00165|netdev_dpdk|ERR|Interface virtiouser0 MTU (1500) setup error: Invalid argument
2017-10-23T19:00:42.743Z|00166|netdev_dpdk|ERR|Interface virtiouser0(rxq:1 txq:1) configure error: Invalid argument
2017-10-23T19:00:42.743Z|00167|dpif_netdev|ERR|Failed to set interface virtiouser0 new configuration
2017-10-23T19:00:42.743Z|00168|bridge|WARN|could not add network device virtiouser0 to ofproto (No such device)

Which versions (dpdk and ovs) are you using? Thanks

Br,
Wang Zhike

From: Tan, Jianfeng [mailto:jianfeng.tan@intel.com]
Sent: Saturday, October 21, 2017 12:55 AM
To: 王志克; avi.cohen@huawei.com<mailto:avi.cohen@huawei.com>; dpdk-ovs@lists.01.org<mailto:dpdk-ovs@lists.01.org>; users@dpdk.org<mailto:users@dpdk.org>
Subject: Re: VIRTIO for containers


Hi Zhike,

On 10/20/2017 5:24 PM, 王志克 wrote:
I read this thread, and try to do the same way (legacy containers connect to ovs+dpdk). However, I meet following error when creating ovs port.

ovs-vsctl add-port br0 virtiouser0 -- set Interface virtiouser0 type=dpdk options:dpdk-devargs=net_virtio_user0,path=/dev/vhost-net
ovs-vsctl: Error detected while setting up 'virtiouser0': Error attaching device 'net_virtio_user0,path=/dev/vhost-net' to DPDK.  See ovs-vswitchd log for details.
ovs-vsctl: The default log directory is "/var/log/openvswitch".

If the file /dev/vhost-net exists, it should not try to connect() to it; instead it will use ioctls on it. So please check if you have the vhost and vhost-net kernel modules probed into the kernel.
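A quick way to verify the vhost-kernel backend is in place before adding the port (a sketch, not taken from the thread; run as root):

```shell
# Load vhost-net (this pulls in the vhost module as a dependency) and
# confirm the character device the virtio_user port will ioctl() on exists.
modprobe vhost-net
lsmod | grep vhost
ls -l /dev/vhost-net
```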

Thanks,
Jianfeng


Debug shows that it calls virtio_user_dev_init()->vhost_user_setup(), and failed in connect() with target /dev/vhost-net. The errno is ECONNREFUSED.
Below command indeed shows no one is listening.
lsof | grep vhost-net

In kernel OVS, I guess qemu-kvm would listen to /dev/vhost-net. But for OVS-DPDK and a container, what extra work needs to be done? Appreciate any help.

Br,
Wang Zhike




^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [dpdk-users] VIRTIO for containers
  2017-10-24  9:45   ` 王志克
@ 2017-10-25  7:34     ` Tan, Jianfeng
  2017-10-25  9:58       ` 王志克
  0 siblings, 1 reply; 22+ messages in thread
From: Tan, Jianfeng @ 2017-10-25  7:34 UTC (permalink / raw)
  To: 王志克, avi.cohen, users

Hi Zhike,

Welcome to submit a patch to fix the bug.

From the sender's view, there are still some tricky configurations to make it faster, like enabling LRO, so that we can defer TCP fragmentation to the NIC when sending out; similarly for checksum.
From the receiver's view, you might want to increase the ring size to get better performance. We are also looking at how to make “vhost tx zero copy” efficient.
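One knob for the ring size (a sketch: virtio_user accepts a queue_size devarg, and 1024 plus the bridge/port names below are example values, mirroring the commands used elsewhere in this thread):

```shell
# Request larger virtio rings when creating the port, instead of the
# default queue size.
ovs-vsctl add-port br0 virtiouser0 -- set Interface virtiouser0 type=dpdk \
    options:dpdk-devargs=virtio_user0,path=/dev/vhost-net,queue_size=1024
```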

1) Yes, we have done some initial tests internally, with testpmd as the vswitch instead of OVS-DPDK; and we were comparing with KNI for exceptional path.
2) We also see similar asymmetric result. For user->kernel path, it not only copies data from mbuf to skb, but also might go above to tcp stack (you can check using perf).

Thanks,
Jianfeng

From: 王志克 [mailto:wangzhike@jd.com]
Sent: Tuesday, October 24, 2017 5:46 PM
To: Tan, Jianfeng <jianfeng.tan@intel.com>; avi.cohen@huawei.com; users@dpdk.org
Subject: RE: VIRTIO for containers

Hi  Jianfeng,

It is proven that there is a SW bug in DPDK 17.05.2, which leads to this issue. I will submit a patch later this week.

Now I can succeed in configuring the container. I did some simple tests with netperf (one server and one client on different hosts), and some results comparing the container with kernel OVS:
1. From the netperf sender's view, the tx throughput increases by about 100%. On the sender, there is almost no packet loss. That means the kernel vhost thread transfers data from the kernel to userspace OVS+DPDK in a timely manner.
2. From the netperf receiver's view, the rx throughput increases by about 50%. On the receiver, packet loss happens on virtiouserx. There is no packet loss on the tap port. That means the kernel vhost thread is slow to transfer data from userspace OVS+DPDK to the kernel.

May I ask some questions? Thanks.

1) Did you have some benchmark data for performance?
2) Is there any explanation why the kernel vhost thread speed is different for two direction (from kernel to user, and vice versa)?

Welcome feedback if someone has such data.

Br,
Wang Zhike
From: 王志克
Sent: Tuesday, October 24, 2017 11:16 AM
To: 'Tan, Jianfeng'; avi.cohen@huawei.com<mailto:avi.cohen@huawei.com>; users@dpdk.org<mailto:users@dpdk.org>
Subject: RE: VIRTIO for containers

Thanks Jianfeng.

I finally realized that I used DPDK16.11 which does NOT support this function.

Then I use latest DPDK (17.05.2) and OVS (2.8.1), but still does not work.

ovs-vsctl add-port br0 virtiouser0 -- set Interface virtiouser0 type=dpdk options:dpdk-devargs=virtio_user0,path=/dev/vhost-net
ovs-vsctl: Error detected while setting up 'virtiouser0': could not add network device virtiouser0 to ofproto (No such device).  See ovs-vswitchd log for details.
ovs-vsctl: The default log directory is "/var/log/openvswitch".

lsmod |grep vhost
vhost_net              18152  0
vhost                  33338  1 vhost_net
macvtap                22363  1 vhost_net
tun                    27141  6 vhost_net

2017-10-23T19:00:42.743Z|00163|netdev_dpdk|INFO|Device 'virtio_user0,path=/dev/vhost-net' attached to DPDK
2017-10-23T19:00:42.743Z|00164|netdev_dpdk|WARN|Rx checksum offload is not supported on port 2
2017-10-23T19:00:42.743Z|00165|netdev_dpdk|ERR|Interface virtiouser0 MTU (1500) setup error: Invalid argument
2017-10-23T19:00:42.743Z|00166|netdev_dpdk|ERR|Interface virtiouser0(rxq:1 txq:1) configure error: Invalid argument
2017-10-23T19:00:42.743Z|00167|dpif_netdev|ERR|Failed to set interface virtiouser0 new configuration
2017-10-23T19:00:42.743Z|00168|bridge|WARN|could not add network device virtiouser0 to ofproto (No such device)

Which versions (dpdk and ovs) are you using? Thanks

Br,
Wang Zhike

From: Tan, Jianfeng [mailto:jianfeng.tan@intel.com]
Sent: Saturday, October 21, 2017 12:55 AM
To: 王志克; avi.cohen@huawei.com<mailto:avi.cohen@huawei.com>; dpdk-ovs@lists.01.org<mailto:dpdk-ovs@lists.01.org>; users@dpdk.org<mailto:users@dpdk.org>
Subject: Re: VIRTIO for containers


Hi Zhike,

On 10/20/2017 5:24 PM, 王志克 wrote:
I read this thread, and try to do the same way (legacy containers connect to ovs+dpdk). However, I meet following error when creating ovs port.

ovs-vsctl add-port br0 virtiouser0 -- set Interface virtiouser0 type=dpdk options:dpdk-devargs=net_virtio_user0,path=/dev/vhost-net
ovs-vsctl: Error detected while setting up 'virtiouser0': Error attaching device 'net_virtio_user0,path=/dev/vhost-net' to DPDK.  See ovs-vswitchd log for details.
ovs-vsctl: The default log directory is "/var/log/openvswitch".

If the file /dev/vhost-net exists, it should not try to connect() to it; instead it will use ioctls on it. So please check if you have the vhost and vhost-net kernel modules probed into the kernel.

Thanks,
Jianfeng



Debug shows that it calls virtio_user_dev_init()->vhost_user_setup(), and failed in connect() with target /dev/vhost-net. The errno is ECONNREFUSED.
Below command indeed shows no one is listening.
lsof | grep vhost-net

In kernel OVS, I guess qemu-kvm would listen to /dev/vhost-net. But for OVS-DPDK and a container, what extra work needs to be done? Appreciate any help.

Br,
Wang Zhike




^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [dpdk-users] VIRTIO for containers
  2017-10-20 16:54 ` Tan, Jianfeng
  2017-10-24  3:15   ` 王志克
@ 2017-10-24  9:45   ` 王志克
  2017-10-25  7:34     ` Tan, Jianfeng
  1 sibling, 1 reply; 22+ messages in thread
From: 王志克 @ 2017-10-24  9:45 UTC (permalink / raw)
  To: Tan, Jianfeng, avi.cohen, users

Hi  Jianfeng,

It is proven that there is a SW bug in DPDK 17.05.2, which leads to this issue. I will submit a patch later this week.

Now I can succeed in configuring the container. I did some simple tests with netperf (one server and one client on different hosts), and some results comparing the container with kernel OVS:
1. From the netperf sender's view, the tx throughput increases by about 100%. On the sender, there is almost no packet loss. That means the kernel vhost thread transfers data from the kernel to userspace OVS+DPDK in a timely manner.
2. From the netperf receiver's view, the rx throughput increases by about 50%. On the receiver, packet loss happens on virtiouserx. There is no packet loss on the tap port. That means the kernel vhost thread is slow to transfer data from userspace OVS+DPDK to the kernel.

May I ask some questions? Thanks.

1) Did you have some benchmark data for performance?
2) Is there any explanation why the kernel vhost thread speed is different for two direction (from kernel to user, and vice versa)?

Welcome feedback if someone has such data.

Br,
Wang Zhike
From: 王志克
Sent: Tuesday, October 24, 2017 11:16 AM
To: 'Tan, Jianfeng'; avi.cohen@huawei.com; users@dpdk.org
Subject: RE: VIRTIO for containers

Thanks Jianfeng.

I finally realized that I used DPDK16.11 which does NOT support this function.

Then I use latest DPDK (17.05.2) and OVS (2.8.1), but still does not work.

ovs-vsctl add-port br0 virtiouser0 -- set Interface virtiouser0 type=dpdk options:dpdk-devargs=virtio_user0,path=/dev/vhost-net
ovs-vsctl: Error detected while setting up 'virtiouser0': could not add network device virtiouser0 to ofproto (No such device).  See ovs-vswitchd log for details.
ovs-vsctl: The default log directory is "/var/log/openvswitch".

lsmod |grep vhost
vhost_net              18152  0
vhost                  33338  1 vhost_net
macvtap                22363  1 vhost_net
tun                    27141  6 vhost_net

2017-10-23T19:00:42.743Z|00163|netdev_dpdk|INFO|Device 'virtio_user0,path=/dev/vhost-net' attached to DPDK
2017-10-23T19:00:42.743Z|00164|netdev_dpdk|WARN|Rx checksum offload is not supported on port 2
2017-10-23T19:00:42.743Z|00165|netdev_dpdk|ERR|Interface virtiouser0 MTU (1500) setup error: Invalid argument
2017-10-23T19:00:42.743Z|00166|netdev_dpdk|ERR|Interface virtiouser0(rxq:1 txq:1) configure error: Invalid argument
2017-10-23T19:00:42.743Z|00167|dpif_netdev|ERR|Failed to set interface virtiouser0 new configuration
2017-10-23T19:00:42.743Z|00168|bridge|WARN|could not add network device virtiouser0 to ofproto (No such device)

Which versions (dpdk and ovs) are you using? Thanks

Br,
Wang Zhike

From: Tan, Jianfeng [mailto:jianfeng.tan@intel.com]
Sent: Saturday, October 21, 2017 12:55 AM
To: 王志克; avi.cohen@huawei.com<mailto:avi.cohen@huawei.com>; dpdk-ovs@lists.01.org<mailto:dpdk-ovs@lists.01.org>; users@dpdk.org<mailto:users@dpdk.org>
Subject: Re: VIRTIO for containers


Hi Zhike,

On 10/20/2017 5:24 PM, 王志克 wrote:
I read this thread, and try to do the same way (legacy containers connect to ovs+dpdk). However, I meet following error when creating ovs port.

ovs-vsctl add-port br0 virtiouser0 -- set Interface virtiouser0 type=dpdk options:dpdk-devargs=net_virtio_user0,path=/dev/vhost-net
ovs-vsctl: Error detected while setting up 'virtiouser0': Error attaching device 'net_virtio_user0,path=/dev/vhost-net' to DPDK.  See ovs-vswitchd log for details.
ovs-vsctl: The default log directory is "/var/log/openvswitch".

If the file /dev/vhost-net exists, it should not try to connect() to it; instead it will use ioctls on it. So please check if you have the vhost and vhost-net kernel modules probed into the kernel.

Thanks,
Jianfeng



Debug shows that it calls virtio_user_dev_init()->vhost_user_setup(), and failed in connect() with target /dev/vhost-net. The errno is ECONNREFUSED.
Below command indeed shows no one is listening.
lsof | grep vhost-net

In kernel OVS, I guess qemu-kvm would listen to /dev/vhost-net. But for OVS-DPDK and a container, what extra work needs to be done? Appreciate any help.

Br,
Wang Zhike




^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [dpdk-users] VIRTIO for containers
  2017-10-20 16:54 ` Tan, Jianfeng
@ 2017-10-24  3:15   ` 王志克
  2017-10-24  9:45   ` 王志克
  1 sibling, 0 replies; 22+ messages in thread
From: 王志克 @ 2017-10-24  3:15 UTC (permalink / raw)
  To: Tan, Jianfeng, avi.cohen, users

Thanks Jianfeng.

I finally realized that I used DPDK16.11 which does NOT support this function.

Then I use latest DPDK (17.05.2) and OVS (2.8.1), but still does not work.

ovs-vsctl add-port br0 virtiouser0 -- set Interface virtiouser0 type=dpdk options:dpdk-devargs=virtio_user0,path=/dev/vhost-net
ovs-vsctl: Error detected while setting up 'virtiouser0': could not add network device virtiouser0 to ofproto (No such device).  See ovs-vswitchd log for details.
ovs-vsctl: The default log directory is "/var/log/openvswitch".

lsmod |grep vhost
vhost_net              18152  0
vhost                  33338  1 vhost_net
macvtap                22363  1 vhost_net
tun                    27141  6 vhost_net

2017-10-23T19:00:42.743Z|00163|netdev_dpdk|INFO|Device 'virtio_user0,path=/dev/vhost-net' attached to DPDK
2017-10-23T19:00:42.743Z|00164|netdev_dpdk|WARN|Rx checksum offload is not supported on port 2
2017-10-23T19:00:42.743Z|00165|netdev_dpdk|ERR|Interface virtiouser0 MTU (1500) setup error: Invalid argument
2017-10-23T19:00:42.743Z|00166|netdev_dpdk|ERR|Interface virtiouser0(rxq:1 txq:1) configure error: Invalid argument
2017-10-23T19:00:42.743Z|00167|dpif_netdev|ERR|Failed to set interface virtiouser0 new configuration
2017-10-23T19:00:42.743Z|00168|bridge|WARN|could not add network device virtiouser0 to ofproto (No such device)

Which versions (dpdk and ovs) are you using? Thanks

Br,
Wang Zhike

From: Tan, Jianfeng [mailto:jianfeng.tan@intel.com]
Sent: Saturday, October 21, 2017 12:55 AM
To: 王志克; avi.cohen@huawei.com; dpdk-ovs@lists.01.org; users@dpdk.org
Subject: Re: VIRTIO for containers


Hi Zhike,

On 10/20/2017 5:24 PM, 王志克 wrote:
I read this thread, and try to do the same way (legacy containers connect to ovs+dpdk). However, I meet following error when creating ovs port.

ovs-vsctl add-port br0 virtiouser0 -- set Interface virtiouser0 type=dpdk options:dpdk-devargs=net_virtio_user0,path=/dev/vhost-net
ovs-vsctl: Error detected while setting up 'virtiouser0': Error attaching device 'net_virtio_user0,path=/dev/vhost-net' to DPDK.  See ovs-vswitchd log for details.
ovs-vsctl: The default log directory is "/var/log/openvswitch".

If the file /dev/vhost-net exists, it should not try to connect() to it; instead it will use ioctls on it. So please check if you have the vhost and vhost-net kernel modules probed into the kernel.

Thanks,
Jianfeng



Debug shows that it calls virtio_user_dev_init()->vhost_user_setup(), and failed in connect() with target /dev/vhost-net. The errno is ECONNREFUSED.
Below command indeed shows no one is listening.
lsof | grep vhost-net

In kernel OVS, I guess qemu-kvm would listen to /dev/vhost-net. But for OVS-DPDK and a container, what extra work needs to be done? Appreciate any help.

Br,
Wang Zhike




^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [dpdk-users] VIRTIO for containers
  2017-10-20  9:24 王志克
@ 2017-10-20 16:54 ` Tan, Jianfeng
  2017-10-24  3:15   ` 王志克
  2017-10-24  9:45   ` 王志克
  0 siblings, 2 replies; 22+ messages in thread
From: Tan, Jianfeng @ 2017-10-20 16:54 UTC (permalink / raw)
  To: 王志克, avi.cohen, dpdk-ovs, users

Hi Zhike,


On 10/20/2017 5:24 PM, 王志克 wrote:
> [dpdk-users] VIRTIO for containers
>
> I read this thread, and try to do the same way (legacy containers 
> connect to ovs+dpdk). However, I meet following error when creating 
> ovs port.
>
> ovs-vsctl add-port br0 virtiouser0 -- set Interface virtiouser0 
> type=dpdk options:dpdk-devargs=net_virtio_user0,path=/dev/vhost-net
>
> ovs-vsctl: Error detected while setting up 'virtiouser0': Error 
> attaching device 'net_virtio_user0,path=/dev/vhost-net' to DPDK. See 
> ovs-vswitchd log for details.
>
> ovs-vsctl: The default log directory is "/var/log/openvswitch".
>

If the file /dev/vhost-net exists, it should not try to connect() to it; 
instead it will use ioctls on it. So please check if you have the vhost 
and vhost-net kernel modules probed into the kernel.

Thanks,
Jianfeng

> Debug shows that it calls virtio_user_dev_init()->vhost_user_setup(), 
> and failed in connect() with target /dev/vhost-net. The errno is 
> ECONNREFUSED.
>
> Below command indeed shows no one is listening.
>
> lsof | grep vhost-net
>
>
> In kernel OVS, I guess qemu-kvm would listen to /dev/vhost-net. But 
> for OVS-DPDK and a container, what extra work needs to be done? Appreciate 
> any help.
>
> Br,
>
> Wang Zhike
>
>

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [dpdk-users] VIRTIO for containers
@ 2017-10-20  9:24 王志克
  2017-10-20 16:54 ` Tan, Jianfeng
  0 siblings, 1 reply; 22+ messages in thread
From: 王志克 @ 2017-10-20  9:24 UTC (permalink / raw)
  To: Tan, Jianfeng (jianfeng.tan@intel.com), avi.cohen, dpdk-ovs, users

I read this thread and tried to do the same (legacy containers connecting to OVS+DPDK). However, I met the following error when creating the OVS port.

ovs-vsctl add-port br0 virtiouser0 -- set Interface virtiouser0 type=dpdk options:dpdk-devargs=net_virtio_user0,path=/dev/vhost-net
ovs-vsctl: Error detected while setting up 'virtiouser0': Error attaching device 'net_virtio_user0,path=/dev/vhost-net' to DPDK.  See ovs-vswitchd log for details.
ovs-vsctl: The default log directory is "/var/log/openvswitch".

Debug shows that it calls virtio_user_dev_init()->vhost_user_setup(), and failed in connect() with target /dev/vhost-net. The errno is ECONNREFUSED.
Below command indeed shows no one is listening.
lsof | grep vhost-net

In kernel OVS, I guess qemu-kvm would listen to /dev/vhost-net. But for OVS-DPDK and a container, what extra work needs to be done? Appreciate any help.

Br,
Wang Zhike



> -----Original Message-----
> From: Avi Cohen (A) [mailto:avi.cohen at huawei.com]
> Sent: Sunday, July 9, 2017 11:32 PM
> To: Tan, Jianfeng; dpdk-ovs at lists.01.org; users at dpdk.org
> Subject: RE: VIRTIO for containers
>
> Thank you Jianfeng - please see my replies inline, marked [Avi Cohen (A)]
>
> > -----Original Message-----
> > From: Tan, Jianfeng [mailto:jianfeng.tan at intel.com]
> > Sent: Monday, 03 July, 2017 10:22 AM
> > To: Avi Cohen (A); dpdk-ovs at lists.01.org; users at dpdk.org
> > Subject: RE: VIRTIO for containers
> >
> > > -----Original Message-----
> > > From: Avi Cohen (A) [mailto:avi.cohen at huawei.com]
> > > Sent: Wednesday, June 28, 2017 2:45 PM
> > > To: Tan, Jianfeng; dpdk-ovs at lists.01.org; users at dpdk.org
> > > Subject: RE: VIRTIO for containers
> > >
> > > Thank you Jianfeng
> > >
> > > > -----Original Message-----
> > > > From: Tan, Jianfeng [mailto:jianfeng.tan at intel.com]
> > > > Sent: Tuesday, 27 June, 2017 5:22 PM
> > > > To: Avi Cohen (A); dpdk-ovs at lists.01.org; users at dpdk.org
> > > > Subject: Re: VIRTIO for containers
> > > >
> > > > On 6/26/2017 8:06 PM, Avi Cohen (A) wrote:
> > > > > Thank you Jianfeng
> > > > >
> > > > >> We have developed virtio-user + vhost-kernel as the backend. In
> > > > >> that scenario, you can add the tap interface into a container
> > > > >> network namespace.
> > > > >> And there's a vhost kthread to push the data out to user space.
> > > > >>
> > > > >> And I cannot guarantee the performance, as it has a different model
> > > > >> from VM (virtio) - OVS-DPDK (vhost).
> > > > >>
> > > > > [Avi Cohen (A)]
> > > > > Can you point me to a document on how to run this setup?
> > > >
> > > > Please refer to
> > > > http://dpdk.org/doc/guides/howto/virtio_user_as_exceptional_path.html
> > > >
> > > [Avi Cohen (A)]
> > > My setup includes a container and ovs-dpdk; I am still not sure about:
> > >  - How to set up the virtio backend port in ovs-dpdk ?
> >
> > For OVS-DPDK, you need a version above 2.7.0. The command below is used to
> > create a virtio-user port with a vhost-kernel backend:
> > # ovs-vsctl add-port br0 virtiouser0 -- set Interface virtiouser0 type=dpdk
> > options:dpdk-devargs=virtio_user0,path=/dev/vhost-net
> >
> > >  - How to set up the container with the virtio frontend ?
> >
> > No, containers will not hold the virtio frontend in this case. The above
> > ovs-vsctl command will generate a virtio-user port in OVS and a tap
> > interface in the kernel; you can assign the tap interface into the network
> > namespace of some container so that its networking flow goes through
> > OVS-DPDK and then to the outside.
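The tap-into-namespace assignment described above can be sketched as follows (the tap interface name, container name, and addressing are illustrative examples, not values fixed by OVS or DPDK):

```shell
# After the ovs-vsctl add-port above, a tap interface appears in the kernel;
# find its name with `ip link` (here we assume it is "tap0").
# Move it into the container's network namespace and bring it up there.
CONTAINER_PID=$(docker inspect -f '{{.State.Pid}}' mycontainer)  # example container
ip link set tap0 netns "$CONTAINER_PID"
nsenter -t "$CONTAINER_PID" -n ip addr add 172.16.0.2/24 dev tap0
nsenter -t "$CONTAINER_PID" -n ip link set tap0 up
```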

> [Avi Cohen (A)]
> Thank you Jianfeng
> I've tested it and the performance looks very good compared to native OVS.
> I have one more question:
> You wrote "there's a vhost kthread to push the data out to user space" -
> does that mean a copy from userspace to kernel (and vice versa), or is there a
> zero-copy mmap like in AF_PACKET, which handles TX/RX rings in userspace?
> Best Regards
> avi


So far it needs a data copy, at least on the kernel-to-user path; there is an experimental feature, named experimental_zcopytx, to avoid the data copy, but it is not very useful due to implementation limitations.
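The experimental_zcopytx feature referred to here is exposed as a module parameter of the vhost_net kernel module, so its state can be inspected, and set at module load time, from the host (the sysfs path below assumes vhost_net is built as a module):

```shell
# Read the current setting: 0 = copy on transmit, 1 = experimental zero-copy TX.
cat /sys/module/vhost_net/parameters/experimental_zcopytx
# Enabling it requires reloading the module with the parameter set:
modprobe -r vhost_net
modprobe vhost_net experimental_zcopytx=1
```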



Packet mmap (similar to AF_PACKET) is exactly the direction we were discussing for further optimization. In addition, an optimized vhost thread model (the current model is one thread per rx-tx queue pair) is also being considered.
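The one-thread-per-queue-pair model mentioned above can be observed directly on the host: each open vhost-net device spawns a kernel worker thread named vhost-<owner-pid>, so the thread count grows with the number of virtio-user ports (the command is an illustrative check, not part of the setup):

```shell
# List the vhost kernel worker threads; expect one "[vhost-<pid>]" entry
# per vhost-net device (i.e. per rx-tx queue pair currently in use).
ps -ef | grep '\[vhost-' | grep -v grep
```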



Thanks,
Jianfeng



> >
> > Thanks,
> > Jianfeng
> >
> > > Best Regards
> > > avi
> > >
> > > > Thanks,
> > > > Jianfeng
> > > >
> > > > > Best Regards
> > > > > avi
> > > > >>> I've tested the performance of a container connected to OVS-DPDK
> > > > >>> via vdev-af_packet and processed by virtual PMD, and its
> > > > >>> performance is good [uses mmap'ed to userspace - zero copy
> > > > >>> RX/TX ring buffer] but not as good as the performance of a VM
> > > > >>> connected to OVS-DPDK (@host) via vhost-user virtio.
> > > > >>> Best Regards
> > > > >>> avi
> > > > >>>
> > > > >>>> -----Original Message-----
> > > > >>>> From: Tan, Jianfeng [mailto:jianfeng.tan at intel.com]
> > > > >>>> Sent: Monday, 26 June, 2017 6:15 AM
> > > > >>>> To: Avi Cohen (A); dpdk-ovs at lists.01.org; users at dpdk.org
> > > > >>>> Subject: RE: VIRTIO for containers
> > > > >>>>
> > > > >>>> Hi Avi,
> > > > >>>>
> > > > >>>>> -----Original Message-----
> > > > >>>>> From: users [mailto:users-bounces at dpdk.org] On Behalf Of Avi Cohen (A)
> > > > >>>>> Sent: Sunday, June 25, 2017 11:13 PM
> > > > >>>>> To: dpdk-ovs at lists.01.org; users at dpdk.org
> > > > >>>>> Subject: [dpdk-users] VIRTIO for containers
> > > > >>>>>
> > > > >>>>> Hello,
> > > > >>>>> Does anyone know the status of this project
> > > > >>>>> http://dpdk.org/ml/archives/dev/2015-November/027732.html -
> > > > >>>>> Implementing a virtio device for containers ?
> > > > >>>> It has been upstreamed since v16.07. Here is a howto doc:
> > > > >>>> http://dpdk.org/doc/guides/howto/virtio_user_for_container_networking.html
> > > > >>>>
> > > > >>>> Thanks,
> > > > >>>> Jianfeng
> > > > >>>>
> > > > >>>>> Best Regards
> > > > >>>>> avi




^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2017-11-01  2:58 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-06-25 15:13 [dpdk-users] VIRTIO for containers Avi Cohen (A)
2017-06-26  3:14 ` Tan, Jianfeng
2017-06-26  6:16   ` Avi Cohen (A)
2017-06-26 11:58     ` Tan, Jianfeng
2017-06-26 12:06       ` Avi Cohen (A)
2017-06-27 14:22         ` Tan, Jianfeng
2017-06-28  6:45           ` Avi Cohen (A)
2017-07-03  7:21             ` Tan, Jianfeng
2017-07-09 15:32               ` Avi Cohen (A)
2017-07-10  3:28                 ` Tan, Jianfeng
2017-07-10  6:49                   ` Avi Cohen (A)
2017-10-20  9:24 王志克
2017-10-20 16:54 ` Tan, Jianfeng
2017-10-24  3:15   ` 王志克
2017-10-24  9:45   ` 王志克
2017-10-25  7:34     ` Tan, Jianfeng
2017-10-25  9:58       ` 王志克
2017-10-26  8:53         ` Tan, Jianfeng
2017-10-26 12:53           ` 王志克
2017-10-27  1:58             ` Tan, Jianfeng
2017-10-31  4:25           ` 王志克
2017-11-01  2:58             ` Tan, Jianfeng
