From: "Tu, Lijuan" <lijuan.tu@intel.com>
To: "Wang, Yinan" <yinan.wang@intel.com>, "dts@dpdk.org" <dts@dpdk.org>
Cc: "Wang, Yinan" <yinan.wang@intel.com>
Subject: Re: [dts] [PATCH v1] test_plans/virtio_user_for_container_networking: add test plan for container networking with virtio-user
Date: Wed, 29 May 2019 02:13:18 +0000 [thread overview]
Message-ID: <8CE3E05A3F976642AAB0F4675D0AD20E0BA86501@SHSMSX101.ccr.corp.intel.com> (raw)
In-Reply-To: <20190521183350.40370-1-yinan.wang@intel.com>
Applied, thanks
> -----Original Message-----
> From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of Yinan
> Sent: Wednesday, May 22, 2019 2:34 AM
> To: dts@dpdk.org
> Cc: Wang, Yinan <yinan.wang@intel.com>
> Subject: [dts] [PATCH v1] test_plans/virtio_user_for_container_networking:
> add test plan for container networking with virtio-user
>
> From: Wang Yinan <yinan.wang@intel.com>
>
> Signed-off-by: Wang Yinan <yinan.wang@intel.com>
> ---
> ...ser_for_container_networking_test_plan.rst | 108 ++++++++++++++++++
> 1 file changed, 108 insertions(+)
> create mode 100644
> test_plans/virtio_user_for_container_networking_test_plan.rst
>
> diff --git a/test_plans/virtio_user_for_container_networking_test_plan.rst
> b/test_plans/virtio_user_for_container_networking_test_plan.rst
> new file mode 100644
> index 0000000..2d68f5f
> --- /dev/null
> +++ b/test_plans/virtio_user_for_container_networking_test_plan.rst
> @@ -0,0 +1,108 @@
> +.. Copyright (c) <2019>, Intel Corporation
> + All rights reserved.
> +
> + Redistribution and use in source and binary forms, with or without
> + modification, are permitted provided that the following conditions
> + are met:
> +
> + - Redistributions of source code must retain the above copyright
> + notice, this list of conditions and the following disclaimer.
> +
> + - Redistributions in binary form must reproduce the above copyright
> + notice, this list of conditions and the following disclaimer in
> + the documentation and/or other materials provided with the
> + distribution.
> +
> + - Neither the name of Intel Corporation nor the names of its
> + contributors may be used to endorse or promote products derived
> + from this software without specific prior written permission.
> +
> + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
> + FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> + COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> + INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
> + (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
> + SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> + HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
> + STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
> + ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
> + OF THE POSSIBILITY OF SUCH DAMAGE.
> +
> +==============================================
> +Virtio_user for container networking test plan
> +==============================================
> +
> +Description
> +===========
> +
> +Containers are becoming more and more popular because of strengths such as
> +low overhead, fast boot-up time, and easy deployment.
> +Virtio, in essence, is a shared-memory (shm) based solution for transmitting
> +and receiving packets. How is memory shared? In a VM's case, QEMU always
> +shares the whole physical memory layout of the VM with the vhost backend.
> +But it is not feasible for a container, as a process, to share all of its
> +virtual memory regions with the backend. So only certain virtual memory
> +regions (i.e., the hugepages initialized by DPDK) are shared with the
> +backend, which restricts packet transmission and reception to addresses
> +within these areas.
> +
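> +As an illustrative sketch (paths, MAC address, and mount points below are
> +examples only, taken from the test cases that follow), the sharing works
> +because the same hugetlbfs mount is visible to both sides, and the container
> +application attaches to the backend through a virtio-user vdev pointing at
> +the vhost-user socket::
> +
> +    # host: hugepage mount that is later bind-mounted into the container
> +    mount -t hugetlbfs nodev /mnt/huge
> +    # container: virtio-user port backed by the shared vhost-net socket
> +    --vdev=virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net
> +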
> +Limitations
> +-----------
> +This solution has the following limitations:
> + * It cannot work with the --huge-unlink option, as we need to reopen the
> +   hugepage file to share it with the vhost backend.
> + * It cannot work with the --no-huge option. Currently, DPDK uses anonymous
> +   mapping under this option, which cannot be reopened to share with the
> +   vhost backend.
> + * It cannot work when there are more than VHOST_MEMORY_MAX_NREGIONS(8)
> +   hugepages. If you have more regions (especially when 2MB hugepages are
> +   used), the --single-file-segments option can help to reduce the number
> +   of shared files (a minimal sketch follows this list).
> + * Applications should not use a file name like HUGEFILE_FMT ("%smap_%d"),
> +   as that causes confusion when sharing hugepage files with the backend
> +   by name.
> + * Root privilege is a must. DPDK resolves physical addresses of hugepages,
> +   which seems unnecessary, and some discussions are going on to remove
> +   this restriction.
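> +
> +As a minimal sketch of the workaround for the region-count limitation
> +(core list, memory size, and file prefix are examples borrowed from Test
> +Case 1, not required values)::
> +
> +    ./testpmd -l 3-4 -n 4 -m 1024 --no-pci --single-file-segments \
> +    --file-prefix=container \
> +    --vdev=virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net -- -i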
> +
> +Test Case 1: packet forward test for container networking
> +=========================================================
> +
> +1. Mount hugepage::
> +
> + mkdir /mnt/huge
> + mount -t hugetlbfs nodev /mnt/huge
> +
> +2. Bind one port to igb_uio, launch vhost::
> +
> + ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --file-prefix=vhost \
> + --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i
> +
> +3. Start a container instance with a virtio-user port::
> +
> + docker run -i -t --privileged -v /root/dpdk/vhost-net:/tmp/vhost-net -v /mnt/huge:/dev/hugepages \
> + -v /root/dpdk:/root/dpdk dpdk_image ./root/dpdk/x86_64-native-linuxapp-gcc/app/testpmd -l 3-4 -n 4 -m 1024 --no-pci --file-prefix=container \
> + --vdev=virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net -- -i
> +
> +4. Send packets from the packet generator with different packet sizes,
> +   including [64, 128, 256, 512, 1024, 1518], and check that virtio can
> +   receive and forward packets correctly in the container (a possible
> +   forwarding sequence is sketched after the stats command)::
> +
> + testpmd>show port stats all
> +
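> +A possible sequence to generate the stats above (assuming MAC forwarding
> +mode in the container testpmd; the forwarding mode is not mandated by this
> +test plan) is::
> +
> +    testpmd>set fwd mac
> +    testpmd>start
> +    testpmd>show port stats all
> +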
> +Test Case 2: packet forward with multi-queues for container networking
> +=======================================================================
> +
> +1. Mount hugepage::
> +
> + mkdir /mnt/huge
> + mount -t hugetlbfs nodev /mnt/huge
> +
> +2. Bind one port to igb_uio, launch vhost::
> +
> + ./testpmd -l 1-3 -n 4 --socket-mem 1024,1024 --file-prefix=vhost \
> + --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i \
> + --nb-cores=2
> +
> +3. Start a container instance with a virtio-user port::
> +
> + docker run -i -t --privileged -v /root/dpdk/vhost-net:/tmp/vhost-net -v /mnt/huge:/dev/hugepages \
> + -v /root/dpdk:/root/dpdk dpdk_image ./root/dpdk/x86_64-native-linuxapp-gcc/app/testpmd -l 4-6 -n 4 -m 1024 --no-pci --file-prefix=container \
> + --vdev=virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=2 -- -i --rxq=2 --txq=2 --nb-cores=2
> +
> +4. Send packets from the packet generator with different packet sizes,
> +   including [64, 128, 256, 512, 1024, 1518], and check that virtio can
> +   receive and forward packets in the container with two queues::
> +
> + testpmd>show port stats all
> + testpmd>stop
> --
> 2.17.1