From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V1 1/4] test_plans/pvp_virtio_user_4k_pages_test_plan: modify testplan format
Date: Fri, 15 Apr 2022 17:04:27 +0800
Message-ID: <20220415090427.261176-1-weix.ling@intel.com>


Modify test_plans/pvp_virtio_user_4k_pages_test_plan.rst to use the new test plan format.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 .../pvp_virtio_user_4k_pages_test_plan.rst    | 108 +++++++++++++-----
 1 file changed, 80 insertions(+), 28 deletions(-)

diff --git a/test_plans/pvp_virtio_user_4k_pages_test_plan.rst b/test_plans/pvp_virtio_user_4k_pages_test_plan.rst
index a34a4422..6fed10b1 100644
--- a/test_plans/pvp_virtio_user_4k_pages_test_plan.rst
+++ b/test_plans/pvp_virtio_user_4k_pages_test_plan.rst
@@ -34,60 +34,112 @@
 vhost/virtio-user pvp with 4K-pages test plan
 =============================================
 
-Dpdk 19.02 add support for using virtio-user without hugepages. The --no-huge mode was augmented to use memfd-backed memory (on systems that support memfd), to allow using virtio-user-based NICs without hugepages.
+DPDK 19.02 added support for using virtio-user without hugepages.
+The --no-huge mode was augmented to use memfd-backed memory (on systems that support memfd),
+to allow using virtio-user-based NICs without hugepages.
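+
+Note: the memfd-backed --no-huge mode depends on kernel support for memfd_create(),
+available since Linux 3.17. Assuming such a kernel, a quick way to verify that the
+syscall is exported before running the tests below is::
+
+	# grep memfd_create /proc/kallsyms   # expect sys_memfd_create in the output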
+
+For more information about the dpdk-testpmd application, please refer to the DPDK documentation:
+https://doc.dpdk.org/guides/testpmd_app_ug/run_app.html
+
+For the virtio-user vdev parameters, please refer to the DPDK documentation:
+https://doc.dpdk.org/guides/nics/virtio.html#virtio-paths-selection-and-usage.
+
 
 Prerequisites
--------------
-Turn off transparent hugepage in grub by adding GRUB_CMDLINE_LINUX="transparent_hugepage=never"
+=============
+
+Topology
+--------
+Test flow: Vhost-user-->Virtio-user
+
+Hardware
+--------
+Supported NICs: ALL
+
+Software
+--------
+Trex: http://trex-tgn.cisco.com/trex/release/v2.26.tar.gz
+
+General set up
+--------------
+1. Compile DPDK::
+
+	# CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
+	# ninja -C <dpdk build dir> -j 110
+
+2. Get the PCI device ID and DMA device ID of the DUT. For example, 0000:18:00.0 is a PCI device ID, and 0000:00:04.0 and 0000:00:04.1 are DMA device IDs::
+
+	<dpdk dir># ./usertools/dpdk-devbind.py -s
+
+	Network devices using kernel driver
+	===================================
+	0000:18:00.0 'Device 159b' if=ens785f0 drv=ice unused=vfio-pci
+
+	DMA devices using kernel driver
+	===============================
+	0000:00:04.0 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+	0000:00:04.1 'Sky Lake-E CBDMA Registers 2021' drv=ioatdma unused=vfio-pci
+
+Test case
+=========
+
+Common steps
+------------
+1. Bind 1 NIC port and CBDMA channels to vfio-pci::
+
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci <DUT port pci device id>
+
+	For example, bind 1 NIC port::
+
+	<dpdk dir># ./usertools/dpdk-devbind.py -b vfio-pci 0000:af:00.0
 
 Test Case1: Basic test vhost/virtio-user split ring with 4K-pages
-=================================================================
+-----------------------------------------------------------------
+This case tests the split ring path with 4K-pages by forwarding packets with testpmd.
 
-1. Bind one port to vfio-pci, launch vhost::
+1. Bind 1 NIC port to vfio-pci, launch vhost::
 
-    modprobe vfio-pci
-    ./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1' -- -i --no-numa --socket-num=0
-    testpmd>start
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost \
+	--vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1' -- -i --no-numa --socket-num=0
+	testpmd> start
 
 2. Prepare tmpfs with 4K-pages::
 
-    mkdir /mnt/tmpfs_yinan
-    mount tmpfs /mnt/tmpfs_yinan -t tmpfs -o size=4G
+	mkdir /mnt/tmpfs_yinan
+	mount tmpfs /mnt/tmpfs_yinan -t tmpfs -o size=4G
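+
+	Optionally verify the mount; tmpfs uses 4K base pages unless a huge= option is given::
+
+	# grep /mnt/tmpfs_yinan /proc/mounts   # expect a tmpfs entry with size=4194304k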
 
 3. Launch virtio-user with 4K-pages::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \
-    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1 -- -i
-    testpmd>start
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \
+	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,queues=1 -- -i
+	testpmd> set fwd mac
+	testpmd> start
 
 4. Send packets with the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
 
-    testpmd>show port stats all
+	testpmd> show port stats all
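+
+	For example, the test packets can be generated with Scapy on the TG (a minimal
+	sketch: the dst MAC is the virtio-user MAC above; the interface name enp175s0f1
+	is an assumption, adjust it to your tester port; s - 38 leaves room for the
+	Ether/IP headers and FCS)::
+
+	# python3 -c "from scapy.all import *; sendp([Ether(dst='00:11:22:33:44:10')/IP()/Raw('x'*(s-38)) for s in [64,128,256,512,1024,1518]], iface='enp175s0f1')"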
 
 Test Case2: Basic test vhost/virtio-user packed ring with 4K-pages
-==================================================================
+------------------------------------------------------------------
+This case tests the packed ring path with 4K-pages by forwarding packets with testpmd.
 
 1. Bind 1 NIC port to vfio-pci, launch vhost::
 
-    modprobe vfio-pci
-    ./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost \
-    --vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1' -- -i --no-numa --socket-num=0
-    testpmd>start
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 3-4 -n 4 -m 1024 --no-huge --file-prefix=vhost \
+	--vdev 'net_vhost0,iface=/tmp/vhost-net,queues=1' -- -i --no-numa --socket-num=0
+	testpmd> start
 
 2. Prepare tmpfs with 4K-pages::
 
-    mkdir /mnt/tmpfs_yinan
-    mount tmpfs /mnt/tmpfs_yinan -t tmpfs -o size=4G
+	mkdir /mnt/tmpfs_yinan
+	mount tmpfs /mnt/tmpfs_yinan -t tmpfs -o size=4G
 
 3. Launch virtio-user with 4K-pages::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \
-    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,packed_vq=1,queues=1 -- -i
-    testpmd>start
+	<dpdk dir># ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-6 -n 4 --no-huge -m 1024 --file-prefix=virtio-user \
+	--vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net,packed_vq=1,queues=1 -- -i
+	testpmd> set fwd mac
+	testpmd> start
 
 4. Send packets with the packet generator with different packet sizes, including [64, 128, 256, 512, 1024, 1518], and check the throughput with the below command::
 
-    testpmd>show port stats all
\ No newline at end of file
+	testpmd> show port stats all
-- 
2.25.1

