test suite reviews and discussions
* [dts] [PATCH V1]test_plans: update virtio related test plans
@ 2020-09-07  8:22 Xiao Qimai
  2020-09-07  8:37 ` Xiao, QimaiX
  2020-09-10  1:10 ` Tu, Lijuan
  0 siblings, 2 replies; 3+ messages in thread
From: Xiao Qimai @ 2020-09-07  8:22 UTC (permalink / raw)
  To: dts; +Cc: Xiao Qimai


1. Remove the vlan parameter from the qemu commands, since newer QEMU
versions no longer support it;
2. Remove --socket-mem and --legacy-mem from the testpmd commands

Signed-off-by: Xiao Qimai <qimaix.xiao@intel.com>
---
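For reference, the `vlan=` option dropped throughout this patch was deprecated and then removed upstream (around QEMU 3.0). A hedged sketch of the old and new forms follows; the MAC address and port forward are just the illustrative values used in these test plans:

```shell
# Old form (rejected by recent QEMU): devices grouped by vlan= hubs
#   -net nic,vlan=2,macaddr=00:00:00:08:e8:aa \
#   -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22

# Form used throughout this patch: drop vlan=, the -net pair is joined implicitly
#   -net nic,macaddr=00:00:00:08:e8:aa \
#   -net user,hostfwd=tcp:127.0.0.1:6001-:22

# Equivalent explicit -netdev/-device pairing (the style QEMU recommends;
# the id "net0" and NIC model "e1000" are illustrative choices):
#   -netdev user,id=net0,hostfwd=tcp:127.0.0.1:6001-:22 \
#   -device e1000,netdev=net0,mac=00:00:00:08:e8:aa
```

Similarly, dropping `--socket-mem 1024,1024 --legacy-mem` lets testpmd use DPDK's default dynamic memory mode instead of pre-reserving legacy hugepage memory per NUMA node; the remaining EAL flags in each command are unchanged.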
 test_plans/dpdk_gro_lib_test_plan.rst         |  16 +--
 test_plans/dpdk_gso_lib_test_plan.rst         |  12 +-
 ...ack_multi_paths_port_restart_test_plan.rst |  54 ++++----
 .../loopback_multi_queues_test_plan.rst       | 116 ++++++++---------
 ...back_virtio_user_server_mode_test_plan.rst |  84 ++++++------
 .../perf_virtio_user_loopback_test_plan.rst   |  54 ++++----
 test_plans/perf_virtio_user_pvp_test_plan.rst |  32 ++---
 .../pvp_diff_qemu_version_test_plan.rst       |   8 +-
 ...emu_multi_paths_port_restart_test_plan.rst |  26 ++--
 test_plans/pvp_share_lib_test_plan.rst        |   6 +-
 .../pvp_vhost_user_reconnect_test_plan.rst    |  76 +++++------
 test_plans/pvp_virtio_bonding_test_plan.rst   |  12 +-
 ...pvp_virtio_user_2M_hugepages_test_plan.rst |   8 +-
 ...er_multi_queues_port_restart_test_plan.rst |  60 ++++-----
 .../vdev_primary_secondary_test_plan.rst      |   4 +-
 test_plans/vhost_1024_ethports_test_plan.rst  |   4 +-
 test_plans/vhost_cbdma_test_plan.rst          |   8 +-
 .../vhost_dequeue_zero_copy_test_plan.rst     |  64 ++++-----
 .../vhost_multi_queue_qemu_test_plan.rst      |   6 +-
 test_plans/vhost_pmd_xstats_test_plan.rst     |  66 +++++-----
 test_plans/vhost_qemu_mtu_test_plan.rst       |   2 +-
 .../vhost_user_live_migration_test_plan.rst   |  32 ++---
 .../virtio_event_idx_interrupt_test_plan.rst  |  28 ++--
 .../virtio_pvp_regression_test_plan.rst       |  28 ++--
 ...tio_user_as_exceptional_path_test_plan.rst |   6 +-
 ...ser_for_container_networking_test_plan.rst |   4 +-
 test_plans/vm2vm_virtio_pmd_test_plan.rst     |  16 +--
 test_plans/vm2vm_virtio_user_test_plan.rst    | 122 +++++++++---------
 28 files changed, 477 insertions(+), 477 deletions(-)

diff --git a/test_plans/dpdk_gro_lib_test_plan.rst b/test_plans/dpdk_gro_lib_test_plan.rst
index 3e06906a..ea0244c1 100644
--- a/test_plans/dpdk_gro_lib_test_plan.rst
+++ b/test_plans/dpdk_gro_lib_test_plan.rst
@@ -130,7 +130,7 @@ Test Case1: DPDK GRO lightmode test with tcp/ipv4 traffic
 2. Bind nic1 to igb_uio, launch vhost-user with testpmd and set flush interval to 1::
 
     ./dpdk-devbind.py -b igb_uio xx:xx.x
-    ./testpmd -l 2-4 -n 4 --socket-mem 1024,1024  --legacy-mem \
+    ./testpmd -l 2-4 -n 4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024
     testpmd>set fwd csum
     testpmd>stop
@@ -151,7 +151,7 @@ Test Case1: DPDK GRO lightmode test with tcp/ipv4 traffic
     taskset -c 13 qemu-system-x86_64 -name us-vhost-vm1 \
        -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
        -numa node,memdev=mem \
-       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
+       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6001-:22 \
        -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
        -chardev socket,id=char0,path=./vhost-net \
        -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
@@ -182,7 +182,7 @@ Test Case2: DPDK GRO heavymode test with tcp/ipv4 traffic
 2. Bind nic1 to igb_uio, launch vhost-user with testpmd and set flush interval to 2::
 
     ./dpdk-devbind.py -b igb_uio xx:xx.x
-    ./testpmd -l 2-4 -n 4 --socket-mem 1024,1024  --legacy-mem \
+    ./testpmd -l 2-4 -n 4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024
     testpmd>set fwd csum
     testpmd>stop
@@ -203,7 +203,7 @@ Test Case2: DPDK GRO heavymode test with tcp/ipv4 traffic
     taskset -c 13 qemu-system-x86_64 -name us-vhost-vm1 \
        -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
        -numa node,memdev=mem \
-       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
+       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6001-:22 \
        -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
        -chardev socket,id=char0,path=./vhost-net \
        -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
@@ -234,7 +234,7 @@ Test Case3: DPDK GRO heavymode_flush4 test with tcp/ipv4 traffic
 2. Bind nic1 to igb_uio, launch vhost-user with testpmd and set flush interval to 4::
 
     ./dpdk-devbind.py -b igb_uio xx:xx.x
-    ./testpmd -l 2-4 -n 4 --socket-mem 1024,1024  --legacy-mem \
+    ./testpmd -l 2-4 -n 4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024
     testpmd>set fwd csum
     testpmd>stop
@@ -255,7 +255,7 @@ Test Case3: DPDK GRO heavymode_flush4 test with tcp/ipv4 traffic
     taskset -c 13 qemu-system-x86_64 -name us-vhost-vm1 \
        -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
        -numa node,memdev=mem \
-       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
+       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6001-:22 \
        -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
        -chardev socket,id=char0,path=./vhost-net \
        -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
@@ -301,7 +301,7 @@ Vxlan topology
 2. Bind nic1 to igb_uio, launch vhost-user with testpmd and set flush interval to 4::
 
     ./dpdk-devbind.py -b igb_uio xx:xx.x
-    ./testpmd -l 2-4 -n 4 --socket-mem 1024,1024  --legacy-mem \
+    ./testpmd -l 2-4 -n 4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024
     testpmd>set fwd csum
     testpmd>stop
@@ -325,7 +325,7 @@ Vxlan topology
     taskset -c 13 qemu-system-x86_64 -name us-vhost-vm1 \
        -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
        -numa node,memdev=mem \
-       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
+       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6001-:22 \
        -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
        -chardev socket,id=char0,path=./vhost-net \
        -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
diff --git a/test_plans/dpdk_gso_lib_test_plan.rst b/test_plans/dpdk_gso_lib_test_plan.rst
index 8de5f56a..be1bdd20 100644
--- a/test_plans/dpdk_gso_lib_test_plan.rst
+++ b/test_plans/dpdk_gso_lib_test_plan.rst
@@ -99,7 +99,7 @@ Test Case1: DPDK GSO test with tcp traffic
 2. Bind nic1 to igb_uio, launch vhost-user with testpmd::
 
     ./dpdk-devbind.py -b igb_uio xx:xx.x       # xx:xx.x is the pci addr of nic1
-    ./testpmd -l 2-4 -n 4 --socket-mem 1024,1024  --legacy-mem \
+    ./testpmd -l 2-4 -n 4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024
     testpmd>set fwd csum
     testpmd>stop
@@ -119,7 +119,7 @@ Test Case1: DPDK GSO test with tcp traffic
     qemu-system-x86_64 -name us-vhost-vm1 \
        -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
        -numa node,memdev=mem \
-       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
+       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6001-:22 \
        -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
        -chardev socket,id=char0,path=./vhost-net \
        -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
@@ -159,7 +159,7 @@ Test Case3: DPDK GSO test with vxlan traffic
 2. Bind nic1 to igb_uio, launch vhost-user with testpmd::
 
     ./dpdk-devbind.py -b igb_uio xx:xx.x
-    ./testpmd -l 2-4 -n 4 --socket-mem 1024,1024  --legacy-mem \
+    ./testpmd -l 2-4 -n 4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024
     testpmd>set fwd csum
     testpmd>stop
@@ -181,7 +181,7 @@ Test Case3: DPDK GSO test with vxlan traffic
     qemu-system-x86_64 -name us-vhost-vm1 \
        -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
        -numa node,memdev=mem \
-       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
+       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6001-:22 \
        -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
        -chardev socket,id=char0,path=./vhost-net \
        -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
@@ -213,7 +213,7 @@ Test Case4: DPDK GSO test with gre traffic
 2. Bind nic1 to igb_uio, launch vhost-user with testpmd::
 
     ./dpdk-devbind.py -b igb_uio xx:xx.x
-    ./testpmd -l 2-4 -n 4 --socket-mem 1024,1024  --legacy-mem \
+    ./testpmd -l 2-4 -n 4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024
     testpmd>set fwd csum
     testpmd>stop
@@ -235,7 +235,7 @@ Test Case4: DPDK GSO test with gre traffic
     qemu-system-x86_64 -name us-vhost-vm1 \
        -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
        -numa node,memdev=mem \
-       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
+       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6001-:22 \
        -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
        -chardev socket,id=char0,path=./vhost-net \
        -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
diff --git a/test_plans/loopback_multi_paths_port_restart_test_plan.rst b/test_plans/loopback_multi_paths_port_restart_test_plan.rst
index 3da94ade..d9a8d304 100644
--- a/test_plans/loopback_multi_paths_port_restart_test_plan.rst
+++ b/test_plans/loopback_multi_paths_port_restart_test_plan.rst
@@ -45,14 +45,14 @@ Test Case 1: loopback test with packed ring mergeable path
 1. Launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
+    ./testpmd -n 4 -l 2-4  --no-pci \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1,in_order=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -86,14 +86,14 @@ Test Case 2: loopback test with packed ring non-mergeable path
 1. Launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
+    ./testpmd -n 4 -l 2-4  --no-pci \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -127,14 +127,14 @@ Test Case 3: loopback test with packed ring inorder mergeable path
 1. Launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
+    ./testpmd -n 4 -l 2-4  --no-pci \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1,in_order=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -168,14 +168,14 @@ Test Case 4: loopback test with packed ring inorder non-mergeable path
 1. Launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
+    ./testpmd -n 4 -l 2-4  --no-pci \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
     -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -209,14 +209,14 @@ Test Case 5: loopback test with split ring inorder mergeable path
 1. Launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
+    ./testpmd -n 4 -l 2-4  --no-pci \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=1,mrg_rxbuf=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -250,14 +250,14 @@ Test Case 6: loopback test with split ring inorder non-mergeable path
 1. Launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
+    ./testpmd -n 4 -l 2-4  --no-pci \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=1,mrg_rxbuf=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -291,14 +291,14 @@ Test Case 7: loopback test with split ring mergeable path
 1. Launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
+    ./testpmd -n 4 -l 2-4  --no-pci \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -332,14 +332,14 @@ Test Case 8: loopback test with split ring non-mergeable path
 1. Launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
+    ./testpmd -n 4 -l 2-4  --no-pci \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0,vectorized=1 \
     -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -373,14 +373,14 @@ Test Case 9: loopback test with split ring vector_rx path
 1. Launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
+    ./testpmd -n 4 -l 2-4  --no-pci \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0,vectorized=1 \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
diff --git a/test_plans/loopback_multi_queues_test_plan.rst b/test_plans/loopback_multi_queues_test_plan.rst
index 635b0703..0cea2b11 100644
--- a/test_plans/loopback_multi_queues_test_plan.rst
+++ b/test_plans/loopback_multi_queues_test_plan.rst
@@ -45,15 +45,15 @@ Test Case 1: loopback with virtio 1.1 mergeable path using 1 queue and 8 queues
 1. Launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -76,15 +76,15 @@ Test Case 1: loopback with virtio 1.1 mergeable path using 1 queue and 8 queues
 6. Launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./testpmd -l 1-9 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
     -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 7. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 10-18 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=1,in_order=0 \
     -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -105,15 +105,15 @@ Test Case 2: loopback with virtio 1.1 non-mergeable path using 1 queue and 8 que
 1. Launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -136,15 +136,15 @@ Test Case 2: loopback with virtio 1.1 non-mergeable path using 1 queue and 8 que
 6. Launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./testpmd -l 1-9 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
     -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 7. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 10-18 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=0,in_order=0 \
     -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -165,15 +165,15 @@ Test Case 3: loopback with virtio 1.0 inorder mergeable path using 1 queue and 8
 1. Launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -196,15 +196,15 @@ Test Case 3: loopback with virtio 1.0 inorder mergeable path using 1 queue and 8
 6. Launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./testpmd -l 1-9 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
     -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 7. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 10-18 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -225,15 +225,15 @@ Test Case 4: loopback with virtio 1.0 inorder non-mergeable path using 1 queue a
 1. Launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=0,in_order=1 \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -256,15 +256,15 @@ Test Case 4: loopback with virtio 1.0 inorder non-mergeable path using 1 queue a
 6. Launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./testpmd -l 1-9 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
     -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 7. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 10-18 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=0,in_order=1 \
     -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -285,15 +285,15 @@ Test Case 5: loopback with virtio 1.0 mergeable path using 1 queue and 8 queues
 1. Launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=1,in_order=0 \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -316,15 +316,15 @@ Test Case 5: loopback with virtio 1.0 mergeable path using 1 queue and 8 queues
 6. Launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./testpmd -l 1-9 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
     -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 7. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 10-18 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=1,in_order=0 \
     -- -i --enable-hw-vlan-strip --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -345,15 +345,15 @@ Test Case 6: loopback with virtio 1.0 non-mergeable path using 1 queue and 8 que
 1. Launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=0,in_order=0,vectorized=1 \
     -- -i --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -376,15 +376,15 @@ Test Case 6: loopback with virtio 1.0 non-mergeable path using 1 queue and 8 que
 6. Launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./testpmd -l 1-9 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
     -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 7. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 10-18 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1 \
     -- -i --enable-hw-vlan-strip --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -405,15 +405,15 @@ Test Case 7: loopback with virtio 1.0 vector_rx path using 1 queue and 8 queues
 1. Launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=0,in_order=0,vectorized=1 \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -436,15 +436,15 @@ Test Case 7: loopback with virtio 1.0 vector_rx path using 1 queue and 8 queues
 6. Launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./testpmd -l 1-9 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
     -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 7. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 10-18 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1 \
     -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -465,15 +465,15 @@ Test Case 8: loopback with virtio 1.1 inorder mergeable path using 1 queue and 8
 1. Launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -496,15 +496,15 @@ Test Case 8: loopback with virtio 1.1 inorder mergeable path using 1 queue and 8
 6. Launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./testpmd -l 1-9 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
     -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 7. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 10-18 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -525,14 +525,14 @@ Test Case 9: loopback with virtio 1.1 inorder non-mergeable path using 1 queue a
 1. Launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
+    ./testpmd -l 1-2 -n 4 --no-pci --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
     -- -i --rx-offloads=0x10 --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -561,8 +561,8 @@ Test Case 9: loopback with virtio 1.1 inorder non-mergeable path using 1 queue a
 
 7. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 10-18 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
     -- -i --rx-offloads=0x10 --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -590,8 +590,8 @@ Test Case 10: loopback with virtio 1.1 vectorized path using 1 queue and 8 queue
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -614,15 +614,15 @@ Test Case 10: loopback with virtio 1.1 vectorized path using 1 queue and 8 queue
 6. Launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./testpmd -l 1-9 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
     -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 7. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 10-18 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
     -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
     testpmd>set fwd mac
diff --git a/test_plans/loopback_virtio_user_server_mode_test_plan.rst b/test_plans/loopback_virtio_user_server_mode_test_plan.rst
index 1fcbf5d6..f30e3a55 100644
--- a/test_plans/loopback_virtio_user_server_mode_test_plan.rst
+++ b/test_plans/loopback_virtio_user_server_mode_test_plan.rst
@@ -44,14 +44,14 @@ Test Case 1: Basic test for packed ring server mode
 
 1. Launch virtio-user as server mode::
 
-    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -l 1-2 -n 4 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,queues=1,packed_vq=1 -- -i --rxq=1 --txq=1 --no-numa
     >set fwd mac
     >start
 
 2. Launch vhost as client mode::
 
-    ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
+    ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'net_vhost0,iface=/tmp/sock0,client=1,queues=1' -- -i --rxq=1 --txq=1 --nb-cores=1
     >set fwd mac
     >start tx_first 32
@@ -65,14 +65,14 @@ Test Case 2:  Basic test for split ring server mode
 
 1. Launch virtio-user as server mode::
 
-    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -l 1-2 -n 4 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,queues=1 -- -i --rxq=1 --txq=1 --no-numa
     >set fwd mac
     >start
 
 2. Launch vhost as client mode::
 
-    ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
+    ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'net_vhost0,iface=/tmp/sock0,client=1,queues=1' -- -i --rxq=1 --txq=1 --nb-cores=1
     >set fwd mac
     >start tx_first 32
@@ -87,14 +87,14 @@ Test Case 3: loopback reconnect test with split ring mergeable path and server m
 1. launch vhost as client mode with 2 queues::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
+    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start
 
 2. Launch virtio-user as server mode with 2 queues::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=1,in_order=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -107,7 +107,7 @@ Test Case 3: loopback reconnect test with split ring mergeable path and server m
 
 4. Relaunch vhost and send packets::
 
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
+    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start tx_first 32
@@ -129,7 +129,7 @@ Test Case 3: loopback reconnect test with split ring mergeable path and server m
 
 8. Relaunch virtio-user and send packets::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=1,in_order=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -159,14 +159,14 @@ Test Case 4: loopback reconnect test with split ring inorder mergeable path and
 1. launch vhost as client mode with 2 queues::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
+    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start
 
 2. Launch virtio-user as server mode with 2 queues::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=1,in_order=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -179,7 +179,7 @@ Test Case 4: loopback reconnect test with split ring inorder mergeable path and
 
 4. Relaunch vhost and send packets::
 
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
+    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start tx_first 32
@@ -201,7 +201,7 @@ Test Case 4: loopback reconnect test with split ring inorder mergeable path and
 
 8. Relaunch virtio-user and send packets::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=1,in_order=1\
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -231,14 +231,14 @@ Test Case 5: loopback reconnect test with split ring inorder non-mergeable path
 1. launch vhost as client mode with 2 queues::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
+    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start
 
 2. Launch virtio-user as server mode with 2 queues::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=0,in_order=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -251,7 +251,7 @@ Test Case 5: loopback reconnect test with split ring inorder non-mergeable path
 
 4. Relaunch vhost and send packets::
 
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
+    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start tx_first 32
@@ -273,7 +273,7 @@ Test Case 5: loopback reconnect test with split ring inorder non-mergeable path
 
 8. Relaunch virtio-user and send packets::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=0,in_order=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -303,14 +303,14 @@ Test Case 6: loopback reconnect test with split ring non-mergeable path and serv
 1. launch vhost as client mode with 2 queues::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
+    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start
 
 2. Launch virtio-user as server mode with 2 queues::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=0,in_order=0,vectorized=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -323,7 +323,7 @@ Test Case 6: loopback reconnect test with split ring non-mergeable path and serv
 
 4. Relaunch vhost and send packets::
 
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
+    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start tx_first 32
@@ -345,7 +345,7 @@ Test Case 6: loopback reconnect test with split ring non-mergeable path and serv
 
 8. Relaunch virtio-user and send packets::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=0,in_order=0,vectorized=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -375,14 +375,14 @@ Test Case 7: loopback reconnect test with split ring vector_rx path and server m
 1. launch vhost as client mode with 2 queues::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
+    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start
 
 2. Launch virtio-user as server mode with 2 queues::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=0,in_order=0,vectorized=1 \
     -- -i --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -395,7 +395,7 @@ Test Case 7: loopback reconnect test with split ring vector_rx path and server m
 
 4. Relaunch vhost and send packets::
 
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
+    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start tx_first 32
@@ -417,7 +417,7 @@ Test Case 7: loopback reconnect test with split ring vector_rx path and server m
 
 8. Relaunch virtio-user and send packets::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=0,in_order=0,vectorized=1 \
     -- -i --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -447,14 +447,14 @@ Test Case 8: loopback reconnect test with packed ring mergeable path and server
 1. launch vhost as client mode with 2 queues::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
+    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start
 
 2. Launch virtio-user as server mode with 2 queues::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -467,7 +467,7 @@ Test Case 8: loopback reconnect test with packed ring mergeable path and server
 
 4. Relaunch vhost and send packets::
 
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
+    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start tx_first 32
@@ -489,7 +489,7 @@ Test Case 8: loopback reconnect test with packed ring mergeable path and server
 
 8. Relaunch virtio-user and send packets::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -519,14 +519,14 @@ Test Case 9: loopback reconnect test with packed ring non-mergeable path and ser
 1. launch vhost as client mode with 2 queues::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
+    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start
 
 2. Launch virtio-user as server mode with 2 queues::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -539,7 +539,7 @@ Test Case 9: loopback reconnect test with packed ring non-mergeable path and ser
 
 4. Relaunch vhost and send packets::
 
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
+    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start tx_first 32
@@ -561,7 +561,7 @@ Test Case 9: loopback reconnect test with packed ring non-mergeable path and ser
 
 8. Relaunch virtio-user and send packets::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -591,14 +591,14 @@ Test Case 10: loopback reconnect test with packed ring inorder mergeable path an
 1. launch vhost as client mode with 2 queues::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
+    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start
 
 2. Launch virtio-user as server mode with 2 queues::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -611,7 +611,7 @@ Test Case 10: loopback reconnect test with packed ring inorder mergeable path an
 
 4. Relaunch vhost and send packets::
 
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
+    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start tx_first 32
@@ -633,7 +633,7 @@ Test Case 10: loopback reconnect test with packed ring inorder mergeable path an
 
 8. Relaunch virtio-user and send packets::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -663,14 +663,14 @@ Test Case 11: loopback reconnect test with packed ring inorder non-mergeable pat
 1. launch vhost as client mode with 2 queues::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
+    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start
 
 2. Launch virtio-user as server mode with 2 queues::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
     -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -683,7 +683,7 @@ Test Case 11: loopback reconnect test with packed ring inorder non-mergeable pat
 
 4. Relaunch vhost and send packets::
 
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
+    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start tx_first 32
@@ -705,7 +705,7 @@ Test Case 11: loopback reconnect test with packed ring inorder non-mergeable pat
 
 8. Relaunch virtio-user and send packets::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
     -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -755,7 +755,7 @@ Test Case 12: loopback reconnect test with packed ring vectorized path and serve
 
 4. Relaunch vhost and send packets::
 
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
+    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
     --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
     >start tx_first 32
@@ -777,7 +777,7 @@ Test Case 12: loopback reconnect test with packed ring vectorized path and serve
 
 8. Relaunch virtio-user and send packets::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
diff --git a/test_plans/perf_virtio_user_loopback_test_plan.rst b/test_plans/perf_virtio_user_loopback_test_plan.rst
index b1dcc327..11514dd8 100644
--- a/test_plans/perf_virtio_user_loopback_test_plan.rst
+++ b/test_plans/perf_virtio_user_loopback_test_plan.rst
@@ -45,14 +45,14 @@ Test Case 1: loopback test with packed ring mergeable path
 1. Launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
+    ./testpmd -n 4 -l 2-4  --no-pci \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1,in_order=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -73,14 +73,14 @@ Test Case 2: loopback test with packed ring non-mergeable path
 1. Launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
+    ./testpmd -n 4 -l 2-4  --no-pci \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -101,14 +101,14 @@ Test Case 3: loopback test with packed ring inorder mergeable path
 1. Launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
+    ./testpmd -n 4 -l 2-4  --no-pci \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,in_order=1,mrg_rxbuf=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -129,14 +129,14 @@ Test Case 4: loopback test with packed ring inorder non-mergeable path
 1. Launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
+    ./testpmd -n 4 -l 2-4  --no-pci \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,in_order=1,mrg_rxbuf=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -157,14 +157,14 @@ Test Case 5: loopback test with split ring mergeable path
 1. Launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
+    ./testpmd -n 4 -l 2-4  --no-pci \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -185,14 +185,14 @@ Test Case 6: loopback test with split ring non-mergeable path
 1. Launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
+    ./testpmd -n 4 -l 2-4  --no-pci \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -213,14 +213,14 @@ Test Case 7: loopback test with split ring vector_rx path
 1. Launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
+    ./testpmd -n 4 -l 2-4  --no-pci \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0 \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -241,14 +241,14 @@ Test Case 8: loopback test with split ring inorder mergeable path
 1. Launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
+    ./testpmd -n 4 -l 2-4  --no-pci \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=1,mrg_rxbuf=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -269,14 +269,14 @@ Test Case 9: loopback test with split ring inorder non-mergeable path
 1. Launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
+    ./testpmd -n 4 -l 2-4  --no-pci \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=1,mrg_rxbuf=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
diff --git a/test_plans/perf_virtio_user_pvp_test_plan.rst b/test_plans/perf_virtio_user_pvp_test_plan.rst
index 11c15504..8021c9db 100644
--- a/test_plans/perf_virtio_user_pvp_test_plan.rst
+++ b/test_plans/perf_virtio_user_pvp_test_plan.rst
@@ -52,7 +52,7 @@ Test Case 1: pvp test with packed ring mergeable path
 1. Bind one port to igb_uio, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-3  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-3  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -60,8 +60,8 @@ Test Case 1: pvp test with packed ring mergeable path
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1,in_order=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -77,15 +77,15 @@ Test Case 2: virtio single core performance test with packed ring mergeable path
 1. Bind one port to igb_uio, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>set fwd io
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1,in_order=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -101,14 +101,14 @@ Test Case 3: vhost single core performance test with packed ring mergeable path
 1. Bind one port to igb_uio, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
+    ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -l 7-9 -n 4  --socket-mem 1024,1024 --legacy-mem --file-prefix=virtio \
+    ./testpmd -l 7-9 -n 4  --file-prefix=virtio \
     --vdev=virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --txd=1024 --rxd=1024
     >set fwd io
@@ -124,7 +124,7 @@ Test Case 4: pvp test with split ring mergeable path
 1. Bind one port to igb_uio, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -132,8 +132,8 @@ Test Case 4: pvp test with split ring mergeable path
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -149,15 +149,15 @@ Test Case 5: virtio single core performance test with split ring mergeable path
 1. Bind one port to igb_uio, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=2 --txd=1024 --rxd=1024
     testpmd>set fwd io
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -173,14 +173,14 @@ Test Case 6: vhost single core performance test with split ring mergeable path
 1. Bind one port to igb_uio, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
+    ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
     --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -l 7-9 -n 4  --socket-mem 1024,1024 --legacy-mem --file-prefix=virtio \
+    ./testpmd -l 7-9 -n 4  --file-prefix=virtio \
     --vdev=virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net,queues=1,in_order=0,mrg_rxbuf=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --txd=1024 --rxd=1024
     >set fwd io
diff --git a/test_plans/pvp_diff_qemu_version_test_plan.rst b/test_plans/pvp_diff_qemu_version_test_plan.rst
index 1c7bf468..612e0e26 100644
--- a/test_plans/pvp_diff_qemu_version_test_plan.rst
+++ b/test_plans/pvp_diff_qemu_version_test_plan.rst
@@ -50,7 +50,7 @@ Test Case 1: PVP multi qemu version test with virtio 0.95 mergeable path
 1. Bind one port to igb_uio, then launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
+    ./testpmd -c 0xe -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -65,7 +65,7 @@ Test Case 1: PVP multi qemu version test with virtio 0.95 mergeable path
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
     -netdev user,id=netdev0,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net \
     -netdev type=vhost-user,id=netdev1,chardev=char0,vhostforce \
@@ -88,7 +88,7 @@ Test Case 2: PVP test with virtio 1.0 mergeable path
 1. Bind one port to igb_uio, then launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
+    ./testpmd -c 0xe -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -103,7 +103,7 @@ Test Case 2: PVP test with virtio 1.0 mergeable path
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
     -netdev user,id=netdev0,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net \
     -netdev type=vhost-user,id=netdev1,chardev=char0,vhostforce \
diff --git a/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst b/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
index 98ae651c..9456fdc4 100644
--- a/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
+++ b/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
@@ -51,7 +51,7 @@ Test Case 1: pvp test with virtio 0.95 mergeable path
 1. Bind one port to igb_uio, then launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
+    ./testpmd -c 0xe -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -64,8 +64,8 @@ Test Case 1: pvp test with virtio 0.95 mergeable path
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
-    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
+    -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
@@ -98,7 +98,7 @@ Test Case 2: pvp test with virtio 0.95 normal path
 1. Bind one port to igb_uio, then launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
+    ./testpmd -c 0xe -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -111,7 +111,7 @@ Test Case 2: pvp test with virtio 0.95 normal path
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
@@ -144,7 +144,7 @@ Test Case 3: pvp test with virtio 0.95 vector_rx path
 1. Bind one port to igb_uio, then launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
+    ./testpmd -c 0xe -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -157,7 +157,7 @@ Test Case 3: pvp test with virtio 0.95 vector_rx path
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
@@ -190,7 +190,7 @@ Test Case 4: pvp test with virtio 1.0 mergeable path
 1. Bind one port to igb_uio, then launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
+    ./testpmd -c 0xe -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -203,7 +203,7 @@ Test Case 4: pvp test with virtio 1.0 mergeable path
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
@@ -236,7 +236,7 @@ Test Case 5: pvp test with virtio 1.0 normal path
 1. Bind one port to igb_uio, then launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
+    ./testpmd -c 0xe -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -249,7 +249,7 @@ Test Case 5: pvp test with virtio 1.0 normal path
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
@@ -282,7 +282,7 @@ Test Case 6: pvp test with virtio 1.0 vector_rx path
 1. Bind one port to igb_uio, then launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
+    ./testpmd -c 0xe -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
@@ -295,7 +295,7 @@ Test Case 6: pvp test with virtio 1.0 vector_rx path
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
diff --git a/test_plans/pvp_share_lib_test_plan.rst b/test_plans/pvp_share_lib_test_plan.rst
index 38f74e1c..dfc87dc5 100644
--- a/test_plans/pvp_share_lib_test_plan.rst
+++ b/test_plans/pvp_share_lib_test_plan.rst
@@ -58,13 +58,13 @@ Test Case1: Vhost/virtio-user pvp share lib test with niantic
 
 4. Bind niantic port with igb_uio, use option ``-d`` to load the dynamic pmd when launch vhost::
 
-    ./testpmd  -c 0x03 -n 4 --socket-mem 1024,1024 --legacy-mem -d librte_pmd_vhost.so.2.1 -d librte_pmd_ixgbe.so.2.1 -d librte_mempool_ring.so.1.1 \
+    ./testpmd  -c 0x03 -n 4 -d librte_pmd_vhost.so.2.1 -d librte_pmd_ixgbe.so.2.1 -d librte_mempool_ring.so.1.1 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i
     testpmd>start
 
 5. Launch virtio-user::
 
-    ./testpmd -c 0x0c -n 4 --socket-mem 1024,1024 --legacy-mem -d librte_pmd_virtio.so.1.1 -d librte_mempool_ring.so.1.1 \
+    ./testpmd -c 0x0c -n 4 -d librte_pmd_virtio.so.1.1 -d librte_mempool_ring.so.1.1 \
     --no-pci --file-prefix=virtio  --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net -- -i
     testpmd>start
 
@@ -79,6 +79,6 @@ Similar as Test Case1, all steps are similar except step 4:
 
 4. Bind fortville port with igb_uio, use option ``-d`` to load the dynamic pmd when launch vhost::
 
-    ./testpmd  -c 0x03 -n 4 --socket-mem 1024,1024 --legacy-mem -d librte_pmd_vhost.so.2.1 -d librte_pmd_i40e.so.1.1 -d librte_mempool_ring.so.1.1 \
+    ./testpmd  -c 0x03 -n 4 -d librte_pmd_vhost.so.2.1 -d librte_pmd_i40e.so.1.1 -d librte_mempool_ring.so.1.1 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i
     testpmd>start
\ No newline at end of file
diff --git a/test_plans/pvp_vhost_user_reconnect_test_plan.rst b/test_plans/pvp_vhost_user_reconnect_test_plan.rst
index e6f75869..6641d447 100644
--- a/test_plans/pvp_vhost_user_reconnect_test_plan.rst
+++ b/test_plans/pvp_vhost_user_reconnect_test_plan.rst
@@ -61,7 +61,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
 
 1. Bind one port to igb_uio, then launch vhost with client mode by below commands::
 
-    ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
+    ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
     testpmd>set fwd mac
     testpmd>start
 
@@ -72,8 +72,8 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
-    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
+    -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net,server \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
@@ -92,7 +92,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
 5. On host, quit vhost-user, then re-launch the vhost-user with below command::
 
     testpmd>quit
-    ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
+    ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
     testpmd>set fwd mac
     testpmd>start
 
@@ -106,7 +106,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
 
 1. Bind one port to igb_uio, then launch vhost with client mode by below commands::
 
-    ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
+    ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
     testpmd>set fwd mac
     testpmd>start
 
@@ -117,8 +117,8 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
-    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
+    -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net,server \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
@@ -151,7 +151,7 @@ Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from
 
 1. Bind one port to igb_uio, launch the vhost by below command::
 
-    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
@@ -162,8 +162,8 @@ Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
-    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
+    -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net,server \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
@@ -174,8 +174,8 @@ Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-1.img  \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
-    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
+    -net user,hostfwd=tcp:127.0.0.1:6003-:22 \
     -chardev socket,id=char0,path=./vhost-net1,server \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
@@ -200,7 +200,7 @@ Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from
 6. On host, quit vhost-user, then re-launch the vhost-user with below command::
 
     testpmd>quit
-    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
@@ -213,7 +213,7 @@ Test Case 5: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from
 
 1. Bind one port to igb_uio, launch the vhost by below command::
 
-    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
@@ -224,8 +224,8 @@ Test Case 5: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
-    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
+    -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net,server \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
@@ -236,8 +236,8 @@ Test Case 5: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-1.img  \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
-    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
+    -net user,hostfwd=tcp:127.0.0.1:6003-:22 \
     -chardev socket,id=char0,path=./vhost-net1,server \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
@@ -280,7 +280,7 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
 
 1. Launch the vhost by below commands, enable the client mode and tso::
 
-    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>start
 
 3. Launch VM1 and VM2::
@@ -290,8 +290,8 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
-    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
+    -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net,server \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
@@ -302,8 +302,8 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-1.img  \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
-    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
+    -net user,hostfwd=tcp:127.0.0.1:6003-:22 \
     -chardev socket,id=char0,path=./vhost-net1,server \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
@@ -324,7 +324,7 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
 6. Kill the vhost-user, then re-launch the vhost-user::
 
     testpmd>quit
-    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>start
 
 7. Rerun step5, ensure the vhost-user can reconnect to VM again, and the iperf traffic can be continue.
@@ -335,7 +335,7 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
 
 1. Launch the vhost by below commands, enable the client mode and tso::
 
-    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>start
 
 3. Launch VM1 and VM2::
@@ -345,8 +345,8 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
-    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
+    -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net,server \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
@@ -357,8 +357,8 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-1.img  \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
-    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6003-:22 \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
+    -net user,hostfwd=tcp:127.0.0.1:6003-:22 \
     -chardev socket,id=char0,path=./vhost-net1,server \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
@@ -394,7 +394,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
 
 1. Bind one port to igb_uio, then launch vhost with client mode by below commands::
 
-    ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
+    ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
     testpmd>set fwd mac
     testpmd>start
 
@@ -425,7 +425,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
 5. On host, quit vhost-user, then re-launch the vhost-user with below command::
 
     testpmd>quit
-    ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
+    ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
     testpmd>set fwd mac
     testpmd>start
 
@@ -439,7 +439,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
 
 1. Bind one port to igb_uio, then launch vhost with client mode by below commands::
 
-    ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
+    ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
     testpmd>set fwd mac
     testpmd>start
 
@@ -484,7 +484,7 @@ Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro
 
 1. Bind one port to igb_uio, launch the vhost by below command::
 
-    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
@@ -533,7 +533,7 @@ Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro
 6. On host, quit vhost-user, then re-launch the vhost-user with below command::
 
     testpmd>quit
-    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
@@ -546,7 +546,7 @@ Test Case 14: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro
 
 1. Bind one port to igb_uio, launch the vhost by below command::
 
-    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
@@ -613,7 +613,7 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
 
 1. Launch the vhost by below commands, enable the client mode and tso::
 
-    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>start
 
 3. Launch VM1 and VM2::
@@ -657,7 +657,7 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
 6. Kill the vhost-user, then re-launch the vhost-user::
 
     testpmd>quit
-    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>start
 
 7. Rerun step 5, ensure the vhost-user can reconnect to the VM again and the iperf traffic can continue.
@@ -668,7 +668,7 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
 
 1. Launch the vhost by below commands, enable the client mode and tso::
 
-    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>start
 
 3. Launch VM1 and VM2::
diff --git a/test_plans/pvp_virtio_bonding_test_plan.rst b/test_plans/pvp_virtio_bonding_test_plan.rst
index c45b3f78..90438cc9 100644
--- a/test_plans/pvp_virtio_bonding_test_plan.rst
+++ b/test_plans/pvp_virtio_bonding_test_plan.rst
@@ -52,7 +52,7 @@ Flow: TG--> NIC --> Vhost --> Virtio3 --> Virtio4 --> Vhost--> NIC--> TG
 
 1. Bind one port to igb_uio, launch vhost by below command::
 
-    ./testpmd -l 1-6 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024
+    ./testpmd -l 1-6 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
@@ -61,8 +61,8 @@ Flow: TG--> NIC --> Vhost --> Virtio3 --> Virtio4 --> Vhost--> NIC--> TG
     qemu-system-x86_64 -name vm0 -enable-kvm -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 \
     -device virtio-serial -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 \
     -daemonize -monitor unix:/tmp/vm0_monitor.sock,server,nowait \
-    -net nic,vlan=0,macaddr=00:00:00:c7:56:64,addr=1f \
-    -net user,vlan=0,hostfwd=tcp:127.0.0.1:6008-:22 \
+    -net nic,macaddr=00:00:00:c7:56:64,addr=1f \
+    -net user,hostfwd=tcp:127.0.0.1:6008-:22 \
     -chardev socket,id=char0,path=./vhost-net,server \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01 \
@@ -114,7 +114,7 @@ Test case 2: vhost-user/virtio-pmd pvp bonding test with different mode from 1 t
 
 1. Bind one port to igb_uio, launch vhost by below command::
 
-    ./testpmd -l 1-6 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024
+    ./testpmd -l 1-6 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
@@ -123,8 +123,8 @@ Test case 2: vhost-user/virtio-pmd pvp bonding test with different mode from 1 t
     qemu-system-x86_64 -name vm0 -enable-kvm -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 \
     -device virtio-serial -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 \
     -daemonize -monitor unix:/tmp/vm0_monitor.sock,server,nowait \
-    -net nic,vlan=0,macaddr=00:00:00:c7:56:64,addr=1f \
-    -net user,vlan=0,hostfwd=tcp:127.0.0.1:6008-:22 \
+    -net nic,macaddr=00:00:00:c7:56:64,addr=1f \
+    -net user,hostfwd=tcp:127.0.0.1:6008-:22 \
     -chardev socket,id=char0,path=./vhost-net,server \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01 \
diff --git a/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst b/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst
index 6a80b895..89af30f7 100644
--- a/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst
+++ b/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst
@@ -46,12 +46,12 @@ Test Case1:  Basic test for virtio-user split ring 2M hugepage
 
 2. Bind one port to igb_uio, launch vhost::
 
-    ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --file-prefix=vhost \
+    ./testpmd -l 3-4 -n 4 --file-prefix=vhost \
     --vdev 'net_vhost0,iface=/tmp/sock0,queues=1' -- -i
 
 3. Launch virtio-user with 2M hugepage::
 
-    ./testpmd -l 5-6 -n 4  --no-pci --socket-mem 1024,1024 --single-file-segments --file-prefix=virtio-user \
+    ./testpmd -l 5-6 -n 4  --no-pci --single-file-segments --file-prefix=virtio-user \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,queues=1 -- -i
 
 
@@ -66,12 +66,12 @@ Test Case1:  Basic test for virtio-user packed ring 2M hugepage
 
 2. Bind one port to igb_uio, launch vhost::
 
-    ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --file-prefix=vhost \
+    ./testpmd -l 3-4 -n 4 --file-prefix=vhost \
     --vdev 'net_vhost0,iface=/tmp/sock0,queues=1' -- -i
 
 3. Launch virtio-user with 2M hugepage::
 
-    ./testpmd -l 5-6 -n 4  --no-pci --socket-mem 1024,1024 --single-file-segments --file-prefix=virtio-user \
+    ./testpmd -l 5-6 -n 4  --no-pci --single-file-segments --file-prefix=virtio-user \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,packed_vq=1,queues=1 -- -i
 
 
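Side note on the 2M-hugepage cases above: with --socket-mem dropped, testpmd runs in dynamic memory mode and allocates from the pre-reserved default hugepage pool, so enough 2 MB pages must be reserved beforehand. A minimal sketch (assuming a Linux host and the standard sysfs path) of sizing that pool:

```shell
#!/bin/sh
# Sketch: compute how many 2 MB hugepages back a 1 GB pool, as needed
# before launching testpmd without --socket-mem (dynamic memory mode).
HPAGE_KB=2048                         # 2 MB page size, in KB
POOL_MB=1024                          # desired pool size
NR_PAGES=$(( POOL_MB * 1024 / HPAGE_KB ))
echo "$NR_PAGES"
# To actually reserve the pages (root required):
#   echo "$NR_PAGES" > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
```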
diff --git a/test_plans/pvp_virtio_user_multi_queues_port_restart_test_plan.rst b/test_plans/pvp_virtio_user_multi_queues_port_restart_test_plan.rst
index 059b457e..c50c9aca 100644
--- a/test_plans/pvp_virtio_user_multi_queues_port_restart_test_plan.rst
+++ b/test_plans/pvp_virtio_user_multi_queues_port_restart_test_plan.rst
@@ -54,15 +54,15 @@ Test Case 1: pvp 2 queues test with packed ring mergeable path
 1. Bind one port to igb_uio, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
     testpmd>set fwd mac
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=255 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 --txd=255 --rxd=255
     >set fwd mac
@@ -92,15 +92,15 @@ Test Case 2: pvp 2 queues test with packed ring non-mergeable path
 1. Bind one port to igb_uio, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
     testpmd>set fwd mac
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=255 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 --txd=255 --rxd=255
     >set fwd mac
@@ -125,15 +125,15 @@ Test Case 3: pvp 2 queues test with split ring inorder mergeable path
 1. Bind one port to igb_uio, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
     testpmd>set fwd mac
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=1,mrg_rxbuf=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -158,15 +158,15 @@ Test Case 4: pvp 2 queues test with split ring inorder non-mergeable path
 1. Bind one port to igb_uio, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
     testpmd>set fwd mac
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=1,mrg_rxbuf=0,vectorized=1 \
     -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -191,15 +191,15 @@ Test Case 5: pvp 2 queues test with split ring mergeable path
 1. Bind one port to igb_uio, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
     testpmd>set fwd mac
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=0,mrg_rxbuf=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -224,15 +224,15 @@ Test Case 6: pvp 2 queues test with split ring non-mergeable path
 1. Bind one port to igb_uio, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
     testpmd>set fwd mac
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=0,mrg_rxbuf=0,vectorized=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -257,15 +257,15 @@ Test Case 7: pvp 2 queues test with split ring vector_rx path
 1. Bind one port to igb_uio, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
     testpmd>set fwd mac
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=0,mrg_rxbuf=0,vectorized=1 \
     -- -i --tx-offloads=0x0 --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -290,15 +290,15 @@ Test Case 8: pvp 2 queues test with packed ring inorder mergeable path
 1. Bind one port to igb_uio, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
     testpmd>set fwd mac
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=255 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 --txd=255 --rxd=255
     >set fwd mac
@@ -323,15 +323,15 @@ Test Case 9: pvp 2 queues test with packed ring inorder non-mergeable path
 1. Bind one port to igb_uio, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
     testpmd>set fwd mac
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=255 \
     -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 --txd=255 --rxd=255
     >set fwd mac
@@ -356,15 +356,15 @@ Test Case 10: pvp 2 queues test with packed ring vectorized path
 1. Bind one port to igb_uio, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
     testpmd>set fwd mac
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=255 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 --txd=255 --rxd=255
     >set fwd mac
diff --git a/test_plans/vdev_primary_secondary_test_plan.rst b/test_plans/vdev_primary_secondary_test_plan.rst
index 33d240e8..a148fcbe 100644
--- a/test_plans/vdev_primary_secondary_test_plan.rst
+++ b/test_plans/vdev_primary_secondary_test_plan.rst
@@ -143,7 +143,7 @@ SW preparation: Change one line of the symmetric_mp sample and rebuild::
 
 1. Bind one port to igb_uio, launch testpmd by below command::
 
-    ./testpmd -l 1-6 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=2,client=1' --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1'  -- -i --nb-cores=4 --rxq=2 --txq=2 --txd=1024 --rxd=1024
+    ./testpmd -l 1-6 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=2,client=1' --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1'  -- -i --nb-cores=4 --rxq=2 --txq=2 --txd=1024 --rxd=1024
     testpmd>set fwd txonly
     testpmd>start
 
@@ -181,7 +181,7 @@ Test Case 2: Virtio-pmd primary and secondary process hotplug test
 
 1. Launch testpmd by below command::
 
-    ./testpmd -l 1-6 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=2,client=1' --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1'  -- -i --nb-cores=4 --rxq=2 --txq=2 --txd=1024 --rxd=1024
+    ./testpmd -l 1-6 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=2,client=1' --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1'  -- -i --nb-cores=4 --rxq=2 --txq=2 --txd=1024 --rxd=1024
     testpmd>set fwd txonly
     testpmd>start
 
diff --git a/test_plans/vhost_1024_ethports_test_plan.rst b/test_plans/vhost_1024_ethports_test_plan.rst
index 636ddf52..c31f62a3 100644
--- a/test_plans/vhost_1024_ethports_test_plan.rst
+++ b/test_plans/vhost_1024_ethports_test_plan.rst
@@ -47,11 +47,11 @@ Test Case1:  Basic test for launch vhost with 1024 ethports
 
 2. Launch vhost with 1024 vdev::
 
-    ./testpmd -c 0x3000 -n 4 --socket-mem 10240,10240  --file-prefix=vhost --vdev 'eth_vhost0,iface=vhost-net,queues=1' \
+    ./testpmd -c 0x3000 -n 4 --file-prefix=vhost --vdev 'eth_vhost0,iface=vhost-net,queues=1' \
     --vdev 'eth_vhost1,iface=vhost-net1,queues=1' ... -- -i # only two vdevs listed; the other 1022, from eth_vhost2 to eth_vhost1023, are omitted here
 
 3. Change "CONFIG_RTE_MAX_ETHPORTS" back to 32 in DPDK configure file::
 
     vi ./config/common_base
     +CONFIG_RTE_MAX_ETHPORTS=32
-    -CONFIG_RTE_MAX_ETHPORTS=1024
\ No newline at end of file
+    -CONFIG_RTE_MAX_ETHPORTS=1024
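The 1024-ethports command above elides 1022 --vdev entries; a hedged sketch of generating the full argument list in shell, with the naming pattern assumed from the two entries shown (eth_vhost0/vhost-net, eth_vhost1/vhost-net1):

```shell
#!/bin/sh
# Sketch: build the 1024 --vdev arguments the test plan elides, following
# the pattern of the first two entries.  The first iface has no numeric
# suffix ("vhost-net"); the rest are "vhost-net1" .. "vhost-net1023".
VDEVS="--vdev eth_vhost0,iface=vhost-net,queues=1"
i=1
while [ "$i" -le 1023 ]; do
    VDEVS="$VDEVS --vdev eth_vhost$i,iface=vhost-net$i,queues=1"
    i=$(( i + 1 ))
done
# sanity check: count the --vdev tokens
echo "$VDEVS" | tr ' ' '\n' | grep -c '^--vdev$'
```

The resulting string can then be spliced into the testpmd invocation in place of the "..." placeholder.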
diff --git a/test_plans/vhost_cbdma_test_plan.rst b/test_plans/vhost_cbdma_test_plan.rst
index e94a9974..dfe064a2 100644
--- a/test_plans/vhost_cbdma_test_plan.rst
+++ b/test_plans/vhost_cbdma_test_plan.rst
@@ -118,8 +118,8 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 7. Relaunch virtio-user with vector_rx path, then repeat step 3::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -130,7 +130,7 @@ Test Case2: Dynamic queue number test for DMA-accelerated vhost Tx operations
 
 1. Bind two cbdma ports and one NIC port to igb_uio, then launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 28-29  --socket-mem 1024,1024 --legacy-mem \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 28-29  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@80:04.5;txq1@80:04.6],dmathr=1024' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
      set fwd mac
@@ -174,7 +174,7 @@ Test Case2: Dynamic queue number test for DMA-accelerated vhost Tx operations
 
 6. Relaunch vhost with another two cbdma channels, check performance can reach the target and RX/TX can work normally in two queues::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 28-29  --socket-mem 1024,1024 --legacy-mem \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 28-29  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@80:04.0],dmathr=512' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
     >set fwd mac
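The qemu `-net ...,vlan=N` removals in the plans below track QEMU 3.0, which dropped the vlan= sub-option. A minimal sed sketch of the textual rewrite these hunks apply (the sample MAC is taken from the plans):

```shell
#!/bin/sh
# Sketch: strip the legacy "vlan=N" sub-option from QEMU -net arguments,
# mirroring the edits this patch applies (QEMU >= 3.0 rejects vlan=).
strip_vlan() {
    sed -E 's/(-net (nic|user)),vlan=[0-9]+/\1/g'
}

printf '%s\n' '-net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f' | strip_vlan
# -> -net nic,macaddr=00:00:00:08:e8:aa,addr=1f
```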
diff --git a/test_plans/vhost_dequeue_zero_copy_test_plan.rst b/test_plans/vhost_dequeue_zero_copy_test_plan.rst
index 0c1743cb..29fba85f 100644
--- a/test_plans/vhost_dequeue_zero_copy_test_plan.rst
+++ b/test_plans/vhost_dequeue_zero_copy_test_plan.rst
@@ -55,7 +55,7 @@ Test Case 1: pvp split ring dequeue zero-copy test
 1. Bind one 40G port to igb_uio, then launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
+    ./testpmd -c 0xe -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1,dequeue-zero-copy=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024 --txfreet=992
     testpmd>set fwd mac
@@ -65,8 +65,8 @@ Test Case 1: pvp split ring dequeue zero-copy test
     qemu-system-x86_64 -name vm1 \
      -cpu host -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=5,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
-     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
-     -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
+     -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
      -chardev socket,id=char0,path=./vhost-net \
      -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
      -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
@@ -95,7 +95,7 @@ Test Case 2: pvp split ring dequeue zero-copy test with 2 queues
 1. Bind one 40G port to igb_uio, then launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 2-4 -n 4 --socket-mem 1024,1024 \
+    ./testpmd -l 2-4 -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=2,dequeue-zero-copy=1' -- \
     -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024 --txfreet=992
     testpmd>set fwd mac
@@ -105,8 +105,8 @@ Test Case 2: pvp split ring dequeue zero-copy test with 2 queues
     qemu-system-x86_64 -name vm1 \
      -cpu host -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=5,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
-     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
-     -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
+     -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
      -chardev socket,id=char0,path=./vhost-net \
      -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \
      -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=8,rx_queue_size=1024,tx_queue_size=1024 \
@@ -138,7 +138,7 @@ Test Case 3: pvp split ring dequeue zero-copy test with driver reload test
 1. Bind one 40G port to igb_uio, then launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-5 -n 4 --socket-mem 1024,1024 \
+    ./testpmd -l 1-5 -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=16,dequeue-zero-copy=1,client=1' -- \
     -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024 --txfreet=992
     testpmd>set fwd mac
@@ -148,8 +148,8 @@ Test Case 3: pvp split ring dequeue zero-copy test with driver reload test
     qemu-system-x86_64 -name vm1 \
      -cpu host -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=5,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
-     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
-     -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
+     -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
      -chardev socket,id=char0,path=./vhost-net,server \
      -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=16 \
      -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=40,rx_queue_size=1024,tx_queue_size=1024 \
@@ -158,7 +158,7 @@ Test Case 3: pvp split ring dequeue zero-copy test with driver reload test
 3. On VM, bind virtio net to igb_uio and run testpmd::
 
     ./usertools/dpdk-devbind.py --bind=igb_uio xx:xx.x
-    ./testpmd -l 0-4 -n 4 --socket-mem 1024,0 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
+    ./testpmd -l 0-4 -n 4 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
     testpmd>set fwd rxonly
     testpmd>start
 
@@ -173,7 +173,7 @@ Test Case 3: pvp split ring dequeue zero-copy test with driver reload test
 6. Relaunch testpmd at virtio side in VM for driver reloading::
 
     testpmd>quit
-    ./testpmd -l 0-4 -n 4 --socket-mem 1024,0 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
+    ./testpmd -l 0-4 -n 4 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
@@ -190,7 +190,7 @@ Test Case 4: pvp split ring dequeue zero-copy test with maximum txfreet
 
 1. Bind one 40G port to igb_uio, then launch testpmd by below command::
 
-     ./testpmd -l 1-5 -n 4 --socket-mem 1024,1024 \
+     ./testpmd -l 1-5 -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=16,dequeue-zero-copy=1,client=1' -- \
     -i --nb-cores=4 --rxq=16 --txq=16  --txfreet=988 --txrst=4 --txd=992 --rxd=992
     testpmd>set fwd mac
@@ -200,8 +200,8 @@ Test Case 4: pvp split ring dequeue zero-copy test with maximum txfreet
     qemu-system-x86_64 -name vm1 \
      -cpu host -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=5,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
-     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
-     -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
+     -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
      -chardev socket,id=char0,path=./vhost-net,server \
      -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=16 \
      -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=40,rx_queue_size=1024,tx_queue_size=1024 \
@@ -210,7 +210,7 @@ Test Case 4: pvp split ring dequeue zero-copy test with maximum txfreet
 3. On VM, bind virtio net to igb_uio and run testpmd::
 
     ./usertools/dpdk-devbind.py --bind=igb_uio xx:xx.x
-    ./testpmd -l 0-4 -n 4 --socket-mem 1024,0 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
+    ./testpmd -l 0-4 -n 4 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
@@ -232,7 +232,7 @@ Test Case 5: pvp split ring dequeue zero-copy test with vector_rx path
 1. Bind one port to igb_uio, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=1,dequeue-zero-copy=1' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txfreet=992
     testpmd>set fwd mac
@@ -240,8 +240,8 @@ Test Case 5: pvp split ring dequeue zero-copy test with vector_rx path
 
 2. Launch virtio-user by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0,vectorized=1,queue_size=1024,server=1 \
     -- -i --tx-offloads=0x0 --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
@@ -259,7 +259,7 @@ Test Case 6: pvp packed ring dequeue zero-copy test
 1. Bind one 40G port to igb_uio, then launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
+    ./testpmd -c 0xe -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1,dequeue-zero-copy=1' -- \
     -i --nb-cores=1 --txd=1024 --rxd=1024 --txfreet=992
     testpmd>set fwd mac
@@ -269,8 +269,8 @@ Test Case 6: pvp packed ring dequeue zero-copy test
     qemu-system-x86_64 -name vm1 \
      -cpu host -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=5,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
-     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
-     -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
+     -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
      -chardev socket,id=char0,path=./vhost-net \
      -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
      -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
@@ -299,7 +299,7 @@ Test Case 7: pvp packed ring dequeue zero-copy test with 2 queues
 1. Bind one 40G port to igb_uio, then launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 2-4 -n 4 --socket-mem 1024,1024 \
+    ./testpmd -l 2-4 -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=2,dequeue-zero-copy=1' -- \
     -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024 --txfreet=992
     testpmd>set fwd mac
@@ -309,8 +309,8 @@ Test Case 7: pvp packed ring dequeue zero-copy test with 2 queues
     qemu-system-x86_64 -name vm1 \
      -cpu host -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=5,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
-     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
-     -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
+     -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
      -chardev socket,id=char0,path=./vhost-net \
      -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \
      -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=8,rx_queue_size=1024,tx_queue_size=1024,packed=on \
@@ -342,7 +342,7 @@ Test Case 8: pvp packed ring dequeue zero-copy test with driver reload test
 1. Bind one 40G port to igb_uio, then launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-5 -n 4 --socket-mem 1024,1024 \
+    ./testpmd -l 1-5 -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=16,dequeue-zero-copy=1,client=1' -- \
     -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024 --txfreet=992
     testpmd>set fwd mac
@@ -352,8 +352,8 @@ Test Case 8: pvp packed ring dequeue zero-copy test with driver reload test
     qemu-system-x86_64 -name vm1 \
      -cpu host -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=5,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
-     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
-     -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
+     -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
      -chardev socket,id=char0,path=./vhost-net,server \
      -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=16 \
      -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=40,rx_queue_size=1024,tx_queue_size=1024,packed=on \
@@ -362,7 +362,7 @@ Test Case 8: pvp packed ring dequeue zero-copy test with driver reload test
 3. On VM, bind virtio net to igb_uio and run testpmd::
 
     ./usertools/dpdk-devbind.py --bind=igb_uio xx:xx.x
-    ./testpmd -l 0-4 -n 4 --socket-mem 1024,0 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
+    ./testpmd -l 0-4 -n 4 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
     testpmd>set fwd rxonly
     testpmd>start
 
@@ -377,7 +377,7 @@ Test Case 8: pvp packed ring dequeue zero-copy test with driver reload test
 6. Relaunch testpmd at virtio side in VM for driver reloading::
 
     testpmd>quit
-    ./testpmd -l 0-4 -n 4 --socket-mem 1024,0 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
+    ./testpmd -l 0-4 -n 4 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
@@ -395,7 +395,7 @@ Test Case 9: pvp packed ring dequeue zero-copy test with ring size is not power
 1. Bind one port to igb_uio, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 2-4 \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=1,dequeue-zero-copy=1' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txfreet=992
     testpmd>set fwd mac
@@ -403,8 +403,8 @@ Test Case 9: pvp packed ring dequeue zero-copy test with ring size is not power
 
 2. Launch virtio-user by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=1,packed_vq=1,queue_size=1025,server=1 \
     -- -i --rx-offloads=0x10 --nb-cores=1 --txd=1025 --rxd=1025
     >set fwd mac
diff --git a/test_plans/vhost_multi_queue_qemu_test_plan.rst b/test_plans/vhost_multi_queue_qemu_test_plan.rst
index bb13a815..abaf7af6 100644
--- a/test_plans/vhost_multi_queue_qemu_test_plan.rst
+++ b/test_plans/vhost_multi_queue_qemu_test_plan.rst
@@ -45,7 +45,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 1. Bind one port to igb_uio, then launch testpmd by below command: 
     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
+    ./testpmd -c 0xe -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \
     -i --nb-cores=2 --rxq=2 --txq=2
     testpmd>set fwd mac
@@ -88,7 +88,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
    ensure the vhost using 2 queues::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
+    ./testpmd -c 0xe -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \
     -i --nb-cores=2 --rxq=2 --txq=2
     testpmd>set fwd mac
@@ -164,7 +164,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
    ensure the vhost using 2 queues::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
+    ./testpmd -c 0xe -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \
     -i --nb-cores=1 --rxq=1 --txq=1
     testpmd>set fwd mac
diff --git a/test_plans/vhost_pmd_xstats_test_plan.rst b/test_plans/vhost_pmd_xstats_test_plan.rst
index 316b4a32..8caee819 100644
--- a/test_plans/vhost_pmd_xstats_test_plan.rst
+++ b/test_plans/vhost_pmd_xstats_test_plan.rst
@@ -53,15 +53,15 @@ Test Case 1: xstats test with packed ring mergeable path
 1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
     testpmd>set fwd mac
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -83,15 +83,15 @@ Test Case 2: xstats test with packed ring non-mergeable path
 1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
     testpmd>set fwd mac
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -111,15 +111,15 @@ Test Case 3: xstats stability test with split ring inorder mergeable path
 1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
     testpmd>set fwd mac
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=1,mrg_rxbuf=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -141,15 +141,15 @@ Test Case 4: xstats test with split ring inorder non-mergeable path
 1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
     testpmd>set fwd mac
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=1,mrg_rxbuf=0 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -169,15 +169,15 @@ Test Case 5: xstats test with split ring mergeable path
 1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
     testpmd>set fwd mac
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=0,mrg_rxbuf=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -197,15 +197,15 @@ Test Case 6: xstats test with split ring non-mergeable path
 1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
     testpmd>set fwd mac
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=0,mrg_rxbuf=0,vectorized=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -225,15 +225,15 @@ Test Case 7: xstats test with split ring vector_rx path
 1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
     testpmd>set fwd mac
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=0,mrg_rxbuf=0,vectorized=1 \
     -- -i --tx-offloads=0x0 --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -253,15 +253,15 @@ Test Case 8: xstats test with packed ring inorder mergeable path
 1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
     testpmd>set fwd mac
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=1 \
     -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -283,15 +283,15 @@ Test Case 9: xstats test with packed ring inorder non-mergeable path
 1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
     testpmd>set fwd mac
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
     -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -311,15 +311,15 @@ Test Case 10: xstats test with packed ring vectorized path
 1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
     testpmd>set fwd mac
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
     -- -i --rss-ip --nb-cores=2 --rxq=2 --txq=2
     >set fwd mac
@@ -339,15 +339,15 @@ Test Case 11: xstats test with packed ring vectorized path with ring size is not
 1. Bind one port to vfio-pci, then launch vhost by below command::
 
     rm -rf vhost-net*
-    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
+    ./testpmd -n 4 -l 2-4  \
     --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
     testpmd>set fwd mac
     testpmd>start
 
 2. Launch virtio-user by below command::
 
-    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
-    --legacy-mem --no-pci --file-prefix=virtio \
+    ./testpmd -n 4 -l 5-7 \
+    --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=255 \
     -- -i --rss-ip --nb-cores=2 --rxq=2 --txq=2 --txd=255 --rxd=255
     >set fwd mac
diff --git a/test_plans/vhost_qemu_mtu_test_plan.rst b/test_plans/vhost_qemu_mtu_test_plan.rst
index d7f01ee9..60b01cfb 100644
--- a/test_plans/vhost_qemu_mtu_test_plan.rst
+++ b/test_plans/vhost_qemu_mtu_test_plan.rst
@@ -46,7 +46,7 @@ Test Case: Test the MTU in virtio-net
 =====================================
 1. Launch the testpmd by below commands on host, and config mtu::
 
-    ./testpmd -c 0xc -n 4 --socket-mem 2048,2048 \
+    ./testpmd -c 0xc -n 4 \
     --vdev 'net_vhost0,iface=vhost-net,queues=1' \
     -- -i --txd=512 --rxd=128 --nb-cores=1 --port-topology=chained
     testpmd> set fwd mac
diff --git a/test_plans/vhost_user_live_migration_test_plan.rst b/test_plans/vhost_user_live_migration_test_plan.rst
index 2626f7af..22ff76d5 100644
--- a/test_plans/vhost_user_live_migration_test_plan.rst
+++ b/test_plans/vhost_user_live_migration_test_plan.rst
@@ -77,7 +77,7 @@ On host server side:
 2. Bind host port to igb_uio and start testpmd with vhost port::
 
     host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
-    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
+    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
     host server# testpmd>start
 
 3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port::
@@ -100,7 +100,7 @@ On the backup server, run the vhost testpmd on the host and launch VM:
     backup server # mkdir /mnt/huge
     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
     backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
-    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
+    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
     backup server # testpmd>start
 
 5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server::
@@ -177,7 +177,7 @@ On host server side:
 2. Bind host port to igb_uio and start testpmd with vhost port,note not start vhost port before launching qemu::
 
     host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
-    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1,dequeue-zero-copy=1' --socket-mem 1024,1024 -- -i
+    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1,dequeue-zero-copy=1' -- -i
 
 3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port::
 
@@ -199,7 +199,7 @@ On the backup server, run the vhost testpmd on the host and launch VM:
     backup server # mkdir /mnt/huge
     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
     backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
-    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1,dequeue-zero-copy=1' --socket-mem 1024,1024 -- -i
+    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1,dequeue-zero-copy=1' -- -i
 
 5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server::
 
@@ -277,7 +277,7 @@ On host server side:
 2. Bind host port to igb_uio and start testpmd with vhost port::
 
     host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
-    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
+    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
     host server# testpmd>start
 
 3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port::
@@ -300,7 +300,7 @@ On the backup server, run the vhost testpmd on the host and launch VM:
     backup server # mkdir /mnt/huge
     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
     backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
-    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
+    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
     backup server # testpmd>start
 
 5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server::
@@ -365,7 +365,7 @@ On host server side:
 2. Bind host port to igb_uio and start testpmd with vhost port::
 
     host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
-    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' --socket-mem 1024,1024 -- -i --nb-cores=4 --rxq=4 --txq=4
+    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4
     host server# testpmd>start
 
 3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port::
@@ -388,7 +388,7 @@ On the backup server, run the vhost testpmd on the host and launch VM:
     backup server # mkdir /mnt/huge
     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
     backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
-    backup server#./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' --socket-mem 1024,1024 -- -i --nb-cores=4 --rxq=4 --txq=4
+    backup server#./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4
     backup server # testpmd>start
 
 5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server::
@@ -457,7 +457,7 @@ On host server side:
 2. Bind host port to igb_uio and start testpmd with vhost port::
 
     host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
-    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
+    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
     host server# testpmd>start
 
 3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port::
@@ -480,7 +480,7 @@ On the backup server, run the vhost testpmd on the host and launch VM:
     backup server # mkdir /mnt/huge
     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
     backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
-    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
+    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
     backup server # testpmd>start
 
 5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server::
@@ -557,7 +557,7 @@ On host server side:
 2. Bind host port to igb_uio and start testpmd with vhost port,note not start vhost port before launching qemu::
 
     host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
-    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1,dequeue-zero-copy=1' --socket-mem 1024,1024 -- -i
+    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1,dequeue-zero-copy=1' -- -i
 
 3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port::
 
@@ -579,7 +579,7 @@ On the backup server, run the vhost testpmd on the host and launch VM:
     backup server # mkdir /mnt/huge
     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
     backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
-    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1,dequeue-zero-copy=1' --socket-mem 1024,1024 -- -i
+    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1,dequeue-zero-copy=1' -- -i
 
 5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server::
 
@@ -657,7 +657,7 @@ On host server side:
 2. Bind host port to igb_uio and start testpmd with vhost port::
 
     host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
-    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
+    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
     host server# testpmd>start
 
 3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port::
@@ -680,7 +680,7 @@ On the backup server, run the vhost testpmd on the host and launch VM:
     backup server # mkdir /mnt/huge
     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
     backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
-    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
+    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
     backup server # testpmd>start
 
 5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server::
@@ -745,7 +745,7 @@ On host server side:
 2. Bind host port to igb_uio and start testpmd with vhost port::
 
     host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
-    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' --socket-mem 1024,1024 -- -i --nb-cores=4 --rxq=4 --txq=4
+    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4
     host server# testpmd>start
 
 3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port::
@@ -768,7 +768,7 @@ On the backup server, run the vhost testpmd on the host and launch VM:
     backup server # mkdir /mnt/huge
     backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
     backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
-    backup server#./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' --socket-mem 1024,1024 -- -i --nb-cores=4 --rxq=4 --txq=4
+    backup server#./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4
     backup server # testpmd>start
 
 5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server::
diff --git a/test_plans/virtio_event_idx_interrupt_test_plan.rst b/test_plans/virtio_event_idx_interrupt_test_plan.rst
index 1fbadee0..6cb00ab7 100644
--- a/test_plans/virtio_event_idx_interrupt_test_plan.rst
+++ b/test_plans/virtio_event_idx_interrupt_test_plan.rst
@@ -52,7 +52,7 @@ Test Case 1: Compare interrupt times with and without split ring virtio event id
 1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i
+    ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i \
     --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>start
 
@@ -62,7 +62,7 @@ Test Case 1: Compare interrupt times with and without split ring virtio event id
     qemu-system-x86_64 -name us-vhost-vm1 \
      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
-     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6004-:22 \
      -chardev socket,id=char0,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
      -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
      -vnc :12 -daemonize
@@ -85,7 +85,7 @@ Test Case 2: Split ring virtio-pci driver reload test
 1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>start
 
 2. Launch VM::
@@ -94,7 +94,7 @@ Test Case 2: Split ring virtio-pci driver reload test
     qemu-system-x86_64 -name us-vhost-vm1 \
      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
-     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6004-:22 \
      -chardev socket,id=char0,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
      -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
      -vnc :12 -daemonize
@@ -123,7 +123,7 @@ Test Case 3: Wake up split ring virtio-net cores with event idx interrupt mode 1
 1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-17 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
+    ./testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
     testpmd>start
 
 2. Launch VM::
@@ -132,7 +132,7 @@ Test Case 3: Wake up split ring virtio-net cores with event idx interrupt mode 1
     qemu-system-x86_64 -name us-vhost-vm1 \
      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=16,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
-     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6004-:22 \
      -chardev socket,id=char0,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=16 \
      -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=40,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
      -vnc :12 -daemonize
@@ -158,7 +158,7 @@ Test Case 4: Compare interrupt times with and without packed ring virtio event i
 1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i
+    ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i
     --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>start
 
@@ -168,7 +168,7 @@ Test Case 4: Compare interrupt times with and without packed ring virtio event i
     qemu-system-x86_64 -name us-vhost-vm1 \
      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
-     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6004-:22 \
      -chardev socket,id=char0,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
      -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
      -vnc :12 -daemonize
@@ -191,7 +191,7 @@ Test Case 5: Packed ring virtio-pci driver reload test
 1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>start
 
 2. Launch VM::
@@ -200,7 +200,7 @@ Test Case 5: Packed ring virtio-pci driver reload test
     qemu-system-x86_64 -name us-vhost-vm1 \
      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
-     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6004-:22 \
      -chardev socket,id=char0,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
      -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
      -vnc :12 -daemonize
@@ -229,7 +229,7 @@ Test Case 6: Wake up packed ring virtio-net cores with event idx interrupt mode
 1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-17 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
+    ./testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
     testpmd>start
 
 2. Launch VM::
@@ -238,7 +238,7 @@ Test Case 6: Wake up packed ring virtio-net cores with event idx interrupt mode
     qemu-system-x86_64 -name us-vhost-vm1 \
      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
      -smp cores=16,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
-     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6004-:22 \
      -chardev socket,id=char0,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=16 \
      -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=40,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
      -vnc :12 -daemonize
@@ -303,7 +303,7 @@ Test Case 8: wake up vhost-user cores with event idx interrupt mode and cbdma en
 1. Launch l3fwd-power example app with client mode::
 
     ./examples/l3fwd-power/build/l3fwd-power -l 1-16 \
-    -n 4 --socket-mem 1024,1024 --legacy-mem \
+    -n 4 \
     --log-level=9 \
     --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7],dmathr=1024' \
     -- -p 0x1 \
@@ -317,7 +317,7 @@ Test Case 8: wake up vhost-user cores with event idx interrupt mode and cbdma en
 3. Relaunch l3fwd-power sample for port up::
 
     ./examples/l3fwd-power/build/l3fwd-power -l 1-16 \
-    -n 4 --socket-mem 1024,1024 --legacy-mem \
+    -n 4 \
     --log-level=9 \
     --vdev 'eth_vhost0,iface=/vhost-net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7],dmathr=1024' \
     -- -p 0x1 \
diff --git a/test_plans/virtio_pvp_regression_test_plan.rst b/test_plans/virtio_pvp_regression_test_plan.rst
index 69df61e9..df76b544 100644
--- a/test_plans/virtio_pvp_regression_test_plan.rst
+++ b/test_plans/virtio_pvp_regression_test_plan.rst
@@ -52,7 +52,7 @@ Test Case 1: pvp test with virtio 0.95 mergeable path
 1. Bind one port to igb_uio, then launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-3 -n 4 --socket-mem 1024,1024 \
+    ./testpmd -l 1-3 -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -64,7 +64,7 @@ Test Case 1: pvp test with virtio 0.95 mergeable path
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net,server \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=2  \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15 \
@@ -91,7 +91,7 @@ Test Case 2: pvp test with virtio 0.95 non-mergeable path
 1. Bind one port to igb_uio, then launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-3 -n 4 --socket-mem 1024,1024 \
+    ./testpmd -l 1-3 -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -103,7 +103,7 @@ Test Case 2: pvp test with virtio 0.95 non-mergeable path
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net,server \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=2  \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15 \
@@ -130,7 +130,7 @@ Test Case 3: pvp test with virtio 0.95 vector_rx path
 1. Bind one port to igb_uio, then launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-3 -n 4 --socket-mem 1024,1024 \
+    ./testpmd -l 1-3 -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -142,7 +142,7 @@ Test Case 3: pvp test with virtio 0.95 vector_rx path
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net,server \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=2  \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15 \
@@ -169,7 +169,7 @@ Test Case 4: pvp test with virtio 1.0 mergeable path
 1. Bind one port to igb_uio, then launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-3 -n 4 --socket-mem 1024,1024 \
+    ./testpmd -l 1-3 -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -181,7 +181,7 @@ Test Case 4: pvp test with virtio 1.0 mergeable path
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net,server \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=2  \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15 \
@@ -208,7 +208,7 @@ Test Case 5: pvp test with virtio 1.0 non-mergeable path
 1. Bind one port to igb_uio, then launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-3 -n 4 --socket-mem 1024,1024 \
+    ./testpmd -l 1-3 -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -220,7 +220,7 @@ Test Case 5: pvp test with virtio 1.0 non-mergeable path
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net,server \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=2  \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15 \
@@ -247,7 +247,7 @@ Test Case 6: pvp test with virtio 1.0 vector_rx path
 1. Bind one port to igb_uio, then launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-3 -n 4 --socket-mem 1024,1024 \
+    ./testpmd -l 1-3 -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -259,7 +259,7 @@ Test Case 6: pvp test with virtio 1.0 vector_rx path
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
     -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
-    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
+    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
     -chardev socket,id=char0,path=./vhost-net,server \
     -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=2  \
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15 \
@@ -286,7 +286,7 @@ Test Case 7: pvp test with virtio 1.1 mergeable path
 1. Bind one port to igb_uio, then launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-3 -n 4 --socket-mem 1024,1024 \
+    ./testpmd -l 1-3 -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
@@ -325,7 +325,7 @@ Test Case 8: pvp test with virtio 1.1 non-mergeable path
 1. Bind one port to igb_uio, then launch testpmd by below command::
 
     rm -rf vhost-net*
-    ./testpmd -l 1-3 -n 4 --socket-mem 1024,1024 \
+    ./testpmd -l 1-3 -n 4 \
     --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
diff --git a/test_plans/virtio_user_as_exceptional_path_test_plan.rst b/test_plans/virtio_user_as_exceptional_path_test_plan.rst
index a261e15c..f04271fa 100644
--- a/test_plans/virtio_user_as_exceptional_path_test_plan.rst
+++ b/test_plans/virtio_user_as_exceptional_path_test_plan.rst
@@ -74,7 +74,7 @@ Flow:tap0-->vhost-net-->virtio_user-->nic0-->nic1
 3. Bind nic0 to igb_uio and launch the virtio_user with testpmd::
 
     ./dpdk-devbind.py -b igb_uio xx:xx.x        # xx:xx.x is the pci addr of nic0
-    ./testpmd -c 0xc0000 -n 4 --socket-mem 1024,1024 --file-prefix=test2 \
+    ./testpmd -c 0xc0000 -n 4 --file-prefix=test2 \
     --vdev=virtio_user0,mac=00:01:02:03:04:05,path=/dev/vhost-net,queue_size=1024 -- -i --rxd=1024 --txd=1024
     testpmd>set fwd csum
     testpmd>stop
@@ -126,7 +126,7 @@ Flow: tap0<-->vhost-net<-->virtio_user<-->nic0<-->TG
 2. Bind the physical port to igb_uio, launch testpmd with one queue for virtio_user::
 
     ./dpdk-devbind.py -b igb_uio xx:xx.x        # xx:xx.x is the pci addr of nic0
-    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024  --file-prefix=test2 --vdev=virtio_user0,mac=00:01:02:03:04:05,path=/dev/vhost-net,queue_size=1024,queues=1 -- -i --rxd=1024 --txd=1024
+    ./testpmd -l 1-2 -n 4  --file-prefix=test2 --vdev=virtio_user0,mac=00:01:02:03:04:05,path=/dev/vhost-net,queue_size=1024,queues=1 -- -i --rxd=1024 --txd=1024
 
 3. Check if there is a tap device generated::
 
@@ -156,7 +156,7 @@ Flow: tap0<-->vhost-net<-->virtio_user<-->nic0<-->TG
 2. Bind the physical port to igb_uio, launch testpmd with two queues for virtio_user::
 
     ./dpdk-devbind.py -b igb_uio xx:xx.x        # xx:xx.x is the pci addr of nic0
-    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024  --file-prefix=test2 --vdev=virtio_user0,mac=00:01:02:03:04:05,path=/dev/vhost-net,queue_size=1024,queues=2 -- -i --rxd=1024 --txd=1024 --txq=2 --rxq=2 --nb-cores=1
+    ./testpmd -l 1-2 -n 4  --file-prefix=test2 --vdev=virtio_user0,mac=00:01:02:03:04:05,path=/dev/vhost-net,queue_size=1024,queues=2 -- -i --rxd=1024 --txd=1024 --txq=2 --rxq=2 --nb-cores=1
 
 3. Check if there is a tap device generated::
 
diff --git a/test_plans/virtio_user_for_container_networking_test_plan.rst b/test_plans/virtio_user_for_container_networking_test_plan.rst
index 2d68f5f0..15c9c248 100644
--- a/test_plans/virtio_user_for_container_networking_test_plan.rst
+++ b/test_plans/virtio_user_for_container_networking_test_plan.rst
@@ -72,7 +72,7 @@ Test Case 1: packet forward test for container networking
 
 2. Bind one port to igb_uio, launch vhost::
 
-    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024  --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i
+    ./testpmd -l 1-2 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i
 
 2. Start a container instance with a virtio-user port::
 
@@ -94,7 +94,7 @@ Test Case 2: packet forward with multi-queues for container networking
 
 2. Bind one port to igb_uio, launch vhost::
 
-    ./testpmd -l 1-3 -n 4 --socket-mem 1024,1024  --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2
+    ./testpmd -l 1-3 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2
 
 2. Start a container instance with a virtio-user port::
 
diff --git a/test_plans/vm2vm_virtio_pmd_test_plan.rst b/test_plans/vm2vm_virtio_pmd_test_plan.rst
index 11daaabb..db410e48 100644
--- a/test_plans/vm2vm_virtio_pmd_test_plan.rst
+++ b/test_plans/vm2vm_virtio_pmd_test_plan.rst
@@ -62,7 +62,7 @@ Test Case 1: VM2VM vhost-user/virtio-pmd with vector_rx path
 1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::
 
      rm -rf vhost-net*
-    ./testpmd -c 0xc0000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
@@ -117,7 +117,7 @@ Test Case 2: VM2VM vhost-user/virtio-pmd with normal path
 1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::
 
      rm -rf vhost-net*
-    ./testpmd -c 0xc0000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
@@ -172,7 +172,7 @@ Test Case 3: VM2VM vhost-user/virtio1.0-pmd with vector_rx path
 1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::
 
      rm -rf vhost-net*
-    ./testpmd -c 0xc0000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
@@ -227,7 +227,7 @@ Test Case 4: VM2VM vhost-user/virtio1.0-pmd with normal path
 1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::
 
      rm -rf vhost-net*
-    ./testpmd -c 0xc0000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
@@ -281,7 +281,7 @@ Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check
 
 1. Bind virtio with igb_uio driver, launch the testpmd by below commands::
 
-    ./testpmd -c 0xc0000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
@@ -369,7 +369,7 @@ Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch
 
 1. Bind virtio with igb_uio driver, launch the testpmd by below commands::
 
-    ./testpmd -c 0xc0000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
@@ -457,7 +457,7 @@ Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid ch
 
 1. Bind virtio with igb_uio driver, launch the testpmd by below commands::
 
-    ./testpmd -c 0xc0000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
@@ -546,7 +546,7 @@ Test Case 8: VM2VM vhost-user/virtio1.1-pmd with normal path
 1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::
 
      rm -rf vhost-net*
-    ./testpmd -c 0xc0000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
     testpmd>set fwd mac
     testpmd>start
 
diff --git a/test_plans/vm2vm_virtio_user_test_plan.rst b/test_plans/vm2vm_virtio_user_test_plan.rst
index d0be8144..14f9b438 100644
--- a/test_plans/vm2vm_virtio_user_test_plan.rst
+++ b/test_plans/vm2vm_virtio_user_test_plan.rst
@@ -64,13 +64,13 @@ Test Case 1: packed virtqueue vm2vm mergeable path test
 
 1. Launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
 
 2. Launch virtio-user1 by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
@@ -83,7 +83,7 @@ Test Case 1: packed virtqueue vm2vm mergeable path test
 
 4. Launch virtio-user0 and send 8k length packets::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
@@ -106,7 +106,7 @@ Test Case 1: packed virtqueue vm2vm mergeable path test
 
 7. Launch testpmd by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost  \
     --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly
@@ -118,7 +118,7 @@ Test Case 1: packed virtqueue vm2vm mergeable path test
 
 9. Launch virtio-user1 by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 --no-pci \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
     testpmd>set burst 1
@@ -142,13 +142,13 @@ Test Case 2: packed virtqueue vm2vm inorder mergeable path test
 
 1. Launch testpmd by below command::
 
-    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
 
 2. Launch virtio-user1 by below command::
 
-    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    ./testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
@@ -161,7 +161,7 @@ Test Case 2: packed virtqueue vm2vm inorder mergeable path test
 
 4. Launch virtio-user0 and send 8k length packets::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+    ./testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
@@ -179,7 +179,7 @@ Test Case 2: packed virtqueue vm2vm inorder mergeable path test
 
 6. Launch testpmd by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost  \
     --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly
@@ -191,7 +191,7 @@ Test Case 2: packed virtqueue vm2vm inorder mergeable path test
 
 8. Launch virtio-user1 by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
     --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
@@ -212,13 +212,13 @@ Test Case 3: packed virtqueue vm2vm non-mergeable path test
 
 1. Launch testpmd by below command::
 
-    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
 
 2. Launch virtio-user1 by below command::
 
-    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    ./testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
@@ -229,7 +229,7 @@ Test Case 3: packed virtqueue vm2vm non-mergeable path test
 
 4. Launch virtio-user0 and send 8k length packets::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+    ./testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
@@ -246,7 +246,7 @@ Test Case 3: packed virtqueue vm2vm non-mergeable path test
 
 6. Launch testpmd by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost  \
     --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly
@@ -258,7 +258,7 @@ Test Case 3: packed virtqueue vm2vm non-mergeable path test
 
 8. Launch virtio-user1 by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
     --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
@@ -275,13 +275,13 @@ Test Case 4: packed virtqueue vm2vm inorder non-mergeable path test
 
 1. Launch testpmd by below command::
 
-    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
 
 2. Launch virtio-user1 by below command::
 
-    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    ./testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,packed_vec=1 \
     -- -i --rx-offloads=0x10 --nb-cores=1 --txd=256 --rxd=256
@@ -294,7 +294,7 @@ Test Case 4: packed virtqueue vm2vm inorder non-mergeable path test
 
 4. Launch virtio-user0 and send 8k length packets::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+    ./testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,packed_vec=1 \
     -- -i --rx-offloads=0x10 --nb-cores=1 --txd=256 --rxd=256
@@ -311,7 +311,7 @@ Test Case 4: packed virtqueue vm2vm inorder non-mergeable path test
 
 6. Launch testpmd by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost  \
     --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly
@@ -323,7 +323,7 @@ Test Case 4: packed virtqueue vm2vm inorder non-mergeable path test
 
 8. Launch virtio-user1 by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
     --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,packed_vec=1 \
     -- -i --rx-offloads=0x10 --nb-cores=1 --txd=256 --rxd=256
@@ -340,13 +340,13 @@ Test Case 5: split virtqueue vm2vm mergeable path test
 
 1. Launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
 
 2. Launch virtio-user1 by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
@@ -359,7 +359,7 @@ Test Case 5: split virtqueue vm2vm mergeable path test
 
 4. Launch virtio-user0 and send 8k length packets::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
@@ -382,7 +382,7 @@ Test Case 5: split virtqueue vm2vm mergeable path test
 
 7. Launch testpmd by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost  \
     --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly
@@ -394,7 +394,7 @@ Test Case 5: split virtqueue vm2vm mergeable path test
 
 9. Launch virtio-user1 by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
     --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
@@ -419,13 +419,13 @@ Test Case 6: split virtqueue vm2vm inorder mergeable path test
 
 1. Launch testpmd by below command::
 
-    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
 
 2. Launch virtio-user1 by below command::
 
-    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    ./testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
@@ -438,7 +438,7 @@ Test Case 6: split virtqueue vm2vm inorder mergeable path test
 
 4. Launch virtio-user0 and send 8k length packets::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+    ./testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
@@ -455,7 +455,7 @@ Test Case 6: split virtqueue vm2vm inorder mergeable path test
 
 6. Launch testpmd by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost  \
     --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly
@@ -467,7 +467,7 @@ Test Case 6: split virtqueue vm2vm inorder mergeable path test
 
 8. Launch virtio-user1 by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
     --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
@@ -488,13 +488,13 @@ Test Case 7: split virtqueue vm2vm non-mergeable path test
 
 1. Launch testpmd by below command::
 
-    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
 
 2. Launch virtio-user1 by below command::
 
-    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    ./testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256 --enable-hw-vlan-strip
@@ -505,7 +505,7 @@ Test Case 7: split virtqueue vm2vm non-mergeable path test
 
 4. Launch virtio-user0 and send 8k length packets::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+    ./testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256 --enable-hw-vlan-strip
@@ -522,7 +522,7 @@ Test Case 7: split virtqueue vm2vm non-mergeable path test
 
 6. Launch testpmd by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost  \
     --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly
@@ -534,7 +534,7 @@ Test Case 7: split virtqueue vm2vm non-mergeable path test
 
 8. Launch virtio-user1 by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
     --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
     -- -i --nb-cores=1 --txd=256 --rxd=256 --enable-hw-vlan-strip
@@ -551,13 +551,13 @@ Test Case 8: split virtqueue vm2vm inorder non-mergeable path test
 
 1. Launch testpmd by below command::
 
-    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
 
 2. Launch virtio-user1 by below command::
 
-    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    ./testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=1 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
@@ -570,7 +570,7 @@ Test Case 8: split virtqueue vm2vm inorder non-mergeable path test
 
 4. Launch virtio-user0 and send 8k length packets::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+    ./testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=1 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
@@ -587,7 +587,7 @@ Test Case 8: split virtqueue vm2vm inorder non-mergeable path test
 
 6. Launch testpmd by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost  \
     --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly
@@ -599,7 +599,7 @@ Test Case 8: split virtqueue vm2vm inorder non-mergeable path test
 
 8. Launch virtio-user1 by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
     --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=1 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
@@ -616,13 +616,13 @@ Test Case 9: split virtqueue vm2vm vector_rx path test
 
 1. Launch testpmd by below command::
 
-    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
 
 2. Launch virtio-user1 by below command::
 
-    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    ./testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=256 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
@@ -633,7 +633,7 @@ Test Case 9: split virtqueue vm2vm vector_rx path test
 
 4. Launch virtio-user0 and send 8k length packets::
 
-    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+    ./testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=256 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
@@ -650,7 +650,7 @@ Test Case 9: split virtqueue vm2vm vector_rx path test
 
 6. Launch testpmd by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost  \
     --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly
@@ -662,7 +662,7 @@ Test Case 9: split virtqueue vm2vm vector_rx path test
 
 8. Launch virtio-user1 by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
     --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=256 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
@@ -679,13 +679,13 @@ Test Case 10: packed virtqueue vm2vm vectorized path test
 
 1. Launch testpmd by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
 
 2. Launch virtio-user1 by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=256 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
@@ -698,7 +698,7 @@ Test Case 10: packed virtqueue vm2vm vectorized path test
 
 4. Launch virtio-user0 and send 8k length packets::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=256 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
@@ -715,7 +715,7 @@ Test Case 10: packed virtqueue vm2vm vectorized path test
 
 6. Launch testpmd by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost  \
     --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly
@@ -727,7 +727,7 @@ Test Case 10: packed virtqueue vm2vm vectorized path test
 
 8. Launch virtio-user1 by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
     --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=256 \
     -- -i --nb-cores=1 --txd=256 --rxd=256
@@ -744,13 +744,13 @@ Test Case 10: packed virtqueue vm2vm vectorized path test with ring size is not
 
 1. Launch testpmd by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci \
     --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
 
 2. Launch virtio-user1 by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=255 \
     -- -i --nb-cores=1 --txd=255 --rxd=255
@@ -763,7 +763,7 @@ Test Case 10: packed virtqueue vm2vm vectorized path test with ring size is not
 
 4. Launch virtio-user0 and send 8k length packets::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=255 \
     -- -i --nb-cores=1 --txd=255 --rxd=255
@@ -780,7 +780,7 @@ Test Case 10: packed virtqueue vm2vm vectorized path test with ring size is not
 
 6. Launch testpmd by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost  \
     --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
     -i --nb-cores=1 --no-flush-rx
     testpmd>set fwd rxonly
@@ -792,7 +792,7 @@ Test Case 10: packed virtqueue vm2vm vectorized path test with ring size is not
 
 8. Launch virtio-user1 by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
     --no-pci \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=255 \
     -- -i --nb-cores=1 --txd=255 --rxd=255
@@ -815,7 +815,7 @@ Test Case 11: split virtqueue vm2vm inorder mergeable path multi-queues payload
 
 2. Launch virtio-user1 by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
@@ -828,7 +828,7 @@ Test Case 11: split virtqueue vm2vm inorder mergeable path multi-queues payload
 
 4. Launch virtio-user0 and send packets::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
@@ -846,7 +846,7 @@ Test Case 11: split virtqueue vm2vm inorder mergeable path multi-queues payload
 
 6. Restart step 1-3, Launch virtio-user0 and send packets::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
@@ -874,7 +874,7 @@ Test Case 12: split virtqueue vm2vm mergeable path multi-queues payload check wi
 
 2. Launch virtio-user1 by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
     --no-pci --file-prefix=virtio1 \
     --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
@@ -887,7 +887,7 @@ Test Case 12: split virtqueue vm2vm mergeable path multi-queues payload check wi
 
 4. Launch virtio-user0 and send 8k length packets::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
@@ -903,7 +903,7 @@ Test Case 12: split virtqueue vm2vm mergeable path multi-queues payload check wi
 
 6. Restart step 1-3, Launch virtio-user0 and send packets::
 
-    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
     --no-pci --file-prefix=virtio \
     --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
     -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
-- 
2.25.1
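
The edit applied throughout this patch is mechanical: drop `--socket-mem <sizes>` and `--legacy-mem` from every testpmd command line, and drop the obsolete `vlan=` option from every QEMU `-net` argument. A rough sketch of that rewrite as shell substitutions (the sample command strings below are illustrative fragments, not verbatim from any single test plan):

```shell
# Strip the EAL memory flags that the patch removes from each testpmd command.
old='./testpmd -l 2-4 -n 4 --socket-mem 1024,1024 --legacy-mem --file-prefix=vhost'
new=$(printf '%s' "$old" | sed -E 's/ --socket-mem [0-9,]+//; s/ --legacy-mem//')
echo "$new"    # ./testpmd -l 2-4 -n 4 --file-prefix=vhost

# Strip the vlan= sub-option that newer QEMU releases reject in -net.
qemu_old='-net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22'
qemu_new=$(printf '%s' "$qemu_old" | sed -E 's/vlan=[0-9]+,//g')
echo "$qemu_new"   # -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6001-:22
```

Applied over the RST files, this yields exactly the `-`/`+` pairs seen in the hunks above.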



* Re: [dts] [PATCH V1]test_plans: update virtio related test plans
  2020-09-07  8:22 [dts] [PATCH V1]test_plans: update virtio related test plans Xiao Qimai
@ 2020-09-07  8:37 ` Xiao, QimaiX
  2020-09-10  1:10 ` Tu, Lijuan
  1 sibling, 0 replies; 3+ messages in thread
From: Xiao, QimaiX @ 2020-09-07  8:37 UTC (permalink / raw)
  To: dts

Tested-by: Xiao Qimai <qimaix.xiao@intel.com>

Regards,
Xiao Qimai

> -----Original Message-----
> From: Xiao, QimaiX <qimaix.xiao@intel.com>
> Sent: Monday, September 7, 2020 4:22 PM
> To: dts@dpdk.org
> Cc: Xiao, QimaiX <qimaix.xiao@intel.com>
> Subject: [dts][PATCH V1]test_plans: update virtio related test plans
> 
> 1. remove vlan in qemu command, since higher version of qemu not support
> this parameter;
> 2. remove --socket-mem and --legacy-mem in testpmd cmd
> 
> Signed-off-by: Xiao Qimai <qimaix.xiao@intel.com>
> ---
>  test_plans/dpdk_gro_lib_test_plan.rst         |  16 +--
>  test_plans/dpdk_gso_lib_test_plan.rst         |  12 +-
>  ...ack_multi_paths_port_restart_test_plan.rst |  54 ++++----
>  .../loopback_multi_queues_test_plan.rst       | 116 ++++++++---------
>  ...back_virtio_user_server_mode_test_plan.rst |  84 ++++++------
>  .../perf_virtio_user_loopback_test_plan.rst   |  54 ++++----
>  test_plans/perf_virtio_user_pvp_test_plan.rst |  32 ++---
>  .../pvp_diff_qemu_version_test_plan.rst       |   8 +-
>  ...emu_multi_paths_port_restart_test_plan.rst |  26 ++--
>  test_plans/pvp_share_lib_test_plan.rst        |   6 +-
>  .../pvp_vhost_user_reconnect_test_plan.rst    |  76 +++++------
>  test_plans/pvp_virtio_bonding_test_plan.rst   |  12 +-
>  ...pvp_virtio_user_2M_hugepages_test_plan.rst |   8 +-
>  ...er_multi_queues_port_restart_test_plan.rst |  60 ++++-----
>  .../vdev_primary_secondary_test_plan.rst      |   4 +-
>  test_plans/vhost_1024_ethports_test_plan.rst  |   4 +-
>  test_plans/vhost_cbdma_test_plan.rst          |   8 +-
>  .../vhost_dequeue_zero_copy_test_plan.rst     |  64 ++++-----
>  .../vhost_multi_queue_qemu_test_plan.rst      |   6 +-
>  test_plans/vhost_pmd_xstats_test_plan.rst     |  66 +++++-----
>  test_plans/vhost_qemu_mtu_test_plan.rst       |   2 +-
>  .../vhost_user_live_migration_test_plan.rst   |  32 ++---
>  .../virtio_event_idx_interrupt_test_plan.rst  |  28 ++--
>  .../virtio_pvp_regression_test_plan.rst       |  28 ++--
>  ...tio_user_as_exceptional_path_test_plan.rst |   6 +-
>  ...ser_for_container_networking_test_plan.rst |   4 +-
>  test_plans/vm2vm_virtio_pmd_test_plan.rst     |  16 +--
>  test_plans/vm2vm_virtio_user_test_plan.rst    | 122 +++++++++---------
>  28 files changed, 477 insertions(+), 477 deletions(-)
> 
> diff --git a/test_plans/dpdk_gro_lib_test_plan.rst b/test_plans/dpdk_gro_lib_test_plan.rst
> index 3e06906a..ea0244c1 100644
> --- a/test_plans/dpdk_gro_lib_test_plan.rst
> +++ b/test_plans/dpdk_gro_lib_test_plan.rst
> @@ -130,7 +130,7 @@ Test Case1: DPDK GRO lightmode test with tcp/ipv4 traffic
>  2. Bind nic1 to igb_uio, launch vhost-user with testpmd and set flush interval to 1::
> 
>      ./dpdk-devbind.py -b igb_uio xx:xx.x
> -    ./testpmd -l 2-4 -n 4 --socket-mem 1024,1024  --legacy-mem \
> +    ./testpmd -l 2-4 -n 4 \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024
>      testpmd>set fwd csum
>      testpmd>stop
> @@ -151,7 +151,7 @@ Test Case1: DPDK GRO lightmode test with tcp/ipv4 traffic
>      taskset -c 13 qemu-system-x86_64 -name us-vhost-vm1 \
>         -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
>         -numa node,memdev=mem \
> -       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
> +       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6001-:22 \
>         -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
>         -chardev socket,id=char0,path=./vhost-net \
>         -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> @@ -182,7 +182,7 @@ Test Case2: DPDK GRO heavymode test with tcp/ipv4 traffic
>  2. Bind nic1 to igb_uio, launch vhost-user with testpmd and set flush interval to 2::
> 
>      ./dpdk-devbind.py -b igb_uio xx:xx.x
> -    ./testpmd -l 2-4 -n 4 --socket-mem 1024,1024  --legacy-mem \
> +    ./testpmd -l 2-4 -n 4 \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024
>      testpmd>set fwd csum
>      testpmd>stop
> @@ -203,7 +203,7 @@ Test Case2: DPDK GRO heavymode test with tcp/ipv4 traffic
>      taskset -c 13 qemu-system-x86_64 -name us-vhost-vm1 \
>         -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
>         -numa node,memdev=mem \
> -       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
> +       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6001-:22 \
>         -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
>         -chardev socket,id=char0,path=./vhost-net \
>         -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> @@ -234,7 +234,7 @@ Test Case3: DPDK GRO heavymode_flush4 test with tcp/ipv4 traffic
>  2. Bind nic1 to igb_uio, launch vhost-user with testpmd and set flush interval to 4::
> 
>      ./dpdk-devbind.py -b igb_uio xx:xx.x
> -    ./testpmd -l 2-4 -n 4 --socket-mem 1024,1024  --legacy-mem \
> +    ./testpmd -l 2-4 -n 4 \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024
>      testpmd>set fwd csum
>      testpmd>stop
> @@ -255,7 +255,7 @@ Test Case3: DPDK GRO heavymode_flush4 test with tcp/ipv4 traffic
>      taskset -c 13 qemu-system-x86_64 -name us-vhost-vm1 \
>         -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
>         -numa node,memdev=mem \
> -       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
> +       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6001-:22 \
>         -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
>         -chardev socket,id=char0,path=./vhost-net \
>         -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> @@ -301,7 +301,7 @@ Vxlan topology
>  2. Bind nic1 to igb_uio, launch vhost-user with testpmd and set flush interval to 4::
> 
>      ./dpdk-devbind.py -b igb_uio xx:xx.x
> -    ./testpmd -l 2-4 -n 4 --socket-mem 1024,1024  --legacy-mem \
> +    ./testpmd -l 2-4 -n 4 \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024
>      testpmd>set fwd csum
>      testpmd>stop
> @@ -325,7 +325,7 @@ Vxlan topology
>      taskset -c 13 qemu-system-x86_64 -name us-vhost-vm1 \
>         -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
>         -numa node,memdev=mem \
> -       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
> +       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6001-:22 \
>         -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
>         -chardev socket,id=char0,path=./vhost-net \
>         -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> diff --git a/test_plans/dpdk_gso_lib_test_plan.rst b/test_plans/dpdk_gso_lib_test_plan.rst
> index 8de5f56a..be1bdd20 100644
> --- a/test_plans/dpdk_gso_lib_test_plan.rst
> +++ b/test_plans/dpdk_gso_lib_test_plan.rst
> @@ -99,7 +99,7 @@ Test Case1: DPDK GSO test with tcp traffic
>  2. Bind nic1 to igb_uio, launch vhost-user with testpmd::
> 
>      ./dpdk-devbind.py -b igb_uio xx:xx.x       # xx:xx.x is the pci addr of nic1
> -    ./testpmd -l 2-4 -n 4 --socket-mem 1024,1024  --legacy-mem \
> +    ./testpmd -l 2-4 -n 4 \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024
>      testpmd>set fwd csum
>      testpmd>stop
> @@ -119,7 +119,7 @@ Test Case1: DPDK GSO test with tcp traffic
>      qemu-system-x86_64 -name us-vhost-vm1 \
>         -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
>         -numa node,memdev=mem \
> -       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
> +       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6001-:22 \
>         -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
>         -chardev socket,id=char0,path=./vhost-net \
>         -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> @@ -159,7 +159,7 @@ Test Case3: DPDK GSO test with vxlan traffic
>  2. Bind nic1 to igb_uio, launch vhost-user with testpmd::
> 
>      ./dpdk-devbind.py -b igb_uio xx:xx.x
> -    ./testpmd -l 2-4 -n 4 --socket-mem 1024,1024  --legacy-mem \
> +    ./testpmd -l 2-4 -n 4 \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024
>      testpmd>set fwd csum
>      testpmd>stop
> @@ -181,7 +181,7 @@ Test Case3: DPDK GSO test with vxlan traffic
>      qemu-system-x86_64 -name us-vhost-vm1 \
>         -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
>         -numa node,memdev=mem \
> -       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
> +       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6001-:22 \
>         -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
>         -chardev socket,id=char0,path=./vhost-net \
>         -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> @@ -213,7 +213,7 @@ Test Case4: DPDK GSO test with gre traffic
>  2. Bind nic1 to igb_uio, launch vhost-user with testpmd::
> 
>      ./dpdk-devbind.py -b igb_uio xx:xx.x
> -    ./testpmd -l 2-4 -n 4 --socket-mem 1024,1024  --legacy-mem \
> +    ./testpmd -l 2-4 -n 4 \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --txd=1024 --rxd=1024
>      testpmd>set fwd csum
>      testpmd>stop
> @@ -235,7 +235,7 @@ Test Case4: DPDK GSO test with gre traffic
>      qemu-system-x86_64 -name us-vhost-vm1 \
>         -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
>         -numa node,memdev=mem \
> -       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6001-:22 \
> +       -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6001-:22 \
>         -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
>         -chardev socket,id=char0,path=./vhost-net \
>         -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> diff --git a/test_plans/loopback_multi_paths_port_restart_test_plan.rst b/test_plans/loopback_multi_paths_port_restart_test_plan.rst
> index 3da94ade..d9a8d304 100644
> --- a/test_plans/loopback_multi_paths_port_restart_test_plan.rst
> +++ b/test_plans/loopback_multi_paths_port_restart_test_plan.rst
> @@ -45,14 +45,14 @@ Test Case 1: loopback test with packed ring mergeable path
>  1. Launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
> +    ./testpmd -n 4 -l 2-4  --no-pci \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1,in_order=0 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
>      >set fwd mac
> @@ -86,14 +86,14 @@ Test Case 2: loopback test with packed ring non-mergeable path
>  1. Launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
> +    ./testpmd -n 4 -l 2-4  --no-pci \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=0 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
>      >set fwd mac
> @@ -127,14 +127,14 @@ Test Case 3: loopback test with packed ring inorder mergeable path
>  1. Launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
> +    ./testpmd -n 4 -l 2-4  --no-pci \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1,in_order=1 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
>      >set fwd mac
> @@ -168,14 +168,14 @@ Test Case 4: loopback test with packed ring inorder non-mergeable path
>  1. Launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
> +    ./testpmd -n 4 -l 2-4  --no-pci \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
>      -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
>      >set fwd mac
> @@ -209,14 +209,14 @@ Test Case 5: loopback test with split ring inorder mergeable path
>  1. Launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
> +    ./testpmd -n 4 -l 2-4  --no-pci \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=1,mrg_rxbuf=1 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
>      >set fwd mac
> @@ -250,14 +250,14 @@ Test Case 6: loopback test with split ring inorder non-mergeable path
>  1. Launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
> +    ./testpmd -n 4 -l 2-4  --no-pci \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=1,mrg_rxbuf=0 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
>      >set fwd mac
> @@ -291,14 +291,14 @@ Test Case 7: loopback test with split ring mergeable path
>  1. Launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
> +    ./testpmd -n 4 -l 2-4  --no-pci \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=1 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
>      >set fwd mac
> @@ -332,14 +332,14 @@ Test Case 8: loopback test with split ring non-mergeable path
>  1. Launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
> +    ./testpmd -n 4 -l 2-4  --no-pci \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0,vectorized=1 \
>      -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
>      >set fwd mac
> @@ -373,14 +373,14 @@ Test Case 9: loopback test with split ring vector_rx path
>  1. Launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
> +    ./testpmd -n 4 -l 2-4  --no-pci \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0,vectorized=1 \
>      -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      >set fwd mac
> diff --git a/test_plans/loopback_multi_queues_test_plan.rst b/test_plans/loopback_multi_queues_test_plan.rst
> index 635b0703..0cea2b11 100644
> --- a/test_plans/loopback_multi_queues_test_plan.rst
> +++ b/test_plans/loopback_multi_queues_test_plan.rst
> @@ -45,15 +45,15 @@ Test Case 1: loopback with virtio 1.1 mergeable path using 1 queue and 8 queues
>  1. Launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./testpmd -l 1-2 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
>      -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
>      -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -76,15 +76,15 @@ Test Case 1: loopback with virtio 1.1 mergeable path using 1 queue and 8 queues
>  6. Launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./testpmd -l 1-9 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
>      -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  7. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 10-18 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=1,in_order=0 \
>      -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -105,15 +105,15 @@ Test Case 2: loopback with virtio 1.1 non-mergeable path using 1 queue and 8 que
>  1. Launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./testpmd -l 1-2 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
>      -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
>      -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -136,15 +136,15 @@ Test Case 2: loopback with virtio 1.1 non-mergeable path using 1 queue and 8 que
>  6. Launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./testpmd -l 1-9 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
>      -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  7. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 10-18 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=0,in_order=0 \
>      -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -165,15 +165,15 @@ Test Case 3: loopback with virtio 1.0 inorder mergeable path using 1 queue and 8
>  1. Launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./testpmd -l 1-2 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
>      -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=1,in_order=1 \
>      -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -196,15 +196,15 @@ Test Case 3: loopback with virtio 1.0 inorder mergeable path using 1 queue and 8
>  6. Launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./testpmd -l 1-9 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
>      -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  7. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 10-18 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=1,in_order=1 \
>      -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -225,15 +225,15 @@ Test Case 4: loopback with virtio 1.0 inorder non-mergeable path using 1 queue a
>  1. Launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./testpmd -l 1-2 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
>      -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=0,in_order=1 \
>      -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -256,15 +256,15 @@ Test Case 4: loopback with virtio 1.0 inorder non-mergeable path using 1 queue a
>  6. Launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./testpmd -l 1-9 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
>      -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  7. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 10-18 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=0,in_order=1 \
>      -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -285,15 +285,15 @@ Test Case 5: loopback with virtio 1.0 mergeable path using 1 queue and 8 queues
>  1. Launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./testpmd -l 1-2 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
>      -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=1,in_order=0 \
>      -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -316,15 +316,15 @@ Test Case 5: loopback with virtio 1.0 mergeable path using 1 queue and 8 queues
>  6. Launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./testpmd -l 1-9 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
>      -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  7. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 10-18 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=1,in_order=0 \
>      -- -i --enable-hw-vlan-strip --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -345,15 +345,15 @@ Test Case 6: loopback with virtio 1.0 non-mergeable path using 1 queue and 8 que
>  1. Launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./testpmd -l 1-2 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
>      -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=0,in_order=0,vectorized=1 \
>      -- -i --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -376,15 +376,15 @@ Test Case 6: loopback with virtio 1.0 non-mergeable path using 1 queue and 8 que
>  6. Launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./testpmd -l 1-9 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
>      -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  7. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 10-18 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1 \
>      -- -i --enable-hw-vlan-strip --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -405,15 +405,15 @@ Test Case 7: loopback with virtio 1.0 vector_rx path using 1 queue and 8 queues
>  1. Launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./testpmd -l 1-2 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
>      -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,mrg_rxbuf=0,in_order=0,vectorized=1 \
>      -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -436,15 +436,15 @@ Test Case 7: loopback with virtio 1.0 vector_rx path using 1 queue and 8 queues
>  6. Launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./testpmd -l 1-9 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
>      -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  7. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 10-18 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,mrg_rxbuf=0,in_order=0,vectorized=1 \
>      -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -465,15 +465,15 @@ Test Case 8: loopback with virtio 1.1 inorder mergeable path using 1 queue and 8
>  1. Launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./testpmd -l 1-2 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
>      -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
>      -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -496,15 +496,15 @@ Test Case 8: loopback with virtio 1.1 inorder mergeable path using 1 queue and 8
>  6. Launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./testpmd -l 1-9 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
>      -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  7. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 10-18 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=1,in_order=1 \
>      -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -525,14 +525,14 @@ Test Case 9: loopback with virtio 1.1 inorder non-mergeable path using 1 queue a
>  1. Launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
> +    ./testpmd -l 1-2 -n 4 --no-pci --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
>      -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
>      -- -i --rx-offloads=0x10 --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -561,8 +561,8 @@ Test Case 9: loopback with virtio 1.1 inorder non-mergeable path using 1 queue a
> 
>  7. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 10-18 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
>      -- -i --rx-offloads=0x10 --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -590,8 +590,8 @@ Test Case 10: loopback with virtio 1.1 vectorized path using 1 queue and 8 queue
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
>      -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -614,15 +614,15 @@ Test Case 10: loopback with virtio 1.1 vectorized path using 1 queue and 8 queue
>  6. Launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./testpmd -l 1-9 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
>      -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  7. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 10-18 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
>      -- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> diff --git a/test_plans/loopback_virtio_user_server_mode_test_plan.rst
> b/test_plans/loopback_virtio_user_server_mode_test_plan.rst
> index 1fcbf5d6..f30e3a55 100644
> --- a/test_plans/loopback_virtio_user_server_mode_test_plan.rst
> +++ b/test_plans/loopback_virtio_user_server_mode_test_plan.rst
> @@ -44,14 +44,14 @@ Test Case 1: Basic test for packed ring server mode
> 
>  1. Launch virtio-user as server mode::
> 
> -    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-
> prefix=virtio \
> +    ./testpmd -l 1-2 -n 4 --no-pci --file-prefix=virtio \
>      --
> vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,qu
> eues=1,packed_vq=1 -- -i --rxq=1 --txq=1 --no-numa
>      >set fwd mac
>      >start
> 
>  2. Launch vhost as client mode::
> 
> -    ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-
> prefix=vhost \
> +    ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
>      --vdev 'net_vhost0,iface=/tmp/sock0,client=1,queues=1' -- -i --rxq=1 --
> txq=1 --nb-cores=1
>      >set fwd mac
>      >start tx_first 32
> @@ -65,14 +65,14 @@ Test Case 2:  Basic test for split ring server mode
> 
>  1. Launch virtio-user as server mode::
> 
> -    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-
> prefix=virtio \
> +    ./testpmd -l 1-2 -n 4 --no-pci --file-prefix=virtio \
>      --
> vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,qu
> eues=1 -- -i --rxq=1 --txq=1 --no-numa
>      >set fwd mac
>      >start
> 
>  2. Launch vhost as client mode::
> 
> -    ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-
> prefix=vhost \
> +    ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
>      --vdev 'net_vhost0,iface=/tmp/sock0,client=1,queues=1' -- -i --rxq=1 --
> txq=1 --nb-cores=1
>      >set fwd mac
>      >start tx_first 32
> @@ -87,14 +87,14 @@ Test Case 3: loopback reconnect test with split ring
> mergeable path and server m
>  1. launch vhost as client mode with 2 queues::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --
> file-prefix=vhost \
> +    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
>      --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 -
> -rxq=2 --txq=2
>      >set fwd mac
>      >start
> 
>  2. Launch virtio-user as server mode with 2 queues::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-
> prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-
> net,server=1,queues=2,mrg_rxbuf=1,in_order=0 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 -
> -txq=2
>      >set fwd mac
> @@ -107,7 +107,7 @@ Test Case 3: loopback reconnect test with split ring
> mergeable path and server m
> 
>  4. Relaunch vhost and send packets::
> 
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --
> file-prefix=vhost \
> +    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
>      --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 -
> -rxq=2 --txq=2
>      >set fwd mac
>      >start tx_first 32
> @@ -129,7 +129,7 @@ Test Case 3: loopback reconnect test with split ring
> mergeable path and server m
> 
>  8. Relaunch virtio-user and send packets::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-
> prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-
> net,server=1,queues=2,mrg_rxbuf=1,in_order=0 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 -
> -txq=2
>      >set fwd mac
> @@ -159,14 +159,14 @@ Test Case 4: loopback reconnect test with split ring
> inorder mergeable path and
>  1. launch vhost as client mode with 2 queues::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --
> file-prefix=vhost \
> +    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
>      --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 -
> -rxq=2 --txq=2
>      >set fwd mac
>      >start
> 
>  2. Launch virtio-user as server mode with 2 queues::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-
> prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-
> net,server=1,queues=2,mrg_rxbuf=1,in_order=1 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 -
> -txq=2
>      >set fwd mac
> @@ -179,7 +179,7 @@ Test Case 4: loopback reconnect test with split ring
> inorder mergeable path and
> 
>  4. Relaunch vhost and send packets::
> 
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --
> file-prefix=vhost \
> +    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
>      --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 -
> -rxq=2 --txq=2
>      >set fwd mac
>      >start tx_first 32
> @@ -201,7 +201,7 @@ Test Case 4: loopback reconnect test with split ring
> inorder mergeable path and
> 
>  8. Relaunch virtio-user and send packets::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-
> prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-
> net,server=1,queues=2,mrg_rxbuf=1,in_order=1\
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 -
> -txq=2
>      >set fwd mac
> @@ -231,14 +231,14 @@ Test Case 5: loopback reconnect test with split ring
> inorder non-mergeable path
>  1. launch vhost as client mode with 2 queues::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --
> file-prefix=vhost \
> +    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
>      --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 -
> -rxq=2 --txq=2
>      >set fwd mac
>      >start
> 
>  2. Launch virtio-user as server mode with 2 queues::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-
> prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-
> net,server=1,queues=2,mrg_rxbuf=0,in_order=1 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 -
> -txq=2
>      >set fwd mac
> @@ -251,7 +251,7 @@ Test Case 5: loopback reconnect test with split ring
> inorder non-mergeable path
> 
>  4. Relaunch vhost and send packets::
> 
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --
> file-prefix=vhost \
> +    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
>      --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 -
> -rxq=2 --txq=2
>      >set fwd mac
>      >start tx_first 32
> @@ -273,7 +273,7 @@ Test Case 5: loopback reconnect test with split ring
> inorder non-mergeable path
> 
>  8. Relaunch virtio-user and send packets::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-
> prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-
> net,server=1,queues=2,mrg_rxbuf=0,in_order=1 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 -
> -txq=2
>      >set fwd mac
> @@ -303,14 +303,14 @@ Test Case 6: loopback reconnect test with split ring
> non-mergeable path and serv
>  1. launch vhost as client mode with 2 queues::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --
> file-prefix=vhost \
> +    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
>      --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 -
> -rxq=2 --txq=2
>      >set fwd mac
>      >start
> 
>  2. Launch virtio-user as server mode with 2 queues::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-
> prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-
> net,server=1,queues=2,mrg_rxbuf=0,in_order=0,vectorized=1 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 -
> -txq=2
>      >set fwd mac
> @@ -323,7 +323,7 @@ Test Case 6: loopback reconnect test with split ring non-mergeable path and serv
> 
>  4. Relaunch vhost and send packets::
> 
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
> +    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
>      --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
>      >start tx_first 32
> @@ -345,7 +345,7 @@ Test Case 6: loopback reconnect test with split ring non-mergeable path and serv
> 
>  8. Relaunch virtio-user and send packets::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=0,in_order=0,vectorized=1 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -375,14 +375,14 @@ Test Case 7: loopback reconnect test with split ring vector_rx path and server m
>  1. launch vhost as client mode with 2 queues::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
> +    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
>      --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
>      >start
> 
>  2. Launch virtio-user as server mode with 2 queues::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=0,in_order=0,vectorized=1 \
>      -- -i --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -395,7 +395,7 @@ Test Case 7: loopback reconnect test with split ring vector_rx path and server m
> 
>  4. Relaunch vhost and send packets::
> 
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
> +    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
>      --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
>      >start tx_first 32
> @@ -417,7 +417,7 @@ Test Case 7: loopback reconnect test with split ring vector_rx path and server m
> 
>  8. Relaunch virtio-user and send packets::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,mrg_rxbuf=0,in_order=0,vectorized=1 \
>      -- -i --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -447,14 +447,14 @@ Test Case 8: loopback reconnect test with packed ring mergeable path and server
>  1. launch vhost as client mode with 2 queues::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
> +    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
>      --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
>      >start
> 
>  2. Launch virtio-user as server mode with 2 queues::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=0 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -467,7 +467,7 @@ Test Case 8: loopback reconnect test with packed ring mergeable path and server
> 
>  4. Relaunch vhost and send packets::
> 
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
> +    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
>      --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
>      >start tx_first 32
> @@ -489,7 +489,7 @@ Test Case 8: loopback reconnect test with packed ring mergeable path and server
> 
>  8. Relaunch virtio-user and send packets::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=0 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -519,14 +519,14 @@ Test Case 9: loopback reconnect test with packed ring non-mergeable path and ser
>  1. launch vhost as client mode with 2 queues::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
> +    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
>      --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
>      >start
> 
>  2. Launch virtio-user as server mode with 2 queues::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=0 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -539,7 +539,7 @@ Test Case 9: loopback reconnect test with packed ring non-mergeable path and ser
> 
>  4. Relaunch vhost and send packets::
> 
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
> +    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
>      --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
>      >start tx_first 32
> @@ -561,7 +561,7 @@ Test Case 9: loopback reconnect test with packed ring non-mergeable path and ser
> 
>  8. Relaunch virtio-user and send packets::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=0 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -591,14 +591,14 @@ Test Case 10: loopback reconnect test with packed ring inorder mergeable path an
>  1. launch vhost as client mode with 2 queues::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
> +    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
>      --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
>      >start
> 
>  2. Launch virtio-user as server mode with 2 queues::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=1 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -611,7 +611,7 @@ Test Case 10: loopback reconnect test with packed ring inorder mergeable path an
> 
>  4. Relaunch vhost and send packets::
> 
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
> +    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
>      --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
>      >start tx_first 32
> @@ -633,7 +633,7 @@ Test Case 10: loopback reconnect test with packed ring inorder mergeable path an
> 
>  8. Relaunch virtio-user and send packets::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=1 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -663,14 +663,14 @@ Test Case 11: loopback reconnect test with packed ring inorder non-mergeable pat
>  1. launch vhost as client mode with 2 queues::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
> +    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
>      --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
>      >start
> 
>  2. Launch virtio-user as server mode with 2 queues::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
>      -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -683,7 +683,7 @@ Test Case 11: loopback reconnect test with packed ring inorder non-mergeable pat
> 
>  4. Relaunch vhost and send packets::
> 
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
> +    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
>      --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
>      >start tx_first 32
> @@ -705,7 +705,7 @@ Test Case 11: loopback reconnect test with packed ring inorder non-mergeable pat
> 
>  8. Relaunch virtio-user and send packets::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
>      -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -755,7 +755,7 @@ Test Case 12: loopback reconnect test with packed ring vectorized path and serve
> 
>  4. Relaunch vhost and send packets::
> 
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
> +    ./testpmd -c 0xe -n 4 --no-pci --file-prefix=vhost \
>      --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
>      >start tx_first 32
> @@ -777,7 +777,7 @@ Test Case 12: loopback reconnect test with packed ring vectorized path and serve
> 
>  8. Relaunch virtio-user and send packets::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,server=1,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> diff --git a/test_plans/perf_virtio_user_loopback_test_plan.rst b/test_plans/perf_virtio_user_loopback_test_plan.rst
> index b1dcc327..11514dd8 100644
> --- a/test_plans/perf_virtio_user_loopback_test_plan.rst
> +++ b/test_plans/perf_virtio_user_loopback_test_plan.rst
> @@ -45,14 +45,14 @@ Test Case 1: loopback test with packed ring mergeable path
>  1. Launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
> +    ./testpmd -n 4 -l 2-4  --no-pci \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1,in_order=0 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
>      >set fwd mac
> @@ -73,14 +73,14 @@ Test Case 2: loopback test with packed ring non-mergeable path
>  1. Launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
> +    ./testpmd -n 4 -l 2-4  --no-pci \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=0,in_order=0 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
>      >set fwd mac
> @@ -101,14 +101,14 @@ Test Case 3: loopback test with packed ring inorder mergeable path
>  1. Launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
> +    ./testpmd -n 4 -l 2-4  --no-pci \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,in_order=1,mrg_rxbuf=1 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
>      >set fwd mac
> @@ -129,14 +129,14 @@ Test Case 4: loopback test with packed ring inorder non-mergeable path
>  1. Launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
> +    ./testpmd -n 4 -l 2-4  --no-pci \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,in_order=1,mrg_rxbuf=0 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
>      >set fwd mac
> @@ -157,14 +157,14 @@ Test Case 5: loopback test with split ring mergeable path
>  1. Launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
> +    ./testpmd -n 4 -l 2-4  --no-pci \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=1 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
>      >set fwd mac
> @@ -185,14 +185,14 @@ Test Case 6: loopback test with split ring non-mergeable path
>  1. Launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
> +    ./testpmd -n 4 -l 2-4  --no-pci \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
>      >set fwd mac
> @@ -213,14 +213,14 @@ Test Case 7: loopback test with split ring vector_rx path
>  1. Launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
> +    ./testpmd -n 4 -l 2-4  --no-pci \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0 \
>      -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      >set fwd mac
> @@ -241,14 +241,14 @@ Test Case 8: loopback test with split ring inorder mergeable path
>  1. Launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
> +    ./testpmd -n 4 -l 2-4  --no-pci \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=1,mrg_rxbuf=1 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
>      >set fwd mac
> @@ -269,14 +269,14 @@ Test Case 9: loopback test with split ring inorder non-mergeable path
>  1. Launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem --no-pci \
> +    ./testpmd -n 4 -l 2-4  --no-pci \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=1,mrg_rxbuf=0 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
>      >set fwd mac
> diff --git a/test_plans/perf_virtio_user_pvp_test_plan.rst b/test_plans/perf_virtio_user_pvp_test_plan.rst
> index 11c15504..8021c9db 100644
> --- a/test_plans/perf_virtio_user_pvp_test_plan.rst
> +++ b/test_plans/perf_virtio_user_pvp_test_plan.rst
> @@ -52,7 +52,7 @@ Test Case 1: pvp test with packed ring mergeable path
>  1. Bind one port to igb_uio, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-3  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-3  \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' \
>      -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -60,8 +60,8 @@ Test Case 1: pvp test with packed ring mergeable path
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1,in_order=0 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
>      >set fwd mac
> @@ -77,15 +77,15 @@ Test Case 2: virtio single core performance test with packed ring mergeable path
>  1. Bind one port to igb_uio, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-4  \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=2 --txd=1024 --rxd=1024
>      testpmd>set fwd io
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,packed_vq=1,mrg_rxbuf=1,in_order=0 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
>      >set fwd mac
> @@ -101,14 +101,14 @@ Test Case 3: vhost single core performance test with packed ring mergeable path
>  1. Bind one port to igb_uio, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
> +    ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
>      --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -l 7-9 -n 4  --socket-mem 1024,1024 --legacy-mem --file-prefix=virtio \
> +    ./testpmd -l 7-9 -n 4  --file-prefix=virtio \
>      --vdev=virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --txd=1024 --rxd=1024
>      >set fwd io
> @@ -124,7 +124,7 @@ Test Case 4: pvp test with split ring mergeable path
>  1. Bind one port to igb_uio, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-4  \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' \
>      -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -132,8 +132,8 @@ Test Case 4: pvp test with split ring mergeable path
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=1 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
>      >set fwd mac
> @@ -149,15 +149,15 @@ Test Case 5: virtio single core performance test with split ring mergeable path
>  1. Bind one port to igb_uio, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-4  \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i --nb-cores=2 --txd=1024 --rxd=1024
>      testpmd>set fwd io
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=1 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=1 --txd=1024 --rxd=1024
>      >set fwd mac
> @@ -173,14 +173,14 @@ Test Case 6: vhost single core performance test with split ring mergeable path
>  1. Bind one port to igb_uio, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci --file-prefix=vhost \
> +    ./testpmd -l 3-4 -n 4 --no-pci --file-prefix=vhost \
>      --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -l 7-9 -n 4  --socket-mem 1024,1024 --legacy-mem --file-prefix=virtio \
> +    ./testpmd -l 7-9 -n 4  --file-prefix=virtio \
>      --vdev=virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net,queues=1,in_order=0,mrg_rxbuf=1 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --txd=1024 --rxd=1024
>      >set fwd io
> diff --git a/test_plans/pvp_diff_qemu_version_test_plan.rst b/test_plans/pvp_diff_qemu_version_test_plan.rst
> index 1c7bf468..612e0e26 100644
> --- a/test_plans/pvp_diff_qemu_version_test_plan.rst
> +++ b/test_plans/pvp_diff_qemu_version_test_plan.rst
> @@ -50,7 +50,7 @@ Test Case 1: PVP multi qemu version test with virtio 0.95 mergeable path
>  1. Bind one port to igb_uio, then launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -c 0xe -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
>      -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -65,7 +65,7 @@ Test Case 1: PVP multi qemu version test with virtio 0.95 mergeable path
>      -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
>      -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> -    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
>      -netdev user,id=netdev0,hostfwd=tcp:127.0.0.1:6002-:22 \
>      -chardev socket,id=char0,path=./vhost-net \
>      -netdev type=vhost-user,id=netdev1,chardev=char0,vhostforce \
> @@ -88,7 +88,7 @@ Test Case 2: PVP test with virtio 1.0 mergeable path
>  1. Bind one port to igb_uio, then launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -c 0xe -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
>      -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -103,7 +103,7 @@ Test Case 2: PVP test with virtio 1.0 mergeable path
>      -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
>      -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> -    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
>      -netdev user,id=netdev0,hostfwd=tcp:127.0.0.1:6002-:22 \
>      -chardev socket,id=char0,path=./vhost-net \
>      -netdev type=vhost-user,id=netdev1,chardev=char0,vhostforce \
> diff --git a/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst b/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
> index 98ae651c..9456fdc4 100644
> --- a/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
> +++ b/test_plans/pvp_qemu_multi_paths_port_restart_test_plan.rst
> @@ -51,7 +51,7 @@ Test Case 1: pvp test with virtio 0.95 mergeable path
>  1. Bind one port to igb_uio, then launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -c 0xe -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
>      -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -64,8 +64,8 @@ Test Case 1: pvp test with virtio 0.95 mergeable path
>      -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
>      -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> -    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
> -    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
> +    -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>      -chardev socket,id=char0,path=./vhost-net \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
> @@ -98,7 +98,7 @@ Test Case 2: pvp test with virtio 0.95 normal path
>  1. Bind one port to igb_uio, then launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -c 0xe -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
>      -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -111,7 +111,7 @@ Test Case 2: pvp test with virtio 0.95 normal path
>      -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
>      -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> -    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>      -chardev socket,id=char0,path=./vhost-net \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
> @@ -144,7 +144,7 @@ Test Case 3: pvp test with virtio 0.95 vrctor_rx path
>  1. Bind one port to igb_uio, then launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -c 0xe -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
>      -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -157,7 +157,7 @@ Test Case 3: pvp test with virtio 0.95 vrctor_rx path
>      -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
>      -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> -    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>      -chardev socket,id=char0,path=./vhost-net \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
> @@ -190,7 +190,7 @@ Test Case 4: pvp test with virtio 1.0 mergeable path
>  1. Bind one port to igb_uio, then launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -c 0xe -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
>      -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -203,7 +203,7 @@ Test Case 4: pvp test with virtio 1.0 mergeable path
>      -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
>      -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> -    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>      -chardev socket,id=char0,path=./vhost-net \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
> @@ -236,7 +236,7 @@ Test Case 5: pvp test with virtio 1.0 normal path
>  1. Bind one port to igb_uio, then launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -c 0xe -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
>      -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -249,7 +249,7 @@ Test Case 5: pvp test with virtio 1.0 normal path
>      -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
>      -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> -    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>      -chardev socket,id=char0,path=./vhost-net \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
> @@ -282,7 +282,7 @@ Test Case 6: pvp test with virtio 1.0 vector_rx path
>  1. Bind one port to igb_uio, then launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -c 0xe -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
>      -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
> @@ -295,7 +295,7 @@ Test Case 6: pvp test with virtio 1.0 vector_rx path
>      -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
>      -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> -    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>      -chardev socket,id=char0,path=./vhost-net \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
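The `vlan=` removals repeated across the QEMU commands in this file are purely mechanical. As an illustration only (hypothetical helper, not part of the patch), the same rewrite can be expressed as a one-line filter over a `-net` argument:

```shell
# Hypothetical illustration of the edit applied by hand throughout this
# patch: drop the deprecated "vlan=<n>," option from a QEMU -net argument.
strip_vlan() {
    printf '%s\n' "$1" | sed 's/vlan=[0-9]*,//g'
}

strip_vlan 'nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f'
# -> nic,macaddr=00:00:00:08:e8:aa,addr=1f
```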
> diff --git a/test_plans/pvp_share_lib_test_plan.rst b/test_plans/pvp_share_lib_test_plan.rst
> index 38f74e1c..dfc87dc5 100644
> --- a/test_plans/pvp_share_lib_test_plan.rst
> +++ b/test_plans/pvp_share_lib_test_plan.rst
> @@ -58,13 +58,13 @@ Test Case1: Vhost/virtio-user pvp share lib test with niantic
> 
>  4. Bind niantic port with igb_uio, use option ``-d`` to load the dynamic pmd when launch vhost::
> 
> -    ./testpmd  -c 0x03 -n 4 --socket-mem 1024,1024 --legacy-mem -d librte_pmd_vhost.so.2.1 -d librte_pmd_ixgbe.so.2.1 -d librte_mempool_ring.so.1.1 \
> +    ./testpmd  -c 0x03 -n 4 -d librte_pmd_vhost.so.2.1 -d librte_pmd_ixgbe.so.2.1 -d librte_mempool_ring.so.1.1 \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i
>      testpmd>start
> 
>  5. Launch virtio-user::
> 
> -    ./testpmd -c 0x0c -n 4 --socket-mem 1024,1024 --legacy-mem -d librte_pmd_virtio.so.1.1 -d librte_mempool_ring.so.1.1 \
> +    ./testpmd -c 0x0c -n 4 -d librte_pmd_virtio.so.1.1 -d librte_mempool_ring.so.1.1 \
>      --no-pci --file-prefix=virtio  --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net -- -i
>      testpmd>start
> 
> @@ -79,6 +79,6 @@ Similar as Test Case1, all steps are similar except step 4:
> 
>  4. Bind fortville port with igb_uio, use option ``-d`` to load the dynamic pmd when launch vhost::
> 
> -    ./testpmd  -c 0x03 -n 4 --socket-mem 1024,1024 --legacy-mem -d librte_pmd_vhost.so.2.1 -d librte_pmd_i40e.so.1.1 -d librte_mempool_ring.so.1.1 \
> +    ./testpmd  -c 0x03 -n 4 -d librte_pmd_vhost.so.2.1 -d librte_pmd_i40e.so.1.1 -d librte_mempool_ring.so.1.1 \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1' -- -i
>      testpmd>start
> \ No newline at end of file
> diff --git a/test_plans/pvp_vhost_user_reconnect_test_plan.rst b/test_plans/pvp_vhost_user_reconnect_test_plan.rst
> index e6f75869..6641d447 100644
> --- a/test_plans/pvp_vhost_user_reconnect_test_plan.rst
> +++ b/test_plans/pvp_vhost_user_reconnect_test_plan.rst
> @@ -61,7 +61,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
> 
>  1. Bind one port to igb_uio, then launch vhost with client mode by below commands::
> 
> -    ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
> +    ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
>      testpmd>set fwd mac
>      testpmd>start
> 
> @@ -72,8 +72,8 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
>      -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
>      -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> -    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
> -    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
> +    -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>      -chardev socket,id=char0,path=./vhost-net,server \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
> @@ -92,7 +92,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
>  5. On host, quit vhost-user, then re-launch the vhost-user with below command::
> 
>      testpmd>quit
> -    ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
> +    ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
>      testpmd>set fwd mac
>      testpmd>start
> 
> @@ -106,7 +106,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
> 
>  1. Bind one port to igb_uio, then launch vhost with client mode by below commands::
> 
> -    ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
> +    ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
>      testpmd>set fwd mac
>      testpmd>start
> 
> @@ -117,8 +117,8 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
>      -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
>      -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> -    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
> -    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
> +    -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>      -chardev socket,id=char0,path=./vhost-net,server \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
> @@ -151,7 +151,7 @@ Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from
> 
>  1. Bind one port to igb_uio, launch the vhost by below command::
> 
> -    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> 
> @@ -162,8 +162,8 @@ Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from
>      -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
>      -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> -    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
> -    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
> +    -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>      -chardev socket,id=char0,path=./vhost-net,server \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
> @@ -174,8 +174,8 @@ Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from
>      -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-1.img  \
>      -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> -    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
> -    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6003-:22 \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
> +    -net user,hostfwd=tcp:127.0.0.1:6003-:22 \
>      -chardev socket,id=char0,path=./vhost-net1,server \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
> @@ -200,7 +200,7 @@ Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from
>  6. On host, quit vhost-user, then re-launch the vhost-user with below command::
> 
>      testpmd>quit
> -    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> 
> @@ -213,7 +213,7 @@ Test Case 5: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from
> 
>  1. Bind one port to igb_uio, launch the vhost by below command::
> 
> -    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> 
> @@ -224,8 +224,8 @@ Test Case 5: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from
>      -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
>      -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> -    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
> -    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
> +    -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>      -chardev socket,id=char0,path=./vhost-net,server \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
> @@ -236,8 +236,8 @@ Test Case 5: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from
>      -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-1.img  \
>      -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> -    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
> -    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6003-:22 \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
> +    -net user,hostfwd=tcp:127.0.0.1:6003-:22 \
>      -chardev socket,id=char0,path=./vhost-net1,server \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
> @@ -280,7 +280,7 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
> 
>  1. Launch the vhost by below commands, enable the client mode and tso::
> 
> -    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>start
> 
>  3. Launch VM1 and VM2::
> @@ -290,8 +290,8 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
>      -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
>      -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> -    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
> -    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
> +    -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>      -chardev socket,id=char0,path=./vhost-net,server \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
> @@ -302,8 +302,8 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
>      -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-1.img  \
>      -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> -    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
> -    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6003-:22 \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
> +    -net user,hostfwd=tcp:127.0.0.1:6003-:22 \
>      -chardev socket,id=char0,path=./vhost-net1,server \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
> @@ -324,7 +324,7 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
>  6. Kill the vhost-user, then re-launch the vhost-user::
> 
>      testpmd>quit
> -    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>start
> 
>  7. Rerun step5, ensure the vhost-user can reconnect to VM again, and the iperf traffic can be continue.
> @@ -335,7 +335,7 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
> 
>  1. Launch the vhost by below commands, enable the client mode and tso::
> 
> -    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>start
> 
>  3. Launch VM1 and VM2::
> @@ -345,8 +345,8 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
>      -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
>      -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> -    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
> -    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
> +    -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>      -chardev socket,id=char0,path=./vhost-net,server \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
> @@ -357,8 +357,8 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
>      -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16-1.img  \
>      -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> -    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
> -    -net user,vlan=2,hostfwd=tcp:127.0.0.1:6003-:22 \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
> +    -net user,hostfwd=tcp:127.0.0.1:6003-:22 \
>      -chardev socket,id=char0,path=./vhost-net1,server \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
> @@ -394,7 +394,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
> 
>  1. Bind one port to igb_uio, then launch vhost with client mode by below commands::
> 
> -    ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
> +    ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
>      testpmd>set fwd mac
>      testpmd>start
> 
> @@ -425,7 +425,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
>  5. On host, quit vhost-user, then re-launch the vhost-user with below command::
> 
>      testpmd>quit
> -    ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
> +    ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
>      testpmd>set fwd mac
>      testpmd>start
> 
> @@ -439,7 +439,7 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG
> 
>  1. Bind one port to igb_uio, then launch vhost with client mode by below commands::
> 
> -    ./testpmd -c 0x30 -n 4 --socket-mem 1024,1024 --legacy-mem --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
> +    ./testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1
>      testpmd>set fwd mac
>      testpmd>start
> 
> @@ -484,7 +484,7 @@ Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro
> 
>  1. Bind one port to igb_uio, launch the vhost by below command::
> 
> -    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> 
> @@ -533,7 +533,7 @@ Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro
>  6. On host, quit vhost-user, then re-launch the vhost-user with below command::
> 
>      testpmd>quit
> -    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> 
> @@ -546,7 +546,7 @@ Test Case 14: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro
> 
>  1. Bind one port to igb_uio, launch the vhost by below command::
> 
> -    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> 
> @@ -613,7 +613,7 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
> 
>  1. Launch the vhost by below commands, enable the client mode and tso::
> 
> -    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>start
> 
>  3. Launch VM1 and VM2::
> @@ -657,7 +657,7 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
>  6. Kill the vhost-user, then re-launch the vhost-user::
> 
>      testpmd>quit
> -    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>start
> 
>  7. Rerun step5, ensure the vhost-user can reconnect to VM again, and the iperf traffic can be continue.
> @@ -668,7 +668,7 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2
> 
>  1. Launch the vhost by below commands, enable the client mode and tso::
> 
> -    ./testpmd -c 0x30 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>start
> 
>  3. Launch VM1 and VM2::
> diff --git a/test_plans/pvp_virtio_bonding_test_plan.rst b/test_plans/pvp_virtio_bonding_test_plan.rst
> index c45b3f78..90438cc9 100644
> --- a/test_plans/pvp_virtio_bonding_test_plan.rst
> +++ b/test_plans/pvp_virtio_bonding_test_plan.rst
> @@ -52,7 +52,7 @@ Flow: TG--> NIC --> Vhost --> Virtio3 --> Virtio4 --> Vhost--> NIC--> TG
> 
>  1. Bind one port to igb_uio,launch vhost by below command::
> 
> -    ./testpmd -l 1-6 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024
> +    ./testpmd -l 1-6 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> 
> @@ -61,8 +61,8 @@ Flow: TG--> NIC --> Vhost --> Virtio3 --> Virtio4 --> Vhost--> NIC--> TG
>      qemu-system-x86_64 -name vm0 -enable-kvm -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 \
>      -device virtio-serial -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 \
>      -daemonize -monitor unix:/tmp/vm0_monitor.sock,server,nowait \
> -    -net nic,vlan=0,macaddr=00:00:00:c7:56:64,addr=1f \
> -    -net user,vlan=0,hostfwd=tcp:127.0.0.1:6008-:22 \
> +    -net nic,macaddr=00:00:00:c7:56:64,addr=1f \
> +    -net user,hostfwd=tcp:127.0.0.1:6008-:22 \
>      -chardev socket,id=char0,path=./vhost-net,server \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01 \
> @@ -114,7 +114,7 @@ Test case 2: vhost-user/virtio-pmd pvp bonding test with different mode from 1 t
> 
>  1. Bind one port to igb_uio,launch vhost by below command::
> 
> -    ./testpmd -l 1-6 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024
> +    ./testpmd -l 1-6 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' --vdev 'net_vhost2,iface=vhost-net2,client=1,queues=1' --vdev 'net_vhost3,iface=vhost-net3,client=1,queues=1'  -- -i --port-topology=chained --nb-cores=4 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> 
> @@ -123,8 +123,8 @@ Test case 2: vhost-user/virtio-pmd pvp bonding test with different mode from 1 t
>      qemu-system-x86_64 -name vm0 -enable-kvm -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 \
>      -device virtio-serial -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 \
>      -daemonize -monitor unix:/tmp/vm0_monitor.sock,server,nowait \
> -    -net nic,vlan=0,macaddr=00:00:00:c7:56:64,addr=1f \
> -    -net user,vlan=0,hostfwd=tcp:127.0.0.1:6008-:22 \
> +    -net nic,macaddr=00:00:00:c7:56:64,addr=1f \
> +    -net user,hostfwd=tcp:127.0.0.1:6008-:22 \
>      -chardev socket,id=char0,path=./vhost-net,server \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01 \
> diff --git a/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst b/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst
> index 6a80b895..89af30f7 100644
> --- a/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst
> +++ b/test_plans/pvp_virtio_user_2M_hugepages_test_plan.rst
> @@ -46,12 +46,12 @@ Test Case1:  Basic test for virtio-user split ring 2M hugepage
> 
>  2. Bind one port to igb_uio, launch vhost::
> 
> -    ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --file-prefix=vhost \
> +    ./testpmd -l 3-4 -n 4 --file-prefix=vhost \
>      --vdev 'net_vhost0,iface=/tmp/sock0,queues=1' -- -i
> 
>  3. Launch virtio-user with 2M hugepage::
> 
> -    ./testpmd -l 5-6 -n 4  --no-pci --socket-mem 1024,1024 --single-file-segments --file-prefix=virtio-user \
> +    ./testpmd -l 5-6 -n 4  --no-pci --single-file-segments --file-prefix=virtio-user \
>      --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,queues=1 -- -i
> 
> 
> @@ -66,12 +66,12 @@ Test Case1:  Basic test for virtio-user packed ring 2M hugepage
> 
>  2. Bind one port to igb_uio, launch vhost::
> 
> -    ./testpmd -l 3-4 -n 4 --socket-mem 1024,1024 --file-prefix=vhost \
> +    ./testpmd -l 3-4 -n 4 --file-prefix=vhost \
>      --vdev 'net_vhost0,iface=/tmp/sock0,queues=1' -- -i
> 
>  3. Launch virtio-user with 2M hugepage::
> 
> -    ./testpmd -l 5-6 -n 4  --no-pci --socket-mem 1024,1024 --single-file-segments --file-prefix=virtio-user \
> +    ./testpmd -l 5-6 -n 4  --no-pci --single-file-segments --file-prefix=virtio-user \
>      --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,packed_vq=1,queues=1 -- -i
> 
> 
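[Editor's note] The split-ring and packed-ring 2M-hugepage cases above differ only in the vdev arguments; what makes the 2M setup special is `--single-file-segments`, which virtio-user needs so its memory is backed by a small number of shareable files. Before running the case, the host's default hugepage size can be confirmed with a small helper (a sketch; the function name is ours, not part of DPDK):

```shell
# Report the default hugepage size in kB (2048 means 2MB pages).
# Reads /proc/meminfo by default; a file path can be passed for testing.
hugepage_size_kb() {
    awk '/^Hugepagesize:/ {print $2}' "${1:-/proc/meminfo}"
}
```

If this prints 2048, the `--single-file-segments` launch above applies; with 1GB pages (1048576) the flag is not needed.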
> diff --git a/test_plans/pvp_virtio_user_multi_queues_port_restart_test_plan.rst b/test_plans/pvp_virtio_user_multi_queues_port_restart_test_plan.rst
> index 059b457e..c50c9aca 100644
> --- a/test_plans/pvp_virtio_user_multi_queues_port_restart_test_plan.rst
> +++ b/test_plans/pvp_virtio_user_multi_queues_port_restart_test_plan.rst
> @@ -54,15 +54,15 @@ Test Case 1: pvp 2 queues test with packed ring mergeable path
>  1. Bind one port to igb_uio, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-4 \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
>      testpmd>set fwd mac
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=0,queue_size=255 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 --txd=255 --rxd=255
>      >set fwd mac
> @@ -92,15 +92,15 @@ Test Case 2: pvp 2 queues test with packed ring non-mergeable path
>  1. Bind one port to igb_uio, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-4 \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
>      testpmd>set fwd mac
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=0,queue_size=255 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 --txd=255 --rxd=255
>      >set fwd mac
> @@ -125,15 +125,15 @@ Test Case 3: pvp 2 queues test with split ring inorder mergeable path
>  1. Bind one port to igb_uio, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-4 \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
>      testpmd>set fwd mac
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=1,mrg_rxbuf=1 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -158,15 +158,15 @@ Test Case 4: pvp 2 queues test with split ring inorder non-mergeable path
>  1. Bind one port to igb_uio, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-4 \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
>      testpmd>set fwd mac
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=1,mrg_rxbuf=0,vectorized=1 \
>      -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -191,15 +191,15 @@ Test Case 5: pvp 2 queues test with split ring mergeable path
>  1. Bind one port to igb_uio, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-4 \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
>      testpmd>set fwd mac
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=0,mrg_rxbuf=1 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -224,15 +224,15 @@ Test Case 6: pvp 2 queues test with split ring non-mergeable path
>  1. Bind one port to igb_uio, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-4 \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
>      testpmd>set fwd mac
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=0,mrg_rxbuf=0,vectorized=1 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -257,15 +257,15 @@ Test Case 7: pvp 2 queues test with split ring vector_rx path
>  1. Bind one port to igb_uio, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-4 \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
>      testpmd>set fwd mac
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=0,mrg_rxbuf=0,vectorized=1 \
>      -- -i --tx-offloads=0x0 --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -290,15 +290,15 @@ Test Case 8: pvp 2 queues test with packed ring inorder mergeable path
>  1. Bind one port to igb_uio, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-4 \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
>      testpmd>set fwd mac
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=1,queue_size=255 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 --txd=255 --rxd=255
>      >set fwd mac
> @@ -323,15 +323,15 @@ Test Case 9: pvp 2 queues test with packed ring inorder non-mergeable path
>  1. Bind one port to igb_uio, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-4 \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
>      testpmd>set fwd mac
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=255 \
>      -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 --txd=255 --rxd=255
>      >set fwd mac
> @@ -356,15 +356,15 @@ Test Case 10: pvp 2 queues test with packed ring vectorized path
>  1. Bind one port to igb_uio, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-4 \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
>      testpmd>set fwd mac
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=255 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2 --txd=255 --rxd=255
>      >set fwd mac
> diff --git a/test_plans/vdev_primary_secondary_test_plan.rst b/test_plans/vdev_primary_secondary_test_plan.rst
> index 33d240e8..a148fcbe 100644
> --- a/test_plans/vdev_primary_secondary_test_plan.rst
> +++ b/test_plans/vdev_primary_secondary_test_plan.rst
> @@ -143,7 +143,7 @@ SW preparation: Change one line of the symmetric_mp sample and rebuild::
> 
>  1. Bind one port to igb_uio, launch testpmd by below command::
> 
> -    ./testpmd -l 1-6 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=2,client=1' --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1'  -- -i --nb-cores=4 --rxq=2 --txq=2 --txd=1024 --rxd=1024
> +    ./testpmd -l 1-6 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=2,client=1' --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1'  -- -i --nb-cores=4 --rxq=2 --txq=2 --txd=1024 --rxd=1024
>      testpmd>set fwd txonly
>      testpmd>start
> 
> @@ -181,7 +181,7 @@ Test Case 2: Virtio-pmd primary and secondary process hotplug test
> 
>  1. Launch testpmd by below command::
> 
> -    ./testpmd -l 1-6 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=2,client=1' --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1'  -- -i --nb-cores=4 --rxq=2 --txq=2 --txd=1024 --rxd=1024
> +    ./testpmd -l 1-6 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=2,client=1' --vdev 'net_vhost1,iface=vhost-net1,queues=2,client=1'  -- -i --nb-cores=4 --rxq=2 --txq=2 --txd=1024 --rxd=1024
>      testpmd>set fwd txonly
>      testpmd>start
> 
> diff --git a/test_plans/vhost_1024_ethports_test_plan.rst b/test_plans/vhost_1024_ethports_test_plan.rst
> index 636ddf52..c31f62a3 100644
> --- a/test_plans/vhost_1024_ethports_test_plan.rst
> +++ b/test_plans/vhost_1024_ethports_test_plan.rst
> @@ -47,11 +47,11 @@ Test Case1:  Basic test for launch vhost with 1024 ethports
> 
>  2. Launch vhost with 1024 vdev::
> 
> -    ./testpmd -c 0x3000 -n 4 --socket-mem 10240,10240  --file-prefix=vhost --vdev 'eth_vhost0,iface=vhost-net,queues=1' \
> +    ./testpmd -c 0x3000 -n 4 --file-prefix=vhost --vdev 'eth_vhost0,iface=vhost-net,queues=1' \
>      --vdev 'eth_vhost1,iface=vhost-net1,queues=1' ... -- -i # only two vdevs listed here; the other 1022, from eth_vhost2 to eth_vhost1023, are omitted
> 
>  3. Change "CONFIG_RTE_MAX_ETHPORTS" back to 32 in DPDK configure file::
> 
>      vi ./config/common_base
>      +CONFIG_RTE_MAX_ETHPORTS=32
> -    -CONFIG_RTE_MAX_ETHPORTS=1024
> \ No newline at end of file
> +    -CONFIG_RTE_MAX_ETHPORTS=1024
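[Editor's note] The "..." in the command above stands for the 1022 omitted ``--vdev`` arguments. They can be generated with a small loop like the following (a sketch; the function name is ours, and the iface naming follows the plan: vhost-net for the first device, vhost-netN afterwards):

```shell
# Emit "--vdev eth_vhostN,iface=vhost-net<N>,queues=1" for N = 0 .. n-1.
vdev_args() {
    n=$1
    i=0
    while [ "$i" -lt "$n" ]; do
        if [ "$i" -eq 0 ]; then suffix=""; else suffix=$i; fi
        printf '%s ' "--vdev" "eth_vhost$i,iface=vhost-net$suffix,queues=1"
        i=$((i + 1))
    done
}
```

For example ``./testpmd -c 0x3000 -n 4 --file-prefix=vhost $(vdev_args 1024) -- -i``; depending on the shell, the vdev strings may still need quoting.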
> diff --git a/test_plans/vhost_cbdma_test_plan.rst b/test_plans/vhost_cbdma_test_plan.rst
> index e94a9974..dfe064a2 100644
> --- a/test_plans/vhost_cbdma_test_plan.rst
> +++ b/test_plans/vhost_cbdma_test_plan.rst
> @@ -118,8 +118,8 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
> 
>  7. Relaunch virtio-user with vector_rx path, then repeat step 3::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=0,in_order=0,queues=1 \
>      -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      >set fwd mac
> @@ -130,7 +130,7 @@ Test Case2: Dynamic queue number test for DMA-accelerated vhost Tx operations
> 
>  1. Bind two cbdma port and one nic port to igb_uio, then launch vhost by
> below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 28-29  --socket-mem 1024,1024 --legacy-mem \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 28-29 \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@80:04.5;txq1@80:04.6],dmathr=1024' \
>      -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
>       set fwd mac
> @@ -174,7 +174,7 @@ Test Case2: Dynamic queue number test for DMA-accelerated vhost Tx operations
> 
>  6. Relaunch vhost with another two cbdma channels, check performance can reach the target and RX/TX can work normally on two queues::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 28-29  --socket-mem 1024,1024 --legacy-mem \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 28-29 \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=2,client=1,dmas=[txq0@80:04.0],dmathr=512' \
>      -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txq=2 --rxq=2
>      >set fwd mac
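[Editor's note] The ``dmas`` list in the vhost vdev argument maps each Tx queue to a CBDMA channel by its PCI address (``txq0@80:04.5`` offloads txq0 to device 80:04.5), with ``dmathr`` the packet-length threshold above which the DMA engine is used. A tiny parser makes the mapping explicit (a sketch; ``parse_dmas`` is our name, not a DPDK tool):

```shell
# Print one "queue device" pair per line from a dmas=[...] style list,
# e.g. "txq0@80:04.5;txq1@80:04.6" -> two lines of queue/address pairs.
parse_dmas() {
    echo "$1" | tr ';' '\n' | sed 's/@/ /'
}
```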
> diff --git a/test_plans/vhost_dequeue_zero_copy_test_plan.rst b/test_plans/vhost_dequeue_zero_copy_test_plan.rst
> index 0c1743cb..29fba85f 100644
> --- a/test_plans/vhost_dequeue_zero_copy_test_plan.rst
> +++ b/test_plans/vhost_dequeue_zero_copy_test_plan.rst
> @@ -55,7 +55,7 @@ Test Case 1: pvp split ring dequeue zero-copy test
>  1. Bind one 40G port to igb_uio, then launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -c 0xe -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1,dequeue-zero-copy=1' -- \
>      -i --nb-cores=1 --txd=1024 --rxd=1024 --txfreet=992
>      testpmd>set fwd mac
> @@ -65,8 +65,8 @@ Test Case 1: pvp split ring dequeue zero-copy test
>      qemu-system-x86_64 -name vm1 \
>       -cpu host -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
>       -smp cores=5,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
> -     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
> -     -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
> +     -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>       -chardev socket,id=char0,path=./vhost-net \
>       -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
>       -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024 \
> @@ -95,7 +95,7 @@ Test Case 2: pvp split ring dequeue zero-copy test with 2 queues
>  1. Bind one 40G port to igb_uio, then launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 2-4 -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -l 2-4 -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=2,dequeue-zero-copy=1' -- \
>      -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024 --txfreet=992
>      testpmd>set fwd mac
> @@ -105,8 +105,8 @@ Test Case 2: pvp split ring dequeue zero-copy test with 2 queues
>      qemu-system-x86_64 -name vm1 \
>       -cpu host -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
>       -smp cores=5,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
> -     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
> -     -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
> +     -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>       -chardev socket,id=char0,path=./vhost-net \
>       -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \
>       -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=8,rx_queue_size=1024,tx_queue_size=1024 \
> @@ -138,7 +138,7 @@ Test Case 3: pvp split ring dequeue zero-copy test with driver reload test
>  1. Bind one 40G port to igb_uio, then launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-5 -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -l 1-5 -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=16,dequeue-zero-copy=1,client=1' -- \
>      -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024 --txfreet=992
>      testpmd>set fwd mac
> @@ -148,8 +148,8 @@ Test Case 3: pvp split ring dequeue zero-copy test with driver reload test
>      qemu-system-x86_64 -name vm1 \
>       -cpu host -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
>       -smp cores=5,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
> -     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
> -     -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
> +     -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>       -chardev socket,id=char0,path=./vhost-net,server \
>       -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=16 \
>       -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=40,rx_queue_size=1024,tx_queue_size=1024 \
> @@ -158,7 +158,7 @@ Test Case 3: pvp split ring dequeue zero-copy test with driver reload test
>  3. On VM, bind virtio net to igb_uio and run testpmd::
> 
>      ./usertools/dpdk-devbind.py --bind=igb_uio xx:xx.x
> -    ./testpmd -l 0-4 -n 4 --socket-mem 1024,0 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
> +    ./testpmd -l 0-4 -n 4 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
>      testpmd>set fwd rxonly
>      testpmd>start
> 
> @@ -173,7 +173,7 @@ Test Case 3: pvp split ring dequeue zero-copy test with driver reload test
>  6. Relaunch testpmd at virtio side in VM for driver reloading::
> 
>      testpmd>quit
> -    ./testpmd -l 0-4 -n 4 --socket-mem 1024,0 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
> +    ./testpmd -l 0-4 -n 4 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> 
> @@ -190,7 +190,7 @@ Test Case 4: pvp split ring dequeue zero-copy test with maximum txfreet
> 
>  1. Bind one 40G port to igb_uio, then launch testpmd by below command::
> 
> -     ./testpmd -l 1-5 -n 4 --socket-mem 1024,1024 \
> +     ./testpmd -l 1-5 -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=16,dequeue-zero-copy=1,client=1' -- \
>      -i --nb-cores=4 --rxq=16 --txq=16  --txfreet=988 --txrs=4 --txd=992 --rxd=992
>      testpmd>set fwd mac
> @@ -200,8 +200,8 @@ Test Case 4: pvp split ring dequeue zero-copy test with maximum txfreet
>      qemu-system-x86_64 -name vm1 \
>       -cpu host -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
>       -smp cores=5,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
> -     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
> -     -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
> +     -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>       -chardev socket,id=char0,path=./vhost-net,server \
>       -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=16 \
>       -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=40,rx_queue_size=1024,tx_queue_size=1024 \
> @@ -210,7 +210,7 @@ Test Case 4: pvp split ring dequeue zero-copy test with maximum txfreet
>  3. On VM, bind virtio net to igb_uio and run testpmd::
> 
>      ./usertools/dpdk-devbind.py --bind=igb_uio xx:xx.x
> -    ./testpmd -l 0-4 -n 4 --socket-mem 1024,0 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
> +    ./testpmd -l 0-4 -n 4 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> 
> @@ -232,7 +232,7 @@ Test Case 5: pvp split ring dequeue zero-copy test with vector_rx path
>  1. Bind one port to igb_uio, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 2-4 \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=1,dequeue-zero-copy=1' \
>      -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txfreet=992
>      testpmd>set fwd mac
> @@ -240,8 +240,8 @@ Test Case 5: pvp split ring dequeue zero-copy test with vector_rx path
> 
>  2. Launch virtio-user by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=0,vectorized=1,queue_size=1024,server=1 \
>      -- -i --tx-offloads=0x0 --nb-cores=1 --txd=1024 --rxd=1024
>      >set fwd mac
> @@ -259,7 +259,7 @@ Test Case 6: pvp packed ring dequeue zero-copy test
>  1. Bind one 40G port to igb_uio, then launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -c 0xe -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1,dequeue-zero-copy=1' -- \
>      -i --nb-cores=1 --txd=1024 --rxd=1024 --txfreet=992
>      testpmd>set fwd mac
> @@ -269,8 +269,8 @@ Test Case 6: pvp packed ring dequeue zero-copy test
>      qemu-system-x86_64 -name vm1 \
>       -cpu host -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
>       -smp cores=5,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
> -     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
> -     -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
> +     -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>       -chardev socket,id=char0,path=./vhost-net \
>       -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
>       -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,packed=on \
> @@ -299,7 +299,7 @@ Test Case 7: pvp packed ring dequeue zero-copy test with 2 queues
>  1. Bind one 40G port to igb_uio, then launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 2-4 -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -l 2-4 -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=2,dequeue-zero-copy=1' -- \
>      -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024 --txfreet=992
>      testpmd>set fwd mac
> @@ -309,8 +309,8 @@ Test Case 7: pvp packed ring dequeue zero-copy test with 2 queues
>      qemu-system-x86_64 -name vm1 \
>       -cpu host -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
>       -smp cores=5,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
> -     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
> -     -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
> +     -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>       -chardev socket,id=char0,path=./vhost-net \
>       -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \
>       -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=8,rx_queue_size=1024,tx_queue_size=1024,packed=on \
> @@ -342,7 +342,7 @@ Test Case 8: pvp packed ring dequeue zero-copy test with driver reload test
>  1. Bind one 40G port to igb_uio, then launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-5 -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -l 1-5 -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=16,dequeue-zero-copy=1,client=1' -- \
>      -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024 --txfreet=992
>      testpmd>set fwd mac
> @@ -352,8 +352,8 @@ Test Case 8: pvp packed ring dequeue zero-copy test with driver reload test
>      qemu-system-x86_64 -name vm1 \
>       -cpu host -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
>       -smp cores=5,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
> -     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f \
> -     -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f \
> +     -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>       -chardev socket,id=char0,path=./vhost-net,server \
>       -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=16 \
>       -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=40,rx_queue_size=1024,tx_queue_size=1024,packed=on \
> @@ -362,7 +362,7 @@ Test Case 8: pvp packed ring dequeue zero-copy test with driver reload test
>  3. On VM, bind virtio net to igb_uio and run testpmd::
> 
>      ./usertools/dpdk-devbind.py --bind=igb_uio xx:xx.x
> -    ./testpmd -l 0-4 -n 4 --socket-mem 1024,0 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
> +    ./testpmd -l 0-4 -n 4 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
>      testpmd>set fwd rxonly
>      testpmd>start
> 
> @@ -377,7 +377,7 @@ Test Case 8: pvp packed ring dequeue zero-copy test with driver reload test
>  6. Relaunch testpmd at virtio side in VM for driver reloading::
> 
>      testpmd>quit
> -    ./testpmd -l 0-4 -n 4 --socket-mem 1024,0 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
> +    ./testpmd -l 0-4 -n 4 -- -i --nb-cores=4 --rxq=16 --txq=16 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> 
> @@ -395,7 +395,7 @@ Test Case 9: pvp packed ring dequeue zero-copy test with ring size is not power
>  1. Bind one port to igb_uio, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 2-4 \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=1,dequeue-zero-copy=1' \
>      -- -i --nb-cores=1 --txd=1024 --rxd=1024 --txfreet=992
>      testpmd>set fwd mac
> @@ -403,8 +403,8 @@ Test Case 9: pvp packed ring dequeue zero-copy test with ring size is not power
> 
>  2. Launch virtio-user by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,in_order=0,mrg_rxbuf=1,packed_vq=1,queue_size=1025,server=1 \
>      -- -i --rx-offloads=0x10 --nb-cores=1 --txd=1025 --rxd=1025
>      >set fwd mac
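[Editor's note] Test Case 9 deliberately uses queue_size=1025: packed virtqueues accept any ring size, while split virtqueues require a power of two, so a non-power-of-2 size exercises the packed-ring path specifically. The distinction is the usual bit test (a sketch; the helper name is ours):

```shell
# Exit 0 when n is a positive power of two -- the constraint split
# virtqueues have and packed virtqueues (packed_vq=1) do not.
is_pow2() {
    n=$1
    [ "$n" -gt 0 ] && [ $((n & (n - 1))) -eq 0 ]
}
```

So 1024 would be valid for either ring layout, but 1025 only for a packed ring.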
> diff --git a/test_plans/vhost_multi_queue_qemu_test_plan.rst b/test_plans/vhost_multi_queue_qemu_test_plan.rst
> index bb13a815..abaf7af6 100644
> --- a/test_plans/vhost_multi_queue_qemu_test_plan.rst
> +++ b/test_plans/vhost_multi_queue_qemu_test_plan.rst
> @@ -45,7 +45,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
> 
>  1. Bind one port to igb_uio, then launch testpmd by below command::
>      rm -rf vhost-net*
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -c 0xe -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \
>      -i --nb-cores=2 --rxq=2 --txq=2
>      testpmd>set fwd mac
> @@ -88,7 +88,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
>     ensure the vhost using 2 queues::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -c 0xe -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \
>      -i --nb-cores=2 --rxq=2 --txq=2
>      testpmd>set fwd mac
> @@ -164,7 +164,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
>     ensure the vhost using 2 queues::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -c 0xe -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \
>      -i --nb-cores=1 --rxq=1 --txq=1
>      testpmd>set fwd mac
> diff --git a/test_plans/vhost_pmd_xstats_test_plan.rst b/test_plans/vhost_pmd_xstats_test_plan.rst
> index 316b4a32..8caee819 100644
> --- a/test_plans/vhost_pmd_xstats_test_plan.rst
> +++ b/test_plans/vhost_pmd_xstats_test_plan.rst
> @@ -53,15 +53,15 @@ Test Case 1: xstats test with packed ring mergeable
> path
>  1. Bind one port to vfio-pci, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-4  \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0'
> -- -i --nb-cores=2 --rxq=2 --txq=2
>      testpmd>set fwd mac
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=0 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -83,15 +83,15 @@ Test Case 2: xstats test with packed ring non-mergeable path
>  1. Bind one port to vfio-pci, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-4  \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
>      testpmd>set fwd mac
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=0 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -111,15 +111,15 @@ Test Case 3: xstats stability test with split ring inorder mergeable path
>  1. Bind one port to vfio-pci, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-4  \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
>      testpmd>set fwd mac
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=1,mrg_rxbuf=1 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -141,15 +141,15 @@ Test Case 4: xstats test with split ring inorder non-mergeable path
>  1. Bind one port to vfio-pci, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-4  \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
>      testpmd>set fwd mac
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=1,mrg_rxbuf=0 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -169,15 +169,15 @@ Test Case 5: xstats test with split ring mergeable path
>  1. Bind one port to vfio-pci, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-4  \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
>      testpmd>set fwd mac
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=0,mrg_rxbuf=1 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -197,15 +197,15 @@ Test Case 6: xstats test with split ring non-mergeable path
>  1. Bind one port to vfio-pci, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-4  \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
>      testpmd>set fwd mac
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=0,mrg_rxbuf=0,vectorized=1 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -225,15 +225,15 @@ Test Case 7: xstats test with split ring vector_rx path
>  1. Bind one port to vfio-pci, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-4  \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
>      testpmd>set fwd mac
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,in_order=0,mrg_rxbuf=0,vectorized=1 \
>      -- -i --tx-offloads=0x0 --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -253,15 +253,15 @@ Test Case 8: xstats test with packed ring inorder mergeable path
>  1. Bind one port to vfio-pci, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-4  \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
>      testpmd>set fwd mac
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=1,in_order=1 \
>      -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -283,15 +283,15 @@ Test Case 9: xstats test with packed ring inorder non-mergeable path
>  1. Bind one port to vfio-pci, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-4  \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
>      testpmd>set fwd mac
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
>      -- -i --rx-offloads=0x10 --enable-hw-vlan-strip --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -311,15 +311,15 @@ Test Case 10: xstats test with packed ring vectorized path
>  1. Bind one port to vfio-pci, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-4  \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
>      testpmd>set fwd mac
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1 \
>      -- -i --rss-ip --nb-cores=2 --rxq=2 --txq=2
>      >set fwd mac
> @@ -339,15 +339,15 @@ Test Case 11: xstats test with packed ring vectorized path with ring size is not
>  1. Bind one port to vfio-pci, then launch vhost by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -n 4 -l 2-4  --socket-mem 1024,1024 --legacy-mem \
> +    ./testpmd -n 4 -l 2-4  \
>      --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2 --rxq=2 --txq=2
>      testpmd>set fwd mac
>      testpmd>start
> 
>  2. Launch virtio-user by below command::
> 
> -    ./testpmd -n 4 -l 5-7 --socket-mem 1024,1024 \
> -    --legacy-mem --no-pci --file-prefix=virtio \
> +    ./testpmd -n 4 -l 5-7 \
> +    --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=255 \
>      -- -i --rss-ip --nb-cores=2 --rxq=2 --txq=2 --txd=255 --rxd=255
>      >set fwd mac
> diff --git a/test_plans/vhost_qemu_mtu_test_plan.rst b/test_plans/vhost_qemu_mtu_test_plan.rst
> index d7f01ee9..60b01cfb 100644
> --- a/test_plans/vhost_qemu_mtu_test_plan.rst
> +++ b/test_plans/vhost_qemu_mtu_test_plan.rst
> @@ -46,7 +46,7 @@ Test Case: Test the MTU in virtio-net
>  =====================================
>  1. Launch the testpmd by below commands on host, and config mtu::
> 
> -    ./testpmd -c 0xc -n 4 --socket-mem 2048,2048 \
> +    ./testpmd -c 0xc -n 4 \
>      --vdev 'net_vhost0,iface=vhost-net,queues=1' \
>      -- -i --txd=512 --rxd=128 --nb-cores=1 --port-topology=chained
>      testpmd> set fwd mac
> diff --git a/test_plans/vhost_user_live_migration_test_plan.rst b/test_plans/vhost_user_live_migration_test_plan.rst
> index 2626f7af..22ff76d5 100644
> --- a/test_plans/vhost_user_live_migration_test_plan.rst
> +++ b/test_plans/vhost_user_live_migration_test_plan.rst
> @@ -77,7 +77,7 @@ On host server side:
>  2. Bind host port to igb_uio and start testpmd with vhost port::
> 
>      host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
> -    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
> +    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
>      host server# testpmd>start
> 
>  3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu
> monitor port, 5555 as the SSH port::
> @@ -100,7 +100,7 @@ On the backup server, run the vhost testpmd on the host and launch VM:
>      backup server # mkdir /mnt/huge
>      backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
>      backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
> -    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
> +    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
>      backup server # testpmd>start
> 
>  5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server::
> @@ -177,7 +177,7 @@ On host server side:
>  2. Bind host port to igb_uio and start testpmd with vhost port,note not start vhost port before launching qemu::
> 
>      host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
> -    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1,dequeue-zero-copy=1' --socket-mem 1024,1024 -- -i
> +    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1,dequeue-zero-copy=1' -- -i
> 
>  3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port::
> 
> @@ -199,7 +199,7 @@ On the backup server, run the vhost testpmd on the host and launch VM:
>      backup server # mkdir /mnt/huge
>      backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
>      backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
> -    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1,dequeue-zero-copy=1' --socket-mem 1024,1024 -- -i
> +    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1,dequeue-zero-copy=1' -- -i
> 
>  5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server::
> 
> @@ -277,7 +277,7 @@ On host server side:
>  2. Bind host port to igb_uio and start testpmd with vhost port::
> 
>      host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
> -    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
> +    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
>      host server# testpmd>start
> 
>  3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port::
> @@ -300,7 +300,7 @@ On the backup server, run the vhost testpmd on the host and launch VM:
>      backup server # mkdir /mnt/huge
>      backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
>      backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
> -    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
> +    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
>      backup server # testpmd>start
> 
>  5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server::
> @@ -365,7 +365,7 @@ On host server side:
>  2. Bind host port to igb_uio and start testpmd with vhost port::
> 
>      host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
> -    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' --socket-mem 1024,1024 -- -i --nb-cores=4 --rxq=4 --txq=4
> +    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4
>      host server# testpmd>start
> 
>  3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port::
> @@ -388,7 +388,7 @@ On the backup server, run the vhost testpmd on the host and launch VM:
>      backup server # mkdir /mnt/huge
>      backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
>      backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
> -    backup server#./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' --socket-mem 1024,1024 -- -i --nb-cores=4 --rxq=4 --txq=4
> +    backup server#./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4
>      backup server # testpmd>start
> 
>  5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server::
> @@ -457,7 +457,7 @@ On host server side:
>  2. Bind host port to igb_uio and start testpmd with vhost port::
> 
>      host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
> -    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
> +    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
>      host server# testpmd>start
> 
>  3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port::
> @@ -480,7 +480,7 @@ On the backup server, run the vhost testpmd on the host and launch VM:
>      backup server # mkdir /mnt/huge
>      backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
>      backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
> -    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
> +    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
>      backup server # testpmd>start
> 
>  5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server::
> @@ -557,7 +557,7 @@ On host server side:
>  2. Bind host port to igb_uio and start testpmd with vhost port,note not start vhost port before launching qemu::
> 
>      host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
> -    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1,dequeue-zero-copy=1' --socket-mem 1024,1024 -- -i
> +    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1,dequeue-zero-copy=1' -- -i
> 
>  3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port::
> 
> @@ -579,7 +579,7 @@ On the backup server, run the vhost testpmd on the host and launch VM:
>      backup server # mkdir /mnt/huge
>      backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
>      backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
> -    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1,dequeue-zero-copy=1' --socket-mem 1024,1024 -- -i
> +    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1,dequeue-zero-copy=1' -- -i
> 
>  5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server::
> 
> @@ -657,7 +657,7 @@ On host server side:
>  2. Bind host port to igb_uio and start testpmd with vhost port::
> 
>      host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
> -    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
> +    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
>      host server# testpmd>start
> 
>  3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port::
> @@ -680,7 +680,7 @@ On the backup server, run the vhost testpmd on the host and launch VM:
>      backup server # mkdir /mnt/huge
>      backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
>      backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
> -    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
> +    backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' -- -i
>      backup server # testpmd>start
> 
>  5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server::
> @@ -745,7 +745,7 @@ On host server side:
>  2. Bind host port to igb_uio and start testpmd with vhost port::
> 
>      host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
> -    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' --socket-mem 1024,1024 -- -i --nb-cores=4 --rxq=4 --txq=4
> +    host server# ./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4
>      host server# testpmd>start
> 
>  3. Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port::
> @@ -768,7 +768,7 @@ On the backup server, run the vhost testpmd on the host and launch VM:
>      backup server # mkdir /mnt/huge
>      backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
>      backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
> -    backup server#./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' --socket-mem 1024,1024 -- -i --nb-cores=4 --rxq=4 --txq=4
> +    backup server#./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4
>      backup server # testpmd>start
> 
>  5. Launch VM on the backup server, the script is similar to host, need add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the NFS mounted folder, VM image is the exact one on host server::
> diff --git a/test_plans/virtio_event_idx_interrupt_test_plan.rst b/test_plans/virtio_event_idx_interrupt_test_plan.rst
> index 1fbadee0..6cb00ab7 100644
> --- a/test_plans/virtio_event_idx_interrupt_test_plan.rst
> +++ b/test_plans/virtio_event_idx_interrupt_test_plan.rst
> @@ -52,7 +52,7 @@ Test Case 1: Compare interrupt times with and without split ring virtio event id
>  1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i
> +    ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i
>      --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>start
> 
> @@ -62,7 +62,7 @@ Test Case 1: Compare interrupt times with and without split ring virtio event id
>      qemu-system-x86_64 -name us-vhost-vm1 \
>       -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
>       -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
> -     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6004-:22 \
>       -chardev socket,id=char0,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
>       -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
>       -vnc :12 -daemonize
> @@ -85,7 +85,7 @@ Test Case 2: Split ring virtio-pci driver reload test
>  1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>start
> 
>  2. Launch VM::
> @@ -94,7 +94,7 @@ Test Case 2: Split ring virtio-pci driver reload test
>      qemu-system-x86_64 -name us-vhost-vm1 \
>       -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
>       -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
> -     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6004-:22 \
>       -chardev socket,id=char0,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
>       -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
>       -vnc :12 -daemonize
> @@ -123,7 +123,7 @@ Test Case 3: Wake up split ring virtio-net cores with event idx interrupt mode 1
>  1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-17 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
> +    ./testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
>      testpmd>start
> 
>  2. Launch VM::
> @@ -132,7 +132,7 @@ Test Case 3: Wake up split ring virtio-net cores with event idx interrupt mode 1
>      qemu-system-x86_64 -name us-vhost-vm1 \
>       -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
>       -smp cores=16,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
> -     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6004-:22 \
>       -chardev socket,id=char0,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=16 \
>       -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=40,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
>       -vnc :12 -daemonize
> @@ -158,7 +158,7 @@ Test Case 4: Compare interrupt times with and without packed ring virtio event i
>  1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i
> +    ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i
>      --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>start
> 
> @@ -168,7 +168,7 @@ Test Case 4: Compare interrupt times with and without packed ring virtio event i
>      qemu-system-x86_64 -name us-vhost-vm1 \
>       -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
>       -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
> -     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6004-:22 \
>       -chardev socket,id=char0,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
>       -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
>       -vnc :12 -daemonize
> @@ -191,7 +191,7 @@ Test Case 5: Packed ring virtio-pci driver reload test
>  1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::
> 
>      rm -rf vhost-net*
> -    ./testpmd -c 0xF0000000 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -c 0xF0000000 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>start
> 
>  2. Launch VM::
> @@ -200,7 +200,7 @@ Test Case 5: Packed ring virtio-pci driver reload test
>      qemu-system-x86_64 -name us-vhost-vm1 \
>       -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
>       -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
> -     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6004-:22 \
>       -chardev socket,id=char0,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
>       -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
>       -vnc :12 -daemonize
> @@ -229,7 +229,7 @@ Test Case 6: Wake up packed ring virtio-net cores with event idx interrupt mode
>  1. Bind one nic port to igb_uio, then launch the vhost sample by below commands::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-17 -n 4 --socket-mem 2048,2048 --legacy-mem --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
> +    ./testpmd -l 1-17 -n 4 --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,queues=16' -- -i --nb-cores=16 --txd=1024 --rxd=1024 --rxq=16 --txq=16
>      testpmd>start
> 
>  2. Launch VM::
> @@ -238,7 +238,7 @@ Test Case 6: Wake up packed ring virtio-net cores with event idx interrupt mode
>      qemu-system-x86_64 -name us-vhost-vm1 \
>       -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \
>       -smp cores=16,sockets=1 -drive file=/home/osimg/ubuntu16.img  \
> -     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6004-:22 \
> +     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6004-:22 \
>       -chardev socket,id=char0,path=./vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=16 \
>       -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=40,csum=on,gso=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on,packed=on \
>       -vnc :12 -daemonize
> @@ -303,7 +303,7 @@ Test Case 8: wake up vhost-user cores with event idx interrupt mode and cbdma en
>  1. Launch l3fwd-power example app with client mode::
> 
>      ./examples/l3fwd-power/build/l3fwd-power -l 1-16 \
> -    -n 4 --socket-mem 1024,1024 --legacy-mem \
> +    -n 4 \
>      --log-level=9 \
>      --vdev 'eth_vhost0,iface=/vhost-
> net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;t
> xq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@
> 00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@0
> 0:04.5;txq14@00:04.6;txq15@00:04.7],dmathr=1024' \
>      -- -p 0x1 \
> @@ -317,7 +317,7 @@ Test Case 8: wake up vhost-user cores with event idx
> interrupt mode and cbdma en
>  3. Relauch l3fwd-power sample for port up::
> 
>      ./examples/l3fwd-power/build/l3fwd-power -l 1-16 \
> -    -n 4 --socket-mem 1024,1024 --legacy-mem \
> +    -n 4 \
>      --log-level=9 \
>      --vdev 'eth_vhost0,iface=/vhost-
> net0,queues=16,client=1,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;t
> xq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@
> 00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@0
> 0:04.5;txq14@00:04.6;txq15@00:04.7],dmathr=1024' \
>      -- -p 0x1 \
> diff --git a/test_plans/virtio_pvp_regression_test_plan.rst b/test_plans/virtio_pvp_regression_test_plan.rst
> index 69df61e9..df76b544 100644
> --- a/test_plans/virtio_pvp_regression_test_plan.rst
> +++ b/test_plans/virtio_pvp_regression_test_plan.rst
> @@ -52,7 +52,7 @@ Test Case 1: pvp test with virtio 0.95 mergeable path
>  1. Bind one port to igb_uio, then launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-3 -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -l 1-3 -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> @@ -64,7 +64,7 @@ Test Case 1: pvp test with virtio 0.95 mergeable path
>      -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
>      -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> -    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>      -chardev socket,id=char0,path=./vhost-net,server \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=2  \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15 \
> @@ -91,7 +91,7 @@ Test Case 2: pvp test with virtio 0.95 non-mergeable path
>  1. Bind one port to igb_uio, then launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-3 -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -l 1-3 -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> @@ -103,7 +103,7 @@ Test Case 2: pvp test with virtio 0.95 non-mergeable path
>      -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
>      -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> -    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>      -chardev socket,id=char0,path=./vhost-net,server \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=2  \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15 \
> @@ -130,7 +130,7 @@ Test Case 3: pvp test with virtio 0.95 vector_rx path
>  1. Bind one port to igb_uio, then launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-3 -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -l 1-3 -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> @@ -142,7 +142,7 @@ Test Case 3: pvp test with virtio 0.95 vector_rx path
>      -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
>      -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> -    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>      -chardev socket,id=char0,path=./vhost-net,server \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=2  \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15 \
> @@ -169,7 +169,7 @@ Test Case 4: pvp test with virtio 1.0 mergeable path
>  1. Bind one port to igb_uio, then launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-3 -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -l 1-3 -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> @@ -181,7 +181,7 @@ Test Case 4: pvp test with virtio 1.0 mergeable path
>      -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
>      -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> -    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>      -chardev socket,id=char0,path=./vhost-net,server \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=2  \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15 \
> @@ -208,7 +208,7 @@ Test Case 5: pvp test with virtio 1.0 non-mergeable path
>  1. Bind one port to igb_uio, then launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-3 -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -l 1-3 -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> @@ -220,7 +220,7 @@ Test Case 5: pvp test with virtio 1.0 non-mergeable path
>      -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
>      -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> -    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>      -chardev socket,id=char0,path=./vhost-net,server \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=2  \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15 \
> @@ -247,7 +247,7 @@ Test Case 6: pvp test with virtio 1.0 vector_rx path
>  1. Bind one port to igb_uio, then launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-3 -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -l 1-3 -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> @@ -259,7 +259,7 @@ Test Case 6: pvp test with virtio 1.0 vector_rx path
>      -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu16.img  \
>      -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
>      -device virtserialport,chardev=vm2_qga0,name=org.qemu.guest_agent.2 -daemonize \
> -    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \
> +    -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,macaddr=00:00:00:08:e8:aa,addr=1f -net user,hostfwd=tcp:127.0.0.1:6002-:22 \
>      -chardev socket,id=char0,path=./vhost-net,server \
>      -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=2  \
>      -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024,mq=on,vectors=15 \
> @@ -286,7 +286,7 @@ Test Case 7: pvp test with virtio 1.1 mergeable path
>  1. Bind one port to igb_uio, then launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-3 -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -l 1-3 -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> @@ -325,7 +325,7 @@ Test Case 8: pvp test with virtio 1.1 non-mergeable path
>  1. Bind one port to igb_uio, then launch testpmd by below command::
> 
>      rm -rf vhost-net*
> -    ./testpmd -l 1-3 -n 4 --socket-mem 1024,1024 \
> +    ./testpmd -l 1-3 -n 4 \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=2,client=1' -- -i --nb-cores=2 --rxq=2 --txq=2 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> diff --git a/test_plans/virtio_user_as_exceptional_path_test_plan.rst b/test_plans/virtio_user_as_exceptional_path_test_plan.rst
> index a261e15c..f04271fa 100644
> --- a/test_plans/virtio_user_as_exceptional_path_test_plan.rst
> +++ b/test_plans/virtio_user_as_exceptional_path_test_plan.rst
> @@ -74,7 +74,7 @@ Flow:tap0-->vhost-net-->virtio_user-->nic0-->nic1
>  3. Bind nic0 to igb_uio and launch the virtio_user with testpmd::
> 
>      ./dpdk-devbind.py -b igb_uio xx:xx.x        # xx:xx.x is the pci addr of nic0
> -    ./testpmd -c 0xc0000 -n 4 --socket-mem 1024,1024 --file-prefix=test2 \
> +    ./testpmd -c 0xc0000 -n 4 --file-prefix=test2 \
>      --vdev=virtio_user0,mac=00:01:02:03:04:05,path=/dev/vhost-net,queue_size=1024 -- -i --rxd=1024 --txd=1024
>      testpmd>set fwd csum
>      testpmd>stop
> @@ -126,7 +126,7 @@ Flow: tap0<-->vhost-net<-->virtio_user<-->nic0<-->TG
>  2. Bind the physical port to igb_uio, launch testpmd with one queue for virtio_user::
> 
>      ./dpdk-devbind.py -b igb_uio xx:xx.x        # xx:xx.x is the pci addr of nic0
> -    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024  --file-prefix=test2 --vdev=virtio_user0,mac=00:01:02:03:04:05,path=/dev/vhost-net,queue_size=1024,queues=1 -- -i --rxd=1024 --txd=1024
> +    ./testpmd -l 1-2 -n 4  --file-prefix=test2 --vdev=virtio_user0,mac=00:01:02:03:04:05,path=/dev/vhost-net,queue_size=1024,queues=1 -- -i --rxd=1024 --txd=1024
> 
>  3. Check if there is a tap device generated::
> 
> @@ -156,7 +156,7 @@ Flow: tap0<-->vhost-net<-->virtio_user<-->nic0<-->TG
>  2. Bind the physical port to igb_uio, launch testpmd with two queues for virtio_user::
> 
>      ./dpdk-devbind.py -b igb_uio xx:xx.x        # xx:xx.x is the pci addr of nic0
> -    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024  --file-prefix=test2 --vdev=virtio_user0,mac=00:01:02:03:04:05,path=/dev/vhost-net,queue_size=1024,queues=2 -- -i --rxd=1024 --txd=1024 --txq=2 --rxq=2 --nb-cores=1
> +    ./testpmd -l 1-2 -n 4  --file-prefix=test2 --vdev=virtio_user0,mac=00:01:02:03:04:05,path=/dev/vhost-net,queue_size=1024,queues=2 -- -i --rxd=1024 --txd=1024 --txq=2 --rxq=2 --nb-cores=1
> 
>  3. Check if there is a tap device generated::
> 
> diff --git a/test_plans/virtio_user_for_container_networking_test_plan.rst b/test_plans/virtio_user_for_container_networking_test_plan.rst
> index 2d68f5f0..15c9c248 100644
> --- a/test_plans/virtio_user_for_container_networking_test_plan.rst
> +++ b/test_plans/virtio_user_for_container_networking_test_plan.rst
> @@ -72,7 +72,7 @@ Test Case 1: packet forward test for container networking
> 
>  2. Bind one port to igb_uio, launch vhost::
> 
> -    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024  --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i
> +    ./testpmd -l 1-2 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=1,client=0' -- -i
> 
>  2. Start a container instance with a virtio-user port::
> 
> @@ -94,7 +94,7 @@ Test Case 2: packet forward with multi-queues for container networking
> 
>  2. Bind one port to igb_uio, launch vhost::
> 
> -    ./testpmd -l 1-3 -n 4 --socket-mem 1024,1024  --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2
> +    ./testpmd -l 1-3 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,client=0' -- -i --nb-cores=2
> 
>  2. Start a container instance with a virtio-user port::
> 
> diff --git a/test_plans/vm2vm_virtio_pmd_test_plan.rst b/test_plans/vm2vm_virtio_pmd_test_plan.rst
> index 11daaabb..db410e48 100644
> --- a/test_plans/vm2vm_virtio_pmd_test_plan.rst
> +++ b/test_plans/vm2vm_virtio_pmd_test_plan.rst
> @@ -62,7 +62,7 @@ Test Case 1: VM2VM vhost-user/virtio-pmd with vector_rx path
>  1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::
> 
>       rm -rf vhost-net*
> -    ./testpmd -c 0xc0000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> 
> @@ -117,7 +117,7 @@ Test Case 2: VM2VM vhost-user/virtio-pmd with normal path
>  1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::
> 
>       rm -rf vhost-net*
> -    ./testpmd -c 0xc0000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> 
> @@ -172,7 +172,7 @@ Test Case 3: VM2VM vhost-user/virtio1.0-pmd with vector_rx path
>  1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::
> 
>       rm -rf vhost-net*
> -    ./testpmd -c 0xc0000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> 
> @@ -227,7 +227,7 @@ Test Case 4: VM2VM vhost-user/virtio1.0-pmd with normal path
>  1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::
> 
>       rm -rf vhost-net*
> -    ./testpmd -c 0xc0000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> 
> @@ -281,7 +281,7 @@ Test Case 5: VM2VM vhost-user/virtio-pmd mergeable path with payload valid check
> 
>  1. Bind virtio with igb_uio driver, launch the testpmd by below commands::
> 
> -    ./testpmd -c 0xc0000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> 
> @@ -369,7 +369,7 @@ Test Case 6: VM2VM vhost-user/virtio1.0-pmd mergeable path with payload valid ch
> 
>  1. Bind virtio with igb_uio driver, launch the testpmd by below commands::
> 
> -    ./testpmd -c 0xc0000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> 
> @@ -457,7 +457,7 @@ Test Case 7: VM2VM vhost-user/virtio1.1-pmd mergeable path with payload valid ch
> 
>  1. Bind virtio with igb_uio driver, launch the testpmd by below commands::
> 
> -    ./testpmd -c 0xc0000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> 
> @@ -546,7 +546,7 @@ Test Case 8: VM2VM vhost-user/virtio1.1-pmd with normal path
>  1. Bind one physical nic port to igb_uio, then launch the testpmd by below commands::
> 
>       rm -rf vhost-net*
> -    ./testpmd -c 0xc0000 -n 4 --socket-mem 2048,2048 --legacy-mem --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    ./testpmd -c 0xc0000 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net0,queues=1' --vdev 'net_vhost1,iface=vhost-net1,queues=1'  -- -i --nb-cores=1 --txd=1024 --rxd=1024
>      testpmd>set fwd mac
>      testpmd>start
> 
> diff --git a/test_plans/vm2vm_virtio_user_test_plan.rst b/test_plans/vm2vm_virtio_user_test_plan.rst
> index d0be8144..14f9b438 100644
> --- a/test_plans/vm2vm_virtio_user_test_plan.rst
> +++ b/test_plans/vm2vm_virtio_user_test_plan.rst
> @@ -64,13 +64,13 @@ Test Case 1: packed virtqueue vm2vm mergeable path test
> 
>  1. Launch vhost by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
>      -i --nb-cores=1 --no-flush-rx
> 
>  2. Launch virtio-user1 by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
>      --no-pci --file-prefix=virtio1 \
>      --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
> @@ -83,7 +83,7 @@ Test Case 1: packed virtqueue vm2vm mergeable path test
> 
>  4. Launch virtio-user0 and send 8k length packets::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
>      --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
> @@ -106,7 +106,7 @@ Test Case 1: packed virtqueue vm2vm mergeable path test
> 
>  7. Launch testpmd by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost  \
>      --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
>      -i --nb-cores=1 --no-flush-rx
>      testpmd>set fwd rxonly
> @@ -118,7 +118,7 @@ Test Case 1: packed virtqueue vm2vm mergeable path test
> 
>  9. Launch virtio-user1 by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 --no-pci \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --no-pci \
>      --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
>      testpmd>set burst 1
> @@ -142,13 +142,13 @@ Test Case 2: packed virtqueue vm2vm inorder mergeable path test
> 
>  1. Launch testpmd by below command::
> 
> -    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./testpmd -l 1-2 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
>      -i --nb-cores=1 --no-flush-rx
> 
>  2. Launch virtio-user1 by below command::
> 
> -    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    ./testpmd -n 4 -l 7-8 \
>      --no-pci --file-prefix=virtio1 \
>      --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
> @@ -161,7 +161,7 @@ Test Case 2: packed virtqueue vm2vm inorder mergeable path test
> 
>  4. Launch virtio-user0 and send 8k length packets::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    ./testpmd -n 4 -l 5-6 \
>      --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
> @@ -179,7 +179,7 @@ Test Case 2: packed virtqueue vm2vm inorder mergeable path test
> 
>  6. Launch testpmd by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost  \
>      --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
>      -i --nb-cores=1 --no-flush-rx
>      testpmd>set fwd rxonly
> @@ -191,7 +191,7 @@ Test Case 2: packed virtqueue vm2vm inorder mergeable path test
> 
>  8. Launch virtio-user1 by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
>      --no-pci \
>      --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=1 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
> @@ -212,13 +212,13 @@ Test Case 3: packed virtqueue vm2vm non-mergeable path test
> 
>  1. Launch testpmd by below command::
> 
> -    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./testpmd -l 1-2 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
>      -i --nb-cores=1 --no-flush-rx
> 
>  2. Launch virtio-user1 by below command::
> 
> -    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    ./testpmd -n 4 -l 7-8 \
>      --no-pci --file-prefix=virtio1 \
>      --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
> @@ -229,7 +229,7 @@ Test Case 3: packed virtqueue vm2vm non-mergeable path test
> 
>  4. Launch virtio-user0 and send 8k length packets::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    ./testpmd -n 4 -l 5-6 \
>      --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
> @@ -246,7 +246,7 @@ Test Case 3: packed virtqueue vm2vm non-mergeable path test
> 
>  6. Launch testpmd by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost  \
>      --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
>      -i --nb-cores=1 --no-flush-rx
>      testpmd>set fwd rxonly
> @@ -258,7 +258,7 @@ Test Case 3: packed virtqueue vm2vm non-mergeable path test
> 
>  8. Launch virtio-user1 by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
>      --no-pci \
>      --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=0 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
> @@ -275,13 +275,13 @@ Test Case 4: packed virtqueue vm2vm inorder non-mergeable path test
> 
>  1. Launch testpmd by below command::
> 
> -    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./testpmd -l 1-2 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
>      -i --nb-cores=1 --no-flush-rx
> 
>  2. Launch virtio-user1 by below command::
> 
> -    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    ./testpmd -n 4 -l 7-8 \
>      --no-pci --file-prefix=virtio1 \
>      --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,packed_vec=1 \
>      -- -i --rx-offloads=0x10 --nb-cores=1 --txd=256 --rxd=256
> @@ -294,7 +294,7 @@ Test Case 4: packed virtqueue vm2vm inorder non-mergeable path test
> 
>  4. Launch virtio-user0 and send 8k length packets::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    ./testpmd -n 4 -l 5-6 \
>      --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,packed_vec=1 \
>      -- -i --rx-offloads=0x10 --nb-cores=1 --txd=256 --rxd=256
> @@ -311,7 +311,7 @@ Test Case 4: packed virtqueue vm2vm inorder non-mergeable path test
> 
>  6. Launch testpmd by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost  \
>      --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
>      -i --nb-cores=1 --no-flush-rx
>      testpmd>set fwd rxonly
> @@ -323,7 +323,7 @@ Test Case 4: packed virtqueue vm2vm inorder non-mergeable path test
> 
>  8. Launch virtio-user1 by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
>      --no-pci \
>      --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,packed_vec=1 \
>      -- -i --rx-offloads=0x10 --nb-cores=1 --txd=256 --rxd=256
> @@ -340,13 +340,13 @@ Test Case 5: split virtqueue vm2vm mergeable path test
> 
>  1. Launch vhost by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
>      -i --nb-cores=1 --no-flush-rx
> 
>  2. Launch virtio-user1 by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
>      --no-pci --file-prefix=virtio1 \
>      --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
> @@ -359,7 +359,7 @@ Test Case 5: split virtqueue vm2vm mergeable path test
> 
>  4. Launch virtio-user0 and send 8k length packets::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
>      --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
> @@ -382,7 +382,7 @@ Test Case 5: split virtqueue vm2vm mergeable path test
> 
>  7. Launch testpmd by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost  \
>      --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
>      -i --nb-cores=1 --no-flush-rx
>      testpmd>set fwd rxonly
> @@ -394,7 +394,7 @@ Test Case 5: split virtqueue vm2vm mergeable path test
> 
>  9. Launch virtio-user1 by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
>      --no-pci \
>      --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
> @@ -419,13 +419,13 @@ Test Case 6: split virtqueue vm2vm inorder mergeable path test
> 
>  1. Launch testpmd by below command::
> 
> -    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./testpmd -l 1-2 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
>      -i --nb-cores=1 --no-flush-rx
> 
>  2. Launch virtio-user1 by below command::
> 
> -    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    ./testpmd -n 4 -l 7-8 \
>      --no-pci --file-prefix=virtio1 \
>      --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
> @@ -438,7 +438,7 @@ Test Case 6: split virtqueue vm2vm inorder mergeable path test
> 
>  4. Launch virtio-user0 and send 8k length packets::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    ./testpmd -n 4 -l 5-6 \
>      --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
> @@ -455,7 +455,7 @@ Test Case 6: split virtqueue vm2vm inorder mergeable path test
> 
>  6. Launch testpmd by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost  \
>      --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
>      -i --nb-cores=1 --no-flush-rx
>      testpmd>set fwd rxonly
> @@ -467,7 +467,7 @@ Test Case 6: split virtqueue vm2vm inorder mergeable path test
> 
>  8. Launch virtio-user1 by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
>      --no-pci \
>      --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
> @@ -488,13 +488,13 @@ Test Case 7: split virtqueue vm2vm non-mergeable path test
> 
>  1. Launch testpmd by below command::
> 
> -    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./testpmd -l 1-2 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
>      -i --nb-cores=1 --no-flush-rx
> 
>  2. Launch virtio-user1 by below command::
> 
> -    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    ./testpmd -n 4 -l 7-8 \
>      --no-pci --file-prefix=virtio1 \
>      --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256 --enable-hw-vlan-strip
> @@ -505,7 +505,7 @@ Test Case 7: split virtqueue vm2vm non-mergeable path test
> 
>  4. Launch virtio-user0 and send 8k length packets::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    ./testpmd -n 4 -l 5-6 \
>      --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256 --enable-hw-vlan-strip
> @@ -522,7 +522,7 @@ Test Case 7: split virtqueue vm2vm non-mergeable path test
> 
>  6. Launch testpmd by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost  \
>      --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
>      -i --nb-cores=1 --no-flush-rx
>      testpmd>set fwd rxonly
> @@ -534,7 +534,7 @@ Test Case 7: split virtqueue vm2vm non-mergeable path test
> 
>  8. Launch virtio-user1 by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
>      --no-pci \
>      --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256 --enable-hw-vlan-strip
> @@ -551,13 +551,13 @@ Test Case 8: split virtqueue vm2vm inorder non-mergeable path test
> 
>  1. Launch testpmd by below command::
> 
> -    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./testpmd -l 1-2 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
>      -i --nb-cores=1 --no-flush-rx
> 
>  2. Launch virtio-user1 by below command::
> 
> -    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    ./testpmd -n 4 -l 7-8 \
>      --no-pci --file-prefix=virtio1 \
>      --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=1 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
> @@ -570,7 +570,7 @@ Test Case 8: split virtqueue vm2vm inorder non-mergeable path test
> 
>  4. Launch virtio-user0 and send 8k length packets::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    ./testpmd -n 4 -l 5-6 \
>      --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=1 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
> @@ -587,7 +587,7 @@ Test Case 8: split virtqueue vm2vm inorder non-mergeable path test
> 
>  6. Launch testpmd by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost  \
>      --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
>      -i --nb-cores=1 --no-flush-rx
>      testpmd>set fwd rxonly
> @@ -599,7 +599,7 @@ Test Case 8: split virtqueue vm2vm inorder non-mergeable path test
> 
>  8. Launch virtio-user1 by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
>      --no-pci \
>      --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=1 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
> @@ -616,13 +616,13 @@ Test Case 9: split virtqueue vm2vm vector_rx path test
> 
>  1. Launch testpmd by below command::
> 
> -    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./testpmd -l 1-2 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
>      -i --nb-cores=1 --no-flush-rx
> 
>  2. Launch virtio-user1 by below command::
> 
> -    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    ./testpmd -n 4 -l 7-8 \
>      --no-pci --file-prefix=virtio1 \
>      --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=256 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
> @@ -633,7 +633,7 @@ Test Case 9: split virtqueue vm2vm vector_rx path test
> 
>  4. Launch virtio-user0 and send 8k length packets::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    ./testpmd -n 4 -l 5-6 \
>      --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=256 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
> @@ -650,7 +650,7 @@ Test Case 9: split virtqueue vm2vm vector_rx path test
> 
>  6. Launch testpmd by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost  \
>      --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
>      -i --nb-cores=1 --no-flush-rx
>      testpmd>set fwd rxonly
> @@ -662,7 +662,7 @@ Test Case 9: split virtqueue vm2vm vector_rx path test
> 
>  8. Launch virtio-user1 by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
>      --no-pci \
>      --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=256 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
> @@ -679,13 +679,13 @@ Test Case 10: packed virtqueue vm2vm vectorized path test
> 
>  1. Launch testpmd by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
>      -i --nb-cores=1 --no-flush-rx
> 
>  2. Launch virtio-user1 by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
>      --no-pci --file-prefix=virtio1 \
>      --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=256 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
> @@ -698,7 +698,7 @@ Test Case 10: packed virtqueue vm2vm vectorized path test
> 
>  4. Launch virtio-user0 and send 8k length packets::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
>      --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=256 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
> @@ -715,7 +715,7 @@ Test Case 10: packed virtqueue vm2vm vectorized path test
> 
>  6. Launch testpmd by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost  \
>      --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
>      -i --nb-cores=1 --no-flush-rx
>      testpmd>set fwd rxonly
> @@ -727,7 +727,7 @@ Test Case 10: packed virtqueue vm2vm vectorized path test
> 
>  8. Launch virtio-user1 by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
>      --no-pci \
>      --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=256 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
> @@ -744,13 +744,13 @@ Test Case 10: packed virtqueue vm2vm vectorized path test with ring size is not
> 
>  1. Launch testpmd by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
>      -i --nb-cores=1 --no-flush-rx
> 
>  2. Launch virtio-user1 by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
>      --no-pci --file-prefix=virtio1 \
>      --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=255 \
>      -- -i --nb-cores=1 --txd=255 --rxd=255
> @@ -763,7 +763,7 @@ Test Case 10: packed virtqueue vm2vm vectorized path test with ring size is not
> 
>  4. Launch virtio-user0 and send 8k length packets::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
>      --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=255 \
>      -- -i --nb-cores=1 --txd=255 --rxd=255
> @@ -780,7 +780,7 @@ Test Case 10: packed virtqueue vm2vm vectorized path test with ring size is not
> 
>  6. Launch testpmd by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --no-pci --file-prefix=vhost  \
>      --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
>      -i --nb-cores=1 --no-flush-rx
>      testpmd>set fwd rxonly
> @@ -792,7 +792,7 @@ Test Case 10: packed virtqueue vm2vm vectorized path test with ring size is not
> 
>  8. Launch virtio-user1 by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
>      --no-pci \
>      --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=255 \
>      -- -i --nb-cores=1 --txd=255 --rxd=255
> @@ -815,7 +815,7 @@ Test Case 11: split virtqueue vm2vm inorder mergeable path multi-queues payload
> 
>  2. Launch virtio-user1 by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
>      --no-pci --file-prefix=virtio1 \
>      --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
>      -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
> @@ -828,7 +828,7 @@ Test Case 11: split virtqueue vm2vm inorder mergeable path multi-queues payload
> 
>  4. Launch virtio-user0 and send packets::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
>      --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
>      -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
> @@ -846,7 +846,7 @@ Test Case 11: split virtqueue vm2vm inorder mergeable path multi-queues payload
> 
>  6. Restart step 1-3, Launch virtio-user0 and send packets::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
>      --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=1 \
>      -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
> @@ -874,7 +874,7 @@ Test Case 12: split virtqueue vm2vm mergeable path multi-queues payload check wi
> 
>  2. Launch virtio-user1 by below command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 \
>      --no-pci --file-prefix=virtio1 \
>      --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
>      -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
> @@ -887,7 +887,7 @@ Test Case 12: split virtqueue vm2vm mergeable path multi-queues payload check wi
> 
>  4. Launch virtio-user0 and send 8k length packets::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
>      --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
>      -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
> @@ -903,7 +903,7 @@ Test Case 12: split virtqueue vm2vm mergeable path multi-queues payload check wi
> 
>  6. Restart step 1-3, Launch virtio-user0 and send packets::
> 
> -    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 \
>      --no-pci --file-prefix=virtio \
>      --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=2,server=1,packed_vq=0,mrg_rxbuf=1,in_order=0 \
>      -- -i --nb-cores=1 --rxq=2 --txq=2 --txd=256 --rxd=256
> --
> 2.25.1
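[Editor's note] The edit applied in every hunk above is mechanical: delete the `--socket-mem 1024,1024` (and, where present, `--legacy-mem`) EAL options from each testpmd command line, leaving the rest of the invocation untouched. As a sketch of that transformation, the hypothetical `sed` one-liner below rewrites a sample command string; testpmd itself is not run here, and the sample command is illustrative, not taken from a specific test plan:

```shell
# Strip the --socket-mem/--legacy-mem EAL flags from a testpmd command
# line, mirroring the change this patch makes throughout the test plans.
old='./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --legacy-mem --no-pci -- -i'
new=$(printf '%s' "$old" | sed -E 's/ --socket-mem [0-9,]+//; s/ --legacy-mem//')
printf '%s\n' "$new"   # ./testpmd -l 1-2 -n 4 --no-pci -- -i
```

Recent DPDK releases default to dynamic memory allocation, so dropping these flags lets testpmd grow its hugepage usage on demand instead of reserving 1024 MB per NUMA node up front.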



* Re: [dts] [PATCH V1]test_plans: update virtio related test plans
  2020-09-07  8:22 [dts] [PATCH V1]test_plans: update virtio related test plans Xiao Qimai
  2020-09-07  8:37 ` Xiao, QimaiX
@ 2020-09-10  1:10 ` Tu, Lijuan
  1 sibling, 0 replies; 3+ messages in thread
From: Tu, Lijuan @ 2020-09-10  1:10 UTC (permalink / raw)
  To: Xiao, QimaiX, dts; +Cc: Xiao, QimaiX

> Subject: [dts] [PATCH V1]test_plans: update virtio related test plans
> 
> 1. remove vlan in qemu command, since higher version of qemu not support
> this parameter;
> 2. remove --socket-mem and --legacy-mem in testpmd cmd
> 
> Signed-off-by: Xiao Qimai <qimaix.xiao@intel.com>

Applied

