* [dts] [PATCH]test_plans: add new cases to af_xdp_test_plan
From: Peng Yuan @ 2019-07-23 5:51 UTC (permalink / raw)
To: dts; +Cc: Peng Yuan
Add five test cases to af_xdp_test_plan.rst covering zero copy, multiqueue,
and need_wakeup.
Signed-off-by: Peng Yuan <yuan.peng@intel.com>
diff --git a/test_plans/af_xdp_test_plan.rst b/test_plans/af_xdp_test_plan.rst
index 4576fef..d12ef0c 100644
--- a/test_plans/af_xdp_test_plan.rst
+++ b/test_plans/af_xdp_test_plan.rst
@@ -51,7 +51,7 @@ Prerequisites
2. The NIC is located on socket 1, so we use the cores of socket 1.
-3. Take the kernel >= v5.1-rc1, build kernel and replace your host
+3. Take the kernel >= v5.2-rc2, build the kernel and replace your host
kernel with it.
Update the compiler to the required version.
Make sure you turn on XDP sockets when compiling::
@@ -60,59 +60,239 @@ Prerequisites
Networking options -->
[ * ] XDP sockets
+ Then compile the kernel::
+
+ make -j16
+ make modules_install install
+
4. Build libbpf in tools/lib/bpf::
cd tools/lib/bpf
- make
+ make install_lib prefix=/usr
+ make install_headers prefix=/usr
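+
+   If DPDK later fails to find libbpf at build or run time, refreshing the
+   dynamic linker cache may help::
+
+   ldconfig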
+
+5. Explicitly enable the AF_XDP PMD by adding the line below to
+ config/common_linux::
+
+ CONFIG_RTE_LIBRTE_PMD_AF_XDP=y
- Then copy the libbpf.a and libbpf.so to /usr/lib64
+ Then build DPDK.
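+
+   For example, with the make-based build system of this DPDK release (a
+   sketch; the build target may differ on your machine)::
+
+   make install T=x86_64-native-linuxapp-gcc -j16
+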
-5. Set DUT port only has one queue::
+6. Set each DUT port to use only one queue::
ethtool -L enp216s0f0 combined 1
ethtool -L enp216s0f1 combined 1
-6. Explicitly enable AF_XDP pmd by adding below line to
- config/common_linux::
+Test case 1: single port
+========================
- CONFIG_RTE_LIBRTE_PMD_AF_XDP=y
+1. Start the testpmd::
+
+ ./testpmd -l 29,30 -n 6 --no-pci --vdev net_af_xdp0,iface=enp216s0f0 \
+ -- -i --nb-cores=1 --rxq=1 --txq=1 --port-topology=loop
+
+2. Assign the kernel core::
+
+ ./set_irq_affinity 34 enp216s0f0
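+
+   set_irq_affinity is the helper script shipped with Intel's NIC driver
+   source packages; if it is not available, a rough manual equivalent is
+   (a sketch, interrupt naming varies by driver)::
+
+   for irq in $(grep enp216s0f0 /proc/interrupts | cut -d: -f1 | tr -d ' '); do
+       echo 34 > /proc/irq/$irq/smp_affinity_list
+   done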
+
+3. Send packets from the packet generator with different packet sizes,
+   from 64 bytes to 1518 bytes, and check the throughput.
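+
+   Inside testpmd, forwarding is started before measuring and the basic
+   counters can be read afterwards, for example::
+
+   testpmd> start
+   testpmd> show port stats all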
-Test case 1: 1 port, 1 kernel core, 1 user core
-===============================================
+Test case 2: two ports
+======================
1. Start the testpmd::
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 30,31 --no-pci -n 6 \
- --vdev net_af_xdp0,iface=enp216s0f0,queue=0 --file-prefix=port0 \
- --socket-mem 1024,1024 -- -a --nb-cores=1 -i --rxq=1 --txq=1 \
- --port-topology=loop
+ ./x86_64-native-linuxapp-gcc/app/testpmd -l 29,30-31 --no-pci -n 6 \
+ --vdev net_af_xdp0,iface=enp216s0f0 --vdev net_af_xdp1,iface=enp216s0f1 \
+ -- -i --nb-cores=2 --rxq=1 --txq=1
2. Assign the kernel core::
./set_irq_affinity 33 enp216s0f0
+ ./set_irq_affinity 34 enp216s0f1
-3. Send packet with packet generator with different packet size,
- from 64 bytes to 1518 bytes, check the throughput.
+3. Send packets from packet generator port0 with different packet sizes,
+   from 64 bytes to 1518 bytes, and check the throughput at port1.
-Test case 2: 2 ports, 2 kernel cores, 2 user cores
-==================================================
+4. Send packets from packet generator port0 and port1 with different packet
+   sizes, from 64 bytes to 1518 bytes, and check the throughput at port0 and
+   port1.
-1. Start the testpmd::
+Test case 3: zero copy
+======================
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 30,32 --no-pci -n 6 \
- --vdev net_af_xdp0,iface=enp216s0f0,queue=0 --file-prefix=port0 \
- --socket-mem 1024,1024 -- -a --nb-cores=1 -i --rxq=1 --txq=1 \
- --port-topology=loop
+1. Start the testpmd::
- ./x86_64-native-linuxapp-gcc/app/testpmd -l 31,33 --no-pci -n 6 \
- --vdev net_af_xdp1,iface=enp216s0f1,queue=0 --file-prefix=port1 \
- --socket-mem 1024,1024 -- -a --nb-cores=1 -i --rxq=1 --txq=1 \
- --port-topology=loop
+ ./testpmd -l 29,30 -n 6 --no-pci \
+ --vdev net_af_xdp0,iface=enp216s0f0,pmd_zero_copy=1 \
+ -- -i --nb-cores=1 --rxq=1 --txq=1 --port-topology=loop
2. Assign the kernel core::
- ./set_irq_affinity 35 enp216s0f0
- ./set_irq_affinity 36 enp216s0f1
+ ./set_irq_affinity 34 enp216s0f0
-3. Send packet with packet generator with different packet size,
+3. Send packets from the packet generator with different packet sizes,
from 64 bytes to 1518 bytes, check the throughput.
+
+Test case 4: multiqueue
+=======================
+
+1. One queue.
+
+ 1) Start the testpmd with one queue::
+
+ ./testpmd -l 29,30 -n 6 --no-pci \
+ --vdev net_af_xdp0,iface=enp216s0f0,start_queue=0,queue_count=1 \
+ -- -i --nb-cores=1 --rxq=1 --txq=1 --port-topology=loop
+
+ 2) Assign the kernel core::
+
+ ./set_irq_affinity 34 enp216s0f0
+
+ 3) Send packets with different dst IP addresses from the packet generator,
+    with packet sizes from 64 bytes to 1518 bytes, and check the throughput.
+
+2. Four queues.
+
+ 1) Set hardware queue::
+
+ ethtool -L enp216s0f0 combined 4
+
+ 2) Start the testpmd with four queues::
+
+ ./testpmd -l 29,30-33 -n 6 --no-pci \
+ --vdev net_af_xdp0,iface=enp216s0f0,start_queue=0,queue_count=4 \
+ -- -i --nb-cores=4 --rxq=4 --txq=4 --port-topology=loop
+
+ 3) Assign the kernel core::
+
+ ./set_irq_affinity 34-37 enp216s0f0
+
+ 4) Send packets with different dst IP addresses from the packet generator,
+    with packet sizes from 64 bytes to 1518 bytes, and check the throughput.
+    Check that the packets are distributed across the four queues (see the
+    sketch after this test case).
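+
+   A minimal functional sketch for generating flows with varying dst IPs
+   (Scapy via python3; addresses, payload and the tester-side interface
+   name are placeholders, and a real traffic generator is still needed
+   for throughput numbers)::
+
+   for i in 1 2 3 4; do
+       python3 -c "from scapy.all import *; sendp(Ether()/IP(dst='10.0.$i.2')/('x'*46), iface='ens801f0', count=1000)"
+   done
+
+   The per-queue distribution can then be cross-checked with ethtool
+   (counter names vary by driver)::
+
+   ethtool -S enp216s0f0 | grep -i rx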
+
+Test case 5: multiqueue and zero copy
+=====================================
+
+1. One queue and zero copy.
+
+ 1) Set hardware queue::
+
+ ethtool -L enp216s0f0 combined 1
+
+ 2) Start the testpmd with one queue::
+
+ ./testpmd -l 29,30 -n 6 --no-pci \
+ --vdev net_af_xdp0,iface=enp216s0f0,start_queue=0,queue_count=1,pmd_zero_copy=1 \
+ -- -i --nb-cores=1 --rxq=1 --txq=1 --port-topology=loop
+
+ 3) Assign the kernel core::
+
+ ./set_irq_affinity 34 enp216s0f0
+
+ 4) Send packets with different dst IP addresses from the packet generator,
+    with packet sizes from 64 bytes to 1518 bytes, and check the throughput.
+    Expect the performance to be better than without zero copy.
+
+2. Four queues and zero copy.
+
+ 1) Set hardware queue::
+
+ ethtool -L enp216s0f0 combined 4
+
+ 2) Start the testpmd with four queues::
+
+ ./testpmd -l 29,30-33 -n 6 --no-pci \
+ --vdev net_af_xdp0,iface=enp216s0f0,start_queue=0,queue_count=4,pmd_zero_copy=1 \
+ -- -i --nb-cores=4 --rxq=4 --txq=4 --port-topology=loop
+
+ 3) Assign the kernel core::
+
+ ./set_irq_affinity 34-37 enp216s0f0
+
+ 4) Send packets with different dst IP addresses from the packet generator,
+    with packet sizes from 64 bytes to 1518 bytes, and check the throughput.
+    Check that the packets are distributed across the four queues (see the
+    sketch under Test case 4).
+    Expect the performance of four queues to be better than that of one queue.
+    Expect the performance to be better than without zero copy.
+
+Test case 6: need_wakeup
+========================
+
+1. Set hardware queue::
+
+ ethtool -L enp216s0f0 combined 1
+
+2. Start the testpmd with one queue::
+
+ ./testpmd -l 29,30 -n 6 --no-pci --vdev net_af_xdp0,iface=enp216s0f0 \
+ -- -i --nb-cores=1 --rxq=1 --txq=1 --port-topology=loop
+
+3. Assign the kernel IRQ to the same core that testpmd uses::
+
+ ./set_irq_affinity 30 enp216s0f0
+
+4. Send packets from the packet generator with packet sizes from 64 bytes
+   to 1518 bytes, and check the throughput.
+   Expect the performance to be better than without need_wakeup.
+
+Test case 7: xdpsock sample performance
+=======================================
+
+1. One queue.
+
+ 1) Set hardware queue::
+
+ ethtool -L enp216s0f0 combined 1
+
+ 2) Start the xdp socket with one queue::
+
+ #taskset -c 30 ./xdpsock -l -i enp216s0f0
+
+ 3) Assign the kernel core::
+
+ ./set_irq_affinity 34 enp216s0f0
+
+ 4) Send packets with different dst IP addresses from the packet generator,
+    with packet sizes from 64 bytes to 1518 bytes, and check the throughput.
+
+2. Four queues.
+
+ 1) Set hardware queue::
+
+ ethtool -L enp216s0f0 combined 4
+
+ 2) Start the xdp socket with four queues::
+
+ #taskset -c 30 ./xdpsock -l -i enp216s0f0 -q 0
+ #taskset -c 31 ./xdpsock -l -i enp216s0f0 -q 1
+ #taskset -c 32 ./xdpsock -l -i enp216s0f0 -q 2
+ #taskset -c 33 ./xdpsock -l -i enp216s0f0 -q 3
+
+ 3) Assign the kernel core::
+
+ ./set_irq_affinity 34-37 enp216s0f0
+
+ 4) Send packets with different dst IP addresses from the packet generator,
+    with packet sizes from 64 bytes to 1518 bytes, and check the throughput.
+    Check that the packets are distributed across the four queues.
+    Expect the performance of four queues to be better than that of one queue.
+
+3. Need_wakeup.
+
+ 1) Set hardware queue::
+
+ ethtool -L enp216s0f0 combined 1
+
+ 2) Start the xdp socket with one queue::
+
+ #taskset -c 30 ./xdpsock -l -i enp216s0f0
+
+ 3) Assign the kernel core::
+
+ ./set_irq_affinity 30 enp216s0f0
+
+ 4) Send packets from the packet generator with packet sizes from 64 bytes
+    to 1518 bytes, and check the throughput.
+    Expect the performance to be better than without need_wakeup.
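+
+   Note: in the kernel's xdpsock sample used above, -l selects l2fwd mode,
+   -i the interface and -q the queue id; exact flags may vary between
+   kernel versions.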
--
2.14.3
* Re: [dts] [PATCH]test_plans: add new cases to af_xdp_test_plan
From: Tu, Lijuan @ 2019-08-07 7:16 UTC (permalink / raw)
To: Peng, Yuan, dts; +Cc: Peng, Yuan
Applied, thanks