From: "Liu, Yong" <yong.liu@intel.com>
To: "Liu, Jijiang" <jijiang.liu@intel.com>, "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v3 00/10] Add a VXLAN sample
Date: Tue, 9 Jun 2015 09:28:59 +0000 [thread overview]
Message-ID: <86228AFD5BCD8E4EBFD2B90117B5E81E10E3A37F@SHSMSX103.ccr.corp.intel.com> (raw)
In-Reply-To: <1433732508-32430-1-git-send-email-jijiang.liu@intel.com>
Tested-by: Yong Liu <yong.liu@intel.com>
- Tested Commit: c1715402df8f7fdb2392e12703d5b6f81fd5f447
- OS: Fedora20 3.15.5
- GCC: gcc version 4.8.3 20140911
- CPU: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
- NIC: Intel Corporation Device XL710 [8086:1584] Firmware 4.33
- Default x86_64-native-linuxapp-gcc configuration
- Prerequisites: set up the DPDK vhost-user running environment and
allocate enough hugepages for both the vxlan sample and the virtual machine
- Total 5 cases, 5 passed, 0 failed
- Prerequisites command / instruction:
Update qemu-system-x86_64 to version 2.2.0, which supports hugepage-based memory
Load the kernel modules required by vhost-user
modprobe fuse
modprobe cuse
insmod lib/librte_vhost/eventfd_link/eventfd_link.ko
Allocate 4096 2MB hugepages for the VM and DPDK
echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
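As a sanity check, the hugepage budget above exactly covers the two consumers used later in these tests (a sketch, assuming the 2MB page size and the memory figures from the commands below):

```python
# Sanity check of the hugepage budget (page size assumed to be 2 MB).
PAGE_MB = 2
n_pages = 4096

total_mb = n_pages * PAGE_MB                  # memory backed by hugepages

# Consumers: tep_termination's --socket-mem 2048,2048 plus the VM's -m 4096
dpdk_mb = 2048 + 2048
vm_mb = 4096

assert dpdk_mb + vm_mb == total_mb            # budget is fully allocated
```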
- Case: vxlan_sample_encap
Description: check that the vxlan sample encapsulation function works correctly
Command / instruction:
Start the vxlan sample with only encapsulation enabled
tep_termination -c 0xf -n 3 --socket-mem 2048,2048 -- -p 0x1 \
--udp-port 4789 --nb-devices 2 --filter-type 3 --tx-checksum 0 \
--encap 1 --decap 0
Wait until the vhost-net socket is created and the message below is printed.
VHOST_CONFIG: bind to vhost-net
Start a virtual machine with hugepage-based memory and two vhost-user devices
qemu-system-x86_64 -name vm0 -enable-kvm -daemonize \
-cpu host -smp 4 -m 4096 \
-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-numa node,memdev=mem -mem-prealloc \
-chardev socket,id=char0,path=./dpdk/vhost-net \
-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-device virtio-net-pci,netdev=netdev0,mac=00:00:20:00:00:20 \
-chardev socket,id=char1,path=./dpdk/vhost-net \
-netdev type=vhost-user,id=netdev1,chardev=char1,vhostforce \
-device virtio-net-pci,netdev=netdev1,mac=00:00:20:00:00:21 \
-drive file=/storage/vm-image/vm0.img -vnc :1
Log into the virtual machine and start testpmd with additional arguments
testpmd -c f -n 3 -- -i --txqflags=0xf00 --disable-hw-vlan
Start packet forwarding in testpmd and transmit several packets for MAC learning
testpmd> set fwd mac
testpmd> start tx_first
Make sure the virtIO ports registered normally.
VHOST_CONFIG: virtio is now ready for processing.
VHOST_DATA: (1) Device has been added to data core 56
VHOST_DATA: (1) MAC_ADDRESS 00:00:20:00:00:21 and VNI 1000 registered
VHOST_DATA: (0) MAC_ADDRESS 00:00:20:00:00:20 and VNI 1000 registered
Send a normal UDP packet to the PF device with the destination MAC matching the PF device
Verify the packet has been received on virtIO port0 and forwarded by port1
testpmd> show port stats all
Verify the encapsulated packet is received on the PF device
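For reference when checking the encapsulated packet: VXLAN adds a 50-byte outer stack (14 B Ethernet + 20 B IPv4 + 8 B UDP + 8 B VXLAN). A minimal sketch of the 8-byte VXLAN header itself, with the VNI 1000 seen in the logs above (illustrative only, not the sample's code):

```python
import struct

VXLAN_PORT = 4789   # UDP port used by the sample (--udp-port 4789)

def vxlan_header(vni):
    """8-byte VXLAN header per RFC 7348: I flag set, 24-bit VNI, reserved bits zero."""
    return struct.pack("!II", 0x08 << 24, vni << 8)

hdr = vxlan_header(1000)
assert len(hdr) == 8
assert 14 + 20 + 8 + len(hdr) == 50   # total outer overhead added by encap
```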
- Case: vxlan_sample_decap
Description: check that the vxlan sample decapsulation function works correctly
Command / instruction:
Start the vxlan sample with only decapsulation enabled
tep_termination -c 0xf -n 3 --socket-mem 2048,2048 -- -p 0x1 \
--udp-port 4789 --nb-devices 2 --filter-type 3 --tx-checksum 0 \
--encap 0 --decap 1
Start the vhost-user test environment as in case vxlan_sample_encap
Send vxlan packet Ether(dst=PF mac)/IP/UDP/vni(1000)/
Ether(dst=virtIO port0)/IP/UDP to PF device
Verify that the packet is received by virtIO port0 and forwarded by virtIO port1.
testpmd> show port stats all
Verify that the packet received on the PF is identical to the inner packet
Send vxlan packet Ether(dst=PF mac)/IP/UDP/vni(1000)/
Ether(dst=virtIO port1)/IP/UDP to PF device
Verify that the packet is received by virtIO port1 and forwarded by virtIO port0.
testpmd> show port stats all
Make sure the packet received on the PF is the inner packet with its src and dst MAC addresses swapped.
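Decapsulation is the inverse of the encap path: strip the 50-byte outer stack and recover the VNI from the VXLAN header. A hedged pure-Python sketch of that extraction (not the sample's code; assumes no VLAN tags or IP options):

```python
import struct

OUTER_LEN = 14 + 20 + 8   # outer Ethernet + IPv4 + UDP
VXLAN_LEN = 8

def decap(frame):
    """Return (vni, inner frame) for a VXLAN-encapsulated Ethernet frame."""
    word1, word2 = struct.unpack_from("!II", frame, OUTER_LEN)
    assert word1 & (0x08 << 24), "VXLAN I flag must be set"
    return word2 >> 8, frame[OUTER_LEN + VXLAN_LEN:]

# Example: zeroed outer headers around a VNI-1000 VXLAN header and a tiny payload
vni, inner = decap(b"\x00" * OUTER_LEN
                   + struct.pack("!II", 0x08 << 24, 1000 << 8) + b"inner")
assert vni == 1000 and inner == b"inner"
```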
- Case: vxlan_sample_encap_and_decap
Description: check that vxlan sample decapsulation and encapsulation work correctly at the same time
Command / instruction:
Start the vxlan sample with both encapsulation and decapsulation enabled
tep_termination -c 0xf -n 3 --socket-mem 2048,2048 -- -p 0x1 \
--udp-port 4789 --nb-devices 2 --filter-type 3 --tx-checksum 0 \
--encap 1 --decap 1
Start vhost-user test environment like case vxlan_sample_encap
Send vxlan packet Ether(dst=PF mac)/IP/UDP/vni(1000)/
Ether(dst=virtIO port0)/IP/UDP to PF device
Verify that the packet is received by virtIO port0 and forwarded by virtIO port1.
testpmd> show port stats all
Verify encapsulated packet received on PF device.
Verify that the inner packet's src and dst MAC addresses have been swapped.
- Case: vxlan_sample_chksum
Description: check that vxlan sample Tx checksum offload works correctly
Command / instruction:
Start the vxlan sample with Tx checksum offload enabled
tep_termination -c 0xf -n 3 --socket-mem 2048,2048 -- -p 0x1 \
--udp-port 4789 --nb-devices 2 --filter-type 3 --tx-checksum 1 \
--encap 1 --decap 1
Start the vhost-user test environment as in case vxlan_sample_encap
Send vxlan packet with Ether(dst = PF mac)/IP/UDP/vni(1000)/
Ether(dst = virtIO port0)/IP wrong chksum/ UDP wrong chksum
Verify that the packet is received by virtIO port0 and forwarded by virtIO port1.
testpmd> show port stats all
Verify encapsulated packet received on PF device.
Verify that the inner packet's src and dst MAC addresses have been swapped.
Verify that the inner packet's IP and UDP checksums were corrected.
Send vxlan packet with Ether(dst = PF mac)/IP/UDP/vni(1000)/
Ether(dst = virtIO port0)/IP wrong chksum/ TCP wrong chksum
Verify that the packet is received by virtIO port0 and forwarded by virtIO port1.
testpmd> show port stats all
Verify encapsulated packet received on PF device.
Verify that the inner packet's src and dst MAC addresses have been swapped.
Verify that the inner packet's IP and TCP checksums were corrected.
Send vxlan packet with Ether(dst = PF mac)/IP/UDP/vni(1000)/
Ether(dst = virtIO port0)/IP wrong chksum/ SCTP wrong chksum
Verify that the packet is received by virtIO port0 and forwarded by virtIO port1.
testpmd> show port stats all
Verify encapsulated packet received on PF device.
Verify that the inner packet's src and dst MAC addresses have been swapped.
Verify that the inner packet's IP and SCTP checksums were corrected.
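The "corrected" IP checksum can be cross-checked against the standard ones'-complement sum over 16-bit words (RFC 1071). A minimal reference implementation, illustrative only (the sample offloads this work to the NIC):

```python
import struct

def ip_checksum(header):
    """RFC 1071 checksum: ones'-complement sum of big-endian 16-bit words."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total >> 16:                       # fold carries back into low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Textbook 20-byte IPv4 header with the checksum field zeroed
hdr = bytes.fromhex("450000730000400040110000c0a80001c0a800c7")
assert ip_checksum(hdr) == 0xB861
```

Re-running the sum over a header whose checksum field already holds the correct value yields 0, which is how a "wrong chksum" packet can be detected in the first place.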
- Case: vxlan_sample_tso
Description: check that vxlan sample TSO works correctly
Command / instruction:
Start the vxlan sample with TSO enabled; Tx checksum must be enabled as well
Due to a hardware limitation, the TSO segment size must be no smaller than 256
tep_termination -c 0xf -n 3 --socket-mem 2048,2048 -- -p 0x1 \
--udp-port 4789 --nb-devices 2 --filter-type 3 --tx-checksum 1 \
--encap 1 --decap 1 --tso-segsz 256
Start the vhost-user test environment as in case vxlan_sample_encap
Send vxlan packet with Ether(dst = PF mac)/IP/UDP/vni(1000)/
Ether(dst = virtIO port0)/TCP/892 Bytes data, total length will be 1000
Verify that the packet is received by virtIO port0 and forwarded by virtIO port1.
testpmd> show port stats all
Verify that four separate vxlan packets are received on the PF device.
Make sure the TCP payload sizes are 256, 256, 256 and 124.
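The expected payload split follows directly from the 892-byte payload and the 256-byte segment size; a quick sketch of the arithmetic:

```python
def tso_segments(payload_len, segsz):
    """Payload sizes produced by TSO segmentation of a single large send."""
    full, rest = divmod(payload_len, segsz)
    return [segsz] * full + ([rest] if rest else [])

# 892 bytes at --tso-segsz 256 -> three full segments plus the remainder
assert tso_segments(892, 256) == [256, 256, 256, 124]
```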
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jijiang Liu
> Sent: Monday, June 08, 2015 11:02 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v3 00/10] Add a VXLAN sample
>
> This VXLAN sample simulates a VXLAN Tunnel Endpoint (VTEP) termination in
> DPDK, which is used to demonstrate the offload and filtering capabilities
> of the i40e NIC for VXLAN packets.
>
> And this sample uses the basic virtio devices management function from
> vHost example, and the US-vHost interface and tunnel filtering mechanism
> to direct the traffic to/from a specific VM.
>
> In addition, this sample is also designed to show how tunneling protocols
> can be handled. For the vHost interface, we do not need to support zero
> copy/inter VM packet transfer, etc. The approaches we took would be of
> benefit to you because we put a pluggable structure in place so that the
> application could be easily extended to support a new tunneling protocol.
>
> The software framework is as follows:
>
>
> |-------------------| |-------------------|
> | VM-1(VNI:100) | | VM-2(VNI:200) |
> | |------| |------| | | |------| |------| |
> | |vport0| |vport1| | | |vport0| |vport1| |
> |-|------|-|------|-| |-|------|-|------|-| Guests
> \ /
> |-------------\-------/--------|
> | us-vHost interface |
> | |-|----|--| |
> | decap| | TEP| | encap | DPDK App
> | |-|----|--| |
> | | | |
> |------------|----|------------|
> | |
> |-------------|----|---------------|
> |tunnel filter| | IP/L4 Tx csum |
> |IP/L4 csum | | TSO |
> |packet type | | | NIC
> |packet recogn| | |
> |-------------|----|---------------|
> | |
> | |
> | |
> /-------\
> VXLAN Tunnel
>
> The sample will support the following:
> 1> Tunneling packet recognition.
>
> 2> The port of UDP tunneling is configurable
>
> 3> Directing incoming traffic to the correct queue based on the tunnel
> filter type such as inner MAC address and VNI.
>
> The VNI will be assigned from a static internal table based on the us-
> vHost device ID. Each device will receive a unique device ID. The inner
> MAC will be learned by the first packet transmitted from a device.
>
> 4> Decapsulation of Rx VXLAN traffic. This is a software only operation.
>
> 5> Encapsulation of Tx VXLAN traffic. This is a software only operation.
>
> 6> Tx outer IP, inner IP and L4 checksum offload
>
> 7> TSO support for tunneling packet
>
> The limitations:
> 1. No ARP support.
> 2. There is some duplicated source code because I used the basic virtio
> device management function from the VHOST sample. Considering that the current
> VHOST sample is quite complicated and huge enough, I think we shall have
> a separate sample for tunneling packet processing.
> 3. Currently, only the i40e NIC is tested in the sample, but other types
> of NICs will also be supported if they are able to support tunneling
> packet filter.
>
> v2 changes:
> Fixed an issue about the 'nb_ports' duplication in check_ports_num().
> Removed the inaccurate comment in main.c
> Fixed an issue about TSO offload.
>
> v3 changes:
> Changed some variable names that don't follow the coding rules.
> Removed the limitation of VXLAN packet size due to TSO support.
> Removed useless 'll_root_used' variable in vxlan_setup.c file.
> Removed definition and use of '_htons'.
>
> Jijiang Liu (10):
> create VXLAN sample framework using virtio device management function
> add basic VXLAN structures
> add the pluggable structures
> implement VXLAN packet processing
> add udp port configuration
> add filter type configuration
> add tx checksum offload configuration
> add TSO offload configuration
> add Rx checksum statistics
> add encapsulation and decapsulation flags
>
>
> examples/Makefile | 1 +
> examples/tep_termination/Makefile | 55 ++
> examples/tep_termination/main.c | 1205
> ++++++++++++++++++++++++++++++++
> examples/tep_termination/main.h | 129 ++++
> examples/tep_termination/vxlan.c | 262 +++++++
> examples/tep_termination/vxlan.h | 76 ++
> examples/tep_termination/vxlan_setup.c | 444 ++++++++++++
> examples/tep_termination/vxlan_setup.h | 77 ++
> 8 files changed, 2249 insertions(+), 0 deletions(-)
> create mode 100644 examples/tep_termination/Makefile
> create mode 100644 examples/tep_termination/main.c
> create mode 100644 examples/tep_termination/main.h
> create mode 100644 examples/tep_termination/vxlan.c
> create mode 100644 examples/tep_termination/vxlan.h
> create mode 100644 examples/tep_termination/vxlan_setup.c
> create mode 100644 examples/tep_termination/vxlan_setup.h
>
> --
> 1.7.7.6