DPDK patches and discussions
* [dpdk-dev] DPDK vDPA application on Mellanox ConnectX 6Dx
@ 2020-07-13 14:29 William Tu
  2020-07-25 16:07 ` William Tu
  0 siblings, 1 reply; 2+ messages in thread
From: William Tu @ 2020-07-13 14:29 UTC (permalink / raw)
  To: dev; +Cc: smadaminov

Hi,

We are setting up a testbed using Mellanox ConnectX-6 Dx MCX621102AN-ADAT
(2x25G) NICs and are experimenting with the vDPA features. We set up two
machines (a traffic generator and a KVM host) and connected them
back-to-back with the two ConnectX-6 Dx NICs.

At this point we don't know which component might be misconfigured
(QEMU, the vdpa app, or the HW NIC). Any debugging suggestions are appreciated!

On the KVM machine we followed [1] to set up vDPA on the NIC, basically:
1) Enable switchdev mode
  echo 0000:02:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
  echo switchdev > /sys/class/net/enp2s0f0/compat/devlink/mode
  echo 0000:02:00.2 > /sys/bus/pci/drivers/mlx5_core/bind

$ lspci
  02:00.0 Ethernet controller: Mellanox Technologies MT28841
  02:00.1 Ethernet controller: Mellanox Technologies MT28841
  02:00.2 Ethernet controller: Mellanox Technologies MT28850
  02:00.3 Ethernet controller: Mellanox Technologies MT28850

[11350.951711] mlx5_core 0000:02:00.0: E-Switch: Enable: mode(OFFLOADS), nvfs(2), active vports(3)
[11351.032413] mlx5_core 0000:02:00.0 enp2s0f0_p0: Link up
[11351.226525] enp2s0f0_pf0vf1: renamed from eth0
[11351.403649] IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0_p0: link becomes ready
[11351.403951] IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0_pf0vf0: link becomes ready
[11351.404162] IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0_pf0vf1: link becomes ready

[1]https://docs.mellanox.com/pages/viewpage.action?pageId=25146841#OVSOffloadUsingASAP%C2%B2Direct-ovs-dpdkhwoffloadsOVS-DPDKHardwareOffloads
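
For reference, a minimal sketch of creating the VFs and verifying switchdev
mode; the sysfs path below is an assumption based on our interface name
rather than a command taken from our logs:

  # create two VFs on the PF, if they do not exist yet (path is assumed)
  $ echo 2 > /sys/class/net/enp2s0f0/device/sriov_numvfs
  # confirm the e-switch is now in switchdev mode
  $ devlink dev eswitch show pci/0000:02:00.0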

2) Run DPDK's vdpa example application and create /tmp/sock-virtio0
$ ./vdpa -w 0000:02:00.2,class=vdpa --log-level=pmd,info -- -i
EAL: Detected 12 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Probe PCI driver: net_mlx5 (15b3:101e) device: 0000:02:00.2 (socket 0)
EAL: Probe PCI driver: mlx5_vdpa (15b3:101e) device: 0000:02:00.2 (socket 0)
vdpa> create /tmp/sock-virtio0 0000:02:00.2
VHOST_CONFIG: vhost-user server: socket created, fd: 65
VHOST_CONFIG: bind to /tmp/sock-virtio0
vdpa> VHOST_CONFIG: new vhost user connection is 68
VHOST_CONFIG: new device, handle is 0
...
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 3
mlx5_vdpa: mlx5_vdpa_virtq.c:424: mlx5_vdpa_virtq_enable(): Update virtq 3 status disable -> enable.
mlx5_vdpa: mlx5_vdpa_virtq.c:133: mlx5_vdpa_virtq_stop(): Query vid 0 vring 3: hw_available_idx=0, hw_used_index=0
mlx5_vdpa: mlx5_vdpa_virtq.c:264: mlx5_vdpa_virtq_setup(): vid 0: Init last_avail_idx=0, last_used_idx=0 for virtq 3.
VHOST_CONFIG: virtio is now ready for processing.
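
A quick sanity check, assuming our build of the sample app matches upstream
DPDK: its interactive shell also has a 'list' command that prints the probed
vDPA devices, so the device can be confirmed before running 'create':

  vdpa> list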

3) Start the VM
$ qemu-system-x86_64 --version
QEMU emulator version 4.2.1
$ qemu-system-x86_64 -enable-kvm -smp 5 -cpu host -m 4G -drive \
    file=/var/lib/libvirt/images/vdpa-vm.qcow2 \
    -serial mon:stdio \
    -chardev socket,id=charnet1,path=/tmp/sock-virtio0 \
    -netdev vhost-user,chardev=charnet1,queues=2,id=hostnet1 \
    -device virtio-net-pci,mq=on,vectors=6,netdev=hostnet1,id=net1,mac=e4:11:c6:d3:45:f2,bus=pci.0,addr=0x6,page-per-vq=on,rx_queue_size=1024,tx_queue_size=1024 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem -mem-prealloc -name "vdpa-vm"
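
Note for anyone reproducing this: the memory-backend-file above assumes
hugepages are reserved and mounted on the host beforehand. A minimal sketch
(2048 x 2MB pages to cover the 4G guest; adjust as needed):

  # reserve 2MB hugepages and mount hugetlbfs (illustrative, not from our logs)
  $ echo 2048 > /proc/sys/vm/nr_hugepages
  $ mount -t hugetlbfs nodev /dev/hugepages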

Once the VM boots, I can see the virtio device inside the VM; I bring it up
and turn on promiscuous mode (standard commands, sketched below).
vdpa@vdpa-vm:~$ ethtool -i ens6
driver: virtio_net
version: 1.0.0
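
For concreteness, the bring-up inside the VM is just standard iproute2,
nothing vDPA-specific (a sketch of what "bring it up and enable promisc"
amounts to):

  vdpa@vdpa-vm:~$ sudo ip link set ens6 up
  vdpa@vdpa-vm:~$ sudo ip link set ens6 promisc on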

4) From the traffic generator, start sending packets with dst MAC
e4:11:c6:d3:45:f2. However, on the VM side the packet stats stay at 0.
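
For anyone reproducing the traffic side, a testpmd sketch that sends 64B
packets (testpmd's txonly default) to that MAC; the core list and PCI
address are placeholders, not taken from our traffic-gen machine:

  $ ./testpmd -l 0-1 -w <traffic-gen NIC PCI addr> -- \
      --forward-mode=txonly --eth-peer=0,e4:11:c6:d3:45:f2 --stats-period=1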

On the KVM host side, I see packets arriving at the PF, but they are not
being forwarded to vf0 (0000:02:00.2):
Average:          IFACE     rxpck/s   txpck/s     rxkB/s   txkB/s  rxcmp/s  txcmp/s  rxmcst/s  %ifutil
Average:      enp2s0f0_p0 31544608.00      0.00 1971539.12     0.00     0.00     0.00      0.00    64.60
Average:  enp2s0f0_pf0vf1        0.00      0.00       0.00     0.00     0.00     0.00      0.00     0.00

$ mlnx_perf -i enp2s0f0_p0
                    rx_packets: 15,235,681
                      rx_bytes: 914,140,860 Bps      = 7,313.12 Mbps
                  rx_csum_none: 15,235,680
                rx_cache_reuse: 7,617,792
                       ch_poll: 238,057
              rx_out_of_buffer: 19,963,082
      rx_vport_unicast_packets: 35,198,770
        rx_vport_unicast_bytes: 2,252,721,216 Bps    = 18,021.76 Mbps

So I installed a tc rule:
$ tc filter add dev enp2s0f0_p0 protocol ip parent ffff: \
    flower skip_sw action mirred egress redirect dev enp2s0f0_pf0vf0
  filter protocol ip pref 49152 flower chain 0 handle 0x1
  eth_type ipv4
  skip_sw
  in_hw
    action order 1: mirred (Egress Redirect to device enp2s0f0_pf0vf0) stolen
    index 1 ref 1 bind 1 installed 14 sec used 0 sec
    Action statistics:
    Sent 31735712192 bytes 495870503 pkt (dropped 0, overlimits 0 requeues 0)
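
The listing above (the in_hw flag indicating the rule is offloaded to the
NIC) and its counters can be re-checked at any time with:

  $ tc -s filter show dev enp2s0f0_p0 parent ffff: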

With the rule in place, packets show up on the KVM host's VF, but there is
still no traffic inside the VM ('ip -s link show' shows all zeros).
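
For completeness, the per-queue counters of the virtio device inside the VM
can be checked as well; this is plain ethtool/iproute2, nothing specific to
our setup:

  vdpa@vdpa-vm:~$ ethtool -S ens6 | grep rx_queue
  vdpa@vdpa-vm:~$ ip -s link show ens6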

We would appreciate any suggestions for debugging this.
Thanks in advance.
William & Sergey


* Re: [dpdk-dev] DPDK vDPA application on Mellanox ConnectX 6Dx
  2020-07-13 14:29 [dpdk-dev] DPDK vDPA application on Mellanox ConnectX 6Dx William Tu
@ 2020-07-25 16:07 ` William Tu
  0 siblings, 0 replies; 2+ messages in thread
From: William Tu @ 2020-07-25 16:07 UTC (permalink / raw)
  To: dev; +Cc: smadaminov

Hi,

We finally got it working! The problem was a misconfiguration on our side in
the DPDK vdpa sample application. We then measured performance using a DPDK
traffic generator (64B UDP) and observed the VM's RX queue.
We wonder whether the number below (around 1.8 Mpps per core) is expected?

On the hypervisor, the QEMU and DPDK vdpa processes are each using roughly
100% of a core:
16391 root      20   0 9930.6m  33380  14032 S 100.3  0.1   3:13.86 qemu-system-x86
16046 root      20   0  0.136t  13228   7448 S  95.7  0.0   2:47.43 vdpa

In the VM (ens6 is the vDPA virtio device):
root@vdpa-vm:/dpdk# mlnx_perf -i ens6
Initializing mlnx_perf...
Sampling started.
            rx_queue_0_packets: 1,824,640
              rx_queue_0_bytes: 109,478,400 Bps      = 875.82 Mbps
              rx_queue_0_kicks: 28,367
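
As a rough back-of-the-envelope check on these numbers (straightforward
arithmetic, not a measured reference): 109,478,400 B/s divided by
1,824,640 pkt/s is 60 bytes per packet, consistent with 64B frames counted
without the 4B FCS; and 1.82 Mpps of 64B frames is about 1.2 Gbps on the
wire ((64+20) bytes per frame including preamble and inter-frame gap),
i.e. roughly 5% of the ~37.2 Mpps line rate of a 25GbE port.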

Thanks
William
