From: William Tu
Date: Mon, 13 Jul 2020 07:29:56 -0700
To: dev@dpdk.org
Cc: smadaminov@vmware.com
Subject: [dpdk-dev] DPDK vDPA application on Mellanox ConnectX-6 Dx

Hi,

We are setting up a testbed with a Mellanox ConnectX-6 Dx MCX621102AN-ADAT (2x25G) and are experimenting with its vDPA features. Two machines (a traffic generator and a KVM host) are connected back-to-back through the ConnectX-6 Dx ports. At this point we don't know which component might be misconfigured (QEMU, the vdpa application, or the NIC hardware), so any feedback on how to debug this is appreciated!
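To help narrow down which of the three is at fault, we can collect more detail on the NIC side; a rough sketch of what we would gather (the interface and PCI names are the ones that appear in the steps below, and devlink dev info needs a reasonably recent kernel):

$ ethtool -i enp2s0f0_p0               # mlx5_core driver and firmware version on the PF (name after switchdev)
$ devlink dev info pci/0000:02:00.0    # board/firmware details via devlink, if supported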
At the KVM machine we follow [1] to set up vDPA on the NIC, basically:

1) Enable switchdev

echo 0000:02:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
echo switchdev > /sys/class/net/enp2s0f0/compat/devlink/mode
echo 0000:02:00.2 > /sys/bus/pci/drivers/mlx5_core/bind

$ lspci
02:00.0 Ethernet controller: Mellanox Technologies MT28841
02:00.1 Ethernet controller: Mellanox Technologies MT28841
02:00.2 Ethernet controller: Mellanox Technologies MT28850
02:00.3 Ethernet controller: Mellanox Technologies MT28850

[11350.951711] mlx5_core 0000:02:00.0: E-Switch: Enable: mode(OFFLOADS), nvfs(2), active vports(3)
[11351.032413] mlx5_core 0000:02:00.0 enp2s0f0_p0: Link up
[11351.226525] enp2s0f0_pf0vf1: renamed from eth0
[11351.403649] IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0_p0: link becomes ready
[11351.403951] IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0_pf0vf0: link becomes ready
[11351.404162] IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0_pf0vf1: link becomes ready

[1] https://docs.mellanox.com/pages/viewpage.action?pageId=25146841#OVSOffloadUsingASAP%C2%B2Direct-ovs-dpdkhwoffloadsOVS-DPDKHardwareOffloads

2) Run DPDK's vdpa application and create /tmp/sock-virtio0

$ ./vdpa -w 0000:02:00.2,class=vdpa --log-level=pmd,info -- -i
EAL: Detected 12 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Probe PCI driver: net_mlx5 (15b3:101e) device: 0000:02:00.2 (socket 0)
EAL: Probe PCI driver: mlx5_vdpa (15b3:101e) device: 0000:02:00.2 (socket 0)
vdpa> create /tmp/sock-virtio0 0000:02:00.2
VHOST_CONFIG: vhost-user server: socket created, fd: 65
VHOST_CONFIG: bind to /tmp/sock-virtio0
vdpa> VHOST_CONFIG: new vhost user connection is 68
VHOST_CONFIG: new device, handle is 0
...
VHOST_CONFIG: virtio is now ready for processing.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 3
mlx5_vdpa: mlx5_vdpa_virtq.c:424: mlx5_vdpa_virtq_enable(): Update virtq 3 status disable -> enable.
mlx5_vdpa: mlx5_vdpa_virtq.c:133: mlx5_vdpa_virtq_stop(): Query vid 0 vring 3: hw_available_idx=0, hw_used_index=0
mlx5_vdpa: mlx5_vdpa_virtq.c:264: mlx5_vdpa_virtq_setup(): vid 0: Init last_avail_idx=0, last_used_idx=0 for virtq 3.
VHOST_CONFIG: virtio is now ready for processing.

3) Start the VM

$ qemu-system-x86_64 --version
QEMU emulator version 4.2.1

$ qemu-system-x86_64 -enable-kvm -smp 5 -cpu host -m 4G -drive \
  file=/var/lib/libvirt/images/vdpa-vm.qcow2 \
  -serial mon:stdio \
  -chardev socket,id=charnet1,path=/tmp/sock-virtio0 \
  -netdev vhost-user,chardev=charnet1,queues=2,id=hostnet1 \
  -device virtio-net-pci,mq=on,vectors=6,netdev=hostnet1,id=net1,mac=e4:11:c6:d3:45:f2,bus=pci.0,addr=0x6,page-per-vq=on,rx_queue_size=1024,tx_queue_size=1024 \
  -object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem -mem-prealloc -name "vdpa-vm"

Once the VM boots, I can see the virtio device inside it; I bring it up and turn on promiscuous mode.

vdpa@vdpa-vm:~$ ethtool -i ens6
driver: virtio_net
version: 1.0.0

4) On the traffic generator, start sending packets with destination MAC e4:11:c6:d3:45:f2. However, on the VM side the packet counters stay at 0.
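In case it helps, the checks we can run to see where the packets stop are sketched below; the guest interface name (ens6) and the representor name (enp2s0f0_pf0vf0) are just the ones from the output above, and the per-queue virtio counters depend on the guest kernel version:

Inside the guest:
$ ethtool -l ens6          # were 2 combined queues actually negotiated?
$ ethtool -S ens6          # per-queue rx/tx counters of virtio_net, if the driver exposes them
$ ip -s link show ens6     # interface-level counters

On the KVM host:
$ grep -i huge /proc/meminfo               # hugepages backing the vhost-user memory are really allocated
$ tcpdump -nn -e -i enp2s0f0_pf0vf0 -c 10  # any slow-path traffic showing up on the VF representor?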
On the KVM host side I see packets arriving at the PF, but nothing is forwarded to vf0 (0000:02:00.2):

Average:  IFACE              rxpck/s  txpck/s      rxkB/s  txkB/s  rxcmp/s  txcmp/s  rxmcst/s  %ifutil
Average:  enp2s0f0_p0    31544608.00     0.00  1971539.12    0.00     0.00     0.00      0.00    64.60
Average:  enp2s0f0_pf0vf1       0.00     0.00        0.00    0.00     0.00     0.00      0.00     0.00

$ mlnx_perf -i enp2s0f0_p0
   rx_packets: 15,235,681
   rx_bytes: 914,140,860 Bps = 7,313.12 Mbps
   rx_csum_none: 15,235,680
   rx_cache_reuse: 7,617,792
   ch_poll: 238,057
   rx_out_of_buffer: 19,963,082
   rx_vport_unicast_packets: 35,198,770
   rx_vport_unicast_bytes: 2,252,721,216 Bps = 18,021.76 Mbps

So I installed a tc rule:

$ tc filter add dev enp2s0f0_p0 protocol ip parent ffff: \
    flower skip_sw action mirred egress redirect dev enp2s0f0_pf0vf0

filter protocol ip pref 49152 flower chain 0 handle 0x1
  eth_type ipv4
  skip_sw
  in_hw
        action order 1: mirred (Egress Redirect to device enp2s0f0_pf0vf0) stolen
        index 1 ref 1 bind 1
        installed 14 sec used 0 sec
        Action statistics:
        Sent 31735712192 bytes 495870503 pkt (dropped 0, overlimits 0 requeues 0)

With this rule in place, packets show up on the KVM host's VF, but there are still no packets inside the VM (ip -s link show reports all zeros).

We would appreciate any suggestions for debugging this. Thanks in advance.

William & Sergey
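P.S. If it is useful, we can also watch the counters on both ends while traffic is running; a sketch, using the names from above:

On the KVM host:
$ tc -s filter show dev enp2s0f0_p0 parent ffff:   # is the flower rule still matching / counting?
$ ethtool -S enp2s0f0_pf0vf0                       # representor/vport counters for vf0

Inside the guest:
$ ethtool -S ens6                                  # per-queue virtio counters, if exposed

If the hw_available_idx / hw_used_index values printed by mlx5_vdpa stay at 0 even after packets have reached the VF, that would point at the device-side datapath rather than at QEMU or the vhost-user socket.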