From: William Tu
Date: Sat, 25 Jul 2020 09:07:35 -0700
To: dev@dpdk.org
Cc: smadaminov@vmware.com
Subject: Re: [dpdk-dev] DPDK vDPA application on Mellanox ConnectX 6Dx

Hi,

We finally got it working! It turned out to be a misconfiguration on our
side in the DPDK vdpa sample application.

We then measured performance using a DPDK traffic generator (64B UDP) and
read the stats at the VM's RX queue. We wonder whether the number below
(around 1.8 Mpps per core) is expected.

On the hypervisor, both qemu and the DPDK vdpa app use about a full core
each:

16391 root  20  0  9930.6m  33380  14032  S  100.3  0.1  3:13.86  qemu-system-x86
16046 root  20  0   0.136t  13228   7448  S   95.7  0.0  2:47.43  vdpa

In the VM (ens6 is the vDPA-backed virtio device):

root@vdpa-vm:/dpdk# mlnx_perf -i ens6
Initializing mlnx_perf...
Sampling started.
  rx_queue_0_packets: 1,824,640
  rx_queue_0_bytes: 109,478,400 Bps = 875.82 Mbps
  rx_queue_0_kicks: 28,367
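For what it's worth, the mlnx_perf numbers above appear to be a one-second
delta (1,824,640 packets and 109,478,400 bytes, i.e. ~875.82 Mbps), which is
where the ~1.8 Mpps per core figure comes from. A rough cross-check against
the generic netdev counters (just a sketch, assuming the standard
/sys/class/net statistics and a 1-second sample; mlnx_perf itself may read
different counters) looks like:

IF=ens6
P0=$(cat /sys/class/net/$IF/statistics/rx_packets)
B0=$(cat /sys/class/net/$IF/statistics/rx_bytes)
sleep 1
P1=$(cat /sys/class/net/$IF/statistics/rx_packets)
B1=$(cat /sys/class/net/$IF/statistics/rx_bytes)
echo "rx pps : $((P1 - P0))"
echo "rx Mbps: $(( (B1 - B0) * 8 / 1000000 ))"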
Thanks
William

On Mon, Jul 13, 2020 at 7:29 AM William Tu wrote:
>
> Hi,
>
> We are setting up a testbed using a Mellanox ConnectX-6 Dx
> MCX621102AN-ADAT (2x25G) and are playing with vDPA features. We set up
> two machines (traffic gen and KVM) and connected them back-to-back with
> the two ConnectX-6 Dx NICs.
>
> At this moment we don't know which component might be misconfigured
> (qemu, the vdpa app, or the HW NIC). Any feedback on debugging is
> appreciated!
>
> On the KVM machine we follow [1] to set up vDPA on the NIC, basically:
>
> 1) Enable switchdev
> echo 0000:02:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
> echo switchdev > /sys/class/net/enp2s0f0/compat/devlink/mode
> echo 0000:02:00.2 > /sys/bus/pci/drivers/mlx5_core/bind
>
> $ lspci
> 02:00.0 Ethernet controller: Mellanox Technologies MT28841
> 02:00.1 Ethernet controller: Mellanox Technologies MT28841
> 02:00.2 Ethernet controller: Mellanox Technologies MT28850
> 02:00.3 Ethernet controller: Mellanox Technologies MT28850
>
> [11350.951711] mlx5_core 0000:02:00.0: E-Switch: Enable: mode(OFFLOADS), nvfs(2), active vports(3)
> [11351.032413] mlx5_core 0000:02:00.0 enp2s0f0_p0: Link up
> [11351.226525] enp2s0f0_pf0vf1: renamed from eth0
> [11351.403649] IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0_p0: link becomes ready
> [11351.403951] IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0_pf0vf0: link becomes ready
> [11351.404162] IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0_pf0vf1: link becomes ready
>
> [1] https://docs.mellanox.com/pages/viewpage.action?pageId=25146841#OVSOffloadUsingASAP%C2%B2Direct-ovs-dpdkhwoffloadsOVS-DPDKHardwareOffloads
>
> 2) Run DPDK's vdpa sample application and create /tmp/sock-virtio0
> $ ./vdpa -w 0000:02:00.2,class=vdpa --log-level=pmd,info -- -i
> EAL: Detected 12 lcore(s)
> EAL: Detected 1 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'VA'
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: Probe PCI driver: net_mlx5 (15b3:101e) device: 0000:02:00.2 (socket 0)
> EAL: Probe PCI driver: mlx5_vdpa (15b3:101e) device: 0000:02:00.2 (socket 0)
> vdpa> create /tmp/sock-virtio0 0000:02:00.2
> VHOST_CONFIG: vhost-user server: socket created, fd: 65
> VHOST_CONFIG: bind to /tmp/sock-virtio0
> vdpa> VHOST_CONFIG: new vhost user connection is 68
> VHOST_CONFIG: new device, handle is 0
> ...
> VHOST_CONFIG: virtio is now ready for processing.
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
> VHOST_CONFIG: set queue enable: 1 to qp idx: 3
> mlx5_vdpa: mlx5_vdpa_virtq.c:424: mlx5_vdpa_virtq_enable(): Update virtq 3 status disable -> enable.
> mlx5_vdpa: mlx5_vdpa_virtq.c:133: mlx5_vdpa_virtq_stop(): Query vid 0 vring 3: hw_available_idx=0, hw_used_index=0
> mlx5_vdpa: mlx5_vdpa_virtq.c:264: mlx5_vdpa_virtq_setup(): vid 0: Init last_avail_idx=0, last_used_idx=0 for virtq 3.
> VHOST_CONFIG: virtio is now ready for processing.
>
> 3) Start the VM
> $ qemu-system-x86_64 --version
> QEMU emulator version 4.2.1
> $ qemu-system-x86_64 -enable-kvm -smp 5 -cpu host -m 4G -drive \
>     file=/var/lib/libvirt/images/vdpa-vm.qcow2 \
>     -serial mon:stdio \
>     -chardev socket,id=charnet1,path=/tmp/sock-virtio0 \
>     -netdev vhost-user,chardev=charnet1,queues=2,id=hostnet1 \
>     -device virtio-net-pci,mq=on,vectors=6,netdev=hostnet1,id=net1,mac=e4:11:c6:d3:45:f2,bus=pci.0,addr=0x6,page-per-vq=on,rx_queue_size=1024,tx_queue_size=1024 \
>     -object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \
>     -numa node,memdev=mem -mem-prealloc -name "vdpa-vm"
>
> Once the VM boots, I can see the virtio device inside the VM, bring it
> up, and turn on promisc mode.
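(For reference, the bring-up inside the VM amounts to something like the
following; this is only a sketch, assuming the device shows up as ens6 as
in the ethtool output just below.)

$ ip link set dev ens6 up
$ ip link set dev ens6 promisc on
$ ip -s link show dev ens6        # check state and RX/TX counters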
> vdpa@vdpa-vm:~$ ethtool -i ens6
> driver: virtio_net
> version: 1.0.0
>
> 4) At the traffic gen, start sending packets with dst MAC
> e4:11:c6:d3:45:f2; however, on the VM side the packet stats stay at 0.
>
> On the KVM host side, I see packets arriving at the PF, but nothing is
> being sent on to vf0 (0000:02:00.2):
>
> Average:  IFACE            rxpck/s      txpck/s  rxkB/s      txkB/s  rxcmp/s  txcmp/s  rxmcst/s  %ifutil
> Average:  enp2s0f0_p0      31544608.00  0.00     1971539.12  0.00    0.00     0.00     0.00      64.60
> Average:  enp2s0f0_pf0vf1  0.00         0.00     0.00        0.00    0.00     0.00     0.00      0.00
>
> $ mlnx_perf -i enp2s0f0_p0
>   rx_packets: 15,235,681
>   rx_bytes: 914,140,860 Bps = 7,313.12 Mbps
>   rx_csum_none: 15,235,680
>   rx_cache_reuse: 7,617,792
>   ch_poll: 238,057
>   rx_out_of_buffer: 19,963,082
>   rx_vport_unicast_packets: 35,198,770
>   rx_vport_unicast_bytes: 2,252,721,216 Bps = 18,021.76 Mbps
>
> So I installed a tc rule:
> $ tc filter add dev enp2s0f0_p0 protocol ip parent ffff: \
>     flower skip_sw action mirred egress redirect dev enp2s0f0_pf0vf0
> filter protocol ip pref 49152 flower chain 0 handle 0x1
>   eth_type ipv4
>   skip_sw
>   in_hw
>   action order 1: mirred (Egress Redirect to device enp2s0f0_pf0vf0) stolen
>   index 1 ref 1 bind 1 installed 14 sec used 0 sec
>   Action statistics:
>   Sent 31735712192 bytes 495870503 pkt (dropped 0, overlimits 0 requeues 0)
>
> With the rule, packets show up at the KVM host's VF, but there are
> still no packets inside the VM (ip -s link show reports all zeros); see
> the counter summary below.
>
> Any suggestions for debugging are appreciated.
> Thanks in advance.
> William & Sergey
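In case it helps anyone debugging a similar setup, these are the kinds of
per-hop counters to compare (a rough checklist, not an exhaustive method;
device names are from the setup above, and ethtool counter names are
driver-specific):

# on the KVM host
$ ip -s link show dev enp2s0f0_p0                   # PF uplink
$ ip -s link show dev enp2s0f0_pf0vf0               # VF representor
$ tc -s filter show dev enp2s0f0_p0 parent ffff:    # hit counters of the mirred rule
$ ethtool -S enp2s0f0_p0 | grep -E 'vport|out_of_buffer'

# inside the VM
$ ethtool -S ens6 | grep rx_queue                   # per-queue virtio-net stats
$ ip -s link show dev ens6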