From: Sam
Date: Thu, 9 Nov 2017 18:11:52 +0800
To: ovs-dev@openvswitch.org, dev@dpdk.org
Subject: [dpdk-dev] Why can't I get max line speed using ovs-dpdk and vhost-user ports?

Hi all,

I'm using ovs-dpdk with vhost-user ports and 10 VMs started by qemu. The
topology is:

    VM1 ~ VM10
     |      |
    ----OVS----
         |
     dpdk-bond

The ovs-vswitchd start command (trimmed from ps output) is:

    ovs-vswitchd --dpdk -c 0x40004 -n 4 --socket-mem 10240 \
        --proc-type secondary -w 0000:01:00.0 -w 0000:01:00.1 -- \
        unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach \
        --log-file --mlockall --no-chdir \
        --log-file=/usr/local/var/log/openvswitch/ovs-vswitchd.log \
        --pidfile=/usr/local/var/run/openvswitch/ovs-vswitchd.pid \
        --detach --monitor
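For completeness, the bridge, the bond, and the vhost-user ports were
created with ovs-vsctl commands along these lines (a sketch, not the
exact commands I ran; the bridge and bond names are illustrative, and
dpdk0/dpdk1 map to the two whitelisted PCI devices in legacy --dpdk
mode):

    # userspace (netdev) datapath bridge
    ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev

    # the two 10G NICs combined into a DPDK bond
    ovs-vsctl add-bond br0 dpdkbond dpdk0 dpdk1 \
        -- set Interface dpdk0 type=dpdk \
        -- set Interface dpdk1 type=dpdk

    # one vhost-user port per VM; OVS creates the socket under
    # /usr/local/var/run/openvswitch/, matching the qemu chardev path below
    ovs-vsctl add-port br0 n-650b42fe \
        -- set Interface n-650b42fe type=dpdkvhostuser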
The qemu command (trimmed from ps output) is:

    /usr/local/bin/qemu-system-x86_64_2.6.0 -enable-kvm \
        -cpu qemu64,+vmx,+ssse3,+sse4.1,+sse4.2,+x2apic,+aes,+avx,+vme,+pat,+ss,+pclmulqdq,+xsave,level=13 \
        -machine pc,accel=kvm \
        -chardev socket,id=hmqmondev,port=55924,host=127.0.0.1,nodelay,server,nowait \
        -mon chardev=hmqmondev,id=hmqmon,mode=readline \
        -rtc base=utc,clock=host,driftfix=none -usb -device usb-tablet \
        -daemonize -nodefaults -nodefconfig -no-kvm-pit-reinjection \
        -global kvm-pit.lost_tick_policy=discard -vga std -k en-us \
        -smp 8 -name gangyewei-35 -m 2048 -boot order=cdn -vnc :24,password \
        -drive file=/opt/cloud/workspace/disks/0ce6db23-627c-475d-b7ff-36266ba9492a,if=none,id=drive_0,format=qcow2,cache=none,aio=native \
        -device virtio-blk-pci,id=dev_drive_0,drive=drive_0,bus=pci.0,addr=0x5 \
        -drive file=/opt/cloud/workspace/disks/7f11a37e-28bb-4c54-b903-de2a5b28b284,if=none,id=drive_1,format=qcow2,cache=none,aio=native \
        -device virtio-blk-pci,id=dev_drive_1,drive=drive_1,bus=pci.0,addr=0x6 \
        -drive file=/opt/cloud/workspace/disks/f2a7e4fb-c457-4e60-a147-18e4fadcb4dc,if=none,id=drive_2,format=qcow2,cache=none,aio=native \
        -device virtio-blk-pci,id=dev_drive_2,drive=drive_2,bus=pci.0,addr=0x7 \
        -device ide-cd,drive=ide0-cd0,bus=ide.1,unit=1 \
        -drive id=ide0-cd0,media=cdrom,if=none \
        -chardev socket,id=char-n-650b42fe,path=/usr/local/var/run/openvswitch/n-650b42fe,server \
        -netdev type=vhost-user,id=n-650b42fe,chardev=char-n-650b42fe,vhostforce=on \
        -device virtio-net-pci,netdev=n-650b42fe,mac=00:22:65:0b:42:fe,id=netdev-n-650b42fe,addr=0xf \
        -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
        -numa node,memdev=mem \
        -pidfile /opt/cloud/workspace/servers/a6d3bb5f-fc6c-4891-864c-a0d947d84867/pid \
        -chardev socket,path=/opt/cloud/workspace/servers/a6d3bb5f-fc6c-4891-864c-a0d947d84867/qga.sock,server,nowait,id=qga0 \
        -device virtio-serial \
        -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0

The interface between ovs-dpdk and qemu is netdev=n-650b42fe; on the
ovs-dpdk side it is a vhost-user port. The dpdk-bond interface on the
OVS side is a DPDK bond port.

I then start the 10 VMs and have them send TCP traffic using iperf3 (a
sketch of the invocation is at the end of this mail). The NICs are
10Gbps, but I only see about 5600 Mbps through the dpdk-bond port. Why?

Has anyone tested VM traffic through an ovs-dpdk bond port? If so,
could you please share your test report?
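The per-VM load generation looks roughly like this (the server address
placeholder and the duration are illustrative, not my exact test
parameters):

    # on the traffic sink reachable through the dpdk-bond
    iperf3 -s

    # inside each of the 10 VMs: a 60-second TCP test
    iperf3 -c <server-ip> -t 60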