From: David Marchand
Date: Wed, 7 Jun 2023 10:05:12 +0200
Subject: Re: [PATCH v5 00/26] Add VDUSE support to Vhost library
To: Maxime Coquelin
Cc: dev@dpdk.org, chenbo.xia@intel.com, mkp@redhat.com, fbl@redhat.com,
 jasowang@redhat.com, cunming.liang@intel.com, xieyongji@bytedance.com,
 echaudro@redhat.com, eperezma@redhat.com, amorenoz@redhat.com, lulu@redhat.com
In-Reply-To: <20230606081852.71003-1-maxime.coquelin@redhat.com>
List-Id: DPDK patches and discussions

On Tue, Jun 6, 2023 at 10:19 AM Maxime Coquelin wrote:
>
> This series introduces a new type of backend, VDUSE,
> to the Vhost library.
>
> VDUSE stands for vDPA device in Userspace; it enables
> implementing a Virtio device in userspace and having it
> attached to the Kernel vDPA bus.
>
> Once attached to the vDPA bus, it can be used either by
> Kernel Virtio drivers, like virtio-net in our case, via
> the virtio-vdpa driver. Doing that, the device is visible
> to the Kernel networking stack and is exposed to userspace
> as a regular netdev.
>
> It can also be exposed to userspace thanks to the
> vhost-vdpa driver, via a vhost-vdpa chardev that can be
> passed to QEMU or Virtio-user PMD.
>
> While VDUSE support is already available in the upstream
> Kernel, a couple of patches are required to support the
> network device type:
>
> https://gitlab.com/mcoquelin/linux/-/tree/vduse_networking_rfc
>
> In order to attach the created VDUSE device to the vDPA
> bus, a recent iproute2 version containing the vdpa tool is
> required.
>
> Benchmark results:
> ==================
>
> On this v2, the PVP reference benchmark has been run & compared with
> Vhost-user.
>
> When doing macswap forwarding in the workload, no difference is seen.
> When doing io forwarding in the workload, we see 4% performance
> degradation with VDUSE, compared to Vhost-user/Virtio-user. It is
> explained by the use of the IOTLB layer in the Vhost library when using
> VDUSE, whereas Vhost-user/Virtio-user does not make use of it.
>
> Usage:
> ======
>
> 1. Probe required Kernel modules
>    # modprobe vdpa
>    # modprobe vduse
>    # modprobe virtio-vdpa
>
> 2. Build (requires vduse kernel headers to be available)
>    # meson build
>    # ninja -C build
>
> 3. Create a VDUSE device (vduse0) using Vhost PMD with
>    testpmd (with 4 queue pairs in this example)
>    # ./build/app/dpdk-testpmd --no-pci --vdev=net_vhost0,iface=/dev/vduse/vduse0,queues=4 --log-level=*:9 -- -i --txq=4 --rxq=4

9 is a nice but undefined value, 8 is enough.
In general, I prefer "human readable" strings, like *:debug ;-).

> 4.
> Attach the VDUSE device to the vDPA bus
>    # vdpa dev add name vduse0 mgmtdev vduse
>    => The virtio-net netdev shows up (eth0 here)
>    # ip l show eth0
>    21: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
>        link/ether c2:73:ea:a7:68:6d brd ff:ff:ff:ff:ff:ff
>
> 5. Start/stop traffic in testpmd
>    testpmd> start
>    testpmd> show port stats 0
>      ######################## NIC statistics for port 0  ########################
>      RX-packets: 11         RX-missed: 0          RX-bytes:  1482
>      RX-errors: 0
>      RX-nombuf:  0
>      TX-packets: 1          TX-errors: 0          TX-bytes:  62
>
>      Throughput (since last show)
>      Rx-pps:            0          Rx-bps:            0
>      Tx-pps:            0          Tx-bps:            0
>      ############################################################################
>    testpmd> stop
>
> 6. Detach the VDUSE device from the vDPA bus
>    # vdpa dev del vduse0
>
> 7. Quit testpmd
>    testpmd> quit
>
> Known issues & remaining work:
> ==============================
> - Fix issue in FD manager (still polling while FD has been removed)
> - Add Netlink support in Vhost library
> - Support device reconnection
>   -> a temporary patch to support reconnection via a tmpfs file is available,
>      upstream solution would be in-kernel and is being developed.
>   -> https://gitlab.com/mcoquelin/dpdk-next-virtio/-/commit/5ad06ce14159a9ce36ee168dd13ef389cec91137
> - Support packed ring
> - Provide more performance benchmark results

We are missing a reference to the kernel patches required to have
vduse accept net devices.

I had played with the patches at v1 and it was working ok.
I did not review in depth the latest revisions, but I followed your
series from the PoC/start.
Overall, the series lgtm.

For the series,
Acked-by: David Marchand

-- 
David Marchand
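[Editor's aside: when scripting benchmark runs like the PVP comparison above, the counters from step 5's "show port stats" output can be extracted programmatically. A minimal sketch, assuming only the "name: value" layout quoted in the cover letter; `parse_port_stats` is a hypothetical helper, not part of the series or of DPDK.]

```python
import re

def parse_port_stats(text):
    """Map testpmd counter names (RX-packets, TX-bytes, ...) to integers.

    Hypothetical helper: it only relies on the "Name: <number>" pairs
    visible in the stats block of step 5 above.
    """
    return {name: int(value)
            for name, value in re.findall(r"(\w[\w-]*):\s+(\d+)", text)}

# Example on the stats block quoted in step 5 of the cover letter:
sample = """
RX-packets: 11         RX-missed: 0          RX-bytes:  1482
RX-errors: 0
RX-nombuf:  0
TX-packets: 1          TX-errors: 0          TX-bytes:  62
"""
stats = parse_port_stats(sample)
print(stats["RX-bytes"])  # 1482
```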