From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: "Xia, Chenbo", Morten Brørup, Ferruh Yigit, dev@dpdk.org,
 david.marchand@redhat.com, mkp@redhat.com, fbl@redhat.com,
 jasowang@redhat.com, "Liang, Cunming", "Xie, Yongji",
 echaudro@redhat.com, eperezma@redhat.com, amorenoz@redhat.com
Subject: Re: [RFC 00/27] Add VDUSE support to Vhost library
Date: Thu, 13 Apr 2023 09:59:10 +0200
Message-ID: <58480498-8037-ec45-548a-8027e185fcaf@redhat.com>

Hi,

On 4/13/23 09:08, Xia, Chenbo wrote:
>> -----Original Message-----
>> From: Morten Brørup
>> Sent: Thursday, April 13, 2023 3:41 AM
>> To: Maxime Coquelin; Ferruh Yigit; dev@dpdk.org;
>> david.marchand@redhat.com; Xia, Chenbo; mkp@redhat.com;
>> fbl@redhat.com; jasowang@redhat.com; Liang, Cunming; Xie, Yongji;
>> echaudro@redhat.com; eperezma@redhat.com; amorenoz@redhat.com
>> Subject: RE: [RFC 00/27] Add VDUSE support to Vhost library
>>
>>> From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
>>> Sent: Wednesday, 12 April 2023 17.28
>>>
>>> Hi Ferruh,
>>>
>>> On 4/12/23 13:33, Ferruh Yigit wrote:
>>>> On 3/31/2023 4:42 PM, Maxime Coquelin wrote:
>>>>> This series introduces a new type of backend, VDUSE,
>>>>> to the Vhost library.
>>>>>
>>>>> VDUSE stands for vDPA device in Userspace. It enables
>>>>> implementing a Virtio device in userspace and having it
>>>>> attached to the Kernel vDPA bus.
>>>>>
>>>>> Once attached to the vDPA bus, it can be used either by
>>>>> Kernel Virtio drivers, like virtio-net in our case, via
>>>>> the virtio-vdpa driver. Doing that, the device is visible
>>>>> to the Kernel networking stack and is exposed to userspace
>>>>> as a regular netdev.
>>>>>
>>>>> It can also be exposed to userspace thanks to the
>>>>> vhost-vdpa driver, via a vhost-vdpa chardev that can be
>>>>> passed to QEMU or the Virtio-user PMD.
>>>>>
>>>>> While VDUSE support is already available in the upstream
>>>>> Kernel, a couple of patches are required to support the
>>>>> network device type:
>>>>>
>>>>> https://gitlab.com/mcoquelin/linux/-/tree/vduse_networking_poc
>>>>>
>>>>> In order to attach the created VDUSE device to the vDPA
>>>>> bus, a recent iproute2 version containing the vdpa tool is
>>>>> required.
>>>>
>>>> Hi Maxime,
>>>>
>>>> Is this a replacement for the existing DPDK vDPA framework? What is
>>>> the plan for the long term?
>>>>
>>>
>>> No, this is not a replacement for the DPDK vDPA framework.
>>>
>>> We (Red Hat) don't have plans to support the DPDK vDPA framework in
>>> our products, but there are still contributions to DPDK vDPA by
>>> several vDPA hardware vendors (Intel, Nvidia, Xilinx), so I don't
>>> think it is going to be deprecated soon.
>>
>> Ferruh's question made me curious...
>>
>> I don't know anything about VDUSE or vDPA, and don't use any of it, so
>> consider me ignorant in this area.
>>
>> Is VDUSE an alternative to the existing DPDK vDPA framework? What are
>> the differences, e.g. in which cases would an application developer
>> (or user) choose one or the other?
>
> Maxime should give a better explanation... but let me just explain a
> bit.
>
> Vendors have vDPA HW that supports the vDPA framework (most likely in
> their DPU/IPU products). This work is introducing a way to emulate a
> SW vDPA device in userspace (DPDK), and this SW vDPA device also
> supports the vDPA framework.
>
> So it's not an alternative to the existing DPDK vDPA framework :)

Correct.

When using DPDK vDPA, the datapath of a Vhost-user port is offloaded
to a compatible physical NIC (i.e. a NIC that implements Virtio rings
support), while the control path remains the same as for a regular
Vhost-user port, i.e. a Vhost-user unix socket is provided to the
application (like QEMU or the DPDK Virtio-user PMD).

When using Kernel vDPA, the datapath is also offloaded to a
vDPA-compatible device, but the control path is managed by the vDPA
bus. The device can either be consumed by a Kernel Virtio driver (here
Virtio-net) when using Virtio-vDPA; in this case the device is exposed
as a regular netdev and, in the case of Kubernetes, can be used as the
primary interface of a pod. Or it can be exposed to user-space via
Vhost-vDPA, a chardev that can be seen as an alternative to Vhost-user
sockets; in this case it can for example be used by QEMU or the DPDK
Virtio-user PMD. In Kubernetes, it can be used as a secondary
interface.
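To make the second case concrete (the device path and command lines
below are only examples, nothing in this series mandates them): once a
vDPA device is bound to the vhost-vdpa driver, the resulting
/dev/vhost-vdpa-N chardev can be consumed like this:

  # QEMU consuming the vhost-vdpa chardev:
  $ qemu-system-x86_64 ... \
      -netdev vhost-vdpa,id=vdpa0,vhostdev=/dev/vhost-vdpa-0 \
      -device virtio-net-pci,netdev=vdpa0

  # Or the DPDK Virtio-user PMD consuming the same chardev:
  $ dpdk-testpmd --no-pci \
      --vdev=net_virtio_user0,path=/dev/vhost-vdpa-0 -- -i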
Now comes VDUSE. VDUSE is a Kernel vDPA device, but instead of being a
physical device where the Virtio datapath is offloaded, the Virtio
datapath is offloaded to a user-space application. With this series, a
DPDK application, like OVS-DPDK for instance, can create VDUSE devices
and expose them either as regular netdevs, by binding them to the
Kernel Virtio-net driver via Virtio-vDPA, or as Vhost-vDPA interfaces
to be consumed by another userspace application like QEMU or a DPDK
application using the Virtio-user PMD (a command sketch follows at the
end of this mail). With this solution, OVS-DPDK could serve both the
primary and secondary interfaces of Kubernetes pods.

I hope this clarifies things; I will add this information to the cover
letter for the next revisions. Let me know if anything is still
unclear.

I did a presentation at the last DPDK Summit [0], maybe the diagrams
will help to clarify things further.

Regards,
Maxime

> Thanks,
> Chenbo
>
>>
>> And if it is a better alternative, perhaps the documentation should
>> mention that it is recommended over DPDK vDPA. Just like we started
>> recommending alternatives to the KNI driver, so we could phase it out
>> and eventually get rid of it.
>>
>>>
>>> Regards,
>>> Maxime

[0]: https://static.sched.com/hosted_files/dpdkuserspace22/9f/Open%20DPDK%20to%20containers%20networking%20with%20VDUSE.pdf
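As promised above, a rough sketch of the VDUSE workflow (the device
and module names are illustrative, not something the series enforces):

  # Attach a VDUSE device created by the DPDK application to the vDPA
  # bus; "vduse" is the VDUSE management device, and "vduse0" must
  # match the name the application gave the device:
  $ vdpa dev add name vduse0 mgmtdev vduse
  $ vdpa dev show

  # Expose it as a regular netdev through the virtio-vdpa driver:
  $ modprobe virtio-vdpa

  # Or expose it to userspace as a /dev/vhost-vdpa-N chardev:
  $ modprobe vhost-vdpa

Note that when both drivers are loaded, recent kernels let you steer
which one binds via the vDPA device's driver_override sysfs attribute.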