To: 谢华伟(此时此刻), ferruh.yigit@intel.com
Cc: dev@dpdk.org, anatoly.burakov@intel.com, david.marchand@redhat.com, zhihong.wang@intel.com, chenbo.xia@intel.com,
grive@u256.net
From: Maxime Coquelin <maxime.coquelin@redhat.com>
Message-ID: <5791302d-4e43-9b26-860f-83d066060a02@redhat.com>
Date: Thu, 21 Jan 2021 09:47:54 +0100
In-Reply-To: <4a2bfa2c-5911-1281-3684-0250665e4901@alibaba-inc.com>
Subject: Re: [dpdk-dev] [PATCH v5 0/3] support both PIO and MMIO BAR for virtio PMD

On 1/21/21 5:12 AM, 谢华伟(此时此刻) wrote:
>
> On 2021/1/13 1:37, Maxime Coquelin wrote:
>>
>> On 10/22/20 5:51 PM, 谢华伟(此时此刻) wrote:
>>> From: "huawei.xhw"
>>>
>>> Legacy virtio-pci only supports PIO BAR resources. As we need to
>>> create lots of virtio devices and PIO resources on x86 are very
>>> limited, we expose an MMIO BAR.
>>>
>>> The kernel supports both PIO and MMIO BARs for legacy virtio-pci
>>> devices. We handle the different BAR types in a similar way.
>>>
>>> In the previous implementation, with igb_uio we get the PIO address
>>> from the igb_uio sysfs entry; with uio_pci_generic, we get it from
>>> /proc/ioports. For PIO/MMIO reads and writes, there are different
>>> paths for different drivers and architectures. For VFIO, PIO/MMIO
>>> reads and writes go through syscalls, which is a big performance
>>> issue.
>> Regarding the performance issue, do you have some numbers to share?
>> AFAICS, it can only have an impact on performance when interrupt mode
>> is used or queue notification is enabled.
>>
>> Does your HW Virtio implementation require notification?
>
> Yes, the hardware needs notification to tell which queue has more
> buffers.
>
> The vhost backend also needs notification when it is not running in
> polling mode.
>
> It is easy for a software backend to sync with the frontend through
> memory on whether it needs notification, but that is a big burden for
> hardware.

Yes, I understand, thanks for the clarification.

> Anyway, using the vfio ioctl isn't needed at all. The virtio PMD is
> the only consumer of pci_vfio_ioport_read.

My understanding is that using the VFIO read/write ops is required at
least for the IOMMU-enabled case without cap_sys_rawio. And anyway,
using inb/outb just bypasses VFIO.

As I suggested in my other reply, it is better to document that, for
devices exposing PIO BARs, the user should consider using a UIO driver
if performance is a concern.

> we could consider whether we still need the pci_vfio_ioport_read
> related API in the future.

I disagree. I think the pci_vfio_ioport_* API is required at least for
the IOMMU-enabled case. Documentation is the way to go in my opinion;
we can also add a warning in pci_vfio_ioport_map() that performance may
be degraded compared to UIO when the IOMMU is disabled, if you think it
may help users.

Thanks,
Maxime

> /huawei
>>
>> Is performance the only issue preventing your HW from working with
>> the Virtio PMD, or does this series also fix some functional issues?
>>
>> Best regards,
>> Maxime
>>
>