From: Thomas Monjalon
To: Matan Azrad, "Xu, Rosen"
Cc: Maxime Coquelin, "Bie, Tiwei", "Wang, Zhihong", "Wang, Xiao W",
 "Yigit, Ferruh", "dev@dpdk.org", "Pei, Andy", Roni Bar Yanai
Date: Fri, 10 Jan 2020 10:21:04 +0100
Message-ID: <6026430.K2JlShyGXD@xps>
In-Reply-To: <0E78D399C70DA940A335608C6ED296D73AC869EE@SHSMSX104.ccr.corp.intel.com>
References: <1577287161-10321-1-git-send-email-matan@mellanox.com>
 <0E78D399C70DA940A335608C6ED296D73AC869EE@SHSMSX104.ccr.corp.intel.com>
Subject: Re: [dpdk-dev] [PATCH v1 0/3] Introduce new class for vDPA device drivers
List-Id: DPDK patches and discussions

10/01/2020 03:38, Xu, Rosen:
> From: Matan Azrad
> > From: Xu, Rosen
> > > From: Thomas Monjalon
> > > > 09/01/2020 03:27, Xu, Rosen:
> > > > > From: Thomas Monjalon
> > > > > > 08/01/2020 13:39, Xu, Rosen:
> > > > > > > From: Matan Azrad
> > > > > > > > From: Xu, Rosen
> > > > > > > > > Did you think about OVS DPDK?
> > > > > > > > > vDPA is a basic module for OVS; currently it will take
> > > > > > > > > some exception-path packet processing for OVS, so it still
> > > > > > > > > needs to integrate eth_dev.
> > > > > > > >
> > > > > > > > I don't understand your question.
> > > > > > > >
> > > > > > > > What do you mean by "integrate eth_dev"?
> > > > > > >
> > > > > > > My question is: in the OVS DPDK scenario, the vDPA device implements
> > > > > > > eth_dev ops, so creating a new class and moving the ifc code to this
> > > > > > > new class is not OK.
> > > > > >
> > > > > > 1/ I don't understand the relation with OVS.
> > > > > >
> > > > > > 2/ No, a vDPA device implements vDPA ops.
> > > > > > If it implements ethdev ops, it is an ethdev device.
> > > > > >
> > > > > > Please show an example of what you claim.
> > > > >
> > > > > Answers to 1 and 2:
> > > > >
> > > > > In OVS DPDK, each network device (such as a NIC, vHost, etc.) of DPDK
> > > > > needs to be implemented as an rte_eth_dev and to provide eth_dev_ops,
> > > > > such as packet TX/RX, for OVS.
> > > >
> > > > No, OVS is also using the vhost API for vhost ports.
> > >
> > > Yes, the vhost PMD is not a good example.
> > >
> > > > > Take vHost (a Virtio back end) for example; OVS starts up a vHost
> > > > > interface like this:
> > > > > ovs-vsctl add-port br0 vhost-user-1 -- set Interface vhost-user-1 type=dpdkvhostuser
> > > > > drivers/net/vhost implements vHost as an rte_eth_dev and is integrated in OVS.
> > > > > OVS can send/receive packets to/from a VM with rte_eth_tx_burst() and
> > > > > rte_eth_rx_burst(), which call the eth_dev_ops implementation of
> > > > > drivers/net/vhost.
> > > >
> > > > No, it is using rte_vhost_dequeue_burst() and
> > > > rte_vhost_enqueue_burst(), which are not in ethdev.
> > > >
> > > > > vDPA is also a Virtio back end and works like vHost; same as vHost,
> > > > > it will be implemented as an rte_eth_dev and also be integrated into OVS.
> > > > No, vDPA is not "implemented as rte_eth_dev".
> > >
> > > Currently, vDPA isn't integrated with OVS.
> > >
> > > > So, it's not OK to move the ifc code from drivers/net.
> > > >
> > > > drivers/net/ifc has no ethdev implementation at all.
> > >
> > > Since OVS hasn't integrated vDPA, it doesn't implement rte_eth_dev, but
> > > there are many discussions in the OVS community about vDPA, some from
> > > Mellanox; it seems a vDPA port will be implemented as an rte_eth_dev port
> > > in OVS in the near future.
> > > https://patchwork.ozlabs.org/patch/1178474/
> > >
> > > Matan,
> > > Could you clarify how OVS integrates vDPA in the Mellanox patch?
> > >
> > > > Rosen, I'm sorry, these arguments look irrelevant, so I won't
> > > > consider them as blocking the integration of this patch.
> > >
> > > What I mentioned is not blocking the integration of this patch; I just
> > > want to get clarification from Matan on how to integrate a vDPA port in OVS.
> >
> > Hi,
> >
> > OVS, like any other application, should use the current vDPA API to attach a
> > probed vDPA device to a vhost device.
> > See the example application in examples/vdpa.
> >
> > Here, we just introduce a new class to hold all the vDPA drivers, with no
> > change in the API.
> >
> > As I understand, no vDPA device is currently integrated in OVS.
> >
> > I think it can be integrated only when full offload is integrated, since the
> > vDPA device forwards the traffic from the HW directly to the virtio queue.
> > Once that is in place, I guess the offloads will be configured through the
> > representor of the vDPA device (VF), which is managed by an ethdev device.
> >
> > Matan.
> Hi,
>
> I'm still confused about your last sentence, "the representor of the vdpa
> device (VF) which is managed by an ethdev device".
> My understanding is that there are some connections and dependencies
> between an rte_eth_dev and a vDPA device?
> Am I right, or do you have another explanation?

A vDPA port does not allow any ethdev operations (like rte_flow).
In order to configure some offloads on the device, OVS needs an ethdev port.
In the Mellanox case, an ethdev VF representor port can be instantiated.
So we may have two ports for the same device:
- vDPA for the data path with the VM
- ethdev for the offloads control path
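[Editor's note] The key API point argued above, that OVS drives a vhost-user port through the vhost library (rte_vhost_dequeue_burst()/rte_vhost_enqueue_burst()) rather than through the ethdev burst calls, can be sketched roughly as below. This is an illustrative sketch, not code from the patch series or from OVS: it assumes a DPDK build environment, and the hypothetical `vid` and `mbuf_pool` stand for the vhost device id delivered by the vhost new_device() callback and a pool created with rte_pktmbuf_pool_create().

```c
#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_vhost.h>

#define VIRTIO_RXQ 0  /* guest RX virtqueue: the host enqueues here    */
#define VIRTIO_TXQ 1  /* guest TX virtqueue: the host dequeues from it */
#define BURST_SZ  32

/* Poll one vhost port: drain what the guest transmitted and, in this
 * sketch, simply echo the burst back to the guest. */
static void
poll_vhost_port(int vid, struct rte_mempool *mbuf_pool)
{
	struct rte_mbuf *pkts[BURST_SZ];
	uint16_t i, nb_rx;

	/* Packets the guest has placed on its TX virtqueue. */
	nb_rx = rte_vhost_dequeue_burst(vid, VIRTIO_TXQ, mbuf_pool,
					pkts, BURST_SZ);
	if (nb_rx == 0)
		return;

	/* A real switch would classify and forward here. */
	if (rte_vhost_enqueue_burst(vid, VIRTIO_RXQ, pkts, nb_rx) < nb_rx) {
		/* Guest ring was full; the excess packets are dropped. */
	}

	/* The enqueue copies data into the guest ring, so the mbufs stay
	 * owned by the caller and are freed here. */
	for (i = 0; i < nb_rx; i++)
		rte_pktmbuf_free(pkts[i]);
}
```

Note that none of this goes through eth_dev_ops; the two ports Thomas describes would coexist, with a flow like rte_flow configured only on the ethdev representor port.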