From: Ilya Maximets
To: Bruce Richardson
Cc: i.maximets@ovn.org, "Pai G, Sunil", "Stokes, Ian", "Hu, Jiayu",
    "Ferriter, Cian", "Van Haaren, Harry",
    "Maxime Coquelin (maxime.coquelin@redhat.com)",
    "ovs-dev@openvswitch.org", "dev@dpdk.org", "Mcnamara, John",
    "O'Driscoll, Tim", "Finn, Emma"
Date: Tue, 5 Apr 2022 13:29:25 +0200
Subject: Re: OVS DPDK DMA-Dev library/Design Discussion
Message-ID: <0633e31c-68fc-618c-e4f8-78a74662078c@ovn.org>
References: <22e3ff73-f3d9-abae-1866-90d133af5528@ovn.org>

On 3/30/22 16:09, Bruce Richardson wrote:
> On Wed, Mar 30, 2022 at 01:41:34PM +0200, Ilya Maximets wrote:
>> On 3/30/22 13:12, Bruce Richardson wrote:
>>> On Wed, Mar 30, 2022 at 12:52:15PM +0200, Ilya Maximets wrote:
>>>> On 3/30/22 12:41, Ilya Maximets wrote:
>>>>> Forking the thread to discuss a memory consistency/ordering model.
>>>>>
>>>>> AFAICT, dmadev can be anything from part of a CPU to a completely
>>>>> separate PCI device.  However, I don't see any memory ordering being
>>>>> enforced or even described in the dmadev API or documentation.
>>>>> Please, point me to the correct documentation, if I somehow missed it.
>>>>>
>>>>> We have a DMA device (A) and a CPU core (B) writing respectively
>>>>> the data and the descriptor info.  CPU core (C) is reading the
>>>>> descriptor and the data it points to.
>>>>>
>>>>> A few things about that process:
>>>>>
>>>>> 1. There is no memory barrier between writes A and B (Did I miss
>>>>>    them?).  Meaning that those operations can be seen by C in a
>>>>>    different order regardless of barriers issued by C and regardless
>>>>>    of the nature of devices A and B.
>>>>>
>>>>> 2. Even if there is a write barrier between A and B, there is
>>>>>    no guarantee that C will see these writes in the same order,
>>>>>    as C doesn't use real memory barriers because vhost advertises
>>>>
>>>> s/advertises/does not advertise/
>>>>
>>>>> VIRTIO_F_ORDER_PLATFORM.
>>>>>
>>>>> So, I'm getting to the conclusion that there is a missing write
>>>>> barrier on the vhost side and vhost itself must not advertise the
>>>>
>>>> s/must not/must/
>>>>
>>>> Sorry, I wrote things backwards. :)
>>>>
>>>>> VIRTIO_F_ORDER_PLATFORM, so the virtio driver can use actual memory
>>>>> barriers.
>>>>>
>>>>> Would like to hear some thoughts on that topic.  Is it a real issue?
>>>>> Is it an issue considering all possible CPU architectures and DMA
>>>>> HW variants?
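To make the scenario above concrete, here is a minimal sketch of where
the write barrier in question would have to go on the side that publishes
the descriptor.  The function and variable names are made up for
illustration only; this is not the actual OVS/vhost code:

/* Illustrative sketch only: core B publishes a descriptor after the DMA
 * engine (A) finished writing the payload; core C reads the descriptor
 * flag and then the payload.  Names are hypothetical. */

#include <stdbool.h>
#include <stdint.h>
#include <rte_atomic.h>
#include <rte_dmadev.h>

static void
publish_descriptor(int16_t dma_dev_id, uint16_t vchan,
                   volatile uint16_t *desc_flag, uint16_t used_val)
{
    uint16_t last_idx;
    bool has_error = false;

    /* Core B: wait until the DMA engine reports the data copy as done. */
    while (rte_dma_completed(dma_dev_id, vchan, 1, &last_idx,
                             &has_error) == 0)
        ;

    /* The barrier in question: without it the store to *desc_flag (B)
     * may become visible to core C before the DMA-written data (A) does,
     * unless the platform/device already guarantees that ordering. */
    rte_wmb();

    *desc_flag = used_val;  /* make the descriptor visible to core C */
}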
>>>>>
>>>
>>> In terms of ordering of operations using dmadev:
>>>
>>> * Some DMA HW will perform all operations strictly in order, e.g. Intel
>>>   IOAT, while other hardware may not guarantee order of operations/do
>>>   things in parallel, e.g. Intel DSA.  Therefore the dmadev API provides
>>>   the fence operation which allows the order to be enforced.  The fence
>>>   can be thought of as a full memory barrier, meaning no jobs after the
>>>   barrier can be started until all those before it have completed.
>>>   Obviously, for HW where order is always enforced, this will be a
>>>   no-op, but for hardware that parallelizes, we want to reduce the
>>>   fences to get the best performance.
>>>
>>> * For synchronization between DMA devices and CPUs, where a CPU can
>>>   only write after a DMA copy has been done, the CPU must wait for the
>>>   dma completion to guarantee ordering.  Once the completion has been
>>>   returned, the completed operation is globally visible to all cores.
>>
>> Thanks for the explanation!  Some questions though:
>>
>> In our case one CPU waits for completion and another CPU is actually
>> using the data.  IOW, "CPU must wait" is a bit ambiguous.  Which CPU
>> must wait?
>>
>> Or should it be "Once the completion is visible on any core, the
>> completed operation is globally visible to all cores." ?
>>
>
> The latter.
> Once the change to memory/cache is visible to any core, it is visible
> to all of them.  This applies to regular CPU memory writes too - at
> least on IA, and I expect on many other architectures - once the write
> is visible outside the current core it is visible to every other core.
> Once the data hits the L1 or L2 cache of any core, any subsequent
> requests for that data from any other core will "snoop" the latest data
> from that core's cache, even if it has not made its way down to a
> shared cache, e.g. L3 on most IA systems.

It sounds like you're referring to the "multicopy atomicity" of the
architecture.  However, that is not a universally supported feature.
AFAICT, POWER and older ARM systems don't support it, so writes performed
by one core are not necessarily available to all other cores at the same
time.  That means that if CPU0 writes the data and the completion flag,
and CPU1 reads the completion flag and writes the ring, CPU2 may see the
ring write but still not see the write of the data, even though there was
a control dependency on CPU1.  There should be a full memory barrier on
CPU1 in order to fulfill the memory ordering requirements for CPU2, IIUC.

In our scenario CPU0 is a DMA device, which may or may not be part of a
CPU and may have different memory consistency/ordering requirements.  So,
the question is: does the DPDK DMA API guarantee multicopy atomicity
between the DMA device and all CPU cores regardless of the CPU
architecture and the nature of the DMA device?

>
>> And the main question:
>> Are these synchronization claims documented somewhere?
>>
>
> Not explicitly, no.  However, the way DMA devices work with regard to
> global observability is absolutely no different from how crypto,
> compression, or any other hardware devices work.  Doing a memory copy
> using a DMA device is exactly the same as doing a no-op crypto or
> compression job with the output going to a separate output buffer.  In
> all cases, a job cannot be considered completed until you get a hardware
> completion notification for it, and once you get that notification, it
> is globally observable by all entities.
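For reference, this is the kind of full barrier on CPU1 that the concern
above is about.  A minimal sketch with hypothetical names; on a
multicopy-atomic machine a weaker barrier (or just the control
dependency) might be enough, which is exactly the open question:

/* CPU0 (or the DMA device) writes data and then completion_flag.
 * CPU1 polls completion_flag and then publishes ring_entry.
 * CPU2 reads ring_entry and then the data.
 * Without a full barrier on CPU1, CPU2 may observe ring_entry before
 * the data on architectures that are not multicopy-atomic. */

#include <stdint.h>
#include <rte_atomic.h>

extern volatile uint32_t completion_flag;  /* written by CPU0 / DMA   */
extern volatile uint32_t ring_entry;       /* CPU1 writes, CPU2 reads */

void
cpu1_publish(uint32_t entry)
{
    while (completion_flag == 0)  /* CPU1: observe the completion */
        ;

    rte_smp_mb();  /* full barrier, not just a control dependency, so
                    * that CPU2 cannot see ring_entry without also
                    * seeing the data written before completion_flag */

    ring_entry = entry;  /* publish to CPU2 */
}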
>
> The only difference for the dmadev APIs is that we do have the
> capability to specify that jobs must be done in a specific order, using
> a fence flag, which is documented in the API documentation.
>
> /Bruce
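For completeness, a minimal sketch of how that fence flag is used through
the dmadev API to order two copies against each other.  dev_id, vchan,
the iova values and the helper name are placeholders, not code from OVS
or any driver:

#include <stdbool.h>
#include <stdint.h>
#include <rte_dmadev.h>

/* Enqueue two copies where the second must start only after the first
 * has completed, then submit and poll for both completions. */
static int
copy_data_then_descriptor(int16_t dev_id, uint16_t vchan,
                          rte_iova_t data_src, rte_iova_t data_dst,
                          uint32_t data_len,
                          rte_iova_t desc_src, rte_iova_t desc_dst,
                          uint32_t desc_len)
{
    uint16_t last_idx;
    uint16_t done = 0;
    bool has_error = false;

    if (rte_dma_copy(dev_id, vchan, data_src, data_dst, data_len, 0) < 0)
        return -1;

    /* RTE_DMA_OP_FLAG_FENCE: this job may only be processed after all
     * previously enqueued jobs have completed. */
    if (rte_dma_copy(dev_id, vchan, desc_src, desc_dst, desc_len,
                     RTE_DMA_OP_FLAG_FENCE) < 0)
        return -1;

    rte_dma_submit(dev_id, vchan);

    /* Only once both completions have been reported is the result
     * considered globally observable, as described above. */
    while (done < 2 && !has_error)
        done += rte_dma_completed(dev_id, vchan, 2 - done, &last_idx,
                                  &has_error);

    return has_error ? -1 : 0;
}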