From: Ilya Maximets <i.maximets@ovn.org>
Date: Wed, 30 Mar 2022 12:41:30 +0200
Subject: Re: OVS DPDK DMA-Dev library/Design Discussion
To: "Pai G, Sunil", "Stokes, Ian", "Hu, Jiayu", "Ferriter, Cian",
 "Van Haaren, Harry", "Maxime Coquelin (maxime.coquelin@redhat.com)",
 ovs-dev@openvswitch.org, dev@dpdk.org
Cc: i.maximets@ovn.org, "Mcnamara, John", "O'Driscoll, Tim", "Finn, Emma",
 "Richardson, Bruce"

Forking the thread to discuss a memory consistency/ordering model.

AFAICT, dmadev can be anything from a part of a CPU to a completely separate
PCI device. However, I don't see any memory ordering being enforced or even
described in the dmadev API or documentation. Please point me to the correct
documentation if I somehow missed it.

We have a DMA device (A) and a CPU core (B) writing, respectively, the data
and the descriptor info. CPU core (C) is reading the descriptor and the data
it points to.

A few things about that process:

1. There is no memory barrier between writes A and B (did I miss them?).
   That means C can observe those writes in a different order, regardless of
   barriers issued by C and regardless of the nature of devices A and B.

2. Even if there were a write barrier between A and B, there is still no
   guarantee that C will see these writes in the same order, because C
   doesn't use real memory barriers: vhost doesn't advertise
   VIRTIO_F_ORDER_PLATFORM, so the virtio driver falls back to weaker
   SMP-style barriers.

So I'm coming to the conclusion that there is a missing write barrier on the
vhost side, and that vhost itself must advertise VIRTIO_F_ORDER_PLATFORM so
that the virtio driver can use actual memory barriers.

I would like to hear some thoughts on that topic. Is it a real issue? Is it
an issue considering all possible CPU architectures and DMA HW variants?

Best regards, Ilya Maximets.
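
To make the A/B/C scenario above concrete, below is a minimal C sketch of the
producer side, written against the DPDK dmadev completion API
(rte_dma_completed()). The used_idx variable and the
publish_completed_copies() helper are hypothetical simplifications of the
real vhost data path, not the actual vhost code. The sketch only marks where
the write barrier discussed in point 1 would have to sit; whether rte_wmb()
is actually sufficient for every CPU architecture and DMA HW variant is
exactly the open question.

/*
 * Minimal sketch (not the actual vhost code) of the producer side,
 * assuming the DPDK dmadev completion API.  "used_idx" stands in for
 * the descriptor info written by CPU core B and read by core C.
 */
#include <stdbool.h>
#include <stdint.h>

#include <rte_atomic.h>   /* rte_wmb() */
#include <rte_dmadev.h>   /* rte_dma_completed() */

#define BATCH_SIZE 32

/* Hypothetical stand-in for the descriptor info read by core C. */
volatile uint16_t used_idx;

void
publish_completed_copies(int16_t dma_dev_id, uint16_t vchan)
{
    uint16_t last_idx;
    bool has_error = false;
    uint16_t n;

    /* Write A happened in guest memory via the DMA engine; the completion
     * query below only tells us the engine considers the copies done. */
    n = rte_dma_completed(dma_dev_id, vchan, BATCH_SIZE, &last_idx, &has_error);
    if (n == 0 || has_error)
        return;

    /* Without a write barrier here, the store to used_idx (write B) may
     * become visible to core C before the DMA-written payload (write A),
     * depending on the CPU architecture and on how the DMA engine posts
     * its writes. */
    rte_wmb();

    /* Write B: publish the descriptor info. */
    used_idx = (uint16_t)(used_idx + n);
}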