From mboxrd@z Thu Jan  1 00:00:00 1970
From: Jerin Jacob
Date: Thu, 17 Oct 2019 22:48:59 +0530
To: Shahaf Shuler
Cc: dev@dpdk.org, Thomas Monjalon, olivier.matz@6wind.com,
 wwasko@nvidia.com, spotluri@nvidia.com, Asaf Penso, Slava Ovsiienko
References: <20191017072723.36509-1-shahafs@mellanox.com>
Subject: Re: [dpdk-dev] [RFC PATCH 20.02] mbuf: hint PMD not to inline packet

On Thu, Oct 17, 2019 at 4:30 PM Shahaf Shuler wrote:
>
> Thursday, October 17, 2019 11:17 AM, Jerin Jacob:
> > Subject: Re: [dpdk-dev] [RFC PATCH 20.02] mbuf: hint PMD not to inline
> > packet
> >
> > On Thu, Oct 17, 2019 at 12:57 PM Shahaf Shuler wrote:
> > >
> > > Some PMDs inline the mbuf data buffer directly to the device. This is
> > > done to save the overhead of the PCI headers involved when the device
> > > DMA-reads the buffer pointer. For some devices it is essential in
> > > order to reach the peak BW.
> > >
> > > However, there are cases where such inlining is inefficient. For
> > > example, when the data buffer resides in another device's memory
> > > (like a GPU or storage device). An attempt to inline such a buffer
> > > will result in high PCI overhead for reading and copying the data
> > > from the remote device.
> >
> > Some questions to understand the use case
> > # Is this a use case where the CPU, local DRAM, NW card and GPU memory
> > are connected on a coherent bus?
>
> Yes. For example, one can allocate GPU memory and map it to the GPU BAR,
> making it accessible from the host CPU through LD/ST.
>
> > # Assuming the CPU needs to touch the buffer prior to Tx, will it be
> > useful in that case?
>
> If the CPU needs to modify the data then no, it will be more efficient to
> copy the data to the CPU and then send it.
> However, there are use cases where the data is DMA'd with zero copy to
> the GPU (for example), the GPU performs the processing on the data, and
> then the CPU sends the mbuf (without touching the data).

OK. If I understand it correctly, it is for offloading the network/compute
functions from the NW card and/or CPU to the GPU.

> > # How does the application know the data buffer is in GPU memory, in
> > order to use this flag efficiently?
>
> Because it made it happen. For example, it attached the mbuf external
> buffer from the other device's memory.
>
> > # Just a random thought: does it help if we create two different
> > mempools, one from local DRAM and one from GPU memory, so that the
> > application can work transparently?
>
> But you will still need to teach the PMD which pool it can inline and
> which it cannot.
> IMO it is more generic to have it per mbuf. Moreover, the application
> has this info.

IMO, we cannot use the PKT_TX_DONT_INLINE_HINT flag for generic
applications; the application usage will be tightly coupled with the
platform and the capabilities of the GPU or host CPU etc.

I think pushing this logic to the application is a bad idea. But if you
are writing a custom application and you need control at the per-packet
level, then this flag may be the only way.

> > > To support a mixed traffic pattern (some buffers from local DRAM,
> > > some buffers from other devices) with high BW, a hint flag is
> > > introduced in the mbuf.
> > > The application will hint the PMD whether or not it should try to
> > > inline the given mbuf data buffer. The PMD should do its best effort
> > > to act upon this request.
> > >
> > > Signed-off-by: Shahaf Shuler
> > > ---
> > >  lib/librte_mbuf/rte_mbuf.h | 9 +++++++++
> > >  1 file changed, 9 insertions(+)
> > >
> > > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> > > index 98225ec80b..5934532b7f 100644
> > > --- a/lib/librte_mbuf/rte_mbuf.h
> > > +++ b/lib/librte_mbuf/rte_mbuf.h
> > > @@ -203,6 +203,15 @@ extern "C" {
> > >  /* add new TX flags here */
> > >
> > >  /**
> > > + * Hint to the PMD to not inline the mbuf data buffer to the device,
> > > + * rather let the device use its DMA engine to fetch the data with
> > > + * the provided pointer.
> > > + *
> > > + * This flag is only a hint. The PMD should enforce it as best effort.
> > > + */
> > > +#define PKT_TX_DONT_INLINE_HINT (1ULL << 39)
> > > +
> > > +/**
> > >  * Indicate that the metadata field in the mbuf is in use.
> > >  */
> > >  #define PKT_TX_METADATA (1ULL << 40)
> > > --
> > > 2.12.0
> > >
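
For illustration, a minimal sketch (not part of the patch) of how an
application that placed packet data in device memory might combine an
externally attached buffer with the proposed hint. rte_pktmbuf_attach_extbuf()
and rte_pktmbuf_ext_shinfo_init_helper() are existing mbuf APIs; the GPU
buffer pointer, its IOVA and the gpu_buf_free() callback are hypothetical
placeholders, and PKT_TX_DONT_INLINE_HINT is the flag proposed by this RFC:

#include <rte_mbuf.h>
#include <rte_ethdev.h>

/* Placeholder: return the device-resident buffer to its owner. */
static void
gpu_buf_free(void *addr, void *opaque)
{
	(void)addr;
	(void)opaque;
}

static int
tx_gpu_resident_packet(uint16_t port_id, uint16_t queue_id,
		       struct rte_mempool *mp, void *gpu_buf_va,
		       rte_iova_t gpu_buf_iova, uint16_t buf_len,
		       uint16_t pkt_len)
{
	struct rte_mbuf_ext_shared_info *shinfo;
	struct rte_mbuf *m;

	m = rte_pktmbuf_alloc(mp);
	if (m == NULL)
		return -1;

	/* Shared info is carved from the tail of the external buffer. */
	shinfo = rte_pktmbuf_ext_shinfo_init_helper(gpu_buf_va, &buf_len,
						    gpu_buf_free, NULL);
	if (shinfo == NULL) {
		rte_pktmbuf_free(m);
		return -1;
	}

	/* Attach the device-resident buffer instead of copying its data. */
	rte_pktmbuf_attach_extbuf(m, gpu_buf_va, gpu_buf_iova, buf_len,
				  shinfo);
	rte_pktmbuf_append(m, pkt_len);

	/* Ask the PMD to DMA from the pointer rather than inline the data. */
	m->ol_flags |= PKT_TX_DONT_INLINE_HINT;

	if (rte_eth_tx_burst(port_id, queue_id, &m, 1) != 1) {
		rte_pktmbuf_free(m);
		return -1;
	}
	return 0;
}

With two mempools (one backed by local DRAM, one by device memory), the same
flag could be set only on mbufs whose data was attached from the
device-memory pool, leaving regular traffic eligible for inlining.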