From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id AB24BA3168
	for <public@inbox.dpdk.org>; Thu, 17 Oct 2019 10:16:52 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id 419AE1E881;
	Thu, 17 Oct 2019 10:16:52 +0200 (CEST)
Received: from mail-io1-f68.google.com (mail-io1-f68.google.com
 [209.85.166.68]) by dpdk.org (Postfix) with ESMTP id 17A8E1D40E
 for <dev@dpdk.org>; Thu, 17 Oct 2019 10:16:51 +0200 (CEST)
Received: by mail-io1-f68.google.com with SMTP id u8so1917218iom.5
 for <dev@dpdk.org>; Thu, 17 Oct 2019 01:16:51 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=p900w7TaTqbe8HOpgtcYe7E+V/kBWOZ21nnNZZw7Ko4=;
 b=as2ercPmosKfkHnj+cn5k8DeyWLHwYO2NJ7cqOEAcxApInBFrOOjwAOWg8fkuiDyWR
 kFydTHdmnw4umu6S0elVNd0begEHFqE32hKI2RFKG6eLBfDvdr2It2/wf81xJECRiBa5
 GgRIs2VTeaLVw7IRgw7t0JhxVhDiLoGtEAwXtXZG8KioSC7g3S6QRpEHFEoWbjArgPT3
 V2QxhS9zd1eeRhfqpvexIpFL4gF0mI+jKo3edCxTSKo1YbzwIDiaq7TaJfKvImonSye6
 h+bX3Pnpw2OJKCfsf0WZeXe+6DYx2wimQSjKmZD9kN3dZmqHXRkeIhwQyvf2xJEOA8W2
 Y0eA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=p900w7TaTqbe8HOpgtcYe7E+V/kBWOZ21nnNZZw7Ko4=;
 b=ciXLd+FJv39myqk50aMLHRASvstwwiXycahT5hqybrhLzysFbrmPxAP0phbsUUndzv
 MUDi0Qxwbh7cDtgKgsCPMD7V+K3XkFWyPbVyxGMjnTLyK0K/IQhPxxgJboHPeea+h1Le
 sUiE7Gc/JUPBIIUGQT3CjVGB4sg7peH/26hfu2mHcirlMvY4IlxbHeu1RNWMBZ8nR/f1
 dzwDbKuFCYJq549zwPOBAsCHSiXDcpS45MKyS/Yi9FJNH4fUHbT140zf0TC3Mnf2XPO5
 KrGRsMSD504ZZ40ovuCnC0+lzWrcZMS6nmIzVtvagaZR5Hz8gvq8tvzmTM8VtgtrAE+3
 PNvw==
X-Gm-Message-State: APjAAAXeVqJRS8jYX1FdsgagkLex95vkIFu9SaccANP5Nvgb5jTI8Sek
 QwMPs4zw3xov9hFDZfyeE/Dvg9REzsczhVKq+YE=
X-Google-Smtp-Source: APXvYqzpODdO9WErl6EohTzf1sKaB9o/42TKaJrJIvZN50RTwcayilyhRvFSUULXXt71sKhW4DbEtPfU1e4mMxpbBIw=
X-Received: by 2002:a6b:7609:: with SMTP id g9mr1872875iom.130.1571300210241; 
 Thu, 17 Oct 2019 01:16:50 -0700 (PDT)
MIME-Version: 1.0
References: <20191017072723.36509-1-shahafs@mellanox.com>
In-Reply-To: <20191017072723.36509-1-shahafs@mellanox.com>
From: Jerin Jacob <jerinjacobk@gmail.com>
Date: Thu, 17 Oct 2019 13:46:39 +0530
Message-ID: <CALBAE1MWxGPoyWrw-WTeB4ER72_Ma2ipO-WAssrUOUZPc5m2rg@mail.gmail.com>
To: Shahaf Shuler <shahafs@mellanox.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, Thomas Monjalon <thomas@monjalon.net>, 
 "olivier.matz@6wind.com" <olivier.matz@6wind.com>,
 "wwasko@nvidia.com" <wwasko@nvidia.com>, 
 "spotluri@nvidia.com" <spotluri@nvidia.com>, Asaf Penso <asafp@mellanox.com>, 
 Slava Ovsiienko <viacheslavo@mellanox.com>
Content-Type: text/plain; charset="UTF-8"
Subject: Re: [dpdk-dev] [RFC PATCH 20.02] mbuf: hint PMD not to inline packet
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>

On Thu, Oct 17, 2019 at 12:57 PM Shahaf Shuler <shahafs@mellanox.com> wrote:
>
> Some PMDs inline the mbuf data buffer directly to the device. This is
> done to save the overhead of the PCI headers involved when the device
> DMA reads the buffer through the pointer. For some devices it is
> essential in order to reach the peak BW.
>
> However, there are cases where such inlining is inefficient. For example,
> when the data buffer resides in another device's memory (like a GPU or
> storage device), an attempt to inline such a buffer will result in high
> PCI overhead for reading and copying the data from the remote device.

Some questions to understand the use case:
# Is this the use case where the CPU, local DRAM, NW card and GPU memory
are connected on a coherent bus?
# Assuming the CPU needs to touch the buffer prior to Tx, will the hint
still be useful in that case?
# How does the application know that the data buffer is in GPU memory,
in order to use this flag efficiently?
# Just a random thought: would it help to create two different mempools,
one from local DRAM and one from GPU memory, so that the application
can work transparently? (a rough sketch follows)
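
For illustration, something along these lines (completely untested sketch,
assuming the GPU memory has already been registered and attached to a
dedicated malloc heap named "gpu_heap"; pool names and sizes are made up):

#include <rte_lcore.h>
#include <rte_malloc.h>
#include <rte_mbuf.h>

static struct rte_mempool *pool_dram;
static struct rte_mempool *pool_gpu;

static int
create_tx_pools(void)
{
        int gpu_socket;

        /* Regular pool backed by local DRAM on the caller's socket. */
        pool_dram = rte_pktmbuf_pool_create("pool_dram", 8192, 256, 0,
                        RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
        if (pool_dram == NULL)
                return -1;

        /* Pool backed by GPU memory, assuming that memory was earlier
         * registered and added to a malloc heap called "gpu_heap"
         * (e.g. rte_malloc_heap_create() + rte_malloc_heap_memory_add()).
         */
        gpu_socket = rte_malloc_heap_get_socket("gpu_heap");
        if (gpu_socket < 0)
                return -1;

        pool_gpu = rte_pktmbuf_pool_create("pool_gpu", 8192, 256, 0,
                        RTE_MBUF_DEFAULT_BUF_SIZE, gpu_socket);
        return pool_gpu == NULL ? -1 : 0;
}

With such a split, the PMD (or the application) could decide per pool
whether inlining makes sense, without needing a per-packet flag.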

>
> To support a mixed traffic pattern (some buffers from local DRAM, some
> buffers from other devices) with high BW, a hint flag is introduced in
> the mbuf.
> The application will hint the PMD whether or not it should try to inline
> the given mbuf data buffer. The PMD should make a best effort to act upon
> this request.
>
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
> ---
>  lib/librte_mbuf/rte_mbuf.h | 9 +++++++++
>  1 file changed, 9 insertions(+)
>
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index 98225ec80b..5934532b7f 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -203,6 +203,15 @@ extern "C" {
>  /* add new TX flags here */
>
>  /**
> + * Hint to the PMD not to inline the mbuf data buffer to the device,
> + * but rather let the device use its DMA engine to fetch the data with
> + * the provided pointer.
> + *
> + * This flag is only a hint. The PMD should enforce it as best effort.
> + */
> +#define PKT_TX_DONT_INLINE_HINT (1ULL << 39)
> +
> +/**
>   * Indicate that the metadata field in the mbuf is in use.
>   */
>  #define PKT_TX_METADATA        (1ULL << 40)
> --
> 2.12.0
>
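
And just to make the expected usage concrete, an untested sketch of the
application side, assuming the patch above is applied; port_id, queue_id
and the GPU-backed mempool are placeholders for the application's own
port, TX queue and pool:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static int
tx_one_remote_buffer(uint16_t port_id, uint16_t queue_id,
                     struct rte_mempool *pool_gpu)
{
        struct rte_mbuf *m = rte_pktmbuf_alloc(pool_gpu);

        if (m == NULL)
                return -1;

        /* ... fill or attach the data buffer residing in device memory ... */

        /* Ask the PMD not to inline this payload; best-effort hint only. */
        m->ol_flags |= PKT_TX_DONT_INLINE_HINT;

        if (rte_eth_tx_burst(port_id, queue_id, &m, 1) != 1) {
                rte_pktmbuf_free(m);
                return -1;
        }
        return 0;
}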