From: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
To: dev@dpdk.org
Cc: shahafs@mellanox.com, matan@mellanox.com, rasland@mellanox.com,
 thomas@monjalon.net, orika@mellanox.com
Date: Wed, 11 Dec 2019 17:01:33 +0000
Message-Id: <1576083693-26199-1-git-send-email-viacheslavo@mellanox.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <20191017072723.36509-1-shahafs@mellanox.com>
References: <20191017072723.36509-1-shahafs@mellanox.com>
Subject: [dpdk-dev] [RFC v2] mlx5/net: hint PMD not to inline packet

Some PMDs inline the mbuf data buffer directly into the device transmit
descriptor. This saves the PCI transaction overhead imposed when the
device DMA reads the data by buffer pointer. For some devices it is
essential in order to provide the full bandwidth.

However, there are cases where such inlining is inefficient. For example,
when the data buffer resides in another device's memory (like a GPU or a
storage device), an attempt to inline the buffer results in high PCI
overhead for reading and copying the data from the remote device to host
memory.

To support a mixed traffic pattern (some buffers from local host memory,
some buffers from other devices) with high bandwidth, a hint flag is
introduced in the mbuf. The application hints the PMD whether or not it
should try to inline the given mbuf data buffer. The PMD should make a
best effort to act upon this request.

The hint flag RTE_NET_MLX5_DYNFLAG_NO_INLINE_NAME is supposed to be
dynamic, registered by the application with rte_mbuf_dynflag_register().
The flag is purely vendor specific and is declared in the PMD specific
header rte_pmd_mlx5.h, intended to be used by specific applications.

To query the supported specific flags at runtime, a private routine is
introduced:

  int rte_pmd_mlx5_get_dyn_flag_names(
          uint16_t port,
          char *names[],
          uint16_t n)

It returns the array of specific flags currently supported (for the
present hardware and configuration).
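As an illustration, here is a minimal application-side sketch of the
capability query and flag registration described above. It assumes the
RFC's proposed rte_pmd_mlx5.h interface (rte_pmd_mlx5_get_dyn_flag_names()
and RTE_NET_MLX5_DYNFLAG_NO_INLINE_NAME); MAX_DYN_FLAGS and the function
name register_no_inline_hint() are made up for the example:

  #include <errno.h>
  #include <stdint.h>
  #include <string.h>

  #include <rte_mbuf_dyn.h>
  #include <rte_string_fns.h>
  #include <rte_pmd_mlx5.h>   /* PMD specific header proposed by this RFC */

  #define MAX_DYN_FLAGS 8     /* arbitrary bound for this example */

  /* Bit number of the registered dynamic flag, -1 if unsupported. */
  static int no_inline_flag = -1;

  static int
  register_no_inline_hint(uint16_t port_id)
  {
          static char buf[MAX_DYN_FLAGS][RTE_MBUF_DYN_NAMESIZE];
          char *names[MAX_DYN_FLAGS];
          struct rte_mbuf_dynflag desc;
          int i, n;

          for (i = 0; i < MAX_DYN_FLAGS; i++)
                  names[i] = buf[i];
          /* Query the vendor specific flags supported by this port
           * (private mlx5 routine proposed by this RFC). */
          n = rte_pmd_mlx5_get_dyn_flag_names(port_id, names, MAX_DYN_FLAGS);
          if (n < 0)
                  return n;
          for (i = 0; i < n; i++) {
                  if (strcmp(names[i], RTE_NET_MLX5_DYNFLAG_NO_INLINE_NAME))
                          continue;
                  /* Register the flag; the return value is the bit
                   * position to set in mbuf->ol_flags. */
                  memset(&desc, 0, sizeof(desc));
                  rte_strlcpy(desc.name, RTE_NET_MLX5_DYNFLAG_NO_INLINE_NAME,
                              sizeof(desc.name));
                  no_inline_flag = rte_mbuf_dynflag_register(&desc);
                  return no_inline_flag;
          }
          return -ENOTSUP;
  }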
The "not inline hint" feature operating flow is the following one: - application start - probe the devices, ports are created - query the port capabilities - if port supporting the feature is found - register dynamic flag RTE_NET_MLX5_DYNFLAG_NO_INLINE_NAME - application starts the ports - on dev_start() PMD checks whether the feature flag is registered and enables the feature support in datapath - application might set this flag in ol_flags field of mbuf in the packets being sent and PMD will handle ones appropriately. Signed-off-by: Shahaf Shuler Signed-off-by: Viacheslav Ovsiienko --- v1: https://patches.dpdk.org/patch/61348/