From mboxrd@z Thu Jan 1 00:00:00 1970
From: Haifei Luo <haifeil@nvidia.com>
To: dev@dpdk.org
Cc: orika@nvidia.com, viacheslavo@nvidia.com, rasland@nvidia.com, xuemingl@nvidia.com, haifeil@nvidia.com
Date: Thu, 15 Apr 2021 14:19:22 +0300
Message-Id: <1618485564-128533-1-git-send-email-haifeil@nvidia.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1618384952-179763-1-git-send-email-haifeil@nvidia.com>
References: <1618384952-179763-1-git-send-email-haifeil@nvidia.com>
Subject: [dpdk-dev] [PATCH v3 0/2] support single flow dump on MLX5 PMD
List-Id: DPDK patches and discussions
Sender: "dev" <dev-bounces@dpdk.org>

Dumping information for all flows is already supported; it is also useful
to dump a single flow. Add single flow dump support to the MLX5 PMD.
Modify the mlx5_flow_dev_dump API to support it, and modify mlx5_socket
since one extra argument, flow_ptr, is added. The data structure sent to
the DPDK application from the utility triggering the flow dumps should be
packed, and its endianness must be specified.
The native host endianness can be used, since all exchange happens within
the same host (we use sendmsg ancillary data to share the file handle; a
remote approach is not applicable, as no inter-host communication happens).

The message structure to dump one/all flow(s):

struct mlx5_flow_dump_req {
	uint32_t port_id;
	uint64_t flow_ptr;
} __rte_packed;

If flow_ptr is 0, all flows for the specified port will be dumped.

Depends-on: series=16367 ("single flow dump")
http://patchwork.dpdk.org/project/dpdk/list/?series=16367

V2: Rebase to fix apply patch failure.
V3: Fix comments. Modify data structures sent to DPDK application.

Haifei Luo (2):
  common/mlx5: add mlx5 APIs for single flow dump feature
  net/mlx5: add mlx5 APIs for single flow dump feature

 drivers/common/mlx5/linux/meson.build | 6 +++--
 drivers/common/mlx5/linux/mlx5_glue.c | 13 ++++++++++
 drivers/common/mlx5/linux/mlx5_glue.h | 1 +
 drivers/common/mlx5/mlx5_devx_cmds.c  | 14 +++++++++++
 drivers/common/mlx5/mlx5_devx_cmds.h  | 2 ++
 drivers/common/mlx5/version.map       | 1 +
 drivers/net/mlx5/linux/mlx5_os.h      | 3 +++
 drivers/net/mlx5/linux/mlx5_socket.c  | 47 +++++++++++++++++++++++++++--------
 drivers/net/mlx5/mlx5.h               | 10 ++++++++
 drivers/net/mlx5/mlx5_flow.c          | 30 ++++++++++++++++++++--
 10 files changed, 113 insertions(+), 14 deletions(-)

-- 
1.8.3.1