From: Eli Britstein <elibr@nvidia.com>
To: dev@dpdk.org
Cc: Thomas Monjalon, Eli Britstein, Ori Kam, Aman Singh, Yuying Zhang
Subject: [PATCH 1/2] app/testpmd: change rule type
Date: Wed, 22 Feb 2023 16:11:37 +0200
Message-ID: <20230222141139.3233715-1-elibr@nvidia.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Change the rule ID type to uintptr_t (instead of the current uint32_t)
so it can accommodate larger IDs, as a pre-step towards attaching a
user ID to flows.

Signed-off-by: Eli Britstein <elibr@nvidia.com>
---
 app/test-pmd/cmdline_flow.c | 12 ++++++------
 app/test-pmd/config.c       | 34 ++++++++++++++++++----------------
 app/test-pmd/testpmd.h      | 10 +++++-----
 3 files changed, 29 insertions(+), 27 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 9309607f11..a2709e8aa9 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -1085,16 +1085,16 @@ struct buffer {
 			uint8_t *data;
 		} vc; /**< Validate/create arguments. */
 		struct {
-			uint32_t *rule;
-			uint32_t rule_n;
+			uintptr_t *rule;
+			uintptr_t rule_n;
 		} destroy; /**< Destroy arguments. */
 		struct {
 			char file[128];
 			bool mode;
-			uint32_t rule;
+			uintptr_t rule;
 		} dump; /**< Dump arguments. */
 		struct {
-			uint32_t rule;
+			uintptr_t rule;
 			struct rte_flow_action action;
 		} query; /**< Query arguments. */
 		struct {
@@ -9683,7 +9683,7 @@ parse_qo_destroy(struct context *ctx, const struct token *token,
 		 void *buf, unsigned int size)
 {
 	struct buffer *out = buf;
-	uint32_t *flow_id;
+	uintptr_t *flow_id;
 
 	/* Token name must match. */
 	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
@@ -10899,7 +10899,7 @@ comp_rule_id(struct context *ctx, const struct token *token,
 	port = &ports[ctx->port];
 	for (pf = port->flow_list; pf != NULL; pf = pf->next) {
 		if (buf && i == ent)
-			return snprintf(buf, size, "%u", pf->id);
+			return snprintf(buf, size, "%"PRIu64, pf->id);
 		++i;
 	}
 	if (buf)
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 4121c5c9bb..167cb246c5 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2723,7 +2723,7 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 	flow = rte_flow_async_create_by_index(port_id, queue_id, &op_attr,
 		pt->table, rule_idx, actions, actions_idx, job, &error);
 	if (!flow) {
-		uint32_t flow_id = pf->id;
+		uintptr_t flow_id = pf->id;
 		port_queue_flow_destroy(port_id, queue_id, true, 1, &flow_id);
 		free(job);
 		return port_flow_complain(&error);
@@ -2734,14 +2734,14 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 	pf->flow = flow;
 	job->pf = pf;
 	port->flow_list = pf;
-	printf("Flow rule #%u creation enqueued\n", pf->id);
+	printf("Flow rule #%"PRIu64" creation enqueued\n", pf->id);
 	return 0;
 }
 
 /** Enqueue number of destroy flow rules operations. */
 int
 port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
-			bool postpone, uint32_t n, const uint32_t *rule)
+			bool postpone, uint32_t n, const uintptr_t *rule)
 {
 	struct rte_flow_op_attr op_attr = { .postpone = postpone };
 	struct rte_port *port;
@@ -2788,7 +2788,8 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 				ret = port_flow_complain(&error);
 				continue;
 			}
-			printf("Flow rule #%u destruction enqueued\n", pf->id);
+			printf("Flow rule #%"PRIu64" destruction enqueued\n",
+			       pf->id);
 			*tmp = pf->next;
 			break;
 		}
@@ -3087,7 +3088,7 @@ port_queue_flow_push(portid_t port_id, queueid_t queue_id)
 /** Pull queue operation results from the queue. */
 static int
 port_queue_aged_flow_destroy(portid_t port_id, queueid_t queue_id,
-			     const uint32_t *rule, int nb_flows)
+			     const uintptr_t *rule, int nb_flows)
 {
 	struct rte_port *port = &ports[port_id];
 	struct rte_flow_op_result *res;
@@ -3150,7 +3151,7 @@ port_queue_flow_aged(portid_t port_id, uint32_t queue_id, uint8_t destroy)
 {
 	void **contexts;
 	int nb_context, total = 0, idx;
-	uint32_t *rules = NULL;
+	uintptr_t *rules = NULL;
 	struct rte_port *port;
 	struct rte_flow_error error;
 	enum age_action_context_type *type;
@@ -3206,7 +3207,7 @@ port_queue_flow_aged(portid_t port_id, uint32_t queue_id, uint8_t destroy)
 		switch (*type) {
 		case ACTION_AGE_CONTEXT_TYPE_FLOW:
 			ctx.pf = container_of(type, struct port_flow, age_type);
-			printf("%-20s\t%" PRIu32 "\t%" PRIu32 "\t%" PRIu32
+			printf("%-20s\t%" PRIuPTR "\t%" PRIu32 "\t%" PRIu32
 			       "\t%c%c%c\t\n",
 			       "Flow",
 			       ctx.pf->id,
@@ -3354,13 +3355,13 @@ port_flow_create(portid_t port_id,
 	port->flow_list = pf;
 	if (tunnel_ops->enabled)
 		port_flow_tunnel_offload_cmd_release(port_id, tunnel_ops, pft);
-	printf("Flow rule #%u created\n", pf->id);
+	printf("Flow rule #%"PRIu64" created\n", pf->id);
 	return 0;
 }
 
 /** Destroy a number of flow rules. */
 int
-port_flow_destroy(portid_t port_id, uint32_t n, const uint32_t *rule)
+port_flow_destroy(portid_t port_id, uint32_t n, const uintptr_t *rule)
 {
 	struct rte_port *port;
 	struct port_flow **tmp;
@@ -3389,7 +3390,7 @@ port_flow_destroy(portid_t port_id, uint32_t n, const uint32_t *rule)
 				ret = port_flow_complain(&error);
 				continue;
 			}
-			printf("Flow rule #%u destroyed\n", pf->id);
+			printf("Flow rule #%"PRIu64" destroyed\n", pf->id);
 			*tmp = pf->next;
 			free(pf);
 			break;
@@ -3434,7 +3435,7 @@ port_flow_flush(portid_t port_id)
 
 /** Dump flow rules. */
 int
-port_flow_dump(portid_t port_id, bool dump_all, uint32_t rule_id,
+port_flow_dump(portid_t port_id, bool dump_all, uintptr_t rule_id,
 	       const char *file_name)
 {
 	int ret = 0;
@@ -3463,7 +3464,8 @@ port_flow_dump(portid_t port_id, bool dump_all, uint32_t rule_id,
 			}
 		}
 		if (found == false) {
-			fprintf(stderr, "Failed to dump to flow %d\n", rule_id);
+			fprintf(stderr, "Failed to dump to flow %"PRIu64"\n",
+				rule_id);
 			return -EINVAL;
 		}
 	}
@@ -3493,7 +3495,7 @@ port_flow_dump(portid_t port_id, bool dump_all, uint32_t rule_id,
 
 /** Query a flow rule. */
 int
-port_flow_query(portid_t port_id, uint32_t rule,
+port_flow_query(portid_t port_id, uintptr_t rule,
 		const struct rte_flow_action *action)
 {
 	struct rte_flow_error error;
@@ -3515,7 +3517,7 @@ port_flow_query(portid_t port_id, uint32_t rule,
 		if (pf->id == rule)
 			break;
 	if (!pf) {
-		fprintf(stderr, "Flow rule #%u not found\n", rule);
+		fprintf(stderr, "Flow rule #%"PRIu64" not found\n", rule);
 		return -ENOENT;
 	}
 	ret = rte_flow_conv(RTE_FLOW_CONV_OP_ACTION_NAME_PTR,
@@ -3622,7 +3624,7 @@ port_flow_aged(portid_t port_id, uint8_t destroy)
 		switch (*type) {
 		case ACTION_AGE_CONTEXT_TYPE_FLOW:
 			ctx.pf = container_of(type, struct port_flow, age_type);
-			printf("%-20s\t%" PRIu32 "\t%" PRIu32 "\t%" PRIu32
+			printf("%-20s\t%" PRIu64 "\t%" PRIu32 "\t%" PRIu32
 			       "\t%c%c%c\t\n",
 			       "Flow",
 			       ctx.pf->id,
@@ -3700,7 +3702,7 @@ port_flow_list(portid_t port_id, uint32_t n, const uint32_t *group)
 		const struct rte_flow_action *action = pf->rule.actions;
 		const char *name;
 
-		printf("%" PRIu32 "\t%" PRIu32 "\t%" PRIu32 "\t%c%c%c\t",
+		printf("%" PRIu64 "\t%" PRIu32 "\t%" PRIu32 "\t%c%c%c\t",
 		       pf->id,
 		       pf->rule.attr->group,
 		       pf->rule.attr->priority,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 329a6378a1..ba29d97293 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -215,7 +215,7 @@ struct port_table {
 struct port_flow {
 	struct port_flow *next; /**< Next flow in list. */
 	struct port_flow *tmp; /**< Temporary linking. */
-	uint32_t id; /**< Flow rule ID. */
+	uintptr_t id; /**< Flow rule ID. */
 	struct rte_flow *flow; /**< Opaque flow object returned by PMD. */
 	struct rte_flow_conv_rule rule; /**< Saved flow rule description. */
 	enum age_action_context_type age_type; /**< Age action context type. */
@@ -948,7 +948,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 			   const struct rte_flow_item *pattern,
 			   const struct rte_flow_action *actions);
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
-			    bool postpone, uint32_t n, const uint32_t *rule);
+			    bool postpone, uint32_t n, const uintptr_t *rule);
 int port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
 				    bool postpone, uint32_t id,
 				    const struct rte_flow_indir_action_conf *conf,
 				    const struct rte_flow_action *action);
@@ -984,11 +984,11 @@ int port_action_handle_query(portid_t port_id, uint32_t id);
 void update_age_action_context(const struct rte_flow_action *actions,
 			       struct port_flow *pf);
 int mcast_addr_pool_destroy(portid_t port_id);
-int port_flow_destroy(portid_t port_id, uint32_t n, const uint32_t *rule);
+int port_flow_destroy(portid_t port_id, uint32_t n, const uintptr_t *rule);
 int port_flow_flush(portid_t port_id);
 int port_flow_dump(portid_t port_id, bool dump_all,
-		   uint32_t rule, const char *file_name);
-int port_flow_query(portid_t port_id, uint32_t rule,
+		   uintptr_t rule, const char *file_name);
+int port_flow_query(portid_t port_id, uintptr_t rule,
 		    const struct rte_flow_action *action);
 void port_flow_list(portid_t port_id, uint32_t n, const uint32_t *group);
 void port_flow_aged(portid_t port_id, uint8_t destroy);
-- 
2.25.1
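
As background for the format-specifier changes in the diff (the %u conversions
become PRIu64/PRIuPTR once the rule ID is a uintptr_t), below is a minimal
standalone sketch, not part of the patch, showing how a pointer-sized ID can be
printed portably with the <inttypes.h> macros. The flow_id variable and its
value here are illustrative only.

#include <inttypes.h> /* PRIuPTR/PRIu64 printf format macros */
#include <stdint.h>   /* uintptr_t, UINTPTR_MAX */
#include <stdio.h>

int
main(void)
{
	/* Pointer-sized rule ID, mirroring the widened 'id' field. */
	uintptr_t flow_id = UINTPTR_MAX; /* largest value the type can hold */

	/* PRIuPTR expands to the conversion specifier that matches
	 * uintptr_t on the current target, so no cast or truncation is
	 * needed when printing the ID. */
	printf("Flow rule #%" PRIuPTR "\n", flow_id);
	return 0;
}

On common 32-bit targets uintptr_t is 32 bits wide, so PRIuPTR adapts to the
narrower type automatically, whereas PRIu64 always expects a 64-bit argument.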