From: Alexander Kozyrev <akozyrev@nvidia.com>
To: dev@dpdk.org
Subject: [PATCH 07/10] app/testpmd: implement rte flow queue create flow
Date: Tue, 18 Jan 2022 07:07:59 +0200
Message-ID: <20220118050802.3915187-8-akozyrev@nvidia.com>
In-Reply-To: <20220118050802.3915187-6-akozyrev@nvidia.com>
References: <20220118050802.3915187-6-akozyrev@nvidia.com>
X-Mailer: git-send-email 2.18.2

Add testpmd support for the rte_flow_q_create/rte_flow_q_destroy API.
Provide the command line interface for enqueueing flow
creation/destruction operations. Usage example:

  testpmd> flow queue 0 create 0 drain yes
           table 6 item_template 0 action_template 0
           pattern eth dst is 00:16:3e:31:15:c3 / end actions drop / end
  testpmd> flow queue 0 destroy 0 drain yes rule 0

Signed-off-by: Alexander Kozyrev
---
 app/test-pmd/cmdline_flow.c                 | 266 +++++++++++++++++++-
 app/test-pmd/config.c                       | 153 +++++++++++
 app/test-pmd/testpmd.h                      |   7 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  55 ++++
 4 files changed, 480 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 4dc2a2aaeb..6a8e6fc683 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -59,6 +59,7 @@ enum index {
 	COMMON_ITEM_TEMPLATE_ID,
 	COMMON_ACTION_TEMPLATE_ID,
 	COMMON_TABLE_ID,
+	COMMON_QUEUE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -91,6 +92,7 @@ enum index {
 	ISOLATE,
 	TUNNEL,
 	FLEX,
+	QUEUE,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -113,6 +115,22 @@ enum index {
 	ACTION_TEMPLATE_SPEC,
 	ACTION_TEMPLATE_MASK,
 
+	/* Queue arguments. */
+	QUEUE_CREATE,
+	QUEUE_DESTROY,
+
+	/* Queue create arguments. */
+	QUEUE_CREATE_ID,
+	QUEUE_CREATE_DRAIN,
+	QUEUE_TABLE,
+	QUEUE_ITEM_TEMPLATE,
+	QUEUE_ACTION_TEMPLATE,
+	QUEUE_SPEC,
+
+	/* Queue destroy arguments. */
+	QUEUE_DESTROY_ID,
+	QUEUE_DESTROY_DRAIN,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -889,6 +907,8 @@ struct token {
 struct buffer {
 	enum index command; /**< Flow command. */
 	portid_t port; /**< Affected port ID. */
+	queueid_t queue; /** Async queue ID.
*/ + bool drain; /** Drain the queue on async oparation */ union { struct { struct rte_flow_port_attr port_attr; @@ -918,6 +938,7 @@ struct buffer { uint32_t action_id; } ia; /* Indirect action query arguments */ struct { + uint32_t table_id; uint32_t it_id; uint32_t at_id; struct rte_flow_attr attr; @@ -1067,6 +1088,18 @@ static const enum index next_table_destroy_attr[] = { ZERO, }; +static const enum index next_queue_subcmd[] = { + QUEUE_CREATE, + QUEUE_DESTROY, + ZERO, +}; + +static const enum index next_queue_destroy_attr[] = { + QUEUE_DESTROY_ID, + END, + ZERO, +}; + static const enum index next_ia_create_attr[] = { INDIRECT_ACTION_CREATE_ID, INDIRECT_ACTION_INGRESS, @@ -2116,6 +2149,12 @@ static int parse_table(struct context *, const struct token *, static int parse_table_destroy(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); +static int parse_qo(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); +static int parse_qo_destroy(struct context *, const struct token *, + const char *, unsigned int, + void *, unsigned int); static int parse_tunnel(struct context *, const struct token *, const char *, unsigned int, void *, unsigned int); @@ -2191,6 +2230,8 @@ static int comp_action_template_id(struct context *, const struct token *, unsigned int, char *, unsigned int); static int comp_table_id(struct context *, const struct token *, unsigned int, char *, unsigned int); +static int comp_queue_id(struct context *, const struct token *, + unsigned int, char *, unsigned int); /** Token definitions. */ static const struct token token_list[] = { @@ -2362,6 +2403,13 @@ static const struct token token_list[] = { .call = parse_int, .comp = comp_table_id, }, + [COMMON_QUEUE_ID] = { + .name = "{queue_id}", + .type = "QUEUE_ID", + .help = "queue id", + .call = parse_int, + .comp = comp_queue_id, + }, /* Top-level command. */ [FLOW] = { .name = "flow", @@ -2383,7 +2431,8 @@ static const struct token token_list[] = { QUERY, ISOLATE, TUNNEL, - FLEX)), + FLEX, + QUEUE)), .call = parse_init, }, /* Top-level command. */ @@ -2641,6 +2690,83 @@ static const struct token token_list[] = { .call = parse_table, }, /* Top-level command. */ + [QUEUE] = { + .name = "queue", + .help = "queue a flow rule operation", + .next = NEXT(next_queue_subcmd, NEXT_ENTRY(COMMON_PORT_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, port)), + .call = parse_qo, + }, + /* Sub-level commands. */ + [QUEUE_CREATE] = { + .name = "create", + .help = "create a flow rule", + .next = NEXT(NEXT_ENTRY(QUEUE_TABLE), NEXT_ENTRY(COMMON_QUEUE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, queue)), + .call = parse_qo, + }, + [QUEUE_DESTROY] = { + .name = "destroy", + .help = "destroy a flow rule", + .next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID), + NEXT_ENTRY(COMMON_QUEUE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, queue)), + .call = parse_qo_destroy, + }, + /* Queue arguments. 
*/ + [QUEUE_TABLE] = { + .name = "table", + .help = "specify table id", + .next = NEXT(NEXT_ENTRY(QUEUE_ITEM_TEMPLATE), + NEXT_ENTRY(COMMON_TABLE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.vc.table_id)), + .call = parse_qo, + }, + [QUEUE_ITEM_TEMPLATE] = { + .name = "item_template", + .help = "specify item template id", + .next = NEXT(NEXT_ENTRY(QUEUE_ACTION_TEMPLATE), + NEXT_ENTRY(COMMON_ITEM_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.vc.it_id)), + .call = parse_qo, + }, + [QUEUE_ACTION_TEMPLATE] = { + .name = "action_template", + .help = "specify action template id", + .next = NEXT(NEXT_ENTRY(QUEUE_CREATE_DRAIN), + NEXT_ENTRY(COMMON_ACTION_TEMPLATE_ID)), + .args = ARGS(ARGS_ENTRY(struct buffer, + args.vc.at_id)), + .call = parse_qo, + }, + [QUEUE_CREATE_DRAIN] = { + .name = "drain", + .help = "drain queue immediately", + .next = NEXT(NEXT_ENTRY(ITEM_PATTERN), + NEXT_ENTRY(COMMON_BOOLEAN)), + .args = ARGS(ARGS_ENTRY(struct buffer, drain)), + .call = parse_qo, + }, + [QUEUE_DESTROY_DRAIN] = { + .name = "drain", + .help = "drain queue immediately", + .next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID), + NEXT_ENTRY(COMMON_BOOLEAN)), + .args = ARGS(ARGS_ENTRY(struct buffer, drain)), + .call = parse_qo_destroy, + }, + [QUEUE_DESTROY_ID] = { + .name = "rule", + .help = "specify rule id to destroy", + .next = NEXT(next_queue_destroy_attr, + NEXT_ENTRY(COMMON_UNSIGNED)), + .args = ARGS(ARGS_ENTRY_PTR(struct buffer, + args.destroy.rule)), + .call = parse_qo_destroy, + }, + /* Top-level command. */ [INDIRECT_ACTION] = { .name = "indirect_action", .type = "{command} {port_id} [{arg} [...]]", @@ -8154,6 +8280,111 @@ parse_table_destroy(struct context *ctx, const struct token *token, return len; } +/** Parse tokens for queue create commands. */ +static int +parse_qo(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. */ + if (!out) + return len; + if (!out->command) { + if (ctx->curr != QUEUE) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.vc.data = (uint8_t *)out + size; + return len; + } + switch (ctx->curr) { + case QUEUE_CREATE: + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + return len; + case QUEUE_TABLE: + case QUEUE_ITEM_TEMPLATE: + case QUEUE_ACTION_TEMPLATE: + case QUEUE_CREATE_DRAIN: + return len; + case ITEM_PATTERN: + out->args.vc.pattern = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + ctx->object = out->args.vc.pattern; + ctx->objmask = NULL; + return len; + case ACTIONS: + out->args.vc.actions = + (void *)RTE_ALIGN_CEIL((uintptr_t) + (out->args.vc.pattern + + out->args.vc.pattern_n), + sizeof(double)); + ctx->object = out->args.vc.actions; + ctx->objmask = NULL; + return len; + default: + return -1; + } +} + +/** Parse tokens for queue destroy command. */ +static int +parse_qo_destroy(struct context *ctx, const struct token *token, + const char *str, unsigned int len, + void *buf, unsigned int size) +{ + struct buffer *out = buf; + uint32_t *flow_id; + + /* Token name must match. */ + if (parse_default(ctx, token, str, len, NULL, 0) < 0) + return -1; + /* Nothing else to do if there is no buffer. 
*/ + if (!out) + return len; + if (!out->command || out->command == QUEUE) { + if (ctx->curr != QUEUE_DESTROY) + return -1; + if (sizeof(*out) > size) + return -1; + out->command = ctx->curr; + ctx->objdata = 0; + ctx->object = out; + ctx->objmask = NULL; + out->args.destroy.rule = + (void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1), + sizeof(double)); + return len; + } + switch (ctx->curr) { + case QUEUE_DESTROY_ID: + flow_id = out->args.destroy.rule + + out->args.destroy.rule_n++; + if ((uint8_t *)flow_id > (uint8_t *)out + size) + return -1; + ctx->objdata = 0; + ctx->object = flow_id; + ctx->objmask = NULL; + return len; + case QUEUE_DESTROY_DRAIN: + return len; + default: + return -1; + } +} + static int parse_flex(struct context *ctx, const struct token *token, const char *str, unsigned int len, @@ -9193,6 +9424,28 @@ comp_table_id(struct context *ctx, const struct token *token, return i; } +/** Complete available queue IDs. */ +static int +comp_queue_id(struct context *ctx, const struct token *token, + unsigned int ent, char *buf, unsigned int size) +{ + unsigned int i = 0; + struct rte_port *port; + + (void)token; + if (port_id_is_invalid(ctx->port, DISABLED_WARN) || + ctx->port == (portid_t)RTE_PORT_ALL) + return -1; + port = &ports[ctx->port]; + for (i = 0; i < port->queue_nb; i++) { + if (buf && i == ent) + return snprintf(buf, size, "%u", i); + } + if (buf) + return -1; + return i; +} + /** Internal context. */ static struct context cmd_flow_context; @@ -9485,6 +9738,17 @@ cmd_flow_parsed(const struct buffer *in) in->args.table_destroy.table_id_n, in->args.table_destroy.table_id); break; + case QUEUE_CREATE: + port_queue_flow_create(in->port, in->queue, in->drain, + in->args.vc.table_id, in->args.vc.it_id, + in->args.vc.at_id, in->args.vc.pattern, + in->args.vc.actions); + break; + case QUEUE_DESTROY: + port_queue_flow_destroy(in->port, in->queue, in->drain, + in->args.destroy.rule_n, + in->args.destroy.rule); + break; case INDIRECT_ACTION_CREATE: port_action_handle_create( in->port, in->args.vc.attr.group, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index 07582fa552..31164d6bf6 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -2411,6 +2411,159 @@ port_flow_table_destroy(portid_t port_id, return ret; } +/** Enqueue create flow rule operation. 
*/ +int +port_queue_flow_create(portid_t port_id, queueid_t queue_id, + bool drain, uint32_t table_id, + uint32_t item_id, uint32_t action_id, + const struct rte_flow_item *pattern, + const struct rte_flow_action *actions) +{ + struct rte_flow_q_ops_attr ops_attr = { .drain = drain }; + struct rte_flow_q_op_res comp = { 0 }; + struct rte_flow *flow; + struct rte_port *port; + struct port_flow *pf; + struct port_table *pt; + uint32_t id = 0; + bool found; + int ret = 0; + struct rte_flow_error error; + struct rte_flow_action_age *age = age_action_get(actions); + + port = &ports[port_id]; + if (port->flow_list) { + if (port->flow_list->id == UINT32_MAX) { + printf("Highest rule ID is already assigned," + " delete it first"); + return -ENOMEM; + } + id = port->flow_list->id + 1; + } + + if (queue_id >= port->queue_nb) { + printf("Queue #%u is invalid\n", queue_id); + return -EINVAL; + } + + found = false; + pt = port->table_list; + while (pt) { + if (table_id == pt->id) { + found = true; + break; + } + pt = pt->next; + } + if (!found) { + printf("Table #%u is invalid\n", table_id); + return -EINVAL; + } + + pf = port_flow_new(NULL, pattern, actions, &error); + if (!pf) + return port_flow_complain(&error); + if (age) { + pf->age_type = ACTION_AGE_CONTEXT_TYPE_FLOW; + age->context = &pf->age_type; + } + /* Poisoning to make sure PMDs update it in case of error. */ + memset(&error, 0x11, sizeof(error)); + flow = rte_flow_q_flow_create(port_id, queue_id, &ops_attr, + pt->table, pattern, item_id, actions, action_id, &error); + if (!flow) { + uint32_t flow_id = pf->id; + port_queue_flow_destroy(port_id, queue_id, true, 1, &flow_id); + return port_flow_complain(&error); + } + + while (ret == 0) { + /* Poisoning to make sure PMDs update it in case of error. */ + memset(&error, 0x22, sizeof(error)); + ret = rte_flow_q_dequeue(port_id, queue_id, &comp, 1, &error); + if (ret < 0) { + printf("Failed to poll queue\n"); + return -EINVAL; + } + } + + pf->next = port->flow_list; + pf->id = id; + pf->flow = flow; + port->flow_list = pf; + printf("Flow rule #%u creation enqueued\n", pf->id); + return 0; +} + +/** Enqueue number of destroy flow rules operations. */ +int +port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, + bool drain, uint32_t n, const uint32_t *rule) +{ + struct rte_flow_q_ops_attr op_attr = { .drain = drain }; + struct rte_flow_q_op_res comp = { 0 }; + struct rte_port *port; + struct port_flow **tmp; + uint32_t c = 0; + int ret = 0; + + if (port_id_is_invalid(port_id, ENABLED_WARN) || + port_id == (portid_t)RTE_PORT_ALL) + return -EINVAL; + port = &ports[port_id]; + + if (queue_id >= port->queue_nb) { + printf("Queue #%u is invalid\n", queue_id); + return -EINVAL; + } + + tmp = &port->flow_list; + while (*tmp) { + uint32_t i; + + for (i = 0; i != n; ++i) { + struct rte_flow_error error; + struct port_flow *pf = *tmp; + + if (rule[i] != pf->id) + continue; + /* + * Poisoning to make sure PMD + * update it in case of error. + */ + memset(&error, 0x33, sizeof(error)); + if (rte_flow_q_flow_destroy(port_id, queue_id, &op_attr, + pf->flow, &error)) { + ret = port_flow_complain(&error); + continue; + } + + while (ret == 0) { + /* + * Poisoning to make sure PMD + * update it in case of error. 
+ */ + memset(&error, 0x44, sizeof(error)); + ret = rte_flow_q_dequeue(port_id, queue_id, + &comp, 1, &error); + if (ret < 0) { + printf("Failed to poll queue\n"); + return -EINVAL; + } + } + + printf("Flow rule #%u destruction enqueued\n", pf->id); + *tmp = pf->next; + free(pf); + break; + } + if (i == n) + tmp = &(*tmp)->next; + ++c; + } + return ret; +} + /** Create flow rule. */ int port_flow_create(portid_t port_id, diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index b8655b9987..99845b9e2f 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -927,6 +927,13 @@ int port_flow_table_create(portid_t port_id, uint32_t id, uint32_t nb_action_templates, uint32_t *action_templates); int port_flow_table_destroy(portid_t port_id, uint32_t n, const uint32_t *table); +int port_queue_flow_create(portid_t port_id, queueid_t queue_id, + bool drain, uint32_t table_id, + uint32_t item_id, uint32_t action_id, + const struct rte_flow_item *pattern, + const struct rte_flow_action *actions); +int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id, + bool drain, uint32_t n, const uint32_t *rule); int port_flow_validate(portid_t port_id, const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst index f8a87564be..eb9dff7221 100644 --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst @@ -3355,6 +3355,19 @@ following sections. pattern {item} [/ {item} [...]] / end actions {action} [/ {action} [...]] / end +- Enqueue creation of a flow rule:: + + flow queue {port_id} create {queue_id} [drain {boolean}] + table {table_id} item_template {item_template_id} + action_template {action_template_id} + pattern {item} [/ {item} [...]] / end + actions {action} [/ {action} [...]] / end + +- Enqueue destruction of specific flow rules:: + + flow queue {port_id} destroy {queue_id} + [drain {boolean}] rule {rule_id} [...] + - Create a flow rule:: flow create {port_id} @@ -3654,6 +3667,29 @@ one. **All unspecified object values are automatically initialized to 0.** +Enqueueing creation of flow rules +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow queue create`` adds creation operation of a flow rule to a queue. +It is bound to ``rte_flow_q_flow_create()``:: + + flow queue {port_id} create {queue_id} [drain {boolean}] + table {table_id} item_template {item_template_id} + action_template {action_template_id} + pattern {item} [/ {item} [...]] / end + actions {action} [/ {action} [...]] / end + +If successful, it will return a flow rule ID usable with other commands:: + + Flow rule #[...] created + +Otherwise it will show an error message of the form:: + + Caught error type [...] ([...]): [...] + +This command uses the same pattern items and actions as ``flow create``, +their format is described in `Creating flow rules`_. + Attributes ^^^^^^^^^^ @@ -4368,6 +4404,25 @@ Non-existent rule IDs are ignored:: Flow rule #0 destroyed testpmd> +Enqueueing destruction of flow rules +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +``flow queue destroy`` adds destruction operations to destroy one or more rules +from their rule ID (as returned by ``flow queue create``) to a queue, +this command calls ``rte_flow_q_flow_destroy()`` as many times as necessary:: + + flow queue {port_id} destroy {queue_id} + [drain {boolean}] rule {rule_id} [...] + +If successful, it will show:: + + Flow rule #[...] 
destruction enqueued
+
+It does not report anything for rule IDs that do not exist. The usual error
+message is shown when a rule cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 
 Querying flow rules
 ~~~~~~~~~~~~~~~~~~~
-- 
2.18.2
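For context, the enqueue-then-poll pattern that port_queue_flow_create()
implements in config.c above can be sketched from an application's point of
view as follows. This is a minimal sketch, not part of the patch: it assumes
the rte_flow_q_* definitions introduced earlier in this series
(rte_flow_q_flow_create(), rte_flow_q_dequeue(), struct rte_flow_q_ops_attr,
struct rte_flow_q_op_res), a template table object created beforehand, and
item/action template index 0 as in the testpmd example; the helper name
queue_create_and_poll() is hypothetical.

/*
 * Minimal application-side sketch (not part of the patch) of the
 * enqueue-then-poll pattern used by port_queue_flow_create() above.
 * "table" is assumed to be a template table created beforehand; item and
 * action template index 0 are used, matching the testpmd example.
 */
#include <stdbool.h>
#include <stdint.h>
#include <rte_flow.h>

static struct rte_flow *
queue_create_and_poll(uint16_t port_id, uint32_t queue_id, void *table,
		      const struct rte_flow_item *pattern,
		      const struct rte_flow_action *actions)
{
	/* drain = true asks the PMD to push the operation to HW immediately. */
	struct rte_flow_q_ops_attr ops_attr = { .drain = true };
	struct rte_flow_q_op_res comp = { 0 };
	struct rte_flow_error error;
	struct rte_flow *flow;
	int ret;

	/* Enqueue the rule creation on the given flow queue. */
	flow = rte_flow_q_flow_create(port_id, queue_id, &ops_attr,
				      table, pattern, 0, actions, 0, &error);
	if (flow == NULL)
		return NULL; /* enqueue failed */
	/* Poll the same queue until one operation result is dequeued. */
	do {
		ret = rte_flow_q_dequeue(port_id, queue_id, &comp, 1, &error);
	} while (ret == 0);
	if (ret < 0)
		return NULL; /* polling failed; otherwise comp holds the result */
	return flow;
}

Destruction follows the same shape: enqueue with rte_flow_q_flow_destroy() and
then poll the queue with the same rte_flow_q_dequeue() loop, as
port_queue_flow_destroy() does above.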