From: Yuan Wang
To: dev@dpdk.org, Aman Singh, Yuying Zhang
Cc: thomas@monjalon.net, andrew.rybchenko@oktetlabs.ru, ferruh.yigit@xilinx.com,
 mdr@ashroe.eu, xiaoyun.li@intel.com, qi.z.zhang@intel.com, qiming.yang@intel.com,
 jerinjacobk@gmail.com, viacheslavo@nvidia.com, stephen@networkplumber.org,
 xuan.ding@intel.com, hpothula@marvell.com, yaqi.tang@intel.com,
 Yuan Wang, Wenxuan Wu
Subject: [PATCH v9 3/4] app/testpmd: add rxhdrs commands and parameters
Date: Mon, 10 Oct 2022 04:25:40 +0800
Message-Id: <20221009202541.352724-4-yuanx.wang@intel.com>
In-Reply-To: <20221009202541.352724-1-yuanx.wang@intel.com>
References: <20220812181552.2908067-1-yuanx.wang@intel.com>
 <20221009202541.352724-1-yuanx.wang@intel.com>

Add command line parameter: --rxhdrs=eth[,ipv4]

Set the protocol_hdr of segments to scatter packets on receiving if the
split feature is engaged. It affects only the queues configured with the
BUFFER_SPLIT flag.

Add interactive mode command:
testpmd> set rxhdrs eth,ipv4,ipv4-udp
(the protocol sequence must be valid)

The protocol split feature is off by default. To enable it:
1. Start testpmd with multiple mempools, e.g. --mbuf-size=2048,2048
2. Configure the Rx queues with the buffer split offload on.
3. Set the protocol types for buffer split, e.g. set rxhdrs eth,eth-ipv4

The default protocols known to testpmd are:
eth|ipv4|ipv6|ipv4-tcp|ipv6-tcp|ipv4-udp|ipv6-udp|ipv4-sctp|ipv6-sctp|
grenat|inner-eth|inner-ipv4|inner-ipv6|inner-ipv4-tcp|inner-ipv6-tcp|
inner-ipv4-udp|inner-ipv6-udp|inner-ipv4-sctp|inner-ipv6-sctp

Any of the protocols above can be configured in testpmd, but the
configuration is only applied when it is supported by the specific PMD.
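Putting the three steps together, a possible session looks like the
sketch below. It assumes a PMD that supports protocol-based buffer split
and that the buffer_split Rx offload is exposed through the
"port config ... rx_offload" command; adjust the EAL options to the setup.

    ./dpdk-testpmd -l 0-3 -n 4 -- -i --mbuf-size=2048,2048 --rxhdrs=eth,ipv4-udp
    testpmd> port stop all
    testpmd> port config 0 rx_offload buffer_split on
    testpmd> port start all
    testpmd> show config rxhdrs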
Signed-off-by: Yuan Wang
Signed-off-by: Xuan Ding
Signed-off-by: Wenxuan Wu
---
 app/test-pmd/cmdline.c                      | 152 +++++++++++++++++++-
 app/test-pmd/config.c                       | 108 ++++++++++++++
 app/test-pmd/parameters.c                   |  16 ++-
 app/test-pmd/testpmd.c                      |  11 +-
 app/test-pmd/testpmd.h                      |   6 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  19 ++-
 6 files changed, 303 insertions(+), 9 deletions(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 4565a3953a..57ac6828d0 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -181,7 +181,7 @@ static void cmd_help_long_parsed(void *parsed_result,
             "show (rxq|txq) info (port_id) (queue_id)\n"
             "    Display information for configured RX/TX queue.\n\n"
 
-            "show config (rxtx|cores|fwd|rxoffs|rxpkts|txpkts)\n"
+            "show config (rxtx|cores|fwd|rxoffs|rxpkts|rxhdrs|txpkts)\n"
             "    Display the given configuration.\n\n"
 
             "read rxd (port_id) (queue_id) (rxd_id)\n"
@@ -305,6 +305,17 @@ static void cmd_help_long_parsed(void *parsed_result,
             " Affects only the queues configured with split"
             " offloads.\n\n"
 
+            "set rxhdrs (eth[,ipv4])*\n"
+            "    Set the protocol hdr of each segment to scatter"
+            " packets on receiving if split feature is engaged."
+            " Affects only the queues configured with split"
+            " offloads.\n"
+            "    Supported values: eth|ipv4|ipv6|ipv4-tcp|ipv6-tcp|"
+            "ipv4-udp|ipv6-udp|ipv4-sctp|ipv6-sctp|"
+            "grenat|inner-eth|inner-ipv4|inner-ipv6|inner-ipv4-tcp|"
+            "inner-ipv6-tcp|inner-ipv4-udp|inner-ipv6-udp|"
+            "inner-ipv4-sctp|inner-ipv6-sctp\n\n"
+
             "set txpkts (x[,y]*)\n"
             "    Set the length of each segment of TXONLY"
             " and optionally CSUM packets.\n\n"
@@ -3366,6 +3377,94 @@ static cmdline_parse_inst_t cmd_stop = {
     },
 };
 
+static unsigned int
+get_ptype(char *value)
+{
+    uint32_t protocol;
+
+    if (!strcmp(value, "eth"))
+        protocol = RTE_PTYPE_L2_ETHER;
+    else if (!strcmp(value, "ipv4"))
+        protocol = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
+    else if (!strcmp(value, "ipv6"))
+        protocol = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
+    else if (!strcmp(value, "ipv4-tcp"))
+        protocol = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_TCP;
+    else if (!strcmp(value, "ipv4-udp"))
+        protocol = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP;
+    else if (!strcmp(value, "ipv4-sctp"))
+        protocol = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_SCTP;
+    else if (!strcmp(value, "ipv6-tcp"))
+        protocol = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_L4_TCP;
+    else if (!strcmp(value, "ipv6-udp"))
+        protocol = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_L4_UDP;
+    else if (!strcmp(value, "ipv6-sctp"))
+        protocol = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_L4_SCTP;
+    else if (!strcmp(value, "grenat"))
+        protocol = RTE_PTYPE_TUNNEL_GRENAT;
+    else if (!strcmp(value, "inner-eth"))
+        protocol = RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER;
+    else if (!strcmp(value, "inner-ipv4"))
+        protocol = RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+                RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN;
+    else if (!strcmp(value, "inner-ipv6"))
+        protocol = RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+                RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN;
+    else if (!strcmp(value, "inner-ipv4-tcp"))
+        protocol = RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+                RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_TCP;
+    else if (!strcmp(value, "inner-ipv4-udp"))
+        protocol = RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+                RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_UDP;
+    else if (!strcmp(value, "inner-ipv4-sctp"))
+        protocol = RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+                RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_SCTP;
+    else if (!strcmp(value, "inner-ipv6-tcp"))
+        protocol = RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+                RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_TCP;
+    else if (!strcmp(value, "inner-ipv6-udp"))
+        protocol = RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+                RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_UDP;
+    else if (!strcmp(value, "inner-ipv6-sctp"))
+        protocol = RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER |
+                RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_SCTP;
+    else {
+        fprintf(stderr, "Unsupported protocol: %s\n", value);
+        protocol = RTE_PTYPE_UNKNOWN;
+    }
+
+    return protocol;
+}
+/* *** SET RXHDRSLIST *** */
+
+unsigned int
+parse_hdrs_list(const char *str, const char *item_name, unsigned int max_items,
+        unsigned int *parsed_items, int check_hdrs_sequence)
+{
+    unsigned int nb_item;
+    char *cur;
+    char *tmp;
+    unsigned int cur_item, prev_items = 0;
+
+    nb_item = 0;
+    char *str2 = strdup(str);
+    cur = strtok_r(str2, ",", &tmp);
+    while (cur != NULL) {
+        cur_item = get_ptype(cur);
+        cur_item &= ~prev_items;
+        parsed_items[nb_item] = cur_item;
+        cur = strtok_r(NULL, ",", &tmp);
+        nb_item++;
+        prev_items |= cur_item;
+    }
+    if (nb_item > max_items)
+        fprintf(stderr, "Number of %s = %u > %u (maximum items)\n",
+            item_name, nb_item + 1, max_items);
+    free(str2);
+    if (!check_hdrs_sequence)
+        return nb_item;
+    return nb_item;
+}
 /* *** SET CORELIST and PORTLIST CONFIGURATION *** */
 
 unsigned int
@@ -3735,6 +3834,50 @@ static cmdline_parse_inst_t cmd_set_rxpkts = {
     },
 };
 
+/* *** SET SEGMENT HEADERS OF RX PACKETS SPLIT *** */
+struct cmd_set_rxhdrs_result {
+    cmdline_fixed_string_t set;
+    cmdline_fixed_string_t rxhdrs;
+    cmdline_fixed_string_t values;
+};
+
+static void
+cmd_set_rxhdrs_parsed(void *parsed_result,
+              __rte_unused struct cmdline *cl,
+              __rte_unused void *data)
+{
+    struct cmd_set_rxhdrs_result *res;
+    unsigned int seg_hdrs[MAX_SEGS_BUFFER_SPLIT];
+    unsigned int nb_segs;
+
+    res = parsed_result;
+    nb_segs = parse_hdrs_list(res->values, "segment hdrs",
+                  MAX_SEGS_BUFFER_SPLIT, seg_hdrs, 0);
+    if (nb_segs > 0)
+        set_rx_pkt_hdrs(seg_hdrs, nb_segs);
+    cmd_reconfig_device_queue(RTE_PORT_ALL, 0, 1);
+}
+static cmdline_parse_token_string_t cmd_set_rxhdrs_set =
+    TOKEN_STRING_INITIALIZER(struct cmd_set_rxhdrs_result,
+                 set, "set");
+static cmdline_parse_token_string_t cmd_set_rxhdrs_rxhdrs =
+    TOKEN_STRING_INITIALIZER(struct cmd_set_rxhdrs_result,
+                 rxhdrs, "rxhdrs");
+static cmdline_parse_token_string_t cmd_set_rxhdrs_values =
+    TOKEN_STRING_INITIALIZER(struct cmd_set_rxhdrs_result,
+                 values, NULL);
+
+static cmdline_parse_inst_t cmd_set_rxhdrs = {
+    .f = cmd_set_rxhdrs_parsed,
+    .data = NULL,
+    .help_str = "set rxhdrs <eth[,ipv4]*>",
+    .tokens = {
+        (void *)&cmd_set_rxhdrs_set,
+        (void *)&cmd_set_rxhdrs_rxhdrs,
+        (void *)&cmd_set_rxhdrs_values,
+        NULL,
+    },
+};
 /* *** SET SEGMENT LENGTHS OF TXONLY PACKETS *** */
 
 struct cmd_set_txpkts_result {
@@ -6487,6 +6630,8 @@ static void cmd_showcfg_parsed(void *parsed_result,
         show_rx_pkt_offsets();
     else if (!strcmp(res->what, "rxpkts"))
         show_rx_pkt_segments();
+    else if (!strcmp(res->what, "rxhdrs"))
+        show_rx_pkt_hdrs();
     else if (!strcmp(res->what, "txpkts"))
         show_tx_pkt_segments();
     else if (!strcmp(res->what, "txtimes"))
@@ -6499,12 +6644,12 @@ static cmdline_parse_token_string_t cmd_showcfg_port =
     TOKEN_STRING_INITIALIZER(struct cmd_showcfg_result, cfg, "config");
 static cmdline_parse_token_string_t cmd_showcfg_what =
     TOKEN_STRING_INITIALIZER(struct cmd_showcfg_result, what,
-                 "rxtx#cores#fwd#rxoffs#rxpkts#txpkts#txtimes");
+                 "rxtx#cores#fwd#rxoffs#rxpkts#rxhdrs#txpkts#txtimes");
 
 static cmdline_parse_inst_t cmd_showcfg = {
     .f = cmd_showcfg_parsed,
     .data = NULL,
-    .help_str = "show config rxtx|cores|fwd|rxoffs|rxpkts|txpkts|txtimes",
+    .help_str = "show config rxtx|cores|fwd|rxoffs|rxpkts|rxhdrs|txpkts|txtimes",
     .tokens = {
         (void *)&cmd_showcfg_show,
         (void *)&cmd_showcfg_port,
@@ -12455,6 +12600,7 @@ static cmdline_parse_ctx_t builtin_ctx[] = {
     (cmdline_parse_inst_t *)&cmd_set_log,
     (cmdline_parse_inst_t *)&cmd_set_rxoffs,
    (cmdline_parse_inst_t *)&cmd_set_rxpkts,
+    (cmdline_parse_inst_t *)&cmd_set_rxhdrs,
     (cmdline_parse_inst_t *)&cmd_set_txpkts,
     (cmdline_parse_inst_t *)&cmd_set_txsplit,
     (cmdline_parse_inst_t *)&cmd_set_txtimes,
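For reference, the value that get_ptype() builds for a name like
"inner-ipv4-udp" is simply an OR of RTE_PTYPE_* flags from
rte_mbuf_ptype.h. The small standalone sketch below (not part of the
diff; it assumes the DPDK headers are on the include path) prints the
value that ends up in rx_pkt_hdr_protos[]:

    #include <stdio.h>
    #include <stdint.h>
    #include <rte_mbuf_ptype.h>

    int main(void)
    {
            /* Same composition as get_ptype("inner-ipv4-udp") above. */
            uint32_t proto_hdr = RTE_PTYPE_TUNNEL_GRENAT |
                                 RTE_PTYPE_INNER_L2_ETHER |
                                 RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN |
                                 RTE_PTYPE_INNER_L4_UDP;

            /* This is the value testpmd stores for that segment. */
            printf("inner-ipv4-udp -> 0x%08x\n", proto_hdr);
            return 0;
    }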
"inner-ipv4-sctp"; + else if ((ptype & (RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_TCP)) == + (RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_TCP)) + return "inner-ipv6-tcp"; + else if ((ptype & (RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_UDP)) == + (RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_UDP)) + return "inner-ipv6-udp"; + else if ((ptype & (RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_SCTP)) == + (RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_SCTP)) + return "inner-ipv6-sctp"; + else if ((ptype & RTE_PTYPE_INNER_L4_TCP) == RTE_PTYPE_INNER_L4_TCP) + return "inner-tcp"; + else if ((ptype & RTE_PTYPE_INNER_L4_UDP) == RTE_PTYPE_INNER_L4_UDP) + return "inner-udp"; + else if ((ptype & RTE_PTYPE_INNER_L4_SCTP) == RTE_PTYPE_INNER_L4_SCTP) + return "inner-sctp"; + else if ((ptype & RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN) == + RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN) + return "inner-ipv4"; + else if ((ptype & RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN) == + RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN) + return "inner-ipv6"; + else if ((ptype & RTE_PTYPE_INNER_L2_ETHER) == RTE_PTYPE_INNER_L2_ETHER) + return "inner-eth"; + else if ((ptype & RTE_PTYPE_TUNNEL_GRENAT) == RTE_PTYPE_TUNNEL_GRENAT) + return "grenat"; + else + return "unsupported"; +} + +void +show_rx_pkt_hdrs(void) +{ + uint32_t i, n; + + n = rx_pkt_nb_segs; + printf("Number of segments: %u\n", n); + if (n) { + printf("Packet segs: "); + for (i = 0; i < n - 1; i++) + printf("%s, ", get_ptype_str(rx_pkt_hdr_protos[i])); + printf("payload\n"); + } +} + +void +set_rx_pkt_hdrs(unsigned int *seg_hdrs, unsigned int nb_segs) +{ + unsigned int i; + + if (nb_segs + 1 > MAX_SEGS_BUFFER_SPLIT) { + printf("nb segments per RX packets=%u > " + "MAX_SEGS_BUFFER_SPLIT - ignored\n", nb_segs + 1); + return; + } + + memset(rx_pkt_hdr_protos, 0, sizeof(rx_pkt_hdr_protos)); + + for (i = 0; i < nb_segs; i++) + rx_pkt_hdr_protos[i] = (uint32_t)seg_hdrs[i]; + /* + * We calculate the number of hdrs, but payload is not included, + * so rx_pkt_nb_segs would increase 1. 
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 14752f9571..ff760460ec 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -152,6 +152,7 @@ usage(char* progname)
            " Used mainly with PCAP drivers.\n");
     printf("  --rxoffs=X[,Y]*: set RX segment offsets for split.\n");
     printf("  --rxpkts=X[,Y]*: set RX segment sizes to split.\n");
+    printf("  --rxhdrs=eth[,ipv4]*: set RX segment protocols to split.\n");
     printf("  --txpkts=X[,Y]*: set TX segment sizes"
            " or total packet length.\n");
     printf("  --txonly-multi-flow: generate multiple flows in txonly mode\n");
@@ -660,6 +661,7 @@ launch_args_parse(int argc, char** argv)
        { "flow-isolate-all",           0, 0, 0 },
        { "rxoffs",                     1, 0, 0 },
        { "rxpkts",                     1, 0, 0 },
+       { "rxhdrs",                     1, 0, 0 },
        { "txpkts",                     1, 0, 0 },
        { "txonly-multi-flow",          0, 0, 0 },
        { "rxq-share",                  2, 0, 0 },
@@ -1254,7 +1256,6 @@ launch_args_parse(int argc, char** argv)
             if (!strcmp(lgopts[opt_idx].name, "rxpkts")) {
                 unsigned int seg_len[MAX_SEGS_BUFFER_SPLIT];
                 unsigned int nb_segs;
-
                 nb_segs = parse_item_list
                         (optarg, "rxpkt segments",
                          MAX_SEGS_BUFFER_SPLIT,
@@ -1264,6 +1265,19 @@ launch_args_parse(int argc, char** argv)
                 else
                     rte_exit(EXIT_FAILURE, "bad rxpkts\n");
             }
+            if (!strcmp(lgopts[opt_idx].name, "rxhdrs")) {
+                unsigned int seg_hdrs[MAX_SEGS_BUFFER_SPLIT];
+                unsigned int nb_segs;
+
+                nb_segs = parse_hdrs_list
+                        (optarg, "rxhdr segments",
+                         MAX_SEGS_BUFFER_SPLIT,
+                         seg_hdrs, 0);
+                if (nb_segs > 0)
+                    set_rx_pkt_hdrs(seg_hdrs, nb_segs);
+                else
+                    rte_exit(EXIT_FAILURE, "bad rxhdrs\n");
+            }
             if (!strcmp(lgopts[opt_idx].name, "txpkts")) {
                 unsigned seg_lengths[RTE_MAX_SEGS_PER_PKT];
                 unsigned int nb_segs;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index bb1c901742..5b0f0838dc 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -247,6 +247,7 @@ uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
 uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
 uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
 uint8_t  rx_pkt_nb_offs; /**< Number of specified offsets */
+uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
 
 /*
  * Configuration of packet segments used by the "txonly" processing engine.
@@ -2668,12 +2669,16 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
             mp_n = (i >= mbuf_data_size_n) ? mbuf_data_size_n - 1 : i;
             mpx = mbuf_pool_find(socket_id, mp_n);
             /* Handle zero as mbuf data buffer size. */
-            rx_seg->length = rx_pkt_seg_lengths[i] ?
-                       rx_pkt_seg_lengths[i] :
-                       mbuf_data_size[mp_n];
             rx_seg->offset = i < rx_pkt_nb_offs ?
                        rx_pkt_seg_offsets[i] : 0;
             rx_seg->mp = mpx ? mpx : mp;
+            if (rx_pkt_hdr_protos[i] != 0 && rx_pkt_seg_lengths[i] == 0) {
+                rx_seg->proto_hdr = rx_pkt_hdr_protos[i];
+            } else {
+                rx_seg->length = rx_pkt_seg_lengths[i] ?
+                         rx_pkt_seg_lengths[i] :
+                         mbuf_data_size[mp_n];
+            }
         }
         rx_conf->rx_nseg = rx_pkt_nb_segs;
         rx_conf->rx_seg = rx_useg;
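At the ethdev level, the configuration built by rx_queue_setup() above
amounts to roughly the following sketch (not part of the diff; it
assumes the proto_hdr field added to struct rte_eth_rxseg_split earlier
in this series, plus two pre-created mempools for the header and payload
segments):

    #include <string.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Split each packet into an eth+ipv4 header segment and a payload segment. */
    static int
    setup_proto_split_queue(uint16_t port_id, uint16_t queue_id, uint16_t nb_rxd,
                            unsigned int socket_id, struct rte_mempool *hdr_pool,
                            struct rte_mempool *pay_pool)
    {
            union rte_eth_rxseg rx_useg[2];
            struct rte_eth_rxconf rxconf;
            struct rte_eth_dev_info dev_info;
            int ret;

            ret = rte_eth_dev_info_get(port_id, &dev_info);
            if (ret != 0)
                    return ret;
            rxconf = dev_info.default_rxconf;
            /* The offload should also be enabled in rxmode.offloads at configure time. */
            rxconf.offloads |= RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;

            memset(rx_useg, 0, sizeof(rx_useg));
            /* First segment: protocol based, receives the eth + ipv4 headers. */
            rx_useg[0].split.mp = hdr_pool;
            rx_useg[0].split.proto_hdr = RTE_PTYPE_L2_ETHER |
                                         RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
            /* Second segment: the rest of the packet, length based. */
            rx_useg[1].split.mp = pay_pool;
            rx_useg[1].split.length = rte_pktmbuf_data_room_size(pay_pool) -
                                      RTE_PKTMBUF_HEADROOM;

            rxconf.rx_seg = rx_useg;
            rxconf.rx_nseg = 2;

            /* mb_pool must be NULL when rx_seg/rx_nseg are used. */
            return rte_eth_rx_queue_setup(port_id, queue_id, nb_rxd,
                                          socket_id, &rxconf, NULL);
    }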
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index ca2408cb6b..e65be323b8 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -580,6 +580,7 @@ extern uint32_t max_rx_pkt_len;
  * Configuration of packet segments used to scatter received packets
  * if some of split features is configured.
  */
+extern uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT];
 extern uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT];
 extern uint8_t  rx_pkt_nb_segs; /**< Number of segments to split */
 extern uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT];
@@ -851,6 +852,9 @@ inc_tx_burst_stats(struct fwd_stream *fs, uint16_t nb_tx)
 unsigned int parse_item_list(const char *str, const char *item_name,
             unsigned int max_items,
             unsigned int *parsed_items, int check_unique_values);
+unsigned int parse_hdrs_list(const char *str, const char *item_name,
+            unsigned int max_item,
+            unsigned int *parsed_items, int check_unique_values);
 void launch_args_parse(int argc, char** argv);
 void cmd_reconfig_device_queue(portid_t id, uint8_t dev, uint8_t queue);
 void cmdline_read_from_file(const char *filename);
@@ -1006,6 +1010,8 @@ void set_record_core_cycles(uint8_t on_off);
 void set_record_burst_stats(uint8_t on_off);
 void set_verbose_level(uint16_t vb_level);
 void set_rx_pkt_segments(unsigned int *seg_lengths, unsigned int nb_segs);
+void set_rx_pkt_hdrs(unsigned int *seg_protos, unsigned int nb_segs);
+void show_rx_pkt_hdrs(void);
 void show_rx_pkt_segments(void);
 void set_rx_pkt_offsets(unsigned int *seg_offsets, unsigned int nb_offs);
 void show_rx_pkt_offsets(void);
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 1cf814ae89..fdad100944 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -278,7 +278,7 @@ show config
 Displays the configuration of the application.
 The configuration comes from the command-line, the runtime or the application defaults::
 
-   testpmd> show config (rxtx|cores|fwd|rxoffs|rxpkts|txpkts|txtimes)
+   testpmd> show config (rxtx|cores|fwd|rxoffs|rxpkts|rxhdrs|txpkts|txtimes)
 
 The available information categories are:
 
@@ -290,7 +290,9 @@ The available information categories are:
 
 * ``rxoffs``: Packet offsets for RX split.
 
-* ``rxpkts``: Packets to RX split configuration.
+* ``rxpkts``: Length-based RX split configuration.
+
+* ``rxhdrs``: Protocol-based RX split configuration.
 
 * ``txpkts``: Packets to TX configuration.
 
@@ -799,6 +801,19 @@ mbuf for remaining segments will be allocated from the last valid pool).
 Where x[,y]* represents a CSV list of values, without white space. Zero value
 means to use the corresponding memory pool data buffer size.
 
+set rxhdrs
+~~~~~~~~~~
+
+Set the protocol headers of segments to scatter packets on receiving if the
+split feature is engaged. Affects only the queues configured with split
+offloads (currently only BUFFER_SPLIT is supported)::
+
+   testpmd> set rxhdrs (eth[,ipv4]*)
+
+Where eth[,ipv4]* represents a CSV list of protocol names, without white space.
+If the list of protocol headers is shorter than the number of segments, the
+remaining segments fall back to the length-based split configuration.
+
 set txpkts
 ~~~~~~~~~~
 
-- 
2.25.1