From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yuan Wang
To: dev@dpdk.org, Aman Singh, Yuying Zhang
Cc: thomas@monjalon.net, andrew.rybchenko@oktetlabs.ru, ferruh.yigit@xilinx.com, mdr@ashroe.eu, xiaoyun.li@intel.com, qi.z.zhang@intel.com, qiming.yang@intel.com, jerinjacobk@gmail.com, viacheslavo@nvidia.com, stephen@networkplumber.org, xuan.ding@intel.com, hpothula@marvell.com, yaqi.tang@intel.com, Yuan Wang, Wenxuan Wu
Subject: [PATCH v4 3/4] app/testpmd: add rxhdrs commands and parameters
Date: Tue, 20 Sep 2022 19:12:42 +0800
Message-Id: <20220920111243.1290998-4-yuanx.wang@intel.com>
In-Reply-To: <20220920111243.1290998-1-yuanx.wang@intel.com>
References: <20220812181552.2908067-1-yuanx.wang@intel.com> <20220920111243.1290998-1-yuanx.wang@intel.com>

Add the command-line parameter:
  --rxhdrs=eth[,ipv4,udp]

It sets the protocol headers at which received packets are split into
segments when the buffer-split feature is engaged. It affects only the
queues configured with the BUFFER_SPLIT offload.

Add the interactive-mode command:
  testpmd> set rxhdrs eth,ipv4,udp
(the protocol sequence must be valid)

The protocol-based split feature is off by default. To enable it:
1. Start testpmd with multiple mempools, e.g. --mbuf-size=2048,2048.
2. Configure the Rx queues with the buffer-split Rx offload turned on.
3. Set the protocol types for buffer split, e.g. set rxhdrs eth,ipv4.
   (Protocols supported by testpmd: eth|ipv4|ipv6|tcp|udp|sctp|
   inner_eth|inner_ipv4|inner_ipv6|inner_tcp|inner_udp|inner_sctp)

All of the above protocols can be configured in testpmd, but the
configuration only takes effect when it is supported by the specific PMD.
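
For illustration, one possible end-to-end sequence is sketched below
(the EAL core list and port number are placeholders, and the
buffer_split Rx offload must be supported by the PMD under test):

  ./dpdk-testpmd -l 0-3 -n 4 -- -i --mbuf-size=2048,2048 \
          --rxhdrs=eth,ipv4,udp

or, equivalently, from the interactive prompt:

  testpmd> port stop all
  testpmd> port config 0 rx_offload buffer_split on
  testpmd> set rxhdrs eth,ipv4,udp
  testpmd> port start all
  testpmd> start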
Signed-off-by: Yuan Wang Signed-off-by: Xuan Ding Signed-off-by: Wenxuan Wu --- app/test-pmd/cmdline.c | 124 +++++++++++++++++++++++++++++++++++++- app/test-pmd/config.c | 70 +++++++++++++++++++++ app/test-pmd/parameters.c | 16 ++++- app/test-pmd/testpmd.c | 2 + app/test-pmd/testpmd.h | 6 ++ 5 files changed, 214 insertions(+), 4 deletions(-) diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index b4fe9dfb17..17392f1d2d 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -183,7 +183,7 @@ static void cmd_help_long_parsed(void *parsed_result, "show (rxq|txq) info (port_id) (queue_id)\n" " Display information for configured RX/TX queue.\n\n" - "show config (rxtx|cores|fwd|rxoffs|rxpkts|txpkts)\n" + "show config (rxtx|cores|fwd|rxoffs|rxpkts|rxhdrs|txpkts)\n" " Display the given configuration.\n\n" "read rxd (port_id) (queue_id) (rxd_id)\n" @@ -307,6 +307,15 @@ static void cmd_help_long_parsed(void *parsed_result, " Affects only the queues configured with split" " offloads.\n\n" + "set rxhdrs (eth[,ipv4])*\n" + " Set the protocol hdr of each segment to scatter" + " packets on receiving if split feature is engaged." + " Affects only the queues configured with split" + " offloads.\n" + " Supported values: eth|ipv4|ipv6|tcp|udp|sctp|" + "inner_eth|inner_ipv4|inner_ipv6|inner_tcp|inner_udp|" + "inner_sctp\n\n" + "set txpkts (x[,y]*)\n" " Set the length of each segment of TXONLY" " and optionally CSUM packets.\n\n" @@ -3456,6 +3465,68 @@ static cmdline_parse_inst_t cmd_stop = { }, }; +static unsigned int +get_ptype(char *value) +{ + uint32_t protocol; + + if (!strcmp(value, "eth")) + protocol = RTE_PTYPE_L2_ETHER; + else if (!strcmp(value, "ipv4")) + protocol = RTE_PTYPE_L3_IPV4; + else if (!strcmp(value, "ipv6")) + protocol = RTE_PTYPE_L3_IPV6; + else if (!strcmp(value, "tcp")) + protocol = RTE_PTYPE_L4_TCP; + else if (!strcmp(value, "udp")) + protocol = RTE_PTYPE_L4_UDP; + else if (!strcmp(value, "sctp")) + protocol = RTE_PTYPE_L4_SCTP; + else if (!strcmp(value, "inner_eth")) + protocol = RTE_PTYPE_INNER_L2_ETHER; + else if (!strcmp(value, "inner_ipv4")) + protocol = RTE_PTYPE_INNER_L3_IPV4; + else if (!strcmp(value, "inner_ipv6")) + protocol = RTE_PTYPE_INNER_L3_IPV6; + else if (!strcmp(value, "inner_tcp")) + protocol = RTE_PTYPE_INNER_L4_TCP; + else if (!strcmp(value, "inner_udp")) + protocol = RTE_PTYPE_INNER_L4_UDP; + else if (!strcmp(value, "inner_sctp")) + protocol = RTE_PTYPE_INNER_L4_SCTP; + else { + fprintf(stderr, "Unsupported protocol: %s\n", value); + protocol = RTE_PTYPE_UNKNOWN; + } + + return protocol; +} +/* *** SET RXHDRSLIST *** */ + +unsigned int +parse_hdrs_list(const char *str, const char *item_name, unsigned int max_items, + unsigned int *parsed_items, int check_hdrs_sequence) +{ + unsigned int nb_item; + char *cur; + char *tmp; + + nb_item = 0; + char *str2 = strdup(str); + cur = strtok_r(str2, ",", &tmp); + while (cur != NULL) { + parsed_items[nb_item] = get_ptype(cur); + cur = strtok_r(NULL, ",", &tmp); + nb_item++; + } + if (nb_item > max_items) + fprintf(stderr, "Number of %s = %u > %u (maximum items)\n", + item_name, nb_item + 1, max_items); + free(str2); + if (!check_hdrs_sequence) + return nb_item; + return nb_item; +} /* *** SET CORELIST and PORTLIST CONFIGURATION *** */ unsigned int @@ -3825,6 +3896,50 @@ static cmdline_parse_inst_t cmd_set_rxpkts = { }, }; +/* *** SET SEGMENT HEADERS OF RX PACKETS SPLIT *** */ +struct cmd_set_rxhdrs_result { + cmdline_fixed_string_t set; + cmdline_fixed_string_t rxhdrs; + cmdline_fixed_string_t values; 
+}; + +static void +cmd_set_rxhdrs_parsed(void *parsed_result, + __rte_unused struct cmdline *cl, + __rte_unused void *data) +{ + struct cmd_set_rxhdrs_result *res; + unsigned int seg_hdrs[MAX_SEGS_BUFFER_SPLIT]; + unsigned int nb_segs; + + res = parsed_result; + nb_segs = parse_hdrs_list(res->values, "segment hdrs", + MAX_SEGS_BUFFER_SPLIT, seg_hdrs, 0); + if (nb_segs > 0) + set_rx_pkt_hdrs(seg_hdrs, nb_segs); + cmd_reconfig_device_queue(RTE_PORT_ALL, 0, 1); +} +static cmdline_parse_token_string_t cmd_set_rxhdrs_set = + TOKEN_STRING_INITIALIZER(struct cmd_set_rxhdrs_result, + set, "set"); +static cmdline_parse_token_string_t cmd_set_rxhdrs_rxhdrs = + TOKEN_STRING_INITIALIZER(struct cmd_set_rxhdrs_result, + rxhdrs, "rxhdrs"); +static cmdline_parse_token_string_t cmd_set_rxhdrs_values = + TOKEN_STRING_INITIALIZER(struct cmd_set_rxhdrs_result, + values, NULL); + +static cmdline_parse_inst_t cmd_set_rxhdrs = { + .f = cmd_set_rxhdrs_parsed, + .data = NULL, + .help_str = "set rxhdrs ", + .tokens = { + (void *)&cmd_set_rxhdrs_set, + (void *)&cmd_set_rxhdrs_rxhdrs, + (void *)&cmd_set_rxhdrs_values, + NULL, + }, +}; /* *** SET SEGMENT LENGTHS OF TXONLY PACKETS *** */ struct cmd_set_txpkts_result { @@ -6918,6 +7033,8 @@ static void cmd_showcfg_parsed(void *parsed_result, show_rx_pkt_offsets(); else if (!strcmp(res->what, "rxpkts")) show_rx_pkt_segments(); + else if (!strcmp(res->what, "rxhdrs")) + show_rx_pkt_hdrs(); else if (!strcmp(res->what, "txpkts")) show_tx_pkt_segments(); else if (!strcmp(res->what, "txtimes")) @@ -6930,12 +7047,12 @@ static cmdline_parse_token_string_t cmd_showcfg_port = TOKEN_STRING_INITIALIZER(struct cmd_showcfg_result, cfg, "config"); static cmdline_parse_token_string_t cmd_showcfg_what = TOKEN_STRING_INITIALIZER(struct cmd_showcfg_result, what, - "rxtx#cores#fwd#rxoffs#rxpkts#txpkts#txtimes"); + "rxtx#cores#fwd#rxoffs#rxpkts#rxhdrs#txpkts#txtimes"); static cmdline_parse_inst_t cmd_showcfg = { .f = cmd_showcfg_parsed, .data = NULL, - .help_str = "show config rxtx|cores|fwd|rxoffs|rxpkts|txpkts|txtimes", + .help_str = "show config rxtx|cores|fwd|rxoffs|rxpkts|rxhdrs|txpkts|txtimes", .tokens = { (void *)&cmd_showcfg_show, (void *)&cmd_showcfg_port, @@ -14166,6 +14283,7 @@ static cmdline_parse_ctx_t builtin_ctx[] = { (cmdline_parse_inst_t *)&cmd_set_log, (cmdline_parse_inst_t *)&cmd_set_rxoffs, (cmdline_parse_inst_t *)&cmd_set_rxpkts, + (cmdline_parse_inst_t *)&cmd_set_rxhdrs, (cmdline_parse_inst_t *)&cmd_set_txpkts, (cmdline_parse_inst_t *)&cmd_set_txsplit, (cmdline_parse_inst_t *)&cmd_set_txtimes, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index a2939867c4..7380ba9c25 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -4940,6 +4940,76 @@ show_rx_pkt_segments(void) } } +static const char *get_ptype_str(uint32_t ptype) +{ + switch (ptype) { + case RTE_PTYPE_L2_ETHER: + return "eth"; + case RTE_PTYPE_L3_IPV4: + return "ipv4"; + case RTE_PTYPE_L3_IPV6: + return "ipv6"; + case RTE_PTYPE_L4_TCP: + return "tcp"; + case RTE_PTYPE_L4_UDP: + return "udp"; + case RTE_PTYPE_L4_SCTP: + return "sctp"; + case RTE_PTYPE_INNER_L2_ETHER: + return "inner_eth"; + case RTE_PTYPE_INNER_L3_IPV4: + return "inner_ipv4"; + case RTE_PTYPE_INNER_L3_IPV6: + return "inner_ipv6"; + case RTE_PTYPE_INNER_L4_TCP: + return "inner_tcp"; + case RTE_PTYPE_INNER_L4_UDP: + return "inner_udp"; + case RTE_PTYPE_INNER_L4_SCTP: + return "inner_sctp"; + default: + return "unsupported"; + } +} + +void +show_rx_pkt_hdrs(void) +{ + uint32_t i, n; + + n = rx_pkt_nb_segs; + printf("Number 
of segments: %u\n", n); + if (n) { + printf("Packet segs: "); + for (i = 0; i < n - 1; i++) + printf("%s, ", get_ptype_str(rx_pkt_hdr_protos[i])); + printf("payload\n"); + } +} + +void +set_rx_pkt_hdrs(unsigned int *seg_hdrs, unsigned int nb_segs) +{ + unsigned int i; + + if (nb_segs + 1 > MAX_SEGS_BUFFER_SPLIT) { + printf("nb segments per RX packets=%u > " + "MAX_SEGS_BUFFER_SPLIT - ignored\n", nb_segs + 1); + return; + } + + memset(rx_pkt_hdr_protos, 0, sizeof(rx_pkt_hdr_protos)); + + for (i = 0; i < nb_segs; i++) + rx_pkt_hdr_protos[i] = (uint32_t)seg_hdrs[i]; + rx_pkt_hdr_protos[nb_segs] = RTE_PTYPE_ALL_MASK; + /* + * We calculate the number of hdrs, but payload is not included, + * so rx_pkt_nb_segs would increase 1. + */ + rx_pkt_nb_segs = nb_segs + 1; +} + void set_rx_pkt_segments(unsigned int *seg_lengths, unsigned int nb_segs) { diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c index e3c9757f3f..87aa0a6589 100644 --- a/app/test-pmd/parameters.c +++ b/app/test-pmd/parameters.c @@ -164,6 +164,7 @@ usage(char* progname) " Used mainly with PCAP drivers.\n"); printf(" --rxoffs=X[,Y]*: set RX segment offsets for split.\n"); printf(" --rxpkts=X[,Y]*: set RX segment sizes to split.\n"); + printf(" --rxhdrs=eth[,ipv4]*: set RX segment protocol to split.\n"); printf(" --txpkts=X[,Y]*: set TX segment sizes" " or total packet length.\n"); printf(" --txonly-multi-flow: generate multiple flows in txonly mode\n"); @@ -676,6 +677,7 @@ launch_args_parse(int argc, char** argv) { "flow-isolate-all", 0, 0, 0 }, { "rxoffs", 1, 0, 0 }, { "rxpkts", 1, 0, 0 }, + { "rxhdrs", 1, 0, 0 }, { "txpkts", 1, 0, 0 }, { "txonly-multi-flow", 0, 0, 0 }, { "rxq-share", 2, 0, 0 }, @@ -1331,7 +1333,6 @@ launch_args_parse(int argc, char** argv) if (!strcmp(lgopts[opt_idx].name, "rxpkts")) { unsigned int seg_len[MAX_SEGS_BUFFER_SPLIT]; unsigned int nb_segs; - nb_segs = parse_item_list (optarg, "rxpkt segments", MAX_SEGS_BUFFER_SPLIT, @@ -1341,6 +1342,19 @@ launch_args_parse(int argc, char** argv) else rte_exit(EXIT_FAILURE, "bad rxpkts\n"); } + if (!strcmp(lgopts[opt_idx].name, "rxhdrs")) { + unsigned int seg_hdrs[MAX_SEGS_BUFFER_SPLIT]; + unsigned int nb_segs; + + nb_segs = parse_hdrs_list + (optarg, "rxpkt segments", + MAX_SEGS_BUFFER_SPLIT, + seg_hdrs, 0); + if (nb_segs > 0) + set_rx_pkt_hdrs(seg_hdrs, nb_segs); + else + rte_exit(EXIT_FAILURE, "bad rxpkts\n"); + } if (!strcmp(lgopts[opt_idx].name, "txpkts")) { unsigned seg_lengths[RTE_MAX_SEGS_PER_PKT]; unsigned int nb_segs; diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index addcbcac85..157db60cd4 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -247,6 +247,7 @@ uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT]; uint8_t rx_pkt_nb_segs; /**< Number of segments to split */ uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT]; uint8_t rx_pkt_nb_offs; /**< Number of specified offsets */ +uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT]; /* * Configuration of packet segments used by the "txonly" processing engine. @@ -2685,6 +2686,7 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, rx_seg->offset = i < rx_pkt_nb_offs ? rx_pkt_seg_offsets[i] : 0; rx_seg->mp = mpx ? 
mpx : mp; + rx_seg->proto_hdr = rx_pkt_hdr_protos[i]; } rx_conf->rx_nseg = rx_pkt_nb_segs; rx_conf->rx_seg = rx_useg; diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index fb2f5195d3..de992c9416 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -562,6 +562,7 @@ extern uint32_t max_rx_pkt_len; * Configuration of packet segments used to scatter received packets * if some of split features is configured. */ +extern uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT]; extern uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT]; extern uint8_t rx_pkt_nb_segs; /**< Number of segments to split */ extern uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT]; @@ -892,6 +893,9 @@ inc_tx_burst_stats(struct fwd_stream *fs, uint16_t nb_tx) unsigned int parse_item_list(const char *str, const char *item_name, unsigned int max_items, unsigned int *parsed_items, int check_unique_values); +unsigned int parse_hdrs_list(const char *str, const char *item_name, + unsigned int max_item, + unsigned int *parsed_items, int check_unique_values); void launch_args_parse(int argc, char** argv); void cmdline_read_from_file(const char *filename); int init_cmdline(void); @@ -1049,6 +1053,8 @@ void set_record_core_cycles(uint8_t on_off); void set_record_burst_stats(uint8_t on_off); void set_verbose_level(uint16_t vb_level); void set_rx_pkt_segments(unsigned int *seg_lengths, unsigned int nb_segs); +void set_rx_pkt_hdrs(unsigned int *seg_protos, unsigned int nb_segs); +void show_rx_pkt_hdrs(void); void show_rx_pkt_segments(void); void set_rx_pkt_offsets(unsigned int *seg_offsets, unsigned int nb_offs); void show_rx_pkt_offsets(void); -- 2.25.1