In order that all queues of pools can receive packets,
add enable-rss argument to change rss configuration.

Fixes: 6bb97df521aa ("examples/vmdq: new app")
Cc: stable@dpdk.org

Signed-off-by: Junyu Jiang <junyux.jiang@intel.com>
---
 examples/vmdq/main.c | 39 ++++++++++++++++++++++++++++++++++-----
 1 file changed, 34 insertions(+), 5 deletions(-)

diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
index 011110920..98032e6a3 100644
--- a/examples/vmdq/main.c
+++ b/examples/vmdq/main.c
@@ -59,6 +59,7 @@ static uint32_t enabled_port_mask;
 /* number of pools (if user does not specify any, 8 by default */
 static uint32_t num_queues = 8;
 static uint32_t num_pools = 8;
+static uint8_t rss_enable;
 
 /* empty vmdq configuration structure. Filled in programatically */
 static const struct rte_eth_conf vmdq_conf_default = {
@@ -143,6 +144,13 @@ get_eth_conf(struct rte_eth_conf *eth_conf, uint32_t num_pools)
 	(void)(rte_memcpy(eth_conf, &vmdq_conf_default, sizeof(*eth_conf)));
 	(void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_rx_conf, &conf,
 		   sizeof(eth_conf->rx_adv_conf.vmdq_rx_conf)));
+	if (rss_enable) {
+		eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+		eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
+							ETH_RSS_UDP |
+							ETH_RSS_TCP |
+							ETH_RSS_SCTP;
+	}
 	return 0;
 }
 
@@ -164,6 +172,7 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 	uint16_t q;
 	uint16_t queues_per_pool;
 	uint32_t max_nb_pools;
+	uint64_t rss_hf_tmp;
 
 	/*
 	 * The max pool number from dev_info will be used to validate the pool
@@ -209,6 +218,17 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 	if (!rte_eth_dev_is_valid_port(port))
 		return -1;
 
+	rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
+	port_conf.rx_adv_conf.rss_conf.rss_hf &=
+		dev_info.flow_type_rss_offloads;
+	if (port_conf.rx_adv_conf.rss_conf.rss_hf != rss_hf_tmp) {
+		printf("Port %u modified RSS hash function based on hardware support,"
+			"requested:%#"PRIx64" configured:%#"PRIx64"\n",
+			port,
+			rss_hf_tmp,
+			port_conf.rx_adv_conf.rss_conf.rss_hf);
+	}
+
 	/*
 	 * Though in this example, we only receive packets from the first queue
 	 * of each pool and send packets through first rte_lcore_count() tx
@@ -363,7 +383,8 @@ static void
 vmdq_usage(const char *prgname)
 {
 	printf("%s [EAL options] -- -p PORTMASK]\n"
-	"  --nb-pools NP: number of pools\n",
+	"  --nb-pools NP: number of pools\n"
+	"  --enable-rss: enable RSS (disabled by default)\n",
 	prgname);
 }
 
@@ -377,6 +398,7 @@ vmdq_parse_args(int argc, char **argv)
 	const char *prgname = argv[0];
 	static struct option long_option[] = {
 		{"nb-pools", required_argument, NULL, 0},
+		{"enable-rss", 0, NULL, 0},
 		{NULL, 0, 0, 0}
 	};
 
@@ -394,11 +416,18 @@ vmdq_parse_args(int argc, char **argv)
 			}
 			break;
 		case 0:
-			if (vmdq_parse_num_pools(optarg) == -1) {
-				printf("invalid number of pools\n");
-				vmdq_usage(prgname);
-				return -1;
+			if (!strcmp(long_option[option_index].name,
+					"nb-pools")) {
+				if (vmdq_parse_num_pools(optarg) == -1) {
+					printf("invalid number of pools\n");
+					vmdq_usage(prgname);
+					return -1;
+				}
 			}
+
+			if (!strcmp(long_option[option_index].name,
+					"enable-rss"))
+				rss_enable = 1;
 			break;
 		default:
-- 
2.17.1
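The port_init() hunk above takes the requested rss_conf.rss_hf, ANDs it with the dev_info.flow_type_rss_offloads capability mask, and warns when anything was dropped. A standalone sketch of that clamping pattern; the HASH_* bits below are made-up stand-ins for the real ETH_RSS_* flags, not the DPDK definitions:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical hash-type bits, standing in for DPDK's ETH_RSS_* flags. */
#define HASH_IP   (1ULL << 0)
#define HASH_UDP  (1ULL << 1)
#define HASH_TCP  (1ULL << 2)
#define HASH_SCTP (1ULL << 3)

/* Mask the requested hash types down to what the device reports as
 * supported, printing a notice (as the patch does) when the configured
 * value differs from the requested one. Returns the value actually
 * programmed. */
static uint64_t clamp_rss_hf(uint64_t requested, uint64_t supported)
{
	uint64_t configured = requested & supported;

	if (configured != requested)
		printf("requested:%#llx configured:%#llx\n",
		       (unsigned long long)requested,
		       (unsigned long long)configured);
	return configured;
}
```

testpmd applies the same mask, which is how the later review discussion resolves the "hash function vs. offload type" question.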
> -----Original Message-----
> From: Jiang, JunyuX
> Sent: Tuesday, March 3, 2020 17:16
> To: dev@dpdk.org
> Cc: Li, Xiaoyun <xiaoyun.li@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Jiang, JunyuX <junyux.jiang@intel.com>;
> stable@dpdk.org
> Subject: [PATCH] examples/vmdq: fix RSS configuration
>
> In order that all queues of pools can receive packets, add enable-rss
> argument to change rss configuration.
>
> Fixes: 6bb97df521aa ("examples/vmdq: new app")
> Cc: stable@dpdk.org
>
> Signed-off-by: Junyu Jiang <junyux.jiang@intel.com>
> ---
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
On 3/3/2020 9:16 AM, Junyu Jiang wrote:
> In order that all queues of pools can receive packets,
> add enable-rss argument to change rss configuration.
>
> Fixes: 6bb97df521aa ("examples/vmdq: new app")
> Cc: stable@dpdk.org
>
> Signed-off-by: Junyu Jiang <junyux.jiang@intel.com>
Hi Junyu,
I was about to ask you to document the new 'enable-rss' argument in the sample
app documentation, but it seems there is no documentation for this sample app
at all, unless I am missing something.
Would it be possible to first add some documentation for it, and then update
that documentation with the new 'enable-rss' argument in this patch?
Thanks,
ferruh
This patch set fixed a bug of vmdq example,
and added a documentation for it.

*** BLURB HERE ***

Junyu Jiang (2):
  doc: add user guide for VMDq
  examples/vmdq: fix RSS configuration

 MAINTAINERS                                  |   1 +
 doc/guides/sample_app_ug/index.rst           |   1 +
 doc/guides/sample_app_ug/vmdq_forwarding.rst | 208 +++++++++++++++++++
 examples/vmdq/main.c                         |  39 +++-
 4 files changed, 244 insertions(+), 5 deletions(-)
 create mode 100644 doc/guides/sample_app_ug/vmdq_forwarding.rst

-- 
2.17.1
Currently, there is no documentation for the vmdq example;
this patch adds the user guide for vmdq.

Signed-off-by: Junyu Jiang <junyux.jiang@intel.com>
---
 MAINTAINERS                                  |   1 +
 doc/guides/sample_app_ug/index.rst           |   1 +
 doc/guides/sample_app_ug/vmdq_forwarding.rst | 208 +++++++++++++++++++
 3 files changed, 210 insertions(+)
 create mode 100644 doc/guides/sample_app_ug/vmdq_forwarding.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index c3785554f..1802356b0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1599,5 +1599,6 @@ M: Xiaoyun Li <xiaoyun.li@intel.com>
 F: examples/tep_termination/
 F: examples/vmdq/
+F: doc/guides/sample_app_ug/vmdq_forwarding.rst
 F: examples/vmdq_dcb/
 F: doc/guides/sample_app_ug/vmdq_dcb_forwarding.rst

diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
index ac3445147..4b16dd161 100644
--- a/doc/guides/sample_app_ug/index.rst
+++ b/doc/guides/sample_app_ug/index.rst
@@ -40,6 +40,7 @@ Sample Applications User Guides
     timer
     packet_ordering
     vmdq_dcb_forwarding
+    vmdq_forwarding
     vhost
     vhost_blk
     vhost_crypto

diff --git a/doc/guides/sample_app_ug/vmdq_forwarding.rst b/doc/guides/sample_app_ug/vmdq_forwarding.rst
new file mode 100644
index 000000000..df23043d6
--- /dev/null
+++ b/doc/guides/sample_app_ug/vmdq_forwarding.rst
@@ -0,0 +1,208 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2020 Intel Corporation.
+
+VMDQ Forwarding Sample Application
+==================================
+
+The VMDQ Forwarding sample application is a simple example of packet processing using the DPDK.
+The application performs L2 forwarding using VMDQ to divide the incoming traffic into queues.
+The traffic splitting is performed in hardware by the VMDQ feature of the Intel® 82599 and X710/XL710 Ethernet Controllers.
+
+Overview
+--------
+
+This sample application can be used as a starting point for developing a new application that is based on the DPDK and
+uses VMDQ for traffic partitioning.
+
+VMDQ filters split the incoming packets up into different "pools" - each with its own set of RX queues - based upon
+the MAC address and VLAN ID within the VLAN tag of the packet.
+
+All traffic is read from a single incoming port and output on another port, without any processing being performed.
+With Intel® 82599 NIC, for example, the traffic is split into 128 queues on input, where each thread of the application reads from
+multiple queues. When run with 8 threads, that is, with the -c FF option, each thread receives and forwards packets from 16 queues.
+
+As supplied, the sample application configures the VMDQ feature to have 32 pools with 4 queues each.
+The Intel® 82599 10 Gigabit Ethernet Controller NIC also supports the splitting of traffic into 16 pools of 2 queues.
+While the Intel® X710 or XL710 Ethernet Controller NICs support many configurations of VMDQ pools of 4 or 8 queues each.
+And queues numbers for each VMDQ pool can be changed by setting CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM
+in config/common_* file.
+The nb-pools parameter can be passed on the command line, after the EAL parameters:
+
+.. code-block:: console
+
+    ./build/vmdq_app [EAL options] -- -p PORTMASK --nb-pools NP
+
+where, NP can be 8, 16 or 32.
+
+In Linux* user space, the application can display statistics with the number of packets received on each queue.
+To have the application display the statistics, send a SIGHUP signal to the running application process.
+
+The VMDQ Forwarding sample application is in many ways simpler than the L2 Forwarding application
+(see :doc:`l2_forward_real_virtual`)
+as it performs unidirectional L2 forwarding of packets from one port to a second port.
+No command-line options are taken by this application apart from the standard EAL command-line options.
+
+Compiling the Application
+-------------------------
+
+To compile the sample application see :doc:`compiling`.
+
+The application is located in the ``vmdq`` sub-directory.
+
+Running the Application
+-----------------------
+
+To run the example in a linux environment:
+
+.. code-block:: console
+
+    user@target:~$ ./build/vmdq_app -l 0-3 -n 4 -- -p 0x3 --nb-pools 16
+
+Refer to the *DPDK Getting Started Guide* for general information on running applications and
+the Environment Abstraction Layer (EAL) options.
+
+Explanation
+-----------
+
+The following sections provide some explanation of the code.
+
+Initialization
+~~~~~~~~~~~~~~
+
+The EAL, driver and PCI configuration is performed largely as in the L2 Forwarding sample application,
+as is the creation of the mbuf pool.
+See :doc:`l2_forward_real_virtual`.
+Where this example application differs is in the configuration of the NIC port for RX.
+
+The VMDQ hardware feature is configured at port initialization time by setting the appropriate values in the
+rte_eth_conf structure passed to the rte_eth_dev_configure() API.
+Initially in the application,
+a default structure is provided for VMDQ configuration to be filled in later by the application.
+
+.. code-block:: c
+
+    /* empty vmdq configuration structure. Filled in programatically */
+    static const struct rte_eth_conf vmdq_conf_default = {
+        .rxmode = {
+            .mq_mode        = ETH_MQ_RX_VMDQ_ONLY,
+            .split_hdr_size = 0,
+        },
+
+        .txmode = {
+            .mq_mode = ETH_MQ_TX_NONE,
+        },
+        .rx_adv_conf = {
+            /*
+             * should be overridden separately in code with
+             * appropriate values
+             */
+            .vmdq_rx_conf = {
+                .nb_queue_pools = ETH_8_POOLS,
+                .enable_default_pool = 0,
+                .default_pool = 0,
+                .nb_pool_maps = 0,
+                .pool_map = {{0, 0},},
+            },
+        },
+    };
+
+The get_eth_conf() function fills in an rte_eth_conf structure with the appropriate values,
+based on the global vlan_tags array.
+For the VLAN IDs, each one can be allocated to possibly multiple pools of queues.
+For destination MAC, each VMDQ pool will be assigned with a MAC address. In this sample, each VMDQ pool
+is assigned to the MAC like 52:54:00:12:<port_id>:<pool_id>, that is,
+the MAC of VMDQ pool 2 on port 1 is 52:54:00:12:01:02.
+
+.. code-block:: c
+
+    const uint16_t vlan_tags[] = {
+        0,  1,  2,  3,  4,  5,  6,  7,
+        8,  9, 10, 11, 12, 13, 14, 15,
+        16, 17, 18, 19, 20, 21, 22, 23,
+        24, 25, 26, 27, 28, 29, 30, 31,
+        32, 33, 34, 35, 36, 37, 38, 39,
+        40, 41, 42, 43, 44, 45, 46, 47,
+        48, 49, 50, 51, 52, 53, 54, 55,
+        56, 57, 58, 59, 60, 61, 62, 63,
+    };
+
+    /* pool mac addr template, pool mac addr is like: 52 54 00 12 port# pool# */
+    static struct rte_ether_addr pool_addr_template = {
+        .addr_bytes = {0x52, 0x54, 0x00, 0x12, 0x00, 0x00}
+    };
+
+    /*
+     * Builds up the correct configuration for vmdq based on the vlan tags array
+     * given above, and determine the queue number and pool map number according to
+     * valid pool number
+     */
+    static inline int
+    get_eth_conf(struct rte_eth_conf *eth_conf, uint32_t num_pools)
+    {
+        struct rte_eth_vmdq_rx_conf conf;
+        unsigned i;
+
+        conf.nb_queue_pools = (enum rte_eth_nb_pools)num_pools;
+        conf.nb_pool_maps = num_pools;
+        conf.enable_default_pool = 0;
+        conf.default_pool = 0; /* set explicit value, even if not used */
+
+        for (i = 0; i < conf.nb_pool_maps; i++) {
+            conf.pool_map[i].vlan_id = vlan_tags[i];
+            conf.pool_map[i].pools = (1UL << (i % num_pools));
+        }
+
+        (void)(rte_memcpy(eth_conf, &vmdq_conf_default, sizeof(*eth_conf)));
+        (void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_rx_conf, &conf,
+               sizeof(eth_conf->rx_adv_conf.vmdq_rx_conf)));
+        return 0;
+    }
+
+    ......
+
+    /*
+     * Set mac for each pool.
+     * There is no default mac for the pools in i40.
+     * Removes this after i40e fixes this issue.
+     */
+    for (q = 0; q < num_pools; q++) {
+        struct rte_ether_addr mac;
+        mac = pool_addr_template;
+        mac.addr_bytes[4] = port;
+        mac.addr_bytes[5] = q;
+        printf("Port %u vmdq pool %u set mac %02x:%02x:%02x:%02x:%02x:%02x\n",
+            port, q,
+            mac.addr_bytes[0], mac.addr_bytes[1],
+            mac.addr_bytes[2], mac.addr_bytes[3],
+            mac.addr_bytes[4], mac.addr_bytes[5]);
+        retval = rte_eth_dev_mac_addr_add(port, &mac,
+                q + vmdq_pool_base);
+        if (retval) {
+            printf("mac addr add failed at pool %d\n", q);
+            return retval;
+        }
+    }
+
+Once the network port has been initialized using the correct VMDQ values,
+the initialization of the port's RX and TX hardware rings is performed similarly to that
+in the L2 Forwarding sample application.
+See :doc:`l2_forward_real_virtual` for more information.
+
+Statistics Display
+~~~~~~~~~~~~~~~~~~
+
+When run in a linux environment,
+the VMDQ Forwarding sample application can display statistics showing the number of packets read from each RX queue.
+This is provided by way of a signal handler for the SIGHUP signal,
+which simply prints to standard output the packet counts in grid form.
+Each row of the output is a single pool with the columns being the queue number within that pool.
+
+To generate the statistics output, use the following command:
+
+.. code-block:: console
+
+    user@host$ sudo killall -HUP vmdq_app
+
+Please note that the statistics output will appear on the terminal where the vmdq_app is running,
+rather than the terminal from which the HUP signal was sent.
-- 
2.17.1
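The guide above describes the per-pool MAC scheme 52:54:00:12:<port_id>:<pool_id>, filled in from a template before rte_eth_dev_mac_addr_add() is called for each pool. A small standalone sketch of that template fill-in; pool_mac is a hypothetical helper written for illustration, not a DPDK function:

```c
#include <stdint.h>

/* Build a per-pool MAC from the 52:54:00:12:<port>:<pool> template
 * described in the guide: the first four bytes are fixed and the last
 * two carry the port and pool indices. */
static void pool_mac(uint8_t port, uint8_t pool, uint8_t mac[6])
{
	const uint8_t template[6] = {0x52, 0x54, 0x00, 0x12, 0x00, 0x00};
	int i;

	for (i = 0; i < 4; i++)
		mac[i] = template[i];
	mac[4] = port;  /* port index in byte 4 */
	mac[5] = pool;  /* pool index in byte 5 */
}
```

With this scheme, VMDQ pool 2 on port 1 gets 52:54:00:12:01:02, matching the example in the guide.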
In order that all queues of pools can receive packets,
add enable-rss argument to change rss configuration.

Fixes: 6bb97df521aa ("examples/vmdq: new app")
Cc: stable@dpdk.org

Signed-off-by: Junyu Jiang <junyux.jiang@intel.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
---
 doc/guides/sample_app_ug/vmdq_forwarding.rst |  6 +--
 examples/vmdq/main.c                         | 39 +++++++++++++++++---
 2 files changed, 37 insertions(+), 8 deletions(-)

diff --git a/doc/guides/sample_app_ug/vmdq_forwarding.rst b/doc/guides/sample_app_ug/vmdq_forwarding.rst
index df23043d6..658d6742d 100644
--- a/doc/guides/sample_app_ug/vmdq_forwarding.rst
+++ b/doc/guides/sample_app_ug/vmdq_forwarding.rst
@@ -26,13 +26,13 @@ The Intel® 82599 10 Gigabit Ethernet Controller NIC also supports the splitting
 While the Intel® X710 or XL710 Ethernet Controller NICs support many configurations of VMDQ pools of 4 or 8 queues each.
 And queues numbers for each VMDQ pool can be changed by setting CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM
 in config/common_* file.
-The nb-pools parameter can be passed on the command line, after the EAL parameters:
+The nb-pools and enable-rss parameters can be passed on the command line, after the EAL parameters:
 
 .. code-block:: console
 
-    ./build/vmdq_app [EAL options] -- -p PORTMASK --nb-pools NP
+    ./build/vmdq_app [EAL options] -- -p PORTMASK --nb-pools NP --enable-rss
 
-where, NP can be 8, 16 or 32.
+where, NP can be 8, 16 or 32, rss is disabled by default.
 
 In Linux* user space, the application can display statistics with the number of packets received on each queue.
 To have the application display the statistics, send a SIGHUP signal to the running application process.
diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
index 011110920..98032e6a3 100644
--- a/examples/vmdq/main.c
+++ b/examples/vmdq/main.c
@@ -59,6 +59,7 @@ static uint32_t enabled_port_mask;
 /* number of pools (if user does not specify any, 8 by default */
 static uint32_t num_queues = 8;
 static uint32_t num_pools = 8;
+static uint8_t rss_enable;
 
 /* empty vmdq configuration structure. Filled in programatically */
 static const struct rte_eth_conf vmdq_conf_default = {
@@ -143,6 +144,13 @@ get_eth_conf(struct rte_eth_conf *eth_conf, uint32_t num_pools)
 	(void)(rte_memcpy(eth_conf, &vmdq_conf_default, sizeof(*eth_conf)));
 	(void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_rx_conf, &conf,
 		   sizeof(eth_conf->rx_adv_conf.vmdq_rx_conf)));
+	if (rss_enable) {
+		eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+		eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
+							ETH_RSS_UDP |
+							ETH_RSS_TCP |
+							ETH_RSS_SCTP;
+	}
 	return 0;
 }
 
@@ -164,6 +172,7 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 	uint16_t q;
 	uint16_t queues_per_pool;
 	uint32_t max_nb_pools;
+	uint64_t rss_hf_tmp;
 
 	/*
 	 * The max pool number from dev_info will be used to validate the pool
@@ -209,6 +218,17 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 	if (!rte_eth_dev_is_valid_port(port))
 		return -1;
 
+	rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
+	port_conf.rx_adv_conf.rss_conf.rss_hf &=
+		dev_info.flow_type_rss_offloads;
+	if (port_conf.rx_adv_conf.rss_conf.rss_hf != rss_hf_tmp) {
+		printf("Port %u modified RSS hash function based on hardware support,"
+			"requested:%#"PRIx64" configured:%#"PRIx64"\n",
+			port,
+			rss_hf_tmp,
+			port_conf.rx_adv_conf.rss_conf.rss_hf);
+	}
+
 	/*
 	 * Though in this example, we only receive packets from the first queue
 	 * of each pool and send packets through first rte_lcore_count() tx
@@ -363,7 +383,8 @@ static void
 vmdq_usage(const char *prgname)
 {
 	printf("%s [EAL options] -- -p PORTMASK]\n"
-	"  --nb-pools NP: number of pools\n",
+	"  --nb-pools NP: number of pools\n"
+	"  --enable-rss: enable RSS (disabled by default)\n",
 	prgname);
 }
 
@@ -377,6 +398,7 @@ vmdq_parse_args(int argc, char **argv)
 	const char *prgname = argv[0];
 	static struct option long_option[] = {
 		{"nb-pools", required_argument, NULL, 0},
+		{"enable-rss", 0, NULL, 0},
 		{NULL, 0, 0, 0}
 	};
 
@@ -394,11 +416,18 @@ vmdq_parse_args(int argc, char **argv)
 			}
 			break;
 		case 0:
-			if (vmdq_parse_num_pools(optarg) == -1) {
-				printf("invalid number of pools\n");
-				vmdq_usage(prgname);
-				return -1;
+			if (!strcmp(long_option[option_index].name,
+					"nb-pools")) {
+				if (vmdq_parse_num_pools(optarg) == -1) {
+					printf("invalid number of pools\n");
+					vmdq_usage(prgname);
+					return -1;
+				}
 			}
+
+			if (!strcmp(long_option[option_index].name,
+					"enable-rss"))
+				rss_enable = 1;
 			break;
 		default:
-- 
2.17.1
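The parsing hunk above registers both long options with val 0, so the case 0 branch has to check long_option[option_index].name to tell them apart. A minimal, self-contained sketch of that getopt_long pattern; the option and variable names mirror the patch, but this is an illustration, not the vmdq code itself:

```c
#include <getopt.h>
#include <stdlib.h>
#include <string.h>

static int rss_on;
static int nb_pools;

/* Parse "--nb-pools N" and the valueless "--enable-rss" flag the same
 * way the patch does: both long options return 0 from getopt_long, so
 * the handler must compare long_option[option_index].name to decide
 * which option matched. Returns 0 on success, -1 on a bad option. */
static int parse(int argc, char **argv)
{
	static struct option long_option[] = {
		{"nb-pools", required_argument, NULL, 0},
		{"enable-rss", no_argument, NULL, 0},
		{NULL, 0, NULL, 0}
	};
	int opt, option_index;

	optind = 1; /* reset getopt state so parse() can be called again */
	while ((opt = getopt_long(argc, argv, "", long_option,
				  &option_index)) != -1) {
		if (opt != 0)
			return -1; /* unknown option */
		if (!strcmp(long_option[option_index].name, "nb-pools"))
			nb_pools = atoi(optarg);
		if (!strcmp(long_option[option_index].name, "enable-rss"))
			rss_on = 1;
	}
	return 0;
}
```

The patch's `{"enable-rss", 0, NULL, 0}` entry is the same as using `no_argument` for the second field, as written here.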
Tested-by: Han,YingyaX <yingyax.han@intel.com>

-----Original Message-----
From: dev <dev-bounces@dpdk.org> On Behalf Of Junyu Jiang
Sent: Wednesday, March 25, 2020 2:33 PM
To: dev@dpdk.org
Cc: Yang, Qiming <qiming.yang@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; Jiang, JunyuX <junyux.jiang@intel.com>
Subject: [dpdk-dev] [PATCH v2 0/2] examples/vmdq: fix RSS configuration

This patch set fixed a bug of vmdq example, and added a documentation for it.

*** BLURB HERE ***

Junyu Jiang (2):
  doc: add user guide for VMDq
  examples/vmdq: fix RSS configuration

 MAINTAINERS                                  |   1 +
 doc/guides/sample_app_ug/index.rst           |   1 +
 doc/guides/sample_app_ug/vmdq_forwarding.rst | 208 +++++++++++++++++++
 examples/vmdq/main.c                         |  39 +++-
 4 files changed, 244 insertions(+), 5 deletions(-)
 create mode 100644 doc/guides/sample_app_ug/vmdq_forwarding.rst

-- 
2.17.1
Tested-by: Han,YingyaX <yingyax.han@intel.com>
BRs,
Yingya
-----Original Message-----
From: stable <stable-bounces@dpdk.org> On Behalf Of Li, Xiaoyun
Sent: Thursday, March 5, 2020 10:03 AM
To: Jiang, JunyuX <junyux.jiang@intel.com>; dev@dpdk.org
Cc: Yang, Qiming <qiming.yang@intel.com>; stable@dpdk.org
Subject: Re: [dpdk-stable] [PATCH] examples/vmdq: fix RSS configuration
> -----Original Message-----
> From: Jiang, JunyuX
> Sent: Tuesday, March 3, 2020 17:16
> To: dev@dpdk.org
> Cc: Li, Xiaoyun <xiaoyun.li@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Jiang, JunyuX <junyux.jiang@intel.com>;
> stable@dpdk.org
> Subject: [PATCH] examples/vmdq: fix RSS configuration
>
> In order that all queues of pools can receive packets, add enable-rss
> argument to change rss configuration.
>
> Fixes: 6bb97df521aa ("examples/vmdq: new app")
> Cc: stable@dpdk.org
>
> Signed-off-by: Junyu Jiang <junyux.jiang@intel.com>
> ---
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
Tested-by: Han,YingyaX <yingyax.han@intel.com>

BRs,
Yingya

-----Original Message-----
From: dev <dev-bounces@dpdk.org> On Behalf Of Junyu Jiang
Sent: Wednesday, March 25, 2020 2:33 PM
To: dev@dpdk.org
Cc: Yang, Qiming <qiming.yang@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; Jiang, JunyuX <junyux.jiang@intel.com>; stable@dpdk.org
Subject: [dpdk-dev] [PATCH v2 2/2] examples/vmdq: fix RSS configuration

In order that all queues of pools can receive packets,
add enable-rss argument to change rss configuration.

Fixes: 6bb97df521aa ("examples/vmdq: new app")
Cc: stable@dpdk.org

Signed-off-by: Junyu Jiang <junyux.jiang@intel.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
---
 doc/guides/sample_app_ug/vmdq_forwarding.rst |  6 +--
 examples/vmdq/main.c                         | 39 +++++++++++++++++---
 2 files changed, 37 insertions(+), 8 deletions(-)
-- 
2.17.1
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Junyu Jiang
> Sent: Wednesday, March 25, 2020 2:33 PM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; Jiang,
> JunyuX <junyux.jiang@intel.com>; stable@dpdk.org
> Subject: [dpdk-dev] [PATCH v2 2/2] examples/vmdq: fix RSS configuration
>
> In order that all queues of pools can receive packets,
> add enable-rss argument to change rss configuration.
>
> Fixes: 6bb97df521aa ("examples/vmdq: new app")
> Cc: stable@dpdk.org
>
> Signed-off-by: Junyu Jiang <junyux.jiang@intel.com>
> Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
> ---
> doc/guides/sample_app_ug/vmdq_forwarding.rst | 6 +--
> examples/vmdq/main.c | 39 +++++++++++++++++---
> 2 files changed, 37 insertions(+), 8 deletions(-)
>
> diff --git a/doc/guides/sample_app_ug/vmdq_forwarding.rst
> b/doc/guides/sample_app_ug/vmdq_forwarding.rst
> index df23043d6..658d6742d 100644
> --- a/doc/guides/sample_app_ug/vmdq_forwarding.rst
> +++ b/doc/guides/sample_app_ug/vmdq_forwarding.rst
> @@ -26,13 +26,13 @@ The Intel® 82599 10 Gigabit Ethernet Controller NIC also supports
> the splitting
> While the Intel® X710 or XL710 Ethernet Controller NICs support many configurations of
> VMDQ pools of 4 or 8 queues each.
> And queues numbers for each VMDQ pool can be changed by setting
> CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM
> in config/common_* file.
> -The nb-pools parameter can be passed on the command line, after the EAL parameters:
> +The nb-pools and enable-rss parameters can be passed on the command line, after the
> EAL parameters:
>
> .. code-block:: console
>
> - ./build/vmdq_app [EAL options] -- -p PORTMASK --nb-pools NP
> + ./build/vmdq_app [EAL options] -- -p PORTMASK --nb-pools NP --enable-rss
>
> -where, NP can be 8, 16 or 32.
> +where, NP can be 8, 16 or 32, rss is disabled by default.
>
> In Linux* user space, the application can display statistics with the number of packets
> received on each queue.
> To have the application display the statistics, send a SIGHUP signal to the running
> application process.
> diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
> index 011110920..98032e6a3 100644
> --- a/examples/vmdq/main.c
> +++ b/examples/vmdq/main.c
> @@ -59,6 +59,7 @@ static uint32_t enabled_port_mask;
> /* number of pools (if user does not specify any, 8 by default */
> static uint32_t num_queues = 8;
> static uint32_t num_pools = 8;
> +static uint8_t rss_enable;
>
> /* empty vmdq configuration structure. Filled in programatically */
> static const struct rte_eth_conf vmdq_conf_default = {
> @@ -143,6 +144,13 @@ get_eth_conf(struct rte_eth_conf *eth_conf, uint32_t
> num_pools)
> (void)(rte_memcpy(eth_conf, &vmdq_conf_default, sizeof(*eth_conf)));
> (void)(rte_memcpy(ð_conf->rx_adv_conf.vmdq_rx_conf, &conf,
> sizeof(eth_conf->rx_adv_conf.vmdq_rx_conf)));
> + if (rss_enable) {
> + eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
> + eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
> + ETH_RSS_UDP |
> + ETH_RSS_TCP |
> + ETH_RSS_SCTP;
> + }
> return 0;
> }
>
> @@ -164,6 +172,7 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> uint16_t q;
> uint16_t queues_per_pool;
> uint32_t max_nb_pools;
> + uint64_t rss_hf_tmp;
>
> /*
> * The max pool number from dev_info will be used to validate the pool
> @@ -209,6 +218,17 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
> if (!rte_eth_dev_is_valid_port(port))
> return -1;
>
> + rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
> + port_conf.rx_adv_conf.rss_conf.rss_hf &=
> + dev_info.flow_type_rss_offloads;
> + if (port_conf.rx_adv_conf.rss_conf.rss_hf != rss_hf_tmp) {
> + printf("Port %u modified RSS hash function based on hardware support,"
This is RSS offload type but not hash function.
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Junyu Jiang
> Sent: Wednesday, March 25, 2020 2:33 PM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; Jiang,
> JunyuX <junyux.jiang@intel.com>
> Subject: [dpdk-dev] [PATCH v2 1/2] doc: add user guide for VMDq
>
> Currently, there is no documentation for the vmdq example;
> this patch adds the user guide for vmdq.
>
> Signed-off-by: Junyu Jiang <junyux.jiang@intel.com>
Reviewed-by: Jingjing Wu <jingjing.wu@intel.com>
> -----Original Message-----
> From: stable <stable-bounces@dpdk.org> On Behalf Of Wu, Jingjing
> Sent: Friday, April 3, 2020 08:08
> To: Jiang, JunyuX <junyux.jiang@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; Yigit, Ferruh
> <ferruh.yigit@intel.com>; Jiang, JunyuX <junyux.jiang@intel.com>;
> stable@dpdk.org
> Subject: Re: [dpdk-stable] [dpdk-dev] [PATCH v2 2/2] examples/vmdq: fix RSS
> configuration
>
>
>
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Junyu Jiang
> > Sent: Wednesday, March 25, 2020 2:33 PM
> > To: dev@dpdk.org
> > Cc: Yang, Qiming <qiming.yang@intel.com>; Yigit, Ferruh
> > <ferruh.yigit@intel.com>; Jiang, JunyuX <junyux.jiang@intel.com>;
> > stable@dpdk.org
> > Subject: [dpdk-dev] [PATCH v2 2/2] examples/vmdq: fix RSS
> > configuration
> >
> > In order that all queues of pools can receive packets, add enable-rss
> > argument to change rss configuration.
> >
> > Fixes: 6bb97df521aa ("examples/vmdq: new app")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Junyu Jiang <junyux.jiang@intel.com>
> > Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > ---
> > doc/guides/sample_app_ug/vmdq_forwarding.rst | 6 +--
> > examples/vmdq/main.c | 39 +++++++++++++++++---
> > 2 files changed, 37 insertions(+), 8 deletions(-)
> >
> > diff --git a/doc/guides/sample_app_ug/vmdq_forwarding.rst
> > b/doc/guides/sample_app_ug/vmdq_forwarding.rst
> > index df23043d6..658d6742d 100644
> > --- a/doc/guides/sample_app_ug/vmdq_forwarding.rst
> > +++ b/doc/guides/sample_app_ug/vmdq_forwarding.rst
> > @@ -26,13 +26,13 @@ The Intel® 82599 10 Gigabit Ethernet Controller
> > NIC also supports the splitting While the Intel® X710 or XL710
> > Ethernet Controller NICs support many configurations of VMDQ pools of
> > 4 or 8 queues each.
> > And queues numbers for each VMDQ pool can be changed by setting
> > CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM
> > in config/common_* file.
> > -The nb-pools parameter can be passed on the command line, after the EAL
> parameters:
> > +The nb-pools and enable-rss parameters can be passed on the command
> > +line, after the
> > EAL parameters:
> >
> > .. code-block:: console
> >
> > - ./build/vmdq_app [EAL options] -- -p PORTMASK --nb-pools NP
> > + ./build/vmdq_app [EAL options] -- -p PORTMASK --nb-pools NP
> > + --enable-rss
> >
> > -where, NP can be 8, 16 or 32.
> > +where, NP can be 8, 16 or 32, rss is disabled by default.
> >
> > In Linux* user space, the application can display statistics with the
> > number of packets received on each queue.
> > To have the application display the statistics, send a SIGHUP signal
> > to the running application process.
> > diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c index
> > 011110920..98032e6a3 100644
> > --- a/examples/vmdq/main.c
> > +++ b/examples/vmdq/main.c
> > @@ -59,6 +59,7 @@ static uint32_t enabled_port_mask;
> > /* number of pools (if user does not specify any, 8 by default */
> > static uint32_t num_queues = 8; static uint32_t num_pools = 8;
> > +static uint8_t rss_enable;
> >
> > /* empty vmdq configuration structure. Filled in programatically */
> > static const struct rte_eth_conf vmdq_conf_default = { @@ -143,6
> > +144,13 @@ get_eth_conf(struct rte_eth_conf *eth_conf, uint32_t
> > num_pools)
> > (void)(rte_memcpy(eth_conf, &vmdq_conf_default, sizeof(*eth_conf)));
> > (void)(rte_memcpy(ð_conf->rx_adv_conf.vmdq_rx_conf, &conf,
> > sizeof(eth_conf->rx_adv_conf.vmdq_rx_conf)));
> > + if (rss_enable) {
> > + eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
> > + eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
> > + ETH_RSS_UDP |
> > + ETH_RSS_TCP |
> > + ETH_RSS_SCTP;
> > + }
> > return 0;
> > }
> >
> > @@ -164,6 +172,7 @@ port_init(uint16_t port, struct rte_mempool
> *mbuf_pool)
> > uint16_t q;
> > uint16_t queues_per_pool;
> > uint32_t max_nb_pools;
> > + uint64_t rss_hf_tmp;
> >
> > /*
> > * The max pool number from dev_info will be used to validate the
> > pool @@ -209,6 +218,17 @@ port_init(uint16_t port, struct rte_mempool
> *mbuf_pool)
> > if (!rte_eth_dev_is_valid_port(port))
> > return -1;
> >
> > + rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
> > + port_conf.rx_adv_conf.rss_conf.rss_hf &=
> > + dev_info.flow_type_rss_offloads;
> > + if (port_conf.rx_adv_conf.rss_conf.rss_hf != rss_hf_tmp) {
> > + printf("Port %u modified RSS hash function based on hardware
> support,"
>
> This is RSS offload type but not hash function.
* The *rss_hf* field of the *rss_conf* structure indicates the different
* types of IPv4/IPv6 packets to which the RSS hashing must be applied.
* Supplying an *rss_hf* equal to zero disables the RSS feature.
And in testpmd, it's the same.
port->dev_conf.rx_adv_conf.rss_conf.rss_hf =
rss_hf & port->dev_info.flow_type_rss_offloads;
> > > + rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
> > > + port_conf.rx_adv_conf.rss_conf.rss_hf &=
> > > + dev_info.flow_type_rss_offloads;
> > > + if (port_conf.rx_adv_conf.rss_conf.rss_hf != rss_hf_tmp) {
> > > + printf("Port %u modified RSS hash function based on hardware
> > support,"
> >
> > This is RSS offload type but not hash function.
>
> * The *rss_hf* field of the *rss_conf* structure indicates the different
> * types of IPv4/IPv6 packets to which the RSS hashing must be applied.
> * Supplying an *rss_hf* equal to zero disables the RSS feature.
>
> And in testpmd, it's the same.
> port->dev_conf.rx_adv_conf.rss_conf.rss_hf =
> rss_hf & port->dev_info.flow_type_rss_offloads;
OK, I get it: the definition of rss_hf at the beginning might be "hash function",
which is also the same as the RSS offload type.
Ignore my comments then.
BTW, "hash function" also indicates TOEPLITZ/XOR... in some places.
Thanks
Jingjing
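The confusion resolved in this exchange is that "hash function" can mean either the hashing algorithm (Toeplitz, XOR, ...) or the rss_hf bitmask of packet types the hash is applied to. A toy sketch that keeps the two notions in separate fields; the names are illustrative, not the DPDK ones. Per the rte_ethdev comment quoted above, a zero type mask disables RSS:

```c
#include <stdint.h>

/* The algorithm used to compute the hash. */
enum hash_algo { HASH_TOEPLITZ, HASH_XOR };

/* Two distinct settings that the thread's terminology conflates:
 * how the hash is computed, and which packet types it applies to. */
struct rss_settings {
	enum hash_algo algo; /* hash algorithm */
	uint64_t type_mask;  /* packet-type bitmask, the rss_hf role */
};

/* A zero type mask means no packet type is hashed, i.e. RSS is off,
 * regardless of which algorithm is selected. */
static int rss_enabled(const struct rss_settings *s)
{
	return s->type_mask != 0;
}
```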
On 3/25/2020 6:32 AM, Junyu Jiang wrote:
> This patch set fixed a bug of vmdq example,
> and added a documentation for it.
>
> *** BLURB HERE ***
>
> Junyu Jiang (2):
> doc: add user guide for VMDq
> examples/vmdq: fix RSS configuration
>
Hi Junyu,
Thanks for introducing VMDq user guide, appreciated.
Series applied to dpdk-next-net/master, thanks.
25/03/2020 07:32, Junyu Jiang:
> This patch set fixed a bug of vmdq example,
> and added a documentation for it.
We have 2 directories for VMDq:
examples/vmdq/
examples/vmdq_dcb/
Please, would it be possible to merge them in a single one?
Another related question: is VMDq a feature we want to keep
in future? I thought it could be deprecated. Am I wrong?
What are the devices supporting VMDq?