From: Ciara Power <ciara.power@intel.com>
To: dev@dpdk.org
Cc: techboard@dpdk.org, Ciara Power <ciara.power@intel.com>
Subject: [dpdk-dev] [PATCH v2 6/6] examples/load_balancer: remove example from DPDK
Date: Thu, 24 Oct 2019 14:31:45 +0100
Message-ID: <20191024133145.12246-7-ciara.power@intel.com>
In-Reply-To: <20191024133145.12246-1-ciara.power@intel.com>

This example can be removed because DPDK now provides a range of
libraries, notably rte_eventdev, that did not exist when the example
was written and that cover load balancing, making it less relevant.
In addition, modern NICs can distribute traffic in hardware over a
wider range of packet fields than earlier cards could, e.g. using RSS.
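
As a rough illustration (not part of this patch), the hardware RSS
distribution mentioned above takes only a few lines of ethdev
configuration. The sketch below reuses the ETH_MQ_RX_RSS / ETH_RSS_IP
settings that the removed init.c already applied; the helper name and
queue counts are illustrative only.

#include <rte_ethdev.h>

/* Sketch: let the NIC spread flows across its RX queues via RSS,
 * mirroring the rxmode/rss_conf settings from the removed init.c.
 */
static const struct rte_eth_conf rss_port_conf = {
	.rxmode = {
		.mq_mode = ETH_MQ_RX_RSS,
	},
	.rx_adv_conf = {
		.rss_conf = {
			.rss_key = NULL,      /* PMD default RSS key */
			.rss_hf = ETH_RSS_IP, /* hash on IP addresses */
		},
	},
	.txmode = {
		.mq_mode = ETH_MQ_TX_NONE,
	},
};

static int
configure_rss(uint16_t port_id, uint16_t nb_rx_queues)
{
	/* One TX queue is enough for this illustration. */
	return rte_eth_dev_configure(port_id, nb_rx_queues, 1,
			&rss_port_conf);
}
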
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
MAINTAINERS | 3 -
doc/guides/sample_app_ug/index.rst | 1 -
doc/guides/sample_app_ug/load_balancer.rst | 201 ----
examples/Makefile | 1 -
examples/load_balancer/Makefile | 62 --
examples/load_balancer/config.c | 1030 --------------------
examples/load_balancer/init.c | 520 ----------
examples/load_balancer/main.c | 76 --
examples/load_balancer/main.h | 351 -------
examples/load_balancer/meson.build | 12 -
examples/load_balancer/runtime.c | 642 ------------
examples/meson.build | 1 -
12 files changed, 2900 deletions(-)
delete mode 100644 doc/guides/sample_app_ug/load_balancer.rst
delete mode 100644 examples/load_balancer/Makefile
delete mode 100644 examples/load_balancer/config.c
delete mode 100644 examples/load_balancer/init.c
delete mode 100644 examples/load_balancer/main.c
delete mode 100644 examples/load_balancer/main.h
delete mode 100644 examples/load_balancer/meson.build
delete mode 100644 examples/load_balancer/runtime.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 6b9fa4dc2..3db771bd7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1461,9 +1461,6 @@ F: doc/guides/sample_app_ug/l3_forward.rst
F: examples/link_status_interrupt/
F: doc/guides/sample_app_ug/link_status_intr.rst
-F: examples/load_balancer/
-F: doc/guides/sample_app_ug/load_balancer.rst
-
L-threads - EXPERIMENTAL
M: John McNamara <john.mcnamara@intel.com>
F: examples/performance-thread/
diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
index 5f4924df6..191eebb56 100644
--- a/doc/guides/sample_app_ug/index.rst
+++ b/doc/guides/sample_app_ug/index.rst
@@ -30,7 +30,6 @@ Sample Applications User Guides
l3_forward_power_man
l3_forward_access_ctrl
link_status_intr
- load_balancer
server_node_efd
service_cores
multi_process
diff --git a/doc/guides/sample_app_ug/load_balancer.rst b/doc/guides/sample_app_ug/load_balancer.rst
deleted file mode 100644
index 8f2abdfb8..000000000
--- a/doc/guides/sample_app_ug/load_balancer.rst
+++ /dev/null
@@ -1,201 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2010-2014 Intel Corporation.
-
-Load Balancer Sample Application
-================================
-
-The Load Balancer sample application demonstrates the concept of isolating the packet I/O task
-from the application-specific workload.
-Depending on the performance target,
-a number of logical cores (lcores) are dedicated to handle the interaction with the NIC ports (I/O lcores),
-while the rest of the lcores are dedicated to performing the application processing (worker lcores).
-The worker lcores are totally oblivious to the intricacies of the packet I/O activity and
-use the NIC-agnostic interface provided by software rings to exchange packets with the I/O cores.
-
-Overview
---------
-
-The architecture of the Load Balancer application is presented in the following figure.
-
-.. _figure_load_bal_app_arch:
-
-.. figure:: img/load_bal_app_arch.*
-
- Load Balancer Application Architecture
-
-
-For the sake of simplicity, the diagram illustrates a specific case of two I/O RX and two I/O TX lcores offloading the packet I/O
-overhead incurred by four NIC ports from four worker cores, with each I/O lcore handling RX/TX for two NIC ports.
-
-I/O RX Logical Cores
-~~~~~~~~~~~~~~~~~~~~
-
-Each I/O RX lcore performs packet RX from its assigned NIC RX rings and then distributes the received packets to the worker threads.
-The application allows each I/O RX lcore to communicate with any of the worker threads;
-therefore, each (I/O RX lcore, worker lcore) pair is connected through a dedicated single producer - single consumer software ring.
-
-The worker lcore to handle the current packet is determined by reading a predefined 1-byte field from the input packet:
-
-worker_id = packet[load_balancing_field] % n_workers
-
-Since all the packets that are part of the same traffic flow are expected to have the same value for the load balancing field,
-this scheme also ensures that all the packets that are part of the same traffic flow are directed to the same worker lcore (flow affinity)
-in the same order they enter the system (packet ordering).
-
-I/O TX Logical Cores
-~~~~~~~~~~~~~~~~~~~~
-
-Each I/O lcore owns the packet TX for a predefined set of NIC ports. To enable each worker thread to send packets to any NIC TX port,
-the application creates a software ring for each (worker lcore, NIC TX port) pair,
-with each I/O TX core handling those software rings that are associated with NIC ports that it handles.
-
-Worker Logical Cores
-~~~~~~~~~~~~~~~~~~~~
-
-Each worker lcore reads packets from its set of input software rings and
-routes them to the NIC ports for transmission by dispatching them to output software rings.
-The routing logic is LPM based, with all the worker threads sharing the same LPM rules.
-
-Compiling the Application
--------------------------
-
-To compile the sample application see :doc:`compiling`.
-
-The application is located in the ``load_balancer`` sub-directory.
-
-Running the Application
------------------------
-
-To successfully run the application,
-the command line used to start the application has to be in sync with the traffic flows configured on the traffic generator side.
-
-For examples of application command lines and traffic generator flows, please refer to the DPDK Test Report.
-For more details on how to set up and run the sample applications provided with DPDK package,
-please refer to the *DPDK Getting Started Guide*.
-
-Explanation
------------
-
-Application Configuration
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The application run-time configuration is done through the application command line parameters.
-Any parameter that is not specified as mandatory is optional,
-with the default value hard-coded in the main.h header file from the application folder.
-
-The list of application command line parameters is listed below:
-
-#. --rx "(PORT, QUEUE, LCORE), ...": The list of NIC RX ports and queues handled by the I/O RX lcores.
- This parameter also implicitly defines the list of I/O RX lcores. This is a mandatory parameter.
-
-#. --tx "(PORT, LCORE), ... ": The list of NIC TX ports handled by the I/O TX lcores.
- This parameter also implicitly defines the list of I/O TX lcores.
- This is a mandatory parameter.
-
-#. --w "LCORE, ...": The list of the worker lcores. This is a mandatory parameter.
-
-#. --lpm "IP / PREFIX => PORT; ...": The list of LPM rules used by the worker lcores for packet forwarding.
- This is a mandatory parameter.
-
-#. --rsz "A, B, C, D": Ring sizes:
-
- #. A = The size (in number of buffer descriptors) of each of the NIC RX rings read by the I/O RX lcores.
-
- #. B = The size (in number of elements) of each of the software rings used by the I/O RX lcores to send packets to worker lcores.
-
- #. C = The size (in number of elements) of each of the software rings used by the worker lcores to send packets to I/O TX lcores.
-
- #. D = The size (in number of buffer descriptors) of each of the NIC TX rings written by I/O TX lcores.
-
-#. --bsz "(A, B), (C, D), (E, F)": Burst sizes:
-
- #. A = The I/O RX lcore read burst size from NIC RX.
-
- #. B = The I/O RX lcore write burst size to the output software rings.
-
- #. C = The worker lcore read burst size from the input software rings.
-
- #. D = The worker lcore write burst size to the output software rings.
-
- #. E = The I/O TX lcore read burst size from the input software rings.
-
- #. F = The I/O TX lcore write burst size to the NIC TX.
-
-#. --pos-lb POS: The position of the 1-byte field within the input packet used by the I/O RX lcores
- to identify the worker lcore for the current packet.
- This field needs to be within the first 64 bytes of the input packet.
-
-The infrastructure of software rings connecting I/O lcores and worker lcores is built by the application
-based on the configuration provided by the user through the command line parameters.
-
-A specific lcore performing the I/O RX role for a specific set of NIC ports can also perform the I/O TX role
-for the same or a different set of NIC ports.
-A specific lcore cannot perform both the I/O role (either RX or TX) and the worker role during the same session.
-
-Example:
-
-.. code-block:: console
-
- ./load_balancer -l 3-7 -n 4 -- --rx "(0,0,3),(1,0,3)" --tx "(0,3),(1,3)" --w "4,5,6,7" --lpm "1.0.0.0/24=>0; 1.0.1.0/24=>1;" --pos-lb 29
-
-There is a single I/O lcore (lcore 3) that handles RX and TX for two NIC ports (ports 0 and 1) and
-exchanges packets with four worker lcores (lcores 4, 5, 6 and 7) that
-are assigned worker IDs 0 to 3 (worker ID for lcore 4 is 0, for lcore 5 is 1, for lcore 6 is 2 and for lcore 7 is 3).
-
-Assuming that all the input packets are IPv4 packets with no VLAN label and the source IP address of the current packet is A.B.C.D,
-the worker lcore for the current packet is determined by byte D (which is byte 29).
-There are two LPM rules that are used by each worker lcore to route packets to the output NIC ports.
-
-The following table illustrates the packet flow through the system for several possible traffic flows:
-
-+------------+----------------+-----------------+------------------------------+--------------+
-| **Flow #** | **Source** | **Destination** | **Worker ID (Worker lcore)** | **Output** |
-| | **IP Address** | **IP Address** | | **NIC Port** |
-| | | | | |
-+============+================+=================+==============================+==============+
-| 1 | 0.0.0.0 | 1.0.0.1 | 0 (4) | 0 |
-| | | | | |
-+------------+----------------+-----------------+------------------------------+--------------+
-| 2 | 0.0.0.1 | 1.0.1.2 | 1 (5) | 1 |
-| | | | | |
-+------------+----------------+-----------------+------------------------------+--------------+
-| 3 | 0.0.0.14 | 1.0.0.3 | 2 (6) | 0 |
-| | | | | |
-+------------+----------------+-----------------+------------------------------+--------------+
-| 4 | 0.0.0.15 | 1.0.1.4 | 3 (7) | 1 |
-| | | | | |
-+------------+----------------+-----------------+------------------------------+--------------+
-
-NUMA Support
-~~~~~~~~~~~~
-
-The application has built-in performance enhancements for the NUMA case:
-
-#. One buffer pool per CPU socket.
-
-#. One LPM table per CPU socket.
-
-#. Memory for the NIC RX or TX rings is allocated on the same socket as the lcore handling the respective ring.
-
-In the case where multiple CPU sockets are used in the system,
-it is recommended to enable at least one lcore on each CPU socket to fulfill the I/O role for the NIC ports that
-are directly attached to that CPU socket through the PCI Express* bus.
-It is always recommended to handle the packet I/O with lcores from the same CPU socket as the NICs.
-
-Depending on whether the I/O RX lcore (on the same CPU socket as the NIC RX port),
-the worker lcore and the I/O TX lcore (on the same CPU socket as the NIC TX port) that handle a specific input packet
-are on the same or different CPU sockets, the following run-time scenarios are possible:
-
-#. AAA: The packet is received, processed and transmitted without going across CPU sockets.
-
-#. AAB: The packet is received and processed on socket A,
- but as it has to be transmitted on a NIC port connected to socket B,
- the packet is sent to socket B through software rings.
-
-#. ABB: The packet is received on socket A, but as it has to be processed by a worker lcore on socket B,
- the packet is sent to socket B through software rings.
- The packet is transmitted by a NIC port connected to the same CPU socket as the worker lcore that processed it.
-
-#. ABC: The packet is received on socket A, it is processed by an lcore on socket B,
- then it has to be transmitted out by a NIC connected to socket C.
- The performance price for crossing the CPU socket boundary is paid twice for this packet.
diff --git a/examples/Makefile b/examples/Makefile
index 6bba09ce9..62d0865c5 100644
--- a/examples/Makefile
+++ b/examples/Makefile
@@ -48,7 +48,6 @@ ifeq ($(CONFIG_RTE_LIBRTE_LPM)$(CONFIG_RTE_LIBRTE_HASH),yy)
DIRS-$(CONFIG_RTE_LIBRTE_POWER) += l3fwd-power
endif
DIRS-y += link_status_interrupt
-DIRS-$(CONFIG_RTE_LIBRTE_LPM) += load_balancer
DIRS-y += multi_process
DIRS-y += ntb
DIRS-$(CONFIG_RTE_LIBRTE_REORDER) += packet_ordering
diff --git a/examples/load_balancer/Makefile b/examples/load_balancer/Makefile
deleted file mode 100644
index caae8a107..000000000
--- a/examples/load_balancer/Makefile
+++ /dev/null
@@ -1,62 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2010-2014 Intel Corporation
-
-# binary name
-APP = load_balancer
-
-# all source are stored in SRCS-y
-SRCS-y := main.c config.c init.c runtime.c
-
-# Build using pkg-config variables if possible
-ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
-
-all: shared
-.PHONY: shared static
-shared: build/$(APP)-shared
- ln -sf $(APP)-shared build/$(APP)
-static: build/$(APP)-static
- ln -sf $(APP)-static build/$(APP)
-
-PKGCONF=pkg-config --define-prefix
-
-PC_FILE := $(shell $(PKGCONF) --path libdpdk)
-CFLAGS += -O3 $(shell $(PKGCONF) --cflags libdpdk)
-LDFLAGS_SHARED = $(shell $(PKGCONF) --libs libdpdk)
-LDFLAGS_STATIC = -Wl,-Bstatic $(shell $(PKGCONF) --static --libs libdpdk)
-
-build/$(APP)-shared: $(SRCS-y) Makefile $(PC_FILE) | build
- $(CC) $(CFLAGS) $(SRCS-y) -o $@ $(LDFLAGS) $(LDFLAGS_SHARED)
-
-build/$(APP)-static: $(SRCS-y) Makefile $(PC_FILE) | build
- $(CC) $(CFLAGS) $(SRCS-y) -o $@ $(LDFLAGS) $(LDFLAGS_STATIC)
-
-build:
- @mkdir -p $@
-
-.PHONY: clean
-clean:
- rm -f build/$(APP) build/$(APP)-static build/$(APP)-shared
- test -d build && rmdir -p build || true
-
-else # Build using legacy build system
-
-ifeq ($(RTE_SDK),)
-$(error "Please define RTE_SDK environment variable")
-endif
-
-# Default target; detect a build directory by looking for a path with a .config
-RTE_TARGET ?= $(notdir $(abspath $(dir $(firstword $(wildcard $(RTE_SDK)/*/.config)))))
-
-include $(RTE_SDK)/mk/rte.vars.mk
-
-CFLAGS += -O3 -g
-CFLAGS += $(WERROR_FLAGS)
-
-# workaround for a gcc bug with noreturn attribute
-# http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12603
-ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y)
-CFLAGS_main.o += -Wno-return-type
-endif
-
-include $(RTE_SDK)/mk/rte.extapp.mk
-endif
diff --git a/examples/load_balancer/config.c b/examples/load_balancer/config.c
deleted file mode 100644
index 972c85c5b..000000000
--- a/examples/load_balancer/config.c
+++ /dev/null
@@ -1,1030 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#include <stdio.h>
-#include <stdlib.h>
-#include <stdint.h>
-#include <inttypes.h>
-#include <sys/types.h>
-#include <string.h>
-#include <sys/queue.h>
-#include <stdarg.h>
-#include <errno.h>
-#include <getopt.h>
-
-#include <rte_common.h>
-#include <rte_byteorder.h>
-#include <rte_log.h>
-#include <rte_memory.h>
-#include <rte_memcpy.h>
-#include <rte_eal.h>
-#include <rte_launch.h>
-#include <rte_atomic.h>
-#include <rte_cycles.h>
-#include <rte_prefetch.h>
-#include <rte_lcore.h>
-#include <rte_per_lcore.h>
-#include <rte_branch_prediction.h>
-#include <rte_interrupts.h>
-#include <rte_random.h>
-#include <rte_debug.h>
-#include <rte_ether.h>
-#include <rte_ethdev.h>
-#include <rte_mempool.h>
-#include <rte_mbuf.h>
-#include <rte_ip.h>
-#include <rte_tcp.h>
-#include <rte_lpm.h>
-#include <rte_string_fns.h>
-
-#include "main.h"
-
-struct app_params app;
-
-static const char usage[] =
-" \n"
-" load_balancer <EAL PARAMS> -- <APP PARAMS> \n"
-" \n"
-"Application manadatory parameters: \n"
-" --rx \"(PORT, QUEUE, LCORE), ...\" : List of NIC RX ports and queues \n"
-" handled by the I/O RX lcores \n"
-" --tx \"(PORT, LCORE), ...\" : List of NIC TX ports handled by the I/O TX \n"
-" lcores \n"
-" --w \"LCORE, ...\" : List of the worker lcores \n"
-" --lpm \"IP / PREFIX => PORT; ...\" : List of LPM rules used by the worker \n"
-" lcores for packet forwarding \n"
-" \n"
-"Application optional parameters: \n"
-" --rsz \"A, B, C, D\" : Ring sizes \n"
-" A = Size (in number of buffer descriptors) of each of the NIC RX \n"
-" rings read by the I/O RX lcores (default value is %u) \n"
-" B = Size (in number of elements) of each of the SW rings used by the\n"
-" I/O RX lcores to send packets to worker lcores (default value is\n"
-" %u) \n"
-" C = Size (in number of elements) of each of the SW rings used by the\n"
-" worker lcores to send packets to I/O TX lcores (default value is\n"
-" %u) \n"
-" D = Size (in number of buffer descriptors) of each of the NIC TX \n"
-" rings written by I/O TX lcores (default value is %u) \n"
-" --bsz \"(A, B), (C, D), (E, F)\" : Burst sizes \n"
-" A = I/O RX lcore read burst size from NIC RX (default value is %u) \n"
-" B = I/O RX lcore write burst size to output SW rings (default value \n"
-" is %u) \n"
-" C = Worker lcore read burst size from input SW rings (default value \n"
-" is %u) \n"
-" D = Worker lcore write burst size to output SW rings (default value \n"
-" is %u) \n"
-" E = I/O TX lcore read burst size from input SW rings (default value \n"
-" is %u) \n"
-" F = I/O TX lcore write burst size to NIC TX (default value is %u) \n"
-" --pos-lb POS : Position of the 1-byte field within the input packet used by\n"
-" the I/O RX lcores to identify the worker lcore for the current \n"
-" packet (default value is %u) \n";
-
-void
-app_print_usage(void)
-{
- printf(usage,
- APP_DEFAULT_NIC_RX_RING_SIZE,
- APP_DEFAULT_RING_RX_SIZE,
- APP_DEFAULT_RING_TX_SIZE,
- APP_DEFAULT_NIC_TX_RING_SIZE,
- APP_DEFAULT_BURST_SIZE_IO_RX_READ,
- APP_DEFAULT_BURST_SIZE_IO_RX_WRITE,
- APP_DEFAULT_BURST_SIZE_WORKER_READ,
- APP_DEFAULT_BURST_SIZE_WORKER_WRITE,
- APP_DEFAULT_BURST_SIZE_IO_TX_READ,
- APP_DEFAULT_BURST_SIZE_IO_TX_WRITE,
- APP_DEFAULT_IO_RX_LB_POS
- );
-}
-
-#ifndef APP_ARG_RX_MAX_CHARS
-#define APP_ARG_RX_MAX_CHARS 4096
-#endif
-
-#ifndef APP_ARG_RX_MAX_TUPLES
-#define APP_ARG_RX_MAX_TUPLES 128
-#endif
-
-static int
-str_to_unsigned_array(
- const char *s, size_t sbuflen,
- char separator,
- unsigned num_vals,
- unsigned *vals)
-{
- char str[sbuflen+1];
- char *splits[num_vals];
- char *endptr = NULL;
- int i, num_splits = 0;
-
- /* copy s so we don't modify original string */
- strlcpy(str, s, sizeof(str));
- num_splits = rte_strsplit(str, sizeof(str), splits, num_vals, separator);
-
- errno = 0;
- for (i = 0; i < num_splits; i++) {
- vals[i] = strtoul(splits[i], &endptr, 0);
- if (errno != 0 || *endptr != '\0')
- return -1;
- }
-
- return num_splits;
-}
-
-static int
-str_to_unsigned_vals(
- const char *s,
- size_t sbuflen,
- char separator,
- unsigned num_vals, ...)
-{
- unsigned i, vals[num_vals];
- va_list ap;
-
- num_vals = str_to_unsigned_array(s, sbuflen, separator, num_vals, vals);
-
- va_start(ap, num_vals);
- for (i = 0; i < num_vals; i++) {
- unsigned *u = va_arg(ap, unsigned *);
- *u = vals[i];
- }
- va_end(ap);
- return num_vals;
-}
-
-static int
-parse_arg_rx(const char *arg)
-{
- const char *p0 = arg, *p = arg;
- uint32_t n_tuples;
-
- if (strnlen(arg, APP_ARG_RX_MAX_CHARS + 1) == APP_ARG_RX_MAX_CHARS + 1) {
- return -1;
- }
-
- n_tuples = 0;
- while ((p = strchr(p0,'(')) != NULL) {
- struct app_lcore_params *lp;
- uint32_t port, queue, lcore, i;
-
- p0 = strchr(p++, ')');
- if ((p0 == NULL) ||
- (str_to_unsigned_vals(p, p0 - p, ',', 3, &port, &queue, &lcore) != 3)) {
- return -2;
- }
-
- /* Enable port and queue for later initialization */
- if ((port >= APP_MAX_NIC_PORTS) || (queue >= APP_MAX_RX_QUEUES_PER_NIC_PORT)) {
- return -3;
- }
- if (app.nic_rx_queue_mask[port][queue] != 0) {
- return -4;
- }
- app.nic_rx_queue_mask[port][queue] = 1;
-
- /* Check and assign (port, queue) to I/O lcore */
- if (rte_lcore_is_enabled(lcore) == 0) {
- return -5;
- }
-
- if (lcore >= APP_MAX_LCORES) {
- return -6;
- }
- lp = &app.lcore_params[lcore];
- if (lp->type == e_APP_LCORE_WORKER) {
- return -7;
- }
- lp->type = e_APP_LCORE_IO;
- const size_t n_queues = RTE_MIN(lp->io.rx.n_nic_queues,
- RTE_DIM(lp->io.rx.nic_queues));
- for (i = 0; i < n_queues; i ++) {
- if ((lp->io.rx.nic_queues[i].port == port) &&
- (lp->io.rx.nic_queues[i].queue == queue)) {
- return -8;
- }
- }
- if (lp->io.rx.n_nic_queues >= APP_MAX_NIC_RX_QUEUES_PER_IO_LCORE) {
- return -9;
- }
- lp->io.rx.nic_queues[lp->io.rx.n_nic_queues].port = port;
- lp->io.rx.nic_queues[lp->io.rx.n_nic_queues].queue = (uint8_t) queue;
- lp->io.rx.n_nic_queues ++;
-
- n_tuples ++;
- if (n_tuples > APP_ARG_RX_MAX_TUPLES) {
- return -10;
- }
- }
-
- if (n_tuples == 0) {
- return -11;
- }
-
- return 0;
-}
-
-#ifndef APP_ARG_TX_MAX_CHARS
-#define APP_ARG_TX_MAX_CHARS 4096
-#endif
-
-#ifndef APP_ARG_TX_MAX_TUPLES
-#define APP_ARG_TX_MAX_TUPLES 128
-#endif
-
-static int
-parse_arg_tx(const char *arg)
-{
- const char *p0 = arg, *p = arg;
- uint32_t n_tuples;
-
- if (strnlen(arg, APP_ARG_TX_MAX_CHARS + 1) == APP_ARG_TX_MAX_CHARS + 1) {
- return -1;
- }
-
- n_tuples = 0;
- while ((p = strchr(p0,'(')) != NULL) {
- struct app_lcore_params *lp;
- uint32_t port, lcore, i;
-
- p0 = strchr(p++, ')');
- if ((p0 == NULL) ||
- (str_to_unsigned_vals(p, p0 - p, ',', 2, &port, &lcore) != 2)) {
- return -2;
- }
-
- /* Enable port and queue for later initialization */
- if (port >= APP_MAX_NIC_PORTS) {
- return -3;
- }
- if (app.nic_tx_port_mask[port] != 0) {
- return -4;
- }
- app.nic_tx_port_mask[port] = 1;
-
- /* Check and assign (port, queue) to I/O lcore */
- if (rte_lcore_is_enabled(lcore) == 0) {
- return -5;
- }
-
- if (lcore >= APP_MAX_LCORES) {
- return -6;
- }
- lp = &app.lcore_params[lcore];
- if (lp->type == e_APP_LCORE_WORKER) {
- return -7;
- }
- lp->type = e_APP_LCORE_IO;
- const size_t n_ports = RTE_MIN(lp->io.tx.n_nic_ports,
- RTE_DIM(lp->io.tx.nic_ports));
- for (i = 0; i < n_ports; i ++) {
- if (lp->io.tx.nic_ports[i] == port) {
- return -8;
- }
- }
- if (lp->io.tx.n_nic_ports >= APP_MAX_NIC_TX_PORTS_PER_IO_LCORE) {
- return -9;
- }
- lp->io.tx.nic_ports[lp->io.tx.n_nic_ports] = port;
- lp->io.tx.n_nic_ports ++;
-
- n_tuples ++;
- if (n_tuples > APP_ARG_TX_MAX_TUPLES) {
- return -10;
- }
- }
-
- if (n_tuples == 0) {
- return -11;
- }
-
- return 0;
-}
-
-#ifndef APP_ARG_W_MAX_CHARS
-#define APP_ARG_W_MAX_CHARS 4096
-#endif
-
-#ifndef APP_ARG_W_MAX_TUPLES
-#define APP_ARG_W_MAX_TUPLES APP_MAX_WORKER_LCORES
-#endif
-
-static int
-parse_arg_w(const char *arg)
-{
- const char *p = arg;
- uint32_t n_tuples;
-
- if (strnlen(arg, APP_ARG_W_MAX_CHARS + 1) == APP_ARG_W_MAX_CHARS + 1) {
- return -1;
- }
-
- n_tuples = 0;
- while (*p != 0) {
- struct app_lcore_params *lp;
- uint32_t lcore;
-
- errno = 0;
- lcore = strtoul(p, NULL, 0);
- if (errno != 0) {
- return -2;
- }
-
- /* Check and enable worker lcore */
- if (rte_lcore_is_enabled(lcore) == 0) {
- return -3;
- }
-
- if (lcore >= APP_MAX_LCORES) {
- return -4;
- }
- lp = &app.lcore_params[lcore];
- if (lp->type == e_APP_LCORE_IO) {
- return -5;
- }
- lp->type = e_APP_LCORE_WORKER;
-
- n_tuples ++;
- if (n_tuples > APP_ARG_W_MAX_TUPLES) {
- return -6;
- }
-
- p = strchr(p, ',');
- if (p == NULL) {
- break;
- }
- p ++;
- }
-
- if (n_tuples == 0) {
- return -7;
- }
-
- if ((n_tuples & (n_tuples - 1)) != 0) {
- return -8;
- }
-
- return 0;
-}
-
-#ifndef APP_ARG_LPM_MAX_CHARS
-#define APP_ARG_LPM_MAX_CHARS 4096
-#endif
-
-static int
-parse_arg_lpm(const char *arg)
-{
- const char *p = arg, *p0;
-
- if (strnlen(arg, APP_ARG_LPM_MAX_CHARS + 1) == APP_ARG_LPM_MAX_CHARS + 1) {
- return -1;
- }
-
- while (*p != 0) {
- uint32_t ip_a, ip_b, ip_c, ip_d, ip, depth, if_out;
- char *endptr;
-
- p0 = strchr(p, '/');
- if ((p0 == NULL) ||
- (str_to_unsigned_vals(p, p0 - p, '.', 4, &ip_a, &ip_b, &ip_c, &ip_d) != 4)) {
- return -2;
- }
-
- p = p0 + 1;
- errno = 0;
- depth = strtoul(p, &endptr, 0);
- if (errno != 0 || *endptr != '=') {
- return -3;
- }
- p = strchr(p, '>');
- if (p == NULL) {
- return -4;
- }
- if_out = strtoul(++p, &endptr, 0);
- if (errno != 0 || (*endptr != '\0' && *endptr != ';')) {
- return -5;
- }
-
- if ((ip_a >= 256) || (ip_b >= 256) || (ip_c >= 256) || (ip_d >= 256) ||
- (depth == 0) || (depth >= 32) ||
- (if_out >= APP_MAX_NIC_PORTS)) {
- return -6;
- }
- ip = (ip_a << 24) | (ip_b << 16) | (ip_c << 8) | ip_d;
-
- if (app.n_lpm_rules >= APP_MAX_LPM_RULES) {
- return -7;
- }
- app.lpm_rules[app.n_lpm_rules].ip = ip;
- app.lpm_rules[app.n_lpm_rules].depth = (uint8_t) depth;
- app.lpm_rules[app.n_lpm_rules].if_out = (uint8_t) if_out;
- app.n_lpm_rules ++;
-
- p = strchr(p, ';');
- if (p == NULL) {
- return -8;
- }
- p ++;
- }
-
- if (app.n_lpm_rules == 0) {
- return -9;
- }
-
- return 0;
-}
-
-static int
-app_check_lpm_table(void)
-{
- uint32_t rule;
-
- /* For each rule, check that the output I/F is enabled */
- for (rule = 0; rule < app.n_lpm_rules; rule ++)
- {
- uint32_t port = app.lpm_rules[rule].if_out;
-
- if (app.nic_tx_port_mask[port] == 0) {
- return -1;
- }
- }
-
- return 0;
-}
-
-static int
-app_check_every_rx_port_is_tx_enabled(void)
-{
- uint16_t port;
-
- for (port = 0; port < APP_MAX_NIC_PORTS; port ++) {
- if ((app_get_nic_rx_queues_per_port(port) > 0) && (app.nic_tx_port_mask[port] == 0)) {
- return -1;
- }
- }
-
- return 0;
-}
-
-#ifndef APP_ARG_RSZ_CHARS
-#define APP_ARG_RSZ_CHARS 63
-#endif
-
-static int
-parse_arg_rsz(const char *arg)
-{
- if (strnlen(arg, APP_ARG_RSZ_CHARS + 1) == APP_ARG_RSZ_CHARS + 1) {
- return -1;
- }
-
- if (str_to_unsigned_vals(arg, APP_ARG_RSZ_CHARS, ',', 4,
- &app.nic_rx_ring_size,
- &app.ring_rx_size,
- &app.ring_tx_size,
- &app.nic_tx_ring_size) != 4)
- return -2;
-
-
- if ((app.nic_rx_ring_size == 0) ||
- (app.nic_tx_ring_size == 0) ||
- (app.ring_rx_size == 0) ||
- (app.ring_tx_size == 0)) {
- return -3;
- }
-
- return 0;
-}
-
-#ifndef APP_ARG_BSZ_CHARS
-#define APP_ARG_BSZ_CHARS 63
-#endif
-
-static int
-parse_arg_bsz(const char *arg)
-{
- const char *p = arg, *p0;
- if (strnlen(arg, APP_ARG_BSZ_CHARS + 1) == APP_ARG_BSZ_CHARS + 1) {
- return -1;
- }
-
- p0 = strchr(p++, ')');
- if ((p0 == NULL) ||
- (str_to_unsigned_vals(p, p0 - p, ',', 2, &app.burst_size_io_rx_read, &app.burst_size_io_rx_write) != 2)) {
- return -2;
- }
-
- p = strchr(p0, '(');
- if (p == NULL) {
- return -3;
- }
-
- p0 = strchr(p++, ')');
- if ((p0 == NULL) ||
- (str_to_unsigned_vals(p, p0 - p, ',', 2, &app.burst_size_worker_read, &app.burst_size_worker_write) != 2)) {
- return -4;
- }
-
- p = strchr(p0, '(');
- if (p == NULL) {
- return -5;
- }
-
- p0 = strchr(p++, ')');
- if ((p0 == NULL) ||
- (str_to_unsigned_vals(p, p0 - p, ',', 2, &app.burst_size_io_tx_read, &app.burst_size_io_tx_write) != 2)) {
- return -6;
- }
-
- if ((app.burst_size_io_rx_read == 0) ||
- (app.burst_size_io_rx_write == 0) ||
- (app.burst_size_worker_read == 0) ||
- (app.burst_size_worker_write == 0) ||
- (app.burst_size_io_tx_read == 0) ||
- (app.burst_size_io_tx_write == 0)) {
- return -7;
- }
-
- if ((app.burst_size_io_rx_read > APP_MBUF_ARRAY_SIZE) ||
- (app.burst_size_io_rx_write > APP_MBUF_ARRAY_SIZE) ||
- (app.burst_size_worker_read > APP_MBUF_ARRAY_SIZE) ||
- (app.burst_size_worker_write > APP_MBUF_ARRAY_SIZE) ||
- ((2 * app.burst_size_io_tx_read) > APP_MBUF_ARRAY_SIZE) ||
- (app.burst_size_io_tx_write > APP_MBUF_ARRAY_SIZE)) {
- return -8;
- }
-
- return 0;
-}
-
-#ifndef APP_ARG_NUMERICAL_SIZE_CHARS
-#define APP_ARG_NUMERICAL_SIZE_CHARS 15
-#endif
-
-static int
-parse_arg_pos_lb(const char *arg)
-{
- uint32_t x;
- char *endpt;
-
- if (strnlen(arg, APP_ARG_NUMERICAL_SIZE_CHARS + 1) == APP_ARG_NUMERICAL_SIZE_CHARS + 1) {
- return -1;
- }
-
- errno = 0;
- x = strtoul(arg, &endpt, 10);
- if (errno != 0 || endpt == arg || *endpt != '\0'){
- return -2;
- }
-
- if (x >= 64) {
- return -3;
- }
-
- app.pos_lb = (uint8_t) x;
-
- return 0;
-}
-
-/* Parse the argument given in the command line of the application */
-int
-app_parse_args(int argc, char **argv)
-{
- int opt, ret;
- char **argvopt;
- int option_index;
- char *prgname = argv[0];
- static struct option lgopts[] = {
- {"rx", 1, 0, 0},
- {"tx", 1, 0, 0},
- {"w", 1, 0, 0},
- {"lpm", 1, 0, 0},
- {"rsz", 1, 0, 0},
- {"bsz", 1, 0, 0},
- {"pos-lb", 1, 0, 0},
- {NULL, 0, 0, 0}
- };
- uint32_t arg_w = 0;
- uint32_t arg_rx = 0;
- uint32_t arg_tx = 0;
- uint32_t arg_lpm = 0;
- uint32_t arg_rsz = 0;
- uint32_t arg_bsz = 0;
- uint32_t arg_pos_lb = 0;
-
- argvopt = argv;
-
- while ((opt = getopt_long(argc, argvopt, "",
- lgopts, &option_index)) != EOF) {
-
- switch (opt) {
- /* long options */
- case 0:
- if (!strcmp(lgopts[option_index].name, "rx")) {
- arg_rx = 1;
- ret = parse_arg_rx(optarg);
- if (ret) {
- printf("Incorrect value for --rx argument (%d)\n", ret);
- return -1;
- }
- }
- if (!strcmp(lgopts[option_index].name, "tx")) {
- arg_tx = 1;
- ret = parse_arg_tx(optarg);
- if (ret) {
- printf("Incorrect value for --tx argument (%d)\n", ret);
- return -1;
- }
- }
- if (!strcmp(lgopts[option_index].name, "w")) {
- arg_w = 1;
- ret = parse_arg_w(optarg);
- if (ret) {
- printf("Incorrect value for --w argument (%d)\n", ret);
- return -1;
- }
- }
- if (!strcmp(lgopts[option_index].name, "lpm")) {
- arg_lpm = 1;
- ret = parse_arg_lpm(optarg);
- if (ret) {
- printf("Incorrect value for --lpm argument (%d)\n", ret);
- return -1;
- }
- }
- if (!strcmp(lgopts[option_index].name, "rsz")) {
- arg_rsz = 1;
- ret = parse_arg_rsz(optarg);
- if (ret) {
- printf("Incorrect value for --rsz argument (%d)\n", ret);
- return -1;
- }
- }
- if (!strcmp(lgopts[option_index].name, "bsz")) {
- arg_bsz = 1;
- ret = parse_arg_bsz(optarg);
- if (ret) {
- printf("Incorrect value for --bsz argument (%d)\n", ret);
- return -1;
- }
- }
- if (!strcmp(lgopts[option_index].name, "pos-lb")) {
- arg_pos_lb = 1;
- ret = parse_arg_pos_lb(optarg);
- if (ret) {
- printf("Incorrect value for --pos-lb argument (%d)\n", ret);
- return -1;
- }
- }
- break;
-
- default:
- return -1;
- }
- }
-
- /* Check that all mandatory arguments are provided */
- if ((arg_rx == 0) || (arg_tx == 0) || (arg_w == 0) || (arg_lpm == 0)){
- printf("Not all mandatory arguments are present\n");
- return -1;
- }
-
- /* Assign default values for the optional arguments not provided */
- if (arg_rsz == 0) {
- app.nic_rx_ring_size = APP_DEFAULT_NIC_RX_RING_SIZE;
- app.nic_tx_ring_size = APP_DEFAULT_NIC_TX_RING_SIZE;
- app.ring_rx_size = APP_DEFAULT_RING_RX_SIZE;
- app.ring_tx_size = APP_DEFAULT_RING_TX_SIZE;
- }
-
- if (arg_bsz == 0) {
- app.burst_size_io_rx_read = APP_DEFAULT_BURST_SIZE_IO_RX_READ;
- app.burst_size_io_rx_write = APP_DEFAULT_BURST_SIZE_IO_RX_WRITE;
- app.burst_size_io_tx_read = APP_DEFAULT_BURST_SIZE_IO_TX_READ;
- app.burst_size_io_tx_write = APP_DEFAULT_BURST_SIZE_IO_TX_WRITE;
- app.burst_size_worker_read = APP_DEFAULT_BURST_SIZE_WORKER_READ;
- app.burst_size_worker_write = APP_DEFAULT_BURST_SIZE_WORKER_WRITE;
- }
-
- if (arg_pos_lb == 0) {
- app.pos_lb = APP_DEFAULT_IO_RX_LB_POS;
- }
-
- /* Check cross-consistency of arguments */
- if ((ret = app_check_lpm_table()) < 0) {
- printf("At least one LPM rule is inconsistent (%d)\n", ret);
- return -1;
- }
- if (app_check_every_rx_port_is_tx_enabled() < 0) {
- printf("On LPM lookup miss, packet is sent back on the input port.\n");
- printf("At least one RX port is not enabled for TX.\n");
- return -2;
- }
-
- if (optind >= 0)
- argv[optind - 1] = prgname;
-
- ret = optind - 1;
- optind = 1; /* reset getopt lib */
- return ret;
-}
-
-int
-app_get_nic_rx_queues_per_port(uint16_t port)
-{
- uint32_t i, count;
-
- if (port >= APP_MAX_NIC_PORTS) {
- return -1;
- }
-
- count = 0;
- for (i = 0; i < APP_MAX_RX_QUEUES_PER_NIC_PORT; i ++) {
- if (app.nic_rx_queue_mask[port][i] == 1) {
- count ++;
- }
- }
-
- return count;
-}
-
-int
-app_get_lcore_for_nic_rx(uint16_t port, uint8_t queue, uint32_t *lcore_out)
-{
- uint32_t lcore;
-
- for (lcore = 0; lcore < APP_MAX_LCORES; lcore ++) {
- struct app_lcore_params_io *lp = &app.lcore_params[lcore].io;
- uint32_t i;
-
- if (app.lcore_params[lcore].type != e_APP_LCORE_IO) {
- continue;
- }
-
- const size_t n_queues = RTE_MIN(lp->rx.n_nic_queues,
- RTE_DIM(lp->rx.nic_queues));
- for (i = 0; i < n_queues; i ++) {
- if ((lp->rx.nic_queues[i].port == port) &&
- (lp->rx.nic_queues[i].queue == queue)) {
- *lcore_out = lcore;
- return 0;
- }
- }
- }
-
- return -1;
-}
-
-int
-app_get_lcore_for_nic_tx(uint16_t port, uint32_t *lcore_out)
-{
- uint32_t lcore;
-
- for (lcore = 0; lcore < APP_MAX_LCORES; lcore ++) {
- struct app_lcore_params_io *lp = &app.lcore_params[lcore].io;
- uint32_t i;
-
- if (app.lcore_params[lcore].type != e_APP_LCORE_IO) {
- continue;
- }
-
- const size_t n_ports = RTE_MIN(lp->tx.n_nic_ports,
- RTE_DIM(lp->tx.nic_ports));
- for (i = 0; i < n_ports; i ++) {
- if (lp->tx.nic_ports[i] == port) {
- *lcore_out = lcore;
- return 0;
- }
- }
- }
-
- return -1;
-}
-
-int
-app_is_socket_used(uint32_t socket)
-{
- uint32_t lcore;
-
- for (lcore = 0; lcore < APP_MAX_LCORES; lcore ++) {
- if (app.lcore_params[lcore].type == e_APP_LCORE_DISABLED) {
- continue;
- }
-
- if (socket == rte_lcore_to_socket_id(lcore)) {
- return 1;
- }
- }
-
- return 0;
-}
-
-uint32_t
-app_get_lcores_io_rx(void)
-{
- uint32_t lcore, count;
-
- count = 0;
- for (lcore = 0; lcore < APP_MAX_LCORES; lcore ++) {
- struct app_lcore_params_io *lp_io = &app.lcore_params[lcore].io;
-
- if ((app.lcore_params[lcore].type != e_APP_LCORE_IO) ||
- (lp_io->rx.n_nic_queues == 0)) {
- continue;
- }
-
- count ++;
- }
-
- return count;
-}
-
-uint32_t
-app_get_lcores_worker(void)
-{
- uint32_t lcore, count;
-
- count = 0;
- for (lcore = 0; lcore < APP_MAX_LCORES; lcore ++) {
- if (app.lcore_params[lcore].type != e_APP_LCORE_WORKER) {
- continue;
- }
-
- count ++;
- }
-
- if (count > APP_MAX_WORKER_LCORES) {
- rte_panic("Algorithmic error (too many worker lcores)\n");
- return 0;
- }
-
- return count;
-}
-
-void
-app_print_params(void)
-{
- unsigned port, queue, lcore, rule, i, j;
-
- /* Print NIC RX configuration */
- printf("NIC RX ports: ");
- for (port = 0; port < APP_MAX_NIC_PORTS; port ++) {
- uint32_t n_rx_queues = app_get_nic_rx_queues_per_port(port);
-
- if (n_rx_queues == 0) {
- continue;
- }
-
- printf("%u (", port);
- for (queue = 0; queue < APP_MAX_RX_QUEUES_PER_NIC_PORT; queue ++) {
- if (app.nic_rx_queue_mask[port][queue] == 1) {
- printf("%u ", queue);
- }
- }
- printf(") ");
- }
- printf(";\n");
-
- /* Print I/O lcore RX params */
- for (lcore = 0; lcore < APP_MAX_LCORES; lcore ++) {
- struct app_lcore_params_io *lp = &app.lcore_params[lcore].io;
-
- if ((app.lcore_params[lcore].type != e_APP_LCORE_IO) ||
- (lp->rx.n_nic_queues == 0)) {
- continue;
- }
-
- printf("I/O lcore %u (socket %u): ", lcore, rte_lcore_to_socket_id(lcore));
-
- printf("RX ports ");
- for (i = 0; i < lp->rx.n_nic_queues; i ++) {
- printf("(%u, %u) ",
- (unsigned) lp->rx.nic_queues[i].port,
- (unsigned) lp->rx.nic_queues[i].queue);
- }
- printf("; ");
-
- printf("Output rings ");
- for (i = 0; i < lp->rx.n_rings; i ++) {
- printf("%p ", lp->rx.rings[i]);
- }
- printf(";\n");
- }
-
- /* Print worker lcore RX params */
- for (lcore = 0; lcore < APP_MAX_LCORES; lcore ++) {
- struct app_lcore_params_worker *lp = &app.lcore_params[lcore].worker;
-
- if (app.lcore_params[lcore].type != e_APP_LCORE_WORKER) {
- continue;
- }
-
- printf("Worker lcore %u (socket %u) ID %u: ",
- lcore,
- rte_lcore_to_socket_id(lcore),
- (unsigned)lp->worker_id);
-
- printf("Input rings ");
- for (i = 0; i < lp->n_rings_in; i ++) {
- printf("%p ", lp->rings_in[i]);
- }
-
- printf(";\n");
- }
-
- printf("\n");
-
- /* Print NIC TX configuration */
- printf("NIC TX ports: ");
- for (port = 0; port < APP_MAX_NIC_PORTS; port ++) {
- if (app.nic_tx_port_mask[port] == 1) {
- printf("%u ", port);
- }
- }
- printf(";\n");
-
- /* Print I/O TX lcore params */
- for (lcore = 0; lcore < APP_MAX_LCORES; lcore ++) {
- struct app_lcore_params_io *lp = &app.lcore_params[lcore].io;
- uint32_t n_workers = app_get_lcores_worker();
-
- if ((app.lcore_params[lcore].type != e_APP_LCORE_IO) ||
- (lp->tx.n_nic_ports == 0)) {
- continue;
- }
-
- printf("I/O lcore %u (socket %u): ", lcore, rte_lcore_to_socket_id(lcore));
-
- printf("Input rings per TX port ");
- for (i = 0; i < lp->tx.n_nic_ports; i ++) {
- port = lp->tx.nic_ports[i];
-
- printf("%u (", port);
- for (j = 0; j < n_workers; j ++) {
- printf("%p ", lp->tx.rings[port][j]);
- }
- printf(") ");
-
- }
-
- printf(";\n");
- }
-
- /* Print worker lcore TX params */
- for (lcore = 0; lcore < APP_MAX_LCORES; lcore ++) {
- struct app_lcore_params_worker *lp = &app.lcore_params[lcore].worker;
-
- if (app.lcore_params[lcore].type != e_APP_LCORE_WORKER) {
- continue;
- }
-
- printf("Worker lcore %u (socket %u) ID %u: \n",
- lcore,
- rte_lcore_to_socket_id(lcore),
- (unsigned)lp->worker_id);
-
- printf("Output rings per TX port ");
- for (port = 0; port < APP_MAX_NIC_PORTS; port ++) {
- if (lp->rings_out[port] != NULL) {
- printf("%u (%p) ", port, lp->rings_out[port]);
- }
- }
-
- printf(";\n");
- }
-
- /* Print LPM rules */
- printf("LPM rules: \n");
- for (rule = 0; rule < app.n_lpm_rules; rule ++) {
- uint32_t ip = app.lpm_rules[rule].ip;
- uint8_t depth = app.lpm_rules[rule].depth;
- uint8_t if_out = app.lpm_rules[rule].if_out;
-
- printf("\t%u: %u.%u.%u.%u/%u => %u;\n",
- rule,
- (unsigned) (ip & 0xFF000000) >> 24,
- (unsigned) (ip & 0x00FF0000) >> 16,
- (unsigned) (ip & 0x0000FF00) >> 8,
- (unsigned) ip & 0x000000FF,
- (unsigned) depth,
- (unsigned) if_out
- );
- }
-
- /* Rings */
- printf("Ring sizes: NIC RX = %u; Worker in = %u; Worker out = %u; NIC TX = %u;\n",
- (unsigned) app.nic_rx_ring_size,
- (unsigned) app.ring_rx_size,
- (unsigned) app.ring_tx_size,
- (unsigned) app.nic_tx_ring_size);
-
- /* Bursts */
- printf("Burst sizes: I/O RX (rd = %u, wr = %u); Worker (rd = %u, wr = %u); I/O TX (rd = %u, wr = %u)\n",
- (unsigned) app.burst_size_io_rx_read,
- (unsigned) app.burst_size_io_rx_write,
- (unsigned) app.burst_size_worker_read,
- (unsigned) app.burst_size_worker_write,
- (unsigned) app.burst_size_io_tx_read,
- (unsigned) app.burst_size_io_tx_write);
-}
diff --git a/examples/load_balancer/init.c b/examples/load_balancer/init.c
deleted file mode 100644
index 3ab7d0211..000000000
--- a/examples/load_balancer/init.c
+++ /dev/null
@@ -1,520 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#include <stdio.h>
-#include <stdlib.h>
-#include <stdint.h>
-#include <inttypes.h>
-#include <sys/types.h>
-#include <string.h>
-#include <sys/queue.h>
-#include <stdarg.h>
-#include <errno.h>
-#include <getopt.h>
-
-#include <rte_common.h>
-#include <rte_byteorder.h>
-#include <rte_log.h>
-#include <rte_memory.h>
-#include <rte_memcpy.h>
-#include <rte_eal.h>
-#include <rte_launch.h>
-#include <rte_atomic.h>
-#include <rte_cycles.h>
-#include <rte_prefetch.h>
-#include <rte_lcore.h>
-#include <rte_per_lcore.h>
-#include <rte_branch_prediction.h>
-#include <rte_interrupts.h>
-#include <rte_random.h>
-#include <rte_debug.h>
-#include <rte_ether.h>
-#include <rte_ethdev.h>
-#include <rte_ring.h>
-#include <rte_mempool.h>
-#include <rte_mbuf.h>
-#include <rte_string_fns.h>
-#include <rte_ip.h>
-#include <rte_tcp.h>
-#include <rte_lpm.h>
-
-#include "main.h"
-
-static struct rte_eth_conf port_conf = {
- .rxmode = {
- .mq_mode = ETH_MQ_RX_RSS,
- .split_hdr_size = 0,
- .offloads = DEV_RX_OFFLOAD_CHECKSUM,
- },
- .rx_adv_conf = {
- .rss_conf = {
- .rss_key = NULL,
- .rss_hf = ETH_RSS_IP,
- },
- },
- .txmode = {
- .mq_mode = ETH_MQ_TX_NONE,
- },
-};
-
-static void
-app_assign_worker_ids(void)
-{
- uint32_t lcore, worker_id;
-
- /* Assign ID for each worker */
- worker_id = 0;
- for (lcore = 0; lcore < APP_MAX_LCORES; lcore ++) {
- struct app_lcore_params_worker *lp_worker = &app.lcore_params[lcore].worker;
-
- if (app.lcore_params[lcore].type != e_APP_LCORE_WORKER) {
- continue;
- }
-
- lp_worker->worker_id = worker_id;
- worker_id ++;
- }
-}
-
-static void
-app_init_mbuf_pools(void)
-{
- unsigned socket, lcore;
-
- /* Init the buffer pools */
- for (socket = 0; socket < APP_MAX_SOCKETS; socket ++) {
- char name[32];
- if (app_is_socket_used(socket) == 0) {
- continue;
- }
-
- snprintf(name, sizeof(name), "mbuf_pool_%u", socket);
- printf("Creating the mbuf pool for socket %u ...\n", socket);
- app.pools[socket] = rte_pktmbuf_pool_create(
- name, APP_DEFAULT_MEMPOOL_BUFFERS,
- APP_DEFAULT_MEMPOOL_CACHE_SIZE,
- 0, APP_DEFAULT_MBUF_DATA_SIZE, socket);
- if (app.pools[socket] == NULL) {
- rte_panic("Cannot create mbuf pool on socket %u\n", socket);
- }
- }
-
- for (lcore = 0; lcore < APP_MAX_LCORES; lcore ++) {
- if (app.lcore_params[lcore].type == e_APP_LCORE_DISABLED) {
- continue;
- }
-
- socket = rte_lcore_to_socket_id(lcore);
- app.lcore_params[lcore].pool = app.pools[socket];
- }
-}
-
-static void
-app_init_lpm_tables(void)
-{
- unsigned socket, lcore;
-
- /* Init the LPM tables */
- for (socket = 0; socket < APP_MAX_SOCKETS; socket ++) {
- char name[32];
- uint32_t rule;
-
- if (app_is_socket_used(socket) == 0) {
- continue;
- }
-
- struct rte_lpm_config lpm_config;
-
- lpm_config.max_rules = APP_MAX_LPM_RULES;
- lpm_config.number_tbl8s = 256;
- lpm_config.flags = 0;
- snprintf(name, sizeof(name), "lpm_table_%u", socket);
- printf("Creating the LPM table for socket %u ...\n", socket);
- app.lpm_tables[socket] = rte_lpm_create(
- name,
- socket,
- &lpm_config);
- if (app.lpm_tables[socket] == NULL) {
- rte_panic("Unable to create LPM table on socket %u\n", socket);
- }
-
- for (rule = 0; rule < app.n_lpm_rules; rule ++) {
- int ret;
-
- ret = rte_lpm_add(app.lpm_tables[socket],
- app.lpm_rules[rule].ip,
- app.lpm_rules[rule].depth,
- app.lpm_rules[rule].if_out);
-
- if (ret < 0) {
- rte_panic("Unable to add entry %u (%x/%u => %u) to the LPM table on socket %u (%d)\n",
- (unsigned) rule,
- (unsigned) app.lpm_rules[rule].ip,
- (unsigned) app.lpm_rules[rule].depth,
- (unsigned) app.lpm_rules[rule].if_out,
- socket,
- ret);
- }
- }
-
- }
-
- for (lcore = 0; lcore < APP_MAX_LCORES; lcore ++) {
- if (app.lcore_params[lcore].type != e_APP_LCORE_WORKER) {
- continue;
- }
-
- socket = rte_lcore_to_socket_id(lcore);
- app.lcore_params[lcore].worker.lpm_table = app.lpm_tables[socket];
- }
-}
-
-static void
-app_init_rings_rx(void)
-{
- unsigned lcore;
-
- /* Initialize the rings for the RX side */
- for (lcore = 0; lcore < APP_MAX_LCORES; lcore ++) {
- struct app_lcore_params_io *lp_io = &app.lcore_params[lcore].io;
- unsigned socket_io, lcore_worker;
-
- if ((app.lcore_params[lcore].type != e_APP_LCORE_IO) ||
- (lp_io->rx.n_nic_queues == 0)) {
- continue;
- }
-
- socket_io = rte_lcore_to_socket_id(lcore);
-
- for (lcore_worker = 0; lcore_worker < APP_MAX_LCORES; lcore_worker ++) {
- char name[32];
- struct app_lcore_params_worker *lp_worker = &app.lcore_params[lcore_worker].worker;
- struct rte_ring *ring = NULL;
-
- if (app.lcore_params[lcore_worker].type != e_APP_LCORE_WORKER) {
- continue;
- }
-
- printf("Creating ring to connect I/O lcore %u (socket %u) with worker lcore %u ...\n",
- lcore,
- socket_io,
- lcore_worker);
- snprintf(name, sizeof(name), "app_ring_rx_s%u_io%u_w%u",
- socket_io,
- lcore,
- lcore_worker);
- ring = rte_ring_create(
- name,
- app.ring_rx_size,
- socket_io,
- RING_F_SP_ENQ | RING_F_SC_DEQ);
- if (ring == NULL) {
- rte_panic("Cannot create ring to connect I/O core %u with worker core %u\n",
- lcore,
- lcore_worker);
- }
-
- lp_io->rx.rings[lp_io->rx.n_rings] = ring;
- lp_io->rx.n_rings ++;
-
- lp_worker->rings_in[lp_worker->n_rings_in] = ring;
- lp_worker->n_rings_in ++;
- }
- }
-
- for (lcore = 0; lcore < APP_MAX_LCORES; lcore ++) {
- struct app_lcore_params_io *lp_io = &app.lcore_params[lcore].io;
-
- if ((app.lcore_params[lcore].type != e_APP_LCORE_IO) ||
- (lp_io->rx.n_nic_queues == 0)) {
- continue;
- }
-
- if (lp_io->rx.n_rings != app_get_lcores_worker()) {
- rte_panic("Algorithmic error (I/O RX rings)\n");
- }
- }
-
- for (lcore = 0; lcore < APP_MAX_LCORES; lcore ++) {
- struct app_lcore_params_worker *lp_worker = &app.lcore_params[lcore].worker;
-
- if (app.lcore_params[lcore].type != e_APP_LCORE_WORKER) {
- continue;
- }
-
- if (lp_worker->n_rings_in != app_get_lcores_io_rx()) {
- rte_panic("Algorithmic error (worker input rings)\n");
- }
- }
-}
-
-static void
-app_init_rings_tx(void)
-{
- unsigned lcore;
-
- /* Initialize the rings for the TX side */
- for (lcore = 0; lcore < APP_MAX_LCORES; lcore ++) {
- struct app_lcore_params_worker *lp_worker = &app.lcore_params[lcore].worker;
- unsigned port;
-
- if (app.lcore_params[lcore].type != e_APP_LCORE_WORKER) {
- continue;
- }
-
- for (port = 0; port < APP_MAX_NIC_PORTS; port ++) {
- char name[32];
- struct app_lcore_params_io *lp_io = NULL;
- struct rte_ring *ring;
- uint32_t socket_io, lcore_io;
-
- if (app.nic_tx_port_mask[port] == 0) {
- continue;
- }
-
- if (app_get_lcore_for_nic_tx(port, &lcore_io) < 0) {
- rte_panic("Algorithmic error (no I/O core to handle TX of port %u)\n",
- port);
- }
-
- lp_io = &app.lcore_params[lcore_io].io;
- socket_io = rte_lcore_to_socket_id(lcore_io);
-
- printf("Creating ring to connect worker lcore %u with TX port %u (through I/O lcore %u) (socket %u) ...\n",
- lcore, port, (unsigned)lcore_io, (unsigned)socket_io);
- snprintf(name, sizeof(name), "app_ring_tx_s%u_w%u_p%u", socket_io, lcore, port);
- ring = rte_ring_create(
- name,
- app.ring_tx_size,
- socket_io,
- RING_F_SP_ENQ | RING_F_SC_DEQ);
- if (ring == NULL) {
- rte_panic("Cannot create ring to connect worker core %u with TX port %u\n",
- lcore,
- port);
- }
-
- lp_worker->rings_out[port] = ring;
- lp_io->tx.rings[port][lp_worker->worker_id] = ring;
- }
- }
-
- for (lcore = 0; lcore < APP_MAX_LCORES; lcore ++) {
- struct app_lcore_params_io *lp_io = &app.lcore_params[lcore].io;
- unsigned i;
-
- if ((app.lcore_params[lcore].type != e_APP_LCORE_IO) ||
- (lp_io->tx.n_nic_ports == 0)) {
- continue;
- }
-
- for (i = 0; i < lp_io->tx.n_nic_ports; i ++){
- unsigned port, j;
-
- port = lp_io->tx.nic_ports[i];
- for (j = 0; j < app_get_lcores_worker(); j ++) {
- if (lp_io->tx.rings[port][j] == NULL) {
- rte_panic("Algorithmic error (I/O TX rings)\n");
- }
- }
- }
- }
-}
-
-/* Check the link status of all ports; wait up to 9 seconds, then print the final status */
-static void
-check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
-{
-#define CHECK_INTERVAL 100 /* 100ms */
-#define MAX_CHECK_TIME 90 /* 9s (90 * 100ms) in total */
- uint16_t portid;
- uint8_t count, all_ports_up, print_flag = 0;
- struct rte_eth_link link;
- uint32_t n_rx_queues, n_tx_queues;
-
- printf("\nChecking link status");
- fflush(stdout);
- for (count = 0; count <= MAX_CHECK_TIME; count++) {
- all_ports_up = 1;
- for (portid = 0; portid < port_num; portid++) {
- if ((port_mask & (1 << portid)) == 0)
- continue;
- n_rx_queues = app_get_nic_rx_queues_per_port(portid);
- n_tx_queues = app.nic_tx_port_mask[portid];
- if ((n_rx_queues == 0) && (n_tx_queues == 0))
- continue;
- memset(&link, 0, sizeof(link));
- rte_eth_link_get_nowait(portid, &link);
- /* print link status if flag set */
- if (print_flag == 1) {
- if (link.link_status)
- printf(
- "Port%d Link Up - speed %uMbps - %s\n",
- portid, link.link_speed,
- (link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
- ("full-duplex") : ("half-duplex\n"));
- else
- printf("Port %d Link Down\n", portid);
- continue;
- }
- /* clear all_ports_up flag if any link down */
- if (link.link_status == ETH_LINK_DOWN) {
- all_ports_up = 0;
- break;
- }
- }
- /* after finally printing all link status, get out */
- if (print_flag == 1)
- break;
-
- if (all_ports_up == 0) {
- printf(".");
- fflush(stdout);
- rte_delay_ms(CHECK_INTERVAL);
- }
-
- /* set the print_flag if all ports up or timeout */
- if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1)) {
- print_flag = 1;
- printf("done\n");
- }
- }
-}
-
-static void
-app_init_nics(void)
-{
- unsigned socket;
- uint32_t lcore;
- uint16_t port;
- uint8_t queue;
- int ret;
- uint32_t n_rx_queues, n_tx_queues;
-
- /* Init NIC ports and queues, then start the ports */
- for (port = 0; port < APP_MAX_NIC_PORTS; port ++) {
- struct rte_mempool *pool;
- uint16_t nic_rx_ring_size;
- uint16_t nic_tx_ring_size;
- struct rte_eth_rxconf rxq_conf;
- struct rte_eth_txconf txq_conf;
- struct rte_eth_dev_info dev_info;
- struct rte_eth_conf local_port_conf = port_conf;
-
- n_rx_queues = app_get_nic_rx_queues_per_port(port);
- n_tx_queues = app.nic_tx_port_mask[port];
-
- if ((n_rx_queues == 0) && (n_tx_queues == 0)) {
- continue;
- }
-
- /* Init port */
- printf("Initializing NIC port %u ...\n", port);
- rte_eth_dev_info_get(port, &dev_info);
- if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
- local_port_conf.txmode.offloads |=
- DEV_TX_OFFLOAD_MBUF_FAST_FREE;
-
- local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
- dev_info.flow_type_rss_offloads;
- if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
- port_conf.rx_adv_conf.rss_conf.rss_hf) {
- printf("Port %u modified RSS hash function based on hardware support,"
- "requested:%#"PRIx64" configured:%#"PRIx64"\n",
- port,
- port_conf.rx_adv_conf.rss_conf.rss_hf,
- local_port_conf.rx_adv_conf.rss_conf.rss_hf);
- }
-
- ret = rte_eth_dev_configure(
- port,
- (uint8_t) n_rx_queues,
- (uint8_t) n_tx_queues,
- &local_port_conf);
- if (ret < 0) {
- rte_panic("Cannot init NIC port %u (%d)\n", port, ret);
- }
- rte_eth_promiscuous_enable(port);
-
- nic_rx_ring_size = app.nic_rx_ring_size;
- nic_tx_ring_size = app.nic_tx_ring_size;
- ret = rte_eth_dev_adjust_nb_rx_tx_desc(
- port, &nic_rx_ring_size, &nic_tx_ring_size);
- if (ret < 0) {
- rte_panic("Cannot adjust number of descriptors for port %u (%d)\n",
- port, ret);
- }
- app.nic_rx_ring_size = nic_rx_ring_size;
- app.nic_tx_ring_size = nic_tx_ring_size;
-
- rxq_conf = dev_info.default_rxconf;
- rxq_conf.offloads = local_port_conf.rxmode.offloads;
- /* Init RX queues */
- for (queue = 0; queue < APP_MAX_RX_QUEUES_PER_NIC_PORT; queue ++) {
- if (app.nic_rx_queue_mask[port][queue] == 0) {
- continue;
- }
-
- app_get_lcore_for_nic_rx(port, queue, &lcore);
- socket = rte_lcore_to_socket_id(lcore);
- pool = app.lcore_params[lcore].pool;
-
- printf("Initializing NIC port %u RX queue %u ...\n",
- port, queue);
- ret = rte_eth_rx_queue_setup(
- port,
- queue,
- (uint16_t) app.nic_rx_ring_size,
- socket,
- &rxq_conf,
- pool);
- if (ret < 0) {
- rte_panic("Cannot init RX queue %u for port %u (%d)\n",
- queue, port, ret);
- }
- }
-
- txq_conf = dev_info.default_txconf;
- txq_conf.offloads = local_port_conf.txmode.offloads;
- /* Init TX queues */
- if (app.nic_tx_port_mask[port] == 1) {
- app_get_lcore_for_nic_tx(port, &lcore);
- socket = rte_lcore_to_socket_id(lcore);
- printf("Initializing NIC port %u TX queue 0 ...\n",
- port);
- ret = rte_eth_tx_queue_setup(
- port,
- 0,
- (uint16_t) app.nic_tx_ring_size,
- socket,
- &txq_conf);
- if (ret < 0) {
- rte_panic("Cannot init TX queue 0 for port %d (%d)\n",
- port,
- ret);
- }
- }
-
- /* Start port */
- ret = rte_eth_dev_start(port);
- if (ret < 0) {
- rte_panic("Cannot start port %d (%d)\n", port, ret);
- }
- }
-
- check_all_ports_link_status(APP_MAX_NIC_PORTS, (~0x0));
-}
-
-void
-app_init(void)
-{
- app_assign_worker_ids();
- app_init_mbuf_pools();
- app_init_lpm_tables();
- app_init_rings_rx();
- app_init_rings_tx();
- app_init_nics();
-
- printf("Initialization completed.\n");
-}
diff --git a/examples/load_balancer/main.c b/examples/load_balancer/main.c
deleted file mode 100644
index d3dcb235d..000000000
--- a/examples/load_balancer/main.c
+++ /dev/null
@@ -1,76 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#include <stdio.h>
-#include <stdlib.h>
-#include <stdint.h>
-#include <inttypes.h>
-#include <sys/types.h>
-#include <string.h>
-#include <sys/queue.h>
-#include <stdarg.h>
-#include <errno.h>
-#include <getopt.h>
-#include <unistd.h>
-
-#include <rte_common.h>
-#include <rte_byteorder.h>
-#include <rte_log.h>
-#include <rte_memory.h>
-#include <rte_memcpy.h>
-#include <rte_eal.h>
-#include <rte_launch.h>
-#include <rte_atomic.h>
-#include <rte_cycles.h>
-#include <rte_prefetch.h>
-#include <rte_lcore.h>
-#include <rte_per_lcore.h>
-#include <rte_branch_prediction.h>
-#include <rte_interrupts.h>
-#include <rte_random.h>
-#include <rte_debug.h>
-#include <rte_ether.h>
-#include <rte_ethdev.h>
-#include <rte_mempool.h>
-#include <rte_mbuf.h>
-#include <rte_ip.h>
-#include <rte_tcp.h>
-#include <rte_lpm.h>
-
-#include "main.h"
-
-int
-main(int argc, char **argv)
-{
- uint32_t lcore;
- int ret;
-
- /* Init EAL */
- ret = rte_eal_init(argc, argv);
- if (ret < 0)
- return -1;
- argc -= ret;
- argv += ret;
-
- /* Parse application arguments (after the EAL ones) */
- ret = app_parse_args(argc, argv);
- if (ret < 0) {
- app_print_usage();
- return -1;
- }
-
- /* Init */
- app_init();
- app_print_params();
-
- /* Launch per-lcore init on every lcore */
- rte_eal_mp_remote_launch(app_lcore_main_loop, NULL, CALL_MASTER);
- RTE_LCORE_FOREACH_SLAVE(lcore) {
- if (rte_eal_wait_lcore(lcore) < 0) {
- return -1;
- }
- }
-
- return 0;
-}
diff --git a/examples/load_balancer/main.h b/examples/load_balancer/main.h
deleted file mode 100644
index 9fefb62ed..000000000
--- a/examples/load_balancer/main.h
+++ /dev/null
@@ -1,351 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#ifndef _MAIN_H_
-#define _MAIN_H_
-
-/* Logical cores */
-#ifndef APP_MAX_SOCKETS
-#define APP_MAX_SOCKETS 2
-#endif
-
-#ifndef APP_MAX_LCORES
-#define APP_MAX_LCORES RTE_MAX_LCORE
-#endif
-
-#ifndef APP_MAX_NIC_PORTS
-#define APP_MAX_NIC_PORTS RTE_MAX_ETHPORTS
-#endif
-
-#ifndef APP_MAX_RX_QUEUES_PER_NIC_PORT
-#define APP_MAX_RX_QUEUES_PER_NIC_PORT 128
-#endif
-
-#ifndef APP_MAX_TX_QUEUES_PER_NIC_PORT
-#define APP_MAX_TX_QUEUES_PER_NIC_PORT 128
-#endif
-
-#ifndef APP_MAX_IO_LCORES
-#if (APP_MAX_LCORES > 16)
-#define APP_MAX_IO_LCORES 16
-#else
-#define APP_MAX_IO_LCORES APP_MAX_LCORES
-#endif
-#endif
-#if (APP_MAX_IO_LCORES > APP_MAX_LCORES)
-#error "APP_MAX_IO_LCORES is too big"
-#endif
-
-#ifndef APP_MAX_NIC_RX_QUEUES_PER_IO_LCORE
-#define APP_MAX_NIC_RX_QUEUES_PER_IO_LCORE 16
-#endif
-
-#ifndef APP_MAX_NIC_TX_PORTS_PER_IO_LCORE
-#define APP_MAX_NIC_TX_PORTS_PER_IO_LCORE 16
-#endif
-#if (APP_MAX_NIC_TX_PORTS_PER_IO_LCORE > APP_MAX_NIC_PORTS)
-#error "APP_MAX_NIC_TX_PORTS_PER_IO_LCORE too big"
-#endif
-
-#ifndef APP_MAX_WORKER_LCORES
-#if (APP_MAX_LCORES > 16)
-#define APP_MAX_WORKER_LCORES 16
-#else
-#define APP_MAX_WORKER_LCORES APP_MAX_LCORES
-#endif
-#endif
-#if (APP_MAX_WORKER_LCORES > APP_MAX_LCORES)
-#error "APP_MAX_WORKER_LCORES is too big"
-#endif
-
-
-/* Mempools */
-#ifndef APP_DEFAULT_MBUF_DATA_SIZE
-#define APP_DEFAULT_MBUF_DATA_SIZE RTE_MBUF_DEFAULT_BUF_SIZE
-#endif
-
-#ifndef APP_DEFAULT_MEMPOOL_BUFFERS
-#define APP_DEFAULT_MEMPOOL_BUFFERS (8192 * 4)
-#endif
-
-#ifndef APP_DEFAULT_MEMPOOL_CACHE_SIZE
-#define APP_DEFAULT_MEMPOOL_CACHE_SIZE 256
-#endif
-
-/* LPM Tables */
-#ifndef APP_MAX_LPM_RULES
-#define APP_MAX_LPM_RULES 1024
-#endif
-
-/* NIC RX */
-#ifndef APP_DEFAULT_NIC_RX_RING_SIZE
-#define APP_DEFAULT_NIC_RX_RING_SIZE 1024
-#endif
-
-/*
- * RX and TX Prefetch, Host, and Write-back threshold values should be
- * carefully set for optimal performance. Consult the network
- * controller's datasheet and supporting DPDK documentation for guidance
- * on how these parameters should be set.
- */
-#ifndef APP_DEFAULT_NIC_RX_PTHRESH
-#define APP_DEFAULT_NIC_RX_PTHRESH 8
-#endif
-
-#ifndef APP_DEFAULT_NIC_RX_HTHRESH
-#define APP_DEFAULT_NIC_RX_HTHRESH 8
-#endif
-
-#ifndef APP_DEFAULT_NIC_RX_WTHRESH
-#define APP_DEFAULT_NIC_RX_WTHRESH 4
-#endif
-
-#ifndef APP_DEFAULT_NIC_RX_FREE_THRESH
-#define APP_DEFAULT_NIC_RX_FREE_THRESH 64
-#endif
-
-#ifndef APP_DEFAULT_NIC_RX_DROP_EN
-#define APP_DEFAULT_NIC_RX_DROP_EN 0
-#endif
-
-/* NIC TX */
-#ifndef APP_DEFAULT_NIC_TX_RING_SIZE
-#define APP_DEFAULT_NIC_TX_RING_SIZE 1024
-#endif
-
-/*
- * These default values are optimized for use with the Intel(R) 82599 10 GbE
- * Controller and the DPDK ixgbe PMD. Consider using other values for other
- * network controllers and/or network drivers.
- */
-#ifndef APP_DEFAULT_NIC_TX_PTHRESH
-#define APP_DEFAULT_NIC_TX_PTHRESH 36
-#endif
-
-#ifndef APP_DEFAULT_NIC_TX_HTHRESH
-#define APP_DEFAULT_NIC_TX_HTHRESH 0
-#endif
-
-#ifndef APP_DEFAULT_NIC_TX_WTHRESH
-#define APP_DEFAULT_NIC_TX_WTHRESH 0
-#endif
-
-#ifndef APP_DEFAULT_NIC_TX_FREE_THRESH
-#define APP_DEFAULT_NIC_TX_FREE_THRESH 0
-#endif
-
-#ifndef APP_DEFAULT_NIC_TX_RS_THRESH
-#define APP_DEFAULT_NIC_TX_RS_THRESH 0
-#endif
-
-/* Software Rings */
-#ifndef APP_DEFAULT_RING_RX_SIZE
-#define APP_DEFAULT_RING_RX_SIZE 1024
-#endif
-
-#ifndef APP_DEFAULT_RING_TX_SIZE
-#define APP_DEFAULT_RING_TX_SIZE 1024
-#endif
-
-/* Bursts */
-#ifndef APP_MBUF_ARRAY_SIZE
-#define APP_MBUF_ARRAY_SIZE 512
-#endif
-
-#ifndef APP_DEFAULT_BURST_SIZE_IO_RX_READ
-#define APP_DEFAULT_BURST_SIZE_IO_RX_READ 144
-#endif
-#if (APP_DEFAULT_BURST_SIZE_IO_RX_READ > APP_MBUF_ARRAY_SIZE)
-#error "APP_DEFAULT_BURST_SIZE_IO_RX_READ is too big"
-#endif
-
-#ifndef APP_DEFAULT_BURST_SIZE_IO_RX_WRITE
-#define APP_DEFAULT_BURST_SIZE_IO_RX_WRITE 144
-#endif
-#if (APP_DEFAULT_BURST_SIZE_IO_RX_WRITE > APP_MBUF_ARRAY_SIZE)
-#error "APP_DEFAULT_BURST_SIZE_IO_RX_WRITE is too big"
-#endif
-
-#ifndef APP_DEFAULT_BURST_SIZE_IO_TX_READ
-#define APP_DEFAULT_BURST_SIZE_IO_TX_READ 144
-#endif
-#if (APP_DEFAULT_BURST_SIZE_IO_TX_READ > APP_MBUF_ARRAY_SIZE)
-#error "APP_DEFAULT_BURST_SIZE_IO_TX_READ is too big"
-#endif
-
-#ifndef APP_DEFAULT_BURST_SIZE_IO_TX_WRITE
-#define APP_DEFAULT_BURST_SIZE_IO_TX_WRITE 144
-#endif
-#if (APP_DEFAULT_BURST_SIZE_IO_TX_WRITE > APP_MBUF_ARRAY_SIZE)
-#error "APP_DEFAULT_BURST_SIZE_IO_TX_WRITE is too big"
-#endif
-
-#ifndef APP_DEFAULT_BURST_SIZE_WORKER_READ
-#define APP_DEFAULT_BURST_SIZE_WORKER_READ 144
-#endif
-#if ((2 * APP_DEFAULT_BURST_SIZE_WORKER_READ) > APP_MBUF_ARRAY_SIZE)
-#error "APP_DEFAULT_BURST_SIZE_WORKER_READ is too big"
-#endif
-
-#ifndef APP_DEFAULT_BURST_SIZE_WORKER_WRITE
-#define APP_DEFAULT_BURST_SIZE_WORKER_WRITE 144
-#endif
-#if (APP_DEFAULT_BURST_SIZE_WORKER_WRITE > APP_MBUF_ARRAY_SIZE)
-#error "APP_DEFAULT_BURST_SIZE_WORKER_WRITE is too big"
-#endif
-
-/* Load balancing logic */
-#ifndef APP_DEFAULT_IO_RX_LB_POS
-#define APP_DEFAULT_IO_RX_LB_POS 29
-#endif
-#if (APP_DEFAULT_IO_RX_LB_POS >= 64)
-#error "APP_DEFAULT_IO_RX_LB_POS is too big"
-#endif
-
-struct app_mbuf_array {
- struct rte_mbuf *array[APP_MBUF_ARRAY_SIZE];
- uint32_t n_mbufs;
-};
-
-enum app_lcore_type {
- e_APP_LCORE_DISABLED = 0,
- e_APP_LCORE_IO,
- e_APP_LCORE_WORKER
-};
-
-struct app_lcore_params_io {
- /* I/O RX */
- struct {
- /* NIC */
- struct {
- uint16_t port;
- uint8_t queue;
- } nic_queues[APP_MAX_NIC_RX_QUEUES_PER_IO_LCORE];
- uint32_t n_nic_queues;
-
- /* Rings */
- struct rte_ring *rings[APP_MAX_WORKER_LCORES];
- uint32_t n_rings;
-
- /* Internal buffers */
- struct app_mbuf_array mbuf_in;
- struct app_mbuf_array mbuf_out[APP_MAX_WORKER_LCORES];
- uint8_t mbuf_out_flush[APP_MAX_WORKER_LCORES];
-
- /* Stats */
- uint32_t nic_queues_count[APP_MAX_NIC_RX_QUEUES_PER_IO_LCORE];
- uint32_t nic_queues_iters[APP_MAX_NIC_RX_QUEUES_PER_IO_LCORE];
- uint32_t rings_count[APP_MAX_WORKER_LCORES];
- uint32_t rings_iters[APP_MAX_WORKER_LCORES];
- } rx;
-
- /* I/O TX */
- struct {
- /* Rings */
- struct rte_ring *rings[APP_MAX_NIC_PORTS][APP_MAX_WORKER_LCORES];
-
- /* NIC */
- uint16_t nic_ports[APP_MAX_NIC_TX_PORTS_PER_IO_LCORE];
- uint32_t n_nic_ports;
-
- /* Internal buffers */
- struct app_mbuf_array mbuf_out[APP_MAX_NIC_TX_PORTS_PER_IO_LCORE];
- uint8_t mbuf_out_flush[APP_MAX_NIC_TX_PORTS_PER_IO_LCORE];
-
- /* Stats */
- uint32_t rings_count[APP_MAX_NIC_PORTS][APP_MAX_WORKER_LCORES];
- uint32_t rings_iters[APP_MAX_NIC_PORTS][APP_MAX_WORKER_LCORES];
- uint32_t nic_ports_count[APP_MAX_NIC_TX_PORTS_PER_IO_LCORE];
- uint32_t nic_ports_iters[APP_MAX_NIC_TX_PORTS_PER_IO_LCORE];
- } tx;
-};
-
-struct app_lcore_params_worker {
- /* Rings */
- struct rte_ring *rings_in[APP_MAX_IO_LCORES];
- uint32_t n_rings_in;
- struct rte_ring *rings_out[APP_MAX_NIC_PORTS];
-
- /* LPM table */
- struct rte_lpm *lpm_table;
- uint32_t worker_id;
-
- /* Internal buffers */
- struct app_mbuf_array mbuf_in;
- struct app_mbuf_array mbuf_out[APP_MAX_NIC_PORTS];
- uint8_t mbuf_out_flush[APP_MAX_NIC_PORTS];
-
- /* Stats */
- uint32_t rings_in_count[APP_MAX_IO_LCORES];
- uint32_t rings_in_iters[APP_MAX_IO_LCORES];
- uint32_t rings_out_count[APP_MAX_NIC_PORTS];
- uint32_t rings_out_iters[APP_MAX_NIC_PORTS];
-};
-
-struct app_lcore_params {
- union {
- struct app_lcore_params_io io;
- struct app_lcore_params_worker worker;
- };
- enum app_lcore_type type;
- struct rte_mempool *pool;
-} __rte_cache_aligned;
-
-struct app_lpm_rule {
- uint32_t ip;
- uint8_t depth;
- uint8_t if_out;
-};
-
-struct app_params {
- /* lcore */
- struct app_lcore_params lcore_params[APP_MAX_LCORES];
-
- /* NIC */
- uint8_t nic_rx_queue_mask[APP_MAX_NIC_PORTS][APP_MAX_RX_QUEUES_PER_NIC_PORT];
- uint8_t nic_tx_port_mask[APP_MAX_NIC_PORTS];
-
- /* mbuf pools */
- struct rte_mempool *pools[APP_MAX_SOCKETS];
-
- /* LPM tables */
- struct rte_lpm *lpm_tables[APP_MAX_SOCKETS];
- struct app_lpm_rule lpm_rules[APP_MAX_LPM_RULES];
- uint32_t n_lpm_rules;
-
- /* rings */
- uint32_t nic_rx_ring_size;
- uint32_t nic_tx_ring_size;
- uint32_t ring_rx_size;
- uint32_t ring_tx_size;
-
- /* burst size */
- uint32_t burst_size_io_rx_read;
- uint32_t burst_size_io_rx_write;
- uint32_t burst_size_io_tx_read;
- uint32_t burst_size_io_tx_write;
- uint32_t burst_size_worker_read;
- uint32_t burst_size_worker_write;
-
- /* load balancing */
- uint8_t pos_lb;
-} __rte_cache_aligned;
-
-extern struct app_params app;
-
-int app_parse_args(int argc, char **argv);
-void app_print_usage(void);
-void app_init(void);
-int app_lcore_main_loop(void *arg);
-
-int app_get_nic_rx_queues_per_port(uint16_t port);
-int app_get_lcore_for_nic_rx(uint16_t port, uint8_t queue,
- uint32_t *lcore_out);
-int app_get_lcore_for_nic_tx(uint16_t port, uint32_t *lcore_out);
-int app_is_socket_used(uint32_t socket);
-uint32_t app_get_lcores_io_rx(void);
-uint32_t app_get_lcores_worker(void);
-void app_print_params(void);
-
-#endif /* _MAIN_H_ */
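APP_DEFAULT_IO_RX_LB_POS in the header above is the single knob behind the load-balancing decision: the I/O RX lcores read one byte of each packet at that offset and mask it with (number of workers - 1), which is why the worker count must be a power of two. With a standard 14-byte Ethernet header, offset 29 lands on the last byte of the IPv4 source address. A minimal sketch of that selection (select_worker is an illustrative name, not a function from the example):

#include <stdint.h>

/* Illustrative only: pick a worker from one packet byte, as the removed
 * I/O RX path does in runtime.c. Requires n_workers to be a power of two. */
static inline uint32_t
select_worker(const uint8_t *pkt_data, uint8_t pos_lb, uint32_t n_workers)
{
	return pkt_data[pos_lb] & (n_workers - 1);
}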
diff --git a/examples/load_balancer/meson.build b/examples/load_balancer/meson.build
deleted file mode 100644
index 4f7ac3999..000000000
--- a/examples/load_balancer/meson.build
+++ /dev/null
@@ -1,12 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017 Intel Corporation
-
-# meson file, for building this example as part of a main DPDK build.
-#
-# To build this example as a standalone application with an already-installed
-# DPDK instance, use 'make'
-
-deps += 'lpm'
-sources = files(
- 'config.c', 'init.c', 'main.c', 'runtime.c'
-)
diff --git a/examples/load_balancer/runtime.c b/examples/load_balancer/runtime.c
deleted file mode 100644
index f0aad1513..000000000
--- a/examples/load_balancer/runtime.c
+++ /dev/null
@@ -1,642 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#include <stdio.h>
-#include <stdlib.h>
-#include <stdint.h>
-#include <inttypes.h>
-#include <sys/types.h>
-#include <string.h>
-#include <sys/queue.h>
-#include <stdarg.h>
-#include <errno.h>
-#include <getopt.h>
-
-#include <rte_common.h>
-#include <rte_byteorder.h>
-#include <rte_log.h>
-#include <rte_memory.h>
-#include <rte_memcpy.h>
-#include <rte_eal.h>
-#include <rte_launch.h>
-#include <rte_atomic.h>
-#include <rte_cycles.h>
-#include <rte_prefetch.h>
-#include <rte_lcore.h>
-#include <rte_per_lcore.h>
-#include <rte_branch_prediction.h>
-#include <rte_interrupts.h>
-#include <rte_random.h>
-#include <rte_debug.h>
-#include <rte_ether.h>
-#include <rte_ethdev.h>
-#include <rte_ring.h>
-#include <rte_mempool.h>
-#include <rte_mbuf.h>
-#include <rte_ip.h>
-#include <rte_tcp.h>
-#include <rte_lpm.h>
-
-#include "main.h"
-
-#ifndef APP_LCORE_IO_FLUSH
-#define APP_LCORE_IO_FLUSH 1000000
-#endif
-
-#ifndef APP_LCORE_WORKER_FLUSH
-#define APP_LCORE_WORKER_FLUSH 1000000
-#endif
-
-#ifndef APP_STATS
-#define APP_STATS 1000000
-#endif
-
-#define APP_IO_RX_DROP_ALL_PACKETS 0
-#define APP_WORKER_DROP_ALL_PACKETS 0
-#define APP_IO_TX_DROP_ALL_PACKETS 0
-
-#ifndef APP_IO_RX_PREFETCH_ENABLE
-#define APP_IO_RX_PREFETCH_ENABLE 1
-#endif
-
-#ifndef APP_WORKER_PREFETCH_ENABLE
-#define APP_WORKER_PREFETCH_ENABLE 1
-#endif
-
-#ifndef APP_IO_TX_PREFETCH_ENABLE
-#define APP_IO_TX_PREFETCH_ENABLE 1
-#endif
-
-#if APP_IO_RX_PREFETCH_ENABLE
-#define APP_IO_RX_PREFETCH0(p) rte_prefetch0(p)
-#define APP_IO_RX_PREFETCH1(p) rte_prefetch1(p)
-#else
-#define APP_IO_RX_PREFETCH0(p)
-#define APP_IO_RX_PREFETCH1(p)
-#endif
-
-#if APP_WORKER_PREFETCH_ENABLE
-#define APP_WORKER_PREFETCH0(p) rte_prefetch0(p)
-#define APP_WORKER_PREFETCH1(p) rte_prefetch1(p)
-#else
-#define APP_WORKER_PREFETCH0(p)
-#define APP_WORKER_PREFETCH1(p)
-#endif
-
-#if APP_IO_TX_PREFETCH_ENABLE
-#define APP_IO_TX_PREFETCH0(p) rte_prefetch0(p)
-#define APP_IO_TX_PREFETCH1(p) rte_prefetch1(p)
-#else
-#define APP_IO_TX_PREFETCH0(p)
-#define APP_IO_TX_PREFETCH1(p)
-#endif
-
-static inline void
-app_lcore_io_rx_buffer_to_send (
- struct app_lcore_params_io *lp,
- uint32_t worker,
- struct rte_mbuf *mbuf,
- uint32_t bsz)
-{
- uint32_t pos;
- int ret;
-
- pos = lp->rx.mbuf_out[worker].n_mbufs;
- lp->rx.mbuf_out[worker].array[pos ++] = mbuf;
- if (likely(pos < bsz)) {
- lp->rx.mbuf_out[worker].n_mbufs = pos;
- return;
- }
-
- ret = rte_ring_sp_enqueue_bulk(
- lp->rx.rings[worker],
- (void **) lp->rx.mbuf_out[worker].array,
- bsz,
- NULL);
-
- if (unlikely(ret == 0)) {
- uint32_t k;
- for (k = 0; k < bsz; k ++) {
- struct rte_mbuf *m = lp->rx.mbuf_out[worker].array[k];
- rte_pktmbuf_free(m);
- }
- }
-
- lp->rx.mbuf_out[worker].n_mbufs = 0;
- lp->rx.mbuf_out_flush[worker] = 0;
-
-#if APP_STATS
- lp->rx.rings_iters[worker] ++;
- if (likely(ret == 0)) {
- lp->rx.rings_count[worker] ++;
- }
- if (unlikely(lp->rx.rings_iters[worker] == APP_STATS)) {
- unsigned lcore = rte_lcore_id();
-
- printf("\tI/O RX %u out (worker %u): enq success rate = %.2f\n",
- lcore,
- (unsigned)worker,
- ((double) lp->rx.rings_count[worker]) / ((double) lp->rx.rings_iters[worker]));
- lp->rx.rings_iters[worker] = 0;
- lp->rx.rings_count[worker] = 0;
- }
-#endif
-}
-
-static inline void
-app_lcore_io_rx(
- struct app_lcore_params_io *lp,
- uint32_t n_workers,
- uint32_t bsz_rd,
- uint32_t bsz_wr,
- uint8_t pos_lb)
-{
- struct rte_mbuf *mbuf_1_0, *mbuf_1_1, *mbuf_2_0, *mbuf_2_1;
- uint8_t *data_1_0, *data_1_1 = NULL;
- uint32_t i;
-
- for (i = 0; i < lp->rx.n_nic_queues; i ++) {
- uint16_t port = lp->rx.nic_queues[i].port;
- uint8_t queue = lp->rx.nic_queues[i].queue;
- uint32_t n_mbufs, j;
-
- n_mbufs = rte_eth_rx_burst(
- port,
- queue,
- lp->rx.mbuf_in.array,
- (uint16_t) bsz_rd);
-
- if (unlikely(n_mbufs == 0)) {
- continue;
- }
-
-#if APP_STATS
- lp->rx.nic_queues_iters[i] ++;
- lp->rx.nic_queues_count[i] += n_mbufs;
- if (unlikely(lp->rx.nic_queues_iters[i] == APP_STATS)) {
- struct rte_eth_stats stats;
- unsigned lcore = rte_lcore_id();
-
- rte_eth_stats_get(port, &stats);
-
- printf("I/O RX %u in (NIC port %u): NIC drop ratio = %.2f avg burst size = %.2f\n",
- lcore,
- port,
- (double) stats.imissed / (double) (stats.imissed + stats.ipackets),
- ((double) lp->rx.nic_queues_count[i]) / ((double) lp->rx.nic_queues_iters[i]));
- lp->rx.nic_queues_iters[i] = 0;
- lp->rx.nic_queues_count[i] = 0;
- }
-#endif
-
-#if APP_IO_RX_DROP_ALL_PACKETS
- for (j = 0; j < n_mbufs; j ++) {
- struct rte_mbuf *pkt = lp->rx.mbuf_in.array[j];
- rte_pktmbuf_free(pkt);
- }
-
- continue;
-#endif
-
- mbuf_1_0 = lp->rx.mbuf_in.array[0];
- mbuf_1_1 = lp->rx.mbuf_in.array[1];
- data_1_0 = rte_pktmbuf_mtod(mbuf_1_0, uint8_t *);
- if (likely(n_mbufs > 1)) {
- data_1_1 = rte_pktmbuf_mtod(mbuf_1_1, uint8_t *);
- }
-
- mbuf_2_0 = lp->rx.mbuf_in.array[2];
- mbuf_2_1 = lp->rx.mbuf_in.array[3];
- APP_IO_RX_PREFETCH0(mbuf_2_0);
- APP_IO_RX_PREFETCH0(mbuf_2_1);
-
- for (j = 0; j + 3 < n_mbufs; j += 2) {
- struct rte_mbuf *mbuf_0_0, *mbuf_0_1;
- uint8_t *data_0_0, *data_0_1;
- uint32_t worker_0, worker_1;
-
- mbuf_0_0 = mbuf_1_0;
- mbuf_0_1 = mbuf_1_1;
- data_0_0 = data_1_0;
- data_0_1 = data_1_1;
-
- mbuf_1_0 = mbuf_2_0;
- mbuf_1_1 = mbuf_2_1;
- data_1_0 = rte_pktmbuf_mtod(mbuf_2_0, uint8_t *);
- data_1_1 = rte_pktmbuf_mtod(mbuf_2_1, uint8_t *);
- APP_IO_RX_PREFETCH0(data_1_0);
- APP_IO_RX_PREFETCH0(data_1_1);
-
- mbuf_2_0 = lp->rx.mbuf_in.array[j+4];
- mbuf_2_1 = lp->rx.mbuf_in.array[j+5];
- APP_IO_RX_PREFETCH0(mbuf_2_0);
- APP_IO_RX_PREFETCH0(mbuf_2_1);
-
- worker_0 = data_0_0[pos_lb] & (n_workers - 1);
- worker_1 = data_0_1[pos_lb] & (n_workers - 1);
-
- app_lcore_io_rx_buffer_to_send(lp, worker_0, mbuf_0_0, bsz_wr);
- app_lcore_io_rx_buffer_to_send(lp, worker_1, mbuf_0_1, bsz_wr);
- }
-
- /* Handle the last 1, 2 (when n_mbufs is even) or 3 (when n_mbufs is odd) packets */
- for ( ; j < n_mbufs; j += 1) {
- struct rte_mbuf *mbuf;
- uint8_t *data;
- uint32_t worker;
-
- mbuf = mbuf_1_0;
- mbuf_1_0 = mbuf_1_1;
- mbuf_1_1 = mbuf_2_0;
- mbuf_2_0 = mbuf_2_1;
-
- data = rte_pktmbuf_mtod(mbuf, uint8_t *);
-
- APP_IO_RX_PREFETCH0(mbuf_1_0);
-
- worker = data[pos_lb] & (n_workers - 1);
-
- app_lcore_io_rx_buffer_to_send(lp, worker, mbuf, bsz_wr);
- }
- }
-}
-
-static inline void
-app_lcore_io_rx_flush(struct app_lcore_params_io *lp, uint32_t n_workers)
-{
- uint32_t worker;
-
- for (worker = 0; worker < n_workers; worker ++) {
- int ret;
-
- if (likely((lp->rx.mbuf_out_flush[worker] == 0) ||
- (lp->rx.mbuf_out[worker].n_mbufs == 0))) {
- lp->rx.mbuf_out_flush[worker] = 1;
- continue;
- }
-
- ret = rte_ring_sp_enqueue_bulk(
- lp->rx.rings[worker],
- (void **) lp->rx.mbuf_out[worker].array,
- lp->rx.mbuf_out[worker].n_mbufs,
- NULL);
-
- if (unlikely(ret == 0)) {
- uint32_t k;
- for (k = 0; k < lp->rx.mbuf_out[worker].n_mbufs; k ++) {
- struct rte_mbuf *pkt_to_free = lp->rx.mbuf_out[worker].array[k];
- rte_pktmbuf_free(pkt_to_free);
- }
- }
-
- lp->rx.mbuf_out[worker].n_mbufs = 0;
- lp->rx.mbuf_out_flush[worker] = 1;
- }
-}
-
-static inline void
-app_lcore_io_tx(
- struct app_lcore_params_io *lp,
- uint32_t n_workers,
- uint32_t bsz_rd,
- uint32_t bsz_wr)
-{
- uint32_t worker;
-
- for (worker = 0; worker < n_workers; worker ++) {
- uint32_t i;
-
- for (i = 0; i < lp->tx.n_nic_ports; i ++) {
- uint16_t port = lp->tx.nic_ports[i];
- struct rte_ring *ring = lp->tx.rings[port][worker];
- uint32_t n_mbufs, n_pkts;
- int ret;
-
- n_mbufs = lp->tx.mbuf_out[port].n_mbufs;
- ret = rte_ring_sc_dequeue_bulk(
- ring,
- (void **) &lp->tx.mbuf_out[port].array[n_mbufs],
- bsz_rd,
- NULL);
-
- if (unlikely(ret == 0))
- continue;
-
- n_mbufs += bsz_rd;
-
-#if APP_IO_TX_DROP_ALL_PACKETS
- {
- uint32_t j;
- APP_IO_TX_PREFETCH0(lp->tx.mbuf_out[port].array[0]);
- APP_IO_TX_PREFETCH0(lp->tx.mbuf_out[port].array[1]);
-
- for (j = 0; j < n_mbufs; j ++) {
- if (likely(j < n_mbufs - 2)) {
- APP_IO_TX_PREFETCH0(lp->tx.mbuf_out[port].array[j + 2]);
- }
-
- rte_pktmbuf_free(lp->tx.mbuf_out[port].array[j]);
- }
-
- lp->tx.mbuf_out[port].n_mbufs = 0;
-
- continue;
- }
-#endif
-
- if (unlikely(n_mbufs < bsz_wr)) {
- lp->tx.mbuf_out[port].n_mbufs = n_mbufs;
- continue;
- }
-
- n_pkts = rte_eth_tx_burst(
- port,
- 0,
- lp->tx.mbuf_out[port].array,
- (uint16_t) n_mbufs);
-
-#if APP_STATS
- lp->tx.nic_ports_iters[port] ++;
- lp->tx.nic_ports_count[port] += n_pkts;
- if (unlikely(lp->tx.nic_ports_iters[port] == APP_STATS)) {
- unsigned lcore = rte_lcore_id();
-
- printf("\t\t\tI/O TX %u out (port %u): avg burst size = %.2f\n",
- lcore,
- port,
- ((double) lp->tx.nic_ports_count[port]) / ((double) lp->tx.nic_ports_iters[port]));
- lp->tx.nic_ports_iters[port] = 0;
- lp->tx.nic_ports_count[port] = 0;
- }
-#endif
-
- if (unlikely(n_pkts < n_mbufs)) {
- uint32_t k;
- for (k = n_pkts; k < n_mbufs; k ++) {
- struct rte_mbuf *pkt_to_free = lp->tx.mbuf_out[port].array[k];
- rte_pktmbuf_free(pkt_to_free);
- }
- }
- lp->tx.mbuf_out[port].n_mbufs = 0;
- lp->tx.mbuf_out_flush[port] = 0;
- }
- }
-}
-
-static inline void
-app_lcore_io_tx_flush(struct app_lcore_params_io *lp)
-{
- uint16_t port;
- uint32_t i;
-
- for (i = 0; i < lp->tx.n_nic_ports; i++) {
- uint32_t n_pkts;
-
- port = lp->tx.nic_ports[i];
- if (likely((lp->tx.mbuf_out_flush[port] == 0) ||
- (lp->tx.mbuf_out[port].n_mbufs == 0))) {
- lp->tx.mbuf_out_flush[port] = 1;
- continue;
- }
-
- n_pkts = rte_eth_tx_burst(
- port,
- 0,
- lp->tx.mbuf_out[port].array,
- (uint16_t) lp->tx.mbuf_out[port].n_mbufs);
-
- if (unlikely(n_pkts < lp->tx.mbuf_out[port].n_mbufs)) {
- uint32_t k;
- for (k = n_pkts; k < lp->tx.mbuf_out[port].n_mbufs; k ++) {
- struct rte_mbuf *pkt_to_free = lp->tx.mbuf_out[port].array[k];
- rte_pktmbuf_free(pkt_to_free);
- }
- }
-
- lp->tx.mbuf_out[port].n_mbufs = 0;
- lp->tx.mbuf_out_flush[port] = 1;
- }
-}
-
-static void
-app_lcore_main_loop_io(void)
-{
- uint32_t lcore = rte_lcore_id();
- struct app_lcore_params_io *lp = &app.lcore_params[lcore].io;
- uint32_t n_workers = app_get_lcores_worker();
- uint64_t i = 0;
-
- uint32_t bsz_rx_rd = app.burst_size_io_rx_read;
- uint32_t bsz_rx_wr = app.burst_size_io_rx_write;
- uint32_t bsz_tx_rd = app.burst_size_io_tx_read;
- uint32_t bsz_tx_wr = app.burst_size_io_tx_write;
-
- uint8_t pos_lb = app.pos_lb;
-
- for ( ; ; ) {
- if (APP_LCORE_IO_FLUSH && (unlikely(i == APP_LCORE_IO_FLUSH))) {
- if (likely(lp->rx.n_nic_queues > 0)) {
- app_lcore_io_rx_flush(lp, n_workers);
- }
-
- if (likely(lp->tx.n_nic_ports > 0)) {
- app_lcore_io_tx_flush(lp);
- }
-
- i = 0;
- }
-
- if (likely(lp->rx.n_nic_queues > 0)) {
- app_lcore_io_rx(lp, n_workers, bsz_rx_rd, bsz_rx_wr, pos_lb);
- }
-
- if (likely(lp->tx.n_nic_ports > 0)) {
- app_lcore_io_tx(lp, n_workers, bsz_tx_rd, bsz_tx_wr);
- }
-
- i ++;
- }
-}
-
-static inline void
-app_lcore_worker(
- struct app_lcore_params_worker *lp,
- uint32_t bsz_rd,
- uint32_t bsz_wr)
-{
- uint32_t i;
-
- for (i = 0; i < lp->n_rings_in; i ++) {
- struct rte_ring *ring_in = lp->rings_in[i];
- uint32_t j;
- int ret;
-
- ret = rte_ring_sc_dequeue_bulk(
- ring_in,
- (void **) lp->mbuf_in.array,
- bsz_rd,
- NULL);
-
- if (unlikely(ret == 0))
- continue;
-
-#if APP_WORKER_DROP_ALL_PACKETS
- for (j = 0; j < bsz_rd; j ++) {
- struct rte_mbuf *pkt = lp->mbuf_in.array[j];
- rte_pktmbuf_free(pkt);
- }
-
- continue;
-#endif
-
- APP_WORKER_PREFETCH1(rte_pktmbuf_mtod(lp->mbuf_in.array[0], unsigned char *));
- APP_WORKER_PREFETCH0(lp->mbuf_in.array[1]);
-
- for (j = 0; j < bsz_rd; j ++) {
- struct rte_mbuf *pkt;
- struct rte_ipv4_hdr *ipv4_hdr;
- uint32_t ipv4_dst, pos;
- uint32_t port;
-
- if (likely(j < bsz_rd - 1)) {
- APP_WORKER_PREFETCH1(rte_pktmbuf_mtod(lp->mbuf_in.array[j+1], unsigned char *));
- }
- if (likely(j < bsz_rd - 2)) {
- APP_WORKER_PREFETCH0(lp->mbuf_in.array[j+2]);
- }
-
- pkt = lp->mbuf_in.array[j];
- ipv4_hdr = rte_pktmbuf_mtod_offset(
- pkt, struct rte_ipv4_hdr *,
- sizeof(struct rte_ether_hdr));
- ipv4_dst = rte_be_to_cpu_32(ipv4_hdr->dst_addr);
-
- if (unlikely(rte_lpm_lookup(lp->lpm_table, ipv4_dst, &port) != 0)) {
- port = pkt->port;
- }
-
- pos = lp->mbuf_out[port].n_mbufs;
-
- lp->mbuf_out[port].array[pos ++] = pkt;
- if (likely(pos < bsz_wr)) {
- lp->mbuf_out[port].n_mbufs = pos;
- continue;
- }
-
- ret = rte_ring_sp_enqueue_bulk(
- lp->rings_out[port],
- (void **) lp->mbuf_out[port].array,
- bsz_wr,
- NULL);
-
-#if APP_STATS
- lp->rings_out_iters[port] ++;
- if (ret > 0) {
- lp->rings_out_count[port] += 1;
- }
- if (lp->rings_out_iters[port] == APP_STATS){
- printf("\t\tWorker %u out (NIC port %u): enq success rate = %.2f\n",
- (unsigned) lp->worker_id,
- port,
- ((double) lp->rings_out_count[port]) / ((double) lp->rings_out_iters[port]));
- lp->rings_out_iters[port] = 0;
- lp->rings_out_count[port] = 0;
- }
-#endif
-
- if (unlikely(ret == 0)) {
- uint32_t k;
- for (k = 0; k < bsz_wr; k ++) {
- struct rte_mbuf *pkt_to_free = lp->mbuf_out[port].array[k];
- rte_pktmbuf_free(pkt_to_free);
- }
- }
-
- lp->mbuf_out[port].n_mbufs = 0;
- lp->mbuf_out_flush[port] = 0;
- }
- }
-}
-
-static inline void
-app_lcore_worker_flush(struct app_lcore_params_worker *lp)
-{
- uint32_t port;
-
- for (port = 0; port < APP_MAX_NIC_PORTS; port ++) {
- int ret;
-
- if (unlikely(lp->rings_out[port] == NULL)) {
- continue;
- }
-
- if (likely((lp->mbuf_out_flush[port] == 0) ||
- (lp->mbuf_out[port].n_mbufs == 0))) {
- lp->mbuf_out_flush[port] = 1;
- continue;
- }
-
- ret = rte_ring_sp_enqueue_bulk(
- lp->rings_out[port],
- (void **) lp->mbuf_out[port].array,
- lp->mbuf_out[port].n_mbufs,
- NULL);
-
- if (unlikely(ret == 0)) {
- uint32_t k;
- for (k = 0; k < lp->mbuf_out[port].n_mbufs; k ++) {
- struct rte_mbuf *pkt_to_free = lp->mbuf_out[port].array[k];
- rte_pktmbuf_free(pkt_to_free);
- }
- }
-
- lp->mbuf_out[port].n_mbufs = 0;
- lp->mbuf_out_flush[port] = 1;
- }
-}
-
-static void
-app_lcore_main_loop_worker(void) {
- uint32_t lcore = rte_lcore_id();
- struct app_lcore_params_worker *lp = &app.lcore_params[lcore].worker;
- uint64_t i = 0;
-
- uint32_t bsz_rd = app.burst_size_worker_read;
- uint32_t bsz_wr = app.burst_size_worker_write;
-
- for ( ; ; ) {
- if (APP_LCORE_WORKER_FLUSH && (unlikely(i == APP_LCORE_WORKER_FLUSH))) {
- app_lcore_worker_flush(lp);
- i = 0;
- }
-
- app_lcore_worker(lp, bsz_rd, bsz_wr);
-
- i ++;
- }
-}
-
-int
-app_lcore_main_loop(__attribute__((unused)) void *arg)
-{
- struct app_lcore_params *lp;
- unsigned lcore;
-
- lcore = rte_lcore_id();
- lp = &app.lcore_params[lcore];
-
- if (lp->type == e_APP_LCORE_IO) {
- printf("Logical core %u (I/O) main loop.\n", lcore);
- app_lcore_main_loop_io();
- }
-
- if (lp->type == e_APP_LCORE_WORKER) {
- printf("Logical core %u (worker %u) main loop.\n",
- lcore,
- (unsigned) lp->worker.worker_id);
- app_lcore_main_loop_worker();
- }
-
- return 0;
-}
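One pattern worth noting in the runtime.c just removed is the all-or-nothing bulk enqueue onto the software rings: rte_ring_sp_enqueue_bulk() enqueues either the whole burst or nothing (returning 0 in the latter case), and the example frees the burst when the ring is full rather than retrying. A minimal, self-contained sketch of that enqueue-or-drop step (enqueue_or_drop is an illustrative name, not taken from the example):

#include <stdint.h>

#include <rte_ring.h>
#include <rte_mbuf.h>

/* Illustrative only: enqueue a full burst onto a single-producer ring and
 * free the mbufs if the ring cannot take the whole burst at once. */
static inline void
enqueue_or_drop(struct rte_ring *ring, struct rte_mbuf **burst, uint32_t n)
{
	uint32_t k;

	if (rte_ring_sp_enqueue_bulk(ring, (void **)burst, n, NULL) == 0) {
		for (k = 0; k < n; k++)
			rte_pktmbuf_free(burst[k]);
	}
}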
diff --git a/examples/meson.build b/examples/meson.build
index f0356f2a1..e4580f74a 100644
--- a/examples/meson.build
+++ b/examples/meson.build
@@ -24,7 +24,6 @@ all_examples = [
'l2fwd-keepalive', 'l3fwd',
'l3fwd-acl', 'l3fwd-power',
'link_status_interrupt',
- 'load_balancer',
'multi_process/client_server_mp/mp_client',
'multi_process/client_server_mp/mp_server',
'multi_process/hotplug_mp',
--
2.17.1
Thread overview: 27+ messages
2019-10-03 13:19 [dpdk-dev] [PATCH 0/6] remove a few example applications Bruce Richardson
2019-10-03 13:19 ` [dpdk-dev] [PATCH 1/6] examples/exception_path: remove example from DPDK Bruce Richardson
2019-10-03 13:19 ` [dpdk-dev] [PATCH 2/6] examples/l3fwd-vf: " Bruce Richardson
2019-10-03 13:19 ` [dpdk-dev] [PATCH 3/6] examples/quota-watermark: " Bruce Richardson
2019-10-03 13:19 ` [dpdk-dev] [PATCH 4/6] examples/netmap-compat: " Bruce Richardson
2019-10-03 13:19 ` [dpdk-dev] [PATCH 5/6] examples/load_balancer: " Bruce Richardson
2019-10-03 13:19 ` [dpdk-dev] [PATCH 6/6] doc: close up gaps in sample app guide table Bruce Richardson
2019-10-23 15:19 ` Thomas Monjalon
2019-10-23 15:21 ` [dpdk-dev] [dpdk-techboard] [PATCH 0/6] remove a few example applications Stephen Hemminger
2019-10-23 15:53 ` Hemant Agrawal
2019-10-24 13:31 ` [dpdk-dev] [PATCH v2 " Ciara Power
2019-10-24 13:31 ` [dpdk-dev] [PATCH v2 1/6] doc: remove unnecessary sample app guide table Ciara Power
2019-10-24 13:31 ` [dpdk-dev] [PATCH v2 2/6] examples/exception_path: remove example from DPDK Ciara Power
2019-10-24 13:31 ` [dpdk-dev] [PATCH v2 3/6] examples/l3fwd-vf: " Ciara Power
2019-10-24 13:31 ` [dpdk-dev] [PATCH v2 4/6] examples/quota-watermark: " Ciara Power
2019-10-24 13:31 ` [dpdk-dev] [PATCH v2 5/6] examples/netmap-compat: " Ciara Power
2019-10-24 13:31 ` Ciara Power [this message]
2019-10-24 14:12 ` [dpdk-dev] [PATCH v2 0/6] remove a few example applications David Marchand
2019-10-25 9:56 ` [dpdk-dev] [PATCH v3 " Ciara Power
2019-10-25 9:56 ` [dpdk-dev] [PATCH v3 1/6] doc: remove unnecessary sample app guide table Ciara Power
2019-10-25 10:26 ` [dpdk-dev] [dpdk-techboard] " Bruce Richardson
2019-10-25 9:56 ` [dpdk-dev] [PATCH v3 2/6] examples/exception_path: remove example Ciara Power
2019-10-25 9:56 ` [dpdk-dev] [PATCH v3 3/6] examples/l3fwd-vf: " Ciara Power
2019-10-25 9:56 ` [dpdk-dev] [PATCH v3 4/6] examples/quota-watermark: " Ciara Power
2019-10-25 9:56 ` [dpdk-dev] [PATCH v3 5/6] examples/netmap-compat: " Ciara Power
2019-10-25 9:56 ` [dpdk-dev] [PATCH v3 6/6] examples/load_balancer: " Ciara Power
2019-10-26 20:35 ` [dpdk-dev] [PATCH v3 0/6] remove a few example applications David Marchand