* [PATCH v2 1/5] ethdev: support setting and querying RSS algorithm
@ 2023-08-26 7:46 ` Jie Hai
From: Jie Hai @ 2023-08-26 7:46 UTC
To: dev, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Cc: liudongdong3, haijie1
Currently, rte_eth_rss_conf supports configuring and querying
the RSS hash types, the RSS key and its length, but not the RSS
hash algorithm.
The structure ``rte_eth_rss_conf`` is extended by adding a new
field "func", which represents the RSS hash algorithm to apply.
The following APIs are affected:
- rte_eth_dev_configure
- rte_eth_dev_rss_hash_update
- rte_eth_dev_rss_hash_conf_get
If the value of "func" supplied for configuration is invalid,
report an error and return. This check is performed in both
rte_eth_dev_configure and rte_eth_dev_rss_hash_update.
To check whether a driver reports the "func" field, the field is
set to its default value before querying.
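The range check this patch adds can be sketched outside of DPDK with simplified stand-ins; the enum values and struct below are illustrative, not the real ``rte_flow.h``/``rte_ethdev.h`` definitions:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Simplified stand-ins for the DPDK definitions; not the real headers. */
enum rte_eth_hash_function {
	RTE_ETH_HASH_FUNCTION_DEFAULT = 0,
	RTE_ETH_HASH_FUNCTION_TOEPLITZ,
	RTE_ETH_HASH_FUNCTION_SIMPLE_XOR,
	RTE_ETH_HASH_FUNCTION_MAX,
};

struct rss_conf {
	uint64_t rss_hf;                 /* hash types, unchanged by the patch */
	enum rte_eth_hash_function func; /* new field introduced here */
};

/* Mirrors the check added to rte_eth_dev_configure() and
 * rte_eth_dev_rss_hash_update(): any value at or beyond _MAX is rejected. */
static int
validate_rss_func(const struct rss_conf *conf)
{
	if (conf->func >= RTE_ETH_HASH_FUNCTION_MAX)
		return -EINVAL;
	return 0;
}
```

A valid algorithm (including the DEFAULT placeholder) passes; anything out of range fails with -EINVAL, matching the log-and-return behavior in the patch.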
Signed-off-by: Jie Hai <haijie1@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
---
doc/guides/rel_notes/release_23_11.rst | 2 ++
lib/ethdev/rte_ethdev.c | 17 +++++++++++++++++
lib/ethdev/rte_ethdev.h | 6 ++++++
3 files changed, 25 insertions(+)
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 4411bb32c195..3746436e8bc9 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -123,6 +123,8 @@ ABI Changes
Also, make sure to start the actual text at the margin.
=======================================================
+ * ethdev: Added "func" field to ``rte_eth_rss_conf`` structure for RSS hash
+ algorithm.
Known Issues
------------
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 0840d2b5942a..4cbcdb344cac 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1445,6 +1445,14 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
goto rollback;
}
+ if (dev_conf->rx_adv_conf.rss_conf.func >= RTE_ETH_HASH_FUNCTION_MAX) {
+ RTE_ETHDEV_LOG(ERR,
+ "Ethdev port_id=%u invalid rss hash function (%u)\n",
+ port_id, dev_conf->rx_adv_conf.rss_conf.func);
+ ret = -EINVAL;
+ goto rollback;
+ }
+
/* Check if Rx RSS distribution is disabled but RSS hash is enabled. */
if (((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0) &&
(dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
@@ -4630,6 +4638,13 @@ rte_eth_dev_rss_hash_update(uint16_t port_id,
return -ENOTSUP;
}
+ if (rss_conf->func >= RTE_ETH_HASH_FUNCTION_MAX) {
+ RTE_ETHDEV_LOG(ERR,
+ "Ethdev port_id=%u invalid rss hash function (%u)\n",
+ port_id, rss_conf->func);
+ return -EINVAL;
+ }
+
if (*dev->dev_ops->rss_hash_update == NULL)
return -ENOTSUP;
ret = eth_err(port_id, (*dev->dev_ops->rss_hash_update)(dev,
@@ -4657,6 +4672,8 @@ rte_eth_dev_rss_hash_conf_get(uint16_t port_id,
return -EINVAL;
}
+ rss_conf->func = RTE_ETH_HASH_FUNCTION_DEFAULT;
+
if (*dev->dev_ops->rss_hash_conf_get == NULL)
return -ENOTSUP;
ret = eth_err(port_id, (*dev->dev_ops->rss_hash_conf_get)(dev,
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 04a2564f222a..1bb5f23059ca 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -174,6 +174,7 @@ extern "C" {
#include "rte_ethdev_trace_fp.h"
#include "rte_dev_info.h"
+#include "rte_flow.h"
extern int rte_eth_dev_logtype;
@@ -461,11 +462,16 @@ struct rte_vlan_filter_conf {
* The *rss_hf* field of the *rss_conf* structure indicates the different
* types of IPv4/IPv6 packets to which the RSS hashing must be applied.
* Supplying an *rss_hf* equal to zero disables the RSS feature.
+ *
+ * The *func* field of the *rss_conf* structure indicates the hash algorithm
+ * applied by the RSS hashing. Passing RTE_ETH_HASH_FUNCTION_DEFAULT allows
+ * the PMD to use its best-effort algorithm rather than a specific one.
*/
struct rte_eth_rss_conf {
uint8_t *rss_key; /**< If not NULL, 40-byte hash key. */
uint8_t rss_key_len; /**< hash key length in bytes. */
uint64_t rss_hf; /**< Hash functions to apply - see below. */
+ enum rte_eth_hash_function func; /**< Hash algorithm to apply. */
};
/*
--
2.33.0
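The preset-before-query convention described in the commit message can be sketched as follows; the driver callbacks and names are hypothetical stand-ins, not DPDK code:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in enum; the real one lives in rte_flow.h. */
enum hash_function { HASH_FUNC_DEFAULT = 0, HASH_FUNC_TOEPLITZ, HASH_FUNC_MAX };

struct rss_conf {
	enum hash_function func;
};

/* A driver that reports its algorithm overwrites 'func'; an older driver
 * that does not know the field leaves it untouched. */
static void
old_driver_conf_get(struct rss_conf *c)
{
	(void)c; /* says nothing about func */
}

static void
new_driver_conf_get(struct rss_conf *c)
{
	c->func = HASH_FUNC_TOEPLITZ;
}

/* Mirrors rte_eth_dev_rss_hash_conf_get(): preset the field to DEFAULT so
 * the caller can tell whether the driver filled it in. */
static enum hash_function
query_func(void (*conf_get)(struct rss_conf *))
{
	struct rss_conf conf;

	conf.func = HASH_FUNC_DEFAULT; /* preset before querying */
	conf_get(&conf);
	return conf.func;
}
```

If the result is still DEFAULT after the query, the caller knows the driver did not report a specific algorithm.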
* [PATCH 19/27] net/nfp: refact the nsp module
@ 2023-08-24 11:09 ` Chaoyong He
From: Chaoyong He @ 2023-08-24 11:09 UTC
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Move the definitions of the data structures into the implementation
file. Also sync the logic from the kernel driver and remove the
unneeded header file include statements.
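The refactor hides ``struct nfp_nsp`` behind a forward declaration and accessors such as ``nfp_nsp_cpp()``. A minimal sketch of this opaque-type pattern, with hypothetical names and both halves in one file for brevity (the declarations above the divider would normally live in the header):

```c
#include <assert.h>
#include <stdlib.h>

/* --- what the header now exposes: an opaque type plus accessors --- */
struct nsp;                    /* forward declaration only */
struct nsp *nsp_open(int id);
int nsp_id(struct nsp *state); /* accessor replacing direct field access */
void nsp_close(struct nsp *state);

/* --- what the .c file keeps private: the full definition --- */
struct nsp {
	int id;
};

struct nsp *
nsp_open(int id)
{
	struct nsp *state = malloc(sizeof(*state));

	if (state != NULL)
		state->id = id;
	return state;
}

int
nsp_id(struct nsp *state)
{
	return state->id;
}

void
nsp_close(struct nsp *state)
{
	free(state);
}
```

Callers such as `nfp_fw_upload()` in the diff then use the accessor instead of dereferencing `nsp->cpp` directly, so the struct layout can change without touching users.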
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/nfp_ethdev.c | 2 +-
drivers/net/nfp/nfpcore/nfp_nsp.c | 390 +++++++++++++++++++------
drivers/net/nfp/nfpcore/nfp_nsp.h | 140 ++++-----
drivers/net/nfp/nfpcore/nfp_nsp_cmds.c | 4 -
drivers/net/nfp/nfpcore/nfp_nsp_eth.c | 79 ++---
5 files changed, 398 insertions(+), 217 deletions(-)
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 2e43055fd5..9243191de3 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -661,7 +661,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
static int
nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card)
{
- struct nfp_cpp *cpp = nsp->cpp;
+ struct nfp_cpp *cpp = nfp_nsp_cpp(nsp);
void *fw_buf;
char fw_name[125];
char serial[40];
diff --git a/drivers/net/nfp/nfpcore/nfp_nsp.c b/drivers/net/nfp/nfpcore/nfp_nsp.c
index 8e65064b10..75d13cb84f 100644
--- a/drivers/net/nfp/nfpcore/nfp_nsp.c
+++ b/drivers/net/nfp/nfpcore/nfp_nsp.c
@@ -3,20 +3,127 @@
* All rights reserved.
*/
-#define NFP_SUBSYS "nfp_nsp"
-
-#include <stdio.h>
-#include <time.h>
+#include "nfp_nsp.h"
#include <rte_common.h>
-#include "nfp_cpp.h"
#include "nfp_logs.h"
-#include "nfp_nsp.h"
#include "nfp_platform.h"
#include "nfp_resource.h"
-int
+/* Offsets relative to the CSR base */
+#define NSP_STATUS 0x00
+#define NSP_STATUS_MAGIC GENMASK_ULL(63, 48)
+#define NSP_STATUS_MAJOR GENMASK_ULL(47, 44)
+#define NSP_STATUS_MINOR GENMASK_ULL(43, 32)
+#define NSP_STATUS_CODE GENMASK_ULL(31, 16)
+#define NSP_STATUS_RESULT GENMASK_ULL(15, 8)
+#define NSP_STATUS_BUSY RTE_BIT64(0)
+
+#define NSP_COMMAND 0x08
+#define NSP_COMMAND_OPTION GENMASK_ULL(63, 32)
+#define NSP_COMMAND_CODE GENMASK_ULL(31, 16)
+#define NSP_COMMAND_DMA_BUF RTE_BIT64(1)
+#define NSP_COMMAND_START RTE_BIT64(0)
+
+/* CPP address to retrieve the data from */
+#define NSP_BUFFER 0x10
+#define NSP_BUFFER_CPP GENMASK_ULL(63, 40)
+#define NSP_BUFFER_ADDRESS GENMASK_ULL(39, 0)
+
+#define NSP_DFLT_BUFFER 0x18
+#define NSP_DFLT_BUFFER_CPP GENMASK_ULL(63, 40)
+#define NSP_DFLT_BUFFER_ADDRESS GENMASK_ULL(39, 0)
+
+#define NSP_DFLT_BUFFER_CONFIG 0x20
+#define NSP_DFLT_BUFFER_SIZE_4KB GENMASK_ULL(15, 8)
+#define NSP_DFLT_BUFFER_SIZE_MB GENMASK_ULL(7, 0)
+
+#define NSP_MAGIC 0xab10
+#define NSP_MAJOR 0
+#define NSP_MINOR 8
+
+#define NSP_CODE_MAJOR GENMASK_ULL(15, 12)
+#define NSP_CODE_MINOR GENMASK_ULL(11, 0)
+
+#define NFP_FW_LOAD_RET_MAJOR GENMASK_ULL(15, 8)
+#define NFP_FW_LOAD_RET_MINOR GENMASK_ULL(23, 16)
+
+enum nfp_nsp_cmd {
+ SPCODE_NOOP = 0, /* No operation */
+ SPCODE_SOFT_RESET = 1, /* Soft reset the NFP */
+ SPCODE_FW_DEFAULT = 2, /* Load default (UNDI) FW */
+ SPCODE_PHY_INIT = 3, /* Initialize the PHY */
+ SPCODE_MAC_INIT = 4, /* Initialize the MAC */
+ SPCODE_PHY_RXADAPT = 5, /* Re-run PHY RX Adaptation */
+ SPCODE_FW_LOAD = 6, /* Load fw from buffer, len in option */
+ SPCODE_ETH_RESCAN = 7, /* Rescan ETHs, write ETH_TABLE to buf */
+ SPCODE_ETH_CONTROL = 8, /* Update media config from buffer */
+ SPCODE_NSP_WRITE_FLASH = 11, /* Load and flash image from buffer */
+ SPCODE_NSP_SENSORS = 12, /* Read NSP sensor(s) */
+ SPCODE_NSP_IDENTIFY = 13, /* Read NSP version */
+ SPCODE_FW_STORED = 16, /* If no FW loaded, load flash app FW */
+ SPCODE_HWINFO_LOOKUP = 17, /* Lookup HWinfo with overwrites etc. */
+ SPCODE_HWINFO_SET = 18, /* Set HWinfo entry */
+ SPCODE_FW_LOADED = 19, /* Is application firmware loaded */
+ SPCODE_VERSIONS = 21, /* Report FW versions */
+ SPCODE_READ_SFF_EEPROM = 22, /* Read module EEPROM */
+ SPCODE_READ_MEDIA = 23, /* Get the supported/advertised media for a port */
+};
+
+static const struct {
+ uint32_t code;
+ const char *msg;
+} nsp_errors[] = {
+ { 6010, "could not map to phy for port" },
+ { 6011, "not an allowed rate/lanes for port" },
+ { 6012, "not an allowed rate/lanes for port" },
+ { 6013, "high/low error, change other port first" },
+ { 6014, "config not found in flash" },
+};
+
+struct nfp_nsp {
+ struct nfp_cpp *cpp;
+ struct nfp_resource *res;
+ struct {
+ uint16_t major;
+ uint16_t minor;
+ } ver;
+
+ /** Eth table config state */
+ bool modified;
+ uint32_t idx;
+ void *entries;
+};
+
+/* NFP command argument structure */
+struct nfp_nsp_command_arg {
+ uint16_t code; /**< NFP SP Command Code */
+ bool dma; /**< @buf points to a host buffer, not NSP buffer */
+ bool error_quiet; /**< Don't print command error/warning */
+ uint32_t timeout_sec; /**< Timeout value to wait for completion in seconds */
+ uint32_t option; /**< NSP Command Argument */
+ uint64_t buf; /**< NSP Buffer Address */
+ /** Callback for interpreting option if error occurred */
+ void (*error_cb)(struct nfp_nsp *state, uint32_t ret_val);
+};
+
+/* NFP command with buffer argument structure */
+struct nfp_nsp_command_buf_arg {
+ struct nfp_nsp_command_arg arg; /**< NFP command argument structure */
+ const void *in_buf; /**< Buffer with data for input */
+ void *out_buf; /**< Buffer for output data */
+ uint32_t in_size; /**< Size of @in_buf */
+ uint32_t out_size; /**< Size of @out_buf */
+};
+
+struct nfp_cpp *
+nfp_nsp_cpp(struct nfp_nsp *state)
+{
+ return state->cpp;
+}
+
+bool
nfp_nsp_config_modified(struct nfp_nsp *state)
{
return state->modified;
@@ -24,7 +131,7 @@ nfp_nsp_config_modified(struct nfp_nsp *state)
void
nfp_nsp_config_set_modified(struct nfp_nsp *state,
- int modified)
+ bool modified)
{
state->modified = modified;
}
@@ -66,7 +173,7 @@ nfp_nsp_print_extended_error(uint32_t ret_val)
return;
for (i = 0; i < RTE_DIM(nsp_errors); i++)
- if (ret_val == (uint32_t)nsp_errors[i].code)
+ if (ret_val == nsp_errors[i].code)
PMD_DRV_LOG(ERR, "err msg: %s", nsp_errors[i].msg);
}
@@ -222,11 +329,8 @@ nfp_nsp_wait_reg(struct nfp_cpp *cpp,
* - -ETIMEDOUT if the NSP took longer than @timeout_sec seconds to complete
*/
static int
-nfp_nsp_command(struct nfp_nsp *state,
- uint16_t code,
- uint32_t option,
- uint32_t buff_cpp,
- uint64_t buff_addr)
+nfp_nsp_command_real(struct nfp_nsp *state,
+ const struct nfp_nsp_command_arg *arg)
{
int err;
uint64_t reg;
@@ -250,22 +354,14 @@ nfp_nsp_command(struct nfp_nsp *state,
return err;
}
- if (!FIELD_FIT(NSP_BUFFER_CPP, buff_cpp >> 8) ||
- !FIELD_FIT(NSP_BUFFER_ADDRESS, buff_addr)) {
- PMD_DRV_LOG(ERR, "Host buffer out of reach %08x %" PRIx64,
- buff_cpp, buff_addr);
- return -EINVAL;
- }
-
- err = nfp_cpp_writeq(cpp, nsp_cpp, nsp_buffer,
- FIELD_PREP(NSP_BUFFER_CPP, buff_cpp >> 8) |
- FIELD_PREP(NSP_BUFFER_ADDRESS, buff_addr));
+ err = nfp_cpp_writeq(cpp, nsp_cpp, nsp_buffer, arg->buf);
if (err < 0)
return err;
err = nfp_cpp_writeq(cpp, nsp_cpp, nsp_command,
- FIELD_PREP(NSP_COMMAND_OPTION, option) |
- FIELD_PREP(NSP_COMMAND_CODE, code) |
+ FIELD_PREP(NSP_COMMAND_OPTION, arg->option) |
+ FIELD_PREP(NSP_COMMAND_CODE, arg->code) |
+ FIELD_PREP(NSP_COMMAND_DMA_BUF, arg->dma) |
FIELD_PREP(NSP_COMMAND_START, 1));
if (err < 0)
return err;
@@ -275,7 +371,7 @@ nfp_nsp_command(struct nfp_nsp *state,
NSP_COMMAND_START, 0);
if (err != 0) {
PMD_DRV_LOG(ERR, "Error %d waiting for code %#04x to start",
- err, code);
+ err, arg->code);
return err;
}
@@ -284,7 +380,7 @@ nfp_nsp_command(struct nfp_nsp *state,
NSP_STATUS_BUSY, 0);
if (err != 0) {
PMD_DRV_LOG(ERR, "Error %d waiting for code %#04x to complete",
- err, code);
+ err, arg->code);
return err;
}
@@ -296,84 +392,85 @@ nfp_nsp_command(struct nfp_nsp *state,
err = FIELD_GET(NSP_STATUS_RESULT, reg);
if (err != 0) {
- PMD_DRV_LOG(ERR, "Result (error) code set: %d (%d) command: %d",
- -err, (int)ret_val, code);
- nfp_nsp_print_extended_error(ret_val);
+ if (!arg->error_quiet)
+ PMD_DRV_LOG(WARNING, "Result (error) code set: %d (%d) command: %d",
+ -err, (int)ret_val, arg->code);
+
+ if (arg->error_cb != 0)
+ arg->error_cb(state, ret_val);
+ else
+ nfp_nsp_print_extended_error(ret_val);
+
return -err;
}
return ret_val;
}
-#define SZ_1M 0x00100000
+static int
+nfp_nsp_command(struct nfp_nsp *state,
+ uint16_t code)
+{
+ const struct nfp_nsp_command_arg arg = {
+ .code = code,
+ };
+
+ return nfp_nsp_command_real(state, &arg);
+}
static int
-nfp_nsp_command_buf(struct nfp_nsp *nsp,
- uint16_t code, uint32_t option,
- const void *in_buf,
- unsigned int in_size,
- void *out_buf,
- unsigned int out_size)
+nfp_nsp_command_buf_def(struct nfp_nsp *nsp,
+ struct nfp_nsp_command_buf_arg *arg)
{
int err;
int ret;
uint64_t reg;
- size_t max_size;
uint32_t cpp_id;
uint64_t cpp_buf;
struct nfp_cpp *cpp = nsp->cpp;
- if (nsp->ver.minor < 13) {
- PMD_DRV_LOG(ERR, "NSP: Code 0x%04x with buffer not supported ABI %hu.%hu)",
- code, nsp->ver.major, nsp->ver.minor);
- return -EOPNOTSUPP;
- }
-
- err = nfp_cpp_readq(cpp, nfp_resource_cpp_id(nsp->res),
- nfp_resource_address(nsp->res) + NSP_DFLT_BUFFER_CONFIG,
- ®);
- if (err < 0)
- return err;
-
- max_size = RTE_MAX(in_size, out_size);
- if (FIELD_GET(NSP_DFLT_BUFFER_SIZE_MB, reg) * SZ_1M < max_size) {
- PMD_DRV_LOG(ERR, "NSP: default buffer too small for command 0x%04x (%llu < %lu)",
- code, FIELD_GET(NSP_DFLT_BUFFER_SIZE_MB, reg) * SZ_1M, max_size);
- return -EINVAL;
- }
-
err = nfp_cpp_readq(cpp, nfp_resource_cpp_id(nsp->res),
nfp_resource_address(nsp->res) + NSP_DFLT_BUFFER,
®);
if (err < 0)
return err;
- cpp_id = FIELD_GET(NSP_BUFFER_CPP, reg) << 8;
- cpp_buf = FIELD_GET(NSP_BUFFER_ADDRESS, reg);
+ cpp_id = FIELD_GET(NSP_DFLT_BUFFER_CPP, reg) << 8;
+ cpp_buf = FIELD_GET(NSP_DFLT_BUFFER_ADDRESS, reg);
- if (in_buf != NULL && in_size > 0) {
- err = nfp_cpp_write(cpp, cpp_id, cpp_buf, in_buf, in_size);
+ if (arg->in_buf != NULL && arg->in_size > 0) {
+ err = nfp_cpp_write(cpp, cpp_id, cpp_buf,
+ arg->in_buf, arg->in_size);
if (err < 0)
return err;
}
/* Zero out remaining part of the buffer */
- if (out_buf != NULL && out_size > 0 && out_size > in_size) {
- memset(out_buf, 0, out_size - in_size);
- err = nfp_cpp_write(cpp, cpp_id, cpp_buf + in_size, out_buf,
- out_size - in_size);
+ if (arg->out_buf != NULL && arg->out_size > arg->in_size) {
+ err = nfp_cpp_write(cpp, cpp_id, cpp_buf + arg->in_size,
+ arg->out_buf, arg->out_size - arg->in_size);
if (err < 0)
return err;
}
- ret = nfp_nsp_command(nsp, code, option, cpp_id, cpp_buf);
+ if (!FIELD_FIT(NSP_BUFFER_CPP, cpp_id >> 8) ||
+ !FIELD_FIT(NSP_BUFFER_ADDRESS, cpp_buf)) {
+ PMD_DRV_LOG(ERR, "Buffer out of reach %#08x %#016lx",
+ cpp_id, cpp_buf);
+ return -EINVAL;
+ }
+
+ arg->arg.buf = FIELD_PREP(NSP_BUFFER_CPP, cpp_id >> 8) |
+ FIELD_PREP(NSP_BUFFER_ADDRESS, cpp_buf);
+ ret = nfp_nsp_command_real(nsp, &arg->arg);
if (ret < 0) {
PMD_DRV_LOG(ERR, "NSP command failed");
return ret;
}
- if (out_buf != NULL && out_size > 0) {
- err = nfp_cpp_read(cpp, cpp_id, cpp_buf, out_buf, out_size);
+ if (arg->out_buf != NULL && arg->out_size > 0) {
+ err = nfp_cpp_read(cpp, cpp_id, cpp_buf,
+ arg->out_buf, arg->out_size);
if (err < 0)
return err;
}
@@ -381,6 +478,43 @@ nfp_nsp_command_buf(struct nfp_nsp *nsp,
return ret;
}
+#define SZ_1M 0x00100000
+#define SZ_4K 0x00001000
+
+static int
+nfp_nsp_command_buf(struct nfp_nsp *nsp,
+ struct nfp_nsp_command_buf_arg *arg)
+{
+ int err;
+ uint64_t reg;
+ uint32_t size;
+ uint32_t max_size;
+ struct nfp_cpp *cpp = nsp->cpp;
+
+ if (nsp->ver.minor < 13) {
+ PMD_DRV_LOG(ERR, "NSP: Code %#04x with buffer not supported ABI %hu.%hu)",
+ arg->arg.code, nsp->ver.major, nsp->ver.minor);
+ return -EOPNOTSUPP;
+ }
+
+ err = nfp_cpp_readq(cpp, nfp_resource_cpp_id(nsp->res),
+ nfp_resource_address(nsp->res) + NSP_DFLT_BUFFER_CONFIG,
+ ®);
+ if (err < 0)
+ return err;
+
+ max_size = RTE_MAX(arg->in_size, arg->out_size);
+ size = FIELD_GET(NSP_DFLT_BUFFER_SIZE_MB, reg) * SZ_1M +
+ FIELD_GET(NSP_DFLT_BUFFER_SIZE_4KB, reg) * SZ_4K;
+ if (size < max_size) {
+ PMD_DRV_LOG(ERR, "NSP: default buffer too small for command %#04x (%u < %u)",
+ arg->arg.code, size, max_size);
+ return -EINVAL;
+ }
+
+ return nfp_nsp_command_buf_def(nsp, arg);
+}
+
int
nfp_nsp_wait(struct nfp_nsp *state)
{
@@ -392,7 +526,7 @@ nfp_nsp_wait(struct nfp_nsp *state)
wait.tv_nsec = 25000000; /* 25ms */
for (;;) {
- err = nfp_nsp_command(state, SPCODE_NOOP, 0, 0, 0);
+ err = nfp_nsp_command(state, SPCODE_NOOP);
if (err != -EAGAIN)
break;
@@ -413,13 +547,57 @@ nfp_nsp_wait(struct nfp_nsp *state)
int
nfp_nsp_device_soft_reset(struct nfp_nsp *state)
{
- return nfp_nsp_command(state, SPCODE_SOFT_RESET, 0, 0, 0);
+ return nfp_nsp_command(state, SPCODE_SOFT_RESET);
}
int
nfp_nsp_mac_reinit(struct nfp_nsp *state)
{
- return nfp_nsp_command(state, SPCODE_MAC_INIT, 0, 0, 0);
+ return nfp_nsp_command(state, SPCODE_MAC_INIT);
+}
+
+static void
+nfp_nsp_load_fw_extended_msg(struct nfp_nsp *state,
+ uint32_t ret_val)
+{
+ uint32_t minor;
+ uint32_t major;
+ static const char * const major_msg[] = {
+ /* 0 */ "Firmware from driver loaded",
+ /* 1 */ "Firmware from flash loaded",
+ /* 2 */ "Firmware loading failure",
+ };
+ static const char * const minor_msg[] = {
+ /* 0 */ "",
+ /* 1 */ "no named partition on flash",
+ /* 2 */ "error reading from flash",
+ /* 3 */ "can not deflate",
+ /* 4 */ "not a trusted file",
+ /* 5 */ "can not parse FW file",
+ /* 6 */ "MIP not found in FW file",
+ /* 7 */ "null firmware name in MIP",
+ /* 8 */ "FW version none",
+ /* 9 */ "FW build number none",
+ /* 10 */ "no FW selection policy HWInfo key found",
+ /* 11 */ "static FW selection policy",
+ /* 12 */ "FW version has precedence",
+ /* 13 */ "different FW application load requested",
+ /* 14 */ "development build",
+ };
+
+ major = FIELD_GET(NFP_FW_LOAD_RET_MAJOR, ret_val);
+ minor = FIELD_GET(NFP_FW_LOAD_RET_MINOR, ret_val);
+
+ if (!nfp_nsp_has_stored_fw_load(state))
+ return;
+
+ if (major >= RTE_DIM(major_msg))
+ PMD_DRV_LOG(INFO, "FW loading status: %x", ret_val);
+ else if (minor >= RTE_DIM(minor_msg))
+ PMD_DRV_LOG(INFO, "%s, reason code: %d", major_msg[major], minor);
+ else
+ PMD_DRV_LOG(INFO, "%s%c %s", major_msg[major],
+ minor != 0 ? ',' : '.', minor_msg[minor]);
}
int
@@ -427,8 +605,24 @@ nfp_nsp_load_fw(struct nfp_nsp *state,
void *buf,
size_t size)
{
- return nfp_nsp_command_buf(state, SPCODE_FW_LOAD, size, buf, size,
- NULL, 0);
+ int ret;
+ struct nfp_nsp_command_buf_arg load_fw = {
+ {
+ .code = SPCODE_FW_LOAD,
+ .option = size,
+ .error_cb = nfp_nsp_load_fw_extended_msg,
+ },
+ .in_buf = buf,
+ .in_size = size,
+ };
+
+ ret = nfp_nsp_command_buf(state, &load_fw);
+ if (ret < 0)
+ return ret;
+
+ nfp_nsp_load_fw_extended_msg(state, ret);
+
+ return 0;
}
int
@@ -436,8 +630,16 @@ nfp_nsp_read_eth_table(struct nfp_nsp *state,
void *buf,
size_t size)
{
- return nfp_nsp_command_buf(state, SPCODE_ETH_RESCAN, size, NULL, 0,
- buf, size);
+ struct nfp_nsp_command_buf_arg eth_rescan = {
+ {
+ .code = SPCODE_ETH_RESCAN,
+ .option = size,
+ },
+ .out_buf = buf,
+ .out_size = size,
+ };
+
+ return nfp_nsp_command_buf(state, ð_rescan);
}
int
@@ -445,8 +647,16 @@ nfp_nsp_write_eth_table(struct nfp_nsp *state,
const void *buf,
size_t size)
{
- return nfp_nsp_command_buf(state, SPCODE_ETH_CONTROL, size, buf, size,
- NULL, 0);
+ struct nfp_nsp_command_buf_arg eth_ctrl = {
+ {
+ .code = SPCODE_ETH_CONTROL,
+ .option = size,
+ },
+ .in_buf = buf,
+ .in_size = size,
+ };
+
+ return nfp_nsp_command_buf(state, ð_ctrl);
}
int
@@ -454,8 +664,16 @@ nfp_nsp_read_identify(struct nfp_nsp *state,
void *buf,
size_t size)
{
- return nfp_nsp_command_buf(state, SPCODE_NSP_IDENTIFY, size, NULL, 0,
- buf, size);
+ struct nfp_nsp_command_buf_arg identify = {
+ {
+ .code = SPCODE_NSP_IDENTIFY,
+ .option = size,
+ },
+ .out_buf = buf,
+ .out_size = size,
+ };
+
+ return nfp_nsp_command_buf(state, &identify);
}
int
@@ -464,6 +682,14 @@ nfp_nsp_read_sensors(struct nfp_nsp *state,
void *buf,
size_t size)
{
- return nfp_nsp_command_buf(state, SPCODE_NSP_SENSORS, sensor_mask, NULL,
- 0, buf, size);
+ struct nfp_nsp_command_buf_arg sensors = {
+ {
+ .code = SPCODE_NSP_SENSORS,
+ .option = sensor_mask,
+ },
+ .out_buf = buf,
+ .out_size = size,
+ };
+
+ return nfp_nsp_command_buf(state, &sensors);
}
diff --git a/drivers/net/nfp/nfpcore/nfp_nsp.h b/drivers/net/nfp/nfpcore/nfp_nsp.h
index 14986a9130..fe52dffeb7 100644
--- a/drivers/net/nfp/nfpcore/nfp_nsp.h
+++ b/drivers/net/nfp/nfpcore/nfp_nsp.h
@@ -7,78 +7,8 @@
#define __NSP_NSP_H__
#include "nfp_cpp.h"
-#include "nfp_nsp.h"
-
-/* Offsets relative to the CSR base */
-#define NSP_STATUS 0x00
-#define NSP_STATUS_MAGIC GENMASK_ULL(63, 48)
-#define NSP_STATUS_MAJOR GENMASK_ULL(47, 44)
-#define NSP_STATUS_MINOR GENMASK_ULL(43, 32)
-#define NSP_STATUS_CODE GENMASK_ULL(31, 16)
-#define NSP_STATUS_RESULT GENMASK_ULL(15, 8)
-#define NSP_STATUS_BUSY RTE_BIT64(0)
-
-#define NSP_COMMAND 0x08
-#define NSP_COMMAND_OPTION GENMASK_ULL(63, 32)
-#define NSP_COMMAND_CODE GENMASK_ULL(31, 16)
-#define NSP_COMMAND_START RTE_BIT64(0)
-
-/* CPP address to retrieve the data from */
-#define NSP_BUFFER 0x10
-#define NSP_BUFFER_CPP GENMASK_ULL(63, 40)
-#define NSP_BUFFER_PCIE GENMASK_ULL(39, 38)
-#define NSP_BUFFER_ADDRESS GENMASK_ULL(37, 0)
-
-#define NSP_DFLT_BUFFER 0x18
-
-#define NSP_DFLT_BUFFER_CONFIG 0x20
-#define NSP_DFLT_BUFFER_SIZE_MB GENMASK_ULL(7, 0)
-
-#define NSP_MAGIC 0xab10
-#define NSP_MAJOR 0
-#define NSP_MINOR 8
-
-#define NSP_CODE_MAJOR GENMASK(15, 12)
-#define NSP_CODE_MINOR GENMASK(11, 0)
-
-enum nfp_nsp_cmd {
- SPCODE_NOOP = 0, /* No operation */
- SPCODE_SOFT_RESET = 1, /* Soft reset the NFP */
- SPCODE_FW_DEFAULT = 2, /* Load default (UNDI) FW */
- SPCODE_PHY_INIT = 3, /* Initialize the PHY */
- SPCODE_MAC_INIT = 4, /* Initialize the MAC */
- SPCODE_PHY_RXADAPT = 5, /* Re-run PHY RX Adaptation */
- SPCODE_FW_LOAD = 6, /* Load fw from buffer, len in option */
- SPCODE_ETH_RESCAN = 7, /* Rescan ETHs, write ETH_TABLE to buf */
- SPCODE_ETH_CONTROL = 8, /* Update media config from buffer */
- SPCODE_NSP_SENSORS = 12, /* Read NSP sensor(s) */
- SPCODE_NSP_IDENTIFY = 13, /* Read NSP version */
-};
-
-static const struct {
- int code;
- const char *msg;
-} nsp_errors[] = {
- { 6010, "could not map to phy for port" },
- { 6011, "not an allowed rate/lanes for port" },
- { 6012, "not an allowed rate/lanes for port" },
- { 6013, "high/low error, change other port first" },
- { 6014, "config not found in flash" },
-};
-struct nfp_nsp {
- struct nfp_cpp *cpp;
- struct nfp_resource *res;
- struct {
- uint16_t major;
- uint16_t minor;
- } ver;
-
- /* Eth table config state */
- int modified;
- unsigned int idx;
- void *entries;
-};
+struct nfp_nsp;
struct nfp_nsp *nfp_nsp_open(struct nfp_cpp *cpp);
void nfp_nsp_close(struct nfp_nsp *state);
@@ -92,18 +22,61 @@ int nfp_nsp_read_identify(struct nfp_nsp *state, void *buf, size_t size);
int nfp_nsp_read_sensors(struct nfp_nsp *state, uint32_t sensor_mask,
void *buf, size_t size);
-static inline int
+static inline bool
nfp_nsp_has_mac_reinit(struct nfp_nsp *state)
{
return nfp_nsp_get_abi_ver_minor(state) > 20;
}
+static inline bool
+nfp_nsp_has_stored_fw_load(struct nfp_nsp *state)
+{
+ return nfp_nsp_get_abi_ver_minor(state) > 23;
+}
+
+static inline bool
+nfp_nsp_has_hwinfo_lookup(struct nfp_nsp *state)
+{
+ return nfp_nsp_get_abi_ver_minor(state) > 24;
+}
+
+static inline bool
+nfp_nsp_has_hwinfo_set(struct nfp_nsp *state)
+{
+ return nfp_nsp_get_abi_ver_minor(state) > 25;
+}
+
+static inline bool
+nfp_nsp_has_fw_loaded(struct nfp_nsp *state)
+{
+ return nfp_nsp_get_abi_ver_minor(state) > 25;
+}
+
+static inline bool
+nfp_nsp_has_versions(struct nfp_nsp *state)
+{
+ return nfp_nsp_get_abi_ver_minor(state) > 27;
+}
+
+static inline bool
+nfp_nsp_has_read_module_eeprom(struct nfp_nsp *state)
+{
+ return nfp_nsp_get_abi_ver_minor(state) > 28;
+}
+
+static inline bool
+nfp_nsp_has_read_media(struct nfp_nsp *state)
+{
+ return nfp_nsp_get_abi_ver_minor(state) > 33;
+}
+
enum nfp_eth_interface {
NFP_INTERFACE_NONE = 0,
NFP_INTERFACE_SFP = 1,
NFP_INTERFACE_SFPP = 10,
NFP_INTERFACE_SFP28 = 28,
NFP_INTERFACE_QSFP = 40,
+ NFP_INTERFACE_RJ45 = 45,
NFP_INTERFACE_CXP = 100,
NFP_INTERFACE_QSFP28 = 112,
};
@@ -151,6 +124,7 @@ struct nfp_eth_table {
enum nfp_eth_media media; /**< Media type of the @interface */
enum nfp_eth_fec fec; /**< Forward Error Correction mode */
+ enum nfp_eth_fec act_fec; /**< Active Forward Error Correction mode */
enum nfp_eth_aneg aneg; /**< Auto negotiation mode */
struct rte_ether_addr mac_addr; /**< Interface MAC address */
@@ -159,17 +133,18 @@ struct nfp_eth_table {
/** Id of interface within port (for split ports) */
uint8_t label_subport;
- int enabled; /**< Enable port */
- int tx_enabled; /**< Enable TX */
- int rx_enabled; /**< Enable RX */
+ bool enabled; /**< Enable port */
+ bool tx_enabled; /**< Enable TX */
+ bool rx_enabled; /**< Enable RX */
+ bool supp_aneg; /**< Support auto negotiation */
- int override_changed; /**< Media reconfig pending */
+ bool override_changed; /**< Media reconfig pending */
uint8_t port_type; /**< One of %PORT_* */
/** Sum of lanes of all subports of this port */
uint32_t port_lanes;
- int is_split; /**< Split port */
+ bool is_split; /**< Split port */
uint32_t fec_modes_supported; /**< Bitmap of FEC modes supported */
} ports[]; /**< Table of ports */
@@ -177,8 +152,8 @@ struct nfp_eth_table {
struct nfp_eth_table *nfp_eth_read_ports(struct nfp_cpp *cpp);
-int nfp_eth_set_mod_enable(struct nfp_cpp *cpp, uint32_t idx, int enable);
-int nfp_eth_set_configured(struct nfp_cpp *cpp, uint32_t idx, int configed);
+int nfp_eth_set_mod_enable(struct nfp_cpp *cpp, uint32_t idx, bool enable);
+int nfp_eth_set_configured(struct nfp_cpp *cpp, uint32_t idx, bool configured);
int nfp_eth_set_fec(struct nfp_cpp *cpp, uint32_t idx, enum nfp_eth_fec mode);
int nfp_nsp_read_eth_table(struct nfp_nsp *state, void *buf, size_t size);
@@ -187,12 +162,13 @@ int nfp_nsp_write_eth_table(struct nfp_nsp *state, const void *buf,
void nfp_nsp_config_set_state(struct nfp_nsp *state, void *entries,
uint32_t idx);
void nfp_nsp_config_clear_state(struct nfp_nsp *state);
-void nfp_nsp_config_set_modified(struct nfp_nsp *state, int modified);
+void nfp_nsp_config_set_modified(struct nfp_nsp *state, bool modified);
void *nfp_nsp_config_entries(struct nfp_nsp *state);
-int nfp_nsp_config_modified(struct nfp_nsp *state);
+struct nfp_cpp *nfp_nsp_cpp(struct nfp_nsp *state);
+bool nfp_nsp_config_modified(struct nfp_nsp *state);
uint32_t nfp_nsp_config_idx(struct nfp_nsp *state);
-static inline int
+static inline bool
nfp_eth_can_support_fec(struct nfp_eth_table_port *eth_port)
{
return eth_port->fec_modes_supported != 0;
diff --git a/drivers/net/nfp/nfpcore/nfp_nsp_cmds.c b/drivers/net/nfp/nfpcore/nfp_nsp_cmds.c
index 429f639fa2..86956f4330 100644
--- a/drivers/net/nfp/nfpcore/nfp_nsp_cmds.c
+++ b/drivers/net/nfp/nfpcore/nfp_nsp_cmds.c
@@ -3,12 +3,8 @@
* All rights reserved.
*/
-#include <stdio.h>
-#include <rte_byteorder.h>
-#include "nfp_cpp.h"
#include "nfp_logs.h"
#include "nfp_nsp.h"
-#include "nfp_nffw.h"
struct nsp_identify {
uint8_t version[40];
diff --git a/drivers/net/nfp/nfpcore/nfp_nsp_eth.c b/drivers/net/nfp/nfpcore/nfp_nsp_eth.c
index 355d907f4d..996fd4b44a 100644
--- a/drivers/net/nfp/nfpcore/nfp_nsp_eth.c
+++ b/drivers/net/nfp/nfpcore/nfp_nsp_eth.c
@@ -3,10 +3,6 @@
* All rights reserved.
*/
-#include <stdio.h>
-#include <rte_common.h>
-#include <rte_byteorder.h>
-#include "nfp_cpp.h"
#include "nfp_logs.h"
#include "nfp_nsp.h"
#include "nfp_platform.h"
@@ -21,6 +17,7 @@
#define NSP_ETH_PORT_PHYLABEL GENMASK_ULL(59, 54)
#define NSP_ETH_PORT_FEC_SUPP_BASER RTE_BIT64(60)
#define NSP_ETH_PORT_FEC_SUPP_RS RTE_BIT64(61)
+#define NSP_ETH_PORT_SUPP_ANEG RTE_BIT64(63)
#define NSP_ETH_PORT_LANES_MASK rte_cpu_to_le_64(NSP_ETH_PORT_LANES)
@@ -34,6 +31,7 @@
#define NSP_ETH_STATE_OVRD_CHNG RTE_BIT64(22)
#define NSP_ETH_STATE_ANEG GENMASK_ULL(25, 23)
#define NSP_ETH_STATE_FEC GENMASK_ULL(27, 26)
+#define NSP_ETH_STATE_ACT_FEC GENMASK_ULL(29, 28)
#define NSP_ETH_CTRL_CONFIGURED RTE_BIT64(0)
#define NSP_ETH_CTRL_ENABLED RTE_BIT64(1)
@@ -54,26 +52,12 @@
#define PORT_NONE 0xef
#define PORT_OTHER 0xff
-#define SPEED_10 10
-#define SPEED_100 100
-#define SPEED_1000 1000
-#define SPEED_2500 2500
-#define SPEED_5000 5000
-#define SPEED_10000 10000
-#define SPEED_14000 14000
-#define SPEED_20000 20000
-#define SPEED_25000 25000
-#define SPEED_40000 40000
-#define SPEED_50000 50000
-#define SPEED_56000 56000
-#define SPEED_100000 100000
-
enum nfp_eth_raw {
NSP_ETH_RAW_PORT = 0,
NSP_ETH_RAW_STATE,
NSP_ETH_RAW_MAC,
NSP_ETH_RAW_CONTROL,
- NSP_ETH_NUM_RAW
+ NSP_ETH_NUM_RAW,
};
enum nfp_eth_rate {
@@ -100,12 +84,12 @@ static const struct {
enum nfp_eth_rate rate;
uint32_t speed;
} nsp_eth_rate_tbl[] = {
- { RATE_INVALID, 0, },
- { RATE_10M, SPEED_10, },
- { RATE_100M, SPEED_100, },
- { RATE_1G, SPEED_1000, },
- { RATE_10G, SPEED_10000, },
- { RATE_25G, SPEED_25000, },
+ { RATE_INVALID, RTE_ETH_SPEED_NUM_NONE, },
+ { RATE_10M, RTE_ETH_SPEED_NUM_10M, },
+ { RATE_100M, RTE_ETH_SPEED_NUM_100M, },
+ { RATE_1G, RTE_ETH_SPEED_NUM_1G, },
+ { RATE_10G, RTE_ETH_SPEED_NUM_10G, },
+ { RATE_25G, RTE_ETH_SPEED_NUM_25G, },
};
static uint32_t
@@ -192,7 +176,14 @@ nfp_eth_port_translate(struct nfp_nsp *nsp,
if (dst->fec_modes_supported != 0)
dst->fec_modes_supported |= NFP_FEC_AUTO | NFP_FEC_DISABLED;
- dst->fec = 1 << FIELD_GET(NSP_ETH_STATE_FEC, state);
+ dst->fec = FIELD_GET(NSP_ETH_STATE_FEC, state);
+ dst->act_fec = dst->fec;
+
+ if (nfp_nsp_get_abi_ver_minor(nsp) < 33)
+ return;
+
+ dst->act_fec = FIELD_GET(NSP_ETH_STATE_ACT_FEC, state);
+ dst->supp_aneg = FIELD_GET(NSP_ETH_PORT_SUPP_ANEG, port);
}
static void
@@ -221,7 +212,7 @@ nfp_eth_calc_port_geometry(struct nfp_eth_table *table)
table->ports[i].label_port,
table->ports[i].label_subport);
- table->ports[i].is_split = 1;
+ table->ports[i].is_split = true;
}
}
}
@@ -232,6 +223,9 @@ nfp_eth_calc_port_type(struct nfp_eth_table_port *entry)
if (entry->interface == NFP_INTERFACE_NONE) {
entry->port_type = PORT_NONE;
return;
+ } else if (entry->interface == NFP_INTERFACE_RJ45) {
+ entry->port_type = PORT_TP;
+ return;
}
if (entry->media == NFP_MEDIA_FIBRE)
@@ -250,7 +244,6 @@ nfp_eth_read_ports_real(struct nfp_nsp *nsp)
uint32_t table_sz;
struct nfp_eth_table *table;
union eth_table_entry *entries;
- const struct rte_ether_addr *mac;
entries = rte_zmalloc(NULL, NSP_ETH_TABLE_SIZE, 0);
if (entries == NULL)
@@ -262,16 +255,9 @@ nfp_eth_read_ports_real(struct nfp_nsp *nsp)
goto err;
}
- /*
- * The NFP3800 NIC support 8 ports, but only 2 ports are valid,
- * the rest 6 ports mac are all 0, ensure we don't use these port
- */
- for (i = 0; i < NSP_ETH_MAX_COUNT; i++) {
- mac = (const struct rte_ether_addr *)entries[i].mac_addr;
- if ((entries[i].port & NSP_ETH_PORT_LANES_MASK) != 0 &&
- !rte_is_zero_ether_addr(mac))
+ for (i = 0; i < NSP_ETH_MAX_COUNT; i++)
+ if ((entries[i].port & NSP_ETH_PORT_LANES_MASK) != 0)
cnt++;
- }
/*
* Some versions of flash will give us 0 instead of port count. For
@@ -291,11 +277,8 @@ nfp_eth_read_ports_real(struct nfp_nsp *nsp)
table->count = cnt;
for (i = 0, j = 0; i < NSP_ETH_MAX_COUNT; i++) {
- mac = (const struct rte_ether_addr *)entries[i].mac_addr;
- if ((entries[i].port & NSP_ETH_PORT_LANES_MASK) != 0 &&
- !rte_is_zero_ether_addr(mac))
- nfp_eth_port_translate(nsp, &entries[i], i,
- &table->ports[j++]);
+ if ((entries[i].port & NSP_ETH_PORT_LANES_MASK) != 0)
+ nfp_eth_port_translate(nsp, &entries[i], i, &table->ports[j++]);
}
nfp_eth_calc_port_geometry(table);
@@ -436,7 +419,7 @@ nfp_eth_config_commit_end(struct nfp_nsp *nsp)
int
nfp_eth_set_mod_enable(struct nfp_cpp *cpp,
uint32_t idx,
- int enable)
+ bool enable)
{
uint64_t reg;
struct nfp_nsp *nsp;
@@ -444,7 +427,7 @@ nfp_eth_set_mod_enable(struct nfp_cpp *cpp,
nsp = nfp_eth_config_start(cpp, idx);
if (nsp == NULL)
- return -1;
+ return -EIO;
entries = nfp_nsp_config_entries(nsp);
@@ -456,7 +439,7 @@ nfp_eth_set_mod_enable(struct nfp_cpp *cpp,
reg |= FIELD_PREP(NSP_ETH_CTRL_ENABLED, enable);
entries[idx].control = rte_cpu_to_le_64(reg);
- nfp_nsp_config_set_modified(nsp, 1);
+ nfp_nsp_config_set_modified(nsp, true);
}
return nfp_eth_config_commit_end(nsp);
@@ -480,7 +463,7 @@ nfp_eth_set_mod_enable(struct nfp_cpp *cpp,
int
nfp_eth_set_configured(struct nfp_cpp *cpp,
uint32_t idx,
- int configured)
+ bool configured)
{
uint64_t reg;
struct nfp_nsp *nsp;
@@ -509,7 +492,7 @@ nfp_eth_set_configured(struct nfp_cpp *cpp,
reg |= FIELD_PREP(NSP_ETH_CTRL_CONFIGURED, configured);
entries[idx].control = rte_cpu_to_le_64(reg);
- nfp_nsp_config_set_modified(nsp, 1);
+ nfp_nsp_config_set_modified(nsp, true);
}
return nfp_eth_config_commit_end(nsp);
@@ -547,7 +530,7 @@ nfp_eth_set_bit_config(struct nfp_nsp *nsp,
entries[idx].control |= rte_cpu_to_le_64(ctrl_bit);
- nfp_nsp_config_set_modified(nsp, 1);
+ nfp_nsp_config_set_modified(nsp, true);
return 0;
}
--
2.39.1
* [PATCH 07/27] net/nfp: standard the comment style
2023-08-24 11:09 1% ` [PATCH 02/27] net/nfp: unify the indent coding style Chaoyong He
2023-08-24 11:09 3% ` [PATCH 05/27] net/nfp: standard the local variable " Chaoyong He
@ 2023-08-24 11:09 1% ` Chaoyong He
2023-08-24 11:09 5% ` [PATCH 19/27] net/nfp: refact the nsp module Chaoyong He
3 siblings, 0 replies; 200+ results
From: Chaoyong He @ 2023-08-24 11:09 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Follow the DPDK coding style and use the kdoc comment style.
Also add some comments to help understand the logic.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/nfpcore/nfp_cpp.h | 504 ++++-----------------
drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c | 39 +-
drivers/net/nfp/nfpcore/nfp_cppcore.c | 484 ++++++++++++++++----
drivers/net/nfp/nfpcore/nfp_hwinfo.c | 21 +-
drivers/net/nfp/nfpcore/nfp_hwinfo.h | 2 +
drivers/net/nfp/nfpcore/nfp_mip.c | 43 +-
drivers/net/nfp/nfpcore/nfp_mutex.c | 69 +--
drivers/net/nfp/nfpcore/nfp_nffw.c | 49 +-
drivers/net/nfp/nfpcore/nfp_nffw.h | 6 +-
drivers/net/nfp/nfpcore/nfp_nsp.c | 53 ++-
drivers/net/nfp/nfpcore/nfp_nsp.h | 108 ++---
drivers/net/nfp/nfpcore/nfp_nsp_eth.c | 170 ++++---
drivers/net/nfp/nfpcore/nfp_resource.c | 103 +++--
drivers/net/nfp/nfpcore/nfp_resource.h | 28 +-
drivers/net/nfp/nfpcore/nfp_rtsym.c | 59 ++-
drivers/net/nfp/nfpcore/nfp_rtsym.h | 12 +-
drivers/net/nfp/nfpcore/nfp_target.c | 2 +-
17 files changed, 888 insertions(+), 864 deletions(-)
diff --git a/drivers/net/nfp/nfpcore/nfp_cpp.h b/drivers/net/nfp/nfpcore/nfp_cpp.h
index 139752f85a..82189e9910 100644
--- a/drivers/net/nfp/nfpcore/nfp_cpp.h
+++ b/drivers/net/nfp/nfpcore/nfp_cpp.h
@@ -10,9 +10,7 @@
struct nfp_cpp_mutex;
-/*
- * NFP CPP handle
- */
+/* NFP CPP handle */
struct nfp_cpp {
uint32_t model;
uint32_t interface;
@@ -37,9 +35,7 @@ struct nfp_cpp {
int driver_lock_needed;
};
-/*
- * NFP CPP device area handle
- */
+/* NFP CPP device area handle */
struct nfp_cpp_area {
struct nfp_cpp *cpp;
char *name;
@@ -127,35 +123,45 @@ struct nfp_cpp_operations {
#define NFP_CPP_TARGET_ID_MASK 0x1f
-/*
+/**
* Pack target, token, and action into a CPP ID.
*
* Create a 32-bit CPP identifier representing the access to be made.
* These identifiers are used as parameters to other NFP CPP functions.
* Some CPP devices may allow wildcard identifiers to be specified.
*
- * @target NFP CPP target id
- * @action NFP CPP action id
- * @token NFP CPP token id
+ * @param target
+ * NFP CPP target id
+ * @param action
+ * NFP CPP action id
+ * @param token
+ * NFP CPP token id
*
- * @return NFP CPP ID
+ * @return
+ * NFP CPP ID
*/
#define NFP_CPP_ID(target, action, token) \
((((target) & 0x7f) << 24) | (((token) & 0xff) << 16) | \
(((action) & 0xff) << 8))
-/*
+/**
* Pack target, token, action, and island into a CPP ID.
- * @target NFP CPP target id
- * @action NFP CPP action id
- * @token NFP CPP token id
- * @island NFP CPP island id
*
* Create a 32-bit CPP identifier representing the access to be made.
* These identifiers are used as parameters to other NFP CPP functions.
* Some CPP devices may allow wildcard identifiers to be specified.
*
- * @return NFP CPP ID
+ * @param target
+ * NFP CPP target id
+ * @param action
+ * NFP CPP action id
+ * @param token
+ * NFP CPP token id
+ * @param island
+ * NFP CPP island id
+ *
+ * @return
+ * NFP CPP ID
*/
#define NFP_CPP_ISLAND_ID(target, action, token, island) \
((((target) & 0x7f) << 24) | (((token) & 0xff) << 16) | \
@@ -163,9 +169,12 @@ struct nfp_cpp_operations {
/**
* Return the NFP CPP target of a NFP CPP ID
- * @id NFP CPP ID
*
- * @return NFP CPP target
+ * @param id
+ * NFP CPP ID
+ *
+ * @return
+ * NFP CPP target
*/
static inline uint8_t
NFP_CPP_ID_TARGET_of(uint32_t id)
@@ -173,11 +182,14 @@ NFP_CPP_ID_TARGET_of(uint32_t id)
return (id >> 24) & NFP_CPP_TARGET_ID_MASK;
}
-/*
+/**
* Return the NFP CPP token of a NFP CPP ID
- * @id NFP CPP ID
*
- * @return NFP CPP token
+ * @param id
+ * NFP CPP ID
+ *
+ * @return
+ * NFP CPP token
*/
static inline uint8_t
NFP_CPP_ID_TOKEN_of(uint32_t id)
@@ -185,11 +197,14 @@ NFP_CPP_ID_TOKEN_of(uint32_t id)
return (id >> 16) & 0xff;
}
-/*
+/**
* Return the NFP CPP action of a NFP CPP ID
- * @id NFP CPP ID
*
- * @return NFP CPP action
+ * @param id
+ * NFP CPP ID
+ *
+ * @return
+ * NFP CPP action
*/
static inline uint8_t
NFP_CPP_ID_ACTION_of(uint32_t id)
@@ -197,11 +212,14 @@ NFP_CPP_ID_ACTION_of(uint32_t id)
return (id >> 8) & 0xff;
}
-/*
+/**
* Return the NFP CPP island of a NFP CPP ID
- * @id NFP CPP ID
*
- * @return NFP CPP island
+ * @param id
+ * NFP CPP ID
+ *
+ * @return
+ * NFP CPP island
*/
static inline uint8_t
NFP_CPP_ID_ISLAND_of(uint32_t id)
@@ -215,109 +233,57 @@ NFP_CPP_ID_ISLAND_of(uint32_t id)
*/
const struct nfp_cpp_operations *nfp_cpp_transport_operations(void);
-/*
- * Set the model id
- *
- * @param cpp NFP CPP operations structure
- * @param model Model ID
- */
void nfp_cpp_model_set(struct nfp_cpp *cpp, uint32_t model);
-/*
- * Set the private instance owned data of a nfp_cpp struct
- *
- * @param cpp NFP CPP operations structure
- * @param interface Interface ID
- */
void nfp_cpp_interface_set(struct nfp_cpp *cpp, uint32_t interface);
-/*
- * Set the private instance owned data of a nfp_cpp struct
- *
- * @param cpp NFP CPP operations structure
- * @param serial NFP serial byte array
- * @param len Length of the serial byte array
- */
int nfp_cpp_serial_set(struct nfp_cpp *cpp, const uint8_t *serial,
size_t serial_len);
-/*
- * Set the private data of the nfp_cpp instance
- *
- * @param cpp NFP CPP operations structure
- * @return Opaque device pointer
- */
void nfp_cpp_priv_set(struct nfp_cpp *cpp, void *priv);
-/*
- * Return the private data of the nfp_cpp instance
- *
- * @param cpp NFP CPP operations structure
- * @return Opaque device pointer
- */
void *nfp_cpp_priv(struct nfp_cpp *cpp);
-/*
- * Get the privately allocated portion of a NFP CPP area handle
- *
- * @param cpp_area NFP CPP area handle
- * @return Pointer to the private area, or NULL on failure
- */
void *nfp_cpp_area_priv(struct nfp_cpp_area *cpp_area);
uint32_t __nfp_cpp_model_autodetect(struct nfp_cpp *cpp, uint32_t *model);
-/*
- * NFP CPP core interface for CPP clients.
- */
-
-/*
- * Open a NFP CPP handle to a CPP device
- *
- * @param[in] id 0-based ID for the CPP interface to use
- *
- * @return NFP CPP handle, or NULL on failure.
- */
+/* NFP CPP core interface for CPP clients */
struct nfp_cpp *nfp_cpp_from_device_name(struct rte_pci_device *dev,
int driver_lock_needed);
-/*
- * Free a NFP CPP handle
- *
- * @param[in] cpp NFP CPP handle
- */
void nfp_cpp_free(struct nfp_cpp *cpp);
#define NFP_CPP_MODEL_INVALID 0xffffffff
-/*
- * NFP_CPP_MODEL_CHIP_of - retrieve the chip ID from the model ID
+/**
+ * Retrieve the chip ID from the model ID
*
* The chip ID is a 16-bit BCD+A-F encoding for the chip type.
*
- * @param[in] model NFP CPP model id
- * @return NFP CPP chip id
+ * @param model
+ * NFP CPP model id
+ *
+ * @return
+ * NFP CPP chip id
*/
#define NFP_CPP_MODEL_CHIP_of(model) (((model) >> 16) & 0xffff)
-/*
- * NFP_CPP_MODEL_IS_6000 - Check for the NFP6000 family of devices
+/**
+ * Check for the NFP6000 family of devices
*
* NOTE: The NFP4000 series is considered as a NFP6000 series variant.
*
- * @param[in] model NFP CPP model id
- * @return true if model is in the NFP6000 family, false otherwise.
+ * @param model
+ * NFP CPP model id
+ *
+ * @return
+ * true if model is in the NFP6000 family, false otherwise.
*/
#define NFP_CPP_MODEL_IS_6000(model) \
((NFP_CPP_MODEL_CHIP_of(model) >= 0x3800) && \
(NFP_CPP_MODEL_CHIP_of(model) < 0x7000))
-/*
- * nfp_cpp_model - Retrieve the Model ID of the NFP
- *
- * @param[in] cpp NFP CPP handle
- * @return NFP CPP Model ID
- */
uint32_t nfp_cpp_model(struct nfp_cpp *cpp);
/*
@@ -330,7 +296,7 @@ uint32_t nfp_cpp_model(struct nfp_cpp *cpp);
#define NFP_CPP_INTERFACE_TYPE_RPC 0x3
#define NFP_CPP_INTERFACE_TYPE_ILA 0x4
-/*
+/**
* Construct a 16-bit NFP Interface ID
*
* Interface IDs consists of 4 bits of interface type, 4 bits of unit
@@ -340,422 +306,138 @@ uint32_t nfp_cpp_model(struct nfp_cpp *cpp);
* which use the MU Atomic CompareAndWrite operation - hence the limit to 16
* bits to be able to use the NFP Interface ID as a lock owner.
*
- * @param[in] type NFP Interface Type
- * @param[in] unit Unit identifier for the interface type
- * @param[in] channel Channel identifier for the interface unit
- * @return Interface ID
+ * @param type
+ * NFP Interface Type
+ * @param unit
+ * Unit identifier for the interface type
+ * @param channel
+ * Channel identifier for the interface unit
+ *
+ * @return
+ * Interface ID
*/
#define NFP_CPP_INTERFACE(type, unit, channel) \
((((type) & 0xf) << 12) | \
(((unit) & 0xf) << 8) | \
(((channel) & 0xff) << 0))
-/*
+/**
* Get the interface type of a NFP Interface ID
- * @param[in] interface NFP Interface ID
- * @return NFP Interface ID's type
+ *
+ * @param interface
+ * NFP Interface ID
+ *
+ * @return
+ * NFP Interface ID's type
*/
#define NFP_CPP_INTERFACE_TYPE_of(interface) (((interface) >> 12) & 0xf)
-/*
+/**
* Get the interface unit of a NFP Interface ID
- * @param[in] interface NFP Interface ID
- * @return NFP Interface ID's unit
+ *
+ * @param interface
+ * NFP Interface ID
+ *
+ * @return
+ * NFP Interface ID's unit
*/
#define NFP_CPP_INTERFACE_UNIT_of(interface) (((interface) >> 8) & 0xf)
-/*
+/**
* Get the interface channel of a NFP Interface ID
- * @param[in] interface NFP Interface ID
- * @return NFP Interface ID's channel
+ *
+ * @param interface
+ * NFP Interface ID
+ *
+ * @return
+ * NFP Interface ID's channel
*/
#define NFP_CPP_INTERFACE_CHANNEL_of(interface) (((interface) >> 0) & 0xff)
-/*
- * Retrieve the Interface ID of the NFP
- * @param[in] cpp NFP CPP handle
- * @return NFP CPP Interface ID
- */
+
uint16_t nfp_cpp_interface(struct nfp_cpp *cpp);
-/*
- * Retrieve the NFP Serial Number (unique per NFP)
- * @param[in] cpp NFP CPP handle
- * @param[out] serial Pointer to reference the serial number array
- *
- * @return size of the NFP6000 serial number, in bytes
- */
int nfp_cpp_serial(struct nfp_cpp *cpp, const uint8_t **serial);
-/*
- * Allocate a NFP CPP area handle, as an offset into a CPP ID
- * @param[in] cpp NFP CPP handle
- * @param[in] cpp_id NFP CPP ID
- * @param[in] address Offset into the NFP CPP ID address space
- * @param[in] size Size of the area to reserve
- *
- * @return NFP CPP handle, or NULL on failure.
- */
struct nfp_cpp_area *nfp_cpp_area_alloc(struct nfp_cpp *cpp, uint32_t cpp_id,
uint64_t address, size_t size);
-/*
- * Allocate a NFP CPP area handle, as an offset into a CPP ID, by a named owner
- * @param[in] cpp NFP CPP handle
- * @param[in] cpp_id NFP CPP ID
- * @param[in] name Name of owner of the area
- * @param[in] address Offset into the NFP CPP ID address space
- * @param[in] size Size of the area to reserve
- *
- * @return NFP CPP handle, or NULL on failure.
- */
struct nfp_cpp_area *nfp_cpp_area_alloc_with_name(struct nfp_cpp *cpp,
uint32_t cpp_id, const char *name, uint64_t address,
uint32_t size);
-/*
- * Free an allocated NFP CPP area handle
- * @param[in] area NFP CPP area handle
- */
void nfp_cpp_area_free(struct nfp_cpp_area *area);
-/*
- * Acquire the resources needed to access the NFP CPP area handle
- *
- * @param[in] area NFP CPP area handle
- *
- * @return 0 on success, -1 on failure.
- */
int nfp_cpp_area_acquire(struct nfp_cpp_area *area);
-/*
- * Release the resources needed to access the NFP CPP area handle
- *
- * @param[in] area NFP CPP area handle
- */
void nfp_cpp_area_release(struct nfp_cpp_area *area);
-/*
- * Allocate, then acquire the resources needed to access the NFP CPP area handle
- * @param[in] cpp NFP CPP handle
- * @param[in] cpp_id NFP CPP ID
- * @param[in] address Offset into the NFP CPP ID address space
- * @param[in] size Size of the area to reserve
- *
- * @return NFP CPP handle, or NULL on failure.
- */
struct nfp_cpp_area *nfp_cpp_area_alloc_acquire(struct nfp_cpp *cpp,
uint32_t cpp_id, uint64_t address, size_t size);
-/*
- * Release the resources, then free the NFP CPP area handle
- * @param[in] area NFP CPP area handle
- */
void nfp_cpp_area_release_free(struct nfp_cpp_area *area);
uint8_t *nfp_cpp_map_area(struct nfp_cpp *cpp, uint32_t cpp_id,
uint64_t addr, uint32_t size, struct nfp_cpp_area **area);
-/*
- * Read from a NFP CPP area handle into a buffer. The area must be acquired with
- * 'nfp_cpp_area_acquire()' before calling this operation.
- *
- * @param[in] area NFP CPP area handle
- * @param[in] offset Offset into the area
- * @param[in] buffer Location of buffer to receive the data
- * @param[in] length Length of the data to read
- *
- * @return bytes read on success, negative value on failure.
- *
- */
int nfp_cpp_area_read(struct nfp_cpp_area *area, uint32_t offset,
void *buffer, size_t length);
-/*
- * Write to a NFP CPP area handle from a buffer. The area must be acquired with
- * 'nfp_cpp_area_acquire()' before calling this operation.
- *
- * @param[in] area NFP CPP area handle
- * @param[in] offset Offset into the area
- * @param[in] buffer Location of buffer that holds the data
- * @param[in] length Length of the data to read
- *
- * @return bytes written on success, negative value on failure.
- */
int nfp_cpp_area_write(struct nfp_cpp_area *area, uint32_t offset,
const void *buffer, size_t length);
-/*
- * nfp_cpp_area_iomem() - get IOMEM region for CPP area
- * @area: CPP area handle
- *
- * Returns an iomem pointer for use with readl()/writel() style operations.
- *
- * NOTE: Area must have been locked down with an 'acquire'.
- *
- * Return: pointer to the area, or NULL
- */
void *nfp_cpp_area_iomem(struct nfp_cpp_area *area);
-/*
- * Get the NFP CPP handle that is the parent of a NFP CPP area handle
- *
- * @param cpp_area NFP CPP area handle
- * @return NFP CPP handle
- */
struct nfp_cpp *nfp_cpp_area_cpp(struct nfp_cpp_area *cpp_area);
-/*
- * Get the name passed during allocation of the NFP CPP area handle
- *
- * @param cpp_area NFP CPP area handle
- * @return Pointer to the area's name
- */
const char *nfp_cpp_area_name(struct nfp_cpp_area *cpp_area);
-/*
- * Read a block of data from a NFP CPP ID
- *
- * @param[in] cpp NFP CPP handle
- * @param[in] cpp_id NFP CPP ID
- * @param[in] address Offset into the NFP CPP ID address space
- * @param[in] kernel_vaddr Buffer to copy read data to
- * @param[in] length Size of the area to reserve
- *
- * @return bytes read on success, -1 on failure.
- */
int nfp_cpp_read(struct nfp_cpp *cpp, uint32_t cpp_id,
uint64_t address, void *kernel_vaddr, size_t length);
-/*
- * Write a block of data to a NFP CPP ID
- *
- * @param[in] cpp NFP CPP handle
- * @param[in] cpp_id NFP CPP ID
- * @param[in] address Offset into the NFP CPP ID address space
- * @param[in] kernel_vaddr Buffer to copy write data from
- * @param[in] length Size of the area to reserve
- *
- * @return bytes written on success, -1 on failure.
- */
int nfp_cpp_write(struct nfp_cpp *cpp, uint32_t cpp_id,
uint64_t address, const void *kernel_vaddr, size_t length);
-/*
- * Read a single 32-bit value from a NFP CPP area handle
- *
- * @param area NFP CPP area handle
- * @param offset offset into NFP CPP area handle
- * @param value output value
- *
- * The area must be acquired with 'nfp_cpp_area_acquire()' before calling this
- * operation.
- *
- * NOTE: offset must be 32-bit aligned.
- *
- * @return 0 on success, or -1 on error.
- */
int nfp_cpp_area_readl(struct nfp_cpp_area *area, uint32_t offset,
uint32_t *value);
-/*
- * Write a single 32-bit value to a NFP CPP area handle
- *
- * @param area NFP CPP area handle
- * @param offset offset into NFP CPP area handle
- * @param value value to write
- *
- * The area must be acquired with 'nfp_cpp_area_acquire()' before calling this
- * operation.
- *
- * NOTE: offset must be 32-bit aligned.
- *
- * @return 0 on success, or -1 on error.
- */
int nfp_cpp_area_writel(struct nfp_cpp_area *area, uint32_t offset,
uint32_t value);
-/*
- * Read a single 64-bit value from a NFP CPP area handle
- *
- * @param area NFP CPP area handle
- * @param offset offset into NFP CPP area handle
- * @param value output value
- *
- * The area must be acquired with 'nfp_cpp_area_acquire()' before calling this
- * operation.
- *
- * NOTE: offset must be 64-bit aligned.
- *
- * @return 0 on success, or -1 on error.
- */
int nfp_cpp_area_readq(struct nfp_cpp_area *area, uint32_t offset,
uint64_t *value);
-/*
- * Write a single 64-bit value to a NFP CPP area handle
- *
- * @param area NFP CPP area handle
- * @param offset offset into NFP CPP area handle
- * @param value value to write
- *
- * The area must be acquired with 'nfp_cpp_area_acquire()' before calling this
- * operation.
- *
- * NOTE: offset must be 64-bit aligned.
- *
- * @return 0 on success, or -1 on error.
- */
int nfp_cpp_area_writeq(struct nfp_cpp_area *area, uint32_t offset,
uint64_t value);
-/*
- * Write a single 32-bit value on the XPB bus
- *
- * @param cpp NFP CPP device handle
- * @param xpb_tgt XPB target and address
- * @param value value to write
- *
- * @return 0 on success, or -1 on failure.
- */
int nfp_xpb_writel(struct nfp_cpp *cpp, uint32_t xpb_tgt, uint32_t value);
-/*
- * Read a single 32-bit value from the XPB bus
- *
- * @param cpp NFP CPP device handle
- * @param xpb_tgt XPB target and address
- * @param value output value
- *
- * @return 0 on success, or -1 on failure.
- */
int nfp_xpb_readl(struct nfp_cpp *cpp, uint32_t xpb_tgt, uint32_t *value);
-/*
- * Read a 32-bit word from a NFP CPP ID
- *
- * @param cpp NFP CPP handle
- * @param cpp_id NFP CPP ID
- * @param address offset into the NFP CPP ID address space
- * @param value output value
- *
- * @return 0 on success, or -1 on failure.
- */
int nfp_cpp_readl(struct nfp_cpp *cpp, uint32_t cpp_id,
uint64_t address, uint32_t *value);
-/*
- * Write a 32-bit value to a NFP CPP ID
- *
- * @param cpp NFP CPP handle
- * @param cpp_id NFP CPP ID
- * @param address offset into the NFP CPP ID address space
- * @param value value to write
- *
- * @return 0 on success, or -1 on failure.
- *
- */
int nfp_cpp_writel(struct nfp_cpp *cpp, uint32_t cpp_id,
uint64_t address, uint32_t value);
-/*
- * Read a 64-bit work from a NFP CPP ID
- *
- * @param cpp NFP CPP handle
- * @param cpp_id NFP CPP ID
- * @param address offset into the NFP CPP ID address space
- * @param value output value
- *
- * @return 0 on success, or -1 on failure.
- */
int nfp_cpp_readq(struct nfp_cpp *cpp, uint32_t cpp_id,
uint64_t address, uint64_t *value);
-/*
- * Write a 64-bit value to a NFP CPP ID
- *
- * @param cpp NFP CPP handle
- * @param cpp_id NFP CPP ID
- * @param address offset into the NFP CPP ID address space
- * @param value value to write
- *
- * @return 0 on success, or -1 on failure.
- */
int nfp_cpp_writeq(struct nfp_cpp *cpp, uint32_t cpp_id,
uint64_t address, uint64_t value);
-/*
- * Initialize a mutex location
-
- * The CPP target:address must point to a 64-bit aligned location, and will
- * initialize 64 bits of data at the location.
- *
- * This creates the initial mutex state, as locked by this nfp_cpp_interface().
- *
- * This function should only be called when setting up the initial lock state
- * upon boot-up of the system.
- *
- * @param cpp NFP CPP handle
- * @param target NFP CPP target ID
- * @param address Offset into the address space of the NFP CPP target ID
- * @param key_id Unique 32-bit value for this mutex
- *
- * @return 0 on success, negative value on failure.
- */
int nfp_cpp_mutex_init(struct nfp_cpp *cpp, int target,
uint64_t address, uint32_t key_id);
-/*
- * Create a mutex handle from an address controlled by a MU Atomic engine
- *
- * The CPP target:address must point to a 64-bit aligned location, and reserve
- * 64 bits of data at the location for use by the handle.
- *
- * Only target/address pairs that point to entities that support the MU Atomic
- * Engine's CmpAndSwap32 command are supported.
- *
- * @param cpp NFP CPP handle
- * @param target NFP CPP target ID
- * @param address Offset into the address space of the NFP CPP target ID
- * @param key_id 32-bit unique key (must match the key at this location)
- *
- * @return A non-NULL struct nfp_cpp_mutex * on success, NULL on
- * failure.
- */
struct nfp_cpp_mutex *nfp_cpp_mutex_alloc(struct nfp_cpp *cpp, int target,
uint64_t address, uint32_t key_id);
-/*
- * Free a mutex handle - does not alter the lock state
- *
- * @param mutex NFP CPP Mutex handle
- */
void nfp_cpp_mutex_free(struct nfp_cpp_mutex *mutex);
-/*
- * Lock a mutex handle, using the NFP MU Atomic Engine
- *
- * @param mutex NFP CPP Mutex handle
- *
- * @return 0 on success, negative value on failure.
- */
int nfp_cpp_mutex_lock(struct nfp_cpp_mutex *mutex);
-/*
- * Unlock a mutex handle, using the NFP MU Atomic Engine
- *
- * @param mutex NFP CPP Mutex handle
- *
- * @return 0 on success, negative value on failure.
- */
int nfp_cpp_mutex_unlock(struct nfp_cpp_mutex *mutex);
-/*
- * Attempt to lock a mutex handle, using the NFP MU Atomic Engine
- *
- * @param mutex NFP CPP Mutex handle
- * @return 0 if the lock succeeded, negative value on failure.
- */
int nfp_cpp_mutex_trylock(struct nfp_cpp_mutex *mutex);
uint32_t nfp_cpp_mu_locality_lsb(struct nfp_cpp *cpp);
diff --git a/drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c b/drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c
index bdf4a658f5..7e94bfb611 100644
--- a/drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c
+++ b/drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c
@@ -58,7 +58,7 @@
* Minimal size of the PCIe cfg memory we depend on being mapped,
* queue controller and DMA controller don't have to be covered.
*/
-#define NFP_PCI_MIN_MAP_SIZE 0x080000
+#define NFP_PCI_MIN_MAP_SIZE 0x080000 /* 512K */
#define NFP_PCIE_P2C_FIXED_SIZE(bar) (1 << (bar)->bitsize)
#define NFP_PCIE_P2C_BULK_SIZE(bar) (1 << (bar)->bitsize)
@@ -72,40 +72,25 @@
#define NFP_PCIE_CPP_BAR_PCIETOCPPEXPBAR(bar, slot) \
(((bar) * 8 + (slot)) * 4)
-/*
- * Define to enable a bit more verbose debug output.
- * Set to 1 to enable a bit more verbose debug output.
- */
struct nfp_pcie_user;
struct nfp6000_area_priv;
-/*
- * struct nfp_bar - describes BAR configuration and usage
- * @nfp: backlink to owner
- * @barcfg: cached contents of BAR config CSR
- * @base: the BAR's base CPP offset
- * @mask: mask for the BAR aperture (read only)
- * @bitsize: bitsize of BAR aperture (read only)
- * @index: index of the BAR
- * @lock: lock to specify if bar is in use
- * @refcnt: number of current users
- * @iomem: mapped IO memory
- */
+/* Describes BAR configuration and usage */
#define NFP_BAR_MIN 1
#define NFP_BAR_MID 5
#define NFP_BAR_MAX 7
struct nfp_bar {
- struct nfp_pcie_user *nfp;
- uint32_t barcfg;
- uint64_t base; /* CPP address base */
- uint64_t mask; /* Bit mask of the bar */
- uint32_t bitsize; /* Bit size of the bar */
- uint32_t index;
- int lock;
+ struct nfp_pcie_user *nfp; /**< Backlink to owner */
+ uint32_t barcfg; /**< BAR config CSR */
+ uint64_t base; /**< Base CPP offset */
+ uint64_t mask; /**< Mask of the BAR aperture (read only) */
+ uint32_t bitsize; /**< Bit size of the BAR aperture (read only) */
+ uint32_t index; /**< Index of the BAR */
+ int lock; /**< If the BAR has been locked */
char *csr;
- char *iomem;
+ char *iomem; /**< Mapped IO memory */
};
#define BUSDEV_SZ 13
@@ -360,9 +345,7 @@ nfp_disable_bars(struct nfp_pcie_user *nfp)
}
}
-/*
- * Generic CPP bus access interface.
- */
+/* Generic CPP bus access interface. */
struct nfp6000_area_priv {
struct nfp_bar *bar;
diff --git a/drivers/net/nfp/nfpcore/nfp_cppcore.c b/drivers/net/nfp/nfpcore/nfp_cppcore.c
index 66f4ddaab7..f601907673 100644
--- a/drivers/net/nfp/nfpcore/nfp_cppcore.c
+++ b/drivers/net/nfp/nfpcore/nfp_cppcore.c
@@ -26,6 +26,15 @@
#define NFP_PL_DEVICE_MODEL_MASK (NFP_PL_DEVICE_PART_MASK | \
NFP_PL_DEVICE_ID_MASK)
+/**
+ * Set the private data of the nfp_cpp instance
+ *
+ * @param cpp
+ * NFP CPP operations structure
+ *
+ * @return
+ * Opaque device pointer
+ */
void
nfp_cpp_priv_set(struct nfp_cpp *cpp,
void *priv)
@@ -33,12 +42,29 @@ nfp_cpp_priv_set(struct nfp_cpp *cpp,
cpp->priv = priv;
}
+/**
+ * Return the private data of the nfp_cpp instance
+ *
+ * @param cpp
+ * NFP CPP operations structure
+ *
+ * @return
+ * Opaque device pointer
+ */
void *
nfp_cpp_priv(struct nfp_cpp *cpp)
{
return cpp->priv;
}
+/**
+ * Set the model id
+ *
+ * @param cpp
+ * NFP CPP operations structure
+ * @param model
+ * Model ID
+ */
void
nfp_cpp_model_set(struct nfp_cpp *cpp,
uint32_t model)
@@ -46,6 +72,15 @@ nfp_cpp_model_set(struct nfp_cpp *cpp,
cpp->model = model;
}
+/**
+ * Retrieve the Model ID of the NFP
+ *
+ * @param cpp
+ * NFP CPP handle
+ *
+ * @return
+ * NFP CPP Model ID
+ */
uint32_t
nfp_cpp_model(struct nfp_cpp *cpp)
{
@@ -63,6 +98,14 @@ nfp_cpp_model(struct nfp_cpp *cpp)
return model;
}
+/**
+ * Set the private instance owned data of a nfp_cpp struct
+ *
+ * @param cpp
+ * NFP CPP operations structure
+ * @param interface
+ * Interface ID
+ */
void
nfp_cpp_interface_set(struct nfp_cpp *cpp,
uint32_t interface)
@@ -70,6 +113,17 @@ nfp_cpp_interface_set(struct nfp_cpp *cpp,
cpp->interface = interface;
}
+/**
+ * Retrieve the Serial ID of the NFP
+ *
+ * @param cpp
+ * NFP CPP handle
+ * @param serial
+ * Pointer to NFP serial number
+ *
+ * @return
+ * Length of NFP serial number
+ */
int
nfp_cpp_serial(struct nfp_cpp *cpp,
const uint8_t **serial)
@@ -78,6 +132,16 @@ nfp_cpp_serial(struct nfp_cpp *cpp,
return cpp->serial_len;
}
+/**
+ * Set the private instance owned data of a nfp_cpp struct
+ *
+ * @param cpp
+ * NFP CPP operations structure
+ * @param serial
+ * NFP serial byte array
+ * @param serial_len
+ * Length of the serial byte array
+ */
int
nfp_cpp_serial_set(struct nfp_cpp *cpp,
const uint8_t *serial,
@@ -96,6 +160,15 @@ nfp_cpp_serial_set(struct nfp_cpp *cpp,
return 0;
}
+/**
+ * Retrieve the Interface ID of the NFP
+ *
+ * @param cpp
+ * NFP CPP handle
+ *
+ * @return
+ * NFP CPP Interface ID
+ */
uint16_t
nfp_cpp_interface(struct nfp_cpp *cpp)
{
@@ -105,18 +178,45 @@ nfp_cpp_interface(struct nfp_cpp *cpp)
return cpp->interface;
}
+/**
+ * Get the privately allocated portion of a NFP CPP area handle
+ *
+ * @param cpp_area
+ * NFP CPP area handle
+ *
+ * @return
+ * Pointer to the private area, or NULL on failure
+ */
void *
nfp_cpp_area_priv(struct nfp_cpp_area *cpp_area)
{
return &cpp_area[1];
}
+/**
+ * Get the NFP CPP handle that is the parent of a NFP CPP area handle
+ *
+ * @param cpp_area
+ * NFP CPP area handle
+ *
+ * @return
+ * NFP CPP handle
+ */
struct nfp_cpp *
nfp_cpp_area_cpp(struct nfp_cpp_area *cpp_area)
{
return cpp_area->cpp;
}
+/**
+ * Get the name passed during allocation of the NFP CPP area handle
+ *
+ * @param cpp_area
+ * NFP CPP area handle
+ *
+ * @return
+ * Pointer to the area's name
+ */
const char *
nfp_cpp_area_name(struct nfp_cpp_area *cpp_area)
{
@@ -153,15 +253,24 @@ nfp_cpp_mu_locality_lsb(struct nfp_cpp *cpp)
return cpp->mu_locality_lsb;
}
-/*
- * nfp_cpp_area_alloc - allocate a new CPP area
- * @cpp: CPP handle
- * @dest: CPP id
- * @address: start address on CPP target
- * @size: size of area in bytes
+/**
+ * Allocate and initialize a CPP area structure.
+ * The area must later be locked down with an 'acquire' before
+ * it can be safely accessed.
*
- * Allocate and initialize a CPP area structure. The area must later
- * be locked down with an 'acquire' before it can be safely accessed.
+ * @param cpp
+ * CPP device handle
+ * @param dest
+ * CPP id
+ * @param name
+ * Name of region
+ * @param address
+ * Address of region
+ * @param size
+ * Size of region
+ *
+ * @return
+ * NFP CPP area handle, or NULL
*
* NOTE: @address and @size must be 32-bit aligned values.
*/
@@ -211,6 +320,25 @@ nfp_cpp_area_alloc_with_name(struct nfp_cpp *cpp,
return area;
}
+/**
+ * Allocate and initialize a CPP area structure.
+ * The area must later be locked down with an 'acquire' before
+ * it can be safely accessed.
+ *
+ * @param cpp
+ * CPP device handle
+ * @param dest
+ * CPP id
+ * @param address
+ * Address of region
+ * @param size
+ * Size of region
+ *
+ * @return
+ * NFP CPP area handle, or NULL
+ *
+ * NOTE: @address and @size must be 32-bit aligned values.
+ */
struct nfp_cpp_area *
nfp_cpp_area_alloc(struct nfp_cpp *cpp,
uint32_t dest,
@@ -220,17 +348,22 @@ nfp_cpp_area_alloc(struct nfp_cpp *cpp,
return nfp_cpp_area_alloc_with_name(cpp, dest, NULL, address, size);
}
-/*
- * nfp_cpp_area_alloc_acquire - allocate a new CPP area and lock it down
- *
- * @cpp: CPP handle
- * @dest: CPP id
- * @address: start address on CPP target
- * @size: size of area
- *
+/**
* Allocate and initialize a CPP area structure, and lock it down so
* that it can be accessed directly.
*
+ * @param cpp
+ * CPP device handle
+ * @param destination
+ * CPP id
+ * @param address
+ * Address of region
+ * @param size
+ * Size of region
+ *
+ * @return
+ * NFP CPP area handle, or NULL
+ *
* NOTE: @address and @size must be 32-bit aligned values.
*
* NOTE: The area must also be 'released' when the structure is freed.
@@ -258,11 +391,11 @@ nfp_cpp_area_alloc_acquire(struct nfp_cpp *cpp,
return area;
}
-/*
- * nfp_cpp_area_free - free up the CPP area
- * area: CPP area handle
- *
+/**
* Frees up memory resources held by the CPP area.
+ *
+ * @param area
+ * CPP area handle
*/
void
nfp_cpp_area_free(struct nfp_cpp_area *area)
@@ -272,11 +405,11 @@ nfp_cpp_area_free(struct nfp_cpp_area *area)
free(area);
}
-/*
- * nfp_cpp_area_release_free - release CPP area and free it
- * area: CPP area handle
+/**
+ * Releases CPP area and frees up memory resources held by it.
*
- * Releases CPP area and frees up memory resources held by the it.
+ * @param area
+ * CPP area handle
*/
void
nfp_cpp_area_release_free(struct nfp_cpp_area *area)
@@ -285,12 +418,15 @@ nfp_cpp_area_release_free(struct nfp_cpp_area *area)
nfp_cpp_area_free(area);
}
-/*
- * nfp_cpp_area_acquire - lock down a CPP area for access
- * @area: CPP area handle
+/**
+ * Locks down the CPP area for a potential long term activity.
+ * Area must always be locked down before being accessed.
*
- * Locks down the CPP area for a potential long term activity. Area
- * must always be locked down before being accessed.
+ * @param area
+ * CPP area handle
+ *
+ * @return
+ * 0 on success, -1 on failure.
*/
int
nfp_cpp_area_acquire(struct nfp_cpp_area *area)
@@ -307,11 +443,11 @@ nfp_cpp_area_acquire(struct nfp_cpp_area *area)
return 0;
}
-/*
- * nfp_cpp_area_release - release a locked down CPP area
- * @area: CPP area handle
- *
+/**
* Releases a previously locked down CPP area.
+ *
+ * @param area
+ * CPP area handle
*/
void
nfp_cpp_area_release(struct nfp_cpp_area *area)
@@ -320,16 +456,16 @@ nfp_cpp_area_release(struct nfp_cpp_area *area)
area->cpp->op->area_release(area);
}
-/*
- * nfp_cpp_area_iomem() - get IOMEM region for CPP area
+/**
+ * Returns an iomem pointer for use with readl()/writel() style operations.
*
- * @area: CPP area handle
+ * @param area
+ * CPP area handle
*
- * Returns an iomem pointer for use with readl()/writel() style operations.
+ * @return
+ * Pointer to the area, or NULL
*
* NOTE: Area must have been locked down with an 'acquire'.
- *
- * Return: pointer to the area, or NULL
*/
void *
nfp_cpp_area_iomem(struct nfp_cpp_area *area)
@@ -342,18 +478,22 @@ nfp_cpp_area_iomem(struct nfp_cpp_area *area)
return iomem;
}
-/*
- * nfp_cpp_area_read - read data from CPP area
+/**
+ * Read data from indicated CPP region.
*
- * @area: CPP area handle
- * @offset: offset into CPP area
- * @kernel_vaddr: kernel address to put data into
- * @length: number of bytes to read
+ * @param area
+ * CPP area handle
+ * @param offset
+ * Offset into CPP area
+ * @param kernel_vaddr
+ * Address to put data into
+ * @param length
+ * Number of bytes to read
*
- * Read data from indicated CPP region.
+ * @return
+ * Length of I/O, or -ERRNO
*
* NOTE: @offset and @length must be 32-bit aligned values.
- *
* NOTE: Area must have been locked down with an 'acquire'.
*/
int
@@ -368,18 +508,22 @@ nfp_cpp_area_read(struct nfp_cpp_area *area,
return area->cpp->op->area_read(area, kernel_vaddr, offset, length);
}
-/*
- * nfp_cpp_area_write - write data to CPP area
+/**
+ * Write data to indicated CPP region.
*
- * @area: CPP area handle
- * @offset: offset into CPP area
- * @kernel_vaddr: kernel address to read data from
- * @length: number of bytes to write
+ * @param area
+ * CPP area handle
+ * @param offset
+ * Offset into CPP area
+ * @param kernel_vaddr
+ * Address to read data from
+ * @param length
+ * Number of bytes to write
*
- * Write data to indicated CPP region.
+ * @return
+ * Length of I/O, or -ERRNO
*
* NOTE: @offset and @length must be 32-bit aligned values.
- *
* NOTE: Area must have been locked down with an 'acquire'.
*/
int
@@ -436,6 +580,19 @@ nfp_xpb_to_cpp(struct nfp_cpp *cpp,
return xpb;
}
+/**
+ * Read a uint32_t value from an area
+ *
+ * @param area
+ * CPP Area handle
+ * @param offset
+ * Offset into area
+ * @param value
+ * Pointer to read buffer
+ *
+ * @return
+ * 0 on success, or -ERRNO
+ */
int
nfp_cpp_area_readl(struct nfp_cpp_area *area,
uint32_t offset,
@@ -450,6 +607,19 @@ nfp_cpp_area_readl(struct nfp_cpp_area *area,
return (sz == sizeof(*value)) ? 0 : -1;
}
+/**
+ * Write a uint32_t value to an area
+ *
+ * @param area
+ * CPP Area handle
+ * @param offset
+ * Offset into area
+ * @param value
+ * Value to write
+ *
+ * @return
+ * 0 on success, or -ERRNO
+ */
int
nfp_cpp_area_writel(struct nfp_cpp_area *area,
uint32_t offset,
@@ -462,6 +632,19 @@ nfp_cpp_area_writel(struct nfp_cpp_area *area,
return (sz == sizeof(value)) ? 0 : -1;
}
+/**
+ * Read a uint64_t value from an area
+ *
+ * @param area
+ * CPP Area handle
+ * @param offset
+ * Offset into area
+ * @param value
+ * Pointer to read buffer
+ *
+ * @return
+ * 0 on success, or -ERRNO
+ */
int
nfp_cpp_area_readq(struct nfp_cpp_area *area,
uint32_t offset,
@@ -476,6 +659,19 @@ nfp_cpp_area_readq(struct nfp_cpp_area *area,
return (sz == sizeof(*value)) ? 0 : -1;
}
+/**
+ * Write a uint64_t value to an area
+ *
+ * @param area
+ * CPP Area handle
+ * @param offset
+ * Offset into area
+ * @param value
+ * Value to write
+ *
+ * @return
+ * 0 on success, or -ERRNO
+ */
int
nfp_cpp_area_writeq(struct nfp_cpp_area *area,
uint32_t offset,
@@ -489,6 +685,21 @@ nfp_cpp_area_writeq(struct nfp_cpp_area *area,
return (sz == sizeof(value)) ? 0 : -1;
}
+/**
+ * Read a uint32_t value from a CPP location
+ *
+ * @param cpp
+ * CPP device handle
+ * @param cpp_id
+ * CPP ID for operation
+ * @param address
+ * Address for operation
+ * @param value
+ * Pointer to read buffer
+ *
+ * @return
+ * 0 on success, or -ERRNO
+ */
int
nfp_cpp_readl(struct nfp_cpp *cpp,
uint32_t cpp_id,
@@ -504,6 +715,21 @@ nfp_cpp_readl(struct nfp_cpp *cpp,
return (sz == sizeof(*value)) ? 0 : -1;
}
+/**
+ * Write a uint32_t value to a CPP location
+ *
+ * @param cpp
+ * CPP device handle
+ * @param cpp_id
+ * CPP ID for operation
+ * @param address
+ * Address for operation
+ * @param value
+ * Value to write
+ *
+ * @return
+ * 0 on success, or -ERRNO
+ */
int
nfp_cpp_writel(struct nfp_cpp *cpp,
uint32_t cpp_id,
@@ -518,6 +744,21 @@ nfp_cpp_writel(struct nfp_cpp *cpp,
return (sz == sizeof(value)) ? 0 : -1;
}
+/**
+ * Read a uint64_t value from a CPP location
+ *
+ * @param cpp
+ * CPP device handle
+ * @param cpp_id
+ * CPP ID for operation
+ * @param address
+ * Address for operation
+ * @param value
+ * Pointer to read buffer
+ *
+ * @return
+ * 0 on success, or -ERRNO
+ */
int
nfp_cpp_readq(struct nfp_cpp *cpp,
uint32_t cpp_id,
@@ -533,6 +774,21 @@ nfp_cpp_readq(struct nfp_cpp *cpp,
return (sz == sizeof(*value)) ? 0 : -1;
}
+/**
+ * Write a uint64_t value to a CPP location
+ *
+ * @param cpp
+ * CPP device handle
+ * @param cpp_id
+ * CPP ID for operation
+ * @param address
+ * Address for operation
+ * @param value
+ * Value to write
+ *
+ * @return
+ * 0 on success, or -ERRNO
+ */
int
nfp_cpp_writeq(struct nfp_cpp *cpp,
uint32_t cpp_id,
@@ -547,6 +803,19 @@ nfp_cpp_writeq(struct nfp_cpp *cpp,
return (sz == sizeof(value)) ? 0 : -1;
}
+/**
+ * Write a uint32_t word to an XPB location
+ *
+ * @param cpp
+ * CPP device handle
+ * @param xpb_addr
+ * XPB target and address
+ * @param value
+ * Value to write
+ *
+ * @return
+ * 0 on success, or -ERRNO
+ */
int
nfp_xpb_writel(struct nfp_cpp *cpp,
uint32_t xpb_addr,
@@ -559,6 +828,19 @@ nfp_xpb_writel(struct nfp_cpp *cpp,
return nfp_cpp_writel(cpp, cpp_dest, xpb_addr, value);
}
+/**
+ * Read a uint32_t value from an XPB location
+ *
+ * @param cpp
+ * CPP device handle
+ * @param xpb_addr
+ * XPB target and address
+ * @param value
+ * Pointer to read buffer
+ *
+ * @return
+ * 0 on success, or -ERRNO
+ */
int
nfp_xpb_readl(struct nfp_cpp *cpp,
uint32_t xpb_addr,
@@ -625,9 +907,11 @@ nfp_cpp_alloc(struct rte_pci_device *dev,
return cpp;
}
-/*
- * nfp_cpp_free - free the CPP handle
- * @cpp: CPP handle
+/**
+ * Free the CPP handle
+ *
+ * @param cpp
+ * CPP handle
*/
void
nfp_cpp_free(struct nfp_cpp *cpp)
@@ -641,6 +925,19 @@ nfp_cpp_free(struct nfp_cpp *cpp)
free(cpp);
}
+/**
+ * Create a NFP CPP handle from device
+ *
+ * @param dev
+ * PCI device
+ * @param driver_lock_needed
+ * Driver lock flag
+ *
+ * @return
+ * NFP CPP handle on success, NULL on failure
+ *
+ * NOTE: On failure, cpp_ops->free will be called!
+ */
struct nfp_cpp *
nfp_cpp_from_device_name(struct rte_pci_device *dev,
int driver_lock_needed)
@@ -648,13 +945,22 @@ nfp_cpp_from_device_name(struct rte_pci_device *dev,
return nfp_cpp_alloc(dev, driver_lock_needed);
}
-/*
- * nfp_cpp_read - read from CPP target
- * @cpp: CPP handle
- * @destination: CPP id
- * @address: offset into CPP target
- * @kernel_vaddr: kernel buffer for result
- * @length: number of bytes to read
+/**
+ * Read from CPP target
+ *
+ * @param cpp
+ * CPP handle
+ * @param destination
+ * CPP id
+ * @param address
+ * Offset into CPP target
+ * @param kernel_vaddr
+ * Buffer for result
+ * @param length
+ * Number of bytes to read
+ *
+ * @return
+ * Length of I/O, or -ERRNO
*/
int
nfp_cpp_read(struct nfp_cpp *cpp,
@@ -678,13 +984,22 @@ nfp_cpp_read(struct nfp_cpp *cpp,
return err;
}
-/*
- * nfp_cpp_write - write to CPP target
- * @cpp: CPP handle
- * @destination: CPP id
- * @address: offset into CPP target
- * @kernel_vaddr: kernel buffer to read from
- * @length: number of bytes to write
+/**
+ * Write to CPP target
+ *
+ * @param cpp
+ * CPP handle
+ * @param destination
+ * CPP id
+ * @param address
+ * Offset into CPP target
+ * @param kernel_vaddr
+ * Buffer to read from
+ * @param length
+ * Number of bytes to write
+ *
+ * @return
+ * Length of I/O, or -ERRNO
*/
int
nfp_cpp_write(struct nfp_cpp *cpp,
@@ -731,18 +1046,23 @@ __nfp_cpp_model_autodetect(struct nfp_cpp *cpp,
return 0;
}
-/*
- * nfp_cpp_map_area() - Helper function to map an area
- * @cpp: NFP CPP handler
- * @cpp_id: CPP ID
- * @addr: CPP address
- * @size: Size of the area
- * @area: Area handle (output)
+/**
+ * Map an area of IOMEM access.
+ * To undo the effect of this function call @nfp_cpp_area_release_free(*area).
*
- * Map an area of IOMEM access. To undo the effect of this function call
- * @nfp_cpp_area_release_free(*area).
+ * @param cpp
+ * NFP CPP handler
+ * @param cpp_id
+ * CPP id
+ * @param addr
+ * CPP address
+ * @param size
+ * Size of the area
+ * @param area
+ * Area handle (output)
*
- * Return: Pointer to memory mapped area or NULL
+ * @return
+ * Pointer to memory mapped area or NULL
*/
uint8_t *
nfp_cpp_map_area(struct nfp_cpp *cpp,
diff --git a/drivers/net/nfp/nfpcore/nfp_hwinfo.c b/drivers/net/nfp/nfpcore/nfp_hwinfo.c
index b658b5e900..f5579ab60f 100644
--- a/drivers/net/nfp/nfpcore/nfp_hwinfo.c
+++ b/drivers/net/nfp/nfpcore/nfp_hwinfo.c
@@ -3,7 +3,8 @@
* All rights reserved.
*/
-/* Parse the hwinfo table that the ARM firmware builds in the ARM scratch SRAM
+/*
+ * Parse the hwinfo table that the ARM firmware builds in the ARM scratch SRAM
* after chip reset.
*
* Examples of the fields:
@@ -146,7 +147,7 @@ nfp_hwinfo_fetch(struct nfp_cpp *cpp,
struct nfp_hwinfo *db;
wait.tv_sec = 0;
- wait.tv_nsec = 10000000;
+ wait.tv_nsec = 10000000; /* 10ms */
for (;;) {
db = nfp_hwinfo_try_fetch(cpp, hwdb_size);
@@ -154,7 +155,7 @@ nfp_hwinfo_fetch(struct nfp_cpp *cpp,
return db;
nanosleep(&wait, NULL);
- if (count++ > 200) {
+ if (count++ > 200) { /* 10ms * 200 = 2s */
PMD_DRV_LOG(ERR, "NFP access error");
return NULL;
}
@@ -180,12 +181,16 @@ nfp_hwinfo_read(struct nfp_cpp *cpp)
return db;
}
-/*
- * nfp_hwinfo_lookup() - Find a value in the HWInfo table by name
- * @hwinfo: NFP HWinfo table
- * @lookup: HWInfo name to search for
+/**
+ * Find a value in the HWInfo table by name
+ *
+ * @param hwinfo
+ * NFP HWInfo table
+ * @param lookup
+ * HWInfo name to search for
*
- * Return: Value of the HWInfo name, or NULL
+ * @return
+ * Value of the HWInfo name, or NULL
*/
const char *
nfp_hwinfo_lookup(struct nfp_hwinfo *hwinfo,
diff --git a/drivers/net/nfp/nfpcore/nfp_hwinfo.h b/drivers/net/nfp/nfpcore/nfp_hwinfo.h
index a3da7512db..424db8035d 100644
--- a/drivers/net/nfp/nfpcore/nfp_hwinfo.h
+++ b/drivers/net/nfp/nfpcore/nfp_hwinfo.h
@@ -59,6 +59,8 @@
* Packed UTF8Z strings, ie 'key1\000value1\000key2\000value2\000'
*
* Unsorted.
+ *
+ * Note: Only the HwInfo v2 table is supported now.
*/
#define NFP_HWINFO_VERSION_1 ('H' << 24 | 'I' << 16 | 1 << 8 | 0 << 1 | 0)
diff --git a/drivers/net/nfp/nfpcore/nfp_mip.c b/drivers/net/nfp/nfpcore/nfp_mip.c
index 086e82db70..0892c99e96 100644
--- a/drivers/net/nfp/nfpcore/nfp_mip.c
+++ b/drivers/net/nfp/nfpcore/nfp_mip.c
@@ -87,15 +87,16 @@ nfp_mip_read_resource(struct nfp_cpp *cpp,
return err;
}
-/*
- * nfp_mip_open() - Get device MIP structure
- * @cpp: NFP CPP Handle
- *
- * Copy MIP structure from NFP device and return it. The returned
+/**
+ * Copy MIP structure from NFP device and return it. The returned
* structure is handled internally by the library and should be
- * freed by calling nfp_mip_close().
+ * freed by calling @nfp_mip_close().
+ *
+ * @param cpp
+ * NFP CPP Handle
*
- * Return: pointer to mip, NULL on failure.
+ * @return
+ * Pointer to MIP, NULL on failure.
*/
struct nfp_mip *
nfp_mip_open(struct nfp_cpp *cpp)
@@ -131,11 +132,15 @@ nfp_mip_name(const struct nfp_mip *mip)
return mip->name;
}
-/*
- * nfp_mip_symtab() - Get the address and size of the MIP symbol table
- * @mip: MIP handle
- * @addr: Location for NFP DDR address of MIP symbol table
- * @size: Location for size of MIP symbol table
+/**
+ * Get the address and size of the MIP symbol table.
+ *
+ * @param mip
+ * MIP handle
+ * @param addr
+ * Location for NFP DDR address of MIP symbol table
+ * @param size
+ * Location for size of MIP symbol table
*/
void
nfp_mip_symtab(const struct nfp_mip *mip,
@@ -146,11 +151,15 @@ nfp_mip_symtab(const struct nfp_mip *mip,
*size = rte_le_to_cpu_32(mip->symtab_size);
}
-/*
- * nfp_mip_strtab() - Get the address and size of the MIP symbol name table
- * @mip: MIP handle
- * @addr: Location for NFP DDR address of MIP symbol name table
- * @size: Location for size of MIP symbol name table
+/**
+ * Get the address and size of the MIP symbol name table.
+ *
+ * @param mip
+ * MIP handle
+ * @param addr
+ * Location for NFP DDR address of MIP symbol name table
+ * @param size
+ * Location for size of MIP symbol name table
*/
void
nfp_mip_strtab(const struct nfp_mip *mip,
diff --git a/drivers/net/nfp/nfpcore/nfp_mutex.c b/drivers/net/nfp/nfpcore/nfp_mutex.c
index 82919d8270..404d4fa938 100644
--- a/drivers/net/nfp/nfpcore/nfp_mutex.c
+++ b/drivers/net/nfp/nfpcore/nfp_mutex.c
@@ -53,7 +53,7 @@ _nfp_cpp_mutex_validate(uint32_t model,
return 0;
}
-/*
+/**
* Initialize a mutex location
*
* The CPP target:address must point to a 64-bit aligned location, and
@@ -65,13 +65,17 @@ _nfp_cpp_mutex_validate(uint32_t model,
* This function should only be called when setting up
* the initial lock state upon boot-up of the system.
*
- * @param mutex NFP CPP Mutex handle
- * @param target NFP CPP target ID (ie NFP_CPP_TARGET_CLS or
- * NFP_CPP_TARGET_MU)
- * @param address Offset into the address space of the NFP CPP target ID
- * @param key Unique 32-bit value for this mutex
+ * @param cpp
+ * NFP CPP handle
+ * @param target
+ * NFP CPP target ID (i.e. NFP_CPP_TARGET_CLS or NFP_CPP_TARGET_MU)
+ * @param address
+ * Offset into the address space of the NFP CPP target ID
+ * @param key
+ * Unique 32-bit value for this mutex
*
- * @return 0 on success, or negative value on failure.
+ * @return
+ * 0 on success, or negative value on failure
*/
int
nfp_cpp_mutex_init(struct nfp_cpp *cpp,
@@ -99,7 +103,7 @@ nfp_cpp_mutex_init(struct nfp_cpp *cpp,
return 0;
}
-/*
+/**
* Create a mutex handle from an address controlled by a MU Atomic engine
*
* The CPP target:address must point to a 64-bit aligned location, and
@@ -108,13 +112,17 @@ nfp_cpp_mutex_init(struct nfp_cpp *cpp,
* Only target/address pairs that point to entities that support the
* MU Atomic Engine are supported.
*
- * @param cpp NFP CPP handle
- * @param target NFP CPP target ID (ie NFP_CPP_TARGET_CLS or
- * NFP_CPP_TARGET_MU)
- * @param address Offset into the address space of the NFP CPP target ID
- * @param key 32-bit unique key (must match the key at this location)
+ * @param cpp
+ * NFP CPP handle
+ * @param target
+ * NFP CPP target ID (i.e. NFP_CPP_TARGET_CLS or NFP_CPP_TARGET_MU)
+ * @param address
+ * Offset into the address space of the NFP CPP target ID
+ * @param key
+ * 32-bit unique key (must match the key at this location)
*
- * @return A non-NULL struct nfp_cpp_mutex * on success, NULL on failure.
+ * @return
+ * A non-NULL struct nfp_cpp_mutex * on success, NULL on failure.
*/
struct nfp_cpp_mutex *
nfp_cpp_mutex_alloc(struct nfp_cpp *cpp,
@@ -178,10 +186,11 @@ nfp_cpp_mutex_alloc(struct nfp_cpp *cpp,
return mutex;
}
-/*
+/**
* Free a mutex handle - does not alter the lock state
*
- * @param mutex NFP CPP Mutex handle
+ * @param mutex
+ * NFP CPP Mutex handle
*/
void
nfp_cpp_mutex_free(struct nfp_cpp_mutex *mutex)
@@ -203,12 +212,14 @@ nfp_cpp_mutex_free(struct nfp_cpp_mutex *mutex)
free(mutex);
}
-/*
+/**
* Lock a mutex handle, using the NFP MU Atomic Engine
*
- * @param mutex NFP CPP Mutex handle
+ * @param mutex
+ * NFP CPP Mutex handle
*
- * @return 0 on success, or negative value on failure.
+ * @return
+ * 0 on success, or negative value on failure.
*/
int
nfp_cpp_mutex_lock(struct nfp_cpp_mutex *mutex)
@@ -229,12 +240,14 @@ nfp_cpp_mutex_lock(struct nfp_cpp_mutex *mutex)
return 0;
}
-/*
+/**
* Unlock a mutex handle, using the NFP MU Atomic Engine
*
- * @param mutex NFP CPP Mutex handle
+ * @param mutex
+ * NFP CPP Mutex handle
*
- * @return 0 on success, or negative value on failure.
+ * @return
+ * 0 on success, or negative value on failure
*/
int
nfp_cpp_mutex_unlock(struct nfp_cpp_mutex *mutex)
@@ -280,16 +293,18 @@ nfp_cpp_mutex_unlock(struct nfp_cpp_mutex *mutex)
return err;
}
-/*
+/**
* Attempt to lock a mutex handle, using the NFP MU Atomic Engine
*
* Valid lock states:
- *
* 0x....0000 - Unlocked
* 0x....000f - Locked
*
- * @param mutex NFP CPP Mutex handle
- * @return 0 if the lock succeeded, negative value on failure.
+ * @param mutex
+ * NFP CPP Mutex handle
+ *
+ * @return
+ * 0 if the lock succeeded, negative value on failure.
*/
int
nfp_cpp_mutex_trylock(struct nfp_cpp_mutex *mutex)
@@ -352,7 +367,7 @@ nfp_cpp_mutex_trylock(struct nfp_cpp_mutex *mutex)
* If there was another contending for this lock, then
* the lock state would be 0x....000f
*
- * Write our owner ID into the lock
+ * Write our owner ID into the lock.
* While not strictly necessary, this helps with
* debug and bookkeeping.
*/
diff --git a/drivers/net/nfp/nfpcore/nfp_nffw.c b/drivers/net/nfp/nfpcore/nfp_nffw.c
index 6ba40cd085..af55671a88 100644
--- a/drivers/net/nfp/nfpcore/nfp_nffw.c
+++ b/drivers/net/nfp/nfpcore/nfp_nffw.c
@@ -52,7 +52,7 @@ nffw_fwinfo_mip_mu_da_get(const struct nffw_fwinfo *fi)
return (fi->loaded__mu_da__mip_off_hi >> 8) & 1;
}
-/* mip_offset = (loaded__mu_da__mip_off_hi<7:0> << 8) | mip_offset_lo */
+/* mip_offset = (loaded__mu_da__mip_off_hi<7:0> << 32) | mip_offset_lo */
static uint64_t
nffw_fwinfo_mip_offset_get(const struct nffw_fwinfo *fi)
{
@@ -111,11 +111,14 @@ nffw_res_fwinfos(struct nfp_nffw_info_data *fwinf, struct nffw_fwinfo **arr)
}
}
-/*
- * nfp_nffw_info_open() - Acquire the lock on the NFFW table
- * @cpp: NFP CPP handle
+/**
+ * Acquire the lock on the NFFW table
+ *
+ * @param cpp
+ * NFP CPP handle
*
- * Return: nffw info pointer, or NULL on failure
+ * @return
+ * NFFW info pointer, or NULL on failure
*/
struct nfp_nffw_info *
nfp_nffw_info_open(struct nfp_cpp *cpp)
@@ -167,11 +170,11 @@ nfp_nffw_info_open(struct nfp_cpp *cpp)
return NULL;
}
-/*
- * nfp_nffw_info_close() - Release the lock on the NFFW table
- * @state: NFP FW info state
+/**
+ * Release the lock on the NFFW table
*
- * Return: void
+ * @param state
+ * NFFW info pointer
*/
void
nfp_nffw_info_close(struct nfp_nffw_info *state)
@@ -180,11 +183,14 @@ nfp_nffw_info_close(struct nfp_nffw_info *state)
free(state);
}
-/*
- * nfp_nffw_info_fwid_first() - Return the first firmware ID in the NFFW
- * @state: NFP FW info state
+/**
+ * Return the first firmware ID in the NFFW
*
- * Return: First NFFW firmware info, NULL on failure
+ * @param state
+ * NFFW info pointer
+ *
+ * @return
+ * First NFFW firmware info, NULL on failure
*/
static struct nffw_fwinfo *
nfp_nffw_info_fwid_first(struct nfp_nffw_info *state)
@@ -204,13 +210,18 @@ nfp_nffw_info_fwid_first(struct nfp_nffw_info *state)
return NULL;
}
-/*
- * nfp_nffw_info_mip_first() - Retrieve the location of the first FW's MIP
- * @state: NFP FW info state
- * @cpp_id: Pointer to the CPP ID of the MIP
- * @off: Pointer to the CPP Address of the MIP
+/**
+ * Retrieve the location of the first FW's MIP
+ *
+ * @param state
+ * NFFW info pointer
+ * @param cpp_id
+ * Pointer to the CPP ID of the MIP
+ * @param off
+ * Pointer to the CPP Address of the MIP
*
- * Return: 0, or -ERRNO
+ * @return
+ * 0, or -ERRNO
*/
int
nfp_nffw_info_mip_first(struct nfp_nffw_info *state,
diff --git a/drivers/net/nfp/nfpcore/nfp_nffw.h b/drivers/net/nfp/nfpcore/nfp_nffw.h
index 46ac8a8d07..e032b6cce7 100644
--- a/drivers/net/nfp/nfpcore/nfp_nffw.h
+++ b/drivers/net/nfp/nfpcore/nfp_nffw.h
@@ -8,7 +8,8 @@
#include "nfp_cpp.h"
-/* Init-CSR owner IDs for firmware map to firmware IDs which start at 4.
+/*
+ * Init-CSR owner IDs for firmware map to firmware IDs which start at 4.
* Lower IDs are reserved for target and loader IDs.
*/
#define NFFW_FWID_EXT 3 /* For active MEs that we didn't load. */
@@ -16,7 +17,7 @@
#define NFFW_FWID_ALL 255
-/**
+/*
* NFFW_INFO_VERSION history:
* 0: This was never actually used (before versioning), but it refers to
* the previous struct which had FWINFO_CNT = MEINFO_CNT = 120 that later
@@ -35,6 +36,7 @@
#define NFFW_MEINFO_CNT_V2 200
#define NFFW_FWINFO_CNT_V2 200
+/* nfp.nffw meinfo */
struct nffw_meinfo {
uint32_t ctxmask__fwid__meid;
};
diff --git a/drivers/net/nfp/nfpcore/nfp_nsp.c b/drivers/net/nfp/nfpcore/nfp_nsp.c
index 76d418d478..039e4729bd 100644
--- a/drivers/net/nfp/nfpcore/nfp_nsp.c
+++ b/drivers/net/nfp/nfpcore/nfp_nsp.c
@@ -109,9 +109,11 @@ nfp_nsp_check(struct nfp_nsp *state)
return 0;
}
-/*
- * nfp_nsp_open() - Prepare for communication and lock the NSP resource.
- * @cpp: NFP CPP Handle
+/**
+ * Prepare for communication and lock the NSP resource.
+ *
+ * @param cpp
+ * NFP CPP Handle
+ *
+ * @return
+ * NFP NSP handle, or NULL on failure
*/
struct nfp_nsp *
nfp_nsp_open(struct nfp_cpp *cpp)
@@ -145,9 +147,11 @@ nfp_nsp_open(struct nfp_cpp *cpp)
return state;
}
-/*
- * nfp_nsp_close() - Clean up and unlock the NSP resource.
- * @state: NFP SP state
+/**
+ * Clean up and unlock the NSP resource.
+ *
+ * @param state
+ * NFP SP state
*/
void
nfp_nsp_close(struct nfp_nsp *state)
@@ -181,7 +185,7 @@ nfp_nsp_wait_reg(struct nfp_cpp *cpp,
struct timespec wait;
wait.tv_sec = 0;
- wait.tv_nsec = 25000000;
+ wait.tv_nsec = 25000000; /* 25ms */
for (;;) {
err = nfp_cpp_readq(cpp, nsp_cpp, addr, reg);
@@ -194,28 +198,27 @@ nfp_nsp_wait_reg(struct nfp_cpp *cpp,
return 0;
nanosleep(&wait, 0);
- if (count++ > 1000)
+ if (count++ > 1000) /* 25ms * 1000 = 25s */
return -ETIMEDOUT;
}
}
-/*
- * nfp_nsp_command() - Execute a command on the NFP Service Processor
- * @state: NFP SP state
- * @code: NFP SP Command Code
- * @option: NFP SP Command Argument
- * @buff_cpp: NFP SP Buffer CPP Address info
- * @buff_addr: NFP SP Buffer Host address
- *
- * Return: 0 for success with no result
+/**
+ * Execute a command on the NFP Service Processor
*
- * positive value for NSP completion with a result code
+ * @param state
+ * NFP SP state
+ * @param arg
+ * NFP command argument structure
*
- * -EAGAIN if the NSP is not yet present
- * -ENODEV if the NSP is not a supported model
- * -EBUSY if the NSP is stuck
- * -EINTR if interrupted while waiting for completion
- * -ETIMEDOUT if the NSP took longer than 30 seconds to complete
+ * @return
+ * - 0 for success with no result
+ * - Positive value for NSP completion with a result code
+ * - -EAGAIN if the NSP is not yet present
+ * - -ENODEV if the NSP is not a supported model
+ * - -EBUSY if the NSP is stuck
+ * - -EINTR if interrupted while waiting for completion
+ * - -ETIMEDOUT if the NSP took longer than @timeout_sec seconds to complete
*/
static int
nfp_nsp_command(struct nfp_nsp *state,
@@ -383,7 +386,7 @@ nfp_nsp_wait(struct nfp_nsp *state)
struct timespec wait;
wait.tv_sec = 0;
- wait.tv_nsec = 25000000;
+ wait.tv_nsec = 25000000; /* 25ms */
for (;;) {
err = nfp_nsp_command(state, SPCODE_NOOP, 0, 0, 0);
@@ -392,7 +395,7 @@ nfp_nsp_wait(struct nfp_nsp *state)
nanosleep(&wait, 0);
- if (count++ > 1000) {
+ if (count++ > 1000) { /* 25ms * 1000 = 25s */
err = -ETIMEDOUT;
break;
}
diff --git a/drivers/net/nfp/nfpcore/nfp_nsp.h b/drivers/net/nfp/nfpcore/nfp_nsp.h
index edb56e26ca..0fcb21e99c 100644
--- a/drivers/net/nfp/nfpcore/nfp_nsp.h
+++ b/drivers/net/nfp/nfpcore/nfp_nsp.h
@@ -158,72 +158,45 @@ enum nfp_eth_fec {
#define NFP_FEC_REED_SOLOMON RTE_BIT32(NFP_FEC_REED_SOLOMON_BIT)
#define NFP_FEC_DISABLED RTE_BIT32(NFP_FEC_DISABLED_BIT)
-/**
- * struct nfp_eth_table - ETH table information
- * @count: number of table entries
- * @max_index: max of @index fields of all @ports
- * @ports: table of ports
- *
- * @eth_index: port index according to legacy ethX numbering
- * @index: chip-wide first channel index
- * @nbi: NBI index
- * @base: first channel index (within NBI)
- * @lanes: number of channels
- * @speed: interface speed (in Mbps)
- * @interface: interface (module) plugged in
- * @media: media type of the @interface
- * @fec: forward error correction mode
- * @aneg: auto negotiation mode
- * @mac_addr: interface MAC address
- * @label_port: port id
- * @label_subport: id of interface within port (for split ports)
- * @enabled: is enabled?
- * @tx_enabled: is TX enabled?
- * @rx_enabled: is RX enabled?
- * @override_changed: is media reconfig pending?
- *
- * @port_type: one of %PORT_* defines for ethtool
- * @port_lanes: total number of lanes on the port (sum of lanes of all subports)
- * @is_split: is interface part of a split port
- * @fec_modes_supported: bitmap of FEC modes supported
- */
+/* ETH table information */
struct nfp_eth_table {
- uint32_t count;
- uint32_t max_index;
+ uint32_t count; /**< Number of table entries */
+ uint32_t max_index; /**< Max of @index fields of all @ports */
struct nfp_eth_table_port {
+ /** Port index according to legacy ethX numbering */
uint32_t eth_index;
- uint32_t index;
- uint32_t nbi;
- uint32_t base;
- uint32_t lanes;
- uint32_t speed;
+ uint32_t index; /**< Chip-wide first channel index */
+ uint32_t nbi; /**< NBI index */
+ uint32_t base; /**< First channel index (within NBI) */
+ uint32_t lanes; /**< Number of channels */
+ uint32_t speed; /**< Interface speed (in Mbps) */
- uint32_t interface;
- enum nfp_eth_media media;
+ uint32_t interface; /**< Interface (module) plugged in */
+ enum nfp_eth_media media; /**< Media type of the @interface */
- enum nfp_eth_fec fec;
- enum nfp_eth_aneg aneg;
+ enum nfp_eth_fec fec; /**< Forward Error Correction mode */
+ enum nfp_eth_aneg aneg; /**< Auto negotiation mode */
- struct rte_ether_addr mac_addr;
+ struct rte_ether_addr mac_addr; /**< Interface MAC address */
- uint8_t label_port;
+ uint8_t label_port; /**< Port id */
+ /** Id of interface within port (for split ports) */
uint8_t label_subport;
- int enabled;
- int tx_enabled;
- int rx_enabled;
-
- int override_changed;
+ int enabled; /**< Port is enabled */
+ int tx_enabled; /**< TX is enabled */
+ int rx_enabled; /**< RX is enabled */
- /* Computed fields */
- uint8_t port_type;
+ int override_changed; /**< Media reconfig pending */
+ uint8_t port_type; /**< One of %PORT_* */
+ /** Sum of lanes of all subports of this port */
uint32_t port_lanes;
- int is_split;
+ int is_split; /**< Split port */
- uint32_t fec_modes_supported;
- } ports[];
+ uint32_t fec_modes_supported; /**< Bitmap of FEC modes supported */
+ } ports[]; /**< Table of ports */
};
struct nfp_eth_table *nfp_eth_read_ports(struct nfp_cpp *cpp);
@@ -263,28 +236,17 @@ int __nfp_eth_set_aneg(struct nfp_nsp *nsp, enum nfp_eth_aneg mode);
int __nfp_eth_set_speed(struct nfp_nsp *nsp, uint32_t speed);
int __nfp_eth_set_split(struct nfp_nsp *nsp, uint32_t lanes);
-/**
- * struct nfp_nsp_identify - NSP static information
- * @version: opaque version string
- * @flags: version flags
- * @br_primary: branch id of primary bootloader
- * @br_secondary: branch id of secondary bootloader
- * @br_nsp: branch id of NSP
- * @primary: version of primary bootloader
- * @secondary: version id of secondary bootloader
- * @nsp: version id of NSP
- * @sensor_mask: mask of present sensors available on NIC
- */
+/* NSP static information */
struct nfp_nsp_identify {
- char version[40];
- uint8_t flags;
- uint8_t br_primary;
- uint8_t br_secondary;
- uint8_t br_nsp;
- uint16_t primary;
- uint16_t secondary;
- uint16_t nsp;
- uint64_t sensor_mask;
+ char version[40]; /**< Opaque version string */
+ uint8_t flags; /**< Version flags */
+ uint8_t br_primary; /**< Branch id of primary bootloader */
+ uint8_t br_secondary; /**< Branch id of secondary bootloader */
+ uint8_t br_nsp; /**< Branch id of NSP */
+ uint16_t primary; /**< Version of primary bootloader */
+ uint16_t secondary; /**< Version id of secondary bootloader */
+ uint16_t nsp; /**< Version id of NSP */
+ uint64_t sensor_mask; /**< Mask of present sensors available on NIC */
};
struct nfp_nsp_identify *__nfp_nsp_identify(struct nfp_nsp *nsp);
diff --git a/drivers/net/nfp/nfpcore/nfp_nsp_eth.c b/drivers/net/nfp/nfpcore/nfp_nsp_eth.c
index 51dcf24f5f..e32884e7d3 100644
--- a/drivers/net/nfp/nfpcore/nfp_nsp_eth.c
+++ b/drivers/net/nfp/nfpcore/nfp_nsp_eth.c
@@ -263,7 +263,8 @@ __nfp_eth_read_ports(struct nfp_nsp *nsp)
goto err;
}
- /* The NFP3800 NIC support 8 ports, but only 2 ports are valid,
- * the rest 6 ports mac are all 0, ensure we don't use these port
+/*
+ * The NFP3800 NIC supports 8 ports, but only 2 ports are valid;
+ * the MACs of the rest 6 ports are all 0, ensure we don't use these ports
*/
for (i = 0; i < NSP_ETH_MAX_COUNT; i++) {
@@ -273,7 +274,8 @@ __nfp_eth_read_ports(struct nfp_nsp *nsp)
cnt++;
}
- /* Some versions of flash will give us 0 instead of port count. For
+ /*
+ * Some versions of flash will give us 0 instead of port count. For
* those that give a port count, verify it against the value calculated
* above.
*/
@@ -311,14 +313,16 @@ __nfp_eth_read_ports(struct nfp_nsp *nsp)
return NULL;
}
-/*
- * nfp_eth_read_ports() - retrieve port information
- * @cpp: NFP CPP handle
+/**
+ * Read the port information from the device.
+ *
+ * Returned structure should be freed once no longer needed.
*
- * Read the port information from the device. Returned structure should
- * be freed with kfree() once no longer needed.
+ * @param cpp
+ * NFP CPP handle
*
- * Return: populated ETH table or NULL on error.
+ * @return
+ * Populated ETH table or NULL on error.
*/
struct nfp_eth_table *
nfp_eth_read_ports(struct nfp_cpp *cpp)
@@ -386,19 +390,19 @@ nfp_eth_config_cleanup_end(struct nfp_nsp *nsp)
free(entries);
}
-/*
- * nfp_eth_config_commit_end() - perform recorded configuration changes
- * @nsp: NFP NSP handle returned from nfp_eth_config_start()
- *
+/**
* Perform the configuration which was requested with __nfp_eth_set_*()
- * helpers and recorded in @nsp state. If device was already configured
- * as requested or no __nfp_eth_set_*() operations were made no NSP command
+ * helpers and recorded in @nsp state. If device was already configured
+ * as requested or no __nfp_eth_set_*() operations were made, no NSP command
* will be performed.
*
- * Return:
- * 0 - configuration successful;
- * 1 - no changes were needed;
- * -ERRNO - configuration failed.
+ * @param nsp
+ * NFP NSP handle returned from nfp_eth_config_start()
+ *
+ * @return
+ * - (0) Configuration successful
+ * - (1) No changes were needed
+ * - (-ERRNO) Configuration failed
*/
int
nfp_eth_config_commit_end(struct nfp_nsp *nsp)
@@ -416,19 +420,21 @@ nfp_eth_config_commit_end(struct nfp_nsp *nsp)
return ret;
}
-/*
- * nfp_eth_set_mod_enable() - set PHY module enable control bit
- * @cpp: NFP CPP handle
- * @idx: NFP chip-wide port index
- * @enable: Desired state
- *
+/**
* Enable or disable PHY module (this usually means setting the TX lanes
* disable bits).
*
- * Return:
- * 0 - configuration successful;
- * 1 - no changes were needed;
- * -ERRNO - configuration failed.
+ * @param cpp
+ * NFP CPP handle
+ * @param idx
+ * NFP chip-wide port index
+ * @param enable
+ * Desired state
+ *
+ * @return
+ * - (0) Configuration successful
+ * - (1) No changes were needed
+ * - (-ERRNO) Configuration failed
*/
int
nfp_eth_set_mod_enable(struct nfp_cpp *cpp,
@@ -459,18 +465,20 @@ nfp_eth_set_mod_enable(struct nfp_cpp *cpp,
return nfp_eth_config_commit_end(nsp);
}
-/*
- * nfp_eth_set_configured() - set PHY module configured control bit
- * @cpp: NFP CPP handle
- * @idx: NFP chip-wide port index
- * @configed: Desired state
- *
+/**
* Set the ifup/ifdown state on the PHY.
*
- * Return:
- * 0 - configuration successful;
- * 1 - no changes were needed;
- * -ERRNO - configuration failed.
+ * @param cpp
+ * NFP CPP handle
+ * @param idx
+ * NFP chip-wide port index
+ * @param configured
+ * Desired state
+ *
+ * @return
+ * - (0) Configuration successful
+ * - (1) No changes were needed
+ * - (-ERRNO) Configuration failed
*/
int
nfp_eth_set_configured(struct nfp_cpp *cpp,
@@ -524,7 +532,7 @@ nfp_eth_set_bit_config(struct nfp_nsp *nsp,
/*
* Note: set features were added in ABI 0.14 but the error
- * codes were initially not populated correctly.
+ * codes were initially not populated correctly.
*/
if (nfp_nsp_get_abi_ver_minor(nsp) < 17) {
PMD_DRV_LOG(ERR, "set operations not supported, please update flash");
@@ -554,15 +562,17 @@ nfp_eth_set_bit_config(struct nfp_nsp *nsp,
val, ctrl_bit); \
}))
-/*
- * __nfp_eth_set_aneg() - set PHY autonegotiation control bit
- * @nsp: NFP NSP handle returned from nfp_eth_config_start()
- * @mode: Desired autonegotiation mode
- *
+/**
* Allow/disallow PHY module to advertise/perform autonegotiation.
* Will write to hwinfo overrides in the flash (persistent config).
*
- * Return: 0 or -ERRNO.
+ * @param nsp
+ * NFP NSP handle returned from nfp_eth_config_start()
+ * @param mode
+ * Desired autonegotiation mode
+ *
+ * @return
+ * 0 or -ERRNO
*/
int
__nfp_eth_set_aneg(struct nfp_nsp *nsp,
@@ -572,15 +582,17 @@ __nfp_eth_set_aneg(struct nfp_nsp *nsp,
NSP_ETH_STATE_ANEG, mode, NSP_ETH_CTRL_SET_ANEG);
}
-/*
- * __nfp_eth_set_fec() - set PHY forward error correction control bit
- * @nsp: NFP NSP handle returned from nfp_eth_config_start()
- * @mode: Desired fec mode
- *
+/**
* Set the PHY module forward error correction mode.
* Will write to hwinfo overrides in the flash (persistent config).
*
- * Return: 0 or -ERRNO.
+ * @param nsp
+ * NFP NSP handle returned from nfp_eth_config_start()
+ * @param mode
+ * Desired FEC mode
+ *
+ * @return
+ * 0 or -ERRNO
*/
static int
__nfp_eth_set_fec(struct nfp_nsp *nsp,
@@ -590,16 +602,20 @@ __nfp_eth_set_fec(struct nfp_nsp *nsp,
NSP_ETH_STATE_FEC, mode, NSP_ETH_CTRL_SET_FEC);
}
-/*
- * nfp_eth_set_fec() - set PHY forward error correction control mode
- * @cpp: NFP CPP handle
- * @idx: NFP chip-wide port index
- * @mode: Desired fec mode
+/**
+ * Set PHY forward error correction control mode
+ *
+ * @param cpp
+ * NFP CPP handle
+ * @param idx
+ * NFP chip-wide port index
+ * @param mode
+ * Desired FEC mode
*
- * Return:
- * 0 - configuration successful;
- * 1 - no changes were needed;
- * -ERRNO - configuration failed.
+ * @return
+ * - (0) Configuration successful
+ * - (1) No changes were needed
+ * - (-ERRNO) Configuration failed
*/
int
nfp_eth_set_fec(struct nfp_cpp *cpp,
@@ -622,17 +638,19 @@ nfp_eth_set_fec(struct nfp_cpp *cpp,
return nfp_eth_config_commit_end(nsp);
}
-/*
- * __nfp_eth_set_speed() - set interface speed/rate
- * @nsp: NFP NSP handle returned from nfp_eth_config_start()
- * @speed: Desired speed (per lane)
- *
- * Set lane speed. Provided @speed value should be subport speed divided
- * by number of lanes this subport is spanning (i.e. 10000 for 40G, 25000 for
- * 50G, etc.)
+/**
+ * Set lane speed.
+ * Provided @speed value should be the subport speed divided by the number of
+ * lanes this subport is spanning (e.g. 10000 for 40G, 25000 for 50G).
* Will write to hwinfo overrides in the flash (persistent config).
*
- * Return: 0 or -ERRNO.
+ * @param nsp
+ * NFP NSP handle returned from nfp_eth_config_start()
+ * @param speed
+ * Desired speed (per lane)
+ *
+ * @return
+ * 0 or -ERRNO
*/
int
__nfp_eth_set_speed(struct nfp_nsp *nsp,
@@ -650,15 +668,17 @@ __nfp_eth_set_speed(struct nfp_nsp *nsp,
NSP_ETH_STATE_RATE, rate, NSP_ETH_CTRL_SET_RATE);
}
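As a quick sanity check of the per-lane arithmetic described in the comment above, a hypothetical helper (not part of the driver; the name is invented for illustration) could look like this:

```c
#include <stdint.h>

/*
 * Per-lane speed is the subport speed divided by the lane count.
 * Illustrative sketch only, not driver code.
 */
uint32_t
per_lane_speed(uint32_t subport_speed_mbps, uint32_t lanes)
{
	return subport_speed_mbps / lanes;
}
```

With this rule, a 40G subport spanning 4 lanes yields 10000 and a 50G subport over 2 lanes yields 25000, matching the examples in the comment.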
-/*
- * __nfp_eth_set_split() - set interface lane split
- * @nsp: NFP NSP handle returned from nfp_eth_config_start()
- * @lanes: Desired lanes per port
- *
+/**
* Set number of lanes in the port.
* Will write to hwinfo overrides in the flash (persistent config).
*
- * Return: 0 or -ERRNO.
+ * @param nsp
+ * NFP NSP handle returned from nfp_eth_config_start()
+ * @param lanes
+ * Desired lanes per port
+ *
+ * @return
+ * 0 or -ERRNO
*/
int
__nfp_eth_set_split(struct nfp_nsp *nsp,
diff --git a/drivers/net/nfp/nfpcore/nfp_resource.c b/drivers/net/nfp/nfpcore/nfp_resource.c
index 363f7d6198..bdebf5c3aa 100644
--- a/drivers/net/nfp/nfpcore/nfp_resource.c
+++ b/drivers/net/nfp/nfpcore/nfp_resource.c
@@ -22,32 +22,23 @@
#define NFP_RESOURCE_ENTRY_NAME_SZ 8
-/*
- * struct nfp_resource_entry - Resource table entry
- * @owner: NFP CPP Lock, interface owner
- * @key: NFP CPP Lock, posix_crc32(name, 8)
- * @region: Memory region descriptor
- * @name: ASCII, zero padded name
- * @reserved
- * @cpp_action: CPP Action
- * @cpp_token: CPP Token
- * @cpp_target: CPP Target ID
- * @page_offset: 256-byte page offset into target's CPP address
- * @page_size: size, in 256-byte pages
- */
+/* Resource table entry */
struct nfp_resource_entry {
struct nfp_resource_entry_mutex {
- uint32_t owner;
- uint32_t key;
+ uint32_t owner; /**< NFP CPP Lock, interface owner */
+ uint32_t key; /**< NFP CPP Lock, posix_crc32(name, 8) */
} mutex;
+ /* Memory region descriptor */
struct nfp_resource_entry_region {
+ /** ASCII, zero padded name */
uint8_t name[NFP_RESOURCE_ENTRY_NAME_SZ];
uint8_t reserved[5];
- uint8_t cpp_action;
- uint8_t cpp_token;
- uint8_t cpp_target;
+ uint8_t cpp_action; /**< CPP Action */
+ uint8_t cpp_token; /**< CPP Token */
+ uint8_t cpp_target; /**< CPP Target ID */
+ /** 256-byte page offset into target's CPP address */
uint32_t page_offset;
- uint32_t page_size;
+ uint32_t page_size; /**< Size, in 256-byte pages */
} region;
};
@@ -147,14 +138,18 @@ nfp_resource_try_acquire(struct nfp_cpp *cpp,
return err;
}
-/*
- * nfp_resource_acquire() - Acquire a resource handle
- * @cpp: NFP CPP handle
- * @name: Name of the resource
+/**
+ * Acquire a resource handle
+ *
+ * Note: This function locks the acquired resource.
*
- * NOTE: This function locks the acquired resource
+ * @param cpp
+ * NFP CPP handle
+ * @param name
+ * Name of the resource
*
- * Return: NFP Resource handle, or NULL
+ * @return
+ * NFP Resource handle, or NULL
*/
struct nfp_resource *
nfp_resource_acquire(struct nfp_cpp *cpp,
@@ -183,7 +178,7 @@ nfp_resource_acquire(struct nfp_cpp *cpp,
}
wait.tv_sec = 0;
- wait.tv_nsec = 1000000;
+ wait.tv_nsec = 1000000; /* 1ms */
for (;;) {
err = nfp_resource_try_acquire(cpp, res, dev_mutex);
@@ -194,7 +189,7 @@ nfp_resource_acquire(struct nfp_cpp *cpp,
goto err_free;
}
- if (count++ > 1000) {
+ if (count++ > 1000) { /* 1ms * 1000 = 1s */
PMD_DRV_LOG(ERR, "Error: resource %s timed out", name);
err = -EBUSY;
goto err_free;
@@ -213,11 +208,13 @@ nfp_resource_acquire(struct nfp_cpp *cpp,
return NULL;
}
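The acquire path above retries every 1 ms and gives up after roughly 1000 attempts (about one second). A minimal sketch of that poll-with-timeout pattern, with invented names and independent of the NFP code:

```c
#include <errno.h>
#include <time.h>

/*
 * Retry try_fn() every 1 ms until it stops returning -EBUSY,
 * giving up after ~1000 attempts (about one second).
 * Illustrative sketch only; names are invented.
 */
int
poll_until_ready(int (*try_fn)(void *), void *arg)
{
	int err;
	int count = 0;
	struct timespec wait = { .tv_sec = 0, .tv_nsec = 1000000 }; /* 1 ms */

	for (;;) {
		err = try_fn(arg);
		if (err != -EBUSY)
			return err; /* success (0) or a hard failure */

		if (count++ > 1000)
			return -EBUSY; /* ~1 s elapsed, time out */

		nanosleep(&wait, NULL);
	}
}

/* Demo callback: reports busy twice, then success on the third attempt. */
static int demo_attempts;
int
demo_try(void *arg)
{
	(void)arg;
	return demo_attempts++ < 2 ? -EBUSY : 0;
}
```

The same shape appears in nfp_hwinfo_fetch() and nfp_nsp_wait(), differing only in the sleep interval and retry bound.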
-/*
- * nfp_resource_release() - Release a NFP Resource handle
- * @res: NFP Resource handle
+/**
+ * Release a NFP Resource handle
*
- * NOTE: This function implicitly unlocks the resource handle
+ * NOTE: This function implicitly unlocks the resource handle.
+ *
+ * @param res
+ * NFP Resource handle
*/
void
nfp_resource_release(struct nfp_resource *res)
@@ -227,11 +224,14 @@ nfp_resource_release(struct nfp_resource *res)
free(res);
}
-/*
- * nfp_resource_cpp_id() - Return the cpp_id of a resource handle
- * @res: NFP Resource handle
+/**
+ * Return the cpp_id of a resource handle
+ *
+ * @param res
+ * NFP Resource handle
*
- * Return: NFP CPP ID
+ * @return
+ * NFP CPP ID
*/
uint32_t
nfp_resource_cpp_id(const struct nfp_resource *res)
@@ -239,11 +239,14 @@ nfp_resource_cpp_id(const struct nfp_resource *res)
return res->cpp_id;
}
-/*
- * nfp_resource_name() - Return the name of a resource handle
- * @res: NFP Resource handle
+/**
+ * Return the name of a resource handle
*
- * Return: const char pointer to the name of the resource
+ * @param res
+ * NFP Resource handle
+ *
+ * @return
+ * Const char pointer to the name of the resource
*/
const char *
nfp_resource_name(const struct nfp_resource *res)
@@ -251,11 +254,14 @@ nfp_resource_name(const struct nfp_resource *res)
return res->name;
}
-/*
- * nfp_resource_address() - Return the address of a resource handle
- * @res: NFP Resource handle
+/**
+ * Return the address of a resource handle
+ *
+ * @param res
+ * NFP Resource handle
*
- * Return: Address of the resource
+ * @return
+ * Address of the resource
*/
uint64_t
nfp_resource_address(const struct nfp_resource *res)
@@ -263,11 +269,14 @@ nfp_resource_address(const struct nfp_resource *res)
return res->addr;
}
-/*
- * nfp_resource_size() - Return the size in bytes of a resource handle
- * @res: NFP Resource handle
+/**
+ * Return the size in bytes of a resource handle
+ *
+ * @param res
+ * NFP Resource handle
*
- * Return: Size of the resource in bytes
+ * @return
+ * Size of the resource in bytes
*/
uint64_t
nfp_resource_size(const struct nfp_resource *res)
diff --git a/drivers/net/nfp/nfpcore/nfp_resource.h b/drivers/net/nfp/nfpcore/nfp_resource.h
index 009b7359a4..4236950caf 100644
--- a/drivers/net/nfp/nfpcore/nfp_resource.h
+++ b/drivers/net/nfp/nfpcore/nfp_resource.h
@@ -8,43 +8,27 @@
#include "nfp_cpp.h"
+/* Netronome Flow Firmware Table */
#define NFP_RESOURCE_NFP_NFFW "nfp.nffw"
+
+/* NFP Hardware Info Database */
#define NFP_RESOURCE_NFP_HWINFO "nfp.info"
+
+/* Service Processor */
#define NFP_RESOURCE_NSP "nfp.sp"
-/**
- * Opaque handle to a NFP Resource
- */
+/* Opaque handle to a NFP Resource */
struct nfp_resource;
struct nfp_resource *nfp_resource_acquire(struct nfp_cpp *cpp,
const char *name);
-/**
- * Release a NFP Resource, and free the handle
- * @param[in] res NFP Resource handle
- */
void nfp_resource_release(struct nfp_resource *res);
-/**
- * Return the CPP ID of a NFP Resource
- * @param[in] res NFP Resource handle
- * @return CPP ID of the NFP Resource
- */
uint32_t nfp_resource_cpp_id(const struct nfp_resource *res);
-/**
- * Return the name of a NFP Resource
- * @param[in] res NFP Resource handle
- * @return Name of the NFP Resource
- */
const char *nfp_resource_name(const struct nfp_resource *res);
-/**
- * Return the target address of a NFP Resource
- * @param[in] res NFP Resource handle
- * @return Address of the NFP Resource
- */
uint64_t nfp_resource_address(const struct nfp_resource *res);
uint64_t nfp_resource_size(const struct nfp_resource *res);
diff --git a/drivers/net/nfp/nfpcore/nfp_rtsym.c b/drivers/net/nfp/nfpcore/nfp_rtsym.c
index d15a920752..0e6c0f9fe1 100644
--- a/drivers/net/nfp/nfpcore/nfp_rtsym.c
+++ b/drivers/net/nfp/nfpcore/nfp_rtsym.c
@@ -162,11 +162,14 @@ __nfp_rtsym_table_read(struct nfp_cpp *cpp,
return NULL;
}
-/*
- * nfp_rtsym_count() - Get the number of RTSYM descriptors
- * @rtbl: NFP RTsym table
+/**
+ * Get the number of RTSYM descriptors
+ *
+ * @param rtbl
+ * NFP RTSYM table
*
- * Return: Number of RTSYM descriptors
+ * @return
+ * Number of RTSYM descriptors
*/
int
nfp_rtsym_count(struct nfp_rtsym_table *rtbl)
@@ -177,12 +180,16 @@ nfp_rtsym_count(struct nfp_rtsym_table *rtbl)
return rtbl->num;
}
-/*
- * nfp_rtsym_get() - Get the Nth RTSYM descriptor
- * @rtbl: NFP RTsym table
- * @idx: Index (0-based) of the RTSYM descriptor
+/**
+ * Get the Nth RTSYM descriptor
+ *
+ * @param rtbl
+ * NFP RTSYM table
+ * @param idx
+ * Index (0-based) of the RTSYM descriptor
*
- * Return: const pointer to a struct nfp_rtsym descriptor, or NULL
+ * @return
+ * Const pointer to a struct nfp_rtsym descriptor, or NULL
*/
const struct nfp_rtsym *
nfp_rtsym_get(struct nfp_rtsym_table *rtbl,
@@ -197,12 +204,16 @@ nfp_rtsym_get(struct nfp_rtsym_table *rtbl,
return &rtbl->symtab[idx];
}
-/*
- * nfp_rtsym_lookup() - Return the RTSYM descriptor for a symbol name
- * @rtbl: NFP RTsym table
- * @name: Symbol name
+/**
+ * Return the RTSYM descriptor for a symbol name
+ *
+ * @param rtbl
+ * NFP RTSYM table
+ * @param name
+ * Symbol name
*
- * Return: const pointer to a struct nfp_rtsym descriptor, or NULL
+ * @return
+ * Const pointer to a struct nfp_rtsym descriptor, or NULL
*/
const struct nfp_rtsym *
nfp_rtsym_lookup(struct nfp_rtsym_table *rtbl,
@@ -227,7 +238,8 @@ nfp_rtsym_size(const struct nfp_rtsym *sym)
case NFP_RTSYM_TYPE_NONE:
PMD_DRV_LOG(ERR, "The type of rtsym '%s' is NONE", sym->name);
return 0;
- case NFP_RTSYM_TYPE_OBJECT: /* Fall through */
+ case NFP_RTSYM_TYPE_OBJECT:
+ /* FALLTHROUGH */
case NFP_RTSYM_TYPE_FUNCTION:
return sym->size;
case NFP_RTSYM_TYPE_ABS:
@@ -327,17 +339,22 @@ nfp_rtsym_readq(struct nfp_cpp *cpp,
return nfp_cpp_readq(cpp, cpp_id, addr, value);
}
-/*
- * nfp_rtsym_read_le() - Read a simple unsigned scalar value from symbol
- * @rtbl: NFP RTsym table
- * @name: Symbol name
- * @error: Pointer to error code (optional)
+/**
+ * Read a simple unsigned scalar value from symbol
*
 * Lookup a symbol, map, read it and return its value. Value of the symbol
* will be interpreted as a simple little-endian unsigned value. Symbol can
* be 4 or 8 bytes in size.
*
- * Return: value read, on error sets the error and returns ~0ULL.
+ * @param rtbl
+ * NFP RTSYM table
+ * @param name
+ * Symbol name
+ * @param error
+ * Pointer to error code (optional)
+ *
+ * @return
+ * Value read; on error, sets the error code and returns ~0ULL.
*/
uint64_t
nfp_rtsym_read_le(struct nfp_rtsym_table *rtbl,
diff --git a/drivers/net/nfp/nfpcore/nfp_rtsym.h b/drivers/net/nfp/nfpcore/nfp_rtsym.h
index e7295258b3..ff1facbd17 100644
--- a/drivers/net/nfp/nfpcore/nfp_rtsym.h
+++ b/drivers/net/nfp/nfpcore/nfp_rtsym.h
@@ -31,12 +31,12 @@
* of "sram" symbols for backward compatibility, which are viewed as global.
*/
struct nfp_rtsym {
- const char *name;
- uint64_t addr;
- uint64_t size;
- int type;
- int target;
- int domain;
+ const char *name; /**< Symbol name */
+ uint64_t addr; /**< Address in the domain/target's address space */
+ uint64_t size; /**< Size (in bytes) of the symbol */
+ int type; /**< NFP_RTSYM_TYPE_* of the symbol */
+ int target; /**< CPP target identifier, or NFP_RTSYM_TARGET_* */
+ int domain; /**< CPP target domain */
};
struct nfp_rtsym_table;
diff --git a/drivers/net/nfp/nfpcore/nfp_target.c b/drivers/net/nfp/nfpcore/nfp_target.c
index 611848e233..540b242a43 100644
--- a/drivers/net/nfp/nfpcore/nfp_target.c
+++ b/drivers/net/nfp/nfpcore/nfp_target.c
@@ -767,7 +767,7 @@ nfp_encode_basic(uint64_t *addr,
/*
* Make sure we compare against isldN values by clearing the
* LSB. This is what the silicon does.
- **/
+ */
isld[0] &= ~1;
isld[1] &= ~1;
--
2.39.1
* [PATCH 05/27] net/nfp: standard the local variable coding style
2023-08-24 11:09 1% ` [PATCH 02/27] net/nfp: unify the indent coding style Chaoyong He
@ 2023-08-24 11:09 3% ` Chaoyong He
2023-08-24 11:09 1% ` [PATCH 07/27] net/nfp: standard the comment style Chaoyong He
2023-08-24 11:09 5% ` [PATCH 19/27] net/nfp: refact the nsp module Chaoyong He
3 siblings, 0 replies; 200+ results
From: Chaoyong He @ 2023-08-24 11:09 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Declare only one local variable per line, and arrange the local
variables from short to long within each function.
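The rule can be illustrated with a hypothetical function (names are invented for this sketch): each local gets its own line, with shorter declarations placed before longer ones.

```c
#include <stdint.h>

/* Sum an array; locals are declared one per line, short to long. */
uint64_t
example_sum(const uint32_t *values, uint32_t count)
{
	uint32_t i;
	uint64_t total = 0;
	const uint32_t *cursor = values;

	for (i = 0; i < count; i++)
		total += *cursor++;

	return total;
}
```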
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c | 58 ++++++++++++----------
drivers/net/nfp/nfpcore/nfp_cppcore.c | 14 +++---
drivers/net/nfp/nfpcore/nfp_hwinfo.c | 27 ++++++----
drivers/net/nfp/nfpcore/nfp_mip.c | 8 +--
drivers/net/nfp/nfpcore/nfp_mutex.c | 25 ++++++----
drivers/net/nfp/nfpcore/nfp_nffw.c | 15 +++---
drivers/net/nfp/nfpcore/nfp_nsp.c | 40 ++++++++-------
drivers/net/nfp/nfpcore/nfp_nsp_cmds.c | 8 +--
drivers/net/nfp/nfpcore/nfp_nsp_eth.c | 39 ++++++++-------
drivers/net/nfp/nfpcore/nfp_resource.c | 15 +++---
drivers/net/nfp/nfpcore/nfp_rtsym.c | 19 ++++---
11 files changed, 151 insertions(+), 117 deletions(-)
diff --git a/drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c b/drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c
index ec14ec45dc..78beee07ef 100644
--- a/drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c
+++ b/drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c
@@ -140,9 +140,9 @@ nfp_compute_bar(const struct nfp_bar *bar,
size_t size,
int width)
{
- uint32_t bitsize;
- uint32_t newcfg;
uint64_t mask;
+ uint32_t newcfg;
+ uint32_t bitsize;
if (tgt >= 16)
return -EINVAL;
@@ -239,7 +239,8 @@ nfp_bar_write(struct nfp_pcie_user *nfp,
struct nfp_bar *bar,
uint32_t newcfg)
{
- int base, slot;
+ int base;
+ int slot;
base = bar->index >> 3;
slot = bar->index & 7;
@@ -268,9 +269,9 @@ nfp_reconfigure_bar(struct nfp_pcie_user *nfp,
size_t size,
int width)
{
- uint64_t newbase;
- uint32_t newcfg;
int err;
+ uint32_t newcfg;
+ uint64_t newbase;
err = nfp_compute_bar(bar, &newcfg, &newbase, tgt, act, tok, offset,
size, width);
@@ -303,8 +304,10 @@ nfp_reconfigure_bar(struct nfp_pcie_user *nfp,
static int
nfp_enable_bars(struct nfp_pcie_user *nfp)
{
+ int x;
+ int end;
+ int start;
struct nfp_bar *bar;
- int x, start, end;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
start = NFP_BAR_MID;
@@ -333,8 +336,10 @@ nfp_enable_bars(struct nfp_pcie_user *nfp)
static struct nfp_bar *
nfp_alloc_bar(struct nfp_pcie_user *nfp)
{
+ int x;
+ int end;
+ int start;
struct nfp_bar *bar;
- int x, start, end;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
start = NFP_BAR_MID;
@@ -356,8 +361,10 @@ nfp_alloc_bar(struct nfp_pcie_user *nfp)
static void
nfp_disable_bars(struct nfp_pcie_user *nfp)
{
+ int x;
+ int end;
+ int start;
struct nfp_bar *bar;
- int x, start, end;
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
start = NFP_BAR_MID;
@@ -403,12 +410,13 @@ nfp6000_area_init(struct nfp_cpp_area *area,
uint64_t address,
size_t size)
{
- struct nfp_pcie_user *nfp = nfp_cpp_priv(nfp_cpp_area_cpp(area));
- struct nfp6000_area_priv *priv = nfp_cpp_area_priv(area);
+ int pp;
+ int ret = 0;
+ uint32_t token = NFP_CPP_ID_TOKEN_of(dest);
uint32_t target = NFP_CPP_ID_TARGET_of(dest);
uint32_t action = NFP_CPP_ID_ACTION_of(dest);
- uint32_t token = NFP_CPP_ID_TOKEN_of(dest);
- int pp, ret = 0;
+ struct nfp6000_area_priv *priv = nfp_cpp_area_priv(area);
+ struct nfp_pcie_user *nfp = nfp_cpp_priv(nfp_cpp_area_cpp(area));
pp = nfp_target_pushpull(NFP_CPP_ID(target, action, token), address);
if (pp < 0)
@@ -493,14 +501,14 @@ nfp6000_area_read(struct nfp_cpp_area *area,
uint32_t offset,
size_t length)
{
+ size_t n;
+ int width;
+ bool is_64;
+ uint32_t *wrptr32 = kernel_vaddr;
uint64_t *wrptr64 = kernel_vaddr;
- const volatile uint64_t *rdptr64;
struct nfp6000_area_priv *priv;
- uint32_t *wrptr32 = kernel_vaddr;
const volatile uint32_t *rdptr32;
- int width;
- size_t n;
- bool is_64;
+ const volatile uint64_t *rdptr64;
priv = nfp_cpp_area_priv(area);
rdptr64 = (uint64_t *)(priv->iomem + offset);
@@ -563,14 +571,14 @@ nfp6000_area_write(struct nfp_cpp_area *area,
uint32_t offset,
size_t length)
{
- const uint64_t *rdptr64 = kernel_vaddr;
- uint64_t *wrptr64;
- const uint32_t *rdptr32 = kernel_vaddr;
- struct nfp6000_area_priv *priv;
- uint32_t *wrptr32;
- int width;
size_t n;
+ int width;
bool is_64;
+ uint32_t *wrptr32;
+ uint64_t *wrptr64;
+ struct nfp6000_area_priv *priv;
+ const uint32_t *rdptr32 = kernel_vaddr;
+ const uint64_t *rdptr64 = kernel_vaddr;
priv = nfp_cpp_area_priv(area);
wrptr64 = (uint64_t *)(priv->iomem + offset);
@@ -693,10 +701,10 @@ static int
nfp6000_set_serial(struct rte_pci_device *dev,
struct nfp_cpp *cpp)
{
+ off_t pos;
uint16_t tmp;
uint8_t serial[6];
int serial_len = 6;
- off_t pos;
pos = rte_pci_find_ext_capability(dev, RTE_PCI_EXT_CAP_ID_DSN);
if (pos <= 0) {
@@ -741,8 +749,8 @@ static int
nfp6000_set_barsz(struct rte_pci_device *dev,
struct nfp_pcie_user *desc)
{
- uint64_t tmp;
int i = 0;
+ uint64_t tmp;
tmp = dev->mem_resource[0].len;
diff --git a/drivers/net/nfp/nfpcore/nfp_cppcore.c b/drivers/net/nfp/nfpcore/nfp_cppcore.c
index f885e7d8ff..776842bdf6 100644
--- a/drivers/net/nfp/nfpcore/nfp_cppcore.c
+++ b/drivers/net/nfp/nfpcore/nfp_cppcore.c
@@ -172,9 +172,9 @@ nfp_cpp_area_alloc_with_name(struct nfp_cpp *cpp,
uint64_t address,
uint32_t size)
{
+ int err;
struct nfp_cpp_area *area;
uint64_t tmp64 = (uint64_t)address;
- int err;
if (cpp == NULL)
return NULL;
@@ -396,8 +396,8 @@ static uint32_t
nfp_xpb_to_cpp(struct nfp_cpp *cpp,
uint32_t *xpb_addr)
{
- uint32_t xpb;
int island;
+ uint32_t xpb;
xpb = NFP_CPP_ID(14, NFP_CPP_ACTION_RW, 0);
@@ -569,9 +569,9 @@ static struct nfp_cpp *
nfp_cpp_alloc(struct rte_pci_device *dev,
int driver_lock_needed)
{
- const struct nfp_cpp_operations *ops;
- struct nfp_cpp *cpp;
int err;
+ struct nfp_cpp *cpp;
+ const struct nfp_cpp_operations *ops;
ops = nfp_cpp_transport_operations();
@@ -657,8 +657,8 @@ nfp_cpp_read(struct nfp_cpp *cpp,
void *kernel_vaddr,
size_t length)
{
- struct nfp_cpp_area *area;
int err;
+ struct nfp_cpp_area *area;
area = nfp_cpp_area_alloc_acquire(cpp, destination, address, length);
if (area == NULL) {
@@ -687,8 +687,8 @@ nfp_cpp_write(struct nfp_cpp *cpp,
const void *kernel_vaddr,
size_t length)
{
- struct nfp_cpp_area *area;
int err;
+ struct nfp_cpp_area *area;
area = nfp_cpp_area_alloc_acquire(cpp, destination, address, length);
if (area == NULL)
@@ -708,8 +708,8 @@ uint32_t
__nfp_cpp_model_autodetect(struct nfp_cpp *cpp,
uint32_t *model)
{
- uint32_t reg;
int err;
+ uint32_t reg;
err = nfp_xpb_readl(cpp, NFP_XPB_DEVICE(1, 1, 16) + NFP_PL_DEVICE_ID,
®);
diff --git a/drivers/net/nfp/nfpcore/nfp_hwinfo.c b/drivers/net/nfp/nfpcore/nfp_hwinfo.c
index ea4c7d6a9e..819761eda0 100644
--- a/drivers/net/nfp/nfpcore/nfp_hwinfo.c
+++ b/drivers/net/nfp/nfpcore/nfp_hwinfo.c
@@ -36,7 +36,9 @@ static int
nfp_hwinfo_db_walk(struct nfp_hwinfo *hwinfo,
uint32_t size)
{
- const char *key, *val, *end = hwinfo->data + size;
+ const char *key;
+ const char *val;
+ const char *end = hwinfo->data + size;
for (key = hwinfo->data; *key != 0 && key < end;
key = val + strlen(val) + 1) {
@@ -58,7 +60,9 @@ static int
nfp_hwinfo_db_validate(struct nfp_hwinfo *db,
uint32_t len)
{
- uint32_t size, new_crc, *crc;
+ uint32_t *crc;
+ uint32_t size;
+ uint32_t new_crc;
size = db->size;
if (size > len) {
@@ -82,12 +86,12 @@ static struct nfp_hwinfo *
nfp_hwinfo_try_fetch(struct nfp_cpp *cpp,
size_t *cpp_size)
{
- struct nfp_hwinfo *header;
- void *res;
- uint64_t cpp_addr;
- uint32_t cpp_id;
int err;
+ void *res;
uint8_t *db;
+ uint32_t cpp_id;
+ uint64_t cpp_addr;
+ struct nfp_hwinfo *header;
res = nfp_resource_acquire(cpp, NFP_RESOURCE_NFP_HWINFO);
if (res) {
@@ -135,13 +139,12 @@ static struct nfp_hwinfo *
nfp_hwinfo_fetch(struct nfp_cpp *cpp,
size_t *hwdb_size)
{
+ int count = 0;
struct timespec wait;
struct nfp_hwinfo *db;
- int count;
wait.tv_sec = 0;
wait.tv_nsec = 10000000;
- count = 0;
for (;;) {
db = nfp_hwinfo_try_fetch(cpp, hwdb_size);
@@ -159,9 +162,9 @@ nfp_hwinfo_fetch(struct nfp_cpp *cpp,
struct nfp_hwinfo *
nfp_hwinfo_read(struct nfp_cpp *cpp)
{
- struct nfp_hwinfo *db;
- size_t hwdb_size = 0;
int err;
+ size_t hwdb_size = 0;
+ struct nfp_hwinfo *db;
db = nfp_hwinfo_fetch(cpp, &hwdb_size);
if (db == NULL)
@@ -186,7 +189,9 @@ const char *
nfp_hwinfo_lookup(struct nfp_hwinfo *hwinfo,
const char *lookup)
{
- const char *key, *val, *end;
+ const char *key;
+ const char *val;
+ const char *end;
if (hwinfo == NULL || lookup == NULL)
return NULL;
diff --git a/drivers/net/nfp/nfpcore/nfp_mip.c b/drivers/net/nfp/nfpcore/nfp_mip.c
index 0071d3fc37..1e601313b4 100644
--- a/drivers/net/nfp/nfpcore/nfp_mip.c
+++ b/drivers/net/nfp/nfpcore/nfp_mip.c
@@ -68,10 +68,10 @@ static int
nfp_mip_read_resource(struct nfp_cpp *cpp,
struct nfp_mip *mip)
{
- struct nfp_nffw_info *nffw_info;
- uint32_t cpp_id;
- uint64_t addr;
int err;
+ uint64_t addr;
+ uint32_t cpp_id;
+ struct nfp_nffw_info *nffw_info;
nffw_info = nfp_nffw_info_open(cpp);
if (nffw_info == NULL)
@@ -100,8 +100,8 @@ nfp_mip_read_resource(struct nfp_cpp *cpp,
struct nfp_mip *
nfp_mip_open(struct nfp_cpp *cpp)
{
- struct nfp_mip *mip;
int err;
+ struct nfp_mip *mip;
mip = malloc(sizeof(*mip));
if (mip == NULL)
diff --git a/drivers/net/nfp/nfpcore/nfp_mutex.c b/drivers/net/nfp/nfpcore/nfp_mutex.c
index 4d26e6f052..05e0ff46e5 100644
--- a/drivers/net/nfp/nfpcore/nfp_mutex.c
+++ b/drivers/net/nfp/nfpcore/nfp_mutex.c
@@ -79,9 +79,9 @@ nfp_cpp_mutex_init(struct nfp_cpp *cpp,
uint64_t address,
uint32_t key)
{
+ int err;
uint32_t model = nfp_cpp_model(cpp);
uint32_t muw = NFP_CPP_ID(target, 4, 0); /* atomic_write */
- int err;
err = _nfp_cpp_mutex_validate(model, &target, address);
if (err < 0)
@@ -122,11 +122,11 @@ nfp_cpp_mutex_alloc(struct nfp_cpp *cpp,
uint64_t address,
uint32_t key)
{
- uint32_t model = nfp_cpp_model(cpp);
- struct nfp_cpp_mutex *mutex;
- uint32_t mur = NFP_CPP_ID(target, 3, 0); /* atomic_read */
int err;
uint32_t tmp;
+ struct nfp_cpp_mutex *mutex;
+ uint32_t model = nfp_cpp_model(cpp);
+ uint32_t mur = NFP_CPP_ID(target, 3, 0); /* atomic_read */
/* Look for cached mutex */
for (mutex = cpp->mutex_cache; mutex; mutex = mutex->next) {
@@ -241,12 +241,13 @@ nfp_cpp_mutex_lock(struct nfp_cpp_mutex *mutex)
int
nfp_cpp_mutex_unlock(struct nfp_cpp_mutex *mutex)
{
- uint32_t muw = NFP_CPP_ID(mutex->target, 4, 0); /* atomic_write */
- uint32_t mur = NFP_CPP_ID(mutex->target, 3, 0); /* atomic_read */
+ int err;
+ uint32_t key;
+ uint32_t value;
struct nfp_cpp *cpp = mutex->cpp;
- uint32_t key, value;
uint16_t interface = nfp_cpp_interface(cpp);
- int err;
+ uint32_t muw = NFP_CPP_ID(mutex->target, 4, 0); /* atomic_write */
+ uint32_t mur = NFP_CPP_ID(mutex->target, 3, 0); /* atomic_read */
if (mutex->depth > 1) {
mutex->depth--;
@@ -295,12 +296,14 @@ nfp_cpp_mutex_unlock(struct nfp_cpp_mutex *mutex)
int
nfp_cpp_mutex_trylock(struct nfp_cpp_mutex *mutex)
{
+ int err;
+ uint32_t key;
+ uint32_t tmp;
+ uint32_t value;
+ struct nfp_cpp *cpp = mutex->cpp;
uint32_t mur = NFP_CPP_ID(mutex->target, 3, 0); /* atomic_read */
uint32_t muw = NFP_CPP_ID(mutex->target, 4, 0); /* atomic_write */
uint32_t mus = NFP_CPP_ID(mutex->target, 5, 3); /* test_set_imm */
- uint32_t key, value, tmp;
- struct nfp_cpp *cpp = mutex->cpp;
- int err;
if (mutex->depth > 0) {
if (mutex->depth == MUTEX_DEPTH_MAX)
diff --git a/drivers/net/nfp/nfpcore/nfp_nffw.c b/drivers/net/nfp/nfpcore/nfp_nffw.c
index 7ff468373b..32e0fc94bb 100644
--- a/drivers/net/nfp/nfpcore/nfp_nffw.c
+++ b/drivers/net/nfp/nfpcore/nfp_nffw.c
@@ -68,9 +68,11 @@ nffw_fwinfo_mip_offset_get(const struct nffw_fwinfo *fi)
static int
nfp_mip_mu_locality_lsb(struct nfp_cpp *cpp)
{
- uint32_t mode, addr40;
- uint32_t xpbaddr, imbcppat;
int err;
+ uint32_t mode;
+ uint32_t addr40;
+ uint32_t xpbaddr;
+ uint32_t imbcppat;
/* Hardcoded XPB IMB Base, island 0 */
xpbaddr = 0x000a0000 + NFP_CPP_TARGET_MU * 4;
@@ -117,10 +119,10 @@ nffw_res_fwinfos(struct nfp_nffw_info_data *fwinf, struct nffw_fwinfo **arr)
struct nfp_nffw_info *
nfp_nffw_info_open(struct nfp_cpp *cpp)
{
- struct nfp_nffw_info_data *fwinf;
- struct nfp_nffw_info *state;
- uint32_t info_ver;
int err;
+ uint32_t info_ver;
+ struct nfp_nffw_info *state;
+ struct nfp_nffw_info_data *fwinf;
state = malloc(sizeof(*state));
if (state == NULL)
@@ -182,8 +184,9 @@ nfp_nffw_info_close(struct nfp_nffw_info *state)
static struct nffw_fwinfo *
nfp_nffw_info_fwid_first(struct nfp_nffw_info *state)
{
+ uint32_t i;
+ uint32_t cnt;
struct nffw_fwinfo *fwinfo;
- uint32_t cnt, i;
cnt = nffw_res_fwinfos(&state->fwinf, &fwinfo);
if (cnt == 0)
diff --git a/drivers/net/nfp/nfpcore/nfp_nsp.c b/drivers/net/nfp/nfpcore/nfp_nsp.c
index 87eed3d951..a00bd5870d 100644
--- a/drivers/net/nfp/nfpcore/nfp_nsp.c
+++ b/drivers/net/nfp/nfpcore/nfp_nsp.c
@@ -72,10 +72,11 @@ nfp_nsp_print_extended_error(uint32_t ret_val)
static int
nfp_nsp_check(struct nfp_nsp *state)
{
- struct nfp_cpp *cpp = state->cpp;
- uint64_t nsp_status, reg;
- uint32_t nsp_cpp;
int err;
+ uint64_t reg;
+ uint32_t nsp_cpp;
+ uint64_t nsp_status;
+ struct nfp_cpp *cpp = state->cpp;
nsp_cpp = nfp_resource_cpp_id(state->res);
nsp_status = nfp_resource_address(state->res) + NSP_STATUS;
@@ -113,9 +114,9 @@ nfp_nsp_check(struct nfp_nsp *state)
struct nfp_nsp *
nfp_nsp_open(struct nfp_cpp *cpp)
{
- struct nfp_resource *res;
- struct nfp_nsp *state;
int err;
+ struct nfp_nsp *state;
+ struct nfp_resource *res;
res = nfp_resource_acquire(cpp, NFP_RESOURCE_NSP);
if (res == NULL)
@@ -170,13 +171,12 @@ nfp_nsp_wait_reg(struct nfp_cpp *cpp,
uint64_t mask,
uint64_t val)
{
- struct timespec wait;
- uint32_t count;
int err;
+ uint32_t count = 0;
+ struct timespec wait;
wait.tv_sec = 0;
wait.tv_nsec = 25000000;
- count = 0;
for (;;) {
err = nfp_cpp_readq(cpp, nsp_cpp, addr, reg);
@@ -217,10 +217,15 @@ nfp_nsp_command(struct nfp_nsp *state,
uint32_t buff_cpp,
uint64_t buff_addr)
{
- uint64_t reg, ret_val, nsp_base, nsp_buffer, nsp_status, nsp_command;
- struct nfp_cpp *cpp = state->cpp;
- uint32_t nsp_cpp;
int err;
+ uint64_t reg;
+ uint32_t nsp_cpp;
+ uint64_t ret_val;
+ uint64_t nsp_base;
+ uint64_t nsp_buffer;
+ uint64_t nsp_status;
+ uint64_t nsp_command;
+ struct nfp_cpp *cpp = state->cpp;
nsp_cpp = nfp_resource_cpp_id(state->res);
nsp_base = nfp_resource_address(state->res);
@@ -296,11 +301,13 @@ nfp_nsp_command_buf(struct nfp_nsp *nsp,
void *out_buf,
unsigned int out_size)
{
- struct nfp_cpp *cpp = nsp->cpp;
+ int err;
+ int ret;
+ uint64_t reg;
size_t max_size;
- uint64_t reg, cpp_buf;
- int ret, err;
uint32_t cpp_id;
+ uint64_t cpp_buf;
+ struct nfp_cpp *cpp = nsp->cpp;
if (nsp->ver.minor < 13) {
PMD_DRV_LOG(ERR, "NSP: Code 0x%04x with buffer not supported ABI %hu.%hu)",
@@ -360,13 +367,12 @@ nfp_nsp_command_buf(struct nfp_nsp *nsp,
int
nfp_nsp_wait(struct nfp_nsp *state)
{
- struct timespec wait;
- uint32_t count;
int err;
+ int count = 0;
+ struct timespec wait;
wait.tv_sec = 0;
wait.tv_nsec = 25000000;
- count = 0;
for (;;) {
err = nfp_nsp_command(state, SPCODE_NOOP, 0, 0, 0);
diff --git a/drivers/net/nfp/nfpcore/nfp_nsp_cmds.c b/drivers/net/nfp/nfpcore/nfp_nsp_cmds.c
index 31677b66e6..3081e22dad 100644
--- a/drivers/net/nfp/nfpcore/nfp_nsp_cmds.c
+++ b/drivers/net/nfp/nfpcore/nfp_nsp_cmds.c
@@ -26,9 +26,9 @@ struct nsp_identify {
struct nfp_nsp_identify *
__nfp_nsp_identify(struct nfp_nsp *nsp)
{
- struct nfp_nsp_identify *nspi = NULL;
- struct nsp_identify *ni;
int ret;
+ struct nsp_identify *ni;
+ struct nfp_nsp_identify *nspi = NULL;
if (nfp_nsp_get_abi_ver_minor(nsp) < 15)
return NULL;
@@ -77,9 +77,9 @@ nfp_hwmon_read_sensor(struct nfp_cpp *cpp,
enum nfp_nsp_sensor_id id,
uint32_t *val)
{
- struct nfp_sensors s;
- struct nfp_nsp *nsp;
int ret;
+ struct nfp_nsp *nsp;
+ struct nfp_sensors s;
nsp = nfp_nsp_open(cpp);
if (nsp == NULL)
diff --git a/drivers/net/nfp/nfpcore/nfp_nsp_eth.c b/drivers/net/nfp/nfpcore/nfp_nsp_eth.c
index 899fcd7441..9e8a247e5c 100644
--- a/drivers/net/nfp/nfpcore/nfp_nsp_eth.c
+++ b/drivers/net/nfp/nfpcore/nfp_nsp_eth.c
@@ -149,9 +149,10 @@ nfp_eth_port_translate(struct nfp_nsp *nsp,
uint32_t index,
struct nfp_eth_table_port *dst)
{
- uint32_t rate;
uint32_t fec;
- uint64_t port, state;
+ uint64_t port;
+ uint32_t rate;
+ uint64_t state;
port = rte_le_to_cpu_64(src->port);
state = rte_le_to_cpu_64(src->state);
@@ -199,7 +200,8 @@ nfp_eth_port_translate(struct nfp_nsp *nsp,
static void
nfp_eth_calc_port_geometry(struct nfp_eth_table *table)
{
- uint32_t i, j;
+ uint32_t i;
+ uint32_t j;
for (i = 0; i < table->count; i++) {
table->max_index = RTE_MAX(table->max_index,
@@ -241,12 +243,13 @@ nfp_eth_calc_port_type(struct nfp_eth_table_port *entry)
static struct nfp_eth_table *
__nfp_eth_read_ports(struct nfp_nsp *nsp)
{
- union eth_table_entry *entries;
- struct nfp_eth_table *table;
- uint32_t table_sz;
+ int ret;
uint32_t i;
uint32_t j;
- int ret, cnt = 0;
+ int cnt = 0;
+ uint32_t table_sz;
+ struct nfp_eth_table *table;
+ union eth_table_entry *entries;
const struct rte_ether_addr *mac;
entries = malloc(NSP_ETH_TABLE_SIZE);
@@ -320,8 +323,8 @@ __nfp_eth_read_ports(struct nfp_nsp *nsp)
struct nfp_eth_table *
nfp_eth_read_ports(struct nfp_cpp *cpp)
{
- struct nfp_eth_table *ret;
struct nfp_nsp *nsp;
+ struct nfp_eth_table *ret;
nsp = nfp_nsp_open(cpp);
if (nsp == NULL)
@@ -337,9 +340,9 @@ struct nfp_nsp *
nfp_eth_config_start(struct nfp_cpp *cpp,
uint32_t idx)
{
- union eth_table_entry *entries;
- struct nfp_nsp *nsp;
int ret;
+ struct nfp_nsp *nsp;
+ union eth_table_entry *entries;
entries = malloc(NSP_ETH_TABLE_SIZE);
if (entries == NULL)
@@ -400,8 +403,8 @@ nfp_eth_config_cleanup_end(struct nfp_nsp *nsp)
int
nfp_eth_config_commit_end(struct nfp_nsp *nsp)
{
- union eth_table_entry *entries = nfp_nsp_config_entries(nsp);
int ret = 1;
+ union eth_table_entry *entries = nfp_nsp_config_entries(nsp);
if (nfp_nsp_config_modified(nsp)) {
ret = nfp_nsp_write_eth_table(nsp, entries, NSP_ETH_TABLE_SIZE);
@@ -432,9 +435,9 @@ nfp_eth_set_mod_enable(struct nfp_cpp *cpp,
uint32_t idx,
int enable)
{
- union eth_table_entry *entries;
- struct nfp_nsp *nsp;
uint64_t reg;
+ struct nfp_nsp *nsp;
+ union eth_table_entry *entries;
nsp = nfp_eth_config_start(cpp, idx);
if (nsp == NULL)
@@ -474,9 +477,9 @@ nfp_eth_set_configured(struct nfp_cpp *cpp,
uint32_t idx,
int configed)
{
- union eth_table_entry *entries;
- struct nfp_nsp *nsp;
uint64_t reg;
+ struct nfp_nsp *nsp;
+ union eth_table_entry *entries;
nsp = nfp_eth_config_start(cpp, idx);
if (nsp == NULL)
@@ -515,9 +518,9 @@ nfp_eth_set_bit_config(struct nfp_nsp *nsp,
uint32_t val,
const uint64_t ctrl_bit)
{
- union eth_table_entry *entries = nfp_nsp_config_entries(nsp);
- uint32_t idx = nfp_nsp_config_idx(nsp);
uint64_t reg;
+ uint32_t idx = nfp_nsp_config_idx(nsp);
+ union eth_table_entry *entries = nfp_nsp_config_entries(nsp);
/*
* Note: set features were added in ABI 0.14 but the error
@@ -603,8 +606,8 @@ nfp_eth_set_fec(struct nfp_cpp *cpp,
uint32_t idx,
enum nfp_eth_fec mode)
{
- struct nfp_nsp *nsp;
int err;
+ struct nfp_nsp *nsp;
nsp = nfp_eth_config_start(cpp, idx);
if (nsp == NULL)
diff --git a/drivers/net/nfp/nfpcore/nfp_resource.c b/drivers/net/nfp/nfpcore/nfp_resource.c
index 9dd4832779..fa92f2762e 100644
--- a/drivers/net/nfp/nfpcore/nfp_resource.c
+++ b/drivers/net/nfp/nfpcore/nfp_resource.c
@@ -67,10 +67,12 @@ static int
nfp_cpp_resource_find(struct nfp_cpp *cpp,
struct nfp_resource *res)
{
- char name_pad[NFP_RESOURCE_ENTRY_NAME_SZ + 2];
+ int ret;
+ uint32_t i;
+ uint32_t key;
+ uint32_t cpp_id;
struct nfp_resource_entry entry;
- uint32_t cpp_id, key;
- int ret, i;
+ char name_pad[NFP_RESOURCE_ENTRY_NAME_SZ + 2];
cpp_id = NFP_CPP_ID(NFP_RESOURCE_TBL_TARGET, 3, 0); /* Atomic read */
@@ -152,11 +154,11 @@ struct nfp_resource *
nfp_resource_acquire(struct nfp_cpp *cpp,
const char *name)
{
- struct nfp_cpp_mutex *dev_mutex;
- struct nfp_resource *res;
int err;
+ uint16_t count = 0;
struct timespec wait;
- uint16_t count;
+ struct nfp_resource *res;
+ struct nfp_cpp_mutex *dev_mutex;
res = malloc(sizeof(*res));
if (res == NULL)
@@ -175,7 +177,6 @@ nfp_resource_acquire(struct nfp_cpp *cpp,
wait.tv_sec = 0;
wait.tv_nsec = 1000000;
- count = 0;
for (;;) {
err = nfp_resource_try_acquire(cpp, res, dev_mutex);
diff --git a/drivers/net/nfp/nfpcore/nfp_rtsym.c b/drivers/net/nfp/nfpcore/nfp_rtsym.c
index 243d3c9ce5..a34278beca 100644
--- a/drivers/net/nfp/nfpcore/nfp_rtsym.c
+++ b/drivers/net/nfp/nfpcore/nfp_rtsym.c
@@ -85,8 +85,8 @@ nfp_rtsym_sw_entry_init(struct nfp_rtsym_table *cache,
struct nfp_rtsym_table *
nfp_rtsym_table_read(struct nfp_cpp *cpp)
{
- struct nfp_rtsym_table *rtbl;
struct nfp_mip *mip;
+ struct nfp_rtsym_table *rtbl;
mip = nfp_mip_open(cpp);
rtbl = __nfp_rtsym_table_read(cpp, mip);
@@ -99,13 +99,18 @@ struct nfp_rtsym_table *
__nfp_rtsym_table_read(struct nfp_cpp *cpp,
const struct nfp_mip *mip)
{
- uint32_t strtab_addr, symtab_addr, strtab_size, symtab_size;
- struct nfp_rtsym_entry *rtsymtab;
+ int n;
+ int err;
+ uint32_t size;
+ uint32_t strtab_addr;
+ uint32_t symtab_addr;
+ uint32_t strtab_size;
+ uint32_t symtab_size;
struct nfp_rtsym_table *cache;
+ struct nfp_rtsym_entry *rtsymtab;
const uint32_t dram =
NFP_CPP_ID(NFP_CPP_TARGET_MU, NFP_CPP_ACTION_RW, 0) |
NFP_ISL_EMEM0;
- int err, n, size;
if (mip == NULL)
return NULL;
@@ -341,10 +346,10 @@ nfp_rtsym_read_le(struct nfp_rtsym_table *rtbl,
const char *name,
int *error)
{
- const struct nfp_rtsym *sym;
- uint32_t val32;
- uint64_t val;
int err;
+ uint64_t val;
+ uint32_t val32;
+ const struct nfp_rtsym *sym;
sym = nfp_rtsym_lookup(rtbl, name);
if (sym == NULL) {
--
2.39.1
* [PATCH 02/27] net/nfp: unify the indent coding style
@ 2023-08-24 11:09 1% ` Chaoyong He
From: Chaoyong He @ 2023-08-24 11:09 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Each parameter of a function should occupy its own line, indented by two TAB
characters.
Any statement that spans multiple lines should likewise indent its continuation
lines by two TAB characters.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/nfpcore/nfp_cpp.h | 80 +++++-----
drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c | 152 ++++++++++--------
drivers/net/nfp/nfpcore/nfp_cppcore.c | 173 +++++++++++++--------
drivers/net/nfp/nfpcore/nfp_hwinfo.c | 23 +--
drivers/net/nfp/nfpcore/nfp_mip.c | 21 ++-
drivers/net/nfp/nfpcore/nfp_mip.h | 2 +-
drivers/net/nfp/nfpcore/nfp_mutex.c | 25 +--
drivers/net/nfp/nfpcore/nfp_nffw.c | 9 +-
drivers/net/nfp/nfpcore/nfp_nsp.c | 108 +++++++------
drivers/net/nfp/nfpcore/nfp_nsp.h | 19 +--
drivers/net/nfp/nfpcore/nfp_nsp_cmds.c | 4 +-
drivers/net/nfp/nfpcore/nfp_nsp_eth.c | 69 ++++----
drivers/net/nfp/nfpcore/nfp_resource.c | 29 ++--
drivers/net/nfp/nfpcore/nfp_resource.h | 2 +-
drivers/net/nfp/nfpcore/nfp_rtsym.c | 38 +++--
drivers/net/nfp/nfpcore/nfp_rtsym.h | 15 +-
16 files changed, 447 insertions(+), 322 deletions(-)
diff --git a/drivers/net/nfp/nfpcore/nfp_cpp.h b/drivers/net/nfp/nfpcore/nfp_cpp.h
index 8f87c09327..54bef3cb6b 100644
--- a/drivers/net/nfp/nfpcore/nfp_cpp.h
+++ b/drivers/net/nfp/nfpcore/nfp_cpp.h
@@ -56,7 +56,8 @@ struct nfp_cpp_operations {
size_t area_priv_size;
/* Instance an NFP CPP */
- int (*init)(struct nfp_cpp *cpp, struct rte_pci_device *dev);
+ int (*init)(struct nfp_cpp *cpp,
+ struct rte_pci_device *dev);
/*
* Free the bus.
@@ -69,9 +70,9 @@ struct nfp_cpp_operations {
* NOTE: This is _not_ serialized
*/
int (*area_init)(struct nfp_cpp_area *area,
- uint32_t dest,
- unsigned long long address,
- unsigned long size);
+ uint32_t dest,
+ unsigned long long address,
+ unsigned long size);
/*
* Clean up a NFP CPP area before it is freed
* NOTE: This is _not_ serialized
@@ -101,17 +102,17 @@ struct nfp_cpp_operations {
* Serialized
*/
int (*area_read)(struct nfp_cpp_area *area,
- void *kernel_vaddr,
- unsigned long offset,
- unsigned int length);
+ void *kernel_vaddr,
+ unsigned long offset,
+ unsigned int length);
/*
* Perform a write to a NFP CPP area
* Serialized
*/
int (*area_write)(struct nfp_cpp_area *area,
- const void *kernel_vaddr,
- unsigned long offset,
- unsigned int length);
+ const void *kernel_vaddr,
+ unsigned long offset,
+ unsigned int length);
};
/*
@@ -239,7 +240,7 @@ void nfp_cpp_interface_set(struct nfp_cpp *cpp, uint32_t interface);
* @param len Length of the serial byte array
*/
int nfp_cpp_serial_set(struct nfp_cpp *cpp, const uint8_t *serial,
- size_t serial_len);
+ size_t serial_len);
/*
* Set the private data of the nfp_cpp instance
@@ -279,7 +280,7 @@ uint32_t __nfp_cpp_model_autodetect(struct nfp_cpp *cpp, uint32_t *model);
* @return NFP CPP handle, or NULL on failure.
*/
struct nfp_cpp *nfp_cpp_from_device_name(struct rte_pci_device *dev,
- int driver_lock_needed);
+ int driver_lock_needed);
/*
* Free a NFP CPP handle
@@ -397,8 +398,7 @@ int nfp_cpp_serial(struct nfp_cpp *cpp, const uint8_t **serial);
* @return NFP CPP handle, or NULL on failure.
*/
struct nfp_cpp_area *nfp_cpp_area_alloc(struct nfp_cpp *cpp, uint32_t cpp_id,
- unsigned long long address,
- unsigned long size);
+ unsigned long long address, unsigned long size);
/*
* Allocate a NFP CPP area handle, as an offset into a CPP ID, by a named owner
@@ -411,10 +411,8 @@ struct nfp_cpp_area *nfp_cpp_area_alloc(struct nfp_cpp *cpp, uint32_t cpp_id,
* @return NFP CPP handle, or NULL on failure.
*/
struct nfp_cpp_area *nfp_cpp_area_alloc_with_name(struct nfp_cpp *cpp,
- uint32_t cpp_id,
- const char *name,
- unsigned long long address,
- unsigned long size);
+ uint32_t cpp_id, const char *name, unsigned long long address,
+ unsigned long size);
/*
* Free an allocated NFP CPP area handle
@@ -448,9 +446,7 @@ void nfp_cpp_area_release(struct nfp_cpp_area *area);
* @return NFP CPP handle, or NULL on failure.
*/
struct nfp_cpp_area *nfp_cpp_area_alloc_acquire(struct nfp_cpp *cpp,
- uint32_t cpp_id,
- unsigned long long address,
- unsigned long size);
+ uint32_t cpp_id, unsigned long long address, unsigned long size);
/*
* Release the resources, then free the NFP CPP area handle
@@ -459,8 +455,7 @@ struct nfp_cpp_area *nfp_cpp_area_alloc_acquire(struct nfp_cpp *cpp,
void nfp_cpp_area_release_free(struct nfp_cpp_area *area);
uint8_t *nfp_cpp_map_area(struct nfp_cpp *cpp, uint32_t cpp_id,
- uint64_t addr, unsigned long size,
- struct nfp_cpp_area **area);
+ uint64_t addr, unsigned long size, struct nfp_cpp_area **area);
/*
* Return an IO pointer to the beginning of the NFP CPP area handle. The area
* must be acquired with 'nfp_cpp_area_acquire()' before calling this operation.
@@ -484,7 +479,7 @@ void *nfp_cpp_area_mapped(struct nfp_cpp_area *area);
*
*/
int nfp_cpp_area_read(struct nfp_cpp_area *area, unsigned long offset,
- void *buffer, size_t length);
+ void *buffer, size_t length);
/*
* Write to a NFP CPP area handle from a buffer. The area must be acquired with
@@ -498,7 +493,7 @@ int nfp_cpp_area_read(struct nfp_cpp_area *area, unsigned long offset,
* @return bytes written on success, negative value on failure.
*/
int nfp_cpp_area_write(struct nfp_cpp_area *area, unsigned long offset,
- const void *buffer, size_t length);
+ const void *buffer, size_t length);
/*
* nfp_cpp_area_iomem() - get IOMEM region for CPP area
@@ -522,7 +517,7 @@ void *nfp_cpp_area_iomem(struct nfp_cpp_area *area);
* @return 0 on success, negative value on failure.
*/
int nfp_cpp_area_check_range(struct nfp_cpp_area *area,
- unsigned long long offset, unsigned long size);
+ unsigned long long offset, unsigned long size);
/*
* Get the NFP CPP handle that is the parent of a NFP CPP area handle
@@ -552,7 +547,7 @@ const char *nfp_cpp_area_name(struct nfp_cpp_area *cpp_area);
* @return bytes read on success, -1 on failure.
*/
int nfp_cpp_read(struct nfp_cpp *cpp, uint32_t cpp_id,
- unsigned long long address, void *kernel_vaddr, size_t length);
+ unsigned long long address, void *kernel_vaddr, size_t length);
/*
* Write a block of data to a NFP CPP ID
@@ -566,8 +561,8 @@ int nfp_cpp_read(struct nfp_cpp *cpp, uint32_t cpp_id,
* @return bytes written on success, -1 on failure.
*/
int nfp_cpp_write(struct nfp_cpp *cpp, uint32_t cpp_id,
- unsigned long long address, const void *kernel_vaddr,
- size_t length);
+ unsigned long long address, const void *kernel_vaddr,
+ size_t length);
@@ -582,7 +577,7 @@ int nfp_cpp_write(struct nfp_cpp *cpp, uint32_t cpp_id,
* @return bytes written on success, negative value on failure.
*/
int nfp_cpp_area_fill(struct nfp_cpp_area *area, unsigned long offset,
- uint32_t value, size_t length);
+ uint32_t value, size_t length);
/*
* Read a single 32-bit value from a NFP CPP area handle
@@ -599,7 +594,7 @@ int nfp_cpp_area_fill(struct nfp_cpp_area *area, unsigned long offset,
* @return 0 on success, or -1 on error.
*/
int nfp_cpp_area_readl(struct nfp_cpp_area *area, unsigned long offset,
- uint32_t *value);
+ uint32_t *value);
/*
* Write a single 32-bit value to a NFP CPP area handle
@@ -616,7 +611,7 @@ int nfp_cpp_area_readl(struct nfp_cpp_area *area, unsigned long offset,
* @return 0 on success, or -1 on error.
*/
int nfp_cpp_area_writel(struct nfp_cpp_area *area, unsigned long offset,
- uint32_t value);
+ uint32_t value);
/*
* Read a single 64-bit value from a NFP CPP area handle
@@ -633,7 +628,7 @@ int nfp_cpp_area_writel(struct nfp_cpp_area *area, unsigned long offset,
* @return 0 on success, or -1 on error.
*/
int nfp_cpp_area_readq(struct nfp_cpp_area *area, unsigned long offset,
- uint64_t *value);
+ uint64_t *value);
/*
* Write a single 64-bit value to a NFP CPP area handle
@@ -650,7 +645,7 @@ int nfp_cpp_area_readq(struct nfp_cpp_area *area, unsigned long offset,
* @return 0 on success, or -1 on error.
*/
int nfp_cpp_area_writeq(struct nfp_cpp_area *area, unsigned long offset,
- uint64_t value);
+ uint64_t value);
/*
* Write a single 32-bit value on the XPB bus
@@ -685,7 +680,7 @@ int nfp_xpb_readl(struct nfp_cpp *cpp, uint32_t xpb_tgt, uint32_t *value);
* @return 0 on success, or -1 on failure.
*/
int nfp_xpb_writelm(struct nfp_cpp *cpp, uint32_t xpb_tgt, uint32_t mask,
- uint32_t value);
+ uint32_t value);
/*
* Modify bits of a 32-bit value from the XPB bus
@@ -699,7 +694,7 @@ int nfp_xpb_writelm(struct nfp_cpp *cpp, uint32_t xpb_tgt, uint32_t mask,
* @return >= 0 on success, negative value on failure.
*/
int nfp_xpb_waitlm(struct nfp_cpp *cpp, uint32_t xpb_tgt, uint32_t mask,
- uint32_t value, int timeout_us);
+ uint32_t value, int timeout_us);
/*
* Read a 32-bit word from a NFP CPP ID
@@ -712,7 +707,7 @@ int nfp_xpb_waitlm(struct nfp_cpp *cpp, uint32_t xpb_tgt, uint32_t mask,
* @return 0 on success, or -1 on failure.
*/
int nfp_cpp_readl(struct nfp_cpp *cpp, uint32_t cpp_id,
- unsigned long long address, uint32_t *value);
+ unsigned long long address, uint32_t *value);
/*
* Write a 32-bit value to a NFP CPP ID
@@ -726,7 +721,7 @@ int nfp_cpp_readl(struct nfp_cpp *cpp, uint32_t cpp_id,
*
*/
int nfp_cpp_writel(struct nfp_cpp *cpp, uint32_t cpp_id,
- unsigned long long address, uint32_t value);
+ unsigned long long address, uint32_t value);
/*
* Read a 64-bit work from a NFP CPP ID
@@ -739,7 +734,7 @@ int nfp_cpp_writel(struct nfp_cpp *cpp, uint32_t cpp_id,
* @return 0 on success, or -1 on failure.
*/
int nfp_cpp_readq(struct nfp_cpp *cpp, uint32_t cpp_id,
- unsigned long long address, uint64_t *value);
+ unsigned long long address, uint64_t *value);
/*
* Write a 64-bit value to a NFP CPP ID
@@ -752,7 +747,7 @@ int nfp_cpp_readq(struct nfp_cpp *cpp, uint32_t cpp_id,
* @return 0 on success, or -1 on failure.
*/
int nfp_cpp_writeq(struct nfp_cpp *cpp, uint32_t cpp_id,
- unsigned long long address, uint64_t value);
+ unsigned long long address, uint64_t value);
/*
* Initialize a mutex location
@@ -773,7 +768,7 @@ int nfp_cpp_writeq(struct nfp_cpp *cpp, uint32_t cpp_id,
* @return 0 on success, negative value on failure.
*/
int nfp_cpp_mutex_init(struct nfp_cpp *cpp, int target,
- unsigned long long address, uint32_t key_id);
+ unsigned long long address, uint32_t key_id);
/*
* Create a mutex handle from an address controlled by a MU Atomic engine
@@ -793,8 +788,7 @@ int nfp_cpp_mutex_init(struct nfp_cpp *cpp, int target,
* failure.
*/
struct nfp_cpp_mutex *nfp_cpp_mutex_alloc(struct nfp_cpp *cpp, int target,
- unsigned long long address,
- uint32_t key_id);
+ unsigned long long address, uint32_t key_id);
/*
* Get the NFP CPP handle the mutex was created with
diff --git a/drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c b/drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c
index 2ee60eefc3..884cc84eaa 100644
--- a/drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c
+++ b/drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c
@@ -130,9 +130,15 @@ nfp_bar_maptype(struct nfp_bar *bar)
#define TARGET_WIDTH_64 8
static int
-nfp_compute_bar(const struct nfp_bar *bar, uint32_t *bar_config,
- uint64_t *bar_base, int tgt, int act, int tok,
- uint64_t offset, size_t size, int width)
+nfp_compute_bar(const struct nfp_bar *bar,
+ uint32_t *bar_config,
+ uint64_t *bar_base,
+ int tgt,
+ int act,
+ int tok,
+ uint64_t offset,
+ size_t size,
+ int width)
{
uint32_t bitsize;
uint32_t newcfg;
@@ -143,19 +149,16 @@ nfp_compute_bar(const struct nfp_bar *bar, uint32_t *bar_config,
switch (width) {
case 8:
- newcfg =
- NFP_PCIE_BAR_PCIE2CPP_LENGTHSELECT
- (NFP_PCIE_BAR_PCIE2CPP_LENGTHSELECT_64BIT);
+ newcfg = NFP_PCIE_BAR_PCIE2CPP_LENGTHSELECT
+ (NFP_PCIE_BAR_PCIE2CPP_LENGTHSELECT_64BIT);
break;
case 4:
- newcfg =
- NFP_PCIE_BAR_PCIE2CPP_LENGTHSELECT
- (NFP_PCIE_BAR_PCIE2CPP_LENGTHSELECT_32BIT);
+ newcfg = NFP_PCIE_BAR_PCIE2CPP_LENGTHSELECT
+ (NFP_PCIE_BAR_PCIE2CPP_LENGTHSELECT_32BIT);
break;
case 0:
- newcfg =
- NFP_PCIE_BAR_PCIE2CPP_LENGTHSELECT
- (NFP_PCIE_BAR_PCIE2CPP_LENGTHSELECT_0BYTE);
+ newcfg = NFP_PCIE_BAR_PCIE2CPP_LENGTHSELECT
+ (NFP_PCIE_BAR_PCIE2CPP_LENGTHSELECT_0BYTE);
break;
default:
return -EINVAL;
@@ -165,60 +168,58 @@ nfp_compute_bar(const struct nfp_bar *bar, uint32_t *bar_config,
/* Fixed CPP mapping with specific action */
mask = ~(NFP_PCIE_P2C_FIXED_SIZE(bar) - 1);
- newcfg |=
- NFP_PCIE_BAR_PCIE2CPP_MAPTYPE
- (NFP_PCIE_BAR_PCIE2CPP_MAPTYPE_FIXED);
+ newcfg |= NFP_PCIE_BAR_PCIE2CPP_MAPTYPE
+ (NFP_PCIE_BAR_PCIE2CPP_MAPTYPE_FIXED);
newcfg |= NFP_PCIE_BAR_PCIE2CPP_TARGET_BASEADDRESS(tgt);
newcfg |= NFP_PCIE_BAR_PCIE2CPP_ACTION_BASEADDRESS(act);
newcfg |= NFP_PCIE_BAR_PCIE2CPP_TOKEN_BASEADDRESS(tok);
if ((offset & mask) != ((offset + size - 1) & mask)) {
PMD_DRV_LOG(ERR, "BAR%d: Won't use for Fixed mapping <%#llx,%#llx>, action=%d BAR too small (0x%llx)",
- bar->index, (unsigned long long)offset,
- (unsigned long long)(offset + size), act,
- (unsigned long long)mask);
+ bar->index, (unsigned long long)offset,
+ (unsigned long long)(offset + size), act,
+ (unsigned long long)mask);
return -EINVAL;
}
offset &= mask;
PMD_DRV_LOG(DEBUG, "BAR%d: Created Fixed mapping %d:%d:%d:0x%#llx-0x%#llx>",
- bar->index, tgt, act, tok, (unsigned long long)offset,
- (unsigned long long)(offset + mask));
+ bar->index, tgt, act, tok, (unsigned long long)offset,
+ (unsigned long long)(offset + mask));
bitsize = 40 - 16;
} else {
mask = ~(NFP_PCIE_P2C_BULK_SIZE(bar) - 1);
/* Bulk mapping */
- newcfg |=
- NFP_PCIE_BAR_PCIE2CPP_MAPTYPE
- (NFP_PCIE_BAR_PCIE2CPP_MAPTYPE_BULK);
+ newcfg |= NFP_PCIE_BAR_PCIE2CPP_MAPTYPE
+ (NFP_PCIE_BAR_PCIE2CPP_MAPTYPE_BULK);
newcfg |= NFP_PCIE_BAR_PCIE2CPP_TARGET_BASEADDRESS(tgt);
newcfg |= NFP_PCIE_BAR_PCIE2CPP_TOKEN_BASEADDRESS(tok);
if ((offset & mask) != ((offset + size - 1) & mask)) {
PMD_DRV_LOG(ERR, "BAR%d: Won't use for bulk mapping <%#llx,%#llx> target=%d, token=%d BAR too small (%#llx) - (%#llx != %#llx).",
- bar->index, (unsigned long long)offset,
- (unsigned long long)(offset + size),
- tgt, tok, (unsigned long long)mask,
- (unsigned long long)(offset & mask),
- (unsigned long long)(offset + size - 1) & mask);
+ bar->index, (unsigned long long)offset,
+ (unsigned long long)(offset + size),
+ tgt, tok, (unsigned long long)mask,
+ (unsigned long long)(offset & mask),
+ (unsigned long long)(offset + size - 1) & mask);
return -EINVAL;
}
offset &= mask;
PMD_DRV_LOG(DEBUG, "BAR%d: Created bulk mapping %d:x:%d:%#llx-%#llx",
- bar->index, tgt, tok, (unsigned long long)offset,
- (unsigned long long)(offset + ~mask));
+ bar->index, tgt, tok, (unsigned long long)offset,
+ (unsigned long long)(offset + ~mask));
bitsize = 40 - 21;
}
if (bar->bitsize < bitsize) {
PMD_DRV_LOG(ERR, "BAR%d: Too small for %d:%d:%d", bar->index,
- tgt, tok, act);
+ tgt, tok, act);
return -EINVAL;
}
@@ -234,8 +235,9 @@ nfp_compute_bar(const struct nfp_bar *bar, uint32_t *bar_config,
}
static int
-nfp_bar_write(struct nfp_pcie_user *nfp, struct nfp_bar *bar,
- uint32_t newcfg)
+nfp_bar_write(struct nfp_pcie_user *nfp,
+ struct nfp_bar *bar,
+ uint32_t newcfg)
{
int base, slot;
@@ -246,7 +248,7 @@ nfp_bar_write(struct nfp_pcie_user *nfp, struct nfp_bar *bar,
return (-ENOMEM);
bar->csr = nfp->cfg +
- NFP_PCIE_CFG_BAR_PCIETOCPPEXPBAR(nfp->dev_id, base, slot);
+ NFP_PCIE_CFG_BAR_PCIETOCPPEXPBAR(nfp->dev_id, base, slot);
*(uint32_t *)(bar->csr) = newcfg;
@@ -257,15 +259,21 @@ nfp_bar_write(struct nfp_pcie_user *nfp, struct nfp_bar *bar,
}
static int
-nfp_reconfigure_bar(struct nfp_pcie_user *nfp, struct nfp_bar *bar, int tgt,
- int act, int tok, uint64_t offset, size_t size, int width)
+nfp_reconfigure_bar(struct nfp_pcie_user *nfp,
+ struct nfp_bar *bar,
+ int tgt,
+ int act,
+ int tok,
+ uint64_t offset,
+ size_t size,
+ int width)
{
uint64_t newbase;
uint32_t newcfg;
int err;
err = nfp_compute_bar(bar, &newcfg, &newbase, tgt, act, tok, offset,
- size, width);
+ size, width);
if (err != 0)
return err;
@@ -390,8 +398,10 @@ struct nfp6000_area_priv {
};
static int
-nfp6000_area_init(struct nfp_cpp_area *area, uint32_t dest,
- unsigned long long address, unsigned long size)
+nfp6000_area_init(struct nfp_cpp_area *area,
+ uint32_t dest,
+ unsigned long long address,
+ unsigned long size)
{
struct nfp_pcie_user *nfp = nfp_cpp_priv(nfp_cpp_area_cpp(area));
struct nfp6000_area_priv *priv = nfp_cpp_area_priv(area);
@@ -400,8 +410,7 @@ nfp6000_area_init(struct nfp_cpp_area *area, uint32_t dest,
uint32_t token = NFP_CPP_ID_TOKEN_of(dest);
int pp, ret = 0;
- pp = nfp_target_pushpull(NFP_CPP_ID(target, action, token),
- address);
+ pp = nfp_target_pushpull(NFP_CPP_ID(target, action, token), address);
if (pp < 0)
return pp;
@@ -409,7 +418,8 @@ nfp6000_area_init(struct nfp_cpp_area *area, uint32_t dest,
priv->width.write = PULL_WIDTH(pp);
if (priv->width.read > 0 &&
- priv->width.write > 0 && priv->width.read != priv->width.write)
+ priv->width.write > 0 &&
+ priv->width.read != priv->width.write)
return -EINVAL;
if (priv->width.read > 0)
@@ -428,8 +438,8 @@ nfp6000_area_init(struct nfp_cpp_area *area, uint32_t dest,
priv->size = size;
ret = nfp_reconfigure_bar(nfp, priv->bar, priv->target, priv->action,
- priv->token, priv->offset, priv->size,
- priv->width.bar);
+ priv->token, priv->offset, priv->size,
+ priv->width.bar);
return ret;
}
@@ -441,14 +451,13 @@ nfp6000_area_acquire(struct nfp_cpp_area *area)
/* Calculate offset into BAR. */
if (nfp_bar_maptype(priv->bar) ==
- NFP_PCIE_BAR_PCIE2CPP_MAPTYPE_GENERAL) {
+ NFP_PCIE_BAR_PCIE2CPP_MAPTYPE_GENERAL) {
priv->bar_offset = priv->offset &
- (NFP_PCIE_P2C_GENERAL_SIZE(priv->bar) - 1);
- priv->bar_offset +=
- NFP_PCIE_P2C_GENERAL_TARGET_OFFSET(priv->bar,
- priv->target);
- priv->bar_offset +=
- NFP_PCIE_P2C_GENERAL_TOKEN_OFFSET(priv->bar, priv->token);
+ (NFP_PCIE_P2C_GENERAL_SIZE(priv->bar) - 1);
+ priv->bar_offset += NFP_PCIE_P2C_GENERAL_TARGET_OFFSET(priv->bar,
+ priv->target);
+ priv->bar_offset += NFP_PCIE_P2C_GENERAL_TOKEN_OFFSET(priv->bar,
+ priv->token);
} else {
priv->bar_offset = priv->offset & priv->bar->mask;
}
@@ -490,8 +499,10 @@ nfp6000_area_iomem(struct nfp_cpp_area *area)
}
static int
-nfp6000_area_read(struct nfp_cpp_area *area, void *kernel_vaddr,
- unsigned long offset, unsigned int length)
+nfp6000_area_read(struct nfp_cpp_area *area,
+ void *kernel_vaddr,
+ unsigned long offset,
+ unsigned int length)
{
uint64_t *wrptr64 = kernel_vaddr;
const volatile uint64_t *rdptr64;
@@ -524,17 +535,17 @@ nfp6000_area_read(struct nfp_cpp_area *area, void *kernel_vaddr,
/* MU reads via a PCIe2CPP BAR supports 32bit (and other) lengths */
if (priv->target == (NFP_CPP_TARGET_ID_MASK & NFP_CPP_TARGET_MU) &&
- priv->action == NFP_CPP_ACTION_RW) {
+ priv->action == NFP_CPP_ACTION_RW) {
is_64 = false;
}
if (is_64) {
if (offset % sizeof(uint64_t) != 0 ||
- length % sizeof(uint64_t) != 0)
+ length % sizeof(uint64_t) != 0)
return -EINVAL;
} else {
if (offset % sizeof(uint32_t) != 0 ||
- length % sizeof(uint32_t) != 0)
+ length % sizeof(uint32_t) != 0)
return -EINVAL;
}
@@ -558,8 +569,10 @@ nfp6000_area_read(struct nfp_cpp_area *area, void *kernel_vaddr,
}
static int
-nfp6000_area_write(struct nfp_cpp_area *area, const void *kernel_vaddr,
- unsigned long offset, unsigned int length)
+nfp6000_area_write(struct nfp_cpp_area *area,
+ const void *kernel_vaddr,
+ unsigned long offset,
+ unsigned int length)
{
const uint64_t *rdptr64 = kernel_vaddr;
uint64_t *wrptr64;
@@ -590,16 +603,16 @@ nfp6000_area_write(struct nfp_cpp_area *area, const void *kernel_vaddr,
/* MU writes via a PCIe2CPP BAR supports 32bit (and other) lengths */
if (priv->target == (NFP_CPP_TARGET_ID_MASK & NFP_CPP_TARGET_MU) &&
- priv->action == NFP_CPP_ACTION_RW)
+ priv->action == NFP_CPP_ACTION_RW)
is_64 = false;
if (is_64) {
if (offset % sizeof(uint64_t) != 0 ||
- length % sizeof(uint64_t) != 0)
+ length % sizeof(uint64_t) != 0)
return -EINVAL;
} else {
if (offset % sizeof(uint32_t) != 0 ||
- length % sizeof(uint32_t) != 0)
+ length % sizeof(uint32_t) != 0)
return -EINVAL;
}
@@ -655,7 +668,8 @@ nfp_acquire_process_lock(struct nfp_pcie_user *desc)
}
static int
-nfp6000_set_model(struct rte_pci_device *dev, struct nfp_cpp *cpp)
+nfp6000_set_model(struct rte_pci_device *dev,
+ struct nfp_cpp *cpp)
{
uint32_t model;
@@ -671,7 +685,8 @@ nfp6000_set_model(struct rte_pci_device *dev, struct nfp_cpp *cpp)
}
static int
-nfp6000_set_interface(struct rte_pci_device *dev, struct nfp_cpp *cpp)
+nfp6000_set_interface(struct rte_pci_device *dev,
+ struct nfp_cpp *cpp)
{
uint16_t interface;
@@ -686,7 +701,8 @@ nfp6000_set_interface(struct rte_pci_device *dev, struct nfp_cpp *cpp)
}
static int
-nfp6000_set_serial(struct rte_pci_device *dev, struct nfp_cpp *cpp)
+nfp6000_set_serial(struct rte_pci_device *dev,
+ struct nfp_cpp *cpp)
{
uint16_t tmp;
uint8_t serial[6];
@@ -733,7 +749,8 @@ nfp6000_set_serial(struct rte_pci_device *dev, struct nfp_cpp *cpp)
}
static int
-nfp6000_set_barsz(struct rte_pci_device *dev, struct nfp_pcie_user *desc)
+nfp6000_set_barsz(struct rte_pci_device *dev,
+ struct nfp_pcie_user *desc)
{
unsigned long tmp;
int i = 0;
@@ -748,7 +765,8 @@ nfp6000_set_barsz(struct rte_pci_device *dev, struct nfp_pcie_user *desc)
}
static int
-nfp6000_init(struct nfp_cpp *cpp, struct rte_pci_device *dev)
+nfp6000_init(struct nfp_cpp *cpp,
+ struct rte_pci_device *dev)
{
int ret = 0;
struct nfp_pcie_user *desc;
@@ -762,7 +780,7 @@ nfp6000_init(struct nfp_cpp *cpp, struct rte_pci_device *dev)
strlcpy(desc->busdev, dev->device.name, sizeof(desc->busdev));
if (rte_eal_process_type() == RTE_PROC_PRIMARY &&
- cpp->driver_lock_needed) {
+ cpp->driver_lock_needed) {
ret = nfp_acquire_process_lock(desc);
if (ret != 0)
goto error;
diff --git a/drivers/net/nfp/nfpcore/nfp_cppcore.c b/drivers/net/nfp/nfpcore/nfp_cppcore.c
index 2c6ec3e126..25f7700b08 100644
--- a/drivers/net/nfp/nfpcore/nfp_cppcore.c
+++ b/drivers/net/nfp/nfpcore/nfp_cppcore.c
@@ -27,7 +27,8 @@
NFP_PL_DEVICE_ID_MASK)
void
-nfp_cpp_priv_set(struct nfp_cpp *cpp, void *priv)
+nfp_cpp_priv_set(struct nfp_cpp *cpp,
+ void *priv)
{
cpp->priv = priv;
}
@@ -39,7 +40,8 @@ nfp_cpp_priv(struct nfp_cpp *cpp)
}
void
-nfp_cpp_model_set(struct nfp_cpp *cpp, uint32_t model)
+nfp_cpp_model_set(struct nfp_cpp *cpp,
+ uint32_t model)
{
cpp->model = model;
}
@@ -62,21 +64,24 @@ nfp_cpp_model(struct nfp_cpp *cpp)
}
void
-nfp_cpp_interface_set(struct nfp_cpp *cpp, uint32_t interface)
+nfp_cpp_interface_set(struct nfp_cpp *cpp,
+ uint32_t interface)
{
cpp->interface = interface;
}
int
-nfp_cpp_serial(struct nfp_cpp *cpp, const uint8_t **serial)
+nfp_cpp_serial(struct nfp_cpp *cpp,
+ const uint8_t **serial)
{
*serial = cpp->serial;
return cpp->serial_len;
}
int
-nfp_cpp_serial_set(struct nfp_cpp *cpp, const uint8_t *serial,
- size_t serial_len)
+nfp_cpp_serial_set(struct nfp_cpp *cpp,
+ const uint8_t *serial,
+ size_t serial_len)
{
if (cpp->serial_len)
free(cpp->serial);
@@ -161,9 +166,11 @@ nfp_cpp_mu_locality_lsb(struct nfp_cpp *cpp)
* NOTE: @address and @size must be 32-bit aligned values.
*/
struct nfp_cpp_area *
-nfp_cpp_area_alloc_with_name(struct nfp_cpp *cpp, uint32_t dest,
- const char *name, unsigned long long address,
- unsigned long size)
+nfp_cpp_area_alloc_with_name(struct nfp_cpp *cpp,
+ uint32_t dest,
+ const char *name,
+ unsigned long long address,
+ unsigned long size)
{
struct nfp_cpp_area *area;
uint64_t tmp64 = (uint64_t)address;
@@ -183,7 +190,7 @@ nfp_cpp_area_alloc_with_name(struct nfp_cpp *cpp, uint32_t dest,
name = "";
area = calloc(1, sizeof(*area) + cpp->op->area_priv_size +
- strlen(name) + 1);
+ strlen(name) + 1);
if (area == NULL)
return NULL;
@@ -204,8 +211,10 @@ nfp_cpp_area_alloc_with_name(struct nfp_cpp *cpp, uint32_t dest,
}
struct nfp_cpp_area *
-nfp_cpp_area_alloc(struct nfp_cpp *cpp, uint32_t dest,
- unsigned long long address, unsigned long size)
+nfp_cpp_area_alloc(struct nfp_cpp *cpp,
+ uint32_t dest,
+ unsigned long long address,
+ unsigned long size)
{
return nfp_cpp_area_alloc_with_name(cpp, dest, NULL, address, size);
}
@@ -226,8 +235,10 @@ nfp_cpp_area_alloc(struct nfp_cpp *cpp, uint32_t dest,
* NOTE: The area must also be 'released' when the structure is freed.
*/
struct nfp_cpp_area *
-nfp_cpp_area_alloc_acquire(struct nfp_cpp *cpp, uint32_t destination,
- unsigned long long address, unsigned long size)
+nfp_cpp_area_alloc_acquire(struct nfp_cpp *cpp,
+ uint32_t destination,
+ unsigned long long address,
+ unsigned long size)
{
struct nfp_cpp_area *area;
@@ -340,8 +351,10 @@ nfp_cpp_area_iomem(struct nfp_cpp_area *area)
* NOTE: Area must have been locked down with an 'acquire'.
*/
int
-nfp_cpp_area_read(struct nfp_cpp_area *area, unsigned long offset,
- void *kernel_vaddr, size_t length)
+nfp_cpp_area_read(struct nfp_cpp_area *area,
+ unsigned long offset,
+ void *kernel_vaddr,
+ size_t length)
{
if ((offset + length) > area->size)
return -EFAULT;
@@ -364,8 +377,10 @@ nfp_cpp_area_read(struct nfp_cpp_area *area, unsigned long offset,
* NOTE: Area must have been locked down with an 'acquire'.
*/
int
-nfp_cpp_area_write(struct nfp_cpp_area *area, unsigned long offset,
- const void *kernel_vaddr, size_t length)
+nfp_cpp_area_write(struct nfp_cpp_area *area,
+ unsigned long offset,
+ const void *kernel_vaddr,
+ size_t length)
{
if ((offset + length) > area->size)
return -EFAULT;
@@ -392,8 +407,9 @@ nfp_cpp_area_mapped(struct nfp_cpp_area *area)
* or negative value on error.
*/
int
-nfp_cpp_area_check_range(struct nfp_cpp_area *area, unsigned long long offset,
- unsigned long length)
+nfp_cpp_area_check_range(struct nfp_cpp_area *area,
+ unsigned long long offset,
+ unsigned long length)
{
if (((offset + length) > area->size))
return -EFAULT;
@@ -406,7 +422,8 @@ nfp_cpp_area_check_range(struct nfp_cpp_area *area, unsigned long long offset,
* based upon NFP model.
*/
static uint32_t
-nfp_xpb_to_cpp(struct nfp_cpp *cpp, uint32_t *xpb_addr)
+nfp_xpb_to_cpp(struct nfp_cpp *cpp,
+ uint32_t *xpb_addr)
{
uint32_t xpb;
int island;
@@ -433,7 +450,7 @@ nfp_xpb_to_cpp(struct nfp_cpp *cpp, uint32_t *xpb_addr)
else
/* And only non-ARM interfaces use island id = 1 */
if (NFP_CPP_INTERFACE_TYPE_of(nfp_cpp_interface(cpp)) !=
- NFP_CPP_INTERFACE_TYPE_ARM)
+ NFP_CPP_INTERFACE_TYPE_ARM)
*xpb_addr |= (1 << 24);
} else {
(*xpb_addr) |= (1 << 30);
@@ -443,8 +460,9 @@ nfp_xpb_to_cpp(struct nfp_cpp *cpp, uint32_t *xpb_addr)
}
int
-nfp_cpp_area_readl(struct nfp_cpp_area *area, unsigned long offset,
- uint32_t *value)
+nfp_cpp_area_readl(struct nfp_cpp_area *area,
+ unsigned long offset,
+ uint32_t *value)
{
int sz;
uint32_t tmp = 0;
@@ -456,8 +474,9 @@ nfp_cpp_area_readl(struct nfp_cpp_area *area, unsigned long offset,
}
int
-nfp_cpp_area_writel(struct nfp_cpp_area *area, unsigned long offset,
- uint32_t value)
+nfp_cpp_area_writel(struct nfp_cpp_area *area,
+ unsigned long offset,
+ uint32_t value)
{
int sz;
@@ -467,8 +486,9 @@ nfp_cpp_area_writel(struct nfp_cpp_area *area, unsigned long offset,
}
int
-nfp_cpp_area_readq(struct nfp_cpp_area *area, unsigned long offset,
- uint64_t *value)
+nfp_cpp_area_readq(struct nfp_cpp_area *area,
+ unsigned long offset,
+ uint64_t *value)
{
int sz;
uint64_t tmp = 0;
@@ -480,8 +500,9 @@ nfp_cpp_area_readq(struct nfp_cpp_area *area, unsigned long offset,
}
int
-nfp_cpp_area_writeq(struct nfp_cpp_area *area, unsigned long offset,
- uint64_t value)
+nfp_cpp_area_writeq(struct nfp_cpp_area *area,
+ unsigned long offset,
+ uint64_t value)
{
int sz;
@@ -492,8 +513,10 @@ nfp_cpp_area_writeq(struct nfp_cpp_area *area, unsigned long offset,
}
int
-nfp_cpp_readl(struct nfp_cpp *cpp, uint32_t cpp_id, unsigned long long address,
- uint32_t *value)
+nfp_cpp_readl(struct nfp_cpp *cpp,
+ uint32_t cpp_id,
+ unsigned long long address,
+ uint32_t *value)
{
int sz;
uint32_t tmp;
@@ -505,8 +528,10 @@ nfp_cpp_readl(struct nfp_cpp *cpp, uint32_t cpp_id, unsigned long long address,
}
int
-nfp_cpp_writel(struct nfp_cpp *cpp, uint32_t cpp_id, unsigned long long address,
- uint32_t value)
+nfp_cpp_writel(struct nfp_cpp *cpp,
+ uint32_t cpp_id,
+ unsigned long long address,
+ uint32_t value)
{
int sz;
@@ -517,8 +542,10 @@ nfp_cpp_writel(struct nfp_cpp *cpp, uint32_t cpp_id, unsigned long long address,
}
int
-nfp_cpp_readq(struct nfp_cpp *cpp, uint32_t cpp_id, unsigned long long address,
- uint64_t *value)
+nfp_cpp_readq(struct nfp_cpp *cpp,
+ uint32_t cpp_id,
+ unsigned long long address,
+ uint64_t *value)
{
int sz;
uint64_t tmp;
@@ -530,8 +557,10 @@ nfp_cpp_readq(struct nfp_cpp *cpp, uint32_t cpp_id, unsigned long long address,
}
int
-nfp_cpp_writeq(struct nfp_cpp *cpp, uint32_t cpp_id, unsigned long long address,
- uint64_t value)
+nfp_cpp_writeq(struct nfp_cpp *cpp,
+ uint32_t cpp_id,
+ unsigned long long address,
+ uint64_t value)
{
int sz;
@@ -542,7 +571,9 @@ nfp_cpp_writeq(struct nfp_cpp *cpp, uint32_t cpp_id, unsigned long long address,
}
int
-nfp_xpb_writel(struct nfp_cpp *cpp, uint32_t xpb_addr, uint32_t value)
+nfp_xpb_writel(struct nfp_cpp *cpp,
+ uint32_t xpb_addr,
+ uint32_t value)
{
uint32_t cpp_dest;
@@ -552,7 +583,9 @@ nfp_xpb_writel(struct nfp_cpp *cpp, uint32_t xpb_addr, uint32_t value)
}
int
-nfp_xpb_readl(struct nfp_cpp *cpp, uint32_t xpb_addr, uint32_t *value)
+nfp_xpb_readl(struct nfp_cpp *cpp,
+ uint32_t xpb_addr,
+ uint32_t *value)
{
uint32_t cpp_dest;
@@ -562,7 +595,8 @@ nfp_xpb_readl(struct nfp_cpp *cpp, uint32_t xpb_addr, uint32_t *value)
}
static struct nfp_cpp *
-nfp_cpp_alloc(struct rte_pci_device *dev, int driver_lock_needed)
+nfp_cpp_alloc(struct rte_pci_device *dev,
+ int driver_lock_needed)
{
const struct nfp_cpp_operations *ops;
struct nfp_cpp *cpp;
@@ -596,7 +630,7 @@ nfp_cpp_alloc(struct rte_pci_device *dev, int driver_lock_needed)
/* Hardcoded XPB IMB Base, island 0 */
xpbaddr = 0x000a0000 + (tgt * 4);
err = nfp_xpb_readl(cpp, xpbaddr,
- (uint32_t *)&cpp->imb_cat_table[tgt]);
+ (uint32_t *)&cpp->imb_cat_table[tgt]);
if (err < 0) {
free(cpp);
return NULL;
@@ -631,7 +665,8 @@ nfp_cpp_free(struct nfp_cpp *cpp)
}
struct nfp_cpp *
-nfp_cpp_from_device_name(struct rte_pci_device *dev, int driver_lock_needed)
+nfp_cpp_from_device_name(struct rte_pci_device *dev,
+ int driver_lock_needed)
{
return nfp_cpp_alloc(dev, driver_lock_needed);
}
@@ -647,7 +682,9 @@ nfp_cpp_from_device_name(struct rte_pci_device *dev, int driver_lock_needed)
* @return 0 on success, or -1 on failure.
*/
int
-nfp_xpb_writelm(struct nfp_cpp *cpp, uint32_t xpb_tgt, uint32_t mask,
+nfp_xpb_writelm(struct nfp_cpp *cpp,
+ uint32_t xpb_tgt,
+ uint32_t mask,
uint32_t value)
{
int err;
@@ -674,8 +711,11 @@ nfp_xpb_writelm(struct nfp_cpp *cpp, uint32_t xpb_tgt, uint32_t mask,
* @return >= 0 on success, or negative value on failure.
*/
int
-nfp_xpb_waitlm(struct nfp_cpp *cpp, uint32_t xpb_tgt, uint32_t mask,
- uint32_t value, int timeout_us)
+nfp_xpb_waitlm(struct nfp_cpp *cpp,
+ uint32_t xpb_tgt,
+ uint32_t mask,
+ uint32_t value,
+ int timeout_us)
{
uint32_t tmp;
int err;
@@ -716,8 +756,11 @@ nfp_xpb_waitlm(struct nfp_cpp *cpp, uint32_t xpb_tgt, uint32_t mask,
* @length: number of bytes to read
*/
int
-nfp_cpp_read(struct nfp_cpp *cpp, uint32_t destination,
- unsigned long long address, void *kernel_vaddr, size_t length)
+nfp_cpp_read(struct nfp_cpp *cpp,
+ uint32_t destination,
+ unsigned long long address,
+ void *kernel_vaddr,
+ size_t length)
{
struct nfp_cpp_area *area;
int err;
@@ -743,9 +786,11 @@ nfp_cpp_read(struct nfp_cpp *cpp, uint32_t destination,
* @length: number of bytes to write
*/
int
-nfp_cpp_write(struct nfp_cpp *cpp, uint32_t destination,
- unsigned long long address, const void *kernel_vaddr,
- size_t length)
+nfp_cpp_write(struct nfp_cpp *cpp,
+ uint32_t destination,
+ unsigned long long address,
+ const void *kernel_vaddr,
+ size_t length)
{
struct nfp_cpp_area *area;
int err;
@@ -768,8 +813,10 @@ nfp_cpp_write(struct nfp_cpp *cpp, uint32_t destination,
* @length: length of area to fill
*/
int
-nfp_cpp_area_fill(struct nfp_cpp_area *area, unsigned long offset,
- uint32_t value, size_t length)
+nfp_cpp_area_fill(struct nfp_cpp_area *area,
+ unsigned long offset,
+ uint32_t value,
+ size_t length)
{
int err;
size_t i;
@@ -795,9 +842,8 @@ nfp_cpp_area_fill(struct nfp_cpp_area *area, unsigned long offset,
}
for (i = 0; (i + sizeof(value)) < length; i += sizeof(value64)) {
- err =
- nfp_cpp_area_write(area, offset + i, &value64,
- sizeof(value64));
+ err = nfp_cpp_area_write(area, offset + i, &value64,
+ sizeof(value64));
if (err < 0)
return err;
if (err != sizeof(value64))
@@ -805,8 +851,7 @@ nfp_cpp_area_fill(struct nfp_cpp_area *area, unsigned long offset,
}
if ((i + sizeof(value)) <= length) {
- err =
- nfp_cpp_area_write(area, offset + i, &value, sizeof(value));
+ err = nfp_cpp_area_write(area, offset + i, &value, sizeof(value));
if (err < 0)
return err;
if (err != sizeof(value))
@@ -822,13 +867,14 @@ nfp_cpp_area_fill(struct nfp_cpp_area *area, unsigned long offset,
* as those are model-specific
*/
uint32_t
-__nfp_cpp_model_autodetect(struct nfp_cpp *cpp, uint32_t *model)
+__nfp_cpp_model_autodetect(struct nfp_cpp *cpp,
+ uint32_t *model)
{
uint32_t reg;
int err;
err = nfp_xpb_readl(cpp, NFP_XPB_DEVICE(1, 1, 16) + NFP_PL_DEVICE_ID,
- &reg);
+ &reg);
if (err < 0)
return err;
@@ -853,8 +899,11 @@ __nfp_cpp_model_autodetect(struct nfp_cpp *cpp, uint32_t *model)
* Return: Pointer to memory mapped area or NULL
*/
uint8_t *
-nfp_cpp_map_area(struct nfp_cpp *cpp, uint32_t cpp_id, uint64_t addr,
- unsigned long size, struct nfp_cpp_area **area)
+nfp_cpp_map_area(struct nfp_cpp *cpp,
+ uint32_t cpp_id,
+ uint64_t addr,
+ unsigned long size,
+ struct nfp_cpp_area **area)
{
uint8_t *res;
diff --git a/drivers/net/nfp/nfpcore/nfp_hwinfo.c b/drivers/net/nfp/nfpcore/nfp_hwinfo.c
index a9d166c4dc..ea4c7d6a9e 100644
--- a/drivers/net/nfp/nfpcore/nfp_hwinfo.c
+++ b/drivers/net/nfp/nfpcore/nfp_hwinfo.c
@@ -33,12 +33,13 @@ nfp_hwinfo_is_updating(struct nfp_hwinfo *hwinfo)
}
static int
-nfp_hwinfo_db_walk(struct nfp_hwinfo *hwinfo, uint32_t size)
+nfp_hwinfo_db_walk(struct nfp_hwinfo *hwinfo,
+ uint32_t size)
{
const char *key, *val, *end = hwinfo->data + size;
for (key = hwinfo->data; *key != 0 && key < end;
- key = val + strlen(val) + 1) {
+ key = val + strlen(val) + 1) {
val = key + strlen(key) + 1;
if (val >= end) {
PMD_DRV_LOG(ERR, "Bad HWINFO - overflowing value");
@@ -54,7 +55,8 @@ nfp_hwinfo_db_walk(struct nfp_hwinfo *hwinfo, uint32_t size)
}
static int
-nfp_hwinfo_db_validate(struct nfp_hwinfo *db, uint32_t len)
+nfp_hwinfo_db_validate(struct nfp_hwinfo *db,
+ uint32_t len)
{
uint32_t size, new_crc, *crc;
@@ -69,7 +71,7 @@ nfp_hwinfo_db_validate(struct nfp_hwinfo *db, uint32_t len)
crc = (uint32_t *)(db->start + size);
if (new_crc != *crc) {
PMD_DRV_LOG(ERR, "Corrupt hwinfo table (CRC mismatch) calculated 0x%x, expected 0x%x",
- new_crc, *crc);
+ new_crc, *crc);
return -EINVAL;
}
@@ -77,7 +79,8 @@ nfp_hwinfo_db_validate(struct nfp_hwinfo *db, uint32_t len)
}
static struct nfp_hwinfo *
-nfp_hwinfo_try_fetch(struct nfp_cpp *cpp, size_t *cpp_size)
+nfp_hwinfo_try_fetch(struct nfp_cpp *cpp,
+ size_t *cpp_size)
{
struct nfp_hwinfo *header;
void *res;
@@ -115,7 +118,7 @@ nfp_hwinfo_try_fetch(struct nfp_cpp *cpp, size_t *cpp_size)
if (header->version != NFP_HWINFO_VERSION_2) {
PMD_DRV_LOG(DEBUG, "Unknown HWInfo version: 0x%08x",
- header->version);
+ header->version);
goto exit_free;
}
@@ -129,7 +132,8 @@ nfp_hwinfo_try_fetch(struct nfp_cpp *cpp, size_t *cpp_size)
}
static struct nfp_hwinfo *
-nfp_hwinfo_fetch(struct nfp_cpp *cpp, size_t *hwdb_size)
+nfp_hwinfo_fetch(struct nfp_cpp *cpp,
+ size_t *hwdb_size)
{
struct timespec wait;
struct nfp_hwinfo *db;
@@ -179,7 +183,8 @@ nfp_hwinfo_read(struct nfp_cpp *cpp)
* Return: Value of the HWInfo name, or NULL
*/
const char *
-nfp_hwinfo_lookup(struct nfp_hwinfo *hwinfo, const char *lookup)
+nfp_hwinfo_lookup(struct nfp_hwinfo *hwinfo,
+ const char *lookup)
{
const char *key, *val, *end;
@@ -189,7 +194,7 @@ nfp_hwinfo_lookup(struct nfp_hwinfo *hwinfo, const char *lookup)
end = hwinfo->data + hwinfo->size - sizeof(uint32_t);
for (key = hwinfo->data; *key != 0 && key < end;
- key = val + strlen(val) + 1) {
+ key = val + strlen(val) + 1) {
val = key + strlen(key) + 1;
if (strcmp(key, lookup) == 0)
diff --git a/drivers/net/nfp/nfpcore/nfp_mip.c b/drivers/net/nfp/nfpcore/nfp_mip.c
index f9723dd136..0071d3fc37 100644
--- a/drivers/net/nfp/nfpcore/nfp_mip.c
+++ b/drivers/net/nfp/nfpcore/nfp_mip.c
@@ -37,8 +37,10 @@ struct nfp_mip {
/* Read memory and check if it could be a valid MIP */
static int
-nfp_mip_try_read(struct nfp_cpp *cpp, uint32_t cpp_id, uint64_t addr,
- struct nfp_mip *mip)
+nfp_mip_try_read(struct nfp_cpp *cpp,
+ uint32_t cpp_id,
+ uint64_t addr,
+ struct nfp_mip *mip)
{
int ret;
@@ -49,12 +51,12 @@ nfp_mip_try_read(struct nfp_cpp *cpp, uint32_t cpp_id, uint64_t addr,
}
if (mip->signature != NFP_MIP_SIGNATURE) {
PMD_DRV_LOG(ERR, "Incorrect MIP signature (0x%08x)",
- rte_le_to_cpu_32(mip->signature));
+ rte_le_to_cpu_32(mip->signature));
return -EINVAL;
}
if (mip->mip_version != NFP_MIP_VERSION) {
PMD_DRV_LOG(ERR, "Unsupported MIP version (%d)",
- rte_le_to_cpu_32(mip->mip_version));
+ rte_le_to_cpu_32(mip->mip_version));
return -EINVAL;
}
@@ -63,7 +65,8 @@ nfp_mip_try_read(struct nfp_cpp *cpp, uint32_t cpp_id, uint64_t addr,
/* Try to locate MIP using the resource table */
static int
-nfp_mip_read_resource(struct nfp_cpp *cpp, struct nfp_mip *mip)
+nfp_mip_read_resource(struct nfp_cpp *cpp,
+ struct nfp_mip *mip)
{
struct nfp_nffw_info *nffw_info;
uint32_t cpp_id;
@@ -134,7 +137,9 @@ nfp_mip_name(const struct nfp_mip *mip)
* @size: Location for size of MIP symbol table
*/
void
-nfp_mip_symtab(const struct nfp_mip *mip, uint32_t *addr, uint32_t *size)
+nfp_mip_symtab(const struct nfp_mip *mip,
+ uint32_t *addr,
+ uint32_t *size)
{
*addr = rte_le_to_cpu_32(mip->symtab_addr);
*size = rte_le_to_cpu_32(mip->symtab_size);
@@ -147,7 +152,9 @@ nfp_mip_symtab(const struct nfp_mip *mip, uint32_t *addr, uint32_t *size)
* @size: Location for size of MIP symbol name table
*/
void
-nfp_mip_strtab(const struct nfp_mip *mip, uint32_t *addr, uint32_t *size)
+nfp_mip_strtab(const struct nfp_mip *mip,
+ uint32_t *addr,
+ uint32_t *size)
{
*addr = rte_le_to_cpu_32(mip->strtab_addr);
*size = rte_le_to_cpu_32(mip->strtab_size);
diff --git a/drivers/net/nfp/nfpcore/nfp_mip.h b/drivers/net/nfp/nfpcore/nfp_mip.h
index d0919b58fe..980abc2517 100644
--- a/drivers/net/nfp/nfpcore/nfp_mip.h
+++ b/drivers/net/nfp/nfpcore/nfp_mip.h
@@ -17,5 +17,5 @@ const char *nfp_mip_name(const struct nfp_mip *mip);
void nfp_mip_symtab(const struct nfp_mip *mip, uint32_t *addr, uint32_t *size);
void nfp_mip_strtab(const struct nfp_mip *mip, uint32_t *addr, uint32_t *size);
int nfp_nffw_info_mip_first(struct nfp_nffw_info *state, uint32_t *cpp_id,
- uint64_t *off);
+ uint64_t *off);
#endif
diff --git a/drivers/net/nfp/nfpcore/nfp_mutex.c b/drivers/net/nfp/nfpcore/nfp_mutex.c
index 0410a00856..047e755416 100644
--- a/drivers/net/nfp/nfpcore/nfp_mutex.c
+++ b/drivers/net/nfp/nfpcore/nfp_mutex.c
@@ -35,7 +35,9 @@ struct nfp_cpp_mutex {
};
static int
-_nfp_cpp_mutex_validate(uint32_t model, int *target, unsigned long long address)
+_nfp_cpp_mutex_validate(uint32_t model,
+ int *target,
+ unsigned long long address)
{
/* Address must be 64-bit aligned */
if ((address & 7) != 0)
@@ -72,8 +74,10 @@ _nfp_cpp_mutex_validate(uint32_t model, int *target, unsigned long long address)
* @return 0 on success, or negative value on failure.
*/
int
-nfp_cpp_mutex_init(struct nfp_cpp *cpp, int target, unsigned long long address,
- uint32_t key)
+nfp_cpp_mutex_init(struct nfp_cpp *cpp,
+ int target,
+ unsigned long long address,
+ uint32_t key)
{
uint32_t model = nfp_cpp_model(cpp);
uint32_t muw = NFP_CPP_ID(target, 4, 0); /* atomic_write */
@@ -87,9 +91,8 @@ nfp_cpp_mutex_init(struct nfp_cpp *cpp, int target, unsigned long long address,
if (err < 0)
return err;
- err =
- nfp_cpp_writel(cpp, muw, address + 0,
- MUTEX_LOCKED(nfp_cpp_interface(cpp)));
+ err = nfp_cpp_writel(cpp, muw, address + 0,
+ MUTEX_LOCKED(nfp_cpp_interface(cpp)));
if (err < 0)
return err;
@@ -114,8 +117,10 @@ nfp_cpp_mutex_init(struct nfp_cpp *cpp, int target, unsigned long long address,
* @return A non-NULL struct nfp_cpp_mutex * on success, NULL on failure.
*/
struct nfp_cpp_mutex *
-nfp_cpp_mutex_alloc(struct nfp_cpp *cpp, int target,
- unsigned long long address, uint32_t key)
+nfp_cpp_mutex_alloc(struct nfp_cpp *cpp,
+ int target,
+ unsigned long long address,
+ uint32_t key)
{
uint32_t model = nfp_cpp_model(cpp);
struct nfp_cpp_mutex *mutex;
@@ -265,8 +270,8 @@ nfp_cpp_mutex_lock(struct nfp_cpp_mutex *mutex)
return err;
if (time(NULL) >= warn_at) {
PMD_DRV_LOG(ERR, "Warning: waiting for NFP mutex usage:%u depth:%hd] target:%d addr:%llx key:%08x]",
- mutex->usage, mutex->depth, mutex->target,
- mutex->address, mutex->key);
+ mutex->usage, mutex->depth, mutex->target,
+ mutex->address, mutex->key);
warn_at = time(NULL) + 60;
}
sched_yield();
diff --git a/drivers/net/nfp/nfpcore/nfp_nffw.c b/drivers/net/nfp/nfpcore/nfp_nffw.c
index 433780a5e7..8bdc69766e 100644
--- a/drivers/net/nfp/nfpcore/nfp_nffw.c
+++ b/drivers/net/nfp/nfpcore/nfp_nffw.c
@@ -138,8 +138,8 @@ nfp_nffw_info_open(struct nfp_cpp *cpp)
goto err_release;
err = nfp_cpp_read(cpp, nfp_resource_cpp_id(state->res),
- nfp_resource_address(state->res),
- fwinf, sizeof(*fwinf));
+ nfp_resource_address(state->res),
+ fwinf, sizeof(*fwinf));
if (err < (int)sizeof(*fwinf))
goto err_release;
@@ -205,8 +205,9 @@ nfp_nffw_info_fwid_first(struct nfp_nffw_info *state)
* Return: 0, or -ERRNO
*/
int
-nfp_nffw_info_mip_first(struct nfp_nffw_info *state, uint32_t *cpp_id,
- uint64_t *off)
+nfp_nffw_info_mip_first(struct nfp_nffw_info *state,
+ uint32_t *cpp_id,
+ uint64_t *off)
{
struct nffw_fwinfo *fwinfo;
diff --git a/drivers/net/nfp/nfpcore/nfp_nsp.c b/drivers/net/nfp/nfpcore/nfp_nsp.c
index 6474abf0c2..4f476f6f2b 100644
--- a/drivers/net/nfp/nfpcore/nfp_nsp.c
+++ b/drivers/net/nfp/nfpcore/nfp_nsp.c
@@ -22,7 +22,8 @@ nfp_nsp_config_modified(struct nfp_nsp *state)
}
void
-nfp_nsp_config_set_modified(struct nfp_nsp *state, int modified)
+nfp_nsp_config_set_modified(struct nfp_nsp *state,
+ int modified)
{
state->modified = modified;
}
@@ -40,7 +41,9 @@ nfp_nsp_config_idx(struct nfp_nsp *state)
}
void
-nfp_nsp_config_set_state(struct nfp_nsp *state, void *entries, unsigned int idx)
+nfp_nsp_config_set_state(struct nfp_nsp *state,
+ void *entries,
+ unsigned int idx)
{
state->entries = entries;
state->idx = idx;
@@ -91,7 +94,7 @@ nfp_nsp_check(struct nfp_nsp *state)
if (state->ver.major != NSP_MAJOR || state->ver.minor < NSP_MINOR) {
PMD_DRV_LOG(ERR, "Unsupported ABI %hu.%hu", state->ver.major,
- state->ver.minor);
+ state->ver.minor);
return -EINVAL;
}
@@ -160,8 +163,12 @@ nfp_nsp_get_abi_ver_minor(struct nfp_nsp *state)
}
static int
-nfp_nsp_wait_reg(struct nfp_cpp *cpp, uint64_t *reg, uint32_t nsp_cpp,
- uint64_t addr, uint64_t mask, uint64_t val)
+nfp_nsp_wait_reg(struct nfp_cpp *cpp,
+ uint64_t *reg,
+ uint32_t nsp_cpp,
+ uint64_t addr,
+ uint64_t mask,
+ uint64_t val)
{
struct timespec wait;
int count;
@@ -204,8 +211,11 @@ nfp_nsp_wait_reg(struct nfp_cpp *cpp, uint64_t *reg, uint32_t nsp_cpp,
* -ETIMEDOUT if the NSP took longer than 30 seconds to complete
*/
static int
-nfp_nsp_command(struct nfp_nsp *state, uint16_t code, uint32_t option,
- uint32_t buff_cpp, uint64_t buff_addr)
+nfp_nsp_command(struct nfp_nsp *state,
+ uint16_t code,
+ uint32_t option,
+ uint32_t buff_cpp,
+ uint64_t buff_addr)
{
uint64_t reg, ret_val, nsp_base, nsp_buffer, nsp_status, nsp_command;
struct nfp_cpp *cpp = state->cpp;
@@ -223,40 +233,40 @@ nfp_nsp_command(struct nfp_nsp *state, uint16_t code, uint32_t option,
return err;
if (!FIELD_FIT(NSP_BUFFER_CPP, buff_cpp >> 8) ||
- !FIELD_FIT(NSP_BUFFER_ADDRESS, buff_addr)) {
+ !FIELD_FIT(NSP_BUFFER_ADDRESS, buff_addr)) {
PMD_DRV_LOG(ERR, "Host buffer out of reach %08x %" PRIx64,
- buff_cpp, buff_addr);
+ buff_cpp, buff_addr);
return -EINVAL;
}
err = nfp_cpp_writeq(cpp, nsp_cpp, nsp_buffer,
- FIELD_PREP(NSP_BUFFER_CPP, buff_cpp >> 8) |
- FIELD_PREP(NSP_BUFFER_ADDRESS, buff_addr));
+ FIELD_PREP(NSP_BUFFER_CPP, buff_cpp >> 8) |
+ FIELD_PREP(NSP_BUFFER_ADDRESS, buff_addr));
if (err < 0)
return err;
err = nfp_cpp_writeq(cpp, nsp_cpp, nsp_command,
- FIELD_PREP(NSP_COMMAND_OPTION, option) |
- FIELD_PREP(NSP_COMMAND_CODE, code) |
- FIELD_PREP(NSP_COMMAND_START, 1));
+ FIELD_PREP(NSP_COMMAND_OPTION, option) |
+ FIELD_PREP(NSP_COMMAND_CODE, code) |
+ FIELD_PREP(NSP_COMMAND_START, 1));
if (err < 0)
return err;
/* Wait for NSP_COMMAND_START to go to 0 */
err = nfp_nsp_wait_reg(cpp, &reg, nsp_cpp, nsp_command,
- NSP_COMMAND_START, 0);
+ NSP_COMMAND_START, 0);
if (err != 0) {
PMD_DRV_LOG(ERR, "Error %d waiting for code 0x%04x to start",
- err, code);
+ err, code);
return err;
}
/* Wait for NSP_STATUS_BUSY to go to 0 */
- err = nfp_nsp_wait_reg(cpp, &reg, nsp_cpp, nsp_status, NSP_STATUS_BUSY,
- 0);
+ err = nfp_nsp_wait_reg(cpp, &reg, nsp_cpp, nsp_status,
+ NSP_STATUS_BUSY, 0);
if (err != 0) {
PMD_DRV_LOG(ERR, "Error %d waiting for code 0x%04x to start",
- err, code);
+ err, code);
return err;
}
@@ -268,7 +278,7 @@ nfp_nsp_command(struct nfp_nsp *state, uint16_t code, uint32_t option,
err = FIELD_GET(NSP_STATUS_RESULT, reg);
if (err != 0) {
PMD_DRV_LOG(ERR, "Result (error) code set: %d (%d) command: %d",
- -err, (int)ret_val, code);
+ -err, (int)ret_val, code);
nfp_nsp_print_extended_error(ret_val);
return -err;
}
@@ -279,9 +289,12 @@ nfp_nsp_command(struct nfp_nsp *state, uint16_t code, uint32_t option,
#define SZ_1M 0x00100000
static int
-nfp_nsp_command_buf(struct nfp_nsp *nsp, uint16_t code, uint32_t option,
- const void *in_buf, unsigned int in_size, void *out_buf,
- unsigned int out_size)
+nfp_nsp_command_buf(struct nfp_nsp *nsp,
+ uint16_t code, uint32_t option,
+ const void *in_buf,
+ unsigned int in_size,
+ void *out_buf,
+ unsigned int out_size)
{
struct nfp_cpp *cpp = nsp->cpp;
unsigned int max_size;
@@ -291,28 +304,26 @@ nfp_nsp_command_buf(struct nfp_nsp *nsp, uint16_t code, uint32_t option,
if (nsp->ver.minor < 13) {
PMD_DRV_LOG(ERR, "NSP: Code 0x%04x with buffer not supported ABI %hu.%hu)",
- code, nsp->ver.major, nsp->ver.minor);
+ code, nsp->ver.major, nsp->ver.minor);
return -EOPNOTSUPP;
}
err = nfp_cpp_readq(cpp, nfp_resource_cpp_id(nsp->res),
- nfp_resource_address(nsp->res) +
- NSP_DFLT_BUFFER_CONFIG,
- &reg);
+ nfp_resource_address(nsp->res) + NSP_DFLT_BUFFER_CONFIG,
+ &reg);
if (err < 0)
return err;
max_size = RTE_MAX(in_size, out_size);
if (FIELD_GET(NSP_DFLT_BUFFER_SIZE_MB, reg) * SZ_1M < max_size) {
PMD_DRV_LOG(ERR, "NSP: default buffer too small for command 0x%04x (%llu < %u)",
- code, FIELD_GET(NSP_DFLT_BUFFER_SIZE_MB, reg) * SZ_1M, max_size);
+ code, FIELD_GET(NSP_DFLT_BUFFER_SIZE_MB, reg) * SZ_1M, max_size);
return -EINVAL;
}
err = nfp_cpp_readq(cpp, nfp_resource_cpp_id(nsp->res),
- nfp_resource_address(nsp->res) +
- NSP_DFLT_BUFFER,
- &reg);
+ nfp_resource_address(nsp->res) + NSP_DFLT_BUFFER,
+ &reg);
if (err < 0)
return err;
@@ -328,7 +339,7 @@ nfp_nsp_command_buf(struct nfp_nsp *nsp, uint16_t code, uint32_t option,
if (out_buf != NULL && out_size > 0 && out_size > in_size) {
memset(out_buf, 0, out_size - in_size);
err = nfp_cpp_write(cpp, cpp_id, cpp_buf + in_size, out_buf,
- out_size - in_size);
+ out_size - in_size);
if (err < 0)
return err;
}
@@ -388,38 +399,47 @@ nfp_nsp_mac_reinit(struct nfp_nsp *state)
}
int
-nfp_nsp_load_fw(struct nfp_nsp *state, void *buf, unsigned int size)
+nfp_nsp_load_fw(struct nfp_nsp *state,
+ void *buf,
+ unsigned int size)
{
return nfp_nsp_command_buf(state, SPCODE_FW_LOAD, size, buf, size,
- NULL, 0);
+ NULL, 0);
}
int
-nfp_nsp_read_eth_table(struct nfp_nsp *state, void *buf, unsigned int size)
+nfp_nsp_read_eth_table(struct nfp_nsp *state,
+ void *buf,
+ unsigned int size)
{
return nfp_nsp_command_buf(state, SPCODE_ETH_RESCAN, size, NULL, 0,
- buf, size);
+ buf, size);
}
int
-nfp_nsp_write_eth_table(struct nfp_nsp *state, const void *buf,
- unsigned int size)
+nfp_nsp_write_eth_table(struct nfp_nsp *state,
+ const void *buf,
+ unsigned int size)
{
return nfp_nsp_command_buf(state, SPCODE_ETH_CONTROL, size, buf, size,
- NULL, 0);
+ NULL, 0);
}
int
-nfp_nsp_read_identify(struct nfp_nsp *state, void *buf, unsigned int size)
+nfp_nsp_read_identify(struct nfp_nsp *state,
+ void *buf,
+ unsigned int size)
{
return nfp_nsp_command_buf(state, SPCODE_NSP_IDENTIFY, size, NULL, 0,
- buf, size);
+ buf, size);
}
int
-nfp_nsp_read_sensors(struct nfp_nsp *state, unsigned int sensor_mask, void *buf,
- unsigned int size)
+nfp_nsp_read_sensors(struct nfp_nsp *state,
+ unsigned int sensor_mask,
+ void *buf,
+ unsigned int size)
{
return nfp_nsp_command_buf(state, SPCODE_NSP_SENSORS, sensor_mask, NULL,
- 0, buf, size);
+ 0, buf, size);
}
diff --git a/drivers/net/nfp/nfpcore/nfp_nsp.h b/drivers/net/nfp/nfpcore/nfp_nsp.h
index 9905b2d3d3..1e2deaabb4 100644
--- a/drivers/net/nfp/nfpcore/nfp_nsp.h
+++ b/drivers/net/nfp/nfpcore/nfp_nsp.h
@@ -114,9 +114,10 @@ int nfp_nsp_load_fw(struct nfp_nsp *state, void *buf, unsigned int size);
int nfp_nsp_mac_reinit(struct nfp_nsp *state);
int nfp_nsp_read_identify(struct nfp_nsp *state, void *buf, unsigned int size);
int nfp_nsp_read_sensors(struct nfp_nsp *state, unsigned int sensor_mask,
- void *buf, unsigned int size);
+ void *buf, unsigned int size);
-static inline int nfp_nsp_has_mac_reinit(struct nfp_nsp *state)
+static inline int
+nfp_nsp_has_mac_reinit(struct nfp_nsp *state)
{
return nfp_nsp_get_abi_ver_minor(state) > 20;
}
@@ -229,22 +230,22 @@ struct nfp_eth_table *nfp_eth_read_ports(struct nfp_cpp *cpp);
int nfp_eth_set_mod_enable(struct nfp_cpp *cpp, unsigned int idx, int enable);
int nfp_eth_set_configured(struct nfp_cpp *cpp, unsigned int idx,
- int configed);
-int
-nfp_eth_set_fec(struct nfp_cpp *cpp, unsigned int idx, enum nfp_eth_fec mode);
+ int configed);
+int nfp_eth_set_fec(struct nfp_cpp *cpp, unsigned int idx, enum nfp_eth_fec mode);
int nfp_nsp_read_eth_table(struct nfp_nsp *state, void *buf, unsigned int size);
int nfp_nsp_write_eth_table(struct nfp_nsp *state, const void *buf,
- unsigned int size);
+ unsigned int size);
void nfp_nsp_config_set_state(struct nfp_nsp *state, void *entries,
- unsigned int idx);
+ unsigned int idx);
void nfp_nsp_config_clear_state(struct nfp_nsp *state);
void nfp_nsp_config_set_modified(struct nfp_nsp *state, int modified);
void *nfp_nsp_config_entries(struct nfp_nsp *state);
int nfp_nsp_config_modified(struct nfp_nsp *state);
unsigned int nfp_nsp_config_idx(struct nfp_nsp *state);
-static inline int nfp_eth_can_support_fec(struct nfp_eth_table_port *eth_port)
+static inline int
+nfp_eth_can_support_fec(struct nfp_eth_table_port *eth_port)
{
return !!eth_port->fec_modes_supported;
}
@@ -297,6 +298,6 @@ enum nfp_nsp_sensor_id {
};
int nfp_hwmon_read_sensor(struct nfp_cpp *cpp, enum nfp_nsp_sensor_id id,
- long *val);
+ long *val);
#endif
diff --git a/drivers/net/nfp/nfpcore/nfp_nsp_cmds.c b/drivers/net/nfp/nfpcore/nfp_nsp_cmds.c
index 21b338461e..28dba27124 100644
--- a/drivers/net/nfp/nfpcore/nfp_nsp_cmds.c
+++ b/drivers/net/nfp/nfpcore/nfp_nsp_cmds.c
@@ -73,7 +73,9 @@ struct nfp_sensors {
};
int
-nfp_hwmon_read_sensor(struct nfp_cpp *cpp, enum nfp_nsp_sensor_id id, long *val)
+nfp_hwmon_read_sensor(struct nfp_cpp *cpp,
+ enum nfp_nsp_sensor_id id,
+ long *val)
{
struct nfp_sensors s;
struct nfp_nsp *nsp;
diff --git a/drivers/net/nfp/nfpcore/nfp_nsp_eth.c b/drivers/net/nfp/nfpcore/nfp_nsp_eth.c
index 825a84a8cd..3eeefc74af 100644
--- a/drivers/net/nfp/nfpcore/nfp_nsp_eth.c
+++ b/drivers/net/nfp/nfpcore/nfp_nsp_eth.c
@@ -168,8 +168,10 @@ nfp_eth_copy_mac_reverse(uint8_t *dst, const uint8_t *src)
}
static void
-nfp_eth_port_translate(struct nfp_nsp *nsp, const union eth_table_entry *src,
- unsigned int index, struct nfp_eth_table_port *dst)
+nfp_eth_port_translate(struct nfp_nsp *nsp,
+ const union eth_table_entry *src,
+ unsigned int index,
+ struct nfp_eth_table_port *dst)
{
unsigned int rate;
unsigned int fec;
@@ -225,21 +227,21 @@ nfp_eth_calc_port_geometry(struct nfp_eth_table *table)
for (i = 0; i < table->count; i++) {
table->max_index = RTE_MAX(table->max_index,
- table->ports[i].index);
+ table->ports[i].index);
for (j = 0; j < table->count; j++) {
if (table->ports[i].label_port !=
- table->ports[j].label_port)
+ table->ports[j].label_port)
continue;
table->ports[i].port_lanes += table->ports[j].lanes;
if (i == j)
continue;
if (table->ports[i].label_subport ==
- table->ports[j].label_subport)
+ table->ports[j].label_subport)
PMD_DRV_LOG(DEBUG, "Port %d subport %d is a duplicate",
- table->ports[i].label_port,
- table->ports[i].label_subport);
+ table->ports[i].label_port,
+ table->ports[i].label_subport);
table->ports[i].is_split = 1;
}
@@ -296,7 +298,7 @@ __nfp_eth_read_ports(struct nfp_nsp *nsp)
*/
if (ret != 0 && ret != cnt) {
PMD_DRV_LOG(ERR, "table entry count (%d) unmatch entries present (%d)",
- ret, cnt);
+ ret, cnt);
goto err;
}
@@ -354,7 +356,8 @@ nfp_eth_read_ports(struct nfp_cpp *cpp)
}
struct nfp_nsp *
-nfp_eth_config_start(struct nfp_cpp *cpp, unsigned int idx)
+nfp_eth_config_start(struct nfp_cpp *cpp,
+ unsigned int idx)
{
union eth_table_entry *entries;
struct nfp_nsp *nsp;
@@ -447,7 +450,9 @@ nfp_eth_config_commit_end(struct nfp_nsp *nsp)
* -ERRNO - configuration failed.
*/
int
-nfp_eth_set_mod_enable(struct nfp_cpp *cpp, unsigned int idx, int enable)
+nfp_eth_set_mod_enable(struct nfp_cpp *cpp,
+ unsigned int idx,
+ int enable)
{
union eth_table_entry *entries;
struct nfp_nsp *nsp;
@@ -487,7 +492,9 @@ nfp_eth_set_mod_enable(struct nfp_cpp *cpp, unsigned int idx, int enable)
* -ERRNO - configuration failed.
*/
int
-nfp_eth_set_configured(struct nfp_cpp *cpp, unsigned int idx, int configed)
+nfp_eth_set_configured(struct nfp_cpp *cpp,
+ unsigned int idx,
+ int configed)
{
union eth_table_entry *entries;
struct nfp_nsp *nsp;
@@ -523,9 +530,12 @@ nfp_eth_set_configured(struct nfp_cpp *cpp, unsigned int idx, int configed)
}
static int
-nfp_eth_set_bit_config(struct nfp_nsp *nsp, unsigned int raw_idx,
- const uint64_t mask, const unsigned int shift,
- unsigned int val, const uint64_t ctrl_bit)
+nfp_eth_set_bit_config(struct nfp_nsp *nsp,
+ unsigned int raw_idx,
+ const uint64_t mask,
+ const unsigned int shift,
+ unsigned int val,
+ const uint64_t ctrl_bit)
{
union eth_table_entry *entries = nfp_nsp_config_entries(nsp);
unsigned int idx = nfp_nsp_config_idx(nsp);
@@ -560,7 +570,7 @@ nfp_eth_set_bit_config(struct nfp_nsp *nsp, unsigned int raw_idx,
(__extension__ ({ \
typeof(mask) _x = (mask); \
nfp_eth_set_bit_config(nsp, raw_idx, _x, __bf_shf(_x), \
- val, ctrl_bit); \
+ val, ctrl_bit); \
}))
/*
@@ -574,11 +584,11 @@ nfp_eth_set_bit_config(struct nfp_nsp *nsp, unsigned int raw_idx,
* Return: 0 or -ERRNO.
*/
int
-__nfp_eth_set_aneg(struct nfp_nsp *nsp, enum nfp_eth_aneg mode)
+__nfp_eth_set_aneg(struct nfp_nsp *nsp,
+ enum nfp_eth_aneg mode)
{
return NFP_ETH_SET_BIT_CONFIG(nsp, NSP_ETH_RAW_STATE,
- NSP_ETH_STATE_ANEG, mode,
- NSP_ETH_CTRL_SET_ANEG);
+ NSP_ETH_STATE_ANEG, mode, NSP_ETH_CTRL_SET_ANEG);
}
/*
@@ -592,11 +602,11 @@ __nfp_eth_set_aneg(struct nfp_nsp *nsp, enum nfp_eth_aneg mode)
* Return: 0 or -ERRNO.
*/
static int
-__nfp_eth_set_fec(struct nfp_nsp *nsp, enum nfp_eth_fec mode)
+__nfp_eth_set_fec(struct nfp_nsp *nsp,
+ enum nfp_eth_fec mode)
{
return NFP_ETH_SET_BIT_CONFIG(nsp, NSP_ETH_RAW_STATE,
- NSP_ETH_STATE_FEC, mode,
- NSP_ETH_CTRL_SET_FEC);
+ NSP_ETH_STATE_FEC, mode, NSP_ETH_CTRL_SET_FEC);
}
/*
@@ -611,7 +621,9 @@ __nfp_eth_set_fec(struct nfp_nsp *nsp, enum nfp_eth_fec mode)
* -ERRNO - configuration failed.
*/
int
-nfp_eth_set_fec(struct nfp_cpp *cpp, unsigned int idx, enum nfp_eth_fec mode)
+nfp_eth_set_fec(struct nfp_cpp *cpp,
+ unsigned int idx,
+ enum nfp_eth_fec mode)
{
struct nfp_nsp *nsp;
int err;
@@ -642,7 +654,8 @@ nfp_eth_set_fec(struct nfp_cpp *cpp, unsigned int idx, enum nfp_eth_fec mode)
* Return: 0 or -ERRNO.
*/
int
-__nfp_eth_set_speed(struct nfp_nsp *nsp, unsigned int speed)
+__nfp_eth_set_speed(struct nfp_nsp *nsp,
+ unsigned int speed)
{
enum nfp_eth_rate rate;
@@ -653,8 +666,7 @@ __nfp_eth_set_speed(struct nfp_nsp *nsp, unsigned int speed)
}
return NFP_ETH_SET_BIT_CONFIG(nsp, NSP_ETH_RAW_STATE,
- NSP_ETH_STATE_RATE, rate,
- NSP_ETH_CTRL_SET_RATE);
+ NSP_ETH_STATE_RATE, rate, NSP_ETH_CTRL_SET_RATE);
}
/*
@@ -668,8 +680,9 @@ __nfp_eth_set_speed(struct nfp_nsp *nsp, unsigned int speed)
* Return: 0 or -ERRNO.
*/
int
-__nfp_eth_set_split(struct nfp_nsp *nsp, unsigned int lanes)
+__nfp_eth_set_split(struct nfp_nsp *nsp,
+ unsigned int lanes)
{
- return NFP_ETH_SET_BIT_CONFIG(nsp, NSP_ETH_RAW_PORT, NSP_ETH_PORT_LANES,
- lanes, NSP_ETH_CTRL_SET_LANES);
+ return NFP_ETH_SET_BIT_CONFIG(nsp, NSP_ETH_RAW_PORT,
+ NSP_ETH_PORT_LANES, lanes, NSP_ETH_CTRL_SET_LANES);
}
diff --git a/drivers/net/nfp/nfpcore/nfp_resource.c b/drivers/net/nfp/nfpcore/nfp_resource.c
index 838cd6e0ef..57089c770f 100644
--- a/drivers/net/nfp/nfpcore/nfp_resource.c
+++ b/drivers/net/nfp/nfpcore/nfp_resource.c
@@ -64,7 +64,8 @@ struct nfp_resource {
};
static int
-nfp_cpp_resource_find(struct nfp_cpp *cpp, struct nfp_resource *res)
+nfp_cpp_resource_find(struct nfp_cpp *cpp,
+ struct nfp_resource *res)
{
char name_pad[NFP_RESOURCE_ENTRY_NAME_SZ + 2];
struct nfp_resource_entry entry;
@@ -85,7 +86,7 @@ nfp_cpp_resource_find(struct nfp_cpp *cpp, struct nfp_resource *res)
for (i = 0; i < NFP_RESOURCE_TBL_ENTRIES; i++) {
uint64_t addr = NFP_RESOURCE_TBL_BASE +
- sizeof(struct nfp_resource_entry) * i;
+ sizeof(struct nfp_resource_entry) * i;
ret = nfp_cpp_read(cpp, cpp_id, addr, &entry, sizeof(entry));
if (ret != sizeof(entry))
@@ -95,12 +96,11 @@ nfp_cpp_resource_find(struct nfp_cpp *cpp, struct nfp_resource *res)
continue;
/* Found key! */
- res->mutex =
- nfp_cpp_mutex_alloc(cpp,
- NFP_RESOURCE_TBL_TARGET, addr, key);
+ res->mutex = nfp_cpp_mutex_alloc(cpp, NFP_RESOURCE_TBL_TARGET,
+ addr, key);
res->cpp_id = NFP_CPP_ID(entry.region.cpp_target,
- entry.region.cpp_action,
- entry.region.cpp_token);
+ entry.region.cpp_action,
+ entry.region.cpp_token);
res->addr = ((uint64_t)entry.region.page_offset) << 8;
res->size = (uint64_t)entry.region.page_size << 8;
return 0;
@@ -110,8 +110,9 @@ nfp_cpp_resource_find(struct nfp_cpp *cpp, struct nfp_resource *res)
}
static int
-nfp_resource_try_acquire(struct nfp_cpp *cpp, struct nfp_resource *res,
- struct nfp_cpp_mutex *dev_mutex)
+nfp_resource_try_acquire(struct nfp_cpp *cpp,
+ struct nfp_resource *res,
+ struct nfp_cpp_mutex *dev_mutex)
{
int err;
@@ -148,7 +149,8 @@ nfp_resource_try_acquire(struct nfp_cpp *cpp, struct nfp_resource *res,
* Return: NFP Resource handle, or NULL
*/
struct nfp_resource *
-nfp_resource_acquire(struct nfp_cpp *cpp, const char *name)
+nfp_resource_acquire(struct nfp_cpp *cpp,
+ const char *name)
{
struct nfp_cpp_mutex *dev_mutex;
struct nfp_resource *res;
@@ -165,8 +167,7 @@ nfp_resource_acquire(struct nfp_cpp *cpp, const char *name)
strncpy(res->name, name, NFP_RESOURCE_ENTRY_NAME_SZ);
dev_mutex = nfp_cpp_mutex_alloc(cpp, NFP_RESOURCE_TBL_TARGET,
- NFP_RESOURCE_TBL_BASE,
- NFP_RESOURCE_TBL_KEY);
+ NFP_RESOURCE_TBL_BASE, NFP_RESOURCE_TBL_KEY);
if (dev_mutex == NULL) {
free(res);
return NULL;
@@ -234,8 +235,8 @@ nfp_resource_cpp_id(const struct nfp_resource *res)
*
* Return: const char pointer to the name of the resource
*/
-const char
-*nfp_resource_name(const struct nfp_resource *res)
+const char *
+nfp_resource_name(const struct nfp_resource *res)
{
return res->name;
}
diff --git a/drivers/net/nfp/nfpcore/nfp_resource.h b/drivers/net/nfp/nfpcore/nfp_resource.h
index 06cc6f74f4..009b7359a4 100644
--- a/drivers/net/nfp/nfpcore/nfp_resource.h
+++ b/drivers/net/nfp/nfpcore/nfp_resource.h
@@ -18,7 +18,7 @@
struct nfp_resource;
struct nfp_resource *nfp_resource_acquire(struct nfp_cpp *cpp,
- const char *name);
+ const char *name);
/**
* Release a NFP Resource, and free the handle
diff --git a/drivers/net/nfp/nfpcore/nfp_rtsym.c b/drivers/net/nfp/nfpcore/nfp_rtsym.c
index 4c45aec5c1..aa3b7a483e 100644
--- a/drivers/net/nfp/nfpcore/nfp_rtsym.c
+++ b/drivers/net/nfp/nfpcore/nfp_rtsym.c
@@ -40,22 +40,25 @@ struct nfp_rtsym_table {
};
static int
-nfp_meid(uint8_t island_id, uint8_t menum)
+nfp_meid(uint8_t island_id,
+ uint8_t menum)
{
return (island_id & 0x3F) == island_id && menum < 12 ?
(island_id << 4) | (menum + 4) : -1;
}
static void
-nfp_rtsym_sw_entry_init(struct nfp_rtsym_table *cache, uint32_t strtab_size,
- struct nfp_rtsym *sw, struct nfp_rtsym_entry *fw)
+nfp_rtsym_sw_entry_init(struct nfp_rtsym_table *cache,
+ uint32_t strtab_size,
+ struct nfp_rtsym *sw,
+ struct nfp_rtsym_entry *fw)
{
sw->type = fw->type;
sw->name = cache->strtab + rte_le_to_cpu_16(fw->name) % strtab_size;
sw->addr = ((uint64_t)fw->addr_hi << 32) |
- rte_le_to_cpu_32(fw->addr_lo);
+ rte_le_to_cpu_32(fw->addr_lo);
sw->size = ((uint64_t)fw->size_hi << 32) |
- rte_le_to_cpu_32(fw->size_lo);
+ rte_le_to_cpu_32(fw->size_lo);
PMD_INIT_LOG(DEBUG, "rtsym_entry_init name=%s, addr=%" PRIx64 ", size=%" PRIu64 ", target=%d",
sw->name, sw->addr, sw->size, sw->target);
@@ -93,7 +96,8 @@ nfp_rtsym_table_read(struct nfp_cpp *cpp)
}
struct nfp_rtsym_table *
-__nfp_rtsym_table_read(struct nfp_cpp *cpp, const struct nfp_mip *mip)
+__nfp_rtsym_table_read(struct nfp_cpp *cpp,
+ const struct nfp_mip *mip)
{
uint32_t strtab_addr, symtab_addr, strtab_size, symtab_size;
struct nfp_rtsym_entry *rtsymtab;
@@ -142,7 +146,7 @@ __nfp_rtsym_table_read(struct nfp_cpp *cpp, const struct nfp_mip *mip)
for (n = 0; n < cache->num; n++)
nfp_rtsym_sw_entry_init(cache, strtab_size,
- &cache->symtab[n], &rtsymtab[n]);
+ &cache->symtab[n], &rtsymtab[n]);
free(rtsymtab);
@@ -178,7 +182,8 @@ nfp_rtsym_count(struct nfp_rtsym_table *rtbl)
* Return: const pointer to a struct nfp_rtsym descriptor, or NULL
*/
const struct nfp_rtsym *
-nfp_rtsym_get(struct nfp_rtsym_table *rtbl, int idx)
+nfp_rtsym_get(struct nfp_rtsym_table *rtbl,
+ int idx)
{
if (rtbl == NULL)
return NULL;
@@ -197,7 +202,8 @@ nfp_rtsym_get(struct nfp_rtsym_table *rtbl, int idx)
* Return: const pointer to a struct nfp_rtsym descriptor, or NULL
*/
const struct nfp_rtsym *
-nfp_rtsym_lookup(struct nfp_rtsym_table *rtbl, const char *name)
+nfp_rtsym_lookup(struct nfp_rtsym_table *rtbl,
+ const char *name)
{
int n;
@@ -331,7 +337,9 @@ nfp_rtsym_readq(struct nfp_cpp *cpp,
* Return: value read, on error sets the error and returns ~0ULL.
*/
uint64_t
-nfp_rtsym_read_le(struct nfp_rtsym_table *rtbl, const char *name, int *error)
+nfp_rtsym_read_le(struct nfp_rtsym_table *rtbl,
+ const char *name,
+ int *error)
{
const struct nfp_rtsym *sym;
uint32_t val32;
@@ -354,7 +362,7 @@ nfp_rtsym_read_le(struct nfp_rtsym_table *rtbl, const char *name, int *error)
break;
default:
PMD_DRV_LOG(ERR, "rtsym '%s' unsupported size: %" PRId64,
- name, sym->size);
+ name, sym->size);
err = -EINVAL;
break;
}
@@ -372,8 +380,10 @@ nfp_rtsym_read_le(struct nfp_rtsym_table *rtbl, const char *name, int *error)
}
uint8_t *
-nfp_rtsym_map(struct nfp_rtsym_table *rtbl, const char *name,
- unsigned int min_size, struct nfp_cpp_area **area)
+nfp_rtsym_map(struct nfp_rtsym_table *rtbl,
+ const char *name,
+ unsigned int min_size,
+ struct nfp_cpp_area **area)
{
int ret;
uint8_t *mem;
@@ -397,7 +407,7 @@ nfp_rtsym_map(struct nfp_rtsym_table *rtbl, const char *name,
if (sym->size < min_size) {
PMD_DRV_LOG(ERR, "Symbol %s too small (%" PRIu64 " < %u)", name,
- sym->size, min_size);
+ sym->size, min_size);
return NULL;
}
diff --git a/drivers/net/nfp/nfpcore/nfp_rtsym.h b/drivers/net/nfp/nfpcore/nfp_rtsym.h
index 8b494211bc..30768f1ccf 100644
--- a/drivers/net/nfp/nfpcore/nfp_rtsym.h
+++ b/drivers/net/nfp/nfpcore/nfp_rtsym.h
@@ -43,19 +43,18 @@ struct nfp_rtsym_table;
struct nfp_rtsym_table *nfp_rtsym_table_read(struct nfp_cpp *cpp);
-struct nfp_rtsym_table *
-__nfp_rtsym_table_read(struct nfp_cpp *cpp, const struct nfp_mip *mip);
+struct nfp_rtsym_table *__nfp_rtsym_table_read(struct nfp_cpp *cpp,
+ const struct nfp_mip *mip);
int nfp_rtsym_count(struct nfp_rtsym_table *rtbl);
const struct nfp_rtsym *nfp_rtsym_get(struct nfp_rtsym_table *rtbl, int idx);
-const struct nfp_rtsym *
-nfp_rtsym_lookup(struct nfp_rtsym_table *rtbl, const char *name);
+const struct nfp_rtsym *nfp_rtsym_lookup(struct nfp_rtsym_table *rtbl,
+ const char *name);
uint64_t nfp_rtsym_read_le(struct nfp_rtsym_table *rtbl, const char *name,
- int *error);
-uint8_t *
-nfp_rtsym_map(struct nfp_rtsym_table *rtbl, const char *name,
- unsigned int min_size, struct nfp_cpp_area **area);
+ int *error);
+uint8_t *nfp_rtsym_map(struct nfp_rtsym_table *rtbl, const char *name,
+ unsigned int min_size, struct nfp_cpp_area **area);
#endif
--
2.39.1
* [PATCH v12 1/4] ethdev: add API for mbufs recycle mode
@ 2023-08-24 7:36 3% ` Feifei Wang
0 siblings, 0 replies; 200+ results
From: Feifei Wang @ 2023-08-24 7:36 UTC (permalink / raw)
To: Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Cc: dev, nd, Feifei Wang, Honnappa Nagarahalli, Ruifeng Wang,
Morten Brørup, Konstantin Ananyev
Add 'rte_eth_recycle_rx_queue_info_get' and 'rte_eth_recycle_mbufs'
APIs to recycle used mbufs from a transmit queue of an Ethernet device,
and move these mbufs into a mbuf ring for a receive queue of an Ethernet
device. This can bypass mempool 'put/get' operations hence saving CPU
cycles.
When recycling mbufs, the rte_eth_recycle_mbufs() function performs
the following operations:
- Copy used *rte_mbuf* buffer pointers from Tx mbuf ring into Rx mbuf
ring.
- Replenish the Rx descriptors with the recycled *rte_mbuf* buffers freed
from the Tx mbuf ring.
Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Suggested-by: Ruifeng Wang <ruifeng.wang@arm.com>
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
---
doc/guides/rel_notes/release_23_11.rst | 15 +++
lib/ethdev/ethdev_driver.h | 10 ++
lib/ethdev/ethdev_private.c | 2 +
lib/ethdev/rte_ethdev.c | 22 +++
lib/ethdev/rte_ethdev.h | 180 +++++++++++++++++++++++++
lib/ethdev/rte_ethdev_core.h | 23 +++-
lib/ethdev/version.map | 3 +
7 files changed, 249 insertions(+), 6 deletions(-)
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 333e1d95a2..9d6ce65f22 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -78,6 +78,13 @@ New Features
* build: Optional libraries can now be selected with the new ``enable_libs``
build option similarly to the existing ``enable_drivers`` build option.
+* **Add mbufs recycling support.**
+
+ Added ``rte_eth_recycle_rx_queue_info_get`` and ``rte_eth_recycle_mbufs``
+ APIs which allow the user to copy used mbufs from the Tx mbuf ring
+ into the Rx mbuf ring. This feature supports the case where the Rx Ethernet
+ device is different from the Tx Ethernet device, using per-driver
+ callback functions invoked by ``rte_eth_recycle_mbufs``.
Removed Items
-------------
@@ -129,6 +136,14 @@ ABI Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* ethdev: Added ``recycle_tx_mbufs_reuse`` and ``recycle_rx_descriptors_refill``
+ fields to ``rte_eth_dev`` structure.
+
+* ethdev: Structure ``rte_eth_fp_ops`` was modified to add
+ ``recycle_tx_mbufs_reuse`` and ``recycle_rx_descriptors_refill``
+ fields, to move the ``rxq`` and ``txq`` fields, and to change the size
+ of the ``reserved1`` and ``reserved2`` fields.
+
Known Issues
------------
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 980f837ab6..b0c55a8523 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -58,6 +58,10 @@ struct rte_eth_dev {
eth_rx_descriptor_status_t rx_descriptor_status;
/** Check the status of a Tx descriptor */
eth_tx_descriptor_status_t tx_descriptor_status;
+ /** Pointer to PMD transmit mbufs reuse function */
+ eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
+ /** Pointer to PMD receive descriptors refill function */
+ eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
/**
* Device data that is shared between primary and secondary processes
@@ -507,6 +511,10 @@ typedef void (*eth_rxq_info_get_t)(struct rte_eth_dev *dev,
typedef void (*eth_txq_info_get_t)(struct rte_eth_dev *dev,
uint16_t tx_queue_id, struct rte_eth_txq_info *qinfo);
+typedef void (*eth_recycle_rxq_info_get_t)(struct rte_eth_dev *dev,
+ uint16_t rx_queue_id,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info);
+
typedef int (*eth_burst_mode_get_t)(struct rte_eth_dev *dev,
uint16_t queue_id, struct rte_eth_burst_mode *mode);
@@ -1250,6 +1258,8 @@ struct eth_dev_ops {
eth_rxq_info_get_t rxq_info_get;
/** Retrieve Tx queue information */
eth_txq_info_get_t txq_info_get;
+ /** Retrieve mbufs recycle Rx queue information */
+ eth_recycle_rxq_info_get_t recycle_rxq_info_get;
eth_burst_mode_get_t rx_burst_mode_get; /**< Get Rx burst mode */
eth_burst_mode_get_t tx_burst_mode_get; /**< Get Tx burst mode */
eth_fw_version_get_t fw_version_get; /**< Get firmware version */
diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 14ec8c6ccf..f8ab64f195 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -277,6 +277,8 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
fpo->rx_queue_count = dev->rx_queue_count;
fpo->rx_descriptor_status = dev->rx_descriptor_status;
fpo->tx_descriptor_status = dev->tx_descriptor_status;
+ fpo->recycle_tx_mbufs_reuse = dev->recycle_tx_mbufs_reuse;
+ fpo->recycle_rx_descriptors_refill = dev->recycle_rx_descriptors_refill;
fpo->rxq.data = dev->data->rx_queues;
fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 0840d2b594..2bf7a84f16 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -5876,6 +5876,28 @@ rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
return 0;
}
+int
+rte_eth_recycle_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info)
+{
+ struct rte_eth_dev *dev;
+ int ret;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
+
+ ret = eth_dev_validate_rx_queue(dev, queue_id);
+ if (unlikely(ret != 0))
+ return ret;
+
+ if (*dev->dev_ops->recycle_rxq_info_get == NULL)
+ return -ENOTSUP;
+
+ dev->dev_ops->recycle_rxq_info_get(dev, queue_id, recycle_rxq_info);
+
+ return 0;
+}
+
int
rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
struct rte_eth_burst_mode *mode)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 04a2564f22..9ea639852d 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1820,6 +1820,30 @@ struct rte_eth_txq_info {
uint8_t queue_state; /**< one of RTE_ETH_QUEUE_STATE_*. */
} __rte_cache_min_aligned;
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice.
+ *
+ * Ethernet device Rx queue information structure for recycling mbufs.
+ * Used to retrieve Rx queue information when the Tx queue is reusing mbufs
+ * and moving them into the Rx mbuf ring.
+ */
+struct rte_eth_recycle_rxq_info {
+ struct rte_mbuf **mbuf_ring; /**< mbuf ring of Rx queue. */
+ struct rte_mempool *mp; /**< mempool of Rx queue. */
+ uint16_t *refill_head; /**< head of Rx queue refilling mbufs. */
+ uint16_t *receive_tail; /**< tail of Rx queue receiving pkts. */
+ uint16_t mbuf_ring_size; /**< configured size of the mbuf ring. */
+ /**
+ * Requirement on mbuf refilling batch size of Rx mbuf ring.
+ * For some PMD drivers, the number of Rx mbuf ring refilling mbufs
+ * should be aligned with mbuf ring size, in order to simplify
+ * ring wrapping around.
+ * Value 0 means that PMD drivers have no requirement for this.
+ */
+ uint16_t refill_requirement;
+} __rte_cache_min_aligned;
+
/* Generic Burst mode flag definition, values can be ORed. */
/**
@@ -4853,6 +4877,31 @@ int rte_eth_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
int rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
struct rte_eth_txq_info *qinfo);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Retrieve information about a given port's Rx queue for recycling mbufs.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param queue_id
+ * The Rx queue on the Ethernet device for which information
+ * will be retrieved.
+ * @param recycle_rxq_info
+ * A pointer to a structure of type *rte_eth_recycle_rxq_info* to be filled.
+ *
+ * @return
+ * - 0: Success
+ * - -ENODEV: If *port_id* is invalid.
+ * - -ENOTSUP: routine is not supported by the device PMD.
+ * - -EINVAL: The queue_id is out of range.
+ */
+__rte_experimental
+int rte_eth_recycle_rx_queue_info_get(uint16_t port_id,
+ uint16_t queue_id,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info);
+
/**
* Retrieve information about the Rx packet burst mode.
*
@@ -6527,6 +6576,137 @@ rte_eth_tx_buffer(uint16_t port_id, uint16_t queue_id,
return rte_eth_tx_buffer_flush(port_id, queue_id, buffer);
}
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Recycle used mbufs from a transmit queue of an Ethernet device, and move
+ * these mbufs into a mbuf ring for a receive queue of an Ethernet device.
+ * This can bypass mempool path to save CPU cycles.
+ *
+ * The rte_eth_recycle_mbufs() function loops, with rte_eth_rx_burst() and
+ * rte_eth_tx_burst() functions, freeing Tx used mbufs and replenishing Rx
+ * descriptors. The number of recycled mbufs depends on the demand of the Rx
+ * mbuf ring, constrained by the number of used mbufs in the Tx mbuf ring.
+ *
+ * When recycling mbufs, the rte_eth_recycle_mbufs() function performs the
+ * following operations:
+ *
+ * - Copy used *rte_mbuf* buffer pointers from Tx mbuf ring into Rx mbuf ring.
+ *
+ * - Replenish the Rx descriptors with the recycled *rte_mbuf* buffers freed
+ * from the Tx mbuf ring.
+ *
+ * This function splits the Rx and Tx paths into different callback functions:
+ * recycle_tx_mbufs_reuse for the Tx driver and recycle_rx_descriptors_refill
+ * for the Rx driver. rte_eth_recycle_mbufs() supports the case where the
+ * Rx Ethernet device is different from the Tx Ethernet device.
+ *
+ * It is the responsibility of users to select the Rx/Tx queue pair to recycle
+ * mbufs. Before calling this function, users must call
+ * rte_eth_recycle_rx_queue_info_get to retrieve the selected Rx queue info.
+ * @see rte_eth_recycle_rxq_info_get, struct rte_eth_recycle_rxq_info
+ *
+ * Currently, the rte_eth_recycle_mbufs() function can feed 1 Rx queue from
+ * 2 Tx queues in the same thread. Do not pair the Rx queue and Tx queue in
+ * different threads, in order to avoid memory overwrite errors.
+ *
+ * @param rx_port_id
+ * Port identifying the receive side.
+ * @param rx_queue_id
+ * The index of the receive queue identifying the receive side.
+ * The value must be in the range [0, nb_rx_queue - 1] previously supplied
+ * to rte_eth_dev_configure().
+ * @param tx_port_id
+ * Port identifying the transmit side.
+ * @param tx_queue_id
+ * The index of the transmit queue identifying the transmit side.
+ * The value must be in the range [0, nb_tx_queue - 1] previously supplied
+ * to rte_eth_dev_configure().
+ * @param recycle_rxq_info
+ * A pointer to a structure of type *rte_eth_recycle_rxq_info* which contains
+ * the information of the Rx queue mbuf ring.
+ * @return
+ * The number of recycling mbufs.
+ */
+__rte_experimental
+static inline uint16_t
+rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id,
+ uint16_t tx_port_id, uint16_t tx_queue_id,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info)
+{
+ struct rte_eth_fp_ops *p1, *p2;
+ void *qd1, *qd2;
+ uint16_t nb_mbufs;
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+ if (tx_port_id >= RTE_MAX_ETHPORTS ||
+ tx_queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid tx_port_id=%u or tx_queue_id=%u\n",
+ tx_port_id, tx_queue_id);
+ return 0;
+ }
+#endif
+
+ /* fetch pointer to Tx queue data */
+ p1 = &rte_eth_fp_ops[tx_port_id];
+ qd1 = p1->txq.data[tx_queue_id];
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(tx_port_id, 0);
+
+ if (qd1 == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n",
+ tx_queue_id, tx_port_id);
+ return 0;
+ }
+#endif
+ if (p1->recycle_tx_mbufs_reuse == NULL)
+ return 0;
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+ if (rx_port_id >= RTE_MAX_ETHPORTS ||
+ rx_queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR, "Invalid rx_port_id=%u or rx_queue_id=%u\n",
+ rx_port_id, rx_queue_id);
+ return 0;
+ }
+#endif
+
+ /* fetch pointer to Rx queue data */
+ p2 = &rte_eth_fp_ops[rx_port_id];
+ qd2 = p2->rxq.data[rx_queue_id];
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(rx_port_id, 0);
+
+ if (qd2 == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n",
+ rx_queue_id, rx_port_id);
+ return 0;
+ }
+#endif
+ if (p2->recycle_rx_descriptors_refill == NULL)
+ return 0;
+
+ /* Copy used *rte_mbuf* buffer pointers from Tx mbuf ring
+ * into Rx mbuf ring.
+ */
+ nb_mbufs = p1->recycle_tx_mbufs_reuse(qd1, recycle_rxq_info);
+
+ /* If no mbufs were recycled, return 0. */
+ if (nb_mbufs == 0)
+ return 0;
+
+ /* Replenish the Rx descriptors with the recycled
+ * mbufs from the Tx mbuf ring.
+ */
+ p2->recycle_rx_descriptors_refill(qd2, nb_mbufs);
+
+ return nb_mbufs;
+}
+
/**
* @warning
* @b EXPERIMENTAL: this API may change without prior notice
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 46e9721e07..a24ad7a6b2 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -55,6 +55,13 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
/** @internal Check the status of a Tx descriptor */
typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
+/** @internal Copy used mbufs from Tx mbuf ring into Rx mbuf ring */
+typedef uint16_t (*eth_recycle_tx_mbufs_reuse_t)(void *txq,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info);
+
+/** @internal Refill Rx descriptors with the recycling mbufs */
+typedef void (*eth_recycle_rx_descriptors_refill_t)(void *rxq, uint16_t nb);
+
/**
* @internal
* Structure used to hold opaque pointers to internal ethdev Rx/Tx
@@ -83,15 +90,17 @@ struct rte_eth_fp_ops {
* Rx fast-path functions and related data.
* 64-bit systems: occupies first 64B line
*/
+ /** Rx queues data. */
+ struct rte_ethdev_qdata rxq;
/** PMD receive function. */
eth_rx_burst_t rx_pkt_burst;
/** Get the number of used Rx descriptors. */
eth_rx_queue_count_t rx_queue_count;
/** Check the status of a Rx descriptor. */
eth_rx_descriptor_status_t rx_descriptor_status;
- /** Rx queues data. */
- struct rte_ethdev_qdata rxq;
- uintptr_t reserved1[3];
+ /** Refill Rx descriptors with the recycling mbufs. */
+ eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
+ uintptr_t reserved1[2];
/**@}*/
/**@{*/
@@ -99,15 +108,17 @@ struct rte_eth_fp_ops {
* Tx fast-path functions and related data.
* 64-bit systems: occupies second 64B line
*/
+ /** Tx queues data. */
+ struct rte_ethdev_qdata txq;
/** PMD transmit function. */
eth_tx_burst_t tx_pkt_burst;
/** PMD transmit prepare function. */
eth_tx_prep_t tx_pkt_prepare;
/** Check the status of a Tx descriptor. */
eth_tx_descriptor_status_t tx_descriptor_status;
- /** Tx queues data. */
- struct rte_ethdev_qdata txq;
- uintptr_t reserved2[3];
+ /** Copy used mbufs from Tx mbuf ring into Rx. */
+ eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
+ uintptr_t reserved2[2];
/**@}*/
} __rte_cache_aligned;
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index b965d6aa52..eec159dfdd 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -312,6 +312,9 @@ EXPERIMENTAL {
rte_flow_async_action_list_handle_query_update;
rte_flow_async_actions_update;
rte_flow_restore_info_dynflag;
+
+ # added in 23.11
+ rte_eth_recycle_rx_queue_info_get;
};
INTERNAL {
--
2.25.1
* RE: [PATCH v11 1/4] ethdev: add API for mbufs recycle mode
2023-08-22 7:27 3% ` [PATCH v11 1/4] ethdev: add API for mbufs recycle mode Feifei Wang
2023-08-22 23:33 0% ` Konstantin Ananyev
@ 2023-08-24 3:38 0% ` Feifei Wang
1 sibling, 0 replies; 200+ results
From: Feifei Wang @ 2023-08-24 3:38 UTC (permalink / raw)
To: Feifei Wang, Konstantin Ananyev
Cc: dev, nd, Honnappa Nagarahalli, Ruifeng Wang, Morten Brørup,
thomas, Ferruh Yigit, Andrew Rybchenko, nd
For Konstantin
> -----Original Message-----
> From: Feifei Wang <feifei.wang2@arm.com>
> Sent: Tuesday, August 22, 2023 3:27 PM
> To: thomas@monjalon.net; Ferruh Yigit <ferruh.yigit@amd.com>; Andrew
> Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Cc: dev@dpdk.org; nd <nd@arm.com>; Feifei Wang
> <Feifei.Wang2@arm.com>; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>; Ruifeng Wang
> <Ruifeng.Wang@arm.com>; Morten Brørup <mb@smartsharesystems.com>
> Subject: [PATCH v11 1/4] ethdev: add API for mbufs recycle mode
>
> Add 'rte_eth_recycle_rx_queue_info_get' and 'rte_eth_recycle_mbufs'
> APIs to recycle used mbufs from a transmit queue of an Ethernet device, and
> move these mbufs into a mbuf ring for a receive queue of an Ethernet device.
> This can bypass mempool 'put/get' operations hence saving CPU cycles.
>
> For each recycling mbufs, the rte_eth_recycle_mbufs() function performs the
> following operations:
> - Copy used *rte_mbuf* buffer pointers from Tx mbuf ring into Rx mbuf ring.
> - Replenish the Rx descriptors with the recycling *rte_mbuf* mbufs freed from
> the Tx mbuf ring.
>
> Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Suggested-by: Ruifeng Wang <ruifeng.wang@arm.com>
> Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> ---
> doc/guides/rel_notes/release_23_11.rst | 15 ++
> lib/ethdev/ethdev_driver.h | 10 ++
> lib/ethdev/ethdev_private.c | 2 +
> lib/ethdev/rte_ethdev.c | 31 +++++
> lib/ethdev/rte_ethdev.h | 181 +++++++++++++++++++++++++
> lib/ethdev/rte_ethdev_core.h | 23 +++-
> lib/ethdev/version.map | 3 +
> 7 files changed, 259 insertions(+), 6 deletions(-)
>
> diff --git a/doc/guides/rel_notes/release_23_11.rst
> b/doc/guides/rel_notes/release_23_11.rst
> index 4411bb32c1..02ee3867a0 100644
> --- a/doc/guides/rel_notes/release_23_11.rst
> +++ b/doc/guides/rel_notes/release_23_11.rst
> @@ -72,6 +72,13 @@ New Features
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> +* **Add mbufs recycling support.**
> +
> + Added ``rte_eth_recycle_rx_queue_info_get`` and
> + ``rte_eth_recycle_mbufs`` APIs which allow the user to copy used
> + mbufs from the Tx mbuf ring into the Rx mbuf ring. This feature
> + supports the case that the Rx Ethernet device is different from the
> + Tx Ethernet device with respective driver callback functions in
> ``rte_eth_recycle_mbufs``.
>
> Removed Items
> -------------
> @@ -123,6 +130,14 @@ ABI Changes
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> +* ethdev: Added ``recycle_tx_mbufs_reuse`` and
> +``recycle_rx_descriptors_refill``
> + fields to ``rte_eth_dev`` structure.
> +
> +* ethdev: Structure ``rte_eth_fp_ops`` was affected to add
> + ``recycle_tx_mbufs_reuse`` and ``recycle_rx_descriptors_refill``
> + fields, to move ``rxq`` and ``txq`` fields, to change the size of
> + ``reserved1`` and ``reserved2`` fields.
> +
>
> Known Issues
> ------------
> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h index
> 980f837ab6..b0c55a8523 100644
> --- a/lib/ethdev/ethdev_driver.h
> +++ b/lib/ethdev/ethdev_driver.h
> @@ -58,6 +58,10 @@ struct rte_eth_dev {
> eth_rx_descriptor_status_t rx_descriptor_status;
> /** Check the status of a Tx descriptor */
> eth_tx_descriptor_status_t tx_descriptor_status;
> + /** Pointer to PMD transmit mbufs reuse function */
> + eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
> + /** Pointer to PMD receive descriptors refill function */
> + eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
>
> /**
> * Device data that is shared between primary and secondary
> processes @@ -507,6 +511,10 @@ typedef void (*eth_rxq_info_get_t)(struct
> rte_eth_dev *dev, typedef void (*eth_txq_info_get_t)(struct rte_eth_dev
> *dev,
> uint16_t tx_queue_id, struct rte_eth_txq_info *qinfo);
>
> +typedef void (*eth_recycle_rxq_info_get_t)(struct rte_eth_dev *dev,
> + uint16_t rx_queue_id,
> + struct rte_eth_recycle_rxq_info *recycle_rxq_info);
> +
> typedef int (*eth_burst_mode_get_t)(struct rte_eth_dev *dev,
> uint16_t queue_id, struct rte_eth_burst_mode *mode);
>
> @@ -1250,6 +1258,8 @@ struct eth_dev_ops {
> eth_rxq_info_get_t rxq_info_get;
> /** Retrieve Tx queue information */
> eth_txq_info_get_t txq_info_get;
> + /** Retrieve mbufs recycle Rx queue information */
> + eth_recycle_rxq_info_get_t recycle_rxq_info_get;
> eth_burst_mode_get_t rx_burst_mode_get; /**< Get Rx burst
> mode */
> eth_burst_mode_get_t tx_burst_mode_get; /**< Get Tx burst
> mode */
> eth_fw_version_get_t fw_version_get; /**< Get firmware version
> */
> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c index
> 14ec8c6ccf..f8ab64f195 100644
> --- a/lib/ethdev/ethdev_private.c
> +++ b/lib/ethdev/ethdev_private.c
> @@ -277,6 +277,8 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> fpo->rx_queue_count = dev->rx_queue_count;
> fpo->rx_descriptor_status = dev->rx_descriptor_status;
> fpo->tx_descriptor_status = dev->tx_descriptor_status;
> + fpo->recycle_tx_mbufs_reuse = dev->recycle_tx_mbufs_reuse;
> + fpo->recycle_rx_descriptors_refill =
> +dev->recycle_rx_descriptors_refill;
>
> fpo->rxq.data = dev->data->rx_queues;
> fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index
> 0840d2b594..ea89a101a1 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -5876,6 +5876,37 @@ rte_eth_tx_queue_info_get(uint16_t port_id,
> uint16_t queue_id,
> return 0;
> }
>
> +int
> +rte_eth_recycle_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
> + struct rte_eth_recycle_rxq_info *recycle_rxq_info) {
> + struct rte_eth_dev *dev;
> +
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> + dev = &rte_eth_devices[port_id];
> +
> + if (queue_id >= dev->data->nb_rx_queues) {
> + RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n",
> queue_id);
> + return -EINVAL;
> + }
> +
> + if (dev->data->rx_queues == NULL ||
> + dev->data->rx_queues[queue_id] == NULL) {
> + RTE_ETHDEV_LOG(ERR,
> + "Rx queue %"PRIu16" of device with port_id=%"
> + PRIu16" has not been setup\n",
> + queue_id, port_id);
> + return -EINVAL;
> + }
> +
> + if (*dev->dev_ops->recycle_rxq_info_get == NULL)
> + return -ENOTSUP;
> +
> + dev->dev_ops->recycle_rxq_info_get(dev, queue_id, recycle_rxq_info);
> +
> + return 0;
> +}
> +
> int
> rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
> struct rte_eth_burst_mode *mode)
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index
> 04a2564f22..9dc5749d83 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -1820,6 +1820,30 @@ struct rte_eth_txq_info {
> uint8_t queue_state; /**< one of RTE_ETH_QUEUE_STATE_*. */
> } __rte_cache_min_aligned;
>
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice.
> + *
> + * Ethernet device Rx queue information structure for recycling mbufs.
> + * Used to retrieve Rx queue information when Tx queue reusing mbufs
> +and moving
> + * them into Rx mbuf ring.
> + */
> +struct rte_eth_recycle_rxq_info {
> + struct rte_mbuf **mbuf_ring; /**< mbuf ring of Rx queue. */
> + struct rte_mempool *mp; /**< mempool of Rx queue. */
> + uint16_t *refill_head; /**< head of Rx queue refilling mbufs. */
> + uint16_t *receive_tail; /**< tail of Rx queue receiving pkts. */
> + uint16_t mbuf_ring_size; /**< configured number of mbuf ring size.
> */
> + /**
> + * Requirement on mbuf refilling batch size of Rx mbuf ring.
> + * For some PMD drivers, the number of Rx mbuf ring refilling mbufs
> + * should be aligned with mbuf ring size, in order to simplify
> + * ring wrapping around.
> + * Value 0 means that PMD drivers have no requirement for this.
> + */
> + uint16_t refill_requirement;
> +} __rte_cache_min_aligned;
> +
> /* Generic Burst mode flag definition, values can be ORed. */
>
> /**
> @@ -4853,6 +4877,31 @@ int rte_eth_rx_queue_info_get(uint16_t port_id,
> uint16_t queue_id, int rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t
> queue_id,
> struct rte_eth_txq_info *qinfo);
>
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior
> +notice
> + *
> + * Retrieve information about given ports's Rx queue for recycling mbufs.
> + *
> + * @param port_id
> + * The port identifier of the Ethernet device.
> + * @param queue_id
> + * The Rx queue on the Ethernet devicefor which information
> + * will be retrieved.
> + * @param recycle_rxq_info
> + * A pointer to a structure of type *rte_eth_recycle_rxq_info* to be filled.
> + *
> + * @return
> + * - 0: Success
> + * - -ENODEV: If *port_id* is invalid.
> + * - -ENOTSUP: routine is not supported by the device PMD.
> + * - -EINVAL: The queue_id is out of range.
> + */
> +__rte_experimental
> +int rte_eth_recycle_rx_queue_info_get(uint16_t port_id,
> + uint16_t queue_id,
> + struct rte_eth_recycle_rxq_info *recycle_rxq_info);
> +
> /**
> * Retrieve information about the Rx packet burst mode.
> *
> @@ -6527,6 +6576,138 @@ rte_eth_tx_buffer(uint16_t port_id, uint16_t
> queue_id,
> return rte_eth_tx_buffer_flush(port_id, queue_id, buffer); }
>
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior
> +notice
> + *
> + * Recycle used mbufs from a transmit queue of an Ethernet device, and
> +move
> + * these mbufs into a mbuf ring for a receive queue of an Ethernet device.
> + * This can bypass mempool path to save CPU cycles.
> + *
> + * The rte_eth_recycle_mbufs() function loops, with rte_eth_rx_burst()
> +and
> + * rte_eth_tx_burst() functions, freeing Tx used mbufs and replenishing
> +Rx
> + * descriptors. The number of recycling mbufs depends on the request of
> +Rx mbuf
> + * ring, with the constraint of enough used mbufs from Tx mbuf ring.
> + *
> + * For each recycling mbufs, the rte_eth_recycle_mbufs() function
> +performs the
> + * following operations:
> + *
> + * - Copy used *rte_mbuf* buffer pointers from Tx mbuf ring into Rx mbuf
> ring.
> + *
> + * - Replenish the Rx descriptors with the recycling *rte_mbuf* mbufs freed
> + * from the Tx mbuf ring.
> + *
> + * This function spilts Rx and Tx path with different callback
> +functions. The
> + * callback function recycle_tx_mbufs_reuse is for Tx driver. The
> +callback
> + * function recycle_rx_descriptors_refill is for Rx driver.
> +rte_eth_recycle_mbufs()
> + * can support the case that Rx Ethernet device is different from Tx Ethernet
> device.
> + *
> + * It is the responsibility of users to select the Rx/Tx queue pair to
> +recycle
> + * mbufs. Before call this function, users must call
> +rte_eth_recycle_rxq_info_get
> + * function to retrieve selected Rx queue information.
> + * @see rte_eth_recycle_rxq_info_get, struct rte_eth_recycle_rxq_info
> + *
> + * Currently, the rte_eth_recycle_mbufs() function can support to feed
> +1 Rx queue from
> + * 2 Tx queues in the same thread. Do not pair the Rx queue and Tx
> +queue in different
> + * threads, in order to avoid memory error rewriting.
> + *
> + * @param rx_port_id
> + * Port identifying the receive side.
> + * @param rx_queue_id
> + * The index of the receive queue identifying the receive side.
> + * The value must be in the range [0, nb_rx_queue - 1] previously supplied
> + * to rte_eth_dev_configure().
> + * @param tx_port_id
> + * Port identifying the transmit side.
> + * @param tx_queue_id
> + * The index of the transmit queue identifying the transmit side.
> + * The value must be in the range [0, nb_tx_queue - 1] previously supplied
> + * to rte_eth_dev_configure().
> + * @param recycle_rxq_info
> + * A pointer to a structure of type *rte_eth_recycle_rxq_info* which contains
> + * the information of the Rx queue mbuf ring.
> + * @return
> + * The number of recycling mbufs.
> + */
> +__rte_experimental
> +static inline uint16_t
> +rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id,
> +		uint16_t tx_port_id, uint16_t tx_queue_id,
> +		struct rte_eth_recycle_rxq_info *recycle_rxq_info)
> +{
> + struct rte_eth_fp_ops *p;
> + void *qd;
> + uint16_t nb_mbufs;
> +
> +#ifdef RTE_ETHDEV_DEBUG_TX
> + if (tx_port_id >= RTE_MAX_ETHPORTS ||
> + tx_queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> +		RTE_ETHDEV_LOG(ERR,
> +			"Invalid tx_port_id=%u or tx_queue_id=%u\n",
> +			tx_port_id, tx_queue_id);
> + return 0;
> + }
> +#endif
> +
> + /* fetch pointer to queue data */
> + p = &rte_eth_fp_ops[tx_port_id];
> + qd = p->txq.data[tx_queue_id];
> +
> +#ifdef RTE_ETHDEV_DEBUG_TX
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(tx_port_id, 0);
> +
> + if (qd == NULL) {
> +		RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n",
> +			tx_queue_id, tx_port_id);
> + return 0;
> + }
> +#endif
> + if (p->recycle_tx_mbufs_reuse == NULL)
> + return 0;
> +
> + /* Copy used *rte_mbuf* buffer pointers from Tx mbuf ring
> + * into Rx mbuf ring.
> + */
> + nb_mbufs = p->recycle_tx_mbufs_reuse(qd, recycle_rxq_info);
> +
[Konstantin]
It is probably better to do that call after the rx_port_id, rx_queue_id,
etc. checks.
Otherwise, with some erroneous params we can get mbufs from the TXQ,
'rx_refill' would not happen, and we would return zero.
With that in place:
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>

[Feifei]
Thanks for the comments, I will put all checks before the function call.
> + /* If no recycling mbufs, return 0. */
> + if (nb_mbufs == 0)
> + return 0;
> +
> +#ifdef RTE_ETHDEV_DEBUG_RX
> + if (rx_port_id >= RTE_MAX_ETHPORTS ||
> + rx_queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> +		RTE_ETHDEV_LOG(ERR, "Invalid rx_port_id=%u or rx_queue_id=%u\n",
> +			rx_port_id, rx_queue_id);
> + return 0;
> + }
> +#endif
> +
> + /* fetch pointer to queue data */
> + p = &rte_eth_fp_ops[rx_port_id];
> + qd = p->rxq.data[rx_queue_id];
> +
> +#ifdef RTE_ETHDEV_DEBUG_RX
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(rx_port_id, 0);
> +
> + if (qd == NULL) {
> +		RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n",
> +			rx_queue_id, rx_port_id);
> + return 0;
> + }
> +#endif
> +
> + if (p->recycle_rx_descriptors_refill == NULL)
> + return 0;
> +
> +	/* Replenish the Rx descriptors with the recycled mbufs
> +	 * in the Rx mbuf ring.
> +	 */
> + p->recycle_rx_descriptors_refill(qd, nb_mbufs);
> +
> + return nb_mbufs;
> +}
> +
> /**
>  * @warning
>  * @b EXPERIMENTAL: this API may change without prior notice
> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> index 46e9721e07..a24ad7a6b2 100644
> --- a/lib/ethdev/rte_ethdev_core.h
> +++ b/lib/ethdev/rte_ethdev_core.h
> @@ -55,6 +55,13 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
> /** @internal Check the status of a Tx descriptor */
> typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
>
> +/** @internal Copy used mbufs from Tx mbuf ring into Rx mbuf ring */
> +typedef uint16_t (*eth_recycle_tx_mbufs_reuse_t)(void *txq,
> +		struct rte_eth_recycle_rxq_info *recycle_rxq_info);
> +
> +/** @internal Refill Rx descriptors with the recycling mbufs */
> +typedef void (*eth_recycle_rx_descriptors_refill_t)(void *rxq, uint16_t nb);
> +
> /**
> * @internal
>  * Structure used to hold opaque pointers to internal ethdev Rx/Tx
> @@ -83,15 +90,17 @@ struct rte_eth_fp_ops {
> * Rx fast-path functions and related data.
> * 64-bit systems: occupies first 64B line
> */
> + /** Rx queues data. */
> + struct rte_ethdev_qdata rxq;
> /** PMD receive function. */
> eth_rx_burst_t rx_pkt_burst;
> /** Get the number of used Rx descriptors. */
> eth_rx_queue_count_t rx_queue_count;
> /** Check the status of a Rx descriptor. */
> eth_rx_descriptor_status_t rx_descriptor_status;
> - /** Rx queues data. */
> - struct rte_ethdev_qdata rxq;
> - uintptr_t reserved1[3];
> + /** Refill Rx descriptors with the recycling mbufs. */
> + eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
> + uintptr_t reserved1[2];
> /**@}*/
>
> /**@{*/
> @@ -99,15 +108,17 @@ struct rte_eth_fp_ops {
> * Tx fast-path functions and related data.
> * 64-bit systems: occupies second 64B line
> */
> + /** Tx queues data. */
> + struct rte_ethdev_qdata txq;
> /** PMD transmit function. */
> eth_tx_burst_t tx_pkt_burst;
> /** PMD transmit prepare function. */
> eth_tx_prep_t tx_pkt_prepare;
> /** Check the status of a Tx descriptor. */
> eth_tx_descriptor_status_t tx_descriptor_status;
> - /** Tx queues data. */
> - struct rte_ethdev_qdata txq;
> - uintptr_t reserved2[3];
> + /** Copy used mbufs from Tx mbuf ring into Rx. */
> + eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
> + uintptr_t reserved2[2];
> /**@}*/
>
> } __rte_cache_aligned;
> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> index b965d6aa52..eec159dfdd 100644
> --- a/lib/ethdev/version.map
> +++ b/lib/ethdev/version.map
> @@ -312,6 +312,9 @@ EXPERIMENTAL {
> rte_flow_async_action_list_handle_query_update;
> rte_flow_async_actions_update;
> rte_flow_restore_info_dynflag;
> +
> + # added in 23.11
> + rte_eth_recycle_rx_queue_info_get;
> };
>
> INTERNAL {
> --
> 2.25.1
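Feifei's reply above agrees to hoist all parameter checks ahead of the Tx reuse call, as Konstantin suggested. A minimal standalone sketch of that validate-then-act ordering (using hypothetical stub lookup functions, not the real ethdev fast-path tables) might look like:

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_PORTS  4
#define MAX_QUEUES 4

static int dummy_txq, dummy_rxq;

/* Hypothetical stand-ins for the per-port queue-data lookups. */
static void *txq_data(uint16_t port, uint16_t queue)
{
	return (port == 0 && queue == 0) ? (void *)&dummy_txq : NULL;
}

static void *rxq_data(uint16_t port, uint16_t queue)
{
	return (port == 0 && queue == 0) ? (void *)&dummy_rxq : NULL;
}

/*
 * All Rx and Tx parameters are validated before the Tx "mbufs reuse"
 * side effect runs, so erroneous Rx parameters can no longer strand
 * mbufs already pulled from the Tx ring.
 */
static uint16_t recycle_checked(uint16_t rx_port, uint16_t rx_queue,
		uint16_t tx_port, uint16_t tx_queue)
{
	if (tx_port >= MAX_PORTS || tx_queue >= MAX_QUEUES)
		return 0;
	if (rx_port >= MAX_PORTS || rx_queue >= MAX_QUEUES)
		return 0;
	if (txq_data(tx_port, tx_queue) == NULL)
		return 0;
	if (rxq_data(rx_port, rx_queue) == NULL)
		return 0;
	/* Only now do the Tx reuse + Rx refill (simulated as a count). */
	return 32;
}
```

With erroneous Rx parameters the function now returns 0 before touching the Tx ring, which is exactly the failure mode the review flags.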
* Re: [PATCH v11 1/4] ethdev: add API for mbufs recycle mode
2023-08-22 7:27 3% ` [PATCH v11 1/4] ethdev: add API for mbufs recycle mode Feifei Wang
@ 2023-08-22 23:33 0% ` Konstantin Ananyev
2023-08-24 3:38 0% ` Feifei Wang
1 sibling, 0 replies; 200+ results
From: Konstantin Ananyev @ 2023-08-22 23:33 UTC (permalink / raw)
To: Feifei Wang, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Cc: dev, nd, Honnappa Nagarahalli, Ruifeng Wang, Morten Brørup
Hi Feifei,
> Add 'rte_eth_recycle_rx_queue_info_get' and 'rte_eth_recycle_mbufs'
> APIs to recycle used mbufs from a transmit queue of an Ethernet device,
> and move these mbufs into a mbuf ring for a receive queue of an Ethernet
> device. This can bypass mempool 'put/get' operations hence saving CPU
> cycles.
>
> For each recycling mbufs, the rte_eth_recycle_mbufs() function performs
> the following operations:
> - Copy used *rte_mbuf* buffer pointers from Tx mbuf ring into Rx mbuf
> ring.
> - Replenish the Rx descriptors with the recycling *rte_mbuf* mbufs freed
> from the Tx mbuf ring.
>
> Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Suggested-by: Ruifeng Wang <ruifeng.wang@arm.com>
> Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> ---
> doc/guides/rel_notes/release_23_11.rst | 15 ++
> lib/ethdev/ethdev_driver.h | 10 ++
> lib/ethdev/ethdev_private.c | 2 +
> lib/ethdev/rte_ethdev.c | 31 +++++
> lib/ethdev/rte_ethdev.h | 181 +++++++++++++++++++++++++
> lib/ethdev/rte_ethdev_core.h | 23 +++-
> lib/ethdev/version.map | 3 +
> 7 files changed, 259 insertions(+), 6 deletions(-)
>
> diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
> index 4411bb32c1..02ee3867a0 100644
> --- a/doc/guides/rel_notes/release_23_11.rst
> +++ b/doc/guides/rel_notes/release_23_11.rst
> @@ -72,6 +72,13 @@ New Features
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> +* **Add mbufs recycling support.**
> +
> + Added ``rte_eth_recycle_rx_queue_info_get`` and ``rte_eth_recycle_mbufs``
> + APIs which allow the user to copy used mbufs from the Tx mbuf ring
> + into the Rx mbuf ring. This feature supports the case that the Rx Ethernet
> + device is different from the Tx Ethernet device with respective driver
> + callback functions in ``rte_eth_recycle_mbufs``.
>
> Removed Items
> -------------
> @@ -123,6 +130,14 @@ ABI Changes
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> +* ethdev: Added ``recycle_tx_mbufs_reuse`` and ``recycle_rx_descriptors_refill``
> + fields to ``rte_eth_dev`` structure.
> +
> +* ethdev: Structure ``rte_eth_fp_ops`` was affected to add
> + ``recycle_tx_mbufs_reuse`` and ``recycle_rx_descriptors_refill``
> + fields, to move ``rxq`` and ``txq`` fields, to change the size of
> + ``reserved1`` and ``reserved2`` fields.
> +
>
> Known Issues
> ------------
> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> index 980f837ab6..b0c55a8523 100644
> --- a/lib/ethdev/ethdev_driver.h
> +++ b/lib/ethdev/ethdev_driver.h
> @@ -58,6 +58,10 @@ struct rte_eth_dev {
> eth_rx_descriptor_status_t rx_descriptor_status;
> /** Check the status of a Tx descriptor */
> eth_tx_descriptor_status_t tx_descriptor_status;
> + /** Pointer to PMD transmit mbufs reuse function */
> + eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
> + /** Pointer to PMD receive descriptors refill function */
> + eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
>
> /**
> * Device data that is shared between primary and secondary processes
> @@ -507,6 +511,10 @@ typedef void (*eth_rxq_info_get_t)(struct rte_eth_dev *dev,
> typedef void (*eth_txq_info_get_t)(struct rte_eth_dev *dev,
> uint16_t tx_queue_id, struct rte_eth_txq_info *qinfo);
>
> +typedef void (*eth_recycle_rxq_info_get_t)(struct rte_eth_dev *dev,
> + uint16_t rx_queue_id,
> + struct rte_eth_recycle_rxq_info *recycle_rxq_info);
> +
> typedef int (*eth_burst_mode_get_t)(struct rte_eth_dev *dev,
> uint16_t queue_id, struct rte_eth_burst_mode *mode);
>
> @@ -1250,6 +1258,8 @@ struct eth_dev_ops {
> eth_rxq_info_get_t rxq_info_get;
> /** Retrieve Tx queue information */
> eth_txq_info_get_t txq_info_get;
> + /** Retrieve mbufs recycle Rx queue information */
> + eth_recycle_rxq_info_get_t recycle_rxq_info_get;
> eth_burst_mode_get_t rx_burst_mode_get; /**< Get Rx burst mode */
> eth_burst_mode_get_t tx_burst_mode_get; /**< Get Tx burst mode */
> eth_fw_version_get_t fw_version_get; /**< Get firmware version */
> diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
> index 14ec8c6ccf..f8ab64f195 100644
> --- a/lib/ethdev/ethdev_private.c
> +++ b/lib/ethdev/ethdev_private.c
> @@ -277,6 +277,8 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
> fpo->rx_queue_count = dev->rx_queue_count;
> fpo->rx_descriptor_status = dev->rx_descriptor_status;
> fpo->tx_descriptor_status = dev->tx_descriptor_status;
> + fpo->recycle_tx_mbufs_reuse = dev->recycle_tx_mbufs_reuse;
> + fpo->recycle_rx_descriptors_refill = dev->recycle_rx_descriptors_refill;
>
> fpo->rxq.data = dev->data->rx_queues;
> fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 0840d2b594..ea89a101a1 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -5876,6 +5876,37 @@ rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
> return 0;
> }
>
> +int
> +rte_eth_recycle_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
> + struct rte_eth_recycle_rxq_info *recycle_rxq_info)
> +{
> + struct rte_eth_dev *dev;
> +
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> + dev = &rte_eth_devices[port_id];
> +
> + if (queue_id >= dev->data->nb_rx_queues) {
> + RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id);
> + return -EINVAL;
> + }
> +
> + if (dev->data->rx_queues == NULL ||
> + dev->data->rx_queues[queue_id] == NULL) {
> + RTE_ETHDEV_LOG(ERR,
> + "Rx queue %"PRIu16" of device with port_id=%"
> + PRIu16" has not been setup\n",
> + queue_id, port_id);
> + return -EINVAL;
> + }
> +
> + if (*dev->dev_ops->recycle_rxq_info_get == NULL)
> + return -ENOTSUP;
> +
> + dev->dev_ops->recycle_rxq_info_get(dev, queue_id, recycle_rxq_info);
> +
> + return 0;
> +}
> +
> int
> rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
> struct rte_eth_burst_mode *mode)
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index 04a2564f22..9dc5749d83 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -1820,6 +1820,30 @@ struct rte_eth_txq_info {
> uint8_t queue_state; /**< one of RTE_ETH_QUEUE_STATE_*. */
> } __rte_cache_min_aligned;
>
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice.
> + *
> + * Ethernet device Rx queue information structure for recycling mbufs.
> + * Used to retrieve Rx queue information when the Tx queue is reusing mbufs
> + * and moving them into the Rx mbuf ring.
> + */
> +struct rte_eth_recycle_rxq_info {
> + struct rte_mbuf **mbuf_ring; /**< mbuf ring of Rx queue. */
> + struct rte_mempool *mp; /**< mempool of Rx queue. */
> + uint16_t *refill_head; /**< head of Rx queue refilling mbufs. */
> + uint16_t *receive_tail; /**< tail of Rx queue receiving pkts. */
> + uint16_t mbuf_ring_size; /**< configured number of mbuf ring size. */
> + /**
> + * Requirement on mbuf refilling batch size of Rx mbuf ring.
> + * For some PMD drivers, the number of Rx mbuf ring refilling mbufs
> + * should be aligned with mbuf ring size, in order to simplify
> + * ring wrapping around.
> + * Value 0 means that PMD drivers have no requirement for this.
> + */
> + uint16_t refill_requirement;
> +} __rte_cache_min_aligned;
> +
> /* Generic Burst mode flag definition, values can be ORed. */
>
> /**
> @@ -4853,6 +4877,31 @@ int rte_eth_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
> int rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
> struct rte_eth_txq_info *qinfo);
>
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> + *
> + * Retrieve information about a given port's Rx queue for recycling mbufs.
> + *
> + * @param port_id
> + * The port identifier of the Ethernet device.
> + * @param queue_id
> + * The Rx queue on the Ethernet device for which information
> + * will be retrieved.
> + * @param recycle_rxq_info
> + * A pointer to a structure of type *rte_eth_recycle_rxq_info* to be filled.
> + *
> + * @return
> + * - 0: Success
> + * - -ENODEV: If *port_id* is invalid.
> + * - -ENOTSUP: routine is not supported by the device PMD.
> + * - -EINVAL: The queue_id is out of range.
> + */
> +__rte_experimental
> +int rte_eth_recycle_rx_queue_info_get(uint16_t port_id,
> + uint16_t queue_id,
> + struct rte_eth_recycle_rxq_info *recycle_rxq_info);
> +
> /**
> * Retrieve information about the Rx packet burst mode.
> *
> @@ -6527,6 +6576,138 @@ rte_eth_tx_buffer(uint16_t port_id, uint16_t queue_id,
> return rte_eth_tx_buffer_flush(port_id, queue_id, buffer);
> }
>
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> + *
> + * Recycle used mbufs from a transmit queue of an Ethernet device, and move
> + * these mbufs into a mbuf ring for a receive queue of an Ethernet device.
> + * This can bypass the mempool path to save CPU cycles.
> + *
> + * The rte_eth_recycle_mbufs() function loops, with the rte_eth_rx_burst() and
> + * rte_eth_tx_burst() functions, freeing used Tx mbufs and replenishing Rx
> + * descriptors. The number of recycled mbufs depends on the demand of the Rx
> + * mbuf ring, constrained by the number of used mbufs available in the Tx
> + * mbuf ring.
> + *
> + * For each recycled mbuf, the rte_eth_recycle_mbufs() function performs the
> + * following operations:
> + *
> + * - Copy used *rte_mbuf* buffer pointers from the Tx mbuf ring into the Rx
> + *   mbuf ring.
> + *
> + * - Replenish the Rx descriptors with the recycled *rte_mbuf* mbufs freed
> + *   from the Tx mbuf ring.
> + *
> + * This function splits the Rx and Tx path with different callback functions.
> + * The callback function recycle_tx_mbufs_reuse is for the Tx driver. The
> + * callback function recycle_rx_descriptors_refill is for the Rx driver.
> + * rte_eth_recycle_mbufs() can support the case that the Rx Ethernet device is
> + * different from the Tx Ethernet device.
> + *
> + * It is the responsibility of users to select the Rx/Tx queue pair to recycle
> + * mbufs. Before calling this function, users must call the
> + * rte_eth_recycle_rxq_info_get function to retrieve the selected Rx queue
> + * information.
> + * @see rte_eth_recycle_rxq_info_get, struct rte_eth_recycle_rxq_info
> + *
> + * Currently, the rte_eth_recycle_mbufs() function can support feeding 1 Rx
> + * queue from 2 Tx queues in the same thread. Do not pair the Rx queue and Tx
> + * queue in different threads, in order to avoid memory corruption.
> + *
> + * @param rx_port_id
> + * Port identifying the receive side.
> + * @param rx_queue_id
> + * The index of the receive queue identifying the receive side.
> + * The value must be in the range [0, nb_rx_queue - 1] previously supplied
> + * to rte_eth_dev_configure().
> + * @param tx_port_id
> + * Port identifying the transmit side.
> + * @param tx_queue_id
> + * The index of the transmit queue identifying the transmit side.
> + * The value must be in the range [0, nb_tx_queue - 1] previously supplied
> + * to rte_eth_dev_configure().
> + * @param recycle_rxq_info
> + * A pointer to a structure of type *rte_eth_recycle_rxq_info* which contains
> + * the information of the Rx queue mbuf ring.
> + * @return
> + * The number of recycling mbufs.
> + */
> +__rte_experimental
> +static inline uint16_t
> +rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id,
> + uint16_t tx_port_id, uint16_t tx_queue_id,
> + struct rte_eth_recycle_rxq_info *recycle_rxq_info)
> +{
> + struct rte_eth_fp_ops *p;
> + void *qd;
> + uint16_t nb_mbufs;
> +
> +#ifdef RTE_ETHDEV_DEBUG_TX
> + if (tx_port_id >= RTE_MAX_ETHPORTS ||
> + tx_queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> + RTE_ETHDEV_LOG(ERR,
> + "Invalid tx_port_id=%u or tx_queue_id=%u\n",
> + tx_port_id, tx_queue_id);
> + return 0;
> + }
> +#endif
> +
> + /* fetch pointer to queue data */
> + p = &rte_eth_fp_ops[tx_port_id];
> + qd = p->txq.data[tx_queue_id];
> +
> +#ifdef RTE_ETHDEV_DEBUG_TX
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(tx_port_id, 0);
> +
> + if (qd == NULL) {
> + RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n",
> + tx_queue_id, tx_port_id);
> + return 0;
> + }
> +#endif
> + if (p->recycle_tx_mbufs_reuse == NULL)
> + return 0;
> +
> + /* Copy used *rte_mbuf* buffer pointers from Tx mbuf ring
> + * into Rx mbuf ring.
> + */
> + nb_mbufs = p->recycle_tx_mbufs_reuse(qd, recycle_rxq_info);
It is probably better to do that call after the rx_port_id, rx_queue_id,
etc. checks.
Otherwise, with some erroneous params we can get mbufs from the TXQ,
'rx_refill' would not happen, and we would return zero.
With that in place:
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
> +
> + /* If no recycling mbufs, return 0. */
> + if (nb_mbufs == 0)
> + return 0;
> +
> +#ifdef RTE_ETHDEV_DEBUG_RX
> + if (rx_port_id >= RTE_MAX_ETHPORTS ||
> + rx_queue_id >= RTE_MAX_QUEUES_PER_PORT) {
> + RTE_ETHDEV_LOG(ERR, "Invalid rx_port_id=%u or rx_queue_id=%u\n",
> + rx_port_id, rx_queue_id);
> + return 0;
> + }
> +#endif
> +
> + /* fetch pointer to queue data */
> + p = &rte_eth_fp_ops[rx_port_id];
> + qd = p->rxq.data[rx_queue_id];
> +
> +#ifdef RTE_ETHDEV_DEBUG_RX
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(rx_port_id, 0);
> +
> + if (qd == NULL) {
> + RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n",
> + rx_queue_id, rx_port_id);
> + return 0;
> + }
> +#endif
> +
> + if (p->recycle_rx_descriptors_refill == NULL)
> + return 0;
> +
> +	/* Replenish the Rx descriptors with the recycled mbufs
> +	 * in the Rx mbuf ring.
> +	 */
> + p->recycle_rx_descriptors_refill(qd, nb_mbufs);
> +
> + return nb_mbufs;
> +}
> +
> /**
> * @warning
> * @b EXPERIMENTAL: this API may change without prior notice
> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> index 46e9721e07..a24ad7a6b2 100644
> --- a/lib/ethdev/rte_ethdev_core.h
> +++ b/lib/ethdev/rte_ethdev_core.h
> @@ -55,6 +55,13 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
> /** @internal Check the status of a Tx descriptor */
> typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
>
> +/** @internal Copy used mbufs from Tx mbuf ring into Rx mbuf ring */
> +typedef uint16_t (*eth_recycle_tx_mbufs_reuse_t)(void *txq,
> + struct rte_eth_recycle_rxq_info *recycle_rxq_info);
> +
> +/** @internal Refill Rx descriptors with the recycling mbufs */
> +typedef void (*eth_recycle_rx_descriptors_refill_t)(void *rxq, uint16_t nb);
> +
> /**
> * @internal
> * Structure used to hold opaque pointers to internal ethdev Rx/Tx
> @@ -83,15 +90,17 @@ struct rte_eth_fp_ops {
> * Rx fast-path functions and related data.
> * 64-bit systems: occupies first 64B line
> */
> + /** Rx queues data. */
> + struct rte_ethdev_qdata rxq;
> /** PMD receive function. */
> eth_rx_burst_t rx_pkt_burst;
> /** Get the number of used Rx descriptors. */
> eth_rx_queue_count_t rx_queue_count;
> /** Check the status of a Rx descriptor. */
> eth_rx_descriptor_status_t rx_descriptor_status;
> - /** Rx queues data. */
> - struct rte_ethdev_qdata rxq;
> - uintptr_t reserved1[3];
> + /** Refill Rx descriptors with the recycling mbufs. */
> + eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
> + uintptr_t reserved1[2];
> /**@}*/
>
> /**@{*/
> @@ -99,15 +108,17 @@ struct rte_eth_fp_ops {
> * Tx fast-path functions and related data.
> * 64-bit systems: occupies second 64B line
> */
> + /** Tx queues data. */
> + struct rte_ethdev_qdata txq;
> /** PMD transmit function. */
> eth_tx_burst_t tx_pkt_burst;
> /** PMD transmit prepare function. */
> eth_tx_prep_t tx_pkt_prepare;
> /** Check the status of a Tx descriptor. */
> eth_tx_descriptor_status_t tx_descriptor_status;
> - /** Tx queues data. */
> - struct rte_ethdev_qdata txq;
> - uintptr_t reserved2[3];
> + /** Copy used mbufs from Tx mbuf ring into Rx. */
> + eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
> + uintptr_t reserved2[2];
> /**@}*/
>
> } __rte_cache_aligned;
> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> index b965d6aa52..eec159dfdd 100644
> --- a/lib/ethdev/version.map
> +++ b/lib/ethdev/version.map
> @@ -312,6 +312,9 @@ EXPERIMENTAL {
> rte_flow_async_action_list_handle_query_update;
> rte_flow_async_actions_update;
> rte_flow_restore_info_dynflag;
> +
> + # added in 23.11
> + rte_eth_recycle_rx_queue_info_get;
> };
>
> INTERNAL {
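The core of the recycle step described in the patch above is a plain pointer copy between rings, bypassing mempool put/get, subject to the refill_requirement batch constraint in ``rte_eth_recycle_rxq_info``. A self-contained toy model of that step (illustrative names, not the real driver callbacks; rings are modeled as flat arrays without wrap-around) could be:

```c
#include <stdint.h>

/*
 * Toy model of the recycle step: copy "used" buffer pointers from a Tx
 * ring into an Rx ring, with no mempool put/get.  refill_requirement
 * mirrors the field in struct rte_eth_recycle_rxq_info: if non-zero,
 * the refill count must be a multiple of it; 0 means no constraint.
 */
static uint16_t
recycle_sim(void **tx_ring, uint16_t tx_used,
		void **rx_ring, uint16_t rx_need,
		uint16_t refill_requirement)
{
	/* Recycle no more than the Rx side wants or the Tx side has. */
	uint16_t n = tx_used < rx_need ? tx_used : rx_need;

	/* Round down to the PMD's required refill batch size, if any. */
	if (refill_requirement != 0)
		n -= n % refill_requirement;

	for (uint16_t i = 0; i < n; i++)
		rx_ring[i] = tx_ring[i];	/* pointer copy only */

	return n;
}
```

For example, with 10 used Tx mbufs, an Rx demand of 32, and a refill requirement of 4, only 8 mbufs are recycled, which matches the "aligned with mbuf ring size" simplification the struct documentation describes.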
* [PATCH v6 3/6] eal: add rte atomic qualifier with casts
2023-08-22 21:00 3% ` [PATCH v6 0/6] rte atomics API for optional stdatomic Tyler Retzlaff
@ 2023-08-22 21:00 2% ` Tyler Retzlaff
0 siblings, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-08-22 21:00 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Introduce __rte_atomic qualifying casts in rte_optional atomics inline
functions to prevent cascading the need to pass __rte_atomic qualified
arguments.
Warning: this is really implementation-dependent and is being done
temporarily to avoid having to convert more of the libraries and tests in
DPDK in the initial series that introduces the API. The consequence of
assuming the ABI of the types in question is ``the same'' is only a risk
that may be realized when enable_stdatomic=true.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
---
lib/eal/include/generic/rte_atomic.h | 48 ++++++++++++++++++++++++------------
lib/eal/include/generic/rte_pause.h | 9 ++++---
lib/eal/x86/rte_power_intrinsics.c | 7 +++---
3 files changed, 42 insertions(+), 22 deletions(-)
diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
index 5940e7e..709bf15 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -274,7 +274,8 @@
static inline void
rte_atomic16_add(rte_atomic16_t *v, int16_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile __rte_atomic int16_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
/**
@@ -288,7 +289,8 @@
static inline void
rte_atomic16_sub(rte_atomic16_t *v, int16_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile __rte_atomic int16_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
/**
@@ -341,7 +343,8 @@
static inline int16_t
rte_atomic16_add_return(rte_atomic16_t *v, int16_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int16_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
/**
@@ -361,7 +364,8 @@
static inline int16_t
rte_atomic16_sub_return(rte_atomic16_t *v, int16_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int16_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
/**
@@ -380,7 +384,8 @@
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int16_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -400,7 +405,8 @@ static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int16_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -553,7 +559,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_add(rte_atomic32_t *v, int32_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile __rte_atomic int32_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
/**
@@ -567,7 +574,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_sub(rte_atomic32_t *v, int32_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile __rte_atomic int32_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
/**
@@ -620,7 +628,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_add_return(rte_atomic32_t *v, int32_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int32_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
/**
@@ -640,7 +649,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_sub_return(rte_atomic32_t *v, int32_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int32_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
/**
@@ -659,7 +669,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int32_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -679,7 +690,8 @@ static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int32_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -885,7 +897,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile __rte_atomic int64_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
#endif
@@ -904,7 +917,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile __rte_atomic int64_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
#endif
@@ -962,7 +976,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int64_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
#endif
@@ -986,7 +1001,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int64_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
#endif
diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
index 256309e..b7b059f 100644
--- a/lib/eal/include/generic/rte_pause.h
+++ b/lib/eal/include/generic/rte_pause.h
@@ -81,7 +81,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile __rte_atomic uint16_t *)addr, memorder)
+ != expected)
rte_pause();
}
@@ -91,7 +92,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile __rte_atomic uint32_t *)addr, memorder)
+ != expected)
rte_pause();
}
@@ -101,7 +103,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile __rte_atomic uint64_t *)addr, memorder)
+ != expected)
rte_pause();
}
diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index cf70e33..fb8539f 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -23,9 +23,10 @@
uint64_t val;
/* trigger a write but don't change the value */
- val = rte_atomic_load_explicit((volatile uint64_t *)addr, rte_memory_order_relaxed);
- rte_atomic_compare_exchange_strong_explicit((volatile uint64_t *)addr, &val, val,
- rte_memory_order_relaxed, rte_memory_order_relaxed);
+ val = rte_atomic_load_explicit((volatile __rte_atomic uint64_t *)addr,
+ rte_memory_order_relaxed);
+ rte_atomic_compare_exchange_strong_explicit((volatile __rte_atomic uint64_t *)addr,
+ &val, val, rte_memory_order_relaxed, rte_memory_order_relaxed);
}
static bool wait_supported;
--
1.8.3.1
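The qualifying-cast pattern in the patch above can be reproduced in a few lines of plain C11, independent of DPDK. This sketch (toy type names, not the real rte_atomic16_t API) shows a plain volatile counter accessed through an _Atomic-qualified pointer cast; as the commit message warns, it relies on the implementation-defined assumption that the atomic and non-atomic representations match:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Toy counterpart of rte_atomic16_t: a plain (non-_Atomic) counter. */
typedef struct {
	volatile int16_t cnt;
} toy_atomic16_t;

/*
 * Same shape as the patched rte_atomic16_add_return(): the cast to
 * (volatile _Atomic int16_t *) stands in for the __rte_atomic
 * qualifying cast, so callers need not pass _Atomic-qualified
 * arguments themselves.
 */
static inline int16_t
toy_atomic16_add_return(toy_atomic16_t *v, int16_t inc)
{
	return atomic_fetch_add_explicit(
		(volatile _Atomic int16_t *)&v->cnt, inc,
		memory_order_seq_cst) + inc;
}
```

The cast keeps the legacy struct layout unchanged while routing the access through the C11 generic atomic functions, which is exactly why it only cascades when stdatomic is actually enabled.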
* [PATCH v6 0/6] rte atomics API for optional stdatomic
2023-08-11 1:31 4% [PATCH 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (4 preceding siblings ...)
2023-08-17 21:42 3% ` [PATCH v5 0/6] optional rte optional stdatomics API Tyler Retzlaff
@ 2023-08-22 21:00 3% ` Tyler Retzlaff
2023-08-22 21:00 2% ` [PATCH v6 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
5 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-08-22 21:00 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
This series introduces API additions prefixed in the rte namespace that allow
the optional use of C11 stdatomic.h via enable_stdatomics=true. For targets
built with enable_stdatomics=false, no functional change is intended.
Be aware this does not contain all changes needed to use stdatomics across the
DPDK tree; it only introduces the minimum to allow the option to be used, which
is a prerequisite for a clean CI (probably using clang) that can be run
with enable_stdatomics=true.
It is planned that subsequent series will be introduced per lib/driver as
appropriate to further enable stdatomics use when enable_stdatomics=true.
Notes:
* Additional libraries beyond EAL make atomics use visible across the
API/ABI surface; they will be converted in the subsequent series.
* The eal: add rte atomic qualifier with casts patch needs some discussion
as to whether or not the legacy rte_atomic APIs should be converted to
work with enable_stdatomic=true. For now, some implementation-dependent
casts are used to prevent cascading / having to convert too much in
the initial series.
* Windows will obviously need complete conversion of libraries including
atomics that are not crossing API/ABI boundaries. Those conversions will be
introduced in separate series alongside the existing msvc series.
Please keep in mind we would like to prioritize the review / acceptance of
this patch since it needs to be completed in the 23.11 merge window.
Thank you all for the discussion that led to the formation of this series.
v6:
* Adjust checkpatches to warn about use of __rte_atomic_thread_fence
and suggest use of rte_atomic_thread_fence. Use the existing, more
generic check for __atomic_xxx to catch use of __atomic_thread_fence
and recommend rte_atomic_xxx.
v5:
* Add RTE_ATOMIC to doxygen configuration PREDEFINED macros list to
fix documentation generation failure
* Fix two typos in expansion of C11 atomics macros strong -> weak and
add missing _explicit
* Adjust devtools/checkpatches messages based on feedback. I have chosen
not to try and catch use of C11 atomics or _Atomic since using those
directly will be picked up by existing CI passes via compilation
errors where enable_stdatomic=false (the default for most platforms)
v4:
* Move the definition of #define RTE_ATOMIC(type) to patch 1 where it
belongs (a mistake in v3)
* Provide comments for both RTE_ATOMIC and __rte_atomic macros indicating
their use as specified or qualified contexts.
v3:
* Remove comments from APIs mentioning the mapping to C++ memory model
memory orders
* Introduce and use new macro RTE_ATOMIC(type) to be used in contexts
where _Atomic is used as a type specifier to declare variables. The
macro allows more clarity about what the atomic type being specified
is. e.g. _Atomic(T *) vs _Atomic(T) it is easier to understand that
the former is an atomic pointer type and the latter is an atomic
type. it also has the benefit of (in the future) being interoperable
with c++23 syntactically
note: Morten i have retained your 'reviewed-by' tags if you disagree
given the changes in the above version please indicate as such but
i believe the changes are in the spirit of the feedback you provided
v2:
* Wrap meson_options.txt option description to newline and indent to
be consistent with other options.
* Provide separate typedef of rte_memory_order for enable_stdatomic=true
VS enable_stdatomic=false instead of a single typedef to int
note: slight tweak to reviewers feedback i've chosen to use a typedef
for both enable_stdatomic={true,false} (just seemed more consistent)
* Bring in assert.h and use static_assert macro instead of _Static_assert
keyword to better interoperate with c/c++
* Directly include rte_stdatomic.h where into other places it is consumed
instead of hacking it globally into rte_config.h
* Provide and use __rte_atomic_thread_fence to allow conditional expansion
within the body of existing rte_atomic_thread_fence inline function to
maintain per-arch optimizations when enable_stdatomic=false
Tyler Retzlaff (6):
eal: provide rte stdatomics optional atomics API
eal: adapt EAL to present rte optional atomics API
eal: add rte atomic qualifier with casts
distributor: adapt for EAL optional atomics API changes
bpf: adapt for EAL optional atomics API changes
devtools: forbid new direct use of GCC atomic builtins
app/test/test_mcslock.c | 6 +-
config/meson.build | 1 +
devtools/checkpatches.sh | 12 +-
doc/api/doxy-api.conf.in | 1 +
lib/bpf/bpf_pkt.c | 6 +-
lib/distributor/distributor_private.h | 2 +-
lib/distributor/rte_distributor_single.c | 44 +++----
lib/eal/arm/include/rte_atomic_32.h | 4 +-
lib/eal/arm/include/rte_atomic_64.h | 36 +++---
lib/eal/arm/include/rte_pause_64.h | 26 ++--
lib/eal/arm/rte_power_intrinsics.c | 8 +-
lib/eal/common/eal_common_trace.c | 16 +--
lib/eal/include/generic/rte_atomic.h | 67 +++++++----
lib/eal/include/generic/rte_pause.h | 50 ++++----
lib/eal/include/generic/rte_rwlock.h | 48 ++++----
lib/eal/include/generic/rte_spinlock.h | 20 ++--
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_mcslock.h | 51 ++++----
lib/eal/include/rte_pflock.h | 25 ++--
lib/eal/include/rte_seqcount.h | 19 +--
lib/eal/include/rte_stdatomic.h | 198 +++++++++++++++++++++++++++++++
lib/eal/include/rte_ticketlock.h | 43 +++----
lib/eal/include/rte_trace_point.h | 5 +-
lib/eal/loongarch/include/rte_atomic.h | 4 +-
lib/eal/ppc/include/rte_atomic.h | 54 ++++-----
lib/eal/riscv/include/rte_atomic.h | 4 +-
lib/eal/x86/include/rte_atomic.h | 8 +-
lib/eal/x86/include/rte_spinlock.h | 2 +-
lib/eal/x86/rte_power_intrinsics.c | 7 +-
meson_options.txt | 2 +
30 files changed, 501 insertions(+), 269 deletions(-)
create mode 100644 lib/eal/include/rte_stdatomic.h
--
1.8.3.1
* Minutes of Technical Board Meeting, 2023-August-9
@ 2023-08-22 10:23 3% Jerin Jacob Kollanukkaran
0 siblings, 0 replies; 200+ results
From: Jerin Jacob Kollanukkaran @ 2023-08-22 10:23 UTC (permalink / raw)
To: dev, techboard
Minutes of Technical Board Meeting, 2023-August-9
Members Attending
-----------------
-Aaron
-Bruce
-Hemant
-Honnappa
-Jerin (Chair)
-Konstantin
-Stephen
-Thomas
-Tyler
-Morten
NOTE: The technical board meets every second Wednesday at https://meet.jit.si/DPDK at 3 pm UTC.
Meetings are public, and DPDK community members are welcome to attend.
NOTE: Next meeting will be on Wednesday 2023-August-23 @ 3 pm UTC, and will be chaired by Honnappa
Dublin DPDK Summit
------------------
Collected various topics to be discussed at the tech-board meeting on the evening of 11th September 2023.
1) General tech-board discussion on the
- Process of adding/removing tech-board members
- Expected size of tech-board: Do we want more people? Fewer? Grow the TB or shrink it?
- Specific individuals who wish to join or leave the TB within the next year?
- Efficient ways to split the work up etc.
2) Discussion led by Aaron on the possible automation of patches in the future - pros? cons? Worth implementing? Best process if this route is pursued.
3) Internal review process: how to accelerate and/or improve this - more comments, addressing/responding to these comments, etc.
4) Improvements in use of lab resources by maintainers and committers - additional branches, etc.
DPDK C11 atomics integration challenges
----------------------------------------
1) Discussed various issues with C11 atomics integration, such as:
- rte_ring performance issue with C11 atomics
- ABI compatibility issues in public headers due to atomics size differences
- c++23 clang distribution issues
- In general, agreed to have a DPDK-specific prefix for better integration of atomics primitives by providing room to
accommodate compiler-specific differences; Tyler Retzlaff will send an RFC for the same.
RFC is at https://patches.dpdk.org/project/dpdk/list/?series=29255
* [PATCH v11 1/4] ethdev: add API for mbufs recycle mode
@ 2023-08-22 7:27 3% ` Feifei Wang
2023-08-22 23:33 0% ` Konstantin Ananyev
2023-08-24 3:38 0% ` Feifei Wang
0 siblings, 2 replies; 200+ results
From: Feifei Wang @ 2023-08-22 7:27 UTC (permalink / raw)
To: Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Cc: dev, nd, Feifei Wang, Honnappa Nagarahalli, Ruifeng Wang,
Morten Brørup
Add 'rte_eth_recycle_rx_queue_info_get' and 'rte_eth_recycle_mbufs'
APIs to recycle used mbufs from a transmit queue of an Ethernet device,
and move these mbufs into a mbuf ring for a receive queue of an Ethernet
device. This can bypass mempool 'put/get' operations hence saving CPU
cycles.
For each recycled mbuf, the rte_eth_recycle_mbufs() function performs
the following operations:
- Copy used *rte_mbuf* buffer pointers from Tx mbuf ring into Rx mbuf
ring.
- Replenish the Rx descriptors with the recycling *rte_mbuf* mbufs freed
from the Tx mbuf ring.
Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Suggested-by: Ruifeng Wang <ruifeng.wang@arm.com>
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
doc/guides/rel_notes/release_23_11.rst | 15 ++
lib/ethdev/ethdev_driver.h | 10 ++
lib/ethdev/ethdev_private.c | 2 +
lib/ethdev/rte_ethdev.c | 31 +++++
lib/ethdev/rte_ethdev.h | 181 +++++++++++++++++++++++++
lib/ethdev/rte_ethdev_core.h | 23 +++-
lib/ethdev/version.map | 3 +
7 files changed, 259 insertions(+), 6 deletions(-)
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 4411bb32c1..02ee3867a0 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -72,6 +72,13 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Add mbufs recycling support.**
+
+ Added ``rte_eth_recycle_rx_queue_info_get`` and ``rte_eth_recycle_mbufs``
+ APIs which allow the user to copy used mbufs from the Tx mbuf ring
+ into the Rx mbuf ring. This feature supports the case that the Rx Ethernet
+ device is different from the Tx Ethernet device with respective driver
+ callback functions in ``rte_eth_recycle_mbufs``.
Removed Items
-------------
@@ -123,6 +130,14 @@ ABI Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* ethdev: Added ``recycle_tx_mbufs_reuse`` and ``recycle_rx_descriptors_refill``
+ fields to ``rte_eth_dev`` structure.
+
+* ethdev: Structure ``rte_eth_fp_ops`` was modified to add
+ ``recycle_tx_mbufs_reuse`` and ``recycle_rx_descriptors_refill``
+ fields, to move ``rxq`` and ``txq`` fields, to change the size of
+ ``reserved1`` and ``reserved2`` fields.
+
Known Issues
------------
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 980f837ab6..b0c55a8523 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -58,6 +58,10 @@ struct rte_eth_dev {
eth_rx_descriptor_status_t rx_descriptor_status;
/** Check the status of a Tx descriptor */
eth_tx_descriptor_status_t tx_descriptor_status;
+ /** Pointer to PMD transmit mbufs reuse function */
+ eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
+ /** Pointer to PMD receive descriptors refill function */
+ eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
/**
* Device data that is shared between primary and secondary processes
@@ -507,6 +511,10 @@ typedef void (*eth_rxq_info_get_t)(struct rte_eth_dev *dev,
typedef void (*eth_txq_info_get_t)(struct rte_eth_dev *dev,
uint16_t tx_queue_id, struct rte_eth_txq_info *qinfo);
+typedef void (*eth_recycle_rxq_info_get_t)(struct rte_eth_dev *dev,
+ uint16_t rx_queue_id,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info);
+
typedef int (*eth_burst_mode_get_t)(struct rte_eth_dev *dev,
uint16_t queue_id, struct rte_eth_burst_mode *mode);
@@ -1250,6 +1258,8 @@ struct eth_dev_ops {
eth_rxq_info_get_t rxq_info_get;
/** Retrieve Tx queue information */
eth_txq_info_get_t txq_info_get;
+ /** Retrieve mbufs recycle Rx queue information */
+ eth_recycle_rxq_info_get_t recycle_rxq_info_get;
eth_burst_mode_get_t rx_burst_mode_get; /**< Get Rx burst mode */
eth_burst_mode_get_t tx_burst_mode_get; /**< Get Tx burst mode */
eth_fw_version_get_t fw_version_get; /**< Get firmware version */
diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 14ec8c6ccf..f8ab64f195 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -277,6 +277,8 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
fpo->rx_queue_count = dev->rx_queue_count;
fpo->rx_descriptor_status = dev->rx_descriptor_status;
fpo->tx_descriptor_status = dev->tx_descriptor_status;
+ fpo->recycle_tx_mbufs_reuse = dev->recycle_tx_mbufs_reuse;
+ fpo->recycle_rx_descriptors_refill = dev->recycle_rx_descriptors_refill;
fpo->rxq.data = dev->data->rx_queues;
fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 0840d2b594..ea89a101a1 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -5876,6 +5876,37 @@ rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
return 0;
}
+int
+rte_eth_recycle_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info)
+{
+ struct rte_eth_dev *dev;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
+
+ if (queue_id >= dev->data->nb_rx_queues) {
+ RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id);
+ return -EINVAL;
+ }
+
+ if (dev->data->rx_queues == NULL ||
+ dev->data->rx_queues[queue_id] == NULL) {
+ RTE_ETHDEV_LOG(ERR,
+ "Rx queue %"PRIu16" of device with port_id=%"
+ PRIu16" has not been setup\n",
+ queue_id, port_id);
+ return -EINVAL;
+ }
+
+ if (*dev->dev_ops->recycle_rxq_info_get == NULL)
+ return -ENOTSUP;
+
+ dev->dev_ops->recycle_rxq_info_get(dev, queue_id, recycle_rxq_info);
+
+ return 0;
+}
+
int
rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
struct rte_eth_burst_mode *mode)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 04a2564f22..9dc5749d83 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1820,6 +1820,30 @@ struct rte_eth_txq_info {
uint8_t queue_state; /**< one of RTE_ETH_QUEUE_STATE_*. */
} __rte_cache_min_aligned;
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice.
+ *
+ * Ethernet device Rx queue information structure for recycling mbufs.
+ * Used to retrieve Rx queue information when the Tx queue is reusing mbufs
+ * and moving them into the Rx mbuf ring.
+ */
+struct rte_eth_recycle_rxq_info {
+ struct rte_mbuf **mbuf_ring; /**< mbuf ring of Rx queue. */
+ struct rte_mempool *mp; /**< mempool of Rx queue. */
+ uint16_t *refill_head; /**< head of Rx queue refilling mbufs. */
+ uint16_t *receive_tail; /**< tail of Rx queue receiving pkts. */
+ uint16_t mbuf_ring_size; /**< configured number of mbuf ring size. */
+ /**
+ * Requirement on mbuf refilling batch size of Rx mbuf ring.
+ * For some PMD drivers, the number of Rx mbuf ring refilling mbufs
+ * should be aligned with mbuf ring size, in order to simplify
+ * ring wrapping around.
+ * Value 0 means that PMD drivers have no requirement for this.
+ */
+ uint16_t refill_requirement;
+} __rte_cache_min_aligned;
+
/* Generic Burst mode flag definition, values can be ORed. */
/**
@@ -4853,6 +4877,31 @@ int rte_eth_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
int rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
struct rte_eth_txq_info *qinfo);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Retrieve information about a given port's Rx queue for recycling mbufs.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param queue_id
+ * The Rx queue on the Ethernet device for which information
+ * will be retrieved.
+ * @param recycle_rxq_info
+ * A pointer to a structure of type *rte_eth_recycle_rxq_info* to be filled.
+ *
+ * @return
+ * - 0: Success
+ * - -ENODEV: If *port_id* is invalid.
+ * - -ENOTSUP: routine is not supported by the device PMD.
+ * - -EINVAL: The queue_id is out of range.
+ */
+__rte_experimental
+int rte_eth_recycle_rx_queue_info_get(uint16_t port_id,
+ uint16_t queue_id,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info);
+
/**
* Retrieve information about the Rx packet burst mode.
*
@@ -6527,6 +6576,138 @@ rte_eth_tx_buffer(uint16_t port_id, uint16_t queue_id,
return rte_eth_tx_buffer_flush(port_id, queue_id, buffer);
}
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Recycle used mbufs from a transmit queue of an Ethernet device, and move
+ * these mbufs into a mbuf ring for a receive queue of an Ethernet device.
+ * This can bypass mempool path to save CPU cycles.
+ *
+ * The rte_eth_recycle_mbufs() function loops, with rte_eth_rx_burst() and
+ * rte_eth_tx_burst() functions, freeing used Tx mbufs and replenishing Rx
+ * descriptors. The number of recycled mbufs depends on the demand of the Rx
+ * mbuf ring, constrained by the number of used mbufs available in the Tx mbuf ring.
+ *
+ * For each recycled mbuf, the rte_eth_recycle_mbufs() function performs the
+ * following operations:
+ *
+ * - Copy used *rte_mbuf* buffer pointers from Tx mbuf ring into Rx mbuf ring.
+ *
+ * - Replenish the Rx descriptors with the recycling *rte_mbuf* mbufs freed
+ * from the Tx mbuf ring.
+ *
+ * This function splits the Rx and Tx paths with different callback functions. The
+ * callback function recycle_tx_mbufs_reuse is for the Tx driver. The callback
+ * function recycle_rx_descriptors_refill is for the Rx driver. rte_eth_recycle_mbufs()
+ * can support the case where the Rx Ethernet device differs from the Tx Ethernet device.
+ *
+ * It is the responsibility of users to select the Rx/Tx queue pair to recycle
+ * mbufs. Before calling this function, users must call the
+ * rte_eth_recycle_rx_queue_info_get function to retrieve the selected Rx queue information.
+ * @see rte_eth_recycle_rxq_info_get, struct rte_eth_recycle_rxq_info
+ *
+ * Currently, the rte_eth_recycle_mbufs() function can feed 1 Rx queue from
+ * 2 Tx queues in the same thread. Do not pair the Rx queue and Tx queue in different
+ * threads, in order to avoid memory corruption from concurrent rewriting.
+ *
+ * @param rx_port_id
+ * Port identifying the receive side.
+ * @param rx_queue_id
+ * The index of the receive queue identifying the receive side.
+ * The value must be in the range [0, nb_rx_queue - 1] previously supplied
+ * to rte_eth_dev_configure().
+ * @param tx_port_id
+ * Port identifying the transmit side.
+ * @param tx_queue_id
+ * The index of the transmit queue identifying the transmit side.
+ * The value must be in the range [0, nb_tx_queue - 1] previously supplied
+ * to rte_eth_dev_configure().
+ * @param recycle_rxq_info
+ * A pointer to a structure of type *rte_eth_recycle_rxq_info* which contains
+ * the information of the Rx queue mbuf ring.
+ * @return
+ * The number of recycled mbufs.
+ */
+__rte_experimental
+static inline uint16_t
+rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id,
+ uint16_t tx_port_id, uint16_t tx_queue_id,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info)
+{
+ struct rte_eth_fp_ops *p;
+ void *qd;
+ uint16_t nb_mbufs;
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+ if (tx_port_id >= RTE_MAX_ETHPORTS ||
+ tx_queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid tx_port_id=%u or tx_queue_id=%u\n",
+ tx_port_id, tx_queue_id);
+ return 0;
+ }
+#endif
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[tx_port_id];
+ qd = p->txq.data[tx_queue_id];
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(tx_port_id, 0);
+
+ if (qd == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n",
+ tx_queue_id, tx_port_id);
+ return 0;
+ }
+#endif
+ if (p->recycle_tx_mbufs_reuse == NULL)
+ return 0;
+
+ /* Copy used *rte_mbuf* buffer pointers from Tx mbuf ring
+ * into Rx mbuf ring.
+ */
+ nb_mbufs = p->recycle_tx_mbufs_reuse(qd, recycle_rxq_info);
+
+ /* If there are no recycled mbufs, return 0. */
+ if (nb_mbufs == 0)
+ return 0;
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+ if (rx_port_id >= RTE_MAX_ETHPORTS ||
+ rx_queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR, "Invalid rx_port_id=%u or rx_queue_id=%u\n",
+ rx_port_id, rx_queue_id);
+ return 0;
+ }
+#endif
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[rx_port_id];
+ qd = p->rxq.data[rx_queue_id];
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(rx_port_id, 0);
+
+ if (qd == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n",
+ rx_queue_id, rx_port_id);
+ return 0;
+ }
+#endif
+
+ if (p->recycle_rx_descriptors_refill == NULL)
+ return 0;
+
+ /* Replenish the Rx descriptors with the recycled
+ * mbufs in the Rx mbuf ring.
+ */
+ p->recycle_rx_descriptors_refill(qd, nb_mbufs);
+
+ return nb_mbufs;
+}
+
/**
* @warning
* @b EXPERIMENTAL: this API may change without prior notice
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 46e9721e07..a24ad7a6b2 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -55,6 +55,13 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
/** @internal Check the status of a Tx descriptor */
typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
+/** @internal Copy used mbufs from Tx mbuf ring into Rx mbuf ring */
+typedef uint16_t (*eth_recycle_tx_mbufs_reuse_t)(void *txq,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info);
+
+/** @internal Refill Rx descriptors with the recycling mbufs */
+typedef void (*eth_recycle_rx_descriptors_refill_t)(void *rxq, uint16_t nb);
+
/**
* @internal
* Structure used to hold opaque pointers to internal ethdev Rx/Tx
@@ -83,15 +90,17 @@ struct rte_eth_fp_ops {
* Rx fast-path functions and related data.
* 64-bit systems: occupies first 64B line
*/
+ /** Rx queues data. */
+ struct rte_ethdev_qdata rxq;
/** PMD receive function. */
eth_rx_burst_t rx_pkt_burst;
/** Get the number of used Rx descriptors. */
eth_rx_queue_count_t rx_queue_count;
/** Check the status of a Rx descriptor. */
eth_rx_descriptor_status_t rx_descriptor_status;
- /** Rx queues data. */
- struct rte_ethdev_qdata rxq;
- uintptr_t reserved1[3];
+ /** Refill Rx descriptors with the recycling mbufs. */
+ eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
+ uintptr_t reserved1[2];
/**@}*/
/**@{*/
@@ -99,15 +108,17 @@ struct rte_eth_fp_ops {
* Tx fast-path functions and related data.
* 64-bit systems: occupies second 64B line
*/
+ /** Tx queues data. */
+ struct rte_ethdev_qdata txq;
/** PMD transmit function. */
eth_tx_burst_t tx_pkt_burst;
/** PMD transmit prepare function. */
eth_tx_prep_t tx_pkt_prepare;
/** Check the status of a Tx descriptor. */
eth_tx_descriptor_status_t tx_descriptor_status;
- /** Tx queues data. */
- struct rte_ethdev_qdata txq;
- uintptr_t reserved2[3];
+ /** Copy used mbufs from Tx mbuf ring into Rx. */
+ eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
+ uintptr_t reserved2[2];
/**@}*/
} __rte_cache_aligned;
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index b965d6aa52..eec159dfdd 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -312,6 +312,9 @@ EXPERIMENTAL {
rte_flow_async_action_list_handle_query_update;
rte_flow_async_actions_update;
rte_flow_restore_info_dynflag;
+
+ # added in 23.11
+ rte_eth_recycle_rx_queue_info_get;
};
INTERNAL {
--
2.25.1
* Re: [PATCH v5 0/6] optional rte optional stdatomics API
2023-08-17 21:42 3% ` [PATCH v5 0/6] optional rte optional stdatomics API Tyler Retzlaff
2023-08-17 21:42 2% ` [PATCH v5 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
@ 2023-08-21 22:27 0% ` Konstantin Ananyev
1 sibling, 0 replies; 200+ results
From: Konstantin Ananyev @ 2023-08-21 22:27 UTC (permalink / raw)
To: Tyler Retzlaff, dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, David Hunt, Thomas Monjalon, David Marchand
> This series introduces API additions prefixed in the rte namespace that allow
> the optional use of stdatomics.h from C11 using enable_stdatomics=true for
> targets where enable_stdatomics=false no functional change is intended.
>
> Be aware this does not contain all changes to use stdatomics across the DPDK
> tree it only introduces the minimum to allow the option to be used which is
> a pre-requisite for a clean CI (probably using clang) that can be run
> with enable_stdatomics=true enabled.
>
> It is planned that subsequent series will be introduced per lib/driver as
> appropriate to further enable stdatomics use when enable_stdatomics=true.
>
> Notes:
>
> * Additional libraries beyond EAL make visible atomics use across the
> API/ABI surface they will be converted in the subsequent series.
>
> * The eal: add rte atomic qualifier with casts patch needs some discussion
> as to whether or not the legacy rte_atomic APIs should be converted to
> work with enable_stdatomic=true right now some implementation dependent
> casts are used to prevent cascading / having to convert too much in
> the intial series.
>
> * Windows will obviously need complete conversion of libraries including
> atomics that are not crossing API/ABI boundaries. those conversions will
> introduced in separate series as new along side the existing msvc series.
>
> Please keep in mind we would like to prioritize the review / acceptance of
> this patch since it needs to be completed in the 23.11 merge window.
>
> Thank you all for the discussion that lead to the formation of this series.
>
> v5:
> * Add RTE_ATOMIC to doxygen configuration PREDEFINED macros list to
> fix documentation generation failure
> * Fix two typos in expansion of C11 atomics macros strong -> weak and
> add missing _explicit
> * Adjust devtools/checkpatches messages based on feedback. i have chosen
> not to try and catch use of C11 atomics or _Atomic since using those
> directly will be picked up by existing CI passes where by compilation
> error where enable_stdatomic=false (the default for most platforms)
>
> v4:
> * Move the definition of #define RTE_ATOMIC(type) to patch 1 where it
> belongs (a mistake in v3)
> * Provide comments for both RTE_ATOMIC and __rte_atomic macros indicating
> their use as specified or qualified contexts.
>
> v3:
> * Remove comments from APIs mentioning the mapping to C++ memory model
> memory orders
> * Introduce and use new macro RTE_ATOMIC(type) to be used in contexts
> where _Atomic is used as a type specifier to declare variables. The
> macro allows more clarity about what the atomic type being specified
> is. e.g. _Atomic(T *) vs _Atomic(T) it is easier to understand that
> the former is an atomic pointer type and the latter is an atomic
> type. it also has the benefit of (in the future) being interoperable
> with c++23 syntactically
> note: Morten i have retained your 'reviewed-by' tags if you disagree
> given the changes in the above version please indicate as such but
> i believe the changes are in the spirit of the feedback you provided
>
> v2:
> * Wrap meson_options.txt option description to newline and indent to
> be consistent with other options.
> * Provide separate typedef of rte_memory_order for enable_stdatomic=true
> VS enable_stdatomic=false instead of a single typedef to int
> note: slight tweak to reviewers feedback i've chosen to use a typedef
> for both enable_stdatomic={true,false} (just seemed more consistent)
> * Bring in assert.h and use static_assert macro instead of _Static_assert
> keyword to better interoperate with c/c++
> * Directly include rte_stdatomic.h where into other places it is consumed
> instead of hacking it globally into rte_config.h
> * Provide and use __rte_atomic_thread_fence to allow conditional expansion
> within the body of existing rte_atomic_thread_fence inline function to
> maintain per-arch optimizations when enable_stdatomic=false
>
> Tyler Retzlaff (6):
> eal: provide rte stdatomics optional atomics API
> eal: adapt EAL to present rte optional atomics API
> eal: add rte atomic qualifier with casts
> distributor: adapt for EAL optional atomics API changes
> bpf: adapt for EAL optional atomics API changes
> devtools: forbid new direct use of GCC atomic builtins
>
> app/test/test_mcslock.c | 6 +-
> config/meson.build | 1 +
> devtools/checkpatches.sh | 8 +-
> doc/api/doxy-api.conf.in | 1 +
> lib/bpf/bpf_pkt.c | 6 +-
> lib/distributor/distributor_private.h | 2 +-
> lib/distributor/rte_distributor_single.c | 44 +++----
> lib/eal/arm/include/rte_atomic_32.h | 4 +-
> lib/eal/arm/include/rte_atomic_64.h | 36 +++---
> lib/eal/arm/include/rte_pause_64.h | 26 ++--
> lib/eal/arm/rte_power_intrinsics.c | 8 +-
> lib/eal/common/eal_common_trace.c | 16 +--
> lib/eal/include/generic/rte_atomic.h | 67 +++++++----
> lib/eal/include/generic/rte_pause.h | 50 ++++----
> lib/eal/include/generic/rte_rwlock.h | 48 ++++----
> lib/eal/include/generic/rte_spinlock.h | 20 ++--
> lib/eal/include/meson.build | 1 +
> lib/eal/include/rte_mcslock.h | 51 ++++----
> lib/eal/include/rte_pflock.h | 25 ++--
> lib/eal/include/rte_seqcount.h | 19 +--
> lib/eal/include/rte_stdatomic.h | 198 +++++++++++++++++++++++++++++++
> lib/eal/include/rte_ticketlock.h | 43 +++----
> lib/eal/include/rte_trace_point.h | 5 +-
> lib/eal/loongarch/include/rte_atomic.h | 4 +-
> lib/eal/ppc/include/rte_atomic.h | 54 ++++-----
> lib/eal/riscv/include/rte_atomic.h | 4 +-
> lib/eal/x86/include/rte_atomic.h | 8 +-
> lib/eal/x86/include/rte_spinlock.h | 2 +-
> lib/eal/x86/rte_power_intrinsics.c | 7 +-
> meson_options.txt | 2 +
> 30 files changed, 499 insertions(+), 267 deletions(-)
> create mode 100644 lib/eal/include/rte_stdatomic.h
>
Series-acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
* [PATCH v13 17/21] hash: move rte_hash_set_alg out header
2023-08-21 16:09 2% ` [PATCH v13 00/21] Convert static log types in libraries to dynamic types Stephen Hemminger
@ 2023-08-21 16:09 2% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-08-21 16:09 UTC (permalink / raw)
To: dev
Cc: Stephen Hemminger, Ruifeng Wang, Yipeng Wang, Sameh Gobriel,
Bruce Richardson, Vladimir Medvedkin
The code for setting the hash algorithm is not at all perf-sensitive,
and doing it inline has a couple of problems. First, it means that if
multiple files include the header, then the initialization gets done
multiple times. But also, it makes it harder to fix usage of RTE_LOG().
Despite what the checking script says, this is not an ABI change: the
previous version inlined the same code, therefore both old and new code
will work the same.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>
---
lib/hash/meson.build | 1 +
lib/hash/rte_crc_arm64.h | 8 ++---
lib/hash/rte_crc_x86.h | 10 +++---
lib/hash/rte_hash_crc.c | 68 ++++++++++++++++++++++++++++++++++++++++
lib/hash/rte_hash_crc.h | 48 ++--------------------------
lib/hash/version.map | 7 +++++
6 files changed, 88 insertions(+), 54 deletions(-)
create mode 100644 lib/hash/rte_hash_crc.c
diff --git a/lib/hash/meson.build b/lib/hash/meson.build
index e56ee8572564..c345c6f561fc 100644
--- a/lib/hash/meson.build
+++ b/lib/hash/meson.build
@@ -19,6 +19,7 @@ indirect_headers += files(
sources = files(
'rte_cuckoo_hash.c',
+ 'rte_hash_crc.c',
'rte_fbk_hash.c',
'rte_thash.c',
'rte_thash_gfni.c'
diff --git a/lib/hash/rte_crc_arm64.h b/lib/hash/rte_crc_arm64.h
index c9f52510871b..414fe065caa8 100644
--- a/lib/hash/rte_crc_arm64.h
+++ b/lib/hash/rte_crc_arm64.h
@@ -53,7 +53,7 @@ crc32c_arm64_u64(uint64_t data, uint32_t init_val)
static inline uint32_t
rte_hash_crc_1byte(uint8_t data, uint32_t init_val)
{
- if (likely(crc32_alg & CRC32_ARM64))
+ if (likely(rte_hash_crc32_alg & CRC32_ARM64))
return crc32c_arm64_u8(data, init_val);
return crc32c_1byte(data, init_val);
@@ -67,7 +67,7 @@ rte_hash_crc_1byte(uint8_t data, uint32_t init_val)
static inline uint32_t
rte_hash_crc_2byte(uint16_t data, uint32_t init_val)
{
- if (likely(crc32_alg & CRC32_ARM64))
+ if (likely(rte_hash_crc32_alg & CRC32_ARM64))
return crc32c_arm64_u16(data, init_val);
return crc32c_2bytes(data, init_val);
@@ -81,7 +81,7 @@ rte_hash_crc_2byte(uint16_t data, uint32_t init_val)
static inline uint32_t
rte_hash_crc_4byte(uint32_t data, uint32_t init_val)
{
- if (likely(crc32_alg & CRC32_ARM64))
+ if (likely(rte_hash_crc32_alg & CRC32_ARM64))
return crc32c_arm64_u32(data, init_val);
return crc32c_1word(data, init_val);
@@ -95,7 +95,7 @@ rte_hash_crc_4byte(uint32_t data, uint32_t init_val)
static inline uint32_t
rte_hash_crc_8byte(uint64_t data, uint32_t init_val)
{
- if (likely(crc32_alg & CRC32_ARM64))
+ if (likely(rte_hash_crc32_alg & CRC32_ARM64))
return crc32c_arm64_u64(data, init_val);
return crc32c_2words(data, init_val);
diff --git a/lib/hash/rte_crc_x86.h b/lib/hash/rte_crc_x86.h
index 205bc182be77..3b865e251db2 100644
--- a/lib/hash/rte_crc_x86.h
+++ b/lib/hash/rte_crc_x86.h
@@ -67,7 +67,7 @@ crc32c_sse42_u64(uint64_t data, uint64_t init_val)
static inline uint32_t
rte_hash_crc_1byte(uint8_t data, uint32_t init_val)
{
- if (likely(crc32_alg & CRC32_SSE42))
+ if (likely(rte_hash_crc32_alg & CRC32_SSE42))
return crc32c_sse42_u8(data, init_val);
return crc32c_1byte(data, init_val);
@@ -81,7 +81,7 @@ rte_hash_crc_1byte(uint8_t data, uint32_t init_val)
static inline uint32_t
rte_hash_crc_2byte(uint16_t data, uint32_t init_val)
{
- if (likely(crc32_alg & CRC32_SSE42))
+ if (likely(rte_hash_crc32_alg & CRC32_SSE42))
return crc32c_sse42_u16(data, init_val);
return crc32c_2bytes(data, init_val);
@@ -95,7 +95,7 @@ rte_hash_crc_2byte(uint16_t data, uint32_t init_val)
static inline uint32_t
rte_hash_crc_4byte(uint32_t data, uint32_t init_val)
{
- if (likely(crc32_alg & CRC32_SSE42))
+ if (likely(rte_hash_crc32_alg & CRC32_SSE42))
return crc32c_sse42_u32(data, init_val);
return crc32c_1word(data, init_val);
@@ -110,11 +110,11 @@ static inline uint32_t
rte_hash_crc_8byte(uint64_t data, uint32_t init_val)
{
#ifdef RTE_ARCH_X86_64
- if (likely(crc32_alg == CRC32_SSE42_x64))
+ if (likely(rte_hash_crc32_alg == CRC32_SSE42_x64))
return crc32c_sse42_u64(data, init_val);
#endif
- if (likely(crc32_alg & CRC32_SSE42))
+ if (likely(rte_hash_crc32_alg & CRC32_SSE42))
return crc32c_sse42_u64_mimic(data, init_val);
return crc32c_2words(data, init_val);
diff --git a/lib/hash/rte_hash_crc.c b/lib/hash/rte_hash_crc.c
new file mode 100644
index 000000000000..1439d8a71f6a
--- /dev/null
+++ b/lib/hash/rte_hash_crc.c
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2010-2014 Intel Corporation
+ */
+
+#include <rte_cpuflags.h>
+#include <rte_log.h>
+
+#include "rte_hash_crc.h"
+
+RTE_LOG_REGISTER_SUFFIX(hash_crc_logtype, crc, INFO);
+#define RTE_LOGTYPE_HASH_CRC hash_crc_logtype
+
+uint8_t rte_hash_crc32_alg = CRC32_SW;
+
+/**
+ * Allow or disallow use of SSE4.2/ARMv8 intrinsics for CRC32 hash
+ * calculation.
+ *
+ * @param alg
+ * An OR of following flags:
+ * - (CRC32_SW) Don't use SSE4.2/ARMv8 intrinsics (default non-[x86/ARMv8])
+ * - (CRC32_SSE42) Use SSE4.2 intrinsics if available
+ * - (CRC32_SSE42_x64) Use 64-bit SSE4.2 intrinsic if available (default x86)
+ * - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
+ *
+ */
+void
+rte_hash_crc_set_alg(uint8_t alg)
+{
+ rte_hash_crc32_alg = CRC32_SW;
+
+ if (alg == CRC32_SW)
+ return;
+
+#if defined RTE_ARCH_X86
+ if (!(alg & CRC32_SSE42_x64))
+ RTE_LOG(WARNING, HASH_CRC,
+ "Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n");
+ if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42)
+ rte_hash_crc32_alg = CRC32_SSE42;
+ else
+ rte_hash_crc32_alg = CRC32_SSE42_x64;
+#endif
+
+#if defined RTE_ARCH_ARM64
+ if (!(alg & CRC32_ARM64))
+ RTE_LOG(WARNING, HASH_CRC,
+ "Unsupported CRC32 algorithm requested using CRC32_ARM64\n");
+ if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32))
+ rte_hash_crc32_alg = CRC32_ARM64;
+#endif
+
+ if (rte_hash_crc32_alg == CRC32_SW)
+ RTE_LOG(WARNING, HASH_CRC,
+ "Unsupported CRC32 algorithm requested using CRC32_SW\n");
+}
+
+/* Setting the best available algorithm */
+RTE_INIT(rte_hash_crc_init_alg)
+{
+#if defined(RTE_ARCH_X86)
+ rte_hash_crc_set_alg(CRC32_SSE42_x64);
+#elif defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
+ rte_hash_crc_set_alg(CRC32_ARM64);
+#else
+ rte_hash_crc_set_alg(CRC32_SW);
+#endif
+}
diff --git a/lib/hash/rte_hash_crc.h b/lib/hash/rte_hash_crc.h
index 60bf42ce1d97..8ad2422ec333 100644
--- a/lib/hash/rte_hash_crc.h
+++ b/lib/hash/rte_hash_crc.h
@@ -20,8 +20,6 @@ extern "C" {
#include <rte_branch_prediction.h>
#include <rte_common.h>
#include <rte_config.h>
-#include <rte_cpuflags.h>
-#include <rte_log.h>
#include "rte_crc_sw.h"
@@ -31,7 +29,7 @@ extern "C" {
#define CRC32_SSE42_x64 (CRC32_x64|CRC32_SSE42)
#define CRC32_ARM64 (1U << 3)
-static uint8_t crc32_alg = CRC32_SW;
+extern uint8_t rte_hash_crc32_alg;
#if defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
#include "rte_crc_arm64.h"
@@ -52,48 +50,8 @@ static uint8_t crc32_alg = CRC32_SW;
* - (CRC32_SSE42_x64) Use 64-bit SSE4.2 intrinsic if available (default x86)
* - (CRC32_ARM64) Use ARMv8 CRC intrinsic if available (default ARMv8)
*/
-static inline void
-rte_hash_crc_set_alg(uint8_t alg)
-{
- crc32_alg = CRC32_SW;
-
- if (alg == CRC32_SW)
- return;
-
-#if defined RTE_ARCH_X86
- if (!(alg & CRC32_SSE42_x64))
- RTE_LOG(WARNING, HASH,
- "Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n");
- if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42)
- crc32_alg = CRC32_SSE42;
- else
- crc32_alg = CRC32_SSE42_x64;
-#endif
-
-#if defined RTE_ARCH_ARM64
- if (!(alg & CRC32_ARM64))
- RTE_LOG(WARNING, HASH,
- "Unsupported CRC32 algorithm requested using CRC32_ARM64\n");
- if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32))
- crc32_alg = CRC32_ARM64;
-#endif
-
- if (crc32_alg == CRC32_SW)
- RTE_LOG(WARNING, HASH,
- "Unsupported CRC32 algorithm requested using CRC32_SW\n");
-}
-
-/* Setting the best available algorithm */
-RTE_INIT(rte_hash_crc_init_alg)
-{
-#if defined(RTE_ARCH_X86)
- rte_hash_crc_set_alg(CRC32_SSE42_x64);
-#elif defined(RTE_ARCH_ARM64) && defined(__ARM_FEATURE_CRC32)
- rte_hash_crc_set_alg(CRC32_ARM64);
-#else
- rte_hash_crc_set_alg(CRC32_SW);
-#endif
-}
+void
+rte_hash_crc_set_alg(uint8_t alg);
#ifdef __DOXYGEN__
diff --git a/lib/hash/version.map b/lib/hash/version.map
index 8288c6e7967e..a1e68036c5b8 100644
--- a/lib/hash/version.map
+++ b/lib/hash/version.map
@@ -9,6 +9,7 @@ DPDK_24 {
rte_hash_add_key_with_hash;
rte_hash_add_key_with_hash_data;
rte_hash_count;
+ rte_hash_crc_set_alg;
rte_hash_create;
rte_hash_del_key;
rte_hash_del_key_with_hash;
@@ -56,3 +57,9 @@ EXPERIMENTAL {
rte_thash_gfni;
rte_thash_gfni_bulk;
};
+
+INTERNAL {
+ global:
+
+ rte_hash_crc32_alg;
+};
--
2.39.2
^ permalink raw reply [relevance 2%]
* [PATCH v13 00/21] Convert static log types in libraries to dynamic types
@ 2023-08-21 16:09 2% ` Stephen Hemminger
2023-08-21 16:09 2% ` [PATCH v13 17/21] hash: move rte_hash_set_alg out header Stephen Hemminger
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-08-21 16:09 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
This patchset removes most of the uses of static LOGTYPEs in DPDK
libraries. It starts with the easy ones and goes on to the more complex ones.
There are several options on how to treat the old static types:
leave them there, mark as deprecated, or remove them.
This version removes them since there is no guarantee in current
DPDK policies that says they can't be removed.
Note: there is one patch in this series that will get
flagged incorrectly as an ABI change.
v13 - rebase because log now moved.
v12 - rebase and add table and pipeline libraries
v11 - fix include check on arm cross build
v10 - add necessary rte_compat.h in thash_gfni stub for arm
v9 - fix handling of crc32 alg in lib/hash.
make it an internal global variable.
fix gfni stubs for case where they are not used.
Stephen Hemminger (21):
gso: don't log message on non TCP/UDP
eal: drop no longer used GSO logtype
log: drop unused RTE_LOGTYPE_TIMER
efd: convert RTE_LOGTYPE_EFD to dynamic type
mbuf: convert RTE_LOGTYPE_MBUF to dynamic type
acl: convert RTE_LOGTYPE_ACL to dynamic type
examples/power: replace use of RTE_LOGTYPE_POWER
examples/l3fwd-power: replace use of RTE_LOGTYPE_POWER
power: convert RTE_LOGTYPE_POWER to dynamic type
ring: convert RTE_LOGTYPE_RING to dynamic type
mempool: convert RTE_LOGTYPE_MEMPOOL to dynamic type
lpm: convert RTE_LOGTYPE_LPM to dynamic types
sched: convert RTE_LOGTYPE_SCHED to dynamic type
examples/ipsec-secgw: replace RTE_LOGTYPE_PORT
port: convert RTE_LOGTYPE_PORT to dynamic type
hash: move rte_thash_gfni stubs out of header file
hash: move rte_hash_set_alg out header
hash: convert RTE_LOGTYPE_HASH to dynamic type
table: convert RTE_LOGTYPE_TABLE to dynamic type
app/test: remove use of RTE_LOGTYPE_PIPELINE
pipeline: convert RTE_LOGTYPE_PIPELINE to dynamic type
app/test/test_acl.c | 2 +-
app/test/test_table_acl.c | 50 +++++++++++-------------
app/test/test_table_pipeline.c | 40 +++++++++----------
examples/distributor/main.c | 2 +-
examples/ipsec-secgw/sa.c | 6 +--
examples/l3fwd-power/main.c | 17 +++++----
lib/acl/acl.h | 1 +
lib/acl/acl_bld.c | 3 ++
lib/acl/acl_gen.c | 1 +
lib/acl/acl_log.h | 6 +++
lib/acl/rte_acl.c | 3 ++
lib/acl/tb_mem.c | 3 +-
lib/efd/rte_efd.c | 4 ++
lib/fib/fib_log.h | 4 ++
lib/fib/rte_fib.c | 3 ++
lib/fib/rte_fib6.c | 2 +
lib/gso/rte_gso.c | 4 +-
lib/gso/rte_gso.h | 1 +
lib/hash/meson.build | 9 ++++-
lib/hash/rte_crc_arm64.h | 8 ++--
lib/hash/rte_crc_x86.h | 10 ++---
lib/hash/rte_cuckoo_hash.c | 5 +++
lib/hash/rte_fbk_hash.c | 5 +++
lib/hash/rte_hash_crc.c | 68 +++++++++++++++++++++++++++++++++
lib/hash/rte_hash_crc.h | 48 ++---------------------
lib/hash/rte_thash.c | 3 ++
lib/hash/rte_thash_gfni.c | 50 ++++++++++++++++++++++++
lib/hash/rte_thash_gfni.h | 30 +++++----------
lib/hash/version.map | 11 ++++++
lib/log/log.c | 16 --------
lib/log/rte_log.h | 32 ++++++++--------
lib/lpm/lpm_log.h | 4 ++
lib/lpm/rte_lpm.c | 3 ++
lib/lpm/rte_lpm6.c | 1 +
lib/mbuf/mbuf_log.h | 4 ++
lib/mbuf/rte_mbuf.c | 4 ++
lib/mbuf/rte_mbuf_dyn.c | 2 +
lib/mbuf/rte_mbuf_pool_ops.c | 2 +
lib/mempool/rte_mempool.c | 2 +
lib/mempool/rte_mempool.h | 8 ++++
lib/mempool/version.map | 3 ++
lib/pipeline/rte_pipeline.c | 2 +
lib/pipeline/rte_pipeline.h | 5 +++
lib/port/rte_port_ethdev.c | 3 ++
lib/port/rte_port_eventdev.c | 4 ++
lib/port/rte_port_fd.c | 3 ++
lib/port/rte_port_frag.c | 3 ++
lib/port/rte_port_ras.c | 3 ++
lib/port/rte_port_ring.c | 3 ++
lib/port/rte_port_sched.c | 3 ++
lib/port/rte_port_source_sink.c | 3 ++
lib/port/rte_port_sym_crypto.c | 3 ++
lib/power/guest_channel.c | 3 +-
lib/power/power_common.c | 2 +
lib/power/power_common.h | 2 +
lib/power/power_kvm_vm.c | 1 +
lib/power/rte_power.c | 1 +
lib/rib/rib_log.h | 4 ++
lib/rib/rte_rib.c | 3 ++
lib/rib/rte_rib6.c | 3 ++
lib/ring/rte_ring.c | 3 ++
lib/sched/rte_pie.c | 1 +
lib/sched/rte_sched.c | 5 +++
lib/sched/rte_sched_log.h | 4 ++
lib/table/meson.build | 1 +
lib/table/rte_table.c | 8 ++++
lib/table/rte_table.h | 4 ++
67 files changed, 387 insertions(+), 173 deletions(-)
create mode 100644 lib/acl/acl_log.h
create mode 100644 lib/fib/fib_log.h
create mode 100644 lib/hash/rte_hash_crc.c
create mode 100644 lib/hash/rte_thash_gfni.c
create mode 100644 lib/lpm/lpm_log.h
create mode 100644 lib/mbuf/mbuf_log.h
create mode 100644 lib/rib/rib_log.h
create mode 100644 lib/sched/rte_sched_log.h
create mode 100644 lib/table/rte_table.c
--
2.39.2
^ permalink raw reply [relevance 2%]
* Re: [PATCH v7 0/3] add telemetry cmds for ring
2023-07-04 9:04 3% ` [PATCH v7 " Jie Hai
2023-07-04 9:04 3% ` [PATCH v7 1/3] ring: fix unmatched type definition and usage Jie Hai
@ 2023-08-18 6:53 0% ` Jie Hai
1 sibling, 0 replies; 200+ results
From: Jie Hai @ 2023-08-18 6:53 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev, liudongdong3
Hi, Thomas,
Kindly ping for review.
Thanks, Jie Hai
On 2023/7/4 17:04, Jie Hai wrote:
> This patch set supports telemetry cmd to list rings and dump information
> of a ring by its name.
>
> v1->v2:
> 1. Add space after "switch".
> 2. Fix wrong strlen parameter.
>
> v2->v3:
> 1. Remove prefix "rte_" for static function.
> 2. Add Acked-by Konstantin Ananyev for PATCH 1.
> 3. Introduce functions to return strings instead copy strings.
> 4. Check pointer to memzone of ring.
> 5. Remove redundant variable.
> 6. Hold lock when access ring data.
>
> v3->v4:
> 1. Update changelog according to reviews of Honnappa Nagarahalli.
> 2. Add Reviewed-by Honnappa Nagarahalli.
> 3. Correct grammar in help information.
> 4. Correct spell warning on "te" reported by checkpatch.pl.
> 5. Use ring_walk() to query ring info instead of rte_ring_lookup().
> 6. Fix that type definition the flag field of rte_ring does not match the usage.
> 7. Use rte_tel_data_add_dict_uint_hex instead of rte_tel_data_add_dict_u64
> for mask and flags.
>
> v4->v5:
> 1. Add Acked-by Konstantin Ananyev and Chengwen Feng.
> 2. Add ABI change explanation for commit message of patch 1/3.
>
> v5->v6:
> 1. Add Acked-by Morten Brørup.
> 2. Fix incorrect reference of commit.
>
> v6->v7:
> 1. Remove prod/consumer head/tail info.
>
> Jie Hai (3):
> ring: fix unmatched type definition and usage
> ring: add telemetry cmd to list rings
> ring: add telemetry cmd for ring info
>
> lib/ring/meson.build | 1 +
> lib/ring/rte_ring.c | 135 +++++++++++++++++++++++++++++++++++++++
> lib/ring/rte_ring_core.h | 2 +-
> 3 files changed, 137 insertions(+), 1 deletion(-)
>
^ permalink raw reply [relevance 0%]
* [PATCH v5 3/6] eal: add rte atomic qualifier with casts
2023-08-17 21:42 3% ` [PATCH v5 0/6] optional rte optional stdatomics API Tyler Retzlaff
@ 2023-08-17 21:42 2% ` Tyler Retzlaff
2023-08-21 22:27 0% ` [PATCH v5 0/6] optional rte optional stdatomics API Konstantin Ananyev
1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-08-17 21:42 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Introduce __rte_atomic qualifying casts in rte_optional atomics inline
functions to prevent cascading the need to pass __rte_atomic qualified
arguments.
Warning, this is really implementation dependent and being done
temporarily to avoid having to convert more of the libraries and tests in
DPDK in the initial series that introduces the API. The consequence, should the
ABI of the types in question turn out not to be ``the same'', is a risk
that may only be realized when enable_stdatomic=true.
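The qualifying cast the patch adds can be shown in a minimal, standalone C11
example. The `legacy_counter_t` type and `legacy_add` name are illustrative,
not DPDK definitions; the point is only the pointer cast at the call site:

```c
#include <stdatomic.h>
#include <stdint.h>

/*
 * A counter declared without _Atomic is handed to the C11 generic
 * functions by casting the pointer at the call site.  Whether the
 * object representations of the plain and _Atomic-qualified types
 * really match is implementation dependent, which is exactly the
 * assumption the commit message warns about.
 */
typedef struct { int16_t cnt; } legacy_counter_t;

static void legacy_add(legacy_counter_t *v, int16_t inc)
{
	atomic_fetch_add_explicit((_Atomic int16_t *)&v->cnt, inc,
		memory_order_seq_cst);
}
```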
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/eal/include/generic/rte_atomic.h | 48 ++++++++++++++++++++++++------------
lib/eal/include/generic/rte_pause.h | 9 ++++---
lib/eal/x86/rte_power_intrinsics.c | 7 +++---
3 files changed, 42 insertions(+), 22 deletions(-)
diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
index 5940e7e..709bf15 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -274,7 +274,8 @@
static inline void
rte_atomic16_add(rte_atomic16_t *v, int16_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile __rte_atomic int16_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
/**
@@ -288,7 +289,8 @@
static inline void
rte_atomic16_sub(rte_atomic16_t *v, int16_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile __rte_atomic int16_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
/**
@@ -341,7 +343,8 @@
static inline int16_t
rte_atomic16_add_return(rte_atomic16_t *v, int16_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int16_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
/**
@@ -361,7 +364,8 @@
static inline int16_t
rte_atomic16_sub_return(rte_atomic16_t *v, int16_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int16_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
/**
@@ -380,7 +384,8 @@
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int16_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -400,7 +405,8 @@ static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int16_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -553,7 +559,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_add(rte_atomic32_t *v, int32_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile __rte_atomic int32_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
/**
@@ -567,7 +574,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_sub(rte_atomic32_t *v, int32_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile __rte_atomic int32_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
/**
@@ -620,7 +628,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_add_return(rte_atomic32_t *v, int32_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int32_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
/**
@@ -640,7 +649,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_sub_return(rte_atomic32_t *v, int32_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int32_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
/**
@@ -659,7 +669,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int32_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -679,7 +690,8 @@ static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int32_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -885,7 +897,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile __rte_atomic int64_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
#endif
@@ -904,7 +917,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile __rte_atomic int64_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
#endif
@@ -962,7 +976,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int64_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
#endif
@@ -986,7 +1001,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int64_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
#endif
diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
index 256309e..b7b059f 100644
--- a/lib/eal/include/generic/rte_pause.h
+++ b/lib/eal/include/generic/rte_pause.h
@@ -81,7 +81,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile __rte_atomic uint16_t *)addr, memorder)
+ != expected)
rte_pause();
}
@@ -91,7 +92,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile __rte_atomic uint32_t *)addr, memorder)
+ != expected)
rte_pause();
}
@@ -101,7 +103,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile __rte_atomic uint64_t *)addr, memorder)
+ != expected)
rte_pause();
}
diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index cf70e33..fb8539f 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -23,9 +23,10 @@
uint64_t val;
/* trigger a write but don't change the value */
- val = rte_atomic_load_explicit((volatile uint64_t *)addr, rte_memory_order_relaxed);
- rte_atomic_compare_exchange_strong_explicit((volatile uint64_t *)addr, &val, val,
- rte_memory_order_relaxed, rte_memory_order_relaxed);
+ val = rte_atomic_load_explicit((volatile __rte_atomic uint64_t *)addr,
+ rte_memory_order_relaxed);
+ rte_atomic_compare_exchange_strong_explicit((volatile __rte_atomic uint64_t *)addr,
+ &val, val, rte_memory_order_relaxed, rte_memory_order_relaxed);
}
static bool wait_supported;
--
1.8.3.1
^ permalink raw reply [relevance 2%]
* [PATCH v5 0/6] optional rte optional stdatomics API
2023-08-11 1:31 4% [PATCH 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (3 preceding siblings ...)
2023-08-16 21:38 3% ` [PATCH v4 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
@ 2023-08-17 21:42 3% ` Tyler Retzlaff
2023-08-17 21:42 2% ` [PATCH v5 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
2023-08-21 22:27 0% ` [PATCH v5 0/6] optional rte optional stdatomics API Konstantin Ananyev
2023-08-22 21:00 3% ` [PATCH v6 0/6] rte atomics API for optional stdatomic Tyler Retzlaff
5 siblings, 2 replies; 200+ results
From: Tyler Retzlaff @ 2023-08-17 21:42 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
This series introduces API additions prefixed in the rte namespace that allow
the optional use of stdatomics.h from C11 using enable_stdatomics=true; for
targets where enable_stdatomics=false, no functional change is intended.
Be aware this does not contain all changes to use stdatomics across the DPDK
tree; it only introduces the minimum to allow the option to be used, which is
a pre-requisite for a clean CI (probably using clang) that can be run
with enable_stdatomics=true.
It is planned that subsequent series will be introduced per lib/driver as
appropriate to further enable stdatomics use when enable_stdatomics=true.
Notes:
* Additional libraries beyond EAL make visible atomics use across the
API/ABI surface; they will be converted in the subsequent series.
* The 'eal: add rte atomic qualifier with casts' patch needs some discussion
as to whether or not the legacy rte_atomic APIs should be converted to
work with enable_stdatomic=true; right now some implementation dependent
casts are used to prevent cascading / having to convert too much in
the initial series.
* Windows will obviously need complete conversion of libraries, including
atomics that are not crossing API/ABI boundaries. Those conversions will
be introduced in separate series alongside the existing msvc series.
Please keep in mind we would like to prioritize the review / acceptance of
this patch since it needs to be completed in the 23.11 merge window.
Thank you all for the discussion that lead to the formation of this series.
v5:
* Add RTE_ATOMIC to doxygen configuration PREDEFINED macros list to
fix documentation generation failure
* Fix two typos in expansion of C11 atomics macros strong -> weak and
add missing _explicit
* Adjust devtools/checkpatches messages based on feedback. I have chosen
not to try to catch use of C11 atomics or _Atomic, since using those
directly will be picked up as a compilation error by existing CI passes
where enable_stdatomic=false (the default for most platforms)
v4:
* Move the definition of #define RTE_ATOMIC(type) to patch 1 where it
belongs (a mistake in v3)
* Provide comments for both RTE_ATOMIC and __rte_atomic macros indicating
their use as specified or qualified contexts.
v3:
* Remove comments from APIs mentioning the mapping to C++ memory model
memory orders
* Introduce and use new macro RTE_ATOMIC(type) to be used in contexts
where _Atomic is used as a type specifier to declare variables. The
macro allows more clarity about what the atomic type being specified
is. e.g. _Atomic(T *) vs _Atomic(T) it is easier to understand that
the former is an atomic pointer type and the latter is an atomic
type. it also has the benefit of (in the future) being interoperable
with c++23 syntactically
note: Morten, I have retained your 'reviewed-by' tags; if you disagree
given the changes in the above version please indicate as such, but
I believe the changes are in the spirit of the feedback you provided
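The RTE_ATOMIC(type) specifier discussed in the v3 notes can be reduced to a
short standalone sketch. The macro body shown is the enable_stdatomic=true
expansion; the surrounding names (`demo_bump`, `slot`, `counter`, `head`) are
illustrative, not part of the series:

```c
#include <stdatomic.h>

/*
 * _Atomic(int *) is an atomic pointer to int, while _Atomic(int) is an
 * atomic int; the parenthesized specifier form keeps that distinction
 * readable at declaration sites.
 */
#define RTE_ATOMIC(type) _Atomic(type)

static int slot;
static RTE_ATOMIC(int)   counter;  /* atomic integer */
static RTE_ATOMIC(int *) head;     /* atomic pointer */

int demo_bump(void)
{
	atomic_store_explicit(&head, &slot, memory_order_relaxed);
	return atomic_fetch_add_explicit(&counter, 1,
		memory_order_relaxed) + 1;
}
```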
v2:
* Wrap meson_options.txt option description to newline and indent to
be consistent with other options.
* Provide separate typedef of rte_memory_order for enable_stdatomic=true
VS enable_stdatomic=false instead of a single typedef to int
note: slight tweak to reviewers feedback i've chosen to use a typedef
for both enable_stdatomic={true,false} (just seemed more consistent)
* Bring in assert.h and use static_assert macro instead of _Static_assert
keyword to better interoperate with c/c++
* Directly include rte_stdatomic.h where into other places it is consumed
instead of hacking it globally into rte_config.h
* Provide and use __rte_atomic_thread_fence to allow conditional expansion
within the body of existing rte_atomic_thread_fence inline function to
maintain per-arch optimizations when enable_stdatomic=false
Tyler Retzlaff (6):
eal: provide rte stdatomics optional atomics API
eal: adapt EAL to present rte optional atomics API
eal: add rte atomic qualifier with casts
distributor: adapt for EAL optional atomics API changes
bpf: adapt for EAL optional atomics API changes
devtools: forbid new direct use of GCC atomic builtins
app/test/test_mcslock.c | 6 +-
config/meson.build | 1 +
devtools/checkpatches.sh | 8 +-
doc/api/doxy-api.conf.in | 1 +
lib/bpf/bpf_pkt.c | 6 +-
lib/distributor/distributor_private.h | 2 +-
lib/distributor/rte_distributor_single.c | 44 +++----
lib/eal/arm/include/rte_atomic_32.h | 4 +-
lib/eal/arm/include/rte_atomic_64.h | 36 +++---
lib/eal/arm/include/rte_pause_64.h | 26 ++--
lib/eal/arm/rte_power_intrinsics.c | 8 +-
lib/eal/common/eal_common_trace.c | 16 +--
lib/eal/include/generic/rte_atomic.h | 67 +++++++----
lib/eal/include/generic/rte_pause.h | 50 ++++----
lib/eal/include/generic/rte_rwlock.h | 48 ++++----
lib/eal/include/generic/rte_spinlock.h | 20 ++--
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_mcslock.h | 51 ++++----
lib/eal/include/rte_pflock.h | 25 ++--
lib/eal/include/rte_seqcount.h | 19 +--
lib/eal/include/rte_stdatomic.h | 198 +++++++++++++++++++++++++++++++
lib/eal/include/rte_ticketlock.h | 43 +++----
lib/eal/include/rte_trace_point.h | 5 +-
lib/eal/loongarch/include/rte_atomic.h | 4 +-
lib/eal/ppc/include/rte_atomic.h | 54 ++++-----
lib/eal/riscv/include/rte_atomic.h | 4 +-
lib/eal/x86/include/rte_atomic.h | 8 +-
lib/eal/x86/include/rte_spinlock.h | 2 +-
lib/eal/x86/rte_power_intrinsics.c | 7 +-
meson_options.txt | 2 +
30 files changed, 499 insertions(+), 267 deletions(-)
create mode 100644 lib/eal/include/rte_stdatomic.h
--
1.8.3.1
^ permalink raw reply [relevance 3%]
* Re: [PATCH v5 2/2] net/bonding: replace master/slave to main/member
2023-08-16 6:27 1% ` [PATCH v5 2/2] net/bonding: " Chaoyong He
@ 2023-08-17 2:36 0% ` lihuisong (C)
0 siblings, 0 replies; 200+ results
From: lihuisong (C) @ 2023-08-17 2:36 UTC (permalink / raw)
To: Chaoyong He, dev; +Cc: oss-drivers, niklas.soderlund, Long Wu, James Hershaw
Good job. I didn't find anywhere that wasn't replaced, thanks.
How did you make such a complete and accurate replacement?
Acked-by: Huisong Li <lihuisong@huawei.com>
在 2023/8/16 14:27, Chaoyong He 写道:
> From: Long Wu <long.wu@corigine.com>
>
> This patch replaces the usage of the word 'master/slave' with more
> appropriate word 'main/member' in bonding PMD as well as in its docs
> and examples. Also the test app and testpmd were modified to use the
> new wording.
>
> The bonding PMD's public APIs were modified according to the changes
> in word:
> rte_eth_bond_8023ad_slave_info is now called
> rte_eth_bond_8023ad_member_info,
> rte_eth_bond_active_slaves_get is now called
> rte_eth_bond_active_members_get,
> rte_eth_bond_slave_add is now called
> rte_eth_bond_member_add,
> rte_eth_bond_slave_remove is now called
> rte_eth_bond_member_remove,
> rte_eth_bond_slaves_get is now called
> rte_eth_bond_members_get.
>
> The data structure ``struct rte_eth_bond_8023ad_slave_info`` was
> renamed to ``struct rte_eth_bond_8023ad_member_info``
>
> Signed-off-by: Long Wu <long.wu@corigine.com>
> Reviewed-by: James Hershaw <james.hershaw@corigine.com>
> Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
> Acked-by: Niklas Söderlund <niklas.soderlund@corigine.com>
> ---
> app/test-pmd/testpmd.c | 113 +-
> app/test-pmd/testpmd.h | 8 +-
> app/test/test_link_bonding.c | 2792 +++++++++--------
> app/test/test_link_bonding_mode4.c | 588 ++--
> app/test/test_link_bonding_rssconf.c | 166 +-
> doc/guides/howto/lm_bond_virtio_sriov.rst | 24 +-
> doc/guides/nics/bnxt.rst | 4 +-
> doc/guides/prog_guide/img/bond-mode-1.svg | 2 +-
> .../link_bonding_poll_mode_drv_lib.rst | 230 +-
> doc/guides/rel_notes/deprecation.rst | 16 -
> doc/guides/rel_notes/release_23_11.rst | 17 +
> drivers/net/bonding/bonding_testpmd.c | 178 +-
> drivers/net/bonding/eth_bond_8023ad_private.h | 40 +-
> drivers/net/bonding/eth_bond_private.h | 108 +-
> drivers/net/bonding/rte_eth_bond.h | 96 +-
> drivers/net/bonding/rte_eth_bond_8023ad.c | 372 +--
> drivers/net/bonding/rte_eth_bond_8023ad.h | 67 +-
> drivers/net/bonding/rte_eth_bond_alb.c | 44 +-
> drivers/net/bonding/rte_eth_bond_alb.h | 20 +-
> drivers/net/bonding/rte_eth_bond_api.c | 482 +--
> drivers/net/bonding/rte_eth_bond_args.c | 32 +-
> drivers/net/bonding/rte_eth_bond_flow.c | 54 +-
> drivers/net/bonding/rte_eth_bond_pmd.c | 1384 ++++----
> drivers/net/bonding/version.map | 15 +-
> examples/bond/main.c | 40 +-
> 25 files changed, 3486 insertions(+), 3406 deletions(-)
>
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index 938ca035d4..d41eb2b6f1 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -602,27 +602,27 @@ eth_dev_configure_mp(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
> }
>
> static int
> -change_bonding_slave_port_status(portid_t bond_pid, bool is_stop)
> +change_bonding_member_port_status(portid_t bond_pid, bool is_stop)
> {
> #ifdef RTE_NET_BOND
>
> - portid_t slave_pids[RTE_MAX_ETHPORTS];
> + portid_t member_pids[RTE_MAX_ETHPORTS];
> struct rte_port *port;
> - int num_slaves;
> - portid_t slave_pid;
> + int num_members;
> + portid_t member_pid;
> int i;
>
> - num_slaves = rte_eth_bond_slaves_get(bond_pid, slave_pids,
> + num_members = rte_eth_bond_members_get(bond_pid, member_pids,
> RTE_MAX_ETHPORTS);
> - if (num_slaves < 0) {
> - fprintf(stderr, "Failed to get slave list for port = %u\n",
> + if (num_members < 0) {
> + fprintf(stderr, "Failed to get member list for port = %u\n",
> bond_pid);
> - return num_slaves;
> + return num_members;
> }
>
> - for (i = 0; i < num_slaves; i++) {
> - slave_pid = slave_pids[i];
> - port = &ports[slave_pid];
> + for (i = 0; i < num_members; i++) {
> + member_pid = member_pids[i];
> + port = &ports[member_pid];
> port->port_status =
> is_stop ? RTE_PORT_STOPPED : RTE_PORT_STARTED;
> }
> @@ -646,12 +646,12 @@ eth_dev_start_mp(uint16_t port_id)
> struct rte_port *port = &ports[port_id];
>
> /*
> - * Starting a bonded port also starts all slaves under the bonded
> + * Starting a bonded port also starts all members under the bonded
> * device. So if this port is bond device, we need to modify the
> - * port status of these slaves.
> + * port status of these members.
> */
> if (port->bond_flag == 1)
> - return change_bonding_slave_port_status(port_id, false);
> + return change_bonding_member_port_status(port_id, false);
> }
>
> return 0;
> @@ -670,12 +670,12 @@ eth_dev_stop_mp(uint16_t port_id)
> struct rte_port *port = &ports[port_id];
>
> /*
> - * Stopping a bonded port also stops all slaves under the bonded
> + * Stopping a bonded port also stops all members under the bonded
> * device. So if this port is bond device, we need to modify the
> - * port status of these slaves.
> + * port status of these members.
> */
> if (port->bond_flag == 1)
> - return change_bonding_slave_port_status(port_id, true);
> + return change_bonding_member_port_status(port_id, true);
> }
>
> return 0;
> @@ -2624,7 +2624,7 @@ all_ports_started(void)
> port = &ports[pi];
> /* Check if there is a port which is not started */
> if ((port->port_status != RTE_PORT_STARTED) &&
> - (port->slave_flag == 0))
> + (port->member_flag == 0))
> return 0;
> }
>
> @@ -2638,7 +2638,7 @@ port_is_stopped(portid_t port_id)
> struct rte_port *port = &ports[port_id];
>
> if ((port->port_status != RTE_PORT_STOPPED) &&
> - (port->slave_flag == 0))
> + (port->member_flag == 0))
> return 0;
> return 1;
> }
> @@ -2984,8 +2984,8 @@ fill_xstats_display_info(void)
>
> /*
> * Some capabilities (like, rx_offload_capa and tx_offload_capa) of bonding
> - * device in dev_info is zero when no slave is added. And its capability
> - * will be updated when add a new slave device. So adding a slave device need
> + * device in dev_info is zero when no member is added. And its capability
> + * will be updated when add a new member device. So adding a member device need
> * to update the port configurations of bonding device.
> */
> static void
> @@ -3042,7 +3042,7 @@ start_port(portid_t pid)
> if (pid != pi && pid != (portid_t)RTE_PORT_ALL)
> continue;
>
> - if (port_is_bonding_slave(pi)) {
> + if (port_is_bonding_member(pi)) {
> fprintf(stderr,
> "Please remove port %d from bonded device.\n",
> pi);
> @@ -3364,7 +3364,7 @@ stop_port(portid_t pid)
> continue;
> }
>
> - if (port_is_bonding_slave(pi)) {
> + if (port_is_bonding_member(pi)) {
> fprintf(stderr,
> "Please remove port %d from bonded device.\n",
> pi);
> @@ -3453,28 +3453,28 @@ flush_port_owned_resources(portid_t pi)
> }
>
> static void
> -clear_bonding_slave_device(portid_t *slave_pids, uint16_t num_slaves)
> +clear_bonding_member_device(portid_t *member_pids, uint16_t num_members)
> {
> struct rte_port *port;
> - portid_t slave_pid;
> + portid_t member_pid;
> uint16_t i;
>
> - for (i = 0; i < num_slaves; i++) {
> - slave_pid = slave_pids[i];
> - if (port_is_started(slave_pid) == 1) {
> - if (rte_eth_dev_stop(slave_pid) != 0)
> + for (i = 0; i < num_members; i++) {
> + member_pid = member_pids[i];
> + if (port_is_started(member_pid) == 1) {
> + if (rte_eth_dev_stop(member_pid) != 0)
> fprintf(stderr, "rte_eth_dev_stop failed for port %u\n",
> - slave_pid);
> + member_pid);
>
> - port = &ports[slave_pid];
> + port = &ports[member_pid];
> port->port_status = RTE_PORT_STOPPED;
> }
>
> - clear_port_slave_flag(slave_pid);
> + clear_port_member_flag(member_pid);
>
> - /* Close slave device when testpmd quit or is killed. */
> + /* Close member device when testpmd quit or is killed. */
> if (cl_quit == 1 || f_quit == 1)
> - rte_eth_dev_close(slave_pid);
> + rte_eth_dev_close(member_pid);
> }
> }
>
> @@ -3483,8 +3483,8 @@ close_port(portid_t pid)
> {
> portid_t pi;
> struct rte_port *port;
> - portid_t slave_pids[RTE_MAX_ETHPORTS];
> - int num_slaves = 0;
> + portid_t member_pids[RTE_MAX_ETHPORTS];
> + int num_members = 0;
>
> if (port_id_is_invalid(pid, ENABLED_WARN))
> return;
> @@ -3502,7 +3502,7 @@ close_port(portid_t pid)
> continue;
> }
>
> - if (port_is_bonding_slave(pi)) {
> + if (port_is_bonding_member(pi)) {
> fprintf(stderr,
> "Please remove port %d from bonded device.\n",
> pi);
> @@ -3519,17 +3519,17 @@ close_port(portid_t pid)
> flush_port_owned_resources(pi);
> #ifdef RTE_NET_BOND
> if (port->bond_flag == 1)
> - num_slaves = rte_eth_bond_slaves_get(pi,
> - slave_pids, RTE_MAX_ETHPORTS);
> + num_members = rte_eth_bond_members_get(pi,
> + member_pids, RTE_MAX_ETHPORTS);
> #endif
> rte_eth_dev_close(pi);
> /*
> - * If this port is bonded device, all slaves under the
> + * If this port is bonded device, all members under the
> * device need to be removed or closed.
> */
> - if (port->bond_flag == 1 && num_slaves > 0)
> - clear_bonding_slave_device(slave_pids,
> - num_slaves);
> + if (port->bond_flag == 1 && num_members > 0)
> + clear_bonding_member_device(member_pids,
> + num_members);
> }
>
> free_xstats_display_info(pi);
> @@ -3569,7 +3569,7 @@ reset_port(portid_t pid)
> continue;
> }
>
> - if (port_is_bonding_slave(pi)) {
> + if (port_is_bonding_member(pi)) {
> fprintf(stderr,
> "Please remove port %d from bonded device.\n",
> pi);
> @@ -4217,38 +4217,39 @@ init_port_config(void)
> }
> }
>
> -void set_port_slave_flag(portid_t slave_pid)
> +void set_port_member_flag(portid_t member_pid)
> {
> struct rte_port *port;
>
> - port = &ports[slave_pid];
> - port->slave_flag = 1;
> + port = &ports[member_pid];
> + port->member_flag = 1;
> }
>
> -void clear_port_slave_flag(portid_t slave_pid)
> +void clear_port_member_flag(portid_t member_pid)
> {
> struct rte_port *port;
>
> - port = &ports[slave_pid];
> - port->slave_flag = 0;
> + port = &ports[member_pid];
> + port->member_flag = 0;
> }
>
> -uint8_t port_is_bonding_slave(portid_t slave_pid)
> +uint8_t port_is_bonding_member(portid_t member_pid)
> {
> struct rte_port *port;
> struct rte_eth_dev_info dev_info;
> int ret;
>
> - port = &ports[slave_pid];
> - ret = eth_dev_info_get_print_err(slave_pid, &dev_info);
> + port = &ports[member_pid];
> + ret = eth_dev_info_get_print_err(member_pid, &dev_info);
> if (ret != 0) {
> TESTPMD_LOG(ERR,
> "Failed to get device info for port id %d,"
> - "cannot determine if the port is a bonded slave",
> - slave_pid);
> + "cannot determine if the port is a bonded member",
> + member_pid);
> return 0;
> }
> - if ((*dev_info.dev_flags & RTE_ETH_DEV_BONDING_MEMBER) || (port->slave_flag == 1))
> +
> + if ((*dev_info.dev_flags & RTE_ETH_DEV_BONDING_MEMBER) || (port->member_flag == 1))
> return 1;
> return 0;
> }
> diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> index f1df6a8faf..888e30367f 100644
> --- a/app/test-pmd/testpmd.h
> +++ b/app/test-pmd/testpmd.h
> @@ -337,7 +337,7 @@ struct rte_port {
> uint32_t mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
> queueid_t queue_nb; /**< nb. of queues for flow rules */
> uint32_t queue_sz; /**< size of a queue for flow rules */
> - uint8_t slave_flag : 1, /**< bonding slave port */
> + uint8_t member_flag : 1, /**< bonding member port */
> bond_flag : 1, /**< port is bond device */
> fwd_mac_swap : 1, /**< swap packet MAC before forward */
> update_conf : 1; /**< need to update bonding device configuration */
> @@ -1107,9 +1107,9 @@ void stop_packet_forwarding(void);
> void dev_set_link_up(portid_t pid);
> void dev_set_link_down(portid_t pid);
> void init_port_config(void);
> -void set_port_slave_flag(portid_t slave_pid);
> -void clear_port_slave_flag(portid_t slave_pid);
> -uint8_t port_is_bonding_slave(portid_t slave_pid);
> +void set_port_member_flag(portid_t member_pid);
> +void clear_port_member_flag(portid_t member_pid);
> +uint8_t port_is_bonding_member(portid_t member_pid);
>
> int init_port_dcb_config(portid_t pid, enum dcb_mode_enable dcb_mode,
> enum rte_eth_nb_tcs num_tcs,
> diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
> index 2f46e4c6ee..8dceb14ed0 100644
> --- a/app/test/test_link_bonding.c
> +++ b/app/test/test_link_bonding.c
> @@ -59,13 +59,13 @@
> #define INVALID_BONDING_MODE (-1)
>
>
> -uint8_t slave_mac[] = {0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 };
> +uint8_t member_mac[] = {0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 };
> uint8_t bonded_mac[] = {0xAA, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF };
>
> struct link_bonding_unittest_params {
> int16_t bonded_port_id;
> - int16_t slave_port_ids[TEST_MAX_NUMBER_OF_PORTS];
> - uint16_t bonded_slave_count;
> + int16_t member_port_ids[TEST_MAX_NUMBER_OF_PORTS];
> + uint16_t bonded_member_count;
> uint8_t bonding_mode;
>
> uint16_t nb_rx_q;
> @@ -73,7 +73,7 @@ struct link_bonding_unittest_params {
>
> struct rte_mempool *mbuf_pool;
>
> - struct rte_ether_addr *default_slave_mac;
> + struct rte_ether_addr *default_member_mac;
> struct rte_ether_addr *default_bonded_mac;
>
> /* Packet Headers */
> @@ -90,8 +90,8 @@ static struct rte_udp_hdr pkt_udp_hdr;
>
> static struct link_bonding_unittest_params default_params = {
> .bonded_port_id = -1,
> - .slave_port_ids = { -1 },
> - .bonded_slave_count = 0,
> + .member_port_ids = { -1 },
> + .bonded_member_count = 0,
> .bonding_mode = BONDING_MODE_ROUND_ROBIN,
>
> .nb_rx_q = 1,
> @@ -99,7 +99,7 @@ static struct link_bonding_unittest_params default_params = {
>
> .mbuf_pool = NULL,
>
> - .default_slave_mac = (struct rte_ether_addr *)slave_mac,
> + .default_member_mac = (struct rte_ether_addr *)member_mac,
> .default_bonded_mac = (struct rte_ether_addr *)bonded_mac,
>
> .pkt_eth_hdr = NULL,
> @@ -202,8 +202,8 @@ configure_ethdev(uint16_t port_id, uint8_t start, uint8_t en_isr)
> return 0;
> }
>
> -static int slaves_initialized;
> -static int mac_slaves_initialized;
> +static int members_initialized;
> +static int mac_members_initialized;
>
> static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
> static pthread_cond_t cvar = PTHREAD_COND_INITIALIZER;
> @@ -213,7 +213,7 @@ static int
> test_setup(void)
> {
> int i, nb_mbuf_per_pool;
> - struct rte_ether_addr *mac_addr = (struct rte_ether_addr *)slave_mac;
> + struct rte_ether_addr *mac_addr = (struct rte_ether_addr *)member_mac;
>
> /* Allocate ethernet packet header with space for VLAN header */
> if (test_params->pkt_eth_hdr == NULL) {
> @@ -235,7 +235,7 @@ test_setup(void)
> }
>
> /* Create / Initialize virtual eth devs */
> - if (!slaves_initialized) {
> + if (!members_initialized) {
> for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++) {
> char pmd_name[RTE_ETH_NAME_MAX_LEN];
>
> @@ -243,16 +243,16 @@ test_setup(void)
>
> snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_%d", i);
>
> - test_params->slave_port_ids[i] = virtual_ethdev_create(pmd_name,
> + test_params->member_port_ids[i] = virtual_ethdev_create(pmd_name,
> mac_addr, rte_socket_id(), 1);
> - TEST_ASSERT(test_params->slave_port_ids[i] >= 0,
> + TEST_ASSERT(test_params->member_port_ids[i] >= 0,
> "Failed to create virtual virtual ethdev %s", pmd_name);
>
> TEST_ASSERT_SUCCESS(configure_ethdev(
> - test_params->slave_port_ids[i], 1, 0),
> + test_params->member_port_ids[i], 1, 0),
> "Failed to configure virtual ethdev %s", pmd_name);
> }
> - slaves_initialized = 1;
> + members_initialized = 1;
> }
>
> return 0;
> @@ -261,9 +261,9 @@ test_setup(void)
> static int
> test_create_bonded_device(void)
> {
> - int current_slave_count;
> + int current_member_count;
>
> - uint16_t slaves[RTE_MAX_ETHPORTS];
> + uint16_t members[RTE_MAX_ETHPORTS];
>
> /* Don't try to recreate bonded device if re-running test suite*/
> if (test_params->bonded_port_id == -1) {
> @@ -281,19 +281,19 @@ test_create_bonded_device(void)
> test_params->bonding_mode), "Failed to set ethdev %d to mode %d",
> test_params->bonded_port_id, test_params->bonding_mode);
>
> - current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
> - slaves, RTE_MAX_ETHPORTS);
> + current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
> + members, RTE_MAX_ETHPORTS);
>
> - TEST_ASSERT_EQUAL(current_slave_count, 0,
> - "Number of slaves %d is great than expected %d.",
> - current_slave_count, 0);
> + TEST_ASSERT_EQUAL(current_member_count, 0,
> + "Number of members %d is great than expected %d.",
> + current_member_count, 0);
>
> - current_slave_count = rte_eth_bond_active_slaves_get(
> - test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
> + current_member_count = rte_eth_bond_active_members_get(
> + test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
>
> - TEST_ASSERT_EQUAL(current_slave_count, 0,
> - "Number of active slaves %d is great than expected %d.",
> - current_slave_count, 0);
> + TEST_ASSERT_EQUAL(current_member_count, 0,
> + "Number of active members %d is great than expected %d.",
> + current_member_count, 0);
>
> return 0;
> }
> @@ -329,46 +329,46 @@ test_create_bonded_device_with_invalid_params(void)
> }
>
> static int
> -test_add_slave_to_bonded_device(void)
> +test_add_member_to_bonded_device(void)
> {
> - int current_slave_count;
> + int current_member_count;
>
> - uint16_t slaves[RTE_MAX_ETHPORTS];
> + uint16_t members[RTE_MAX_ETHPORTS];
>
> - TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
> - test_params->slave_port_ids[test_params->bonded_slave_count]),
> - "Failed to add slave (%d) to bonded port (%d).",
> - test_params->slave_port_ids[test_params->bonded_slave_count],
> + TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params->bonded_port_id,
> + test_params->member_port_ids[test_params->bonded_member_count]),
> + "Failed to add member (%d) to bonded port (%d).",
> + test_params->member_port_ids[test_params->bonded_member_count],
> test_params->bonded_port_id);
>
> - current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
> - slaves, RTE_MAX_ETHPORTS);
> - TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count + 1,
> - "Number of slaves (%d) is greater than expected (%d).",
> - current_slave_count, test_params->bonded_slave_count + 1);
> + current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
> + members, RTE_MAX_ETHPORTS);
> + TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count + 1,
> + "Number of members (%d) is greater than expected (%d).",
> + current_member_count, test_params->bonded_member_count + 1);
>
> - current_slave_count = rte_eth_bond_active_slaves_get(
> - test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
> - TEST_ASSERT_EQUAL(current_slave_count, 0,
> - "Number of active slaves (%d) is not as expected (%d).\n",
> - current_slave_count, 0);
> + current_member_count = rte_eth_bond_active_members_get(
> + test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
> + TEST_ASSERT_EQUAL(current_member_count, 0,
> + "Number of active members (%d) is not as expected (%d).\n",
> + current_member_count, 0);
>
> - test_params->bonded_slave_count++;
> + test_params->bonded_member_count++;
>
> return 0;
> }
>
> static int
> -test_add_slave_to_invalid_bonded_device(void)
> +test_add_member_to_invalid_bonded_device(void)
> {
> /* Invalid port ID */
> - TEST_ASSERT_FAIL(rte_eth_bond_slave_add(test_params->bonded_port_id + 5,
> - test_params->slave_port_ids[test_params->bonded_slave_count]),
> + TEST_ASSERT_FAIL(rte_eth_bond_member_add(test_params->bonded_port_id + 5,
> + test_params->member_port_ids[test_params->bonded_member_count]),
> "Expected call to failed as invalid port specified.");
>
> /* Non bonded device */
> - TEST_ASSERT_FAIL(rte_eth_bond_slave_add(test_params->slave_port_ids[0],
> - test_params->slave_port_ids[test_params->bonded_slave_count]),
> + TEST_ASSERT_FAIL(rte_eth_bond_member_add(test_params->member_port_ids[0],
> + test_params->member_port_ids[test_params->bonded_member_count]),
> "Expected call to failed as invalid port specified.");
>
> return 0;
> @@ -376,63 +376,63 @@ test_add_slave_to_invalid_bonded_device(void)
>
>
> static int
> -test_remove_slave_from_bonded_device(void)
> +test_remove_member_from_bonded_device(void)
> {
> - int current_slave_count;
> + int current_member_count;
> struct rte_ether_addr read_mac_addr, *mac_addr;
> - uint16_t slaves[RTE_MAX_ETHPORTS];
> + uint16_t members[RTE_MAX_ETHPORTS];
>
> - TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(test_params->bonded_port_id,
> - test_params->slave_port_ids[test_params->bonded_slave_count-1]),
> - "Failed to remove slave %d from bonded port (%d).",
> - test_params->slave_port_ids[test_params->bonded_slave_count-1],
> + TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(test_params->bonded_port_id,
> + test_params->member_port_ids[test_params->bonded_member_count-1]),
> + "Failed to remove member %d from bonded port (%d).",
> + test_params->member_port_ids[test_params->bonded_member_count-1],
> test_params->bonded_port_id);
>
>
> - current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
> - slaves, RTE_MAX_ETHPORTS);
> + current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
> + members, RTE_MAX_ETHPORTS);
>
> - TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count - 1,
> - "Number of slaves (%d) is great than expected (%d).\n",
> - current_slave_count, test_params->bonded_slave_count - 1);
> + TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count - 1,
> + "Number of members (%d) is great than expected (%d).\n",
> + current_member_count, test_params->bonded_member_count - 1);
>
>
> - mac_addr = (struct rte_ether_addr *)slave_mac;
> + mac_addr = (struct rte_ether_addr *)member_mac;
> mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] =
> - test_params->bonded_slave_count-1;
> + test_params->bonded_member_count-1;
>
> TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(
> - test_params->slave_port_ids[test_params->bonded_slave_count-1],
> + test_params->member_port_ids[test_params->bonded_member_count-1],
> &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[test_params->bonded_slave_count-1]);
> + test_params->member_port_ids[test_params->bonded_member_count-1]);
> TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr, sizeof(read_mac_addr)),
> "bonded port mac address not set to that of primary port\n");
>
> rte_eth_stats_reset(
> - test_params->slave_port_ids[test_params->bonded_slave_count-1]);
> + test_params->member_port_ids[test_params->bonded_member_count-1]);
>
> virtual_ethdev_simulate_link_status_interrupt(test_params->bonded_port_id,
> 0);
>
> - test_params->bonded_slave_count--;
> + test_params->bonded_member_count--;
>
> return 0;
> }
>
> static int
> -test_remove_slave_from_invalid_bonded_device(void)
> +test_remove_member_from_invalid_bonded_device(void)
> {
> /* Invalid port ID */
> - TEST_ASSERT_FAIL(rte_eth_bond_slave_remove(
> + TEST_ASSERT_FAIL(rte_eth_bond_member_remove(
> test_params->bonded_port_id + 5,
> - test_params->slave_port_ids[test_params->bonded_slave_count - 1]),
> + test_params->member_port_ids[test_params->bonded_member_count - 1]),
> "Expected call to failed as invalid port specified.");
>
> /* Non bonded device */
> - TEST_ASSERT_FAIL(rte_eth_bond_slave_remove(
> - test_params->slave_port_ids[0],
> - test_params->slave_port_ids[test_params->bonded_slave_count - 1]),
> + TEST_ASSERT_FAIL(rte_eth_bond_member_remove(
> + test_params->member_port_ids[0],
> + test_params->member_port_ids[test_params->bonded_member_count - 1]),
> "Expected call to failed as invalid port specified.");
>
> return 0;
> @@ -441,19 +441,19 @@ test_remove_slave_from_invalid_bonded_device(void)
> static int bonded_id = 2;
>
> static int
> -test_add_already_bonded_slave_to_bonded_device(void)
> +test_add_already_bonded_member_to_bonded_device(void)
> {
> - int port_id, current_slave_count;
> - uint16_t slaves[RTE_MAX_ETHPORTS];
> + int port_id, current_member_count;
> + uint16_t members[RTE_MAX_ETHPORTS];
> char pmd_name[RTE_ETH_NAME_MAX_LEN];
>
> - test_add_slave_to_bonded_device();
> + test_add_member_to_bonded_device();
>
> - current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
> - slaves, RTE_MAX_ETHPORTS);
> - TEST_ASSERT_EQUAL(current_slave_count, 1,
> - "Number of slaves (%d) is not that expected (%d).",
> - current_slave_count, 1);
> + current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
> + members, RTE_MAX_ETHPORTS);
> + TEST_ASSERT_EQUAL(current_member_count, 1,
> + "Number of members (%d) is not that expected (%d).",
> + current_member_count, 1);
>
> snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN, "%s_%d", BONDED_DEV_NAME, ++bonded_id);
>
> @@ -461,93 +461,93 @@ test_add_already_bonded_slave_to_bonded_device(void)
> rte_socket_id());
> TEST_ASSERT(port_id >= 0, "Failed to create bonded device.");
>
> - TEST_ASSERT(rte_eth_bond_slave_add(port_id,
> - test_params->slave_port_ids[test_params->bonded_slave_count - 1])
> + TEST_ASSERT(rte_eth_bond_member_add(port_id,
> + test_params->member_port_ids[test_params->bonded_member_count - 1])
> < 0,
> - "Added slave (%d) to bonded port (%d) unexpectedly.",
> - test_params->slave_port_ids[test_params->bonded_slave_count-1],
> + "Added member (%d) to bonded port (%d) unexpectedly.",
> + test_params->member_port_ids[test_params->bonded_member_count-1],
> port_id);
>
> - return test_remove_slave_from_bonded_device();
> + return test_remove_member_from_bonded_device();
> }
>
>
> static int
> -test_get_slaves_from_bonded_device(void)
> +test_get_members_from_bonded_device(void)
> {
> - int current_slave_count;
> - uint16_t slaves[RTE_MAX_ETHPORTS];
> + int current_member_count;
> + uint16_t members[RTE_MAX_ETHPORTS];
>
> - TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
> - "Failed to add slave to bonded device");
> + TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
> + "Failed to add member to bonded device");
>
> /* Invalid port id */
> - current_slave_count = rte_eth_bond_slaves_get(INVALID_PORT_ID, slaves,
> + current_member_count = rte_eth_bond_members_get(INVALID_PORT_ID, members,
> RTE_MAX_ETHPORTS);
> - TEST_ASSERT(current_slave_count < 0,
> + TEST_ASSERT(current_member_count < 0,
> "Invalid port id unexpectedly succeeded");
>
> - current_slave_count = rte_eth_bond_active_slaves_get(INVALID_PORT_ID,
> - slaves, RTE_MAX_ETHPORTS);
> - TEST_ASSERT(current_slave_count < 0,
> + current_member_count = rte_eth_bond_active_members_get(INVALID_PORT_ID,
> + members, RTE_MAX_ETHPORTS);
> + TEST_ASSERT(current_member_count < 0,
> "Invalid port id unexpectedly succeeded");
>
> - /* Invalid slaves pointer */
> - current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
> + /* Invalid members pointer */
> + current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
> NULL, RTE_MAX_ETHPORTS);
> - TEST_ASSERT(current_slave_count < 0,
> - "Invalid slave array unexpectedly succeeded");
> + TEST_ASSERT(current_member_count < 0,
> + "Invalid member array unexpectedly succeeded");
>
> - current_slave_count = rte_eth_bond_active_slaves_get(
> + current_member_count = rte_eth_bond_active_members_get(
> test_params->bonded_port_id, NULL, RTE_MAX_ETHPORTS);
> - TEST_ASSERT(current_slave_count < 0,
> - "Invalid slave array unexpectedly succeeded");
> + TEST_ASSERT(current_member_count < 0,
> + "Invalid member array unexpectedly succeeded");
>
> /* non bonded device*/
> - current_slave_count = rte_eth_bond_slaves_get(
> - test_params->slave_port_ids[0], NULL, RTE_MAX_ETHPORTS);
> - TEST_ASSERT(current_slave_count < 0,
> + current_member_count = rte_eth_bond_members_get(
> + test_params->member_port_ids[0], NULL, RTE_MAX_ETHPORTS);
> + TEST_ASSERT(current_member_count < 0,
> "Invalid port id unexpectedly succeeded");
>
> - current_slave_count = rte_eth_bond_active_slaves_get(
> - test_params->slave_port_ids[0], NULL, RTE_MAX_ETHPORTS);
> - TEST_ASSERT(current_slave_count < 0,
> + current_member_count = rte_eth_bond_active_members_get(
> + test_params->member_port_ids[0], NULL, RTE_MAX_ETHPORTS);
> + TEST_ASSERT(current_member_count < 0,
> "Invalid port id unexpectedly succeeded");
>
> - TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
> - "Failed to remove slaves from bonded device");
> + TEST_ASSERT_SUCCESS(test_remove_member_from_bonded_device(),
> + "Failed to remove members from bonded device");
>
> return 0;
> }
>
>
> static int
> -test_add_remove_multiple_slaves_to_from_bonded_device(void)
> +test_add_remove_multiple_members_to_from_bonded_device(void)
> {
> int i;
>
> for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++)
> - TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
> - "Failed to add slave to bonded device");
> + TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
> + "Failed to add member to bonded device");
>
> for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++)
> - TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
> - "Failed to remove slaves from bonded device");
> + TEST_ASSERT_SUCCESS(test_remove_member_from_bonded_device(),
> + "Failed to remove members from bonded device");
>
> return 0;
> }
>
> static void
> -enable_bonded_slaves(void)
> +enable_bonded_members(void)
> {
> int i;
>
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> - virtual_ethdev_tx_burst_fn_set_success(test_params->slave_port_ids[i],
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> + virtual_ethdev_tx_burst_fn_set_success(test_params->member_port_ids[i],
> 1);
>
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[i], 1);
> + test_params->member_port_ids[i], 1);
> }
> }
>
> @@ -556,34 +556,36 @@ test_start_bonded_device(void)
> {
> struct rte_eth_link link_status;
>
> - int current_slave_count, current_bonding_mode, primary_port;
> - uint16_t slaves[RTE_MAX_ETHPORTS];
> + int current_member_count, current_bonding_mode, primary_port;
> + uint16_t members[RTE_MAX_ETHPORTS];
> int retval;
>
> - /* Add slave to bonded device*/
> - TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
> - "Failed to add slave to bonded device");
> + /* Add member to bonded device*/
> + TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
> + "Failed to add member to bonded device");
>
> TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params->bonded_port_id),
> "Failed to start bonded pmd eth device %d.",
> test_params->bonded_port_id);
>
> - /* Change link status of virtual pmd so it will be added to the active
> - * slave list of the bonded device*/
> + /*
> + * Change link status of virtual pmd so it will be added to the active
> + * member list of the bonded device.
> + */
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[test_params->bonded_slave_count-1], 1);
> + test_params->member_port_ids[test_params->bonded_member_count-1], 1);
>
> - current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
> - slaves, RTE_MAX_ETHPORTS);
> - TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
> - "Number of slaves (%d) is not expected value (%d).",
> - current_slave_count, test_params->bonded_slave_count);
> + current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
> + members, RTE_MAX_ETHPORTS);
> + TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count,
> + "Number of members (%d) is not expected value (%d).",
> + current_member_count, test_params->bonded_member_count);
>
> - current_slave_count = rte_eth_bond_active_slaves_get(
> - test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
> - TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
> - "Number of active slaves (%d) is not expected value (%d).",
> - current_slave_count, test_params->bonded_slave_count);
> + current_member_count = rte_eth_bond_active_members_get(
> + test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
> + TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count,
> + "Number of active members (%d) is not expected value (%d).",
> + current_member_count, test_params->bonded_member_count);
>
> current_bonding_mode = rte_eth_bond_mode_get(test_params->bonded_port_id);
> TEST_ASSERT_EQUAL(current_bonding_mode, test_params->bonding_mode,
> @@ -591,9 +593,9 @@ test_start_bonded_device(void)
> current_bonding_mode, test_params->bonding_mode);
>
> primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
> - TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
> + TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[0],
> "Primary port (%d) is not expected value (%d).",
> - primary_port, test_params->slave_port_ids[0]);
> + primary_port, test_params->member_port_ids[0]);
>
> retval = rte_eth_link_get(test_params->bonded_port_id, &link_status);
> TEST_ASSERT(retval >= 0,
> @@ -609,8 +611,8 @@ test_start_bonded_device(void)
> static int
> test_stop_bonded_device(void)
> {
> - int current_slave_count;
> - uint16_t slaves[RTE_MAX_ETHPORTS];
> + int current_member_count;
> + uint16_t members[RTE_MAX_ETHPORTS];
>
> struct rte_eth_link link_status;
> int retval;
> @@ -627,29 +629,29 @@ test_stop_bonded_device(void)
> "Bonded port (%d) status (%d) is not expected value (%d).",
> test_params->bonded_port_id, link_status.link_status, 0);
>
> - current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
> - slaves, RTE_MAX_ETHPORTS);
> - TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
> - "Number of slaves (%d) is not expected value (%d).",
> - current_slave_count, test_params->bonded_slave_count);
> + current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
> + members, RTE_MAX_ETHPORTS);
> + TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count,
> + "Number of members (%d) is not expected value (%d).",
> + current_member_count, test_params->bonded_member_count);
>
> - current_slave_count = rte_eth_bond_active_slaves_get(
> - test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
> - TEST_ASSERT_EQUAL(current_slave_count, 0,
> - "Number of active slaves (%d) is not expected value (%d).",
> - current_slave_count, 0);
> + current_member_count = rte_eth_bond_active_members_get(
> + test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
> + TEST_ASSERT_EQUAL(current_member_count, 0,
> + "Number of active members (%d) is not expected value (%d).",
> + current_member_count, 0);
>
> return 0;
> }
>
> static int
> -remove_slaves_and_stop_bonded_device(void)
> +remove_members_and_stop_bonded_device(void)
> {
> - /* Clean up and remove slaves from bonded device */
> + /* Clean up and remove members from bonded device */
> free_virtualpmd_tx_queue();
> - while (test_params->bonded_slave_count > 0)
> - TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
> - "test_remove_slave_from_bonded_device failed");
> + while (test_params->bonded_member_count > 0)
> + TEST_ASSERT_SUCCESS(test_remove_member_from_bonded_device(),
> + "test_remove_member_from_bonded_device failed");
>
> TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
> "Failed to stop bonded port %u",
> @@ -681,10 +683,10 @@ test_set_bonding_mode(void)
> INVALID_PORT_ID);
>
> /* Non bonded device */
> - TEST_ASSERT_FAIL(rte_eth_bond_mode_set(test_params->slave_port_ids[0],
> + TEST_ASSERT_FAIL(rte_eth_bond_mode_set(test_params->member_port_ids[0],
> bonding_modes[i]),
> "Expected call to failed as invalid port (%d) specified.",
> - test_params->slave_port_ids[0]);
> + test_params->member_port_ids[0]);
>
> TEST_ASSERT_SUCCESS(rte_eth_bond_mode_set(test_params->bonded_port_id,
> bonding_modes[i]),
> @@ -704,26 +706,26 @@ test_set_bonding_mode(void)
> INVALID_PORT_ID);
>
> /* Non bonded device */
> - bonding_mode = rte_eth_bond_mode_get(test_params->slave_port_ids[0]);
> + bonding_mode = rte_eth_bond_mode_get(test_params->member_port_ids[0]);
> TEST_ASSERT(bonding_mode < 0,
> "Expected call to failed as invalid port (%d) specified.",
> - test_params->slave_port_ids[0]);
> + test_params->member_port_ids[0]);
> }
>
> - return remove_slaves_and_stop_bonded_device();
> + return remove_members_and_stop_bonded_device();
> }
>
> static int
> -test_set_primary_slave(void)
> +test_set_primary_member(void)
> {
> int i, j, retval;
> struct rte_ether_addr read_mac_addr;
> struct rte_ether_addr *expected_mac_addr;
>
> - /* Add 4 slaves to bonded device */
> - for (i = test_params->bonded_slave_count; i < 4; i++)
> - TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
> - "Failed to add slave to bonded device.");
> + /* Add 4 members to bonded device */
> + for (i = test_params->bonded_member_count; i < 4; i++)
> + TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
> + "Failed to add member to bonded device.");
>
> TEST_ASSERT_SUCCESS(rte_eth_bond_mode_set(test_params->bonded_port_id,
> BONDING_MODE_ROUND_ROBIN),
> @@ -732,34 +734,34 @@ test_set_primary_slave(void)
>
> /* Invalid port ID */
> TEST_ASSERT_FAIL(rte_eth_bond_primary_set(INVALID_PORT_ID,
> - test_params->slave_port_ids[i]),
> + test_params->member_port_ids[i]),
> "Expected call to failed as invalid port specified.");
>
> /* Non bonded device */
> - TEST_ASSERT_FAIL(rte_eth_bond_primary_set(test_params->slave_port_ids[i],
> - test_params->slave_port_ids[i]),
> + TEST_ASSERT_FAIL(rte_eth_bond_primary_set(test_params->member_port_ids[i],
> + test_params->member_port_ids[i]),
> "Expected call to failed as invalid port specified.");
>
> - /* Set slave as primary
> - * Verify slave it is now primary slave
> - * Verify that MAC address of bonded device is that of primary slave
> - * Verify that MAC address of all bonded slaves are that of primary slave
> + /* Set member as primary
> + * Verify the member is now the primary member
> + * Verify that MAC address of bonded device is that of primary member
> + * Verify that MAC address of all bonded members are that of primary member
> */
> for (i = 0; i < 4; i++) {
> TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
> - test_params->slave_port_ids[i]),
> + test_params->member_port_ids[i]),
> "Failed to set bonded port (%d) primary port to (%d)",
> - test_params->bonded_port_id, test_params->slave_port_ids[i]);
> + test_params->bonded_port_id, test_params->member_port_ids[i]);
>
> retval = rte_eth_bond_primary_get(test_params->bonded_port_id);
> TEST_ASSERT(retval >= 0,
> "Failed to read primary port from bonded port (%d)\n",
> test_params->bonded_port_id);
>
> - TEST_ASSERT_EQUAL(retval, test_params->slave_port_ids[i],
> + TEST_ASSERT_EQUAL(retval, test_params->member_port_ids[i],
> "Bonded port (%d) primary port (%d) not expected value (%d)\n",
> test_params->bonded_port_id, retval,
> - test_params->slave_port_ids[i]);
> + test_params->member_port_ids[i]);
>
> /* stop/start bonded eth dev to apply new MAC */
> TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
> @@ -770,13 +772,14 @@ test_set_primary_slave(void)
> "Failed to start bonded port %d",
> test_params->bonded_port_id);
>
> - expected_mac_addr = (struct rte_ether_addr *)&slave_mac;
> + expected_mac_addr = (struct rte_ether_addr *)&member_mac;
> expected_mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] = i;
>
> - /* Check primary slave MAC */
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
> + /* Check primary member MAC */
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
> + &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[i]);
> + test_params->member_port_ids[i]);
> TEST_ASSERT_SUCCESS(memcmp(expected_mac_addr, &read_mac_addr,
> sizeof(read_mac_addr)),
> "bonded port mac address not set to that of primary port\n");
> @@ -789,16 +792,17 @@ test_set_primary_slave(void)
> sizeof(read_mac_addr)),
> "bonded port mac address not set to that of primary port\n");
>
> - /* Check other slaves MACs */
> + /* Check other members MACs */
> for (j = 0; j < 4; j++) {
> if (j != i) {
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[j],
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(
> + test_params->member_port_ids[j],
> &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[j]);
> + test_params->member_port_ids[j]);
> TEST_ASSERT_SUCCESS(memcmp(expected_mac_addr, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port mac address not set to that of primary "
> + "member port mac address not set to that of primary "
> "port");
> }
> }
> @@ -809,14 +813,14 @@ test_set_primary_slave(void)
> TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->bonded_port_id + 10),
> "read primary port from expectedly");
>
> - /* Test with slave port */
> - TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->slave_port_ids[0]),
> + /* Test with member port */
> + TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->member_port_ids[0]),
> "read primary port from expectedly\n");
>
> - TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
> - "Failed to stop and remove slaves from bonded device");
> + TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
> + "Failed to stop and remove members from bonded device");
>
> - /* No slaves */
> + /* No members */
> TEST_ASSERT(rte_eth_bond_primary_get(test_params->bonded_port_id) < 0,
> "read primary port from expectedly\n");
>
> @@ -840,7 +844,7 @@ test_set_explicit_bonded_mac(void)
>
> /* Non bonded device */
> TEST_ASSERT_FAIL(rte_eth_bond_mac_address_set(
> - test_params->slave_port_ids[0], mac_addr),
> + test_params->member_port_ids[0], mac_addr),
> "Expected call to failed as invalid port specified.");
>
> /* NULL MAC address */
> @@ -853,10 +857,10 @@ test_set_explicit_bonded_mac(void)
> "Failed to set MAC address on bonded port (%d)",
> test_params->bonded_port_id);
>
> - /* Add 4 slaves to bonded device */
> - for (i = test_params->bonded_slave_count; i < 4; i++) {
> - TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
> - "Failed to add slave to bonded device.\n");
> + /* Add 4 members to bonded device */
> + for (i = test_params->bonded_member_count; i < 4; i++) {
> + TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
> + "Failed to add member to bonded device.\n");
> }
>
> /* Check bonded MAC */
> @@ -866,14 +870,15 @@ test_set_explicit_bonded_mac(void)
> TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr, sizeof(read_mac_addr)),
> "bonded port mac address not set to that of primary port");
>
> - /* Check other slaves MACs */
> + /* Check other members MACs */
> for (i = 0; i < 4; i++) {
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
> + &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[i]);
> + test_params->member_port_ids[i]);
> TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port mac address not set to that of primary port");
> + "member port mac address not set to that of primary port");
> }
>
> /* test resetting mac address on bonded device */
> @@ -883,13 +888,13 @@ test_set_explicit_bonded_mac(void)
> test_params->bonded_port_id);
>
> TEST_ASSERT_FAIL(
> - rte_eth_bond_mac_address_reset(test_params->slave_port_ids[0]),
> + rte_eth_bond_mac_address_reset(test_params->member_port_ids[0]),
> "Reset MAC address on bonded port (%d) unexpectedly",
> - test_params->slave_port_ids[1]);
> + test_params->member_port_ids[1]);
>
> - /* test resetting mac address on bonded device with no slaves */
> - TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
> - "Failed to remove slaves and stop bonded device");
> + /* test resetting mac address on bonded device with no members */
> + TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
> + "Failed to remove members and stop bonded device");
>
> TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_reset(test_params->bonded_port_id),
> "Failed to reset MAC address on bonded port (%d)",
> @@ -898,25 +903,25 @@ test_set_explicit_bonded_mac(void)
> return 0;
> }
>
> -#define BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT (3)
> +#define BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT (3)
>
> static int
> test_set_bonded_port_initialization_mac_assignment(void)
> {
> - int i, slave_count;
> + int i, member_count;
>
> - uint16_t slaves[RTE_MAX_ETHPORTS];
> + uint16_t members[RTE_MAX_ETHPORTS];
> static int bonded_port_id = -1;
> - static int slave_port_ids[BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT];
> + static int member_port_ids[BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT];
>
> - struct rte_ether_addr slave_mac_addr, bonded_mac_addr, read_mac_addr;
> + struct rte_ether_addr member_mac_addr, bonded_mac_addr, read_mac_addr;
>
> /* Initialize default values for MAC addresses */
> - memcpy(&slave_mac_addr, slave_mac, sizeof(struct rte_ether_addr));
> - memcpy(&bonded_mac_addr, slave_mac, sizeof(struct rte_ether_addr));
> + memcpy(&member_mac_addr, member_mac, sizeof(struct rte_ether_addr));
> + memcpy(&bonded_mac_addr, member_mac, sizeof(struct rte_ether_addr));
>
> /*
> - * 1. a - Create / configure bonded / slave ethdevs
> + * 1. a - Create / configure bonded / member ethdevs
> */
> if (bonded_port_id == -1) {
> bonded_port_id = rte_eth_bond_create("net_bonding_mac_ass_test",
> @@ -927,46 +932,46 @@ test_set_bonded_port_initialization_mac_assignment(void)
> "Failed to configure bonded ethdev");
> }
>
> - if (!mac_slaves_initialized) {
> - for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
> + if (!mac_members_initialized) {
> + for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
> char pmd_name[RTE_ETH_NAME_MAX_LEN];
>
> - slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
> + member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
> i + 100;
>
> snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN,
> - "eth_slave_%d", i);
> + "eth_member_%d", i);
>
> - slave_port_ids[i] = virtual_ethdev_create(pmd_name,
> - &slave_mac_addr, rte_socket_id(), 1);
> + member_port_ids[i] = virtual_ethdev_create(pmd_name,
> + &member_mac_addr, rte_socket_id(), 1);
>
> - TEST_ASSERT(slave_port_ids[i] >= 0,
> - "Failed to create slave ethdev %s",
> + TEST_ASSERT(member_port_ids[i] >= 0,
> + "Failed to create member ethdev %s",
> pmd_name);
>
> - TEST_ASSERT_SUCCESS(configure_ethdev(slave_port_ids[i], 1, 0),
> + TEST_ASSERT_SUCCESS(configure_ethdev(member_port_ids[i], 1, 0),
> "Failed to configure virtual ethdev %s",
> pmd_name);
> }
> - mac_slaves_initialized = 1;
> + mac_members_initialized = 1;
> }
>
>
> /*
> - * 2. Add slave ethdevs to bonded device
> + * 2. Add member ethdevs to bonded device
> */
> - for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
> - TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(bonded_port_id,
> - slave_port_ids[i]),
> - "Failed to add slave (%d) to bonded port (%d).",
> - slave_port_ids[i], bonded_port_id);
> + for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
> + TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(bonded_port_id,
> + member_port_ids[i]),
> + "Failed to add member (%d) to bonded port (%d).",
> + member_port_ids[i], bonded_port_id);
> }
>
> - slave_count = rte_eth_bond_slaves_get(bonded_port_id, slaves,
> + member_count = rte_eth_bond_members_get(bonded_port_id, members,
> RTE_MAX_ETHPORTS);
> - TEST_ASSERT_EQUAL(BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT, slave_count,
> - "Number of slaves (%d) is not as expected (%d)",
> - slave_count, BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT);
> + TEST_ASSERT_EQUAL(BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT, member_count,
> + "Number of members (%d) is not as expected (%d)",
> + member_count, BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT);
>
>
> /*
> @@ -982,16 +987,16 @@ test_set_bonded_port_initialization_mac_assignment(void)
>
>
> /* 4. a - Start bonded ethdev
> - * b - Enable slave devices
> - * c - Verify bonded/slaves ethdev MAC addresses
> + * b - Enable member devices
> + * c - Verify bonded/members ethdev MAC addresses
> */
> TEST_ASSERT_SUCCESS(rte_eth_dev_start(bonded_port_id),
> "Failed to start bonded pmd eth device %d.",
> bonded_port_id);
>
> - for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
> + for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
> virtual_ethdev_simulate_link_status_interrupt(
> - slave_port_ids[i], 1);
> + member_port_ids[i], 1);
> }
>
> TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(bonded_port_id, &read_mac_addr),
> @@ -1001,36 +1006,36 @@ test_set_bonded_port_initialization_mac_assignment(void)
> sizeof(read_mac_addr)),
> "bonded port mac address not as expected");
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[0], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - slave_port_ids[0]);
> + member_port_ids[0]);
> TEST_ASSERT_SUCCESS(memcmp(&bonded_mac_addr, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port 0 mac address not as expected");
> + "member port 0 mac address not as expected");
>
> - slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
> + member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[1], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - slave_port_ids[1]);
> - TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
> + member_port_ids[1]);
> + TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port 1 mac address not as expected");
> + "member port 1 mac address not as expected");
>
> - slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
> + member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[2], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - slave_port_ids[2]);
> - TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
> + member_port_ids[2]);
> + TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port 2 mac address not as expected");
> + "member port 2 mac address not as expected");
>
>
> /* 7. a - Change primary port
> * b - Stop / Start bonded port
> - * d - Verify slave ethdev MAC addresses
> + * d - Verify member ethdev MAC addresses
> */
> TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(bonded_port_id,
> - slave_port_ids[2]),
> + member_port_ids[2]),
> "failed to set primary port on bonded device.");
>
> TEST_ASSERT_SUCCESS(rte_eth_dev_stop(bonded_port_id),
> @@ -1048,94 +1053,94 @@ test_set_bonded_port_initialization_mac_assignment(void)
> sizeof(read_mac_addr)),
> "bonded port mac address not as expected");
>
> - slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
> + member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[0], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - slave_port_ids[0]);
> - TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
> + member_port_ids[0]);
> + TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port 0 mac address not as expected");
> + "member port 0 mac address not as expected");
>
> - slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
> + member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[1], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - slave_port_ids[1]);
> - TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
> + member_port_ids[1]);
> + TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port 1 mac address not as expected");
> + "member port 1 mac address not as expected");
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[2], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - slave_port_ids[2]);
> + member_port_ids[2]);
> TEST_ASSERT_SUCCESS(memcmp(&bonded_mac_addr, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port 2 mac address not as expected");
> + "member port 2 mac address not as expected");
>
> /* 6. a - Stop bonded ethdev
> - * b - remove slave ethdevs
> - * c - Verify slave ethdevs MACs are restored
> + * b - remove member ethdevs
> + * c - Verify member ethdevs MACs are restored
> */
> TEST_ASSERT_SUCCESS(rte_eth_dev_stop(bonded_port_id),
> "Failed to stop bonded port %u",
> bonded_port_id);
>
> - for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
> - TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(bonded_port_id,
> - slave_port_ids[i]),
> - "Failed to remove slave %d from bonded port (%d).",
> - slave_port_ids[i], bonded_port_id);
> + for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
> + TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(bonded_port_id,
> + member_port_ids[i]),
> + "Failed to remove member %d from bonded port (%d).",
> + member_port_ids[i], bonded_port_id);
> }
>
> - slave_count = rte_eth_bond_slaves_get(bonded_port_id, slaves,
> + member_count = rte_eth_bond_members_get(bonded_port_id, members,
> RTE_MAX_ETHPORTS);
>
> - TEST_ASSERT_EQUAL(slave_count, 0,
> - "Number of slaves (%d) is great than expected (%d).",
> - slave_count, 0);
> + TEST_ASSERT_EQUAL(member_count, 0,
> + "Number of members (%d) is greater than expected (%d).",
> + member_count, 0);
>
> - slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
> + member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[0], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - slave_port_ids[0]);
> - TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
> + member_port_ids[0]);
> + TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port 0 mac address not as expected");
> + "member port 0 mac address not as expected");
>
> - slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
> + member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[1], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - slave_port_ids[1]);
> - TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
> + member_port_ids[1]);
> + TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port 1 mac address not as expected");
> + "member port 1 mac address not as expected");
>
> - slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
> + member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[2], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - slave_port_ids[2]);
> - TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
> + member_port_ids[2]);
> + TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port 2 mac address not as expected");
> + "member port 2 mac address not as expected");
>
> return 0;
> }
>
>
> static int
> -initialize_bonded_device_with_slaves(uint8_t bonding_mode, uint8_t bond_en_isr,
> - uint16_t number_of_slaves, uint8_t enable_slave)
> +initialize_bonded_device_with_members(uint8_t bonding_mode, uint8_t bond_en_isr,
> + uint16_t number_of_members, uint8_t enable_member)
> {
> /* Configure bonded device */
> TEST_ASSERT_SUCCESS(configure_ethdev(test_params->bonded_port_id, 0,
> bond_en_isr), "Failed to configure bonding port (%d) in mode %d "
> - "with (%d) slaves.", test_params->bonded_port_id, bonding_mode,
> - number_of_slaves);
> -
> - /* Add slaves to bonded device */
> - while (number_of_slaves > test_params->bonded_slave_count)
> - TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
> - "Failed to add slave (%d to bonding port (%d).",
> - test_params->bonded_slave_count - 1,
> + "with (%d) members.", test_params->bonded_port_id, bonding_mode,
> + number_of_members);
> +
> + /* Add members to bonded device */
> + while (number_of_members > test_params->bonded_member_count)
> + TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
> + "Failed to add member (%d) to bonding port (%d).",
> + test_params->bonded_member_count - 1,
> test_params->bonded_port_id);
>
> /* Set link bonding mode */
> @@ -1148,40 +1153,40 @@ initialize_bonded_device_with_slaves(uint8_t bonding_mode, uint8_t bond_en_isr,
> "Failed to start bonded pmd eth device %d.",
> test_params->bonded_port_id);
>
> - if (enable_slave)
> - enable_bonded_slaves();
> + if (enable_member)
> + enable_bonded_members();
>
> return 0;
> }
>
> static int
> -test_adding_slave_after_bonded_device_started(void)
> +test_adding_member_after_bonded_device_started(void)
> {
> int i;
>
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_ROUND_ROBIN, 0, 4, 0),
> - "Failed to add slaves to bonded device");
> + "Failed to add members to bonded device");
>
> - /* Enabled slave devices */
> - for (i = 0; i < test_params->bonded_slave_count + 1; i++) {
> + /* Enabled member devices */
> + for (i = 0; i < test_params->bonded_member_count + 1; i++) {
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[i], 1);
> + test_params->member_port_ids[i], 1);
> }
>
> - TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
> - test_params->slave_port_ids[test_params->bonded_slave_count]),
> - "Failed to add slave to bonded port.\n");
> + TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params->bonded_port_id,
> + test_params->member_port_ids[test_params->bonded_member_count]),
> + "Failed to add member to bonded port.\n");
>
> rte_eth_stats_reset(
> - test_params->slave_port_ids[test_params->bonded_slave_count]);
> + test_params->member_port_ids[test_params->bonded_member_count]);
>
> - test_params->bonded_slave_count++;
> + test_params->bonded_member_count++;
>
> - return remove_slaves_and_stop_bonded_device();
> + return remove_members_and_stop_bonded_device();
> }
>
> -#define TEST_STATUS_INTERRUPT_SLAVE_COUNT 4
> +#define TEST_STATUS_INTERRUPT_MEMBER_COUNT 4
> #define TEST_LSC_WAIT_TIMEOUT_US 500000
>
> int test_lsc_interrupt_count;
> @@ -1237,13 +1242,13 @@ lsc_timeout(int wait_us)
> static int
> test_status_interrupt(void)
> {
> - int slave_count;
> - uint16_t slaves[RTE_MAX_ETHPORTS];
> + int member_count;
> + uint16_t members[RTE_MAX_ETHPORTS];
>
> - /* initialized bonding device with T slaves */
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> + /* initialized bonding device with T members */
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_ROUND_ROBIN, 1,
> - TEST_STATUS_INTERRUPT_SLAVE_COUNT, 1),
> + TEST_STATUS_INTERRUPT_MEMBER_COUNT, 1),
> "Failed to initialise bonded device");
>
> test_lsc_interrupt_count = 0;
> @@ -1253,27 +1258,27 @@ test_status_interrupt(void)
> RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
> &test_params->bonded_port_id);
>
> - slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
> - slaves, RTE_MAX_ETHPORTS);
> + member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
> + members, RTE_MAX_ETHPORTS);
>
> - TEST_ASSERT_EQUAL(slave_count, TEST_STATUS_INTERRUPT_SLAVE_COUNT,
> - "Number of active slaves (%d) is not as expected (%d)",
> - slave_count, TEST_STATUS_INTERRUPT_SLAVE_COUNT);
> + TEST_ASSERT_EQUAL(member_count, TEST_STATUS_INTERRUPT_MEMBER_COUNT,
> + "Number of active members (%d) is not as expected (%d)",
> + member_count, TEST_STATUS_INTERRUPT_MEMBER_COUNT);
>
> - /* Bring all 4 slaves link status to down and test that we have received a
> + /* Bring all 4 members link status to down and test that we have received a
> * lsc interrupts */
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[0], 0);
> + test_params->member_port_ids[0], 0);
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[1], 0);
> + test_params->member_port_ids[1], 0);
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[2], 0);
> + test_params->member_port_ids[2], 0);
>
> TEST_ASSERT_EQUAL(test_lsc_interrupt_count, 0,
> "Received a link status change interrupt unexpectedly");
>
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[3], 0);
> + test_params->member_port_ids[3], 0);
>
> TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) == 0,
> "timed out waiting for interrupt");
> @@ -1281,18 +1286,18 @@ test_status_interrupt(void)
> TEST_ASSERT(test_lsc_interrupt_count > 0,
> "Did not receive link status change interrupt");
>
> - slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
> - slaves, RTE_MAX_ETHPORTS);
> + member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
> + members, RTE_MAX_ETHPORTS);
>
> - TEST_ASSERT_EQUAL(slave_count, 0,
> - "Number of active slaves (%d) is not as expected (%d)",
> - slave_count, 0);
> + TEST_ASSERT_EQUAL(member_count, 0,
> + "Number of active members (%d) is not as expected (%d)",
> + member_count, 0);
>
> - /* bring one slave port up so link status will change */
> + /* bring one member port up so link status will change */
> test_lsc_interrupt_count = 0;
>
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[0], 1);
> + test_params->member_port_ids[0], 1);
>
> TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) == 0,
> "timed out waiting for interrupt");
> @@ -1301,12 +1306,12 @@ test_status_interrupt(void)
> TEST_ASSERT(test_lsc_interrupt_count > 0,
> "Did not receive link status change interrupt");
>
> - /* Verify that calling the same slave lsc interrupt doesn't cause another
> + /* Verify that calling the same member lsc interrupt doesn't cause another
> * lsc interrupt from bonded device */
> test_lsc_interrupt_count = 0;
>
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[0], 1);
> + test_params->member_port_ids[0], 1);
>
> TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) != 0,
> "received unexpected interrupt");
> @@ -1320,8 +1325,8 @@ test_status_interrupt(void)
> RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
> &test_params->bonded_port_id);
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> static int
> @@ -1398,11 +1403,11 @@ test_roundrobin_tx_burst(void)
> struct rte_mbuf *pkt_burst[MAX_PKT_BURST];
> struct rte_eth_stats port_stats;
>
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_ROUND_ROBIN, 0, 2, 1),
> "Failed to initialise bonded device");
>
> - burst_size = 20 * test_params->bonded_slave_count;
> + burst_size = 20 * test_params->bonded_member_count;
>
> TEST_ASSERT(burst_size <= MAX_PKT_BURST,
> "Burst size specified is greater than supported.");
> @@ -1423,20 +1428,20 @@ test_roundrobin_tx_burst(void)
> test_params->bonded_port_id, (unsigned int)port_stats.opackets,
> burst_size);
>
> - /* Verify slave ports tx stats */
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> - rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
> + /* Verify member ports tx stats */
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> + rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets,
> - (uint64_t)burst_size / test_params->bonded_slave_count,
> - "Slave Port (%d) opackets value (%u) not as expected (%d)\n",
> + (uint64_t)burst_size / test_params->bonded_member_count,
> + "Member Port (%d) opackets value (%u) not as expected (%d)\n",
> test_params->bonded_port_id, (unsigned int)port_stats.opackets,
> - burst_size / test_params->bonded_slave_count);
> + burst_size / test_params->bonded_member_count);
> }
>
> - /* Put all slaves down and try and transmit */
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> + /* Put all members down and try and transmit */
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[i], 0);
> + test_params->member_port_ids[i], 0);
> }
>
> /* Send burst on bonded port */
> @@ -1444,8 +1449,8 @@ test_roundrobin_tx_burst(void)
> pkt_burst, burst_size), 0,
> "tx burst return unexpected value");
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> static int
> @@ -1471,13 +1476,13 @@ free_mbufs(struct rte_mbuf **mbufs, int nb_mbufs)
> rte_pktmbuf_free(mbufs[i]);
> }
>
> -#define TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT (2)
> -#define TEST_RR_SLAVE_TX_FAIL_BURST_SIZE (64)
> -#define TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT (22)
> -#define TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX (1)
> +#define TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT (2)
> +#define TEST_RR_MEMBER_TX_FAIL_BURST_SIZE (64)
> +#define TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT (22)
> +#define TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX (1)
>
> static int
> -test_roundrobin_tx_burst_slave_tx_fail(void)
> +test_roundrobin_tx_burst_member_tx_fail(void)
> {
> struct rte_mbuf *pkt_burst[MAX_PKT_BURST];
> struct rte_mbuf *expected_tx_fail_pkts[MAX_PKT_BURST];
> @@ -1486,49 +1491,51 @@ test_roundrobin_tx_burst_slave_tx_fail(void)
>
> int i, first_fail_idx, tx_count;
>
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_ROUND_ROBIN, 0,
> - TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
> + TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT, 1),
> "Failed to initialise bonded device");
>
> /* Generate test bursts of packets to transmit */
> TEST_ASSERT_EQUAL(generate_test_burst(pkt_burst,
> - TEST_RR_SLAVE_TX_FAIL_BURST_SIZE, 0, 1, 0, 0, 0),
> - TEST_RR_SLAVE_TX_FAIL_BURST_SIZE,
> + TEST_RR_MEMBER_TX_FAIL_BURST_SIZE, 0, 1, 0, 0, 0),
> + TEST_RR_MEMBER_TX_FAIL_BURST_SIZE,
> "Failed to generate test packet burst");
>
> /* Copy references to packets which we expect not to be transmitted */
> - first_fail_idx = (TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
> - (TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT *
> - TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT)) +
> - TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX;
> + first_fail_idx = (TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
> + (TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT *
> + TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT)) +
> + TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX;
>
> - for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
> + for (i = 0; i < TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT; i++) {
> expected_tx_fail_pkts[i] = pkt_burst[first_fail_idx +
> - (i * TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT)];
> + (i * TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT)];
> }
>
> - /* Set virtual slave to only fail transmission of
> - * TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT packets in burst */
> + /*
> + * Set virtual member to only fail transmission of
> + * TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT packets in burst.
> + */
> virtual_ethdev_tx_burst_fn_set_success(
> - test_params->slave_port_ids[TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
> + test_params->member_port_ids[TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
> 0);
>
> virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
> - test_params->slave_port_ids[TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
> - TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
> + test_params->member_port_ids[TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
> + TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
>
> tx_count = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkt_burst,
> - TEST_RR_SLAVE_TX_FAIL_BURST_SIZE);
> + TEST_RR_MEMBER_TX_FAIL_BURST_SIZE);
>
> - TEST_ASSERT_EQUAL(tx_count, TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
> - TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT,
> + TEST_ASSERT_EQUAL(tx_count, TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
> + TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT,
> "Transmitted (%d) an unexpected (%d) number of packets", tx_count,
> - TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
> - TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
> + TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
> + TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
>
> /* Verify that failed packets are the expected failed packets */
> - for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
> + for (i = 0; i < TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT; i++) {
> TEST_ASSERT_EQUAL(expected_tx_fail_pkts[i], pkt_burst[i + tx_count],
> "expected mbuf (%d) pointer %p not expected pointer %p",
> i, expected_tx_fail_pkts[i], pkt_burst[i + tx_count]);
> @@ -1538,45 +1545,45 @@ test_roundrobin_tx_burst_slave_tx_fail(void)
> rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
>
> TEST_ASSERT_EQUAL(port_stats.opackets,
> - (uint64_t)TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
> - TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT,
> + (uint64_t)TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
> + TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT,
> "Bonded Port (%d) opackets value (%u) not as expected (%d)",
> test_params->bonded_port_id, (unsigned int)port_stats.opackets,
> - TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
> - TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
> + TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
> + TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
>
> - /* Verify slave ports tx stats */
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> - int slave_expected_tx_count;
> + /* Verify member ports tx stats */
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> + int member_expected_tx_count;
>
> - rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
>
> - slave_expected_tx_count = TEST_RR_SLAVE_TX_FAIL_BURST_SIZE /
> - test_params->bonded_slave_count;
> + member_expected_tx_count = TEST_RR_MEMBER_TX_FAIL_BURST_SIZE /
> + test_params->bonded_member_count;
>
> - if (i == TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX)
> - slave_expected_tx_count = slave_expected_tx_count -
> - TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT;
> + if (i == TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX)
> + member_expected_tx_count = member_expected_tx_count -
> + TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT;
>
> TEST_ASSERT_EQUAL(port_stats.opackets,
> - (uint64_t)slave_expected_tx_count,
> - "Slave Port (%d) opackets value (%u) not as expected (%d)",
> - test_params->slave_port_ids[i],
> - (unsigned int)port_stats.opackets, slave_expected_tx_count);
> + (uint64_t)member_expected_tx_count,
> + "Member Port (%d) opackets value (%u) not as expected (%d)",
> + test_params->member_port_ids[i],
> + (unsigned int)port_stats.opackets, member_expected_tx_count);
> }
>
> /* Verify that all mbufs have a ref value of zero */
> TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkt_burst[tx_count],
> - TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT, 1),
> + TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT, 1),
> "mbufs refcnts not as expected");
> - free_mbufs(&pkt_burst[tx_count], TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
> + free_mbufs(&pkt_burst[tx_count], TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> static int
> -test_roundrobin_rx_burst_on_single_slave(void)
> +test_roundrobin_rx_burst_on_single_member(void)
> {
> struct rte_mbuf *gen_pkt_burst[MAX_PKT_BURST] = { NULL };
> struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
> @@ -1585,19 +1592,19 @@ test_roundrobin_rx_burst_on_single_slave(void)
>
> int i, j, burst_size = 25;
>
> - /* Initialize bonded device with 4 slaves in round robin mode */
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> + /* Initialize bonded device with 4 members in round robin mode */
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
> - "Failed to initialize bonded device with slaves");
> + "Failed to initialize bonded device with members");
>
> /* Generate test bursts of packets to transmit */
> TEST_ASSERT_EQUAL(generate_test_burst(
> gen_pkt_burst, burst_size, 0, 1, 0, 0, 0), burst_size,
> "burst generation failed");
>
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> - /* Add rx data to slave */
> - virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> + /* Add rx data to member */
> + virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
> &gen_pkt_burst[0], burst_size);
>
> /* Call rx burst on bonded device */
> @@ -1616,25 +1623,25 @@ test_roundrobin_rx_burst_on_single_slave(void)
>
>
>
> - /* Verify bonded slave devices rx count */
> - /* Verify slave ports tx stats */
> - for (j = 0; j < test_params->bonded_slave_count; j++) {
> - rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
> + /* Verify bonded member devices rx count */
> + /* Verify member ports rx stats */
> + for (j = 0; j < test_params->bonded_member_count; j++) {
> + rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
>
> if (i == j) {
> TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
> - "Slave Port (%d) ipackets value (%u) not as expected"
> - " (%d)", test_params->slave_port_ids[i],
> + "Member Port (%d) ipackets value (%u) not as expected"
> + " (%d)", test_params->member_port_ids[i],
> (unsigned int)port_stats.ipackets, burst_size);
> } else {
> TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
> - "Slave Port (%d) ipackets value (%u) not as expected"
> - " (%d)", test_params->slave_port_ids[i],
> + "Member Port (%d) ipackets value (%u) not as expected"
> + " (%d)", test_params->member_port_ids[i],
> (unsigned int)port_stats.ipackets, 0);
> }
>
> - /* Reset bonded slaves stats */
> - rte_eth_stats_reset(test_params->slave_port_ids[j]);
> + /* Reset bonded members stats */
> + rte_eth_stats_reset(test_params->member_port_ids[j]);
> }
> /* reset bonded device stats */
> rte_eth_stats_reset(test_params->bonded_port_id);
> @@ -1646,38 +1653,38 @@ test_roundrobin_rx_burst_on_single_slave(void)
> }
>
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> -#define TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT (3)
> +#define TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT (3)
>
> static int
> -test_roundrobin_rx_burst_on_multiple_slaves(void)
> +test_roundrobin_rx_burst_on_multiple_members(void)
> {
> - struct rte_mbuf *gen_pkt_burst[TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
> + struct rte_mbuf *gen_pkt_burst[TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
>
> struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
> struct rte_eth_stats port_stats;
>
> - int burst_size[TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT] = { 15, 13, 36 };
> + int burst_size[TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT] = { 15, 13, 36 };
> int i, nb_rx;
>
> - /* Initialize bonded device with 4 slaves in round robin mode */
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> + /* Initialize bonded device with 4 members in round robin mode */
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
> - "Failed to initialize bonded device with slaves");
> + "Failed to initialize bonded device with members");
>
> /* Generate test bursts of packets to transmit */
> - for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT; i++) {
> + for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT; i++) {
> TEST_ASSERT_EQUAL(generate_test_burst(
> &gen_pkt_burst[i][0], burst_size[i], 0, 1, 0, 0, 0),
> burst_size[i], "burst generation failed");
> }
>
> - /* Add rx data to slaves */
> - for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT; i++) {
> - virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
> + /* Add rx data to members */
> + for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT; i++) {
> + virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
> &gen_pkt_burst[i][0], burst_size[i]);
> }
>
> @@ -1697,29 +1704,29 @@ test_roundrobin_rx_burst_on_multiple_slaves(void)
> test_params->bonded_port_id, (unsigned int)port_stats.ipackets,
> burst_size[0] + burst_size[1] + burst_size[2]);
>
> - /* Verify bonded slave devices rx counts */
> - rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
> + /* Verify bonded member devices rx counts */
> + rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
> - "Slave Port (%d) ipackets value (%u) not as expected (%d)",
> - test_params->slave_port_ids[0],
> + "Member Port (%d) ipackets value (%u) not as expected (%d)",
> + test_params->member_port_ids[0],
> (unsigned int)port_stats.ipackets, burst_size[0]);
>
> - rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
> - "Slave Port (%d) ipackets value (%u) not as expected (%d)",
> - test_params->slave_port_ids[1], (unsigned int)port_stats.ipackets,
> + "Member Port (%d) ipackets value (%u) not as expected (%d)",
> + test_params->member_port_ids[1], (unsigned int)port_stats.ipackets,
> burst_size[1]);
>
> - rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
> - "Slave Port (%d) ipackets value (%u) not as expected (%d)",
> - test_params->slave_port_ids[2],
> + "Member Port (%d) ipackets value (%u) not as expected (%d)",
> + test_params->member_port_ids[2],
> (unsigned int)port_stats.ipackets, burst_size[2]);
>
> - rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
> - "Slave Port (%d) ipackets value (%u) not as expected (%d)",
> - test_params->slave_port_ids[3],
> + "Member Port (%d) ipackets value (%u) not as expected (%d)",
> + test_params->member_port_ids[3],
> (unsigned int)port_stats.ipackets, 0);
>
> /* free mbufs */
> @@ -1727,8 +1734,8 @@ test_roundrobin_rx_burst_on_multiple_slaves(void)
> rte_pktmbuf_free(rx_pkt_burst[i]);
> }
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> static int
> @@ -1739,48 +1746,54 @@ test_roundrobin_verify_mac_assignment(void)
>
> int i;
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
> + &expected_mac_addr_0),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[0]);
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[2], &expected_mac_addr_2),
> + test_params->member_port_ids[0]);
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[2],
> + &expected_mac_addr_2),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[2]);
> + test_params->member_port_ids[2]);
>
> - /* Initialize bonded device with 4 slaves in round robin mode */
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> + /* Initialize bonded device with 4 members in round robin mode */
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
> - "Failed to initialize bonded device with slaves");
> + "Failed to initialize bonded device with members");
>
> - /* Verify that all MACs are the same as first slave added to bonded dev */
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
> + /* Verify that all MACs are the same as first member added to bonded dev */
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
> + &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[i]);
> + test_params->member_port_ids[i]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not set to that of primary port",
> - test_params->slave_port_ids[i]);
> + "member port (%d) mac address not set to that of primary port",
> + test_params->member_port_ids[i]);
> }
>
> /* change primary and verify that MAC addresses haven't changed */
> TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
> - test_params->slave_port_ids[2]),
> + test_params->member_port_ids[2]),
> "Failed to set bonded port (%d) primary port to (%d)",
> - test_params->bonded_port_id, test_params->slave_port_ids[i]);
> + test_params->bonded_port_id, test_params->member_port_ids[2]);
>
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
> + &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[i]);
> + test_params->member_port_ids[i]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address has changed to that of primary"
> + "member port (%d) mac address has changed to that of primary"
> " port without stop/start toggle of bonded device",
> - test_params->slave_port_ids[i]);
> + test_params->member_port_ids[i]);
> }
>
> - /* stop / start bonded device and verify that primary MAC address is
> - * propagate to bonded device and slaves */
> + /*
> + * Stop/start the bonded device and verify that the primary MAC address
> + * is propagated to the bonded device and members.
> + */
> TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
> "Failed to stop bonded port %u",
> test_params->bonded_port_id);
> @@ -1794,16 +1807,17 @@ test_roundrobin_verify_mac_assignment(void)
> TEST_ASSERT_SUCCESS(
> memcmp(&expected_mac_addr_2, &read_mac_addr, sizeof(read_mac_addr)),
> "bonded port (%d) mac address not set to that of new primary port",
> - test_params->slave_port_ids[i]);
> + test_params->bonded_port_id);
>
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
> + &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[i]);
> + test_params->member_port_ids[i]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_2, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not set to that of new primary"
> - " port", test_params->slave_port_ids[i]);
> + "member port (%d) mac address not set to that of new primary"
> + " port", test_params->member_port_ids[i]);
> }
>
> /* Set explicit MAC address */
> @@ -1818,19 +1832,20 @@ test_roundrobin_verify_mac_assignment(void)
> TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
> sizeof(read_mac_addr)),
> "bonded port (%d) mac address not set to that of new primary port",
> - test_params->slave_port_ids[i]);
> + test_params->bonded_port_id);
>
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
> + &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[i]);
> + test_params->member_port_ids[i]);
> TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
> - sizeof(read_mac_addr)), "slave port (%d) mac address not set to"
> - " that of new primary port\n", test_params->slave_port_ids[i]);
> + sizeof(read_mac_addr)), "member port (%d) mac address not set to"
> + " that of new primary port\n", test_params->member_port_ids[i]);
> }
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> static int
> @@ -1839,10 +1854,10 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
> int i, promiscuous_en;
> int ret;
>
> - /* Initialize bonded device with 4 slaves in round robin mode */
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> + /* Initialize bonded device with 4 members in round robin mode */
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
> - "Failed to initialize bonded device with slaves");
> + "Failed to initialize bonded device with members");
>
> ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
> TEST_ASSERT_SUCCESS(ret,
> @@ -1854,12 +1869,12 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
> "Port (%d) promiscuous mode not enabled",
> test_params->bonded_port_id);
>
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> promiscuous_en = rte_eth_promiscuous_get(
> - test_params->slave_port_ids[i]);
> + test_params->member_port_ids[i]);
> TEST_ASSERT_EQUAL(promiscuous_en, 1,
> - "slave port (%d) promiscuous mode not enabled",
> - test_params->slave_port_ids[i]);
> + "member port (%d) promiscuous mode not enabled",
> + test_params->member_port_ids[i]);
> }
>
> ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
> @@ -1872,76 +1887,76 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
> "Port (%d) promiscuous mode not disabled\n",
> test_params->bonded_port_id);
>
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> promiscuous_en = rte_eth_promiscuous_get(
> - test_params->slave_port_ids[i]);
> + test_params->member_port_ids[i]);
> TEST_ASSERT_EQUAL(promiscuous_en, 0,
> "Port (%d) promiscuous mode not disabled\n",
> - test_params->slave_port_ids[i]);
> + test_params->member_port_ids[i]);
> }
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> -#define TEST_RR_LINK_STATUS_SLAVE_COUNT (4)
> -#define TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT (2)
> +#define TEST_RR_LINK_STATUS_MEMBER_COUNT (4)
> +#define TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT (2)
>
> static int
> -test_roundrobin_verify_slave_link_status_change_behaviour(void)
> +test_roundrobin_verify_member_link_status_change_behaviour(void)
> {
> struct rte_mbuf *tx_pkt_burst[MAX_PKT_BURST] = { NULL };
> - struct rte_mbuf *gen_pkt_burst[TEST_RR_LINK_STATUS_SLAVE_COUNT][MAX_PKT_BURST];
> + struct rte_mbuf *gen_pkt_burst[TEST_RR_LINK_STATUS_MEMBER_COUNT][MAX_PKT_BURST];
> struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
>
> struct rte_eth_stats port_stats;
> - uint16_t slaves[RTE_MAX_ETHPORTS];
> + uint16_t members[RTE_MAX_ETHPORTS];
>
> - int i, burst_size, slave_count;
> + int i, burst_size, member_count;
>
> /* NULL all pointers in array to simplify cleanup */
> memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
>
> - /* Initialize bonded device with TEST_RR_LINK_STATUS_SLAVE_COUNT slaves
> + /* Initialize bonded device with TEST_RR_LINK_STATUS_MEMBER_COUNT members
> * in round robin mode */
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> - BONDING_MODE_ROUND_ROBIN, 0, TEST_RR_LINK_STATUS_SLAVE_COUNT, 1),
> - "Failed to initialize bonded device with slaves");
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> + BONDING_MODE_ROUND_ROBIN, 0, TEST_RR_LINK_STATUS_MEMBER_COUNT, 1),
> + "Failed to initialize bonded device with members");
>
> - /* Verify Current Slaves Count /Active Slave Count is */
> - slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
> + /* Verify current member count and active member count are as expected */
> + member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
> RTE_MAX_ETHPORTS);
> - TEST_ASSERT_EQUAL(slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT,
> - "Number of slaves (%d) is not as expected (%d).",
> - slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT);
> + TEST_ASSERT_EQUAL(member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT,
> + "Number of members (%d) is not as expected (%d).",
> + member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT);
>
> - slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
> - slaves, RTE_MAX_ETHPORTS);
> - TEST_ASSERT_EQUAL(slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT,
> - "Number of active slaves (%d) is not as expected (%d).",
> - slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT);
> + member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
> + members, RTE_MAX_ETHPORTS);
> + TEST_ASSERT_EQUAL(member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT,
> + "Number of active members (%d) is not as expected (%d).",
> + member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT);
>
> - /* Set 2 slaves eth_devs link status to down */
> + /* Set two member eth_devs' link status to down */
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[1], 0);
> + test_params->member_port_ids[1], 0);
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[3], 0);
> + test_params->member_port_ids[3], 0);
>
> - slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
> - slaves, RTE_MAX_ETHPORTS);
> - TEST_ASSERT_EQUAL(slave_count,
> - TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT,
> - "Number of active slaves (%d) is not as expected (%d).\n",
> - slave_count, TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT);
> + member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
> + members, RTE_MAX_ETHPORTS);
> + TEST_ASSERT_EQUAL(member_count,
> + TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT,
> + "Number of active members (%d) is not as expected (%d).\n",
> + member_count, TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT);
>
> burst_size = 20;
>
> - /* Verify that pkts are not sent on slaves with link status down:
> + /* Verify that pkts are not sent on members with link status down:
> *
> * 1. Generate test burst of traffic
> * 2. Transmit burst on bonded eth_dev
> * 3. Verify stats for bonded eth_dev (opackets = burst_size)
> - * 4. Verify stats for slave eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
> + * 4. Verify stats for member eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
> */
> TEST_ASSERT_EQUAL(
> generate_test_burst(tx_pkt_burst, burst_size, 0, 1, 0, 0, 0),
> @@ -1960,41 +1975,41 @@ test_roundrobin_verify_slave_link_status_change_behaviour(void)
> test_params->bonded_port_id, (int)port_stats.opackets,
> burst_size);
>
> - rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)10,
> "Port (%d) opackets stats (%d) not expected (%d) value",
> - test_params->slave_port_ids[0], (int)port_stats.opackets, 10);
> + test_params->member_port_ids[0], (int)port_stats.opackets, 10);
>
> - rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)0,
> "Port (%d) opackets stats (%d) not expected (%d) value",
> - test_params->slave_port_ids[1], (int)port_stats.opackets, 0);
> + test_params->member_port_ids[1], (int)port_stats.opackets, 0);
>
> - rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)10,
> "Port (%d) opackets stats (%d) not expected (%d) value",
> - test_params->slave_port_ids[2], (int)port_stats.opackets, 10);
> + test_params->member_port_ids[2], (int)port_stats.opackets, 10);
>
> - rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)0,
> "Port (%d) opackets stats (%d) not expected (%d) value",
> - test_params->slave_port_ids[3], (int)port_stats.opackets, 0);
> + test_params->member_port_ids[3], (int)port_stats.opackets, 0);
>
> - /* Verify that pkts are not sent on slaves with link status down:
> + /* Verify that pkts are not received on members with link status down:
> *
> * 1. Generate test bursts of traffic
> * 2. Add bursts on to virtual eth_devs
> * 3. Rx burst on bonded eth_dev, expected (burst_size *
> - * TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT) received
> + * TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT) received
> * 4. Verify stats for bonded eth_dev
> - * 6. Verify stats for slave eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
> + * 5. Verify stats for member eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
> */
> - for (i = 0; i < TEST_RR_LINK_STATUS_SLAVE_COUNT; i++) {
> + for (i = 0; i < TEST_RR_LINK_STATUS_MEMBER_COUNT; i++) {
> TEST_ASSERT_EQUAL(generate_test_burst(
> &gen_pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0),
> burst_size, "failed to generate packet burst");
>
> - virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
> + virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
> &gen_pkt_burst[i][0], burst_size);
> }
>
> @@ -2014,49 +2029,49 @@ test_roundrobin_verify_slave_link_status_change_behaviour(void)
> rte_pktmbuf_free(rx_pkt_burst[i]);
> }
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> -#define TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT (2)
> +#define TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT (2)
>
> -uint8_t polling_slave_mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x00 };
> +uint8_t polling_member_mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x00 };
>
>
> -int polling_test_slaves[TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT] = { -1, -1 };
> +int polling_test_members[TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT] = { -1, -1 };
>
> static int
> -test_roundrobin_verfiy_polling_slave_link_status_change(void)
> +test_roundrobin_verify_polling_member_link_status_change(void)
> {
> struct rte_ether_addr *mac_addr =
> - (struct rte_ether_addr *)polling_slave_mac;
> - char slave_name[RTE_ETH_NAME_MAX_LEN];
> + (struct rte_ether_addr *)polling_member_mac;
> + char member_name[RTE_ETH_NAME_MAX_LEN];
>
> int i;
>
> - for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT; i++) {
> - /* Generate slave name / MAC address */
> - snprintf(slave_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_poll_%d", i);
> + for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT; i++) {
> + /* Generate member name / MAC address */
> + snprintf(member_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_poll_%d", i);
> mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] = i;
>
> - /* Create slave devices with no ISR Support */
> - if (polling_test_slaves[i] == -1) {
> - polling_test_slaves[i] = virtual_ethdev_create(slave_name, mac_addr,
> + /* Create member devices with no ISR Support */
> + if (polling_test_members[i] == -1) {
> + polling_test_members[i] = virtual_ethdev_create(member_name, mac_addr,
> rte_socket_id(), 0);
> - TEST_ASSERT(polling_test_slaves[i] >= 0,
> - "Failed to create virtual virtual ethdev %s\n", slave_name);
> + TEST_ASSERT(polling_test_members[i] >= 0,
> + "Failed to create virtual ethdev %s\n", member_name);
>
> - /* Configure slave */
> - TEST_ASSERT_SUCCESS(configure_ethdev(polling_test_slaves[i], 0, 0),
> - "Failed to configure virtual ethdev %s(%d)", slave_name,
> - polling_test_slaves[i]);
> + /* Configure member */
> + TEST_ASSERT_SUCCESS(configure_ethdev(polling_test_members[i], 0, 0),
> + "Failed to configure virtual ethdev %s(%d)", member_name,
> + polling_test_members[i]);
> }
>
> - /* Add slave to bonded device */
> - TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
> - polling_test_slaves[i]),
> - "Failed to add slave %s(%d) to bonded device %d",
> - slave_name, polling_test_slaves[i],
> + /* Add member to bonded device */
> + TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params->bonded_port_id,
> + polling_test_members[i]),
> + "Failed to add member %s(%d) to bonded device %d",
> + member_name, polling_test_members[i],
> test_params->bonded_port_id);
> }
>
> @@ -2071,26 +2086,26 @@ test_roundrobin_verfiy_polling_slave_link_status_change(void)
> RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
> &test_params->bonded_port_id);
>
> - /* link status change callback for first slave link up */
> + /* link status change callback for first member link up */
> test_lsc_interrupt_count = 0;
>
> - virtual_ethdev_set_link_status(polling_test_slaves[0], 1);
> + virtual_ethdev_set_link_status(polling_test_members[0], 1);
>
> TEST_ASSERT_SUCCESS(lsc_timeout(15000), "timed out waiting for interrupt");
>
>
> - /* no link status change callback for second slave link up */
> + /* no link status change callback for second member link up */
> test_lsc_interrupt_count = 0;
>
> - virtual_ethdev_set_link_status(polling_test_slaves[1], 1);
> + virtual_ethdev_set_link_status(polling_test_members[1], 1);
>
> TEST_ASSERT_FAIL(lsc_timeout(15000), "unexpectedly succeeded");
>
> - /* link status change callback for both slave links down */
> + /* link status change callback for both member links down */
> test_lsc_interrupt_count = 0;
>
> - virtual_ethdev_set_link_status(polling_test_slaves[0], 0);
> - virtual_ethdev_set_link_status(polling_test_slaves[1], 0);
> + virtual_ethdev_set_link_status(polling_test_members[0], 0);
> + virtual_ethdev_set_link_status(polling_test_members[1], 0);
>
> TEST_ASSERT_SUCCESS(lsc_timeout(20000), "timed out waiting for interrupt");
>
> @@ -2100,17 +2115,17 @@ test_roundrobin_verfiy_polling_slave_link_status_change(void)
> &test_params->bonded_port_id);
>
>
> - /* Clean up and remove slaves from bonded device */
> - for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT; i++) {
> + /* Clean up and remove members from bonded device */
> + for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT; i++) {
>
> TEST_ASSERT_SUCCESS(
> - rte_eth_bond_slave_remove(test_params->bonded_port_id,
> - polling_test_slaves[i]),
> - "Failed to remove slave %d from bonded port (%d)",
> - polling_test_slaves[i], test_params->bonded_port_id);
> + rte_eth_bond_member_remove(test_params->bonded_port_id,
> + polling_test_members[i]),
> + "Failed to remove member %d from bonded port (%d)",
> + polling_test_members[i], test_params->bonded_port_id);
> }
>
> - return remove_slaves_and_stop_bonded_device();
> + return remove_members_and_stop_bonded_device();
> }
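
For context on what the tests above exercise: round-robin mode cycles transmit bursts across the active members in order. A minimal illustrative sketch of that scheduling (hypothetical names, not the bonding PMD's internal symbols):

```c
#include <stdint.h>
#include <assert.h>

/*
 * Illustrative sketch only: pick the next member port for a round-robin
 * bonded transmit. The real PMD keeps this counter per bonded device and
 * consults the live active-member list.
 */
static inline uint16_t
rr_next_member(uint16_t *counter, const uint16_t *active_members,
		uint16_t active_count)
{
	uint16_t member = active_members[*counter % active_count];

	*counter = (*counter + 1) % active_count;
	return member;
}
```

Each call returns the next member port id in sequence, wrapping at the active-member count, which is why the round-robin tests expect an even packet split across members.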
>
>
> @@ -2123,9 +2138,9 @@ test_activebackup_tx_burst(void)
> struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
> struct rte_eth_stats port_stats;
>
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_ACTIVE_BACKUP, 0, 1, 1),
> - "Failed to initialize bonded device with slaves");
> + "Failed to initialize bonded device with members");
>
> initialize_eth_header(test_params->pkt_eth_hdr,
> (struct rte_ether_addr *)src_mac,
> @@ -2136,7 +2151,7 @@ test_activebackup_tx_burst(void)
> pktlen = initialize_ipv4_header(test_params->pkt_ipv4_hdr, src_addr,
> dst_addr_0, pktlen);
>
> - burst_size = 20 * test_params->bonded_slave_count;
> + burst_size = 20 * test_params->bonded_member_count;
>
> TEST_ASSERT(burst_size < MAX_PKT_BURST,
> "Burst size specified is greater than supported.");
> @@ -2160,38 +2175,38 @@ test_activebackup_tx_burst(void)
>
> primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
>
> - /* Verify slave ports tx stats */
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> - rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
> - if (test_params->slave_port_ids[i] == primary_port) {
> + /* Verify member ports tx stats */
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> + rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
> + if (test_params->member_port_ids[i] == primary_port) {
> TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
> - "Slave Port (%d) opackets value (%u) not as expected (%d)",
> + "Member Port (%d) opackets value (%u) not as expected (%d)",
> test_params->bonded_port_id,
> (unsigned int)port_stats.opackets,
> - burst_size / test_params->bonded_slave_count);
> + burst_size / test_params->bonded_member_count);
> } else {
> TEST_ASSERT_EQUAL(port_stats.opackets, 0,
> - "Slave Port (%d) opackets value (%u) not as expected (%d)",
> + "Member Port (%d) opackets value (%u) not as expected (%d)",
> test_params->bonded_port_id,
> (unsigned int)port_stats.opackets, 0);
> }
> }
>
> - /* Put all slaves down and try and transmit */
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> + /* Put all members down and try and transmit */
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[i], 0);
> + test_params->member_port_ids[i], 0);
> }
>
> /* Send burst on bonded port */
> TEST_ASSERT_EQUAL(rte_eth_tx_burst(test_params->bonded_port_id, 0,
> pkts_burst, burst_size), 0, "Sending empty burst failed");
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> -#define TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT (4)
> +#define TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT (4)
>
> static int
> test_activebackup_rx_burst(void)
> @@ -2205,24 +2220,24 @@ test_activebackup_rx_burst(void)
>
> int i, j, burst_size = 17;
>
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_ACTIVE_BACKUP, 0,
> - TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT, 1),
> - "Failed to initialize bonded device with slaves");
> + TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT, 1),
> + "Failed to initialize bonded device with members");
>
> primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
> TEST_ASSERT(primary_port >= 0,
> - "failed to get primary slave for bonded port (%d)",
> + "failed to get primary member for bonded port (%d)",
> test_params->bonded_port_id);
>
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> /* Generate test bursts of packets to transmit */
> TEST_ASSERT_EQUAL(generate_test_burst(
> &gen_pkt_burst[0], burst_size, 0, 1, 0, 0, 0),
> burst_size, "burst generation failed");
>
> - /* Add rx data to slave */
> - virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
> + /* Add rx data to member */
> + virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
> &gen_pkt_burst[0], burst_size);
>
> /* Call rx burst on bonded device */
> @@ -2230,7 +2245,7 @@ test_activebackup_rx_burst(void)
> &rx_pkt_burst[0], MAX_PKT_BURST), burst_size,
> "rte_eth_rx_burst failed");
>
> - if (test_params->slave_port_ids[i] == primary_port) {
> + if (test_params->member_port_ids[i] == primary_port) {
> /* Verify bonded device rx count */
> rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
> TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
> @@ -2238,27 +2253,30 @@ test_activebackup_rx_burst(void)
> test_params->bonded_port_id,
> (unsigned int)port_stats.ipackets, burst_size);
>
> - /* Verify bonded slave devices rx count */
> - for (j = 0; j < test_params->bonded_slave_count; j++) {
> - rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
> + /* Verify bonded member devices rx count */
> + for (j = 0; j < test_params->bonded_member_count; j++) {
> + rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
> if (i == j) {
> TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
> - "Slave Port (%d) ipackets value (%u) not as "
> - "expected (%d)", test_params->slave_port_ids[i],
> - (unsigned int)port_stats.ipackets, burst_size);
> + "Member Port (%d) ipackets value (%u) not as "
> + "expected (%d)",
> + test_params->member_port_ids[i],
> + (unsigned int)port_stats.ipackets,
> + burst_size);
> } else {
> TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
> - "Slave Port (%d) ipackets value (%u) not as "
> - "expected (%d)\n", test_params->slave_port_ids[i],
> + "Member Port (%d) ipackets value (%u) not as "
> + "expected (%d)\n",
> + test_params->member_port_ids[i],
> (unsigned int)port_stats.ipackets, 0);
> }
> }
> } else {
> - for (j = 0; j < test_params->bonded_slave_count; j++) {
> - rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
> + for (j = 0; j < test_params->bonded_member_count; j++) {
> + rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
> - "Slave Port (%d) ipackets value (%u) not as expected "
> - "(%d)", test_params->slave_port_ids[i],
> + "Member Port (%d) ipackets value (%u) not as expected "
> + "(%d)", test_params->member_port_ids[i],
> (unsigned int)port_stats.ipackets, 0);
> }
> }
> @@ -2275,8 +2293,8 @@ test_activebackup_rx_burst(void)
> rte_eth_stats_reset(test_params->bonded_port_id);
> }
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> static int
> @@ -2285,14 +2303,14 @@ test_activebackup_verify_promiscuous_enable_disable(void)
> int i, primary_port, promiscuous_en;
> int ret;
>
> - /* Initialize bonded device with 4 slaves in round robin mode */
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> +	/* Initialize bonded device with 4 members in active backup mode */
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_ACTIVE_BACKUP, 0, 4, 1),
> - "Failed to initialize bonded device with slaves");
> + "Failed to initialize bonded device with members");
>
> primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
> TEST_ASSERT(primary_port >= 0,
> - "failed to get primary slave for bonded port (%d)",
> + "failed to get primary member for bonded port (%d)",
> test_params->bonded_port_id);
>
> ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
> @@ -2304,17 +2322,17 @@ test_activebackup_verify_promiscuous_enable_disable(void)
> "Port (%d) promiscuous mode not enabled",
> test_params->bonded_port_id);
>
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> promiscuous_en = rte_eth_promiscuous_get(
> - test_params->slave_port_ids[i]);
> - if (primary_port == test_params->slave_port_ids[i]) {
> + test_params->member_port_ids[i]);
> + if (primary_port == test_params->member_port_ids[i]) {
> TEST_ASSERT_EQUAL(promiscuous_en, 1,
> - "slave port (%d) promiscuous mode not enabled",
> - test_params->slave_port_ids[i]);
> + "member port (%d) promiscuous mode not enabled",
> + test_params->member_port_ids[i]);
> } else {
> TEST_ASSERT_EQUAL(promiscuous_en, 0,
> - "slave port (%d) promiscuous mode enabled",
> - test_params->slave_port_ids[i]);
> + "member port (%d) promiscuous mode enabled",
> + test_params->member_port_ids[i]);
> }
>
> }
> @@ -2328,16 +2346,16 @@ test_activebackup_verify_promiscuous_enable_disable(void)
> "Port (%d) promiscuous mode not disabled\n",
> test_params->bonded_port_id);
>
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> promiscuous_en = rte_eth_promiscuous_get(
> - test_params->slave_port_ids[i]);
> + test_params->member_port_ids[i]);
> TEST_ASSERT_EQUAL(promiscuous_en, 0,
> - "slave port (%d) promiscuous mode not disabled\n",
> - test_params->slave_port_ids[i]);
> + "member port (%d) promiscuous mode not disabled\n",
> + test_params->member_port_ids[i]);
> }
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> static int
> @@ -2346,19 +2364,21 @@ test_activebackup_verify_mac_assignment(void)
> struct rte_ether_addr read_mac_addr;
> struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
> + &expected_mac_addr_0),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[0]);
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
> + test_params->member_port_ids[0]);
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1],
> + &expected_mac_addr_1),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[1]);
> + test_params->member_port_ids[1]);
>
> - /* Initialize bonded device with 2 slaves in active backup mode */
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> + /* Initialize bonded device with 2 members in active backup mode */
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_ACTIVE_BACKUP, 0, 2, 1),
> - "Failed to initialize bonded device with slaves");
> + "Failed to initialize bonded device with members");
>
> - /* Verify that bonded MACs is that of first slave and that the other slave
> +	/* Verify that the bonded MAC is that of the first member and that the other member
> * MAC hasn't been changed */
> TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
> "Failed to get mac address (port %d)",
> @@ -2368,27 +2388,27 @@ test_activebackup_verify_mac_assignment(void)
> "bonded port (%d) mac address not set to that of primary port",
> test_params->bonded_port_id);
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[0]);
> + test_params->member_port_ids[0]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not set to that of primary port",
> - test_params->slave_port_ids[0]);
> + "member port (%d) mac address not set to that of primary port",
> + test_params->member_port_ids[0]);
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[1]);
> + test_params->member_port_ids[1]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not as expected",
> - test_params->slave_port_ids[1]);
> + "member port (%d) mac address not as expected",
> + test_params->member_port_ids[1]);
>
> /* change primary and verify that MAC addresses haven't changed */
> TEST_ASSERT_EQUAL(rte_eth_bond_primary_set(test_params->bonded_port_id,
> - test_params->slave_port_ids[1]), 0,
> + test_params->member_port_ids[1]), 0,
> "Failed to set bonded port (%d) primary port to (%d)",
> - test_params->bonded_port_id, test_params->slave_port_ids[1]);
> + test_params->bonded_port_id, test_params->member_port_ids[1]);
>
> TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
> "Failed to get mac address (port %d)",
> @@ -2398,24 +2418,26 @@ test_activebackup_verify_mac_assignment(void)
> "bonded port (%d) mac address not set to that of primary port",
> test_params->bonded_port_id);
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[0]);
> + test_params->member_port_ids[0]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not set to that of primary port",
> - test_params->slave_port_ids[0]);
> + "member port (%d) mac address not set to that of primary port",
> + test_params->member_port_ids[0]);
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[1]);
> + test_params->member_port_ids[1]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not as expected",
> - test_params->slave_port_ids[1]);
> + "member port (%d) mac address not as expected",
> + test_params->member_port_ids[1]);
>
> - /* stop / start bonded device and verify that primary MAC address is
> - * propagated to bonded device and slaves */
> + /*
> + * stop / start bonded device and verify that primary MAC address is
> + * propagated to bonded device and members.
> + */
>
> TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
> "Failed to stop bonded port %u",
> @@ -2432,21 +2454,21 @@ test_activebackup_verify_mac_assignment(void)
> "bonded port (%d) mac address not set to that of primary port",
> test_params->bonded_port_id);
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[0]);
> + test_params->member_port_ids[0]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not as expected",
> - test_params->slave_port_ids[0]);
> + "member port (%d) mac address not as expected",
> + test_params->member_port_ids[0]);
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[1]);
> + test_params->member_port_ids[1]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not set to that of primary port",
> - test_params->slave_port_ids[1]);
> + "member port (%d) mac address not set to that of primary port",
> + test_params->member_port_ids[1]);
>
> /* Set explicit MAC address */
> TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_set(
> @@ -2462,36 +2484,36 @@ test_activebackup_verify_mac_assignment(void)
> "bonded port (%d) mac address not set to that of bonded port",
> test_params->bonded_port_id);
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[0]);
> + test_params->member_port_ids[0]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not as expected",
> - test_params->slave_port_ids[0]);
> + "member port (%d) mac address not as expected",
> + test_params->member_port_ids[0]);
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[1]);
> + test_params->member_port_ids[1]);
> TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not set to that of bonded port",
> - test_params->slave_port_ids[1]);
> + "member port (%d) mac address not set to that of bonded port",
> + test_params->member_port_ids[1]);
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> static int
> -test_activebackup_verify_slave_link_status_change_failover(void)
> +test_activebackup_verify_member_link_status_change_failover(void)
> {
> - struct rte_mbuf *pkt_burst[TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
> + struct rte_mbuf *pkt_burst[TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
> struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
> struct rte_eth_stats port_stats;
>
> - uint16_t slaves[RTE_MAX_ETHPORTS];
> + uint16_t members[RTE_MAX_ETHPORTS];
>
> - int i, burst_size, slave_count, primary_port;
> + int i, burst_size, member_count, primary_port;
>
> burst_size = 21;
>
> @@ -2502,96 +2524,96 @@ test_activebackup_verify_slave_link_status_change_failover(void)
> &pkt_burst[0][0], burst_size, 0, 1, 0, 0, 0), burst_size,
> "generate_test_burst failed");
>
> - /* Initialize bonded device with 4 slaves in round robin mode */
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> +	/* Initialize bonded device with 4 members in active backup mode */
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_ACTIVE_BACKUP, 0,
> - TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT, 1),
> - "Failed to initialize bonded device with slaves");
> + TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT, 1),
> + "Failed to initialize bonded device with members");
>
> - /* Verify Current Slaves Count /Active Slave Count is */
> - slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
> +	/* Verify current member count and active member count are as expected */
> + member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
> RTE_MAX_ETHPORTS);
> - TEST_ASSERT_EQUAL(slave_count, 4,
> - "Number of slaves (%d) is not as expected (%d).",
> - slave_count, 4);
> + TEST_ASSERT_EQUAL(member_count, 4,
> + "Number of members (%d) is not as expected (%d).",
> + member_count, 4);
>
> - slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
> - slaves, RTE_MAX_ETHPORTS);
> - TEST_ASSERT_EQUAL(slave_count, 4,
> - "Number of active slaves (%d) is not as expected (%d).",
> - slave_count, 4);
> + member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
> + members, RTE_MAX_ETHPORTS);
> + TEST_ASSERT_EQUAL(member_count, 4,
> + "Number of active members (%d) is not as expected (%d).",
> + member_count, 4);
>
> primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
> - TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
> + TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[0],
> "Primary port not as expected");
>
> - /* Bring 2 slaves down and verify active slave count */
> + /* Bring 2 members down and verify active member count */
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[1], 0);
> + test_params->member_port_ids[1], 0);
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[3], 0);
> + test_params->member_port_ids[3], 0);
>
> - TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
> - test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
> - "Number of active slaves (%d) is not as expected (%d).",
> - slave_count, 2);
> + TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
> + test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 2,
> + "Number of active members (%d) is not as expected (%d).",
> + member_count, 2);
>
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[1], 1);
> + test_params->member_port_ids[1], 1);
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[3], 1);
> + test_params->member_port_ids[3], 1);
>
>
> - /* Bring primary port down, verify that active slave count is 3 and primary
> + /* Bring primary port down, verify that active member count is 3 and primary
> * has changed */
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[0], 0);
> + test_params->member_port_ids[0], 0);
>
> - TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
> - test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS),
> + TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
> + test_params->bonded_port_id, members, RTE_MAX_ETHPORTS),
> 3,
> - "Number of active slaves (%d) is not as expected (%d).",
> - slave_count, 3);
> + "Number of active members (%d) is not as expected (%d).",
> + member_count, 3);
>
> primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
> - TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[2],
> + TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[2],
> "Primary port not as expected");
>
> - /* Verify that pkts are sent on new primary slave */
> + /* Verify that pkts are sent on new primary member */
>
> TEST_ASSERT_EQUAL(rte_eth_tx_burst(
> test_params->bonded_port_id, 0, &pkt_burst[0][0],
> burst_size), burst_size, "rte_eth_tx_burst failed");
>
> - rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
> "(%d) port_stats.opackets not as expected",
> - test_params->slave_port_ids[2]);
> + test_params->member_port_ids[2]);
>
> - rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, 0,
> "(%d) port_stats.opackets not as expected\n",
> - test_params->slave_port_ids[0]);
> + test_params->member_port_ids[0]);
>
> - rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, 0,
> "(%d) port_stats.opackets not as expected\n",
> - test_params->slave_port_ids[1]);
> + test_params->member_port_ids[1]);
>
> - rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, 0,
> "(%d) port_stats.opackets not as expected\n",
> - test_params->slave_port_ids[3]);
> + test_params->member_port_ids[3]);
>
> /* Generate packet burst for testing */
>
> - for (i = 0; i < TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT; i++) {
> + for (i = 0; i < TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT; i++) {
> TEST_ASSERT_EQUAL(generate_test_burst(
> &pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0), burst_size,
> "generate_test_burst failed");
>
> virtual_ethdev_add_mbufs_to_rx_queue(
> - test_params->slave_port_ids[i], &pkt_burst[i][0], burst_size);
> + test_params->member_port_ids[i], &pkt_burst[i][0], burst_size);
> }
>
> TEST_ASSERT_EQUAL(rte_eth_rx_burst(
> @@ -2604,28 +2626,28 @@ test_activebackup_verify_slave_link_status_change_failover(void)
> "(%d) port_stats.ipackets not as expected",
> test_params->bonded_port_id);
>
> - rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
> "(%d) port_stats.opackets not as expected",
> - test_params->slave_port_ids[2]);
> + test_params->member_port_ids[2]);
>
> - rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, 0,
> "(%d) port_stats.opackets not as expected",
> - test_params->slave_port_ids[0]);
> + test_params->member_port_ids[0]);
>
> - rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, 0,
> "(%d) port_stats.opackets not as expected",
> - test_params->slave_port_ids[1]);
> + test_params->member_port_ids[1]);
>
> - rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, 0,
> "(%d) port_stats.opackets not as expected",
> - test_params->slave_port_ids[3]);
> + test_params->member_port_ids[3]);
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
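
The failover path verified above boils down to promoting another link-up member when the primary's link drops. A hedged sketch of that selection (illustrative only; the real PMD also honors a user-configured primary and the order members were activated, which is why the test ends up with member index 2 as the new primary):

```c
#include <assert.h>

/*
 * Illustrative sketch: on primary link-down, promote the first member
 * whose link is still up. Returns the member index, or -1 if no member
 * is active. Not the bonding PMD's actual selection code.
 */
static int
ab_select_primary(const int *link_up, int member_count)
{
	int i;

	for (i = 0; i < member_count; i++)
		if (link_up[i])
			return i;
	return -1;
}
```

With all links down the bonded port has no active member and transmit bursts return zero, matching the "Sending empty burst failed" assertions earlier in the file.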
>
> /** Balance Mode Tests */
> @@ -2633,9 +2655,9 @@ test_activebackup_verify_slave_link_status_change_failover(void)
> static int
> test_balance_xmit_policy_configuration(void)
> {
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_ACTIVE_BACKUP, 0, 2, 1),
> - "Failed to initialize_bonded_device_with_slaves.");
> + "Failed to initialize_bonded_device_with_members.");
>
> /* Invalid port id */
> TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_set(
> @@ -2644,7 +2666,7 @@ test_balance_xmit_policy_configuration(void)
>
> /* Set xmit policy on non bonded device */
> TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_set(
> - test_params->slave_port_ids[0], BALANCE_XMIT_POLICY_LAYER2),
> + test_params->member_port_ids[0], BALANCE_XMIT_POLICY_LAYER2),
> "Expected call to failed as invalid port specified.");
>
>
> @@ -2677,25 +2699,25 @@ test_balance_xmit_policy_configuration(void)
> TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_get(INVALID_PORT_ID),
> "Expected call to failed as invalid port specified.");
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> -#define TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT (2)
> +#define TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT (2)
>
> static int
> test_balance_l2_tx_burst(void)
> {
> - struct rte_mbuf *pkts_burst[TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
> - int burst_size[TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT] = { 10, 15 };
> + struct rte_mbuf *pkts_burst[TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
> + int burst_size[TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT] = { 10, 15 };
>
> uint16_t pktlen;
> int i;
> struct rte_eth_stats port_stats;
>
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> - BONDING_MODE_BALANCE, 0, TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT, 1),
> - "Failed to initialize_bonded_device_with_slaves.");
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> + BONDING_MODE_BALANCE, 0, TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT, 1),
> + "Failed to initialize_bonded_device_with_members.");
>
> TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
> test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER2),
> @@ -2730,7 +2752,7 @@ test_balance_l2_tx_burst(void)
> "failed to generate packet burst");
>
> /* Send burst 1 on bonded port */
> - for (i = 0; i < TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT; i++) {
> + for (i = 0; i < TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT; i++) {
> TEST_ASSERT_EQUAL(rte_eth_tx_burst(test_params->bonded_port_id, 0,
> &pkts_burst[i][0], burst_size[i]),
> burst_size[i], "Failed to transmit packet burst");
> @@ -2745,24 +2767,24 @@ test_balance_l2_tx_burst(void)
> burst_size[0] + burst_size[1]);
>
>
> - /* Verify slave ports tx stats */
> - rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
> + /* Verify member ports tx stats */
> + rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size[0],
> - "Slave Port (%d) opackets value (%u) not as expected (%d)",
> - test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
> + "Member Port (%d) opackets value (%u) not as expected (%d)",
> + test_params->member_port_ids[0], (unsigned int)port_stats.opackets,
> burst_size[0]);
>
> - rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size[1],
> - "Slave Port (%d) opackets value (%u) not as expected (%d)\n",
> - test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
> + "Member Port (%d) opackets value (%u) not as expected (%d)\n",
> + test_params->member_port_ids[1], (unsigned int)port_stats.opackets,
> burst_size[1]);
>
> - /* Put all slaves down and try and transmit */
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> +	/* Put all members down and try to transmit */
> + for (i = 0; i < test_params->bonded_member_count; i++) {
>
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[i], 0);
> + test_params->member_port_ids[i], 0);
> }
>
> /* Send burst on bonded port */
> @@ -2770,8 +2792,8 @@ test_balance_l2_tx_burst(void)
> test_params->bonded_port_id, 0, &pkts_burst[0][0], burst_size[0]),
> 0, "Expected zero packet");
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> static int
> @@ -2785,9 +2807,9 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
>
> struct rte_eth_stats port_stats;
>
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_BALANCE, 0, 2, 1),
> - "Failed to initialize_bonded_device_with_slaves.");
> + "Failed to initialize_bonded_device_with_members.");
>
> TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
> test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER23),
> @@ -2825,24 +2847,24 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
> test_params->bonded_port_id, (unsigned int)port_stats.opackets,
> nb_tx_1 + nb_tx_2);
>
> - /* Verify slave ports tx stats */
> - rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
> + /* Verify member ports tx stats */
> + rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_1,
> - "Slave Port (%d) opackets value (%u) not as expected (%d)",
> - test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
> + "Member Port (%d) opackets value (%u) not as expected (%d)",
> + test_params->member_port_ids[0], (unsigned int)port_stats.opackets,
> nb_tx_1);
>
> - rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_2,
> - "Slave Port (%d) opackets value (%u) not as expected (%d)",
> - test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
> + "Member Port (%d) opackets value (%u) not as expected (%d)",
> + test_params->member_port_ids[1], (unsigned int)port_stats.opackets,
> nb_tx_2);
>
> - /* Put all slaves down and try and transmit */
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> +	/* Put all members down and try to transmit */
> + for (i = 0; i < test_params->bonded_member_count; i++) {
>
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[i], 0);
> + test_params->member_port_ids[i], 0);
> }
>
> /* Send burst on bonded port */
> @@ -2851,8 +2873,8 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
> burst_size_1), 0, "Expected zero packet");
>
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> static int
> @@ -2897,9 +2919,9 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
>
> struct rte_eth_stats port_stats;
>
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_BALANCE, 0, 2, 1),
> - "Failed to initialize_bonded_device_with_slaves.");
> + "Failed to initialize_bonded_device_with_members.");
>
> TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
> test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER34),
> @@ -2938,24 +2960,24 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
> test_params->bonded_port_id, (unsigned int)port_stats.opackets,
> nb_tx_1 + nb_tx_2);
>
> - /* Verify slave ports tx stats */
> - rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
> + /* Verify member ports tx stats */
> + rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_1,
> - "Slave Port (%d) opackets value (%u) not as expected (%d)",
> - test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
> + "Member Port (%d) opackets value (%u) not as expected (%d)",
> + test_params->member_port_ids[0], (unsigned int)port_stats.opackets,
> nb_tx_1);
>
> - rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_2,
> - "Slave Port (%d) opackets value (%u) not as expected (%d)",
> - test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
> + "Member Port (%d) opackets value (%u) not as expected (%d)",
> + test_params->member_port_ids[1], (unsigned int)port_stats.opackets,
> nb_tx_2);
>
> - /* Put all slaves down and try and transmit */
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> +	/* Put all members down and try to transmit */
> + for (i = 0; i < test_params->bonded_member_count; i++) {
>
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[i], 0);
> + test_params->member_port_ids[i], 0);
> }
>
> /* Send burst on bonded port */
> @@ -2963,8 +2985,8 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
> test_params->bonded_port_id, 0, pkts_burst_1,
> burst_size_1), 0, "Expected zero packet");
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> static int
> @@ -3003,27 +3025,27 @@ test_balance_l34_tx_burst_ipv6_toggle_udp_port(void)
> return balance_l34_tx_burst(0, 0, 0, 0, 1);
> }
>
> -#define TEST_BAL_SLAVE_TX_FAIL_SLAVE_COUNT (2)
> -#define TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 (40)
> -#define TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2 (20)
> -#define TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT (25)
> -#define TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX (0)
> +#define TEST_BAL_MEMBER_TX_FAIL_MEMBER_COUNT (2)
> +#define TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 (40)
> +#define TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2 (20)
> +#define TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT (25)
> +#define TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX (0)
>
> static int
> -test_balance_tx_burst_slave_tx_fail(void)
> +test_balance_tx_burst_member_tx_fail(void)
> {
> - struct rte_mbuf *pkts_burst_1[TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1];
> - struct rte_mbuf *pkts_burst_2[TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2];
> + struct rte_mbuf *pkts_burst_1[TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1];
> + struct rte_mbuf *pkts_burst_2[TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2];
>
> - struct rte_mbuf *expected_fail_pkts[TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT];
> + struct rte_mbuf *expected_fail_pkts[TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT];
>
> struct rte_eth_stats port_stats;
>
> int i, first_tx_fail_idx, tx_count_1, tx_count_2;
>
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_BALANCE, 0,
> - TEST_BAL_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
> + TEST_BAL_MEMBER_TX_FAIL_MEMBER_COUNT, 1),
> "Failed to initialise bonded device");
>
> TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
> @@ -3033,46 +3055,48 @@ test_balance_tx_burst_slave_tx_fail(void)
>
> /* Generate test bursts for transmission */
> TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst_1,
> - TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1, 0, 0, 0, 0, 0),
> - TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1,
> + TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1, 0, 0, 0, 0, 0),
> + TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1,
> "Failed to generate test packet burst 1");
>
> - first_tx_fail_idx = TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
> - TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT;
> + first_tx_fail_idx = TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
> + TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT;
>
> /* copy mbuf references for expected transmission failures */
> - for (i = 0; i < TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT; i++)
> + for (i = 0; i < TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT; i++)
> expected_fail_pkts[i] = pkts_burst_1[i + first_tx_fail_idx];
>
> TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst_2,
> - TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2, 0, 0, 1, 0, 0),
> - TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
> + TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2, 0, 0, 1, 0, 0),
> + TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2,
> "Failed to generate test packet burst 2");
>
>
> - /* Set virtual slave TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX to only fail
> - * transmission of TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT packets of burst */
> + /*
> + * Set virtual member TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX to only fail
> + * transmission of TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT packets of burst.
> + */
> virtual_ethdev_tx_burst_fn_set_success(
> - test_params->slave_port_ids[TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
> + test_params->member_port_ids[TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
> 0);
>
> virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
> - test_params->slave_port_ids[TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
> - TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
> + test_params->member_port_ids[TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
> + TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
>
>
> /* Transmit burst 1 */
> tx_count_1 = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst_1,
> - TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1);
> + TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1);
>
> - TEST_ASSERT_EQUAL(tx_count_1, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
> - TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT,
> + TEST_ASSERT_EQUAL(tx_count_1, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
> + TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT,
> "Transmitted (%d) packets, expected to transmit (%d) packets",
> - tx_count_1, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
> - TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
> + tx_count_1, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
> + TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
>
> /* Verify that failed packet are expected failed packets */
> - for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
> + for (i = 0; i < TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT; i++) {
> TEST_ASSERT_EQUAL(expected_fail_pkts[i], pkts_burst_1[i + tx_count_1],
> "expected mbuf (%d) pointer %p not expected pointer %p",
> i, expected_fail_pkts[i], pkts_burst_1[i + tx_count_1]);
> @@ -3080,94 +3104,94 @@ test_balance_tx_burst_slave_tx_fail(void)
>
> /* Transmit burst 2 */
> tx_count_2 = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst_2,
> - TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
> + TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
>
> - TEST_ASSERT_EQUAL(tx_count_2, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
> + TEST_ASSERT_EQUAL(tx_count_2, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2,
> "Transmitted (%d) packets, expected to transmit (%d) packets",
> - tx_count_2, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
> + tx_count_2, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
>
>
> /* Verify bonded port tx stats */
> rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
>
> TEST_ASSERT_EQUAL(port_stats.opackets,
> - (uint64_t)((TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
> - TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT) +
> - TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2),
> + (uint64_t)((TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
> + TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT) +
> + TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2),
> "Bonded Port (%d) opackets value (%u) not as expected (%d)",
> test_params->bonded_port_id, (unsigned int)port_stats.opackets,
> - (TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
> - TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT) +
> - TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
> + (TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
> + TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT) +
> + TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
>
> - /* Verify slave ports tx stats */
> + /* Verify member ports tx stats */
>
> - rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
>
> TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)
> - TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
> - TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT,
> - "Slave Port (%d) opackets value (%u) not as expected (%d)",
> - test_params->slave_port_ids[0],
> + TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
> + TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT,
> + "Member Port (%d) opackets value (%u) not as expected (%d)",
> + test_params->member_port_ids[0],
> (unsigned int)port_stats.opackets,
> - TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
> - TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
> + TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
> + TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
>
>
>
>
> - rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
>
> TEST_ASSERT_EQUAL(port_stats.opackets,
> - (uint64_t)TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
> - "Slave Port (%d) opackets value (%u) not as expected (%d)",
> - test_params->slave_port_ids[1],
> + (uint64_t)TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2,
> + "Member Port (%d) opackets value (%u) not as expected (%d)",
> + test_params->member_port_ids[1],
> (unsigned int)port_stats.opackets,
> - TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
> + TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
>
> /* Verify that all mbufs have a ref value of zero */
> TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkts_burst_1[tx_count_1],
> - TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT, 1),
> + TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT, 1),
> "mbufs refcnts not as expected");
>
> free_mbufs(&pkts_burst_1[tx_count_1],
> - TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
> + TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> -#define TEST_BALANCE_RX_BURST_SLAVE_COUNT (3)
> +#define TEST_BALANCE_RX_BURST_MEMBER_COUNT (3)
>
> static int
> test_balance_rx_burst(void)
> {
> - struct rte_mbuf *gen_pkt_burst[TEST_BALANCE_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
> + struct rte_mbuf *gen_pkt_burst[TEST_BALANCE_RX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
>
> struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
> struct rte_eth_stats port_stats;
>
> - int burst_size[TEST_BALANCE_RX_BURST_SLAVE_COUNT] = { 10, 5, 30 };
> + int burst_size[TEST_BALANCE_RX_BURST_MEMBER_COUNT] = { 10, 5, 30 };
> int i, j;
>
> memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
>
> - /* Initialize bonded device with 4 slaves in round robin mode */
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> +	/* Initialize bonded device with 3 members in balance mode */
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_BALANCE, 0, 3, 1),
> "Failed to initialise bonded device");
>
> /* Generate test bursts of packets to transmit */
> - for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
> + for (i = 0; i < TEST_BALANCE_RX_BURST_MEMBER_COUNT; i++) {
> TEST_ASSERT_EQUAL(generate_test_burst(
> &gen_pkt_burst[i][0], burst_size[i], 0, 0, 1,
> 0, 0), burst_size[i],
> "failed to generate packet burst");
> }
>
> - /* Add rx data to slaves */
> - for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
> - virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
> + /* Add rx data to members */
> + for (i = 0; i < TEST_BALANCE_RX_BURST_MEMBER_COUNT; i++) {
> + virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
> &gen_pkt_burst[i][0], burst_size[i]);
> }
>
> @@ -3187,33 +3211,33 @@ test_balance_rx_burst(void)
> burst_size[0] + burst_size[1] + burst_size[2]);
>
>
> - /* Verify bonded slave devices rx counts */
> - rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
> + /* Verify bonded member devices rx counts */
> + rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
> - "Slave Port (%d) ipackets value (%u) not as expected (%d)",
> - test_params->slave_port_ids[0],
> + "Member Port (%d) ipackets value (%u) not as expected (%d)",
> + test_params->member_port_ids[0],
> (unsigned int)port_stats.ipackets, burst_size[0]);
>
> - rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
> - "Slave Port (%d) ipackets value (%u) not as expected (%d)",
> - test_params->slave_port_ids[1], (unsigned int)port_stats.ipackets,
> + "Member Port (%d) ipackets value (%u) not as expected (%d)",
> + test_params->member_port_ids[1], (unsigned int)port_stats.ipackets,
> burst_size[1]);
>
> - rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
> - "Slave Port (%d) ipackets value (%u) not as expected (%d)",
> - test_params->slave_port_ids[2], (unsigned int)port_stats.ipackets,
> + "Member Port (%d) ipackets value (%u) not as expected (%d)",
> + test_params->member_port_ids[2], (unsigned int)port_stats.ipackets,
> burst_size[2]);
>
> - rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
> - "Slave Port (%d) ipackets value (%u) not as expected (%d)",
> - test_params->slave_port_ids[3], (unsigned int)port_stats.ipackets,
> + "Member Port (%d) ipackets value (%u) not as expected (%d)",
> + test_params->member_port_ids[3], (unsigned int)port_stats.ipackets,
> 0);
>
> /* free mbufs */
> - for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
> + for (i = 0; i < TEST_BALANCE_RX_BURST_MEMBER_COUNT; i++) {
> for (j = 0; j < MAX_PKT_BURST; j++) {
> if (gen_pkt_burst[i][j] != NULL) {
> rte_pktmbuf_free(gen_pkt_burst[i][j]);
> @@ -3222,8 +3246,8 @@ test_balance_rx_burst(void)
> }
> }
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> static int
> @@ -3232,8 +3256,8 @@ test_balance_verify_promiscuous_enable_disable(void)
> int i;
> int ret;
>
> - /* Initialize bonded device with 4 slaves in round robin mode */
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> +	/* Initialize bonded device with 4 members in balance mode */
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_BALANCE, 0, 4, 1),
> "Failed to initialise bonded device");
>
> @@ -3246,11 +3270,11 @@ test_balance_verify_promiscuous_enable_disable(void)
> "Port (%d) promiscuous mode not enabled",
> test_params->bonded_port_id);
>
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
> - test_params->slave_port_ids[i]), 1,
> + test_params->member_port_ids[i]), 1,
> "Port (%d) promiscuous mode not enabled",
> - test_params->slave_port_ids[i]);
> + test_params->member_port_ids[i]);
> }
>
> ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
> @@ -3262,15 +3286,15 @@ test_balance_verify_promiscuous_enable_disable(void)
> "Port (%d) promiscuous mode not disabled",
> test_params->bonded_port_id);
>
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
> - test_params->slave_port_ids[i]), 0,
> + test_params->member_port_ids[i]), 0,
> "Port (%d) promiscuous mode not disabled",
> - test_params->slave_port_ids[i]);
> + test_params->member_port_ids[i]);
> }
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> static int
> @@ -3279,19 +3303,21 @@ test_balance_verify_mac_assignment(void)
> struct rte_ether_addr read_mac_addr;
> struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
> + &expected_mac_addr_0),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[0]);
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
> + test_params->member_port_ids[0]);
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1],
> + &expected_mac_addr_1),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[1]);
> + test_params->member_port_ids[1]);
>
> - /* Initialize bonded device with 2 slaves in active backup mode */
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> +	/* Initialize bonded device with 2 members in balance mode */
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_BALANCE, 0, 2, 1),
> "Failed to initialise bonded device");
>
> - /* Verify that bonded MACs is that of first slave and that the other slave
> +	/* Verify that bonded MAC is that of first member and that the other member
> * MAC hasn't been changed */
> TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
> "Failed to get mac address (port %d)",
> @@ -3301,27 +3327,27 @@ test_balance_verify_mac_assignment(void)
> "bonded port (%d) mac address not set to that of primary port",
> test_params->bonded_port_id);
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[0]);
> + test_params->member_port_ids[0]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not set to that of primary port",
> - test_params->slave_port_ids[0]);
> + "member port (%d) mac address not set to that of primary port",
> + test_params->member_port_ids[0]);
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[1]);
> + test_params->member_port_ids[1]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not set to that of primary port",
> - test_params->slave_port_ids[1]);
> + "member port (%d) mac address not set to that of primary port",
> + test_params->member_port_ids[1]);
>
> /* change primary and verify that MAC addresses haven't changed */
> TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
> - test_params->slave_port_ids[1]),
> + test_params->member_port_ids[1]),
> "Failed to set bonded port (%d) primary port to (%d)\n",
> - test_params->bonded_port_id, test_params->slave_port_ids[1]);
> + test_params->bonded_port_id, test_params->member_port_ids[1]);
>
> TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
> "Failed to get mac address (port %d)",
> @@ -3331,24 +3357,26 @@ test_balance_verify_mac_assignment(void)
> "bonded port (%d) mac address not set to that of primary port",
> test_params->bonded_port_id);
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[0]);
> + test_params->member_port_ids[0]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not set to that of primary port",
> - test_params->slave_port_ids[0]);
> + "member port (%d) mac address not set to that of primary port",
> + test_params->member_port_ids[0]);
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[1]);
> + test_params->member_port_ids[1]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not set to that of primary port",
> - test_params->slave_port_ids[1]);
> + "member port (%d) mac address not set to that of primary port",
> + test_params->member_port_ids[1]);
>
> - /* stop / start bonded device and verify that primary MAC address is
> - * propagated to bonded device and slaves */
> + /*
> + * stop / start bonded device and verify that primary MAC address is
> + * propagated to bonded device and members.
> + */
>
> TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
> "Failed to stop bonded port %u",
> @@ -3365,21 +3393,21 @@ test_balance_verify_mac_assignment(void)
> "bonded port (%d) mac address not set to that of primary port",
> test_params->bonded_port_id);
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[0]);
> + test_params->member_port_ids[0]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not set to that of primary port",
> - test_params->slave_port_ids[0]);
> + "member port (%d) mac address not set to that of primary port",
> + test_params->member_port_ids[0]);
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[1]);
> + test_params->member_port_ids[1]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not set to that of primary port",
> - test_params->slave_port_ids[1]);
> + "member port (%d) mac address not set to that of primary port",
> + test_params->member_port_ids[1]);
>
> /* Set explicit MAC address */
> TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_set(
> @@ -3395,44 +3423,44 @@ test_balance_verify_mac_assignment(void)
> "bonded port (%d) mac address not set to that of bonded port",
> test_params->bonded_port_id);
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[0]);
> + test_params->member_port_ids[0]);
> TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not as expected\n",
> - test_params->slave_port_ids[0]);
> + "member port (%d) mac address not as expected\n",
> + test_params->member_port_ids[0]);
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[1]);
> + test_params->member_port_ids[1]);
> TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not set to that of bonded port",
> - test_params->slave_port_ids[1]);
> + "member port (%d) mac address not set to that of bonded port",
> + test_params->member_port_ids[1]);
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> -#define TEST_BALANCE_LINK_STATUS_SLAVE_COUNT (4)
> +#define TEST_BALANCE_LINK_STATUS_MEMBER_COUNT (4)
>
> static int
> -test_balance_verify_slave_link_status_change_behaviour(void)
> +test_balance_verify_member_link_status_change_behaviour(void)
> {
> - struct rte_mbuf *pkt_burst[TEST_BALANCE_LINK_STATUS_SLAVE_COUNT][MAX_PKT_BURST];
> + struct rte_mbuf *pkt_burst[TEST_BALANCE_LINK_STATUS_MEMBER_COUNT][MAX_PKT_BURST];
> struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
> struct rte_eth_stats port_stats;
>
> - uint16_t slaves[RTE_MAX_ETHPORTS];
> + uint16_t members[RTE_MAX_ETHPORTS];
>
> - int i, burst_size, slave_count;
> + int i, burst_size, member_count;
>
> memset(pkt_burst, 0, sizeof(pkt_burst));
>
> - /* Initialize bonded device with 4 slaves in round robin mode */
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> - BONDING_MODE_BALANCE, 0, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT, 1),
> +	/* Initialize bonded device with 4 members in balance mode */
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> + BONDING_MODE_BALANCE, 0, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT, 1),
> "Failed to initialise bonded device");
>
> TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
> @@ -3440,32 +3468,34 @@ test_balance_verify_slave_link_status_change_behaviour(void)
> "Failed to set balance xmit policy.");
>
>
> - /* Verify Current Slaves Count /Active Slave Count is */
> - slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
> +	/* Verify that current member count and active member count are as expected */
> + member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
> RTE_MAX_ETHPORTS);
> - TEST_ASSERT_EQUAL(slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT,
> - "Number of slaves (%d) is not as expected (%d).",
> - slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT);
> + TEST_ASSERT_EQUAL(member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT,
> + "Number of members (%d) is not as expected (%d).",
> + member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT);
>
> - slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
> - slaves, RTE_MAX_ETHPORTS);
> - TEST_ASSERT_EQUAL(slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT,
> - "Number of active slaves (%d) is not as expected (%d).",
> - slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT);
> + member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
> + members, RTE_MAX_ETHPORTS);
> + TEST_ASSERT_EQUAL(member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT,
> + "Number of active members (%d) is not as expected (%d).",
> + member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT);
>
> - /* Set 2 slaves link status to down */
> +	/* Set 2 members' link status to down */
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[1], 0);
> + test_params->member_port_ids[1], 0);
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[3], 0);
> + test_params->member_port_ids[3], 0);
>
> - TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
> - test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
> - "Number of active slaves (%d) is not as expected (%d).",
> - slave_count, 2);
> + TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
> + test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 2,
> + "Number of active members (%d) is not as expected (%d).",
> + member_count, 2);
>
> - /* Send to sets of packet burst and verify that they are balanced across
> - * slaves */
> + /*
> +	 * Send two sets of packet bursts and verify that they are balanced across
> + * members.
> + */
> burst_size = 21;
>
> TEST_ASSERT_EQUAL(generate_test_burst(
> @@ -3491,27 +3521,27 @@ test_balance_verify_slave_link_status_change_behaviour(void)
> test_params->bonded_port_id, (int)port_stats.opackets,
> burst_size + burst_size);
>
> - rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
> "(%d) port_stats.opackets (%d) not as expected (%d).",
> - test_params->slave_port_ids[0], (int)port_stats.opackets,
> + test_params->member_port_ids[0], (int)port_stats.opackets,
> burst_size);
>
> - rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
> "(%d) port_stats.opackets (%d) not as expected (%d).",
> - test_params->slave_port_ids[2], (int)port_stats.opackets,
> + test_params->member_port_ids[2], (int)port_stats.opackets,
> burst_size);
>
> - /* verify that all packets get send on primary slave when no other slaves
> +	/* verify that all packets get sent on the primary member when no other members
> * are available */
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[2], 0);
> + test_params->member_port_ids[2], 0);
>
> - TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
> - test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 1,
> - "Number of active slaves (%d) is not as expected (%d).",
> - slave_count, 1);
> + TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
> + test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 1,
> + "Number of active members (%d) is not as expected (%d).",
> + member_count, 1);
>
> TEST_ASSERT_EQUAL(generate_test_burst(
> &pkt_burst[1][0], burst_size, 0, 1, 1, 0, 0), burst_size,
> @@ -3528,31 +3558,31 @@ test_balance_verify_slave_link_status_change_behaviour(void)
> test_params->bonded_port_id, (int)port_stats.opackets,
> burst_size + burst_size + burst_size);
>
> - rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size + burst_size),
> "(%d) port_stats.opackets (%d) not as expected (%d).",
> - test_params->slave_port_ids[0], (int)port_stats.opackets,
> + test_params->member_port_ids[0], (int)port_stats.opackets,
> burst_size + burst_size);
>
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[0], 0);
> + test_params->member_port_ids[0], 0);
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[1], 1);
> + test_params->member_port_ids[1], 1);
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[2], 1);
> + test_params->member_port_ids[2], 1);
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[3], 1);
> + test_params->member_port_ids[3], 1);
>
> - for (i = 0; i < TEST_BALANCE_LINK_STATUS_SLAVE_COUNT; i++) {
> + for (i = 0; i < TEST_BALANCE_LINK_STATUS_MEMBER_COUNT; i++) {
> TEST_ASSERT_EQUAL(generate_test_burst(
> &pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0), burst_size,
> "Failed to generate packet burst");
>
> - virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
> + virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
> &pkt_burst[i][0], burst_size);
> }
>
> - /* Verify that pkts are not received on slaves with link status down */
> + /* Verify that pkts are not received on members with link status down */
>
> rte_eth_rx_burst(test_params->bonded_port_id, 0, rx_pkt_burst,
> MAX_PKT_BURST);
> @@ -3564,8 +3594,8 @@ test_balance_verify_slave_link_status_change_behaviour(void)
> test_params->bonded_port_id, (int)port_stats.ipackets,
> burst_size * 3);
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> static int
> @@ -3576,7 +3606,7 @@ test_broadcast_tx_burst(void)
>
> struct rte_eth_stats port_stats;
>
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_BROADCAST, 0, 2, 1),
> "Failed to initialise bonded device");
>
> @@ -3590,7 +3620,7 @@ test_broadcast_tx_burst(void)
> pktlen = initialize_ipv4_header(test_params->pkt_ipv4_hdr, src_addr,
> dst_addr_0, pktlen);
>
> - burst_size = 20 * test_params->bonded_slave_count;
> + burst_size = 20 * test_params->bonded_member_count;
>
> TEST_ASSERT(burst_size < MAX_PKT_BURST,
> "Burst size specified is greater than supported.");
> @@ -3611,25 +3641,25 @@ test_broadcast_tx_burst(void)
> /* Verify bonded port tx stats */
> rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets,
> - (uint64_t)burst_size * test_params->bonded_slave_count,
> + (uint64_t)burst_size * test_params->bonded_member_count,
> "Bonded Port (%d) opackets value (%u) not as expected (%d)",
> test_params->bonded_port_id, (unsigned int)port_stats.opackets,
> burst_size);
>
> - /* Verify slave ports tx stats */
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> - rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
> + /* Verify member ports tx stats */
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> + rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
> - "Slave Port (%d) opackets value (%u) not as expected (%d)\n",
> + "Member Port (%d) opackets value (%u) not as expected (%d)\n",
> test_params->bonded_port_id,
> (unsigned int)port_stats.opackets, burst_size);
> }
>
> - /* Put all slaves down and try and transmit */
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> + /* Put all members down and try and transmit */
> + for (i = 0; i < test_params->bonded_member_count; i++) {
>
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[i], 0);
> + test_params->member_port_ids[i], 0);
> }
>
> /* Send burst on bonded port */
> @@ -3637,159 +3667,161 @@ test_broadcast_tx_burst(void)
> test_params->bonded_port_id, 0, pkts_burst, burst_size), 0,
> "transmitted an unexpected number of packets");
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
>
> -#define TEST_BCAST_SLAVE_TX_FAIL_SLAVE_COUNT (3)
> -#define TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE (40)
> -#define TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT (15)
> -#define TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT (10)
> +#define TEST_BCAST_MEMBER_TX_FAIL_MEMBER_COUNT (3)
> +#define TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE (40)
> +#define TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT (15)
> +#define TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT (10)
>
> static int
> -test_broadcast_tx_burst_slave_tx_fail(void)
> +test_broadcast_tx_burst_member_tx_fail(void)
> {
> - struct rte_mbuf *pkts_burst[TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE];
> - struct rte_mbuf *expected_fail_pkts[TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT];
> + struct rte_mbuf *pkts_burst[TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE];
> + struct rte_mbuf *expected_fail_pkts[TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT];
>
> struct rte_eth_stats port_stats;
>
> int i, tx_count;
>
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_BROADCAST, 0,
> - TEST_BCAST_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
> + TEST_BCAST_MEMBER_TX_FAIL_MEMBER_COUNT, 1),
> "Failed to initialise bonded device");
>
> /* Generate test bursts for transmission */
> TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst,
> - TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE, 0, 0, 0, 0, 0),
> - TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE,
> + TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE, 0, 0, 0, 0, 0),
> + TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE,
> "Failed to generate test packet burst");
>
> - for (i = 0; i < TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT; i++) {
> - expected_fail_pkts[i] = pkts_burst[TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
> - TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT + i];
> + for (i = 0; i < TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT; i++) {
> + expected_fail_pkts[i] = pkts_burst[TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
> + TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT + i];
> }
>
> - /* Set virtual slave TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX to only fail
> - * transmission of TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT packets of burst */
> +	/*
> +	 * Configure each virtual member to fail transmission of part of the
> +	 * burst (TEST_BCAST_MEMBER_TX_FAIL_MIN/MAX_PACKETS_COUNT packets).
> +	 */
> virtual_ethdev_tx_burst_fn_set_success(
> - test_params->slave_port_ids[0],
> + test_params->member_port_ids[0],
> 0);
> virtual_ethdev_tx_burst_fn_set_success(
> - test_params->slave_port_ids[1],
> + test_params->member_port_ids[1],
> 0);
> virtual_ethdev_tx_burst_fn_set_success(
> - test_params->slave_port_ids[2],
> + test_params->member_port_ids[2],
> 0);
>
> virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
> - test_params->slave_port_ids[0],
> - TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
> + test_params->member_port_ids[0],
> + TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
>
> virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
> - test_params->slave_port_ids[1],
> - TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
> + test_params->member_port_ids[1],
> + TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
>
> virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
> - test_params->slave_port_ids[2],
> - TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
> + test_params->member_port_ids[2],
> + TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
>
> /* Transmit burst */
> tx_count = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst,
> - TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE);
> + TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE);
>
> - TEST_ASSERT_EQUAL(tx_count, TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
> - TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT,
> + TEST_ASSERT_EQUAL(tx_count, TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
> + TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT,
> "Transmitted (%d) packets, expected to transmit (%d) packets",
> - tx_count, TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
> - TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
> + tx_count, TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
> + TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
>
> /* Verify that failed packet are expected failed packets */
> - for (i = 0; i < TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT; i++) {
> + for (i = 0; i < TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT; i++) {
> TEST_ASSERT_EQUAL(expected_fail_pkts[i], pkts_burst[i + tx_count],
> "expected mbuf (%d) pointer %p not expected pointer %p",
> i, expected_fail_pkts[i], pkts_burst[i + tx_count]);
> }
>
> - /* Verify slave ports tx stats */
> + /* Verify member ports tx stats */
>
> - rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
>
> TEST_ASSERT_EQUAL(port_stats.opackets,
> - (uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
> - TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT,
> + (uint64_t)TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
> + TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT,
> "Port (%d) opackets value (%u) not as expected (%d)",
> test_params->bonded_port_id, (unsigned int)port_stats.opackets,
> - TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
> - TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
> + TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
> + TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
>
>
> - rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
>
> TEST_ASSERT_EQUAL(port_stats.opackets,
> - (uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
> - TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT,
> + (uint64_t)TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
> + TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT,
> "Port (%d) opackets value (%u) not as expected (%d)",
> test_params->bonded_port_id, (unsigned int)port_stats.opackets,
> - TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
> - TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
> + TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
> + TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
>
> - rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
>
> TEST_ASSERT_EQUAL(port_stats.opackets,
> - (uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
> - TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT,
> + (uint64_t)TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
> + TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT,
> "Port (%d) opackets value (%u) not as expected (%d)",
> test_params->bonded_port_id, (unsigned int)port_stats.opackets,
> - TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
> - TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
> + TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
> + TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
>
>
> /* Verify that all mbufs who transmission failed have a ref value of one */
> TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkts_burst[tx_count],
> - TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT, 1),
> + TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT, 1),
> "mbufs refcnts not as expected");
>
> free_mbufs(&pkts_burst[tx_count],
> - TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
> + TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> -#define BROADCAST_RX_BURST_NUM_OF_SLAVES (3)
> +#define BROADCAST_RX_BURST_NUM_OF_MEMBERS (3)
>
> static int
> test_broadcast_rx_burst(void)
> {
> - struct rte_mbuf *gen_pkt_burst[BROADCAST_RX_BURST_NUM_OF_SLAVES][MAX_PKT_BURST];
> + struct rte_mbuf *gen_pkt_burst[BROADCAST_RX_BURST_NUM_OF_MEMBERS][MAX_PKT_BURST];
>
> struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
> struct rte_eth_stats port_stats;
>
> - int burst_size[BROADCAST_RX_BURST_NUM_OF_SLAVES] = { 10, 5, 30 };
> + int burst_size[BROADCAST_RX_BURST_NUM_OF_MEMBERS] = { 10, 5, 30 };
> int i, j;
>
> memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
>
> - /* Initialize bonded device with 4 slaves in round robin mode */
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> +	/* Initialize bonded device with 3 members in broadcast mode */
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_BROADCAST, 0, 3, 1),
> "Failed to initialise bonded device");
>
> /* Generate test bursts of packets to transmit */
> - for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
> + for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_MEMBERS; i++) {
> TEST_ASSERT_EQUAL(generate_test_burst(
> &gen_pkt_burst[i][0], burst_size[i], 0, 0, 1, 0, 0),
> burst_size[i], "failed to generate packet burst");
> }
>
> - /* Add rx data to slave 0 */
> - for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
> - virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
> +	/* Add rx data to each member */
> + for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_MEMBERS; i++) {
> + virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
> &gen_pkt_burst[i][0], burst_size[i]);
> }
>
> @@ -3810,33 +3842,33 @@ test_broadcast_rx_burst(void)
> burst_size[0] + burst_size[1] + burst_size[2]);
>
>
> - /* Verify bonded slave devices rx counts */
> - rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
> + /* Verify bonded member devices rx counts */
> + rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
> - "Slave Port (%d) ipackets value (%u) not as expected (%d)",
> - test_params->slave_port_ids[0], (unsigned int)port_stats.ipackets,
> + "Member Port (%d) ipackets value (%u) not as expected (%d)",
> + test_params->member_port_ids[0], (unsigned int)port_stats.ipackets,
> burst_size[0]);
>
> - rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
> - "Slave Port (%d) ipackets value (%u) not as expected (%d)",
> - test_params->slave_port_ids[0], (unsigned int)port_stats.ipackets,
> + "Member Port (%d) ipackets value (%u) not as expected (%d)",
> +			test_params->member_port_ids[1], (unsigned int)port_stats.ipackets,
> burst_size[1]);
>
> - rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
> - "Slave Port (%d) ipackets value (%u) not as expected (%d)",
> - test_params->slave_port_ids[2], (unsigned int)port_stats.ipackets,
> + "Member Port (%d) ipackets value (%u) not as expected (%d)",
> + test_params->member_port_ids[2], (unsigned int)port_stats.ipackets,
> burst_size[2]);
>
> - rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
> - "Slave Port (%d) ipackets value (%u) not as expected (%d)",
> - test_params->slave_port_ids[3], (unsigned int)port_stats.ipackets,
> + "Member Port (%d) ipackets value (%u) not as expected (%d)",
> + test_params->member_port_ids[3], (unsigned int)port_stats.ipackets,
> 0);
>
> /* free mbufs allocate for rx testing */
> - for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
> + for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_MEMBERS; i++) {
> for (j = 0; j < MAX_PKT_BURST; j++) {
> if (gen_pkt_burst[i][j] != NULL) {
> rte_pktmbuf_free(gen_pkt_burst[i][j]);
> @@ -3845,8 +3877,8 @@ test_broadcast_rx_burst(void)
> }
> }
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> static int
> @@ -3855,8 +3887,8 @@ test_broadcast_verify_promiscuous_enable_disable(void)
> int i;
> int ret;
>
> - /* Initialize bonded device with 4 slaves in round robin mode */
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> +	/* Initialize bonded device with 4 members in broadcast mode */
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_BROADCAST, 0, 4, 1),
> "Failed to initialise bonded device");
>
> @@ -3870,11 +3902,11 @@ test_broadcast_verify_promiscuous_enable_disable(void)
> "Port (%d) promiscuous mode not enabled",
> test_params->bonded_port_id);
>
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
> - test_params->slave_port_ids[i]), 1,
> + test_params->member_port_ids[i]), 1,
> "Port (%d) promiscuous mode not enabled",
> - test_params->slave_port_ids[i]);
> + test_params->member_port_ids[i]);
> }
>
> ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
> @@ -3886,15 +3918,15 @@ test_broadcast_verify_promiscuous_enable_disable(void)
> "Port (%d) promiscuous mode not disabled",
> test_params->bonded_port_id);
>
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
> - test_params->slave_port_ids[i]), 0,
> + test_params->member_port_ids[i]), 0,
> "Port (%d) promiscuous mode not disabled",
> - test_params->slave_port_ids[i]);
> + test_params->member_port_ids[i]);
> }
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> static int
> @@ -3905,49 +3937,55 @@ test_broadcast_verify_mac_assignment(void)
>
> int i;
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
> + &expected_mac_addr_0),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[0]);
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[2], &expected_mac_addr_1),
> + test_params->member_port_ids[0]);
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[2],
> + &expected_mac_addr_1),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[2]);
> + test_params->member_port_ids[2]);
>
> - /* Initialize bonded device with 4 slaves in round robin mode */
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> +	/* Initialize bonded device with 4 members in broadcast mode */
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_BROADCAST, 0, 4, 1),
> "Failed to initialise bonded device");
>
> - /* Verify that all MACs are the same as first slave added to bonded
> + /* Verify that all MACs are the same as first member added to bonded
> * device */
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
> + &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[i]);
> + test_params->member_port_ids[i]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not set to that of primary port",
> - test_params->slave_port_ids[i]);
> + "member port (%d) mac address not set to that of primary port",
> + test_params->member_port_ids[i]);
> }
>
> /* change primary and verify that MAC addresses haven't changed */
> TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
> - test_params->slave_port_ids[2]),
> + test_params->member_port_ids[2]),
> "Failed to set bonded port (%d) primary port to (%d)",
> - test_params->bonded_port_id, test_params->slave_port_ids[i]);
> + test_params->bonded_port_id, test_params->member_port_ids[i]);
>
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
> + &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[i]);
> + test_params->member_port_ids[i]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address has changed to that of primary "
> + "member port (%d) mac address has changed to that of primary "
> "port without stop/start toggle of bonded device",
> - test_params->slave_port_ids[i]);
> + test_params->member_port_ids[i]);
> }
>
> - /* stop / start bonded device and verify that primary MAC address is
> - * propagated to bonded device and slaves */
> + /*
> + * stop / start bonded device and verify that primary MAC address is
> + * propagated to bonded device and members.
> + */
>
> TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
> "Failed to stop bonded port %u",
> @@ -3962,16 +4000,17 @@ test_broadcast_verify_mac_assignment(void)
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
> sizeof(read_mac_addr)),
> "bonded port (%d) mac address not set to that of new primary port",
> - test_params->slave_port_ids[i]);
> + test_params->member_port_ids[i]);
>
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
> + &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[i]);
> + test_params->member_port_ids[i]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not set to that of new primary "
> - "port", test_params->slave_port_ids[i]);
> + "member port (%d) mac address not set to that of new primary "
> + "port", test_params->member_port_ids[i]);
> }
>
> /* Set explicit MAC address */
> @@ -3986,71 +4025,72 @@ test_broadcast_verify_mac_assignment(void)
> TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
> sizeof(read_mac_addr)),
> "bonded port (%d) mac address not set to that of new primary port",
> - test_params->slave_port_ids[i]);
> + test_params->member_port_ids[i]);
>
>
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
> + &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[i]);
> + test_params->member_port_ids[i]);
> TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not set to that of new primary "
> - "port", test_params->slave_port_ids[i]);
> + "member port (%d) mac address not set to that of new primary "
> + "port", test_params->member_port_ids[i]);
> }
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> -#define BROADCAST_LINK_STATUS_NUM_OF_SLAVES (4)
> +#define BROADCAST_LINK_STATUS_NUM_OF_MEMBERS (4)
> static int
> -test_broadcast_verify_slave_link_status_change_behaviour(void)
> +test_broadcast_verify_member_link_status_change_behaviour(void)
> {
> - struct rte_mbuf *pkt_burst[BROADCAST_LINK_STATUS_NUM_OF_SLAVES][MAX_PKT_BURST];
> + struct rte_mbuf *pkt_burst[BROADCAST_LINK_STATUS_NUM_OF_MEMBERS][MAX_PKT_BURST];
> struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
> struct rte_eth_stats port_stats;
>
> - uint16_t slaves[RTE_MAX_ETHPORTS];
> + uint16_t members[RTE_MAX_ETHPORTS];
>
> - int i, burst_size, slave_count;
> + int i, burst_size, member_count;
>
> memset(pkt_burst, 0, sizeof(pkt_burst));
>
> - /* Initialize bonded device with 4 slaves in round robin mode */
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> - BONDING_MODE_BROADCAST, 0, BROADCAST_LINK_STATUS_NUM_OF_SLAVES,
> +	/* Initialize bonded device with 4 members in broadcast mode */
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> + BONDING_MODE_BROADCAST, 0, BROADCAST_LINK_STATUS_NUM_OF_MEMBERS,
> 1), "Failed to initialise bonded device");
>
> - /* Verify Current Slaves Count /Active Slave Count is */
> - slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
> +	/* Verify that both the current and active member counts are 4 */
> + member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
> RTE_MAX_ETHPORTS);
> - TEST_ASSERT_EQUAL(slave_count, 4,
> - "Number of slaves (%d) is not as expected (%d).",
> - slave_count, 4);
> + TEST_ASSERT_EQUAL(member_count, 4,
> + "Number of members (%d) is not as expected (%d).",
> + member_count, 4);
>
> - slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
> - slaves, RTE_MAX_ETHPORTS);
> - TEST_ASSERT_EQUAL(slave_count, 4,
> - "Number of active slaves (%d) is not as expected (%d).",
> - slave_count, 4);
> + member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
> + members, RTE_MAX_ETHPORTS);
> + TEST_ASSERT_EQUAL(member_count, 4,
> + "Number of active members (%d) is not as expected (%d).",
> + member_count, 4);
>
> - /* Set 2 slaves link status to down */
> + /* Set 2 members link status to down */
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[1], 0);
> + test_params->member_port_ids[1], 0);
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[3], 0);
> + test_params->member_port_ids[3], 0);
>
> - slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
> - slaves, RTE_MAX_ETHPORTS);
> - TEST_ASSERT_EQUAL(slave_count, 2,
> - "Number of active slaves (%d) is not as expected (%d).",
> - slave_count, 2);
> + member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
> + members, RTE_MAX_ETHPORTS);
> + TEST_ASSERT_EQUAL(member_count, 2,
> + "Number of active members (%d) is not as expected (%d).",
> + member_count, 2);
>
> - for (i = 0; i < test_params->bonded_slave_count; i++)
> - rte_eth_stats_reset(test_params->slave_port_ids[i]);
> + for (i = 0; i < test_params->bonded_member_count; i++)
> + rte_eth_stats_reset(test_params->member_port_ids[i]);
>
> - /* Verify that pkts are not sent on slaves with link status down */
> + /* Verify that pkts are not sent on members with link status down */
> burst_size = 21;
>
> TEST_ASSERT_EQUAL(generate_test_burst(
> @@ -4062,43 +4102,43 @@ test_broadcast_verify_slave_link_status_change_behaviour(void)
> "rte_eth_tx_burst failed\n");
>
> rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
> - TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size * slave_count),
> + TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size * member_count),
> "(%d) port_stats.opackets (%d) not as expected (%d)\n",
> test_params->bonded_port_id, (int)port_stats.opackets,
> - burst_size * slave_count);
> + burst_size * member_count);
>
> - rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
> "(%d) port_stats.opackets not as expected",
> - test_params->slave_port_ids[0]);
> + test_params->member_port_ids[0]);
>
> - rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, 0,
> "(%d) port_stats.opackets not as expected",
> - test_params->slave_port_ids[1]);
> + test_params->member_port_ids[1]);
>
> - rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
> "(%d) port_stats.opackets not as expected",
> - test_params->slave_port_ids[2]);
> + test_params->member_port_ids[2]);
>
>
> - rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, 0,
> "(%d) port_stats.opackets not as expected",
> - test_params->slave_port_ids[3]);
> + test_params->member_port_ids[3]);
>
>
> - for (i = 0; i < BROADCAST_LINK_STATUS_NUM_OF_SLAVES; i++) {
> + for (i = 0; i < BROADCAST_LINK_STATUS_NUM_OF_MEMBERS; i++) {
> TEST_ASSERT_EQUAL(generate_test_burst(
> &pkt_burst[i][0], burst_size, 0, 0, 1, 0, 0),
> burst_size, "failed to generate packet burst");
>
> - virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
> + virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
> &pkt_burst[i][0], burst_size);
> }
>
> - /* Verify that pkts are not received on slaves with link status down */
> + /* Verify that pkts are not received on members with link status down */
> TEST_ASSERT_EQUAL(rte_eth_rx_burst(
> test_params->bonded_port_id, 0, rx_pkt_burst, MAX_PKT_BURST),
> burst_size + burst_size, "rte_eth_rx_burst failed");
> @@ -4110,8 +4150,8 @@ test_broadcast_verify_slave_link_status_change_behaviour(void)
> "(%d) port_stats.ipackets not as expected\n",
> test_params->bonded_port_id);
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> static int
> @@ -4146,21 +4186,21 @@ testsuite_teardown(void)
> free(test_params->pkt_eth_hdr);
> test_params->pkt_eth_hdr = NULL;
>
> - /* Clean up and remove slaves from bonded device */
> - remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + remove_members_and_stop_bonded_device();
> }
>
> static void
> free_virtualpmd_tx_queue(void)
> {
> - int i, slave_port, to_free_cnt;
> + int i, member_port, to_free_cnt;
> struct rte_mbuf *pkts_to_free[MAX_PKT_BURST];
>
> /* Free tx queue of virtual pmd */
> - for (slave_port = 0; slave_port < test_params->bonded_slave_count;
> - slave_port++) {
> + for (member_port = 0; member_port < test_params->bonded_member_count;
> + member_port++) {
> to_free_cnt = virtual_ethdev_get_mbufs_from_tx_queue(
> - test_params->slave_port_ids[slave_port],
> + test_params->member_port_ids[member_port],
> pkts_to_free, MAX_PKT_BURST);
> for (i = 0; i < to_free_cnt; i++)
> rte_pktmbuf_free(pkts_to_free[i]);
> @@ -4177,11 +4217,11 @@ test_tlb_tx_burst(void)
> uint64_t sum_ports_opackets = 0, all_bond_opackets = 0, all_bond_obytes = 0;
> uint16_t pktlen;
>
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members
> (BONDING_MODE_TLB, 1, 3, 1),
> "Failed to initialise bonded device");
>
> - burst_size = 20 * test_params->bonded_slave_count;
> + burst_size = 20 * test_params->bonded_member_count;
>
> TEST_ASSERT(burst_size < MAX_PKT_BURST,
> "Burst size specified is greater than supported.\n");
> @@ -4197,7 +4237,7 @@ test_tlb_tx_burst(void)
> RTE_ETHER_TYPE_IPV4, 0, 0);
> } else {
> initialize_eth_header(test_params->pkt_eth_hdr,
> - (struct rte_ether_addr *)test_params->default_slave_mac,
> + (struct rte_ether_addr *)test_params->default_member_mac,
> (struct rte_ether_addr *)dst_mac_0,
> RTE_ETHER_TYPE_IPV4, 0, 0);
> }
> @@ -4234,26 +4274,26 @@ test_tlb_tx_burst(void)
> burst_size);
>
>
> - /* Verify slave ports tx stats */
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> - rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats[i]);
> + /* Verify member ports tx stats */
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> + rte_eth_stats_get(test_params->member_port_ids[i], &port_stats[i]);
> sum_ports_opackets += port_stats[i].opackets;
> }
>
> TEST_ASSERT_EQUAL(sum_ports_opackets, (uint64_t)all_bond_opackets,
> - "Total packets sent by slaves is not equal to packets sent by bond interface");
> +			"Total packets sent by members are not equal to packets sent by bonded interface");
>
> - /* checking if distribution of packets is balanced over slaves */
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> +	/* Check if the distribution of packets is balanced over members */
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> TEST_ASSERT(port_stats[i].obytes > 0 &&
> port_stats[i].obytes < all_bond_obytes,
> - "Packets are not balanced over slaves");
> + "Packets are not balanced over members");
> }
>
> - /* Put all slaves down and try and transmit */
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> +	/* Put all members down and try to transmit */
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[i], 0);
> + test_params->member_port_ids[i], 0);
> }
>
> /* Send burst on bonded port */
> @@ -4261,11 +4301,11 @@ test_tlb_tx_burst(void)
> burst_size);
> TEST_ASSERT_EQUAL(nb_tx, 0, " bad number of packet in burst");
>
> - /* Clean ugit checkout masterp and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> +	/* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> -#define TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT (4)
> +#define TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT (4)
>
> static int
> test_tlb_rx_burst(void)
> @@ -4279,26 +4319,26 @@ test_tlb_rx_burst(void)
>
> uint16_t i, j, nb_rx, burst_size = 17;
>
> - /* Initialize bonded device with 4 slaves in transmit load balancing mode */
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> + /* Initialize bonded device with 4 members in transmit load balancing mode */
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_TLB,
> - TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT, 1, 1),
> + TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT, 1, 1),
> "Failed to initialize bonded device");
>
>
> primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
> TEST_ASSERT(primary_port >= 0,
> - "failed to get primary slave for bonded port (%d)",
> + "failed to get primary member for bonded port (%d)",
> test_params->bonded_port_id);
>
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> /* Generate test bursts of packets to transmit */
> TEST_ASSERT_EQUAL(generate_test_burst(
> &gen_pkt_burst[0], burst_size, 0, 1, 0, 0, 0), burst_size,
> "burst generation failed");
>
> - /* Add rx data to slave */
> - virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
> + /* Add rx data to member */
> + virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
> &gen_pkt_burst[0], burst_size);
>
> /* Call rx burst on bonded device */
> @@ -4307,7 +4347,7 @@ test_tlb_rx_burst(void)
>
> TEST_ASSERT_EQUAL(nb_rx, burst_size, "rte_eth_rx_burst failed\n");
>
> - if (test_params->slave_port_ids[i] == primary_port) {
> + if (test_params->member_port_ids[i] == primary_port) {
> /* Verify bonded device rx count */
> rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
> TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
> @@ -4315,27 +4355,27 @@ test_tlb_rx_burst(void)
> test_params->bonded_port_id,
> (unsigned int)port_stats.ipackets, burst_size);
>
> - /* Verify bonded slave devices rx count */
> - for (j = 0; j < test_params->bonded_slave_count; j++) {
> - rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
> + /* Verify bonded member devices rx count */
> + for (j = 0; j < test_params->bonded_member_count; j++) {
> + rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
> if (i == j) {
> TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
> - "Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
> - test_params->slave_port_ids[i],
> + "Member Port (%d) ipackets value (%u) not as expected (%d)\n",
> + test_params->member_port_ids[i],
> (unsigned int)port_stats.ipackets, burst_size);
> } else {
> TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)0,
> - "Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
> - test_params->slave_port_ids[i],
> + "Member Port (%d) ipackets value (%u) not as expected (%d)\n",
> + test_params->member_port_ids[i],
> (unsigned int)port_stats.ipackets, 0);
> }
> }
> } else {
> - for (j = 0; j < test_params->bonded_slave_count; j++) {
> - rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
> + for (j = 0; j < test_params->bonded_member_count; j++) {
> + rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)0,
> - "Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
> - test_params->slave_port_ids[i],
> + "Member Port (%d) ipackets value (%u) not as expected (%d)\n",
> + test_params->member_port_ids[i],
> (unsigned int)port_stats.ipackets, 0);
> }
> }
> @@ -4348,8 +4388,8 @@ test_tlb_rx_burst(void)
> rte_eth_stats_reset(test_params->bonded_port_id);
> }
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> static int
> @@ -4358,14 +4398,14 @@ test_tlb_verify_promiscuous_enable_disable(void)
> int i, primary_port, promiscuous_en;
> int ret;
>
> - /* Initialize bonded device with 4 slaves in transmit load balancing mode */
> - TEST_ASSERT_SUCCESS( initialize_bonded_device_with_slaves(
> + /* Initialize bonded device with 4 members in transmit load balancing mode */
> +	TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_TLB, 0, 4, 1),
> "Failed to initialize bonded device");
>
> primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
> TEST_ASSERT(primary_port >= 0,
> - "failed to get primary slave for bonded port (%d)",
> + "failed to get primary member for bonded port (%d)",
> test_params->bonded_port_id);
>
> ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
> @@ -4377,10 +4417,10 @@ test_tlb_verify_promiscuous_enable_disable(void)
> TEST_ASSERT_EQUAL(promiscuous_en, (int)1,
> "Port (%d) promiscuous mode not enabled\n",
> test_params->bonded_port_id);
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> promiscuous_en = rte_eth_promiscuous_get(
> - test_params->slave_port_ids[i]);
> - if (primary_port == test_params->slave_port_ids[i]) {
> + test_params->member_port_ids[i]);
> + if (primary_port == test_params->member_port_ids[i]) {
> TEST_ASSERT_EQUAL(promiscuous_en, (int)1,
> "Port (%d) promiscuous mode not enabled\n",
> test_params->bonded_port_id);
> @@ -4402,16 +4442,16 @@ test_tlb_verify_promiscuous_enable_disable(void)
> "Port (%d) promiscuous mode not disabled\n",
> test_params->bonded_port_id);
>
> - for (i = 0; i < test_params->bonded_slave_count; i++) {
> + for (i = 0; i < test_params->bonded_member_count; i++) {
> promiscuous_en = rte_eth_promiscuous_get(
> - test_params->slave_port_ids[i]);
> + test_params->member_port_ids[i]);
> TEST_ASSERT_EQUAL(promiscuous_en, (int)0,
> - "slave port (%d) promiscuous mode not disabled\n",
> - test_params->slave_port_ids[i]);
> + "member port (%d) promiscuous mode not disabled\n",
> + test_params->member_port_ids[i]);
> }
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> static int
> @@ -4420,20 +4460,24 @@ test_tlb_verify_mac_assignment(void)
> struct rte_ether_addr read_mac_addr;
> struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
> + &expected_mac_addr_0),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[0]);
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
> + test_params->member_port_ids[0]);
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1],
> + &expected_mac_addr_1),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[1]);
> + test_params->member_port_ids[1]);
>
> - /* Initialize bonded device with 2 slaves in active backup mode */
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> +	/* Initialize bonded device with 2 members in transmit load balancing mode */
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_TLB, 0, 2, 1),
> "Failed to initialize bonded device");
>
> - /* Verify that bonded MACs is that of first slave and that the other slave
> - * MAC hasn't been changed */
> + /*
> +	 * Verify that the bonded MAC is that of the first member and that the
> +	 * other member's MAC hasn't been changed.
> + */
> TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
> "Failed to get mac address (port %d)",
> test_params->bonded_port_id);
> @@ -4442,27 +4486,27 @@ test_tlb_verify_mac_assignment(void)
> "bonded port (%d) mac address not set to that of primary port",
> test_params->bonded_port_id);
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[0]);
> + test_params->member_port_ids[0]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not set to that of primary port",
> - test_params->slave_port_ids[0]);
> + "member port (%d) mac address not set to that of primary port",
> + test_params->member_port_ids[0]);
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[1]);
> + test_params->member_port_ids[1]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not as expected",
> - test_params->slave_port_ids[1]);
> + "member port (%d) mac address not as expected",
> + test_params->member_port_ids[1]);
>
> /* change primary and verify that MAC addresses haven't changed */
> TEST_ASSERT_EQUAL(rte_eth_bond_primary_set(test_params->bonded_port_id,
> - test_params->slave_port_ids[1]), 0,
> + test_params->member_port_ids[1]), 0,
> "Failed to set bonded port (%d) primary port to (%d)",
> - test_params->bonded_port_id, test_params->slave_port_ids[1]);
> + test_params->bonded_port_id, test_params->member_port_ids[1]);
>
> TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
> "Failed to get mac address (port %d)",
> @@ -4472,24 +4516,26 @@ test_tlb_verify_mac_assignment(void)
> "bonded port (%d) mac address not set to that of primary port",
> test_params->bonded_port_id);
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[0]);
> + test_params->member_port_ids[0]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not set to that of primary port",
> - test_params->slave_port_ids[0]);
> + "member port (%d) mac address not set to that of primary port",
> + test_params->member_port_ids[0]);
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[1]);
> + test_params->member_port_ids[1]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not as expected",
> - test_params->slave_port_ids[1]);
> + "member port (%d) mac address not as expected",
> + test_params->member_port_ids[1]);
>
> - /* stop / start bonded device and verify that primary MAC address is
> - * propagated to bonded device and slaves */
> + /*
> +	 * Stop / start the bonded device and verify that the primary MAC address is
> + * propagated to bonded device and members.
> + */
>
> TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
> "Failed to stop bonded port %u",
> @@ -4506,21 +4552,21 @@ test_tlb_verify_mac_assignment(void)
> "bonded port (%d) mac address not set to that of primary port",
> test_params->bonded_port_id);
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[0]);
> + test_params->member_port_ids[0]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not as expected",
> - test_params->slave_port_ids[0]);
> + "member port (%d) mac address not as expected",
> + test_params->member_port_ids[0]);
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[1]);
> + test_params->member_port_ids[1]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not set to that of primary port",
> - test_params->slave_port_ids[1]);
> + "member port (%d) mac address not set to that of primary port",
> + test_params->member_port_ids[1]);
>
>
> /* Set explicit MAC address */
> @@ -4537,36 +4583,36 @@ test_tlb_verify_mac_assignment(void)
> "bonded port (%d) mac address not set to that of bonded port",
> test_params->bonded_port_id);
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[0]);
> + test_params->member_port_ids[0]);
> TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not as expected",
> - test_params->slave_port_ids[0]);
> + "member port (%d) mac address not as expected",
> + test_params->member_port_ids[0]);
>
> - TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
> + TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
> "Failed to get mac address (port %d)",
> - test_params->slave_port_ids[1]);
> + test_params->member_port_ids[1]);
> TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
> sizeof(read_mac_addr)),
> - "slave port (%d) mac address not set to that of bonded port",
> - test_params->slave_port_ids[1]);
> + "member port (%d) mac address not set to that of bonded port",
> + test_params->member_port_ids[1]);
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> static int
> -test_tlb_verify_slave_link_status_change_failover(void)
> +test_tlb_verify_member_link_status_change_failover(void)
> {
> - struct rte_mbuf *pkt_burst[TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
> + struct rte_mbuf *pkt_burst[TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
> struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
> struct rte_eth_stats port_stats;
>
> - uint16_t slaves[RTE_MAX_ETHPORTS];
> + uint16_t members[RTE_MAX_ETHPORTS];
>
> - int i, burst_size, slave_count, primary_port;
> + int i, burst_size, member_count, primary_port;
>
> burst_size = 21;
>
> @@ -4574,61 +4620,63 @@ test_tlb_verify_slave_link_status_change_failover(void)
>
>
>
> - /* Initialize bonded device with 4 slaves in round robin mode */
> - TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
> +	/* Initialize bonded device with 4 members in transmit load balancing mode */
> + TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
> BONDING_MODE_TLB, 0,
> - TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT, 1),
> - "Failed to initialize bonded device with slaves");
> + TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT, 1),
> + "Failed to initialize bonded device with members");
>
> - /* Verify Current Slaves Count /Active Slave Count is */
> - slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
> +	/* Verify that current member count and active member count are as expected */
> + member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
> RTE_MAX_ETHPORTS);
> - TEST_ASSERT_EQUAL(slave_count, 4,
> - "Number of slaves (%d) is not as expected (%d).\n",
> - slave_count, 4);
> + TEST_ASSERT_EQUAL(member_count, 4,
> + "Number of members (%d) is not as expected (%d).\n",
> + member_count, 4);
>
> - slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
> - slaves, RTE_MAX_ETHPORTS);
> - TEST_ASSERT_EQUAL(slave_count, (int)4,
> - "Number of slaves (%d) is not as expected (%d).\n",
> - slave_count, 4);
> + member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
> + members, RTE_MAX_ETHPORTS);
> + TEST_ASSERT_EQUAL(member_count, 4,
> + "Number of members (%d) is not as expected (%d).\n",
> + member_count, 4);
>
> primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
> - TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
> + TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[0],
> "Primary port not as expected");
>
> - /* Bring 2 slaves down and verify active slave count */
> + /* Bring 2 members down and verify active member count */
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[1], 0);
> + test_params->member_port_ids[1], 0);
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[3], 0);
> + test_params->member_port_ids[3], 0);
>
> - TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
> - test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
> - "Number of active slaves (%d) is not as expected (%d).",
> - slave_count, 2);
> + TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
> + test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 2,
> + "Number of active members (%d) is not as expected (%d).",
> + member_count, 2);
>
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[1], 1);
> + test_params->member_port_ids[1], 1);
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[3], 1);
> + test_params->member_port_ids[3], 1);
>
>
> - /* Bring primary port down, verify that active slave count is 3 and primary
> - * has changed */
> + /*
> + * Bring primary port down, verify that active member count is 3 and primary
> + * has changed.
> + */
> virtual_ethdev_simulate_link_status_interrupt(
> - test_params->slave_port_ids[0], 0);
> + test_params->member_port_ids[0], 0);
>
> - TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
> - test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 3,
> - "Number of active slaves (%d) is not as expected (%d).",
> - slave_count, 3);
> + TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
> + test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 3,
> + "Number of active members (%d) is not as expected (%d).",
> + member_count, 3);
>
> primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
> - TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[2],
> + TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[2],
> "Primary port not as expected");
> rte_delay_us(500000);
> - /* Verify that pkts are sent on new primary slave */
> + /* Verify that pkts are sent on new primary member */
> for (i = 0; i < 4; i++) {
> TEST_ASSERT_EQUAL(generate_test_burst(
> &pkt_burst[0][0], burst_size, 0, 1, 0, 0, 0), burst_size,
> @@ -4639,36 +4687,36 @@ test_tlb_verify_slave_link_status_change_failover(void)
> rte_delay_us(11000);
> }
>
> - rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
> TEST_ASSERT_EQUAL(port_stats.opackets, (int8_t)0,
> "(%d) port_stats.opackets not as expected\n",
> - test_params->slave_port_ids[0]);
> + test_params->member_port_ids[0]);
>
> - rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
> TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
> "(%d) port_stats.opackets not as expected\n",
> - test_params->slave_port_ids[1]);
> + test_params->member_port_ids[1]);
>
> - rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
> TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
> "(%d) port_stats.opackets not as expected\n",
> - test_params->slave_port_ids[2]);
> + test_params->member_port_ids[2]);
>
> - rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
> + rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
> TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
> "(%d) port_stats.opackets not as expected\n",
> - test_params->slave_port_ids[3]);
> + test_params->member_port_ids[3]);
>
>
> /* Generate packet burst for testing */
>
> - for (i = 0; i < TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT; i++) {
> + for (i = 0; i < TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT; i++) {
> if (generate_test_burst(&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0) !=
> burst_size)
> return -1;
>
> virtual_ethdev_add_mbufs_to_rx_queue(
> - test_params->slave_port_ids[i], &pkt_burst[i][0], burst_size);
> + test_params->member_port_ids[i], &pkt_burst[i][0], burst_size);
> }
>
> if (rte_eth_rx_burst(test_params->bonded_port_id, 0, rx_pkt_burst,
> @@ -4684,11 +4732,11 @@ test_tlb_verify_slave_link_status_change_failover(void)
> "(%d) port_stats.ipackets not as expected\n",
> test_params->bonded_port_id);
>
> - /* Clean up and remove slaves from bonded device */
> - return remove_slaves_and_stop_bonded_device();
> + /* Clean up and remove members from bonded device */
> + return remove_members_and_stop_bonded_device();
> }
>
> -#define TEST_ALB_SLAVE_COUNT 2
> +#define TEST_ALB_MEMBER_COUNT 2
>
> static uint8_t mac_client1[] = {0x00, 0xAA, 0x55, 0xFF, 0xCC, 1};
> static uint8_t mac_client2[] = {0x00, 0xAA, 0x55, 0xFF, 0xCC, 2};
> @@ -4710,23 +4758,23 @@ test_alb_change_mac_in_reply_sent(void)
> struct rte_ether_hdr *eth_pkt;
> struct rte_arp_hdr *arp_pkt;
>
> - int slave_idx, nb_pkts, pkt_idx;
> + int member_idx, nb_pkts, pkt_idx;
> int retval = 0;
>
> struct rte_ether_addr bond_mac, client_mac;
> - struct rte_ether_addr *slave_mac1, *slave_mac2;
> + struct rte_ether_addr *member_mac1, *member_mac2;
>
> TEST_ASSERT_SUCCESS(
> - initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
> - 0, TEST_ALB_SLAVE_COUNT, 1),
> - "Failed to initialize_bonded_device_with_slaves.");
> + initialize_bonded_device_with_members(BONDING_MODE_ALB,
> + 0, TEST_ALB_MEMBER_COUNT, 1),
> + "Failed to initialize_bonded_device_with_members.");
>
> /* Flush tx queue */
> rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
> - for (slave_idx = 0; slave_idx < test_params->bonded_slave_count;
> - slave_idx++) {
> + for (member_idx = 0; member_idx < test_params->bonded_member_count;
> + member_idx++) {
> nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
> - test_params->slave_port_ids[slave_idx], pkts_sent,
> + test_params->member_port_ids[member_idx], pkts_sent,
> MAX_PKT_BURST);
> }
>
> @@ -4782,18 +4830,18 @@ test_alb_change_mac_in_reply_sent(void)
> RTE_ARP_OP_REPLY);
> rte_eth_tx_burst(test_params->bonded_port_id, 0, &pkt, 1);
>
> - slave_mac1 =
> - rte_eth_devices[test_params->slave_port_ids[0]].data->mac_addrs;
> - slave_mac2 =
> - rte_eth_devices[test_params->slave_port_ids[1]].data->mac_addrs;
> + member_mac1 =
> + rte_eth_devices[test_params->member_port_ids[0]].data->mac_addrs;
> + member_mac2 =
> + rte_eth_devices[test_params->member_port_ids[1]].data->mac_addrs;
>
> /*
> * Checking if packets are properly distributed on bonding ports. Packets
> * 0 and 2 should be sent on port 0 and packets 1 and 3 on port 1.
> */
> - for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
> + for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
> nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
> - test_params->slave_port_ids[slave_idx], pkts_sent,
> + test_params->member_port_ids[member_idx], pkts_sent,
> MAX_PKT_BURST);
>
> for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
> @@ -4802,14 +4850,14 @@ test_alb_change_mac_in_reply_sent(void)
> arp_pkt = (struct rte_arp_hdr *)((char *)eth_pkt +
> sizeof(struct rte_ether_hdr));
>
> - if (slave_idx%2 == 0) {
> - if (!rte_is_same_ether_addr(slave_mac1,
> +			if (member_idx % 2 == 0) {
> + if (!rte_is_same_ether_addr(member_mac1,
> &arp_pkt->arp_data.arp_sha)) {
> retval = -1;
> goto test_end;
> }
> } else {
> - if (!rte_is_same_ether_addr(slave_mac2,
> + if (!rte_is_same_ether_addr(member_mac2,
> &arp_pkt->arp_data.arp_sha)) {
> retval = -1;
> goto test_end;
> @@ -4819,7 +4867,7 @@ test_alb_change_mac_in_reply_sent(void)
> }
>
> test_end:
> - retval += remove_slaves_and_stop_bonded_device();
> + retval += remove_members_and_stop_bonded_device();
> return retval;
> }
>
> @@ -4832,22 +4880,22 @@ test_alb_reply_from_client(void)
> struct rte_mbuf *pkt;
> struct rte_mbuf *pkts_sent[MAX_PKT_BURST];
>
> - int slave_idx, nb_pkts, pkt_idx, nb_pkts_sum = 0;
> + int member_idx, nb_pkts, pkt_idx, nb_pkts_sum = 0;
> int retval = 0;
>
> struct rte_ether_addr bond_mac, client_mac;
> - struct rte_ether_addr *slave_mac1, *slave_mac2;
> + struct rte_ether_addr *member_mac1, *member_mac2;
>
> TEST_ASSERT_SUCCESS(
> - initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
> - 0, TEST_ALB_SLAVE_COUNT, 1),
> - "Failed to initialize_bonded_device_with_slaves.");
> + initialize_bonded_device_with_members(BONDING_MODE_ALB,
> + 0, TEST_ALB_MEMBER_COUNT, 1),
> + "Failed to initialize_bonded_device_with_members.");
>
> /* Flush tx queue */
> rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
> - for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
> + for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
> nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
> - test_params->slave_port_ids[slave_idx], pkts_sent,
> + test_params->member_port_ids[member_idx], pkts_sent,
> MAX_PKT_BURST);
> }
>
> @@ -4868,7 +4916,7 @@ test_alb_reply_from_client(void)
> sizeof(struct rte_ether_hdr));
> initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client1, ip_host,
> RTE_ARP_OP_REPLY);
> - virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
> + virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
> 1);
>
> pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
> @@ -4880,7 +4928,7 @@ test_alb_reply_from_client(void)
> sizeof(struct rte_ether_hdr));
> initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client2, ip_host,
> RTE_ARP_OP_REPLY);
> - virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
> + virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
> 1);
>
> pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
> @@ -4892,7 +4940,7 @@ test_alb_reply_from_client(void)
> sizeof(struct rte_ether_hdr));
> initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client3, ip_host,
> RTE_ARP_OP_REPLY);
> - virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
> + virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
> 1);
>
> pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
> @@ -4904,7 +4952,7 @@ test_alb_reply_from_client(void)
> sizeof(struct rte_ether_hdr));
> initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client4, ip_host,
> RTE_ARP_OP_REPLY);
> - virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
> + virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
> 1);
>
> /*
> @@ -4914,15 +4962,15 @@ test_alb_reply_from_client(void)
> rte_eth_rx_burst(test_params->bonded_port_id, 0, pkts_sent, MAX_PKT_BURST);
> rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
>
> - slave_mac1 = rte_eth_devices[test_params->slave_port_ids[0]].data->mac_addrs;
> - slave_mac2 = rte_eth_devices[test_params->slave_port_ids[1]].data->mac_addrs;
> + member_mac1 = rte_eth_devices[test_params->member_port_ids[0]].data->mac_addrs;
> + member_mac2 = rte_eth_devices[test_params->member_port_ids[1]].data->mac_addrs;
>
> /*
> - * Checking if update ARP packets were properly send on slave ports.
> + * Checking if update ARP packets were properly sent on member ports.
> */
> - for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
> + for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
> nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
> - test_params->slave_port_ids[slave_idx], pkts_sent, MAX_PKT_BURST);
> + test_params->member_port_ids[member_idx], pkts_sent, MAX_PKT_BURST);
> nb_pkts_sum += nb_pkts;
>
> for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
> @@ -4931,14 +4979,14 @@ test_alb_reply_from_client(void)
> arp_pkt = (struct rte_arp_hdr *)((char *)eth_pkt +
> sizeof(struct rte_ether_hdr));
>
> - if (slave_idx%2 == 0) {
> - if (!rte_is_same_ether_addr(slave_mac1,
> + if (member_idx%2 == 0) {
> + if (!rte_is_same_ether_addr(member_mac1,
> &arp_pkt->arp_data.arp_sha)) {
> retval = -1;
> goto test_end;
> }
> } else {
> - if (!rte_is_same_ether_addr(slave_mac2,
> + if (!rte_is_same_ether_addr(member_mac2,
> &arp_pkt->arp_data.arp_sha)) {
> retval = -1;
> goto test_end;
> @@ -4954,7 +5002,7 @@ test_alb_reply_from_client(void)
> }
>
> test_end:
> - retval += remove_slaves_and_stop_bonded_device();
> + retval += remove_members_and_stop_bonded_device();
> return retval;
> }
>
> @@ -4968,21 +5016,21 @@ test_alb_receive_vlan_reply(void)
> struct rte_mbuf *pkt;
> struct rte_mbuf *pkts_sent[MAX_PKT_BURST];
>
> - int slave_idx, nb_pkts, pkt_idx;
> + int member_idx, nb_pkts, pkt_idx;
> int retval = 0;
>
> struct rte_ether_addr bond_mac, client_mac;
>
> TEST_ASSERT_SUCCESS(
> - initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
> - 0, TEST_ALB_SLAVE_COUNT, 1),
> - "Failed to initialize_bonded_device_with_slaves.");
> + initialize_bonded_device_with_members(BONDING_MODE_ALB,
> + 0, TEST_ALB_MEMBER_COUNT, 1),
> + "Failed to initialize_bonded_device_with_members.");
>
> /* Flush tx queue */
> rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
> - for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
> + for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
> nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
> - test_params->slave_port_ids[slave_idx], pkts_sent,
> + test_params->member_port_ids[member_idx], pkts_sent,
> MAX_PKT_BURST);
> }
>
> @@ -5007,7 +5055,7 @@ test_alb_receive_vlan_reply(void)
> arp_pkt = (struct rte_arp_hdr *)((char *)(vlan_pkt + 1));
> initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client1, ip_host,
> RTE_ARP_OP_REPLY);
> - virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
> + virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
> 1);
>
> rte_eth_rx_burst(test_params->bonded_port_id, 0, pkts_sent, MAX_PKT_BURST);
> @@ -5016,9 +5064,9 @@ test_alb_receive_vlan_reply(void)
> /*
> * Checking if VLAN headers in generated ARP Update packet are correct.
> */
> - for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
> + for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
> nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
> - test_params->slave_port_ids[slave_idx], pkts_sent,
> + test_params->member_port_ids[member_idx], pkts_sent,
> MAX_PKT_BURST);
>
> for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
> @@ -5049,7 +5097,7 @@ test_alb_receive_vlan_reply(void)
> }
>
> test_end:
> - retval += remove_slaves_and_stop_bonded_device();
> + retval += remove_members_and_stop_bonded_device();
> return retval;
> }
>
> @@ -5062,9 +5110,9 @@ test_alb_ipv4_tx(void)
> retval = 0;
>
> TEST_ASSERT_SUCCESS(
> - initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
> - 0, TEST_ALB_SLAVE_COUNT, 1),
> - "Failed to initialize_bonded_device_with_slaves.");
> + initialize_bonded_device_with_members(BONDING_MODE_ALB,
> + 0, TEST_ALB_MEMBER_COUNT, 1),
> + "Failed to initialize_bonded_device_with_members.");
>
> burst_size = 32;
>
> @@ -5085,7 +5133,7 @@ test_alb_ipv4_tx(void)
> }
>
> test_end:
> - retval += remove_slaves_and_stop_bonded_device();
> + retval += remove_members_and_stop_bonded_device();
> return retval;
> }
>
> @@ -5096,34 +5144,34 @@ static struct unit_test_suite link_bonding_test_suite = {
> .unit_test_cases = {
> TEST_CASE(test_create_bonded_device),
> TEST_CASE(test_create_bonded_device_with_invalid_params),
> - TEST_CASE(test_add_slave_to_bonded_device),
> - TEST_CASE(test_add_slave_to_invalid_bonded_device),
> - TEST_CASE(test_remove_slave_from_bonded_device),
> - TEST_CASE(test_remove_slave_from_invalid_bonded_device),
> - TEST_CASE(test_get_slaves_from_bonded_device),
> - TEST_CASE(test_add_already_bonded_slave_to_bonded_device),
> - TEST_CASE(test_add_remove_multiple_slaves_to_from_bonded_device),
> + TEST_CASE(test_add_member_to_bonded_device),
> + TEST_CASE(test_add_member_to_invalid_bonded_device),
> + TEST_CASE(test_remove_member_from_bonded_device),
> + TEST_CASE(test_remove_member_from_invalid_bonded_device),
> + TEST_CASE(test_get_members_from_bonded_device),
> + TEST_CASE(test_add_already_bonded_member_to_bonded_device),
> + TEST_CASE(test_add_remove_multiple_members_to_from_bonded_device),
> TEST_CASE(test_start_bonded_device),
> TEST_CASE(test_stop_bonded_device),
> TEST_CASE(test_set_bonding_mode),
> - TEST_CASE(test_set_primary_slave),
> + TEST_CASE(test_set_primary_member),
> TEST_CASE(test_set_explicit_bonded_mac),
> TEST_CASE(test_set_bonded_port_initialization_mac_assignment),
> TEST_CASE(test_status_interrupt),
> - TEST_CASE(test_adding_slave_after_bonded_device_started),
> + TEST_CASE(test_adding_member_after_bonded_device_started),
> TEST_CASE(test_roundrobin_tx_burst),
> - TEST_CASE(test_roundrobin_tx_burst_slave_tx_fail),
> - TEST_CASE(test_roundrobin_rx_burst_on_single_slave),
> - TEST_CASE(test_roundrobin_rx_burst_on_multiple_slaves),
> + TEST_CASE(test_roundrobin_tx_burst_member_tx_fail),
> + TEST_CASE(test_roundrobin_rx_burst_on_single_member),
> + TEST_CASE(test_roundrobin_rx_burst_on_multiple_members),
> TEST_CASE(test_roundrobin_verify_promiscuous_enable_disable),
> TEST_CASE(test_roundrobin_verify_mac_assignment),
> - TEST_CASE(test_roundrobin_verify_slave_link_status_change_behaviour),
> - TEST_CASE(test_roundrobin_verfiy_polling_slave_link_status_change),
> + TEST_CASE(test_roundrobin_verify_member_link_status_change_behaviour),
> + TEST_CASE(test_roundrobin_verify_polling_member_link_status_change),
> TEST_CASE(test_activebackup_tx_burst),
> TEST_CASE(test_activebackup_rx_burst),
> TEST_CASE(test_activebackup_verify_promiscuous_enable_disable),
> TEST_CASE(test_activebackup_verify_mac_assignment),
> - TEST_CASE(test_activebackup_verify_slave_link_status_change_failover),
> + TEST_CASE(test_activebackup_verify_member_link_status_change_failover),
> TEST_CASE(test_balance_xmit_policy_configuration),
> TEST_CASE(test_balance_l2_tx_burst),
> TEST_CASE(test_balance_l23_tx_burst_ipv4_toggle_ip_addr),
> @@ -5137,26 +5185,26 @@ static struct unit_test_suite link_bonding_test_suite = {
> TEST_CASE(test_balance_l34_tx_burst_ipv6_toggle_ip_addr),
> TEST_CASE(test_balance_l34_tx_burst_vlan_ipv6_toggle_ip_addr),
> TEST_CASE(test_balance_l34_tx_burst_ipv6_toggle_udp_port),
> - TEST_CASE(test_balance_tx_burst_slave_tx_fail),
> + TEST_CASE(test_balance_tx_burst_member_tx_fail),
> TEST_CASE(test_balance_rx_burst),
> TEST_CASE(test_balance_verify_promiscuous_enable_disable),
> TEST_CASE(test_balance_verify_mac_assignment),
> - TEST_CASE(test_balance_verify_slave_link_status_change_behaviour),
> + TEST_CASE(test_balance_verify_member_link_status_change_behaviour),
> TEST_CASE(test_tlb_tx_burst),
> TEST_CASE(test_tlb_rx_burst),
> TEST_CASE(test_tlb_verify_mac_assignment),
> TEST_CASE(test_tlb_verify_promiscuous_enable_disable),
> - TEST_CASE(test_tlb_verify_slave_link_status_change_failover),
> + TEST_CASE(test_tlb_verify_member_link_status_change_failover),
> TEST_CASE(test_alb_change_mac_in_reply_sent),
> TEST_CASE(test_alb_reply_from_client),
> TEST_CASE(test_alb_receive_vlan_reply),
> TEST_CASE(test_alb_ipv4_tx),
> TEST_CASE(test_broadcast_tx_burst),
> - TEST_CASE(test_broadcast_tx_burst_slave_tx_fail),
> + TEST_CASE(test_broadcast_tx_burst_member_tx_fail),
> TEST_CASE(test_broadcast_rx_burst),
> TEST_CASE(test_broadcast_verify_promiscuous_enable_disable),
> TEST_CASE(test_broadcast_verify_mac_assignment),
> - TEST_CASE(test_broadcast_verify_slave_link_status_change_behaviour),
> + TEST_CASE(test_broadcast_verify_member_link_status_change_behaviour),
> TEST_CASE(test_reconfigure_bonded_device),
> TEST_CASE(test_close_bonded_device),
>
> diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
> index 21c512c94b..2de907e7f3 100644
> --- a/app/test/test_link_bonding_mode4.c
> +++ b/app/test/test_link_bonding_mode4.c
> @@ -31,7 +31,7 @@
>
> #include "test.h"
>
> -#define SLAVE_COUNT (4)
> +#define MEMBER_COUNT (4)
>
> #define RX_RING_SIZE 1024
> #define TX_RING_SIZE 1024
> @@ -46,15 +46,15 @@
>
> #define BONDED_DEV_NAME ("net_bonding_m4_bond_dev")
>
> -#define SLAVE_DEV_NAME_FMT ("net_virt_%d")
> -#define SLAVE_RX_QUEUE_FMT ("net_virt_%d_rx")
> -#define SLAVE_TX_QUEUE_FMT ("net_virt_%d_tx")
> +#define MEMBER_DEV_NAME_FMT ("net_virt_%d")
> +#define MEMBER_RX_QUEUE_FMT ("net_virt_%d_rx")
> +#define MEMBER_TX_QUEUE_FMT ("net_virt_%d_tx")
>
> #define INVALID_SOCKET_ID (-1)
> #define INVALID_PORT_ID (0xFF)
> #define INVALID_BONDING_MODE (-1)
>
> -static const struct rte_ether_addr slave_mac_default = {
> +static const struct rte_ether_addr member_mac_default = {
> { 0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 }
> };
>
> @@ -70,7 +70,7 @@ static const struct rte_ether_addr slow_protocol_mac_addr = {
> { 0x01, 0x80, 0xC2, 0x00, 0x00, 0x02 }
> };
>
> -struct slave_conf {
> +struct member_conf {
> struct rte_ring *rx_queue;
> struct rte_ring *tx_queue;
> uint16_t port_id;
> @@ -86,21 +86,21 @@ struct ether_vlan_hdr {
>
> struct link_bonding_unittest_params {
> uint8_t bonded_port_id;
> - struct slave_conf slave_ports[SLAVE_COUNT];
> + struct member_conf member_ports[MEMBER_COUNT];
>
> struct rte_mempool *mbuf_pool;
> };
>
> -#define TEST_DEFAULT_SLAVE_COUNT RTE_DIM(test_params.slave_ports)
> -#define TEST_RX_SLAVE_COUT TEST_DEFAULT_SLAVE_COUNT
> -#define TEST_TX_SLAVE_COUNT TEST_DEFAULT_SLAVE_COUNT
> -#define TEST_MARKER_SLAVE_COUT TEST_DEFAULT_SLAVE_COUNT
> -#define TEST_EXPIRED_SLAVE_COUNT TEST_DEFAULT_SLAVE_COUNT
> -#define TEST_PROMISC_SLAVE_COUNT TEST_DEFAULT_SLAVE_COUNT
> +#define TEST_DEFAULT_MEMBER_COUNT RTE_DIM(test_params.member_ports)
> +#define TEST_RX_MEMBER_COUT TEST_DEFAULT_MEMBER_COUNT
> +#define TEST_TX_MEMBER_COUNT TEST_DEFAULT_MEMBER_COUNT
> +#define TEST_MARKER_MEMBER_COUT TEST_DEFAULT_MEMBER_COUNT
> +#define TEST_EXPIRED_MEMBER_COUNT TEST_DEFAULT_MEMBER_COUNT
> +#define TEST_PROMISC_MEMBER_COUNT TEST_DEFAULT_MEMBER_COUNT
>
> static struct link_bonding_unittest_params test_params = {
> .bonded_port_id = INVALID_PORT_ID,
> - .slave_ports = { [0 ... SLAVE_COUNT - 1] = { .port_id = INVALID_PORT_ID} },
> + .member_ports = { [0 ... MEMBER_COUNT - 1] = { .port_id = INVALID_PORT_ID} },
>
> .mbuf_pool = NULL,
> };
> @@ -120,58 +120,58 @@ static uint8_t lacpdu_rx_count[RTE_MAX_ETHPORTS] = {0, };
> #define FOR_EACH(_i, _item, _array, _size) \
> for (_i = 0, _item = &_array[0]; _i < _size && (_item = &_array[_i]); _i++)
>
> -/* Macro for iterating over every port that can be used as a slave
> +/* Macro for iterating over every port that can be used as a member
> * in this test.
> - * _i variable used as an index in test_params->slave_ports
> - * _slave pointer to &test_params->slave_ports[_idx]
> + * _i variable used as an index in test_params->member_ports
> + * _member pointer to &test_params->member_ports[_idx]
> */
> #define FOR_EACH_PORT(_i, _port) \
> - FOR_EACH(_i, _port, test_params.slave_ports, \
> - RTE_DIM(test_params.slave_ports))
> + FOR_EACH(_i, _port, test_params.member_ports, \
> + RTE_DIM(test_params.member_ports))
>
> -/* Macro for iterating over every port that can be used as a slave
> +/* Macro for iterating over every port that can be used as a member
> * in this test and satisfy given condition.
> *
> - * _i variable used as an index in test_params->slave_ports
> - * _slave pointer to &test_params->slave_ports[_idx]
> + * _i variable used as an index in test_params->member_ports
> + * _member pointer to &test_params->member_ports[_idx]
> * _condition condition that need to be checked
> */
> #define FOR_EACH_PORT_IF(_i, _port, _condition) FOR_EACH_PORT((_i), (_port)) \
> if (!!(_condition))
>
> -/* Macro for iterating over every port that is currently a slave of a bonded
> +/* Macro for iterating over every port that is currently a member of a bonded
> * device.
> - * _i variable used as an index in test_params->slave_ports
> - * _slave pointer to &test_params->slave_ports[_idx]
> + * _i variable used as an index in test_params->member_ports
> + * _member pointer to &test_params->member_ports[_idx]
> * */
> -#define FOR_EACH_SLAVE(_i, _slave) \
> - FOR_EACH_PORT_IF(_i, _slave, (_slave)->bonded != 0)
> +#define FOR_EACH_MEMBER(_i, _member) \
> + FOR_EACH_PORT_IF(_i, _member, (_member)->bonded != 0)
>
> /*
> - * Returns packets from slaves TX queue.
> - * slave slave port
> + * Returns packets from member's TX queue.
> + * member member port
> * buffer for packets
> * size size of buffer
> * return number of packets or negative error number
> */
> static int
> -slave_get_pkts(struct slave_conf *slave, struct rte_mbuf **buf, uint16_t size)
> +member_get_pkts(struct member_conf *member, struct rte_mbuf **buf, uint16_t size)
> {
> - return rte_ring_dequeue_burst(slave->tx_queue, (void **)buf,
> + return rte_ring_dequeue_burst(member->tx_queue, (void **)buf,
> size, NULL);
> }
>
> /*
> - * Injects given packets into slaves RX queue.
> - * slave slave port
> + * Injects given packets into member's RX queue.
> + * member member port
> * buffer for packets
> * size number of packets to be injected
> * return number of queued packets or negative error number
> */
> static int
> -slave_put_pkts(struct slave_conf *slave, struct rte_mbuf **buf, uint16_t size)
> +member_put_pkts(struct member_conf *member, struct rte_mbuf **buf, uint16_t size)
> {
> - return rte_ring_enqueue_burst(slave->rx_queue, (void **)buf,
> + return rte_ring_enqueue_burst(member->rx_queue, (void **)buf,
> size, NULL);
> }
>
> @@ -219,79 +219,79 @@ configure_ethdev(uint16_t port_id, uint8_t start)
> }
>
> static int
> -add_slave(struct slave_conf *slave, uint8_t start)
> +add_member(struct member_conf *member, uint8_t start)
> {
> struct rte_ether_addr addr, addr_check;
> int retval;
>
> /* Some sanity check */
> - RTE_VERIFY(test_params.slave_ports <= slave &&
> - slave - test_params.slave_ports < (int)RTE_DIM(test_params.slave_ports));
> - RTE_VERIFY(slave->bonded == 0);
> - RTE_VERIFY(slave->port_id != INVALID_PORT_ID);
> + RTE_VERIFY(test_params.member_ports <= member &&
> + member - test_params.member_ports < (int)RTE_DIM(test_params.member_ports));
> + RTE_VERIFY(member->bonded == 0);
> + RTE_VERIFY(member->port_id != INVALID_PORT_ID);
>
> - rte_ether_addr_copy(&slave_mac_default, &addr);
> - addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
> + rte_ether_addr_copy(&member_mac_default, &addr);
> + addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = member->port_id;
>
> - rte_eth_dev_mac_addr_remove(slave->port_id, &addr);
> + rte_eth_dev_mac_addr_remove(member->port_id, &addr);
>
> - TEST_ASSERT_SUCCESS(rte_eth_dev_mac_addr_add(slave->port_id, &addr, 0),
> - "Failed to set slave MAC address");
> + TEST_ASSERT_SUCCESS(rte_eth_dev_mac_addr_add(member->port_id, &addr, 0),
> + "Failed to set member MAC address");
>
> - TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bonded_port_id,
> - slave->port_id),
> - "Failed to add slave (idx=%u, id=%u) to bonding (id=%u)",
> - (uint8_t)(slave - test_params.slave_ports), slave->port_id,
> + TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params.bonded_port_id,
> + member->port_id),
> + "Failed to add member (idx=%u, id=%u) to bonding (id=%u)",
> + (uint8_t)(member - test_params.member_ports), member->port_id,
> test_params.bonded_port_id);
>
> - slave->bonded = 1;
> + member->bonded = 1;
> if (start) {
> - TEST_ASSERT_SUCCESS(rte_eth_dev_start(slave->port_id),
> - "Failed to start slave %u", slave->port_id);
> + TEST_ASSERT_SUCCESS(rte_eth_dev_start(member->port_id),
> + "Failed to start member %u", member->port_id);
> }
>
> - retval = rte_eth_macaddr_get(slave->port_id, &addr_check);
> - TEST_ASSERT_SUCCESS(retval, "Failed to get slave mac address: %s",
> + retval = rte_eth_macaddr_get(member->port_id, &addr_check);
> + TEST_ASSERT_SUCCESS(retval, "Failed to get member mac address: %s",
> strerror(-retval));
> TEST_ASSERT_EQUAL(rte_is_same_ether_addr(&addr, &addr_check), 1,
> - "Slave MAC address is not as expected");
> + "Member MAC address is not as expected");
>
> - RTE_VERIFY(slave->lacp_parnter_state == 0);
> + RTE_VERIFY(member->lacp_parnter_state == 0);
> return 0;
> }
>
> static int
> -remove_slave(struct slave_conf *slave)
> +remove_member(struct member_conf *member)
> {
> - ptrdiff_t slave_idx = slave - test_params.slave_ports;
> + ptrdiff_t member_idx = member - test_params.member_ports;
>
> - RTE_VERIFY(test_params.slave_ports <= slave &&
> - slave_idx < (ptrdiff_t)RTE_DIM(test_params.slave_ports));
> + RTE_VERIFY(test_params.member_ports <= member &&
> + member_idx < (ptrdiff_t)RTE_DIM(test_params.member_ports));
>
> - RTE_VERIFY(slave->bonded == 1);
> - RTE_VERIFY(slave->port_id != INVALID_PORT_ID);
> + RTE_VERIFY(member->bonded == 1);
> + RTE_VERIFY(member->port_id != INVALID_PORT_ID);
>
> - TEST_ASSERT_EQUAL(rte_ring_count(slave->rx_queue), 0,
> - "Slave %u tx queue not empty while removing from bonding.",
> - slave->port_id);
> + TEST_ASSERT_EQUAL(rte_ring_count(member->rx_queue), 0,
> + "Member %u tx queue not empty while removing from bonding.",
> + member->port_id);
>
> - TEST_ASSERT_EQUAL(rte_ring_count(slave->rx_queue), 0,
> - "Slave %u tx queue not empty while removing from bonding.",
> - slave->port_id);
> + TEST_ASSERT_EQUAL(rte_ring_count(member->rx_queue), 0,
> + "Member %u tx queue not empty while removing from bonding.",
> + member->port_id);
>
> - TEST_ASSERT_EQUAL(rte_eth_bond_slave_remove(test_params.bonded_port_id,
> - slave->port_id), 0,
> - "Failed to remove slave (idx=%u, id=%u) from bonding (id=%u)",
> - (uint8_t)slave_idx, slave->port_id,
> + TEST_ASSERT_EQUAL(rte_eth_bond_member_remove(test_params.bonded_port_id,
> + member->port_id), 0,
> + "Failed to remove member (idx=%u, id=%u) from bonding (id=%u)",
> + (uint8_t)member_idx, member->port_id,
> test_params.bonded_port_id);
>
> - slave->bonded = 0;
> - slave->lacp_parnter_state = 0;
> + member->bonded = 0;
> + member->lacp_parnter_state = 0;
> return 0;
> }
>
> static void
> -lacp_recv_cb(uint16_t slave_id, struct rte_mbuf *lacp_pkt)
> +lacp_recv_cb(uint16_t member_id, struct rte_mbuf *lacp_pkt)
> {
> struct rte_ether_hdr *hdr;
> struct slow_protocol_frame *slow_hdr;
> @@ -304,22 +304,22 @@ lacp_recv_cb(uint16_t slave_id, struct rte_mbuf *lacp_pkt)
> slow_hdr = rte_pktmbuf_mtod(lacp_pkt, struct slow_protocol_frame *);
> RTE_VERIFY(slow_hdr->slow_protocol.subtype == SLOW_SUBTYPE_LACP);
>
> - lacpdu_rx_count[slave_id]++;
> + lacpdu_rx_count[member_id]++;
> rte_pktmbuf_free(lacp_pkt);
> }
>
> static int
> -initialize_bonded_device_with_slaves(uint16_t slave_count, uint8_t external_sm)
> +initialize_bonded_device_with_members(uint16_t member_count, uint8_t external_sm)
> {
> uint8_t i;
> int ret;
>
> RTE_VERIFY(test_params.bonded_port_id != INVALID_PORT_ID);
>
> - for (i = 0; i < slave_count; i++) {
> - TEST_ASSERT_SUCCESS(add_slave(&test_params.slave_ports[i], 1),
> + for (i = 0; i < member_count; i++) {
> + TEST_ASSERT_SUCCESS(add_member(&test_params.member_ports[i], 1),
> "Failed to add port %u to bonded device.\n",
> - test_params.slave_ports[i].port_id);
> + test_params.member_ports[i].port_id);
> }
>
> /* Reset mode 4 configuration */
> @@ -345,34 +345,34 @@ initialize_bonded_device_with_slaves(uint16_t slave_count, uint8_t external_sm)
> }
>
> static int
> -remove_slaves_and_stop_bonded_device(void)
> +remove_members_and_stop_bonded_device(void)
> {
> - struct slave_conf *slave;
> + struct member_conf *member;
> int retval;
> - uint16_t slaves[RTE_MAX_ETHPORTS];
> + uint16_t members[RTE_MAX_ETHPORTS];
> uint16_t i;
>
> TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params.bonded_port_id),
> "Failed to stop bonded port %u",
> test_params.bonded_port_id);
>
> - FOR_EACH_SLAVE(i, slave)
> - remove_slave(slave);
> + FOR_EACH_MEMBER(i, member)
> + remove_member(member);
>
> - retval = rte_eth_bond_slaves_get(test_params.bonded_port_id, slaves,
> - RTE_DIM(slaves));
> + retval = rte_eth_bond_members_get(test_params.bonded_port_id, members,
> + RTE_DIM(members));
>
> TEST_ASSERT_EQUAL(retval, 0,
> - "Expected bonded device %u have 0 slaves but returned %d.",
> + "Expected bonded device %u to have 0 members but returned %d.",
> test_params.bonded_port_id, retval);
>
> - FOR_EACH_PORT(i, slave) {
> - TEST_ASSERT_SUCCESS(rte_eth_dev_stop(slave->port_id),
> + FOR_EACH_PORT(i, member) {
> + TEST_ASSERT_SUCCESS(rte_eth_dev_stop(member->port_id),
> "Failed to stop bonded port %u",
> - slave->port_id);
> + member->port_id);
>
> - TEST_ASSERT(slave->bonded == 0,
> - "Port id=%u is still marked as enslaved.", slave->port_id);
> + TEST_ASSERT(member->bonded == 0,
> + "Port id=%u is still marked as a member.", member->port_id);
> }
>
> return TEST_SUCCESS;
> @@ -383,7 +383,7 @@ test_setup(void)
> {
> int retval, nb_mbuf_per_pool;
> char name[RTE_ETH_NAME_MAX_LEN];
> - struct slave_conf *port;
> + struct member_conf *port;
> const uint8_t socket_id = rte_socket_id();
> uint16_t i;
>
> @@ -400,10 +400,10 @@ test_setup(void)
>
> /* Create / initialize ring eth devs. */
> FOR_EACH_PORT(i, port) {
> - port = &test_params.slave_ports[i];
> + port = &test_params.member_ports[i];
>
> if (port->rx_queue == NULL) {
> - retval = snprintf(name, RTE_DIM(name), SLAVE_RX_QUEUE_FMT, i);
> + retval = snprintf(name, RTE_DIM(name), MEMBER_RX_QUEUE_FMT, i);
> TEST_ASSERT(retval <= (int)RTE_DIM(name) - 1, "Name too long");
> port->rx_queue = rte_ring_create(name, RX_RING_SIZE, socket_id, 0);
> TEST_ASSERT(port->rx_queue != NULL,
> @@ -412,7 +412,7 @@ test_setup(void)
> }
>
> if (port->tx_queue == NULL) {
> - retval = snprintf(name, RTE_DIM(name), SLAVE_TX_QUEUE_FMT, i);
> + retval = snprintf(name, RTE_DIM(name), MEMBER_TX_QUEUE_FMT, i);
> TEST_ASSERT(retval <= (int)RTE_DIM(name) - 1, "Name too long");
> port->tx_queue = rte_ring_create(name, TX_RING_SIZE, socket_id, 0);
> TEST_ASSERT_NOT_NULL(port->tx_queue,
> @@ -421,7 +421,7 @@ test_setup(void)
> }
>
> if (port->port_id == INVALID_PORT_ID) {
> - retval = snprintf(name, RTE_DIM(name), SLAVE_DEV_NAME_FMT, i);
> + retval = snprintf(name, RTE_DIM(name), MEMBER_DEV_NAME_FMT, i);
> TEST_ASSERT(retval < (int)RTE_DIM(name) - 1, "Name too long");
> retval = rte_eth_from_rings(name, &port->rx_queue, 1,
> &port->tx_queue, 1, socket_id);
> @@ -460,7 +460,7 @@ test_setup(void)
> static void
> testsuite_teardown(void)
> {
> - struct slave_conf *port;
> + struct member_conf *port;
> uint8_t i;
>
> /* Only stop ports.
> @@ -480,7 +480,7 @@ testsuite_teardown(void)
> * frame but not LACP
> */
> static int
> -make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
> +make_lacp_reply(struct member_conf *member, struct rte_mbuf *pkt)
> {
> struct rte_ether_hdr *hdr;
> struct slow_protocol_frame *slow_hdr;
> @@ -501,11 +501,11 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
> /* Change source address to partner address */
> rte_ether_addr_copy(&parnter_mac_default, &slow_hdr->eth_hdr.src_addr);
> slow_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
> - slave->port_id;
> + member->port_id;
>
> lacp = (struct lacpdu *) &slow_hdr->slow_protocol;
> /* Save last received state */
> - slave->lacp_parnter_state = lacp->actor.state;
> + member->lacp_parnter_state = lacp->actor.state;
> /* Change it into LACP replay by matching parameters. */
> memcpy(&lacp->partner.port_params, &lacp->actor.port_params,
> sizeof(struct port_params));
> @@ -523,27 +523,27 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
> }
>
> /*
> - * Reads packets from given slave, search for LACP packet and reply them.
> + * Reads packets from the given member, searches for LACP packets and replies to them.
> *
> - * Receives burst of packets from slave. Looks for LACP packet. Drops
> + * Receives a burst of packets from the member. Looks for LACP packets. Drops
> * all other packets. Prepares response LACP and sends it back.
> *
> * return number of LACP received and replied, -1 on error.
> */
> static int
> -bond_handshake_reply(struct slave_conf *slave)
> +bond_handshake_reply(struct member_conf *member)
> {
> int retval;
> struct rte_mbuf *rx_buf[MAX_PKT_BURST];
> struct rte_mbuf *lacp_tx_buf[MAX_PKT_BURST];
> uint16_t lacp_tx_buf_cnt = 0, i;
>
> - retval = slave_get_pkts(slave, rx_buf, RTE_DIM(rx_buf));
> - TEST_ASSERT(retval >= 0, "Getting slave %u packets failed.",
> - slave->port_id);
> + retval = member_get_pkts(member, rx_buf, RTE_DIM(rx_buf));
> + TEST_ASSERT(retval >= 0, "Getting member %u packets failed.",
> + member->port_id);
>
> for (i = 0; i < (uint16_t)retval; i++) {
> - if (make_lacp_reply(slave, rx_buf[i]) == 0) {
> + if (make_lacp_reply(member, rx_buf[i]) == 0) {
> /* reply with actor's LACP */
> lacp_tx_buf[lacp_tx_buf_cnt++] = rx_buf[i];
> } else
> @@ -553,7 +553,7 @@ bond_handshake_reply(struct slave_conf *slave)
> if (lacp_tx_buf_cnt == 0)
> return 0;
>
> - retval = slave_put_pkts(slave, lacp_tx_buf, lacp_tx_buf_cnt);
> + retval = member_put_pkts(member, lacp_tx_buf, lacp_tx_buf_cnt);
> if (retval <= lacp_tx_buf_cnt) {
> /* retval might be negative */
> for (i = RTE_MAX(0, retval); retval < lacp_tx_buf_cnt; retval++)
> @@ -561,24 +561,24 @@ bond_handshake_reply(struct slave_conf *slave)
> }
>
> TEST_ASSERT_EQUAL(retval, lacp_tx_buf_cnt,
> - "Failed to equeue lacp packets into slave %u tx queue.",
> - slave->port_id);
> + "Failed to enqueue LACP packets into member %u tx queue.",
> + member->port_id);
>
> return lacp_tx_buf_cnt;
> }
>
> /*
> - * Function check if given slave tx queue contains packets that make mode 4
> - * handshake complete. It will drain slave queue.
> + * Function checks if the given member tx queue contains packets that make mode 4
> + * handshake complete. It will drain the member queue.
> * return 0 if handshake not completed, 1 if handshake was complete,
> */
> static int
> -bond_handshake_done(struct slave_conf *slave)
> +bond_handshake_done(struct member_conf *member)
> {
> const uint8_t expected_state = STATE_LACP_ACTIVE | STATE_SYNCHRONIZATION |
> STATE_AGGREGATION | STATE_COLLECTING | STATE_DISTRIBUTING;
>
> - return slave->lacp_parnter_state == expected_state;
> + return member->lacp_parnter_state == expected_state;
> }
>
> static unsigned
> @@ -603,32 +603,32 @@ bond_get_update_timeout_ms(void)
> static int
> bond_handshake(void)
> {
> - struct slave_conf *slave;
> + struct member_conf *member;
> struct rte_mbuf *buf[MAX_PKT_BURST];
> uint16_t nb_pkts;
> - uint8_t all_slaves_done, i, j;
> - uint8_t status[RTE_DIM(test_params.slave_ports)] = { 0 };
> + uint8_t all_members_done, i, j;
> + uint8_t status[RTE_DIM(test_params.member_ports)] = { 0 };
> const unsigned delay = bond_get_update_timeout_ms();
>
> /* Exchange LACP frames */
> - all_slaves_done = 0;
> - for (i = 0; i < 30 && all_slaves_done == 0; ++i) {
> + all_members_done = 0;
> + for (i = 0; i < 30 && all_members_done == 0; ++i) {
> rte_delay_ms(delay);
>
> - all_slaves_done = 1;
> - FOR_EACH_SLAVE(j, slave) {
> - /* If response already send, skip slave */
> + all_members_done = 1;
> + FOR_EACH_MEMBER(j, member) {
> + /* If response already sent, skip member */
> if (status[j] != 0)
> continue;
>
> - if (bond_handshake_reply(slave) < 0) {
> - all_slaves_done = 0;
> + if (bond_handshake_reply(member) < 0) {
> + all_members_done = 0;
> break;
> }
>
> - status[j] = bond_handshake_done(slave);
> + status[j] = bond_handshake_done(member);
> if (status[j] == 0)
> - all_slaves_done = 0;
> + all_members_done = 0;
> }
>
> nb_pkts = bond_tx(NULL, 0);
> @@ -639,26 +639,26 @@ bond_handshake(void)
> TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets received unexpectedly");
> }
> /* If response didn't send - report failure */
> - TEST_ASSERT_EQUAL(all_slaves_done, 1, "Bond handshake failed\n");
> + TEST_ASSERT_EQUAL(all_members_done, 1, "Bond handshake failed\n");
>
> /* If flags doesn't match - report failure */
> - return all_slaves_done == 1 ? TEST_SUCCESS : TEST_FAILED;
> + return all_members_done == 1 ? TEST_SUCCESS : TEST_FAILED;
> }
>
> -#define TEST_LACP_SLAVE_COUT RTE_DIM(test_params.slave_ports)
> +#define TEST_LACP_MEMBER_COUT RTE_DIM(test_params.member_ports)
> static int
> test_mode4_lacp(void)
> {
> int retval;
>
> - retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
> + retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
> TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
>
> /* Test LACP handshake function */
> retval = bond_handshake();
> TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
>
> - retval = remove_slaves_and_stop_bonded_device();
> + retval = remove_members_and_stop_bonded_device();
> TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
>
> return TEST_SUCCESS;
> @@ -668,7 +668,7 @@ test_mode4_agg_mode_selection(void)
> {
> int retval;
> /* Test and verify for Stable mode */
> - retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
> + retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
> TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
>
>
> @@ -684,12 +684,12 @@ test_mode4_agg_mode_selection(void)
> TEST_ASSERT_EQUAL(retval, AGG_STABLE,
> "Wrong agg mode received from bonding device");
>
> - retval = remove_slaves_and_stop_bonded_device();
> + retval = remove_members_and_stop_bonded_device();
> TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
>
>
> /* test and verify for Bandwidth mode */
> - retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
> + retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
> TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
>
>
> @@ -706,11 +706,11 @@ test_mode4_agg_mode_selection(void)
> TEST_ASSERT_EQUAL(retval, AGG_BANDWIDTH,
> "Wrong agg mode received from bonding device");
>
> - retval = remove_slaves_and_stop_bonded_device();
> + retval = remove_members_and_stop_bonded_device();
> TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
>
> /* test and verify selection for count mode */
> - retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
> + retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUT, 0);
> TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
>
>
> @@ -726,7 +726,7 @@ test_mode4_agg_mode_selection(void)
> TEST_ASSERT_EQUAL(retval, AGG_COUNT,
> "Wrong agg mode received from bonding device");
>
> - retval = remove_slaves_and_stop_bonded_device();
> + retval = remove_members_and_stop_bonded_device();
> TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
>
> return TEST_SUCCESS;
> @@ -780,7 +780,7 @@ generate_packets(struct rte_ether_addr *src_mac,
> }
>
> static int
> -generate_and_put_packets(struct slave_conf *slave,
> +generate_and_put_packets(struct member_conf *member,
> struct rte_ether_addr *src_mac,
> struct rte_ether_addr *dst_mac, uint16_t count)
> {
> @@ -791,12 +791,12 @@ generate_and_put_packets(struct slave_conf *slave,
> if (retval != (int)count)
> return retval;
>
> - retval = slave_put_pkts(slave, pkts, count);
> + retval = member_put_pkts(member, pkts, count);
> if (retval > 0 && retval != count)
> free_pkts(&pkts[retval], count - retval);
>
> TEST_ASSERT_EQUAL(retval, count,
> - "Failed to enqueue packets into slave %u RX queue", slave->port_id);
> + "Failed to enqueue packets into member %u RX queue", member->port_id);
>
> return TEST_SUCCESS;
> }
> @@ -804,7 +804,7 @@ generate_and_put_packets(struct slave_conf *slave,
> static int
> test_mode4_rx(void)
> {
> - struct slave_conf *slave;
> + struct member_conf *member;
> uint16_t i, j;
>
> uint16_t expected_pkts_cnt;
> @@ -819,7 +819,7 @@ test_mode4_rx(void)
> struct rte_ether_addr dst_mac;
> struct rte_ether_addr bonded_mac;
>
> - retval = initialize_bonded_device_with_slaves(TEST_PROMISC_SLAVE_COUNT,
> + retval = initialize_bonded_device_with_members(TEST_PROMISC_MEMBER_COUNT,
> 0);
> TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
>
> @@ -838,7 +838,7 @@ test_mode4_rx(void)
> dst_mac.addr_bytes[0] += 2;
>
> /* First try with promiscuous mode enabled.
> - * Add 2 packets to each slave. First with bonding MAC address, second with
> + * Add 2 packets to each member. First with bonding MAC address, second with
> * different. Check if we received all of them. */
> retval = rte_eth_promiscuous_enable(test_params.bonded_port_id);
> TEST_ASSERT_SUCCESS(retval,
> @@ -846,16 +846,16 @@ test_mode4_rx(void)
> test_params.bonded_port_id, rte_strerror(-retval));
>
> expected_pkts_cnt = 0;
> - FOR_EACH_SLAVE(i, slave) {
> - retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
> - TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
> - slave->port_id);
> + FOR_EACH_MEMBER(i, member) {
> + retval = generate_and_put_packets(member, &src_mac, &bonded_mac, 1);
> + TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
> + member->port_id);
>
> - retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
> - TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
> - slave->port_id);
> + retval = generate_and_put_packets(member, &src_mac, &dst_mac, 1);
> + TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
> + member->port_id);
>
> - /* Expect 2 packets per slave */
> + /* Expect 2 packets per member */
> expected_pkts_cnt += 2;
> }
>
> @@ -894,16 +894,16 @@ test_mode4_rx(void)
> test_params.bonded_port_id, rte_strerror(-retval));
>
> expected_pkts_cnt = 0;
> - FOR_EACH_SLAVE(i, slave) {
> - retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
> - TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
> - slave->port_id);
> + FOR_EACH_MEMBER(i, member) {
> + retval = generate_and_put_packets(member, &src_mac, &bonded_mac, 1);
> + TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
> + member->port_id);
>
> - retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
> - TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
> - slave->port_id);
> + retval = generate_and_put_packets(member, &src_mac, &dst_mac, 1);
> + TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
> + member->port_id);
>
> - /* Expect only one packet per slave */
> + /* Expect only one packet per member */
> expected_pkts_cnt += 1;
> }
>
> @@ -927,19 +927,19 @@ test_mode4_rx(void)
> TEST_ASSERT_EQUAL(retval, expected_pkts_cnt,
> "Expected %u packets but received only %d", expected_pkts_cnt, retval);
>
> - /* Link down test: simulate link down for first slave. */
> + /* Link down test: simulate link down for first member. */
> delay = bond_get_update_timeout_ms();
>
> - uint8_t slave_down_id = INVALID_PORT_ID;
> + uint8_t member_down_id = INVALID_PORT_ID;
>
> - /* Find first slave and make link down on it*/
> - FOR_EACH_SLAVE(i, slave) {
> - rte_eth_dev_set_link_down(slave->port_id);
> - slave_down_id = slave->port_id;
> + /* Find first member and make link down on it. */
> + FOR_EACH_MEMBER(i, member) {
> + rte_eth_dev_set_link_down(member->port_id);
> + member_down_id = member->port_id;
> break;
> }
>
> - RTE_VERIFY(slave_down_id != INVALID_PORT_ID);
> + RTE_VERIFY(member_down_id != INVALID_PORT_ID);
>
> /* Give some time to rearrange bonding */
> for (i = 0; i < 3; i++) {
> @@ -949,16 +949,16 @@ test_mode4_rx(void)
>
> TEST_ASSERT_SUCCESS(bond_handshake(), "Handshake after link down failed");
>
> - /* Put packet to each slave */
> - FOR_EACH_SLAVE(i, slave) {
> + /* Put packet to each member */
> + FOR_EACH_MEMBER(i, member) {
> void *pkt = NULL;
>
> - dst_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
> - retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
> + dst_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = member->port_id;
> + retval = generate_and_put_packets(member, &src_mac, &dst_mac, 1);
> TEST_ASSERT_SUCCESS(retval, "Failed to generate test packet burst.");
>
> - src_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
> - retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
> + src_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = member->port_id;
> + retval = generate_and_put_packets(member, &src_mac, &bonded_mac, 1);
> TEST_ASSERT_SUCCESS(retval, "Failed to generate test packet burst.");
>
> retval = bond_rx(pkts, RTE_DIM(pkts));
> @@ -967,36 +967,36 @@ test_mode4_rx(void)
> if (retval > 0)
> free_pkts(pkts, retval);
>
> - while (rte_ring_dequeue(slave->rx_queue, (void **)&pkt) == 0)
> + while (rte_ring_dequeue(member->rx_queue, (void **)&pkt) == 0)
> rte_pktmbuf_free(pkt);
>
> - if (slave_down_id == slave->port_id)
> + if (member_down_id == member->port_id)
> TEST_ASSERT_EQUAL(retval, 0, "Packets received unexpectedly.");
> else
> TEST_ASSERT_NOT_EQUAL(retval, 0,
> - "Expected to receive some packets on slave %u.",
> - slave->port_id);
> - rte_eth_dev_start(slave->port_id);
> + "Expected to receive some packets on member %u.",
> + member->port_id);
> + rte_eth_dev_start(member->port_id);
>
> for (j = 0; j < 5; j++) {
> - TEST_ASSERT(bond_handshake_reply(slave) >= 0,
> + TEST_ASSERT(bond_handshake_reply(member) >= 0,
> "Handshake after link up");
>
> - if (bond_handshake_done(slave) == 1)
> + if (bond_handshake_done(member) == 1)
> break;
> }
>
> - TEST_ASSERT(j < 5, "Failed to aggregate slave after link up");
> + TEST_ASSERT(j < 5, "Failed to aggregate member after link up");
> }
>
> - remove_slaves_and_stop_bonded_device();
> + remove_members_and_stop_bonded_device();
> return TEST_SUCCESS;
> }
>
> static int
> test_mode4_tx_burst(void)
> {
> - struct slave_conf *slave;
> + struct member_conf *member;
> uint16_t i, j;
>
> uint16_t exp_pkts_cnt, pkts_cnt = 0;
> @@ -1008,7 +1008,7 @@ test_mode4_tx_burst(void)
> { 0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 } };
> struct rte_ether_addr bonded_mac;
>
> - retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 0);
> + retval = initialize_bonded_device_with_members(TEST_TX_MEMBER_COUNT, 0);
> TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
>
> retval = bond_handshake();
> @@ -1036,19 +1036,19 @@ test_mode4_tx_burst(void)
>
> TEST_ASSERT_EQUAL(retval, pkts_cnt, "TX on bonded device failed");
>
> - /* Check if packets were transmitted properly. Every slave should have
> + /* Check if packets were transmitted properly. Every member should have
> * at least one packet, and sum must match. Under normal operation
> * there should be no LACP nor MARKER frames. */
> pkts_cnt = 0;
> - FOR_EACH_SLAVE(i, slave) {
> + FOR_EACH_MEMBER(i, member) {
> uint16_t normal_cnt, slow_cnt;
>
> - retval = slave_get_pkts(slave, pkts, RTE_DIM(pkts));
> + retval = member_get_pkts(member, pkts, RTE_DIM(pkts));
> normal_cnt = 0;
> slow_cnt = 0;
>
> for (j = 0; j < retval; j++) {
> - if (make_lacp_reply(slave, pkts[j]) == 1)
> + if (make_lacp_reply(member, pkts[j]) == 1)
> normal_cnt++;
> else
> slow_cnt++;
> @@ -1056,11 +1056,11 @@ test_mode4_tx_burst(void)
>
> free_pkts(pkts, normal_cnt + slow_cnt);
> TEST_ASSERT_EQUAL(slow_cnt, 0,
> - "slave %u unexpectedly transmitted %d SLOW packets", slave->port_id,
> + "member %u unexpectedly transmitted %d SLOW packets", member->port_id,
> slow_cnt);
>
> TEST_ASSERT_NOT_EQUAL(normal_cnt, 0,
> - "slave %u did not transmitted any packets", slave->port_id);
> + "member %u did not transmit any packets", member->port_id);
>
> pkts_cnt += normal_cnt;
> }
> @@ -1068,19 +1068,21 @@ test_mode4_tx_burst(void)
> TEST_ASSERT_EQUAL(exp_pkts_cnt, pkts_cnt,
> "Expected %u packets but transmitted only %d", exp_pkts_cnt, pkts_cnt);
>
> - /* Link down test:
> - * simulate link down for first slave. */
> + /*
> + * Link down test:
> + * simulate link down for first member.
> + */
> delay = bond_get_update_timeout_ms();
>
> - uint8_t slave_down_id = INVALID_PORT_ID;
> + uint8_t member_down_id = INVALID_PORT_ID;
>
> - FOR_EACH_SLAVE(i, slave) {
> - rte_eth_dev_set_link_down(slave->port_id);
> - slave_down_id = slave->port_id;
> + FOR_EACH_MEMBER(i, member) {
> + rte_eth_dev_set_link_down(member->port_id);
> + member_down_id = member->port_id;
> break;
> }
>
> - RTE_VERIFY(slave_down_id != INVALID_PORT_ID);
> + RTE_VERIFY(member_down_id != INVALID_PORT_ID);
>
> /* Give some time to rearrange bonding. */
> for (i = 0; i < 3; i++) {
> @@ -1110,19 +1112,19 @@ test_mode4_tx_burst(void)
>
> TEST_ASSERT_EQUAL(retval, pkts_cnt, "TX on bonded device failed");
>
> - /* Check if packets was transmitted properly. Every slave should have
> + /* Check if packets were transmitted properly. Every member should have
> * at least one packet, and sum must match. Under normal operation
> * there should be no LACP nor MARKER frames. */
> pkts_cnt = 0;
> - FOR_EACH_SLAVE(i, slave) {
> + FOR_EACH_MEMBER(i, member) {
> uint16_t normal_cnt, slow_cnt;
>
> - retval = slave_get_pkts(slave, pkts, RTE_DIM(pkts));
> + retval = member_get_pkts(member, pkts, RTE_DIM(pkts));
> normal_cnt = 0;
> slow_cnt = 0;
>
> for (j = 0; j < retval; j++) {
> - if (make_lacp_reply(slave, pkts[j]) == 1)
> + if (make_lacp_reply(member, pkts[j]) == 1)
> normal_cnt++;
> else
> slow_cnt++;
> @@ -1130,17 +1132,17 @@ test_mode4_tx_burst(void)
>
> free_pkts(pkts, normal_cnt + slow_cnt);
>
> - if (slave_down_id == slave->port_id) {
> + if (member_down_id == member->port_id) {
> TEST_ASSERT_EQUAL(normal_cnt + slow_cnt, 0,
> - "slave %u enexpectedly transmitted %u packets",
> - normal_cnt + slow_cnt, slave->port_id);
> + "member %u unexpectedly transmitted %u packets",
> + member->port_id, normal_cnt + slow_cnt);
> } else {
> TEST_ASSERT_EQUAL(slow_cnt, 0,
> - "slave %u unexpectedly transmitted %d SLOW packets",
> - slave->port_id, slow_cnt);
> + "member %u unexpectedly transmitted %d SLOW packets",
> + member->port_id, slow_cnt);
>
> TEST_ASSERT_NOT_EQUAL(normal_cnt, 0,
> - "slave %u did not transmitted any packets", slave->port_id);
> + "member %u did not transmit any packets", member->port_id);
> }
>
> pkts_cnt += normal_cnt;
> @@ -1149,11 +1151,11 @@ test_mode4_tx_burst(void)
> TEST_ASSERT_EQUAL(exp_pkts_cnt, pkts_cnt,
> "Expected %u packets but transmitted only %d", exp_pkts_cnt, pkts_cnt);
>
> - return remove_slaves_and_stop_bonded_device();
> + return remove_members_and_stop_bonded_device();
> }
>
> static void
> -init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
> +init_marker(struct rte_mbuf *pkt, struct member_conf *member)
> {
> struct marker_header *marker_hdr = rte_pktmbuf_mtod(pkt,
> struct marker_header *);
> @@ -1166,7 +1168,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
> rte_ether_addr_copy(&parnter_mac_default,
> &marker_hdr->eth_hdr.src_addr);
> marker_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
> - slave->port_id;
> + member->port_id;
>
> marker_hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
>
> @@ -1177,7 +1179,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
> offsetof(struct marker, reserved_90) -
> offsetof(struct marker, requester_port);
> RTE_VERIFY(marker_hdr->marker.info_length == 16);
> - marker_hdr->marker.requester_port = slave->port_id + 1;
> + marker_hdr->marker.requester_port = member->port_id + 1;
> marker_hdr->marker.tlv_type_terminator = TLV_TYPE_TERMINATOR_INFORMATION;
> marker_hdr->marker.terminator_length = 0;
> }
> @@ -1185,7 +1187,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
> static int
> test_mode4_marker(void)
> {
> - struct slave_conf *slave;
> + struct member_conf *member;
> struct rte_mbuf *pkts[MAX_PKT_BURST];
> struct rte_mbuf *marker_pkt;
> struct marker_header *marker_hdr;
> @@ -1196,7 +1198,7 @@ test_mode4_marker(void)
> uint8_t i, j;
> const uint16_t ethtype_slow_be = rte_be_to_cpu_16(RTE_ETHER_TYPE_SLOW);
>
> - retval = initialize_bonded_device_with_slaves(TEST_MARKER_SLAVE_COUT,
> + retval = initialize_bonded_device_with_members(TEST_MARKER_MEMBER_COUT,
> 0);
> TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
>
> @@ -1205,17 +1207,17 @@ test_mode4_marker(void)
> TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
>
> delay = bond_get_update_timeout_ms();
> - FOR_EACH_SLAVE(i, slave) {
> + FOR_EACH_MEMBER(i, member) {
> marker_pkt = rte_pktmbuf_alloc(test_params.mbuf_pool);
> TEST_ASSERT_NOT_NULL(marker_pkt, "Failed to allocate marker packet");
> - init_marker(marker_pkt, slave);
> + init_marker(marker_pkt, member);
>
> - retval = slave_put_pkts(slave, &marker_pkt, 1);
> + retval = member_put_pkts(member, &marker_pkt, 1);
> if (retval != 1)
> rte_pktmbuf_free(marker_pkt);
>
> TEST_ASSERT_EQUAL(retval, 1,
> - "Failed to send marker packet to slave %u", slave->port_id);
> + "Failed to send marker packet to member %u", member->port_id);
>
> for (j = 0; j < 20; ++j) {
> rte_delay_ms(delay);
> @@ -1233,13 +1235,13 @@ test_mode4_marker(void)
>
> /* Check if LACP packet was send by state machines
> First and only packet must be a maker response */
> - retval = slave_get_pkts(slave, pkts, MAX_PKT_BURST);
> + retval = member_get_pkts(member, pkts, MAX_PKT_BURST);
> if (retval == 0)
> continue;
> if (retval > 1)
> free_pkts(pkts, retval);
>
> - TEST_ASSERT_EQUAL(retval, 1, "failed to get slave packets");
> + TEST_ASSERT_EQUAL(retval, 1, "failed to get member packets");
> nb_pkts = retval;
>
> marker_hdr = rte_pktmbuf_mtod(pkts[0], struct marker_header *);
> @@ -1263,7 +1265,7 @@ test_mode4_marker(void)
> TEST_ASSERT(j < 20, "Marker response not found");
> }
>
> - retval = remove_slaves_and_stop_bonded_device();
> + retval = remove_members_and_stop_bonded_device();
> TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
>
> return TEST_SUCCESS;
> @@ -1272,7 +1274,7 @@ test_mode4_marker(void)
> static int
> test_mode4_expired(void)
> {
> - struct slave_conf *slave, *exp_slave = NULL;
> + struct member_conf *member, *exp_member = NULL;
> struct rte_mbuf *pkts[MAX_PKT_BURST];
> int retval;
> uint32_t old_delay;
> @@ -1282,7 +1284,7 @@ test_mode4_expired(void)
>
> struct rte_eth_bond_8023ad_conf conf;
>
> - retval = initialize_bonded_device_with_slaves(TEST_EXPIRED_SLAVE_COUNT,
> + retval = initialize_bonded_device_with_members(TEST_EXPIRED_MEMBER_COUNT,
> 0);
> /* Set custom timeouts to make test last shorter. */
> rte_eth_bond_8023ad_conf_get(test_params.bonded_port_id, &conf);
> @@ -1298,8 +1300,8 @@ test_mode4_expired(void)
>
> /* Wait for new settings to be applied. */
> for (i = 0; i < old_delay/conf.update_timeout_ms * 2; i++) {
> - FOR_EACH_SLAVE(j, slave)
> - bond_handshake_reply(slave);
> + FOR_EACH_MEMBER(j, member)
> + bond_handshake_reply(member);
>
> rte_delay_ms(conf.update_timeout_ms);
> }
> @@ -1307,13 +1309,13 @@ test_mode4_expired(void)
> retval = bond_handshake();
> TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
>
> - /* Find first slave */
> - FOR_EACH_SLAVE(i, slave) {
> - exp_slave = slave;
> + /* Find first member */
> + FOR_EACH_MEMBER(i, member) {
> + exp_member = member;
> break;
> }
>
> - RTE_VERIFY(exp_slave != NULL);
> + RTE_VERIFY(exp_member != NULL);
>
> /* When one of partners do not send or respond to LACP frame in
> * conf.long_timeout_ms time, internal state machines should detect this
> @@ -1325,16 +1327,16 @@ test_mode4_expired(void)
> TEST_ASSERT_EQUAL(retval, 0, "Unexpectedly received %d packets",
> retval);
>
> - FOR_EACH_SLAVE(i, slave) {
> - retval = bond_handshake_reply(slave);
> + FOR_EACH_MEMBER(i, member) {
> + retval = bond_handshake_reply(member);
> TEST_ASSERT(retval >= 0, "Handshake failed");
>
> - /* Remove replay for slave that suppose to be expired. */
> - if (slave == exp_slave) {
> - while (rte_ring_count(slave->rx_queue) > 0) {
> + /* Remove reply for member that is supposed to be expired. */
> + if (member == exp_member) {
> + while (rte_ring_count(member->rx_queue) > 0) {
> void *pkt = NULL;
>
> - rte_ring_dequeue(slave->rx_queue, &pkt);
> + rte_ring_dequeue(member->rx_queue, &pkt);
> rte_pktmbuf_free(pkt);
> }
> }
> @@ -1348,17 +1350,17 @@ test_mode4_expired(void)
> retval);
> }
>
> - /* After test only expected slave should be in EXPIRED state */
> - FOR_EACH_SLAVE(i, slave) {
> - if (slave == exp_slave)
> - TEST_ASSERT(slave->lacp_parnter_state & STATE_EXPIRED,
> - "Slave %u should be in expired.", slave->port_id);
> + /* After test only expected member should be in EXPIRED state */
> + FOR_EACH_MEMBER(i, member) {
> + if (member == exp_member)
> + TEST_ASSERT(member->lacp_parnter_state & STATE_EXPIRED,
> + "Member %u should be in EXPIRED state.", member->port_id);
> else
> - TEST_ASSERT_EQUAL(bond_handshake_done(slave), 1,
> - "Slave %u should be operational.", slave->port_id);
> + TEST_ASSERT_EQUAL(bond_handshake_done(member), 1,
> + "Member %u should be operational.", member->port_id);
> }
>
> - retval = remove_slaves_and_stop_bonded_device();
> + retval = remove_members_and_stop_bonded_device();
> TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
>
> return TEST_SUCCESS;
> @@ -1372,17 +1374,17 @@ test_mode4_ext_ctrl(void)
> * . try to transmit lacpdu (should fail)
> * . try to set collecting and distributing flags (should fail)
> * reconfigure w/external sm
> - * . transmit one lacpdu on each slave using new api
> - * . make sure each slave receives one lacpdu using the callback api
> - * . transmit one data pdu on each slave (should fail)
> + * . transmit one lacpdu on each member using new api
> + * . make sure each member receives one lacpdu using the callback api
> + * . transmit one data pdu on each member (should fail)
> * . enable distribution and collection, send one data pdu each again
> */
>
> int retval;
> - struct slave_conf *slave = NULL;
> + struct member_conf *member = NULL;
> uint8_t i;
>
> - struct rte_mbuf *lacp_tx_buf[SLAVE_COUNT];
> + struct rte_mbuf *lacp_tx_buf[MEMBER_COUNT];
> struct rte_ether_addr src_mac, dst_mac;
> struct lacpdu_header lacpdu = {
> .lacpdu = {
> @@ -1396,30 +1398,30 @@ test_mode4_ext_ctrl(void)
> initialize_eth_header(&lacpdu.eth_hdr, &src_mac, &dst_mac,
> RTE_ETHER_TYPE_SLOW, 0, 0);
>
> - for (i = 0; i < SLAVE_COUNT; i++) {
> + for (i = 0; i < MEMBER_COUNT; i++) {
> lacp_tx_buf[i] = rte_pktmbuf_alloc(test_params.mbuf_pool);
> rte_memcpy(rte_pktmbuf_mtod(lacp_tx_buf[i], char *),
> &lacpdu, sizeof(lacpdu));
> rte_pktmbuf_pkt_len(lacp_tx_buf[i]) = sizeof(lacpdu);
> }
>
> - retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 0);
> + retval = initialize_bonded_device_with_members(TEST_TX_MEMBER_COUNT, 0);
> TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
>
> - FOR_EACH_SLAVE(i, slave) {
> + FOR_EACH_MEMBER(i, member) {
> TEST_ASSERT_FAIL(rte_eth_bond_8023ad_ext_slowtx(
> test_params.bonded_port_id,
> - slave->port_id, lacp_tx_buf[i]),
> - "Slave should not allow manual LACP xmit");
> + member->port_id, lacp_tx_buf[i]),
> + "Member should not allow manual LACP xmit");
> TEST_ASSERT_FAIL(rte_eth_bond_8023ad_ext_collect(
> test_params.bonded_port_id,
> - slave->port_id, 1),
> - "Slave should not allow external state controls");
> + member->port_id, 1),
> + "Member should not allow external state controls");
> }
>
> free_pkts(lacp_tx_buf, RTE_DIM(lacp_tx_buf));
>
> - retval = remove_slaves_and_stop_bonded_device();
> + retval = remove_members_and_stop_bonded_device();
> TEST_ASSERT_SUCCESS(retval, "Bonded device cleanup failed.");
>
> return TEST_SUCCESS;
> @@ -1430,13 +1432,13 @@ static int
> test_mode4_ext_lacp(void)
> {
> int retval;
> - struct slave_conf *slave = NULL;
> - uint8_t all_slaves_done = 0, i;
> + struct member_conf *member = NULL;
> + uint8_t all_members_done = 0, i;
> uint16_t nb_pkts;
> const unsigned int delay = bond_get_update_timeout_ms();
>
> - struct rte_mbuf *lacp_tx_buf[SLAVE_COUNT];
> - struct rte_mbuf *buf[SLAVE_COUNT];
> + struct rte_mbuf *lacp_tx_buf[MEMBER_COUNT];
> + struct rte_mbuf *buf[MEMBER_COUNT];
> struct rte_ether_addr src_mac, dst_mac;
> struct lacpdu_header lacpdu = {
> .lacpdu = {
> @@ -1450,14 +1452,14 @@ test_mode4_ext_lacp(void)
> initialize_eth_header(&lacpdu.eth_hdr, &src_mac, &dst_mac,
> RTE_ETHER_TYPE_SLOW, 0, 0);
>
> - for (i = 0; i < SLAVE_COUNT; i++) {
> + for (i = 0; i < MEMBER_COUNT; i++) {
> lacp_tx_buf[i] = rte_pktmbuf_alloc(test_params.mbuf_pool);
> rte_memcpy(rte_pktmbuf_mtod(lacp_tx_buf[i], char *),
> &lacpdu, sizeof(lacpdu));
> rte_pktmbuf_pkt_len(lacp_tx_buf[i]) = sizeof(lacpdu);
> }
>
> - retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 1);
> + retval = initialize_bonded_device_with_members(TEST_TX_MEMBER_COUNT, 1);
> TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
>
> memset(lacpdu_rx_count, 0, sizeof(lacpdu_rx_count));
> @@ -1466,22 +1468,22 @@ test_mode4_ext_lacp(void)
> for (i = 0; i < 30; ++i)
> rte_delay_ms(delay);
>
> - FOR_EACH_SLAVE(i, slave) {
> + FOR_EACH_MEMBER(i, member) {
> retval = rte_eth_bond_8023ad_ext_slowtx(
> test_params.bonded_port_id,
> - slave->port_id, lacp_tx_buf[i]);
> + member->port_id, lacp_tx_buf[i]);
> TEST_ASSERT_SUCCESS(retval,
> - "Slave should allow manual LACP xmit");
> + "Member should allow manual LACP xmit");
> }
>
> nb_pkts = bond_tx(NULL, 0);
> TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets transmitted unexpectedly");
>
> - FOR_EACH_SLAVE(i, slave) {
> - nb_pkts = slave_get_pkts(slave, buf, RTE_DIM(buf));
> - TEST_ASSERT_EQUAL(nb_pkts, 1, "found %u packets on slave %d\n",
> + FOR_EACH_MEMBER(i, member) {
> + nb_pkts = member_get_pkts(member, buf, RTE_DIM(buf));
> + TEST_ASSERT_EQUAL(nb_pkts, 1, "found %u packets on member %d\n",
> nb_pkts, i);
> - slave_put_pkts(slave, buf, nb_pkts);
> + member_put_pkts(member, buf, nb_pkts);
> }
>
> nb_pkts = bond_rx(buf, RTE_DIM(buf));
> @@ -1489,26 +1491,26 @@ test_mode4_ext_lacp(void)
> TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets received unexpectedly");
>
> /* wait for the periodic callback to run */
> - for (i = 0; i < 30 && all_slaves_done == 0; ++i) {
> + for (i = 0; i < 30 && all_members_done == 0; ++i) {
> uint8_t s, total = 0;
>
> rte_delay_ms(delay);
> - FOR_EACH_SLAVE(s, slave) {
> - total += lacpdu_rx_count[slave->port_id];
> + FOR_EACH_MEMBER(s, member) {
> + total += lacpdu_rx_count[member->port_id];
> }
>
> - if (total >= SLAVE_COUNT)
> - all_slaves_done = 1;
> + if (total >= MEMBER_COUNT)
> + all_members_done = 1;
> }
>
> - FOR_EACH_SLAVE(i, slave) {
> - TEST_ASSERT_EQUAL(lacpdu_rx_count[slave->port_id], 1,
> - "Slave port %u should have received 1 lacpdu (count=%u)",
> - slave->port_id,
> - lacpdu_rx_count[slave->port_id]);
> + FOR_EACH_MEMBER(i, member) {
> + TEST_ASSERT_EQUAL(lacpdu_rx_count[member->port_id], 1,
> + "Member port %u should have received 1 lacpdu (count=%u)",
> + member->port_id,
> + lacpdu_rx_count[member->port_id]);
> }
>
> - retval = remove_slaves_and_stop_bonded_device();
> + retval = remove_members_and_stop_bonded_device();
> TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
>
> return TEST_SUCCESS;
> @@ -1517,10 +1519,10 @@ test_mode4_ext_lacp(void)
> static int
> check_environment(void)
> {
> - struct slave_conf *port;
> + struct member_conf *port;
> uint8_t i, env_state;
> - uint16_t slaves[RTE_DIM(test_params.slave_ports)];
> - int slaves_count;
> + uint16_t members[RTE_DIM(test_params.member_ports)];
> + int members_count;
>
> env_state = 0;
> FOR_EACH_PORT(i, port) {
> @@ -1540,20 +1542,20 @@ check_environment(void)
> break;
> }
>
> - slaves_count = rte_eth_bond_slaves_get(test_params.bonded_port_id,
> - slaves, RTE_DIM(slaves));
> + members_count = rte_eth_bond_members_get(test_params.bonded_port_id,
> + members, RTE_DIM(members));
>
> - if (slaves_count != 0)
> + if (members_count != 0)
> env_state |= 0x10;
>
> TEST_ASSERT_EQUAL(env_state, 0,
> "Environment not clean (port %u):%s%s%s%s%s",
> port->port_id,
> - env_state & 0x01 ? " slave rx queue not clean" : "",
> - env_state & 0x02 ? " slave tx queue not clean" : "",
> - env_state & 0x04 ? " port marked as enslaved" : "",
> - env_state & 0x80 ? " slave state is not reset" : "",
> - env_state & 0x10 ? " slave count not equal 0" : ".");
> + env_state & 0x01 ? " member rx queue not clean" : "",
> + env_state & 0x02 ? " member tx queue not clean" : "",
> + env_state & 0x04 ? " port marked as member" : "",
> + env_state & 0x80 ? " member state is not reset" : "",
> + env_state & 0x10 ? " member count not equal 0" : ".");
>
>
> return TEST_SUCCESS;
> @@ -1562,7 +1564,7 @@ check_environment(void)
> static int
> test_mode4_executor(int (*test_func)(void))
> {
> - struct slave_conf *port;
> + struct member_conf *port;
> int test_result;
> uint8_t i;
> void *pkt;
> @@ -1581,7 +1583,7 @@ test_mode4_executor(int (*test_func)(void))
>
> /* Reset environment in case test failed to do that. */
> if (test_result != TEST_SUCCESS) {
> - TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
> + TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
> "Failed to stop bonded device");
>
> FOR_EACH_PORT(i, port) {
> diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
> index 464fb2dbd0..1f888b4771 100644
> --- a/app/test/test_link_bonding_rssconf.c
> +++ b/app/test/test_link_bonding_rssconf.c
> @@ -27,15 +27,15 @@
>
> #include "test.h"
>
> -#define SLAVE_COUNT (4)
> +#define MEMBER_COUNT (4)
>
> #define RXTX_RING_SIZE 1024
> #define RXTX_QUEUE_COUNT 4
>
> #define BONDED_DEV_NAME ("net_bonding_rss")
>
> -#define SLAVE_DEV_NAME_FMT ("net_null%d")
> -#define SLAVE_RXTX_QUEUE_FMT ("rssconf_slave%d_q%d")
> +#define MEMBER_DEV_NAME_FMT ("net_null%d")
> +#define MEMBER_RXTX_QUEUE_FMT ("rssconf_member%d_q%d")
>
> #define NUM_MBUFS 8191
> #define MBUF_SIZE (1600 + RTE_PKTMBUF_HEADROOM)
> @@ -46,7 +46,7 @@
> #define INVALID_PORT_ID (0xFF)
> #define INVALID_BONDING_MODE (-1)
>
> -struct slave_conf {
> +struct member_conf {
> uint16_t port_id;
> struct rte_eth_dev_info dev_info;
>
> @@ -54,7 +54,7 @@ struct slave_conf {
> uint8_t rss_key[40];
> struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
>
> - uint8_t is_slave;
> + uint8_t is_member;
> struct rte_ring *rxtx_queue[RXTX_QUEUE_COUNT];
> };
>
> @@ -62,15 +62,15 @@ struct link_bonding_rssconf_unittest_params {
> uint8_t bond_port_id;
> struct rte_eth_dev_info bond_dev_info;
> struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
> - struct slave_conf slave_ports[SLAVE_COUNT];
> + struct member_conf member_ports[MEMBER_COUNT];
>
> struct rte_mempool *mbuf_pool;
> };
>
> static struct link_bonding_rssconf_unittest_params test_params = {
> .bond_port_id = INVALID_PORT_ID,
> - .slave_ports = {
> - [0 ... SLAVE_COUNT - 1] = { .port_id = INVALID_PORT_ID, .is_slave = 0}
> + .member_ports = {
> + [0 ... MEMBER_COUNT - 1] = { .port_id = INVALID_PORT_ID, .is_member = 0}
> },
> .mbuf_pool = NULL,
> };
> @@ -107,14 +107,14 @@ static struct rte_eth_conf rss_pmd_conf = {
> #define FOR_EACH(_i, _item, _array, _size) \
> for (_i = 0, _item = &_array[0]; _i < _size && (_item = &_array[_i]); _i++)
>
> -/* Macro for iterating over every port that can be used as a slave
> +/* Macro for iterating over every port that can be used as a member
> * in this test.
> - * _i variable used as an index in test_params->slave_ports
> - * _slave pointer to &test_params->slave_ports[_idx]
> + * _i variable used as an index in test_params->member_ports
> + * _member pointer to &test_params->member_ports[_idx]
> */
> #define FOR_EACH_PORT(_i, _port) \
> - FOR_EACH(_i, _port, test_params.slave_ports, \
> - RTE_DIM(test_params.slave_ports))
> + FOR_EACH(_i, _port, test_params.member_ports, \
> + RTE_DIM(test_params.member_ports))
>
> static int
> configure_ethdev(uint16_t port_id, struct rte_eth_conf *eth_conf,
> @@ -151,21 +151,21 @@ configure_ethdev(uint16_t port_id, struct rte_eth_conf *eth_conf,
> }
>
> /**
> - * Remove all slaves from bonding
> + * Remove all members from bonding
> */
> static int
> -remove_slaves(void)
> +remove_members(void)
> {
> unsigned n;
> - struct slave_conf *port;
> + struct member_conf *port;
>
> FOR_EACH_PORT(n, port) {
> - port = &test_params.slave_ports[n];
> - if (port->is_slave) {
> - TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(
> + port = &test_params.member_ports[n];
> + if (port->is_member) {
> + TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(
> test_params.bond_port_id, port->port_id),
> - "Cannot remove slave %d from bonding", port->port_id);
> - port->is_slave = 0;
> + "Cannot remove member %d from bonding", port->port_id);
> + port->is_member = 0;
> }
> }
>
> @@ -173,30 +173,30 @@ remove_slaves(void)
> }
>
> static int
> -remove_slaves_and_stop_bonded_device(void)
> +remove_members_and_stop_bonded_device(void)
> {
> - TEST_ASSERT_SUCCESS(remove_slaves(), "Removing slaves");
> + TEST_ASSERT_SUCCESS(remove_members(), "Removing members");
> TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params.bond_port_id),
> "Failed to stop port %u", test_params.bond_port_id);
> return TEST_SUCCESS;
> }
>
> /**
> - * Add all slaves to bonding
> + * Add all members to bonding
> */
> static int
> -bond_slaves(void)
> +bond_members(void)
> {
> unsigned n;
> - struct slave_conf *port;
> + struct member_conf *port;
>
> FOR_EACH_PORT(n, port) {
> - port = &test_params.slave_ports[n];
> - if (!port->is_slave) {
> - TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bond_port_id,
> - port->port_id), "Cannot attach slave %d to the bonding",
> + port = &test_params.member_ports[n];
> + if (!port->is_member) {
> + TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params.bond_port_id,
> + port->port_id), "Cannot attach member %d to the bonding",
> port->port_id);
> - port->is_slave = 1;
> + port->is_member = 1;
> }
> }
>
> @@ -223,11 +223,11 @@ reta_set(uint16_t port_id, uint8_t value, int reta_size)
> }
>
> /**
> - * Check if slaves RETA is synchronized with bonding port. Returns 1 if slave
> + * Check if member's RETA is synchronized with the bonding port. Returns 1 if member
> * port is synced with bonding port.
> */
> static int
> -reta_check_synced(struct slave_conf *port)
> +reta_check_synced(struct member_conf *port)
> {
> unsigned i;
>
> @@ -264,10 +264,10 @@ bond_reta_fetch(void) {
> }
>
> /**
> - * Fetch slaves RETA
> + * Fetch member's RETA
> */
> static int
> -slave_reta_fetch(struct slave_conf *port) {
> +member_reta_fetch(struct member_conf *port) {
> unsigned j;
>
> for (j = 0; j < port->dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE; j++)
> @@ -280,49 +280,49 @@ slave_reta_fetch(struct slave_conf *port) {
> }
>
> /**
> - * Remove and add slave to check if slaves configuration is synced with
> - * the bonding ports values after adding new slave.
> + * Remove and add a member to check if the member's configuration is synced
> + * with the bonding port's values after adding a new member.
> */
> static int
> -slave_remove_and_add(void)
> +member_remove_and_add(void)
> {
> - struct slave_conf *port = &(test_params.slave_ports[0]);
> + struct member_conf *port = &(test_params.member_ports[0]);
>
> - /* 1. Remove first slave from bonding */
> - TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(test_params.bond_port_id,
> - port->port_id), "Cannot remove slave #d from bonding");
> + /* 1. Remove first member from bonding */
> + TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(test_params.bond_port_id,
> + port->port_id), "Cannot remove member %d from bonding");
>
> - /* 2. Change removed (ex-)slave and bonding configuration to different
> + /* 2. Change removed (ex-)member and bonding configuration to different
> * values
> */
> reta_set(test_params.bond_port_id, 1, test_params.bond_dev_info.reta_size);
> bond_reta_fetch();
>
> reta_set(port->port_id, 2, port->dev_info.reta_size);
> - slave_reta_fetch(port);
> + member_reta_fetch(port);
>
> TEST_ASSERT(reta_check_synced(port) == 0,
> - "Removed slave didn't should be synchronized with bonding port");
> + "Removed member should not be synchronized with bonding port");
>
> - /* 3. Add (ex-)slave and check if configuration changed*/
> - TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bond_port_id,
> - port->port_id), "Cannot add slave");
> + /* 3. Add (ex-)member and check if configuration changed*/
> + TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params.bond_port_id,
> + port->port_id), "Cannot add member");
>
> bond_reta_fetch();
> - slave_reta_fetch(port);
> + member_reta_fetch(port);
>
> return reta_check_synced(port);
> }
>
> /**
> - * Test configuration propagation over slaves.
> + * Test configuration propagation over members.
> */
> static int
> test_propagate(void)
> {
> unsigned i;
> uint8_t n;
> - struct slave_conf *port;
> + struct member_conf *port;
> uint8_t bond_rss_key[40];
> struct rte_eth_rss_conf bond_rss_conf;
>
> @@ -349,18 +349,18 @@ test_propagate(void)
>
> retval = rte_eth_dev_rss_hash_update(test_params.bond_port_id,
> &bond_rss_conf);
> - TEST_ASSERT_SUCCESS(retval, "Cannot set slaves hash function");
> + TEST_ASSERT_SUCCESS(retval, "Cannot set members hash function");
>
> FOR_EACH_PORT(n, port) {
> - port = &test_params.slave_ports[n];
> + port = &test_params.member_ports[n];
>
> retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
> &port->rss_conf);
> TEST_ASSERT_SUCCESS(retval,
> - "Cannot take slaves RSS configuration");
> + "Cannot take members RSS configuration");
>
> TEST_ASSERT(port->rss_conf.rss_hf == rss_hf,
> - "Hash function not propagated for slave %d",
> + "Hash function not propagated for member %d",
> port->port_id);
> }
>
> @@ -376,11 +376,11 @@ test_propagate(void)
>
> /* Set all keys to zero */
> FOR_EACH_PORT(n, port) {
> - port = &test_params.slave_ports[n];
> + port = &test_params.member_ports[n];
> memset(port->rss_conf.rss_key, 0, 40);
> retval = rte_eth_dev_rss_hash_update(port->port_id,
> &port->rss_conf);
> - TEST_ASSERT_SUCCESS(retval, "Cannot set slaves RSS keys");
> + TEST_ASSERT_SUCCESS(retval, "Cannot set members RSS keys");
> }
>
> memset(bond_rss_key, i, sizeof(bond_rss_key));
> @@ -393,18 +393,18 @@ test_propagate(void)
> TEST_ASSERT_SUCCESS(retval, "Cannot set bonded port RSS keys");
>
> FOR_EACH_PORT(n, port) {
> - port = &test_params.slave_ports[n];
> + port = &test_params.member_ports[n];
>
> retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
> &(port->rss_conf));
>
> TEST_ASSERT_SUCCESS(retval,
> - "Cannot take slaves RSS configuration");
> + "Cannot take members RSS configuration");
>
> /* compare keys */
> retval = memcmp(port->rss_conf.rss_key, bond_rss_key,
> sizeof(bond_rss_key));
> - TEST_ASSERT(retval == 0, "Key value not propagated for slave %d",
> + TEST_ASSERT(retval == 0, "Key value not propagated for member %d",
> port->port_id);
> }
> }
> @@ -416,10 +416,10 @@ test_propagate(void)
>
> /* Set all keys to zero */
> FOR_EACH_PORT(n, port) {
> - port = &test_params.slave_ports[n];
> + port = &test_params.member_ports[n];
> retval = reta_set(port->port_id, (i + 1) % RXTX_QUEUE_COUNT,
> port->dev_info.reta_size);
> - TEST_ASSERT_SUCCESS(retval, "Cannot set slaves RETA");
> + TEST_ASSERT_SUCCESS(retval, "Cannot set members RETA");
> }
>
> TEST_ASSERT_SUCCESS(reta_set(test_params.bond_port_id,
> @@ -429,9 +429,9 @@ test_propagate(void)
> bond_reta_fetch();
>
> FOR_EACH_PORT(n, port) {
> - port = &test_params.slave_ports[n];
> + port = &test_params.member_ports[n];
>
> - slave_reta_fetch(port);
> + member_reta_fetch(port);
> TEST_ASSERT(reta_check_synced(port) == 1, "RETAs inconsistent");
> }
> }
> @@ -459,29 +459,29 @@ test_rss(void)
> "Error during getting device (port %u) info: %s\n",
> test_params.bond_port_id, strerror(-ret));
>
> - TEST_ASSERT_SUCCESS(bond_slaves(), "Bonding slaves failed");
> + TEST_ASSERT_SUCCESS(bond_members(), "Bonding members failed");
>
> TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params.bond_port_id),
> "Failed to start bonding port (%d).", test_params.bond_port_id);
>
> TEST_ASSERT_SUCCESS(test_propagate(), "Propagation test failed");
>
> - TEST_ASSERT(slave_remove_and_add() == 1, "remove and add slaves success.");
> + TEST_ASSERT(member_remove_and_add() == 1, "Removing and adding members failed.");
>
> - remove_slaves_and_stop_bonded_device();
> + remove_members_and_stop_bonded_device();
>
> return TEST_SUCCESS;
> }
>
>
> /**
> - * Test RSS configuration over bonded and slaves.
> + * Test RSS configuration over bonded and members.
> */
> static int
> test_rss_config_lazy(void)
> {
> struct rte_eth_rss_conf bond_rss_conf = {0};
> - struct slave_conf *port;
> + struct member_conf *port;
> uint8_t rss_key[40];
> uint64_t rss_hf;
> int retval;
> @@ -502,18 +502,18 @@ test_rss_config_lazy(void)
> TEST_ASSERT(retval != 0, "Succeeded in setting bonded port hash function");
> }
>
> - /* Set all keys to zero for all slaves */
> + /* Set all keys to zero for all members */
> FOR_EACH_PORT(n, port) {
> - port = &test_params.slave_ports[n];
> + port = &test_params.member_ports[n];
> retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
> &port->rss_conf);
> - TEST_ASSERT_SUCCESS(retval, "Cannot get slaves RSS configuration");
> + TEST_ASSERT_SUCCESS(retval, "Cannot get members RSS configuration");
> memset(port->rss_key, 0, sizeof(port->rss_key));
> port->rss_conf.rss_key = port->rss_key;
> port->rss_conf.rss_key_len = sizeof(port->rss_key);
> retval = rte_eth_dev_rss_hash_update(port->port_id,
> &port->rss_conf);
> - TEST_ASSERT(retval != 0, "Succeeded in setting slaves RSS keys");
> + TEST_ASSERT(retval != 0, "Succeeded in setting members RSS keys");
> }
>
> /* Set RSS keys for bonded port */
> @@ -529,10 +529,10 @@ test_rss_config_lazy(void)
> /* Test RETA propagation */
> for (i = 0; i < RXTX_QUEUE_COUNT; i++) {
> FOR_EACH_PORT(n, port) {
> - port = &test_params.slave_ports[n];
> + port = &test_params.member_ports[n];
> retval = reta_set(port->port_id, (i + 1) % RXTX_QUEUE_COUNT,
> port->dev_info.reta_size);
> - TEST_ASSERT(retval != 0, "Succeeded in setting slaves RETA");
> + TEST_ASSERT(retval != 0, "Succeeded in setting members RETA");
> }
>
> retval = reta_set(test_params.bond_port_id, i % RXTX_QUEUE_COUNT,
> @@ -560,14 +560,14 @@ test_rss_lazy(void)
> "Error during getting device (port %u) info: %s\n",
> test_params.bond_port_id, strerror(-ret));
>
> - TEST_ASSERT_SUCCESS(bond_slaves(), "Bonding slaves failed");
> + TEST_ASSERT_SUCCESS(bond_members(), "Bonding members failed");
>
> TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params.bond_port_id),
> "Failed to start bonding port (%d).", test_params.bond_port_id);
>
> TEST_ASSERT_SUCCESS(test_rss_config_lazy(), "Succeeded in setting RSS hash when RX_RSS mq_mode is turned off");
>
> - remove_slaves_and_stop_bonded_device();
> + remove_members_and_stop_bonded_device();
>
> return TEST_SUCCESS;
> }
> @@ -579,13 +579,13 @@ test_setup(void)
> int retval;
> int port_id;
> char name[256];
> - struct slave_conf *port;
> + struct member_conf *port;
> struct rte_ether_addr mac_addr = { .addr_bytes = {0} };
>
> if (test_params.mbuf_pool == NULL) {
>
> test_params.mbuf_pool = rte_pktmbuf_pool_create(
> - "RSS_MBUF_POOL", NUM_MBUFS * SLAVE_COUNT,
> + "RSS_MBUF_POOL", NUM_MBUFS * MEMBER_COUNT,
> MBUF_CACHE_SIZE, 0, MBUF_SIZE, rte_socket_id());
>
> TEST_ASSERT(test_params.mbuf_pool != NULL,
> @@ -594,10 +594,10 @@ test_setup(void)
>
> /* Create / initialize ring eth devs. */
> FOR_EACH_PORT(n, port) {
> - port = &test_params.slave_ports[n];
> + port = &test_params.member_ports[n];
>
> port_id = rte_eth_dev_count_avail();
> - snprintf(name, sizeof(name), SLAVE_DEV_NAME_FMT, port_id);
> + snprintf(name, sizeof(name), MEMBER_DEV_NAME_FMT, port_id);
>
> retval = rte_vdev_init(name, "size=64,copy=0");
> TEST_ASSERT_SUCCESS(retval, "Failed to create null device '%s'\n",
> @@ -647,7 +647,7 @@ test_setup(void)
> static void
> testsuite_teardown(void)
> {
> - struct slave_conf *port;
> + struct member_conf *port;
> uint8_t i;
>
> /* Only stop ports.
> @@ -685,7 +685,7 @@ test_rssconf_executor(int (*test_func)(void))
>
> /* Reset environment in case test failed to do that. */
> if (test_result != TEST_SUCCESS) {
> - TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
> + TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
> "Failed to stop bonded device");
> }
>
> diff --git a/doc/guides/howto/lm_bond_virtio_sriov.rst b/doc/guides/howto/lm_bond_virtio_sriov.rst
> index e854ae214e..c06d1bc43c 100644
> --- a/doc/guides/howto/lm_bond_virtio_sriov.rst
> +++ b/doc/guides/howto/lm_bond_virtio_sriov.rst
> @@ -17,8 +17,8 @@ Test Setup
> ----------
>
> A bonded device is created in the VM.
> -The virtio and VF PMD's are added as slaves to the bonded device.
> -The VF is set as the primary slave of the bonded device.
> +The virtio and VF PMD's are added as members to the bonded device.
> +The VF is set as the primary member of the bonded device.
>
> A bridge must be set up on the Host connecting the tap device, which is the
> backend of the Virtio device and the Physical Function (PF) device.
> @@ -116,13 +116,13 @@ Bonding is port 2 (P2).
>
> testpmd> create bonded device 1 0
> Created new bonded device net_bond_testpmd_0 on (port 2).
> - testpmd> add bonding slave 0 2
> - testpmd> add bonding slave 1 2
> + testpmd> add bonding member 0 2
> + testpmd> add bonding member 1 2
> testpmd> show bonding config 2
>
> The syntax of the ``testpmd`` command is:
>
> -set bonding primary (slave id) (port id)
> +set bonding primary (member id) (port id)
>
> Set primary to P1 before starting bonding port.
>
> @@ -139,7 +139,7 @@ Set primary to P1 before starting bonding port.
>
> testpmd> show bonding config 2
>
> -Primary is now P1. There are 2 active slaves.
> +Primary is now P1. There are 2 active members.
>
> Use P2 only for forwarding.
>
> @@ -151,7 +151,7 @@ Use P2 only for forwarding.
> testpmd> start
> testpmd> show bonding config 2
>
> -Primary is now P1. There are 2 active slaves.
> +Primary is now P1. There are 2 active members.
>
> .. code-block:: console
>
> @@ -163,10 +163,10 @@ VF traffic is seen at P1 and P2.
>
> testpmd> clear port stats all
> testpmd> set bonding primary 0 2
> - testpmd> remove bonding slave 1 2
> + testpmd> remove bonding member 1 2
> testpmd> show bonding config 2
>
> -Primary is now P0. There is 1 active slave.
> +Primary is now P0. There is 1 active member.
>
> .. code-block:: console
>
> @@ -210,7 +210,7 @@ On host_server_1: Terminal 1
>
> testpmd> show bonding config 2
>
> -Primary is now P0. There is 1 active slave.
> +Primary is now P0. There is 1 active member.
>
> .. code-block:: console
>
> @@ -346,7 +346,7 @@ The ``mac_addr`` command only works with the Kernel PF for Niantic.
> testpmd> show port stats all.
> testpmd> show config fwd
> testpmd> show bonding config 2
> - testpmd> add bonding slave 1 2
> + testpmd> add bonding member 1 2
> testpmd> set bonding primary 1 2
> testpmd> show bonding config 2
> testpmd> show port stats all
> @@ -355,7 +355,7 @@ VF traffic is seen at P1 (VF) and P2 (Bonded device).
>
> .. code-block:: console
>
> - testpmd> remove bonding slave 0 2
> + testpmd> remove bonding member 0 2
> testpmd> show bonding config 2
> testpmd> port stop 0
> testpmd> port close 0
> diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
> index 70242ab2ce..6db880d632 100644
> --- a/doc/guides/nics/bnxt.rst
> +++ b/doc/guides/nics/bnxt.rst
> @@ -781,8 +781,8 @@ DPDK implements a light-weight library to allow PMDs to be bonded together and p
>
> .. code-block:: console
>
> - dpdk-testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,slave=<PCI B:D.F device 1>,slave=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX’ – --socket_num=1 – -i --port-topology=chained
> - (ex) dpdk-testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,slave=0000:82:00.0,slave=0000:82:00.1,mac=00:1e:67:1d:fd:1d' – --socket-num=1 – -i --port-topology=chained
> + dpdk-testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,member=<PCI B:D.F device 1>,member=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX' -- --socket-num=1 -i --port-topology=chained
> + (ex) dpdk-testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,member=0000:82:00.0,member=0000:82:00.1,mac=00:1e:67:1d:fd:1d' -- --socket-num=1 -i --port-topology=chained
>
> Vector Processing
> -----------------
> diff --git a/doc/guides/prog_guide/img/bond-mode-1.svg b/doc/guides/prog_guide/img/bond-mode-1.svg
> index 7c81b856b7..5a9271facf 100644
> --- a/doc/guides/prog_guide/img/bond-mode-1.svg
> +++ b/doc/guides/prog_guide/img/bond-mode-1.svg
> @@ -53,7 +53,7 @@
> v:langID="1033"
> v:metric="true"
> v:viewMarkup="false"><v:userDefs><v:ud
> - v:nameU="msvSubprocessMaster"
> + v:nameU="msvSubprocessMain"
> v:prompt=""
> v:val="VT4(Rectangle)" /><v:ud
> v:nameU="msvNoAutoConnect"
> diff --git a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
> index 1f66154e35..58e5ef41da 100644
> --- a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
> +++ b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
> @@ -16,14 +16,14 @@ allows physical PMDs to be bonded together to create a single logical PMD.
> The Link Bonding PMD library(librte_net_bond) supports bonding of groups of
> ``rte_eth_dev`` ports of the same speed and duplex to provide similar
> capabilities to that found in Linux bonding driver to allow the aggregation
> -of multiple (slave) NICs into a single logical interface between a server
> +of multiple (member) NICs into a single logical interface between a server
> and a switch. The new bonded PMD will then process these interfaces based on
> the mode of operation specified to provide support for features such as
> redundant links, fault tolerance and/or load balancing.
>
> The librte_net_bond library exports a C API which provides an API for the
> creation of bonded devices as well as the configuration and management of the
> -bonded device and its slave devices.
> +bonded device and its member devices.
>
> .. note::
>
> @@ -45,7 +45,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
>
>
> This mode provides load balancing and fault tolerance by transmission of
> - packets in sequential order from the first available slave device through
> + packets in sequential order from the first available member device through
> the last. Packets are bulk dequeued from devices then serviced in a
> round-robin manner. This mode does not guarantee in order reception of
> packets and down stream should be able to handle out of order packets.
> @@ -57,9 +57,9 @@ Currently the Link Bonding PMD library supports following modes of operation:
> Active Backup (Mode 1)
>
>
> - In this mode only one slave in the bond is active at any time, a different
> - slave becomes active if, and only if, the primary active slave fails,
> - thereby providing fault tolerance to slave failure. The single logical
> + In this mode only one member in the bond is active at any time, a different
> + member becomes active if, and only if, the primary active member fails,
> + thereby providing fault tolerance to member failure. The single logical
> bonded interface's MAC address is externally visible on only one NIC (port)
> to avoid confusing the network switch.
>
> @@ -73,10 +73,10 @@ Currently the Link Bonding PMD library supports following modes of operation:
> This mode provides transmit load balancing (based on the selected
> transmission policy) and fault tolerance. The default policy (layer2) uses
> a simple calculation based on the packet flow source and destination MAC
> - addresses as well as the number of active slaves available to the bonded
> - device to classify the packet to a specific slave to transmit on. Alternate
> + addresses as well as the number of active members available to the bonded
> + device to classify the packet to a specific member to transmit on. Alternate
> transmission policies supported are layer 2+3, this takes the IP source and
> - destination addresses into the calculation of the transmit slave port and
> + destination addresses into the calculation of the transmit member port and
> the final supported policy is layer 3+4, this uses IP source and
> destination addresses as well as the TCP/UDP source and destination port.
>
> @@ -92,7 +92,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
> Broadcast (Mode 3)
>
>
> - This mode provides fault tolerance by transmission of packets on all slave
> + This mode provides fault tolerance by transmission of packets on all member
> ports.
>
> * **Link Aggregation 802.3AD (Mode 4):**
> @@ -114,7 +114,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
> intervals period of less than 100ms.
>
> #. Calls to ``rte_eth_tx_burst`` must have a buffer size of at least 2xN,
> - where N is the number of slaves. This is a space required for LACP
> + where N is the number of members. This is a space required for LACP
> frames. Additionally LACP packets are included in the statistics, but
> they are not returned to the application.
>
> @@ -126,7 +126,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
>
>
> This mode provides an adaptive transmit load balancing. It dynamically
> - changes the transmitting slave, according to the computed load. Statistics
> + changes the transmitting member, according to the computed load. Statistics
> are collected in 100ms intervals and scheduled every 10ms.
>
>
> @@ -140,74 +140,74 @@ The Link Bonding Library supports the creation of bonded devices at application
> startup time during EAL initialization using the ``--vdev`` option as well as
> programmatically via the C API ``rte_eth_bond_create`` function.
>
> -Bonded devices support the dynamical addition and removal of slave devices using
> -the ``rte_eth_bond_slave_add`` / ``rte_eth_bond_slave_remove`` APIs.
> +Bonded devices support the dynamic addition and removal of member devices using
> +the ``rte_eth_bond_member_add`` / ``rte_eth_bond_member_remove`` APIs.
>
> -After a slave device is added to a bonded device slave is stopped using
> +After a member device is added to a bonded device, the member is stopped using
> ``rte_eth_dev_stop`` and then reconfigured using ``rte_eth_dev_configure``
> the RX and TX queues are also reconfigured using ``rte_eth_tx_queue_setup`` /
> ``rte_eth_rx_queue_setup`` with the parameters use to configure the bonding
> device. If RSS is enabled for bonding device, this mode is also enabled on new
> -slave and configured as well.
> +member and configured as well.
> Any flow which was configured to the bond device also is configured to the added
> -slave.
> +member.
>
> Setting up multi-queue mode for bonding device to RSS, makes it fully
> -RSS-capable, so all slaves are synchronized with its configuration. This mode is
> -intended to provide RSS configuration on slaves transparent for client
> +RSS-capable, so all members are synchronized with its configuration. This mode is
> +intended to make the RSS configuration of members transparent to the client
> application implementation.
>
> Bonding device stores its own version of RSS settings i.e. RETA, RSS hash
> -function and RSS key, used to set up its slaves. That let to define the meaning
> +function and RSS key, used to set up its members. This allows defining the meaning
> of RSS configuration of bonding device as desired configuration of whole bonding
> -(as one unit), without pointing any of slave inside. It is required to ensure
> +(as one unit), without referring to any member inside. It is required to ensure
> consistency and made it more error-proof.
>
> RSS hash function set for bonding device, is a maximal set of RSS hash functions
> -supported by all bonded slaves. RETA size is a GCD of all its RETA's sizes, so
> -it can be easily used as a pattern providing expected behavior, even if slave
> +supported by all bonded members. RETA size is the GCD of all member RETA sizes, so
> +it can be easily used as a pattern providing expected behavior, even if member
> RETAs' sizes are different. If RSS Key is not set for bonded device, it's not
> -changed on the slaves and default key for device is used.
> +changed on the members and the default device key is used.
>
> -As RSS configurations, there is flow consistency in the bonded slaves for the
> +As with RSS configuration, flow consistency is maintained across the bonded members for the
> next rte flow operations:
>
> Validate:
> - - Validate flow for each slave, failure at least for one slave causes to
> + - Validate the flow for each member; failure for at least one member causes
> bond validation failure.
>
> Create:
> - - Create the flow in all slaves.
> - - Save all the slaves created flows objects in bonding internal flow
> + - Create the flow in all members.
> + - Save all the members' created flow objects in the bonding internal flow
> structure.
> - - Failure in flow creation for existed slave rejects the flow.
> - - Failure in flow creation for new slaves in slave adding time rejects
> - the slave.
> + - Failure in flow creation for an existing member rejects the flow.
> + - Failure in flow creation for new members at member-add time rejects
> + the member.
>
> Destroy:
> - - Destroy the flow in all slaves and release the bond internal flow
> + - Destroy the flow in all members and release the bond internal flow
> memory.
>
> Flush:
> - - Destroy all the bonding PMD flows in all the slaves.
> + - Destroy all the bonding PMD flows in all the members.
>
> .. note::
>
> - Don't call slaves flush directly, It destroys all the slave flows which
> + Don't call members' flush directly; it destroys all the member flows, which
> may include external flows or the bond internal LACP flow.
>
> Query:
> - - Summarize flow counters from all the slaves, relevant only for
> + - Summarize flow counters from all the members, relevant only for
> ``RTE_FLOW_ACTION_TYPE_COUNT``.
>
> Isolate:
> - - Call to flow isolate for all slaves.
> + - Failure in flow isolation for an existing member rejects the isolate mode.
> + - Failure in flow isolation for new members at member-add time rejects
> - the slave.
> + - Call to flow isolate for all members.
> + - Failure in flow isolation for existed member rejects the isolate mode.
> + - Failure in flow isolation for new members in member adding time rejects
> + the member.
>
> All settings are managed through the bonding port API and always are propagated
> -in one direction (from bonding to slaves).
> +in one direction (from bonding to members).
>
> Link Status Change Interrupts / Polling
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> @@ -215,16 +215,16 @@ Link Status Change Interrupts / Polling
> Link bonding devices support the registration of a link status change callback,
> using the ``rte_eth_dev_callback_register`` API, this will be called when the
> status of the bonding device changes. For example in the case of a bonding
> -device which has 3 slaves, the link status will change to up when one slave
> -becomes active or change to down when all slaves become inactive. There is no
> -callback notification when a single slave changes state and the previous
> -conditions are not met. If a user wishes to monitor individual slaves then they
> -must register callbacks with that slave directly.
> +device which has 3 members, the link status will change to up when one member
> +becomes active or change to down when all members become inactive. There is no
> +callback notification when a single member changes state and the previous
> +conditions are not met. If a user wishes to monitor individual members then they
> +must register callbacks with those members directly.
>
> The link bonding library also supports devices which do not implement link
> status change interrupts, this is achieved by polling the devices link status at
> a defined period which is set using the ``rte_eth_bond_link_monitoring_set``
> -API, the default polling interval is 10ms. When a device is added as a slave to
> +API, the default polling interval is 10ms. When a device is added as a member to
> a bonding device it is determined using the ``RTE_PCI_DRV_INTR_LSC`` flag
> whether the device supports interrupts or whether the link status should be
> monitored by polling it.
> @@ -233,30 +233,30 @@ Requirements / Limitations
> ~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> The current implementation only supports devices that support the same speed
> -and duplex to be added as a slaves to the same bonded device. The bonded device
> -inherits these attributes from the first active slave added to the bonded
> -device and then all further slaves added to the bonded device must support
> +and duplex to be added as members to the same bonded device. The bonded device
> +inherits these attributes from the first active member added to the bonded
> +device and then all further members added to the bonded device must support
> these parameters.
>
> -A bonding device must have a minimum of one slave before the bonding device
> +A bonding device must have a minimum of one member before the bonding device
> itself can be started.
>
> To use a bonding device dynamic RSS configuration feature effectively, it is
> -also required, that all slaves should be RSS-capable and support, at least one
> +also required that all members be RSS-capable and support at least one
> common hash function available for each of them. Changing RSS key is only
> -possible, when all slave devices support the same key size.
> +possible when all member devices support the same key size.
>
> -To prevent inconsistency on how slaves process packets, once a device is added
> +To prevent inconsistency on how members process packets, once a device is added
> to a bonding device, RSS and rte flow configurations should be managed through
> -the bonding device API, and not directly on the slave.
> +the bonding device API, and not directly on the member.
>
> Like all other PMD, all functions exported by a PMD are lock-free functions
> that are assumed not to be invoked in parallel on different logical cores to
> work on the same target object.
>
> It should also be noted that the PMD receive function should not be invoked
> -directly on a slave devices after they have been to a bonded device since
> -packets read directly from the slave device will no longer be available to the
> +directly on member devices after they have been added to a bonded device, since
> +packets read directly from the member device will no longer be available to the
> bonded device to read.
>
> Configuration
> @@ -265,25 +265,25 @@ Configuration
> Link bonding devices are created using the ``rte_eth_bond_create`` API
> which requires a unique device name, the bonding mode,
> and the socket Id to allocate the bonding device's resources on.
> -The other configurable parameters for a bonded device are its slave devices,
> -its primary slave, a user defined MAC address and transmission policy to use if
> +The other configurable parameters for a bonded device are its member devices,
> +its primary member, a user defined MAC address and transmission policy to use if
> the device is in balance XOR mode.
>
> -Slave Devices
> -^^^^^^^^^^^^^
> +Member Devices
> +^^^^^^^^^^^^^^
>
> -Bonding devices support up to a maximum of ``RTE_MAX_ETHPORTS`` slave devices
> -of the same speed and duplex. Ethernet devices can be added as a slave to a
> -maximum of one bonded device. Slave devices are reconfigured with the
> +Bonding devices support up to a maximum of ``RTE_MAX_ETHPORTS`` member devices
> +of the same speed and duplex. Ethernet devices can be added as a member to a
> +maximum of one bonded device. Member devices are reconfigured with the
> configuration of the bonded device on being added to a bonded device.
>
> -The bonded also guarantees to return the MAC address of the slave device to its
> -original value of removal of a slave from it.
> +The bonded device also guarantees to restore the MAC address of a member device
> +to its original value upon removal of the member from it.
>
> -Primary Slave
> -^^^^^^^^^^^^^
> +Primary Member
> +^^^^^^^^^^^^^^
>
> -The primary slave is used to define the default port to use when a bonded
> +The primary member is used to define the default port to use when a bonded
> device is in active backup mode. A different port will only be used if, and
> only if, the current primary port goes down. If the user does not specify a
> primary port it will default to being the first port added to the bonded device.
> @@ -292,14 +292,14 @@ MAC Address
> ^^^^^^^^^^^
>
> The bonded device can be configured with a user specified MAC address, this
> -address will be inherited by the some/all slave devices depending on the
> +address will be inherited by some or all member devices depending on the
> operating mode. If the device is in active backup mode then only the primary
> -device will have the user specified MAC, all other slaves will retain their
> -original MAC address. In mode 0, 2, 3, 4 all slaves devices are configure with
> +device will have the user specified MAC, all other members will retain their
> +original MAC address. In modes 0, 2, 3 and 4, all member devices are configured with
> the bonded devices MAC address.
>
> If a user defined MAC address is not defined then the bonded device will
> -default to using the primary slaves MAC address.
> +default to using the primary member's MAC address.
>
> Balance XOR Transmit Policies
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> @@ -310,17 +310,17 @@ Balance XOR mode. Layer 2, Layer 2+3, Layer 3+4.
> * **Layer 2:** Ethernet MAC address based balancing is the default
> transmission policy for Balance XOR bonding mode. It uses a simple XOR
> calculation on the source MAC address and destination MAC address of the
> - packet and then calculate the modulus of this value to calculate the slave
> + packet and then calculate the modulus of this value to calculate the member
> device to transmit the packet on.
>
> * **Layer 2 + 3:** Ethernet MAC address & IP Address based balancing uses a
> combination of source/destination MAC addresses and the source/destination
> - IP addresses of the data packet to decide which slave port the packet will
> + IP addresses of the data packet to decide which member port the packet will
> be transmitted on.
>
> * **Layer 3 + 4:** IP Address & UDP Port based balancing uses a combination
> of source/destination IP Address and the source/destination UDP ports of
> - the packet of the data packet to decide which slave port the packet will be
> + the data packet to decide which member port the packet will be
> transmitted on.
>
> All these policies support 802.1Q VLAN Ethernet packets, as well as IPv4, IPv6
> @@ -350,13 +350,13 @@ device configure API ``rte_eth_dev_configure`` and then the RX and TX queues
> which will be used must be setup using ``rte_eth_tx_queue_setup`` /
> ``rte_eth_rx_queue_setup``.
>
> -Slave devices can be dynamically added and removed from a link bonding device
> -using the ``rte_eth_bond_slave_add`` / ``rte_eth_bond_slave_remove``
> -APIs but at least one slave device must be added to the link bonding device
> +Member devices can be dynamically added and removed from a link bonding device
> +using the ``rte_eth_bond_member_add`` / ``rte_eth_bond_member_remove``
> +APIs but at least one member device must be added to the link bonding device
> before it can be started using ``rte_eth_dev_start``.
>
> -The link status of a bonded device is dictated by that of its slaves, if all
> -slave device link status are down or if all slaves are removed from the link
> +The link status of a bonded device is dictated by that of its members: if all
> +member device link statuses are down or if all members are removed from the link
> bonding device then the link status of the bonding device will go down.
>
> It is also possible to configure / query the configuration of the control
> @@ -390,7 +390,7 @@ long as the following two rules are respected:
> where X can be any combination of numbers and/or letters,
> and the name is no greater than 32 characters long.
>
> -* A least one slave device is provided with for each bonded device definition.
> +* At least one member device is provided for each bonded device definition.
>
> * The operation mode of the bonded device being created is provided.
>
> @@ -404,20 +404,20 @@ The different options are:
>
> mode=2
>
> -* slave: Defines the PMD device which will be added as slave to the bonded
> +* member: Defines the PMD device which will be added as member to the bonded
> device. This option can be selected multiple times, for each device to be
> - added as a slave. Physical devices should be specified using their PCI
> + added as a member. Physical devices should be specified using their PCI
> address, in the format domain:bus:devid.function
>
> .. code-block:: console
>
> - slave=0000:0a:00.0,slave=0000:0a:00.1
> + member=0000:0a:00.0,member=0000:0a:00.1
>
> -* primary: Optional parameter which defines the primary slave port,
> - is used in active backup mode to select the primary slave for data TX/RX if
> +* primary: Optional parameter which defines the primary member port. It
> +  is used in active backup mode to select the primary member for data TX/RX if
> it is available. The primary port also is used to select the MAC address to
> - use when it is not defined by the user. This defaults to the first slave
> - added to the device if it is specified. The primary device must be a slave
> + use when it is not defined by the user. This defaults to the first member
> + added to the device if none is specified. The primary device must be a member
> of the bonded device.
>
> .. code-block:: console
> @@ -432,7 +432,7 @@ The different options are:
> socket_id=0
>
> * mac: Optional parameter to select a MAC address for link bonding device,
> - this overrides the value of the primary slave device.
> + this overrides the value of the primary member device.
>
> .. code-block:: console
>
> @@ -474,29 +474,29 @@ The different options are:
> Examples of Usage
> ^^^^^^^^^^^^^^^^^
>
> -Create a bonded device in round robin mode with two slaves specified by their PCI address:
> +Create a bonded device in round robin mode with two members specified by their PCI addresses:
>
> .. code-block:: console
>
> - ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00' -- --port-topology=chained
> + ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,member=0000:0a:00.01,member=0000:04:00.00' -- --port-topology=chained
>
> -Create a bonded device in round robin mode with two slaves specified by their PCI address and an overriding MAC address:
> +Create a bonded device in round robin mode with two members specified by their PCI addresses and an overriding MAC address:
>
> .. code-block:: console
>
> - ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
> + ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,member=0000:0a:00.01,member=0000:04:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
>
> -Create a bonded device in active backup mode with two slaves specified, and a primary slave specified by their PCI addresses:
> +Create a bonded device in active backup mode with two members specified, and a primary member specified by their PCI addresses:
>
> .. code-block:: console
>
> - ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=1,slave=0000:0a:00.01,slave=0000:04:00.00,primary=0000:0a:00.01' -- --port-topology=chained
> + ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=1,member=0000:0a:00.01,member=0000:04:00.00,primary=0000:0a:00.01' -- --port-topology=chained
>
> -Create a bonded device in balance mode with two slaves specified by their PCI addresses, and a transmission policy of layer 3 + 4 forwarding:
> +Create a bonded device in balance mode with two members specified by their PCI addresses, and a transmission policy of layer 3 + 4 forwarding:
>
> .. code-block:: console
>
> - ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=2,slave=0000:0a:00.01,slave=0000:04:00.00,xmit_policy=l34' -- --port-topology=chained
> + ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=2,member=0000:0a:00.01,member=0000:04:00.00,xmit_policy=l34' -- --port-topology=chained
>
> .. _bonding_testpmd_commands:
>
> @@ -517,28 +517,28 @@ For example, to create a bonded device in mode 1 on socket 0::
> testpmd> create bonded device 1 0
> created new bonded device (port X)
>
> -add bonding slave
> -~~~~~~~~~~~~~~~~~
> +add bonding member
> +~~~~~~~~~~~~~~~~~~
>
> Adds Ethernet device to a Link Bonding device::
>
> - testpmd> add bonding slave (slave id) (port id)
> + testpmd> add bonding member (member id) (port id)
>
> For example, to add Ethernet device (port 6) to a Link Bonding device (port 10)::
>
> - testpmd> add bonding slave 6 10
> + testpmd> add bonding member 6 10
>
>
> -remove bonding slave
> -~~~~~~~~~~~~~~~~~~~~
> +remove bonding member
> +~~~~~~~~~~~~~~~~~~~~~
>
> -Removes an Ethernet slave device from a Link Bonding device::
> +Removes an Ethernet member device from a Link Bonding device::
>
> - testpmd> remove bonding slave (slave id) (port id)
> + testpmd> remove bonding member (member id) (port id)
>
> -For example, to remove Ethernet slave device (port 6) to a Link Bonding device (port 10)::
> +For example, to remove Ethernet member device (port 6) from a Link Bonding device (port 10)::
>
> - testpmd> remove bonding slave 6 10
> + testpmd> remove bonding member 6 10
>
> set bonding mode
> ~~~~~~~~~~~~~~~~
> @@ -554,11 +554,11 @@ For example, to set the bonding mode of a Link Bonding device (port 10) to broad
> set bonding primary
> ~~~~~~~~~~~~~~~~~~~
>
> -Set an Ethernet slave device as the primary device on a Link Bonding device::
> +Set an Ethernet member device as the primary device on a Link Bonding device::
>
> - testpmd> set bonding primary (slave id) (port id)
> + testpmd> set bonding primary (member id) (port id)
>
> -For example, to set the Ethernet slave device (port 6) as the primary port of a Link Bonding device (port 10)::
> +For example, to set the Ethernet member device (port 6) as the primary port of a Link Bonding device (port 10)::
>
> testpmd> set bonding primary 6 10
>
> @@ -590,7 +590,7 @@ set bonding mon_period
>
> Set the link status monitoring polling period in milliseconds for a bonding device.
>
> -This adds support for PMD slave devices which do not support link status interrupts.
> +This adds support for PMD member devices which do not support link status interrupts.
> When the mon_period is set to a value greater than 0 then all PMD's which do not support
> link status ISR will be queried every polling interval to check if their link status has changed::
>
> @@ -604,7 +604,7 @@ For example, to set the link status monitoring polling period of bonded device (
> set bonding lacp dedicated_queue
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> -Enable dedicated tx/rx queues on bonding devices slaves to handle LACP control plane traffic
> +Enable dedicated tx/rx queues on bonding device members to handle LACP control plane traffic
> when in mode 4 (link-aggregation-802.3ad)::
>
> testpmd> set bonding lacp dedicated_queues (port_id) (enable|disable)
> @@ -627,13 +627,13 @@ it also shows link-aggregation-802.3ad information if the link mode is mode 4::
> testpmd> show bonding config (port id)
>
> For example,
> -to show the configuration a Link Bonding device (port 9) with 3 slave devices (1, 3, 4)
> +to show the configuration of a Link Bonding device (port 9) with 3 member devices (1, 3, 4)
> in balance mode with a transmission policy of layer 2+3::
>
> testpmd> show bonding config 9
> - Dev basic:
> Bonding mode: BALANCE(2)
> Balance Xmit Policy: BALANCE_XMIT_POLICY_LAYER23
> - Slaves (3): [1 3 4]
> - Active Slaves (3): [1 3 4]
> + Members (3): [1 3 4]
> + Active Members (3): [1 3 4]
> Primary: [3]
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 82455f9e18..535a361a22 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -124,22 +124,6 @@ Deprecation Notices
> The legacy actions should be removed
> once ``MODIFY_FIELD`` alternative is implemented in drivers.
>
> -* bonding: The data structure ``struct rte_eth_bond_8023ad_slave_info`` will be
> - renamed to ``struct rte_eth_bond_8023ad_member_info`` in DPDK 23.11.
> - The following functions will be removed in DPDK 23.11.
> - The old functions:
> - ``rte_eth_bond_8023ad_slave_info``,
> - ``rte_eth_bond_active_slaves_get``,
> - ``rte_eth_bond_slave_add``,
> - ``rte_eth_bond_slave_remove``, and
> - ``rte_eth_bond_slaves_get``
> - will be replaced by:
> - ``rte_eth_bond_8023ad_member_info``,
> - ``rte_eth_bond_active_members_get``,
> - ``rte_eth_bond_member_add``,
> - ``rte_eth_bond_member_remove``, and
> - ``rte_eth_bond_members_get``.
> -
> * cryptodev: The function ``rte_cryptodev_cb_fn`` will be updated
> to have another parameter ``qp_id`` to return the queue pair ID
> which got error interrupt to the application,
> diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
> index 2fae9539e2..f0ef597351 100644
> --- a/doc/guides/rel_notes/release_23_11.rst
> +++ b/doc/guides/rel_notes/release_23_11.rst
> @@ -109,6 +109,23 @@ API Changes
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> +* bonding: Replaced master/slave with main/member. The data structure
> + ``struct rte_eth_bond_8023ad_slave_info`` was renamed to
> + ``struct rte_eth_bond_8023ad_member_info`` in DPDK 23.11.
> + The following functions were removed in DPDK 23.11:
> + ``rte_eth_bond_8023ad_slave_info``,
> + ``rte_eth_bond_active_slaves_get``,
> + ``rte_eth_bond_slave_add``,
> + ``rte_eth_bond_slave_remove``, and
> + ``rte_eth_bond_slaves_get``.
> + They are replaced respectively by:
> + ``rte_eth_bond_8023ad_member_info``,
> + ``rte_eth_bond_active_members_get``,
> + ``rte_eth_bond_member_add``,
> + ``rte_eth_bond_member_remove``, and
> + ``rte_eth_bond_members_get``.
> +
>
> ABI Changes
> -----------
> diff --git a/drivers/net/bonding/bonding_testpmd.c b/drivers/net/bonding/bonding_testpmd.c
> index b3c12cada0..1fe85839ed 100644
> --- a/drivers/net/bonding/bonding_testpmd.c
> +++ b/drivers/net/bonding/bonding_testpmd.c
> @@ -279,7 +279,7 @@ struct cmd_set_bonding_primary_result {
> cmdline_fixed_string_t set;
> cmdline_fixed_string_t bonding;
> cmdline_fixed_string_t primary;
> - portid_t slave_id;
> + portid_t member_id;
> portid_t port_id;
> };
>
> @@ -287,13 +287,13 @@ static void cmd_set_bonding_primary_parsed(void *parsed_result,
> __rte_unused struct cmdline *cl, __rte_unused void *data)
> {
> struct cmd_set_bonding_primary_result *res = parsed_result;
> - portid_t master_port_id = res->port_id;
> - portid_t slave_port_id = res->slave_id;
> + portid_t main_port_id = res->port_id;
> + portid_t member_port_id = res->member_id;
>
> - /* Set the primary slave for a bonded device. */
> - if (rte_eth_bond_primary_set(master_port_id, slave_port_id) != 0) {
> - fprintf(stderr, "\t Failed to set primary slave for port = %d.\n",
> - master_port_id);
> + /* Set the primary member for a bonded device. */
> + if (rte_eth_bond_primary_set(main_port_id, member_port_id) != 0) {
> + fprintf(stderr, "\t Failed to set primary member for port = %d.\n",
> + main_port_id);
> return;
> }
> init_port_config();
> @@ -308,141 +308,141 @@ static cmdline_parse_token_string_t cmd_setbonding_primary_bonding =
> static cmdline_parse_token_string_t cmd_setbonding_primary_primary =
> TOKEN_STRING_INITIALIZER(struct cmd_set_bonding_primary_result,
> primary, "primary");
> -static cmdline_parse_token_num_t cmd_setbonding_primary_slave =
> +static cmdline_parse_token_num_t cmd_setbonding_primary_member =
> TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_primary_result,
> - slave_id, RTE_UINT16);
> + member_id, RTE_UINT16);
> static cmdline_parse_token_num_t cmd_setbonding_primary_port =
> TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_primary_result,
> port_id, RTE_UINT16);
>
> static cmdline_parse_inst_t cmd_set_bonding_primary = {
> .f = cmd_set_bonding_primary_parsed,
> - .help_str = "set bonding primary <slave_id> <port_id>: "
> - "Set the primary slave for port_id",
> + .help_str = "set bonding primary <member_id> <port_id>: "
> + "Set the primary member for port_id",
> .data = NULL,
> .tokens = {
> (void *)&cmd_setbonding_primary_set,
> (void *)&cmd_setbonding_primary_bonding,
> (void *)&cmd_setbonding_primary_primary,
> - (void *)&cmd_setbonding_primary_slave,
> + (void *)&cmd_setbonding_primary_member,
> (void *)&cmd_setbonding_primary_port,
> NULL
> }
> };
>
> -/* *** ADD SLAVE *** */
> -struct cmd_add_bonding_slave_result {
> +/* *** ADD MEMBER *** */
> +struct cmd_add_bonding_member_result {
> cmdline_fixed_string_t add;
> cmdline_fixed_string_t bonding;
> - cmdline_fixed_string_t slave;
> - portid_t slave_id;
> + cmdline_fixed_string_t member;
> + portid_t member_id;
> portid_t port_id;
> };
>
> -static void cmd_add_bonding_slave_parsed(void *parsed_result,
> +static void cmd_add_bonding_member_parsed(void *parsed_result,
> __rte_unused struct cmdline *cl, __rte_unused void *data)
> {
> - struct cmd_add_bonding_slave_result *res = parsed_result;
> - portid_t master_port_id = res->port_id;
> - portid_t slave_port_id = res->slave_id;
> + struct cmd_add_bonding_member_result *res = parsed_result;
> + portid_t main_port_id = res->port_id;
> + portid_t member_port_id = res->member_id;
>
> - /* add the slave for a bonded device. */
> - if (rte_eth_bond_slave_add(master_port_id, slave_port_id) != 0) {
> + /* add the member for a bonded device. */
> + if (rte_eth_bond_member_add(main_port_id, member_port_id) != 0) {
> fprintf(stderr,
> - "\t Failed to add slave %d to master port = %d.\n",
> - slave_port_id, master_port_id);
> + "\t Failed to add member %d to main port = %d.\n",
> + member_port_id, main_port_id);
> return;
> }
> - ports[master_port_id].update_conf = 1;
> + ports[main_port_id].update_conf = 1;
> init_port_config();
> - set_port_slave_flag(slave_port_id);
> + set_port_member_flag(member_port_id);
> }
>
> -static cmdline_parse_token_string_t cmd_addbonding_slave_add =
> - TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
> +static cmdline_parse_token_string_t cmd_addbonding_member_add =
> + TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_member_result,
> add, "add");
> -static cmdline_parse_token_string_t cmd_addbonding_slave_bonding =
> - TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
> +static cmdline_parse_token_string_t cmd_addbonding_member_bonding =
> + TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_member_result,
> bonding, "bonding");
> -static cmdline_parse_token_string_t cmd_addbonding_slave_slave =
> - TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
> - slave, "slave");
> -static cmdline_parse_token_num_t cmd_addbonding_slave_slaveid =
> - TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_slave_result,
> - slave_id, RTE_UINT16);
> -static cmdline_parse_token_num_t cmd_addbonding_slave_port =
> - TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_slave_result,
> +static cmdline_parse_token_string_t cmd_addbonding_member_member =
> + TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_member_result,
> + member, "member");
> +static cmdline_parse_token_num_t cmd_addbonding_member_memberid =
> + TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_member_result,
> + member_id, RTE_UINT16);
> +static cmdline_parse_token_num_t cmd_addbonding_member_port =
> + TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_member_result,
> port_id, RTE_UINT16);
>
> -static cmdline_parse_inst_t cmd_add_bonding_slave = {
> - .f = cmd_add_bonding_slave_parsed,
> - .help_str = "add bonding slave <slave_id> <port_id>: "
> - "Add a slave device to a bonded device",
> +static cmdline_parse_inst_t cmd_add_bonding_member = {
> + .f = cmd_add_bonding_member_parsed,
> + .help_str = "add bonding member <member_id> <port_id>: "
> + "Add a member device to a bonded device",
> .data = NULL,
> .tokens = {
> - (void *)&cmd_addbonding_slave_add,
> - (void *)&cmd_addbonding_slave_bonding,
> - (void *)&cmd_addbonding_slave_slave,
> - (void *)&cmd_addbonding_slave_slaveid,
> - (void *)&cmd_addbonding_slave_port,
> + (void *)&cmd_addbonding_member_add,
> + (void *)&cmd_addbonding_member_bonding,
> + (void *)&cmd_addbonding_member_member,
> + (void *)&cmd_addbonding_member_memberid,
> + (void *)&cmd_addbonding_member_port,
> NULL
> }
> };
>
> -/* *** REMOVE SLAVE *** */
> -struct cmd_remove_bonding_slave_result {
> +/* *** REMOVE MEMBER *** */
> +struct cmd_remove_bonding_member_result {
> cmdline_fixed_string_t remove;
> cmdline_fixed_string_t bonding;
> - cmdline_fixed_string_t slave;
> - portid_t slave_id;
> + cmdline_fixed_string_t member;
> + portid_t member_id;
> portid_t port_id;
> };
>
> -static void cmd_remove_bonding_slave_parsed(void *parsed_result,
> +static void cmd_remove_bonding_member_parsed(void *parsed_result,
> __rte_unused struct cmdline *cl, __rte_unused void *data)
> {
> - struct cmd_remove_bonding_slave_result *res = parsed_result;
> - portid_t master_port_id = res->port_id;
> - portid_t slave_port_id = res->slave_id;
> + struct cmd_remove_bonding_member_result *res = parsed_result;
> + portid_t main_port_id = res->port_id;
> + portid_t member_port_id = res->member_id;
>
> - /* remove the slave from a bonded device. */
> - if (rte_eth_bond_slave_remove(master_port_id, slave_port_id) != 0) {
> + /* remove the member from a bonded device. */
> + if (rte_eth_bond_member_remove(main_port_id, member_port_id) != 0) {
> fprintf(stderr,
> - "\t Failed to remove slave %d from master port = %d.\n",
> - slave_port_id, master_port_id);
> + "\t Failed to remove member %d from main port = %d.\n",
> + member_port_id, main_port_id);
> return;
> }
> init_port_config();
> - clear_port_slave_flag(slave_port_id);
> + clear_port_member_flag(member_port_id);
> }
>
> -static cmdline_parse_token_string_t cmd_removebonding_slave_remove =
> - TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
> +static cmdline_parse_token_string_t cmd_removebonding_member_remove =
> + TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_member_result,
> remove, "remove");
> -static cmdline_parse_token_string_t cmd_removebonding_slave_bonding =
> - TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
> +static cmdline_parse_token_string_t cmd_removebonding_member_bonding =
> + TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_member_result,
> bonding, "bonding");
> -static cmdline_parse_token_string_t cmd_removebonding_slave_slave =
> - TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
> - slave, "slave");
> -static cmdline_parse_token_num_t cmd_removebonding_slave_slaveid =
> - TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_slave_result,
> - slave_id, RTE_UINT16);
> -static cmdline_parse_token_num_t cmd_removebonding_slave_port =
> - TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_slave_result,
> +static cmdline_parse_token_string_t cmd_removebonding_member_member =
> + TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_member_result,
> + member, "member");
> +static cmdline_parse_token_num_t cmd_removebonding_member_memberid =
> + TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_member_result,
> + member_id, RTE_UINT16);
> +static cmdline_parse_token_num_t cmd_removebonding_member_port =
> + TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_member_result,
> port_id, RTE_UINT16);
>
> -static cmdline_parse_inst_t cmd_remove_bonding_slave = {
> - .f = cmd_remove_bonding_slave_parsed,
> - .help_str = "remove bonding slave <slave_id> <port_id>: "
> - "Remove a slave device from a bonded device",
> +static cmdline_parse_inst_t cmd_remove_bonding_member = {
> + .f = cmd_remove_bonding_member_parsed,
> + .help_str = "remove bonding member <member_id> <port_id>: "
> + "Remove a member device from a bonded device",
> .data = NULL,
> .tokens = {
> - (void *)&cmd_removebonding_slave_remove,
> - (void *)&cmd_removebonding_slave_bonding,
> - (void *)&cmd_removebonding_slave_slave,
> - (void *)&cmd_removebonding_slave_slaveid,
> - (void *)&cmd_removebonding_slave_port,
> + (void *)&cmd_removebonding_member_remove,
> + (void *)&cmd_removebonding_member_bonding,
> + (void *)&cmd_removebonding_member_member,
> + (void *)&cmd_removebonding_member_memberid,
> + (void *)&cmd_removebonding_member_port,
> NULL
> }
> };
> @@ -706,18 +706,18 @@ static struct testpmd_driver_commands bonding_cmds = {
> },
> {
> &cmd_set_bonding_primary,
> - "set bonding primary (slave_id) (port_id)\n"
> - " Set the primary slave for a bonded device.\n",
> + "set bonding primary (member_id) (port_id)\n"
> + " Set the primary member for a bonded device.\n",
> },
> {
> - &cmd_add_bonding_slave,
> - "add bonding slave (slave_id) (port_id)\n"
> - " Add a slave device to a bonded device.\n",
> + &cmd_add_bonding_member,
> + "add bonding member (member_id) (port_id)\n"
> + " Add a member device to a bonded device.\n",
> },
> {
> - &cmd_remove_bonding_slave,
> - "remove bonding slave (slave_id) (port_id)\n"
> - " Remove a slave device from a bonded device.\n",
> + &cmd_remove_bonding_member,
> + "remove bonding member (member_id) (port_id)\n"
> + " Remove a member device from a bonded device.\n",
> },
> {
> &cmd_create_bonded_device,
> diff --git a/drivers/net/bonding/eth_bond_8023ad_private.h b/drivers/net/bonding/eth_bond_8023ad_private.h
> index a5e1fffea1..77892c0601 100644
> --- a/drivers/net/bonding/eth_bond_8023ad_private.h
> +++ b/drivers/net/bonding/eth_bond_8023ad_private.h
> @@ -15,10 +15,10 @@
> #include "rte_eth_bond_8023ad.h"
>
> #define BOND_MODE_8023AX_UPDATE_TIMEOUT_MS 100
> -/** Maximum number of packets to one slave queued in TX ring. */
> -#define BOND_MODE_8023AX_SLAVE_RX_PKTS 3
> -/** Maximum number of LACP packets from one slave queued in TX ring. */
> -#define BOND_MODE_8023AX_SLAVE_TX_PKTS 1
> +/** Maximum number of packets to one member queued in TX ring. */
> +#define BOND_MODE_8023AX_MEMBER_RX_PKTS 3
> +/** Maximum number of LACP packets from one member queued in TX ring. */
> +#define BOND_MODE_8023AX_MEMBER_TX_PKTS 1
> /**
> * Timeouts definitions (5.4.4 in 802.1AX documentation).
> */
> @@ -113,7 +113,7 @@ struct port {
> enum rte_bond_8023ad_selection selected;
>
> /** Indicates if either allmulti or promisc has been enforced on the
> - * slave so that we can receive lacp packets
> + * member so that we can receive lacp packets
> */
> #define BOND_8023AD_FORCED_ALLMULTI (1 << 0)
> #define BOND_8023AD_FORCED_PROMISC (1 << 1)
> @@ -162,8 +162,8 @@ struct mode8023ad_private {
> uint8_t external_sm;
> struct rte_ether_addr mac_addr;
>
> - struct rte_eth_link slave_link;
> - /***< slave link properties */
> + struct rte_eth_link member_link;
> + /**< member link properties */
>
> /**
> * Configuration of dedicated hardware queues for control plane
> @@ -208,7 +208,7 @@ bond_mode_8023ad_setup(struct rte_eth_dev *dev,
> /**
> * @internal
> *
> - * Enables 802.1AX mode and all active slaves on bonded interface.
> + * Enables 802.1AX mode and all active members on bonded interface.
> *
> * @param dev Bonded interface
> * @return
> @@ -220,7 +220,7 @@ bond_mode_8023ad_enable(struct rte_eth_dev *dev);
> /**
> * @internal
> *
> - * Disables 802.1AX mode of the bonded interface and slaves.
> + * Disables 802.1AX mode of the bonded interface and members.
> *
> * @param dev Bonded interface
> * @return
> @@ -256,43 +256,43 @@ bond_mode_8023ad_stop(struct rte_eth_dev *dev);
> *
> * Passes given slow packet to state machines management logic.
> * @param internals Bonded device private data.
> - * @param slave_id Slave port id.
> + * @param member_id Member port id.
> * @param slot_pkt Slow packet.
> */
> void
> bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
> - uint16_t slave_id, struct rte_mbuf *pkt);
> + uint16_t member_id, struct rte_mbuf *pkt);
>
> /**
> * @internal
> *
> - * Appends given slave used slave
> + * Appends given member to the set of used members.
> *
> * @param dev Bonded interface.
> - * @param port_id Slave port ID to be added
> + * @param port_id Member port ID to be added
> *
> * @return
> * 0 on success, negative value otherwise.
> */
> void
> -bond_mode_8023ad_activate_slave(struct rte_eth_dev *dev, uint16_t port_id);
> +bond_mode_8023ad_activate_member(struct rte_eth_dev *dev, uint16_t port_id);
>
> /**
> * @internal
> *
> - * Denitializes and removes given slave from 802.1AX mode.
> + * Deinitializes and removes given member from 802.1AX mode.
> *
> * @param dev Bonded interface.
> - * @param slave_num Position of slave in active_slaves array
> + * @param member_num Position of member in active_members array
> *
> * @return
> * 0 on success, negative value otherwise.
> */
> int
> -bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *dev, uint16_t slave_pos);
> +bond_mode_8023ad_deactivate_member(struct rte_eth_dev *dev, uint16_t member_pos);
>
> /**
> - * Updates state when MAC was changed on bonded device or one of its slaves.
> + * Updates state when MAC was changed on bonded device or one of its members.
> * @param bond_dev Bonded device
> */
> void
> @@ -300,10 +300,10 @@ bond_mode_8023ad_mac_address_update(struct rte_eth_dev *bond_dev);
>
> int
> bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
> - uint16_t slave_port);
> + uint16_t member_port);
>
> int
> -bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port);
> +bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t member_port);
>
> int
> bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id);
> diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
> index d4f1fb27d4..93d03b0a79 100644
> --- a/drivers/net/bonding/eth_bond_private.h
> +++ b/drivers/net/bonding/eth_bond_private.h
> @@ -18,8 +18,8 @@
> #include "eth_bond_8023ad_private.h"
> #include "rte_eth_bond_alb.h"
>
> -#define PMD_BOND_SLAVE_PORT_KVARG ("slave")
> -#define PMD_BOND_PRIMARY_SLAVE_KVARG ("primary")
> +#define PMD_BOND_MEMBER_PORT_KVARG ("member")
> +#define PMD_BOND_PRIMARY_MEMBER_KVARG ("primary")
> #define PMD_BOND_MODE_KVARG ("mode")
> #define PMD_BOND_AGG_MODE_KVARG ("agg_mode")
> #define PMD_BOND_XMIT_POLICY_KVARG ("xmit_policy")
> @@ -50,8 +50,8 @@ extern const struct rte_flow_ops bond_flow_ops;
> /** Port Queue Mapping Structure */
> struct bond_rx_queue {
> uint16_t queue_id;
> - /**< Next active_slave to poll */
> - uint16_t active_slave;
> + /**< Next active_member to poll */
> + uint16_t active_member;
> /**< Queue Id */
> struct bond_dev_private *dev_private;
> /**< Reference to eth_dev private structure */
> @@ -74,19 +74,19 @@ struct bond_tx_queue {
> /**< Copy of TX configuration structure for queue */
> };
>
> -/** Bonded slave devices structure */
> -struct bond_ethdev_slave_ports {
> - uint16_t slaves[RTE_MAX_ETHPORTS]; /**< Slave port id array */
> - uint16_t slave_count; /**< Number of slaves */
> +/** Bonded member devices structure */
> +struct bond_ethdev_member_ports {
> + uint16_t members[RTE_MAX_ETHPORTS]; /**< Member port id array */
> + uint16_t member_count; /**< Number of members */
> };
>
> -struct bond_slave_details {
> +struct bond_member_details {
> uint16_t port_id;
>
> uint8_t link_status_poll_enabled;
> uint8_t link_status_wait_to_complete;
> uint8_t last_link_status;
> - /**< Port Id of slave eth_dev */
> + /**< Port Id of member eth_dev */
> struct rte_ether_addr persisted_mac_addr;
>
> uint16_t reta_size;
> @@ -94,7 +94,7 @@ struct bond_slave_details {
>
> struct rte_flow {
> TAILQ_ENTRY(rte_flow) next;
> - /* Slaves flows */
> + /* Members flows */
> struct rte_flow *flows[RTE_MAX_ETHPORTS];
> /* Flow description for synchronization */
> struct rte_flow_conv_rule rule;
> @@ -102,7 +102,7 @@ struct rte_flow {
> };
>
> typedef void (*burst_xmit_hash_t)(struct rte_mbuf **buf, uint16_t nb_pkts,
> - uint16_t slave_count, uint16_t *slaves);
> + uint16_t member_count, uint16_t *members);
>
> /** Link Bonding PMD device private configuration Structure */
> struct bond_dev_private {
> @@ -112,8 +112,8 @@ struct bond_dev_private {
> rte_spinlock_t lock;
> rte_spinlock_t lsc_lock;
>
> - uint16_t primary_port; /**< Primary Slave Port */
> - uint16_t current_primary_port; /**< Primary Slave Port */
> + uint16_t primary_port; /**< Primary Member Port */
> + uint16_t current_primary_port; /**< Primary Member Port */
> uint16_t user_defined_primary_port;
> /**< Flag for whether primary port is user defined or not */
>
> @@ -137,16 +137,16 @@ struct bond_dev_private {
> uint16_t nb_rx_queues; /**< Total number of rx queues */
> uint16_t nb_tx_queues; /**< Total number of tx queues*/
>
> - uint16_t active_slave_count; /**< Number of active slaves */
> - uint16_t active_slaves[RTE_MAX_ETHPORTS]; /**< Active slave list */
> + uint16_t active_member_count; /**< Number of active members */
> + uint16_t active_members[RTE_MAX_ETHPORTS]; /**< Active member list */
>
> - uint16_t slave_count; /**< Number of bonded slaves */
> - struct bond_slave_details slaves[RTE_MAX_ETHPORTS];
> - /**< Array of bonded slaves details */
> + uint16_t member_count; /**< Number of bonded members */
> + struct bond_member_details members[RTE_MAX_ETHPORTS];
> + /**< Array of bonded members details */
>
> struct mode8023ad_private mode4;
> - uint16_t tlb_slaves_order[RTE_MAX_ETHPORTS];
> - /**< TLB active slaves send order */
> + uint16_t tlb_members_order[RTE_MAX_ETHPORTS];
> + /**< TLB active members send order */
> struct mode_alb_private mode6;
>
> uint64_t rx_offload_capa; /** Rx offload capability */
> @@ -177,7 +177,7 @@ struct bond_dev_private {
> uint8_t rss_key_len; /**< hash key length in bytes. */
>
> struct rte_kvargs *kvlist;
> - uint8_t slave_update_idx;
> + uint8_t member_update_idx;
>
> bool kvargs_processing_is_done;
>
> @@ -191,19 +191,21 @@ struct bond_dev_private {
> extern const struct eth_dev_ops default_dev_ops;
>
> int
> -check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev);
> +check_for_main_bonded_ethdev(const struct rte_eth_dev *eth_dev);
>
> int
> check_for_bonded_ethdev(const struct rte_eth_dev *eth_dev);
>
> -/* Search given slave array to find position of given id.
> - * Return slave pos or slaves_count if not found. */
> +/*
> + * Search given member array to find position of given id.
> + * Return member pos or members_count if not found.
> + */
> static inline uint16_t
> -find_slave_by_id(uint16_t *slaves, uint16_t slaves_count, uint16_t slave_id) {
> +find_member_by_id(uint16_t *members, uint16_t members_count, uint16_t member_id) {
>
> uint16_t pos;
> - for (pos = 0; pos < slaves_count; pos++) {
> - if (slave_id == slaves[pos])
> + for (pos = 0; pos < members_count; pos++) {
> + if (member_id == members[pos])
> break;
> }
>
> @@ -217,13 +219,13 @@ int
> valid_bonded_port_id(uint16_t port_id);
>
> int
> -valid_slave_port_id(struct bond_dev_private *internals, uint16_t port_id);
> +valid_member_port_id(struct bond_dev_private *internals, uint16_t port_id);
>
> void
> -deactivate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id);
> +deactivate_member(struct rte_eth_dev *eth_dev, uint16_t port_id);
>
> void
> -activate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id);
> +activate_member(struct rte_eth_dev *eth_dev, uint16_t port_id);
>
> int
> mac_address_set(struct rte_eth_dev *eth_dev,
> @@ -234,66 +236,66 @@ mac_address_get(struct rte_eth_dev *eth_dev,
> struct rte_ether_addr *dst_mac_addr);
>
> int
> -mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev);
> +mac_address_members_update(struct rte_eth_dev *bonded_eth_dev);
>
> int
> -slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
> - uint16_t slave_port_id);
> +member_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
> + uint16_t member_port_id);
>
> int
> -slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
> - uint16_t slave_port_id);
> +member_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
> + uint16_t member_port_id);
>
> int
> bond_ethdev_mode_set(struct rte_eth_dev *eth_dev, uint8_t mode);
>
> int
> -slave_configure(struct rte_eth_dev *bonded_eth_dev,
> - struct rte_eth_dev *slave_eth_dev);
> +member_configure(struct rte_eth_dev *bonded_eth_dev,
> + struct rte_eth_dev *member_eth_dev);
>
> int
> -slave_start(struct rte_eth_dev *bonded_eth_dev,
> - struct rte_eth_dev *slave_eth_dev);
> +member_start(struct rte_eth_dev *bonded_eth_dev,
> + struct rte_eth_dev *member_eth_dev);
>
> void
> -slave_remove(struct bond_dev_private *internals,
> - struct rte_eth_dev *slave_eth_dev);
> +member_remove(struct bond_dev_private *internals,
> + struct rte_eth_dev *member_eth_dev);
>
> void
> -slave_add(struct bond_dev_private *internals,
> - struct rte_eth_dev *slave_eth_dev);
> +member_add(struct bond_dev_private *internals,
> + struct rte_eth_dev *member_eth_dev);
>
> void
> burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
> - uint16_t slave_count, uint16_t *slaves);
> + uint16_t member_count, uint16_t *members);
>
> void
> burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
> - uint16_t slave_count, uint16_t *slaves);
> + uint16_t member_count, uint16_t *members);
>
> void
> burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
> - uint16_t slave_count, uint16_t *slaves);
> + uint16_t member_count, uint16_t *members);
>
>
> void
> bond_ethdev_primary_set(struct bond_dev_private *internals,
> - uint16_t slave_port_id);
> + uint16_t member_port_id);
>
> int
> bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
> void *param, void *ret_param);
>
> int
> -bond_ethdev_parse_slave_port_kvarg(const char *key,
> +bond_ethdev_parse_member_port_kvarg(const char *key,
> const char *value, void *extra_args);
>
> int
> -bond_ethdev_parse_slave_mode_kvarg(const char *key,
> +bond_ethdev_parse_member_mode_kvarg(const char *key,
> const char *value, void *extra_args);
>
> int
> -bond_ethdev_parse_slave_agg_mode_kvarg(const char *key __rte_unused,
> +bond_ethdev_parse_member_agg_mode_kvarg(const char *key __rte_unused,
> const char *value, void *extra_args);
>
> int
> @@ -301,7 +303,7 @@ bond_ethdev_parse_socket_id_kvarg(const char *key,
> const char *value, void *extra_args);
>
> int
> -bond_ethdev_parse_primary_slave_port_id_kvarg(const char *key,
> +bond_ethdev_parse_primary_member_port_id_kvarg(const char *key,
> const char *value, void *extra_args);
>
> int
> @@ -323,7 +325,7 @@ void
> bond_tlb_enable(struct bond_dev_private *internals);
>
> void
> -bond_tlb_activate_slave(struct bond_dev_private *internals);
> +bond_tlb_activate_member(struct bond_dev_private *internals);
>
> int
> bond_ethdev_stop(struct rte_eth_dev *eth_dev);
> diff --git a/drivers/net/bonding/rte_eth_bond.h b/drivers/net/bonding/rte_eth_bond.h
> index 874aa91a5f..f0cd5767ad 100644
> --- a/drivers/net/bonding/rte_eth_bond.h
> +++ b/drivers/net/bonding/rte_eth_bond.h
> @@ -10,7 +10,7 @@
> *
> * RTE Link Bonding Ethernet Device
> * Link Bonding for 1GbE and 10GbE ports to allow the aggregation of multiple
> - * (slave) NICs into a single logical interface. The bonded device processes
> + * (member) NICs into a single logical interface. The bonded device processes
> * these interfaces based on the mode of operation specified and supported.
> * This implementation supports 4 modes of operation round robin, active backup
> * balance and broadcast. Providing redundant links, fault tolerance and/or
> @@ -28,24 +28,28 @@ extern "C" {
> #define BONDING_MODE_ROUND_ROBIN (0)
> /**< Round Robin (Mode 0).
> * In this mode all transmitted packets will be balanced equally across all
> - * active slaves of the bonded in a round robin fashion. */
> + * active members of the bonded device in a round robin fashion.
> + */
> #define BONDING_MODE_ACTIVE_BACKUP (1)
> /**< Active Backup (Mode 1).
> * In this mode all packets transmitted will be transmitted on the primary
> - * slave until such point as the primary slave is no longer available and then
> - * transmitted packets will be sent on the next available slaves. The primary
> - * slave can be defined by the user but defaults to the first active slave
> - * available if not specified. */
> + * member until such point as the primary member is no longer available and then
> + * transmitted packets will be sent on the next available members. The primary
> + * member can be defined by the user but defaults to the first active member
> + * available if not specified.
> + */
> #define BONDING_MODE_BALANCE (2)
> /**< Balance (Mode 2).
> * In this mode all packets transmitted will be balanced across the available
> - * slaves using one of three available transmit policies - l2, l2+3 or l3+4.
> + * members using one of three available transmit policies - l2, l2+3 or l3+4.
> * See BALANCE_XMIT_POLICY macros definitions for further details on transmit
> - * policies. */
> + * policies.
> + */
> #define BONDING_MODE_BROADCAST (3)
> /**< Broadcast (Mode 3).
> * In this mode all transmitted packets will be transmitted on all available
> - * active slaves of the bonded. */
> + * active members of the bonded device.
> + */
> #define BONDING_MODE_8023AD (4)
> /**< 802.3AD (Mode 4).
> *
> @@ -62,22 +66,22 @@ extern "C" {
> * be handled with the expected latency and this may cause the link status to be
> * incorrectly marked as down or failure to correctly negotiate with peers.
> * - For optimal performance during initial handshaking the array of mbufs provided
> - * to rx_burst should be at least 2 times the slave count size.
> - *
> + * to rx_burst should be at least 2 times the member count.
> */
> #define BONDING_MODE_TLB (5)
> /**< Adaptive TLB (Mode 5)
> * This mode provides an adaptive transmit load balancing. It dynamically
> - * changes the transmitting slave, according to the computed load. Statistics
> - * are collected in 100ms intervals and scheduled every 10ms */
> + * changes the transmitting member, according to the computed load. Statistics
> + * are collected in 100ms intervals and scheduled every 10ms.
> + */
> #define BONDING_MODE_ALB (6)
> /**< Adaptive Load Balancing (Mode 6)
> * This mode includes adaptive TLB and receive load balancing (RLB). In RLB the
> * bonding driver intercepts ARP replies send by local system and overwrites its
> * source MAC address, so that different peers send data to the server on
> - * different slave interfaces. When local system sends ARP request, it saves IP
> + * different member interfaces. When local system sends ARP request, it saves IP
> * information from it. When ARP reply from that peer is received, its MAC is
> - * stored, one of slave MACs assigned and ARP reply send to that peer.
> + * stored, one of the member MACs assigned and ARP reply sent to that peer.
> */
>
> /* Balance Mode Transmit Policies */
> @@ -113,28 +117,30 @@ int
> rte_eth_bond_free(const char *name);
>
> /**
> - * Add a rte_eth_dev device as a slave to the bonded device
> + * Add a rte_eth_dev device as a member to the bonded device
> *
> * @param bonded_port_id Port ID of bonded device.
> - * @param slave_port_id Port ID of slave device.
> + * @param member_port_id Port ID of member device.
> *
> * @return
> * 0 on success, negative value otherwise
> */
> +__rte_experimental
> int
> -rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id);
> +rte_eth_bond_member_add(uint16_t bonded_port_id, uint16_t member_port_id);
>
> /**
> - * Remove a slave rte_eth_dev device from the bonded device
> + * Remove a member rte_eth_dev device from the bonded device
> *
> * @param bonded_port_id Port ID of bonded device.
> - * @param slave_port_id Port ID of slave device.
> + * @param member_port_id Port ID of member device.
> *
> * @return
> * 0 on success, negative value otherwise
> */
> +__rte_experimental
> int
> -rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id);
> +rte_eth_bond_member_remove(uint16_t bonded_port_id, uint16_t member_port_id);
>
> /**
> * Set link bonding mode of bonded device
> @@ -160,65 +166,67 @@ int
> rte_eth_bond_mode_get(uint16_t bonded_port_id);
>
> /**
> - * Set slave rte_eth_dev as primary slave of bonded device
> + * Set member rte_eth_dev as primary member of bonded device
> *
> * @param bonded_port_id Port ID of bonded device.
> - * @param slave_port_id Port ID of slave device.
> + * @param member_port_id Port ID of member device.
> *
> * @return
> * 0 on success, negative value otherwise
> */
> int
> -rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id);
> +rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t member_port_id);
>
> /**
> - * Get primary slave of bonded device
> + * Get primary member of bonded device
> *
> * @param bonded_port_id Port ID of bonded device.
> *
> * @return
> - * Port Id of primary slave on success, -1 on failure
> + * Port ID of primary member on success, -1 on failure
> */
> int
> rte_eth_bond_primary_get(uint16_t bonded_port_id);
>
> /**
> - * Populate an array with list of the slaves port id's of the bonded device
> + * Populate an array with the list of member port IDs of the bonded device
> *
> * @param bonded_port_id Port ID of bonded eth_dev to interrogate
> - * @param slaves Array to be populated with the current active slaves
> - * @param len Length of slaves array
> + * @param members Array to be populated with the current members
> + * @param len Length of members array
> *
> * @return
> - * Number of slaves associated with bonded device on success,
> + * Number of members associated with bonded device on success,
> * negative value otherwise
> */
> +__rte_experimental
> int
> -rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
> - uint16_t len);
> +rte_eth_bond_members_get(uint16_t bonded_port_id, uint16_t members[],
> + uint16_t len);
>
> /**
> - * Populate an array with list of the active slaves port id's of the bonded
> + * Populate an array with the list of active member port IDs of the bonded
> * device.
> *
> * @param bonded_port_id Port ID of bonded eth_dev to interrogate
> - * @param slaves Array to be populated with the current active slaves
> - * @param len Length of slaves array
> + * @param members Array to be populated with the current active members
> + * @param len Length of members array
> *
> * @return
> - * Number of active slaves associated with bonded device on success,
> + * Number of active members associated with bonded device on success,
> * negative value otherwise
> */
> +__rte_experimental
> int
> -rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
> - uint16_t len);
> +rte_eth_bond_active_members_get(uint16_t bonded_port_id, uint16_t members[],
> + uint16_t len);
>
> /**
> - * Set explicit MAC address to use on bonded device and it's slaves.
> + * Set explicit MAC address to use on bonded device and its members.
> *
> * @param bonded_port_id Port ID of bonded device.
> * @param mac_addr MAC Address to use on bonded device overriding
> - * slaves MAC addresses
> + * members' MAC addresses
> *
> * @return
> * 0 on success, negative value otherwise
> @@ -228,8 +236,8 @@ rte_eth_bond_mac_address_set(uint16_t bonded_port_id,
> struct rte_ether_addr *mac_addr);
>
> /**
> - * Reset bonded device to use MAC from primary slave on bonded device and it's
> - * slaves.
> + * Reset bonded device to use MAC from primary member on bonded device and its
> + * members.
> *
> * @param bonded_port_id Port ID of bonded device.
> *
> @@ -266,7 +274,7 @@ rte_eth_bond_xmit_policy_get(uint16_t bonded_port_id);
>
> /**
> * Set the link monitoring frequency (in ms) for monitoring the link status of
> - * slave devices
> + * member devices
> *
> * @param bonded_port_id Port ID of bonded device.
> * @param internal_ms Monitoring interval in milliseconds
> @@ -280,7 +288,7 @@ rte_eth_bond_link_monitoring_set(uint16_t bonded_port_id, uint32_t internal_ms);
>
> /**
> * Get the current link monitoring frequency (in ms) for monitoring of the link
> - * status of slave devices
> + * status of member devices
> *
> * @param bonded_port_id Port ID of bonded device.
> *
> diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
> index 4a266bb2ca..ac9f414e74 100644
> --- a/drivers/net/bonding/rte_eth_bond_8023ad.c
> +++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
> @@ -19,7 +19,7 @@ static void bond_mode_8023ad_ext_periodic_cb(void *arg);
> #define MODE4_DEBUG(fmt, ...) \
> rte_log(RTE_LOG_DEBUG, bond_logtype, \
> "%6u [Port %u: %s] " fmt, \
> - bond_dbg_get_time_diff_ms(), slave_id, \
> + bond_dbg_get_time_diff_ms(), member_id, \
> __func__, ##__VA_ARGS__)
>
> static uint64_t start_time;
> @@ -184,9 +184,9 @@ set_warning_flags(struct port *port, uint16_t flags)
> }
>
> static void
> -show_warnings(uint16_t slave_id)
> +show_warnings(uint16_t member_id)
> {
> - struct port *port = &bond_mode_8023ad_ports[slave_id];
> + struct port *port = &bond_mode_8023ad_ports[member_id];
> uint8_t warnings;
>
> do {
> @@ -205,36 +205,36 @@ show_warnings(uint16_t slave_id)
>
> if (warnings & WRN_RX_QUEUE_FULL) {
> RTE_BOND_LOG(DEBUG,
> - "Slave %u: failed to enqueue LACP packet into RX ring.\n"
> + "Member %u: failed to enqueue LACP packet into RX ring.\n"
> "Receive and transmit functions must be invoked on bonded"
> "interface at least 10 times per second or LACP will notwork correctly",
> - slave_id);
> + member_id);
> }
>
> if (warnings & WRN_TX_QUEUE_FULL) {
> RTE_BOND_LOG(DEBUG,
> - "Slave %u: failed to enqueue LACP packet into TX ring.\n"
> + "Member %u: failed to enqueue LACP packet into TX ring.\n"
> "Receive and transmit functions must be invoked on bonded"
> "interface at least 10 times per second or LACP will not work correctly",
> - slave_id);
> + member_id);
> }
>
> if (warnings & WRN_RX_MARKER_TO_FAST)
> - RTE_BOND_LOG(INFO, "Slave %u: marker to early - ignoring.",
> - slave_id);
> + RTE_BOND_LOG(INFO, "Member %u: marker too early - ignoring.",
> + member_id);
>
> if (warnings & WRN_UNKNOWN_SLOW_TYPE) {
> RTE_BOND_LOG(INFO,
> - "Slave %u: ignoring unknown slow protocol frame type",
> - slave_id);
> + "Member %u: ignoring unknown slow protocol frame type",
> + member_id);
> }
>
> if (warnings & WRN_UNKNOWN_MARKER_TYPE)
> - RTE_BOND_LOG(INFO, "Slave %u: ignoring unknown marker type",
> - slave_id);
> + RTE_BOND_LOG(INFO, "Member %u: ignoring unknown marker type",
> + member_id);
>
> if (warnings & WRN_NOT_LACP_CAPABLE)
> - MODE4_DEBUG("Port %u is not LACP capable!\n", slave_id);
> + MODE4_DEBUG("Port %u is not LACP capable!\n", member_id);
> }
>
> static void
> @@ -256,10 +256,10 @@ record_default(struct port *port)
> * @param port Port on which LACPDU was received.
> */
> static void
> -rx_machine(struct bond_dev_private *internals, uint16_t slave_id,
> +rx_machine(struct bond_dev_private *internals, uint16_t member_id,
> struct lacpdu *lacp)
> {
> - struct port *agg, *port = &bond_mode_8023ad_ports[slave_id];
> + struct port *agg, *port = &bond_mode_8023ad_ports[member_id];
> uint64_t timeout;
>
> if (SM_FLAG(port, BEGIN)) {
> @@ -389,9 +389,9 @@ rx_machine(struct bond_dev_private *internals, uint16_t slave_id,
> * @param port Port to handle state machine.
> */
> static void
> -periodic_machine(struct bond_dev_private *internals, uint16_t slave_id)
> +periodic_machine(struct bond_dev_private *internals, uint16_t member_id)
> {
> - struct port *port = &bond_mode_8023ad_ports[slave_id];
> + struct port *port = &bond_mode_8023ad_ports[member_id];
> /* Calculate if either site is LACP enabled */
> uint64_t timeout;
> uint8_t active = ACTOR_STATE(port, LACP_ACTIVE) ||
> @@ -451,9 +451,9 @@ periodic_machine(struct bond_dev_private *internals, uint16_t slave_id)
> * @param port Port to handle state machine.
> */
> static void
> -mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
> +mux_machine(struct bond_dev_private *internals, uint16_t member_id)
> {
> - struct port *port = &bond_mode_8023ad_ports[slave_id];
> + struct port *port = &bond_mode_8023ad_ports[member_id];
>
> /* Save current state for later use */
> const uint8_t state_mask = STATE_SYNCHRONIZATION | STATE_DISTRIBUTING |
> @@ -527,8 +527,8 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
> SM_FLAG_SET(port, NTT);
> MODE4_DEBUG("COLLECTING -> DISTRIBUTING\n");
> RTE_BOND_LOG(INFO,
> - "Bond %u: slave id %u distributing started.",
> - internals->port_id, slave_id);
> + "Bond %u: member id %u distributing started.",
> + internals->port_id, member_id);
> }
> } else {
> if (!PARTNER_STATE(port, COLLECTING)) {
> @@ -538,8 +538,8 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
> SM_FLAG_SET(port, NTT);
> MODE4_DEBUG("DISTRIBUTING -> COLLECTING\n");
> RTE_BOND_LOG(INFO,
> - "Bond %u: slave id %u distributing stopped.",
> - internals->port_id, slave_id);
> + "Bond %u: member id %u distributing stopped.",
> + internals->port_id, member_id);
> }
> }
> }
> @@ -554,9 +554,9 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
> * @param port
> */
> static void
> -tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
> +tx_machine(struct bond_dev_private *internals, uint16_t member_id)
> {
> - struct port *agg, *port = &bond_mode_8023ad_ports[slave_id];
> + struct port *agg, *port = &bond_mode_8023ad_ports[member_id];
>
> struct rte_mbuf *lacp_pkt = NULL;
> struct lacpdu_header *hdr;
> @@ -587,7 +587,7 @@ tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
>
> /* Source and destination MAC */
> rte_ether_addr_copy(&lacp_mac_addr, &hdr->eth_hdr.dst_addr);
> - rte_eth_macaddr_get(slave_id, &hdr->eth_hdr.src_addr);
> + rte_eth_macaddr_get(member_id, &hdr->eth_hdr.src_addr);
> hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
>
> lacpdu = &hdr->lacpdu;
> @@ -635,10 +635,10 @@ tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
> return;
> }
> } else {
> - uint16_t pkts_sent = rte_eth_tx_prepare(slave_id,
> + uint16_t pkts_sent = rte_eth_tx_prepare(member_id,
> internals->mode4.dedicated_queues.tx_qid,
> &lacp_pkt, 1);
> - pkts_sent = rte_eth_tx_burst(slave_id,
> + pkts_sent = rte_eth_tx_burst(member_id,
> internals->mode4.dedicated_queues.tx_qid,
> &lacp_pkt, pkts_sent);
> if (pkts_sent != 1) {
> @@ -679,40 +679,40 @@ max_index(uint64_t *a, int n)
> * @param port_pos Port to assign.
> */
> static void
> -selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
> +selection_logic(struct bond_dev_private *internals, uint16_t member_id)
> {
> struct port *agg, *port;
> - uint16_t slaves_count, new_agg_id, i, j = 0;
> - uint16_t *slaves;
> + uint16_t members_count, new_agg_id, i, j = 0;
> + uint16_t *members;
> uint64_t agg_bandwidth[RTE_MAX_ETHPORTS] = {0};
> uint64_t agg_count[RTE_MAX_ETHPORTS] = {0};
> - uint16_t default_slave = 0;
> + uint16_t default_member = 0;
> struct rte_eth_link link_info;
> uint16_t agg_new_idx = 0;
> int ret;
>
> - slaves = internals->active_slaves;
> - slaves_count = internals->active_slave_count;
> - port = &bond_mode_8023ad_ports[slave_id];
> + members = internals->active_members;
> + members_count = internals->active_member_count;
> + port = &bond_mode_8023ad_ports[member_id];
>
> /* Search for aggregator suitable for this port */
> - for (i = 0; i < slaves_count; ++i) {
> - agg = &bond_mode_8023ad_ports[slaves[i]];
> + for (i = 0; i < members_count; ++i) {
> + agg = &bond_mode_8023ad_ports[members[i]];
> /* Skip ports that are not aggregators */
> - if (agg->aggregator_port_id != slaves[i])
> + if (agg->aggregator_port_id != members[i])
> continue;
>
> - ret = rte_eth_link_get_nowait(slaves[i], &link_info);
> + ret = rte_eth_link_get_nowait(members[i], &link_info);
> if (ret < 0) {
> RTE_BOND_LOG(ERR,
> - "Slave (port %u) link get failed: %s\n",
> - slaves[i], rte_strerror(-ret));
> + "Member (port %u) link get failed: %s\n",
> + members[i], rte_strerror(-ret));
> continue;
> }
> agg_count[i] += 1;
> agg_bandwidth[i] += link_info.link_speed;
>
> - /* Actors system ID is not checked since all slave device have the same
> + /* Actor's system ID is not checked since all member devices have the same
> * ID (MAC address). */
> if ((agg->actor.key == port->actor.key &&
> agg->partner.system_priority == port->partner.system_priority &&
> @@ -724,31 +724,31 @@ selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
> rte_cpu_to_be_16(BOND_LINK_FULL_DUPLEX_KEY)) != 0) {
>
> if (j == 0)
> - default_slave = i;
> + default_member = i;
> j++;
> }
> }
>
> switch (internals->mode4.agg_selection) {
> case AGG_COUNT:
> - agg_new_idx = max_index(agg_count, slaves_count);
> - new_agg_id = slaves[agg_new_idx];
> + agg_new_idx = max_index(agg_count, members_count);
> + new_agg_id = members[agg_new_idx];
> break;
> case AGG_BANDWIDTH:
> - agg_new_idx = max_index(agg_bandwidth, slaves_count);
> - new_agg_id = slaves[agg_new_idx];
> + agg_new_idx = max_index(agg_bandwidth, members_count);
> + new_agg_id = members[agg_new_idx];
> break;
> case AGG_STABLE:
> - if (default_slave == slaves_count)
> - new_agg_id = slaves[slave_id];
> + if (default_member == members_count)
> + new_agg_id = members[member_id];
> else
> - new_agg_id = slaves[default_slave];
> + new_agg_id = members[default_member];
> break;
> default:
> - if (default_slave == slaves_count)
> - new_agg_id = slaves[slave_id];
> + if (default_member == members_count)
> + new_agg_id = members[member_id];
> else
> - new_agg_id = slaves[default_slave];
> + new_agg_id = members[default_member];
> break;
> }
>
> @@ -758,7 +758,7 @@ selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
> MODE4_DEBUG("-> SELECTED: ID=%3u\n"
> "\t%s aggregator ID=%3u\n",
> port->aggregator_port_id,
> - port->aggregator_port_id == slave_id ?
> + port->aggregator_port_id == member_id ?
> "aggregator not found, using default" : "aggregator found",
> port->aggregator_port_id);
> }
> @@ -802,7 +802,7 @@ link_speed_key(uint16_t speed) {
> }
>
> static void
> -rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
> +rx_machine_update(struct bond_dev_private *internals, uint16_t member_id,
> struct rte_mbuf *lacp_pkt) {
> struct lacpdu_header *lacp;
> struct lacpdu_actor_partner_params *partner;
> @@ -813,7 +813,7 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
> RTE_ASSERT(lacp->lacpdu.subtype == SLOW_SUBTYPE_LACP);
>
> partner = &lacp->lacpdu.partner;
> - port = &bond_mode_8023ad_ports[slave_id];
> + port = &bond_mode_8023ad_ports[member_id];
> agg = &bond_mode_8023ad_ports[port->aggregator_port_id];
>
> if (rte_is_zero_ether_addr(&partner->port_params.system) ||
> @@ -822,7 +822,7 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
> /* This LACP frame is sending to the bonding port
> * so pass it to rx_machine.
> */
> - rx_machine(internals, slave_id, &lacp->lacpdu);
> + rx_machine(internals, member_id, &lacp->lacpdu);
> } else {
> char preferred_system_name[RTE_ETHER_ADDR_FMT_SIZE];
> char self_system_name[RTE_ETHER_ADDR_FMT_SIZE];
> @@ -837,16 +837,16 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
> }
> rte_pktmbuf_free(lacp_pkt);
> } else
> - rx_machine(internals, slave_id, NULL);
> + rx_machine(internals, member_id, NULL);
> }
>
> static void
> bond_mode_8023ad_dedicated_rxq_process(struct bond_dev_private *internals,
> - uint16_t slave_id)
> + uint16_t member_id)
> {
> #define DEDICATED_QUEUE_BURST_SIZE 32
> struct rte_mbuf *lacp_pkt[DEDICATED_QUEUE_BURST_SIZE];
> - uint16_t rx_count = rte_eth_rx_burst(slave_id,
> + uint16_t rx_count = rte_eth_rx_burst(member_id,
> internals->mode4.dedicated_queues.rx_qid,
> lacp_pkt, DEDICATED_QUEUE_BURST_SIZE);
>
> @@ -854,10 +854,10 @@ bond_mode_8023ad_dedicated_rxq_process(struct bond_dev_private *internals,
> uint16_t i;
>
> for (i = 0; i < rx_count; i++)
> - bond_mode_8023ad_handle_slow_pkt(internals, slave_id,
> + bond_mode_8023ad_handle_slow_pkt(internals, member_id,
> lacp_pkt[i]);
> } else {
> - rx_machine_update(internals, slave_id, NULL);
> + rx_machine_update(internals, member_id, NULL);
> }
> }
>
> @@ -868,23 +868,23 @@ bond_mode_8023ad_periodic_cb(void *arg)
> struct bond_dev_private *internals = bond_dev->data->dev_private;
> struct port *port;
> struct rte_eth_link link_info;
> - struct rte_ether_addr slave_addr;
> + struct rte_ether_addr member_addr;
> struct rte_mbuf *lacp_pkt = NULL;
> - uint16_t slave_id;
> + uint16_t member_id;
> uint16_t i;
>
>
> /* Update link status on each port */
> - for (i = 0; i < internals->active_slave_count; i++) {
> + for (i = 0; i < internals->active_member_count; i++) {
> uint16_t key;
> int ret;
>
> - slave_id = internals->active_slaves[i];
> - ret = rte_eth_link_get_nowait(slave_id, &link_info);
> + member_id = internals->active_members[i];
> + ret = rte_eth_link_get_nowait(member_id, &link_info);
> if (ret < 0) {
> RTE_BOND_LOG(ERR,
> - "Slave (port %u) link get failed: %s\n",
> - slave_id, rte_strerror(-ret));
> + "Member (port %u) link get failed: %s\n",
> + member_id, rte_strerror(-ret));
> }
>
> if (ret >= 0 && link_info.link_status != 0) {
> @@ -895,8 +895,8 @@ bond_mode_8023ad_periodic_cb(void *arg)
> key = 0;
> }
>
> - rte_eth_macaddr_get(slave_id, &slave_addr);
> - port = &bond_mode_8023ad_ports[slave_id];
> + rte_eth_macaddr_get(member_id, &member_addr);
> + port = &bond_mode_8023ad_ports[member_id];
>
> key = rte_cpu_to_be_16(key);
> if (key != port->actor.key) {
> @@ -907,16 +907,16 @@ bond_mode_8023ad_periodic_cb(void *arg)
> SM_FLAG_SET(port, NTT);
> }
>
> - if (!rte_is_same_ether_addr(&port->actor.system, &slave_addr)) {
> - rte_ether_addr_copy(&slave_addr, &port->actor.system);
> - if (port->aggregator_port_id == slave_id)
> + if (!rte_is_same_ether_addr(&port->actor.system, &member_addr)) {
> + rte_ether_addr_copy(&member_addr, &port->actor.system);
> + if (port->aggregator_port_id == member_id)
> SM_FLAG_SET(port, NTT);
> }
> }
>
> - for (i = 0; i < internals->active_slave_count; i++) {
> - slave_id = internals->active_slaves[i];
> - port = &bond_mode_8023ad_ports[slave_id];
> + for (i = 0; i < internals->active_member_count; i++) {
> + member_id = internals->active_members[i];
> + port = &bond_mode_8023ad_ports[member_id];
>
> if ((port->actor.key &
> rte_cpu_to_be_16(BOND_LINK_FULL_DUPLEX_KEY)) == 0) {
> @@ -947,19 +947,19 @@ bond_mode_8023ad_periodic_cb(void *arg)
> if (retval != 0)
> lacp_pkt = NULL;
>
> - rx_machine_update(internals, slave_id, lacp_pkt);
> + rx_machine_update(internals, member_id, lacp_pkt);
> } else {
> bond_mode_8023ad_dedicated_rxq_process(internals,
> - slave_id);
> + member_id);
> }
>
> - periodic_machine(internals, slave_id);
> - mux_machine(internals, slave_id);
> - tx_machine(internals, slave_id);
> - selection_logic(internals, slave_id);
> + periodic_machine(internals, member_id);
> + mux_machine(internals, member_id);
> + tx_machine(internals, member_id);
> + selection_logic(internals, member_id);
>
> SM_FLAG_CLR(port, BEGIN);
> - show_warnings(slave_id);
> + show_warnings(member_id);
> }
>
> rte_eal_alarm_set(internals->mode4.update_timeout_us,
> @@ -967,34 +967,34 @@ bond_mode_8023ad_periodic_cb(void *arg)
> }
>
> static int
> -bond_mode_8023ad_register_lacp_mac(uint16_t slave_id)
> +bond_mode_8023ad_register_lacp_mac(uint16_t member_id)
> {
> int ret;
>
> - ret = rte_eth_allmulticast_enable(slave_id);
> + ret = rte_eth_allmulticast_enable(member_id);
> if (ret != 0) {
> RTE_BOND_LOG(ERR,
> "failed to enable allmulti mode for port %u: %s",
> - slave_id, rte_strerror(-ret));
> + member_id, rte_strerror(-ret));
> }
> - if (rte_eth_allmulticast_get(slave_id)) {
> + if (rte_eth_allmulticast_get(member_id)) {
> RTE_BOND_LOG(DEBUG, "forced allmulti for port %u",
> - slave_id);
> - bond_mode_8023ad_ports[slave_id].forced_rx_flags =
> + member_id);
> + bond_mode_8023ad_ports[member_id].forced_rx_flags =
> BOND_8023AD_FORCED_ALLMULTI;
> return 0;
> }
>
> - ret = rte_eth_promiscuous_enable(slave_id);
> + ret = rte_eth_promiscuous_enable(member_id);
> if (ret != 0) {
> RTE_BOND_LOG(ERR,
> "failed to enable promiscuous mode for port %u: %s",
> - slave_id, rte_strerror(-ret));
> + member_id, rte_strerror(-ret));
> }
> - if (rte_eth_promiscuous_get(slave_id)) {
> + if (rte_eth_promiscuous_get(member_id)) {
> RTE_BOND_LOG(DEBUG, "forced promiscuous for port %u",
> - slave_id);
> - bond_mode_8023ad_ports[slave_id].forced_rx_flags =
> + member_id);
> + bond_mode_8023ad_ports[member_id].forced_rx_flags =
> BOND_8023AD_FORCED_PROMISC;
> return 0;
> }
> @@ -1003,27 +1003,27 @@ bond_mode_8023ad_register_lacp_mac(uint16_t slave_id)
> }
>
> static void
> -bond_mode_8023ad_unregister_lacp_mac(uint16_t slave_id)
> +bond_mode_8023ad_unregister_lacp_mac(uint16_t member_id)
> {
> int ret;
>
> - switch (bond_mode_8023ad_ports[slave_id].forced_rx_flags) {
> + switch (bond_mode_8023ad_ports[member_id].forced_rx_flags) {
> case BOND_8023AD_FORCED_ALLMULTI:
> - RTE_BOND_LOG(DEBUG, "unset allmulti for port %u", slave_id);
> - ret = rte_eth_allmulticast_disable(slave_id);
> + RTE_BOND_LOG(DEBUG, "unset allmulti for port %u", member_id);
> + ret = rte_eth_allmulticast_disable(member_id);
> if (ret != 0)
> RTE_BOND_LOG(ERR,
> "failed to disable allmulti mode for port %u: %s",
> - slave_id, rte_strerror(-ret));
> + member_id, rte_strerror(-ret));
> break;
>
> case BOND_8023AD_FORCED_PROMISC:
> - RTE_BOND_LOG(DEBUG, "unset promisc for port %u", slave_id);
> - ret = rte_eth_promiscuous_disable(slave_id);
> + RTE_BOND_LOG(DEBUG, "unset promisc for port %u", member_id);
> + ret = rte_eth_promiscuous_disable(member_id);
> if (ret != 0)
> RTE_BOND_LOG(ERR,
> "failed to disable promiscuous mode for port %u: %s",
> - slave_id, rte_strerror(-ret));
> + member_id, rte_strerror(-ret));
> break;
>
> default:
> @@ -1032,12 +1032,12 @@ bond_mode_8023ad_unregister_lacp_mac(uint16_t slave_id)
> }
>
> void
> -bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
> - uint16_t slave_id)
> +bond_mode_8023ad_activate_member(struct rte_eth_dev *bond_dev,
> + uint16_t member_id)
> {
> struct bond_dev_private *internals = bond_dev->data->dev_private;
>
> - struct port *port = &bond_mode_8023ad_ports[slave_id];
> + struct port *port = &bond_mode_8023ad_ports[member_id];
> struct port_params initial = {
> .system = { { 0 } },
> .system_priority = rte_cpu_to_be_16(0xFFFF),
> @@ -1053,15 +1053,15 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
> struct bond_tx_queue *bd_tx_q;
> uint16_t q_id;
>
> - /* Given slave mus not be in active list */
> - RTE_ASSERT(find_slave_by_id(internals->active_slaves,
> - internals->active_slave_count, slave_id) == internals->active_slave_count);
> + /* Given member must not be in active list */
> + RTE_ASSERT(find_member_by_id(internals->active_members,
> + internals->active_member_count, member_id) == internals->active_member_count);
> RTE_SET_USED(internals); /* used only for assert when enabled */
>
> memcpy(&port->actor, &initial, sizeof(struct port_params));
> /* Standard requires that port ID must be grater than 0.
> * Add 1 do get corresponding port_number */
> - port->actor.port_number = rte_cpu_to_be_16(slave_id + 1);
> + port->actor.port_number = rte_cpu_to_be_16(member_id + 1);
>
> memcpy(&port->partner, &initial, sizeof(struct port_params));
> memcpy(&port->partner_admin, &initial, sizeof(struct port_params));
> @@ -1072,11 +1072,11 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
> port->sm_flags = SM_FLAGS_BEGIN;
>
> /* use this port as aggregator */
> - port->aggregator_port_id = slave_id;
> + port->aggregator_port_id = member_id;
>
> - if (bond_mode_8023ad_register_lacp_mac(slave_id) < 0) {
> - RTE_BOND_LOG(WARNING, "slave %u is most likely broken and won't receive LACP packets",
> - slave_id);
> + if (bond_mode_8023ad_register_lacp_mac(member_id) < 0) {
> + RTE_BOND_LOG(WARNING, "member %u is most likely broken and won't receive LACP packets",
> + member_id);
> }
>
> timer_cancel(&port->warning_timer);
> @@ -1087,22 +1087,24 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
> RTE_ASSERT(port->rx_ring == NULL);
> RTE_ASSERT(port->tx_ring == NULL);
>
> - socket_id = rte_eth_dev_socket_id(slave_id);
> + socket_id = rte_eth_dev_socket_id(member_id);
> if (socket_id == -1)
> socket_id = rte_socket_id();
>
> element_size = sizeof(struct slow_protocol_frame) +
> RTE_PKTMBUF_HEADROOM;
>
> - /* The size of the mempool should be at least:
> - * the sum of the TX descriptors + BOND_MODE_8023AX_SLAVE_TX_PKTS */
> - total_tx_desc = BOND_MODE_8023AX_SLAVE_TX_PKTS;
> + /*
> + * The size of the mempool should be at least:
> + * the sum of the TX descriptors + BOND_MODE_8023AX_MEMBER_TX_PKTS.
> + */
> + total_tx_desc = BOND_MODE_8023AX_MEMBER_TX_PKTS;
> for (q_id = 0; q_id < bond_dev->data->nb_tx_queues; q_id++) {
> bd_tx_q = (struct bond_tx_queue*)bond_dev->data->tx_queues[q_id];
> total_tx_desc += bd_tx_q->nb_tx_desc;
> }
>
> - snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
> + snprintf(mem_name, RTE_DIM(mem_name), "member_port%u_pool", member_id);
> port->mbuf_pool = rte_pktmbuf_pool_create(mem_name, total_tx_desc,
> RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
> 32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
> @@ -1111,39 +1113,39 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
> /* Any memory allocation failure in initialization is critical because
> * resources can't be free, so reinitialization is impossible. */
> if (port->mbuf_pool == NULL) {
> - rte_panic("Slave %u: Failed to create memory pool '%s': %s\n",
> - slave_id, mem_name, rte_strerror(rte_errno));
> + rte_panic("Member %u: Failed to create memory pool '%s': %s\n",
> + member_id, mem_name, rte_strerror(rte_errno));
> }
>
> - snprintf(mem_name, RTE_DIM(mem_name), "slave_%u_rx", slave_id);
> + snprintf(mem_name, RTE_DIM(mem_name), "member_%u_rx", member_id);
> port->rx_ring = rte_ring_create(mem_name,
> - rte_align32pow2(BOND_MODE_8023AX_SLAVE_RX_PKTS), socket_id, 0);
> + rte_align32pow2(BOND_MODE_8023AX_MEMBER_RX_PKTS), socket_id, 0);
>
> if (port->rx_ring == NULL) {
> - rte_panic("Slave %u: Failed to create rx ring '%s': %s\n", slave_id,
> + rte_panic("Member %u: Failed to create rx ring '%s': %s\n", member_id,
> mem_name, rte_strerror(rte_errno));
> }
>
> /* TX ring is at least one pkt longer to make room for marker packet. */
> - snprintf(mem_name, RTE_DIM(mem_name), "slave_%u_tx", slave_id);
> + snprintf(mem_name, RTE_DIM(mem_name), "member_%u_tx", member_id);
> port->tx_ring = rte_ring_create(mem_name,
> - rte_align32pow2(BOND_MODE_8023AX_SLAVE_TX_PKTS + 1), socket_id, 0);
> + rte_align32pow2(BOND_MODE_8023AX_MEMBER_TX_PKTS + 1), socket_id, 0);
>
> if (port->tx_ring == NULL) {
> - rte_panic("Slave %u: Failed to create tx ring '%s': %s\n", slave_id,
> + rte_panic("Member %u: Failed to create tx ring '%s': %s\n", member_id,
> mem_name, rte_strerror(rte_errno));
> }
> }
>
> int
> -bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *bond_dev __rte_unused,
> - uint16_t slave_id)
> +bond_mode_8023ad_deactivate_member(struct rte_eth_dev *bond_dev __rte_unused,
> + uint16_t member_id)
> {
> void *pkt = NULL;
> struct port *port = NULL;
> uint8_t old_partner_state;
>
> - port = &bond_mode_8023ad_ports[slave_id];
> + port = &bond_mode_8023ad_ports[member_id];
>
> ACTOR_STATE_CLR(port, AGGREGATION);
> port->selected = UNSELECTED;
> @@ -1151,7 +1153,7 @@ bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *bond_dev __rte_unused,
> old_partner_state = port->partner_state;
> record_default(port);
>
> - bond_mode_8023ad_unregister_lacp_mac(slave_id);
> + bond_mode_8023ad_unregister_lacp_mac(member_id);
>
> /* If partner timeout state changes then disable timer */
> if (!((old_partner_state ^ port->partner_state) &
> @@ -1174,30 +1176,30 @@ void
> bond_mode_8023ad_mac_address_update(struct rte_eth_dev *bond_dev)
> {
> struct bond_dev_private *internals = bond_dev->data->dev_private;
> - struct rte_ether_addr slave_addr;
> - struct port *slave, *agg_slave;
> - uint16_t slave_id, i, j;
> + struct rte_ether_addr member_addr;
> + struct port *member, *agg_member;
> + uint16_t member_id, i, j;
>
> bond_mode_8023ad_stop(bond_dev);
>
> - for (i = 0; i < internals->active_slave_count; i++) {
> - slave_id = internals->active_slaves[i];
> - slave = &bond_mode_8023ad_ports[slave_id];
> - rte_eth_macaddr_get(slave_id, &slave_addr);
> + for (i = 0; i < internals->active_member_count; i++) {
> + member_id = internals->active_members[i];
> + member = &bond_mode_8023ad_ports[member_id];
> + rte_eth_macaddr_get(member_id, &member_addr);
>
> - if (rte_is_same_ether_addr(&slave_addr, &slave->actor.system))
> + if (rte_is_same_ether_addr(&member_addr, &member->actor.system))
> continue;
>
> - rte_ether_addr_copy(&slave_addr, &slave->actor.system);
> + rte_ether_addr_copy(&member_addr, &member->actor.system);
> /* Do nothing if this port is not an aggregator. In other case
> * Set NTT flag on every port that use this aggregator. */
> - if (slave->aggregator_port_id != slave_id)
> + if (member->aggregator_port_id != member_id)
> continue;
>
> - for (j = 0; j < internals->active_slave_count; j++) {
> - agg_slave = &bond_mode_8023ad_ports[internals->active_slaves[j]];
> - if (agg_slave->aggregator_port_id == slave_id)
> - SM_FLAG_SET(agg_slave, NTT);
> + for (j = 0; j < internals->active_member_count; j++) {
> + agg_member = &bond_mode_8023ad_ports[internals->active_members[j]];
> + if (agg_member->aggregator_port_id == member_id)
> + SM_FLAG_SET(agg_member, NTT);
> }
> }
>
> @@ -1288,9 +1290,9 @@ bond_mode_8023ad_enable(struct rte_eth_dev *bond_dev)
> struct bond_dev_private *internals = bond_dev->data->dev_private;
> uint16_t i;
>
> - for (i = 0; i < internals->active_slave_count; i++)
> - bond_mode_8023ad_activate_slave(bond_dev,
> - internals->active_slaves[i]);
> + for (i = 0; i < internals->active_member_count; i++)
> + bond_mode_8023ad_activate_member(bond_dev,
> + internals->active_members[i]);
>
> return 0;
> }
> @@ -1326,10 +1328,10 @@ bond_mode_8023ad_stop(struct rte_eth_dev *bond_dev)
>
> void
> bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
> - uint16_t slave_id, struct rte_mbuf *pkt)
> + uint16_t member_id, struct rte_mbuf *pkt)
> {
> struct mode8023ad_private *mode4 = &internals->mode4;
> - struct port *port = &bond_mode_8023ad_ports[slave_id];
> + struct port *port = &bond_mode_8023ad_ports[member_id];
> struct marker_header *m_hdr;
> uint64_t marker_timer, old_marker_timer;
> int retval;
> @@ -1362,7 +1364,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
> } while (unlikely(retval == 0));
>
> m_hdr->marker.tlv_type_marker = MARKER_TLV_TYPE_RESP;
> - rte_eth_macaddr_get(slave_id, &m_hdr->eth_hdr.src_addr);
> + rte_eth_macaddr_get(member_id, &m_hdr->eth_hdr.src_addr);
>
> if (internals->mode4.dedicated_queues.enabled == 0) {
> if (rte_ring_enqueue(port->tx_ring, pkt) != 0) {
> @@ -1373,10 +1375,10 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
> }
> } else {
> /* Send packet directly to the slow queue */
> - uint16_t tx_count = rte_eth_tx_prepare(slave_id,
> + uint16_t tx_count = rte_eth_tx_prepare(member_id,
> internals->mode4.dedicated_queues.tx_qid,
> &pkt, 1);
> - tx_count = rte_eth_tx_burst(slave_id,
> + tx_count = rte_eth_tx_burst(member_id,
> internals->mode4.dedicated_queues.tx_qid,
> &pkt, tx_count);
> if (tx_count != 1) {
> @@ -1394,7 +1396,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
> goto free_out;
> }
> } else
> - rx_machine_update(internals, slave_id, pkt);
> + rx_machine_update(internals, member_id, pkt);
> } else {
> wrn = WRN_UNKNOWN_SLOW_TYPE;
> goto free_out;
> @@ -1517,8 +1519,8 @@ rte_eth_bond_8023ad_setup(uint16_t port_id,
>
>
> int
> -rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
> - struct rte_eth_bond_8023ad_slave_info *info)
> +rte_eth_bond_8023ad_member_info(uint16_t port_id, uint16_t member_id,
> + struct rte_eth_bond_8023ad_member_info *info)
> {
> struct rte_eth_dev *bond_dev;
> struct bond_dev_private *internals;
> @@ -1531,12 +1533,12 @@ rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
> bond_dev = &rte_eth_devices[port_id];
>
> internals = bond_dev->data->dev_private;
> - if (find_slave_by_id(internals->active_slaves,
> - internals->active_slave_count, slave_id) ==
> - internals->active_slave_count)
> + if (find_member_by_id(internals->active_members,
> + internals->active_member_count, member_id) ==
> + internals->active_member_count)
> return -EINVAL;
>
> - port = &bond_mode_8023ad_ports[slave_id];
> + port = &bond_mode_8023ad_ports[member_id];
> info->selected = port->selected;
>
> info->actor_state = port->actor_state;
> @@ -1550,7 +1552,7 @@ rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
> }
>
> static int
> -bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
> +bond_8023ad_ext_validate(uint16_t port_id, uint16_t member_id)
> {
> struct rte_eth_dev *bond_dev;
> struct bond_dev_private *internals;
> @@ -1565,9 +1567,9 @@ bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
> return -EINVAL;
>
> internals = bond_dev->data->dev_private;
> - if (find_slave_by_id(internals->active_slaves,
> - internals->active_slave_count, slave_id) ==
> - internals->active_slave_count)
> + if (find_member_by_id(internals->active_members,
> + internals->active_member_count, member_id) ==
> + internals->active_member_count)
> return -EINVAL;
>
> mode4 = &internals->mode4;
> @@ -1578,17 +1580,17 @@ bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
> }
>
> int
> -rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
> +rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t member_id,
> int enabled)
> {
> struct port *port;
> int res;
>
> - res = bond_8023ad_ext_validate(port_id, slave_id);
> + res = bond_8023ad_ext_validate(port_id, member_id);
> if (res != 0)
> return res;
>
> - port = &bond_mode_8023ad_ports[slave_id];
> + port = &bond_mode_8023ad_ports[member_id];
>
> if (enabled)
> ACTOR_STATE_SET(port, COLLECTING);
> @@ -1599,17 +1601,17 @@ rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
> }
>
> int
> -rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
> +rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t member_id,
> int enabled)
> {
> struct port *port;
> int res;
>
> - res = bond_8023ad_ext_validate(port_id, slave_id);
> + res = bond_8023ad_ext_validate(port_id, member_id);
> if (res != 0)
> return res;
>
> - port = &bond_mode_8023ad_ports[slave_id];
> + port = &bond_mode_8023ad_ports[member_id];
>
> if (enabled)
> ACTOR_STATE_SET(port, DISTRIBUTING);
> @@ -1620,45 +1622,45 @@ rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
> }
>
> int
> -rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t slave_id)
> +rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t member_id)
> {
> struct port *port;
> int err;
>
> - err = bond_8023ad_ext_validate(port_id, slave_id);
> + err = bond_8023ad_ext_validate(port_id, member_id);
> if (err != 0)
> return err;
>
> - port = &bond_mode_8023ad_ports[slave_id];
> + port = &bond_mode_8023ad_ports[member_id];
> return ACTOR_STATE(port, DISTRIBUTING);
> }
>
> int
> -rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t slave_id)
> +rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t member_id)
> {
> struct port *port;
> int err;
>
> - err = bond_8023ad_ext_validate(port_id, slave_id);
> + err = bond_8023ad_ext_validate(port_id, member_id);
> if (err != 0)
> return err;
>
> - port = &bond_mode_8023ad_ports[slave_id];
> + port = &bond_mode_8023ad_ports[member_id];
> return ACTOR_STATE(port, COLLECTING);
> }
>
> int
> -rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t slave_id,
> +rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t member_id,
> struct rte_mbuf *lacp_pkt)
> {
> struct port *port;
> int res;
>
> - res = bond_8023ad_ext_validate(port_id, slave_id);
> + res = bond_8023ad_ext_validate(port_id, member_id);
> if (res != 0)
> return res;
>
> - port = &bond_mode_8023ad_ports[slave_id];
> + port = &bond_mode_8023ad_ports[member_id];
>
> if (rte_pktmbuf_pkt_len(lacp_pkt) < sizeof(struct lacpdu_header))
> return -EINVAL;
> @@ -1683,11 +1685,11 @@ bond_mode_8023ad_ext_periodic_cb(void *arg)
> struct mode8023ad_private *mode4 = &internals->mode4;
> struct port *port;
> void *pkt = NULL;
> - uint16_t i, slave_id;
> + uint16_t i, member_id;
>
> - for (i = 0; i < internals->active_slave_count; i++) {
> - slave_id = internals->active_slaves[i];
> - port = &bond_mode_8023ad_ports[slave_id];
> + for (i = 0; i < internals->active_member_count; i++) {
> + member_id = internals->active_members[i];
> + port = &bond_mode_8023ad_ports[member_id];
>
> if (rte_ring_dequeue(port->rx_ring, &pkt) == 0) {
> struct rte_mbuf *lacp_pkt = pkt;
> @@ -1700,7 +1702,7 @@ bond_mode_8023ad_ext_periodic_cb(void *arg)
> /* This is LACP frame so pass it to rx callback.
> * Callback is responsible for freeing mbuf.
> */
> - mode4->slowrx_cb(slave_id, lacp_pkt);
> + mode4->slowrx_cb(member_id, lacp_pkt);
> }
> }
>
> diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.h b/drivers/net/bonding/rte_eth_bond_8023ad.h
> index 921b4446b7..589141d42c 100644
> --- a/drivers/net/bonding/rte_eth_bond_8023ad.h
> +++ b/drivers/net/bonding/rte_eth_bond_8023ad.h
> @@ -35,7 +35,7 @@ extern "C" {
> #define MARKER_TLV_TYPE_INFO 0x01
> #define MARKER_TLV_TYPE_RESP 0x02
>
> -typedef void (*rte_eth_bond_8023ad_ext_slowrx_fn)(uint16_t slave_id,
> +typedef void (*rte_eth_bond_8023ad_ext_slowrx_fn)(uint16_t member_id,
> struct rte_mbuf *lacp_pkt);
>
> enum rte_bond_8023ad_selection {
> @@ -66,13 +66,13 @@ struct port_params {
> uint16_t system_priority;
> /**< System priority (unused in current implementation) */
> struct rte_ether_addr system;
> - /**< System ID - Slave MAC address, same as bonding MAC address */
> + /**< System ID - Member MAC address, same as bonding MAC address */
> uint16_t key;
> /**< Speed information (implementation dependent) and duplex. */
> uint16_t port_priority;
> /**< Priority of this (unused in current implementation) */
> uint16_t port_number;
> - /**< Port number. It corresponds to slave port id. */
> + /**< Port number. It corresponds to member port id. */
> } __rte_packed __rte_aligned(2);
>
> struct lacpdu_actor_partner_params {
> @@ -141,7 +141,7 @@ struct rte_eth_bond_8023ad_conf {
> enum rte_bond_8023ad_agg_selection agg_selection;
> };
>
> -struct rte_eth_bond_8023ad_slave_info {
> +struct rte_eth_bond_8023ad_member_info {
> enum rte_bond_8023ad_selection selected;
> uint8_t actor_state;
> struct port_params actor;
> @@ -184,100 +184,101 @@ rte_eth_bond_8023ad_setup(uint16_t port_id,
> /**
> * @internal
> *
> - * Function returns current state of given slave device.
> + * Function returns current state of given member device.
> *
> - * @param slave_id Port id of valid slave.
> + * @param member_id Port id of valid member.
> * @param conf buffer for configuration
> * @return
> * 0 - if ok
> - * -EINVAL if conf is NULL or slave id is invalid (not a slave of given
> + * -EINVAL if conf is NULL or member id is invalid (not a member of given
> * bonded device or is not inactive).
> */
> +__rte_experimental
> int
> -rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
> - struct rte_eth_bond_8023ad_slave_info *conf);
> +rte_eth_bond_8023ad_member_info(uint16_t port_id, uint16_t member_id,
> + struct rte_eth_bond_8023ad_member_info *conf);
>
> /**
> - * Configure a slave port to start collecting.
> + * Configure a member port to start collecting.
> *
> * @param port_id Bonding device id
> - * @param slave_id Port id of valid slave.
> + * @param member_id Port id of valid member.
> * @param enabled Non-zero when collection enabled.
> * @return
> * 0 - if ok
> - * -EINVAL if slave is not valid.
> + * -EINVAL if member is not valid.
> */
> int
> -rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
> +rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t member_id,
> int enabled);
>
> /**
> - * Get COLLECTING flag from slave port actor state.
> + * Get COLLECTING flag from member port actor state.
> *
> * @param port_id Bonding device id
> - * @param slave_id Port id of valid slave.
> + * @param member_id Port id of valid member.
> * @return
> * 0 - if not set
> * 1 - if set
> - * -EINVAL if slave is not valid.
> + * -EINVAL if member is not valid.
> */
> int
> -rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t slave_id);
> +rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t member_id);
>
> /**
> - * Configure a slave port to start distributing.
> + * Configure a member port to start distributing.
> *
> * @param port_id Bonding device id
> - * @param slave_id Port id of valid slave.
> + * @param member_id Port id of valid member.
> * @param enabled Non-zero when distribution enabled.
> * @return
> * 0 - if ok
> - * -EINVAL if slave is not valid.
> + * -EINVAL if member is not valid.
> */
> int
> -rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
> +rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t member_id,
> int enabled);
>
> /**
> - * Get DISTRIBUTING flag from slave port actor state.
> + * Get DISTRIBUTING flag from member port actor state.
> *
> * @param port_id Bonding device id
> - * @param slave_id Port id of valid slave.
> + * @param member_id Port id of valid member.
> * @return
> * 0 - if not set
> * 1 - if set
> - * -EINVAL if slave is not valid.
> + * -EINVAL if member is not valid.
> */
> int
> -rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t slave_id);
> +rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t member_id);
>
> /**
> * LACPDU transmit path for external 802.3ad state machine. Caller retains
> * ownership of the packet on failure.
> *
> * @param port_id Bonding device id
> - * @param slave_id Port ID of valid slave device.
> + * @param member_id Port ID of valid member device.
> * @param lacp_pkt mbuf containing LACPDU.
> *
> * @return
> * 0 on success, negative value otherwise.
> */
> int
> -rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t slave_id,
> +rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t member_id,
> struct rte_mbuf *lacp_pkt);
>
> /**
> - * Enable dedicated hw queues for 802.3ad control plane traffic on slaves
> + * Enable dedicated hw queues for 802.3ad control plane traffic on members
> *
> - * This function creates an additional tx and rx queue on each slave for
> + * This function creates an additional tx and rx queue on each member for
> * dedicated 802.3ad control plane traffic . A flow filtering rule is
> - * programmed on each slave to redirect all LACP slow packets to that rx queue
> + * programmed on each member to redirect all LACP slow packets to that rx queue
> * for processing in the LACP state machine, this removes the need to filter
> * these packets in the bonded devices data path. The additional tx queue is
> * used to enable the LACP state machine to enqueue LACP packets directly to
> - * slave hw independently of the bonded devices data path.
> + * member hw independently of the bonded devices data path.
> *
> - * To use this feature all slaves must support the programming of the flow
> + * To use this feature all members must support the programming of the flow
> * filter rule required for rx and have enough queues that one rx and tx queue
> * can be reserved for the LACP state machines control packets.
> *
> @@ -292,7 +293,7 @@ int
> rte_eth_bond_8023ad_dedicated_queues_enable(uint16_t port_id);
>
> /**
> - * Disable slow queue on slaves
> + * Disable slow queue on members
> *
> * This function disables hardware slow packet filter.
> *
> diff --git a/drivers/net/bonding/rte_eth_bond_alb.c b/drivers/net/bonding/rte_eth_bond_alb.c
> index 86335a7971..56945e2349 100644
> --- a/drivers/net/bonding/rte_eth_bond_alb.c
> +++ b/drivers/net/bonding/rte_eth_bond_alb.c
> @@ -19,13 +19,13 @@ simple_hash(uint8_t *hash_start, int hash_size)
> }
>
> static uint16_t
> -calculate_slave(struct bond_dev_private *internals)
> +calculate_member(struct bond_dev_private *internals)
> {
> uint16_t idx;
>
> - idx = (internals->mode6.last_slave + 1) % internals->active_slave_count;
> - internals->mode6.last_slave = idx;
> - return internals->active_slaves[idx];
> + idx = (internals->mode6.last_member + 1) % internals->active_member_count;
> + internals->mode6.last_member = idx;
> + return internals->active_members[idx];
> }
>
> int
> @@ -41,7 +41,7 @@ bond_mode_alb_enable(struct rte_eth_dev *bond_dev)
> /* Fill hash table with initial values */
> memset(hash_table, 0, sizeof(struct client_data) * ALB_HASH_TABLE_SIZE);
> rte_spinlock_init(&internals->mode6.lock);
> - internals->mode6.last_slave = ALB_NULL_INDEX;
> + internals->mode6.last_member = ALB_NULL_INDEX;
> internals->mode6.ntt = 0;
>
> /* Initialize memory pool for ARP packets to send */
> @@ -96,7 +96,7 @@ void bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
> /*
> * We got reply for ARP Request send by the application. We need to
> * update client table when received data differ from what is stored
> - * in ALB table and issue sending update packet to that slave.
> +	 * in ALB table and send an update packet to that member.
> */
> rte_spinlock_lock(&internals->mode6.lock);
> if (client_info->in_use == 0 ||
> @@ -112,8 +112,8 @@ void bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
> client_info->cli_ip = arp->arp_data.arp_sip;
> rte_ether_addr_copy(&arp->arp_data.arp_sha,
> &client_info->cli_mac);
> - client_info->slave_idx = calculate_slave(internals);
> - rte_eth_macaddr_get(client_info->slave_idx,
> + client_info->member_idx = calculate_member(internals);
> + rte_eth_macaddr_get(client_info->member_idx,
> &client_info->app_mac);
> rte_ether_addr_copy(&client_info->app_mac,
> &arp->arp_data.arp_tha);
> @@ -166,33 +166,33 @@ bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
> &arp->arp_data.arp_tha,
> &client_info->cli_mac);
> }
> - rte_eth_macaddr_get(client_info->slave_idx,
> + rte_eth_macaddr_get(client_info->member_idx,
> &client_info->app_mac);
> rte_ether_addr_copy(&client_info->app_mac,
> &arp->arp_data.arp_sha);
> memcpy(client_info->vlan, eth_h + 1, offset);
> client_info->vlan_count = offset / sizeof(struct rte_vlan_hdr);
> rte_spinlock_unlock(&internals->mode6.lock);
> - return client_info->slave_idx;
> + return client_info->member_idx;
> }
> }
>
> - /* Assign new slave to this client and update src mac in ARP */
> + /* Assign new member to this client and update src mac in ARP */
> client_info->in_use = 1;
> client_info->ntt = 0;
> client_info->app_ip = arp->arp_data.arp_sip;
> rte_ether_addr_copy(&arp->arp_data.arp_tha,
> &client_info->cli_mac);
> client_info->cli_ip = arp->arp_data.arp_tip;
> - client_info->slave_idx = calculate_slave(internals);
> - rte_eth_macaddr_get(client_info->slave_idx,
> + client_info->member_idx = calculate_member(internals);
> + rte_eth_macaddr_get(client_info->member_idx,
> &client_info->app_mac);
> rte_ether_addr_copy(&client_info->app_mac,
> &arp->arp_data.arp_sha);
> memcpy(client_info->vlan, eth_h + 1, offset);
> client_info->vlan_count = offset / sizeof(struct rte_vlan_hdr);
> rte_spinlock_unlock(&internals->mode6.lock);
> - return client_info->slave_idx;
> + return client_info->member_idx;
> }
>
> /* If packet is not ARP Reply, send it on current primary port. */
> @@ -208,7 +208,7 @@ bond_mode_alb_arp_upd(struct client_data *client_info,
> {
> struct rte_ether_hdr *eth_h;
> struct rte_arp_hdr *arp_h;
> - uint16_t slave_idx;
> + uint16_t member_idx;
>
> rte_spinlock_lock(&internals->mode6.lock);
> eth_h = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
> @@ -238,10 +238,10 @@ bond_mode_alb_arp_upd(struct client_data *client_info,
> arp_h->arp_plen = sizeof(uint32_t);
> arp_h->arp_opcode = rte_cpu_to_be_16(RTE_ARP_OP_REPLY);
>
> - slave_idx = client_info->slave_idx;
> + member_idx = client_info->member_idx;
> rte_spinlock_unlock(&internals->mode6.lock);
>
> - return slave_idx;
> + return member_idx;
> }
>
> void
> @@ -252,18 +252,18 @@ bond_mode_alb_client_list_upd(struct rte_eth_dev *bond_dev)
>
> int i;
>
> - /* If active slave count is 0, it's pointless to refresh alb table */
> - if (internals->active_slave_count <= 0)
> + /* If active member count is 0, it's pointless to refresh alb table */
> + if (internals->active_member_count <= 0)
> return;
>
> rte_spinlock_lock(&internals->mode6.lock);
> - internals->mode6.last_slave = ALB_NULL_INDEX;
> + internals->mode6.last_member = ALB_NULL_INDEX;
>
> for (i = 0; i < ALB_HASH_TABLE_SIZE; i++) {
> client_info = &internals->mode6.client_table[i];
> if (client_info->in_use) {
> - client_info->slave_idx = calculate_slave(internals);
> - rte_eth_macaddr_get(client_info->slave_idx, &client_info->app_mac);
> + client_info->member_idx = calculate_member(internals);
> + rte_eth_macaddr_get(client_info->member_idx, &client_info->app_mac);
> internals->mode6.ntt = 1;
> }
> }
> diff --git a/drivers/net/bonding/rte_eth_bond_alb.h b/drivers/net/bonding/rte_eth_bond_alb.h
> index 4e9aeda9bc..beb2e619f9 100644
> --- a/drivers/net/bonding/rte_eth_bond_alb.h
> +++ b/drivers/net/bonding/rte_eth_bond_alb.h
> @@ -22,8 +22,8 @@ struct client_data {
> uint32_t cli_ip;
> /**< Client IP address */
>
> - uint16_t slave_idx;
> - /**< Index of slave on which we connect with that client */
> + uint16_t member_idx;
> + /**< Index of member on which we connect with that client */
> uint8_t in_use;
> /**< Flag indicating if entry in client table is currently used */
> uint8_t ntt;
> @@ -42,8 +42,8 @@ struct mode_alb_private {
> /**< Mempool for creating ARP update packets */
> uint8_t ntt;
> /**< Flag indicating if we need to send update to any client on next tx */
> - uint32_t last_slave;
> - /**< Index of last used slave in client table */
> + uint32_t last_member;
> + /**< Index of last used member in client table */
> rte_spinlock_t lock;
> };
>
> @@ -72,9 +72,9 @@ bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
> struct bond_dev_private *internals);
>
> /**
> - * Function handles ARP packet transmission. It also decides on which slave
> - * send that packet. If packet is ARP Request, it is send on primary slave.
> - * If it is ARP Reply, it is send on slave stored in client table for that
> + * Function handles ARP packet transmission. It also decides on which member
> + * to send that packet. If packet is ARP Request, it is sent on primary member.
> + * If it is ARP Reply, it is sent on the member stored in client table for that
> * connection. On Reply function also updates data in client table.
> *
> * @param eth_h ETH header of transmitted packet.
> @@ -82,7 +82,7 @@ bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
> * @param internals Bonding data.
> *
> * @return
> - * Index of slave on which packet should be sent.
> + * Index of member on which packet should be sent.
> */
> uint16_t
> bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
> @@ -96,14 +96,14 @@ bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
> * @param internals Bonding data.
> *
> * @return
> - * Index of slave on which packet should be sent.
> + * Index of member on which packet should be sent.
> */
> uint16_t
> bond_mode_alb_arp_upd(struct client_data *client_info,
> struct rte_mbuf *pkt, struct bond_dev_private *internals);
>
> /**
> - * Function updates slave indexes of active connections.
> + * Function updates member indexes of active connections.
> *
> * @param bond_dev Pointer to bonded device struct.
> */
> diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
> index 8b6cdce34a..b366c02564 100644
> --- a/drivers/net/bonding/rte_eth_bond_api.c
> +++ b/drivers/net/bonding/rte_eth_bond_api.c
> @@ -37,7 +37,7 @@ valid_bonded_port_id(uint16_t port_id)
> }
>
> int
> -check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev)
> +check_for_main_bonded_ethdev(const struct rte_eth_dev *eth_dev)
> {
> int i;
> struct bond_dev_private *internals;
> @@ -47,31 +47,31 @@ check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev)
>
> internals = eth_dev->data->dev_private;
>
> - /* Check if any of slave devices is a bonded device */
> - for (i = 0; i < internals->slave_count; i++)
> - if (valid_bonded_port_id(internals->slaves[i].port_id) == 0)
> +	/* Check if any of the member devices is a bonded device */
> + for (i = 0; i < internals->member_count; i++)
> + if (valid_bonded_port_id(internals->members[i].port_id) == 0)
> return 1;
>
> return 0;
> }
>
> int
> -valid_slave_port_id(struct bond_dev_private *internals, uint16_t slave_port_id)
> +valid_member_port_id(struct bond_dev_private *internals, uint16_t member_port_id)
> {
> - RTE_ETH_VALID_PORTID_OR_ERR_RET(slave_port_id, -1);
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(member_port_id, -1);
>
> - /* Verify that slave_port_id refers to a non bonded port */
> - if (check_for_bonded_ethdev(&rte_eth_devices[slave_port_id]) == 0 &&
> + /* Verify that member_port_id refers to a non bonded port */
> + if (check_for_bonded_ethdev(&rte_eth_devices[member_port_id]) == 0 &&
> internals->mode == BONDING_MODE_8023AD) {
> - RTE_BOND_LOG(ERR, "Cannot add slave to bonded device in 802.3ad"
> - " mode as slave is also a bonded device, only "
> + RTE_BOND_LOG(ERR, "Cannot add member to bonded device in 802.3ad"
> + " mode as member is also a bonded device, only "
> "physical devices can be support in this mode.");
> return -1;
> }
>
> - if (internals->port_id == slave_port_id) {
> + if (internals->port_id == member_port_id) {
> RTE_BOND_LOG(ERR,
> - "Cannot add the bonded device itself as its slave.");
> + "Cannot add the bonded device itself as its member.");
> return -1;
> }
>
> @@ -79,61 +79,63 @@ valid_slave_port_id(struct bond_dev_private *internals, uint16_t slave_port_id)
> }
>
> void
> -activate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id)
> +activate_member(struct rte_eth_dev *eth_dev, uint16_t port_id)
> {
> struct bond_dev_private *internals = eth_dev->data->dev_private;
> - uint16_t active_count = internals->active_slave_count;
> + uint16_t active_count = internals->active_member_count;
>
> if (internals->mode == BONDING_MODE_8023AD)
> - bond_mode_8023ad_activate_slave(eth_dev, port_id);
> + bond_mode_8023ad_activate_member(eth_dev, port_id);
>
> if (internals->mode == BONDING_MODE_TLB
> || internals->mode == BONDING_MODE_ALB) {
>
> - internals->tlb_slaves_order[active_count] = port_id;
> + internals->tlb_members_order[active_count] = port_id;
> }
>
> - RTE_ASSERT(internals->active_slave_count <
> - (RTE_DIM(internals->active_slaves) - 1));
> + RTE_ASSERT(internals->active_member_count <
> + (RTE_DIM(internals->active_members) - 1));
>
> - internals->active_slaves[internals->active_slave_count] = port_id;
> - internals->active_slave_count++;
> + internals->active_members[internals->active_member_count] = port_id;
> + internals->active_member_count++;
>
> if (internals->mode == BONDING_MODE_TLB)
> - bond_tlb_activate_slave(internals);
> + bond_tlb_activate_member(internals);
> if (internals->mode == BONDING_MODE_ALB)
> bond_mode_alb_client_list_upd(eth_dev);
> }
>
> void
> -deactivate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id)
> +deactivate_member(struct rte_eth_dev *eth_dev, uint16_t port_id)
> {
> - uint16_t slave_pos;
> + uint16_t member_pos;
> struct bond_dev_private *internals = eth_dev->data->dev_private;
> - uint16_t active_count = internals->active_slave_count;
> + uint16_t active_count = internals->active_member_count;
>
> if (internals->mode == BONDING_MODE_8023AD) {
> bond_mode_8023ad_stop(eth_dev);
> - bond_mode_8023ad_deactivate_slave(eth_dev, port_id);
> + bond_mode_8023ad_deactivate_member(eth_dev, port_id);
> } else if (internals->mode == BONDING_MODE_TLB
> || internals->mode == BONDING_MODE_ALB)
> bond_tlb_disable(internals);
>
> - slave_pos = find_slave_by_id(internals->active_slaves, active_count,
> + member_pos = find_member_by_id(internals->active_members, active_count,
> port_id);
>
> - /* If slave was not at the end of the list
> - * shift active slaves up active array list */
> - if (slave_pos < active_count) {
> + /*
> + * If the member was not at the end of the list,
> + * shift the active members up the active array list.
> + */
> + if (member_pos < active_count) {
> active_count--;
> - memmove(internals->active_slaves + slave_pos,
> - internals->active_slaves + slave_pos + 1,
> - (active_count - slave_pos) *
> - sizeof(internals->active_slaves[0]));
> + memmove(internals->active_members + member_pos,
> + internals->active_members + member_pos + 1,
> + (active_count - member_pos) *
> + sizeof(internals->active_members[0]));
> }
>
> - RTE_ASSERT(active_count < RTE_DIM(internals->active_slaves));
> - internals->active_slave_count = active_count;
> + RTE_ASSERT(active_count < RTE_DIM(internals->active_members));
> + internals->active_member_count = active_count;
>
> if (eth_dev->data->dev_started) {
> if (internals->mode == BONDING_MODE_8023AD) {
> @@ -192,7 +194,7 @@ rte_eth_bond_free(const char *name)
> }
>
> static int
> -slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
> +member_vlan_filter_set(uint16_t bonded_port_id, uint16_t member_port_id)
> {
> struct rte_eth_dev *bonded_eth_dev;
> struct bond_dev_private *internals;
> @@ -224,7 +226,7 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
> if (unlikely(slab & mask)) {
> uint16_t vlan_id = pos + i;
>
> - res = rte_eth_dev_vlan_filter(slave_port_id,
> + res = rte_eth_dev_vlan_filter(member_port_id,
> vlan_id, 1);
> }
> }
> @@ -236,45 +238,45 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
> }
>
> static int
> -slave_rte_flow_prepare(uint16_t slave_id, struct bond_dev_private *internals)
> +member_rte_flow_prepare(uint16_t member_id, struct bond_dev_private *internals)
> {
> struct rte_flow *flow;
> struct rte_flow_error ferror;
> - uint16_t slave_port_id = internals->slaves[slave_id].port_id;
> + uint16_t member_port_id = internals->members[member_id].port_id;
>
> if (internals->flow_isolated_valid != 0) {
> - if (rte_eth_dev_stop(slave_port_id) != 0) {
> + if (rte_eth_dev_stop(member_port_id) != 0) {
> RTE_BOND_LOG(ERR, "Failed to stop device on port %u",
> - slave_port_id);
> + member_port_id);
> return -1;
> }
>
> - if (rte_flow_isolate(slave_port_id, internals->flow_isolated,
> + if (rte_flow_isolate(member_port_id, internals->flow_isolated,
> &ferror)) {
> - RTE_BOND_LOG(ERR, "rte_flow_isolate failed for slave"
> - " %d: %s", slave_id, ferror.message ?
> + RTE_BOND_LOG(ERR, "rte_flow_isolate failed for member"
> + " %d: %s", member_id, ferror.message ?
> ferror.message : "(no stated reason)");
> return -1;
> }
> }
> TAILQ_FOREACH(flow, &internals->flow_list, next) {
> - flow->flows[slave_id] = rte_flow_create(slave_port_id,
> + flow->flows[member_id] = rte_flow_create(member_port_id,
> flow->rule.attr,
> flow->rule.pattern,
> flow->rule.actions,
> &ferror);
> - if (flow->flows[slave_id] == NULL) {
> - RTE_BOND_LOG(ERR, "Cannot create flow for slave"
> - " %d: %s", slave_id,
> + if (flow->flows[member_id] == NULL) {
> + RTE_BOND_LOG(ERR, "Cannot create flow for member"
> + " %d: %s", member_id,
> ferror.message ? ferror.message :
> "(no stated reason)");
> - /* Destroy successful bond flows from the slave */
> + /* Destroy successful bond flows from the member */
> TAILQ_FOREACH(flow, &internals->flow_list, next) {
> - if (flow->flows[slave_id] != NULL) {
> - rte_flow_destroy(slave_port_id,
> - flow->flows[slave_id],
> + if (flow->flows[member_id] != NULL) {
> + rte_flow_destroy(member_port_id,
> + flow->flows[member_id],
> &ferror);
> - flow->flows[slave_id] = NULL;
> + flow->flows[member_id] = NULL;
> }
> }
> return -1;
> @@ -284,7 +286,7 @@ slave_rte_flow_prepare(uint16_t slave_id, struct bond_dev_private *internals)
> }
>
> static void
> -eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
> +eth_bond_member_inherit_dev_info_rx_first(struct bond_dev_private *internals,
> const struct rte_eth_dev_info *di)
> {
> struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
> @@ -292,20 +294,20 @@ eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
> internals->reta_size = di->reta_size;
> internals->rss_key_len = di->hash_key_size;
>
> - /* Inherit Rx offload capabilities from the first slave device */
> + /* Inherit Rx offload capabilities from the first member device */
> internals->rx_offload_capa = di->rx_offload_capa;
> internals->rx_queue_offload_capa = di->rx_queue_offload_capa;
> internals->flow_type_rss_offloads = di->flow_type_rss_offloads;
>
> - /* Inherit maximum Rx packet size from the first slave device */
> + /* Inherit maximum Rx packet size from the first member device */
> internals->candidate_max_rx_pktlen = di->max_rx_pktlen;
>
> - /* Inherit default Rx queue settings from the first slave device */
> + /* Inherit default Rx queue settings from the first member device */
> memcpy(rxconf_i, &di->default_rxconf, sizeof(*rxconf_i));
>
> /*
> * Turn off descriptor prefetch and writeback by default for all
> - * slave devices. Applications may tweak this setting if need be.
> + * member devices. Applications may tweak this setting if need be.
> */
> rxconf_i->rx_thresh.pthresh = 0;
> rxconf_i->rx_thresh.hthresh = 0;
> @@ -314,26 +316,26 @@ eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
> /* Setting this to zero should effectively enable default values */
> rxconf_i->rx_free_thresh = 0;
>
> - /* Disable deferred start by default for all slave devices */
> + /* Disable deferred start by default for all member devices */
> rxconf_i->rx_deferred_start = 0;
> }
>
> static void
> -eth_bond_slave_inherit_dev_info_tx_first(struct bond_dev_private *internals,
> +eth_bond_member_inherit_dev_info_tx_first(struct bond_dev_private *internals,
> const struct rte_eth_dev_info *di)
> {
> struct rte_eth_txconf *txconf_i = &internals->default_txconf;
>
> - /* Inherit Tx offload capabilities from the first slave device */
> + /* Inherit Tx offload capabilities from the first member device */
> internals->tx_offload_capa = di->tx_offload_capa;
> internals->tx_queue_offload_capa = di->tx_queue_offload_capa;
>
> - /* Inherit default Tx queue settings from the first slave device */
> + /* Inherit default Tx queue settings from the first member device */
> memcpy(txconf_i, &di->default_txconf, sizeof(*txconf_i));
>
> /*
> * Turn off descriptor prefetch and writeback by default for all
> - * slave devices. Applications may tweak this setting if need be.
> + * member devices. Applications may tweak this setting if need be.
> */
> txconf_i->tx_thresh.pthresh = 0;
> txconf_i->tx_thresh.hthresh = 0;
> @@ -341,17 +343,17 @@ eth_bond_slave_inherit_dev_info_tx_first(struct bond_dev_private *internals,
>
> /*
> * Setting these parameters to zero assumes that default
> - * values will be configured implicitly by slave devices.
> + * values will be configured implicitly by member devices.
> */
> txconf_i->tx_free_thresh = 0;
> txconf_i->tx_rs_thresh = 0;
>
> - /* Disable deferred start by default for all slave devices */
> + /* Disable deferred start by default for all member devices */
> txconf_i->tx_deferred_start = 0;
> }
>
> static void
> -eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
> +eth_bond_member_inherit_dev_info_rx_next(struct bond_dev_private *internals,
> const struct rte_eth_dev_info *di)
> {
> struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
> @@ -362,32 +364,32 @@ eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
> internals->flow_type_rss_offloads &= di->flow_type_rss_offloads;
>
> /*
> - * If at least one slave device suggests enabling this
> - * setting by default, enable it for all slave devices
> + * If at least one member device suggests enabling this
> + * setting by default, enable it for all member devices
> * since disabling it may not be necessarily supported.
> */
> if (rxconf->rx_drop_en == 1)
> rxconf_i->rx_drop_en = 1;
>
> /*
> - * Adding a new slave device may cause some of previously inherited
> + * Adding a new member device may cause some of previously inherited
> * offloads to be withdrawn from the internal rx_queue_offload_capa
> * value. Thus, the new internal value of default Rx queue offloads
> * has to be masked by rx_queue_offload_capa to make sure that only
> * commonly supported offloads are preserved from both the previous
> - * value and the value being inherited from the new slave device.
> + * value and the value being inherited from the new member device.
> */
> rxconf_i->offloads = (rxconf_i->offloads | rxconf->offloads) &
> internals->rx_queue_offload_capa;
>
> /*
> - * RETA size is GCD of all slaves RETA sizes, so, if all sizes will be
> + * RETA size is GCD of all members RETA sizes, so, if all sizes will be
> * the power of 2, the lower one is GCD
> */
> if (internals->reta_size > di->reta_size)
> internals->reta_size = di->reta_size;
> if (internals->rss_key_len > di->hash_key_size) {
> - RTE_BOND_LOG(WARNING, "slave has different rss key size, "
> + RTE_BOND_LOG(WARNING, "member has different rss key size, "
> "configuring rss may fail");
> internals->rss_key_len = di->hash_key_size;
> }
> @@ -398,7 +400,7 @@ eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
> }
>
> static void
> -eth_bond_slave_inherit_dev_info_tx_next(struct bond_dev_private *internals,
> +eth_bond_member_inherit_dev_info_tx_next(struct bond_dev_private *internals,
> const struct rte_eth_dev_info *di)
> {
> struct rte_eth_txconf *txconf_i = &internals->default_txconf;
> @@ -408,34 +410,34 @@ eth_bond_slave_inherit_dev_info_tx_next(struct bond_dev_private *internals,
> internals->tx_queue_offload_capa &= di->tx_queue_offload_capa;
>
> /*
> - * Adding a new slave device may cause some of previously inherited
> + * Adding a new member device may cause some of previously inherited
> * offloads to be withdrawn from the internal tx_queue_offload_capa
> * value. Thus, the new internal value of default Tx queue offloads
> * has to be masked by tx_queue_offload_capa to make sure that only
> * commonly supported offloads are preserved from both the previous
> - * value and the value being inherited from the new slave device.
> + * value and the value being inherited from the new member device.
> */
> txconf_i->offloads = (txconf_i->offloads | txconf->offloads) &
> internals->tx_queue_offload_capa;
> }
>
> static void
> -eth_bond_slave_inherit_desc_lim_first(struct rte_eth_desc_lim *bond_desc_lim,
> - const struct rte_eth_desc_lim *slave_desc_lim)
> +eth_bond_member_inherit_desc_lim_first(struct rte_eth_desc_lim *bond_desc_lim,
> + const struct rte_eth_desc_lim *member_desc_lim)
> {
> - memcpy(bond_desc_lim, slave_desc_lim, sizeof(*bond_desc_lim));
> + memcpy(bond_desc_lim, member_desc_lim, sizeof(*bond_desc_lim));
> }
>
> static int
> -eth_bond_slave_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
> - const struct rte_eth_desc_lim *slave_desc_lim)
> +eth_bond_member_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
> + const struct rte_eth_desc_lim *member_desc_lim)
> {
> bond_desc_lim->nb_max = RTE_MIN(bond_desc_lim->nb_max,
> - slave_desc_lim->nb_max);
> + member_desc_lim->nb_max);
> bond_desc_lim->nb_min = RTE_MAX(bond_desc_lim->nb_min,
> - slave_desc_lim->nb_min);
> + member_desc_lim->nb_min);
> bond_desc_lim->nb_align = RTE_MAX(bond_desc_lim->nb_align,
> - slave_desc_lim->nb_align);
> + member_desc_lim->nb_align);
>
> if (bond_desc_lim->nb_min > bond_desc_lim->nb_max ||
> bond_desc_lim->nb_align > bond_desc_lim->nb_max) {
> @@ -444,22 +446,22 @@ eth_bond_slave_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
> }
>
> /* Treat maximum number of segments equal to 0 as unspecified */
> - if (slave_desc_lim->nb_seg_max != 0 &&
> + if (member_desc_lim->nb_seg_max != 0 &&
> (bond_desc_lim->nb_seg_max == 0 ||
> - slave_desc_lim->nb_seg_max < bond_desc_lim->nb_seg_max))
> - bond_desc_lim->nb_seg_max = slave_desc_lim->nb_seg_max;
> - if (slave_desc_lim->nb_mtu_seg_max != 0 &&
> + member_desc_lim->nb_seg_max < bond_desc_lim->nb_seg_max))
> + bond_desc_lim->nb_seg_max = member_desc_lim->nb_seg_max;
> + if (member_desc_lim->nb_mtu_seg_max != 0 &&
> (bond_desc_lim->nb_mtu_seg_max == 0 ||
> - slave_desc_lim->nb_mtu_seg_max < bond_desc_lim->nb_mtu_seg_max))
> - bond_desc_lim->nb_mtu_seg_max = slave_desc_lim->nb_mtu_seg_max;
> + member_desc_lim->nb_mtu_seg_max < bond_desc_lim->nb_mtu_seg_max))
> + bond_desc_lim->nb_mtu_seg_max = member_desc_lim->nb_mtu_seg_max;
>
> return 0;
> }
>
> static int
> -__eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
> +__eth_bond_member_add_lock_free(uint16_t bonded_port_id, uint16_t member_port_id)
> {
> - struct rte_eth_dev *bonded_eth_dev, *slave_eth_dev;
> + struct rte_eth_dev *bonded_eth_dev, *member_eth_dev;
> struct bond_dev_private *internals;
> struct rte_eth_link link_props;
> struct rte_eth_dev_info dev_info;
> @@ -468,78 +470,78 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
> bonded_eth_dev = &rte_eth_devices[bonded_port_id];
> internals = bonded_eth_dev->data->dev_private;
>
> - if (valid_slave_port_id(internals, slave_port_id) != 0)
> + if (valid_member_port_id(internals, member_port_id) != 0)
> return -1;
>
> - slave_eth_dev = &rte_eth_devices[slave_port_id];
> - if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_BONDING_MEMBER) {
> - RTE_BOND_LOG(ERR, "Slave device is already a slave of a bonded device");
> + member_eth_dev = &rte_eth_devices[member_port_id];
> + if (member_eth_dev->data->dev_flags & RTE_ETH_DEV_BONDING_MEMBER) {
> + RTE_BOND_LOG(ERR, "Member device is already a member of a bonded device");
> return -1;
> }
>
> - ret = rte_eth_dev_info_get(slave_port_id, &dev_info);
> + ret = rte_eth_dev_info_get(member_port_id, &dev_info);
> if (ret != 0) {
> RTE_BOND_LOG(ERR,
> "%s: Error during getting device (port %u) info: %s\n",
> - __func__, slave_port_id, strerror(-ret));
> + __func__, member_port_id, strerror(-ret));
>
> return ret;
> }
> if (dev_info.max_rx_pktlen < internals->max_rx_pktlen) {
> - RTE_BOND_LOG(ERR, "Slave (port %u) max_rx_pktlen too small",
> - slave_port_id);
> + RTE_BOND_LOG(ERR, "Member (port %u) max_rx_pktlen too small",
> + member_port_id);
> return -1;
> }
>
> - slave_add(internals, slave_eth_dev);
> + member_add(internals, member_eth_dev);
>
> - /* We need to store slaves reta_size to be able to synchronize RETA for all
> - * slave devices even if its sizes are different.
> + /* We need to store the members' reta_size to be able to synchronize RETA for all
> + * member devices even if their sizes are different.
> */
> - internals->slaves[internals->slave_count].reta_size = dev_info.reta_size;
> + internals->members[internals->member_count].reta_size = dev_info.reta_size;
>
> - if (internals->slave_count < 1) {
> - /* if MAC is not user defined then use MAC of first slave add to
> + if (internals->member_count < 1) {
> + /* if MAC is not user defined then use MAC of first member added to
> * bonded device */
> if (!internals->user_defined_mac) {
> if (mac_address_set(bonded_eth_dev,
> - slave_eth_dev->data->mac_addrs)) {
> + member_eth_dev->data->mac_addrs)) {
> RTE_BOND_LOG(ERR, "Failed to set MAC address");
> return -1;
> }
> }
>
> - /* Make primary slave */
> - internals->primary_port = slave_port_id;
> - internals->current_primary_port = slave_port_id;
> + /* Make primary member */
> + internals->primary_port = member_port_id;
> + internals->current_primary_port = member_port_id;
>
> internals->speed_capa = dev_info.speed_capa;
>
> - /* Inherit queues settings from first slave */
> - internals->nb_rx_queues = slave_eth_dev->data->nb_rx_queues;
> - internals->nb_tx_queues = slave_eth_dev->data->nb_tx_queues;
> + /* Inherit queues settings from first member */
> + internals->nb_rx_queues = member_eth_dev->data->nb_rx_queues;
> + internals->nb_tx_queues = member_eth_dev->data->nb_tx_queues;
>
> - eth_bond_slave_inherit_dev_info_rx_first(internals, &dev_info);
> - eth_bond_slave_inherit_dev_info_tx_first(internals, &dev_info);
> + eth_bond_member_inherit_dev_info_rx_first(internals, &dev_info);
> + eth_bond_member_inherit_dev_info_tx_first(internals, &dev_info);
>
> - eth_bond_slave_inherit_desc_lim_first(&internals->rx_desc_lim,
> + eth_bond_member_inherit_desc_lim_first(&internals->rx_desc_lim,
> &dev_info.rx_desc_lim);
> - eth_bond_slave_inherit_desc_lim_first(&internals->tx_desc_lim,
> + eth_bond_member_inherit_desc_lim_first(&internals->tx_desc_lim,
> &dev_info.tx_desc_lim);
> } else {
> int ret;
>
> internals->speed_capa &= dev_info.speed_capa;
> - eth_bond_slave_inherit_dev_info_rx_next(internals, &dev_info);
> - eth_bond_slave_inherit_dev_info_tx_next(internals, &dev_info);
> + eth_bond_member_inherit_dev_info_rx_next(internals, &dev_info);
> + eth_bond_member_inherit_dev_info_tx_next(internals, &dev_info);
>
> - ret = eth_bond_slave_inherit_desc_lim_next(
> - &internals->rx_desc_lim, &dev_info.rx_desc_lim);
> + ret = eth_bond_member_inherit_desc_lim_next(&internals->rx_desc_lim,
> + &dev_info.rx_desc_lim);
> if (ret != 0)
> return ret;
>
> - ret = eth_bond_slave_inherit_desc_lim_next(
> - &internals->tx_desc_lim, &dev_info.tx_desc_lim);
> + ret = eth_bond_member_inherit_desc_lim_next(&internals->tx_desc_lim,
> + &dev_info.tx_desc_lim);
> if (ret != 0)
> return ret;
> }
> @@ -552,79 +554,81 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
> bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf &=
> internals->flow_type_rss_offloads;
>
> - if (slave_rte_flow_prepare(internals->slave_count, internals) != 0) {
> - RTE_BOND_LOG(ERR, "Failed to prepare new slave flows: port=%d",
> - slave_port_id);
> + if (member_rte_flow_prepare(internals->member_count, internals) != 0) {
> + RTE_BOND_LOG(ERR, "Failed to prepare new member flows: port=%d",
> + member_port_id);
> return -1;
> }
>
> - /* Add additional MAC addresses to the slave */
> - if (slave_add_mac_addresses(bonded_eth_dev, slave_port_id) != 0) {
> - RTE_BOND_LOG(ERR, "Failed to add mac address(es) to slave %hu",
> - slave_port_id);
> + /* Add additional MAC addresses to the member */
> + if (member_add_mac_addresses(bonded_eth_dev, member_port_id) != 0) {
> + RTE_BOND_LOG(ERR, "Failed to add mac address(es) to member %hu",
> + member_port_id);
> return -1;
> }
>
> - internals->slave_count++;
> + internals->member_count++;
>
> if (bonded_eth_dev->data->dev_started) {
> - if (slave_configure(bonded_eth_dev, slave_eth_dev) != 0) {
> - internals->slave_count--;
> - RTE_BOND_LOG(ERR, "rte_bond_slaves_configure: port=%d",
> - slave_port_id);
> + if (member_configure(bonded_eth_dev, member_eth_dev) != 0) {
> + internals->member_count--;
> + RTE_BOND_LOG(ERR, "rte_bond_members_configure: port=%d",
> + member_port_id);
> return -1;
> }
> - if (slave_start(bonded_eth_dev, slave_eth_dev) != 0) {
> - internals->slave_count--;
> - RTE_BOND_LOG(ERR, "rte_bond_slaves_start: port=%d",
> - slave_port_id);
> + if (member_start(bonded_eth_dev, member_eth_dev) != 0) {
> + internals->member_count--;
> + RTE_BOND_LOG(ERR, "rte_bond_members_start: port=%d",
> + member_port_id);
> return -1;
> }
> }
>
> - /* Update all slave devices MACs */
> - mac_address_slaves_update(bonded_eth_dev);
> + /* Update all member devices MACs */
> + mac_address_members_update(bonded_eth_dev);
>
> /* Register link status change callback with bonded device pointer as
> * argument*/
> - rte_eth_dev_callback_register(slave_port_id, RTE_ETH_EVENT_INTR_LSC,
> + rte_eth_dev_callback_register(member_port_id, RTE_ETH_EVENT_INTR_LSC,
> bond_ethdev_lsc_event_callback, &bonded_eth_dev->data->port_id);
>
> - /* If bonded device is started then we can add the slave to our active
> - * slave array */
> + /*
> + * If bonded device is started then we can add the member to our active
> + * member array.
> + */
> if (bonded_eth_dev->data->dev_started) {
> - ret = rte_eth_link_get_nowait(slave_port_id, &link_props);
> + ret = rte_eth_link_get_nowait(member_port_id, &link_props);
> if (ret < 0) {
> - rte_eth_dev_callback_unregister(slave_port_id,
> + rte_eth_dev_callback_unregister(member_port_id,
> RTE_ETH_EVENT_INTR_LSC,
> bond_ethdev_lsc_event_callback,
> &bonded_eth_dev->data->port_id);
> - internals->slave_count--;
> + internals->member_count--;
> RTE_BOND_LOG(ERR,
> - "Slave (port %u) link get failed: %s\n",
> - slave_port_id, rte_strerror(-ret));
> + "Member (port %u) link get failed: %s\n",
> + member_port_id, rte_strerror(-ret));
> return -1;
> }
>
> if (link_props.link_status == RTE_ETH_LINK_UP) {
> - if (internals->active_slave_count == 0 &&
> + if (internals->active_member_count == 0 &&
> !internals->user_defined_primary_port)
> bond_ethdev_primary_set(internals,
> - slave_port_id);
> + member_port_id);
> }
> }
>
> - /* Add slave details to bonded device */
> - slave_eth_dev->data->dev_flags |= RTE_ETH_DEV_BONDING_MEMBER;
> + /* Add member details to bonded device */
> + member_eth_dev->data->dev_flags |= RTE_ETH_DEV_BONDING_MEMBER;
>
> - slave_vlan_filter_set(bonded_port_id, slave_port_id);
> + member_vlan_filter_set(bonded_port_id, member_port_id);
>
> return 0;
>
> }
>
> int
> -rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
> +rte_eth_bond_member_add(uint16_t bonded_port_id, uint16_t member_port_id)
> {
> struct rte_eth_dev *bonded_eth_dev;
> struct bond_dev_private *internals;
> @@ -637,12 +641,12 @@ rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
> bonded_eth_dev = &rte_eth_devices[bonded_port_id];
> internals = bonded_eth_dev->data->dev_private;
>
> - if (valid_slave_port_id(internals, slave_port_id) != 0)
> + if (valid_member_port_id(internals, member_port_id) != 0)
> return -1;
>
> rte_spinlock_lock(&internals->lock);
>
> - retval = __eth_bond_slave_add_lock_free(bonded_port_id, slave_port_id);
> + retval = __eth_bond_member_add_lock_free(bonded_port_id, member_port_id);
>
> rte_spinlock_unlock(&internals->lock);
>
> @@ -650,103 +654,105 @@ rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
> }
>
> static int
> -__eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
> - uint16_t slave_port_id)
> +__eth_bond_member_remove_lock_free(uint16_t bonded_port_id,
> + uint16_t member_port_id)
> {
> struct rte_eth_dev *bonded_eth_dev;
> struct bond_dev_private *internals;
> - struct rte_eth_dev *slave_eth_dev;
> + struct rte_eth_dev *member_eth_dev;
> struct rte_flow_error flow_error;
> struct rte_flow *flow;
> - int i, slave_idx;
> + int i, member_idx;
>
> bonded_eth_dev = &rte_eth_devices[bonded_port_id];
> internals = bonded_eth_dev->data->dev_private;
>
> - if (valid_slave_port_id(internals, slave_port_id) < 0)
> + if (valid_member_port_id(internals, member_port_id) < 0)
> return -1;
>
> - /* first remove from active slave list */
> - slave_idx = find_slave_by_id(internals->active_slaves,
> - internals->active_slave_count, slave_port_id);
> + /* first remove from active member list */
> + member_idx = find_member_by_id(internals->active_members,
> + internals->active_member_count, member_port_id);
>
> - if (slave_idx < internals->active_slave_count)
> - deactivate_slave(bonded_eth_dev, slave_port_id);
> + if (member_idx < internals->active_member_count)
> + deactivate_member(bonded_eth_dev, member_port_id);
>
> - slave_idx = -1;
> - /* now find in slave list */
> - for (i = 0; i < internals->slave_count; i++)
> - if (internals->slaves[i].port_id == slave_port_id) {
> - slave_idx = i;
> + member_idx = -1;
> + /* now find in member list */
> + for (i = 0; i < internals->member_count; i++)
> + if (internals->members[i].port_id == member_port_id) {
> + member_idx = i;
> break;
> }
>
> - if (slave_idx < 0) {
> - RTE_BOND_LOG(ERR, "Couldn't find slave in port list, slave count %u",
> - internals->slave_count);
> + if (member_idx < 0) {
> + RTE_BOND_LOG(ERR, "Could not find member in port list, member count %u",
> + internals->member_count);
> return -1;
> }
>
> /* Un-register link status change callback with bonded device pointer as
> * argument*/
> - rte_eth_dev_callback_unregister(slave_port_id, RTE_ETH_EVENT_INTR_LSC,
> + rte_eth_dev_callback_unregister(member_port_id, RTE_ETH_EVENT_INTR_LSC,
> bond_ethdev_lsc_event_callback,
> &rte_eth_devices[bonded_port_id].data->port_id);
>
> - /* Restore original MAC address of slave device */
> - rte_eth_dev_default_mac_addr_set(slave_port_id,
> - &(internals->slaves[slave_idx].persisted_mac_addr));
> + /* Restore original MAC address of member device */
> + rte_eth_dev_default_mac_addr_set(member_port_id,
> + &internals->members[member_idx].persisted_mac_addr);
>
> - /* remove additional MAC addresses from the slave */
> - slave_remove_mac_addresses(bonded_eth_dev, slave_port_id);
> + /* remove additional MAC addresses from the member */
> + member_remove_mac_addresses(bonded_eth_dev, member_port_id);
>
> /*
> - * Remove bond device flows from slave device.
> + * Remove bond device flows from member device.
> * Note: don't restore flow isolate mode.
> */
> TAILQ_FOREACH(flow, &internals->flow_list, next) {
> - if (flow->flows[slave_idx] != NULL) {
> - rte_flow_destroy(slave_port_id, flow->flows[slave_idx],
> + if (flow->flows[member_idx] != NULL) {
> + rte_flow_destroy(member_port_id, flow->flows[member_idx],
> &flow_error);
> - flow->flows[slave_idx] = NULL;
> + flow->flows[member_idx] = NULL;
> }
> }
>
> /* Remove the dedicated queues flow */
> if (internals->mode == BONDING_MODE_8023AD &&
> internals->mode4.dedicated_queues.enabled == 1 &&
> - internals->mode4.dedicated_queues.flow[slave_port_id] != NULL) {
> - rte_flow_destroy(slave_port_id,
> - internals->mode4.dedicated_queues.flow[slave_port_id],
> + internals->mode4.dedicated_queues.flow[member_port_id] != NULL) {
> + rte_flow_destroy(member_port_id,
> + internals->mode4.dedicated_queues.flow[member_port_id],
> &flow_error);
> - internals->mode4.dedicated_queues.flow[slave_port_id] = NULL;
> + internals->mode4.dedicated_queues.flow[member_port_id] = NULL;
> }
>
> - slave_eth_dev = &rte_eth_devices[slave_port_id];
> - slave_remove(internals, slave_eth_dev);
> - slave_eth_dev->data->dev_flags &= (~RTE_ETH_DEV_BONDING_MEMBER);
> + member_eth_dev = &rte_eth_devices[member_port_id];
> + member_remove(internals, member_eth_dev);
> + member_eth_dev->data->dev_flags &= (~RTE_ETH_DEV_BONDING_MEMBER);
>
> - /* first slave in the active list will be the primary by default,
> + /* first member in the active list will be the primary by default,
> * otherwise use first device in list */
> - if (internals->current_primary_port == slave_port_id) {
> - if (internals->active_slave_count > 0)
> - internals->current_primary_port = internals->active_slaves[0];
> - else if (internals->slave_count > 0)
> - internals->current_primary_port = internals->slaves[0].port_id;
> + if (internals->current_primary_port == member_port_id) {
> + if (internals->active_member_count > 0)
> + internals->current_primary_port = internals->active_members[0];
> + else if (internals->member_count > 0)
> + internals->current_primary_port = internals->members[0].port_id;
> else
> internals->primary_port = 0;
> - mac_address_slaves_update(bonded_eth_dev);
> + mac_address_members_update(bonded_eth_dev);
> }
>
> - if (internals->active_slave_count < 1) {
> - /* if no slaves are any longer attached to bonded device and MAC is not
> + if (internals->active_member_count < 1) {
> + /*
> + * If no members remain attached to the bonded device and the MAC is not
> * user defined then clear MAC of bonded device as it will be reset
> - * when a new slave is added */
> - if (internals->slave_count < 1 && !internals->user_defined_mac)
> + * when a new member is added.
> + */
> + if (internals->member_count < 1 && !internals->user_defined_mac)
> memset(rte_eth_devices[bonded_port_id].data->mac_addrs, 0,
> sizeof(*(rte_eth_devices[bonded_port_id].data->mac_addrs)));
> }
> - if (internals->slave_count == 0) {
> + if (internals->member_count == 0) {
> internals->rx_offload_capa = 0;
> internals->tx_offload_capa = 0;
> internals->rx_queue_offload_capa = 0;
> @@ -760,7 +766,7 @@ __eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
> }
>
> int
> -rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id)
> +rte_eth_bond_member_remove(uint16_t bonded_port_id, uint16_t member_port_id)
> {
> struct rte_eth_dev *bonded_eth_dev;
> struct bond_dev_private *internals;
> @@ -774,7 +780,7 @@ rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id)
>
> rte_spinlock_lock(&internals->lock);
>
> - retval = __eth_bond_slave_remove_lock_free(bonded_port_id, slave_port_id);
> + retval = __eth_bond_member_remove_lock_free(bonded_port_id, member_port_id);
>
> rte_spinlock_unlock(&internals->lock);
>
> @@ -791,7 +797,7 @@ rte_eth_bond_mode_set(uint16_t bonded_port_id, uint8_t mode)
>
> bonded_eth_dev = &rte_eth_devices[bonded_port_id];
>
> - if (check_for_master_bonded_ethdev(bonded_eth_dev) != 0 &&
> + if (check_for_main_bonded_ethdev(bonded_eth_dev) != 0 &&
> mode == BONDING_MODE_8023AD)
> return -1;
>
> @@ -812,7 +818,7 @@ rte_eth_bond_mode_get(uint16_t bonded_port_id)
> }
>
> int
> -rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id)
> +rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t member_port_id)
> {
> struct bond_dev_private *internals;
>
> @@ -821,13 +827,13 @@ rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id)
>
> internals = rte_eth_devices[bonded_port_id].data->dev_private;
>
> - if (valid_slave_port_id(internals, slave_port_id) != 0)
> + if (valid_member_port_id(internals, member_port_id) != 0)
> return -1;
>
> internals->user_defined_primary_port = 1;
> - internals->primary_port = slave_port_id;
> + internals->primary_port = member_port_id;
>
> - bond_ethdev_primary_set(internals, slave_port_id);
> + bond_ethdev_primary_set(internals, member_port_id);
>
> return 0;
> }
> @@ -842,14 +848,14 @@ rte_eth_bond_primary_get(uint16_t bonded_port_id)
>
> internals = rte_eth_devices[bonded_port_id].data->dev_private;
>
> - if (internals->slave_count < 1)
> + if (internals->member_count < 1)
> return -1;
>
> return internals->current_primary_port;
> }
>
> int
> -rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
> +rte_eth_bond_members_get(uint16_t bonded_port_id, uint16_t members[],
> uint16_t len)
> {
> struct bond_dev_private *internals;
> @@ -858,22 +864,22 @@ rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
> if (valid_bonded_port_id(bonded_port_id) != 0)
> return -1;
>
> - if (slaves == NULL)
> + if (members == NULL)
> return -1;
>
> internals = rte_eth_devices[bonded_port_id].data->dev_private;
>
> - if (internals->slave_count > len)
> + if (internals->member_count > len)
> return -1;
>
> - for (i = 0; i < internals->slave_count; i++)
> - slaves[i] = internals->slaves[i].port_id;
> + for (i = 0; i < internals->member_count; i++)
> + members[i] = internals->members[i].port_id;
>
> - return internals->slave_count;
> + return internals->member_count;
> }
>
> int
> -rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
> +rte_eth_bond_active_members_get(uint16_t bonded_port_id, uint16_t members[],
> uint16_t len)
> {
> struct bond_dev_private *internals;
> @@ -881,18 +887,18 @@ rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
> if (valid_bonded_port_id(bonded_port_id) != 0)
> return -1;
>
> - if (slaves == NULL)
> + if (members == NULL)
> return -1;
>
> internals = rte_eth_devices[bonded_port_id].data->dev_private;
>
> - if (internals->active_slave_count > len)
> + if (internals->active_member_count > len)
> return -1;
>
> - memcpy(slaves, internals->active_slaves,
> - internals->active_slave_count * sizeof(internals->active_slaves[0]));
> + memcpy(members, internals->active_members,
> + internals->active_member_count * sizeof(internals->active_members[0]));
>
> - return internals->active_slave_count;
> + return internals->active_member_count;
> }
>
> int
> @@ -914,9 +920,9 @@ rte_eth_bond_mac_address_set(uint16_t bonded_port_id,
>
> internals->user_defined_mac = 1;
>
> - /* Update all slave devices MACs*/
> - if (internals->slave_count > 0)
> - return mac_address_slaves_update(bonded_eth_dev);
> + /* Update all member devices MACs*/
> + if (internals->member_count > 0)
> + return mac_address_members_update(bonded_eth_dev);
>
> return 0;
> }
> @@ -935,30 +941,30 @@ rte_eth_bond_mac_address_reset(uint16_t bonded_port_id)
>
> internals->user_defined_mac = 0;
>
> - if (internals->slave_count > 0) {
> - int slave_port;
> - /* Get the primary slave location based on the primary port
> - * number as, while slave_add(), we will keep the primary
> - * slave based on slave_count,but not based on the primary port.
> + if (internals->member_count > 0) {
> + int member_port;
> + /* Get the primary member location based on the primary port
> + * number as, while member_add(), we will keep the primary
> + * member based on member_count, but not based on the primary port.
> */
> - for (slave_port = 0; slave_port < internals->slave_count;
> - slave_port++) {
> - if (internals->slaves[slave_port].port_id ==
> + for (member_port = 0; member_port < internals->member_count;
> + member_port++) {
> + if (internals->members[member_port].port_id ==
> internals->primary_port)
> break;
> }
>
> /* Set MAC Address of Bonded Device */
> if (mac_address_set(bonded_eth_dev,
> - &internals->slaves[slave_port].persisted_mac_addr)
> + &internals->members[member_port].persisted_mac_addr)
> != 0) {
> RTE_BOND_LOG(ERR, "Failed to set MAC address on bonded device");
> return -1;
> }
> - /* Update all slave devices MAC addresses */
> - return mac_address_slaves_update(bonded_eth_dev);
> + /* Update all member devices MAC addresses */
> + return mac_address_members_update(bonded_eth_dev);
> }
> - /* No need to update anything as no slaves present */
> + /* No need to update anything as no members present */
> return 0;
> }
>
> diff --git a/drivers/net/bonding/rte_eth_bond_args.c b/drivers/net/bonding/rte_eth_bond_args.c
> index c137efd55f..bdec5d61d4 100644
> --- a/drivers/net/bonding/rte_eth_bond_args.c
> +++ b/drivers/net/bonding/rte_eth_bond_args.c
> @@ -12,8 +12,8 @@
> #include "eth_bond_private.h"
>
> const char *pmd_bond_init_valid_arguments[] = {
> - PMD_BOND_SLAVE_PORT_KVARG,
> - PMD_BOND_PRIMARY_SLAVE_KVARG,
> + PMD_BOND_MEMBER_PORT_KVARG,
> + PMD_BOND_PRIMARY_MEMBER_KVARG,
> PMD_BOND_MODE_KVARG,
> PMD_BOND_XMIT_POLICY_KVARG,
> PMD_BOND_SOCKET_ID_KVARG,
> @@ -109,31 +109,31 @@ parse_port_id(const char *port_str)
> }
>
> int
> -bond_ethdev_parse_slave_port_kvarg(const char *key,
> +bond_ethdev_parse_member_port_kvarg(const char *key,
> const char *value, void *extra_args)
> {
> - struct bond_ethdev_slave_ports *slave_ports;
> + struct bond_ethdev_member_ports *member_ports;
>
> if (value == NULL || extra_args == NULL)
> return -1;
>
> - slave_ports = extra_args;
> + member_ports = extra_args;
>
> - if (strcmp(key, PMD_BOND_SLAVE_PORT_KVARG) == 0) {
> + if (strcmp(key, PMD_BOND_MEMBER_PORT_KVARG) == 0) {
> int port_id = parse_port_id(value);
> if (port_id < 0) {
> - RTE_BOND_LOG(ERR, "Invalid slave port value (%s) specified",
> + RTE_BOND_LOG(ERR, "Invalid member port value (%s) specified",
> value);
> return -1;
> } else
> - slave_ports->slaves[slave_ports->slave_count++] =
> + member_ports->members[member_ports->member_count++] =
> port_id;
> }
> return 0;
> }
>
> int
> -bond_ethdev_parse_slave_mode_kvarg(const char *key __rte_unused,
> +bond_ethdev_parse_member_mode_kvarg(const char *key __rte_unused,
> const char *value, void *extra_args)
> {
> uint8_t *mode;
> @@ -160,13 +160,13 @@ bond_ethdev_parse_slave_mode_kvarg(const char *key __rte_unused,
> case BONDING_MODE_ALB:
> return 0;
> default:
> - RTE_BOND_LOG(ERR, "Invalid slave mode value (%s) specified", value);
> + RTE_BOND_LOG(ERR, "Invalid member mode value (%s) specified", value);
> return -1;
> }
> }
>
> int
> -bond_ethdev_parse_slave_agg_mode_kvarg(const char *key __rte_unused,
> +bond_ethdev_parse_member_agg_mode_kvarg(const char *key __rte_unused,
> const char *value, void *extra_args)
> {
> uint8_t *agg_mode;
> @@ -227,19 +227,19 @@ bond_ethdev_parse_socket_id_kvarg(const char *key __rte_unused,
> }
>
> int
> -bond_ethdev_parse_primary_slave_port_id_kvarg(const char *key __rte_unused,
> +bond_ethdev_parse_primary_member_port_id_kvarg(const char *key __rte_unused,
> const char *value, void *extra_args)
> {
> - int primary_slave_port_id;
> + int primary_member_port_id;
>
> if (value == NULL || extra_args == NULL)
> return -1;
>
> - primary_slave_port_id = parse_port_id(value);
> - if (primary_slave_port_id < 0)
> + primary_member_port_id = parse_port_id(value);
> + if (primary_member_port_id < 0)
> return -1;
>
> - *(uint16_t *)extra_args = (uint16_t)primary_slave_port_id;
> + *(uint16_t *)extra_args = (uint16_t)primary_member_port_id;
>
> return 0;
> }
> diff --git a/drivers/net/bonding/rte_eth_bond_flow.c b/drivers/net/bonding/rte_eth_bond_flow.c
> index 65b77faae7..71a91675f7 100644
> --- a/drivers/net/bonding/rte_eth_bond_flow.c
> +++ b/drivers/net/bonding/rte_eth_bond_flow.c
> @@ -69,12 +69,12 @@ bond_flow_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
> int i;
> int ret;
>
> - for (i = 0; i < internals->slave_count; i++) {
> - ret = rte_flow_validate(internals->slaves[i].port_id, attr,
> + for (i = 0; i < internals->member_count; i++) {
> + ret = rte_flow_validate(internals->members[i].port_id, attr,
> patterns, actions, err);
> if (ret) {
> RTE_BOND_LOG(ERR, "Operation rte_flow_validate failed"
> - " for slave %d with error %d", i, ret);
> + " for member %d with error %d", i, ret);
> return ret;
> }
> }
> @@ -97,11 +97,11 @@ bond_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
> NULL, rte_strerror(ENOMEM));
> return NULL;
> }
> - for (i = 0; i < internals->slave_count; i++) {
> - flow->flows[i] = rte_flow_create(internals->slaves[i].port_id,
> + for (i = 0; i < internals->member_count; i++) {
> + flow->flows[i] = rte_flow_create(internals->members[i].port_id,
> attr, patterns, actions, err);
> if (unlikely(flow->flows[i] == NULL)) {
> - RTE_BOND_LOG(ERR, "Failed to create flow on slave %d",
> + RTE_BOND_LOG(ERR, "Failed to create flow on member %d",
> i);
> goto err;
> }
> @@ -109,10 +109,10 @@ bond_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
> TAILQ_INSERT_TAIL(&internals->flow_list, flow, next);
> return flow;
> err:
> - /* Destroy all slaves flows. */
> - for (i = 0; i < internals->slave_count; i++) {
> + /* Destroy all member flows. */
> + for (i = 0; i < internals->member_count; i++) {
> if (flow->flows[i] != NULL)
> - rte_flow_destroy(internals->slaves[i].port_id,
> + rte_flow_destroy(internals->members[i].port_id,
> flow->flows[i], err);
> }
> bond_flow_release(&flow);
> @@ -127,15 +127,15 @@ bond_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
> int i;
> int ret = 0;
>
> - for (i = 0; i < internals->slave_count; i++) {
> + for (i = 0; i < internals->member_count; i++) {
> int lret;
>
> if (unlikely(flow->flows[i] == NULL))
> continue;
> - lret = rte_flow_destroy(internals->slaves[i].port_id,
> + lret = rte_flow_destroy(internals->members[i].port_id,
> flow->flows[i], err);
> if (unlikely(lret != 0)) {
> - RTE_BOND_LOG(ERR, "Failed to destroy flow on slave %d:"
> + RTE_BOND_LOG(ERR, "Failed to destroy flow on member %d:"
> " %d", i, lret);
> ret = lret;
> }
> @@ -154,7 +154,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
> int ret = 0;
> int lret;
>
> - /* Destroy all bond flows from its slaves instead of flushing them to
> + /* Destroy all bond flows from its members instead of flushing them to
> * keep the LACP flow or any other external flows.
> */
> RTE_TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
> @@ -163,7 +163,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
> ret = lret;
> }
> if (unlikely(ret != 0))
> - RTE_BOND_LOG(ERR, "Failed to flush flow in all slaves");
> + RTE_BOND_LOG(ERR, "Failed to flush flow in all members");
> return ret;
> }
>
> @@ -174,26 +174,26 @@ bond_flow_query_count(struct rte_eth_dev *dev, struct rte_flow *flow,
> struct rte_flow_error *err)
> {
> struct bond_dev_private *internals = dev->data->dev_private;
> - struct rte_flow_query_count slave_count;
> + struct rte_flow_query_count member_count;
> int i;
> int ret;
>
> count->bytes = 0;
> count->hits = 0;
> - rte_memcpy(&slave_count, count, sizeof(slave_count));
> - for (i = 0; i < internals->slave_count; i++) {
> - ret = rte_flow_query(internals->slaves[i].port_id,
> + rte_memcpy(&member_count, count, sizeof(member_count));
> + for (i = 0; i < internals->member_count; i++) {
> + ret = rte_flow_query(internals->members[i].port_id,
> flow->flows[i], action,
> - &slave_count, err);
> + &member_count, err);
> if (unlikely(ret != 0)) {
> RTE_BOND_LOG(ERR, "Failed to query flow on"
> - " slave %d: %d", i, ret);
> + " member %d: %d", i, ret);
> return ret;
> }
> - count->bytes += slave_count.bytes;
> - count->hits += slave_count.hits;
> - slave_count.bytes = 0;
> - slave_count.hits = 0;
> + count->bytes += member_count.bytes;
> + count->hits += member_count.hits;
> + member_count.bytes = 0;
> + member_count.hits = 0;
> }
> return 0;
> }
> @@ -221,11 +221,11 @@ bond_flow_isolate(struct rte_eth_dev *dev, int set,
> int i;
> int ret;
>
> - for (i = 0; i < internals->slave_count; i++) {
> - ret = rte_flow_isolate(internals->slaves[i].port_id, set, err);
> + for (i = 0; i < internals->member_count; i++) {
> + ret = rte_flow_isolate(internals->members[i].port_id, set, err);
> if (unlikely(ret != 0)) {
> RTE_BOND_LOG(ERR, "Operation rte_flow_isolate failed"
> - " for slave %d with error %d", i, ret);
> + " for member %d with error %d", i, ret);
> internals->flow_isolated_valid = 0;
> return ret;
> }
> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
> index 73205f78f4..499c980db8 100644
> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
> @@ -61,33 +61,35 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
> struct bond_dev_private *internals;
>
> uint16_t num_rx_total = 0;
> - uint16_t slave_count;
> - uint16_t active_slave;
> + uint16_t member_count;
> + uint16_t active_member;
> int i;
>
> /* Cast to structure, containing bonded device's port id and queue id */
> struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)queue;
> internals = bd_rx_q->dev_private;
> - slave_count = internals->active_slave_count;
> - active_slave = bd_rx_q->active_slave;
> + member_count = internals->active_member_count;
> + active_member = bd_rx_q->active_member;
>
> - for (i = 0; i < slave_count && nb_pkts; i++) {
> - uint16_t num_rx_slave;
> + for (i = 0; i < member_count && nb_pkts; i++) {
> + uint16_t num_rx_member;
>
> - /* Offset of pointer to *bufs increases as packets are received
> - * from other slaves */
> - num_rx_slave =
> - rte_eth_rx_burst(internals->active_slaves[active_slave],
> + /*
> + * Offset of pointer to *bufs increases as packets are received
> + * from other members.
> + */
> + num_rx_member =
> + rte_eth_rx_burst(internals->active_members[active_member],
> bd_rx_q->queue_id,
> bufs + num_rx_total, nb_pkts);
> - num_rx_total += num_rx_slave;
> - nb_pkts -= num_rx_slave;
> - if (++active_slave >= slave_count)
> - active_slave = 0;
> + num_rx_total += num_rx_member;
> + nb_pkts -= num_rx_member;
> + if (++active_member >= member_count)
> + active_member = 0;
> }
>
> - if (++bd_rx_q->active_slave >= slave_count)
> - bd_rx_q->active_slave = 0;
> + if (++bd_rx_q->active_member >= member_count)
> + bd_rx_q->active_member = 0;
> return num_rx_total;
> }
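[Editor's note: the round-robin receive path in `bond_ethdev_rx_burst()` above advances a per-queue cursor over the active member list, wrapping at the end. A minimal standalone sketch of just that wrap-around advance (hypothetical helper name, not part of the patch):

```c
#include <stdint.h>

/*
 * Hypothetical helper mirroring the cursor advance used in
 * bond_ethdev_rx_burst(): step to the next active member index,
 * wrapping back to 0 past the end of the active member list.
 */
static uint16_t
next_active_member(uint16_t active_member, uint16_t member_count)
{
	if (++active_member >= member_count)
		active_member = 0;
	return active_member;
}
```

Because the cursor (`bd_rx_q->active_member`) persists across calls, successive bursts start polling from where the previous burst left off, spreading receive work evenly over the members.]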
>
> @@ -158,8 +160,8 @@ const struct rte_flow_attr flow_attr_8023ad = {
>
> int
> bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
> - uint16_t slave_port) {
> - struct rte_eth_dev_info slave_info;
> + uint16_t member_port) {
> + struct rte_eth_dev_info member_info;
> struct rte_flow_error error;
> struct bond_dev_private *internals = bond_dev->data->dev_private;
>
> @@ -177,29 +179,29 @@ bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
> }
> };
>
> - int ret = rte_flow_validate(slave_port, &flow_attr_8023ad,
> + int ret = rte_flow_validate(member_port, &flow_attr_8023ad,
> flow_item_8023ad, actions, &error);
> if (ret < 0) {
> - RTE_BOND_LOG(ERR, "%s: %s (slave_port=%d queue_id=%d)",
> - __func__, error.message, slave_port,
> + RTE_BOND_LOG(ERR, "%s: %s (member_port=%d queue_id=%d)",
> + __func__, error.message, member_port,
> internals->mode4.dedicated_queues.rx_qid);
> return -1;
> }
>
> - ret = rte_eth_dev_info_get(slave_port, &slave_info);
> + ret = rte_eth_dev_info_get(member_port, &member_info);
> if (ret != 0) {
> RTE_BOND_LOG(ERR,
> "%s: Error during getting device (port %u) info: %s\n",
> - __func__, slave_port, strerror(-ret));
> + __func__, member_port, strerror(-ret));
>
> return ret;
> }
>
> - if (slave_info.max_rx_queues < bond_dev->data->nb_rx_queues ||
> - slave_info.max_tx_queues < bond_dev->data->nb_tx_queues) {
> + if (member_info.max_rx_queues < bond_dev->data->nb_rx_queues ||
> + member_info.max_tx_queues < bond_dev->data->nb_tx_queues) {
> RTE_BOND_LOG(ERR,
> - "%s: Slave %d capabilities doesn't allow allocating additional queues",
> - __func__, slave_port);
> + "%s: Member %d capabilities doesn't allow allocating additional queues",
> + __func__, member_port);
> return -1;
> }
>
> @@ -214,8 +216,8 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
> uint16_t idx;
> int ret;
>
> - /* Verify if all slaves in bonding supports flow director and */
> - if (internals->slave_count > 0) {
> + /* Verify if all members in bonding support flow director */
> + if (internals->member_count > 0) {
> ret = rte_eth_dev_info_get(bond_dev->data->port_id, &bond_info);
> if (ret != 0) {
> RTE_BOND_LOG(ERR,
> @@ -229,9 +231,9 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
> internals->mode4.dedicated_queues.rx_qid = bond_info.nb_rx_queues;
> internals->mode4.dedicated_queues.tx_qid = bond_info.nb_tx_queues;
>
> - for (idx = 0; idx < internals->slave_count; idx++) {
> + for (idx = 0; idx < internals->member_count; idx++) {
> if (bond_ethdev_8023ad_flow_verify(bond_dev,
> - internals->slaves[idx].port_id) != 0)
> + internals->members[idx].port_id) != 0)
> return -1;
> }
> }
> @@ -240,7 +242,7 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
> }
>
> int
> -bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port) {
> +bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t member_port) {
>
> struct rte_flow_error error;
> struct bond_dev_private *internals = bond_dev->data->dev_private;
> @@ -258,12 +260,12 @@ bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port) {
> }
> };
>
> - internals->mode4.dedicated_queues.flow[slave_port] = rte_flow_create(slave_port,
> + internals->mode4.dedicated_queues.flow[member_port] = rte_flow_create(member_port,
> &flow_attr_8023ad, flow_item_8023ad, actions, &error);
> - if (internals->mode4.dedicated_queues.flow[slave_port] == NULL) {
> + if (internals->mode4.dedicated_queues.flow[member_port] == NULL) {
> RTE_BOND_LOG(ERR, "bond_ethdev_8023ad_flow_set: %s "
> - "(slave_port=%d queue_id=%d)",
> - error.message, slave_port,
> + "(member_port=%d queue_id=%d)",
> + error.message, member_port,
> internals->mode4.dedicated_queues.rx_qid);
> return -1;
> }
> @@ -304,10 +306,10 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
> const uint16_t ether_type_slow_be =
> rte_be_to_cpu_16(RTE_ETHER_TYPE_SLOW);
> uint16_t num_rx_total = 0; /* Total number of received packets */
> - uint16_t slaves[RTE_MAX_ETHPORTS];
> - uint16_t slave_count, idx;
> + uint16_t members[RTE_MAX_ETHPORTS];
> + uint16_t member_count, idx;
>
> - uint8_t collecting; /* current slave collecting status */
> + uint8_t collecting; /* current member collecting status */
> const uint8_t promisc = rte_eth_promiscuous_get(internals->port_id);
> const uint8_t allmulti = rte_eth_allmulticast_get(internals->port_id);
> uint8_t subtype;
> @@ -315,24 +317,24 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
> uint16_t j;
> uint16_t k;
>
> - /* Copy slave list to protect against slave up/down changes during tx
> + /* Copy member list to protect against member up/down changes during tx
> * bursting */
> - slave_count = internals->active_slave_count;
> - memcpy(slaves, internals->active_slaves,
> - sizeof(internals->active_slaves[0]) * slave_count);
> + member_count = internals->active_member_count;
> + memcpy(members, internals->active_members,
> + sizeof(internals->active_members[0]) * member_count);
>
> - idx = bd_rx_q->active_slave;
> - if (idx >= slave_count) {
> - bd_rx_q->active_slave = 0;
> + idx = bd_rx_q->active_member;
> + if (idx >= member_count) {
> + bd_rx_q->active_member = 0;
> idx = 0;
> }
> - for (i = 0; i < slave_count && num_rx_total < nb_pkts; i++) {
> + for (i = 0; i < member_count && num_rx_total < nb_pkts; i++) {
> j = num_rx_total;
> - collecting = ACTOR_STATE(&bond_mode_8023ad_ports[slaves[idx]],
> + collecting = ACTOR_STATE(&bond_mode_8023ad_ports[members[idx]],
> COLLECTING);
>
> - /* Read packets from this slave */
> - num_rx_total += rte_eth_rx_burst(slaves[idx], bd_rx_q->queue_id,
> + /* Read packets from this member */
> + num_rx_total += rte_eth_rx_burst(members[idx], bd_rx_q->queue_id,
> &bufs[num_rx_total], nb_pkts - num_rx_total);
>
> for (k = j; k < 2 && k < num_rx_total; k++)
> @@ -348,7 +350,7 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
>
> /* Remove packet from array if:
> * - it is slow packet but no dedicated rxq is present,
> - * - slave is not in collecting state,
> + * - member is not in collecting state,
> * - bonding interface is not in promiscuous mode and
> * packet address isn't in mac_addrs array:
> * - packet is unicast,
> @@ -367,7 +369,7 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
> !allmulti)))) {
> if (hdr->ether_type == ether_type_slow_be) {
> bond_mode_8023ad_handle_slow_pkt(
> - internals, slaves[idx], bufs[j]);
> + internals, members[idx], bufs[j]);
> } else
> rte_pktmbuf_free(bufs[j]);
>
> @@ -380,12 +382,12 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
> } else
> j++;
> }
> - if (unlikely(++idx == slave_count))
> + if (unlikely(++idx == member_count))
> idx = 0;
> }
>
> - if (++bd_rx_q->active_slave >= slave_count)
> - bd_rx_q->active_slave = 0;
> + if (++bd_rx_q->active_member >= member_count)
> + bd_rx_q->active_member = 0;
>
> return num_rx_total;
> }
> @@ -406,7 +408,7 @@ bond_ethdev_rx_burst_8023ad_fast_queue(void *queue, struct rte_mbuf **bufs,
>
> #if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
> uint32_t burstnumberRX;
> -uint32_t burstnumberTX;
> +uint32_t burst_number_TX;
>
> #ifdef RTE_LIBRTE_BOND_DEBUG_ALB
>
> @@ -583,59 +585,61 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
> struct bond_dev_private *internals;
> struct bond_tx_queue *bd_tx_q;
>
> - struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS][nb_pkts];
> - uint16_t slave_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
> + struct rte_mbuf *member_bufs[RTE_MAX_ETHPORTS][nb_pkts];
> + uint16_t member_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
>
> - uint16_t num_of_slaves;
> - uint16_t slaves[RTE_MAX_ETHPORTS];
> + uint16_t num_of_members;
> + uint16_t members[RTE_MAX_ETHPORTS];
>
> - uint16_t num_tx_total = 0, num_tx_slave;
> + uint16_t num_tx_total = 0, num_tx_member;
>
> - static int slave_idx = 0;
> - int i, cslave_idx = 0, tx_fail_total = 0;
> + static int member_idx;
> + int i, cmember_idx = 0, tx_fail_total = 0;
>
> bd_tx_q = (struct bond_tx_queue *)queue;
> internals = bd_tx_q->dev_private;
>
> - /* Copy slave list to protect against slave up/down changes during tx
> + /* Copy member list to protect against member up/down changes during tx
> * bursting */
> - num_of_slaves = internals->active_slave_count;
> - memcpy(slaves, internals->active_slaves,
> - sizeof(internals->active_slaves[0]) * num_of_slaves);
> + num_of_members = internals->active_member_count;
> + memcpy(members, internals->active_members,
> + sizeof(internals->active_members[0]) * num_of_members);
>
> - if (num_of_slaves < 1)
> + if (num_of_members < 1)
> return num_tx_total;
>
> - /* Populate slaves mbuf with which packets are to be sent on it */
> + /* Populate each member's mbuf array with the packets to be sent on it */
> for (i = 0; i < nb_pkts; i++) {
> - cslave_idx = (slave_idx + i) % num_of_slaves;
> - slave_bufs[cslave_idx][(slave_nb_pkts[cslave_idx])++] = bufs[i];
> + cmember_idx = (member_idx + i) % num_of_members;
> + member_bufs[cmember_idx][(member_nb_pkts[cmember_idx])++] = bufs[i];
> }
>
> - /* increment current slave index so the next call to tx burst starts on the
> - * next slave */
> - slave_idx = ++cslave_idx;
> + /*
> + * increment current member index so the next call to tx burst starts on the
> + * next member.
> + */
> + member_idx = ++cmember_idx;
>
> - /* Send packet burst on each slave device */
> - for (i = 0; i < num_of_slaves; i++) {
> - if (slave_nb_pkts[i] > 0) {
> - num_tx_slave = rte_eth_tx_prepare(slaves[i],
> - bd_tx_q->queue_id, slave_bufs[i],
> - slave_nb_pkts[i]);
> - num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
> - slave_bufs[i], num_tx_slave);
> + /* Send packet burst on each member device */
> + for (i = 0; i < num_of_members; i++) {
> + if (member_nb_pkts[i] > 0) {
> + num_tx_member = rte_eth_tx_prepare(members[i],
> + bd_tx_q->queue_id, member_bufs[i],
> + member_nb_pkts[i]);
> + num_tx_member = rte_eth_tx_burst(members[i], bd_tx_q->queue_id,
> + member_bufs[i], num_tx_member);
>
> /* if tx burst fails move packets to end of bufs */
> - if (unlikely(num_tx_slave < slave_nb_pkts[i])) {
> - int tx_fail_slave = slave_nb_pkts[i] - num_tx_slave;
> + if (unlikely(num_tx_member < member_nb_pkts[i])) {
> + int tx_fail_member = member_nb_pkts[i] - num_tx_member;
>
> - tx_fail_total += tx_fail_slave;
> + tx_fail_total += tx_fail_member;
>
> memcpy(&bufs[nb_pkts - tx_fail_total],
> - &slave_bufs[i][num_tx_slave],
> - tx_fail_slave * sizeof(bufs[0]));
> + &member_bufs[i][num_tx_member],
> + tx_fail_member * sizeof(bufs[0]));
> }
> - num_tx_total += num_tx_slave;
> + num_tx_total += num_tx_member;
> }
> }
>
> @@ -653,7 +657,7 @@ bond_ethdev_tx_burst_active_backup(void *queue,
> bd_tx_q = (struct bond_tx_queue *)queue;
> internals = bd_tx_q->dev_private;
>
> - if (internals->active_slave_count < 1)
> + if (internals->active_member_count < 1)
> return 0;
>
> nb_prep_pkts = rte_eth_tx_prepare(internals->current_primary_port,
> @@ -699,7 +703,7 @@ ipv6_hash(struct rte_ipv6_hdr *ipv6_hdr)
>
> void
> burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
> - uint16_t slave_count, uint16_t *slaves)
> + uint16_t member_count, uint16_t *members)
> {
> struct rte_ether_hdr *eth_hdr;
> uint32_t hash;
> @@ -710,13 +714,13 @@ burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
>
> hash = ether_hash(eth_hdr);
>
> - slaves[i] = (hash ^= hash >> 8) % slave_count;
> + members[i] = (hash ^= hash >> 8) % member_count;
> }
> }
>
> void
> burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
> - uint16_t slave_count, uint16_t *slaves)
> + uint16_t member_count, uint16_t *members)
> {
> uint16_t i;
> struct rte_ether_hdr *eth_hdr;
> @@ -748,13 +752,13 @@ burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
> hash ^= hash >> 16;
> hash ^= hash >> 8;
>
> - slaves[i] = hash % slave_count;
> + members[i] = hash % member_count;
> }
> }
>
> void
> burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
> - uint16_t slave_count, uint16_t *slaves)
> + uint16_t member_count, uint16_t *members)
> {
> struct rte_ether_hdr *eth_hdr;
> uint16_t proto;
> @@ -822,30 +826,29 @@ burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
> hash ^= hash >> 16;
> hash ^= hash >> 8;
>
> - slaves[i] = hash % slave_count;
> + members[i] = hash % member_count;
> }
> }
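[Editor's note: all three `burst_xmit_*_hash()` helpers above end the same way: the 32-bit packet hash is folded down by xor-shifting and reduced modulo the member count to pick an egress member. A standalone sketch of that final step (hypothetical function name; the fold widths match the l23/l34 variants in the patch):

```c
#include <stdint.h>

/*
 * Sketch of the fold-and-modulo member selection used by the
 * burst_xmit_l23_hash()/burst_xmit_l34_hash() helpers: xor-fold the
 * upper bits of a 32-bit hash into the lower bits, then reduce
 * modulo the member count. member_count must be non-zero.
 */
static uint16_t
hash_to_member(uint32_t hash, uint16_t member_count)
{
	hash ^= hash >> 16;
	hash ^= hash >> 8;
	return (uint16_t)(hash % member_count);
}
```

The xor-fold mixes high-order entropy into the low bits before the modulo, so member selection does not depend only on the least significant bits of the hash.]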
>
> -struct bwg_slave {
> +struct bwg_member {
> uint64_t bwg_left_int;
> uint64_t bwg_left_remainder;
> - uint16_t slave;
> + uint16_t member;
> };
>
> void
> -bond_tlb_activate_slave(struct bond_dev_private *internals) {
> +bond_tlb_activate_member(struct bond_dev_private *internals) {
> int i;
>
> - for (i = 0; i < internals->active_slave_count; i++) {
> - tlb_last_obytets[internals->active_slaves[i]] = 0;
> - }
> + for (i = 0; i < internals->active_member_count; i++)
> + tlb_last_obytets[internals->active_members[i]] = 0;
> }
>
> static int
> bandwidth_cmp(const void *a, const void *b)
> {
> - const struct bwg_slave *bwg_a = a;
> - const struct bwg_slave *bwg_b = b;
> + const struct bwg_member *bwg_a = a;
> + const struct bwg_member *bwg_b = b;
> int64_t diff = (int64_t)bwg_b->bwg_left_int - (int64_t)bwg_a->bwg_left_int;
> int64_t diff2 = (int64_t)bwg_b->bwg_left_remainder -
> (int64_t)bwg_a->bwg_left_remainder;
> @@ -863,14 +866,14 @@ bandwidth_cmp(const void *a, const void *b)
>
> static void
> bandwidth_left(uint16_t port_id, uint64_t load, uint8_t update_idx,
> - struct bwg_slave *bwg_slave)
> + struct bwg_member *bwg_member)
> {
> struct rte_eth_link link_status;
> int ret;
>
> ret = rte_eth_link_get_nowait(port_id, &link_status);
> if (ret < 0) {
> - RTE_BOND_LOG(ERR, "Slave (port %u) link get failed: %s",
> + RTE_BOND_LOG(ERR, "Member (port %u) link get failed: %s",
> port_id, rte_strerror(-ret));
> return;
> }
> @@ -878,51 +881,51 @@ bandwidth_left(uint16_t port_id, uint64_t load, uint8_t update_idx,
> if (link_bwg == 0)
> return;
> link_bwg = link_bwg * (update_idx+1) * REORDER_PERIOD_MS;
> - bwg_slave->bwg_left_int = (link_bwg - 1000*load) / link_bwg;
> - bwg_slave->bwg_left_remainder = (link_bwg - 1000*load) % link_bwg;
> + bwg_member->bwg_left_int = (link_bwg - 1000 * load) / link_bwg;
> + bwg_member->bwg_left_remainder = (link_bwg - 1000 * load) % link_bwg;
> }
>
> static void
> -bond_ethdev_update_tlb_slave_cb(void *arg)
> +bond_ethdev_update_tlb_member_cb(void *arg)
> {
> struct bond_dev_private *internals = arg;
> - struct rte_eth_stats slave_stats;
> - struct bwg_slave bwg_array[RTE_MAX_ETHPORTS];
> - uint16_t slave_count;
> + struct rte_eth_stats member_stats;
> + struct bwg_member bwg_array[RTE_MAX_ETHPORTS];
> + uint16_t member_count;
> uint64_t tx_bytes;
>
> uint8_t update_stats = 0;
> - uint16_t slave_id;
> + uint16_t member_id;
> uint16_t i;
>
> - internals->slave_update_idx++;
> + internals->member_update_idx++;
>
>
> - if (internals->slave_update_idx >= REORDER_PERIOD_MS)
> + if (internals->member_update_idx >= REORDER_PERIOD_MS)
> update_stats = 1;
>
> - for (i = 0; i < internals->active_slave_count; i++) {
> - slave_id = internals->active_slaves[i];
> - rte_eth_stats_get(slave_id, &slave_stats);
> - tx_bytes = slave_stats.obytes - tlb_last_obytets[slave_id];
> - bandwidth_left(slave_id, tx_bytes,
> - internals->slave_update_idx, &bwg_array[i]);
> - bwg_array[i].slave = slave_id;
> + for (i = 0; i < internals->active_member_count; i++) {
> + member_id = internals->active_members[i];
> + rte_eth_stats_get(member_id, &member_stats);
> + tx_bytes = member_stats.obytes - tlb_last_obytets[member_id];
> + bandwidth_left(member_id, tx_bytes,
> + internals->member_update_idx, &bwg_array[i]);
> + bwg_array[i].member = member_id;
>
> if (update_stats) {
> - tlb_last_obytets[slave_id] = slave_stats.obytes;
> + tlb_last_obytets[member_id] = member_stats.obytes;
> }
> }
>
> if (update_stats == 1)
> - internals->slave_update_idx = 0;
> + internals->member_update_idx = 0;
>
> - slave_count = i;
> - qsort(bwg_array, slave_count, sizeof(bwg_array[0]), bandwidth_cmp);
> - for (i = 0; i < slave_count; i++)
> - internals->tlb_slaves_order[i] = bwg_array[i].slave;
> + member_count = i;
> + qsort(bwg_array, member_count, sizeof(bwg_array[0]), bandwidth_cmp);
> + for (i = 0; i < member_count; i++)
> + internals->tlb_members_order[i] = bwg_array[i].member;
>
> - rte_eal_alarm_set(REORDER_PERIOD_MS * 1000, bond_ethdev_update_tlb_slave_cb,
> + rte_eal_alarm_set(REORDER_PERIOD_MS * 1000, bond_ethdev_update_tlb_member_cb,
> (struct bond_dev_private *)internals);
> }
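[Editor's note: the TLB reorder callback above sorts members by remaining bandwidth, descending (integer part first, remainder as tie-breaker), so the least-loaded member is tried first on transmit. A simplified sketch of that ordering rule, assuming a struct shaped like the patch's `bwg_member` (the comparator below is a hypothetical equivalent of `bandwidth_cmp()`, written with explicit comparisons instead of subtraction):

```c
#include <stdint.h>
#include <stdlib.h>

/* Mirrors the bwg_member struct introduced by the patch. */
struct bwg_entry {
	uint64_t bwg_left_int;
	uint64_t bwg_left_remainder;
	uint16_t member;
};

/*
 * qsort comparator: order entries by remaining bandwidth, descending.
 * Integer part decides first; the fractional remainder breaks ties.
 */
static int
bwg_cmp(const void *a, const void *b)
{
	const struct bwg_entry *x = a;
	const struct bwg_entry *y = b;

	if (y->bwg_left_int != x->bwg_left_int)
		return y->bwg_left_int > x->bwg_left_int ? 1 : -1;
	if (y->bwg_left_remainder != x->bwg_left_remainder)
		return y->bwg_left_remainder > x->bwg_left_remainder ? 1 : -1;
	return 0;
}
```

Usage follows the callback above: `qsort(bwg_array, member_count, sizeof(bwg_array[0]), bwg_cmp);` then copy the sorted `member` ids into `tlb_members_order`.]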
>
> @@ -937,29 +940,29 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
> uint16_t num_tx_total = 0, num_tx_prep;
> uint16_t i, j;
>
> - uint16_t num_of_slaves = internals->active_slave_count;
> - uint16_t slaves[RTE_MAX_ETHPORTS];
> + uint16_t num_of_members = internals->active_member_count;
> + uint16_t members[RTE_MAX_ETHPORTS];
>
> struct rte_ether_hdr *ether_hdr;
> - struct rte_ether_addr primary_slave_addr;
> - struct rte_ether_addr active_slave_addr;
> + struct rte_ether_addr primary_member_addr;
> + struct rte_ether_addr active_member_addr;
>
> - if (num_of_slaves < 1)
> + if (num_of_members < 1)
> return num_tx_total;
>
> - memcpy(slaves, internals->tlb_slaves_order,
> - sizeof(internals->tlb_slaves_order[0]) * num_of_slaves);
> + memcpy(members, internals->tlb_members_order,
> + sizeof(internals->tlb_members_order[0]) * num_of_members);
>
>
> - rte_ether_addr_copy(primary_port->data->mac_addrs, &primary_slave_addr);
> + rte_ether_addr_copy(primary_port->data->mac_addrs, &primary_member_addr);
>
> if (nb_pkts > 3) {
> for (i = 0; i < 3; i++)
> rte_prefetch0(rte_pktmbuf_mtod(bufs[i], void*));
> }
>
> - for (i = 0; i < num_of_slaves; i++) {
> - rte_eth_macaddr_get(slaves[i], &active_slave_addr);
> + for (i = 0; i < num_of_members; i++) {
> + rte_eth_macaddr_get(members[i], &active_member_addr);
> for (j = num_tx_total; j < nb_pkts; j++) {
> if (j + 3 < nb_pkts)
> rte_prefetch0(rte_pktmbuf_mtod(bufs[j+3], void*));
> @@ -967,17 +970,18 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
> ether_hdr = rte_pktmbuf_mtod(bufs[j],
> struct rte_ether_hdr *);
> if (rte_is_same_ether_addr(ðer_hdr->src_addr,
> - &primary_slave_addr))
> - rte_ether_addr_copy(&active_slave_addr,
> + &primary_member_addr))
> + rte_ether_addr_copy(&active_member_addr,
> ðer_hdr->src_addr);
> #if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
> - mode6_debug("TX IPv4:", ether_hdr, slaves[i], &burstnumberTX);
> + mode6_debug("TX IPv4:", ether_hdr, members[i],
> + &burst_number_TX);
> #endif
> }
>
> - num_tx_prep = rte_eth_tx_prepare(slaves[i], bd_tx_q->queue_id,
> + num_tx_prep = rte_eth_tx_prepare(members[i], bd_tx_q->queue_id,
> bufs + num_tx_total, nb_pkts - num_tx_total);
> - num_tx_total += rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
> + num_tx_total += rte_eth_tx_burst(members[i], bd_tx_q->queue_id,
> bufs + num_tx_total, num_tx_prep);
>
> if (num_tx_total == nb_pkts)
> @@ -990,13 +994,13 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
> void
> bond_tlb_disable(struct bond_dev_private *internals)
> {
> - rte_eal_alarm_cancel(bond_ethdev_update_tlb_slave_cb, internals);
> + rte_eal_alarm_cancel(bond_ethdev_update_tlb_member_cb, internals);
> }
>
> void
> bond_tlb_enable(struct bond_dev_private *internals)
> {
> - bond_ethdev_update_tlb_slave_cb(internals);
> + bond_ethdev_update_tlb_member_cb(internals);
> }
>
> static uint16_t
> @@ -1011,11 +1015,11 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
> struct client_data *client_info;
>
> /*
> - * We create transmit buffers for every slave and one additional to send
> +	 * We create transmit buffers for every member and one additional to send
> 	 * through TLB. In the worst case every packet will be sent on one port.
> */
> - struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS + 1][nb_pkts];
> - uint16_t slave_bufs_pkts[RTE_MAX_ETHPORTS + 1] = { 0 };
> + struct rte_mbuf *member_bufs[RTE_MAX_ETHPORTS + 1][nb_pkts];
> + uint16_t member_bufs_pkts[RTE_MAX_ETHPORTS + 1] = { 0 };
>
> /*
> * We create separate transmit buffers for update packets as they won't
> @@ -1029,7 +1033,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>
> uint16_t num_send, num_not_send = 0;
> uint16_t num_tx_total = 0;
> - uint16_t slave_idx;
> + uint16_t member_idx;
>
> int i, j;
>
> @@ -1040,19 +1044,19 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
> offset = get_vlan_offset(eth_h, ðer_type);
>
> if (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_ARP)) {
> - slave_idx = bond_mode_alb_arp_xmit(eth_h, offset, internals);
> + member_idx = bond_mode_alb_arp_xmit(eth_h, offset, internals);
>
> /* Change src mac in eth header */
> - rte_eth_macaddr_get(slave_idx, ð_h->src_addr);
> + rte_eth_macaddr_get(member_idx, ð_h->src_addr);
>
> - /* Add packet to slave tx buffer */
> - slave_bufs[slave_idx][slave_bufs_pkts[slave_idx]] = bufs[i];
> - slave_bufs_pkts[slave_idx]++;
> + /* Add packet to member tx buffer */
> + member_bufs[member_idx][member_bufs_pkts[member_idx]] = bufs[i];
> + member_bufs_pkts[member_idx]++;
> } else {
> /* If packet is not ARP, send it with TLB policy */
> - slave_bufs[RTE_MAX_ETHPORTS][slave_bufs_pkts[RTE_MAX_ETHPORTS]] =
> + member_bufs[RTE_MAX_ETHPORTS][member_bufs_pkts[RTE_MAX_ETHPORTS]] =
> bufs[i];
> - slave_bufs_pkts[RTE_MAX_ETHPORTS]++;
> + member_bufs_pkts[RTE_MAX_ETHPORTS]++;
> }
> }
>
> @@ -1062,7 +1066,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
> client_info = &internals->mode6.client_table[i];
>
> if (client_info->in_use) {
> - /* Allocate new packet to send ARP update on current slave */
> + /* Allocate new packet to send ARP update on current member */
> upd_pkt = rte_pktmbuf_alloc(internals->mode6.mempool);
> if (upd_pkt == NULL) {
> RTE_BOND_LOG(ERR,
> @@ -1076,44 +1080,44 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
> upd_pkt->data_len = pkt_size;
> upd_pkt->pkt_len = pkt_size;
>
> - slave_idx = bond_mode_alb_arp_upd(client_info, upd_pkt,
> + member_idx = bond_mode_alb_arp_upd(client_info, upd_pkt,
> internals);
>
> /* Add packet to update tx buffer */
> - update_bufs[slave_idx][update_bufs_pkts[slave_idx]] = upd_pkt;
> - update_bufs_pkts[slave_idx]++;
> + update_bufs[member_idx][update_bufs_pkts[member_idx]] = upd_pkt;
> + update_bufs_pkts[member_idx]++;
> }
> }
> internals->mode6.ntt = 0;
> }
>
> - /* Send ARP packets on proper slaves */
> + /* Send ARP packets on proper members */
> for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
> - if (slave_bufs_pkts[i] > 0) {
> + if (member_bufs_pkts[i] > 0) {
> num_send = rte_eth_tx_prepare(i, bd_tx_q->queue_id,
> - slave_bufs[i], slave_bufs_pkts[i]);
> + member_bufs[i], member_bufs_pkts[i]);
> num_send = rte_eth_tx_burst(i, bd_tx_q->queue_id,
> - slave_bufs[i], num_send);
> - for (j = 0; j < slave_bufs_pkts[i] - num_send; j++) {
> + member_bufs[i], num_send);
> + for (j = 0; j < member_bufs_pkts[i] - num_send; j++) {
> bufs[nb_pkts - 1 - num_not_send - j] =
> - slave_bufs[i][nb_pkts - 1 - j];
> + member_bufs[i][nb_pkts - 1 - j];
> }
>
> num_tx_total += num_send;
> - num_not_send += slave_bufs_pkts[i] - num_send;
> + num_not_send += member_bufs_pkts[i] - num_send;
>
> #if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
> /* Print TX stats including update packets */
> - for (j = 0; j < slave_bufs_pkts[i]; j++) {
> - eth_h = rte_pktmbuf_mtod(slave_bufs[i][j],
> + for (j = 0; j < member_bufs_pkts[i]; j++) {
> + eth_h = rte_pktmbuf_mtod(member_bufs[i][j],
> struct rte_ether_hdr *);
> - mode6_debug("TX ARP:", eth_h, i, &burstnumberTX);
> + mode6_debug("TX ARP:", eth_h, i, &burst_number_TX);
> }
> #endif
> }
> }
>
> - /* Send update packets on proper slaves */
> + /* Send update packets on proper members */
> for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
> if (update_bufs_pkts[i] > 0) {
> num_send = rte_eth_tx_prepare(i, bd_tx_q->queue_id,
> @@ -1127,21 +1131,21 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
> for (j = 0; j < update_bufs_pkts[i]; j++) {
> eth_h = rte_pktmbuf_mtod(update_bufs[i][j],
> struct rte_ether_hdr *);
> - mode6_debug("TX ARPupd:", eth_h, i, &burstnumberTX);
> + mode6_debug("TX ARPupd:", eth_h, i, &burst_number_TX);
> }
> #endif
> }
> }
>
> /* Send non-ARP packets using tlb policy */
> - if (slave_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
> + if (member_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
> num_send = bond_ethdev_tx_burst_tlb(queue,
> - slave_bufs[RTE_MAX_ETHPORTS],
> - slave_bufs_pkts[RTE_MAX_ETHPORTS]);
> + member_bufs[RTE_MAX_ETHPORTS],
> + member_bufs_pkts[RTE_MAX_ETHPORTS]);
>
> - for (j = 0; j < slave_bufs_pkts[RTE_MAX_ETHPORTS]; j++) {
> + for (j = 0; j < member_bufs_pkts[RTE_MAX_ETHPORTS]; j++) {
> bufs[nb_pkts - 1 - num_not_send - j] =
> - slave_bufs[RTE_MAX_ETHPORTS][nb_pkts - 1 - j];
> + member_bufs[RTE_MAX_ETHPORTS][nb_pkts - 1 - j];
> }
>
> num_tx_total += num_send;
> @@ -1152,59 +1156,59 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>
> static inline uint16_t
> tx_burst_balance(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
> - uint16_t *slave_port_ids, uint16_t slave_count)
> + uint16_t *member_port_ids, uint16_t member_count)
> {
> struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
> struct bond_dev_private *internals = bd_tx_q->dev_private;
>
> - /* Array to sort mbufs for transmission on each slave into */
> - struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS][nb_bufs];
> - /* Number of mbufs for transmission on each slave */
> - uint16_t slave_nb_bufs[RTE_MAX_ETHPORTS] = { 0 };
> - /* Mapping array generated by hash function to map mbufs to slaves */
> - uint16_t bufs_slave_port_idxs[nb_bufs];
> + /* Array to sort mbufs for transmission on each member into */
> + struct rte_mbuf *member_bufs[RTE_MAX_ETHPORTS][nb_bufs];
> + /* Number of mbufs for transmission on each member */
> + uint16_t member_nb_bufs[RTE_MAX_ETHPORTS] = { 0 };
> + /* Mapping array generated by hash function to map mbufs to members */
> + uint16_t bufs_member_port_idxs[nb_bufs];
>
> - uint16_t slave_tx_count;
> + uint16_t member_tx_count;
> uint16_t total_tx_count = 0, total_tx_fail_count = 0;
>
> uint16_t i;
>
> /*
> - * Populate slaves mbuf with the packets which are to be sent on it
> - * selecting output slave using hash based on xmit policy
> +	 * Populate each member's mbuf array with the packets to be sent on it,
> +	 * selecting the output member with a hash based on the xmit policy
> */
> - internals->burst_xmit_hash(bufs, nb_bufs, slave_count,
> - bufs_slave_port_idxs);
> + internals->burst_xmit_hash(bufs, nb_bufs, member_count,
> + bufs_member_port_idxs);
>
> for (i = 0; i < nb_bufs; i++) {
> - /* Populate slave mbuf arrays with mbufs for that slave. */
> - uint16_t slave_idx = bufs_slave_port_idxs[i];
> + /* Populate member mbuf arrays with mbufs for that member. */
> + uint16_t member_idx = bufs_member_port_idxs[i];
>
> - slave_bufs[slave_idx][slave_nb_bufs[slave_idx]++] = bufs[i];
> + member_bufs[member_idx][member_nb_bufs[member_idx]++] = bufs[i];
> }
>
> - /* Send packet burst on each slave device */
> - for (i = 0; i < slave_count; i++) {
> - if (slave_nb_bufs[i] == 0)
> + /* Send packet burst on each member device */
> + for (i = 0; i < member_count; i++) {
> + if (member_nb_bufs[i] == 0)
> continue;
>
> - slave_tx_count = rte_eth_tx_prepare(slave_port_ids[i],
> - bd_tx_q->queue_id, slave_bufs[i],
> - slave_nb_bufs[i]);
> - slave_tx_count = rte_eth_tx_burst(slave_port_ids[i],
> - bd_tx_q->queue_id, slave_bufs[i],
> - slave_tx_count);
> + member_tx_count = rte_eth_tx_prepare(member_port_ids[i],
> + bd_tx_q->queue_id, member_bufs[i],
> + member_nb_bufs[i]);
> + member_tx_count = rte_eth_tx_burst(member_port_ids[i],
> + bd_tx_q->queue_id, member_bufs[i],
> + member_tx_count);
>
> - total_tx_count += slave_tx_count;
> + total_tx_count += member_tx_count;
>
> /* If tx burst fails move packets to end of bufs */
> - if (unlikely(slave_tx_count < slave_nb_bufs[i])) {
> - int slave_tx_fail_count = slave_nb_bufs[i] -
> - slave_tx_count;
> - total_tx_fail_count += slave_tx_fail_count;
> + if (unlikely(member_tx_count < member_nb_bufs[i])) {
> + int member_tx_fail_count = member_nb_bufs[i] -
> + member_tx_count;
> + total_tx_fail_count += member_tx_fail_count;
> memcpy(&bufs[nb_bufs - total_tx_fail_count],
> - &slave_bufs[i][slave_tx_count],
> - slave_tx_fail_count * sizeof(bufs[0]));
> + &member_bufs[i][member_tx_count],
> + member_tx_fail_count * sizeof(bufs[0]));
> }
> }
>
> @@ -1218,23 +1222,23 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
> struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
> struct bond_dev_private *internals = bd_tx_q->dev_private;
>
> - uint16_t slave_port_ids[RTE_MAX_ETHPORTS];
> - uint16_t slave_count;
> + uint16_t member_port_ids[RTE_MAX_ETHPORTS];
> + uint16_t member_count;
>
> if (unlikely(nb_bufs == 0))
> return 0;
>
> - /* Copy slave list to protect against slave up/down changes during tx
> + /* Copy member list to protect against member up/down changes during tx
> * bursting
> */
> - slave_count = internals->active_slave_count;
> - if (unlikely(slave_count < 1))
> + member_count = internals->active_member_count;
> + if (unlikely(member_count < 1))
> return 0;
>
> - memcpy(slave_port_ids, internals->active_slaves,
> - sizeof(slave_port_ids[0]) * slave_count);
> - return tx_burst_balance(queue, bufs, nb_bufs, slave_port_ids,
> - slave_count);
> + memcpy(member_port_ids, internals->active_members,
> + sizeof(member_port_ids[0]) * member_count);
> + return tx_burst_balance(queue, bufs, nb_bufs, member_port_ids,
> + member_count);
> }
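The balance policy above can be sketched independently of the driver. This is a hypothetical, self-contained stand-in (not the DPDK API): `xmit_hash` plays the role of `internals->burst_xmit_hash`, and `balance_bucket` mirrors how `tx_burst_balance` fills `member_bufs[]`/`member_nb_bufs[]` before bursting each bucket.

```c
#include <assert.h>
#include <stdint.h>

#define MAX_MEMBERS 4

/* Illustrative stand-in for the xmit hash: map each packet to an output
 * member by hashing an opaque per-packet key modulo the member count. */
static uint16_t xmit_hash(uint32_t pkt_key, uint16_t member_count)
{
	return (uint16_t)(pkt_key % member_count);
}

/* Bucket nb_pkts packet keys into per-member counters, mirroring how the
 * balance path sorts mbufs into per-member arrays before transmission. */
static void balance_bucket(const uint32_t *pkt_keys, uint16_t nb_pkts,
			   uint16_t member_count, uint16_t *member_nb_bufs)
{
	for (uint16_t i = 0; i < nb_pkts; i++)
		member_nb_bufs[xmit_hash(pkt_keys[i], member_count)]++;
}
```

With a uniform key distribution the buckets come out balanced; the real driver then calls tx-prepare and tx-burst once per non-empty bucket.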
>
> static inline uint16_t
> @@ -1244,31 +1248,31 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
> struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
> struct bond_dev_private *internals = bd_tx_q->dev_private;
>
> - uint16_t slave_port_ids[RTE_MAX_ETHPORTS];
> - uint16_t slave_count;
> + uint16_t member_port_ids[RTE_MAX_ETHPORTS];
> + uint16_t member_count;
>
> - uint16_t dist_slave_port_ids[RTE_MAX_ETHPORTS];
> - uint16_t dist_slave_count;
> + uint16_t dist_member_port_ids[RTE_MAX_ETHPORTS];
> + uint16_t dist_member_count;
>
> - uint16_t slave_tx_count;
> + uint16_t member_tx_count;
>
> uint16_t i;
>
> - /* Copy slave list to protect against slave up/down changes during tx
> + /* Copy member list to protect against member up/down changes during tx
> * bursting */
> - slave_count = internals->active_slave_count;
> - if (unlikely(slave_count < 1))
> + member_count = internals->active_member_count;
> + if (unlikely(member_count < 1))
> return 0;
>
> - memcpy(slave_port_ids, internals->active_slaves,
> - sizeof(slave_port_ids[0]) * slave_count);
> + memcpy(member_port_ids, internals->active_members,
> + sizeof(member_port_ids[0]) * member_count);
>
> if (dedicated_txq)
> goto skip_tx_ring;
>
> /* Check for LACP control packets and send if available */
> - for (i = 0; i < slave_count; i++) {
> - struct port *port = &bond_mode_8023ad_ports[slave_port_ids[i]];
> + for (i = 0; i < member_count; i++) {
> + struct port *port = &bond_mode_8023ad_ports[member_port_ids[i]];
> struct rte_mbuf *ctrl_pkt = NULL;
>
> if (likely(rte_ring_empty(port->tx_ring)))
> @@ -1276,15 +1280,15 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
>
> if (rte_ring_dequeue(port->tx_ring,
> (void **)&ctrl_pkt) != -ENOENT) {
> - slave_tx_count = rte_eth_tx_prepare(slave_port_ids[i],
> + member_tx_count = rte_eth_tx_prepare(member_port_ids[i],
> bd_tx_q->queue_id, &ctrl_pkt, 1);
> - slave_tx_count = rte_eth_tx_burst(slave_port_ids[i],
> - bd_tx_q->queue_id, &ctrl_pkt, slave_tx_count);
> + member_tx_count = rte_eth_tx_burst(member_port_ids[i],
> + bd_tx_q->queue_id, &ctrl_pkt, member_tx_count);
> /*
> * re-enqueue LAG control plane packets to buffering
> * ring if transmission fails so the packet isn't lost.
> */
> - if (slave_tx_count != 1)
> + if (member_tx_count != 1)
> rte_ring_enqueue(port->tx_ring, ctrl_pkt);
> }
> }
> @@ -1293,20 +1297,20 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
> if (unlikely(nb_bufs == 0))
> return 0;
>
> - dist_slave_count = 0;
> - for (i = 0; i < slave_count; i++) {
> - struct port *port = &bond_mode_8023ad_ports[slave_port_ids[i]];
> + dist_member_count = 0;
> + for (i = 0; i < member_count; i++) {
> + struct port *port = &bond_mode_8023ad_ports[member_port_ids[i]];
>
> if (ACTOR_STATE(port, DISTRIBUTING))
> - dist_slave_port_ids[dist_slave_count++] =
> - slave_port_ids[i];
> + dist_member_port_ids[dist_member_count++] =
> + member_port_ids[i];
> }
>
> - if (unlikely(dist_slave_count < 1))
> + if (unlikely(dist_member_count < 1))
> return 0;
>
> - return tx_burst_balance(queue, bufs, nb_bufs, dist_slave_port_ids,
> - dist_slave_count);
> + return tx_burst_balance(queue, bufs, nb_bufs, dist_member_port_ids,
> + dist_member_count);
> }
>
> static uint16_t
> @@ -1330,78 +1334,78 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
> struct bond_dev_private *internals;
> struct bond_tx_queue *bd_tx_q;
>
> - uint16_t slaves[RTE_MAX_ETHPORTS];
> + uint16_t members[RTE_MAX_ETHPORTS];
> uint8_t tx_failed_flag = 0;
> - uint16_t num_of_slaves;
> + uint16_t num_of_members;
>
> uint16_t max_nb_of_tx_pkts = 0;
>
> - int slave_tx_total[RTE_MAX_ETHPORTS];
> - int i, most_successful_tx_slave = -1;
> + int member_tx_total[RTE_MAX_ETHPORTS];
> + int i, most_successful_tx_member = -1;
>
> bd_tx_q = (struct bond_tx_queue *)queue;
> internals = bd_tx_q->dev_private;
>
> - /* Copy slave list to protect against slave up/down changes during tx
> + /* Copy member list to protect against member up/down changes during tx
> * bursting */
> - num_of_slaves = internals->active_slave_count;
> - memcpy(slaves, internals->active_slaves,
> - sizeof(internals->active_slaves[0]) * num_of_slaves);
> + num_of_members = internals->active_member_count;
> + memcpy(members, internals->active_members,
> + sizeof(internals->active_members[0]) * num_of_members);
>
> - if (num_of_slaves < 1)
> + if (num_of_members < 1)
> return 0;
>
> 	/* It is rare to bond different PMDs together, so just call tx-prepare once */
> - nb_pkts = rte_eth_tx_prepare(slaves[0], bd_tx_q->queue_id, bufs, nb_pkts);
> + nb_pkts = rte_eth_tx_prepare(members[0], bd_tx_q->queue_id, bufs, nb_pkts);
>
> /* Increment reference count on mbufs */
> for (i = 0; i < nb_pkts; i++)
> - rte_pktmbuf_refcnt_update(bufs[i], num_of_slaves - 1);
> + rte_pktmbuf_refcnt_update(bufs[i], num_of_members - 1);
>
> - /* Transmit burst on each active slave */
> - for (i = 0; i < num_of_slaves; i++) {
> - slave_tx_total[i] = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
> + /* Transmit burst on each active member */
> + for (i = 0; i < num_of_members; i++) {
> + member_tx_total[i] = rte_eth_tx_burst(members[i], bd_tx_q->queue_id,
> bufs, nb_pkts);
>
> - if (unlikely(slave_tx_total[i] < nb_pkts))
> + if (unlikely(member_tx_total[i] < nb_pkts))
> tx_failed_flag = 1;
>
> - /* record the value and slave index for the slave which transmits the
> +		/* record the count and index of the member that transmits the
> * maximum number of packets */
> - if (slave_tx_total[i] > max_nb_of_tx_pkts) {
> - max_nb_of_tx_pkts = slave_tx_total[i];
> - most_successful_tx_slave = i;
> + if (member_tx_total[i] > max_nb_of_tx_pkts) {
> + max_nb_of_tx_pkts = member_tx_total[i];
> + most_successful_tx_member = i;
> }
> }
>
> - /* if slaves fail to transmit packets from burst, the calling application
> + /* if members fail to transmit packets from burst, the calling application
> * is not expected to know about multiple references to packets so we must
> - * handle failures of all packets except those of the most successful slave
> + * handle failures of all packets except those of the most successful member
> */
> if (unlikely(tx_failed_flag))
> - for (i = 0; i < num_of_slaves; i++)
> - if (i != most_successful_tx_slave)
> - while (slave_tx_total[i] < nb_pkts)
> - rte_pktmbuf_free(bufs[slave_tx_total[i]++]);
> + for (i = 0; i < num_of_members; i++)
> + if (i != most_successful_tx_member)
> + while (member_tx_total[i] < nb_pkts)
> + rte_pktmbuf_free(bufs[member_tx_total[i]++]);
>
> return max_nb_of_tx_pkts;
> }
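The broadcast-mode cleanup above can be summarized in isolation. A minimal sketch, assuming illustrative names: every mbuf's refcount was raised by `num_of_members - 1`, so for each member other than the most successful one, the untransmitted tail `[tx_total[i], nb_pkts)` must be freed. The helper below counts how many `rte_pktmbuf_free()` calls that implies.

```c
#include <assert.h>
#include <stdint.h>

/* Given each member's transmitted count and the burst size, pick the most
 * successful member and return how many frees the failure path performs
 * on every other member's untransmitted tail. */
static int broadcast_frees(const int *tx_total, int num_members, int nb_pkts)
{
	int most = 0, frees = 0, i;

	for (i = 1; i < num_members; i++)
		if (tx_total[i] > tx_total[most])
			most = i;

	for (i = 0; i < num_members; i++)
		if (i != most)
			frees += nb_pkts - tx_total[i];
	return frees;
}
```

When every member transmits the full burst, no frees happen and the extra references are consumed by the NICs themselves.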
>
> static void
> -link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
> +link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *member_link)
> {
> struct bond_dev_private *bond_ctx = ethdev->data->dev_private;
>
> if (bond_ctx->mode == BONDING_MODE_8023AD) {
> /**
> * If in mode 4 then save the link properties of the first
> - * slave, all subsequent slaves must match these properties
> + * member, all subsequent members must match these properties
> */
> - struct rte_eth_link *bond_link = &bond_ctx->mode4.slave_link;
> + struct rte_eth_link *bond_link = &bond_ctx->mode4.member_link;
>
> - bond_link->link_autoneg = slave_link->link_autoneg;
> - bond_link->link_duplex = slave_link->link_duplex;
> - bond_link->link_speed = slave_link->link_speed;
> + bond_link->link_autoneg = member_link->link_autoneg;
> + bond_link->link_duplex = member_link->link_duplex;
> + bond_link->link_speed = member_link->link_speed;
> } else {
> /**
> * In any other mode the link properties are set to default
> @@ -1414,16 +1418,16 @@ link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
>
> static int
> link_properties_valid(struct rte_eth_dev *ethdev,
> - struct rte_eth_link *slave_link)
> + struct rte_eth_link *member_link)
> {
> struct bond_dev_private *bond_ctx = ethdev->data->dev_private;
>
> if (bond_ctx->mode == BONDING_MODE_8023AD) {
> - struct rte_eth_link *bond_link = &bond_ctx->mode4.slave_link;
> + struct rte_eth_link *bond_link = &bond_ctx->mode4.member_link;
>
> - if (bond_link->link_duplex != slave_link->link_duplex ||
> - bond_link->link_autoneg != slave_link->link_autoneg ||
> - bond_link->link_speed != slave_link->link_speed)
> + if (bond_link->link_duplex != member_link->link_duplex ||
> + bond_link->link_autoneg != member_link->link_autoneg ||
> + bond_link->link_speed != member_link->link_speed)
> return -1;
> }
>
> @@ -1480,11 +1484,11 @@ mac_address_set(struct rte_eth_dev *eth_dev,
> static const struct rte_ether_addr null_mac_addr;
>
> /*
> - * Add additional MAC addresses to the slave
> + * Add additional MAC addresses to the member
> */
> int
> -slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
> - uint16_t slave_port_id)
> +member_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
> + uint16_t member_port_id)
> {
> int i, ret;
> struct rte_ether_addr *mac_addr;
> @@ -1494,11 +1498,11 @@ slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
> if (rte_is_same_ether_addr(mac_addr, &null_mac_addr))
> break;
>
> - ret = rte_eth_dev_mac_addr_add(slave_port_id, mac_addr, 0);
> + ret = rte_eth_dev_mac_addr_add(member_port_id, mac_addr, 0);
> if (ret < 0) {
> /* rollback */
> for (i--; i > 0; i--)
> - rte_eth_dev_mac_addr_remove(slave_port_id,
> + rte_eth_dev_mac_addr_remove(member_port_id,
> &bonded_eth_dev->data->mac_addrs[i]);
> return ret;
> }
> @@ -1508,11 +1512,11 @@ slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
> }
>
> /*
> - * Remove additional MAC addresses from the slave
> + * Remove additional MAC addresses from the member
> */
> int
> -slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
> - uint16_t slave_port_id)
> +member_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
> + uint16_t member_port_id)
> {
> int i, rc, ret;
> struct rte_ether_addr *mac_addr;
> @@ -1523,7 +1527,7 @@ slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
> if (rte_is_same_ether_addr(mac_addr, &null_mac_addr))
> break;
>
> - ret = rte_eth_dev_mac_addr_remove(slave_port_id, mac_addr);
> + ret = rte_eth_dev_mac_addr_remove(member_port_id, mac_addr);
> /* save only the first error */
> if (ret < 0 && rc == 0)
> rc = ret;
> @@ -1533,26 +1537,26 @@ slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
> }
>
> int
> -mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
> +mac_address_members_update(struct rte_eth_dev *bonded_eth_dev)
> {
> struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
> bool set;
> int i;
>
> - /* Update slave devices MAC addresses */
> - if (internals->slave_count < 1)
> + /* Update member devices MAC addresses */
> + if (internals->member_count < 1)
> return -1;
>
> switch (internals->mode) {
> case BONDING_MODE_ROUND_ROBIN:
> case BONDING_MODE_BALANCE:
> case BONDING_MODE_BROADCAST:
> - for (i = 0; i < internals->slave_count; i++) {
> + for (i = 0; i < internals->member_count; i++) {
> if (rte_eth_dev_default_mac_addr_set(
> - internals->slaves[i].port_id,
> + internals->members[i].port_id,
> bonded_eth_dev->data->mac_addrs)) {
> RTE_BOND_LOG(ERR, "Failed to update port Id %d MAC address",
> - internals->slaves[i].port_id);
> + internals->members[i].port_id);
> return -1;
> }
> }
> @@ -1565,8 +1569,8 @@ mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
> case BONDING_MODE_ALB:
> default:
> set = true;
> - for (i = 0; i < internals->slave_count; i++) {
> - if (internals->slaves[i].port_id ==
> + for (i = 0; i < internals->member_count; i++) {
> + if (internals->members[i].port_id ==
> internals->current_primary_port) {
> if (rte_eth_dev_default_mac_addr_set(
> internals->current_primary_port,
> @@ -1577,10 +1581,10 @@ mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
> }
> } else {
> if (rte_eth_dev_default_mac_addr_set(
> - internals->slaves[i].port_id,
> - &internals->slaves[i].persisted_mac_addr)) {
> + internals->members[i].port_id,
> + &internals->members[i].persisted_mac_addr)) {
> RTE_BOND_LOG(ERR, "Failed to update port Id %d MAC address",
> - internals->slaves[i].port_id);
> + internals->members[i].port_id);
> }
> }
> }
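The MAC policy in `mac_address_members_update` above splits on bonding mode: in round-robin, balance, and broadcast every member carries the bonded device's MAC, while in the remaining modes only the current primary does and the others keep their persisted MACs. A small sketch of that decision, with an illustrative mode enum standing in for the driver's constants:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative subset of the bonding modes, not the driver's enum. */
enum bond_mode {
	MODE_ROUND_ROBIN,
	MODE_BALANCE,
	MODE_BROADCAST,
	MODE_ACTIVE_BACKUP
};

/* Returns 1 if member `port` should carry the bonded device's MAC under
 * `mode`, 0 if it keeps its own persisted MAC. */
static int member_uses_bond_mac(int mode, uint16_t port, uint16_t primary)
{
	switch (mode) {
	case MODE_ROUND_ROBIN:
	case MODE_BALANCE:
	case MODE_BROADCAST:
		return 1;		/* every member shares the bonded MAC */
	default:
		return port == primary;	/* only the current primary does */
	}
}
```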
> @@ -1655,55 +1659,55 @@ bond_ethdev_mode_set(struct rte_eth_dev *eth_dev, uint8_t mode)
>
>
> static int
> -slave_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
> - struct rte_eth_dev *slave_eth_dev)
> +member_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
> + struct rte_eth_dev *member_eth_dev)
> {
> int errval = 0;
> struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
> - struct port *port = &bond_mode_8023ad_ports[slave_eth_dev->data->port_id];
> + struct port *port = &bond_mode_8023ad_ports[member_eth_dev->data->port_id];
>
> if (port->slow_pool == NULL) {
> char mem_name[256];
> - int slave_id = slave_eth_dev->data->port_id;
> + int member_id = member_eth_dev->data->port_id;
>
> - snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_slow_pool",
> - slave_id);
> + snprintf(mem_name, RTE_DIM(mem_name), "member_port%u_slow_pool",
> + member_id);
> port->slow_pool = rte_pktmbuf_pool_create(mem_name, 8191,
> 250, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
> - slave_eth_dev->data->numa_node);
> + member_eth_dev->data->numa_node);
>
> /* Any memory allocation failure in initialization is critical because
> 		 * resources can't be freed, so reinitialization is impossible. */
> if (port->slow_pool == NULL) {
> - rte_panic("Slave %u: Failed to create memory pool '%s': %s\n",
> - slave_id, mem_name, rte_strerror(rte_errno));
> + rte_panic("Member %u: Failed to create memory pool '%s': %s\n",
> + member_id, mem_name, rte_strerror(rte_errno));
> }
> }
>
> if (internals->mode4.dedicated_queues.enabled == 1) {
> /* Configure slow Rx queue */
>
> - errval = rte_eth_rx_queue_setup(slave_eth_dev->data->port_id,
> + errval = rte_eth_rx_queue_setup(member_eth_dev->data->port_id,
> internals->mode4.dedicated_queues.rx_qid, 128,
> - rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
> + rte_eth_dev_socket_id(member_eth_dev->data->port_id),
> NULL, port->slow_pool);
> if (errval != 0) {
> RTE_BOND_LOG(ERR,
> "rte_eth_rx_queue_setup: port=%d queue_id %d, err (%d)",
> - slave_eth_dev->data->port_id,
> + member_eth_dev->data->port_id,
> internals->mode4.dedicated_queues.rx_qid,
> errval);
> return errval;
> }
>
> - errval = rte_eth_tx_queue_setup(slave_eth_dev->data->port_id,
> + errval = rte_eth_tx_queue_setup(member_eth_dev->data->port_id,
> internals->mode4.dedicated_queues.tx_qid, 512,
> - rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
> + rte_eth_dev_socket_id(member_eth_dev->data->port_id),
> NULL);
> if (errval != 0) {
> RTE_BOND_LOG(ERR,
> "rte_eth_tx_queue_setup: port=%d queue_id %d, err (%d)",
> - slave_eth_dev->data->port_id,
> + member_eth_dev->data->port_id,
> internals->mode4.dedicated_queues.tx_qid,
> errval);
> return errval;
> @@ -1713,8 +1717,8 @@ slave_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
> }
>
> int
> -slave_configure(struct rte_eth_dev *bonded_eth_dev,
> - struct rte_eth_dev *slave_eth_dev)
> +member_configure(struct rte_eth_dev *bonded_eth_dev,
> + struct rte_eth_dev *member_eth_dev)
> {
> uint16_t nb_rx_queues;
> uint16_t nb_tx_queues;
> @@ -1723,45 +1727,45 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
>
> struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
>
> - /* Stop slave */
> - errval = rte_eth_dev_stop(slave_eth_dev->data->port_id);
> + /* Stop member */
> + errval = rte_eth_dev_stop(member_eth_dev->data->port_id);
> if (errval != 0)
> RTE_BOND_LOG(ERR, "rte_eth_dev_stop: port %u, err (%d)",
> - slave_eth_dev->data->port_id, errval);
> + member_eth_dev->data->port_id, errval);
>
> - /* Enable interrupts on slave device if supported */
> - if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
> - slave_eth_dev->data->dev_conf.intr_conf.lsc = 1;
> + /* Enable interrupts on member device if supported */
> + if (member_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
> + member_eth_dev->data->dev_conf.intr_conf.lsc = 1;
>
> - /* If RSS is enabled for bonding, try to enable it for slaves */
> + /* If RSS is enabled for bonding, try to enable it for members */
> if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
> /* rss_key won't be empty if RSS is configured in bonded dev */
> - slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
> + member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
> internals->rss_key_len;
> - slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key =
> + member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key =
> internals->rss_key;
>
> - slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf =
> + member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf =
> bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
> - slave_eth_dev->data->dev_conf.rxmode.mq_mode =
> + member_eth_dev->data->dev_conf.rxmode.mq_mode =
> bonded_eth_dev->data->dev_conf.rxmode.mq_mode;
> } else {
> - slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len = 0;
> - slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
> - slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = 0;
> - slave_eth_dev->data->dev_conf.rxmode.mq_mode =
> + member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len = 0;
> + member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
> + member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = 0;
> + member_eth_dev->data->dev_conf.rxmode.mq_mode =
> bonded_eth_dev->data->dev_conf.rxmode.mq_mode;
> }
>
> - slave_eth_dev->data->dev_conf.rxmode.mtu =
> + member_eth_dev->data->dev_conf.rxmode.mtu =
> bonded_eth_dev->data->dev_conf.rxmode.mtu;
> - slave_eth_dev->data->dev_conf.link_speeds =
> + member_eth_dev->data->dev_conf.link_speeds =
> bonded_eth_dev->data->dev_conf.link_speeds;
>
> - slave_eth_dev->data->dev_conf.txmode.offloads =
> + member_eth_dev->data->dev_conf.txmode.offloads =
> bonded_eth_dev->data->dev_conf.txmode.offloads;
>
> - slave_eth_dev->data->dev_conf.rxmode.offloads =
> + member_eth_dev->data->dev_conf.rxmode.offloads =
> bonded_eth_dev->data->dev_conf.rxmode.offloads;
>
> nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
> @@ -1775,28 +1779,28 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
> }
>
> /* Configure device */
> - errval = rte_eth_dev_configure(slave_eth_dev->data->port_id,
> + errval = rte_eth_dev_configure(member_eth_dev->data->port_id,
> nb_rx_queues, nb_tx_queues,
> - &(slave_eth_dev->data->dev_conf));
> + &member_eth_dev->data->dev_conf);
> if (errval != 0) {
> - RTE_BOND_LOG(ERR, "Cannot configure slave device: port %u, err (%d)",
> - slave_eth_dev->data->port_id, errval);
> + RTE_BOND_LOG(ERR, "Cannot configure member device: port %u, err (%d)",
> + member_eth_dev->data->port_id, errval);
> return errval;
> }
>
> - errval = rte_eth_dev_set_mtu(slave_eth_dev->data->port_id,
> + errval = rte_eth_dev_set_mtu(member_eth_dev->data->port_id,
> bonded_eth_dev->data->mtu);
> if (errval != 0 && errval != -ENOTSUP) {
> RTE_BOND_LOG(ERR, "rte_eth_dev_set_mtu: port %u, err (%d)",
> - slave_eth_dev->data->port_id, errval);
> + member_eth_dev->data->port_id, errval);
> return errval;
> }
> return 0;
> }
>
> int
> -slave_start(struct rte_eth_dev *bonded_eth_dev,
> - struct rte_eth_dev *slave_eth_dev)
> +member_start(struct rte_eth_dev *bonded_eth_dev,
> + struct rte_eth_dev *member_eth_dev)
> {
> int errval = 0;
> struct bond_rx_queue *bd_rx_q;
> @@ -1804,19 +1808,20 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
> uint16_t q_id;
> struct rte_flow_error flow_error;
> struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
> + uint16_t member_port_id = member_eth_dev->data->port_id;
>
> /* Setup Rx Queues */
> for (q_id = 0; q_id < bonded_eth_dev->data->nb_rx_queues; q_id++) {
> bd_rx_q = (struct bond_rx_queue *)bonded_eth_dev->data->rx_queues[q_id];
>
> - errval = rte_eth_rx_queue_setup(slave_eth_dev->data->port_id, q_id,
> + errval = rte_eth_rx_queue_setup(member_port_id, q_id,
> bd_rx_q->nb_rx_desc,
> - rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
> + rte_eth_dev_socket_id(member_port_id),
> &(bd_rx_q->rx_conf), bd_rx_q->mb_pool);
> if (errval != 0) {
> RTE_BOND_LOG(ERR,
> "rte_eth_rx_queue_setup: port=%d queue_id %d, err (%d)",
> - slave_eth_dev->data->port_id, q_id, errval);
> + member_port_id, q_id, errval);
> return errval;
> }
> }
> @@ -1825,58 +1830,58 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
> for (q_id = 0; q_id < bonded_eth_dev->data->nb_tx_queues; q_id++) {
> bd_tx_q = (struct bond_tx_queue *)bonded_eth_dev->data->tx_queues[q_id];
>
> - errval = rte_eth_tx_queue_setup(slave_eth_dev->data->port_id, q_id,
> + errval = rte_eth_tx_queue_setup(member_port_id, q_id,
> bd_tx_q->nb_tx_desc,
> - rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
> + rte_eth_dev_socket_id(member_port_id),
> &bd_tx_q->tx_conf);
> if (errval != 0) {
> RTE_BOND_LOG(ERR,
> "rte_eth_tx_queue_setup: port=%d queue_id %d, err (%d)",
> - slave_eth_dev->data->port_id, q_id, errval);
> + member_port_id, q_id, errval);
> return errval;
> }
> }
>
> if (internals->mode == BONDING_MODE_8023AD &&
> internals->mode4.dedicated_queues.enabled == 1) {
> - if (slave_configure_slow_queue(bonded_eth_dev, slave_eth_dev)
> + if (member_configure_slow_queue(bonded_eth_dev, member_eth_dev)
> != 0)
> return errval;
>
> errval = bond_ethdev_8023ad_flow_verify(bonded_eth_dev,
> - slave_eth_dev->data->port_id);
> + member_port_id);
> if (errval != 0) {
> RTE_BOND_LOG(ERR,
> "bond_ethdev_8023ad_flow_verify: port=%d, err (%d)",
> - slave_eth_dev->data->port_id, errval);
> + member_port_id, errval);
> return errval;
> }
>
> - if (internals->mode4.dedicated_queues.flow[slave_eth_dev->data->port_id] != NULL) {
> - errval = rte_flow_destroy(slave_eth_dev->data->port_id,
> - internals->mode4.dedicated_queues.flow[slave_eth_dev->data->port_id],
> + if (internals->mode4.dedicated_queues.flow[member_port_id] != NULL) {
> + errval = rte_flow_destroy(member_port_id,
> + internals->mode4.dedicated_queues.flow[member_port_id],
> &flow_error);
> RTE_BOND_LOG(ERR, "bond_ethdev_8023ad_flow_destroy: port=%d, err (%d)",
> - slave_eth_dev->data->port_id, errval);
> + member_port_id, errval);
> }
> }
>
> /* Start device */
> - errval = rte_eth_dev_start(slave_eth_dev->data->port_id);
> + errval = rte_eth_dev_start(member_port_id);
> if (errval != 0) {
> RTE_BOND_LOG(ERR, "rte_eth_dev_start: port=%u, err (%d)",
> - slave_eth_dev->data->port_id, errval);
> + member_port_id, errval);
> return -1;
> }
>
> if (internals->mode == BONDING_MODE_8023AD &&
> internals->mode4.dedicated_queues.enabled == 1) {
> errval = bond_ethdev_8023ad_flow_set(bonded_eth_dev,
> - slave_eth_dev->data->port_id);
> + member_port_id);
> if (errval != 0) {
> RTE_BOND_LOG(ERR,
> "bond_ethdev_8023ad_flow_set: port=%d, err (%d)",
> - slave_eth_dev->data->port_id, errval);
> + member_port_id, errval);
> return errval;
> }
> }
> @@ -1888,27 +1893,27 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
>
> internals = bonded_eth_dev->data->dev_private;
>
> - for (i = 0; i < internals->slave_count; i++) {
> - if (internals->slaves[i].port_id == slave_eth_dev->data->port_id) {
> + for (i = 0; i < internals->member_count; i++) {
> + if (internals->members[i].port_id == member_port_id) {
> errval = rte_eth_dev_rss_reta_update(
> - slave_eth_dev->data->port_id,
> + member_port_id,
> &internals->reta_conf[0],
> - internals->slaves[i].reta_size);
> + internals->members[i].reta_size);
> if (errval != 0) {
> RTE_BOND_LOG(WARNING,
> - "rte_eth_dev_rss_reta_update on slave port %d fails (err %d)."
> + "rte_eth_dev_rss_reta_update on member port %d fails (err %d)."
> " RSS Configuration for bonding may be inconsistent.",
> - slave_eth_dev->data->port_id, errval);
> + member_port_id, errval);
> }
> break;
> }
> }
> }
>
> - /* If lsc interrupt is set, check initial slave's link status */
> - if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
> - slave_eth_dev->dev_ops->link_update(slave_eth_dev, 0);
> - bond_ethdev_lsc_event_callback(slave_eth_dev->data->port_id,
> + /* If lsc interrupt is set, check initial member's link status */
> + if (member_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
> + member_eth_dev->dev_ops->link_update(member_eth_dev, 0);
> + bond_ethdev_lsc_event_callback(member_port_id,
> RTE_ETH_EVENT_INTR_LSC, &bonded_eth_dev->data->port_id,
> NULL);
> }
> @@ -1917,75 +1922,74 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
> }
>
> void
> -slave_remove(struct bond_dev_private *internals,
> - struct rte_eth_dev *slave_eth_dev)
> +member_remove(struct bond_dev_private *internals,
> + struct rte_eth_dev *member_eth_dev)
> {
> uint16_t i;
>
> - for (i = 0; i < internals->slave_count; i++)
> - if (internals->slaves[i].port_id ==
> - slave_eth_dev->data->port_id)
> + for (i = 0; i < internals->member_count; i++)
> + if (internals->members[i].port_id ==
> + member_eth_dev->data->port_id)
> break;
>
> - if (i < (internals->slave_count - 1)) {
> + if (i < (internals->member_count - 1)) {
> struct rte_flow *flow;
>
> - memmove(&internals->slaves[i], &internals->slaves[i + 1],
> - sizeof(internals->slaves[0]) *
> - (internals->slave_count - i - 1));
> + memmove(&internals->members[i], &internals->members[i + 1],
> + sizeof(internals->members[0]) *
> + (internals->member_count - i - 1));
> TAILQ_FOREACH(flow, &internals->flow_list, next) {
> memmove(&flow->flows[i], &flow->flows[i + 1],
> sizeof(flow->flows[0]) *
> - (internals->slave_count - i - 1));
> - flow->flows[internals->slave_count - 1] = NULL;
> + (internals->member_count - i - 1));
> + flow->flows[internals->member_count - 1] = NULL;
> }
> }
>
> - internals->slave_count--;
> + internals->member_count--;
>
> - /* force reconfiguration of slave interfaces */
> - rte_eth_dev_internal_reset(slave_eth_dev);
> + /* force reconfiguration of member interfaces */
> + rte_eth_dev_internal_reset(member_eth_dev);
> }
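As an aside, member_remove() above compacts a packed array in place by shifting the tail left with memmove() and decrementing the count. A minimal standalone sketch of that idiom (illustrative helper, not the driver's code):

```c
#include <stdint.h>
#include <string.h>

/* Illustrative sketch of the compaction idiom used by member_remove():
 * delete entry i from a packed array by moving the tail left one slot,
 * then shrink the element count. */
static void remove_at(uint16_t *arr, uint16_t *count, uint16_t i)
{
	if (i >= *count)
		return;
	memmove(&arr[i], &arr[i + 1], sizeof(arr[0]) * (*count - i - 1));
	(*count)--;
}
```

The driver applies the same shift to the per-flow arrays as well, so every parallel array stays indexed by the same member slot.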
>
> static void
> -bond_ethdev_slave_link_status_change_monitor(void *cb_arg);
> +bond_ethdev_member_link_status_change_monitor(void *cb_arg);
>
> void
> -slave_add(struct bond_dev_private *internals,
> - struct rte_eth_dev *slave_eth_dev)
> +member_add(struct bond_dev_private *internals,
> + struct rte_eth_dev *member_eth_dev)
> {
> - struct bond_slave_details *slave_details =
> - &internals->slaves[internals->slave_count];
> + struct bond_member_details *member_details =
> + &internals->members[internals->member_count];
>
> - slave_details->port_id = slave_eth_dev->data->port_id;
> - slave_details->last_link_status = 0;
> + member_details->port_id = member_eth_dev->data->port_id;
> + member_details->last_link_status = 0;
>
> - /* Mark slave devices that don't support interrupts so we can
> + /* Mark member devices that don't support interrupts so we can
> * compensate when we start the bond
> */
> - if (!(slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)) {
> - slave_details->link_status_poll_enabled = 1;
> - }
> + if (!(member_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC))
> + member_details->link_status_poll_enabled = 1;
>
> - slave_details->link_status_wait_to_complete = 0;
> + member_details->link_status_wait_to_complete = 0;
> /* clean tlb_last_obytes when adding port for bonding device */
> - memcpy(&(slave_details->persisted_mac_addr), slave_eth_dev->data->mac_addrs,
> + memcpy(&member_details->persisted_mac_addr, member_eth_dev->data->mac_addrs,
> sizeof(struct rte_ether_addr));
> }
>
> void
> bond_ethdev_primary_set(struct bond_dev_private *internals,
> - uint16_t slave_port_id)
> + uint16_t member_port_id)
> {
> int i;
>
> - if (internals->active_slave_count < 1)
> - internals->current_primary_port = slave_port_id;
> + if (internals->active_member_count < 1)
> + internals->current_primary_port = member_port_id;
> else
> - /* Search bonded device slave ports for new proposed primary port */
> - for (i = 0; i < internals->active_slave_count; i++) {
> - if (internals->active_slaves[i] == slave_port_id)
> - internals->current_primary_port = slave_port_id;
> + /* Search bonded device member ports for new proposed primary port */
> + for (i = 0; i < internals->active_member_count; i++) {
> + if (internals->active_members[i] == member_port_id)
> + internals->current_primary_port = member_port_id;
> }
> }
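The primary-selection logic renamed in this hunk is compact enough to sketch standalone: the proposed port becomes primary unconditionally when the bond has no active members, otherwise only when it is found in the active list. The struct and names below are illustrative stand-ins for the driver's internals:

```c
#include <stdint.h>

/* Hypothetical reduced model of bond_dev_private for illustration. */
struct bond_ctx {
	uint16_t current_primary_port;
	uint16_t active_member_count;
	uint16_t active_members[8];
};

/* Sketch of bond_ethdev_primary_set(): accept the proposed port as
 * primary if there are no active members yet, or if it is active. */
static void primary_set(struct bond_ctx *ctx, uint16_t member_port_id)
{
	int i;

	if (ctx->active_member_count < 1) {
		ctx->current_primary_port = member_port_id;
		return;
	}
	for (i = 0; i < ctx->active_member_count; i++) {
		if (ctx->active_members[i] == member_port_id)
			ctx->current_primary_port = member_port_id;
	}
}
```

A proposal naming an inactive port is silently ignored, which is why the caller records it separately as the user-defined primary.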
>
> @@ -1998,9 +2002,9 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
> struct bond_dev_private *internals;
> int i;
>
> - /* slave eth dev will be started by bonded device */
> + /* member eth dev will be started by bonded device */
> if (check_for_bonded_ethdev(eth_dev)) {
> - RTE_BOND_LOG(ERR, "User tried to explicitly start a slave eth_dev (%d)",
> + RTE_BOND_LOG(ERR, "User tried to explicitly start a member eth_dev (%d)",
> eth_dev->data->port_id);
> return -1;
> }
> @@ -2010,17 +2014,17 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
>
> internals = eth_dev->data->dev_private;
>
> - if (internals->slave_count == 0) {
> - RTE_BOND_LOG(ERR, "Cannot start port since there are no slave devices");
> + if (internals->member_count == 0) {
> + RTE_BOND_LOG(ERR, "Cannot start port since there are no member devices");
> goto out_err;
> }
>
> if (internals->user_defined_mac == 0) {
> struct rte_ether_addr *new_mac_addr = NULL;
>
> - for (i = 0; i < internals->slave_count; i++)
> - if (internals->slaves[i].port_id == internals->primary_port)
> - new_mac_addr = &internals->slaves[i].persisted_mac_addr;
> + for (i = 0; i < internals->member_count; i++)
> + if (internals->members[i].port_id == internals->primary_port)
> + new_mac_addr = &internals->members[i].persisted_mac_addr;
>
> if (new_mac_addr == NULL)
> goto out_err;
> @@ -2042,28 +2046,28 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
> }
>
>
> - /* Reconfigure each slave device if starting bonded device */
> - for (i = 0; i < internals->slave_count; i++) {
> - struct rte_eth_dev *slave_ethdev =
> - &(rte_eth_devices[internals->slaves[i].port_id]);
> - if (slave_configure(eth_dev, slave_ethdev) != 0) {
> + /* Reconfigure each member device if starting bonded device */
> + for (i = 0; i < internals->member_count; i++) {
> + struct rte_eth_dev *member_ethdev =
> + &(rte_eth_devices[internals->members[i].port_id]);
> + if (member_configure(eth_dev, member_ethdev) != 0) {
> RTE_BOND_LOG(ERR,
> - "bonded port (%d) failed to reconfigure slave device (%d)",
> + "bonded port (%d) failed to reconfigure member device (%d)",
> eth_dev->data->port_id,
> - internals->slaves[i].port_id);
> + internals->members[i].port_id);
> goto out_err;
> }
> - if (slave_start(eth_dev, slave_ethdev) != 0) {
> + if (member_start(eth_dev, member_ethdev) != 0) {
> RTE_BOND_LOG(ERR,
> - "bonded port (%d) failed to start slave device (%d)",
> + "bonded port (%d) failed to start member device (%d)",
> eth_dev->data->port_id,
> - internals->slaves[i].port_id);
> + internals->members[i].port_id);
> goto out_err;
> }
> - /* We will need to poll for link status if any slave doesn't
> + /* We will need to poll for link status if any member doesn't
> * support interrupts
> */
> - if (internals->slaves[i].link_status_poll_enabled)
> + if (internals->members[i].link_status_poll_enabled)
> internals->link_status_polling_enabled = 1;
> }
>
> @@ -2071,12 +2075,12 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
> if (internals->link_status_polling_enabled) {
> rte_eal_alarm_set(
> internals->link_status_polling_interval_ms * 1000,
> - bond_ethdev_slave_link_status_change_monitor,
> + bond_ethdev_member_link_status_change_monitor,
> (void *)&rte_eth_devices[internals->port_id]);
> }
>
> - /* Update all slave devices MACs*/
> - if (mac_address_slaves_update(eth_dev) != 0)
> + /* Update all member devices' MACs */
> + if (mac_address_members_update(eth_dev) != 0)
> goto out_err;
>
> if (internals->user_defined_primary_port)
> @@ -2132,8 +2136,8 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
> bond_mode_8023ad_stop(eth_dev);
>
> /* Discard all messages to/from mode 4 state machines */
> - for (i = 0; i < internals->active_slave_count; i++) {
> - port = &bond_mode_8023ad_ports[internals->active_slaves[i]];
> + for (i = 0; i < internals->active_member_count; i++) {
> + port = &bond_mode_8023ad_ports[internals->active_members[i]];
>
> RTE_ASSERT(port->rx_ring != NULL);
> while (rte_ring_dequeue(port->rx_ring, &pkt) != -ENOENT)
> @@ -2148,30 +2152,30 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
> if (internals->mode == BONDING_MODE_TLB ||
> internals->mode == BONDING_MODE_ALB) {
> bond_tlb_disable(internals);
> - for (i = 0; i < internals->active_slave_count; i++)
> - tlb_last_obytets[internals->active_slaves[i]] = 0;
> + for (i = 0; i < internals->active_member_count; i++)
> + tlb_last_obytets[internals->active_members[i]] = 0;
> }
>
> eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
> eth_dev->data->dev_started = 0;
>
> internals->link_status_polling_enabled = 0;
> - for (i = 0; i < internals->slave_count; i++) {
> - uint16_t slave_id = internals->slaves[i].port_id;
> + for (i = 0; i < internals->member_count; i++) {
> + uint16_t member_id = internals->members[i].port_id;
>
> - internals->slaves[i].last_link_status = 0;
> - ret = rte_eth_dev_stop(slave_id);
> + internals->members[i].last_link_status = 0;
> + ret = rte_eth_dev_stop(member_id);
> if (ret != 0) {
> RTE_BOND_LOG(ERR, "Failed to stop device on port %u",
> - slave_id);
> + member_id);
> return ret;
> }
>
> - /* active slaves need to be deactivated. */
> - if (find_slave_by_id(internals->active_slaves,
> - internals->active_slave_count, slave_id) !=
> - internals->active_slave_count)
> - deactivate_slave(eth_dev, slave_id);
> + /* active members need to be deactivated. */
> + if (find_member_by_id(internals->active_members,
> + internals->active_member_count, member_id) !=
> + internals->active_member_count)
> + deactivate_member(eth_dev, member_id);
> }
>
> return 0;
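The deactivation check above relies on the convention that find_member_by_id() returns the index of the port in the active list, or the list length when the port is absent, so "result != count" means "port is active". A self-contained sketch of that assumed convention (signature is illustrative):

```c
#include <stdint.h>

/* Sketch of the lookup convention assumed by bond_ethdev_stop():
 * return the index of member_id in the array, or count if absent. */
static uint16_t find_member_by_id(const uint16_t *members, uint16_t count,
		uint16_t member_id)
{
	uint16_t i;

	for (i = 0; i < count; i++)
		if (members[i] == member_id)
			break;
	return i; /* == count when not found */
}
```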
> @@ -2188,8 +2192,8 @@ bond_ethdev_cfg_cleanup(struct rte_eth_dev *dev, bool remove)
> /* Flush flows in all back-end devices before removing them */
> bond_flow_ops.flush(dev, &ferror);
>
> - while (internals->slave_count != skipped) {
> - uint16_t port_id = internals->slaves[skipped].port_id;
> + while (internals->member_count != skipped) {
> + uint16_t port_id = internals->members[skipped].port_id;
> int ret;
>
> ret = rte_eth_dev_stop(port_id);
> @@ -2203,7 +2207,7 @@ bond_ethdev_cfg_cleanup(struct rte_eth_dev *dev, bool remove)
> continue;
> }
>
> - if (rte_eth_bond_slave_remove(bond_port_id, port_id) != 0) {
> + if (rte_eth_bond_member_remove(bond_port_id, port_id) != 0) {
> RTE_BOND_LOG(ERR,
> "Failed to remove port %d from bonded device %s",
> port_id, dev->device->name);
> @@ -2246,7 +2250,7 @@ static int
> bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> {
> struct bond_dev_private *internals = dev->data->dev_private;
> - struct bond_slave_details slave;
> + struct bond_member_details member;
> int ret;
>
> uint16_t max_nb_rx_queues = UINT16_MAX;
> @@ -2259,31 +2263,31 @@ bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
> RTE_ETHER_MAX_JUMBO_FRAME_LEN;
>
> /* Max number of tx/rx queues that the bonded device can support is the
> - * minimum values of the bonded slaves, as all slaves must be capable
> + * minimum values of the bonded members, as all members must be capable
> * of supporting the same number of tx/rx queues.
> */
> - if (internals->slave_count > 0) {
> - struct rte_eth_dev_info slave_info;
> + if (internals->member_count > 0) {
> + struct rte_eth_dev_info member_info;
> uint16_t idx;
>
> - for (idx = 0; idx < internals->slave_count; idx++) {
> - slave = internals->slaves[idx];
> - ret = rte_eth_dev_info_get(slave.port_id, &slave_info);
> + for (idx = 0; idx < internals->member_count; idx++) {
> + member = internals->members[idx];
> + ret = rte_eth_dev_info_get(member.port_id, &member_info);
> if (ret != 0) {
> RTE_BOND_LOG(ERR,
> "%s: Error during getting device (port %u) info: %s\n",
> __func__,
> - slave.port_id,
> + member.port_id,
> strerror(-ret));
>
> return ret;
> }
>
> - if (slave_info.max_rx_queues < max_nb_rx_queues)
> - max_nb_rx_queues = slave_info.max_rx_queues;
> + if (member_info.max_rx_queues < max_nb_rx_queues)
> + max_nb_rx_queues = member_info.max_rx_queues;
>
> - if (slave_info.max_tx_queues < max_nb_tx_queues)
> - max_nb_tx_queues = slave_info.max_tx_queues;
> + if (member_info.max_tx_queues < max_nb_tx_queues)
> + max_nb_tx_queues = member_info.max_tx_queues;
> }
> }
>
> @@ -2332,7 +2336,7 @@ bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
> uint16_t i;
> struct bond_dev_private *internals = dev->data->dev_private;
>
> - /* don't do this while a slave is being added */
> + /* don't do this while a member is being added */
> rte_spinlock_lock(&internals->lock);
>
> if (on)
> @@ -2340,13 +2344,13 @@ bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
> else
> rte_bitmap_clear(internals->vlan_filter_bmp, vlan_id);
>
> - for (i = 0; i < internals->slave_count; i++) {
> - uint16_t port_id = internals->slaves[i].port_id;
> + for (i = 0; i < internals->member_count; i++) {
> + uint16_t port_id = internals->members[i].port_id;
>
> res = rte_eth_dev_vlan_filter(port_id, vlan_id, on);
> if (res == ENOTSUP)
> RTE_BOND_LOG(WARNING,
> - "Setting VLAN filter on slave port %u not supported.",
> + "Setting VLAN filter on member port %u not supported.",
> port_id);
> }
>
> @@ -2424,14 +2428,14 @@ bond_ethdev_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id)
> }
>
> static void
> -bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
> +bond_ethdev_member_link_status_change_monitor(void *cb_arg)
> {
> - struct rte_eth_dev *bonded_ethdev, *slave_ethdev;
> + struct rte_eth_dev *bonded_ethdev, *member_ethdev;
> struct bond_dev_private *internals;
>
> - /* Default value for polling slave found is true as we don't want to
> + /* Default value for polling member found is true as we don't want to
> * disable the polling thread if we cannot get the lock */
> - int i, polling_slave_found = 1;
> + int i, polling_member_found = 1;
>
> if (cb_arg == NULL)
> return;
> @@ -2443,28 +2447,28 @@ bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
> !internals->link_status_polling_enabled)
> return;
>
> - /* If device is currently being configured then don't check slaves link
> + /* If device is currently being configured then don't check members link
> * status, wait until next period */
> if (rte_spinlock_trylock(&internals->lock)) {
> - if (internals->slave_count > 0)
> - polling_slave_found = 0;
> + if (internals->member_count > 0)
> + polling_member_found = 0;
>
> - for (i = 0; i < internals->slave_count; i++) {
> - if (!internals->slaves[i].link_status_poll_enabled)
> + for (i = 0; i < internals->member_count; i++) {
> + if (!internals->members[i].link_status_poll_enabled)
> continue;
>
> - slave_ethdev = &rte_eth_devices[internals->slaves[i].port_id];
> - polling_slave_found = 1;
> + member_ethdev = &rte_eth_devices[internals->members[i].port_id];
> + polling_member_found = 1;
>
> - /* Update slave link status */
> - (*slave_ethdev->dev_ops->link_update)(slave_ethdev,
> - internals->slaves[i].link_status_wait_to_complete);
> + /* Update member link status */
> + (*member_ethdev->dev_ops->link_update)(member_ethdev,
> + internals->members[i].link_status_wait_to_complete);
>
> /* if link status has changed since last checked then call lsc
> * event callback */
> - if (slave_ethdev->data->dev_link.link_status !=
> - internals->slaves[i].last_link_status) {
> - bond_ethdev_lsc_event_callback(internals->slaves[i].port_id,
> + if (member_ethdev->data->dev_link.link_status !=
> + internals->members[i].last_link_status) {
> + bond_ethdev_lsc_event_callback(internals->members[i].port_id,
> RTE_ETH_EVENT_INTR_LSC,
> &bonded_ethdev->data->port_id,
> NULL);
> @@ -2473,10 +2477,10 @@ bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
> rte_spinlock_unlock(&internals->lock);
> }
>
> - if (polling_slave_found)
> - /* Set alarm to continue monitoring link status of slave ethdev's */
> + if (polling_member_found)
> + /* Set alarm to continue monitoring link status of member ethdevs */
> rte_eal_alarm_set(internals->link_status_polling_interval_ms * 1000,
> - bond_ethdev_slave_link_status_change_monitor, cb_arg);
> + bond_ethdev_member_link_status_change_monitor, cb_arg);
> }
>
> static int
> @@ -2485,7 +2489,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
> int (*link_update)(uint16_t port_id, struct rte_eth_link *eth_link);
>
> struct bond_dev_private *bond_ctx;
> - struct rte_eth_link slave_link;
> + struct rte_eth_link member_link;
>
> bool one_link_update_succeeded;
> uint32_t idx;
> @@ -2496,7 +2500,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
> ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
>
> if (ethdev->data->dev_started == 0 ||
> - bond_ctx->active_slave_count == 0) {
> + bond_ctx->active_member_count == 0) {
> ethdev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
> return 0;
> }
> @@ -2512,51 +2516,51 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
> case BONDING_MODE_BROADCAST:
> /**
> * Setting link speed to UINT32_MAX to ensure we pick up the
> - * value of the first active slave
> + * value of the first active member
> */
> ethdev->data->dev_link.link_speed = UINT32_MAX;
>
> /**
> - * link speed is minimum value of all the slaves link speed as
> - * packet loss will occur on this slave if transmission at rates
> + * link speed is minimum value of all the members' link speeds as
> + * packet loss will occur on this member if transmission at rates
> * greater than this are attempted
> */
> - for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
> - ret = link_update(bond_ctx->active_slaves[idx],
> - &slave_link);
> + for (idx = 0; idx < bond_ctx->active_member_count; idx++) {
> + ret = link_update(bond_ctx->active_members[idx],
> + &member_link);
> if (ret < 0) {
> ethdev->data->dev_link.link_speed =
> RTE_ETH_SPEED_NUM_NONE;
> RTE_BOND_LOG(ERR,
> - "Slave (port %u) link get failed: %s",
> - bond_ctx->active_slaves[idx],
> + "Member (port %u) link get failed: %s",
> + bond_ctx->active_members[idx],
> rte_strerror(-ret));
> return 0;
> }
>
> - if (slave_link.link_speed <
> + if (member_link.link_speed <
> ethdev->data->dev_link.link_speed)
> ethdev->data->dev_link.link_speed =
> - slave_link.link_speed;
> + member_link.link_speed;
> }
> break;
> case BONDING_MODE_ACTIVE_BACKUP:
> - /* Current primary slave */
> - ret = link_update(bond_ctx->current_primary_port, &slave_link);
> + /* Current primary member */
> + ret = link_update(bond_ctx->current_primary_port, &member_link);
> if (ret < 0) {
> - RTE_BOND_LOG(ERR, "Slave (port %u) link get failed: %s",
> + RTE_BOND_LOG(ERR, "Member (port %u) link get failed: %s",
> bond_ctx->current_primary_port,
> rte_strerror(-ret));
> return 0;
> }
>
> - ethdev->data->dev_link.link_speed = slave_link.link_speed;
> + ethdev->data->dev_link.link_speed = member_link.link_speed;
> break;
> case BONDING_MODE_8023AD:
> ethdev->data->dev_link.link_autoneg =
> - bond_ctx->mode4.slave_link.link_autoneg;
> + bond_ctx->mode4.member_link.link_autoneg;
> ethdev->data->dev_link.link_duplex =
> - bond_ctx->mode4.slave_link.link_duplex;
> + bond_ctx->mode4.member_link.link_duplex;
> /* fall through */
> /* to update link speed */
> case BONDING_MODE_ROUND_ROBIN:
> @@ -2566,29 +2570,29 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
> default:
> /**
> * In theses mode the maximum theoretical link speed is the sum
> - * of all the slaves
> + * of all the members
> */
> ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
> one_link_update_succeeded = false;
>
> - for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
> - ret = link_update(bond_ctx->active_slaves[idx],
> - &slave_link);
> + for (idx = 0; idx < bond_ctx->active_member_count; idx++) {
> + ret = link_update(bond_ctx->active_members[idx],
> + &member_link);
> if (ret < 0) {
> RTE_BOND_LOG(ERR,
> - "Slave (port %u) link get failed: %s",
> - bond_ctx->active_slaves[idx],
> + "Member (port %u) link get failed: %s",
> + bond_ctx->active_members[idx],
> rte_strerror(-ret));
> continue;
> }
>
> one_link_update_succeeded = true;
> ethdev->data->dev_link.link_speed +=
> - slave_link.link_speed;
> + member_link.link_speed;
> }
>
> if (!one_link_update_succeeded) {
> - RTE_BOND_LOG(ERR, "All slaves link get failed");
> + RTE_BOND_LOG(ERR, "All members link get failed");
> return 0;
> }
> }
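The two aggregation rules in this hunk, minimum member speed for the broadcast/balance modes (transmitting faster than the slowest member would drop packets) and summed speed as the theoretical maximum for the round-robin family, can be sketched standalone. Helper names are illustrative, not driver API:

```c
#include <stdint.h>

/* Broadcast/balance rule: the bond can only run as fast as its
 * slowest active member. Returns UINT32_MAX for an empty list,
 * mirroring the sentinel the driver starts from. */
static uint32_t min_link_speed(const uint32_t *speeds, int count)
{
	uint32_t min = UINT32_MAX;
	int i;

	for (i = 0; i < count; i++)
		if (speeds[i] < min)
			min = speeds[i];
	return min;
}

/* Round-robin-family rule: the theoretical maximum is the sum of
 * all active member speeds. */
static uint32_t sum_link_speed(const uint32_t *speeds, int count)
{
	uint32_t sum = 0;
	int i;

	for (i = 0; i < count; i++)
		sum += speeds[i];
	return sum;
}
```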
> @@ -2602,27 +2606,27 @@ static int
> bond_ethdev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
> {
> struct bond_dev_private *internals = dev->data->dev_private;
> - struct rte_eth_stats slave_stats;
> + struct rte_eth_stats member_stats;
> int i, j;
>
> - for (i = 0; i < internals->slave_count; i++) {
> - rte_eth_stats_get(internals->slaves[i].port_id, &slave_stats);
> + for (i = 0; i < internals->member_count; i++) {
> + rte_eth_stats_get(internals->members[i].port_id, &member_stats);
>
> - stats->ipackets += slave_stats.ipackets;
> - stats->opackets += slave_stats.opackets;
> - stats->ibytes += slave_stats.ibytes;
> - stats->obytes += slave_stats.obytes;
> - stats->imissed += slave_stats.imissed;
> - stats->ierrors += slave_stats.ierrors;
> - stats->oerrors += slave_stats.oerrors;
> - stats->rx_nombuf += slave_stats.rx_nombuf;
> + stats->ipackets += member_stats.ipackets;
> + stats->opackets += member_stats.opackets;
> + stats->ibytes += member_stats.ibytes;
> + stats->obytes += member_stats.obytes;
> + stats->imissed += member_stats.imissed;
> + stats->ierrors += member_stats.ierrors;
> + stats->oerrors += member_stats.oerrors;
> + stats->rx_nombuf += member_stats.rx_nombuf;
>
> for (j = 0; j < RTE_ETHDEV_QUEUE_STAT_CNTRS; j++) {
> - stats->q_ipackets[j] += slave_stats.q_ipackets[j];
> - stats->q_opackets[j] += slave_stats.q_opackets[j];
> - stats->q_ibytes[j] += slave_stats.q_ibytes[j];
> - stats->q_obytes[j] += slave_stats.q_obytes[j];
> - stats->q_errors[j] += slave_stats.q_errors[j];
> + stats->q_ipackets[j] += member_stats.q_ipackets[j];
> + stats->q_opackets[j] += member_stats.q_opackets[j];
> + stats->q_ibytes[j] += member_stats.q_ibytes[j];
> + stats->q_obytes[j] += member_stats.q_obytes[j];
> + stats->q_errors[j] += member_stats.q_errors[j];
> }
>
> }
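The stats path above is a plain per-member summation into the bond's counters. A reduced sketch with only two of the counters modeled (struct and names are illustrative, not rte_eth_stats):

```c
#include <stdint.h>

/* Hypothetical two-field stand-in for rte_eth_stats. */
struct mini_stats {
	uint64_t ipackets;
	uint64_t opackets;
};

/* Sketch of the aggregation in bond_ethdev_stats_get(): the bond's
 * totals are the element-wise sums of each member's counters. */
static void aggregate_stats(struct mini_stats *total,
		const struct mini_stats *members, int count)
{
	int i;

	total->ipackets = 0;
	total->opackets = 0;
	for (i = 0; i < count; i++) {
		total->ipackets += members[i].ipackets;
		total->opackets += members[i].opackets;
	}
}
```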
> @@ -2638,8 +2642,8 @@ bond_ethdev_stats_reset(struct rte_eth_dev *dev)
> int err;
> int ret;
>
> - for (i = 0, err = 0; i < internals->slave_count; i++) {
> - ret = rte_eth_stats_reset(internals->slaves[i].port_id);
> + for (i = 0, err = 0; i < internals->member_count; i++) {
> + ret = rte_eth_stats_reset(internals->members[i].port_id);
> if (ret != 0)
> err = ret;
> }
> @@ -2656,15 +2660,15 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev)
> uint16_t port_id;
>
> switch (internals->mode) {
> - /* Promiscuous mode is propagated to all slaves */
> + /* Promiscuous mode is propagated to all members */
> case BONDING_MODE_ROUND_ROBIN:
> case BONDING_MODE_BALANCE:
> case BONDING_MODE_BROADCAST:
> case BONDING_MODE_8023AD: {
> - unsigned int slave_ok = 0;
> + unsigned int member_ok = 0;
>
> - for (i = 0; i < internals->slave_count; i++) {
> - port_id = internals->slaves[i].port_id;
> + for (i = 0; i < internals->member_count; i++) {
> + port_id = internals->members[i].port_id;
>
> ret = rte_eth_promiscuous_enable(port_id);
> if (ret != 0)
> @@ -2672,23 +2676,23 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev)
> "Failed to enable promiscuous mode for port %u: %s",
> port_id, rte_strerror(-ret));
> else
> - slave_ok++;
> + member_ok++;
> }
> /*
> * Report success if operation is successful on at least
> - * on one slave. Otherwise return last error code.
> + * on one member. Otherwise return last error code.
> */
> - if (slave_ok > 0)
> + if (member_ok > 0)
> ret = 0;
> break;
> }
> - /* Promiscuous mode is propagated only to primary slave */
> + /* Promiscuous mode is propagated only to primary member */
> case BONDING_MODE_ACTIVE_BACKUP:
> case BONDING_MODE_TLB:
> case BONDING_MODE_ALB:
> default:
> /* Do not touch promisc when there cannot be primary ports */
> - if (internals->slave_count == 0)
> + if (internals->member_count == 0)
> break;
> port_id = internals->current_primary_port;
> ret = rte_eth_promiscuous_enable(port_id);
> @@ -2710,20 +2714,20 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev)
> uint16_t port_id;
>
> switch (internals->mode) {
> - /* Promiscuous mode is propagated to all slaves */
> + /* Promiscuous mode is propagated to all members */
> case BONDING_MODE_ROUND_ROBIN:
> case BONDING_MODE_BALANCE:
> case BONDING_MODE_BROADCAST:
> case BONDING_MODE_8023AD: {
> - unsigned int slave_ok = 0;
> + unsigned int member_ok = 0;
>
> - for (i = 0; i < internals->slave_count; i++) {
> - port_id = internals->slaves[i].port_id;
> + for (i = 0; i < internals->member_count; i++) {
> + port_id = internals->members[i].port_id;
>
> if (internals->mode == BONDING_MODE_8023AD &&
> bond_mode_8023ad_ports[port_id].forced_rx_flags ==
> BOND_8023AD_FORCED_PROMISC) {
> - slave_ok++;
> + member_ok++;
> continue;
> }
> ret = rte_eth_promiscuous_disable(port_id);
> @@ -2732,23 +2736,23 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev)
> "Failed to disable promiscuous mode for port %u: %s",
> port_id, rte_strerror(-ret));
> else
> - slave_ok++;
> + member_ok++;
> }
> /*
> * Report success if operation is successful on at least
> - * on one slave. Otherwise return last error code.
> + * on one member. Otherwise return last error code.
> */
> - if (slave_ok > 0)
> + if (member_ok > 0)
> ret = 0;
> break;
> }
> - /* Promiscuous mode is propagated only to primary slave */
> + /* Promiscuous mode is propagated only to primary member */
> case BONDING_MODE_ACTIVE_BACKUP:
> case BONDING_MODE_TLB:
> case BONDING_MODE_ALB:
> default:
> /* Do not touch promisc when there cannot be primary ports */
> - if (internals->slave_count == 0)
> + if (internals->member_count == 0)
> break;
> port_id = internals->current_primary_port;
> ret = rte_eth_promiscuous_disable(port_id);
> @@ -2772,7 +2776,7 @@ bond_ethdev_promiscuous_update(struct rte_eth_dev *dev)
> case BONDING_MODE_BALANCE:
> case BONDING_MODE_BROADCAST:
> case BONDING_MODE_8023AD:
> - /* As promiscuous mode is propagated to all slaves for these
> + /* As promiscuous mode is propagated to all members for these
> * mode, no need to update for bonding device.
> */
> break;
> @@ -2780,9 +2784,9 @@ bond_ethdev_promiscuous_update(struct rte_eth_dev *dev)
> case BONDING_MODE_TLB:
> case BONDING_MODE_ALB:
> default:
> - /* As promiscuous mode is propagated only to primary slave
> + /* As promiscuous mode is propagated only to primary member
> * for these mode. When active/standby switchover, promiscuous
> - * mode should be set to new primary slave according to bonding
> + * mode should be set to new primary member according to bonding
> * device.
> */
> if (rte_eth_promiscuous_get(internals->port_id) == 1)
> @@ -2803,15 +2807,15 @@ bond_ethdev_allmulticast_enable(struct rte_eth_dev *eth_dev)
> uint16_t port_id;
>
> switch (internals->mode) {
> - /* allmulti mode is propagated to all slaves */
> + /* allmulti mode is propagated to all members */
> case BONDING_MODE_ROUND_ROBIN:
> case BONDING_MODE_BALANCE:
> case BONDING_MODE_BROADCAST:
> case BONDING_MODE_8023AD: {
> - unsigned int slave_ok = 0;
> + unsigned int member_ok = 0;
>
> - for (i = 0; i < internals->slave_count; i++) {
> - port_id = internals->slaves[i].port_id;
> + for (i = 0; i < internals->member_count; i++) {
> + port_id = internals->members[i].port_id;
>
> ret = rte_eth_allmulticast_enable(port_id);
> if (ret != 0)
> @@ -2819,23 +2823,23 @@ bond_ethdev_allmulticast_enable(struct rte_eth_dev *eth_dev)
> "Failed to enable allmulti mode for port %u: %s",
> port_id, rte_strerror(-ret));
> else
> - slave_ok++;
> + member_ok++;
> }
> /*
> * Report success if operation is successful on at least
> - * on one slave. Otherwise return last error code.
> + * on one member. Otherwise return last error code.
> */
> - if (slave_ok > 0)
> + if (member_ok > 0)
> ret = 0;
> break;
> }
> - /* allmulti mode is propagated only to primary slave */
> + /* allmulti mode is propagated only to primary member */
> case BONDING_MODE_ACTIVE_BACKUP:
> case BONDING_MODE_TLB:
> case BONDING_MODE_ALB:
> default:
> /* Do not touch allmulti when there cannot be primary ports */
> - if (internals->slave_count == 0)
> + if (internals->member_count == 0)
> break;
> port_id = internals->current_primary_port;
> ret = rte_eth_allmulticast_enable(port_id);
> @@ -2857,15 +2861,15 @@ bond_ethdev_allmulticast_disable(struct rte_eth_dev *eth_dev)
> uint16_t port_id;
>
> switch (internals->mode) {
> - /* allmulti mode is propagated to all slaves */
> + /* allmulti mode is propagated to all members */
> case BONDING_MODE_ROUND_ROBIN:
> case BONDING_MODE_BALANCE:
> case BONDING_MODE_BROADCAST:
> case BONDING_MODE_8023AD: {
> - unsigned int slave_ok = 0;
> + unsigned int member_ok = 0;
>
> - for (i = 0; i < internals->slave_count; i++) {
> - uint16_t port_id = internals->slaves[i].port_id;
> + for (i = 0; i < internals->member_count; i++) {
> + uint16_t port_id = internals->members[i].port_id;
>
> if (internals->mode == BONDING_MODE_8023AD &&
> bond_mode_8023ad_ports[port_id].forced_rx_flags ==
> @@ -2878,23 +2882,23 @@ bond_ethdev_allmulticast_disable(struct rte_eth_dev *eth_dev)
> "Failed to disable allmulti mode for port %u: %s",
> port_id, rte_strerror(-ret));
> else
> - slave_ok++;
> + member_ok++;
> }
> /*
> * Report success if operation is successful on at least
> - * on one slave. Otherwise return last error code.
> + * on one member. Otherwise return last error code.
> */
> - if (slave_ok > 0)
> + if (member_ok > 0)
> ret = 0;
> break;
> }
> - /* allmulti mode is propagated only to primary slave */
> + /* allmulti mode is propagated only to primary member */
> case BONDING_MODE_ACTIVE_BACKUP:
> case BONDING_MODE_TLB:
> case BONDING_MODE_ALB:
> default:
> /* Do not touch allmulti when there cannot be primary ports */
> - if (internals->slave_count == 0)
> + if (internals->member_count == 0)
> break;
> port_id = internals->current_primary_port;
> ret = rte_eth_allmulticast_disable(port_id);
> @@ -2918,7 +2922,7 @@ bond_ethdev_allmulticast_update(struct rte_eth_dev *dev)
> case BONDING_MODE_BALANCE:
> case BONDING_MODE_BROADCAST:
> case BONDING_MODE_8023AD:
> - /* As allmulticast mode is propagated to all slaves for these
> + /* As allmulticast mode is propagated to all members for these
> * mode, no need to update for bonding device.
> */
> break;
> @@ -2926,9 +2930,9 @@ bond_ethdev_allmulticast_update(struct rte_eth_dev *dev)
> case BONDING_MODE_TLB:
> case BONDING_MODE_ALB:
> default:
> - /* As allmulticast mode is propagated only to primary slave
> + /* As allmulticast mode is propagated only to primary member
> * for these mode. When active/standby switchover, allmulticast
> - * mode should be set to new primary slave according to bonding
> + * mode should be set to new primary member according to bonding
> * device.
> */
> if (rte_eth_allmulticast_get(internals->port_id) == 1)
> @@ -2961,8 +2965,8 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
> int ret;
>
> uint8_t lsc_flag = 0;
> - int valid_slave = 0;
> - uint16_t active_pos, slave_idx;
> + int valid_member = 0;
> + uint16_t active_pos, member_idx;
> uint16_t i;
>
> if (type != RTE_ETH_EVENT_INTR_LSC || param == NULL)
> @@ -2979,62 +2983,62 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
> if (!bonded_eth_dev->data->dev_started)
> return rc;
>
> - /* verify that port_id is a valid slave of bonded port */
> - for (i = 0; i < internals->slave_count; i++) {
> - if (internals->slaves[i].port_id == port_id) {
> - valid_slave = 1;
> - slave_idx = i;
> + /* verify that port_id is a valid member of bonded port */
> + for (i = 0; i < internals->member_count; i++) {
> + if (internals->members[i].port_id == port_id) {
> + valid_member = 1;
> + member_idx = i;
> break;
> }
> }
>
> - if (!valid_slave)
> + if (!valid_member)
> return rc;
>
> /* Synchronize lsc callback parallel calls either by real link event
> - * from the slaves PMDs or by the bonding PMD itself.
> + * from the members PMDs or by the bonding PMD itself.
> */
> rte_spinlock_lock(&internals->lsc_lock);
>
> /* Search for port in active port list */
> - active_pos = find_slave_by_id(internals->active_slaves,
> - internals->active_slave_count, port_id);
> + active_pos = find_member_by_id(internals->active_members,
> + internals->active_member_count, port_id);
>
> ret = rte_eth_link_get_nowait(port_id, &link);
> if (ret < 0)
> - RTE_BOND_LOG(ERR, "Slave (port %u) link get failed", port_id);
> + RTE_BOND_LOG(ERR, "Member (port %u) link get failed", port_id);
>
> if (ret == 0 && link.link_status) {
> - if (active_pos < internals->active_slave_count)
> + if (active_pos < internals->active_member_count)
> goto link_update;
>
> /* check link state properties if bonded link is up*/
> if (bonded_eth_dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
> if (link_properties_valid(bonded_eth_dev, &link) != 0)
> RTE_BOND_LOG(ERR, "Invalid link properties "
> - "for slave %d in bonding mode %d",
> + "for member %d in bonding mode %d",
> port_id, internals->mode);
> } else {
> - /* inherit slave link properties */
> + /* inherit member link properties */
> link_properties_set(bonded_eth_dev, &link);
> }
>
> - /* If no active slave ports then set this port to be
> + /* If no active member ports then set this port to be
> * the primary port.
> */
> - if (internals->active_slave_count < 1) {
> - /* If first active slave, then change link status */
> + if (internals->active_member_count < 1) {
> + /* If first active member, then change link status */
> bonded_eth_dev->data->dev_link.link_status =
> RTE_ETH_LINK_UP;
> internals->current_primary_port = port_id;
> lsc_flag = 1;
>
> - mac_address_slaves_update(bonded_eth_dev);
> + mac_address_members_update(bonded_eth_dev);
> bond_ethdev_promiscuous_update(bonded_eth_dev);
> bond_ethdev_allmulticast_update(bonded_eth_dev);
> }
>
> - activate_slave(bonded_eth_dev, port_id);
> + activate_member(bonded_eth_dev, port_id);
>
> /* If the user has defined the primary port then default to
> * using it.
> @@ -3043,24 +3047,24 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
> internals->primary_port == port_id)
> bond_ethdev_primary_set(internals, port_id);
> } else {
> - if (active_pos == internals->active_slave_count)
> + if (active_pos == internals->active_member_count)
> goto link_update;
>
> - /* Remove from active slave list */
> - deactivate_slave(bonded_eth_dev, port_id);
> + /* Remove from active member list */
> + deactivate_member(bonded_eth_dev, port_id);
>
> - if (internals->active_slave_count < 1)
> + if (internals->active_member_count < 1)
> lsc_flag = 1;
>
> - /* Update primary id, take first active slave from list or if none
> + /* Update primary id, take first active member from list or if none
> * available set to -1 */
> if (port_id == internals->current_primary_port) {
> - if (internals->active_slave_count > 0)
> + if (internals->active_member_count > 0)
> bond_ethdev_primary_set(internals,
> - internals->active_slaves[0]);
> + internals->active_members[0]);
> else
> internals->current_primary_port = internals->primary_port;
> - mac_address_slaves_update(bonded_eth_dev);
> + mac_address_members_update(bonded_eth_dev);
> bond_ethdev_promiscuous_update(bonded_eth_dev);
> bond_ethdev_allmulticast_update(bonded_eth_dev);
> }
> @@ -3069,10 +3073,10 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
> link_update:
> /**
> * Update bonded device link properties after any change to active
> - * slaves
> + * members
> */
> bond_ethdev_link_update(bonded_eth_dev, 0);
> - internals->slaves[slave_idx].last_link_status = link.link_status;
> + internals->members[member_idx].last_link_status = link.link_status;
>
> if (lsc_flag) {
> /* Cancel any possible outstanding interrupts if delays are enabled */
> @@ -3114,7 +3118,7 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
> {
> unsigned i, j;
> int result = 0;
> - int slave_reta_size;
> + int member_reta_size;
> unsigned reta_count;
> struct bond_dev_private *internals = dev->data->dev_private;
>
> @@ -3137,11 +3141,11 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
> memcpy(&internals->reta_conf[i], &internals->reta_conf[0],
> sizeof(internals->reta_conf[0]) * reta_count);
>
> - /* Propagate RETA over slaves */
> - for (i = 0; i < internals->slave_count; i++) {
> - slave_reta_size = internals->slaves[i].reta_size;
> - result = rte_eth_dev_rss_reta_update(internals->slaves[i].port_id,
> - &internals->reta_conf[0], slave_reta_size);
> + /* Propagate RETA over members */
> + for (i = 0; i < internals->member_count; i++) {
> + member_reta_size = internals->members[i].reta_size;
> + result = rte_eth_dev_rss_reta_update(internals->members[i].port_id,
> + &internals->reta_conf[0], member_reta_size);
> if (result < 0)
> return result;
> }
> @@ -3194,8 +3198,8 @@ bond_ethdev_rss_hash_update(struct rte_eth_dev *dev,
> bond_rss_conf.rss_key_len = internals->rss_key_len;
> }
>
> - for (i = 0; i < internals->slave_count; i++) {
> - result = rte_eth_dev_rss_hash_update(internals->slaves[i].port_id,
> + for (i = 0; i < internals->member_count; i++) {
> + result = rte_eth_dev_rss_hash_update(internals->members[i].port_id,
> &bond_rss_conf);
> if (result < 0)
> return result;
> @@ -3221,21 +3225,21 @@ bond_ethdev_rss_hash_conf_get(struct rte_eth_dev *dev,
> static int
> bond_ethdev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
> {
> - struct rte_eth_dev *slave_eth_dev;
> + struct rte_eth_dev *member_eth_dev;
> struct bond_dev_private *internals = dev->data->dev_private;
> int ret, i;
>
> rte_spinlock_lock(&internals->lock);
>
> - for (i = 0; i < internals->slave_count; i++) {
> - slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
> - if (*slave_eth_dev->dev_ops->mtu_set == NULL) {
> + for (i = 0; i < internals->member_count; i++) {
> + member_eth_dev = &rte_eth_devices[internals->members[i].port_id];
> + if (*member_eth_dev->dev_ops->mtu_set == NULL) {
> rte_spinlock_unlock(&internals->lock);
> return -ENOTSUP;
> }
> }
> - for (i = 0; i < internals->slave_count; i++) {
> - ret = rte_eth_dev_set_mtu(internals->slaves[i].port_id, mtu);
> + for (i = 0; i < internals->member_count; i++) {
> + ret = rte_eth_dev_set_mtu(internals->members[i].port_id, mtu);
> if (ret < 0) {
> rte_spinlock_unlock(&internals->lock);
> return ret;
> @@ -3271,29 +3275,29 @@ bond_ethdev_mac_addr_add(struct rte_eth_dev *dev,
> struct rte_ether_addr *mac_addr,
> __rte_unused uint32_t index, uint32_t vmdq)
> {
> - struct rte_eth_dev *slave_eth_dev;
> + struct rte_eth_dev *member_eth_dev;
> struct bond_dev_private *internals = dev->data->dev_private;
> int ret, i;
>
> rte_spinlock_lock(&internals->lock);
>
> - for (i = 0; i < internals->slave_count; i++) {
> - slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
> - if (*slave_eth_dev->dev_ops->mac_addr_add == NULL ||
> - *slave_eth_dev->dev_ops->mac_addr_remove == NULL) {
> + for (i = 0; i < internals->member_count; i++) {
> + member_eth_dev = &rte_eth_devices[internals->members[i].port_id];
> + if (*member_eth_dev->dev_ops->mac_addr_add == NULL ||
> + *member_eth_dev->dev_ops->mac_addr_remove == NULL) {
> ret = -ENOTSUP;
> goto end;
> }
> }
>
> - for (i = 0; i < internals->slave_count; i++) {
> - ret = rte_eth_dev_mac_addr_add(internals->slaves[i].port_id,
> + for (i = 0; i < internals->member_count; i++) {
> + ret = rte_eth_dev_mac_addr_add(internals->members[i].port_id,
> mac_addr, vmdq);
> if (ret < 0) {
> /* rollback */
> for (i--; i >= 0; i--)
> rte_eth_dev_mac_addr_remove(
> - internals->slaves[i].port_id, mac_addr);
> + internals->members[i].port_id, mac_addr);
> goto end;
> }
> }
> @@ -3307,22 +3311,22 @@ bond_ethdev_mac_addr_add(struct rte_eth_dev *dev,
> static void
> bond_ethdev_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index)
> {
> - struct rte_eth_dev *slave_eth_dev;
> + struct rte_eth_dev *member_eth_dev;
> struct bond_dev_private *internals = dev->data->dev_private;
> int i;
>
> rte_spinlock_lock(&internals->lock);
>
> - for (i = 0; i < internals->slave_count; i++) {
> - slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
> - if (*slave_eth_dev->dev_ops->mac_addr_remove == NULL)
> + for (i = 0; i < internals->member_count; i++) {
> + member_eth_dev = &rte_eth_devices[internals->members[i].port_id];
> + if (*member_eth_dev->dev_ops->mac_addr_remove == NULL)
> goto end;
> }
>
> struct rte_ether_addr *mac_addr = &dev->data->mac_addrs[index];
>
> - for (i = 0; i < internals->slave_count; i++)
> - rte_eth_dev_mac_addr_remove(internals->slaves[i].port_id,
> + for (i = 0; i < internals->member_count; i++)
> + rte_eth_dev_mac_addr_remove(internals->members[i].port_id,
> mac_addr);
>
> end:
> @@ -3402,30 +3406,30 @@ dump_basic(const struct rte_eth_dev *dev, FILE *f)
> fprintf(f, "\n");
> }
>
> - if (internals->slave_count > 0) {
> - fprintf(f, "\tSlaves (%u): [", internals->slave_count);
> - for (i = 0; i < internals->slave_count - 1; i++)
> - fprintf(f, "%u ", internals->slaves[i].port_id);
> + if (internals->member_count > 0) {
> + fprintf(f, "\tMembers (%u): [", internals->member_count);
> + for (i = 0; i < internals->member_count - 1; i++)
> + fprintf(f, "%u ", internals->members[i].port_id);
>
> - fprintf(f, "%u]\n", internals->slaves[internals->slave_count - 1].port_id);
> + fprintf(f, "%u]\n", internals->members[internals->member_count - 1].port_id);
> } else {
> - fprintf(f, "\tSlaves: []\n");
> + fprintf(f, "\tMembers: []\n");
> }
>
> - if (internals->active_slave_count > 0) {
> - fprintf(f, "\tActive Slaves (%u): [", internals->active_slave_count);
> - for (i = 0; i < internals->active_slave_count - 1; i++)
> - fprintf(f, "%u ", internals->active_slaves[i]);
> + if (internals->active_member_count > 0) {
> + fprintf(f, "\tActive Members (%u): [", internals->active_member_count);
> + for (i = 0; i < internals->active_member_count - 1; i++)
> + fprintf(f, "%u ", internals->active_members[i]);
>
> - fprintf(f, "%u]\n", internals->active_slaves[internals->active_slave_count - 1]);
> + fprintf(f, "%u]\n", internals->active_members[internals->active_member_count - 1]);
>
> } else {
> - fprintf(f, "\tActive Slaves: []\n");
> + fprintf(f, "\tActive Members: []\n");
> }
>
> if (internals->user_defined_primary_port)
> fprintf(f, "\tUser Defined Primary: [%u]\n", internals->primary_port);
> - if (internals->slave_count > 0)
> + if (internals->member_count > 0)
> fprintf(f, "\tCurrent Primary: [%u]\n", internals->current_primary_port);
> }
>
> @@ -3471,7 +3475,7 @@ dump_lacp_port_param(const struct port_params *params, FILE *f)
> }
>
> static void
> -dump_lacp_slave(const struct rte_eth_bond_8023ad_slave_info *info, FILE *f)
> +dump_lacp_member(const struct rte_eth_bond_8023ad_member_info *info, FILE *f)
> {
> char a_state[256] = { 0 };
> char p_state[256] = { 0 };
> @@ -3520,18 +3524,18 @@ dump_lacp_slave(const struct rte_eth_bond_8023ad_slave_info *info, FILE *f)
> static void
> dump_lacp(uint16_t port_id, FILE *f)
> {
> - struct rte_eth_bond_8023ad_slave_info slave_info;
> + struct rte_eth_bond_8023ad_member_info member_info;
> struct rte_eth_bond_8023ad_conf port_conf;
> - uint16_t slaves[RTE_MAX_ETHPORTS];
> - int num_active_slaves;
> + uint16_t members[RTE_MAX_ETHPORTS];
> + int num_active_members;
> int i, ret;
>
> fprintf(f, " - Lacp info:\n");
>
> - num_active_slaves = rte_eth_bond_active_slaves_get(port_id, slaves,
> + num_active_members = rte_eth_bond_active_members_get(port_id, members,
> RTE_MAX_ETHPORTS);
> - if (num_active_slaves < 0) {
> - fprintf(f, "\tFailed to get active slave list for port %u\n",
> + if (num_active_members < 0) {
> + fprintf(f, "\tFailed to get active member list for port %u\n",
> port_id);
> return;
> }
> @@ -3545,16 +3549,16 @@ dump_lacp(uint16_t port_id, FILE *f)
> }
> dump_lacp_conf(&port_conf, f);
>
> - for (i = 0; i < num_active_slaves; i++) {
> - ret = rte_eth_bond_8023ad_slave_info(port_id, slaves[i],
> - &slave_info);
> + for (i = 0; i < num_active_members; i++) {
> + ret = rte_eth_bond_8023ad_member_info(port_id, members[i],
> + &member_info);
> if (ret) {
> - fprintf(f, "\tGet slave device %u 8023ad info failed\n",
> - slaves[i]);
> + fprintf(f, "\tGet member device %u 8023ad info failed\n",
> + members[i]);
> return;
> }
> - fprintf(f, "\tSlave Port: %u\n", slaves[i]);
> - dump_lacp_slave(&slave_info, f);
> + fprintf(f, "\tMember Port: %u\n", members[i]);
> + dump_lacp_member(&member_info, f);
> }
> }
>
> @@ -3655,8 +3659,8 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
> internals->link_down_delay_ms = 0;
> internals->link_up_delay_ms = 0;
>
> - internals->slave_count = 0;
> - internals->active_slave_count = 0;
> + internals->member_count = 0;
> + internals->active_member_count = 0;
> internals->rx_offload_capa = 0;
> internals->tx_offload_capa = 0;
> internals->rx_queue_offload_capa = 0;
> @@ -3684,8 +3688,8 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
> internals->rx_desc_lim.nb_align = 1;
> internals->tx_desc_lim.nb_align = 1;
>
> - memset(internals->active_slaves, 0, sizeof(internals->active_slaves));
> - memset(internals->slaves, 0, sizeof(internals->slaves));
> + memset(internals->active_members, 0, sizeof(internals->active_members));
> + memset(internals->members, 0, sizeof(internals->members));
>
> TAILQ_INIT(&internals->flow_list);
> internals->flow_isolated_valid = 0;
> @@ -3770,7 +3774,7 @@ bond_probe(struct rte_vdev_device *dev)
> /* Parse link bonding mode */
> if (rte_kvargs_count(kvlist, PMD_BOND_MODE_KVARG) == 1) {
> if (rte_kvargs_process(kvlist, PMD_BOND_MODE_KVARG,
> - &bond_ethdev_parse_slave_mode_kvarg,
> + &bond_ethdev_parse_member_mode_kvarg,
> &bonding_mode) != 0) {
> RTE_BOND_LOG(ERR, "Invalid mode for bonded device %s",
> name);
> @@ -3815,7 +3819,7 @@ bond_probe(struct rte_vdev_device *dev)
> if (rte_kvargs_count(kvlist, PMD_BOND_AGG_MODE_KVARG) == 1) {
> if (rte_kvargs_process(kvlist,
> PMD_BOND_AGG_MODE_KVARG,
> - &bond_ethdev_parse_slave_agg_mode_kvarg,
> + &bond_ethdev_parse_member_agg_mode_kvarg,
> &agg_mode) != 0) {
> RTE_BOND_LOG(ERR,
> "Failed to parse agg selection mode for bonded device %s",
> @@ -3865,7 +3869,7 @@ bond_remove(struct rte_vdev_device *dev)
> RTE_ASSERT(eth_dev->device == &dev->device);
>
> internals = eth_dev->data->dev_private;
> - if (internals->slave_count != 0)
> + if (internals->member_count != 0)
> return -EBUSY;
>
> if (eth_dev->data->dev_started == 1) {
> @@ -3877,7 +3881,7 @@ bond_remove(struct rte_vdev_device *dev)
> return ret;
> }
>
> -/* this part will resolve the slave portids after all the other pdev and vdev
> +/* this part will resolve the member portids after all the other pdev and vdev
> * have been allocated */
> static int
> bond_ethdev_configure(struct rte_eth_dev *dev)
> @@ -3959,7 +3963,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
> if (link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
> if ((link_speeds &
> (internals->speed_capa & ~RTE_ETH_LINK_SPEED_FIXED)) == 0) {
> - RTE_BOND_LOG(ERR, "the fixed speed is not supported by all slave devices.");
> + RTE_BOND_LOG(ERR, "the fixed speed is not supported by all member devices.");
> return -EINVAL;
> }
> /*
> @@ -4041,7 +4045,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
> if (rte_kvargs_count(kvlist, PMD_BOND_AGG_MODE_KVARG) == 1) {
> if (rte_kvargs_process(kvlist,
> PMD_BOND_AGG_MODE_KVARG,
> - &bond_ethdev_parse_slave_agg_mode_kvarg,
> + &bond_ethdev_parse_member_agg_mode_kvarg,
> &agg_mode) != 0) {
> RTE_BOND_LOG(ERR,
> "Failed to parse agg selection mode for bonded device %s",
> @@ -4059,60 +4063,60 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
> }
> }
>
> - /* Parse/add slave ports to bonded device */
> - if (rte_kvargs_count(kvlist, PMD_BOND_SLAVE_PORT_KVARG) > 0) {
> - struct bond_ethdev_slave_ports slave_ports;
> + /* Parse/add member ports to bonded device */
> + if (rte_kvargs_count(kvlist, PMD_BOND_MEMBER_PORT_KVARG) > 0) {
> + struct bond_ethdev_member_ports member_ports;
> unsigned i;
>
> - memset(&slave_ports, 0, sizeof(slave_ports));
> + memset(&member_ports, 0, sizeof(member_ports));
>
> - if (rte_kvargs_process(kvlist, PMD_BOND_SLAVE_PORT_KVARG,
> - &bond_ethdev_parse_slave_port_kvarg, &slave_ports) != 0) {
> + if (rte_kvargs_process(kvlist, PMD_BOND_MEMBER_PORT_KVARG,
> + &bond_ethdev_parse_member_port_kvarg, &member_ports) != 0) {
> RTE_BOND_LOG(ERR,
> - "Failed to parse slave ports for bonded device %s",
> + "Failed to parse member ports for bonded device %s",
> name);
> return -1;
> }
>
> - for (i = 0; i < slave_ports.slave_count; i++) {
> - if (rte_eth_bond_slave_add(port_id, slave_ports.slaves[i]) != 0) {
> + for (i = 0; i < member_ports.member_count; i++) {
> + if (rte_eth_bond_member_add(port_id, member_ports.members[i]) != 0) {
> RTE_BOND_LOG(ERR,
> - "Failed to add port %d as slave to bonded device %s",
> - slave_ports.slaves[i], name);
> + "Failed to add port %d as member to bonded device %s",
> + member_ports.members[i], name);
> }
> }
>
> } else {
> - RTE_BOND_LOG(INFO, "No slaves specified for bonded device %s", name);
> + RTE_BOND_LOG(INFO, "No members specified for bonded device %s", name);
> return -1;
> }
>
> - /* Parse/set primary slave port id*/
> - arg_count = rte_kvargs_count(kvlist, PMD_BOND_PRIMARY_SLAVE_KVARG);
> + /* Parse/set primary member port id*/
> + arg_count = rte_kvargs_count(kvlist, PMD_BOND_PRIMARY_MEMBER_KVARG);
> if (arg_count == 1) {
> - uint16_t primary_slave_port_id;
> + uint16_t primary_member_port_id;
>
> if (rte_kvargs_process(kvlist,
> - PMD_BOND_PRIMARY_SLAVE_KVARG,
> - &bond_ethdev_parse_primary_slave_port_id_kvarg,
> - &primary_slave_port_id) < 0) {
> + PMD_BOND_PRIMARY_MEMBER_KVARG,
> + &bond_ethdev_parse_primary_member_port_id_kvarg,
> + &primary_member_port_id) < 0) {
> RTE_BOND_LOG(INFO,
> - "Invalid primary slave port id specified for bonded device %s",
> + "Invalid primary member port id specified for bonded device %s",
> name);
> return -1;
> }
>
> /* Set balance mode transmit policy*/
> - if (rte_eth_bond_primary_set(port_id, primary_slave_port_id)
> + if (rte_eth_bond_primary_set(port_id, primary_member_port_id)
> != 0) {
> RTE_BOND_LOG(ERR,
> - "Failed to set primary slave port %d on bonded device %s",
> - primary_slave_port_id, name);
> + "Failed to set primary member port %d on bonded device %s",
> + primary_member_port_id, name);
> return -1;
> }
> } else if (arg_count > 1) {
> RTE_BOND_LOG(INFO,
> - "Primary slave can be specified only once for bonded device %s",
> + "Primary member can be specified only once for bonded device %s",
> name);
> return -1;
> }
> @@ -4206,15 +4210,15 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
> return -1;
> }
>
> - /* configure slaves so we can pass mtu setting */
> - for (i = 0; i < internals->slave_count; i++) {
> - struct rte_eth_dev *slave_ethdev =
> - &(rte_eth_devices[internals->slaves[i].port_id]);
> - if (slave_configure(dev, slave_ethdev) != 0) {
> + /* configure members so we can pass mtu setting */
> + for (i = 0; i < internals->member_count; i++) {
> + struct rte_eth_dev *member_ethdev =
> + &(rte_eth_devices[internals->members[i].port_id]);
> + if (member_configure(dev, member_ethdev) != 0) {
> RTE_BOND_LOG(ERR,
> - "bonded port (%d) failed to configure slave device (%d)",
> + "bonded port (%d) failed to configure member device (%d)",
> dev->data->port_id,
> - internals->slaves[i].port_id);
> + internals->members[i].port_id);
> return -1;
> }
> }
> @@ -4230,7 +4234,7 @@ RTE_PMD_REGISTER_VDEV(net_bonding, pmd_bond_drv);
> RTE_PMD_REGISTER_ALIAS(net_bonding, eth_bond);
>
> RTE_PMD_REGISTER_PARAM_STRING(net_bonding,
> - "slave=<ifc> "
> + "member=<ifc> "
> "primary=<ifc> "
> "mode=[0-6] "
> "xmit_policy=[l2 | l23 | l34] "
> diff --git a/drivers/net/bonding/version.map b/drivers/net/bonding/version.map
> index bd28ee78a5..09ee21c55f 100644
> --- a/drivers/net/bonding/version.map
> +++ b/drivers/net/bonding/version.map
> @@ -12,8 +12,6 @@ DPDK_24 {
> rte_eth_bond_8023ad_ext_distrib_get;
> rte_eth_bond_8023ad_ext_slowtx;
> rte_eth_bond_8023ad_setup;
> - rte_eth_bond_8023ad_slave_info;
> - rte_eth_bond_active_slaves_get;
> rte_eth_bond_create;
> rte_eth_bond_free;
> rte_eth_bond_link_monitoring_set;
> @@ -23,11 +21,18 @@ DPDK_24 {
> rte_eth_bond_mode_set;
> rte_eth_bond_primary_get;
> rte_eth_bond_primary_set;
> - rte_eth_bond_slave_add;
> - rte_eth_bond_slave_remove;
> - rte_eth_bond_slaves_get;
> rte_eth_bond_xmit_policy_get;
> rte_eth_bond_xmit_policy_set;
>
> local: *;
> };
> +
> +EXPERIMENTAL {
> + # added in 23.11
> + global:
> + rte_eth_bond_8023ad_member_info;
> + rte_eth_bond_active_members_get;
> + rte_eth_bond_member_add;
> + rte_eth_bond_member_remove;
> + rte_eth_bond_members_get;
> +};
> diff --git a/examples/bond/main.c b/examples/bond/main.c
> index 9b076bb39f..90f422ec11 100644
> --- a/examples/bond/main.c
> +++ b/examples/bond/main.c
> @@ -105,8 +105,8 @@
> ":%02"PRIx8":%02"PRIx8":%02"PRIx8, \
> RTE_ETHER_ADDR_BYTES(&addr))
>
> -uint16_t slaves[RTE_MAX_ETHPORTS];
> -uint16_t slaves_count;
> +uint16_t members[RTE_MAX_ETHPORTS];
> +uint16_t members_count;
>
> static uint16_t BOND_PORT = 0xffff;
>
> @@ -128,7 +128,7 @@ static struct rte_eth_conf port_conf = {
> };
>
> static void
> -slave_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
> +member_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
> {
> int retval;
> uint16_t nb_rxd = RTE_RX_DESC_DEFAULT;
> @@ -252,10 +252,10 @@ bond_port_init(struct rte_mempool *mbuf_pool)
> rte_exit(EXIT_FAILURE, "port %u: rte_eth_dev_adjust_nb_rx_tx_desc "
> "failed (res=%d)\n", BOND_PORT, retval);
>
> - for (i = 0; i < slaves_count; i++) {
> - if (rte_eth_bond_slave_add(BOND_PORT, slaves[i]) == -1)
> - rte_exit(-1, "Oooops! adding slave (%u) to bond (%u) failed!\n",
> - slaves[i], BOND_PORT);
> + for (i = 0; i < members_count; i++) {
> + if (rte_eth_bond_member_add(BOND_PORT, members[i]) == -1)
> + rte_exit(-1, "Oooops! adding member (%u) to bond (%u) failed!\n",
> + members[i], BOND_PORT);
>
> }
>
> @@ -283,18 +283,18 @@ bond_port_init(struct rte_mempool *mbuf_pool)
> if (retval < 0)
> rte_exit(retval, "Start port %d failed (res=%d)", BOND_PORT, retval);
>
> - printf("Waiting for slaves to become active...");
> + printf("Waiting for members to become active...");
> while (wait_counter) {
> - uint16_t act_slaves[16] = {0};
> - if (rte_eth_bond_active_slaves_get(BOND_PORT, act_slaves, 16) ==
> - slaves_count) {
> + uint16_t act_members[16] = {0};
> + if (rte_eth_bond_active_members_get(BOND_PORT, act_members, 16) ==
> + members_count) {
> printf("\n");
> break;
> }
> sleep(1);
> printf("...");
> if (--wait_counter == 0)
> - rte_exit(-1, "\nFailed to activate slaves\n");
> + rte_exit(-1, "\nFailed to activate members\n");
> }
>
> retval = rte_eth_promiscuous_enable(BOND_PORT);
> @@ -631,7 +631,7 @@ static void cmd_help_parsed(__rte_unused void *parsed_result,
> "send IP - sends one ARPrequest through bonding for IP.\n"
> "start - starts listening ARPs.\n"
> "stop - stops lcore_main.\n"
> - "show - shows some bond info: ex. active slaves etc.\n"
> + "show - shows some bond info: ex. active members etc.\n"
> "help - prints help.\n"
> "quit - terminate all threads and quit.\n"
> );
> @@ -742,13 +742,13 @@ static void cmd_show_parsed(__rte_unused void *parsed_result,
> struct cmdline *cl,
> __rte_unused void *data)
> {
> - uint16_t slaves[16] = {0};
> + uint16_t members[16] = {0};
> uint8_t len = 16;
> struct rte_ether_addr addr;
> uint16_t i;
> int ret;
>
> - for (i = 0; i < slaves_count; i++) {
> + for (i = 0; i < members_count; i++) {
> ret = rte_eth_macaddr_get(i, &addr);
> if (ret != 0) {
> cmdline_printf(cl,
> @@ -763,9 +763,9 @@ static void cmd_show_parsed(__rte_unused void *parsed_result,
>
> rte_spinlock_lock(&global_flag_stru_p->lock);
> cmdline_printf(cl,
> - "Active_slaves:%d "
> + "Active_members:%d "
> "packets received:Tot:%d Arp:%d IPv4:%d\n",
> - rte_eth_bond_active_slaves_get(BOND_PORT, slaves, len),
> + rte_eth_bond_active_members_get(BOND_PORT, members, len),
> global_flag_stru_p->port_packets[0],
> global_flag_stru_p->port_packets[1],
> global_flag_stru_p->port_packets[2]);
> @@ -836,10 +836,10 @@ main(int argc, char *argv[])
> rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
>
> /* initialize all ports */
> - slaves_count = nb_ports;
> + members_count = nb_ports;
> RTE_ETH_FOREACH_DEV(i) {
> - slave_port_init(i, mbuf_pool);
> - slaves[i] = i;
> + member_port_init(i, mbuf_pool);
> + members[i] = i;
> }
>
> bond_port_init(mbuf_pool);
* [PATCH v4 0/6] RFC optional rte optional stdatomics API
2023-08-11 1:31 4% [PATCH 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (2 preceding siblings ...)
2023-08-16 19:19 3% ` [PATCH v3 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
@ 2023-08-16 21:38 3% ` Tyler Retzlaff
2023-08-16 21:38 2% ` [PATCH v4 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
2023-08-17 21:42 3% ` [PATCH v5 0/6] optional rte optional stdatomics API Tyler Retzlaff
2023-08-22 21:00 3% ` [PATCH v6 0/6] rte atomics API for optional stdatomic Tyler Retzlaff
5 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-08-16 21:38 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
This series introduces API additions prefixed in the rte namespace that allow
the optional use of stdatomics.h from C11 using enable_stdatomics=true; for
targets where enable_stdatomics=false, no functional change is intended.
Be aware this does not contain all changes to use stdatomics across the DPDK
tree; it only introduces the minimum to allow the option to be used, which is
a prerequisite for a clean CI (probably using clang) that can be run with
enable_stdatomics=true enabled.
It is planned that subsequent series will be introduced per lib/driver as
appropriate to further enable stdatomics use when enable_stdatomics=true.
Notes:
* Additional libraries beyond EAL make atomics visible across the
API/ABI surface; they will be converted in subsequent series.
* The eal: add rte atomic qualifier with casts patch needs some discussion
as to whether or not the legacy rte_atomic APIs should be converted to
work with enable_stdatomic=true. Right now some implementation-dependent
casts are used to prevent cascading changes / having to convert too much
in the initial series.
* Windows will obviously need complete conversion of libraries, including
atomics that are not crossing API/ABI boundaries. Those conversions will
be introduced in separate series alongside the existing MSVC series.
Please keep in mind we would like to prioritize the review / acceptance of
this patch since it needs to be completed in the 23.11 merge window.
Thank you all for the discussion that lead to the formation of this series.
v4:
* Move the definition of #define RTE_ATOMIC(type) to patch 1 where it
belongs (a mistake in v3)
* Provide comments for both RTE_ATOMIC and __rte_atomic macros indicating
their use as specified or qualified contexts.
v3:
* Remove comments from APIs mentioning the mapping to C++ memory model
memory orders
* Introduce and use new macro RTE_ATOMIC(type) to be used in contexts
where _Atomic is used as a type specifier to declare variables. The
macro makes it clearer what atomic type is being specified: e.g. with
_Atomic(T *) vs _Atomic(T) it is easier to understand that the former
is an atomic pointer type and the latter is an atomic type. It also
has the benefit of (in the future) being syntactically interoperable
with C++23.
note: Morten, I have retained your 'reviewed-by' tags; if you disagree
given the changes in the above version, please indicate as such, but
I believe the changes are in the spirit of the feedback you provided.
v2:
* Wrap meson_options.txt option description to newline and indent to
be consistent with other options.
* Provide separate typedef of rte_memory_order for enable_stdatomic=true
VS enable_stdatomic=false instead of a single typedef to int
note: as a slight tweak to reviewer feedback, I've chosen to use a typedef
for both enable_stdatomic={true,false} (it just seemed more consistent)
* Bring in assert.h and use static_assert macro instead of _Static_assert
keyword to better interoperate with c/c++
* Directly include rte_stdatomic.h in the other places it is consumed,
instead of hacking it globally into rte_config.h
* Provide and use __rte_atomic_thread_fence to allow conditional expansion
within the body of existing rte_atomic_thread_fence inline function to
maintain per-arch optimizations when enable_stdatomic=false
Tyler Retzlaff (6):
eal: provide rte stdatomics optional atomics API
eal: adapt EAL to present rte optional atomics API
eal: add rte atomic qualifier with casts
distributor: adapt for EAL optional atomics API changes
bpf: adapt for EAL optional atomics API changes
devtools: forbid new direct use of GCC atomic builtins
app/test/test_mcslock.c | 6 +-
config/meson.build | 1 +
devtools/checkpatches.sh | 6 +-
lib/bpf/bpf_pkt.c | 6 +-
lib/distributor/distributor_private.h | 2 +-
lib/distributor/rte_distributor_single.c | 44 +++----
lib/eal/arm/include/rte_atomic_32.h | 4 +-
lib/eal/arm/include/rte_atomic_64.h | 36 +++---
lib/eal/arm/include/rte_pause_64.h | 26 ++--
lib/eal/arm/rte_power_intrinsics.c | 8 +-
lib/eal/common/eal_common_trace.c | 16 +--
lib/eal/include/generic/rte_atomic.h | 67 +++++++----
lib/eal/include/generic/rte_pause.h | 50 ++++----
lib/eal/include/generic/rte_rwlock.h | 48 ++++----
lib/eal/include/generic/rte_spinlock.h | 20 ++--
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_mcslock.h | 51 ++++----
lib/eal/include/rte_pflock.h | 25 ++--
lib/eal/include/rte_seqcount.h | 19 +--
lib/eal/include/rte_stdatomic.h | 198 +++++++++++++++++++++++++++++++
lib/eal/include/rte_ticketlock.h | 43 +++----
lib/eal/include/rte_trace_point.h | 5 +-
lib/eal/loongarch/include/rte_atomic.h | 4 +-
lib/eal/ppc/include/rte_atomic.h | 54 ++++-----
lib/eal/riscv/include/rte_atomic.h | 4 +-
lib/eal/x86/include/rte_atomic.h | 8 +-
lib/eal/x86/include/rte_spinlock.h | 2 +-
lib/eal/x86/rte_power_intrinsics.c | 7 +-
meson_options.txt | 2 +
29 files changed, 497 insertions(+), 266 deletions(-)
create mode 100644 lib/eal/include/rte_stdatomic.h
--
1.8.3.1
^ permalink raw reply [relevance 3%]
* [PATCH v4 3/6] eal: add rte atomic qualifier with casts
2023-08-16 21:38 3% ` [PATCH v4 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
@ 2023-08-16 21:38 2% ` Tyler Retzlaff
0 siblings, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-08-16 21:38 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Introduce __rte_atomic qualifying casts in rte_optional atomics inline
functions to prevent cascading the need to pass __rte_atomic qualified
arguments.
Warning: this is really implementation dependent and is being done
temporarily to avoid having to convert more of the libraries and tests in
DPDK in the initial series that introduces the API. The assumption that
the ABI of the types in question is ``the same'' carries a risk that is
only realized when enable_stdatomic=true.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/eal/include/generic/rte_atomic.h | 48 ++++++++++++++++++++++++------------
lib/eal/include/generic/rte_pause.h | 9 ++++---
lib/eal/x86/rte_power_intrinsics.c | 7 +++---
3 files changed, 42 insertions(+), 22 deletions(-)
diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
index 5940e7e..709bf15 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -274,7 +274,8 @@
static inline void
rte_atomic16_add(rte_atomic16_t *v, int16_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile __rte_atomic int16_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
/**
@@ -288,7 +289,8 @@
static inline void
rte_atomic16_sub(rte_atomic16_t *v, int16_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile __rte_atomic int16_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
/**
@@ -341,7 +343,8 @@
static inline int16_t
rte_atomic16_add_return(rte_atomic16_t *v, int16_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int16_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
/**
@@ -361,7 +364,8 @@
static inline int16_t
rte_atomic16_sub_return(rte_atomic16_t *v, int16_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int16_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
/**
@@ -380,7 +384,8 @@
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int16_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -400,7 +405,8 @@ static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int16_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -553,7 +559,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_add(rte_atomic32_t *v, int32_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile __rte_atomic int32_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
/**
@@ -567,7 +574,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_sub(rte_atomic32_t *v, int32_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile __rte_atomic int32_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
/**
@@ -620,7 +628,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_add_return(rte_atomic32_t *v, int32_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int32_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
/**
@@ -640,7 +649,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_sub_return(rte_atomic32_t *v, int32_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int32_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
/**
@@ -659,7 +669,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int32_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -679,7 +690,8 @@ static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int32_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -885,7 +897,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile __rte_atomic int64_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
#endif
@@ -904,7 +917,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile __rte_atomic int64_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
#endif
@@ -962,7 +976,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int64_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
#endif
@@ -986,7 +1001,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int64_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
#endif
diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
index 256309e..b7b059f 100644
--- a/lib/eal/include/generic/rte_pause.h
+++ b/lib/eal/include/generic/rte_pause.h
@@ -81,7 +81,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile __rte_atomic uint16_t *)addr, memorder)
+ != expected)
rte_pause();
}
@@ -91,7 +92,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile __rte_atomic uint32_t *)addr, memorder)
+ != expected)
rte_pause();
}
@@ -101,7 +103,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile __rte_atomic uint64_t *)addr, memorder)
+ != expected)
rte_pause();
}
diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index cf70e33..fb8539f 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -23,9 +23,10 @@
uint64_t val;
/* trigger a write but don't change the value */
- val = rte_atomic_load_explicit((volatile uint64_t *)addr, rte_memory_order_relaxed);
- rte_atomic_compare_exchange_strong_explicit((volatile uint64_t *)addr, &val, val,
- rte_memory_order_relaxed, rte_memory_order_relaxed);
+ val = rte_atomic_load_explicit((volatile __rte_atomic uint64_t *)addr,
+ rte_memory_order_relaxed);
+ rte_atomic_compare_exchange_strong_explicit((volatile __rte_atomic uint64_t *)addr,
+ &val, val, rte_memory_order_relaxed, rte_memory_order_relaxed);
}
static bool wait_supported;
--
1.8.3.1
^ permalink raw reply [relevance 2%]
* RE: C11 atomics adoption blocked
2023-08-16 17:25 0% ` Tyler Retzlaff
@ 2023-08-16 20:30 0% ` Morten Brørup
0 siblings, 0 replies; 200+ results
From: Morten Brørup @ 2023-08-16 20:30 UTC (permalink / raw)
To: Tyler Retzlaff
Cc: Thomas Monjalon, Bruce Richardson, dev, techboard,
david.marchand, Honnappa.Nagarahalli
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Wednesday, 16 August 2023 19.26
>
> On Mon, Aug 14, 2023 at 05:13:04PM +0200, Morten Brørup wrote:
> > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > Sent: Monday, 14 August 2023 15.46
> > >
> > > mercredi 9 août 2023, Morten Brørup:
> > > > > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > > > > Sent: Tuesday, 8 August 2023 22.50
> > > > >
> > > > > On Tue, Aug 08, 2023 at 10:22:09PM +0200, Morten Brørup wrote:
[...]
> > > > And what about C++ atomics... Do we want (or need?) a third
> variant
> > > using C++ atomics, e.g. "atomic<int> x;" instead of "_Atomic int
> x;"? (I
> > > hope not!) For reference, the "atomic_int" type is "_Atomic int" in
> C,
> > > but "std::atomic<int>" in C++.
> > > >
> > > > C++23 provides the C11 compatibility macro "_Atomic(T)", which
> means
> > > "_Atomic T" in C and "std::atomic<T>" in C++. Perhaps we can
> somewhat
> > > rely on this, and update our coding standards to require using e.g.
> > > "_Atomic(int)" for atomic types, and disallow using "_Atomic int".
> > >
> > > You mean the syntax _Atomic(T) is working well in both C and C++?
> >
> > This syntax is API compatible across C11 and C++23, so it would work
> with (C11 and C++23) applications building DPDK from scratch.
> >
> > But it is only "recommended" ABI compatible for compilers [1], so DPDK
> in distros cannot rely on.
> >
> > [1]: https://www.open-
> std.org/jtc1/sc22/wg21/docs/papers/2020/p0943r6.html
> >
> > It would be future-proofing for the benefit of C++23 based
> applications... I was mainly mentioning it for completeness, now that we
> are switching to a new standard for atomics.
> >
> > Realistically, considering that 1. such a coding standard (requiring
> "_Atomic(T)" instead of "_Atomic T") would only be relevant for a 2023
> standard, and 2. that we are now upgrading to a standard from 2011, we
> would probably have to wait for a very distant future (12 years?) before
> C++ applications can reap the benefits of such a coding standard.
> >
Since writing the paragraph above a few days ago, I have become wiser today [1]... It turns out that the "_Atomic(T)" syntax not only comes into play with C++23, but is directly relevant for C11. Everyone, please pardon the confusion the above paragraph might have caused!
[1]: http://inbox.dpdk.org/dev/98CBD80474FA8B44BF855DF32C47DC35D87B0F@smartserver.smartshare.dk/
>
> i just want to give feedback on this coding convention topic here (in
> relation to the RFC patch series thread). i think the convention of using
> the macro should be adopted now, the main reason being that it is far
> easier to tell whether an atomic type is a plain type or a pointer type
> when the '*' is captured as a part of the macro parameter.
>
> please see the RFC patch thread for the details of how this was
> beneficial for rte_mcslock.h and how the placement of the _Atomic
> keyword matters when applied to pointer types of incomplete types.
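The pointer-type distinction discussed above can be sketched in a small C11 fragment. The names `atomic_node_ptr` and `swap_head` are hypothetical, chosen only to illustrate why capturing the '*' inside the macro parameter helps with incomplete types:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct node;	/* incomplete type at this point */

/* _Atomic(T) makes the operand unambiguous: this is an atomic pointer
 * to struct node -- the pointer itself is atomic, which is fine even
 * though struct node is still incomplete. */
typedef _Atomic(struct node *) atomic_node_ptr;

/* Contrast with the qualifier form, where placement decides the meaning:
 * "_Atomic struct node *p" declares a pointer to an atomic struct node
 * (invalid while the type is incomplete), whereas
 * "struct node * _Atomic p" declares an atomic pointer -- easy to mix up. */

struct node { struct node *next; };

static struct node *
swap_head(atomic_node_ptr *head, struct node *n)
{
	/* atomically publish the new head, returning the old one */
	return atomic_exchange(head, n);
}
```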
^ permalink raw reply [relevance 0%]
* [PATCH v3 3/6] eal: add rte atomic qualifier with casts
2023-08-16 19:19 3% ` [PATCH v3 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
@ 2023-08-16 19:19 2% ` Tyler Retzlaff
0 siblings, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-08-16 19:19 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Introduce __rte_atomic qualifying casts in rte_optional atomics inline
functions to prevent cascading the need to pass __rte_atomic qualified
arguments.
Warning: this is really implementation dependent and is being done
temporarily to avoid having to convert more of the libraries and tests in
DPDK in the initial series that introduces the API. The assumption that
the ABI of the types in question is ``the same'' carries a risk that is
only realized when enable_stdatomic=true.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/eal/include/generic/rte_atomic.h | 48 ++++++++++++++++++++++++------------
lib/eal/include/generic/rte_pause.h | 9 ++++---
lib/eal/x86/rte_power_intrinsics.c | 7 +++---
3 files changed, 42 insertions(+), 22 deletions(-)
diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
index 5940e7e..709bf15 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -274,7 +274,8 @@
static inline void
rte_atomic16_add(rte_atomic16_t *v, int16_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile __rte_atomic int16_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
/**
@@ -288,7 +289,8 @@
static inline void
rte_atomic16_sub(rte_atomic16_t *v, int16_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile __rte_atomic int16_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
/**
@@ -341,7 +343,8 @@
static inline int16_t
rte_atomic16_add_return(rte_atomic16_t *v, int16_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int16_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
/**
@@ -361,7 +364,8 @@
static inline int16_t
rte_atomic16_sub_return(rte_atomic16_t *v, int16_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int16_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
/**
@@ -380,7 +384,8 @@
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int16_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -400,7 +405,8 @@ static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int16_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -553,7 +559,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_add(rte_atomic32_t *v, int32_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile __rte_atomic int32_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
/**
@@ -567,7 +574,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_sub(rte_atomic32_t *v, int32_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile __rte_atomic int32_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
/**
@@ -620,7 +628,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_add_return(rte_atomic32_t *v, int32_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int32_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
/**
@@ -640,7 +649,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_sub_return(rte_atomic32_t *v, int32_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int32_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
/**
@@ -659,7 +669,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int32_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -679,7 +690,8 @@ static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int32_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -885,7 +897,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile __rte_atomic int64_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
#endif
@@ -904,7 +917,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile __rte_atomic int64_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
#endif
@@ -962,7 +976,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int64_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
#endif
@@ -986,7 +1001,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int64_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
#endif
diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
index 256309e..b7b059f 100644
--- a/lib/eal/include/generic/rte_pause.h
+++ b/lib/eal/include/generic/rte_pause.h
@@ -81,7 +81,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile __rte_atomic uint16_t *)addr, memorder)
+ != expected)
rte_pause();
}
@@ -91,7 +92,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile __rte_atomic uint32_t *)addr, memorder)
+ != expected)
rte_pause();
}
@@ -101,7 +103,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile __rte_atomic uint64_t *)addr, memorder)
+ != expected)
rte_pause();
}
diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index cf70e33..fb8539f 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -23,9 +23,10 @@
uint64_t val;
/* trigger a write but don't change the value */
- val = rte_atomic_load_explicit((volatile uint64_t *)addr, rte_memory_order_relaxed);
- rte_atomic_compare_exchange_strong_explicit((volatile uint64_t *)addr, &val, val,
- rte_memory_order_relaxed, rte_memory_order_relaxed);
+ val = rte_atomic_load_explicit((volatile __rte_atomic uint64_t *)addr,
+ rte_memory_order_relaxed);
+ rte_atomic_compare_exchange_strong_explicit((volatile __rte_atomic uint64_t *)addr,
+ &val, val, rte_memory_order_relaxed, rte_memory_order_relaxed);
}
static bool wait_supported;
--
1.8.3.1
^ permalink raw reply [relevance 2%]
* [PATCH v3 0/6] RFC optional rte optional stdatomics API
2023-08-11 1:31 4% [PATCH 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
2023-08-11 1:31 2% ` [PATCH 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
2023-08-11 17:32 3% ` [PATCH v2 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
@ 2023-08-16 19:19 3% ` Tyler Retzlaff
2023-08-16 19:19 2% ` [PATCH v3 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
2023-08-16 21:38 3% ` [PATCH v4 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (2 subsequent siblings)
5 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-08-16 19:19 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
This series introduces API additions prefixed in the rte namespace that allow
the optional use of stdatomics.h from C11 using enable_stdatomics=true; for
targets where enable_stdatomics=false, no functional change is intended.
Be aware this does not contain all changes to use stdatomics across the DPDK
tree; it only introduces the minimum to allow the option to be used, which is
a prerequisite for a clean CI (probably using clang) that can be run
with enable_stdatomics=true.
It is planned that subsequent series will be introduced per lib/driver as
appropriate to further enable stdatomics use when enable_stdatomics=true.
Notes:
* Additional libraries beyond EAL make atomics use visible across the
API/ABI surface; they will be converted in the subsequent series.
* The eal: add rte atomic qualifier with casts patch needs some discussion
as to whether or not the legacy rte_atomic APIs should be converted to
work with enable_stdatomic=true. Right now some implementation-dependent
casts are used to prevent cascading / having to convert too much in
the initial series.
* Windows will obviously need complete conversion of libraries including
atomics that are not crossing API/ABI boundaries. Those conversions will
be introduced in separate series alongside the existing MSVC series.
Please keep in mind we would like to prioritize the review / acceptance of
this patch since it needs to be completed in the 23.11 merge window.
Thank you all for the discussion that led to the formation of this series.
v3:
* Remove comments from APIs mentioning the mapping to C++ memory model
memory orders
* Introduce and use new macro RTE_ATOMIC(type) to be used in contexts
where _Atomic is used as a type specifier to declare variables. The
macro makes it clearer what atomic type is being specified:
with _Atomic(T *) vs _Atomic(T) it is easier to see that the
former is an atomic pointer type and the latter is an atomic
type. It also has the benefit of (in the future) being syntactically
interoperable with C++23
note: Morten, I have retained your 'Reviewed-by' tags; if you disagree
given the changes in the above version please indicate as such, but
I believe the changes are in the spirit of the feedback you provided
v2:
* Wrap meson_options.txt option description to newline and indent to
be consistent with other options.
* Provide separate typedef of rte_memory_order for enable_stdatomic=true
VS enable_stdatomic=false instead of a single typedef to int
note: a slight tweak to reviewer feedback: I've chosen to use a typedef
for both enable_stdatomic={true,false} (it just seemed more consistent)
* Bring in assert.h and use static_assert macro instead of _Static_assert
keyword to better interoperate with c/c++
* Directly include rte_stdatomic.h in the other places where it is consumed
instead of hacking it globally into rte_config.h
* Provide and use __rte_atomic_thread_fence to allow conditional expansion
within the body of existing rte_atomic_thread_fence inline function to
maintain per-arch optimizations when enable_stdatomic=false
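The qualifying-cast approach flagged in the notes above can be sketched as follows. `legacy_counter` and `legacy_counter_add` are illustrative names, and the sketch deliberately relies on the implementation-dependent assumption that the plain and _Atomic types share a representation — exactly the risk the notes call out:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Legacy-style counter with a plain (non-atomic) member, modelled on
 * rte_atomic16_t; the struct and function names here are illustrative. */
typedef struct { volatile int16_t cnt; } legacy_counter;

/* The stdatomic-based API expects an _Atomic-qualified pointer, so the
 * legacy wrapper casts its plain member to one.  This assumes int16_t
 * and _Atomic int16_t have the same size and representation, which is
 * implementation dependent and only a concern when enable_stdatomic=true. */
static inline void
legacy_counter_add(legacy_counter *v, int16_t inc)
{
	atomic_fetch_add_explicit((volatile _Atomic int16_t *)&v->cnt, inc,
	    memory_order_seq_cst);
}
```

The cast keeps the legacy rte_atomic16-style API unchanged for callers while its body forwards to the new atomics, avoiding a tree-wide conversion in the initial series.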
Tyler Retzlaff (6):
eal: provide rte stdatomics optional atomics API
eal: adapt EAL to present rte optional atomics API
eal: add rte atomic qualifier with casts
distributor: adapt for EAL optional atomics API changes
bpf: adapt for EAL optional atomics API changes
devtools: forbid new direct use of GCC atomic builtins
app/test/test_mcslock.c | 6 +-
config/meson.build | 1 +
devtools/checkpatches.sh | 6 +-
lib/bpf/bpf_pkt.c | 6 +-
lib/distributor/distributor_private.h | 2 +-
lib/distributor/rte_distributor_single.c | 44 ++++----
lib/eal/arm/include/rte_atomic_32.h | 4 +-
lib/eal/arm/include/rte_atomic_64.h | 36 +++---
lib/eal/arm/include/rte_pause_64.h | 26 ++---
lib/eal/arm/rte_power_intrinsics.c | 8 +-
lib/eal/common/eal_common_trace.c | 16 +--
lib/eal/include/generic/rte_atomic.h | 67 ++++++-----
lib/eal/include/generic/rte_pause.h | 50 ++++-----
lib/eal/include/generic/rte_rwlock.h | 48 ++++----
lib/eal/include/generic/rte_spinlock.h | 20 ++--
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_mcslock.h | 51 ++++-----
lib/eal/include/rte_pflock.h | 25 +++--
lib/eal/include/rte_seqcount.h | 19 ++--
lib/eal/include/rte_stdatomic.h | 184 +++++++++++++++++++++++++++++++
lib/eal/include/rte_ticketlock.h | 43 ++++----
lib/eal/include/rte_trace_point.h | 5 +-
lib/eal/loongarch/include/rte_atomic.h | 4 +-
lib/eal/ppc/include/rte_atomic.h | 54 ++++-----
lib/eal/riscv/include/rte_atomic.h | 4 +-
lib/eal/x86/include/rte_atomic.h | 8 +-
lib/eal/x86/include/rte_spinlock.h | 2 +-
lib/eal/x86/rte_power_intrinsics.c | 7 +-
meson_options.txt | 2 +
29 files changed, 483 insertions(+), 266 deletions(-)
create mode 100644 lib/eal/include/rte_stdatomic.h
--
1.8.3.1
^ permalink raw reply [relevance 3%]
* Re: C11 atomics adoption blocked
2023-08-14 15:13 3% ` Morten Brørup
@ 2023-08-16 17:25 0% ` Tyler Retzlaff
2023-08-16 20:30 0% ` Morten Brørup
0 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-08-16 17:25 UTC (permalink / raw)
To: Morten Brørup
Cc: Thomas Monjalon, Bruce Richardson, dev, techboard,
david.marchand, Honnappa.Nagarahalli
On Mon, Aug 14, 2023 at 05:13:04PM +0200, Morten Brørup wrote:
> > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > Sent: Monday, 14 August 2023 15.46
> >
> > mercredi 9 août 2023, Morten Brørup:
> > > > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > > > Sent: Tuesday, 8 August 2023 22.50
> > > >
> > > > On Tue, Aug 08, 2023 at 10:22:09PM +0200, Morten Brørup wrote:
> > > > > > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > > > > > Sent: Tuesday, 8 August 2023 21.20
> > > > > >
> > > > > > On Tue, Aug 08, 2023 at 07:23:41PM +0100, Bruce Richardson
> > wrote:
> > > > > > > On Tue, Aug 08, 2023 at 10:53:03AM -0700, Tyler Retzlaff
> > wrote:
> > > > > > > > Hi folks,
> > > > > > > >
> > > > > > > > Moving this discussion to the dev mailing list for broader
> > > > comment.
> > > > > > > >
> > > > > > > > Unfortunately, we've hit a roadblock with integrating C11
> > > > atomics
> > > > > > > > for DPDK. The main issue is that GNU C++ prior to -
> > std=c++23
> > > > > > explicitly
> > > > > > > > cannot be integrated with C11 stdatomic.h. Basically, you
> > can't
> > > > > > include
> > > > > > > > the header and you can't use `_Atomic' type specifier to
> > declare
> > > > > > atomic
> > > > > > > > types. This is not a problem with LLVM or MSVC as they both
> > > > allow
> > > > > > > > integration with C11 stdatomic.h, but going forward with C11
> > > > atomics
> > > > > > > > would break using DPDK in C++ programs when building with
> > GNU
> > > > g++.
> > > > > > > >
> > > > > > > > Essentially you cannot compile the following with g++.
> > > > > > > >
> > > > > > > > #include <stdatomic.h>
> > > > > > > >
> > > > > > > > int main(int argc, char *argv[]) { return 0; }
> > > > > > > >
> > > > > > > > In file included from atomic.cpp:1:
> > > > > > > > /usr/lib/gcc/x86_64-pc-cygwin/11/include/stdatomic.h:40:9:
> > > > error:
> > > > > > > > ‘_Atomic’ does not name a type
> > > > > > > > 40 | typedef _Atomic _Bool atomic_bool;
> > > > > > > >
> > > > > > > > ... more errors of same ...
> > > > > > > >
> > > > > > > > It's also acknowledged as something known and won't fix by
> > GNU
> > > > g++
> > > > > > > > maintainers.
> > > > > > > >
> > > > > > > > https://gcc.gnu.org/bugzilla/show_bug.cgi?id=60932
> > > > > > > >
> > > > > > > > Given the timeframe I would like to propose the minimally
> > > > > > > > invasive, lowest-risk solution as follows.
> > > > > > > >
> > > > > > > > 1. Adopt stdatomic.h for all Windows targets, leave all
> > > > > > > > Linux/BSD targets using GCC builtin C++11 memory model atomics.
> > > > > > > >
> > > > > > > > 2. Introduce a macro that allows the _Atomic type specifier to
> > > > > > > > be applied to function parameters, structure field types and
> > > > > > > > variable declarations.
> > > > > > > >
> > > > > > > > * The macro would expand empty for Linux/BSD targets.
> > > > > > > > * The macro would expand to the C11 _Atomic keyword for
> > > > > > > > Windows targets.
> > > > > > > >
> > > > > > > > 3. Introduce basic macros that allow __atomic_xxx for
> > > > > > > > normalized use internal to DPDK.
> > > > > > > >
> > > > > > > > * The macros would not be defined for Linux/BSD targets.
> > > > > > > > * The macros would expand __atomic_xxx to the corresponding
> > > > > > > > stdatomic.h atomic_xxx operations for Windows targets.
> > > > >
> > > > > Regarding naming of these macros (suggested in 2. and 3.), they
> > > > > should probably bear the rte_ prefix instead of overlapping existing
> > > > > names, so applications can also use them directly.
> > > > >
> > > > > E.g.:
> > > > > #define rte_atomic for _Atomic or nothing,
> > > > > #define rte_atomic_fetch_add() for atomic_fetch_add() or
> > > > > __atomic_fetch_add(), and
> > > > > #define RTE_MEMORY_ORDER_SEQ_CST for memory_order_seq_cst or
> > > > > __ATOMIC_SEQ_CST.
> > > > >
> > > > > Maybe that is what you meant already. I'm not sure of the scope and
> > > > > details of your suggestion here.
> > > >
> > > > I'm shy to do anything in the rte_ namespace because I don't want to
> > > > formalize it as an API.
> > > >
> > > > I was envisioning the following.
> > > >
> > > > Internally, DPDK code just uses __atomic_fetch_add directly; the macros
> > > > are provided for Windows targets to expand to __atomic_fetch_add.
> > > >
> > > > Externally, DPDK applications that don't care about being portable may
> > > > use __atomic_fetch_add (BSD/Linux) or atomic_fetch_add (Windows)
> > > > directly.
> > > >
> > > > Externally, DPDK applications that care to be portable may do what is
> > > > done internally and <<use>> __atomic_fetch_add directly. By including
> > > > say rte_stdatomic.h, Windows indirectly gets the macros expanded to
> > > > atomic_fetch_add, and for BSD/Linux it's a no-op include.
> > > >
> > > > Basically I'm placing a little ugly into the Windows build, and in
> > > > trade we don't end up with a bunch of rte_ APIs that were strongly
> > > > objected to previously.
> > > >
> > > > It's a compromise.
> > >
> > > OK, we probably need to offer a public header file to wrap the atomics,
> > > using either names prefixed with rte_ or names similar to the gcc
> > > builtin atomics.
> > >
> > > I guess the objections were based on the assumption that we were
> > > switching to C11 atomics with DPDK 23.11, so the rte_ prefixed atomic
> > > APIs would be very short lived (DPDK 23.07 to 23.11 only). But with this
> > > new information about GNU C++ incompatibility, that seems not to be the
> > > case, so the naming discussion can be reopened.
> > >
> > > If we don't introduce such a wrapper header, all portable code needs to
> > > surround the use of atomics with #ifdef USE_STDATOMIC_H.
> > >
> > > BTW: Can the compilers that understand both builtin atomics and C11
> > > stdatomic.h handle code with #define __atomic_fetch_add
> > > atomic_fetch_add and #define __ATOMIC_SEQ_CST memory_order_seq_cst? If
> > > not, then we need to use rte_ prefixed atomics.
> > >
> > > And what about C++ atomics... Do we want (or need?) a third variant
> > > using C++ atomics, e.g. "atomic<int> x;" instead of "_Atomic int x;"?
> > > (I hope not!) For reference, the "atomic_int" type is "_Atomic int" in
> > > C, but "std::atomic<int>" in C++.
> > >
> > > C++23 provides the C11 compatibility macro "_Atomic(T)", which means
> > > "_Atomic T" in C and "std::atomic<T>" in C++. Perhaps we can somewhat
> > > rely on this, and update our coding standards to require using e.g.
> > > "_Atomic(int)" for atomic types, and disallow using "_Atomic int".
> >
> > You mean the syntax _Atomic(T) is working well in both C and C++?
>
> This syntax is API compatible across C11 and C++23, so it would work with (C11 and C++23) applications building DPDK from scratch.
>
> But it is only "recommended" to be ABI compatible for compilers [1], so DPDK in distros cannot rely on it.
>
> [1]: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p0943r6.html
>
> It would be future-proofing for the benefit of C++23 based applications... I was mainly mentioning it for completeness, now that we are switching to a new standard for atomics.
>
> Realistically, considering that 1. such a coding standard (requiring "_Atomic(T)" instead of "_Atomic T") would only be relevant for a 2023 standard, and 2. that we are now upgrading to a standard from 2011, we would probably have to wait for a very distant future (12 years?) before C++ applications can reap the benefits of such a coding standard.
>
I just want to give feedback on this coding convention topic here (in
relation to the RFC patch series thread): I think the convention of using
the macro should be adopted now. The main reason is that it is far easier
to tell whether an atomic type is a plain type or a pointer type when the
'*' is captured as part of the macro parameter.
Please see the RFC patch thread for the details of how this was
beneficial for rte_mcslock.h and how the placement of the _Atomic
keyword matters when applied to pointer types of incomplete types.
* [PATCH v5 2/2] net/bonding: replace master/slave to main/member
@ 2023-08-16 6:27 1% ` Chaoyong He
2023-08-17 2:36 0% ` lihuisong (C)
0 siblings, 1 reply; 200+ results
From: Chaoyong He @ 2023-08-16 6:27 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Long Wu, James Hershaw, Chaoyong He
From: Long Wu <long.wu@corigine.com>
This patch replaces the usage of the words 'master/slave' with the more
appropriate words 'main/member' in the bonding PMD as well as in its docs
and examples. The test app and testpmd were also modified to use the
new wording.
The bonding PMD's public APIs were renamed accordingly:
rte_eth_bond_8023ad_slave_info is now called
rte_eth_bond_8023ad_member_info,
rte_eth_bond_active_slaves_get is now called
rte_eth_bond_active_members_get,
rte_eth_bond_slave_add is now called
rte_eth_bond_member_add,
rte_eth_bond_slave_remove is now called
rte_eth_bond_member_remove,
rte_eth_bond_slaves_get is now called
rte_eth_bond_members_get.
The data structure ``struct rte_eth_bond_8023ad_slave_info`` was
renamed to ``struct rte_eth_bond_8023ad_member_info``.
Signed-off-by: Long Wu <long.wu@corigine.com>
Reviewed-by: James Hershaw <james.hershaw@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
Acked-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
app/test-pmd/testpmd.c | 113 +-
app/test-pmd/testpmd.h | 8 +-
app/test/test_link_bonding.c | 2792 +++++++++--------
app/test/test_link_bonding_mode4.c | 588 ++--
| 166 +-
doc/guides/howto/lm_bond_virtio_sriov.rst | 24 +-
doc/guides/nics/bnxt.rst | 4 +-
doc/guides/prog_guide/img/bond-mode-1.svg | 2 +-
.../link_bonding_poll_mode_drv_lib.rst | 230 +-
doc/guides/rel_notes/deprecation.rst | 16 -
doc/guides/rel_notes/release_23_11.rst | 17 +
drivers/net/bonding/bonding_testpmd.c | 178 +-
drivers/net/bonding/eth_bond_8023ad_private.h | 40 +-
drivers/net/bonding/eth_bond_private.h | 108 +-
drivers/net/bonding/rte_eth_bond.h | 96 +-
drivers/net/bonding/rte_eth_bond_8023ad.c | 372 +--
drivers/net/bonding/rte_eth_bond_8023ad.h | 67 +-
drivers/net/bonding/rte_eth_bond_alb.c | 44 +-
drivers/net/bonding/rte_eth_bond_alb.h | 20 +-
drivers/net/bonding/rte_eth_bond_api.c | 482 +--
drivers/net/bonding/rte_eth_bond_args.c | 32 +-
drivers/net/bonding/rte_eth_bond_flow.c | 54 +-
drivers/net/bonding/rte_eth_bond_pmd.c | 1384 ++++----
drivers/net/bonding/version.map | 15 +-
examples/bond/main.c | 40 +-
25 files changed, 3486 insertions(+), 3406 deletions(-)
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 938ca035d4..d41eb2b6f1 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -602,27 +602,27 @@ eth_dev_configure_mp(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
}
static int
-change_bonding_slave_port_status(portid_t bond_pid, bool is_stop)
+change_bonding_member_port_status(portid_t bond_pid, bool is_stop)
{
#ifdef RTE_NET_BOND
- portid_t slave_pids[RTE_MAX_ETHPORTS];
+ portid_t member_pids[RTE_MAX_ETHPORTS];
struct rte_port *port;
- int num_slaves;
- portid_t slave_pid;
+ int num_members;
+ portid_t member_pid;
int i;
- num_slaves = rte_eth_bond_slaves_get(bond_pid, slave_pids,
+ num_members = rte_eth_bond_members_get(bond_pid, member_pids,
RTE_MAX_ETHPORTS);
- if (num_slaves < 0) {
- fprintf(stderr, "Failed to get slave list for port = %u\n",
+ if (num_members < 0) {
+ fprintf(stderr, "Failed to get member list for port = %u\n",
bond_pid);
- return num_slaves;
+ return num_members;
}
- for (i = 0; i < num_slaves; i++) {
- slave_pid = slave_pids[i];
- port = &ports[slave_pid];
+ for (i = 0; i < num_members; i++) {
+ member_pid = member_pids[i];
+ port = &ports[member_pid];
port->port_status =
is_stop ? RTE_PORT_STOPPED : RTE_PORT_STARTED;
}
@@ -646,12 +646,12 @@ eth_dev_start_mp(uint16_t port_id)
struct rte_port *port = &ports[port_id];
/*
- * Starting a bonded port also starts all slaves under the bonded
+ * Starting a bonded port also starts all members under the bonded
* device. So if this port is bond device, we need to modify the
- * port status of these slaves.
+ * port status of these members.
*/
if (port->bond_flag == 1)
- return change_bonding_slave_port_status(port_id, false);
+ return change_bonding_member_port_status(port_id, false);
}
return 0;
@@ -670,12 +670,12 @@ eth_dev_stop_mp(uint16_t port_id)
struct rte_port *port = &ports[port_id];
/*
- * Stopping a bonded port also stops all slaves under the bonded
+ * Stopping a bonded port also stops all members under the bonded
* device. So if this port is bond device, we need to modify the
- * port status of these slaves.
+ * port status of these members.
*/
if (port->bond_flag == 1)
- return change_bonding_slave_port_status(port_id, true);
+ return change_bonding_member_port_status(port_id, true);
}
return 0;
@@ -2624,7 +2624,7 @@ all_ports_started(void)
port = &ports[pi];
/* Check if there is a port which is not started */
if ((port->port_status != RTE_PORT_STARTED) &&
- (port->slave_flag == 0))
+ (port->member_flag == 0))
return 0;
}
@@ -2638,7 +2638,7 @@ port_is_stopped(portid_t port_id)
struct rte_port *port = &ports[port_id];
if ((port->port_status != RTE_PORT_STOPPED) &&
- (port->slave_flag == 0))
+ (port->member_flag == 0))
return 0;
return 1;
}
@@ -2984,8 +2984,8 @@ fill_xstats_display_info(void)
/*
* Some capabilities (like, rx_offload_capa and tx_offload_capa) of bonding
- * device in dev_info is zero when no slave is added. And its capability
- * will be updated when add a new slave device. So adding a slave device need
+ * device in dev_info is zero when no member is added. And its capability
+ * will be updated when add a new member device. So adding a member device need
* to update the port configurations of bonding device.
*/
static void
@@ -3042,7 +3042,7 @@ start_port(portid_t pid)
if (pid != pi && pid != (portid_t)RTE_PORT_ALL)
continue;
- if (port_is_bonding_slave(pi)) {
+ if (port_is_bonding_member(pi)) {
fprintf(stderr,
"Please remove port %d from bonded device.\n",
pi);
@@ -3364,7 +3364,7 @@ stop_port(portid_t pid)
continue;
}
- if (port_is_bonding_slave(pi)) {
+ if (port_is_bonding_member(pi)) {
fprintf(stderr,
"Please remove port %d from bonded device.\n",
pi);
@@ -3453,28 +3453,28 @@ flush_port_owned_resources(portid_t pi)
}
static void
-clear_bonding_slave_device(portid_t *slave_pids, uint16_t num_slaves)
+clear_bonding_member_device(portid_t *member_pids, uint16_t num_members)
{
struct rte_port *port;
- portid_t slave_pid;
+ portid_t member_pid;
uint16_t i;
- for (i = 0; i < num_slaves; i++) {
- slave_pid = slave_pids[i];
- if (port_is_started(slave_pid) == 1) {
- if (rte_eth_dev_stop(slave_pid) != 0)
+ for (i = 0; i < num_members; i++) {
+ member_pid = member_pids[i];
+ if (port_is_started(member_pid) == 1) {
+ if (rte_eth_dev_stop(member_pid) != 0)
fprintf(stderr, "rte_eth_dev_stop failed for port %u\n",
- slave_pid);
+ member_pid);
- port = &ports[slave_pid];
+ port = &ports[member_pid];
port->port_status = RTE_PORT_STOPPED;
}
- clear_port_slave_flag(slave_pid);
+ clear_port_member_flag(member_pid);
- /* Close slave device when testpmd quit or is killed. */
+ /* Close member device when testpmd quit or is killed. */
if (cl_quit == 1 || f_quit == 1)
- rte_eth_dev_close(slave_pid);
+ rte_eth_dev_close(member_pid);
}
}
@@ -3483,8 +3483,8 @@ close_port(portid_t pid)
{
portid_t pi;
struct rte_port *port;
- portid_t slave_pids[RTE_MAX_ETHPORTS];
- int num_slaves = 0;
+ portid_t member_pids[RTE_MAX_ETHPORTS];
+ int num_members = 0;
if (port_id_is_invalid(pid, ENABLED_WARN))
return;
@@ -3502,7 +3502,7 @@ close_port(portid_t pid)
continue;
}
- if (port_is_bonding_slave(pi)) {
+ if (port_is_bonding_member(pi)) {
fprintf(stderr,
"Please remove port %d from bonded device.\n",
pi);
@@ -3519,17 +3519,17 @@ close_port(portid_t pid)
flush_port_owned_resources(pi);
#ifdef RTE_NET_BOND
if (port->bond_flag == 1)
- num_slaves = rte_eth_bond_slaves_get(pi,
- slave_pids, RTE_MAX_ETHPORTS);
+ num_members = rte_eth_bond_members_get(pi,
+ member_pids, RTE_MAX_ETHPORTS);
#endif
rte_eth_dev_close(pi);
/*
- * If this port is bonded device, all slaves under the
+ * If this port is bonded device, all members under the
* device need to be removed or closed.
*/
- if (port->bond_flag == 1 && num_slaves > 0)
- clear_bonding_slave_device(slave_pids,
- num_slaves);
+ if (port->bond_flag == 1 && num_members > 0)
+ clear_bonding_member_device(member_pids,
+ num_members);
}
free_xstats_display_info(pi);
@@ -3569,7 +3569,7 @@ reset_port(portid_t pid)
continue;
}
- if (port_is_bonding_slave(pi)) {
+ if (port_is_bonding_member(pi)) {
fprintf(stderr,
"Please remove port %d from bonded device.\n",
pi);
@@ -4217,38 +4217,39 @@ init_port_config(void)
}
}
-void set_port_slave_flag(portid_t slave_pid)
+void set_port_member_flag(portid_t member_pid)
{
struct rte_port *port;
- port = &ports[slave_pid];
- port->slave_flag = 1;
+ port = &ports[member_pid];
+ port->member_flag = 1;
}
-void clear_port_slave_flag(portid_t slave_pid)
+void clear_port_member_flag(portid_t member_pid)
{
struct rte_port *port;
- port = &ports[slave_pid];
- port->slave_flag = 0;
+ port = &ports[member_pid];
+ port->member_flag = 0;
}
-uint8_t port_is_bonding_slave(portid_t slave_pid)
+uint8_t port_is_bonding_member(portid_t member_pid)
{
struct rte_port *port;
struct rte_eth_dev_info dev_info;
int ret;
- port = &ports[slave_pid];
- ret = eth_dev_info_get_print_err(slave_pid, &dev_info);
+ port = &ports[member_pid];
+ ret = eth_dev_info_get_print_err(member_pid, &dev_info);
if (ret != 0) {
TESTPMD_LOG(ERR,
"Failed to get device info for port id %d,"
- "cannot determine if the port is a bonded slave",
- slave_pid);
+ "cannot determine if the port is a bonded member",
+ member_pid);
return 0;
}
- if ((*dev_info.dev_flags & RTE_ETH_DEV_BONDING_MEMBER) || (port->slave_flag == 1))
+
+ if ((*dev_info.dev_flags & RTE_ETH_DEV_BONDING_MEMBER) || (port->member_flag == 1))
return 1;
return 0;
}
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index f1df6a8faf..888e30367f 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -337,7 +337,7 @@ struct rte_port {
uint32_t mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
queueid_t queue_nb; /**< nb. of queues for flow rules */
uint32_t queue_sz; /**< size of a queue for flow rules */
- uint8_t slave_flag : 1, /**< bonding slave port */
+ uint8_t member_flag : 1, /**< bonding member port */
bond_flag : 1, /**< port is bond device */
fwd_mac_swap : 1, /**< swap packet MAC before forward */
update_conf : 1; /**< need to update bonding device configuration */
@@ -1107,9 +1107,9 @@ void stop_packet_forwarding(void);
void dev_set_link_up(portid_t pid);
void dev_set_link_down(portid_t pid);
void init_port_config(void);
-void set_port_slave_flag(portid_t slave_pid);
-void clear_port_slave_flag(portid_t slave_pid);
-uint8_t port_is_bonding_slave(portid_t slave_pid);
+void set_port_member_flag(portid_t member_pid);
+void clear_port_member_flag(portid_t member_pid);
+uint8_t port_is_bonding_member(portid_t member_pid);
int init_port_dcb_config(portid_t pid, enum dcb_mode_enable dcb_mode,
enum rte_eth_nb_tcs num_tcs,
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 2f46e4c6ee..8dceb14ed0 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -59,13 +59,13 @@
#define INVALID_BONDING_MODE (-1)
-uint8_t slave_mac[] = {0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 };
+uint8_t member_mac[] = {0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 };
uint8_t bonded_mac[] = {0xAA, 0xFF, 0xAA, 0xFF, 0xAA, 0xFF };
struct link_bonding_unittest_params {
int16_t bonded_port_id;
- int16_t slave_port_ids[TEST_MAX_NUMBER_OF_PORTS];
- uint16_t bonded_slave_count;
+ int16_t member_port_ids[TEST_MAX_NUMBER_OF_PORTS];
+ uint16_t bonded_member_count;
uint8_t bonding_mode;
uint16_t nb_rx_q;
@@ -73,7 +73,7 @@ struct link_bonding_unittest_params {
struct rte_mempool *mbuf_pool;
- struct rte_ether_addr *default_slave_mac;
+ struct rte_ether_addr *default_member_mac;
struct rte_ether_addr *default_bonded_mac;
/* Packet Headers */
@@ -90,8 +90,8 @@ static struct rte_udp_hdr pkt_udp_hdr;
static struct link_bonding_unittest_params default_params = {
.bonded_port_id = -1,
- .slave_port_ids = { -1 },
- .bonded_slave_count = 0,
+ .member_port_ids = { -1 },
+ .bonded_member_count = 0,
.bonding_mode = BONDING_MODE_ROUND_ROBIN,
.nb_rx_q = 1,
@@ -99,7 +99,7 @@ static struct link_bonding_unittest_params default_params = {
.mbuf_pool = NULL,
- .default_slave_mac = (struct rte_ether_addr *)slave_mac,
+ .default_member_mac = (struct rte_ether_addr *)member_mac,
.default_bonded_mac = (struct rte_ether_addr *)bonded_mac,
.pkt_eth_hdr = NULL,
@@ -202,8 +202,8 @@ configure_ethdev(uint16_t port_id, uint8_t start, uint8_t en_isr)
return 0;
}
-static int slaves_initialized;
-static int mac_slaves_initialized;
+static int members_initialized;
+static int mac_members_initialized;
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cvar = PTHREAD_COND_INITIALIZER;
@@ -213,7 +213,7 @@ static int
test_setup(void)
{
int i, nb_mbuf_per_pool;
- struct rte_ether_addr *mac_addr = (struct rte_ether_addr *)slave_mac;
+ struct rte_ether_addr *mac_addr = (struct rte_ether_addr *)member_mac;
/* Allocate ethernet packet header with space for VLAN header */
if (test_params->pkt_eth_hdr == NULL) {
@@ -235,7 +235,7 @@ test_setup(void)
}
/* Create / Initialize virtual eth devs */
- if (!slaves_initialized) {
+ if (!members_initialized) {
for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++) {
char pmd_name[RTE_ETH_NAME_MAX_LEN];
@@ -243,16 +243,16 @@ test_setup(void)
snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_%d", i);
- test_params->slave_port_ids[i] = virtual_ethdev_create(pmd_name,
+ test_params->member_port_ids[i] = virtual_ethdev_create(pmd_name,
mac_addr, rte_socket_id(), 1);
- TEST_ASSERT(test_params->slave_port_ids[i] >= 0,
+ TEST_ASSERT(test_params->member_port_ids[i] >= 0,
"Failed to create virtual virtual ethdev %s", pmd_name);
TEST_ASSERT_SUCCESS(configure_ethdev(
- test_params->slave_port_ids[i], 1, 0),
+ test_params->member_port_ids[i], 1, 0),
"Failed to configure virtual ethdev %s", pmd_name);
}
- slaves_initialized = 1;
+ members_initialized = 1;
}
return 0;
@@ -261,9 +261,9 @@ test_setup(void)
static int
test_create_bonded_device(void)
{
- int current_slave_count;
+ int current_member_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
/* Don't try to recreate bonded device if re-running test suite*/
if (test_params->bonded_port_id == -1) {
@@ -281,19 +281,19 @@ test_create_bonded_device(void)
test_params->bonding_mode), "Failed to set ethdev %d to mode %d",
test_params->bonded_port_id, test_params->bonding_mode);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 0,
- "Number of slaves %d is great than expected %d.",
- current_slave_count, 0);
+ TEST_ASSERT_EQUAL(current_member_count, 0,
+ "Number of members %d is great than expected %d.",
+ current_member_count, 0);
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
+ current_member_count = rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 0,
- "Number of active slaves %d is great than expected %d.",
- current_slave_count, 0);
+ TEST_ASSERT_EQUAL(current_member_count, 0,
+ "Number of active members %d is great than expected %d.",
+ current_member_count, 0);
return 0;
}
@@ -329,46 +329,46 @@ test_create_bonded_device_with_invalid_params(void)
}
static int
-test_add_slave_to_bonded_device(void)
+test_add_member_to_bonded_device(void)
{
- int current_slave_count;
+ int current_member_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
- test_params->slave_port_ids[test_params->bonded_slave_count]),
- "Failed to add slave (%d) to bonded port (%d).",
- test_params->slave_port_ids[test_params->bonded_slave_count],
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params->bonded_port_id,
+ test_params->member_port_ids[test_params->bonded_member_count]),
+ "Failed to add member (%d) to bonded port (%d).",
+ test_params->member_port_ids[test_params->bonded_member_count],
test_params->bonded_port_id);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count + 1,
- "Number of slaves (%d) is greater than expected (%d).",
- current_slave_count, test_params->bonded_slave_count + 1);
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count + 1,
+ "Number of members (%d) is greater than expected (%d).",
+ current_member_count, test_params->bonded_member_count + 1);
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 0,
- "Number of active slaves (%d) is not as expected (%d).\n",
- current_slave_count, 0);
+ current_member_count = rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, 0,
+ "Number of active members (%d) is not as expected (%d).\n",
+ current_member_count, 0);
- test_params->bonded_slave_count++;
+ test_params->bonded_member_count++;
return 0;
}
static int
-test_add_slave_to_invalid_bonded_device(void)
+test_add_member_to_invalid_bonded_device(void)
{
/* Invalid port ID */
- TEST_ASSERT_FAIL(rte_eth_bond_slave_add(test_params->bonded_port_id + 5,
- test_params->slave_port_ids[test_params->bonded_slave_count]),
+ TEST_ASSERT_FAIL(rte_eth_bond_member_add(test_params->bonded_port_id + 5,
+ test_params->member_port_ids[test_params->bonded_member_count]),
"Expected call to failed as invalid port specified.");
/* Non bonded device */
- TEST_ASSERT_FAIL(rte_eth_bond_slave_add(test_params->slave_port_ids[0],
- test_params->slave_port_ids[test_params->bonded_slave_count]),
+ TEST_ASSERT_FAIL(rte_eth_bond_member_add(test_params->member_port_ids[0],
+ test_params->member_port_ids[test_params->bonded_member_count]),
"Expected call to failed as invalid port specified.");
return 0;
@@ -376,63 +376,63 @@ test_add_slave_to_invalid_bonded_device(void)
static int
-test_remove_slave_from_bonded_device(void)
+test_remove_member_from_bonded_device(void)
{
- int current_slave_count;
+ int current_member_count;
struct rte_ether_addr read_mac_addr, *mac_addr;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(test_params->bonded_port_id,
- test_params->slave_port_ids[test_params->bonded_slave_count-1]),
- "Failed to remove slave %d from bonded port (%d).",
- test_params->slave_port_ids[test_params->bonded_slave_count-1],
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(test_params->bonded_port_id,
+ test_params->member_port_ids[test_params->bonded_member_count-1]),
+ "Failed to remove member %d from bonded port (%d).",
+ test_params->member_port_ids[test_params->bonded_member_count-1],
test_params->bonded_port_id);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count - 1,
- "Number of slaves (%d) is great than expected (%d).\n",
- current_slave_count, test_params->bonded_slave_count - 1);
+ TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count - 1,
+ "Number of members (%d) is great than expected (%d).\n",
+ current_member_count, test_params->bonded_member_count - 1);
- mac_addr = (struct rte_ether_addr *)slave_mac;
+ mac_addr = (struct rte_ether_addr *)member_mac;
mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] =
- test_params->bonded_slave_count-1;
+ test_params->bonded_member_count-1;
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(
- test_params->slave_port_ids[test_params->bonded_slave_count-1],
+ test_params->member_port_ids[test_params->bonded_member_count-1],
&read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[test_params->bonded_slave_count-1]);
+ test_params->member_port_ids[test_params->bonded_member_count-1]);
TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr, sizeof(read_mac_addr)),
"bonded port mac address not set to that of primary port\n");
rte_eth_stats_reset(
- test_params->slave_port_ids[test_params->bonded_slave_count-1]);
+ test_params->member_port_ids[test_params->bonded_member_count-1]);
virtual_ethdev_simulate_link_status_interrupt(test_params->bonded_port_id,
0);
- test_params->bonded_slave_count--;
+ test_params->bonded_member_count--;
return 0;
}
static int
-test_remove_slave_from_invalid_bonded_device(void)
+test_remove_member_from_invalid_bonded_device(void)
{
/* Invalid port ID */
- TEST_ASSERT_FAIL(rte_eth_bond_slave_remove(
+ TEST_ASSERT_FAIL(rte_eth_bond_member_remove(
test_params->bonded_port_id + 5,
- test_params->slave_port_ids[test_params->bonded_slave_count - 1]),
+ test_params->member_port_ids[test_params->bonded_member_count - 1]),
"Expected call to failed as invalid port specified.");
/* Non bonded device */
- TEST_ASSERT_FAIL(rte_eth_bond_slave_remove(
- test_params->slave_port_ids[0],
- test_params->slave_port_ids[test_params->bonded_slave_count - 1]),
+ TEST_ASSERT_FAIL(rte_eth_bond_member_remove(
+ test_params->member_port_ids[0],
+ test_params->member_port_ids[test_params->bonded_member_count - 1]),
"Expected call to failed as invalid port specified.");
return 0;
@@ -441,19 +441,19 @@ test_remove_slave_from_invalid_bonded_device(void)
static int bonded_id = 2;
static int
-test_add_already_bonded_slave_to_bonded_device(void)
+test_add_already_bonded_member_to_bonded_device(void)
{
- int port_id, current_slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int port_id, current_member_count;
+ uint16_t members[RTE_MAX_ETHPORTS];
char pmd_name[RTE_ETH_NAME_MAX_LEN];
- test_add_slave_to_bonded_device();
+ test_add_member_to_bonded_device();
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 1,
- "Number of slaves (%d) is not that expected (%d).",
- current_slave_count, 1);
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, 1,
+ "Number of members (%d) is not that expected (%d).",
+ current_member_count, 1);
snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN, "%s_%d", BONDED_DEV_NAME, ++bonded_id);
@@ -461,93 +461,93 @@ test_add_already_bonded_slave_to_bonded_device(void)
rte_socket_id());
TEST_ASSERT(port_id >= 0, "Failed to create bonded device.");
- TEST_ASSERT(rte_eth_bond_slave_add(port_id,
- test_params->slave_port_ids[test_params->bonded_slave_count - 1])
+ TEST_ASSERT(rte_eth_bond_member_add(port_id,
+ test_params->member_port_ids[test_params->bonded_member_count - 1])
< 0,
- "Added slave (%d) to bonded port (%d) unexpectedly.",
- test_params->slave_port_ids[test_params->bonded_slave_count-1],
+ "Added member (%d) to bonded port (%d) unexpectedly.",
+ test_params->member_port_ids[test_params->bonded_member_count-1],
port_id);
- return test_remove_slave_from_bonded_device();
+ return test_remove_member_from_bonded_device();
}
static int
-test_get_slaves_from_bonded_device(void)
+test_get_members_from_bonded_device(void)
{
- int current_slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int current_member_count;
+ uint16_t members[RTE_MAX_ETHPORTS];
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device");
+ TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+ "Failed to add member to bonded device");
/* Invalid port id */
- current_slave_count = rte_eth_bond_slaves_get(INVALID_PORT_ID, slaves,
+ current_member_count = rte_eth_bond_members_get(INVALID_PORT_ID, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
+ TEST_ASSERT(current_member_count < 0,
"Invalid port id unexpectedly succeeded");
- current_slave_count = rte_eth_bond_active_slaves_get(INVALID_PORT_ID,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
+ current_member_count = rte_eth_bond_active_members_get(INVALID_PORT_ID,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT(current_member_count < 0,
"Invalid port id unexpectedly succeeded");
- /* Invalid slaves pointer */
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
+ /* Invalid members pointer */
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
NULL, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
- "Invalid slave array unexpectedly succeeded");
+ TEST_ASSERT(current_member_count < 0,
+ "Invalid member array unexpectedly succeeded");
- current_slave_count = rte_eth_bond_active_slaves_get(
+ current_member_count = rte_eth_bond_active_members_get(
test_params->bonded_port_id, NULL, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
- "Invalid slave array unexpectedly succeeded");
+ TEST_ASSERT(current_member_count < 0,
+ "Invalid member array unexpectedly succeeded");
/* non bonded device*/
- current_slave_count = rte_eth_bond_slaves_get(
- test_params->slave_port_ids[0], NULL, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
+ current_member_count = rte_eth_bond_members_get(
+ test_params->member_port_ids[0], NULL, RTE_MAX_ETHPORTS);
+ TEST_ASSERT(current_member_count < 0,
"Invalid port id unexpectedly succeeded");
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->slave_port_ids[0], NULL, RTE_MAX_ETHPORTS);
- TEST_ASSERT(current_slave_count < 0,
+ current_member_count = rte_eth_bond_active_members_get(
+ test_params->member_port_ids[0], NULL, RTE_MAX_ETHPORTS);
+ TEST_ASSERT(current_member_count < 0,
"Invalid port id unexpectedly succeeded");
- TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
- "Failed to remove slaves from bonded device");
+ TEST_ASSERT_SUCCESS(test_remove_member_from_bonded_device(),
+ "Failed to remove members from bonded device");
return 0;
}
static int
-test_add_remove_multiple_slaves_to_from_bonded_device(void)
+test_add_remove_multiple_members_to_from_bonded_device(void)
{
int i;
for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++)
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device");
+ TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+ "Failed to add member to bonded device");
for (i = 0; i < TEST_MAX_NUMBER_OF_PORTS; i++)
- TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
- "Failed to remove slaves from bonded device");
+ TEST_ASSERT_SUCCESS(test_remove_member_from_bonded_device(),
+ "Failed to remove members from bonded device");
return 0;
}
static void
-enable_bonded_slaves(void)
+enable_bonded_members(void)
{
int i;
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- virtual_ethdev_tx_burst_fn_set_success(test_params->slave_port_ids[i],
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ virtual_ethdev_tx_burst_fn_set_success(test_params->member_port_ids[i],
1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 1);
+ test_params->member_port_ids[i], 1);
}
}
@@ -556,34 +556,36 @@ test_start_bonded_device(void)
{
struct rte_eth_link link_status;
- int current_slave_count, current_bonding_mode, primary_port;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int current_member_count, current_bonding_mode, primary_port;
+ uint16_t members[RTE_MAX_ETHPORTS];
int retval;
- /* Add slave to bonded device*/
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device");
+ /* Add member to bonded device*/
+ TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+ "Failed to add member to bonded device");
TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params->bonded_port_id),
"Failed to start bonded pmd eth device %d.",
test_params->bonded_port_id);
- /* Change link status of virtual pmd so it will be added to the active
- * slave list of the bonded device*/
+ /*
+ * Change link status of virtual pmd so it will be added to the active
+ * member list of the bonded device.
+ */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[test_params->bonded_slave_count-1], 1);
+ test_params->member_port_ids[test_params->bonded_member_count-1], 1);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
- "Number of slaves (%d) is not expected value (%d).",
- current_slave_count, test_params->bonded_slave_count);
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count,
+ "Number of members (%d) is not expected value (%d).",
+ current_member_count, test_params->bonded_member_count);
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
- "Number of active slaves (%d) is not expected value (%d).",
- current_slave_count, test_params->bonded_slave_count);
+ current_member_count = rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count,
+ "Number of active members (%d) is not expected value (%d).",
+ current_member_count, test_params->bonded_member_count);
current_bonding_mode = rte_eth_bond_mode_get(test_params->bonded_port_id);
TEST_ASSERT_EQUAL(current_bonding_mode, test_params->bonding_mode,
@@ -591,9 +593,9 @@ test_start_bonded_device(void)
current_bonding_mode, test_params->bonding_mode);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+ TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[0],
"Primary port (%d) is not expected value (%d).",
- primary_port, test_params->slave_port_ids[0]);
+ primary_port, test_params->member_port_ids[0]);
retval = rte_eth_link_get(test_params->bonded_port_id, &link_status);
TEST_ASSERT(retval >= 0,
@@ -609,8 +611,8 @@ test_start_bonded_device(void)
static int
test_stop_bonded_device(void)
{
- int current_slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int current_member_count;
+ uint16_t members[RTE_MAX_ETHPORTS];
struct rte_eth_link link_status;
int retval;
@@ -627,29 +629,29 @@ test_stop_bonded_device(void)
"Bonded port (%d) status (%d) is not expected value (%d).",
test_params->bonded_port_id, link_status.link_status, 0);
- current_slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, test_params->bonded_slave_count,
- "Number of slaves (%d) is not expected value (%d).",
- current_slave_count, test_params->bonded_slave_count);
+ current_member_count = rte_eth_bond_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, test_params->bonded_member_count,
+ "Number of members (%d) is not expected value (%d).",
+ current_member_count, test_params->bonded_member_count);
- current_slave_count = rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(current_slave_count, 0,
- "Number of active slaves (%d) is not expected value (%d).",
- current_slave_count, 0);
+ current_member_count = rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(current_member_count, 0,
+ "Number of active members (%d) is not expected value (%d).",
+ current_member_count, 0);
return 0;
}
static int
-remove_slaves_and_stop_bonded_device(void)
+remove_members_and_stop_bonded_device(void)
{
- /* Clean up and remove slaves from bonded device */
+ /* Clean up and remove members from bonded device */
free_virtualpmd_tx_queue();
- while (test_params->bonded_slave_count > 0)
- TEST_ASSERT_SUCCESS(test_remove_slave_from_bonded_device(),
- "test_remove_slave_from_bonded_device failed");
+ while (test_params->bonded_member_count > 0)
+ TEST_ASSERT_SUCCESS(test_remove_member_from_bonded_device(),
+ "test_remove_member_from_bonded_device failed");
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -681,10 +683,10 @@ test_set_bonding_mode(void)
INVALID_PORT_ID);
/* Non bonded device */
- TEST_ASSERT_FAIL(rte_eth_bond_mode_set(test_params->slave_port_ids[0],
+ TEST_ASSERT_FAIL(rte_eth_bond_mode_set(test_params->member_port_ids[0],
bonding_modes[i]),
"Expected call to failed as invalid port (%d) specified.",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(rte_eth_bond_mode_set(test_params->bonded_port_id,
bonding_modes[i]),
@@ -704,26 +706,26 @@ test_set_bonding_mode(void)
INVALID_PORT_ID);
/* Non bonded device */
- bonding_mode = rte_eth_bond_mode_get(test_params->slave_port_ids[0]);
+ bonding_mode = rte_eth_bond_mode_get(test_params->member_port_ids[0]);
TEST_ASSERT(bonding_mode < 0,
"Expected call to failed as invalid port (%d) specified.",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
}
- return remove_slaves_and_stop_bonded_device();
+ return remove_members_and_stop_bonded_device();
}
static int
-test_set_primary_slave(void)
+test_set_primary_member(void)
{
int i, j, retval;
struct rte_ether_addr read_mac_addr;
struct rte_ether_addr *expected_mac_addr;
- /* Add 4 slaves to bonded device */
- for (i = test_params->bonded_slave_count; i < 4; i++)
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device.");
+ /* Add 4 members to bonded device */
+ for (i = test_params->bonded_member_count; i < 4; i++)
+ TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+ "Failed to add member to bonded device.");
TEST_ASSERT_SUCCESS(rte_eth_bond_mode_set(test_params->bonded_port_id,
BONDING_MODE_ROUND_ROBIN),
@@ -732,34 +734,34 @@ test_set_primary_slave(void)
/* Invalid port ID */
TEST_ASSERT_FAIL(rte_eth_bond_primary_set(INVALID_PORT_ID,
- test_params->slave_port_ids[i]),
+ test_params->member_port_ids[i]),
"Expected call to failed as invalid port specified.");
/* Non bonded device */
- TEST_ASSERT_FAIL(rte_eth_bond_primary_set(test_params->slave_port_ids[i],
- test_params->slave_port_ids[i]),
+ TEST_ASSERT_FAIL(rte_eth_bond_primary_set(test_params->member_port_ids[i],
+ test_params->member_port_ids[i]),
"Expected call to failed as invalid port specified.");
- /* Set slave as primary
- * Verify slave it is now primary slave
- * Verify that MAC address of bonded device is that of primary slave
- * Verify that MAC address of all bonded slaves are that of primary slave
+ /* Set member as primary
+ * Verify the member is now the primary member
+ * Verify that MAC address of bonded device is that of primary member
+ * Verify that MAC address of all bonded members are that of primary member
*/
for (i = 0; i < 4; i++) {
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[i]),
+ test_params->member_port_ids[i]),
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[i]);
+ test_params->bonded_port_id, test_params->member_port_ids[i]);
retval = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(retval >= 0,
"Failed to read primary port from bonded port (%d)\n",
test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(retval, test_params->slave_port_ids[i],
+ TEST_ASSERT_EQUAL(retval, test_params->member_port_ids[i],
"Bonded port (%d) primary port (%d) not expected value (%d)\n",
test_params->bonded_port_id, retval,
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
/* stop/start bonded eth dev to apply new MAC */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
@@ -770,13 +772,14 @@ test_set_primary_slave(void)
"Failed to start bonded port %d",
test_params->bonded_port_id);
- expected_mac_addr = (struct rte_ether_addr *)&slave_mac;
+ expected_mac_addr = (struct rte_ether_addr *)&member_mac;
expected_mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] = i;
- /* Check primary slave MAC */
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ /* Check primary member MAC */
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(expected_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
"bonded port mac address not set to that of primary port\n");
@@ -789,16 +792,17 @@ test_set_primary_slave(void)
sizeof(read_mac_addr)),
"bonded port mac address not set to that of primary port\n");
- /* Check other slaves MACs */
+ /* Check other members MACs */
for (j = 0; j < 4; j++) {
if (j != i) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[j],
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(
+ test_params->member_port_ids[j],
&read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[j]);
+ test_params->member_port_ids[j]);
TEST_ASSERT_SUCCESS(memcmp(expected_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port mac address not set to that of primary "
+ "member port mac address not set to that of primary "
"port");
}
}
@@ -809,14 +813,14 @@ test_set_primary_slave(void)
TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->bonded_port_id + 10),
"read primary port from expectedly");
- /* Test with slave port */
- TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->slave_port_ids[0]),
+ /* Test with member port */
+ TEST_ASSERT_FAIL(rte_eth_bond_primary_get(test_params->member_port_ids[0]),
"read primary port from expectedly\n");
- TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
- "Failed to stop and remove slaves from bonded device");
+ TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
+ "Failed to stop and remove members from bonded device");
- /* No slaves */
+ /* No members */
TEST_ASSERT(rte_eth_bond_primary_get(test_params->bonded_port_id) < 0,
"read primary port from expectedly\n");
@@ -840,7 +844,7 @@ test_set_explicit_bonded_mac(void)
/* Non bonded device */
TEST_ASSERT_FAIL(rte_eth_bond_mac_address_set(
- test_params->slave_port_ids[0], mac_addr),
+ test_params->member_port_ids[0], mac_addr),
"Expected call to failed as invalid port specified.");
/* NULL MAC address */
@@ -853,10 +857,10 @@ test_set_explicit_bonded_mac(void)
"Failed to set MAC address on bonded port (%d)",
test_params->bonded_port_id);
- /* Add 4 slaves to bonded device */
- for (i = test_params->bonded_slave_count; i < 4; i++) {
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave to bonded device.\n");
+ /* Add 4 members to bonded device */
+ for (i = test_params->bonded_member_count; i < 4; i++) {
+ TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+ "Failed to add member to bonded device.\n");
}
/* Check bonded MAC */
@@ -866,14 +870,15 @@ test_set_explicit_bonded_mac(void)
TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr, sizeof(read_mac_addr)),
"bonded port mac address not set to that of primary port");
- /* Check other slaves MACs */
+ /* Check other members MACs */
for (i = 0; i < 4; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port mac address not set to that of primary port");
+ "member port mac address not set to that of primary port");
}
/* test resetting mac address on bonded device */
@@ -883,13 +888,13 @@ test_set_explicit_bonded_mac(void)
test_params->bonded_port_id);
TEST_ASSERT_FAIL(
- rte_eth_bond_mac_address_reset(test_params->slave_port_ids[0]),
+ rte_eth_bond_mac_address_reset(test_params->member_port_ids[0]),
"Reset MAC address on bonded port (%d) unexpectedly",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[0]);
- /* test resetting mac address on bonded device with no slaves */
- TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
- "Failed to remove slaves and stop bonded device");
+ /* test resetting mac address on bonded device with no members */
+ TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
+ "Failed to remove members and stop bonded device");
TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_reset(test_params->bonded_port_id),
"Failed to reset MAC address on bonded port (%d)",
@@ -898,25 +903,25 @@ test_set_explicit_bonded_mac(void)
return 0;
}
-#define BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT (3)
+#define BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT (3)
static int
test_set_bonded_port_initialization_mac_assignment(void)
{
- int i, slave_count;
+ int i, member_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
static int bonded_port_id = -1;
- static int slave_port_ids[BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT];
+ static int member_port_ids[BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT];
- struct rte_ether_addr slave_mac_addr, bonded_mac_addr, read_mac_addr;
+ struct rte_ether_addr member_mac_addr, bonded_mac_addr, read_mac_addr;
/* Initialize default values for MAC addresses */
- memcpy(&slave_mac_addr, slave_mac, sizeof(struct rte_ether_addr));
- memcpy(&bonded_mac_addr, slave_mac, sizeof(struct rte_ether_addr));
+ memcpy(&member_mac_addr, member_mac, sizeof(struct rte_ether_addr));
+ memcpy(&bonded_mac_addr, member_mac, sizeof(struct rte_ether_addr));
/*
- * 1. a - Create / configure bonded / slave ethdevs
+ * 1. a - Create / configure bonded / member ethdevs
*/
if (bonded_port_id == -1) {
bonded_port_id = rte_eth_bond_create("net_bonding_mac_ass_test",
@@ -927,46 +932,46 @@ test_set_bonded_port_initialization_mac_assignment(void)
"Failed to configure bonded ethdev");
}
- if (!mac_slaves_initialized) {
- for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
+ if (!mac_members_initialized) {
+ for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
char pmd_name[RTE_ETH_NAME_MAX_LEN];
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
i + 100;
snprintf(pmd_name, RTE_ETH_NAME_MAX_LEN,
- "eth_slave_%d", i);
+ "eth_member_%d", i);
- slave_port_ids[i] = virtual_ethdev_create(pmd_name,
- &slave_mac_addr, rte_socket_id(), 1);
+ member_port_ids[i] = virtual_ethdev_create(pmd_name,
+ &member_mac_addr, rte_socket_id(), 1);
- TEST_ASSERT(slave_port_ids[i] >= 0,
- "Failed to create slave ethdev %s",
+ TEST_ASSERT(member_port_ids[i] >= 0,
+ "Failed to create member ethdev %s",
pmd_name);
- TEST_ASSERT_SUCCESS(configure_ethdev(slave_port_ids[i], 1, 0),
+ TEST_ASSERT_SUCCESS(configure_ethdev(member_port_ids[i], 1, 0),
"Failed to configure virtual ethdev %s",
pmd_name);
}
- mac_slaves_initialized = 1;
+ mac_members_initialized = 1;
}
/*
- * 2. Add slave ethdevs to bonded device
+ * 2. Add member ethdevs to bonded device
*/
- for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(bonded_port_id,
- slave_port_ids[i]),
- "Failed to add slave (%d) to bonded port (%d).",
- slave_port_ids[i], bonded_port_id);
+ for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(bonded_port_id,
+ member_port_ids[i]),
+ "Failed to add member (%d) to bonded port (%d).",
+ member_port_ids[i], bonded_port_id);
}
- slave_count = rte_eth_bond_slaves_get(bonded_port_id, slaves,
+ member_count = rte_eth_bond_members_get(bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT, slave_count,
- "Number of slaves (%d) is not as expected (%d)",
- slave_count, BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT);
+ TEST_ASSERT_EQUAL(BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT, member_count,
+ "Number of members (%d) is not as expected (%d)",
+ member_count, BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT);
/*
@@ -982,16 +987,16 @@ test_set_bonded_port_initialization_mac_assignment(void)
/* 4. a - Start bonded ethdev
- * b - Enable slave devices
- * c - Verify bonded/slaves ethdev MAC addresses
+ * b - Enable member devices
+ * c - Verify bonded/members ethdev MAC addresses
*/
TEST_ASSERT_SUCCESS(rte_eth_dev_start(bonded_port_id),
"Failed to start bonded pmd eth device %d.",
bonded_port_id);
- for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
+ for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- slave_port_ids[i], 1);
+ member_port_ids[i], 1);
}
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(bonded_port_id, &read_mac_addr),
@@ -1001,36 +1006,36 @@ test_set_bonded_port_initialization_mac_assignment(void)
sizeof(read_mac_addr)),
"bonded port mac address not as expected");
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[0]);
+ member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 0 mac address not as expected");
+ "member port 0 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[1]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[1]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 1 mac address not as expected");
+ "member port 1 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[2], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[2]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[2]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 2 mac address not as expected");
+ "member port 2 mac address not as expected");
/* 7. a - Change primary port
* b - Stop / Start bonded port
- * d - Verify slave ethdev MAC addresses
+ * d - Verify member ethdev MAC addresses
*/
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(bonded_port_id,
- slave_port_ids[2]),
+ member_port_ids[2]),
"failed to set primary port on bonded device.");
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(bonded_port_id),
@@ -1048,94 +1053,94 @@ test_set_bonded_port_initialization_mac_assignment(void)
sizeof(read_mac_addr)),
"bonded port mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 0 mac address not as expected");
+ "member port 0 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[1]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[1]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 1 mac address not as expected");
+ "member port 1 mac address not as expected");
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[2], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[2]);
+ member_port_ids[2]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 2 mac address not as expected");
+ "member port 2 mac address not as expected");
/* 6. a - Stop bonded ethdev
- * b - remove slave ethdevs
- * c - Verify slave ethdevs MACs are restored
+ * b - remove member ethdevs
+ * c - Verify member ethdevs MACs are restored
*/
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(bonded_port_id),
"Failed to stop bonded port %u",
bonded_port_id);
- for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_SLAVE_COUNT; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(bonded_port_id,
- slave_port_ids[i]),
- "Failed to remove slave %d from bonded port (%d).",
- slave_port_ids[i], bonded_port_id);
+ for (i = 0; i < BONDED_INIT_MAC_ASSIGNMENT_MEMBER_COUNT; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(bonded_port_id,
+ member_port_ids[i]),
+ "Failed to remove member %d from bonded port (%d).",
+ member_port_ids[i], bonded_port_id);
}
- slave_count = rte_eth_bond_slaves_get(bonded_port_id, slaves,
+ member_count = rte_eth_bond_members_get(bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 0,
- "Number of slaves (%d) is great than expected (%d).",
- slave_count, 0);
+ TEST_ASSERT_EQUAL(member_count, 0,
+ "Number of members (%d) is greater than expected (%d).",
+ member_count, 0);
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[0], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 0 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 0 mac address not as expected");
+ "member port 0 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[1], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 1 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[1]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[1]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 1 mac address not as expected");
+ "member port 1 mac address not as expected");
- slave_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(slave_port_ids[2], &read_mac_addr),
+ member_mac_addr.addr_bytes[RTE_ETHER_ADDR_LEN-1] = 2 + 100;
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(member_port_ids[2], &read_mac_addr),
"Failed to get mac address (port %d)",
- slave_port_ids[2]);
- TEST_ASSERT_SUCCESS(memcmp(&slave_mac_addr, &read_mac_addr,
+ member_port_ids[2]);
+ TEST_ASSERT_SUCCESS(memcmp(&member_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port 2 mac address not as expected");
+ "member port 2 mac address not as expected");
return 0;
}
static int
-initialize_bonded_device_with_slaves(uint8_t bonding_mode, uint8_t bond_en_isr,
- uint16_t number_of_slaves, uint8_t enable_slave)
+initialize_bonded_device_with_members(uint8_t bonding_mode, uint8_t bond_en_isr,
+ uint16_t number_of_members, uint8_t enable_member)
{
/* Configure bonded device */
TEST_ASSERT_SUCCESS(configure_ethdev(test_params->bonded_port_id, 0,
bond_en_isr), "Failed to configure bonding port (%d) in mode %d "
- "with (%d) slaves.", test_params->bonded_port_id, bonding_mode,
- number_of_slaves);
-
- /* Add slaves to bonded device */
- while (number_of_slaves > test_params->bonded_slave_count)
- TEST_ASSERT_SUCCESS(test_add_slave_to_bonded_device(),
- "Failed to add slave (%d to bonding port (%d).",
- test_params->bonded_slave_count - 1,
+ "with (%d) members.", test_params->bonded_port_id, bonding_mode,
+ number_of_members);
+
+ /* Add members to bonded device */
+ while (number_of_members > test_params->bonded_member_count)
+ TEST_ASSERT_SUCCESS(test_add_member_to_bonded_device(),
+ "Failed to add member (%d) to bonding port (%d).",
+ test_params->bonded_member_count - 1,
test_params->bonded_port_id);
/* Set link bonding mode */
@@ -1148,40 +1153,40 @@ initialize_bonded_device_with_slaves(uint8_t bonding_mode, uint8_t bond_en_isr,
"Failed to start bonded pmd eth device %d.",
test_params->bonded_port_id);
- if (enable_slave)
- enable_bonded_slaves();
+ if (enable_member)
+ enable_bonded_members();
return 0;
}
static int
-test_adding_slave_after_bonded_device_started(void)
+test_adding_member_after_bonded_device_started(void)
{
int i;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0, 4, 0),
- "Failed to add slaves to bonded device");
+ "Failed to add members to bonded device");
- /* Enabled slave devices */
- for (i = 0; i < test_params->bonded_slave_count + 1; i++) {
+ /* Enable member devices */
+ for (i = 0; i < test_params->bonded_member_count + 1; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 1);
+ test_params->member_port_ids[i], 1);
}
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
- test_params->slave_port_ids[test_params->bonded_slave_count]),
- "Failed to add slave to bonded port.\n");
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params->bonded_port_id,
+ test_params->member_port_ids[test_params->bonded_member_count]),
+ "Failed to add member to bonded port.\n");
rte_eth_stats_reset(
- test_params->slave_port_ids[test_params->bonded_slave_count]);
+ test_params->member_port_ids[test_params->bonded_member_count]);
- test_params->bonded_slave_count++;
+ test_params->bonded_member_count++;
- return remove_slaves_and_stop_bonded_device();
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_STATUS_INTERRUPT_SLAVE_COUNT 4
+#define TEST_STATUS_INTERRUPT_MEMBER_COUNT 4
#define TEST_LSC_WAIT_TIMEOUT_US 500000
int test_lsc_interrupt_count;
@@ -1237,13 +1242,13 @@ lsc_timeout(int wait_us)
static int
test_status_interrupt(void)
{
- int slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ int member_count;
+ uint16_t members[RTE_MAX_ETHPORTS];
- /* initialized bonding device with T slaves */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* initialize bonded device with TEST_STATUS_INTERRUPT_MEMBER_COUNT members */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 1,
- TEST_STATUS_INTERRUPT_SLAVE_COUNT, 1),
+ TEST_STATUS_INTERRUPT_MEMBER_COUNT, 1),
"Failed to initialise bonded device");
test_lsc_interrupt_count = 0;
@@ -1253,27 +1258,27 @@ test_status_interrupt(void)
RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
&test_params->bonded_port_id);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_STATUS_INTERRUPT_SLAVE_COUNT,
- "Number of active slaves (%d) is not as expected (%d)",
- slave_count, TEST_STATUS_INTERRUPT_SLAVE_COUNT);
+ TEST_ASSERT_EQUAL(member_count, TEST_STATUS_INTERRUPT_MEMBER_COUNT,
+ "Number of active members (%d) is not as expected (%d)",
+ member_count, TEST_STATUS_INTERRUPT_MEMBER_COUNT);
- /* Bring all 4 slaves link status to down and test that we have received a
+ /* Bring all 4 members' link status down and test that we have received
* lsc interrupts */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 0);
+ test_params->member_port_ids[0], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->member_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[2], 0);
+ test_params->member_port_ids[2], 0);
TEST_ASSERT_EQUAL(test_lsc_interrupt_count, 0,
"Received a link status change interrupt unexpectedly");
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->member_port_ids[3], 0);
TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) == 0,
"timed out waiting for interrupt");
@@ -1281,18 +1286,18 @@ test_status_interrupt(void)
TEST_ASSERT(test_lsc_interrupt_count > 0,
"Did not receive link status change interrupt");
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 0,
- "Number of active slaves (%d) is not as expected (%d)",
- slave_count, 0);
+ TEST_ASSERT_EQUAL(member_count, 0,
+ "Number of active members (%d) is not as expected (%d)",
+ member_count, 0);
- /* bring one slave port up so link status will change */
+ /* bring one member port up so link status will change */
test_lsc_interrupt_count = 0;
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 1);
+ test_params->member_port_ids[0], 1);
TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) == 0,
"timed out waiting for interrupt");
@@ -1301,12 +1306,12 @@ test_status_interrupt(void)
TEST_ASSERT(test_lsc_interrupt_count > 0,
"Did not receive link status change interrupt");
- /* Verify that calling the same slave lsc interrupt doesn't cause another
+ /* Verify that calling the same member lsc interrupt doesn't cause another
* lsc interrupt from bonded device */
test_lsc_interrupt_count = 0;
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 1);
+ test_params->member_port_ids[0], 1);
TEST_ASSERT(lsc_timeout(TEST_LSC_WAIT_TIMEOUT_US) != 0,
"received unexpected interrupt");
@@ -1320,8 +1325,8 @@ test_status_interrupt(void)
RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
&test_params->bonded_port_id);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -1398,11 +1403,11 @@ test_roundrobin_tx_burst(void)
struct rte_mbuf *pkt_burst[MAX_PKT_BURST];
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0, 2, 1),
"Failed to initialise bonded device");
- burst_size = 20 * test_params->bonded_slave_count;
+ burst_size = 20 * test_params->bonded_member_count;
TEST_ASSERT(burst_size <= MAX_PKT_BURST,
"Burst size specified is greater than supported.");
@@ -1423,20 +1428,20 @@ test_roundrobin_tx_burst(void)
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
burst_size);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+ /* Verify member ports tx stats */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)burst_size / test_params->bonded_slave_count,
- "Slave Port (%d) opackets value (%u) not as expected (%d)\n",
+ (uint64_t)burst_size / test_params->bonded_member_count,
+ "Member Port (%d) opackets value (%u) not as expected (%d)\n",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- burst_size / test_params->bonded_slave_count);
+ burst_size / test_params->bonded_member_count);
}
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try and transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -1444,8 +1449,8 @@ test_roundrobin_tx_burst(void)
pkt_burst, burst_size), 0,
"tx burst return unexpected value");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -1471,13 +1476,13 @@ free_mbufs(struct rte_mbuf **mbufs, int nb_mbufs)
rte_pktmbuf_free(mbufs[i]);
}
-#define TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT (2)
-#define TEST_RR_SLAVE_TX_FAIL_BURST_SIZE (64)
-#define TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT (22)
-#define TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX (1)
+#define TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT (2)
+#define TEST_RR_MEMBER_TX_FAIL_BURST_SIZE (64)
+#define TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT (22)
+#define TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX (1)
static int
-test_roundrobin_tx_burst_slave_tx_fail(void)
+test_roundrobin_tx_burst_member_tx_fail(void)
{
struct rte_mbuf *pkt_burst[MAX_PKT_BURST];
struct rte_mbuf *expected_tx_fail_pkts[MAX_PKT_BURST];
@@ -1486,49 +1491,51 @@ test_roundrobin_tx_burst_slave_tx_fail(void)
int i, first_fail_idx, tx_count;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0,
- TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+ TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT, 1),
"Failed to initialise bonded device");
/* Generate test bursts of packets to transmit */
TEST_ASSERT_EQUAL(generate_test_burst(pkt_burst,
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE, 0, 1, 0, 0, 0),
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE,
+ TEST_RR_MEMBER_TX_FAIL_BURST_SIZE, 0, 1, 0, 0, 0),
+ TEST_RR_MEMBER_TX_FAIL_BURST_SIZE,
"Failed to generate test packet burst");
/* Copy references to packets which we expect not to be transmitted */
- first_fail_idx = (TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- (TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT *
- TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT)) +
- TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX;
+ first_fail_idx = (TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+ (TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT *
+ TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT)) +
+ TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX;
- for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+ for (i = 0; i < TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT; i++) {
expected_tx_fail_pkts[i] = pkt_burst[first_fail_idx +
- (i * TEST_RR_SLAVE_TX_FAIL_SLAVE_COUNT)];
+ (i * TEST_RR_MEMBER_TX_FAIL_MEMBER_COUNT)];
}
- /* Set virtual slave to only fail transmission of
- * TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT packets in burst */
+ /*
+ * Set virtual member to only fail transmission of
+ * TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT packets in burst.
+ */
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
+ test_params->member_port_ids[TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
0);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+ test_params->member_port_ids[TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
tx_count = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkt_burst,
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE);
+ TEST_RR_MEMBER_TX_FAIL_BURST_SIZE);
- TEST_ASSERT_EQUAL(tx_count, TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT,
+ TEST_ASSERT_EQUAL(tx_count, TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT,
"Transmitted (%d) an unexpected (%d) number of packets", tx_count,
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+ TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
/* Verify that failed packet are expected failed packets */
- for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+ for (i = 0; i < TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT; i++) {
TEST_ASSERT_EQUAL(expected_tx_fail_pkts[i], pkt_burst[i + tx_count],
"expected mbuf (%d) pointer %p not expected pointer %p",
i, expected_tx_fail_pkts[i], pkt_burst[i + tx_count]);
@@ -1538,45 +1545,45 @@ test_roundrobin_tx_burst_slave_tx_fail(void)
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT,
+ (uint64_t)TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT,
"Bonded Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- TEST_RR_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+ TEST_RR_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- int slave_expected_tx_count;
+ /* Verify member ports tx stats */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ int member_expected_tx_count;
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
- slave_expected_tx_count = TEST_RR_SLAVE_TX_FAIL_BURST_SIZE /
- test_params->bonded_slave_count;
+ member_expected_tx_count = TEST_RR_MEMBER_TX_FAIL_BURST_SIZE /
+ test_params->bonded_member_count;
- if (i == TEST_RR_SLAVE_TX_FAIL_FAILING_SLAVE_IDX)
- slave_expected_tx_count = slave_expected_tx_count -
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT;
+ if (i == TEST_RR_MEMBER_TX_FAIL_FAILING_MEMBER_IDX)
+ member_expected_tx_count = member_expected_tx_count -
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT;
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)slave_expected_tx_count,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[i],
- (unsigned int)port_stats.opackets, slave_expected_tx_count);
+ (uint64_t)member_expected_tx_count,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[i],
+ (unsigned int)port_stats.opackets, member_expected_tx_count);
}
/* Verify that all mbufs have a ref value of zero */
TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkt_burst[tx_count],
- TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT, 1),
+ TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT, 1),
"mbufs refcnts not as expected");
- free_mbufs(&pkt_burst[tx_count], TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT);
+ free_mbufs(&pkt_burst[tx_count], TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
-test_roundrobin_rx_burst_on_single_slave(void)
+test_roundrobin_rx_burst_on_single_member(void)
{
struct rte_mbuf *gen_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
@@ -1585,19 +1592,19 @@ test_roundrobin_rx_burst_on_single_slave(void)
int i, j, burst_size = 25;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in round robin mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
/* Generate test bursts of packets to transmit */
TEST_ASSERT_EQUAL(generate_test_burst(
gen_pkt_burst, burst_size, 0, 1, 0, 0, 0), burst_size,
"burst generation failed");
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- /* Add rx data to slave */
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ /* Add rx data to member */
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[0], burst_size);
/* Call rx burst on bonded device */
@@ -1616,25 +1623,25 @@ test_roundrobin_rx_burst_on_single_slave(void)
- /* Verify bonded slave devices rx count */
- /* Verify slave ports tx stats */
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ /* Verify bonded member devices rx count */
+ /* Verify member ports tx stats */
+ for (j = 0; j < test_params->bonded_member_count; j++) {
+ rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
if (i == j) {
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
- "Slave Port (%d) ipackets value (%u) not as expected"
- " (%d)", test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as expected"
+ " (%d)", test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, burst_size);
} else {
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected"
- " (%d)", test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as expected"
+ " (%d)", test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
- /* Reset bonded slaves stats */
- rte_eth_stats_reset(test_params->slave_port_ids[j]);
+ /* Reset bonded members stats */
+ rte_eth_stats_reset(test_params->member_port_ids[j]);
}
/* reset bonded device stats */
rte_eth_stats_reset(test_params->bonded_port_id);
@@ -1646,38 +1653,38 @@ test_roundrobin_rx_burst_on_single_slave(void)
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT (3)
+#define TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT (3)
static int
-test_roundrobin_rx_burst_on_multiple_slaves(void)
+test_roundrobin_rx_burst_on_multiple_members(void)
{
- struct rte_mbuf *gen_pkt_burst[TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *gen_pkt_burst[TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- int burst_size[TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT] = { 15, 13, 36 };
+ int burst_size[TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT] = { 15, 13, 36 };
int i, nb_rx;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in round robin mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
/* Generate test bursts of packets to transmit */
- for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[i][0], burst_size[i], 0, 1, 0, 0, 0),
burst_size[i], "burst generation failed");
}
- /* Add rx data to slaves */
- for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_SLAVE_COUNT; i++) {
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to members */
+ for (i = 0; i < TEST_ROUNDROBIN_TX_BURST_MEMBER_COUNT; i++) {
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[i][0], burst_size[i]);
}
@@ -1697,29 +1704,29 @@ test_roundrobin_rx_burst_on_multiple_slaves(void)
test_params->bonded_port_id, (unsigned int)port_stats.ipackets,
burst_size[0] + burst_size[1] + burst_size[2]);
- /* Verify bonded slave devices rx counts */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify bonded member devices rx counts */
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0],
(unsigned int)port_stats.ipackets, burst_size[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[1], (unsigned int)port_stats.ipackets,
burst_size[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[2],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[2],
(unsigned int)port_stats.ipackets, burst_size[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[3],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[3],
(unsigned int)port_stats.ipackets, 0);
/* free mbufs */
@@ -1727,8 +1734,8 @@ test_roundrobin_rx_burst_on_multiple_slaves(void)
rte_pktmbuf_free(rx_pkt_burst[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -1739,48 +1746,54 @@ test_roundrobin_verify_mac_assignment(void)
int i;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+ &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[2], &expected_mac_addr_2),
+ test_params->member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[2],
+ &expected_mac_addr_2),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[2]);
+ test_params->member_port_ids[2]);
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in round robin mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
- /* Verify that all MACs are the same as first slave added to bonded dev */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ /* Verify that all MACs are the same as first member added to bonded dev */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[i]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[i]);
}
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[2]),
+ test_params->member_port_ids[2]),
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[i]);
+ test_params->bonded_port_id, test_params->member_port_ids[2]);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address has changed to that of primary"
+ "member port (%d) mac address has changed to that of primary"
" port without stop/start toggle of bonded device",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
- /* stop / start bonded device and verify that primary MAC address is
- * propagate to bonded device and slaves */
+ /*
+ * stop / start bonded device and verify that the primary MAC address is
+ * propagated to the bonded device and members.
+ */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
test_params->bonded_port_id);
@@ -1794,16 +1807,17 @@ test_roundrobin_verify_mac_assignment(void)
TEST_ASSERT_SUCCESS(
memcmp(&expected_mac_addr_2, &read_mac_addr, sizeof(read_mac_addr)),
"bonded port (%d) mac address not set to that of new primary port",
- test_params->slave_port_ids[i]);
+ test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_2, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of new primary"
- " port", test_params->slave_port_ids[i]);
+ "member port (%d) mac address not set to that of new primary"
+ " port", test_params->member_port_ids[i]);
}
/* Set explicit MAC address */
@@ -1818,19 +1832,20 @@ test_roundrobin_verify_mac_assignment(void)
TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
"bonded port (%d) mac address not set to that of new primary port",
- test_params->slave_port_ids[i]);
+ test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
- sizeof(read_mac_addr)), "slave port (%d) mac address not set to"
- " that of new primary port\n", test_params->slave_port_ids[i]);
+ sizeof(read_mac_addr)), "member port (%d) mac address not set to"
+ " that of new primary port\n", test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -1839,10 +1854,10 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
int i, promiscuous_en;
int ret;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in round robin mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ROUND_ROBIN, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
TEST_ASSERT_SUCCESS(ret,
@@ -1854,12 +1869,12 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not enabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_EQUAL(promiscuous_en, 1,
- "slave port (%d) promiscuous mode not enabled",
- test_params->slave_port_ids[i]);
+ "member port (%d) promiscuous mode not enabled",
+ test_params->member_port_ids[i]);
}
ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -1872,76 +1887,76 @@ test_roundrobin_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled\n",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_EQUAL(promiscuous_en, 0,
"Port (%d) promiscuous mode not disabled\n",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_RR_LINK_STATUS_SLAVE_COUNT (4)
-#define TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT (2)
+#define TEST_RR_LINK_STATUS_MEMBER_COUNT (4)
+#define TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT (2)
static int
-test_roundrobin_verify_slave_link_status_change_behaviour(void)
+test_roundrobin_verify_member_link_status_change_behaviour(void)
{
struct rte_mbuf *tx_pkt_burst[MAX_PKT_BURST] = { NULL };
- struct rte_mbuf *gen_pkt_burst[TEST_RR_LINK_STATUS_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *gen_pkt_burst[TEST_RR_LINK_STATUS_MEMBER_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count;
+ int i, burst_size, member_count;
/* NULL all pointers in array to simplify cleanup */
memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
- /* Initialize bonded device with TEST_RR_LINK_STATUS_SLAVE_COUNT slaves
+ /* Initialize bonded device with TEST_RR_LINK_STATUS_MEMBER_COUNT members
* in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
- BONDING_MODE_ROUND_ROBIN, 0, TEST_RR_LINK_STATUS_SLAVE_COUNT, 1),
- "Failed to initialize bonded device with slaves");
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+ BONDING_MODE_ROUND_ROBIN, 0, TEST_RR_LINK_STATUS_MEMBER_COUNT, 1),
+ "Failed to initialize bonded device with members");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify current member count and active member count */
+ member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT,
- "Number of slaves (%d) is not as expected (%d).",
- slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT);
+ TEST_ASSERT_EQUAL(member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT,
+ "Number of members (%d) is not as expected (%d).",
+ member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, TEST_RR_LINK_STATUS_SLAVE_COUNT);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, TEST_RR_LINK_STATUS_MEMBER_COUNT);
- /* Set 2 slaves eth_devs link status to down */
+ /* Set 2 members eth_devs link status to down */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->member_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->member_port_ids[3], 0);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count,
- TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT,
- "Number of active slaves (%d) is not as expected (%d).\n",
- slave_count, TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count,
+ TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT,
+ "Number of active members (%d) is not as expected (%d).\n",
+ member_count, TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT);
burst_size = 20;
- /* Verify that pkts are not sent on slaves with link status down:
+ /* Verify that pkts are not sent on members with link status down:
*
* 1. Generate test burst of traffic
* 2. Transmit burst on bonded eth_dev
* 3. Verify stats for bonded eth_dev (opackets = burst_size)
- * 4. Verify stats for slave eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
+ * 4. Verify stats for member eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
*/
TEST_ASSERT_EQUAL(
generate_test_burst(tx_pkt_burst, burst_size, 0, 1, 0, 0, 0),
@@ -1960,41 +1975,41 @@ test_roundrobin_verify_slave_link_status_change_behaviour(void)
test_params->bonded_port_id, (int)port_stats.opackets,
burst_size);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)10,
"Port (%d) opackets stats (%d) not expected (%d) value",
- test_params->slave_port_ids[0], (int)port_stats.opackets, 10);
+ test_params->member_port_ids[0], (int)port_stats.opackets, 10);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)0,
"Port (%d) opackets stats (%d) not expected (%d) value",
- test_params->slave_port_ids[1], (int)port_stats.opackets, 0);
+ test_params->member_port_ids[1], (int)port_stats.opackets, 0);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)10,
"Port (%d) opackets stats (%d) not expected (%d) value",
- test_params->slave_port_ids[2], (int)port_stats.opackets, 10);
+ test_params->member_port_ids[2], (int)port_stats.opackets, 10);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)0,
"Port (%d) opackets stats (%d) not expected (%d) value",
- test_params->slave_port_ids[3], (int)port_stats.opackets, 0);
+ test_params->member_port_ids[3], (int)port_stats.opackets, 0);
- /* Verify that pkts are not sent on slaves with link status down:
+ /* Verify that pkts are not received from members with link status down:
*
* 1. Generate test bursts of traffic
* 2. Add bursts on to virtual eth_devs
* 3. Rx burst on bonded eth_dev, expected (burst_ size *
- * TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_SLAVE_COUNT) received
+ * TEST_RR_LINK_STATUS_EXPECTED_ACTIVE_MEMBER_COUNT) received
* 4. Verify stats for bonded eth_dev
- * 6. Verify stats for slave eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
+ * 5. Verify stats for member eth_devs (s0 = 10, s1 = 0, s2 = 10, s3 = 0)
*/
- for (i = 0; i < TEST_RR_LINK_STATUS_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_RR_LINK_STATUS_MEMBER_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0),
burst_size, "failed to generate packet burst");
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[i][0], burst_size);
}
@@ -2014,49 +2029,49 @@ test_roundrobin_verify_slave_link_status_change_behaviour(void)
rte_pktmbuf_free(rx_pkt_burst[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT (2)
+#define TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT (2)
-uint8_t polling_slave_mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x00 };
+uint8_t polling_member_mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x00 };
-int polling_test_slaves[TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT] = { -1, -1 };
+int polling_test_members[TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT] = { -1, -1 };
static int
-test_roundrobin_verfiy_polling_slave_link_status_change(void)
+test_roundrobin_verify_polling_member_link_status_change(void)
{
struct rte_ether_addr *mac_addr =
- (struct rte_ether_addr *)polling_slave_mac;
- char slave_name[RTE_ETH_NAME_MAX_LEN];
+ (struct rte_ether_addr *)polling_member_mac;
+ char member_name[RTE_ETH_NAME_MAX_LEN];
int i;
- for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT; i++) {
- /* Generate slave name / MAC address */
- snprintf(slave_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_poll_%d", i);
+ for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT; i++) {
+ /* Generate member name / MAC address */
+ snprintf(member_name, RTE_ETH_NAME_MAX_LEN, "eth_virt_poll_%d", i);
mac_addr->addr_bytes[RTE_ETHER_ADDR_LEN-1] = i;
- /* Create slave devices with no ISR Support */
- if (polling_test_slaves[i] == -1) {
- polling_test_slaves[i] = virtual_ethdev_create(slave_name, mac_addr,
+ /* Create member devices with no ISR Support */
+ if (polling_test_members[i] == -1) {
+ polling_test_members[i] = virtual_ethdev_create(member_name, mac_addr,
rte_socket_id(), 0);
- TEST_ASSERT(polling_test_slaves[i] >= 0,
- "Failed to create virtual virtual ethdev %s\n", slave_name);
+ TEST_ASSERT(polling_test_members[i] >= 0,
+ "Failed to create virtual ethdev %s\n", member_name);
- /* Configure slave */
- TEST_ASSERT_SUCCESS(configure_ethdev(polling_test_slaves[i], 0, 0),
- "Failed to configure virtual ethdev %s(%d)", slave_name,
- polling_test_slaves[i]);
+ /* Configure member */
+ TEST_ASSERT_SUCCESS(configure_ethdev(polling_test_members[i], 0, 0),
+ "Failed to configure virtual ethdev %s(%d)", member_name,
+ polling_test_members[i]);
}
- /* Add slave to bonded device */
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params->bonded_port_id,
- polling_test_slaves[i]),
- "Failed to add slave %s(%d) to bonded device %d",
- slave_name, polling_test_slaves[i],
+ /* Add member to bonded device */
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params->bonded_port_id,
+ polling_test_members[i]),
+ "Failed to add member %s(%d) to bonded device %d",
+ member_name, polling_test_members[i],
test_params->bonded_port_id);
}
@@ -2071,26 +2086,26 @@ test_roundrobin_verfiy_polling_slave_link_status_change(void)
RTE_ETH_EVENT_INTR_LSC, test_bonding_lsc_event_callback,
&test_params->bonded_port_id);
- /* link status change callback for first slave link up */
+ /* link status change callback for first member link up */
test_lsc_interrupt_count = 0;
- virtual_ethdev_set_link_status(polling_test_slaves[0], 1);
+ virtual_ethdev_set_link_status(polling_test_members[0], 1);
TEST_ASSERT_SUCCESS(lsc_timeout(15000), "timed out waiting for interrupt");
- /* no link status change callback for second slave link up */
+ /* no link status change callback for second member link up */
test_lsc_interrupt_count = 0;
- virtual_ethdev_set_link_status(polling_test_slaves[1], 1);
+ virtual_ethdev_set_link_status(polling_test_members[1], 1);
TEST_ASSERT_FAIL(lsc_timeout(15000), "unexpectedly succeeded");
- /* link status change callback for both slave links down */
+ /* link status change callback for both member links down */
test_lsc_interrupt_count = 0;
- virtual_ethdev_set_link_status(polling_test_slaves[0], 0);
- virtual_ethdev_set_link_status(polling_test_slaves[1], 0);
+ virtual_ethdev_set_link_status(polling_test_members[0], 0);
+ virtual_ethdev_set_link_status(polling_test_members[1], 0);
TEST_ASSERT_SUCCESS(lsc_timeout(20000), "timed out waiting for interrupt");
@@ -2100,17 +2115,17 @@ test_roundrobin_verfiy_polling_slave_link_status_change(void)
&test_params->bonded_port_id);
- /* Clean up and remove slaves from bonded device */
- for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_SLAVE_COUNT; i++) {
+ /* Clean up and remove members from bonded device */
+ for (i = 0; i < TEST_RR_POLLING_LINK_STATUS_MEMBER_COUNT; i++) {
TEST_ASSERT_SUCCESS(
- rte_eth_bond_slave_remove(test_params->bonded_port_id,
- polling_test_slaves[i]),
- "Failed to remove slave %d from bonded port (%d)",
- polling_test_slaves[i], test_params->bonded_port_id);
+ rte_eth_bond_member_remove(test_params->bonded_port_id,
+ polling_test_members[i]),
+ "Failed to remove member %d from bonded port (%d)",
+ polling_test_members[i], test_params->bonded_port_id);
}
- return remove_slaves_and_stop_bonded_device();
+ return remove_members_and_stop_bonded_device();
}
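For context on what the polling test above checks: the bond reports link-up while at least one member is up, and the LSC callback fires only when that aggregate state changes. The following is a minimal sketch of that aggregation logic, not DPDK's actual implementation (the real driver polls ISR-less members on a timer); the function name and counter are illustrative only.

```c
#include <assert.h>

/*
 * Hypothetical sketch: recompute the bond's aggregate link state from
 * the number of up members, and bump the LSC counter only when the
 * aggregate state actually changes.
 */
static int bond_link_update(int *bond_up, int n_up_members, int *lsc_count)
{
	int now_up = n_up_members > 0;

	if (now_up != *bond_up) {
		*bond_up = now_up;
		(*lsc_count)++;	/* aggregate state changed: callback fires */
		return 1;
	}
	return 0;		/* no change: no callback */
}
```

This mirrors the test sequence: the first member link-up triggers a callback, the second does not, and bringing both members down triggers one more.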
@@ -2123,9 +2138,9 @@ test_activebackup_tx_burst(void)
struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ACTIVE_BACKUP, 0, 1, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
initialize_eth_header(test_params->pkt_eth_hdr,
(struct rte_ether_addr *)src_mac,
@@ -2136,7 +2151,7 @@ test_activebackup_tx_burst(void)
pktlen = initialize_ipv4_header(test_params->pkt_ipv4_hdr, src_addr,
dst_addr_0, pktlen);
- burst_size = 20 * test_params->bonded_slave_count;
+ burst_size = 20 * test_params->bonded_member_count;
TEST_ASSERT(burst_size < MAX_PKT_BURST,
"Burst size specified is greater than supported.");
@@ -2160,38 +2175,38 @@ test_activebackup_tx_burst(void)
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
- if (test_params->slave_port_ids[i] == primary_port) {
+ /* Verify member ports tx stats */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
+ if (test_params->member_port_ids[i] == primary_port) {
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id,
(unsigned int)port_stats.opackets,
- burst_size / test_params->bonded_slave_count);
+ burst_size / test_params->bonded_member_count);
} else {
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id,
(unsigned int)port_stats.opackets, 0);
}
}
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try to transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
TEST_ASSERT_EQUAL(rte_eth_tx_burst(test_params->bonded_port_id, 0,
pkts_burst, burst_size), 0, "Sending empty burst failed");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
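The active-backup tx test above relies on two properties: every transmitted packet goes to the single primary member, and a burst sent while the primary is down transmits nothing. A minimal standalone sketch of that selection rule (assumed behavior for illustration, not DPDK's tx path; `struct member` and `ab_tx_burst` are hypothetical names):

```c
#include <assert.h>
#include <stddef.h>

struct member {
	int link_up;
	unsigned int opackets;
};

/*
 * Hypothetical active-backup transmit: forward the whole burst to the
 * primary member if its link is up, otherwise send nothing. Only the
 * primary's opackets counter ever increases.
 */
static unsigned int ab_tx_burst(struct member *members, size_t primary,
		unsigned int burst)
{
	if (!members[primary].link_up)
		return 0;
	members[primary].opackets += burst;
	return burst;
}
```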
-#define TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT (4)
+#define TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT (4)
static int
test_activebackup_rx_burst(void)
@@ -2205,24 +2220,24 @@ test_activebackup_rx_burst(void)
int i, j, burst_size = 17;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ACTIVE_BACKUP, 0,
- TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT, 1),
- "Failed to initialize bonded device with slaves");
+ TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT, 1),
+ "Failed to initialize bonded device with members");
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(primary_port >= 0,
- "failed to get primary slave for bonded port (%d)",
+ "failed to get primary member for bonded port (%d)",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
/* Generate test bursts of packets to transmit */
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[0], burst_size, 0, 1, 0, 0, 0),
burst_size, "burst generation failed");
- /* Add rx data to slave */
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to member */
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[0], burst_size);
/* Call rx burst on bonded device */
@@ -2230,7 +2245,7 @@ test_activebackup_rx_burst(void)
&rx_pkt_burst[0], MAX_PKT_BURST), burst_size,
"rte_eth_rx_burst failed");
- if (test_params->slave_port_ids[i] == primary_port) {
+ if (test_params->member_port_ids[i] == primary_port) {
/* Verify bonded device rx count */
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
@@ -2238,27 +2253,30 @@ test_activebackup_rx_burst(void)
test_params->bonded_port_id,
(unsigned int)port_stats.ipackets, burst_size);
- /* Verify bonded slave devices rx count */
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ /* Verify bonded member devices rx count */
+ for (j = 0; j < test_params->bonded_member_count; j++) {
+ rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
if (i == j) {
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
- "Slave Port (%d) ipackets value (%u) not as "
- "expected (%d)", test_params->slave_port_ids[i],
- (unsigned int)port_stats.ipackets, burst_size);
+ "Member Port (%d) ipackets value (%u) not as "
+ "expected (%d)",
+ test_params->member_port_ids[i],
+ (unsigned int)port_stats.ipackets,
+ burst_size);
} else {
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as "
- "expected (%d)\n", test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as "
+ "expected (%d)\n",
+ test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
}
} else {
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ for (j = 0; j < test_params->bonded_member_count; j++) {
+ rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected "
- "(%d)", test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as expected "
+ "(%d)", test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
}
@@ -2275,8 +2293,8 @@ test_activebackup_rx_burst(void)
rte_eth_stats_reset(test_params->bonded_port_id);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -2285,14 +2303,14 @@ test_activebackup_verify_promiscuous_enable_disable(void)
int i, primary_port, promiscuous_en;
int ret;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in round robin mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ACTIVE_BACKUP, 0, 4, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(primary_port >= 0,
- "failed to get primary slave for bonded port (%d)",
+ "failed to get primary member for bonded port (%d)",
test_params->bonded_port_id);
ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
@@ -2304,17 +2322,17 @@ test_activebackup_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not enabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
- if (primary_port == test_params->slave_port_ids[i]) {
+ test_params->member_port_ids[i]);
+ if (primary_port == test_params->member_port_ids[i]) {
TEST_ASSERT_EQUAL(promiscuous_en, 1,
- "slave port (%d) promiscuous mode not enabled",
- test_params->slave_port_ids[i]);
+ "member port (%d) promiscuous mode not enabled",
+ test_params->member_port_ids[i]);
} else {
TEST_ASSERT_EQUAL(promiscuous_en, 0,
- "slave port (%d) promiscuous mode enabled",
- test_params->slave_port_ids[i]);
+ "member port (%d) promiscuous mode enabled",
+ test_params->member_port_ids[i]);
}
}
@@ -2328,16 +2346,16 @@ test_activebackup_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled\n",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_EQUAL(promiscuous_en, 0,
- "slave port (%d) promiscuous mode not disabled\n",
- test_params->slave_port_ids[i]);
+ "member port (%d) promiscuous mode not disabled\n",
+ test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -2346,19 +2364,21 @@ test_activebackup_verify_mac_assignment(void)
struct rte_ether_addr read_mac_addr;
struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+ &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+ test_params->member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1],
+ &expected_mac_addr_1),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- /* Initialize bonded device with 2 slaves in active backup mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 2 members in active backup mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ACTIVE_BACKUP, 0, 2, 1),
- "Failed to initialize bonded device with slaves");
+ "Failed to initialize bonded device with members");
- /* Verify that bonded MACs is that of first slave and that the other slave
+ /* Verify that the bonded MAC is that of the first member and that the other member
* MAC hasn't been changed */
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -2368,27 +2388,27 @@ test_activebackup_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[1]);
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_EQUAL(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[1]), 0,
+ test_params->member_port_ids[1]), 0,
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[1]);
+ test_params->bonded_port_id, test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -2398,24 +2418,26 @@ test_activebackup_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[1]);
- /* stop / start bonded device and verify that primary MAC address is
- * propagated to bonded device and slaves */
+ /*
+ * stop / start bonded device and verify that primary MAC address is
+ * propagated to bonded device and members.
+ */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -2432,21 +2454,21 @@ test_activebackup_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[1]);
/* Set explicit MAC address */
TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_set(
@@ -2462,36 +2484,36 @@ test_activebackup_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of bonded port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of bonded port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of bonded port",
+ test_params->member_port_ids[1]);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
-test_activebackup_verify_slave_link_status_change_failover(void)
+test_activebackup_verify_member_link_status_change_failover(void)
{
- struct rte_mbuf *pkt_burst[TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *pkt_burst[TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count, primary_port;
+ int i, burst_size, member_count, primary_port;
burst_size = 21;
@@ -2502,96 +2524,96 @@ test_activebackup_verify_slave_link_status_change_failover(void)
&pkt_burst[0][0], burst_size, 0, 1, 0, 0, 0), burst_size,
"generate_test_burst failed");
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in round robin mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ACTIVE_BACKUP, 0,
- TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT, 1),
- "Failed to initialize bonded device with slaves");
+ TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT, 1),
+ "Failed to initialize bonded device with members");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify current member count and active member count are as expected */
+ member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of slaves (%d) is not as expected (%d).",
- slave_count, 4);
+ TEST_ASSERT_EQUAL(member_count, 4,
+ "Number of members (%d) is not as expected (%d).",
+ member_count, 4);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 4);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count, 4,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 4);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+ TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[0],
"Primary port not as expected");
- /* Bring 2 slaves down and verify active slave count */
+ /* Bring 2 members down and verify active member count */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->member_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->member_port_ids[3], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 2);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 2,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 2);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 1);
+ test_params->member_port_ids[1], 1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 1);
+ test_params->member_port_ids[3], 1);
- /* Bring primary port down, verify that active slave count is 3 and primary
+ /* Bring primary port down, verify that active member count is 3 and primary
* has changed */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 0);
+ test_params->member_port_ids[0], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS),
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS),
3,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 3);
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 3);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[2],
+ TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[2],
"Primary port not as expected");
- /* Verify that pkts are sent on new primary slave */
+ /* Verify that pkts are sent on new primary member */
TEST_ASSERT_EQUAL(rte_eth_tx_burst(
test_params->bonded_port_id, 0, &pkt_burst[0][0],
burst_size), burst_size, "rte_eth_tx_burst failed");
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[2]);
+ test_params->member_port_ids[2]);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[3]);
+ test_params->member_port_ids[3]);
/* Generate packet burst for testing */
- for (i = 0; i < TEST_ACTIVE_BACKUP_RX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_ACTIVE_BACKUP_RX_BURST_MEMBER_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0), burst_size,
"generate_test_burst failed");
virtual_ethdev_add_mbufs_to_rx_queue(
- test_params->slave_port_ids[i], &pkt_burst[i][0], burst_size);
+ test_params->member_port_ids[i], &pkt_burst[i][0], burst_size);
}
TEST_ASSERT_EQUAL(rte_eth_rx_burst(
@@ -2604,28 +2626,28 @@ test_activebackup_verify_slave_link_status_change_failover(void)
"(%d) port_stats.ipackets not as expected",
test_params->bonded_port_id);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[2]);
+ test_params->member_port_ids[2]);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[3]);
+ test_params->member_port_ids[3]);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
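The failover test above expects the new primary to be member port 2 after ports 1 and 3 bounce and port 0 fails. That follows if the active list is order-sensitive: link-down removes a port, link-up re-appends it at the tail, and the primary is the head of the list. A minimal sketch of that bookkeeping (an assumed model for illustration, not the bonding driver's data structures):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_MEMBERS 8

struct bond {
	int active[MAX_MEMBERS];
	size_t n_active;
};

/* Remove a port from the active list, shifting later entries down. */
static void link_down(struct bond *b, int port)
{
	size_t i, j;

	for (i = 0; i < b->n_active; i++) {
		if (b->active[i] == port) {
			for (j = i; j + 1 < b->n_active; j++)
				b->active[j] = b->active[j + 1];
			b->n_active--;
			return;
		}
	}
}

/* Re-append a recovered port at the tail of the active list. */
static void link_up(struct bond *b, int port)
{
	b->active[b->n_active++] = port;
}

/* The primary is simply the head of the active list. */
static int primary(const struct bond *b)
{
	return b->n_active ? b->active[0] : -1;
}
```

Replaying the test sequence: [0,1,2,3] → down 1, down 3 → [0,2] → up 1, up 3 → [0,2,1,3] → down 0 → [2,1,3], so port 2 becomes primary.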
/** Balance Mode Tests */
@@ -2633,9 +2655,9 @@ test_activebackup_verify_slave_link_status_change_failover(void)
static int
test_balance_xmit_policy_configuration(void)
{
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_ACTIVE_BACKUP, 0, 2, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ "Failed to initialize_bonded_device_with_members.");
/* Invalid port id */
TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_set(
@@ -2644,7 +2666,7 @@ test_balance_xmit_policy_configuration(void)
/* Set xmit policy on non bonded device */
TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_set(
- test_params->slave_port_ids[0], BALANCE_XMIT_POLICY_LAYER2),
+ test_params->member_port_ids[0], BALANCE_XMIT_POLICY_LAYER2),
"Expected call to failed as invalid port specified.");
@@ -2677,25 +2699,25 @@ test_balance_xmit_policy_configuration(void)
TEST_ASSERT_FAIL(rte_eth_bond_xmit_policy_get(INVALID_PORT_ID),
"Expected call to failed as invalid port specified.");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT (2)
+#define TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT (2)
static int
test_balance_l2_tx_burst(void)
{
- struct rte_mbuf *pkts_burst[TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
- int burst_size[TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT] = { 10, 15 };
+ struct rte_mbuf *pkts_burst[TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
+ int burst_size[TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT] = { 10, 15 };
uint16_t pktlen;
int i;
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
- BONDING_MODE_BALANCE, 0, TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+ BONDING_MODE_BALANCE, 0, TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT, 1),
+ "Failed to initialize_bonded_device_with_members.");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER2),
@@ -2730,7 +2752,7 @@ test_balance_l2_tx_burst(void)
"failed to generate packet burst");
/* Send burst 1 on bonded port */
- for (i = 0; i < TEST_BALANCE_L2_TX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_BALANCE_L2_TX_BURST_MEMBER_COUNT; i++) {
TEST_ASSERT_EQUAL(rte_eth_tx_burst(test_params->bonded_port_id, 0,
&pkts_burst[i][0], burst_size[i]),
burst_size[i], "Failed to transmit packet burst");
@@ -2745,24 +2767,24 @@ test_balance_l2_tx_burst(void)
burst_size[0] + burst_size[1]);
- /* Verify slave ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify member ports tx stats */
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size[0],
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0], (unsigned int)port_stats.opackets,
burst_size[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size[1],
- "Slave Port (%d) opackets value (%u) not as expected (%d)\n",
- test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+ "Member Port (%d) opackets value (%u) not as expected (%d)\n",
+ test_params->member_port_ids[1], (unsigned int)port_stats.opackets,
burst_size[1]);
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try to transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -2770,8 +2792,8 @@ test_balance_l2_tx_burst(void)
test_params->bonded_port_id, 0, &pkts_burst[0][0], burst_size[0]),
0, "Expected zero packet");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -2785,9 +2807,9 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BALANCE, 0, 2, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ "Failed to initialize_bonded_device_with_members.");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER23),
@@ -2825,24 +2847,24 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
nb_tx_1 + nb_tx_2);
- /* Verify slave ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify member ports tx stats */
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_1,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0], (unsigned int)port_stats.opackets,
nb_tx_1);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_2,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[1], (unsigned int)port_stats.opackets,
nb_tx_2);
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try to transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -2851,8 +2873,8 @@ balance_l23_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
burst_size_1), 0, "Expected zero packet");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -2897,9 +2919,9 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BALANCE, 0, 2, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ "Failed to initialize_bonded_device_with_members.");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
test_params->bonded_port_id, BALANCE_XMIT_POLICY_LAYER34),
@@ -2938,24 +2960,24 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
nb_tx_1 + nb_tx_2);
- /* Verify slave ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify member ports tx stats */
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_1,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.opackets,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0], (unsigned int)port_stats.opackets,
nb_tx_1);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)nb_tx_2,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1], (unsigned int)port_stats.opackets,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[1], (unsigned int)port_stats.opackets,
nb_tx_2);
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try to transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -2963,8 +2985,8 @@ balance_l34_tx_burst(uint8_t vlan_enabled, uint8_t ipv4,
test_params->bonded_port_id, 0, pkts_burst_1,
burst_size_1), 0, "Expected zero packet");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -3003,27 +3025,27 @@ test_balance_l34_tx_burst_ipv6_toggle_udp_port(void)
return balance_l34_tx_burst(0, 0, 0, 0, 1);
}
-#define TEST_BAL_SLAVE_TX_FAIL_SLAVE_COUNT (2)
-#define TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 (40)
-#define TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2 (20)
-#define TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT (25)
-#define TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX (0)
+#define TEST_BAL_MEMBER_TX_FAIL_MEMBER_COUNT (2)
+#define TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 (40)
+#define TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2 (20)
+#define TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT (25)
+#define TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX (0)
static int
-test_balance_tx_burst_slave_tx_fail(void)
+test_balance_tx_burst_member_tx_fail(void)
{
- struct rte_mbuf *pkts_burst_1[TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1];
- struct rte_mbuf *pkts_burst_2[TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2];
+ struct rte_mbuf *pkts_burst_1[TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1];
+ struct rte_mbuf *pkts_burst_2[TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2];
- struct rte_mbuf *expected_fail_pkts[TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT];
+ struct rte_mbuf *expected_fail_pkts[TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT];
struct rte_eth_stats port_stats;
int i, first_tx_fail_idx, tx_count_1, tx_count_2;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BALANCE, 0,
- TEST_BAL_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+ TEST_BAL_MEMBER_TX_FAIL_MEMBER_COUNT, 1),
"Failed to initialise bonded device");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
@@ -3033,46 +3055,48 @@ test_balance_tx_burst_slave_tx_fail(void)
/* Generate test bursts for transmission */
TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst_1,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1, 0, 0, 0, 0, 0),
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1,
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1, 0, 0, 0, 0, 0),
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1,
"Failed to generate test packet burst 1");
- first_tx_fail_idx = TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT;
+ first_tx_fail_idx = TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT;
/* copy mbuf references for expected transmission failures */
- for (i = 0; i < TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT; i++)
+ for (i = 0; i < TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT; i++)
expected_fail_pkts[i] = pkts_burst_1[i + first_tx_fail_idx];
TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst_2,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2, 0, 0, 1, 0, 0),
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2, 0, 0, 1, 0, 0),
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2,
"Failed to generate test packet burst 2");
- /* Set virtual slave TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX to only fail
- * transmission of TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT packets of burst */
+ /*
+ * Set virtual member TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX to only fail
+ * transmission of TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT packets of burst.
+ */
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
+ test_params->member_port_ids[TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
0);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX],
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+ test_params->member_port_ids[TEST_BAL_MEMBER_TX_FAIL_FAILING_MEMBER_IDX],
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
/* Transmit burst 1 */
tx_count_1 = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst_1,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1);
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1);
- TEST_ASSERT_EQUAL(tx_count_1, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT,
+ TEST_ASSERT_EQUAL(tx_count_1, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT,
"Transmitted (%d) packets, expected to transmit (%d) packets",
- tx_count_1, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+ tx_count_1, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
/* Verify that failed packet are expected failed packets */
- for (i = 0; i < TEST_RR_SLAVE_TX_FAIL_PACKETS_COUNT; i++) {
+ for (i = 0; i < TEST_RR_MEMBER_TX_FAIL_PACKETS_COUNT; i++) {
TEST_ASSERT_EQUAL(expected_fail_pkts[i], pkts_burst_1[i + tx_count_1],
"expected mbuf (%d) pointer %p not expected pointer %p",
i, expected_fail_pkts[i], pkts_burst_1[i + tx_count_1]);
@@ -3080,94 +3104,94 @@ test_balance_tx_burst_slave_tx_fail(void)
/* Transmit burst 2 */
tx_count_2 = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst_2,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
- TEST_ASSERT_EQUAL(tx_count_2, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
+ TEST_ASSERT_EQUAL(tx_count_2, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2,
"Transmitted (%d) packets, expected to transmit (%d) packets",
- tx_count_2, TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+ tx_count_2, TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
/* Verify bonded port tx stats */
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)((TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT) +
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2),
+ (uint64_t)((TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT) +
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2),
"Bonded Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- (TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT) +
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+ (TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT) +
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
- /* Verify slave ports tx stats */
+ /* Verify member ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0],
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0],
(unsigned int)port_stats.opackets,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_1 -
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_1 -
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2,
- "Slave Port (%d) opackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1],
+ (uint64_t)TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2,
+ "Member Port (%d) opackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[1],
(unsigned int)port_stats.opackets,
- TEST_BAL_SLAVE_TX_FAIL_BURST_SIZE_2);
+ TEST_BAL_MEMBER_TX_FAIL_BURST_SIZE_2);
/* Verify that all mbufs have a ref value of zero */
TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkts_burst_1[tx_count_1],
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT, 1),
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT, 1),
"mbufs refcnts not as expected");
free_mbufs(&pkts_burst_1[tx_count_1],
- TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT);
+ TEST_BAL_MEMBER_TX_FAIL_PACKETS_COUNT);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_BALANCE_RX_BURST_SLAVE_COUNT (3)
+#define TEST_BALANCE_RX_BURST_MEMBER_COUNT (3)
static int
test_balance_rx_burst(void)
{
- struct rte_mbuf *gen_pkt_burst[TEST_BALANCE_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *gen_pkt_burst[TEST_BALANCE_RX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- int burst_size[TEST_BALANCE_RX_BURST_SLAVE_COUNT] = { 10, 5, 30 };
+ int burst_size[TEST_BALANCE_RX_BURST_MEMBER_COUNT] = { 10, 5, 30 };
int i, j;
memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 3 members in balance mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BALANCE, 0, 3, 1),
"Failed to initialise bonded device");
/* Generate test bursts of packets to transmit */
- for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_BALANCE_RX_BURST_MEMBER_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[i][0], burst_size[i], 0, 0, 1,
0, 0), burst_size[i],
"failed to generate packet burst");
}
- /* Add rx data to slaves */
- for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to members */
+ for (i = 0; i < TEST_BALANCE_RX_BURST_MEMBER_COUNT; i++) {
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[i][0], burst_size[i]);
}
@@ -3187,33 +3211,33 @@ test_balance_rx_burst(void)
burst_size[0] + burst_size[1] + burst_size[2]);
- /* Verify bonded slave devices rx counts */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify bonded member devices rx counts */
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0],
(unsigned int)port_stats.ipackets, burst_size[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[1], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[1], (unsigned int)port_stats.ipackets,
burst_size[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[2], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[2], (unsigned int)port_stats.ipackets,
burst_size[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[3], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[3], (unsigned int)port_stats.ipackets,
0);
/* free mbufs */
- for (i = 0; i < TEST_BALANCE_RX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_BALANCE_RX_BURST_MEMBER_COUNT; i++) {
for (j = 0; j < MAX_PKT_BURST; j++) {
if (gen_pkt_burst[i][j] != NULL) {
rte_pktmbuf_free(gen_pkt_burst[i][j]);
@@ -3222,8 +3246,8 @@ test_balance_rx_burst(void)
}
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -3232,8 +3256,8 @@ test_balance_verify_promiscuous_enable_disable(void)
int i;
int ret;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in balance mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BALANCE, 0, 4, 1),
"Failed to initialise bonded device");
@@ -3246,11 +3270,11 @@ test_balance_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not enabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]), 1,
+ test_params->member_port_ids[i]), 1,
"Port (%d) promiscuous mode not enabled",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -3262,15 +3286,15 @@ test_balance_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]), 0,
+ test_params->member_port_ids[i]), 0,
"Port (%d) promiscuous mode not disabled",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -3279,19 +3303,21 @@ test_balance_verify_mac_assignment(void)
struct rte_ether_addr read_mac_addr;
struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+ &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+ test_params->member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1],
+ &expected_mac_addr_1),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- /* Initialize bonded device with 2 slaves in active backup mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 2 members in balance mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BALANCE, 0, 2, 1),
"Failed to initialise bonded device");
- /* Verify that bonded MACs is that of first slave and that the other slave
+ /* Verify that bonded MAC is that of first member and that the other member
* MAC hasn't been changed */
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -3301,27 +3327,27 @@ test_balance_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[1]);
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[1]),
+ test_params->member_port_ids[1]),
"Failed to set bonded port (%d) primary port to (%d)\n",
- test_params->bonded_port_id, test_params->slave_port_ids[1]);
+ test_params->bonded_port_id, test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -3331,24 +3357,26 @@ test_balance_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[1]);
- /* stop / start bonded device and verify that primary MAC address is
- * propagated to bonded device and slaves */
+ /*
+ * stop / start bonded device and verify that primary MAC address is
+ * propagated to bonded device and members.
+ */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -3365,21 +3393,21 @@ test_balance_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[1]);
/* Set explicit MAC address */
TEST_ASSERT_SUCCESS(rte_eth_bond_mac_address_set(
@@ -3395,44 +3423,44 @@ test_balance_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of bonded port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected\n",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not as expected\n",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of bonded port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of bonded port",
+ test_params->member_port_ids[1]);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_BALANCE_LINK_STATUS_SLAVE_COUNT (4)
+#define TEST_BALANCE_LINK_STATUS_MEMBER_COUNT (4)
static int
-test_balance_verify_slave_link_status_change_behaviour(void)
+test_balance_verify_member_link_status_change_behaviour(void)
{
- struct rte_mbuf *pkt_burst[TEST_BALANCE_LINK_STATUS_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *pkt_burst[TEST_BALANCE_LINK_STATUS_MEMBER_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count;
+ int i, burst_size, member_count;
memset(pkt_burst, 0, sizeof(pkt_burst));
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
- BONDING_MODE_BALANCE, 0, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT, 1),
+ /* Initialize bonded device with 4 members in balance mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+ BONDING_MODE_BALANCE, 0, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT, 1),
"Failed to initialise bonded device");
TEST_ASSERT_SUCCESS(rte_eth_bond_xmit_policy_set(
@@ -3440,32 +3468,34 @@ test_balance_verify_slave_link_status_change_behaviour(void)
"Failed to set balance xmit policy.");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify current member count and active member count are as expected */
+ member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT,
- "Number of slaves (%d) is not as expected (%d).",
- slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT);
+ TEST_ASSERT_EQUAL(member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT,
+ "Number of members (%d) is not as expected (%d).",
+ member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, TEST_BALANCE_LINK_STATUS_SLAVE_COUNT);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, TEST_BALANCE_LINK_STATUS_MEMBER_COUNT);
- /* Set 2 slaves link status to down */
+ /* Set 2 members' link status to down */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->member_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->member_port_ids[3], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 2);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 2,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 2);
- /* Send to sets of packet burst and verify that they are balanced across
- * slaves */
+ /*
+ * Send two sets of packet bursts and verify that they are balanced across
+ * members.
+ */
burst_size = 21;
TEST_ASSERT_EQUAL(generate_test_burst(
@@ -3491,27 +3521,27 @@ test_balance_verify_slave_link_status_change_behaviour(void)
test_params->bonded_port_id, (int)port_stats.opackets,
burst_size + burst_size);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets (%d) not as expected (%d).",
- test_params->slave_port_ids[0], (int)port_stats.opackets,
+ test_params->member_port_ids[0], (int)port_stats.opackets,
burst_size);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets (%d) not as expected (%d).",
- test_params->slave_port_ids[2], (int)port_stats.opackets,
+ test_params->member_port_ids[2], (int)port_stats.opackets,
burst_size);
- /* verify that all packets get send on primary slave when no other slaves
+ /* verify that all packets get sent on primary member when no other members
* are available */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[2], 0);
+ test_params->member_port_ids[2], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 1,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 1);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 1,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 1);
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[1][0], burst_size, 0, 1, 1, 0, 0), burst_size,
@@ -3528,31 +3558,31 @@ test_balance_verify_slave_link_status_change_behaviour(void)
test_params->bonded_port_id, (int)port_stats.opackets,
burst_size + burst_size + burst_size);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size + burst_size),
"(%d) port_stats.opackets (%d) not as expected (%d).",
- test_params->slave_port_ids[0], (int)port_stats.opackets,
+ test_params->member_port_ids[0], (int)port_stats.opackets,
burst_size + burst_size);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 0);
+ test_params->member_port_ids[0], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 1);
+ test_params->member_port_ids[1], 1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[2], 1);
+ test_params->member_port_ids[2], 1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 1);
+ test_params->member_port_ids[3], 1);
- for (i = 0; i < TEST_BALANCE_LINK_STATUS_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_BALANCE_LINK_STATUS_MEMBER_COUNT; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0), burst_size,
"Failed to generate packet burst");
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&pkt_burst[i][0], burst_size);
}
- /* Verify that pkts are not received on slaves with link status down */
+ /* Verify that pkts are not received on members with link status down */
rte_eth_rx_burst(test_params->bonded_port_id, 0, rx_pkt_burst,
MAX_PKT_BURST);
@@ -3564,8 +3594,8 @@ test_balance_verify_slave_link_status_change_behaviour(void)
test_params->bonded_port_id, (int)port_stats.ipackets,
burst_size * 3);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -3576,7 +3606,7 @@ test_broadcast_tx_burst(void)
struct rte_eth_stats port_stats;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BROADCAST, 0, 2, 1),
"Failed to initialise bonded device");
@@ -3590,7 +3620,7 @@ test_broadcast_tx_burst(void)
pktlen = initialize_ipv4_header(test_params->pkt_ipv4_hdr, src_addr,
dst_addr_0, pktlen);
- burst_size = 20 * test_params->bonded_slave_count;
+ burst_size = 20 * test_params->bonded_member_count;
TEST_ASSERT(burst_size < MAX_PKT_BURST,
"Burst size specified is greater than supported.");
@@ -3611,25 +3641,25 @@ test_broadcast_tx_burst(void)
/* Verify bonded port tx stats */
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)burst_size * test_params->bonded_slave_count,
+ (uint64_t)burst_size * test_params->bonded_member_count,
"Bonded Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
burst_size);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats);
+ /* Verify member ports tx stats */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ rte_eth_stats_get(test_params->member_port_ids[i], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
- "Slave Port (%d) opackets value (%u) not as expected (%d)\n",
+ "Member Port (%d) opackets value (%u) not as expected (%d)\n",
test_params->bonded_port_id,
(unsigned int)port_stats.opackets, burst_size);
}
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and attempt to transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -3637,159 +3667,161 @@ test_broadcast_tx_burst(void)
test_params->bonded_port_id, 0, pkts_burst, burst_size), 0,
"transmitted an unexpected number of packets");
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_BCAST_SLAVE_TX_FAIL_SLAVE_COUNT (3)
-#define TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE (40)
-#define TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT (15)
-#define TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT (10)
+#define TEST_BCAST_MEMBER_TX_FAIL_MEMBER_COUNT (3)
+#define TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE (40)
+#define TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT (15)
+#define TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT (10)
static int
-test_broadcast_tx_burst_slave_tx_fail(void)
+test_broadcast_tx_burst_member_tx_fail(void)
{
- struct rte_mbuf *pkts_burst[TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE];
- struct rte_mbuf *expected_fail_pkts[TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT];
+ struct rte_mbuf *pkts_burst[TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE];
+ struct rte_mbuf *expected_fail_pkts[TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT];
struct rte_eth_stats port_stats;
int i, tx_count;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BROADCAST, 0,
- TEST_BCAST_SLAVE_TX_FAIL_SLAVE_COUNT, 1),
+ TEST_BCAST_MEMBER_TX_FAIL_MEMBER_COUNT, 1),
"Failed to initialise bonded device");
/* Generate test bursts for transmission */
TEST_ASSERT_EQUAL(generate_test_burst(pkts_burst,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE, 0, 0, 0, 0, 0),
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE,
+ TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE, 0, 0, 0, 0, 0),
+ TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE,
"Failed to generate test packet burst");
- for (i = 0; i < TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT; i++) {
- expected_fail_pkts[i] = pkts_burst[TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT + i];
+ for (i = 0; i < TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT; i++) {
+ expected_fail_pkts[i] = pkts_burst[TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT + i];
}
- /* Set virtual slave TEST_BAL_SLAVE_TX_FAIL_FAILING_SLAVE_IDX to only fail
- * transmission of TEST_BAL_SLAVE_TX_FAIL_PACKETS_COUNT packets of burst */
+ /*
+ * Set each virtual member to fail transmission of the trailing
+ * TEST_BCAST_MEMBER_TX_FAIL_MIN/MAX_PACKETS_COUNT packets of the burst.
+ */
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[0],
+ test_params->member_port_ids[0],
0);
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[1],
+ test_params->member_port_ids[1],
0);
virtual_ethdev_tx_burst_fn_set_success(
- test_params->slave_port_ids[2],
+ test_params->member_port_ids[2],
0);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[0],
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+ test_params->member_port_ids[0],
+ TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[1],
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+ test_params->member_port_ids[1],
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
virtual_ethdev_tx_burst_fn_set_tx_pkt_fail_count(
- test_params->slave_port_ids[2],
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+ test_params->member_port_ids[2],
+ TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
/* Transmit burst */
tx_count = rte_eth_tx_burst(test_params->bonded_port_id, 0, pkts_burst,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE);
+ TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE);
- TEST_ASSERT_EQUAL(tx_count, TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT,
+ TEST_ASSERT_EQUAL(tx_count, TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT,
"Transmitted (%d) packets, expected to transmit (%d) packets",
- tx_count, TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+ tx_count, TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
/* Verify that failed packet are expected failed packets */
- for (i = 0; i < TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT; i++) {
+ for (i = 0; i < TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT; i++) {
TEST_ASSERT_EQUAL(expected_fail_pkts[i], pkts_burst[i + tx_count],
"expected mbuf (%d) pointer %p not expected pointer %p",
i, expected_fail_pkts[i], pkts_burst[i + tx_count]);
}
- /* Verify slave ports tx stats */
+ /* Verify member ports tx stats */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT,
+ (uint64_t)TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT,
"Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+ TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT,
+ (uint64_t)TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT,
"Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+ TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets,
- (uint64_t)TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT,
+ (uint64_t)TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT,
"Port (%d) opackets value (%u) not as expected (%d)",
test_params->bonded_port_id, (unsigned int)port_stats.opackets,
- TEST_BCAST_SLAVE_TX_FAIL_BURST_SIZE -
- TEST_BCAST_SLAVE_TX_FAIL_MAX_PACKETS_COUNT);
+ TEST_BCAST_MEMBER_TX_FAIL_BURST_SIZE -
+ TEST_BCAST_MEMBER_TX_FAIL_MAX_PACKETS_COUNT);
/* Verify that all mbufs who transmission failed have a ref value of one */
TEST_ASSERT_SUCCESS(verify_mbufs_ref_count(&pkts_burst[tx_count],
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT, 1),
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT, 1),
"mbufs refcnts not as expected");
free_mbufs(&pkts_burst[tx_count],
- TEST_BCAST_SLAVE_TX_FAIL_MIN_PACKETS_COUNT);
+ TEST_BCAST_MEMBER_TX_FAIL_MIN_PACKETS_COUNT);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define BROADCAST_RX_BURST_NUM_OF_SLAVES (3)
+#define BROADCAST_RX_BURST_NUM_OF_MEMBERS (3)
static int
test_broadcast_rx_burst(void)
{
- struct rte_mbuf *gen_pkt_burst[BROADCAST_RX_BURST_NUM_OF_SLAVES][MAX_PKT_BURST];
+ struct rte_mbuf *gen_pkt_burst[BROADCAST_RX_BURST_NUM_OF_MEMBERS][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- int burst_size[BROADCAST_RX_BURST_NUM_OF_SLAVES] = { 10, 5, 30 };
+ int burst_size[BROADCAST_RX_BURST_NUM_OF_MEMBERS] = { 10, 5, 30 };
int i, j;
memset(gen_pkt_burst, 0, sizeof(gen_pkt_burst));
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 3 members in broadcast mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BROADCAST, 0, 3, 1),
"Failed to initialise bonded device");
/* Generate test bursts of packets to transmit */
- for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
+ for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_MEMBERS; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[i][0], burst_size[i], 0, 0, 1, 0, 0),
burst_size[i], "failed to generate packet burst");
}
- /* Add rx data to slave 0 */
- for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to each member */
+ for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_MEMBERS; i++) {
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[i][0], burst_size[i]);
}
@@ -3810,33 +3842,33 @@ test_broadcast_rx_burst(void)
burst_size[0] + burst_size[1] + burst_size[2]);
- /* Verify bonded slave devices rx counts */
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ /* Verify bonded member devices rx counts */
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[0],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[0], (unsigned int)port_stats.ipackets,
burst_size[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[1],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[0], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[1], (unsigned int)port_stats.ipackets,
burst_size[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size[2],
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[2], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[2], (unsigned int)port_stats.ipackets,
burst_size[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, 0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)",
- test_params->slave_port_ids[3], (unsigned int)port_stats.ipackets,
+ "Member Port (%d) ipackets value (%u) not as expected (%d)",
+ test_params->member_port_ids[3], (unsigned int)port_stats.ipackets,
0);
/* free mbufs allocate for rx testing */
- for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_SLAVES; i++) {
+ for (i = 0; i < BROADCAST_RX_BURST_NUM_OF_MEMBERS; i++) {
for (j = 0; j < MAX_PKT_BURST; j++) {
if (gen_pkt_burst[i][j] != NULL) {
rte_pktmbuf_free(gen_pkt_burst[i][j]);
@@ -3845,8 +3877,8 @@ test_broadcast_rx_burst(void)
}
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -3855,8 +3887,8 @@ test_broadcast_verify_promiscuous_enable_disable(void)
int i;
int ret;
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in broadcast mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BROADCAST, 0, 4, 1),
"Failed to initialise bonded device");
@@ -3870,11 +3902,11 @@ test_broadcast_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not enabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]), 1,
+ test_params->member_port_ids[i]), 1,
"Port (%d) promiscuous mode not enabled",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
ret = rte_eth_promiscuous_disable(test_params->bonded_port_id);
@@ -3886,15 +3918,15 @@ test_broadcast_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
TEST_ASSERT_EQUAL(rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]), 0,
+ test_params->member_port_ids[i]), 0,
"Port (%d) promiscuous mode not disabled",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -3905,49 +3937,55 @@ test_broadcast_verify_mac_assignment(void)
int i;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+ &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[2], &expected_mac_addr_1),
+ test_params->member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[2],
+ &expected_mac_addr_1),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[2]);
+ test_params->member_port_ids[2]);
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in broadcast mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_BROADCAST, 0, 4, 1),
"Failed to initialise bonded device");
- /* Verify that all MACs are the same as first slave added to bonded
+ /* Verify that all MACs are the same as first member added to bonded
* device */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[i]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[i]);
}
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_SUCCESS(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[2]),
+ test_params->member_port_ids[2]),
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[i]);
+ test_params->bonded_port_id, test_params->member_port_ids[2]);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address has changed to that of primary "
+ "member port (%d) mac address has changed to that of primary "
"port without stop/start toggle of bonded device",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
}
- /* stop / start bonded device and verify that primary MAC address is
- * propagated to bonded device and slaves */
+ /*
+ * stop / start bonded device and verify that primary MAC address is
+ * propagated to bonded device and members.
+ */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -3962,16 +4000,17 @@ test_broadcast_verify_mac_assignment(void)
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
"bonded port (%d) mac address not set to that of new primary port",
- test_params->slave_port_ids[i]);
+ test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of new primary "
- "port", test_params->slave_port_ids[i]);
+ "member port (%d) mac address not set to that of new primary "
+ "port", test_params->member_port_ids[i]);
}
/* Set explicit MAC address */
@@ -3986,71 +4025,72 @@ test_broadcast_verify_mac_assignment(void)
TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
"bonded port (%d) mac address not set to that of new primary port",
- test_params->slave_port_ids[i]);
+ test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[i], &read_mac_addr),
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[i],
+ &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_SUCCESS(memcmp(bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of new primary "
- "port", test_params->slave_port_ids[i]);
+ "member port (%d) mac address not set to that of new primary "
+ "port", test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define BROADCAST_LINK_STATUS_NUM_OF_SLAVES (4)
+#define BROADCAST_LINK_STATUS_NUM_OF_MEMBERS (4)
static int
-test_broadcast_verify_slave_link_status_change_behaviour(void)
+test_broadcast_verify_member_link_status_change_behaviour(void)
{
- struct rte_mbuf *pkt_burst[BROADCAST_LINK_STATUS_NUM_OF_SLAVES][MAX_PKT_BURST];
+ struct rte_mbuf *pkt_burst[BROADCAST_LINK_STATUS_NUM_OF_MEMBERS][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count;
+ int i, burst_size, member_count;
memset(pkt_burst, 0, sizeof(pkt_burst));
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
- BONDING_MODE_BROADCAST, 0, BROADCAST_LINK_STATUS_NUM_OF_SLAVES,
+ /* Initialize bonded device with 4 members in broadcast mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
+ BONDING_MODE_BROADCAST, 0, BROADCAST_LINK_STATUS_NUM_OF_MEMBERS,
1), "Failed to initialise bonded device");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify current member count and active member count are as expected */
+ member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of slaves (%d) is not as expected (%d).",
- slave_count, 4);
+ TEST_ASSERT_EQUAL(member_count, 4,
+ "Number of members (%d) is not as expected (%d).",
+ member_count, 4);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 4);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count, 4,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 4);
- /* Set 2 slaves link status to down */
+ /* Set 2 members link status to down */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->member_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->member_port_ids[3], 0);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 2,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 2);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count, 2,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 2);
- for (i = 0; i < test_params->bonded_slave_count; i++)
- rte_eth_stats_reset(test_params->slave_port_ids[i]);
+ for (i = 0; i < test_params->bonded_member_count; i++)
+ rte_eth_stats_reset(test_params->member_port_ids[i]);
- /* Verify that pkts are not sent on slaves with link status down */
+ /* Verify that pkts are not sent on members with link status down */
burst_size = 21;
TEST_ASSERT_EQUAL(generate_test_burst(
@@ -4062,43 +4102,43 @@ test_broadcast_verify_slave_link_status_change_behaviour(void)
"rte_eth_tx_burst failed\n");
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
- TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size * slave_count),
+ TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)(burst_size * member_count),
"(%d) port_stats.opackets (%d) not as expected (%d)\n",
test_params->bonded_port_id, (int)port_stats.opackets,
- burst_size * slave_count);
+ burst_size * member_count);
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (uint64_t)burst_size,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[2]);
+ test_params->member_port_ids[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, 0,
"(%d) port_stats.opackets not as expected",
- test_params->slave_port_ids[3]);
+ test_params->member_port_ids[3]);
- for (i = 0; i < BROADCAST_LINK_STATUS_NUM_OF_SLAVES; i++) {
+ for (i = 0; i < BROADCAST_LINK_STATUS_NUM_OF_MEMBERS; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[i][0], burst_size, 0, 0, 1, 0, 0),
burst_size, "failed to generate packet burst");
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&pkt_burst[i][0], burst_size);
}
- /* Verify that pkts are not received on slaves with link status down */
+ /* Verify that pkts are not received on members with link status down */
TEST_ASSERT_EQUAL(rte_eth_rx_burst(
test_params->bonded_port_id, 0, rx_pkt_burst, MAX_PKT_BURST),
burst_size + burst_size, "rte_eth_rx_burst failed");
@@ -4110,8 +4150,8 @@ test_broadcast_verify_slave_link_status_change_behaviour(void)
"(%d) port_stats.ipackets not as expected\n",
test_params->bonded_port_id);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -4146,21 +4186,21 @@ testsuite_teardown(void)
free(test_params->pkt_eth_hdr);
test_params->pkt_eth_hdr = NULL;
- /* Clean up and remove slaves from bonded device */
- remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ remove_members_and_stop_bonded_device();
}
static void
free_virtualpmd_tx_queue(void)
{
- int i, slave_port, to_free_cnt;
+ int i, member_port, to_free_cnt;
struct rte_mbuf *pkts_to_free[MAX_PKT_BURST];
/* Free tx queue of virtual pmd */
- for (slave_port = 0; slave_port < test_params->bonded_slave_count;
- slave_port++) {
+ for (member_port = 0; member_port < test_params->bonded_member_count;
+ member_port++) {
to_free_cnt = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_port],
+ test_params->member_port_ids[member_port],
pkts_to_free, MAX_PKT_BURST);
for (i = 0; i < to_free_cnt; i++)
rte_pktmbuf_free(pkts_to_free[i]);
@@ -4177,11 +4217,11 @@ test_tlb_tx_burst(void)
uint64_t sum_ports_opackets = 0, all_bond_opackets = 0, all_bond_obytes = 0;
uint16_t pktlen;
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members
(BONDING_MODE_TLB, 1, 3, 1),
"Failed to initialise bonded device");
- burst_size = 20 * test_params->bonded_slave_count;
+ burst_size = 20 * test_params->bonded_member_count;
TEST_ASSERT(burst_size < MAX_PKT_BURST,
"Burst size specified is greater than supported.\n");
@@ -4197,7 +4237,7 @@ test_tlb_tx_burst(void)
RTE_ETHER_TYPE_IPV4, 0, 0);
} else {
initialize_eth_header(test_params->pkt_eth_hdr,
- (struct rte_ether_addr *)test_params->default_slave_mac,
+ (struct rte_ether_addr *)test_params->default_member_mac,
(struct rte_ether_addr *)dst_mac_0,
RTE_ETHER_TYPE_IPV4, 0, 0);
}
@@ -4234,26 +4274,26 @@ test_tlb_tx_burst(void)
burst_size);
- /* Verify slave ports tx stats */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
- rte_eth_stats_get(test_params->slave_port_ids[i], &port_stats[i]);
+ /* Verify member ports tx stats */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
+ rte_eth_stats_get(test_params->member_port_ids[i], &port_stats[i]);
sum_ports_opackets += port_stats[i].opackets;
}
TEST_ASSERT_EQUAL(sum_ports_opackets, (uint64_t)all_bond_opackets,
- "Total packets sent by slaves is not equal to packets sent by bond interface");
+ "Total packets sent by members is not equal to packets sent by bond interface");
- /* checking if distribution of packets is balanced over slaves */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* checking if distribution of packets is balanced over members */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
TEST_ASSERT(port_stats[i].obytes > 0 &&
port_stats[i].obytes < all_bond_obytes,
- "Packets are not balanced over slaves");
+ "Packets are not balanced over members");
}
- /* Put all slaves down and try and transmit */
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ /* Put all members down and try to transmit */
+ for (i = 0; i < test_params->bonded_member_count; i++) {
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[i], 0);
+ test_params->member_port_ids[i], 0);
}
/* Send burst on bonded port */
@@ -4261,11 +4301,11 @@ test_tlb_tx_burst(void)
burst_size);
TEST_ASSERT_EQUAL(nb_tx, 0, " bad number of packet in burst");
- /* Clean ugit checkout masterp and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT (4)
+#define TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT (4)
static int
test_tlb_rx_burst(void)
@@ -4279,26 +4319,26 @@ test_tlb_rx_burst(void)
uint16_t i, j, nb_rx, burst_size = 17;
- /* Initialize bonded device with 4 slaves in transmit load balancing mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in transmit load balancing mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_TLB,
- TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT, 1, 1),
+ TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT, 1, 1),
"Failed to initialize bonded device");
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(primary_port >= 0,
- "failed to get primary slave for bonded port (%d)",
+ "failed to get primary member for bonded port (%d)",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
/* Generate test bursts of packets to transmit */
TEST_ASSERT_EQUAL(generate_test_burst(
&gen_pkt_burst[0], burst_size, 0, 1, 0, 0, 0), burst_size,
"burst generation failed");
- /* Add rx data to slave */
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[i],
+ /* Add rx data to member */
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[i],
&gen_pkt_burst[0], burst_size);
/* Call rx burst on bonded device */
@@ -4307,7 +4347,7 @@ test_tlb_rx_burst(void)
TEST_ASSERT_EQUAL(nb_rx, burst_size, "rte_eth_rx_burst failed\n");
- if (test_params->slave_port_ids[i] == primary_port) {
+ if (test_params->member_port_ids[i] == primary_port) {
/* Verify bonded device rx count */
rte_eth_stats_get(test_params->bonded_port_id, &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
@@ -4315,27 +4355,27 @@ test_tlb_rx_burst(void)
test_params->bonded_port_id,
(unsigned int)port_stats.ipackets, burst_size);
- /* Verify bonded slave devices rx count */
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ /* Verify bonded member devices rx count */
+ for (j = 0; j < test_params->bonded_member_count; j++) {
+ rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
if (i == j) {
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)burst_size,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
- test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)\n",
+ test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, burst_size);
} else {
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
- test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)\n",
+ test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
}
} else {
- for (j = 0; j < test_params->bonded_slave_count; j++) {
- rte_eth_stats_get(test_params->slave_port_ids[j], &port_stats);
+ for (j = 0; j < test_params->bonded_member_count; j++) {
+ rte_eth_stats_get(test_params->member_port_ids[j], &port_stats);
TEST_ASSERT_EQUAL(port_stats.ipackets, (uint64_t)0,
- "Slave Port (%d) ipackets value (%u) not as expected (%d)\n",
- test_params->slave_port_ids[i],
+ "Member Port (%d) ipackets value (%u) not as expected (%d)\n",
+ test_params->member_port_ids[i],
(unsigned int)port_stats.ipackets, 0);
}
}
@@ -4348,8 +4388,8 @@ test_tlb_rx_burst(void)
rte_eth_stats_reset(test_params->bonded_port_id);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -4358,14 +4398,14 @@ test_tlb_verify_promiscuous_enable_disable(void)
int i, primary_port, promiscuous_en;
int ret;
- /* Initialize bonded device with 4 slaves in transmit load balancing mode */
- TEST_ASSERT_SUCCESS( initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in transmit load balancing mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_TLB, 0, 4, 1),
"Failed to initialize bonded device");
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
TEST_ASSERT(primary_port >= 0,
- "failed to get primary slave for bonded port (%d)",
+ "failed to get primary member for bonded port (%d)",
test_params->bonded_port_id);
ret = rte_eth_promiscuous_enable(test_params->bonded_port_id);
@@ -4377,10 +4417,10 @@ test_tlb_verify_promiscuous_enable_disable(void)
TEST_ASSERT_EQUAL(promiscuous_en, (int)1,
"Port (%d) promiscuous mode not enabled\n",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
- if (primary_port == test_params->slave_port_ids[i]) {
+ test_params->member_port_ids[i]);
+ if (primary_port == test_params->member_port_ids[i]) {
TEST_ASSERT_EQUAL(promiscuous_en, (int)1,
"Port (%d) promiscuous mode not enabled\n",
test_params->bonded_port_id);
@@ -4402,16 +4442,16 @@ test_tlb_verify_promiscuous_enable_disable(void)
"Port (%d) promiscuous mode not disabled\n",
test_params->bonded_port_id);
- for (i = 0; i < test_params->bonded_slave_count; i++) {
+ for (i = 0; i < test_params->bonded_member_count; i++) {
promiscuous_en = rte_eth_promiscuous_get(
- test_params->slave_port_ids[i]);
+ test_params->member_port_ids[i]);
TEST_ASSERT_EQUAL(promiscuous_en, (int)0,
- "slave port (%d) promiscuous mode not disabled\n",
- test_params->slave_port_ids[i]);
+ "member port (%d) promiscuous mode not disabled\n",
+ test_params->member_port_ids[i]);
}
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
@@ -4420,20 +4460,24 @@ test_tlb_verify_mac_assignment(void)
struct rte_ether_addr read_mac_addr;
struct rte_ether_addr expected_mac_addr_0, expected_mac_addr_1;
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &expected_mac_addr_0),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0],
+ &expected_mac_addr_0),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &expected_mac_addr_1),
+ test_params->member_port_ids[0]);
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1],
+ &expected_mac_addr_1),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- /* Initialize bonded device with 2 slaves in active backup mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 2 members in active backup mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_TLB, 0, 2, 1),
"Failed to initialize bonded device");
- /* Verify that bonded MACs is that of first slave and that the other slave
- * MAC hasn't been changed */
+ /*
+ * Verify that the bonded MAC is that of the first member and that the other
+ * MAC hasn't been changed.
+ */
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
test_params->bonded_port_id);
@@ -4442,27 +4486,27 @@ test_tlb_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[1]);
/* change primary and verify that MAC addresses haven't changed */
TEST_ASSERT_EQUAL(rte_eth_bond_primary_set(test_params->bonded_port_id,
- test_params->slave_port_ids[1]), 0,
+ test_params->member_port_ids[1]), 0,
"Failed to set bonded port (%d) primary port to (%d)",
- test_params->bonded_port_id, test_params->slave_port_ids[1]);
+ test_params->bonded_port_id, test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->bonded_port_id, &read_mac_addr),
"Failed to get mac address (port %d)",
@@ -4472,24 +4516,26 @@ test_tlb_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[1]);
- /* stop / start bonded device and verify that primary MAC address is
- * propagated to bonded device and slaves */
+ /*
+ * stop / start bonded device and verify that primary MAC address is
+ * propagated to bonded device and members.
+ */
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params->bonded_port_id),
"Failed to stop bonded port %u",
@@ -4506,21 +4552,21 @@ test_tlb_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of primary port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_1, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of primary port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of primary port",
+ test_params->member_port_ids[1]);
/* Set explicit MAC address */
@@ -4537,36 +4583,36 @@ test_tlb_verify_mac_assignment(void)
"bonded port (%d) mac address not set to that of bonded port",
test_params->bonded_port_id);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[0], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[0], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
TEST_ASSERT_SUCCESS(memcmp(&expected_mac_addr_0, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not as expected",
- test_params->slave_port_ids[0]);
+ "member port (%d) mac address not as expected",
+ test_params->member_port_ids[0]);
- TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->slave_port_ids[1], &read_mac_addr),
+ TEST_ASSERT_SUCCESS(rte_eth_macaddr_get(test_params->member_port_ids[1], &read_mac_addr),
"Failed to get mac address (port %d)",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
TEST_ASSERT_SUCCESS(memcmp(&bonded_mac, &read_mac_addr,
sizeof(read_mac_addr)),
- "slave port (%d) mac address not set to that of bonded port",
- test_params->slave_port_ids[1]);
+ "member port (%d) mac address not set to that of bonded port",
+ test_params->member_port_ids[1]);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
static int
-test_tlb_verify_slave_link_status_change_failover(void)
+test_tlb_verify_member_link_status_change_failover(void)
{
- struct rte_mbuf *pkt_burst[TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT][MAX_PKT_BURST];
+ struct rte_mbuf *pkt_burst[TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT][MAX_PKT_BURST];
struct rte_mbuf *rx_pkt_burst[MAX_PKT_BURST] = { NULL };
struct rte_eth_stats port_stats;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
- int i, burst_size, slave_count, primary_port;
+ int i, burst_size, member_count, primary_port;
burst_size = 21;
@@ -4574,61 +4620,63 @@ test_tlb_verify_slave_link_status_change_failover(void)
- /* Initialize bonded device with 4 slaves in round robin mode */
- TEST_ASSERT_SUCCESS(initialize_bonded_device_with_slaves(
+ /* Initialize bonded device with 4 members in TLB mode */
+ TEST_ASSERT_SUCCESS(initialize_bonded_device_with_members(
BONDING_MODE_TLB, 0,
- TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT, 1),
- "Failed to initialize bonded device with slaves");
+ TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT, 1),
+ "Failed to initialize bonded device with members");
- /* Verify Current Slaves Count /Active Slave Count is */
- slave_count = rte_eth_bond_slaves_get(test_params->bonded_port_id, slaves,
+ /* Verify current member count and active member count */
+ member_count = rte_eth_bond_members_get(test_params->bonded_port_id, members,
RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, 4,
- "Number of slaves (%d) is not as expected (%d).\n",
- slave_count, 4);
+ TEST_ASSERT_EQUAL(member_count, 4,
+ "Number of members (%d) is not as expected (%d).\n",
+ member_count, 4);
- slave_count = rte_eth_bond_active_slaves_get(test_params->bonded_port_id,
- slaves, RTE_MAX_ETHPORTS);
- TEST_ASSERT_EQUAL(slave_count, (int)4,
- "Number of slaves (%d) is not as expected (%d).\n",
- slave_count, 4);
+ member_count = rte_eth_bond_active_members_get(test_params->bonded_port_id,
+ members, RTE_MAX_ETHPORTS);
+ TEST_ASSERT_EQUAL(member_count, 4,
+ "Number of members (%d) is not as expected (%d).\n",
+ member_count, 4);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[0],
+ TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[0],
"Primary port not as expected");
- /* Bring 2 slaves down and verify active slave count */
+ /* Bring 2 members down and verify active member count */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 0);
+ test_params->member_port_ids[1], 0);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 0);
+ test_params->member_port_ids[3], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 2,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 2);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 2,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 2);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[1], 1);
+ test_params->member_port_ids[1], 1);
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[3], 1);
+ test_params->member_port_ids[3], 1);
- /* Bring primary port down, verify that active slave count is 3 and primary
- * has changed */
+ /*
+ * Bring primary port down, verify that active member count is 3 and primary
+ * has changed.
+ */
virtual_ethdev_simulate_link_status_interrupt(
- test_params->slave_port_ids[0], 0);
+ test_params->member_port_ids[0], 0);
- TEST_ASSERT_EQUAL(rte_eth_bond_active_slaves_get(
- test_params->bonded_port_id, slaves, RTE_MAX_ETHPORTS), 3,
- "Number of active slaves (%d) is not as expected (%d).",
- slave_count, 3);
+ TEST_ASSERT_EQUAL(rte_eth_bond_active_members_get(
+ test_params->bonded_port_id, members, RTE_MAX_ETHPORTS), 3,
+ "Number of active members (%d) is not as expected (%d).",
+ member_count, 3);
primary_port = rte_eth_bond_primary_get(test_params->bonded_port_id);
- TEST_ASSERT_EQUAL(primary_port, test_params->slave_port_ids[2],
+ TEST_ASSERT_EQUAL(primary_port, test_params->member_port_ids[2],
"Primary port not as expected");
rte_delay_us(500000);
- /* Verify that pkts are sent on new primary slave */
+ /* Verify that pkts are sent on new primary member */
for (i = 0; i < 4; i++) {
TEST_ASSERT_EQUAL(generate_test_burst(
&pkt_burst[0][0], burst_size, 0, 1, 0, 0, 0), burst_size,
@@ -4639,36 +4687,36 @@ test_tlb_verify_slave_link_status_change_failover(void)
rte_delay_us(11000);
}
- rte_eth_stats_get(test_params->slave_port_ids[0], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[0], &port_stats);
TEST_ASSERT_EQUAL(port_stats.opackets, (int8_t)0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[0]);
+ test_params->member_port_ids[0]);
- rte_eth_stats_get(test_params->slave_port_ids[1], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[1], &port_stats);
TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[1]);
+ test_params->member_port_ids[1]);
- rte_eth_stats_get(test_params->slave_port_ids[2], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[2], &port_stats);
TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[2]);
+ test_params->member_port_ids[2]);
- rte_eth_stats_get(test_params->slave_port_ids[3], &port_stats);
+ rte_eth_stats_get(test_params->member_port_ids[3], &port_stats);
TEST_ASSERT_NOT_EQUAL(port_stats.opackets, (int8_t)0,
"(%d) port_stats.opackets not as expected\n",
- test_params->slave_port_ids[3]);
+ test_params->member_port_ids[3]);
/* Generate packet burst for testing */
- for (i = 0; i < TEST_ADAPTIVE_TRANSMIT_LOAD_BALANCING_RX_BURST_SLAVE_COUNT; i++) {
+ for (i = 0; i < TEST_ADAPTIVE_TLB_RX_BURST_MEMBER_COUNT; i++) {
if (generate_test_burst(&pkt_burst[i][0], burst_size, 0, 1, 0, 0, 0) !=
burst_size)
return -1;
virtual_ethdev_add_mbufs_to_rx_queue(
- test_params->slave_port_ids[i], &pkt_burst[i][0], burst_size);
+ test_params->member_port_ids[i], &pkt_burst[i][0], burst_size);
}
if (rte_eth_rx_burst(test_params->bonded_port_id, 0, rx_pkt_burst,
@@ -4684,11 +4732,11 @@ test_tlb_verify_slave_link_status_change_failover(void)
"(%d) port_stats.ipackets not as expected\n",
test_params->bonded_port_id);
- /* Clean up and remove slaves from bonded device */
- return remove_slaves_and_stop_bonded_device();
+ /* Clean up and remove members from bonded device */
+ return remove_members_and_stop_bonded_device();
}
-#define TEST_ALB_SLAVE_COUNT 2
+#define TEST_ALB_MEMBER_COUNT 2
static uint8_t mac_client1[] = {0x00, 0xAA, 0x55, 0xFF, 0xCC, 1};
static uint8_t mac_client2[] = {0x00, 0xAA, 0x55, 0xFF, 0xCC, 2};
@@ -4710,23 +4758,23 @@ test_alb_change_mac_in_reply_sent(void)
struct rte_ether_hdr *eth_pkt;
struct rte_arp_hdr *arp_pkt;
- int slave_idx, nb_pkts, pkt_idx;
+ int member_idx, nb_pkts, pkt_idx;
int retval = 0;
struct rte_ether_addr bond_mac, client_mac;
- struct rte_ether_addr *slave_mac1, *slave_mac2;
+ struct rte_ether_addr *member_mac1, *member_mac2;
TEST_ASSERT_SUCCESS(
- initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
- 0, TEST_ALB_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ initialize_bonded_device_with_members(BONDING_MODE_ALB,
+ 0, TEST_ALB_MEMBER_COUNT, 1),
+ "Failed to initialize_bonded_device_with_members.");
/* Flush tx queue */
rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count;
- slave_idx++) {
+ for (member_idx = 0; member_idx < test_params->bonded_member_count;
+ member_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->member_port_ids[member_idx], pkts_sent,
MAX_PKT_BURST);
}
@@ -4782,18 +4830,18 @@ test_alb_change_mac_in_reply_sent(void)
RTE_ARP_OP_REPLY);
rte_eth_tx_burst(test_params->bonded_port_id, 0, &pkt, 1);
- slave_mac1 =
- rte_eth_devices[test_params->slave_port_ids[0]].data->mac_addrs;
- slave_mac2 =
- rte_eth_devices[test_params->slave_port_ids[1]].data->mac_addrs;
+ member_mac1 =
+ rte_eth_devices[test_params->member_port_ids[0]].data->mac_addrs;
+ member_mac2 =
+ rte_eth_devices[test_params->member_port_ids[1]].data->mac_addrs;
/*
* Checking if packets are properly distributed on bonding ports. Packets
* 0 and 2 should be sent on port 0 and packets 1 and 3 on port 1.
*/
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->member_port_ids[member_idx], pkts_sent,
MAX_PKT_BURST);
for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -4802,14 +4850,14 @@ test_alb_change_mac_in_reply_sent(void)
arp_pkt = (struct rte_arp_hdr *)((char *)eth_pkt +
sizeof(struct rte_ether_hdr));
- if (slave_idx%2 == 0) {
- if (!rte_is_same_ether_addr(slave_mac1,
+ if (member_idx%2 == 0) {
+ if (!rte_is_same_ether_addr(member_mac1,
&arp_pkt->arp_data.arp_sha)) {
retval = -1;
goto test_end;
}
} else {
- if (!rte_is_same_ether_addr(slave_mac2,
+ if (!rte_is_same_ether_addr(member_mac2,
&arp_pkt->arp_data.arp_sha)) {
retval = -1;
goto test_end;
@@ -4819,7 +4867,7 @@ test_alb_change_mac_in_reply_sent(void)
}
test_end:
- retval += remove_slaves_and_stop_bonded_device();
+ retval += remove_members_and_stop_bonded_device();
return retval;
}
@@ -4832,22 +4880,22 @@ test_alb_reply_from_client(void)
struct rte_mbuf *pkt;
struct rte_mbuf *pkts_sent[MAX_PKT_BURST];
- int slave_idx, nb_pkts, pkt_idx, nb_pkts_sum = 0;
+ int member_idx, nb_pkts, pkt_idx, nb_pkts_sum = 0;
int retval = 0;
struct rte_ether_addr bond_mac, client_mac;
- struct rte_ether_addr *slave_mac1, *slave_mac2;
+ struct rte_ether_addr *member_mac1, *member_mac2;
TEST_ASSERT_SUCCESS(
- initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
- 0, TEST_ALB_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ initialize_bonded_device_with_members(BONDING_MODE_ALB,
+ 0, TEST_ALB_MEMBER_COUNT, 1),
+ "Failed to initialize_bonded_device_with_members.");
/* Flush tx queue */
rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->member_port_ids[member_idx], pkts_sent,
MAX_PKT_BURST);
}
@@ -4868,7 +4916,7 @@ test_alb_reply_from_client(void)
sizeof(struct rte_ether_hdr));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client1, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
1);
pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4880,7 +4928,7 @@ test_alb_reply_from_client(void)
sizeof(struct rte_ether_hdr));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client2, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
1);
pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4892,7 +4940,7 @@ test_alb_reply_from_client(void)
sizeof(struct rte_ether_hdr));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client3, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
1);
pkt = rte_pktmbuf_alloc(test_params->mbuf_pool);
@@ -4904,7 +4952,7 @@ test_alb_reply_from_client(void)
sizeof(struct rte_ether_hdr));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client4, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
1);
/*
@@ -4914,15 +4962,15 @@ test_alb_reply_from_client(void)
rte_eth_rx_burst(test_params->bonded_port_id, 0, pkts_sent, MAX_PKT_BURST);
rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
- slave_mac1 = rte_eth_devices[test_params->slave_port_ids[0]].data->mac_addrs;
- slave_mac2 = rte_eth_devices[test_params->slave_port_ids[1]].data->mac_addrs;
+ member_mac1 = rte_eth_devices[test_params->member_port_ids[0]].data->mac_addrs;
+ member_mac2 = rte_eth_devices[test_params->member_port_ids[1]].data->mac_addrs;
/*
- * Checking if update ARP packets were properly send on slave ports.
+ * Checking if update ARP packets were properly sent on member ports.
*/
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent, MAX_PKT_BURST);
+ test_params->member_port_ids[member_idx], pkts_sent, MAX_PKT_BURST);
nb_pkts_sum += nb_pkts;
for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -4931,14 +4979,14 @@ test_alb_reply_from_client(void)
arp_pkt = (struct rte_arp_hdr *)((char *)eth_pkt +
sizeof(struct rte_ether_hdr));
- if (slave_idx%2 == 0) {
- if (!rte_is_same_ether_addr(slave_mac1,
+ if (member_idx%2 == 0) {
+ if (!rte_is_same_ether_addr(member_mac1,
&arp_pkt->arp_data.arp_sha)) {
retval = -1;
goto test_end;
}
} else {
- if (!rte_is_same_ether_addr(slave_mac2,
+ if (!rte_is_same_ether_addr(member_mac2,
&arp_pkt->arp_data.arp_sha)) {
retval = -1;
goto test_end;
@@ -4954,7 +5002,7 @@ test_alb_reply_from_client(void)
}
test_end:
- retval += remove_slaves_and_stop_bonded_device();
+ retval += remove_members_and_stop_bonded_device();
return retval;
}
@@ -4968,21 +5016,21 @@ test_alb_receive_vlan_reply(void)
struct rte_mbuf *pkt;
struct rte_mbuf *pkts_sent[MAX_PKT_BURST];
- int slave_idx, nb_pkts, pkt_idx;
+ int member_idx, nb_pkts, pkt_idx;
int retval = 0;
struct rte_ether_addr bond_mac, client_mac;
TEST_ASSERT_SUCCESS(
- initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
- 0, TEST_ALB_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ initialize_bonded_device_with_members(BONDING_MODE_ALB,
+ 0, TEST_ALB_MEMBER_COUNT, 1),
+ "Failed to initialize_bonded_device_with_members.");
/* Flush tx queue */
rte_eth_tx_burst(test_params->bonded_port_id, 0, NULL, 0);
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->member_port_ids[member_idx], pkts_sent,
MAX_PKT_BURST);
}
@@ -5007,7 +5055,7 @@ test_alb_receive_vlan_reply(void)
arp_pkt = (struct rte_arp_hdr *)((char *)(vlan_pkt + 1));
initialize_arp_header(arp_pkt, &client_mac, &bond_mac, ip_client1, ip_host,
RTE_ARP_OP_REPLY);
- virtual_ethdev_add_mbufs_to_rx_queue(test_params->slave_port_ids[0], &pkt,
+ virtual_ethdev_add_mbufs_to_rx_queue(test_params->member_port_ids[0], &pkt,
1);
rte_eth_rx_burst(test_params->bonded_port_id, 0, pkts_sent, MAX_PKT_BURST);
@@ -5016,9 +5064,9 @@ test_alb_receive_vlan_reply(void)
/*
* Checking if VLAN headers in generated ARP Update packet are correct.
*/
- for (slave_idx = 0; slave_idx < test_params->bonded_slave_count; slave_idx++) {
+ for (member_idx = 0; member_idx < test_params->bonded_member_count; member_idx++) {
nb_pkts = virtual_ethdev_get_mbufs_from_tx_queue(
- test_params->slave_port_ids[slave_idx], pkts_sent,
+ test_params->member_port_ids[member_idx], pkts_sent,
MAX_PKT_BURST);
for (pkt_idx = 0; pkt_idx < nb_pkts; pkt_idx++) {
@@ -5049,7 +5097,7 @@ test_alb_receive_vlan_reply(void)
}
test_end:
- retval += remove_slaves_and_stop_bonded_device();
+ retval += remove_members_and_stop_bonded_device();
return retval;
}
@@ -5062,9 +5110,9 @@ test_alb_ipv4_tx(void)
retval = 0;
TEST_ASSERT_SUCCESS(
- initialize_bonded_device_with_slaves(BONDING_MODE_ALB,
- 0, TEST_ALB_SLAVE_COUNT, 1),
- "Failed to initialize_bonded_device_with_slaves.");
+ initialize_bonded_device_with_members(BONDING_MODE_ALB,
+ 0, TEST_ALB_MEMBER_COUNT, 1),
+ "Failed to initialize_bonded_device_with_members.");
burst_size = 32;
@@ -5085,7 +5133,7 @@ test_alb_ipv4_tx(void)
}
test_end:
- retval += remove_slaves_and_stop_bonded_device();
+ retval += remove_members_and_stop_bonded_device();
return retval;
}
@@ -5096,34 +5144,34 @@ static struct unit_test_suite link_bonding_test_suite = {
.unit_test_cases = {
TEST_CASE(test_create_bonded_device),
TEST_CASE(test_create_bonded_device_with_invalid_params),
- TEST_CASE(test_add_slave_to_bonded_device),
- TEST_CASE(test_add_slave_to_invalid_bonded_device),
- TEST_CASE(test_remove_slave_from_bonded_device),
- TEST_CASE(test_remove_slave_from_invalid_bonded_device),
- TEST_CASE(test_get_slaves_from_bonded_device),
- TEST_CASE(test_add_already_bonded_slave_to_bonded_device),
- TEST_CASE(test_add_remove_multiple_slaves_to_from_bonded_device),
+ TEST_CASE(test_add_member_to_bonded_device),
+ TEST_CASE(test_add_member_to_invalid_bonded_device),
+ TEST_CASE(test_remove_member_from_bonded_device),
+ TEST_CASE(test_remove_member_from_invalid_bonded_device),
+ TEST_CASE(test_get_members_from_bonded_device),
+ TEST_CASE(test_add_already_bonded_member_to_bonded_device),
+ TEST_CASE(test_add_remove_multiple_members_to_from_bonded_device),
TEST_CASE(test_start_bonded_device),
TEST_CASE(test_stop_bonded_device),
TEST_CASE(test_set_bonding_mode),
- TEST_CASE(test_set_primary_slave),
+ TEST_CASE(test_set_primary_member),
TEST_CASE(test_set_explicit_bonded_mac),
TEST_CASE(test_set_bonded_port_initialization_mac_assignment),
TEST_CASE(test_status_interrupt),
- TEST_CASE(test_adding_slave_after_bonded_device_started),
+ TEST_CASE(test_adding_member_after_bonded_device_started),
TEST_CASE(test_roundrobin_tx_burst),
- TEST_CASE(test_roundrobin_tx_burst_slave_tx_fail),
- TEST_CASE(test_roundrobin_rx_burst_on_single_slave),
- TEST_CASE(test_roundrobin_rx_burst_on_multiple_slaves),
+ TEST_CASE(test_roundrobin_tx_burst_member_tx_fail),
+ TEST_CASE(test_roundrobin_rx_burst_on_single_member),
+ TEST_CASE(test_roundrobin_rx_burst_on_multiple_members),
TEST_CASE(test_roundrobin_verify_promiscuous_enable_disable),
TEST_CASE(test_roundrobin_verify_mac_assignment),
- TEST_CASE(test_roundrobin_verify_slave_link_status_change_behaviour),
- TEST_CASE(test_roundrobin_verfiy_polling_slave_link_status_change),
+ TEST_CASE(test_roundrobin_verify_member_link_status_change_behaviour),
+ TEST_CASE(test_roundrobin_verify_polling_member_link_status_change),
TEST_CASE(test_activebackup_tx_burst),
TEST_CASE(test_activebackup_rx_burst),
TEST_CASE(test_activebackup_verify_promiscuous_enable_disable),
TEST_CASE(test_activebackup_verify_mac_assignment),
- TEST_CASE(test_activebackup_verify_slave_link_status_change_failover),
+ TEST_CASE(test_activebackup_verify_member_link_status_change_failover),
TEST_CASE(test_balance_xmit_policy_configuration),
TEST_CASE(test_balance_l2_tx_burst),
TEST_CASE(test_balance_l23_tx_burst_ipv4_toggle_ip_addr),
@@ -5137,26 +5185,26 @@ static struct unit_test_suite link_bonding_test_suite = {
TEST_CASE(test_balance_l34_tx_burst_ipv6_toggle_ip_addr),
TEST_CASE(test_balance_l34_tx_burst_vlan_ipv6_toggle_ip_addr),
TEST_CASE(test_balance_l34_tx_burst_ipv6_toggle_udp_port),
- TEST_CASE(test_balance_tx_burst_slave_tx_fail),
+ TEST_CASE(test_balance_tx_burst_member_tx_fail),
TEST_CASE(test_balance_rx_burst),
TEST_CASE(test_balance_verify_promiscuous_enable_disable),
TEST_CASE(test_balance_verify_mac_assignment),
- TEST_CASE(test_balance_verify_slave_link_status_change_behaviour),
+ TEST_CASE(test_balance_verify_member_link_status_change_behaviour),
TEST_CASE(test_tlb_tx_burst),
TEST_CASE(test_tlb_rx_burst),
TEST_CASE(test_tlb_verify_mac_assignment),
TEST_CASE(test_tlb_verify_promiscuous_enable_disable),
- TEST_CASE(test_tlb_verify_slave_link_status_change_failover),
+ TEST_CASE(test_tlb_verify_member_link_status_change_failover),
TEST_CASE(test_alb_change_mac_in_reply_sent),
TEST_CASE(test_alb_reply_from_client),
TEST_CASE(test_alb_receive_vlan_reply),
TEST_CASE(test_alb_ipv4_tx),
TEST_CASE(test_broadcast_tx_burst),
- TEST_CASE(test_broadcast_tx_burst_slave_tx_fail),
+ TEST_CASE(test_broadcast_tx_burst_member_tx_fail),
TEST_CASE(test_broadcast_rx_burst),
TEST_CASE(test_broadcast_verify_promiscuous_enable_disable),
TEST_CASE(test_broadcast_verify_mac_assignment),
- TEST_CASE(test_broadcast_verify_slave_link_status_change_behaviour),
+ TEST_CASE(test_broadcast_verify_member_link_status_change_behaviour),
TEST_CASE(test_reconfigure_bonded_device),
TEST_CASE(test_close_bonded_device),
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 21c512c94b..2de907e7f3 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -31,7 +31,7 @@
#include "test.h"
-#define SLAVE_COUNT (4)
+#define MEMBER_COUNT (4)
#define RX_RING_SIZE 1024
#define TX_RING_SIZE 1024
@@ -46,15 +46,15 @@
#define BONDED_DEV_NAME ("net_bonding_m4_bond_dev")
-#define SLAVE_DEV_NAME_FMT ("net_virt_%d")
-#define SLAVE_RX_QUEUE_FMT ("net_virt_%d_rx")
-#define SLAVE_TX_QUEUE_FMT ("net_virt_%d_tx")
+#define MEMBER_DEV_NAME_FMT ("net_virt_%d")
+#define MEMBER_RX_QUEUE_FMT ("net_virt_%d_rx")
+#define MEMBER_TX_QUEUE_FMT ("net_virt_%d_tx")
#define INVALID_SOCKET_ID (-1)
#define INVALID_PORT_ID (0xFF)
#define INVALID_BONDING_MODE (-1)
-static const struct rte_ether_addr slave_mac_default = {
+static const struct rte_ether_addr member_mac_default = {
{ 0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 }
};
@@ -70,7 +70,7 @@ static const struct rte_ether_addr slow_protocol_mac_addr = {
{ 0x01, 0x80, 0xC2, 0x00, 0x00, 0x02 }
};
-struct slave_conf {
+struct member_conf {
struct rte_ring *rx_queue;
struct rte_ring *tx_queue;
uint16_t port_id;
@@ -86,21 +86,21 @@ struct ether_vlan_hdr {
struct link_bonding_unittest_params {
uint8_t bonded_port_id;
- struct slave_conf slave_ports[SLAVE_COUNT];
+ struct member_conf member_ports[MEMBER_COUNT];
struct rte_mempool *mbuf_pool;
};
-#define TEST_DEFAULT_SLAVE_COUNT RTE_DIM(test_params.slave_ports)
-#define TEST_RX_SLAVE_COUT TEST_DEFAULT_SLAVE_COUNT
-#define TEST_TX_SLAVE_COUNT TEST_DEFAULT_SLAVE_COUNT
-#define TEST_MARKER_SLAVE_COUT TEST_DEFAULT_SLAVE_COUNT
-#define TEST_EXPIRED_SLAVE_COUNT TEST_DEFAULT_SLAVE_COUNT
-#define TEST_PROMISC_SLAVE_COUNT TEST_DEFAULT_SLAVE_COUNT
+#define TEST_DEFAULT_MEMBER_COUNT RTE_DIM(test_params.member_ports)
+#define TEST_RX_MEMBER_COUT TEST_DEFAULT_MEMBER_COUNT
+#define TEST_TX_MEMBER_COUNT TEST_DEFAULT_MEMBER_COUNT
+#define TEST_MARKER_MEMBER_COUT TEST_DEFAULT_MEMBER_COUNT
+#define TEST_EXPIRED_MEMBER_COUNT TEST_DEFAULT_MEMBER_COUNT
+#define TEST_PROMISC_MEMBER_COUNT TEST_DEFAULT_MEMBER_COUNT
static struct link_bonding_unittest_params test_params = {
.bonded_port_id = INVALID_PORT_ID,
- .slave_ports = { [0 ... SLAVE_COUNT - 1] = { .port_id = INVALID_PORT_ID} },
+ .member_ports = { [0 ... MEMBER_COUNT - 1] = { .port_id = INVALID_PORT_ID} },
.mbuf_pool = NULL,
};
@@ -120,58 +120,58 @@ static uint8_t lacpdu_rx_count[RTE_MAX_ETHPORTS] = {0, };
#define FOR_EACH(_i, _item, _array, _size) \
for (_i = 0, _item = &_array[0]; _i < _size && (_item = &_array[_i]); _i++)
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a member
* in this test.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _port pointer to &test_params->member_ports[_idx]
*/
#define FOR_EACH_PORT(_i, _port) \
- FOR_EACH(_i, _port, test_params.slave_ports, \
- RTE_DIM(test_params.slave_ports))
+ FOR_EACH(_i, _port, test_params.member_ports, \
+ RTE_DIM(test_params.member_ports))
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a member
* in this test and satisfy given condition.
*
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _port pointer to &test_params->member_ports[_idx]
* _condition condition that need to be checked
*/
#define FOR_EACH_PORT_IF(_i, _port, _condition) FOR_EACH_PORT((_i), (_port)) \
if (!!(_condition))
-/* Macro for iterating over every port that is currently a slave of a bonded
+/* Macro for iterating over every port that is currently a member of a bonded
* device.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _member pointer to &test_params->member_ports[_idx]
* */
-#define FOR_EACH_SLAVE(_i, _slave) \
- FOR_EACH_PORT_IF(_i, _slave, (_slave)->bonded != 0)
+#define FOR_EACH_MEMBER(_i, _member) \
+ FOR_EACH_PORT_IF(_i, _member, (_member)->bonded != 0)
/*
- * Returns packets from slaves TX queue.
- * slave slave port
+ * Returns packets from the member's TX queue.
+ * member member port
* buffer for packets
* size size of buffer
* return number of packets or negative error number
*/
static int
-slave_get_pkts(struct slave_conf *slave, struct rte_mbuf **buf, uint16_t size)
+member_get_pkts(struct member_conf *member, struct rte_mbuf **buf, uint16_t size)
{
- return rte_ring_dequeue_burst(slave->tx_queue, (void **)buf,
+ return rte_ring_dequeue_burst(member->tx_queue, (void **)buf,
size, NULL);
}
/*
- * Injects given packets into slaves RX queue.
- * slave slave port
+ * Injects given packets into the member's RX queue.
+ * member member port
* buffer for packets
* size number of packets to be injected
* return number of queued packets or negative error number
*/
static int
-slave_put_pkts(struct slave_conf *slave, struct rte_mbuf **buf, uint16_t size)
+member_put_pkts(struct member_conf *member, struct rte_mbuf **buf, uint16_t size)
{
- return rte_ring_enqueue_burst(slave->rx_queue, (void **)buf,
+ return rte_ring_enqueue_burst(member->rx_queue, (void **)buf,
size, NULL);
}
@@ -219,79 +219,79 @@ configure_ethdev(uint16_t port_id, uint8_t start)
}
static int
-add_slave(struct slave_conf *slave, uint8_t start)
+add_member(struct member_conf *member, uint8_t start)
{
struct rte_ether_addr addr, addr_check;
int retval;
/* Some sanity check */
- RTE_VERIFY(test_params.slave_ports <= slave &&
- slave - test_params.slave_ports < (int)RTE_DIM(test_params.slave_ports));
- RTE_VERIFY(slave->bonded == 0);
- RTE_VERIFY(slave->port_id != INVALID_PORT_ID);
+ RTE_VERIFY(test_params.member_ports <= member &&
+ member - test_params.member_ports < (int)RTE_DIM(test_params.member_ports));
+ RTE_VERIFY(member->bonded == 0);
+ RTE_VERIFY(member->port_id != INVALID_PORT_ID);
- rte_ether_addr_copy(&slave_mac_default, &addr);
- addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
+ rte_ether_addr_copy(&member_mac_default, &addr);
+ addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = member->port_id;
- rte_eth_dev_mac_addr_remove(slave->port_id, &addr);
+ rte_eth_dev_mac_addr_remove(member->port_id, &addr);
- TEST_ASSERT_SUCCESS(rte_eth_dev_mac_addr_add(slave->port_id, &addr, 0),
- "Failed to set slave MAC address");
+ TEST_ASSERT_SUCCESS(rte_eth_dev_mac_addr_add(member->port_id, &addr, 0),
+ "Failed to set member MAC address");
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bonded_port_id,
- slave->port_id),
- "Failed to add slave (idx=%u, id=%u) to bonding (id=%u)",
- (uint8_t)(slave - test_params.slave_ports), slave->port_id,
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params.bonded_port_id,
+ member->port_id),
+ "Failed to add member (idx=%u, id=%u) to bonding (id=%u)",
+ (uint8_t)(member - test_params.member_ports), member->port_id,
test_params.bonded_port_id);
- slave->bonded = 1;
+ member->bonded = 1;
if (start) {
- TEST_ASSERT_SUCCESS(rte_eth_dev_start(slave->port_id),
- "Failed to start slave %u", slave->port_id);
+ TEST_ASSERT_SUCCESS(rte_eth_dev_start(member->port_id),
+ "Failed to start member %u", member->port_id);
}
- retval = rte_eth_macaddr_get(slave->port_id, &addr_check);
- TEST_ASSERT_SUCCESS(retval, "Failed to get slave mac address: %s",
+ retval = rte_eth_macaddr_get(member->port_id, &addr_check);
+ TEST_ASSERT_SUCCESS(retval, "Failed to get member mac address: %s",
strerror(-retval));
TEST_ASSERT_EQUAL(rte_is_same_ether_addr(&addr, &addr_check), 1,
- "Slave MAC address is not as expected");
+ "Member MAC address is not as expected");
- RTE_VERIFY(slave->lacp_parnter_state == 0);
+ RTE_VERIFY(member->lacp_parnter_state == 0);
return 0;
}
static int
-remove_slave(struct slave_conf *slave)
+remove_member(struct member_conf *member)
{
- ptrdiff_t slave_idx = slave - test_params.slave_ports;
+ ptrdiff_t member_idx = member - test_params.member_ports;
- RTE_VERIFY(test_params.slave_ports <= slave &&
- slave_idx < (ptrdiff_t)RTE_DIM(test_params.slave_ports));
+ RTE_VERIFY(test_params.member_ports <= member &&
+ member_idx < (ptrdiff_t)RTE_DIM(test_params.member_ports));
- RTE_VERIFY(slave->bonded == 1);
- RTE_VERIFY(slave->port_id != INVALID_PORT_ID);
+ RTE_VERIFY(member->bonded == 1);
+ RTE_VERIFY(member->port_id != INVALID_PORT_ID);
- TEST_ASSERT_EQUAL(rte_ring_count(slave->rx_queue), 0,
- "Slave %u tx queue not empty while removing from bonding.",
- slave->port_id);
+ TEST_ASSERT_EQUAL(rte_ring_count(member->rx_queue), 0,
+ "Member %u rx queue not empty while removing from bonding.",
+ member->port_id);
- TEST_ASSERT_EQUAL(rte_ring_count(slave->rx_queue), 0,
- "Slave %u tx queue not empty while removing from bonding.",
- slave->port_id);
+ TEST_ASSERT_EQUAL(rte_ring_count(member->tx_queue), 0,
+ "Member %u tx queue not empty while removing from bonding.",
+ member->port_id);
- TEST_ASSERT_EQUAL(rte_eth_bond_slave_remove(test_params.bonded_port_id,
- slave->port_id), 0,
- "Failed to remove slave (idx=%u, id=%u) from bonding (id=%u)",
- (uint8_t)slave_idx, slave->port_id,
+ TEST_ASSERT_EQUAL(rte_eth_bond_member_remove(test_params.bonded_port_id,
+ member->port_id), 0,
+ "Failed to remove member (idx=%u, id=%u) from bonding (id=%u)",
+ (uint8_t)member_idx, member->port_id,
test_params.bonded_port_id);
- slave->bonded = 0;
- slave->lacp_parnter_state = 0;
+ member->bonded = 0;
+ member->lacp_parnter_state = 0;
return 0;
}
static void
-lacp_recv_cb(uint16_t slave_id, struct rte_mbuf *lacp_pkt)
+lacp_recv_cb(uint16_t member_id, struct rte_mbuf *lacp_pkt)
{
struct rte_ether_hdr *hdr;
struct slow_protocol_frame *slow_hdr;
@@ -304,22 +304,22 @@ lacp_recv_cb(uint16_t slave_id, struct rte_mbuf *lacp_pkt)
slow_hdr = rte_pktmbuf_mtod(lacp_pkt, struct slow_protocol_frame *);
RTE_VERIFY(slow_hdr->slow_protocol.subtype == SLOW_SUBTYPE_LACP);
- lacpdu_rx_count[slave_id]++;
+ lacpdu_rx_count[member_id]++;
rte_pktmbuf_free(lacp_pkt);
}
static int
-initialize_bonded_device_with_slaves(uint16_t slave_count, uint8_t external_sm)
+initialize_bonded_device_with_members(uint16_t member_count, uint8_t external_sm)
{
uint8_t i;
int ret;
RTE_VERIFY(test_params.bonded_port_id != INVALID_PORT_ID);
- for (i = 0; i < slave_count; i++) {
- TEST_ASSERT_SUCCESS(add_slave(&test_params.slave_ports[i], 1),
+ for (i = 0; i < member_count; i++) {
+ TEST_ASSERT_SUCCESS(add_member(&test_params.member_ports[i], 1),
"Failed to add port %u to bonded device.\n",
- test_params.slave_ports[i].port_id);
+ test_params.member_ports[i].port_id);
}
/* Reset mode 4 configuration */
@@ -345,34 +345,34 @@ initialize_bonded_device_with_slaves(uint16_t slave_count, uint8_t external_sm)
}
static int
-remove_slaves_and_stop_bonded_device(void)
+remove_members_and_stop_bonded_device(void)
{
- struct slave_conf *slave;
+ struct member_conf *member;
int retval;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
uint16_t i;
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params.bonded_port_id),
"Failed to stop bonded port %u",
test_params.bonded_port_id);
- FOR_EACH_SLAVE(i, slave)
- remove_slave(slave);
+ FOR_EACH_MEMBER(i, member)
+ remove_member(member);
- retval = rte_eth_bond_slaves_get(test_params.bonded_port_id, slaves,
- RTE_DIM(slaves));
+ retval = rte_eth_bond_members_get(test_params.bonded_port_id, members,
+ RTE_DIM(members));
TEST_ASSERT_EQUAL(retval, 0,
- "Expected bonded device %u have 0 slaves but returned %d.",
+ "Expected bonded device %u to have 0 members but returned %d.",
test_params.bonded_port_id, retval);
- FOR_EACH_PORT(i, slave) {
- TEST_ASSERT_SUCCESS(rte_eth_dev_stop(slave->port_id),
+ FOR_EACH_PORT(i, member) {
+ TEST_ASSERT_SUCCESS(rte_eth_dev_stop(member->port_id),
"Failed to stop bonded port %u",
- slave->port_id);
+ member->port_id);
- TEST_ASSERT(slave->bonded == 0,
- "Port id=%u is still marked as enslaved.", slave->port_id);
+ TEST_ASSERT(member->bonded == 0,
+ "Port id=%u is still marked as a member.", member->port_id);
}
return TEST_SUCCESS;
@@ -383,7 +383,7 @@ test_setup(void)
{
int retval, nb_mbuf_per_pool;
char name[RTE_ETH_NAME_MAX_LEN];
- struct slave_conf *port;
+ struct member_conf *port;
const uint8_t socket_id = rte_socket_id();
uint16_t i;
@@ -400,10 +400,10 @@ test_setup(void)
/* Create / initialize ring eth devs. */
FOR_EACH_PORT(i, port) {
- port = &test_params.slave_ports[i];
+ port = &test_params.member_ports[i];
if (port->rx_queue == NULL) {
- retval = snprintf(name, RTE_DIM(name), SLAVE_RX_QUEUE_FMT, i);
+ retval = snprintf(name, RTE_DIM(name), MEMBER_RX_QUEUE_FMT, i);
TEST_ASSERT(retval <= (int)RTE_DIM(name) - 1, "Name too long");
port->rx_queue = rte_ring_create(name, RX_RING_SIZE, socket_id, 0);
TEST_ASSERT(port->rx_queue != NULL,
@@ -412,7 +412,7 @@ test_setup(void)
}
if (port->tx_queue == NULL) {
- retval = snprintf(name, RTE_DIM(name), SLAVE_TX_QUEUE_FMT, i);
+ retval = snprintf(name, RTE_DIM(name), MEMBER_TX_QUEUE_FMT, i);
TEST_ASSERT(retval <= (int)RTE_DIM(name) - 1, "Name too long");
port->tx_queue = rte_ring_create(name, TX_RING_SIZE, socket_id, 0);
TEST_ASSERT_NOT_NULL(port->tx_queue,
@@ -421,7 +421,7 @@ test_setup(void)
}
if (port->port_id == INVALID_PORT_ID) {
- retval = snprintf(name, RTE_DIM(name), SLAVE_DEV_NAME_FMT, i);
+ retval = snprintf(name, RTE_DIM(name), MEMBER_DEV_NAME_FMT, i);
TEST_ASSERT(retval < (int)RTE_DIM(name) - 1, "Name too long");
retval = rte_eth_from_rings(name, &port->rx_queue, 1,
&port->tx_queue, 1, socket_id);
@@ -460,7 +460,7 @@ test_setup(void)
static void
testsuite_teardown(void)
{
- struct slave_conf *port;
+ struct member_conf *port;
uint8_t i;
/* Only stop ports.
@@ -480,7 +480,7 @@ testsuite_teardown(void)
* frame but not LACP
*/
static int
-make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
+make_lacp_reply(struct member_conf *member, struct rte_mbuf *pkt)
{
struct rte_ether_hdr *hdr;
struct slow_protocol_frame *slow_hdr;
@@ -501,11 +501,11 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
/* Change source address to partner address */
rte_ether_addr_copy(&parnter_mac_default, &slow_hdr->eth_hdr.src_addr);
slow_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
- slave->port_id;
+ member->port_id;
lacp = (struct lacpdu *) &slow_hdr->slow_protocol;
/* Save last received state */
- slave->lacp_parnter_state = lacp->actor.state;
+ member->lacp_parnter_state = lacp->actor.state;
/* Change it into LACP replay by matching parameters. */
memcpy(&lacp->partner.port_params, &lacp->actor.port_params,
sizeof(struct port_params));
@@ -523,27 +523,27 @@ make_lacp_reply(struct slave_conf *slave, struct rte_mbuf *pkt)
}
/*
- * Reads packets from given slave, search for LACP packet and reply them.
+ * Reads packets from the given member, searches for LACP packets and replies to them.
*
- * Receives burst of packets from slave. Looks for LACP packet. Drops
+ * Receives a burst of packets from the member. Looks for LACP packets. Drops
* all other packets. Prepares response LACP and sends it back.
*
* return number of LACP received and replied, -1 on error.
*/
static int
-bond_handshake_reply(struct slave_conf *slave)
+bond_handshake_reply(struct member_conf *member)
{
int retval;
struct rte_mbuf *rx_buf[MAX_PKT_BURST];
struct rte_mbuf *lacp_tx_buf[MAX_PKT_BURST];
uint16_t lacp_tx_buf_cnt = 0, i;
- retval = slave_get_pkts(slave, rx_buf, RTE_DIM(rx_buf));
- TEST_ASSERT(retval >= 0, "Getting slave %u packets failed.",
- slave->port_id);
+ retval = member_get_pkts(member, rx_buf, RTE_DIM(rx_buf));
+ TEST_ASSERT(retval >= 0, "Getting member %u packets failed.",
+ member->port_id);
for (i = 0; i < (uint16_t)retval; i++) {
- if (make_lacp_reply(slave, rx_buf[i]) == 0) {
+ if (make_lacp_reply(member, rx_buf[i]) == 0) {
/* reply with actor's LACP */
lacp_tx_buf[lacp_tx_buf_cnt++] = rx_buf[i];
} else
@@ -553,7 +553,7 @@ bond_handshake_reply(struct slave_conf *slave)
if (lacp_tx_buf_cnt == 0)
return 0;
- retval = slave_put_pkts(slave, lacp_tx_buf, lacp_tx_buf_cnt);
+ retval = member_put_pkts(member, lacp_tx_buf, lacp_tx_buf_cnt);
if (retval <= lacp_tx_buf_cnt) {
/* retval might be negative */
for (i = RTE_MAX(0, retval); retval < lacp_tx_buf_cnt; retval++)
@@ -561,24 +561,24 @@ bond_handshake_reply(struct slave_conf *slave)
}
TEST_ASSERT_EQUAL(retval, lacp_tx_buf_cnt,
- "Failed to equeue lacp packets into slave %u tx queue.",
- slave->port_id);
+ "Failed to enqueue lacp packets into member %u tx queue.",
+ member->port_id);
return lacp_tx_buf_cnt;
}
/*
- * Function check if given slave tx queue contains packets that make mode 4
- * handshake complete. It will drain slave queue.
+ * Function checks if the given member tx queue contains packets that make mode 4
+ * handshake complete. It will drain the member queue.
* return 0 if handshake not completed, 1 if handshake was complete,
*/
static int
-bond_handshake_done(struct slave_conf *slave)
+bond_handshake_done(struct member_conf *member)
{
const uint8_t expected_state = STATE_LACP_ACTIVE | STATE_SYNCHRONIZATION |
STATE_AGGREGATION | STATE_COLLECTING | STATE_DISTRIBUTING;
- return slave->lacp_parnter_state == expected_state;
+ return member->lacp_parnter_state == expected_state;
}
static unsigned
@@ -603,32 +603,32 @@ bond_get_update_timeout_ms(void)
static int
bond_handshake(void)
{
- struct slave_conf *slave;
+ struct member_conf *member;
struct rte_mbuf *buf[MAX_PKT_BURST];
uint16_t nb_pkts;
- uint8_t all_slaves_done, i, j;
- uint8_t status[RTE_DIM(test_params.slave_ports)] = { 0 };
+ uint8_t all_members_done, i, j;
+ uint8_t status[RTE_DIM(test_params.member_ports)] = { 0 };
const unsigned delay = bond_get_update_timeout_ms();
/* Exchange LACP frames */
- all_slaves_done = 0;
- for (i = 0; i < 30 && all_slaves_done == 0; ++i) {
+ all_members_done = 0;
+ for (i = 0; i < 30 && all_members_done == 0; ++i) {
rte_delay_ms(delay);
- all_slaves_done = 1;
- FOR_EACH_SLAVE(j, slave) {
- /* If response already send, skip slave */
+ all_members_done = 1;
+ FOR_EACH_MEMBER(j, member) {
+ /* If response was already sent, skip member */
if (status[j] != 0)
continue;
- if (bond_handshake_reply(slave) < 0) {
- all_slaves_done = 0;
+ if (bond_handshake_reply(member) < 0) {
+ all_members_done = 0;
break;
}
- status[j] = bond_handshake_done(slave);
+ status[j] = bond_handshake_done(member);
if (status[j] == 0)
- all_slaves_done = 0;
+ all_members_done = 0;
}
nb_pkts = bond_tx(NULL, 0);
@@ -639,26 +639,26 @@ bond_handshake(void)
TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets received unexpectedly");
}
/* If response didn't send - report failure */
- TEST_ASSERT_EQUAL(all_slaves_done, 1, "Bond handshake failed\n");
+ TEST_ASSERT_EQUAL(all_members_done, 1, "Bond handshake failed\n");
/* If flags doesn't match - report failure */
- return all_slaves_done == 1 ? TEST_SUCCESS : TEST_FAILED;
+ return all_members_done == 1 ? TEST_SUCCESS : TEST_FAILED;
}
-#define TEST_LACP_SLAVE_COUT RTE_DIM(test_params.slave_ports)
+#define TEST_LACP_MEMBER_COUNT RTE_DIM(test_params.member_ports)
static int
test_mode4_lacp(void)
{
int retval;
- retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+ retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUNT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
/* Test LACP handshake function */
retval = bond_handshake();
TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -668,7 +668,7 @@ test_mode4_agg_mode_selection(void)
{
int retval;
/* Test and verify for Stable mode */
- retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+ retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUNT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -684,12 +684,12 @@ test_mode4_agg_mode_selection(void)
TEST_ASSERT_EQUAL(retval, AGG_STABLE,
"Wrong agg mode received from bonding device");
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
/* test and verify for Bandwidth mode */
- retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+ retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUNT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -706,11 +706,11 @@ test_mode4_agg_mode_selection(void)
TEST_ASSERT_EQUAL(retval, AGG_BANDWIDTH,
"Wrong agg mode received from bonding device");
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
/* test and verify selection for count mode */
- retval = initialize_bonded_device_with_slaves(TEST_LACP_SLAVE_COUT, 0);
+ retval = initialize_bonded_device_with_members(TEST_LACP_MEMBER_COUNT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -726,7 +726,7 @@ test_mode4_agg_mode_selection(void)
TEST_ASSERT_EQUAL(retval, AGG_COUNT,
"Wrong agg mode received from bonding device");
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -780,7 +780,7 @@ generate_packets(struct rte_ether_addr *src_mac,
}
static int
-generate_and_put_packets(struct slave_conf *slave,
+generate_and_put_packets(struct member_conf *member,
struct rte_ether_addr *src_mac,
struct rte_ether_addr *dst_mac, uint16_t count)
{
@@ -791,12 +791,12 @@ generate_and_put_packets(struct slave_conf *slave,
if (retval != (int)count)
return retval;
- retval = slave_put_pkts(slave, pkts, count);
+ retval = member_put_pkts(member, pkts, count);
if (retval > 0 && retval != count)
free_pkts(&pkts[retval], count - retval);
TEST_ASSERT_EQUAL(retval, count,
- "Failed to enqueue packets into slave %u RX queue", slave->port_id);
+ "Failed to enqueue packets into member %u RX queue", member->port_id);
return TEST_SUCCESS;
}
@@ -804,7 +804,7 @@ generate_and_put_packets(struct slave_conf *slave,
static int
test_mode4_rx(void)
{
- struct slave_conf *slave;
+ struct member_conf *member;
uint16_t i, j;
uint16_t expected_pkts_cnt;
@@ -819,7 +819,7 @@ test_mode4_rx(void)
struct rte_ether_addr dst_mac;
struct rte_ether_addr bonded_mac;
- retval = initialize_bonded_device_with_slaves(TEST_PROMISC_SLAVE_COUNT,
+ retval = initialize_bonded_device_with_members(TEST_PROMISC_MEMBER_COUNT,
0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -838,7 +838,7 @@ test_mode4_rx(void)
dst_mac.addr_bytes[0] += 2;
/* First try with promiscuous mode enabled.
- * Add 2 packets to each slave. First with bonding MAC address, second with
+ * Add 2 packets to each member. First with bonding MAC address, second with
* different. Check if we received all of them. */
retval = rte_eth_promiscuous_enable(test_params.bonded_port_id);
TEST_ASSERT_SUCCESS(retval,
@@ -846,16 +846,16 @@ test_mode4_rx(void)
test_params.bonded_port_id, rte_strerror(-retval));
expected_pkts_cnt = 0;
- FOR_EACH_SLAVE(i, slave) {
- retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
- TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
- slave->port_id);
+ FOR_EACH_MEMBER(i, member) {
+ retval = generate_and_put_packets(member, &src_mac, &bonded_mac, 1);
+ TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+ member->port_id);
- retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
- TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
- slave->port_id);
+ retval = generate_and_put_packets(member, &src_mac, &dst_mac, 1);
+ TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+ member->port_id);
- /* Expect 2 packets per slave */
+ /* Expect 2 packets per member */
expected_pkts_cnt += 2;
}
@@ -894,16 +894,16 @@ test_mode4_rx(void)
test_params.bonded_port_id, rte_strerror(-retval));
expected_pkts_cnt = 0;
- FOR_EACH_SLAVE(i, slave) {
- retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
- TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
- slave->port_id);
+ FOR_EACH_MEMBER(i, member) {
+ retval = generate_and_put_packets(member, &src_mac, &bonded_mac, 1);
+ TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+ member->port_id);
- retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
- TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to slave %u",
- slave->port_id);
+ retval = generate_and_put_packets(member, &src_mac, &dst_mac, 1);
+ TEST_ASSERT_SUCCESS(retval, "Failed to enqueue packets to member %u",
+ member->port_id);
- /* Expect only one packet per slave */
+ /* Expect only one packet per member */
expected_pkts_cnt += 1;
}
@@ -927,19 +927,19 @@ test_mode4_rx(void)
TEST_ASSERT_EQUAL(retval, expected_pkts_cnt,
"Expected %u packets but received only %d", expected_pkts_cnt, retval);
- /* Link down test: simulate link down for first slave. */
+ /* Link down test: simulate link down for first member. */
delay = bond_get_update_timeout_ms();
- uint8_t slave_down_id = INVALID_PORT_ID;
+ uint8_t member_down_id = INVALID_PORT_ID;
- /* Find first slave and make link down on it*/
- FOR_EACH_SLAVE(i, slave) {
- rte_eth_dev_set_link_down(slave->port_id);
- slave_down_id = slave->port_id;
+ /* Find first member and make link down on it */
+ FOR_EACH_MEMBER(i, member) {
+ rte_eth_dev_set_link_down(member->port_id);
+ member_down_id = member->port_id;
break;
}
- RTE_VERIFY(slave_down_id != INVALID_PORT_ID);
+ RTE_VERIFY(member_down_id != INVALID_PORT_ID);
/* Give some time to rearrange bonding */
for (i = 0; i < 3; i++) {
@@ -949,16 +949,16 @@ test_mode4_rx(void)
TEST_ASSERT_SUCCESS(bond_handshake(), "Handshake after link down failed");
- /* Put packet to each slave */
- FOR_EACH_SLAVE(i, slave) {
+ /* Put packet to each member */
+ FOR_EACH_MEMBER(i, member) {
void *pkt = NULL;
- dst_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
- retval = generate_and_put_packets(slave, &src_mac, &dst_mac, 1);
+ dst_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = member->port_id;
+ retval = generate_and_put_packets(member, &src_mac, &dst_mac, 1);
TEST_ASSERT_SUCCESS(retval, "Failed to generate test packet burst.");
- src_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = slave->port_id;
- retval = generate_and_put_packets(slave, &src_mac, &bonded_mac, 1);
+ src_mac.addr_bytes[RTE_ETHER_ADDR_LEN - 1] = member->port_id;
+ retval = generate_and_put_packets(member, &src_mac, &bonded_mac, 1);
TEST_ASSERT_SUCCESS(retval, "Failed to generate test packet burst.");
retval = bond_rx(pkts, RTE_DIM(pkts));
@@ -967,36 +967,36 @@ test_mode4_rx(void)
if (retval > 0)
free_pkts(pkts, retval);
- while (rte_ring_dequeue(slave->rx_queue, (void **)&pkt) == 0)
+ while (rte_ring_dequeue(member->rx_queue, (void **)&pkt) == 0)
rte_pktmbuf_free(pkt);
- if (slave_down_id == slave->port_id)
+ if (member_down_id == member->port_id)
TEST_ASSERT_EQUAL(retval, 0, "Packets received unexpectedly.");
else
TEST_ASSERT_NOT_EQUAL(retval, 0,
- "Expected to receive some packets on slave %u.",
- slave->port_id);
- rte_eth_dev_start(slave->port_id);
+ "Expected to receive some packets on member %u.",
+ member->port_id);
+ rte_eth_dev_start(member->port_id);
for (j = 0; j < 5; j++) {
- TEST_ASSERT(bond_handshake_reply(slave) >= 0,
+ TEST_ASSERT(bond_handshake_reply(member) >= 0,
"Handshake after link up");
- if (bond_handshake_done(slave) == 1)
+ if (bond_handshake_done(member) == 1)
break;
}
- TEST_ASSERT(j < 5, "Failed to aggregate slave after link up");
+ TEST_ASSERT(j < 5, "Failed to aggregate member after link up");
}
- remove_slaves_and_stop_bonded_device();
+ remove_members_and_stop_bonded_device();
return TEST_SUCCESS;
}
static int
test_mode4_tx_burst(void)
{
- struct slave_conf *slave;
+ struct member_conf *member;
uint16_t i, j;
uint16_t exp_pkts_cnt, pkts_cnt = 0;
@@ -1008,7 +1008,7 @@ test_mode4_tx_burst(void)
{ 0x00, 0xFF, 0x00, 0xFF, 0x00, 0x00 } };
struct rte_ether_addr bonded_mac;
- retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 0);
+ retval = initialize_bonded_device_with_members(TEST_TX_MEMBER_COUNT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
retval = bond_handshake();
@@ -1036,19 +1036,19 @@ test_mode4_tx_burst(void)
TEST_ASSERT_EQUAL(retval, pkts_cnt, "TX on bonded device failed");
- /* Check if packets were transmitted properly. Every slave should have
+ /* Check if packets were transmitted properly. Every member should have
* at least one packet, and sum must match. Under normal operation
* there should be no LACP nor MARKER frames. */
pkts_cnt = 0;
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_MEMBER(i, member) {
uint16_t normal_cnt, slow_cnt;
- retval = slave_get_pkts(slave, pkts, RTE_DIM(pkts));
+ retval = member_get_pkts(member, pkts, RTE_DIM(pkts));
normal_cnt = 0;
slow_cnt = 0;
for (j = 0; j < retval; j++) {
- if (make_lacp_reply(slave, pkts[j]) == 1)
+ if (make_lacp_reply(member, pkts[j]) == 1)
normal_cnt++;
else
slow_cnt++;
@@ -1056,11 +1056,11 @@ test_mode4_tx_burst(void)
free_pkts(pkts, normal_cnt + slow_cnt);
TEST_ASSERT_EQUAL(slow_cnt, 0,
- "slave %u unexpectedly transmitted %d SLOW packets", slave->port_id,
+ "member %u unexpectedly transmitted %d SLOW packets", member->port_id,
slow_cnt);
TEST_ASSERT_NOT_EQUAL(normal_cnt, 0,
- "slave %u did not transmitted any packets", slave->port_id);
+ "member %u did not transmit any packets", member->port_id);
pkts_cnt += normal_cnt;
}
@@ -1068,19 +1068,21 @@ test_mode4_tx_burst(void)
TEST_ASSERT_EQUAL(exp_pkts_cnt, pkts_cnt,
"Expected %u packets but transmitted only %d", exp_pkts_cnt, pkts_cnt);
- /* Link down test:
- * simulate link down for first slave. */
+ /*
+ * Link down test:
+ * simulate link down for first member.
+ */
delay = bond_get_update_timeout_ms();
- uint8_t slave_down_id = INVALID_PORT_ID;
+ uint8_t member_down_id = INVALID_PORT_ID;
- FOR_EACH_SLAVE(i, slave) {
- rte_eth_dev_set_link_down(slave->port_id);
- slave_down_id = slave->port_id;
+ FOR_EACH_MEMBER(i, member) {
+ rte_eth_dev_set_link_down(member->port_id);
+ member_down_id = member->port_id;
break;
}
- RTE_VERIFY(slave_down_id != INVALID_PORT_ID);
+ RTE_VERIFY(member_down_id != INVALID_PORT_ID);
/* Give some time to rearrange bonding. */
for (i = 0; i < 3; i++) {
@@ -1110,19 +1112,19 @@ test_mode4_tx_burst(void)
TEST_ASSERT_EQUAL(retval, pkts_cnt, "TX on bonded device failed");
- /* Check if packets was transmitted properly. Every slave should have
+ /* Check if packets were transmitted properly. Every member should have
* at least one packet, and sum must match. Under normal operation
* there should be no LACP nor MARKER frames. */
pkts_cnt = 0;
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_MEMBER(i, member) {
uint16_t normal_cnt, slow_cnt;
- retval = slave_get_pkts(slave, pkts, RTE_DIM(pkts));
+ retval = member_get_pkts(member, pkts, RTE_DIM(pkts));
normal_cnt = 0;
slow_cnt = 0;
for (j = 0; j < retval; j++) {
- if (make_lacp_reply(slave, pkts[j]) == 1)
+ if (make_lacp_reply(member, pkts[j]) == 1)
normal_cnt++;
else
slow_cnt++;
@@ -1130,17 +1132,17 @@ test_mode4_tx_burst(void)
free_pkts(pkts, normal_cnt + slow_cnt);
- if (slave_down_id == slave->port_id) {
+ if (member_down_id == member->port_id) {
TEST_ASSERT_EQUAL(normal_cnt + slow_cnt, 0,
- "slave %u enexpectedly transmitted %u packets",
- normal_cnt + slow_cnt, slave->port_id);
+ "member %u unexpectedly transmitted %u packets",
+ member->port_id, normal_cnt + slow_cnt);
} else {
TEST_ASSERT_EQUAL(slow_cnt, 0,
- "slave %u unexpectedly transmitted %d SLOW packets",
- slave->port_id, slow_cnt);
+ "member %u unexpectedly transmitted %d SLOW packets",
+ member->port_id, slow_cnt);
TEST_ASSERT_NOT_EQUAL(normal_cnt, 0,
- "slave %u did not transmitted any packets", slave->port_id);
+ "member %u did not transmit any packets", member->port_id);
}
pkts_cnt += normal_cnt;
@@ -1149,11 +1151,11 @@ test_mode4_tx_burst(void)
TEST_ASSERT_EQUAL(exp_pkts_cnt, pkts_cnt,
"Expected %u packets but transmitted only %d", exp_pkts_cnt, pkts_cnt);
- return remove_slaves_and_stop_bonded_device();
+ return remove_members_and_stop_bonded_device();
}
static void
-init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
+init_marker(struct rte_mbuf *pkt, struct member_conf *member)
{
struct marker_header *marker_hdr = rte_pktmbuf_mtod(pkt,
struct marker_header *);
@@ -1166,7 +1168,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
rte_ether_addr_copy(&parnter_mac_default,
&marker_hdr->eth_hdr.src_addr);
marker_hdr->eth_hdr.src_addr.addr_bytes[RTE_ETHER_ADDR_LEN - 1] =
- slave->port_id;
+ member->port_id;
marker_hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
@@ -1177,7 +1179,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
offsetof(struct marker, reserved_90) -
offsetof(struct marker, requester_port);
RTE_VERIFY(marker_hdr->marker.info_length == 16);
- marker_hdr->marker.requester_port = slave->port_id + 1;
+ marker_hdr->marker.requester_port = member->port_id + 1;
marker_hdr->marker.tlv_type_terminator = TLV_TYPE_TERMINATOR_INFORMATION;
marker_hdr->marker.terminator_length = 0;
}
@@ -1185,7 +1187,7 @@ init_marker(struct rte_mbuf *pkt, struct slave_conf *slave)
static int
test_mode4_marker(void)
{
- struct slave_conf *slave;
+ struct member_conf *member;
struct rte_mbuf *pkts[MAX_PKT_BURST];
struct rte_mbuf *marker_pkt;
struct marker_header *marker_hdr;
@@ -1196,7 +1198,7 @@ test_mode4_marker(void)
uint8_t i, j;
const uint16_t ethtype_slow_be = rte_be_to_cpu_16(RTE_ETHER_TYPE_SLOW);
- retval = initialize_bonded_device_with_slaves(TEST_MARKER_SLAVE_COUT,
+ retval = initialize_bonded_device_with_members(TEST_MARKER_MEMBER_COUT,
0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
@@ -1205,17 +1207,17 @@ test_mode4_marker(void)
TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
delay = bond_get_update_timeout_ms();
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_MEMBER(i, member) {
marker_pkt = rte_pktmbuf_alloc(test_params.mbuf_pool);
TEST_ASSERT_NOT_NULL(marker_pkt, "Failed to allocate marker packet");
- init_marker(marker_pkt, slave);
+ init_marker(marker_pkt, member);
- retval = slave_put_pkts(slave, &marker_pkt, 1);
+ retval = member_put_pkts(member, &marker_pkt, 1);
if (retval != 1)
rte_pktmbuf_free(marker_pkt);
TEST_ASSERT_EQUAL(retval, 1,
- "Failed to send marker packet to slave %u", slave->port_id);
+ "Failed to send marker packet to member %u", member->port_id);
for (j = 0; j < 20; ++j) {
rte_delay_ms(delay);
@@ -1233,13 +1235,13 @@ test_mode4_marker(void)
/* Check if LACP packet was send by state machines
First and only packet must be a maker response */
- retval = slave_get_pkts(slave, pkts, MAX_PKT_BURST);
+ retval = member_get_pkts(member, pkts, MAX_PKT_BURST);
if (retval == 0)
continue;
if (retval > 1)
free_pkts(pkts, retval);
- TEST_ASSERT_EQUAL(retval, 1, "failed to get slave packets");
+ TEST_ASSERT_EQUAL(retval, 1, "failed to get member packets");
nb_pkts = retval;
marker_hdr = rte_pktmbuf_mtod(pkts[0], struct marker_header *);
@@ -1263,7 +1265,7 @@ test_mode4_marker(void)
TEST_ASSERT(j < 20, "Marker response not found");
}
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -1272,7 +1274,7 @@ test_mode4_marker(void)
static int
test_mode4_expired(void)
{
- struct slave_conf *slave, *exp_slave = NULL;
+ struct member_conf *member, *exp_member = NULL;
struct rte_mbuf *pkts[MAX_PKT_BURST];
int retval;
uint32_t old_delay;
@@ -1282,7 +1284,7 @@ test_mode4_expired(void)
struct rte_eth_bond_8023ad_conf conf;
- retval = initialize_bonded_device_with_slaves(TEST_EXPIRED_SLAVE_COUNT,
+ retval = initialize_bonded_device_with_members(TEST_EXPIRED_MEMBER_COUNT,
0);
/* Set custom timeouts to make test last shorter. */
rte_eth_bond_8023ad_conf_get(test_params.bonded_port_id, &conf);
@@ -1298,8 +1300,8 @@ test_mode4_expired(void)
/* Wait for new settings to be applied. */
for (i = 0; i < old_delay/conf.update_timeout_ms * 2; i++) {
- FOR_EACH_SLAVE(j, slave)
- bond_handshake_reply(slave);
+ FOR_EACH_MEMBER(j, member)
+ bond_handshake_reply(member);
rte_delay_ms(conf.update_timeout_ms);
}
@@ -1307,13 +1309,13 @@ test_mode4_expired(void)
retval = bond_handshake();
TEST_ASSERT_SUCCESS(retval, "Initial handshake failed");
- /* Find first slave */
- FOR_EACH_SLAVE(i, slave) {
- exp_slave = slave;
+ /* Find first member */
+ FOR_EACH_MEMBER(i, member) {
+ exp_member = member;
break;
}
- RTE_VERIFY(exp_slave != NULL);
+ RTE_VERIFY(exp_member != NULL);
/* When one of partners do not send or respond to LACP frame in
* conf.long_timeout_ms time, internal state machines should detect this
@@ -1325,16 +1327,16 @@ test_mode4_expired(void)
TEST_ASSERT_EQUAL(retval, 0, "Unexpectedly received %d packets",
retval);
- FOR_EACH_SLAVE(i, slave) {
- retval = bond_handshake_reply(slave);
+ FOR_EACH_MEMBER(i, member) {
+ retval = bond_handshake_reply(member);
TEST_ASSERT(retval >= 0, "Handshake failed");
- /* Remove replay for slave that suppose to be expired. */
- if (slave == exp_slave) {
- while (rte_ring_count(slave->rx_queue) > 0) {
+ /* Remove reply for the member that is supposed to be expired. */
+ if (member == exp_member) {
+ while (rte_ring_count(member->rx_queue) > 0) {
void *pkt = NULL;
- rte_ring_dequeue(slave->rx_queue, &pkt);
+ rte_ring_dequeue(member->rx_queue, &pkt);
rte_pktmbuf_free(pkt);
}
}
@@ -1348,17 +1350,17 @@ test_mode4_expired(void)
retval);
}
- /* After test only expected slave should be in EXPIRED state */
- FOR_EACH_SLAVE(i, slave) {
- if (slave == exp_slave)
- TEST_ASSERT(slave->lacp_parnter_state & STATE_EXPIRED,
- "Slave %u should be in expired.", slave->port_id);
+ /* After the test, only the expected member should be in EXPIRED state */
+ FOR_EACH_MEMBER(i, member) {
+ if (member == exp_member)
+ TEST_ASSERT(member->lacp_parnter_state & STATE_EXPIRED,
+ "Member %u should be in EXPIRED state.", member->port_id);
else
- TEST_ASSERT_EQUAL(bond_handshake_done(slave), 1,
- "Slave %u should be operational.", slave->port_id);
+ TEST_ASSERT_EQUAL(bond_handshake_done(member), 1,
+ "Member %u should be operational.", member->port_id);
}
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -1372,17 +1374,17 @@ test_mode4_ext_ctrl(void)
* . try to transmit lacpdu (should fail)
* . try to set collecting and distributing flags (should fail)
* reconfigure w/external sm
- * . transmit one lacpdu on each slave using new api
- * . make sure each slave receives one lacpdu using the callback api
- * . transmit one data pdu on each slave (should fail)
+ * . transmit one lacpdu on each member using new api
+ * . make sure each member receives one lacpdu using the callback api
+ * . transmit one data pdu on each member (should fail)
* . enable distribution and collection, send one data pdu each again
*/
int retval;
- struct slave_conf *slave = NULL;
+ struct member_conf *member = NULL;
uint8_t i;
- struct rte_mbuf *lacp_tx_buf[SLAVE_COUNT];
+ struct rte_mbuf *lacp_tx_buf[MEMBER_COUNT];
struct rte_ether_addr src_mac, dst_mac;
struct lacpdu_header lacpdu = {
.lacpdu = {
@@ -1396,30 +1398,30 @@ test_mode4_ext_ctrl(void)
initialize_eth_header(&lacpdu.eth_hdr, &src_mac, &dst_mac,
RTE_ETHER_TYPE_SLOW, 0, 0);
- for (i = 0; i < SLAVE_COUNT; i++) {
+ for (i = 0; i < MEMBER_COUNT; i++) {
lacp_tx_buf[i] = rte_pktmbuf_alloc(test_params.mbuf_pool);
rte_memcpy(rte_pktmbuf_mtod(lacp_tx_buf[i], char *),
&lacpdu, sizeof(lacpdu));
rte_pktmbuf_pkt_len(lacp_tx_buf[i]) = sizeof(lacpdu);
}
- retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 0);
+ retval = initialize_bonded_device_with_members(TEST_TX_MEMBER_COUNT, 0);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_MEMBER(i, member) {
TEST_ASSERT_FAIL(rte_eth_bond_8023ad_ext_slowtx(
test_params.bonded_port_id,
- slave->port_id, lacp_tx_buf[i]),
- "Slave should not allow manual LACP xmit");
+ member->port_id, lacp_tx_buf[i]),
+ "Member should not allow manual LACP xmit");
TEST_ASSERT_FAIL(rte_eth_bond_8023ad_ext_collect(
test_params.bonded_port_id,
- slave->port_id, 1),
- "Slave should not allow external state controls");
+ member->port_id, 1),
+ "Member should not allow external state controls");
}
free_pkts(lacp_tx_buf, RTE_DIM(lacp_tx_buf));
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Bonded device cleanup failed.");
return TEST_SUCCESS;
@@ -1430,13 +1432,13 @@ static int
test_mode4_ext_lacp(void)
{
int retval;
- struct slave_conf *slave = NULL;
- uint8_t all_slaves_done = 0, i;
+ struct member_conf *member = NULL;
+ uint8_t all_members_done = 0, i;
uint16_t nb_pkts;
const unsigned int delay = bond_get_update_timeout_ms();
- struct rte_mbuf *lacp_tx_buf[SLAVE_COUNT];
- struct rte_mbuf *buf[SLAVE_COUNT];
+ struct rte_mbuf *lacp_tx_buf[MEMBER_COUNT];
+ struct rte_mbuf *buf[MEMBER_COUNT];
struct rte_ether_addr src_mac, dst_mac;
struct lacpdu_header lacpdu = {
.lacpdu = {
@@ -1450,14 +1452,14 @@ test_mode4_ext_lacp(void)
initialize_eth_header(&lacpdu.eth_hdr, &src_mac, &dst_mac,
RTE_ETHER_TYPE_SLOW, 0, 0);
- for (i = 0; i < SLAVE_COUNT; i++) {
+ for (i = 0; i < MEMBER_COUNT; i++) {
lacp_tx_buf[i] = rte_pktmbuf_alloc(test_params.mbuf_pool);
rte_memcpy(rte_pktmbuf_mtod(lacp_tx_buf[i], char *),
&lacpdu, sizeof(lacpdu));
rte_pktmbuf_pkt_len(lacp_tx_buf[i]) = sizeof(lacpdu);
}
- retval = initialize_bonded_device_with_slaves(TEST_TX_SLAVE_COUNT, 1);
+ retval = initialize_bonded_device_with_members(TEST_TX_MEMBER_COUNT, 1);
TEST_ASSERT_SUCCESS(retval, "Failed to initialize bonded device");
memset(lacpdu_rx_count, 0, sizeof(lacpdu_rx_count));
@@ -1466,22 +1468,22 @@ test_mode4_ext_lacp(void)
for (i = 0; i < 30; ++i)
rte_delay_ms(delay);
- FOR_EACH_SLAVE(i, slave) {
+ FOR_EACH_MEMBER(i, member) {
retval = rte_eth_bond_8023ad_ext_slowtx(
test_params.bonded_port_id,
- slave->port_id, lacp_tx_buf[i]);
+ member->port_id, lacp_tx_buf[i]);
TEST_ASSERT_SUCCESS(retval,
- "Slave should allow manual LACP xmit");
+ "Member should allow manual LACP xmit");
}
nb_pkts = bond_tx(NULL, 0);
TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets transmitted unexpectedly");
- FOR_EACH_SLAVE(i, slave) {
- nb_pkts = slave_get_pkts(slave, buf, RTE_DIM(buf));
- TEST_ASSERT_EQUAL(nb_pkts, 1, "found %u packets on slave %d\n",
+ FOR_EACH_MEMBER(i, member) {
+ nb_pkts = member_get_pkts(member, buf, RTE_DIM(buf));
+ TEST_ASSERT_EQUAL(nb_pkts, 1, "found %u packets on member %d\n",
nb_pkts, i);
- slave_put_pkts(slave, buf, nb_pkts);
+ member_put_pkts(member, buf, nb_pkts);
}
nb_pkts = bond_rx(buf, RTE_DIM(buf));
@@ -1489,26 +1491,26 @@ test_mode4_ext_lacp(void)
TEST_ASSERT_EQUAL(nb_pkts, 0, "Packets received unexpectedly");
/* wait for the periodic callback to run */
- for (i = 0; i < 30 && all_slaves_done == 0; ++i) {
+ for (i = 0; i < 30 && all_members_done == 0; ++i) {
uint8_t s, total = 0;
rte_delay_ms(delay);
- FOR_EACH_SLAVE(s, slave) {
- total += lacpdu_rx_count[slave->port_id];
+ FOR_EACH_MEMBER(s, member) {
+ total += lacpdu_rx_count[member->port_id];
}
- if (total >= SLAVE_COUNT)
- all_slaves_done = 1;
+ if (total >= MEMBER_COUNT)
+ all_members_done = 1;
}
- FOR_EACH_SLAVE(i, slave) {
- TEST_ASSERT_EQUAL(lacpdu_rx_count[slave->port_id], 1,
- "Slave port %u should have received 1 lacpdu (count=%u)",
- slave->port_id,
- lacpdu_rx_count[slave->port_id]);
+ FOR_EACH_MEMBER(i, member) {
+ TEST_ASSERT_EQUAL(lacpdu_rx_count[member->port_id], 1,
+ "Member port %u should have received 1 lacpdu (count=%u)",
+ member->port_id,
+ lacpdu_rx_count[member->port_id]);
}
- retval = remove_slaves_and_stop_bonded_device();
+ retval = remove_members_and_stop_bonded_device();
TEST_ASSERT_SUCCESS(retval, "Test cleanup failed.");
return TEST_SUCCESS;
@@ -1517,10 +1519,10 @@ test_mode4_ext_lacp(void)
static int
check_environment(void)
{
- struct slave_conf *port;
+ struct member_conf *port;
uint8_t i, env_state;
- uint16_t slaves[RTE_DIM(test_params.slave_ports)];
- int slaves_count;
+ uint16_t members[RTE_DIM(test_params.member_ports)];
+ int members_count;
env_state = 0;
FOR_EACH_PORT(i, port) {
@@ -1540,20 +1542,20 @@ check_environment(void)
break;
}
- slaves_count = rte_eth_bond_slaves_get(test_params.bonded_port_id,
- slaves, RTE_DIM(slaves));
+ members_count = rte_eth_bond_members_get(test_params.bonded_port_id,
+ members, RTE_DIM(members));
- if (slaves_count != 0)
+ if (members_count != 0)
env_state |= 0x10;
TEST_ASSERT_EQUAL(env_state, 0,
"Environment not clean (port %u):%s%s%s%s%s",
port->port_id,
- env_state & 0x01 ? " slave rx queue not clean" : "",
- env_state & 0x02 ? " slave tx queue not clean" : "",
- env_state & 0x04 ? " port marked as enslaved" : "",
- env_state & 0x80 ? " slave state is not reset" : "",
- env_state & 0x10 ? " slave count not equal 0" : ".");
+ env_state & 0x01 ? " member rx queue not clean" : "",
+ env_state & 0x02 ? " member tx queue not clean" : "",
+ env_state & 0x04 ? " port marked as a member" : "",
+ env_state & 0x80 ? " member state is not reset" : "",
+ env_state & 0x10 ? " member count not equal 0" : ".");
return TEST_SUCCESS;
@@ -1562,7 +1564,7 @@ check_environment(void)
static int
test_mode4_executor(int (*test_func)(void))
{
- struct slave_conf *port;
+ struct member_conf *port;
int test_result;
uint8_t i;
void *pkt;
@@ -1581,7 +1583,7 @@ test_mode4_executor(int (*test_func)(void))
/* Reset environment in case test failed to do that. */
if (test_result != TEST_SUCCESS) {
- TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
+ TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
"Failed to stop bonded device");
FOR_EACH_PORT(i, port) {
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index 464fb2dbd0..1f888b4771 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -27,15 +27,15 @@
#include "test.h"
-#define SLAVE_COUNT (4)
+#define MEMBER_COUNT (4)
#define RXTX_RING_SIZE 1024
#define RXTX_QUEUE_COUNT 4
#define BONDED_DEV_NAME ("net_bonding_rss")
-#define SLAVE_DEV_NAME_FMT ("net_null%d")
-#define SLAVE_RXTX_QUEUE_FMT ("rssconf_slave%d_q%d")
+#define MEMBER_DEV_NAME_FMT ("net_null%d")
+#define MEMBER_RXTX_QUEUE_FMT ("rssconf_member%d_q%d")
#define NUM_MBUFS 8191
#define MBUF_SIZE (1600 + RTE_PKTMBUF_HEADROOM)
@@ -46,7 +46,7 @@
#define INVALID_PORT_ID (0xFF)
#define INVALID_BONDING_MODE (-1)
-struct slave_conf {
+struct member_conf {
uint16_t port_id;
struct rte_eth_dev_info dev_info;
@@ -54,7 +54,7 @@ struct slave_conf {
uint8_t rss_key[40];
struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
- uint8_t is_slave;
+ uint8_t is_member;
struct rte_ring *rxtx_queue[RXTX_QUEUE_COUNT];
};
@@ -62,15 +62,15 @@ struct link_bonding_rssconf_unittest_params {
uint8_t bond_port_id;
struct rte_eth_dev_info bond_dev_info;
struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
- struct slave_conf slave_ports[SLAVE_COUNT];
+ struct member_conf member_ports[MEMBER_COUNT];
struct rte_mempool *mbuf_pool;
};
static struct link_bonding_rssconf_unittest_params test_params = {
.bond_port_id = INVALID_PORT_ID,
- .slave_ports = {
- [0 ... SLAVE_COUNT - 1] = { .port_id = INVALID_PORT_ID, .is_slave = 0}
+ .member_ports = {
+ [0 ... MEMBER_COUNT - 1] = { .port_id = INVALID_PORT_ID, .is_member = 0}
},
.mbuf_pool = NULL,
};
@@ -107,14 +107,14 @@ static struct rte_eth_conf rss_pmd_conf = {
#define FOR_EACH(_i, _item, _array, _size) \
for (_i = 0, _item = &_array[0]; _i < _size && (_item = &_array[_i]); _i++)
-/* Macro for iterating over every port that can be used as a slave
+/* Macro for iterating over every port that can be used as a member
* in this test.
- * _i variable used as an index in test_params->slave_ports
- * _slave pointer to &test_params->slave_ports[_idx]
+ * _i variable used as an index in test_params->member_ports
+ * _member pointer to &test_params->member_ports[_idx]
*/
#define FOR_EACH_PORT(_i, _port) \
- FOR_EACH(_i, _port, test_params.slave_ports, \
- RTE_DIM(test_params.slave_ports))
+ FOR_EACH(_i, _port, test_params.member_ports, \
+ RTE_DIM(test_params.member_ports))
static int
configure_ethdev(uint16_t port_id, struct rte_eth_conf *eth_conf,
@@ -151,21 +151,21 @@ configure_ethdev(uint16_t port_id, struct rte_eth_conf *eth_conf,
}
/**
- * Remove all slaves from bonding
+ * Remove all members from bonding
*/
static int
-remove_slaves(void)
+remove_members(void)
{
unsigned n;
- struct slave_conf *port;
+ struct member_conf *port;
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
- if (port->is_slave) {
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(
+ port = &test_params.member_ports[n];
+ if (port->is_member) {
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(
test_params.bond_port_id, port->port_id),
- "Cannot remove slave %d from bonding", port->port_id);
- port->is_slave = 0;
+ "Cannot remove member %d from bonding", port->port_id);
+ port->is_member = 0;
}
}
@@ -173,30 +173,30 @@ remove_slaves(void)
}
static int
-remove_slaves_and_stop_bonded_device(void)
+remove_members_and_stop_bonded_device(void)
{
- TEST_ASSERT_SUCCESS(remove_slaves(), "Removing slaves");
+ TEST_ASSERT_SUCCESS(remove_members(), "Removing members");
TEST_ASSERT_SUCCESS(rte_eth_dev_stop(test_params.bond_port_id),
"Failed to stop port %u", test_params.bond_port_id);
return TEST_SUCCESS;
}
/**
- * Add all slaves to bonding
+ * Add all members to bonding
*/
static int
-bond_slaves(void)
+bond_members(void)
{
unsigned n;
- struct slave_conf *port;
+ struct member_conf *port;
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
- if (!port->is_slave) {
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bond_port_id,
- port->port_id), "Cannot attach slave %d to the bonding",
+ port = &test_params.member_ports[n];
+ if (!port->is_member) {
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params.bond_port_id,
+ port->port_id), "Cannot attach member %d to the bonding",
port->port_id);
- port->is_slave = 1;
+ port->is_member = 1;
}
}
@@ -223,11 +223,11 @@ reta_set(uint16_t port_id, uint8_t value, int reta_size)
}
/**
- * Check if slaves RETA is synchronized with bonding port. Returns 1 if slave
+ * Check if a member's RETA is synchronized with the bonding port. Returns 1 if member
* port is synced with bonding port.
*/
static int
-reta_check_synced(struct slave_conf *port)
+reta_check_synced(struct member_conf *port)
{
unsigned i;
@@ -264,10 +264,10 @@ bond_reta_fetch(void) {
}
/**
- * Fetch slaves RETA
+ * Fetch members RETA
*/
static int
-slave_reta_fetch(struct slave_conf *port) {
+member_reta_fetch(struct member_conf *port) {
unsigned j;
for (j = 0; j < port->dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE; j++)
@@ -280,49 +280,49 @@ slave_reta_fetch(struct slave_conf *port) {
}
/**
- * Remove and add slave to check if slaves configuration is synced with
- * the bonding ports values after adding new slave.
+ * Remove and add a member to check if the member's configuration is synced with
+ * the bonding port's values after adding a new member.
*/
static int
-slave_remove_and_add(void)
+member_remove_and_add(void)
{
- struct slave_conf *port = &(test_params.slave_ports[0]);
+ struct member_conf *port = &(test_params.member_ports[0]);
- /* 1. Remove first slave from bonding */
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_remove(test_params.bond_port_id,
- port->port_id), "Cannot remove slave #d from bonding");
+ /* 1. Remove first member from bonding */
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_remove(test_params.bond_port_id,
+ port->port_id), "Cannot remove member from bonding");
- /* 2. Change removed (ex-)slave and bonding configuration to different
+ /* 2. Change removed (ex-)member and bonding configuration to different
* values
*/
reta_set(test_params.bond_port_id, 1, test_params.bond_dev_info.reta_size);
bond_reta_fetch();
reta_set(port->port_id, 2, port->dev_info.reta_size);
- slave_reta_fetch(port);
+ member_reta_fetch(port);
TEST_ASSERT(reta_check_synced(port) == 0,
- "Removed slave didn't should be synchronized with bonding port");
+ "Removed member should not be synchronized with bonding port");
- /* 3. Add (ex-)slave and check if configuration changed*/
- TEST_ASSERT_SUCCESS(rte_eth_bond_slave_add(test_params.bond_port_id,
- port->port_id), "Cannot add slave");
+ /* 3. Add (ex-)member and check if configuration changed*/
+ TEST_ASSERT_SUCCESS(rte_eth_bond_member_add(test_params.bond_port_id,
+ port->port_id), "Cannot add member");
bond_reta_fetch();
- slave_reta_fetch(port);
+ member_reta_fetch(port);
return reta_check_synced(port);
}
/**
- * Test configuration propagation over slaves.
+ * Test configuration propagation over members.
*/
static int
test_propagate(void)
{
unsigned i;
uint8_t n;
- struct slave_conf *port;
+ struct member_conf *port;
uint8_t bond_rss_key[40];
struct rte_eth_rss_conf bond_rss_conf;
@@ -349,18 +349,18 @@ test_propagate(void)
retval = rte_eth_dev_rss_hash_update(test_params.bond_port_id,
&bond_rss_conf);
- TEST_ASSERT_SUCCESS(retval, "Cannot set slaves hash function");
+ TEST_ASSERT_SUCCESS(retval, "Cannot set members hash function");
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
&port->rss_conf);
TEST_ASSERT_SUCCESS(retval,
- "Cannot take slaves RSS configuration");
+ "Cannot take members RSS configuration");
TEST_ASSERT(port->rss_conf.rss_hf == rss_hf,
- "Hash function not propagated for slave %d",
+ "Hash function not propagated for member %d",
port->port_id);
}
@@ -376,11 +376,11 @@ test_propagate(void)
/* Set all keys to zero */
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
memset(port->rss_conf.rss_key, 0, 40);
retval = rte_eth_dev_rss_hash_update(port->port_id,
&port->rss_conf);
- TEST_ASSERT_SUCCESS(retval, "Cannot set slaves RSS keys");
+ TEST_ASSERT_SUCCESS(retval, "Cannot set members RSS keys");
}
memset(bond_rss_key, i, sizeof(bond_rss_key));
@@ -393,18 +393,18 @@ test_propagate(void)
TEST_ASSERT_SUCCESS(retval, "Cannot set bonded port RSS keys");
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
&(port->rss_conf));
TEST_ASSERT_SUCCESS(retval,
- "Cannot take slaves RSS configuration");
+ "Cannot take members RSS configuration");
/* compare keys */
retval = memcmp(port->rss_conf.rss_key, bond_rss_key,
sizeof(bond_rss_key));
- TEST_ASSERT(retval == 0, "Key value not propagated for slave %d",
+ TEST_ASSERT(retval == 0, "Key value not propagated for member %d",
port->port_id);
}
}
@@ -416,10 +416,10 @@ test_propagate(void)
/* Set all keys to zero */
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
retval = reta_set(port->port_id, (i + 1) % RXTX_QUEUE_COUNT,
port->dev_info.reta_size);
- TEST_ASSERT_SUCCESS(retval, "Cannot set slaves RETA");
+ TEST_ASSERT_SUCCESS(retval, "Cannot set members RETA");
}
TEST_ASSERT_SUCCESS(reta_set(test_params.bond_port_id,
@@ -429,9 +429,9 @@ test_propagate(void)
bond_reta_fetch();
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
- slave_reta_fetch(port);
+ member_reta_fetch(port);
TEST_ASSERT(reta_check_synced(port) == 1, "RETAs inconsistent");
}
}
@@ -459,29 +459,29 @@ test_rss(void)
"Error during getting device (port %u) info: %s\n",
test_params.bond_port_id, strerror(-ret));
- TEST_ASSERT_SUCCESS(bond_slaves(), "Bonding slaves failed");
+ TEST_ASSERT_SUCCESS(bond_members(), "Bonding members failed");
TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params.bond_port_id),
"Failed to start bonding port (%d).", test_params.bond_port_id);
TEST_ASSERT_SUCCESS(test_propagate(), "Propagation test failed");
- TEST_ASSERT(slave_remove_and_add() == 1, "remove and add slaves success.");
+ TEST_ASSERT(member_remove_and_add() == 1, "Failed to remove and re-add member.");
- remove_slaves_and_stop_bonded_device();
+ remove_members_and_stop_bonded_device();
return TEST_SUCCESS;
}
/**
- * Test RSS configuration over bonded and slaves.
+ * Test RSS configuration over bonded and members.
*/
static int
test_rss_config_lazy(void)
{
struct rte_eth_rss_conf bond_rss_conf = {0};
- struct slave_conf *port;
+ struct member_conf *port;
uint8_t rss_key[40];
uint64_t rss_hf;
int retval;
@@ -502,18 +502,18 @@ test_rss_config_lazy(void)
TEST_ASSERT(retval != 0, "Succeeded in setting bonded port hash function");
}
- /* Set all keys to zero for all slaves */
+ /* Set all keys to zero for all members */
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
retval = rte_eth_dev_rss_hash_conf_get(port->port_id,
&port->rss_conf);
- TEST_ASSERT_SUCCESS(retval, "Cannot get slaves RSS configuration");
+ TEST_ASSERT_SUCCESS(retval, "Cannot get members RSS configuration");
memset(port->rss_key, 0, sizeof(port->rss_key));
port->rss_conf.rss_key = port->rss_key;
port->rss_conf.rss_key_len = sizeof(port->rss_key);
retval = rte_eth_dev_rss_hash_update(port->port_id,
&port->rss_conf);
- TEST_ASSERT(retval != 0, "Succeeded in setting slaves RSS keys");
+ TEST_ASSERT(retval != 0, "Succeeded in setting members RSS keys");
}
/* Set RSS keys for bonded port */
@@ -529,10 +529,10 @@ test_rss_config_lazy(void)
/* Test RETA propagation */
for (i = 0; i < RXTX_QUEUE_COUNT; i++) {
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
retval = reta_set(port->port_id, (i + 1) % RXTX_QUEUE_COUNT,
port->dev_info.reta_size);
- TEST_ASSERT(retval != 0, "Succeeded in setting slaves RETA");
+ TEST_ASSERT(retval != 0, "Succeeded in setting members RETA");
}
retval = reta_set(test_params.bond_port_id, i % RXTX_QUEUE_COUNT,
@@ -560,14 +560,14 @@ test_rss_lazy(void)
"Error during getting device (port %u) info: %s\n",
test_params.bond_port_id, strerror(-ret));
- TEST_ASSERT_SUCCESS(bond_slaves(), "Bonding slaves failed");
+ TEST_ASSERT_SUCCESS(bond_members(), "Bonding members failed");
TEST_ASSERT_SUCCESS(rte_eth_dev_start(test_params.bond_port_id),
"Failed to start bonding port (%d).", test_params.bond_port_id);
TEST_ASSERT_SUCCESS(test_rss_config_lazy(), "Succeeded in setting RSS hash when RX_RSS mq_mode is turned off");
- remove_slaves_and_stop_bonded_device();
+ remove_members_and_stop_bonded_device();
return TEST_SUCCESS;
}
@@ -579,13 +579,13 @@ test_setup(void)
int retval;
int port_id;
char name[256];
- struct slave_conf *port;
+ struct member_conf *port;
struct rte_ether_addr mac_addr = { .addr_bytes = {0} };
if (test_params.mbuf_pool == NULL) {
test_params.mbuf_pool = rte_pktmbuf_pool_create(
- "RSS_MBUF_POOL", NUM_MBUFS * SLAVE_COUNT,
+ "RSS_MBUF_POOL", NUM_MBUFS * MEMBER_COUNT,
MBUF_CACHE_SIZE, 0, MBUF_SIZE, rte_socket_id());
TEST_ASSERT(test_params.mbuf_pool != NULL,
@@ -594,10 +594,10 @@ test_setup(void)
/* Create / initialize ring eth devs. */
FOR_EACH_PORT(n, port) {
- port = &test_params.slave_ports[n];
+ port = &test_params.member_ports[n];
port_id = rte_eth_dev_count_avail();
- snprintf(name, sizeof(name), SLAVE_DEV_NAME_FMT, port_id);
+ snprintf(name, sizeof(name), MEMBER_DEV_NAME_FMT, port_id);
retval = rte_vdev_init(name, "size=64,copy=0");
TEST_ASSERT_SUCCESS(retval, "Failed to create null device '%s'\n",
@@ -647,7 +647,7 @@ test_setup(void)
static void
testsuite_teardown(void)
{
- struct slave_conf *port;
+ struct member_conf *port;
uint8_t i;
/* Only stop ports.
@@ -685,7 +685,7 @@ test_rssconf_executor(int (*test_func)(void))
/* Reset environment in case test failed to do that. */
if (test_result != TEST_SUCCESS) {
- TEST_ASSERT_SUCCESS(remove_slaves_and_stop_bonded_device(),
+ TEST_ASSERT_SUCCESS(remove_members_and_stop_bonded_device(),
"Failed to stop bonded device");
}
diff --git a/doc/guides/howto/lm_bond_virtio_sriov.rst b/doc/guides/howto/lm_bond_virtio_sriov.rst
index e854ae214e..c06d1bc43c 100644
--- a/doc/guides/howto/lm_bond_virtio_sriov.rst
+++ b/doc/guides/howto/lm_bond_virtio_sriov.rst
@@ -17,8 +17,8 @@ Test Setup
----------
A bonded device is created in the VM.
-The virtio and VF PMD's are added as slaves to the bonded device.
-The VF is set as the primary slave of the bonded device.
+The virtio and VF PMDs are added as members to the bonded device.
+The VF is set as the primary member of the bonded device.
A bridge must be set up on the Host connecting the tap device, which is the
backend of the Virtio device and the Physical Function (PF) device.
@@ -116,13 +116,13 @@ Bonding is port 2 (P2).
testpmd> create bonded device 1 0
Created new bonded device net_bond_testpmd_0 on (port 2).
- testpmd> add bonding slave 0 2
- testpmd> add bonding slave 1 2
+ testpmd> add bonding member 0 2
+ testpmd> add bonding member 1 2
testpmd> show bonding config 2
The syntax of the ``testpmd`` command is:
-set bonding primary (slave id) (port id)
+set bonding primary (member id) (port id)
Set primary to P1 before starting bonding port.
@@ -139,7 +139,7 @@ Set primary to P1 before starting bonding port.
testpmd> show bonding config 2
-Primary is now P1. There are 2 active slaves.
+Primary is now P1. There are 2 active members.
Use P2 only for forwarding.
@@ -151,7 +151,7 @@ Use P2 only for forwarding.
testpmd> start
testpmd> show bonding config 2
-Primary is now P1. There are 2 active slaves.
+Primary is now P1. There are 2 active members.
.. code-block:: console
@@ -163,10 +163,10 @@ VF traffic is seen at P1 and P2.
testpmd> clear port stats all
testpmd> set bonding primary 0 2
- testpmd> remove bonding slave 1 2
+ testpmd> remove bonding member 1 2
testpmd> show bonding config 2
-Primary is now P0. There is 1 active slave.
+Primary is now P0. There is 1 active member.
.. code-block:: console
@@ -210,7 +210,7 @@ On host_server_1: Terminal 1
testpmd> show bonding config 2
-Primary is now P0. There is 1 active slave.
+Primary is now P0. There is 1 active member.
.. code-block:: console
@@ -346,7 +346,7 @@ The ``mac_addr`` command only works with the Kernel PF for Niantic.
testpmd> show port stats all.
testpmd> show config fwd
testpmd> show bonding config 2
- testpmd> add bonding slave 1 2
+ testpmd> add bonding member 1 2
testpmd> set bonding primary 1 2
testpmd> show bonding config 2
testpmd> show port stats all
@@ -355,7 +355,7 @@ VF traffic is seen at P1 (VF) and P2 (Bonded device).
.. code-block:: console
- testpmd> remove bonding slave 0 2
+ testpmd> remove bonding member 0 2
testpmd> show bonding config 2
testpmd> port stop 0
testpmd> port close 0
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index 70242ab2ce..6db880d632 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -781,8 +781,8 @@ DPDK implements a light-weight library to allow PMDs to be bonded together and p
.. code-block:: console
- dpdk-testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,slave=<PCI B:D.F device 1>,slave=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX’ – --socket_num=1 – -i --port-topology=chained
- (ex) dpdk-testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,slave=0000:82:00.0,slave=0000:82:00.1,mac=00:1e:67:1d:fd:1d' – --socket-num=1 – -i --port-topology=chained
+ dpdk-testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,member=<PCI B:D.F device 1>,member=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX' -- --socket-num=1 -i --port-topology=chained
+ (ex) dpdk-testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,member=0000:82:00.0,member=0000:82:00.1,mac=00:1e:67:1d:fd:1d' -- --socket-num=1 -i --port-topology=chained
Vector Processing
-----------------
diff --git a/doc/guides/prog_guide/img/bond-mode-1.svg b/doc/guides/prog_guide/img/bond-mode-1.svg
index 7c81b856b7..5a9271facf 100644
--- a/doc/guides/prog_guide/img/bond-mode-1.svg
+++ b/doc/guides/prog_guide/img/bond-mode-1.svg
@@ -53,7 +53,7 @@
v:langID="1033"
v:metric="true"
v:viewMarkup="false"><v:userDefs><v:ud
- v:nameU="msvSubprocessMaster"
+ v:nameU="msvSubprocessMain"
v:prompt=""
v:val="VT4(Rectangle)" /><v:ud
v:nameU="msvNoAutoConnect"
diff --git a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
index 1f66154e35..58e5ef41da 100644
--- a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
+++ b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
@@ -16,14 +16,14 @@ allows physical PMDs to be bonded together to create a single logical PMD.
The Link Bonding PMD library(librte_net_bond) supports bonding of groups of
``rte_eth_dev`` ports of the same speed and duplex to provide similar
capabilities to that found in Linux bonding driver to allow the aggregation
-of multiple (slave) NICs into a single logical interface between a server
+of multiple (member) NICs into a single logical interface between a server
and a switch. The new bonded PMD will then process these interfaces based on
the mode of operation specified to provide support for features such as
redundant links, fault tolerance and/or load balancing.
The librte_net_bond library exports a C API which provides an API for the
creation of bonded devices as well as the configuration and management of the
-bonded device and its slave devices.
+bonded device and its member devices.
.. note::
@@ -45,7 +45,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
This mode provides load balancing and fault tolerance by transmission of
- packets in sequential order from the first available slave device through
+ packets in sequential order from the first available member device through
the last. Packets are bulk dequeued from devices then serviced in a
round-robin manner. This mode does not guarantee in order reception of
packets and down stream should be able to handle out of order packets.
@@ -57,9 +57,9 @@ Currently the Link Bonding PMD library supports following modes of operation:
Active Backup (Mode 1)
- In this mode only one slave in the bond is active at any time, a different
- slave becomes active if, and only if, the primary active slave fails,
- thereby providing fault tolerance to slave failure. The single logical
+ In this mode only one member in the bond is active at any time; a different
+ member becomes active if, and only if, the primary active member fails,
+ thereby providing fault tolerance to member failure. The single logical
bonded interface's MAC address is externally visible on only one NIC (port)
to avoid confusing the network switch.
@@ -73,10 +73,10 @@ Currently the Link Bonding PMD library supports following modes of operation:
This mode provides transmit load balancing (based on the selected
transmission policy) and fault tolerance. The default policy (layer2) uses
a simple calculation based on the packet flow source and destination MAC
- addresses as well as the number of active slaves available to the bonded
- device to classify the packet to a specific slave to transmit on. Alternate
+ addresses as well as the number of active members available to the bonded
+ device to classify the packet to a specific member to transmit on. Alternate
transmission policies supported are layer 2+3, this takes the IP source and
- destination addresses into the calculation of the transmit slave port and
+ destination addresses into the calculation of the transmit member port and
the final supported policy is layer 3+4, this uses IP source and
destination addresses as well as the TCP/UDP source and destination port.
@@ -92,7 +92,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
Broadcast (Mode 3)
- This mode provides fault tolerance by transmission of packets on all slave
+ This mode provides fault tolerance by transmission of packets on all member
ports.
* **Link Aggregation 802.3AD (Mode 4):**
@@ -114,7 +114,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
intervals period of less than 100ms.
#. Calls to ``rte_eth_tx_burst`` must have a buffer size of at least 2xN,
- where N is the number of slaves. This is a space required for LACP
+ where N is the number of members. This is a space required for LACP
frames. Additionally LACP packets are included in the statistics, but
they are not returned to the application.
@@ -126,7 +126,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
This mode provides an adaptive transmit load balancing. It dynamically
- changes the transmitting slave, according to the computed load. Statistics
+ changes the transmitting member, according to the computed load. Statistics
are collected in 100ms intervals and scheduled every 10ms.
@@ -140,74 +140,74 @@ The Link Bonding Library supports the creation of bonded devices at application
startup time during EAL initialization using the ``--vdev`` option as well as
programmatically via the C API ``rte_eth_bond_create`` function.
-Bonded devices support the dynamical addition and removal of slave devices using
-the ``rte_eth_bond_slave_add`` / ``rte_eth_bond_slave_remove`` APIs.
+Bonded devices support the dynamic addition and removal of member devices using
+the ``rte_eth_bond_member_add`` / ``rte_eth_bond_member_remove`` APIs.
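As a rough sketch (not part of this patch; port IDs are placeholders), the renamed attach/detach calls might be used like this:

```c
#include <stdio.h>
#include <rte_eth_bond.h>
#include <rte_ethdev.h>

/* Hypothetical port IDs; real values come from EAL probing. */
static void
attach_detach_example(uint16_t bond_port, uint16_t member_port)
{
	/* The member is stopped and reconfigured by the bonding PMD. */
	if (rte_eth_bond_member_add(bond_port, member_port) != 0)
		printf("cannot add member %u\n", member_port);

	/* On removal the member's original MAC address is restored. */
	if (rte_eth_bond_member_remove(bond_port, member_port) != 0)
		printf("cannot remove member %u\n", member_port);
}
```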
-After a slave device is added to a bonded device slave is stopped using
+After a member device is added to a bonded device, the member is stopped using
``rte_eth_dev_stop`` and then reconfigured using ``rte_eth_dev_configure``
the RX and TX queues are also reconfigured using ``rte_eth_tx_queue_setup`` /
``rte_eth_rx_queue_setup`` with the parameters use to configure the bonding
device. If RSS is enabled for bonding device, this mode is also enabled on new
-slave and configured as well.
+member and configured as well.
Any flow which was configured to the bond device also is configured to the added
-slave.
+member.
Setting up multi-queue mode for bonding device to RSS, makes it fully
-RSS-capable, so all slaves are synchronized with its configuration. This mode is
-intended to provide RSS configuration on slaves transparent for client
+RSS-capable, so all members are synchronized with its configuration. This mode is
+intended to make RSS configuration on members transparent to the client
application implementation.
Bonding device stores its own version of RSS settings i.e. RETA, RSS hash
-function and RSS key, used to set up its slaves. That let to define the meaning
+function and RSS key, used to set up its members. That allows defining the meaning
of RSS configuration of bonding device as desired configuration of whole bonding
-(as one unit), without pointing any of slave inside. It is required to ensure
+(as one unit), without pointing at any member inside. It is required to ensure
consistency and made it more error-proof.
RSS hash function set for bonding device, is a maximal set of RSS hash functions
-supported by all bonded slaves. RETA size is a GCD of all its RETA's sizes, so
-it can be easily used as a pattern providing expected behavior, even if slave
+supported by all bonded members. RETA size is a GCD of all its RETA's sizes, so
+it can be easily used as a pattern providing expected behavior, even if member
RETAs' sizes are different. If RSS Key is not set for bonded device, it's not
-changed on the slaves and default key for device is used.
+changed on the members and default key for device is used.
-As RSS configurations, there is flow consistency in the bonded slaves for the
+As with RSS configuration, there is flow consistency in the bonded members for the
next rte flow operations:
Validate:
- - Validate flow for each slave, failure at least for one slave causes to
+ - Validate flow for each member; failure for at least one member causes
bond validation failure.
Create:
- - Create the flow in all slaves.
- - Save all the slaves created flows objects in bonding internal flow
+ - Create the flow in all members.
+ - Save all the members' created flow objects in the bonding internal flow
structure.
- - Failure in flow creation for existed slave rejects the flow.
- - Failure in flow creation for new slaves in slave adding time rejects
- the slave.
+ - Failure in flow creation for an existing member rejects the flow.
+ - Failure in flow creation for new members at member-add time rejects
+ the member.
Destroy:
- - Destroy the flow in all slaves and release the bond internal flow
+ - Destroy the flow in all members and release the bond internal flow
memory.
Flush:
- - Destroy all the bonding PMD flows in all the slaves.
+ - Destroy all the bonding PMD flows in all the members.
.. note::
- Don't call slaves flush directly, It destroys all the slave flows which
+ Don't call member flush directly; it destroys all the member flows, which
may include external flows or the bond internal LACP flow.
Query:
- - Summarize flow counters from all the slaves, relevant only for
+ - Summarize flow counters from all the members, relevant only for
``RTE_FLOW_ACTION_TYPE_COUNT``.
Isolate:
- - Call to flow isolate for all slaves.
- - Failure in flow isolation for existed slave rejects the isolate mode.
- - Failure in flow isolation for new slaves in slave adding time rejects
- the slave.
+ - Call flow isolate for all members.
+ - Failure in flow isolation for an existing member rejects the isolate mode.
+ - Failure in flow isolation for new members at member-add time rejects
+ the member.
All settings are managed through the bonding port API and always are propagated
-in one direction (from bonding to slaves).
+in one direction (from bonding to members).
Link Status Change Interrupts / Polling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -215,16 +215,16 @@ Link Status Change Interrupts / Polling
Link bonding devices support the registration of a link status change callback,
using the ``rte_eth_dev_callback_register`` API, this will be called when the
status of the bonding device changes. For example in the case of a bonding
-device which has 3 slaves, the link status will change to up when one slave
-becomes active or change to down when all slaves become inactive. There is no
-callback notification when a single slave changes state and the previous
-conditions are not met. If a user wishes to monitor individual slaves then they
-must register callbacks with that slave directly.
+device which has 3 members, the link status will change to up when one member
+becomes active or change to down when all members become inactive. There is no
+callback notification when a single member changes state and the previous
+conditions are not met. If a user wishes to monitor individual members then they
+must register callbacks with that member directly.
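A minimal sketch of registering for the bonding port's link status events (illustrative only, not part of this patch), assuming the standard ``rte_eth_dev_cb_fn`` signature:

```c
#include <stdio.h>
#include <rte_ethdev.h>

/* Called when the bonding port's aggregate link state changes. */
static int
bond_lsc_cb(uint16_t port_id, enum rte_eth_event_type type,
	    void *cb_arg, void *ret_param)
{
	struct rte_eth_link link;

	(void)cb_arg;
	(void)ret_param;
	if (type == RTE_ETH_EVENT_INTR_LSC &&
	    rte_eth_link_get_nowait(port_id, &link) == 0)
		printf("bond port %u link %s\n", port_id,
		       link.link_status ? "up" : "down");
	return 0;
}

/* Register on the bonding port, not on individual members. */
static void
register_bond_lsc(uint16_t bond_port)
{
	rte_eth_dev_callback_register(bond_port, RTE_ETH_EVENT_INTR_LSC,
				      bond_lsc_cb, NULL);
}
```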
The link bonding library also supports devices which do not implement link
status change interrupts, this is achieved by polling the devices link status at
a defined period which is set using the ``rte_eth_bond_link_monitoring_set``
-API, the default polling interval is 10ms. When a device is added as a slave to
+API, the default polling interval is 10ms. When a device is added as a member to
a bonding device it is determined using the ``RTE_PCI_DRV_INTR_LSC`` flag
whether the device supports interrupts or whether the link status should be
monitored by polling it.
@@ -233,30 +233,30 @@ Requirements / Limitations
~~~~~~~~~~~~~~~~~~~~~~~~~~
The current implementation only supports devices that support the same speed
-and duplex to be added as a slaves to the same bonded device. The bonded device
-inherits these attributes from the first active slave added to the bonded
-device and then all further slaves added to the bonded device must support
+and duplex to be added as members to the same bonded device. The bonded device
+inherits these attributes from the first active member added to the bonded
+device and then all further members added to the bonded device must support
these parameters.
-A bonding device must have a minimum of one slave before the bonding device
+A bonding device must have a minimum of one member before the bonding device
itself can be started.
To use a bonding device dynamic RSS configuration feature effectively, it is
-also required, that all slaves should be RSS-capable and support, at least one
+also required that all members be RSS-capable and support at least one
common hash function available for each of them. Changing RSS key is only
-possible, when all slave devices support the same key size.
+possible when all member devices support the same key size.
-To prevent inconsistency on how slaves process packets, once a device is added
+To prevent inconsistency on how members process packets, once a device is added
to a bonding device, RSS and rte flow configurations should be managed through
-the bonding device API, and not directly on the slave.
+the bonding device API, and not directly on the member.
Like all other PMD, all functions exported by a PMD are lock-free functions
that are assumed not to be invoked in parallel on different logical cores to
work on the same target object.
It should also be noted that the PMD receive function should not be invoked
-directly on a slave devices after they have been to a bonded device since
-packets read directly from the slave device will no longer be available to the
+directly on member devices after they have been added to a bonded device since
+packets read directly from the member device will no longer be available to the
bonded device to read.
Configuration
@@ -265,25 +265,25 @@ Configuration
Link bonding devices are created using the ``rte_eth_bond_create`` API
which requires a unique device name, the bonding mode,
and the socket Id to allocate the bonding device's resources on.
-The other configurable parameters for a bonded device are its slave devices,
-its primary slave, a user defined MAC address and transmission policy to use if
+The other configurable parameters for a bonded device are its member devices,
+its primary member, a user defined MAC address and transmission policy to use if
the device is in balance XOR mode.
-Slave Devices
-^^^^^^^^^^^^^
+Member Devices
+^^^^^^^^^^^^^^
-Bonding devices support up to a maximum of ``RTE_MAX_ETHPORTS`` slave devices
-of the same speed and duplex. Ethernet devices can be added as a slave to a
-maximum of one bonded device. Slave devices are reconfigured with the
+Bonding devices support up to a maximum of ``RTE_MAX_ETHPORTS`` member devices
+of the same speed and duplex. Ethernet devices can be added as a member to a
+maximum of one bonded device. Member devices are reconfigured with the
configuration of the bonded device on being added to a bonded device.
-The bonded also guarantees to return the MAC address of the slave device to its
-original value of removal of a slave from it.
+The bonded device also guarantees to return the MAC address of the member device to its
+original value on removal of the member from it.
-Primary Slave
-^^^^^^^^^^^^^
+Primary Member
+^^^^^^^^^^^^^^
-The primary slave is used to define the default port to use when a bonded
+The primary member is used to define the default port to use when a bonded
device is in active backup mode. A different port will only be used if, and
only if, the current primary port goes down. If the user does not specify a
primary port it will default to being the first port added to the bonded device.
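For illustration only (the snippet below is not part of this patch), selecting and querying the primary member might look like:

```c
#include <rte_eth_bond.h>

static void
primary_example(uint16_t bond_port, uint16_t member_port)
{
	/* member_port must already be a member of bond_port. */
	if (rte_eth_bond_primary_set(bond_port, member_port) != 0)
		return;

	/* Returns the primary port ID, or a negative value on error. */
	int primary = rte_eth_bond_primary_get(bond_port);
	(void)primary;
}
```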
@@ -292,14 +292,14 @@ MAC Address
^^^^^^^^^^^
The bonded device can be configured with a user specified MAC address, this
-address will be inherited by the some/all slave devices depending on the
+address will be inherited by some or all member devices depending on the
operating mode. If the device is in active backup mode then only the primary
-device will have the user specified MAC, all other slaves will retain their
-original MAC address. In mode 0, 2, 3, 4 all slaves devices are configure with
+device will have the user specified MAC; all other members will retain their
+original MAC address. In modes 0, 2, 3 and 4 all member devices are configured with
the bonded devices MAC address.
If a user defined MAC address is not defined then the bonded device will
-default to using the primary slaves MAC address.
+default to using the primary member's MAC address.
Balance XOR Transmit Policies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -310,17 +310,17 @@ Balance XOR mode. Layer 2, Layer 2+3, Layer 3+4.
* **Layer 2:** Ethernet MAC address based balancing is the default
transmission policy for Balance XOR bonding mode. It uses a simple XOR
calculation on the source MAC address and destination MAC address of the
- packet and then calculate the modulus of this value to calculate the slave
+ packet and then calculates the modulus of this value to determine the member
device to transmit the packet on.
* **Layer 2 + 3:** Ethernet MAC address & IP Address based balancing uses a
combination of source/destination MAC addresses and the source/destination
- IP addresses of the data packet to decide which slave port the packet will
+ IP addresses of the data packet to decide which member port the packet will
be transmitted on.
* **Layer 3 + 4:** IP Address & UDP Port based balancing uses a combination
of source/destination IP Address and the source/destination UDP ports of
- the packet of the data packet to decide which slave port the packet will be
+ the data packet to decide which member port the packet will be
transmitted on.
All these policies support 802.1Q VLAN Ethernet packets, as well as IPv4, IPv6
@@ -350,13 +350,13 @@ device configure API ``rte_eth_dev_configure`` and then the RX and TX queues
which will be used must be setup using ``rte_eth_tx_queue_setup`` /
``rte_eth_rx_queue_setup``.
-Slave devices can be dynamically added and removed from a link bonding device
-using the ``rte_eth_bond_slave_add`` / ``rte_eth_bond_slave_remove``
-APIs but at least one slave device must be added to the link bonding device
+Member devices can be dynamically added and removed from a link bonding device
+using the ``rte_eth_bond_member_add`` / ``rte_eth_bond_member_remove``
+APIs but at least one member device must be added to the link bonding device
before it can be started using ``rte_eth_dev_start``.
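The bring-up order described above can be sketched as follows (error handling elided; ``mbuf_pool``, the mode and the queue sizes are placeholder assumptions, not part of this patch):

```c
#include <rte_eth_bond.h>
#include <rte_ethdev.h>

static int
bond_bringup_example(uint16_t member_port, struct rte_mempool *mbuf_pool)
{
	struct rte_eth_conf conf = {0};
	int bond_port;

	/* Create the bonding device, then configure it and its queues. */
	bond_port = rte_eth_bond_create("net_bonding0",
					BONDING_MODE_ACTIVE_BACKUP, 0);
	if (bond_port < 0)
		return bond_port;
	rte_eth_dev_configure(bond_port, 1, 1, &conf);
	rte_eth_rx_queue_setup(bond_port, 0, 256, 0, NULL, mbuf_pool);
	rte_eth_tx_queue_setup(bond_port, 0, 256, 0, NULL);

	/* At least one member must be attached before start. */
	rte_eth_bond_member_add(bond_port, member_port);
	return rte_eth_dev_start(bond_port);
}
```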
-The link status of a bonded device is dictated by that of its slaves, if all
-slave device link status are down or if all slaves are removed from the link
+The link status of a bonded device is dictated by that of its members; if all
+member device link statuses are down or if all members are removed from the link
bonding device then the link status of the bonding device will go down.
It is also possible to configure / query the configuration of the control
@@ -390,7 +390,7 @@ long as the following two rules are respected:
where X can be any combination of numbers and/or letters,
and the name is no greater than 32 characters long.
-* A least one slave device is provided with for each bonded device definition.
+* At least one member device is provided for each bonded device definition.
* The operation mode of the bonded device being created is provided.
@@ -404,20 +404,20 @@ The different options are:
mode=2
-* slave: Defines the PMD device which will be added as slave to the bonded
+* member: Defines the PMD device which will be added as member to the bonded
device. This option can be selected multiple times, for each device to be
- added as a slave. Physical devices should be specified using their PCI
+ added as a member. Physical devices should be specified using their PCI
address, in the format domain:bus:devid.function
.. code-block:: console
- slave=0000:0a:00.0,slave=0000:0a:00.1
+ member=0000:0a:00.0,member=0000:0a:00.1
-* primary: Optional parameter which defines the primary slave port,
- is used in active backup mode to select the primary slave for data TX/RX if
+* primary: Optional parameter which defines the primary member port,
+ is used in active backup mode to select the primary member for data TX/RX if
it is available. The primary port also is used to select the MAC address to
- use when it is not defined by the user. This defaults to the first slave
- added to the device if it is specified. The primary device must be a slave
+ use when it is not defined by the user. This defaults to the first member
+ added to the device if none is specified. The primary device must be a member
of the bonded device.
.. code-block:: console
@@ -432,7 +432,7 @@ The different options are:
socket_id=0
* mac: Optional parameter to select a MAC address for link bonding device,
- this overrides the value of the primary slave device.
+ this overrides the value of the primary member device.
.. code-block:: console
@@ -474,29 +474,29 @@ The different options are:
Examples of Usage
^^^^^^^^^^^^^^^^^
-Create a bonded device in round robin mode with two slaves specified by their PCI address:
+Create a bonded device in round robin mode with two members specified by their PCI address:
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00' -- --port-topology=chained
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,member=0000:0a:00.01,member=0000:04:00.00' -- --port-topology=chained
-Create a bonded device in round robin mode with two slaves specified by their PCI address and an overriding MAC address:
+Create a bonded device in round robin mode with two members specified by their PCI address and an overriding MAC address:
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,slave=0000:0a:00.01,slave=0000:04:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=0,member=0000:0a:00.01,member=0000:04:00.00,mac=00:1e:67:1d:fd:1d' -- --port-topology=chained
-Create a bonded device in active backup mode with two slaves specified, and a primary slave specified by their PCI addresses:
+Create a bonded device in active backup mode with two members specified, and a primary member specified by their PCI addresses:
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=1,slave=0000:0a:00.01,slave=0000:04:00.00,primary=0000:0a:00.01' -- --port-topology=chained
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=1,member=0000:0a:00.01,member=0000:04:00.00,primary=0000:0a:00.01' -- --port-topology=chained
-Create a bonded device in balance mode with two slaves specified by their PCI addresses, and a transmission policy of layer 3 + 4 forwarding:
+Create a bonded device in balance mode with two members specified by their PCI addresses, and a transmission policy of layer 3 + 4 forwarding:
.. code-block:: console
- ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=2,slave=0000:0a:00.01,slave=0000:04:00.00,xmit_policy=l34' -- --port-topology=chained
+ ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 --vdev 'net_bonding0,mode=2,member=0000:0a:00.01,member=0000:04:00.00,xmit_policy=l34' -- --port-topology=chained
.. _bonding_testpmd_commands:
@@ -517,28 +517,28 @@ For example, to create a bonded device in mode 1 on socket 0::
testpmd> create bonded device 1 0
created new bonded device (port X)
-add bonding slave
-~~~~~~~~~~~~~~~~~
+add bonding member
+~~~~~~~~~~~~~~~~~~
Adds Ethernet device to a Link Bonding device::
- testpmd> add bonding slave (slave id) (port id)
+ testpmd> add bonding member (member id) (port id)
For example, to add Ethernet device (port 6) to a Link Bonding device (port 10)::
- testpmd> add bonding slave 6 10
+ testpmd> add bonding member 6 10
-remove bonding slave
-~~~~~~~~~~~~~~~~~~~~
+remove bonding member
+~~~~~~~~~~~~~~~~~~~~~
-Removes an Ethernet slave device from a Link Bonding device::
+Removes an Ethernet member device from a Link Bonding device::
- testpmd> remove bonding slave (slave id) (port id)
+ testpmd> remove bonding member (member id) (port id)
-For example, to remove Ethernet slave device (port 6) to a Link Bonding device (port 10)::
+For example, to remove Ethernet member device (port 6) from a Link Bonding device (port 10)::
- testpmd> remove bonding slave 6 10
+ testpmd> remove bonding member 6 10
set bonding mode
~~~~~~~~~~~~~~~~
@@ -554,11 +554,11 @@ For example, to set the bonding mode of a Link Bonding device (port 10) to broad
set bonding primary
~~~~~~~~~~~~~~~~~~~
-Set an Ethernet slave device as the primary device on a Link Bonding device::
+Set an Ethernet member device as the primary device on a Link Bonding device::
- testpmd> set bonding primary (slave id) (port id)
+ testpmd> set bonding primary (member id) (port id)
-For example, to set the Ethernet slave device (port 6) as the primary port of a Link Bonding device (port 10)::
+For example, to set the Ethernet member device (port 6) as the primary port of a Link Bonding device (port 10)::
testpmd> set bonding primary 6 10
@@ -590,7 +590,7 @@ set bonding mon_period
Set the link status monitoring polling period in milliseconds for a bonding device.
-This adds support for PMD slave devices which do not support link status interrupts.
+This adds support for PMD member devices which do not support link status interrupts.
When the mon_period is set to a value greater than 0 then all PMD's which do not support
link status ISR will be queried every polling interval to check if their link status has changed::
@@ -604,7 +604,7 @@ For example, to set the link status monitoring polling period of bonded device (
set bonding lacp dedicated_queue
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Enable dedicated tx/rx queues on bonding devices slaves to handle LACP control plane traffic
+Enable dedicated tx/rx queues on bonding device members to handle LACP control plane traffic
when in mode 4 (link-aggregation-802.3ad)::
testpmd> set bonding lacp dedicated_queues (port_id) (enable|disable)
@@ -627,13 +627,13 @@ it also shows link-aggregation-802.3ad information if the link mode is mode 4::
testpmd> show bonding config (port id)
For example,
-to show the configuration a Link Bonding device (port 9) with 3 slave devices (1, 3, 4)
+to show the configuration of a Link Bonding device (port 9) with 3 member devices (1, 3, 4)
in balance mode with a transmission policy of layer 2+3::
testpmd> show bonding config 9
- Dev basic:
Bonding mode: BALANCE(2)
Balance Xmit Policy: BALANCE_XMIT_POLICY_LAYER23
- Slaves (3): [1 3 4]
- Active Slaves (3): [1 3 4]
+ Members (3): [1 3 4]
+ Active Members (3): [1 3 4]
Primary: [3]
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 82455f9e18..535a361a22 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -124,22 +124,6 @@ Deprecation Notices
The legacy actions should be removed
once ``MODIFY_FIELD`` alternative is implemented in drivers.
-* bonding: The data structure ``struct rte_eth_bond_8023ad_slave_info`` will be
- renamed to ``struct rte_eth_bond_8023ad_member_info`` in DPDK 23.11.
- The following functions will be removed in DPDK 23.11.
- The old functions:
- ``rte_eth_bond_8023ad_slave_info``,
- ``rte_eth_bond_active_slaves_get``,
- ``rte_eth_bond_slave_add``,
- ``rte_eth_bond_slave_remove``, and
- ``rte_eth_bond_slaves_get``
- will be replaced by:
- ``rte_eth_bond_8023ad_member_info``,
- ``rte_eth_bond_active_members_get``,
- ``rte_eth_bond_member_add``,
- ``rte_eth_bond_member_remove``, and
- ``rte_eth_bond_members_get``.
-
* cryptodev: The function ``rte_cryptodev_cb_fn`` will be updated
to have another parameter ``qp_id`` to return the queue pair ID
which got error interrupt to the application,
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 2fae9539e2..f0ef597351 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -109,6 +109,23 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* bonding: Replaced master/slave terminology with main/member. The data structure
+ ``struct rte_eth_bond_8023ad_slave_info`` was renamed to
+ ``struct rte_eth_bond_8023ad_member_info`` in DPDK 23.11.
+ The following functions were removed in DPDK 23.11.
+ The old functions:
+ ``rte_eth_bond_8023ad_slave_info``,
+ ``rte_eth_bond_active_slaves_get``,
+ ``rte_eth_bond_slave_add``,
+ ``rte_eth_bond_slave_remove``, and
+ ``rte_eth_bond_slaves_get``
+ were replaced by:
+ ``rte_eth_bond_8023ad_member_info``,
+ ``rte_eth_bond_active_members_get``,
+ ``rte_eth_bond_member_add``,
+ ``rte_eth_bond_member_remove``, and
+ ``rte_eth_bond_members_get``.
+
ABI Changes
-----------
diff --git a/drivers/net/bonding/bonding_testpmd.c b/drivers/net/bonding/bonding_testpmd.c
index b3c12cada0..1fe85839ed 100644
--- a/drivers/net/bonding/bonding_testpmd.c
+++ b/drivers/net/bonding/bonding_testpmd.c
@@ -279,7 +279,7 @@ struct cmd_set_bonding_primary_result {
cmdline_fixed_string_t set;
cmdline_fixed_string_t bonding;
cmdline_fixed_string_t primary;
- portid_t slave_id;
+ portid_t member_id;
portid_t port_id;
};
@@ -287,13 +287,13 @@ static void cmd_set_bonding_primary_parsed(void *parsed_result,
__rte_unused struct cmdline *cl, __rte_unused void *data)
{
struct cmd_set_bonding_primary_result *res = parsed_result;
- portid_t master_port_id = res->port_id;
- portid_t slave_port_id = res->slave_id;
+ portid_t main_port_id = res->port_id;
+ portid_t member_port_id = res->member_id;
- /* Set the primary slave for a bonded device. */
- if (rte_eth_bond_primary_set(master_port_id, slave_port_id) != 0) {
- fprintf(stderr, "\t Failed to set primary slave for port = %d.\n",
- master_port_id);
+ /* Set the primary member for a bonded device. */
+ if (rte_eth_bond_primary_set(main_port_id, member_port_id) != 0) {
+ fprintf(stderr, "\t Failed to set primary member for port = %d.\n",
+ main_port_id);
return;
}
init_port_config();
@@ -308,141 +308,141 @@ static cmdline_parse_token_string_t cmd_setbonding_primary_bonding =
static cmdline_parse_token_string_t cmd_setbonding_primary_primary =
TOKEN_STRING_INITIALIZER(struct cmd_set_bonding_primary_result,
primary, "primary");
-static cmdline_parse_token_num_t cmd_setbonding_primary_slave =
+static cmdline_parse_token_num_t cmd_setbonding_primary_member =
TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_primary_result,
- slave_id, RTE_UINT16);
+ member_id, RTE_UINT16);
static cmdline_parse_token_num_t cmd_setbonding_primary_port =
TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_primary_result,
port_id, RTE_UINT16);
static cmdline_parse_inst_t cmd_set_bonding_primary = {
.f = cmd_set_bonding_primary_parsed,
- .help_str = "set bonding primary <slave_id> <port_id>: "
- "Set the primary slave for port_id",
+ .help_str = "set bonding primary <member_id> <port_id>: "
+ "Set the primary member for port_id",
.data = NULL,
.tokens = {
(void *)&cmd_setbonding_primary_set,
(void *)&cmd_setbonding_primary_bonding,
(void *)&cmd_setbonding_primary_primary,
- (void *)&cmd_setbonding_primary_slave,
+ (void *)&cmd_setbonding_primary_member,
(void *)&cmd_setbonding_primary_port,
NULL
}
};
-/* *** ADD SLAVE *** */
-struct cmd_add_bonding_slave_result {
+/* *** ADD MEMBER *** */
+struct cmd_add_bonding_member_result {
cmdline_fixed_string_t add;
cmdline_fixed_string_t bonding;
- cmdline_fixed_string_t slave;
- portid_t slave_id;
+ cmdline_fixed_string_t member;
+ portid_t member_id;
portid_t port_id;
};
-static void cmd_add_bonding_slave_parsed(void *parsed_result,
+static void cmd_add_bonding_member_parsed(void *parsed_result,
__rte_unused struct cmdline *cl, __rte_unused void *data)
{
- struct cmd_add_bonding_slave_result *res = parsed_result;
- portid_t master_port_id = res->port_id;
- portid_t slave_port_id = res->slave_id;
+ struct cmd_add_bonding_member_result *res = parsed_result;
+ portid_t main_port_id = res->port_id;
+ portid_t member_port_id = res->member_id;
- /* add the slave for a bonded device. */
- if (rte_eth_bond_slave_add(master_port_id, slave_port_id) != 0) {
+ /* add the member for a bonded device. */
+ if (rte_eth_bond_member_add(main_port_id, member_port_id) != 0) {
fprintf(stderr,
- "\t Failed to add slave %d to master port = %d.\n",
- slave_port_id, master_port_id);
+ "\t Failed to add member %d to main port = %d.\n",
+ member_port_id, main_port_id);
return;
}
- ports[master_port_id].update_conf = 1;
+ ports[main_port_id].update_conf = 1;
init_port_config();
- set_port_slave_flag(slave_port_id);
+ set_port_member_flag(member_port_id);
}
-static cmdline_parse_token_string_t cmd_addbonding_slave_add =
- TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_member_add =
+ TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_member_result,
add, "add");
-static cmdline_parse_token_string_t cmd_addbonding_slave_bonding =
- TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_member_bonding =
+ TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_member_result,
bonding, "bonding");
-static cmdline_parse_token_string_t cmd_addbonding_slave_slave =
- TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
- slave, "slave");
-static cmdline_parse_token_num_t cmd_addbonding_slave_slaveid =
- TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_slave_result,
- slave_id, RTE_UINT16);
-static cmdline_parse_token_num_t cmd_addbonding_slave_port =
- TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_addbonding_member_member =
+ TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_member_result,
+ member, "member");
+static cmdline_parse_token_num_t cmd_addbonding_member_memberid =
+ TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_member_result,
+ member_id, RTE_UINT16);
+static cmdline_parse_token_num_t cmd_addbonding_member_port =
+ TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_member_result,
port_id, RTE_UINT16);
-static cmdline_parse_inst_t cmd_add_bonding_slave = {
- .f = cmd_add_bonding_slave_parsed,
- .help_str = "add bonding slave <slave_id> <port_id>: "
- "Add a slave device to a bonded device",
+static cmdline_parse_inst_t cmd_add_bonding_member = {
+ .f = cmd_add_bonding_member_parsed,
+ .help_str = "add bonding member <member_id> <port_id>: "
+ "Add a member device to a bonded device",
.data = NULL,
.tokens = {
- (void *)&cmd_addbonding_slave_add,
- (void *)&cmd_addbonding_slave_bonding,
- (void *)&cmd_addbonding_slave_slave,
- (void *)&cmd_addbonding_slave_slaveid,
- (void *)&cmd_addbonding_slave_port,
+ (void *)&cmd_addbonding_member_add,
+ (void *)&cmd_addbonding_member_bonding,
+ (void *)&cmd_addbonding_member_member,
+ (void *)&cmd_addbonding_member_memberid,
+ (void *)&cmd_addbonding_member_port,
NULL
}
};
-/* *** REMOVE SLAVE *** */
-struct cmd_remove_bonding_slave_result {
+/* *** REMOVE MEMBER *** */
+struct cmd_remove_bonding_member_result {
cmdline_fixed_string_t remove;
cmdline_fixed_string_t bonding;
- cmdline_fixed_string_t slave;
- portid_t slave_id;
+ cmdline_fixed_string_t member;
+ portid_t member_id;
portid_t port_id;
};
-static void cmd_remove_bonding_slave_parsed(void *parsed_result,
+static void cmd_remove_bonding_member_parsed(void *parsed_result,
__rte_unused struct cmdline *cl, __rte_unused void *data)
{
- struct cmd_remove_bonding_slave_result *res = parsed_result;
- portid_t master_port_id = res->port_id;
- portid_t slave_port_id = res->slave_id;
+ struct cmd_remove_bonding_member_result *res = parsed_result;
+ portid_t main_port_id = res->port_id;
+ portid_t member_port_id = res->member_id;
- /* remove the slave from a bonded device. */
- if (rte_eth_bond_slave_remove(master_port_id, slave_port_id) != 0) {
+ /* remove the member from a bonded device. */
+ if (rte_eth_bond_member_remove(main_port_id, member_port_id) != 0) {
fprintf(stderr,
- "\t Failed to remove slave %d from master port = %d.\n",
- slave_port_id, master_port_id);
+ "\t Failed to remove member %d from main port = %d.\n",
+ member_port_id, main_port_id);
return;
}
init_port_config();
- clear_port_slave_flag(slave_port_id);
+ clear_port_member_flag(member_port_id);
}
-static cmdline_parse_token_string_t cmd_removebonding_slave_remove =
- TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_member_remove =
+ TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_member_result,
remove, "remove");
-static cmdline_parse_token_string_t cmd_removebonding_slave_bonding =
- TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_member_bonding =
+ TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_member_result,
bonding, "bonding");
-static cmdline_parse_token_string_t cmd_removebonding_slave_slave =
- TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_slave_result,
- slave, "slave");
-static cmdline_parse_token_num_t cmd_removebonding_slave_slaveid =
- TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_slave_result,
- slave_id, RTE_UINT16);
-static cmdline_parse_token_num_t cmd_removebonding_slave_port =
- TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_slave_result,
+static cmdline_parse_token_string_t cmd_removebonding_member_member =
+ TOKEN_STRING_INITIALIZER(struct cmd_remove_bonding_member_result,
+ member, "member");
+static cmdline_parse_token_num_t cmd_removebonding_member_memberid =
+ TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_member_result,
+ member_id, RTE_UINT16);
+static cmdline_parse_token_num_t cmd_removebonding_member_port =
+ TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_member_result,
port_id, RTE_UINT16);
-static cmdline_parse_inst_t cmd_remove_bonding_slave = {
- .f = cmd_remove_bonding_slave_parsed,
- .help_str = "remove bonding slave <slave_id> <port_id>: "
- "Remove a slave device from a bonded device",
+static cmdline_parse_inst_t cmd_remove_bonding_member = {
+ .f = cmd_remove_bonding_member_parsed,
+ .help_str = "remove bonding member <member_id> <port_id>: "
+ "Remove a member device from a bonded device",
.data = NULL,
.tokens = {
- (void *)&cmd_removebonding_slave_remove,
- (void *)&cmd_removebonding_slave_bonding,
- (void *)&cmd_removebonding_slave_slave,
- (void *)&cmd_removebonding_slave_slaveid,
- (void *)&cmd_removebonding_slave_port,
+ (void *)&cmd_removebonding_member_remove,
+ (void *)&cmd_removebonding_member_bonding,
+ (void *)&cmd_removebonding_member_member,
+ (void *)&cmd_removebonding_member_memberid,
+ (void *)&cmd_removebonding_member_port,
NULL
}
};
@@ -706,18 +706,18 @@ static struct testpmd_driver_commands bonding_cmds = {
},
{
&cmd_set_bonding_primary,
- "set bonding primary (slave_id) (port_id)\n"
- " Set the primary slave for a bonded device.\n",
+ "set bonding primary (member_id) (port_id)\n"
+ " Set the primary member for a bonded device.\n",
},
{
- &cmd_add_bonding_slave,
- "add bonding slave (slave_id) (port_id)\n"
- " Add a slave device to a bonded device.\n",
+ &cmd_add_bonding_member,
+ "add bonding member (member_id) (port_id)\n"
+ " Add a member device to a bonded device.\n",
},
{
- &cmd_remove_bonding_slave,
- "remove bonding slave (slave_id) (port_id)\n"
- " Remove a slave device from a bonded device.\n",
+ &cmd_remove_bonding_member,
+ "remove bonding member (member_id) (port_id)\n"
+ " Remove a member device from a bonded device.\n",
},
{
&cmd_create_bonded_device,
diff --git a/drivers/net/bonding/eth_bond_8023ad_private.h b/drivers/net/bonding/eth_bond_8023ad_private.h
index a5e1fffea1..77892c0601 100644
--- a/drivers/net/bonding/eth_bond_8023ad_private.h
+++ b/drivers/net/bonding/eth_bond_8023ad_private.h
@@ -15,10 +15,10 @@
#include "rte_eth_bond_8023ad.h"
#define BOND_MODE_8023AX_UPDATE_TIMEOUT_MS 100
-/** Maximum number of packets to one slave queued in TX ring. */
-#define BOND_MODE_8023AX_SLAVE_RX_PKTS 3
-/** Maximum number of LACP packets from one slave queued in TX ring. */
-#define BOND_MODE_8023AX_SLAVE_TX_PKTS 1
+/** Maximum number of packets to one member queued in TX ring. */
+#define BOND_MODE_8023AX_MEMBER_RX_PKTS 3
+/** Maximum number of LACP packets from one member queued in TX ring. */
+#define BOND_MODE_8023AX_MEMBER_TX_PKTS 1
/**
* Timeouts definitions (5.4.4 in 802.1AX documentation).
*/
@@ -113,7 +113,7 @@ struct port {
enum rte_bond_8023ad_selection selected;
/** Indicates if either allmulti or promisc has been enforced on the
- * slave so that we can receive lacp packets
+ * member so that we can receive lacp packets
*/
#define BOND_8023AD_FORCED_ALLMULTI (1 << 0)
#define BOND_8023AD_FORCED_PROMISC (1 << 1)
@@ -162,8 +162,8 @@ struct mode8023ad_private {
uint8_t external_sm;
struct rte_ether_addr mac_addr;
- struct rte_eth_link slave_link;
- /***< slave link properties */
+ struct rte_eth_link member_link;
+ /**< member link properties */
/**
* Configuration of dedicated hardware queues for control plane
@@ -208,7 +208,7 @@ bond_mode_8023ad_setup(struct rte_eth_dev *dev,
/**
* @internal
*
- * Enables 802.1AX mode and all active slaves on bonded interface.
+ * Enables 802.1AX mode and all active members on bonded interface.
*
* @param dev Bonded interface
* @return
@@ -220,7 +220,7 @@ bond_mode_8023ad_enable(struct rte_eth_dev *dev);
/**
* @internal
*
- * Disables 802.1AX mode of the bonded interface and slaves.
+ * Disables 802.1AX mode of the bonded interface and members.
*
* @param dev Bonded interface
* @return
@@ -256,43 +256,43 @@ bond_mode_8023ad_stop(struct rte_eth_dev *dev);
*
* Passes given slow packet to state machines management logic.
* @param internals Bonded device private data.
- * @param slave_id Slave port id.
+ * @param member_id Member port id.
* @param slot_pkt Slow packet.
*/
void
bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
- uint16_t slave_id, struct rte_mbuf *pkt);
+ uint16_t member_id, struct rte_mbuf *pkt);
/**
* @internal
*
- * Appends given slave used slave
+ * Appends given member to the active members array.
*
* @param dev Bonded interface.
- * @param port_id Slave port ID to be added
+ * @param port_id Member port ID to be added
*
* @return
* 0 on success, negative value otherwise.
*/
void
-bond_mode_8023ad_activate_slave(struct rte_eth_dev *dev, uint16_t port_id);
+bond_mode_8023ad_activate_member(struct rte_eth_dev *dev, uint16_t port_id);
/**
* @internal
*
- * Denitializes and removes given slave from 802.1AX mode.
+ * Deinitializes and removes given member from 802.1AX mode.
*
* @param dev Bonded interface.
- * @param slave_num Position of slave in active_slaves array
+ * @param member_num Position of member in active_members array
*
* @return
* 0 on success, negative value otherwise.
*/
int
-bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *dev, uint16_t slave_pos);
+bond_mode_8023ad_deactivate_member(struct rte_eth_dev *dev, uint16_t member_pos);
/**
- * Updates state when MAC was changed on bonded device or one of its slaves.
+ * Updates state when MAC was changed on bonded device or one of its members.
* @param bond_dev Bonded device
*/
void
@@ -300,10 +300,10 @@ bond_mode_8023ad_mac_address_update(struct rte_eth_dev *bond_dev);
int
bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
- uint16_t slave_port);
+ uint16_t member_port);
int
-bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port);
+bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t member_port);
int
bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id);
diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
index d4f1fb27d4..93d03b0a79 100644
--- a/drivers/net/bonding/eth_bond_private.h
+++ b/drivers/net/bonding/eth_bond_private.h
@@ -18,8 +18,8 @@
#include "eth_bond_8023ad_private.h"
#include "rte_eth_bond_alb.h"
-#define PMD_BOND_SLAVE_PORT_KVARG ("slave")
-#define PMD_BOND_PRIMARY_SLAVE_KVARG ("primary")
+#define PMD_BOND_MEMBER_PORT_KVARG ("member")
+#define PMD_BOND_PRIMARY_MEMBER_KVARG ("primary")
#define PMD_BOND_MODE_KVARG ("mode")
#define PMD_BOND_AGG_MODE_KVARG ("agg_mode")
#define PMD_BOND_XMIT_POLICY_KVARG ("xmit_policy")
@@ -50,8 +50,8 @@ extern const struct rte_flow_ops bond_flow_ops;
/** Port Queue Mapping Structure */
struct bond_rx_queue {
uint16_t queue_id;
- /**< Next active_slave to poll */
- uint16_t active_slave;
+ /**< Next active_member to poll */
+ uint16_t active_member;
/**< Queue Id */
struct bond_dev_private *dev_private;
/**< Reference to eth_dev private structure */
@@ -74,19 +74,19 @@ struct bond_tx_queue {
/**< Copy of TX configuration structure for queue */
};
-/** Bonded slave devices structure */
-struct bond_ethdev_slave_ports {
- uint16_t slaves[RTE_MAX_ETHPORTS]; /**< Slave port id array */
- uint16_t slave_count; /**< Number of slaves */
+/** Bonded member devices structure */
+struct bond_ethdev_member_ports {
+ uint16_t members[RTE_MAX_ETHPORTS]; /**< Member port id array */
+ uint16_t member_count; /**< Number of members */
};
-struct bond_slave_details {
+struct bond_member_details {
uint16_t port_id;
uint8_t link_status_poll_enabled;
uint8_t link_status_wait_to_complete;
uint8_t last_link_status;
- /**< Port Id of slave eth_dev */
+ /**< Port Id of member eth_dev */
struct rte_ether_addr persisted_mac_addr;
uint16_t reta_size;
@@ -94,7 +94,7 @@ struct bond_slave_details {
struct rte_flow {
TAILQ_ENTRY(rte_flow) next;
- /* Slaves flows */
+ /* Members flows */
struct rte_flow *flows[RTE_MAX_ETHPORTS];
/* Flow description for synchronization */
struct rte_flow_conv_rule rule;
@@ -102,7 +102,7 @@ struct rte_flow {
};
typedef void (*burst_xmit_hash_t)(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves);
+ uint16_t member_count, uint16_t *members);
/** Link Bonding PMD device private configuration Structure */
struct bond_dev_private {
@@ -112,8 +112,8 @@ struct bond_dev_private {
rte_spinlock_t lock;
rte_spinlock_t lsc_lock;
- uint16_t primary_port; /**< Primary Slave Port */
- uint16_t current_primary_port; /**< Primary Slave Port */
+ uint16_t primary_port; /**< Primary Member Port */
+ uint16_t current_primary_port; /**< Primary Member Port */
uint16_t user_defined_primary_port;
/**< Flag for whether primary port is user defined or not */
@@ -137,16 +137,16 @@ struct bond_dev_private {
uint16_t nb_rx_queues; /**< Total number of rx queues */
uint16_t nb_tx_queues; /**< Total number of tx queues*/
- uint16_t active_slave_count; /**< Number of active slaves */
- uint16_t active_slaves[RTE_MAX_ETHPORTS]; /**< Active slave list */
+ uint16_t active_member_count; /**< Number of active members */
+ uint16_t active_members[RTE_MAX_ETHPORTS]; /**< Active member list */
- uint16_t slave_count; /**< Number of bonded slaves */
- struct bond_slave_details slaves[RTE_MAX_ETHPORTS];
- /**< Array of bonded slaves details */
+ uint16_t member_count; /**< Number of bonded members */
+ struct bond_member_details members[RTE_MAX_ETHPORTS];
+ /**< Array of bonded members details */
struct mode8023ad_private mode4;
- uint16_t tlb_slaves_order[RTE_MAX_ETHPORTS];
- /**< TLB active slaves send order */
+ uint16_t tlb_members_order[RTE_MAX_ETHPORTS];
+ /**< TLB active members send order */
struct mode_alb_private mode6;
uint64_t rx_offload_capa; /** Rx offload capability */
@@ -177,7 +177,7 @@ struct bond_dev_private {
uint8_t rss_key_len; /**< hash key length in bytes. */
struct rte_kvargs *kvlist;
- uint8_t slave_update_idx;
+ uint8_t member_update_idx;
bool kvargs_processing_is_done;
@@ -191,19 +191,21 @@ struct bond_dev_private {
extern const struct eth_dev_ops default_dev_ops;
int
-check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev);
+check_for_main_bonded_ethdev(const struct rte_eth_dev *eth_dev);
int
check_for_bonded_ethdev(const struct rte_eth_dev *eth_dev);
-/* Search given slave array to find position of given id.
- * Return slave pos or slaves_count if not found. */
+/*
+ * Search given member array to find position of given id.
+ * Return member pos or members_count if not found.
+ */
static inline uint16_t
-find_slave_by_id(uint16_t *slaves, uint16_t slaves_count, uint16_t slave_id) {
+find_member_by_id(uint16_t *members, uint16_t members_count, uint16_t member_id) {
uint16_t pos;
- for (pos = 0; pos < slaves_count; pos++) {
- if (slave_id == slaves[pos])
+ for (pos = 0; pos < members_count; pos++) {
+ if (member_id == members[pos])
break;
}
@@ -217,13 +219,13 @@ int
valid_bonded_port_id(uint16_t port_id);
int
-valid_slave_port_id(struct bond_dev_private *internals, uint16_t port_id);
+valid_member_port_id(struct bond_dev_private *internals, uint16_t port_id);
void
-deactivate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id);
+deactivate_member(struct rte_eth_dev *eth_dev, uint16_t port_id);
void
-activate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id);
+activate_member(struct rte_eth_dev *eth_dev, uint16_t port_id);
int
mac_address_set(struct rte_eth_dev *eth_dev,
@@ -234,66 +236,66 @@ mac_address_get(struct rte_eth_dev *eth_dev,
struct rte_ether_addr *dst_mac_addr);
int
-mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev);
+mac_address_members_update(struct rte_eth_dev *bonded_eth_dev);
int
-slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
- uint16_t slave_port_id);
+member_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+ uint16_t member_port_id);
int
-slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
- uint16_t slave_port_id);
+member_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+ uint16_t member_port_id);
int
bond_ethdev_mode_set(struct rte_eth_dev *eth_dev, uint8_t mode);
int
-slave_configure(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev);
+member_configure(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *member_eth_dev);
int
-slave_start(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev);
+member_start(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *member_eth_dev);
void
-slave_remove(struct bond_dev_private *internals,
- struct rte_eth_dev *slave_eth_dev);
+member_remove(struct bond_dev_private *internals,
+ struct rte_eth_dev *member_eth_dev);
void
-slave_add(struct bond_dev_private *internals,
- struct rte_eth_dev *slave_eth_dev);
+member_add(struct bond_dev_private *internals,
+ struct rte_eth_dev *member_eth_dev);
void
burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves);
+ uint16_t member_count, uint16_t *members);
void
burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves);
+ uint16_t member_count, uint16_t *members);
void
burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves);
+ uint16_t member_count, uint16_t *members);
void
bond_ethdev_primary_set(struct bond_dev_private *internals,
- uint16_t slave_port_id);
+ uint16_t member_port_id);
int
bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
void *param, void *ret_param);
int
-bond_ethdev_parse_slave_port_kvarg(const char *key,
+bond_ethdev_parse_member_port_kvarg(const char *key,
const char *value, void *extra_args);
int
-bond_ethdev_parse_slave_mode_kvarg(const char *key,
+bond_ethdev_parse_member_mode_kvarg(const char *key,
const char *value, void *extra_args);
int
-bond_ethdev_parse_slave_agg_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_member_agg_mode_kvarg(const char *key __rte_unused,
const char *value, void *extra_args);
int
@@ -301,7 +303,7 @@ bond_ethdev_parse_socket_id_kvarg(const char *key,
const char *value, void *extra_args);
int
-bond_ethdev_parse_primary_slave_port_id_kvarg(const char *key,
+bond_ethdev_parse_primary_member_port_id_kvarg(const char *key,
const char *value, void *extra_args);
int
@@ -323,7 +325,7 @@ void
bond_tlb_enable(struct bond_dev_private *internals);
void
-bond_tlb_activate_slave(struct bond_dev_private *internals);
+bond_tlb_activate_member(struct bond_dev_private *internals);
int
bond_ethdev_stop(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/bonding/rte_eth_bond.h b/drivers/net/bonding/rte_eth_bond.h
index 874aa91a5f..f0cd5767ad 100644
--- a/drivers/net/bonding/rte_eth_bond.h
+++ b/drivers/net/bonding/rte_eth_bond.h
@@ -10,7 +10,7 @@
*
* RTE Link Bonding Ethernet Device
* Link Bonding for 1GbE and 10GbE ports to allow the aggregation of multiple
- * (slave) NICs into a single logical interface. The bonded device processes
+ * (member) NICs into a single logical interface. The bonded device processes
* these interfaces based on the mode of operation specified and supported.
* This implementation supports 4 modes of operation round robin, active backup
* balance and broadcast. Providing redundant links, fault tolerance and/or
@@ -28,24 +28,28 @@ extern "C" {
#define BONDING_MODE_ROUND_ROBIN (0)
/**< Round Robin (Mode 0).
* In this mode all transmitted packets will be balanced equally across all
- * active slaves of the bonded in a round robin fashion. */
+ * active members of the bonded device in a round robin fashion.
+ */
#define BONDING_MODE_ACTIVE_BACKUP (1)
/**< Active Backup (Mode 1).
* In this mode all packets transmitted will be transmitted on the primary
- * slave until such point as the primary slave is no longer available and then
- * transmitted packets will be sent on the next available slaves. The primary
- * slave can be defined by the user but defaults to the first active slave
- * available if not specified. */
+ * member until the primary member is no longer available, and then
+ * transmitted packets will be sent on the next available member. The primary
+ * member can be defined by the user but defaults to the first active member
+ * available if not specified.
+ */
#define BONDING_MODE_BALANCE (2)
/**< Balance (Mode 2).
* In this mode all packets transmitted will be balanced across the available
- * slaves using one of three available transmit policies - l2, l2+3 or l3+4.
+ * members using one of three available transmit policies - l2, l2+3 or l3+4.
* See BALANCE_XMIT_POLICY macros definitions for further details on transmit
- * policies. */
+ * policies.
+ */
#define BONDING_MODE_BROADCAST (3)
/**< Broadcast (Mode 3).
* In this mode all transmitted packets will be transmitted on all available
- * active slaves of the bonded. */
+ * active members of the bonded device.
+ */
#define BONDING_MODE_8023AD (4)
/**< 802.3AD (Mode 4).
*
@@ -62,22 +66,22 @@ extern "C" {
* be handled with the expected latency and this may cause the link status to be
* incorrectly marked as down or failure to correctly negotiate with peers.
* - For optimal performance during initial handshaking the array of mbufs provided
- * to rx_burst should be at least 2 times the slave count size.
- *
+ * to rx_burst should be at least twice the member count.
*/
#define BONDING_MODE_TLB (5)
/**< Adaptive TLB (Mode 5)
* This mode provides an adaptive transmit load balancing. It dynamically
- * changes the transmitting slave, according to the computed load. Statistics
- * are collected in 100ms intervals and scheduled every 10ms */
+ * changes the transmitting member, according to the computed load. Statistics
+ * are collected in 100ms intervals and scheduled every 10ms.
+ */
#define BONDING_MODE_ALB (6)
/**< Adaptive Load Balancing (Mode 6)
* This mode includes adaptive TLB and receive load balancing (RLB). In RLB the
* bonding driver intercepts ARP replies send by local system and overwrites its
* source MAC address, so that different peers send data to the server on
- * different slave interfaces. When local system sends ARP request, it saves IP
+ * different member interfaces. When the local system sends an ARP request, it saves IP
* information from it. When ARP reply from that peer is received, its MAC is
- * stored, one of slave MACs assigned and ARP reply send to that peer.
+ * stored, one of the member MACs is assigned, and an ARP reply sent to that peer.
*/
/* Balance Mode Transmit Policies */
@@ -113,28 +117,30 @@ int
rte_eth_bond_free(const char *name);
/**
- * Add a rte_eth_dev device as a slave to the bonded device
+ * Add a rte_eth_dev device as a member to the bonded device
*
* @param bonded_port_id Port ID of bonded device.
- * @param slave_port_id Port ID of slave device.
+ * @param member_port_id Port ID of member device.
*
* @return
* 0 on success, negative value otherwise
*/
+__rte_experimental
int
-rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_member_add(uint16_t bonded_port_id, uint16_t member_port_id);
/**
- * Remove a slave rte_eth_dev device from the bonded device
+ * Remove a member rte_eth_dev device from the bonded device
*
* @param bonded_port_id Port ID of bonded device.
- * @param slave_port_id Port ID of slave device.
+ * @param member_port_id Port ID of member device.
*
* @return
* 0 on success, negative value otherwise
*/
+__rte_experimental
int
-rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_member_remove(uint16_t bonded_port_id, uint16_t member_port_id);
/**
* Set link bonding mode of bonded device
@@ -160,65 +166,67 @@ int
rte_eth_bond_mode_get(uint16_t bonded_port_id);
/**
- * Set slave rte_eth_dev as primary slave of bonded device
+ * Set member rte_eth_dev as primary member of bonded device
*
* @param bonded_port_id Port ID of bonded device.
- * @param slave_port_id Port ID of slave device.
+ * @param member_port_id Port ID of member device.
*
* @return
* 0 on success, negative value otherwise
*/
int
-rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id);
+rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t member_port_id);
/**
- * Get primary slave of bonded device
+ * Get primary member of bonded device
*
* @param bonded_port_id Port ID of bonded device.
*
* @return
- * Port Id of primary slave on success, -1 on failure
+ * Port Id of primary member on success, -1 on failure
*/
int
rte_eth_bond_primary_get(uint16_t bonded_port_id);
/**
- * Populate an array with list of the slaves port id's of the bonded device
+ * Populate an array with the list of member port IDs of the bonded device
*
* @param bonded_port_id Port ID of bonded eth_dev to interrogate
- * @param slaves Array to be populated with the current active slaves
- * @param len Length of slaves array
+ * @param members Array to be populated with the current members
+ * @param len Length of members array
*
* @return
- * Number of slaves associated with bonded device on success,
+ * Number of members associated with bonded device on success,
* negative value otherwise
*/
+__rte_experimental
int
-rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
- uint16_t len);
+rte_eth_bond_members_get(uint16_t bonded_port_id, uint16_t members[],
+ uint16_t len);
/**
- * Populate an array with list of the active slaves port id's of the bonded
+ * Populate an array with the list of active member port IDs of the bonded
* device.
*
* @param bonded_port_id Port ID of bonded eth_dev to interrogate
- * @param slaves Array to be populated with the current active slaves
- * @param len Length of slaves array
+ * @param members Array to be populated with the current active members
+ * @param len Length of members array
*
* @return
- * Number of active slaves associated with bonded device on success,
+ * Number of active members associated with bonded device on success,
* negative value otherwise
*/
+__rte_experimental
int
-rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
- uint16_t len);
+rte_eth_bond_active_members_get(uint16_t bonded_port_id, uint16_t members[],
+ uint16_t len);
/**
- * Set explicit MAC address to use on bonded device and it's slaves.
+ * Set explicit MAC address to use on bonded device and its members.
*
* @param bonded_port_id Port ID of bonded device.
* @param mac_addr MAC Address to use on bonded device overriding
- * slaves MAC addresses
+ * member MAC addresses
*
* @return
* 0 on success, negative value otherwise
@@ -228,8 +236,8 @@ rte_eth_bond_mac_address_set(uint16_t bonded_port_id,
struct rte_ether_addr *mac_addr);
/**
- * Reset bonded device to use MAC from primary slave on bonded device and it's
- * slaves.
+ * Reset bonded device to use MAC from primary member on bonded device and its
+ * members.
*
* @param bonded_port_id Port ID of bonded device.
*
@@ -266,7 +274,7 @@ rte_eth_bond_xmit_policy_get(uint16_t bonded_port_id);
/**
* Set the link monitoring frequency (in ms) for monitoring the link status of
- * slave devices
+ * member devices
*
* @param bonded_port_id Port ID of bonded device.
* @param internal_ms Monitoring interval in milliseconds
@@ -280,7 +288,7 @@ rte_eth_bond_link_monitoring_set(uint16_t bonded_port_id, uint32_t internal_ms);
/**
* Get the current link monitoring frequency (in ms) for monitoring of the link
- * status of slave devices
+ * status of member devices
*
* @param bonded_port_id Port ID of bonded device.
*
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 4a266bb2ca..ac9f414e74 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -19,7 +19,7 @@ static void bond_mode_8023ad_ext_periodic_cb(void *arg);
#define MODE4_DEBUG(fmt, ...) \
rte_log(RTE_LOG_DEBUG, bond_logtype, \
"%6u [Port %u: %s] " fmt, \
- bond_dbg_get_time_diff_ms(), slave_id, \
+ bond_dbg_get_time_diff_ms(), member_id, \
__func__, ##__VA_ARGS__)
static uint64_t start_time;
@@ -184,9 +184,9 @@ set_warning_flags(struct port *port, uint16_t flags)
}
static void
-show_warnings(uint16_t slave_id)
+show_warnings(uint16_t member_id)
{
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[member_id];
uint8_t warnings;
do {
@@ -205,36 +205,36 @@ show_warnings(uint16_t slave_id)
if (warnings & WRN_RX_QUEUE_FULL) {
RTE_BOND_LOG(DEBUG,
- "Slave %u: failed to enqueue LACP packet into RX ring.\n"
+ "Member %u: failed to enqueue LACP packet into RX ring.\n"
"Receive and transmit functions must be invoked on bonded "
"interface at least 10 times per second or LACP will not work correctly",
- slave_id);
+ member_id);
}
if (warnings & WRN_TX_QUEUE_FULL) {
RTE_BOND_LOG(DEBUG,
- "Slave %u: failed to enqueue LACP packet into TX ring.\n"
+ "Member %u: failed to enqueue LACP packet into TX ring.\n"
"Receive and transmit functions must be invoked on bonded "
"interface at least 10 times per second or LACP will not work correctly",
- slave_id);
+ member_id);
}
if (warnings & WRN_RX_MARKER_TO_FAST)
- RTE_BOND_LOG(INFO, "Slave %u: marker to early - ignoring.",
- slave_id);
+ RTE_BOND_LOG(INFO, "Member %u: marker too early - ignoring.",
+ member_id);
if (warnings & WRN_UNKNOWN_SLOW_TYPE) {
RTE_BOND_LOG(INFO,
- "Slave %u: ignoring unknown slow protocol frame type",
- slave_id);
+ "Member %u: ignoring unknown slow protocol frame type",
+ member_id);
}
if (warnings & WRN_UNKNOWN_MARKER_TYPE)
- RTE_BOND_LOG(INFO, "Slave %u: ignoring unknown marker type",
- slave_id);
+ RTE_BOND_LOG(INFO, "Member %u: ignoring unknown marker type",
+ member_id);
if (warnings & WRN_NOT_LACP_CAPABLE)
- MODE4_DEBUG("Port %u is not LACP capable!\n", slave_id);
+ MODE4_DEBUG("Port %u is not LACP capable!\n", member_id);
}
static void
@@ -256,10 +256,10 @@ record_default(struct port *port)
* @param port Port on which LACPDU was received.
*/
static void
-rx_machine(struct bond_dev_private *internals, uint16_t slave_id,
+rx_machine(struct bond_dev_private *internals, uint16_t member_id,
struct lacpdu *lacp)
{
- struct port *agg, *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *agg, *port = &bond_mode_8023ad_ports[member_id];
uint64_t timeout;
if (SM_FLAG(port, BEGIN)) {
@@ -389,9 +389,9 @@ rx_machine(struct bond_dev_private *internals, uint16_t slave_id,
* @param port Port to handle state machine.
*/
static void
-periodic_machine(struct bond_dev_private *internals, uint16_t slave_id)
+periodic_machine(struct bond_dev_private *internals, uint16_t member_id)
{
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[member_id];
/* Calculate if either site is LACP enabled */
uint64_t timeout;
uint8_t active = ACTOR_STATE(port, LACP_ACTIVE) ||
@@ -451,9 +451,9 @@ periodic_machine(struct bond_dev_private *internals, uint16_t slave_id)
* @param port Port to handle state machine.
*/
static void
-mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
+mux_machine(struct bond_dev_private *internals, uint16_t member_id)
{
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[member_id];
/* Save current state for later use */
const uint8_t state_mask = STATE_SYNCHRONIZATION | STATE_DISTRIBUTING |
@@ -527,8 +527,8 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
SM_FLAG_SET(port, NTT);
MODE4_DEBUG("COLLECTING -> DISTRIBUTING\n");
RTE_BOND_LOG(INFO,
- "Bond %u: slave id %u distributing started.",
- internals->port_id, slave_id);
+ "Bond %u: member id %u distributing started.",
+ internals->port_id, member_id);
}
} else {
if (!PARTNER_STATE(port, COLLECTING)) {
@@ -538,8 +538,8 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
SM_FLAG_SET(port, NTT);
MODE4_DEBUG("DISTRIBUTING -> COLLECTING\n");
RTE_BOND_LOG(INFO,
- "Bond %u: slave id %u distributing stopped.",
- internals->port_id, slave_id);
+ "Bond %u: member id %u distributing stopped.",
+ internals->port_id, member_id);
}
}
}
@@ -554,9 +554,9 @@ mux_machine(struct bond_dev_private *internals, uint16_t slave_id)
* @param port
*/
static void
-tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
+tx_machine(struct bond_dev_private *internals, uint16_t member_id)
{
- struct port *agg, *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *agg, *port = &bond_mode_8023ad_ports[member_id];
struct rte_mbuf *lacp_pkt = NULL;
struct lacpdu_header *hdr;
@@ -587,7 +587,7 @@ tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
/* Source and destination MAC */
rte_ether_addr_copy(&lacp_mac_addr, &hdr->eth_hdr.dst_addr);
- rte_eth_macaddr_get(slave_id, &hdr->eth_hdr.src_addr);
+ rte_eth_macaddr_get(member_id, &hdr->eth_hdr.src_addr);
hdr->eth_hdr.ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_SLOW);
lacpdu = &hdr->lacpdu;
@@ -635,10 +635,10 @@ tx_machine(struct bond_dev_private *internals, uint16_t slave_id)
return;
}
} else {
- uint16_t pkts_sent = rte_eth_tx_prepare(slave_id,
+ uint16_t pkts_sent = rte_eth_tx_prepare(member_id,
internals->mode4.dedicated_queues.tx_qid,
&lacp_pkt, 1);
- pkts_sent = rte_eth_tx_burst(slave_id,
+ pkts_sent = rte_eth_tx_burst(member_id,
internals->mode4.dedicated_queues.tx_qid,
&lacp_pkt, pkts_sent);
if (pkts_sent != 1) {
@@ -679,40 +679,40 @@ max_index(uint64_t *a, int n)
* @param port_pos Port to assign.
*/
static void
-selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
+selection_logic(struct bond_dev_private *internals, uint16_t member_id)
{
struct port *agg, *port;
- uint16_t slaves_count, new_agg_id, i, j = 0;
- uint16_t *slaves;
+ uint16_t members_count, new_agg_id, i, j = 0;
+ uint16_t *members;
uint64_t agg_bandwidth[RTE_MAX_ETHPORTS] = {0};
uint64_t agg_count[RTE_MAX_ETHPORTS] = {0};
- uint16_t default_slave = 0;
+ uint16_t default_member = 0;
struct rte_eth_link link_info;
uint16_t agg_new_idx = 0;
int ret;
- slaves = internals->active_slaves;
- slaves_count = internals->active_slave_count;
- port = &bond_mode_8023ad_ports[slave_id];
+ members = internals->active_members;
+ members_count = internals->active_member_count;
+ port = &bond_mode_8023ad_ports[member_id];
/* Search for aggregator suitable for this port */
- for (i = 0; i < slaves_count; ++i) {
- agg = &bond_mode_8023ad_ports[slaves[i]];
+ for (i = 0; i < members_count; ++i) {
+ agg = &bond_mode_8023ad_ports[members[i]];
/* Skip ports that are not aggregators */
- if (agg->aggregator_port_id != slaves[i])
+ if (agg->aggregator_port_id != members[i])
continue;
- ret = rte_eth_link_get_nowait(slaves[i], &link_info);
+ ret = rte_eth_link_get_nowait(members[i], &link_info);
if (ret < 0) {
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s\n",
- slaves[i], rte_strerror(-ret));
+ "Member (port %u) link get failed: %s\n",
+ members[i], rte_strerror(-ret));
continue;
}
agg_count[i] += 1;
agg_bandwidth[i] += link_info.link_speed;
- /* Actors system ID is not checked since all slave device have the same
+ /* Actor's system ID is not checked since all member devices have the same
* ID (MAC address). */
if ((agg->actor.key == port->actor.key &&
agg->partner.system_priority == port->partner.system_priority &&
@@ -724,31 +724,31 @@ selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
rte_cpu_to_be_16(BOND_LINK_FULL_DUPLEX_KEY)) != 0) {
if (j == 0)
- default_slave = i;
+ default_member = i;
j++;
}
}
switch (internals->mode4.agg_selection) {
case AGG_COUNT:
- agg_new_idx = max_index(agg_count, slaves_count);
- new_agg_id = slaves[agg_new_idx];
+ agg_new_idx = max_index(agg_count, members_count);
+ new_agg_id = members[agg_new_idx];
break;
case AGG_BANDWIDTH:
- agg_new_idx = max_index(agg_bandwidth, slaves_count);
- new_agg_id = slaves[agg_new_idx];
+ agg_new_idx = max_index(agg_bandwidth, members_count);
+ new_agg_id = members[agg_new_idx];
break;
case AGG_STABLE:
- if (default_slave == slaves_count)
- new_agg_id = slaves[slave_id];
+ if (default_member == members_count)
+ new_agg_id = members[member_id];
else
- new_agg_id = slaves[default_slave];
+ new_agg_id = members[default_member];
break;
default:
- if (default_slave == slaves_count)
- new_agg_id = slaves[slave_id];
+ if (default_member == members_count)
+ new_agg_id = members[member_id];
else
- new_agg_id = slaves[default_slave];
+ new_agg_id = members[default_member];
break;
}
@@ -758,7 +758,7 @@ selection_logic(struct bond_dev_private *internals, uint16_t slave_id)
MODE4_DEBUG("-> SELECTED: ID=%3u\n"
"\t%s aggregator ID=%3u\n",
port->aggregator_port_id,
- port->aggregator_port_id == slave_id ?
+ port->aggregator_port_id == member_id ?
"aggregator not found, using default" : "aggregator found",
port->aggregator_port_id);
}
@@ -802,7 +802,7 @@ link_speed_key(uint16_t speed) {
}
static void
-rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
+rx_machine_update(struct bond_dev_private *internals, uint16_t member_id,
struct rte_mbuf *lacp_pkt) {
struct lacpdu_header *lacp;
struct lacpdu_actor_partner_params *partner;
@@ -813,7 +813,7 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
RTE_ASSERT(lacp->lacpdu.subtype == SLOW_SUBTYPE_LACP);
partner = &lacp->lacpdu.partner;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
agg = &bond_mode_8023ad_ports[port->aggregator_port_id];
if (rte_is_zero_ether_addr(&partner->port_params.system) ||
@@ -822,7 +822,7 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
/* This LACP frame is addressed to the bonding port
* so pass it to rx_machine.
*/
- rx_machine(internals, slave_id, &lacp->lacpdu);
+ rx_machine(internals, member_id, &lacp->lacpdu);
} else {
char preferred_system_name[RTE_ETHER_ADDR_FMT_SIZE];
char self_system_name[RTE_ETHER_ADDR_FMT_SIZE];
@@ -837,16 +837,16 @@ rx_machine_update(struct bond_dev_private *internals, uint16_t slave_id,
}
rte_pktmbuf_free(lacp_pkt);
} else
- rx_machine(internals, slave_id, NULL);
+ rx_machine(internals, member_id, NULL);
}
static void
bond_mode_8023ad_dedicated_rxq_process(struct bond_dev_private *internals,
- uint16_t slave_id)
+ uint16_t member_id)
{
#define DEDICATED_QUEUE_BURST_SIZE 32
struct rte_mbuf *lacp_pkt[DEDICATED_QUEUE_BURST_SIZE];
- uint16_t rx_count = rte_eth_rx_burst(slave_id,
+ uint16_t rx_count = rte_eth_rx_burst(member_id,
internals->mode4.dedicated_queues.rx_qid,
lacp_pkt, DEDICATED_QUEUE_BURST_SIZE);
@@ -854,10 +854,10 @@ bond_mode_8023ad_dedicated_rxq_process(struct bond_dev_private *internals,
uint16_t i;
for (i = 0; i < rx_count; i++)
- bond_mode_8023ad_handle_slow_pkt(internals, slave_id,
+ bond_mode_8023ad_handle_slow_pkt(internals, member_id,
lacp_pkt[i]);
} else {
- rx_machine_update(internals, slave_id, NULL);
+ rx_machine_update(internals, member_id, NULL);
}
}
@@ -868,23 +868,23 @@ bond_mode_8023ad_periodic_cb(void *arg)
struct bond_dev_private *internals = bond_dev->data->dev_private;
struct port *port;
struct rte_eth_link link_info;
- struct rte_ether_addr slave_addr;
+ struct rte_ether_addr member_addr;
struct rte_mbuf *lacp_pkt = NULL;
- uint16_t slave_id;
+ uint16_t member_id;
uint16_t i;
/* Update link status on each port */
- for (i = 0; i < internals->active_slave_count; i++) {
+ for (i = 0; i < internals->active_member_count; i++) {
uint16_t key;
int ret;
- slave_id = internals->active_slaves[i];
- ret = rte_eth_link_get_nowait(slave_id, &link_info);
+ member_id = internals->active_members[i];
+ ret = rte_eth_link_get_nowait(member_id, &link_info);
if (ret < 0) {
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s\n",
- slave_id, rte_strerror(-ret));
+ "Member (port %u) link get failed: %s\n",
+ member_id, rte_strerror(-ret));
}
if (ret >= 0 && link_info.link_status != 0) {
@@ -895,8 +895,8 @@ bond_mode_8023ad_periodic_cb(void *arg)
key = 0;
}
- rte_eth_macaddr_get(slave_id, &slave_addr);
- port = &bond_mode_8023ad_ports[slave_id];
+ rte_eth_macaddr_get(member_id, &member_addr);
+ port = &bond_mode_8023ad_ports[member_id];
key = rte_cpu_to_be_16(key);
if (key != port->actor.key) {
@@ -907,16 +907,16 @@ bond_mode_8023ad_periodic_cb(void *arg)
SM_FLAG_SET(port, NTT);
}
- if (!rte_is_same_ether_addr(&port->actor.system, &slave_addr)) {
- rte_ether_addr_copy(&slave_addr, &port->actor.system);
- if (port->aggregator_port_id == slave_id)
+ if (!rte_is_same_ether_addr(&port->actor.system, &member_addr)) {
+ rte_ether_addr_copy(&member_addr, &port->actor.system);
+ if (port->aggregator_port_id == member_id)
SM_FLAG_SET(port, NTT);
}
}
- for (i = 0; i < internals->active_slave_count; i++) {
- slave_id = internals->active_slaves[i];
- port = &bond_mode_8023ad_ports[slave_id];
+ for (i = 0; i < internals->active_member_count; i++) {
+ member_id = internals->active_members[i];
+ port = &bond_mode_8023ad_ports[member_id];
if ((port->actor.key &
rte_cpu_to_be_16(BOND_LINK_FULL_DUPLEX_KEY)) == 0) {
@@ -947,19 +947,19 @@ bond_mode_8023ad_periodic_cb(void *arg)
if (retval != 0)
lacp_pkt = NULL;
- rx_machine_update(internals, slave_id, lacp_pkt);
+ rx_machine_update(internals, member_id, lacp_pkt);
} else {
bond_mode_8023ad_dedicated_rxq_process(internals,
- slave_id);
+ member_id);
}
- periodic_machine(internals, slave_id);
- mux_machine(internals, slave_id);
- tx_machine(internals, slave_id);
- selection_logic(internals, slave_id);
+ periodic_machine(internals, member_id);
+ mux_machine(internals, member_id);
+ tx_machine(internals, member_id);
+ selection_logic(internals, member_id);
SM_FLAG_CLR(port, BEGIN);
- show_warnings(slave_id);
+ show_warnings(member_id);
}
rte_eal_alarm_set(internals->mode4.update_timeout_us,
@@ -967,34 +967,34 @@ bond_mode_8023ad_periodic_cb(void *arg)
}
static int
-bond_mode_8023ad_register_lacp_mac(uint16_t slave_id)
+bond_mode_8023ad_register_lacp_mac(uint16_t member_id)
{
int ret;
- ret = rte_eth_allmulticast_enable(slave_id);
+ ret = rte_eth_allmulticast_enable(member_id);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"failed to enable allmulti mode for port %u: %s",
- slave_id, rte_strerror(-ret));
+ member_id, rte_strerror(-ret));
}
- if (rte_eth_allmulticast_get(slave_id)) {
+ if (rte_eth_allmulticast_get(member_id)) {
RTE_BOND_LOG(DEBUG, "forced allmulti for port %u",
- slave_id);
- bond_mode_8023ad_ports[slave_id].forced_rx_flags =
+ member_id);
+ bond_mode_8023ad_ports[member_id].forced_rx_flags =
BOND_8023AD_FORCED_ALLMULTI;
return 0;
}
- ret = rte_eth_promiscuous_enable(slave_id);
+ ret = rte_eth_promiscuous_enable(member_id);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"failed to enable promiscuous mode for port %u: %s",
- slave_id, rte_strerror(-ret));
+ member_id, rte_strerror(-ret));
}
- if (rte_eth_promiscuous_get(slave_id)) {
+ if (rte_eth_promiscuous_get(member_id)) {
RTE_BOND_LOG(DEBUG, "forced promiscuous for port %u",
- slave_id);
- bond_mode_8023ad_ports[slave_id].forced_rx_flags =
+ member_id);
+ bond_mode_8023ad_ports[member_id].forced_rx_flags =
BOND_8023AD_FORCED_PROMISC;
return 0;
}
@@ -1003,27 +1003,27 @@ bond_mode_8023ad_register_lacp_mac(uint16_t slave_id)
}
static void
-bond_mode_8023ad_unregister_lacp_mac(uint16_t slave_id)
+bond_mode_8023ad_unregister_lacp_mac(uint16_t member_id)
{
int ret;
- switch (bond_mode_8023ad_ports[slave_id].forced_rx_flags) {
+ switch (bond_mode_8023ad_ports[member_id].forced_rx_flags) {
case BOND_8023AD_FORCED_ALLMULTI:
- RTE_BOND_LOG(DEBUG, "unset allmulti for port %u", slave_id);
- ret = rte_eth_allmulticast_disable(slave_id);
+ RTE_BOND_LOG(DEBUG, "unset allmulti for port %u", member_id);
+ ret = rte_eth_allmulticast_disable(member_id);
if (ret != 0)
RTE_BOND_LOG(ERR,
"failed to disable allmulti mode for port %u: %s",
- slave_id, rte_strerror(-ret));
+ member_id, rte_strerror(-ret));
break;
case BOND_8023AD_FORCED_PROMISC:
- RTE_BOND_LOG(DEBUG, "unset promisc for port %u", slave_id);
- ret = rte_eth_promiscuous_disable(slave_id);
+ RTE_BOND_LOG(DEBUG, "unset promisc for port %u", member_id);
+ ret = rte_eth_promiscuous_disable(member_id);
if (ret != 0)
RTE_BOND_LOG(ERR,
"failed to disable promiscuous mode for port %u: %s",
- slave_id, rte_strerror(-ret));
+ member_id, rte_strerror(-ret));
break;
default:
@@ -1032,12 +1032,12 @@ bond_mode_8023ad_unregister_lacp_mac(uint16_t slave_id)
}
void
-bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
- uint16_t slave_id)
+bond_mode_8023ad_activate_member(struct rte_eth_dev *bond_dev,
+ uint16_t member_id)
{
struct bond_dev_private *internals = bond_dev->data->dev_private;
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[member_id];
struct port_params initial = {
.system = { { 0 } },
.system_priority = rte_cpu_to_be_16(0xFFFF),
@@ -1053,15 +1053,15 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
struct bond_tx_queue *bd_tx_q;
uint16_t q_id;
- /* Given slave mus not be in active list */
- RTE_ASSERT(find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_id) == internals->active_slave_count);
+ /* Given member must not be in the active list */
+ RTE_ASSERT(find_member_by_id(internals->active_members,
+ internals->active_member_count, member_id) == internals->active_member_count);
RTE_SET_USED(internals); /* used only for assert when enabled */
memcpy(&port->actor, &initial, sizeof(struct port_params));
/* Standard requires that port ID must be greater than 0.
* Add 1 to get corresponding port_number */
- port->actor.port_number = rte_cpu_to_be_16(slave_id + 1);
+ port->actor.port_number = rte_cpu_to_be_16(member_id + 1);
memcpy(&port->partner, &initial, sizeof(struct port_params));
memcpy(&port->partner_admin, &initial, sizeof(struct port_params));
@@ -1072,11 +1072,11 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
port->sm_flags = SM_FLAGS_BEGIN;
/* use this port as aggregator */
- port->aggregator_port_id = slave_id;
+ port->aggregator_port_id = member_id;
- if (bond_mode_8023ad_register_lacp_mac(slave_id) < 0) {
- RTE_BOND_LOG(WARNING, "slave %u is most likely broken and won't receive LACP packets",
- slave_id);
+ if (bond_mode_8023ad_register_lacp_mac(member_id) < 0) {
+ RTE_BOND_LOG(WARNING, "member %u is most likely broken and won't receive LACP packets",
+ member_id);
}
timer_cancel(&port->warning_timer);
@@ -1087,22 +1087,24 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
RTE_ASSERT(port->rx_ring == NULL);
RTE_ASSERT(port->tx_ring == NULL);
- socket_id = rte_eth_dev_socket_id(slave_id);
+ socket_id = rte_eth_dev_socket_id(member_id);
if (socket_id == -1)
socket_id = rte_socket_id();
element_size = sizeof(struct slow_protocol_frame) +
RTE_PKTMBUF_HEADROOM;
- /* The size of the mempool should be at least:
- * the sum of the TX descriptors + BOND_MODE_8023AX_SLAVE_TX_PKTS */
- total_tx_desc = BOND_MODE_8023AX_SLAVE_TX_PKTS;
+ /*
+ * The size of the mempool should be at least:
+ * the sum of the TX descriptors + BOND_MODE_8023AX_MEMBER_TX_PKTS.
+ */
+ total_tx_desc = BOND_MODE_8023AX_MEMBER_TX_PKTS;
for (q_id = 0; q_id < bond_dev->data->nb_tx_queues; q_id++) {
bd_tx_q = (struct bond_tx_queue*)bond_dev->data->tx_queues[q_id];
total_tx_desc += bd_tx_q->nb_tx_desc;
}
- snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
+ snprintf(mem_name, RTE_DIM(mem_name), "member_port%u_pool", member_id);
port->mbuf_pool = rte_pktmbuf_pool_create(mem_name, total_tx_desc,
RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
@@ -1111,39 +1113,39 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
/* Any memory allocation failure in initialization is critical because
* resources can't be freed, so reinitialization is impossible. */
if (port->mbuf_pool == NULL) {
- rte_panic("Slave %u: Failed to create memory pool '%s': %s\n",
- slave_id, mem_name, rte_strerror(rte_errno));
+ rte_panic("Member %u: Failed to create memory pool '%s': %s\n",
+ member_id, mem_name, rte_strerror(rte_errno));
}
- snprintf(mem_name, RTE_DIM(mem_name), "slave_%u_rx", slave_id);
+ snprintf(mem_name, RTE_DIM(mem_name), "member_%u_rx", member_id);
port->rx_ring = rte_ring_create(mem_name,
- rte_align32pow2(BOND_MODE_8023AX_SLAVE_RX_PKTS), socket_id, 0);
+ rte_align32pow2(BOND_MODE_8023AX_MEMBER_RX_PKTS), socket_id, 0);
if (port->rx_ring == NULL) {
- rte_panic("Slave %u: Failed to create rx ring '%s': %s\n", slave_id,
+ rte_panic("Member %u: Failed to create rx ring '%s': %s\n", member_id,
mem_name, rte_strerror(rte_errno));
}
/* TX ring is at least one pkt longer to make room for marker packet. */
- snprintf(mem_name, RTE_DIM(mem_name), "slave_%u_tx", slave_id);
+ snprintf(mem_name, RTE_DIM(mem_name), "member_%u_tx", member_id);
port->tx_ring = rte_ring_create(mem_name,
- rte_align32pow2(BOND_MODE_8023AX_SLAVE_TX_PKTS + 1), socket_id, 0);
+ rte_align32pow2(BOND_MODE_8023AX_MEMBER_TX_PKTS + 1), socket_id, 0);
if (port->tx_ring == NULL) {
- rte_panic("Slave %u: Failed to create tx ring '%s': %s\n", slave_id,
+ rte_panic("Member %u: Failed to create tx ring '%s': %s\n", member_id,
mem_name, rte_strerror(rte_errno));
}
}
int
-bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *bond_dev __rte_unused,
- uint16_t slave_id)
+bond_mode_8023ad_deactivate_member(struct rte_eth_dev *bond_dev __rte_unused,
+ uint16_t member_id)
{
void *pkt = NULL;
struct port *port = NULL;
uint8_t old_partner_state;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
ACTOR_STATE_CLR(port, AGGREGATION);
port->selected = UNSELECTED;
@@ -1151,7 +1153,7 @@ bond_mode_8023ad_deactivate_slave(struct rte_eth_dev *bond_dev __rte_unused,
old_partner_state = port->partner_state;
record_default(port);
- bond_mode_8023ad_unregister_lacp_mac(slave_id);
+ bond_mode_8023ad_unregister_lacp_mac(member_id);
/* If partner timeout state changes then disable timer */
if (!((old_partner_state ^ port->partner_state) &
@@ -1174,30 +1176,30 @@ void
bond_mode_8023ad_mac_address_update(struct rte_eth_dev *bond_dev)
{
struct bond_dev_private *internals = bond_dev->data->dev_private;
- struct rte_ether_addr slave_addr;
- struct port *slave, *agg_slave;
- uint16_t slave_id, i, j;
+ struct rte_ether_addr member_addr;
+ struct port *member, *agg_member;
+ uint16_t member_id, i, j;
bond_mode_8023ad_stop(bond_dev);
- for (i = 0; i < internals->active_slave_count; i++) {
- slave_id = internals->active_slaves[i];
- slave = &bond_mode_8023ad_ports[slave_id];
- rte_eth_macaddr_get(slave_id, &slave_addr);
+ for (i = 0; i < internals->active_member_count; i++) {
+ member_id = internals->active_members[i];
+ member = &bond_mode_8023ad_ports[member_id];
+ rte_eth_macaddr_get(member_id, &member_addr);
- if (rte_is_same_ether_addr(&slave_addr, &slave->actor.system))
+ if (rte_is_same_ether_addr(&member_addr, &member->actor.system))
continue;
- rte_ether_addr_copy(&slave_addr, &slave->actor.system);
+ rte_ether_addr_copy(&member_addr, &member->actor.system);
/* Do nothing if this port is not an aggregator. Otherwise set the
* NTT flag on every port that uses this aggregator. */
- if (slave->aggregator_port_id != slave_id)
+ if (member->aggregator_port_id != member_id)
continue;
- for (j = 0; j < internals->active_slave_count; j++) {
- agg_slave = &bond_mode_8023ad_ports[internals->active_slaves[j]];
- if (agg_slave->aggregator_port_id == slave_id)
- SM_FLAG_SET(agg_slave, NTT);
+ for (j = 0; j < internals->active_member_count; j++) {
+ agg_member = &bond_mode_8023ad_ports[internals->active_members[j]];
+ if (agg_member->aggregator_port_id == member_id)
+ SM_FLAG_SET(agg_member, NTT);
}
}
@@ -1288,9 +1290,9 @@ bond_mode_8023ad_enable(struct rte_eth_dev *bond_dev)
struct bond_dev_private *internals = bond_dev->data->dev_private;
uint16_t i;
- for (i = 0; i < internals->active_slave_count; i++)
- bond_mode_8023ad_activate_slave(bond_dev,
- internals->active_slaves[i]);
+ for (i = 0; i < internals->active_member_count; i++)
+ bond_mode_8023ad_activate_member(bond_dev,
+ internals->active_members[i]);
return 0;
}
@@ -1326,10 +1328,10 @@ bond_mode_8023ad_stop(struct rte_eth_dev *bond_dev)
void
bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
- uint16_t slave_id, struct rte_mbuf *pkt)
+ uint16_t member_id, struct rte_mbuf *pkt)
{
struct mode8023ad_private *mode4 = &internals->mode4;
- struct port *port = &bond_mode_8023ad_ports[slave_id];
+ struct port *port = &bond_mode_8023ad_ports[member_id];
struct marker_header *m_hdr;
uint64_t marker_timer, old_marker_timer;
int retval;
@@ -1362,7 +1364,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
} while (unlikely(retval == 0));
m_hdr->marker.tlv_type_marker = MARKER_TLV_TYPE_RESP;
- rte_eth_macaddr_get(slave_id, &m_hdr->eth_hdr.src_addr);
+ rte_eth_macaddr_get(member_id, &m_hdr->eth_hdr.src_addr);
if (internals->mode4.dedicated_queues.enabled == 0) {
if (rte_ring_enqueue(port->tx_ring, pkt) != 0) {
@@ -1373,10 +1375,10 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
}
} else {
/* Send packet directly to the slow queue */
- uint16_t tx_count = rte_eth_tx_prepare(slave_id,
+ uint16_t tx_count = rte_eth_tx_prepare(member_id,
internals->mode4.dedicated_queues.tx_qid,
&pkt, 1);
- tx_count = rte_eth_tx_burst(slave_id,
+ tx_count = rte_eth_tx_burst(member_id,
internals->mode4.dedicated_queues.tx_qid,
&pkt, tx_count);
if (tx_count != 1) {
@@ -1394,7 +1396,7 @@ bond_mode_8023ad_handle_slow_pkt(struct bond_dev_private *internals,
goto free_out;
}
} else
- rx_machine_update(internals, slave_id, pkt);
+ rx_machine_update(internals, member_id, pkt);
} else {
wrn = WRN_UNKNOWN_SLOW_TYPE;
goto free_out;
@@ -1517,8 +1519,8 @@ rte_eth_bond_8023ad_setup(uint16_t port_id,
int
-rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
- struct rte_eth_bond_8023ad_slave_info *info)
+rte_eth_bond_8023ad_member_info(uint16_t port_id, uint16_t member_id,
+ struct rte_eth_bond_8023ad_member_info *info)
{
struct rte_eth_dev *bond_dev;
struct bond_dev_private *internals;
@@ -1531,12 +1533,12 @@ rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
bond_dev = &rte_eth_devices[port_id];
internals = bond_dev->data->dev_private;
- if (find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_id) ==
- internals->active_slave_count)
+ if (find_member_by_id(internals->active_members,
+ internals->active_member_count, member_id) ==
+ internals->active_member_count)
return -EINVAL;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
info->selected = port->selected;
info->actor_state = port->actor_state;
@@ -1550,7 +1552,7 @@ rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
}
static int
-bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
+bond_8023ad_ext_validate(uint16_t port_id, uint16_t member_id)
{
struct rte_eth_dev *bond_dev;
struct bond_dev_private *internals;
@@ -1565,9 +1567,9 @@ bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
return -EINVAL;
internals = bond_dev->data->dev_private;
- if (find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_id) ==
- internals->active_slave_count)
+ if (find_member_by_id(internals->active_members,
+ internals->active_member_count, member_id) ==
+ internals->active_member_count)
return -EINVAL;
mode4 = &internals->mode4;
@@ -1578,17 +1580,17 @@ bond_8023ad_ext_validate(uint16_t port_id, uint16_t slave_id)
}
int
-rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t member_id,
int enabled)
{
struct port *port;
int res;
- res = bond_8023ad_ext_validate(port_id, slave_id);
+ res = bond_8023ad_ext_validate(port_id, member_id);
if (res != 0)
return res;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
if (enabled)
ACTOR_STATE_SET(port, COLLECTING);
@@ -1599,17 +1601,17 @@ rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
}
int
-rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t member_id,
int enabled)
{
struct port *port;
int res;
- res = bond_8023ad_ext_validate(port_id, slave_id);
+ res = bond_8023ad_ext_validate(port_id, member_id);
if (res != 0)
return res;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
if (enabled)
ACTOR_STATE_SET(port, DISTRIBUTING);
@@ -1620,45 +1622,45 @@ rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
}
int
-rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t slave_id)
+rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t member_id)
{
struct port *port;
int err;
- err = bond_8023ad_ext_validate(port_id, slave_id);
+ err = bond_8023ad_ext_validate(port_id, member_id);
if (err != 0)
return err;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
return ACTOR_STATE(port, DISTRIBUTING);
}
int
-rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t slave_id)
+rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t member_id)
{
struct port *port;
int err;
- err = bond_8023ad_ext_validate(port_id, slave_id);
+ err = bond_8023ad_ext_validate(port_id, member_id);
if (err != 0)
return err;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
return ACTOR_STATE(port, COLLECTING);
}
int
-rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t member_id,
struct rte_mbuf *lacp_pkt)
{
struct port *port;
int res;
- res = bond_8023ad_ext_validate(port_id, slave_id);
+ res = bond_8023ad_ext_validate(port_id, member_id);
if (res != 0)
return res;
- port = &bond_mode_8023ad_ports[slave_id];
+ port = &bond_mode_8023ad_ports[member_id];
if (rte_pktmbuf_pkt_len(lacp_pkt) < sizeof(struct lacpdu_header))
return -EINVAL;
@@ -1683,11 +1685,11 @@ bond_mode_8023ad_ext_periodic_cb(void *arg)
struct mode8023ad_private *mode4 = &internals->mode4;
struct port *port;
void *pkt = NULL;
- uint16_t i, slave_id;
+ uint16_t i, member_id;
- for (i = 0; i < internals->active_slave_count; i++) {
- slave_id = internals->active_slaves[i];
- port = &bond_mode_8023ad_ports[slave_id];
+ for (i = 0; i < internals->active_member_count; i++) {
+ member_id = internals->active_members[i];
+ port = &bond_mode_8023ad_ports[member_id];
if (rte_ring_dequeue(port->rx_ring, &pkt) == 0) {
struct rte_mbuf *lacp_pkt = pkt;
@@ -1700,7 +1702,7 @@ bond_mode_8023ad_ext_periodic_cb(void *arg)
/* This is LACP frame so pass it to rx callback.
* Callback is responsible for freeing mbuf.
*/
- mode4->slowrx_cb(slave_id, lacp_pkt);
+ mode4->slowrx_cb(member_id, lacp_pkt);
}
}
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.h b/drivers/net/bonding/rte_eth_bond_8023ad.h
index 921b4446b7..589141d42c 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.h
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.h
@@ -35,7 +35,7 @@ extern "C" {
#define MARKER_TLV_TYPE_INFO 0x01
#define MARKER_TLV_TYPE_RESP 0x02
-typedef void (*rte_eth_bond_8023ad_ext_slowrx_fn)(uint16_t slave_id,
+typedef void (*rte_eth_bond_8023ad_ext_slowrx_fn)(uint16_t member_id,
struct rte_mbuf *lacp_pkt);
enum rte_bond_8023ad_selection {
@@ -66,13 +66,13 @@ struct port_params {
uint16_t system_priority;
/**< System priority (unused in current implementation) */
struct rte_ether_addr system;
- /**< System ID - Slave MAC address, same as bonding MAC address */
+ /**< System ID - Member MAC address, same as bonding MAC address */
uint16_t key;
/**< Speed information (implementation dependent) and duplex. */
uint16_t port_priority;
/**< Priority of this (unused in current implementation) */
uint16_t port_number;
- /**< Port number. It corresponds to slave port id. */
+ /**< Port number. It corresponds to member port id. */
} __rte_packed __rte_aligned(2);
struct lacpdu_actor_partner_params {
@@ -141,7 +141,7 @@ struct rte_eth_bond_8023ad_conf {
enum rte_bond_8023ad_agg_selection agg_selection;
};
-struct rte_eth_bond_8023ad_slave_info {
+struct rte_eth_bond_8023ad_member_info {
enum rte_bond_8023ad_selection selected;
uint8_t actor_state;
struct port_params actor;
@@ -184,100 +184,101 @@ rte_eth_bond_8023ad_setup(uint16_t port_id,
/**
* @internal
*
- * Function returns current state of given slave device.
+ * Function returns the current state of the given member device.
*
- * @param slave_id Port id of valid slave.
+ * @param member_id Port id of valid member.
* @param conf buffer for configuration
* @return
* 0 - if ok
- * -EINVAL if conf is NULL or slave id is invalid (not a slave of given
+ * -EINVAL if conf is NULL or member id is invalid (not a member of given
* bonded device or is not inactive).
*/
+__rte_experimental
int
-rte_eth_bond_8023ad_slave_info(uint16_t port_id, uint16_t slave_id,
- struct rte_eth_bond_8023ad_slave_info *conf);
+rte_eth_bond_8023ad_member_info(uint16_t port_id, uint16_t member_id,
+ struct rte_eth_bond_8023ad_member_info *conf);
/**
- * Configure a slave port to start collecting.
+ * Configure a member port to start collecting.
*
* @param port_id Bonding device id
- * @param slave_id Port id of valid slave.
+ * @param member_id Port id of valid member.
* @param enabled Non-zero when collection enabled.
* @return
* 0 - if ok
- * -EINVAL if slave is not valid.
+ * -EINVAL if member is not valid.
*/
int
-rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_collect(uint16_t port_id, uint16_t member_id,
int enabled);
/**
- * Get COLLECTING flag from slave port actor state.
+ * Get COLLECTING flag from member port actor state.
*
* @param port_id Bonding device id
- * @param slave_id Port id of valid slave.
+ * @param member_id Port id of valid member.
* @return
* 0 - if not set
* 1 - if set
- * -EINVAL if slave is not valid.
+ * -EINVAL if member is not valid.
*/
int
-rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t slave_id);
+rte_eth_bond_8023ad_ext_collect_get(uint16_t port_id, uint16_t member_id);
/**
- * Configure a slave port to start distributing.
+ * Configure a member port to start distributing.
*
* @param port_id Bonding device id
- * @param slave_id Port id of valid slave.
+ * @param member_id Port id of valid member.
* @param enabled Non-zero when distribution enabled.
* @return
* 0 - if ok
- * -EINVAL if slave is not valid.
+ * -EINVAL if member is not valid.
*/
int
-rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_distrib(uint16_t port_id, uint16_t member_id,
int enabled);
/**
- * Get DISTRIBUTING flag from slave port actor state.
+ * Get DISTRIBUTING flag from member port actor state.
*
* @param port_id Bonding device id
- * @param slave_id Port id of valid slave.
+ * @param member_id Port id of valid member.
* @return
* 0 - if not set
* 1 - if set
- * -EINVAL if slave is not valid.
+ * -EINVAL if member is not valid.
*/
int
-rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t slave_id);
+rte_eth_bond_8023ad_ext_distrib_get(uint16_t port_id, uint16_t member_id);
/**
* LACPDU transmit path for external 802.3ad state machine. Caller retains
* ownership of the packet on failure.
*
* @param port_id Bonding device id
- * @param slave_id Port ID of valid slave device.
+ * @param member_id Port ID of valid member device.
* @param lacp_pkt mbuf containing LACPDU.
*
* @return
* 0 on success, negative value otherwise.
*/
int
-rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t slave_id,
+rte_eth_bond_8023ad_ext_slowtx(uint16_t port_id, uint16_t member_id,
struct rte_mbuf *lacp_pkt);
/**
- * Enable dedicated hw queues for 802.3ad control plane traffic on slaves
+ * Enable dedicated hw queues for 802.3ad control plane traffic on members
*
- * This function creates an additional tx and rx queue on each slave for
+ * This function creates an additional tx and rx queue on each member for
* dedicated 802.3ad control plane traffic . A flow filtering rule is
- * programmed on each slave to redirect all LACP slow packets to that rx queue
+ * programmed on each member to redirect all LACP slow packets to that rx queue
* for processing in the LACP state machine, this removes the need to filter
* these packets in the bonded devices data path. The additional tx queue is
* used to enable the LACP state machine to enqueue LACP packets directly to
- * slave hw independently of the bonded devices data path.
+ * member hw independently of the bonded devices data path.
*
- * To use this feature all slaves must support the programming of the flow
+ * To use this feature all members must support the programming of the flow
* filter rule required for rx and have enough queues that one rx and tx queue
* can be reserved for the LACP state machines control packets.
*
@@ -292,7 +293,7 @@ int
rte_eth_bond_8023ad_dedicated_queues_enable(uint16_t port_id);
/**
- * Disable slow queue on slaves
+ * Disable slow queue on members
*
* This function disables hardware slow packet filter.
*
diff --git a/drivers/net/bonding/rte_eth_bond_alb.c b/drivers/net/bonding/rte_eth_bond_alb.c
index 86335a7971..56945e2349 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.c
+++ b/drivers/net/bonding/rte_eth_bond_alb.c
@@ -19,13 +19,13 @@ simple_hash(uint8_t *hash_start, int hash_size)
}
static uint16_t
-calculate_slave(struct bond_dev_private *internals)
+calculate_member(struct bond_dev_private *internals)
{
uint16_t idx;
- idx = (internals->mode6.last_slave + 1) % internals->active_slave_count;
- internals->mode6.last_slave = idx;
- return internals->active_slaves[idx];
+ idx = (internals->mode6.last_member + 1) % internals->active_member_count;
+ internals->mode6.last_member = idx;
+ return internals->active_members[idx];
}
int
@@ -41,7 +41,7 @@ bond_mode_alb_enable(struct rte_eth_dev *bond_dev)
/* Fill hash table with initial values */
memset(hash_table, 0, sizeof(struct client_data) * ALB_HASH_TABLE_SIZE);
rte_spinlock_init(&internals->mode6.lock);
- internals->mode6.last_slave = ALB_NULL_INDEX;
+ internals->mode6.last_member = ALB_NULL_INDEX;
internals->mode6.ntt = 0;
/* Initialize memory pool for ARP packets to send */
@@ -96,7 +96,7 @@ void bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
/*
* We got reply for ARP Request send by the application. We need to
* update client table when received data differ from what is stored
- * in ALB table and issue sending update packet to that slave.
+ * in the ALB table and send an update packet to that member.
*/
rte_spinlock_lock(&internals->mode6.lock);
if (client_info->in_use == 0 ||
@@ -112,8 +112,8 @@ void bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
client_info->cli_ip = arp->arp_data.arp_sip;
rte_ether_addr_copy(&arp->arp_data.arp_sha,
&client_info->cli_mac);
- client_info->slave_idx = calculate_slave(internals);
- rte_eth_macaddr_get(client_info->slave_idx,
+ client_info->member_idx = calculate_member(internals);
+ rte_eth_macaddr_get(client_info->member_idx,
&client_info->app_mac);
rte_ether_addr_copy(&client_info->app_mac,
&arp->arp_data.arp_tha);
@@ -166,33 +166,33 @@ bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
&arp->arp_data.arp_tha,
&client_info->cli_mac);
}
- rte_eth_macaddr_get(client_info->slave_idx,
+ rte_eth_macaddr_get(client_info->member_idx,
&client_info->app_mac);
rte_ether_addr_copy(&client_info->app_mac,
&arp->arp_data.arp_sha);
memcpy(client_info->vlan, eth_h + 1, offset);
client_info->vlan_count = offset / sizeof(struct rte_vlan_hdr);
rte_spinlock_unlock(&internals->mode6.lock);
- return client_info->slave_idx;
+ return client_info->member_idx;
}
}
- /* Assign new slave to this client and update src mac in ARP */
+ /* Assign new member to this client and update src mac in ARP */
client_info->in_use = 1;
client_info->ntt = 0;
client_info->app_ip = arp->arp_data.arp_sip;
rte_ether_addr_copy(&arp->arp_data.arp_tha,
&client_info->cli_mac);
client_info->cli_ip = arp->arp_data.arp_tip;
- client_info->slave_idx = calculate_slave(internals);
- rte_eth_macaddr_get(client_info->slave_idx,
+ client_info->member_idx = calculate_member(internals);
+ rte_eth_macaddr_get(client_info->member_idx,
&client_info->app_mac);
rte_ether_addr_copy(&client_info->app_mac,
&arp->arp_data.arp_sha);
memcpy(client_info->vlan, eth_h + 1, offset);
client_info->vlan_count = offset / sizeof(struct rte_vlan_hdr);
rte_spinlock_unlock(&internals->mode6.lock);
- return client_info->slave_idx;
+ return client_info->member_idx;
}
/* If packet is not ARP Reply, send it on current primary port. */
@@ -208,7 +208,7 @@ bond_mode_alb_arp_upd(struct client_data *client_info,
{
struct rte_ether_hdr *eth_h;
struct rte_arp_hdr *arp_h;
- uint16_t slave_idx;
+ uint16_t member_idx;
rte_spinlock_lock(&internals->mode6.lock);
eth_h = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
@@ -238,10 +238,10 @@ bond_mode_alb_arp_upd(struct client_data *client_info,
arp_h->arp_plen = sizeof(uint32_t);
arp_h->arp_opcode = rte_cpu_to_be_16(RTE_ARP_OP_REPLY);
- slave_idx = client_info->slave_idx;
+ member_idx = client_info->member_idx;
rte_spinlock_unlock(&internals->mode6.lock);
- return slave_idx;
+ return member_idx;
}
void
@@ -252,18 +252,18 @@ bond_mode_alb_client_list_upd(struct rte_eth_dev *bond_dev)
int i;
- /* If active slave count is 0, it's pointless to refresh alb table */
- if (internals->active_slave_count <= 0)
+ /* If active member count is 0, it's pointless to refresh alb table */
+ if (internals->active_member_count <= 0)
return;
rte_spinlock_lock(&internals->mode6.lock);
- internals->mode6.last_slave = ALB_NULL_INDEX;
+ internals->mode6.last_member = ALB_NULL_INDEX;
for (i = 0; i < ALB_HASH_TABLE_SIZE; i++) {
client_info = &internals->mode6.client_table[i];
if (client_info->in_use) {
- client_info->slave_idx = calculate_slave(internals);
- rte_eth_macaddr_get(client_info->slave_idx, &client_info->app_mac);
+ client_info->member_idx = calculate_member(internals);
+ rte_eth_macaddr_get(client_info->member_idx, &client_info->app_mac);
internals->mode6.ntt = 1;
}
}
diff --git a/drivers/net/bonding/rte_eth_bond_alb.h b/drivers/net/bonding/rte_eth_bond_alb.h
index 4e9aeda9bc..beb2e619f9 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.h
+++ b/drivers/net/bonding/rte_eth_bond_alb.h
@@ -22,8 +22,8 @@ struct client_data {
uint32_t cli_ip;
/**< Client IP address */
- uint16_t slave_idx;
- /**< Index of slave on which we connect with that client */
+ uint16_t member_idx;
+ /**< Index of member on which we connect with that client */
uint8_t in_use;
/**< Flag indicating if entry in client table is currently used */
uint8_t ntt;
@@ -42,8 +42,8 @@ struct mode_alb_private {
/**< Mempool for creating ARP update packets */
uint8_t ntt;
/**< Flag indicating if we need to send update to any client on next tx */
- uint32_t last_slave;
- /**< Index of last used slave in client table */
+ uint32_t last_member;
+ /**< Index of last used member in client table */
rte_spinlock_t lock;
};
@@ -72,9 +72,9 @@ bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
struct bond_dev_private *internals);
/**
- * Function handles ARP packet transmission. It also decides on which slave
- * send that packet. If packet is ARP Request, it is send on primary slave.
- * If it is ARP Reply, it is send on slave stored in client table for that
+ * Function handles ARP packet transmission. It also decides which member
+ * should send that packet. An ARP Request is sent on the primary member.
+ * An ARP Reply is sent on the member stored in the client table for that
* connection. On Reply function also updates data in client table.
*
* @param eth_h ETH header of transmitted packet.
@@ -82,7 +82,7 @@ bond_mode_alb_arp_recv(struct rte_ether_hdr *eth_h, uint16_t offset,
* @param internals Bonding data.
*
* @return
- * Index of slave on which packet should be sent.
+ * Index of member on which packet should be sent.
*/
uint16_t
bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
@@ -96,14 +96,14 @@ bond_mode_alb_arp_xmit(struct rte_ether_hdr *eth_h, uint16_t offset,
* @param internals Bonding data.
*
* @return
- * Index of slave on which packet should be sent.
+ * Index of member on which packet should be sent.
*/
uint16_t
bond_mode_alb_arp_upd(struct client_data *client_info,
struct rte_mbuf *pkt, struct bond_dev_private *internals);
/**
- * Function updates slave indexes of active connections.
+ * Function updates member indexes of active connections.
*
* @param bond_dev Pointer to bonded device struct.
*/
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 8b6cdce34a..b366c02564 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -37,7 +37,7 @@ valid_bonded_port_id(uint16_t port_id)
}
int
-check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev)
+check_for_main_bonded_ethdev(const struct rte_eth_dev *eth_dev)
{
int i;
struct bond_dev_private *internals;
@@ -47,31 +47,31 @@ check_for_master_bonded_ethdev(const struct rte_eth_dev *eth_dev)
internals = eth_dev->data->dev_private;
- /* Check if any of slave devices is a bonded device */
- for (i = 0; i < internals->slave_count; i++)
- if (valid_bonded_port_id(internals->slaves[i].port_id) == 0)
+ /* Check if any of member devices is a bonded device */
+ for (i = 0; i < internals->member_count; i++)
+ if (valid_bonded_port_id(internals->members[i].port_id) == 0)
return 1;
return 0;
}
int
-valid_slave_port_id(struct bond_dev_private *internals, uint16_t slave_port_id)
+valid_member_port_id(struct bond_dev_private *internals, uint16_t member_port_id)
{
- RTE_ETH_VALID_PORTID_OR_ERR_RET(slave_port_id, -1);
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(member_port_id, -1);
- /* Verify that slave_port_id refers to a non bonded port */
- if (check_for_bonded_ethdev(&rte_eth_devices[slave_port_id]) == 0 &&
+ /* Verify that member_port_id refers to a non bonded port */
+ if (check_for_bonded_ethdev(&rte_eth_devices[member_port_id]) == 0 &&
internals->mode == BONDING_MODE_8023AD) {
- RTE_BOND_LOG(ERR, "Cannot add slave to bonded device in 802.3ad"
- " mode as slave is also a bonded device, only "
+ RTE_BOND_LOG(ERR, "Cannot add member to bonded device in 802.3ad"
+ " mode as member is also a bonded device, only "
"physical devices can be support in this mode.");
return -1;
}
- if (internals->port_id == slave_port_id) {
+ if (internals->port_id == member_port_id) {
RTE_BOND_LOG(ERR,
- "Cannot add the bonded device itself as its slave.");
+ "Cannot add the bonded device itself as its member.");
return -1;
}
@@ -79,61 +79,63 @@ valid_slave_port_id(struct bond_dev_private *internals, uint16_t slave_port_id)
}
void
-activate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id)
+activate_member(struct rte_eth_dev *eth_dev, uint16_t port_id)
{
struct bond_dev_private *internals = eth_dev->data->dev_private;
- uint16_t active_count = internals->active_slave_count;
+ uint16_t active_count = internals->active_member_count;
if (internals->mode == BONDING_MODE_8023AD)
- bond_mode_8023ad_activate_slave(eth_dev, port_id);
+ bond_mode_8023ad_activate_member(eth_dev, port_id);
if (internals->mode == BONDING_MODE_TLB
|| internals->mode == BONDING_MODE_ALB) {
- internals->tlb_slaves_order[active_count] = port_id;
+ internals->tlb_members_order[active_count] = port_id;
}
- RTE_ASSERT(internals->active_slave_count <
- (RTE_DIM(internals->active_slaves) - 1));
+ RTE_ASSERT(internals->active_member_count <
+ (RTE_DIM(internals->active_members) - 1));
- internals->active_slaves[internals->active_slave_count] = port_id;
- internals->active_slave_count++;
+ internals->active_members[internals->active_member_count] = port_id;
+ internals->active_member_count++;
if (internals->mode == BONDING_MODE_TLB)
- bond_tlb_activate_slave(internals);
+ bond_tlb_activate_member(internals);
if (internals->mode == BONDING_MODE_ALB)
bond_mode_alb_client_list_upd(eth_dev);
}
void
-deactivate_slave(struct rte_eth_dev *eth_dev, uint16_t port_id)
+deactivate_member(struct rte_eth_dev *eth_dev, uint16_t port_id)
{
- uint16_t slave_pos;
+ uint16_t member_pos;
struct bond_dev_private *internals = eth_dev->data->dev_private;
- uint16_t active_count = internals->active_slave_count;
+ uint16_t active_count = internals->active_member_count;
if (internals->mode == BONDING_MODE_8023AD) {
bond_mode_8023ad_stop(eth_dev);
- bond_mode_8023ad_deactivate_slave(eth_dev, port_id);
+ bond_mode_8023ad_deactivate_member(eth_dev, port_id);
} else if (internals->mode == BONDING_MODE_TLB
|| internals->mode == BONDING_MODE_ALB)
bond_tlb_disable(internals);
- slave_pos = find_slave_by_id(internals->active_slaves, active_count,
+ member_pos = find_member_by_id(internals->active_members, active_count,
port_id);
- /* If slave was not at the end of the list
- * shift active slaves up active array list */
- if (slave_pos < active_count) {
+ /*
+ * If member was not at the end of the list
+ * shift active members up the active array list.
+ */
+ if (member_pos < active_count) {
active_count--;
- memmove(internals->active_slaves + slave_pos,
- internals->active_slaves + slave_pos + 1,
- (active_count - slave_pos) *
- sizeof(internals->active_slaves[0]));
+ memmove(internals->active_members + member_pos,
+ internals->active_members + member_pos + 1,
+ (active_count - member_pos) *
+ sizeof(internals->active_members[0]));
}
- RTE_ASSERT(active_count < RTE_DIM(internals->active_slaves));
- internals->active_slave_count = active_count;
+ RTE_ASSERT(active_count < RTE_DIM(internals->active_members));
+ internals->active_member_count = active_count;
if (eth_dev->data->dev_started) {
if (internals->mode == BONDING_MODE_8023AD) {
@@ -192,7 +194,7 @@ rte_eth_bond_free(const char *name)
}
static int
-slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
+member_vlan_filter_set(uint16_t bonded_port_id, uint16_t member_port_id)
{
struct rte_eth_dev *bonded_eth_dev;
struct bond_dev_private *internals;
@@ -224,7 +226,7 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
if (unlikely(slab & mask)) {
uint16_t vlan_id = pos + i;
- res = rte_eth_dev_vlan_filter(slave_port_id,
+ res = rte_eth_dev_vlan_filter(member_port_id,
vlan_id, 1);
}
}
@@ -236,45 +238,45 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
}
static int
-slave_rte_flow_prepare(uint16_t slave_id, struct bond_dev_private *internals)
+member_rte_flow_prepare(uint16_t member_id, struct bond_dev_private *internals)
{
struct rte_flow *flow;
struct rte_flow_error ferror;
- uint16_t slave_port_id = internals->slaves[slave_id].port_id;
+ uint16_t member_port_id = internals->members[member_id].port_id;
if (internals->flow_isolated_valid != 0) {
- if (rte_eth_dev_stop(slave_port_id) != 0) {
+ if (rte_eth_dev_stop(member_port_id) != 0) {
RTE_BOND_LOG(ERR, "Failed to stop device on port %u",
- slave_port_id);
+ member_port_id);
return -1;
}
- if (rte_flow_isolate(slave_port_id, internals->flow_isolated,
+ if (rte_flow_isolate(member_port_id, internals->flow_isolated,
&ferror)) {
- RTE_BOND_LOG(ERR, "rte_flow_isolate failed for slave"
- " %d: %s", slave_id, ferror.message ?
+ RTE_BOND_LOG(ERR, "rte_flow_isolate failed for member"
+ " %d: %s", member_id, ferror.message ?
ferror.message : "(no stated reason)");
return -1;
}
}
TAILQ_FOREACH(flow, &internals->flow_list, next) {
- flow->flows[slave_id] = rte_flow_create(slave_port_id,
+ flow->flows[member_id] = rte_flow_create(member_port_id,
flow->rule.attr,
flow->rule.pattern,
flow->rule.actions,
&ferror);
- if (flow->flows[slave_id] == NULL) {
- RTE_BOND_LOG(ERR, "Cannot create flow for slave"
- " %d: %s", slave_id,
+ if (flow->flows[member_id] == NULL) {
+ RTE_BOND_LOG(ERR, "Cannot create flow for member"
+ " %d: %s", member_id,
ferror.message ? ferror.message :
"(no stated reason)");
- /* Destroy successful bond flows from the slave */
+ /* Destroy successful bond flows from the member */
TAILQ_FOREACH(flow, &internals->flow_list, next) {
- if (flow->flows[slave_id] != NULL) {
- rte_flow_destroy(slave_port_id,
- flow->flows[slave_id],
+ if (flow->flows[member_id] != NULL) {
+ rte_flow_destroy(member_port_id,
+ flow->flows[member_id],
&ferror);
- flow->flows[slave_id] = NULL;
+ flow->flows[member_id] = NULL;
}
}
return -1;
@@ -284,7 +286,7 @@ slave_rte_flow_prepare(uint16_t slave_id, struct bond_dev_private *internals)
}
static void
-eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_rx_first(struct bond_dev_private *internals,
const struct rte_eth_dev_info *di)
{
struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
@@ -292,20 +294,20 @@ eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
internals->reta_size = di->reta_size;
internals->rss_key_len = di->hash_key_size;
- /* Inherit Rx offload capabilities from the first slave device */
+ /* Inherit Rx offload capabilities from the first member device */
internals->rx_offload_capa = di->rx_offload_capa;
internals->rx_queue_offload_capa = di->rx_queue_offload_capa;
internals->flow_type_rss_offloads = di->flow_type_rss_offloads;
- /* Inherit maximum Rx packet size from the first slave device */
+ /* Inherit maximum Rx packet size from the first member device */
internals->candidate_max_rx_pktlen = di->max_rx_pktlen;
- /* Inherit default Rx queue settings from the first slave device */
+ /* Inherit default Rx queue settings from the first member device */
memcpy(rxconf_i, &di->default_rxconf, sizeof(*rxconf_i));
/*
* Turn off descriptor prefetch and writeback by default for all
- * slave devices. Applications may tweak this setting if need be.
+ * member devices. Applications may tweak this setting if need be.
*/
rxconf_i->rx_thresh.pthresh = 0;
rxconf_i->rx_thresh.hthresh = 0;
@@ -314,26 +316,26 @@ eth_bond_slave_inherit_dev_info_rx_first(struct bond_dev_private *internals,
/* Setting this to zero should effectively enable default values */
rxconf_i->rx_free_thresh = 0;
- /* Disable deferred start by default for all slave devices */
+ /* Disable deferred start by default for all member devices */
rxconf_i->rx_deferred_start = 0;
}
static void
-eth_bond_slave_inherit_dev_info_tx_first(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_tx_first(struct bond_dev_private *internals,
const struct rte_eth_dev_info *di)
{
struct rte_eth_txconf *txconf_i = &internals->default_txconf;
- /* Inherit Tx offload capabilities from the first slave device */
+ /* Inherit Tx offload capabilities from the first member device */
internals->tx_offload_capa = di->tx_offload_capa;
internals->tx_queue_offload_capa = di->tx_queue_offload_capa;
- /* Inherit default Tx queue settings from the first slave device */
+ /* Inherit default Tx queue settings from the first member device */
memcpy(txconf_i, &di->default_txconf, sizeof(*txconf_i));
/*
* Turn off descriptor prefetch and writeback by default for all
- * slave devices. Applications may tweak this setting if need be.
+ * member devices. Applications may tweak this setting if need be.
*/
txconf_i->tx_thresh.pthresh = 0;
txconf_i->tx_thresh.hthresh = 0;
@@ -341,17 +343,17 @@ eth_bond_slave_inherit_dev_info_tx_first(struct bond_dev_private *internals,
/*
* Setting these parameters to zero assumes that default
- * values will be configured implicitly by slave devices.
+ * values will be configured implicitly by member devices.
*/
txconf_i->tx_free_thresh = 0;
txconf_i->tx_rs_thresh = 0;
- /* Disable deferred start by default for all slave devices */
+ /* Disable deferred start by default for all member devices */
txconf_i->tx_deferred_start = 0;
}
static void
-eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_rx_next(struct bond_dev_private *internals,
const struct rte_eth_dev_info *di)
{
struct rte_eth_rxconf *rxconf_i = &internals->default_rxconf;
@@ -362,32 +364,32 @@ eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
internals->flow_type_rss_offloads &= di->flow_type_rss_offloads;
/*
- * If at least one slave device suggests enabling this
- * setting by default, enable it for all slave devices
+ * If at least one member device suggests enabling this
+ * setting by default, enable it for all member devices
* since disabling it may not be necessarily supported.
*/
if (rxconf->rx_drop_en == 1)
rxconf_i->rx_drop_en = 1;
/*
- * Adding a new slave device may cause some of previously inherited
+ * Adding a new member device may cause some of previously inherited
* offloads to be withdrawn from the internal rx_queue_offload_capa
* value. Thus, the new internal value of default Rx queue offloads
* has to be masked by rx_queue_offload_capa to make sure that only
* commonly supported offloads are preserved from both the previous
- * value and the value being inherited from the new slave device.
+ * value and the value being inherited from the new member device.
*/
rxconf_i->offloads = (rxconf_i->offloads | rxconf->offloads) &
internals->rx_queue_offload_capa;
/*
- * RETA size is GCD of all slaves RETA sizes, so, if all sizes will be
+ * RETA size is the GCD of all members' RETA sizes, so, if all sizes are
* the power of 2, the lower one is GCD
*/
if (internals->reta_size > di->reta_size)
internals->reta_size = di->reta_size;
if (internals->rss_key_len > di->hash_key_size) {
- RTE_BOND_LOG(WARNING, "slave has different rss key size, "
+ RTE_BOND_LOG(WARNING, "member has different rss key size, "
"configuring rss may fail");
internals->rss_key_len = di->hash_key_size;
}
@@ -398,7 +400,7 @@ eth_bond_slave_inherit_dev_info_rx_next(struct bond_dev_private *internals,
}
static void
-eth_bond_slave_inherit_dev_info_tx_next(struct bond_dev_private *internals,
+eth_bond_member_inherit_dev_info_tx_next(struct bond_dev_private *internals,
const struct rte_eth_dev_info *di)
{
struct rte_eth_txconf *txconf_i = &internals->default_txconf;
@@ -408,34 +410,34 @@ eth_bond_slave_inherit_dev_info_tx_next(struct bond_dev_private *internals,
internals->tx_queue_offload_capa &= di->tx_queue_offload_capa;
/*
- * Adding a new slave device may cause some of previously inherited
+ * Adding a new member device may cause some of previously inherited
* offloads to be withdrawn from the internal tx_queue_offload_capa
* value. Thus, the new internal value of default Tx queue offloads
* has to be masked by tx_queue_offload_capa to make sure that only
* commonly supported offloads are preserved from both the previous
- * value and the value being inherited from the new slave device.
+ * value and the value being inherited from the new member device.
*/
txconf_i->offloads = (txconf_i->offloads | txconf->offloads) &
internals->tx_queue_offload_capa;
}
static void
-eth_bond_slave_inherit_desc_lim_first(struct rte_eth_desc_lim *bond_desc_lim,
- const struct rte_eth_desc_lim *slave_desc_lim)
+eth_bond_member_inherit_desc_lim_first(struct rte_eth_desc_lim *bond_desc_lim,
+ const struct rte_eth_desc_lim *member_desc_lim)
{
- memcpy(bond_desc_lim, slave_desc_lim, sizeof(*bond_desc_lim));
+ memcpy(bond_desc_lim, member_desc_lim, sizeof(*bond_desc_lim));
}
static int
-eth_bond_slave_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
- const struct rte_eth_desc_lim *slave_desc_lim)
+eth_bond_member_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
+ const struct rte_eth_desc_lim *member_desc_lim)
{
bond_desc_lim->nb_max = RTE_MIN(bond_desc_lim->nb_max,
- slave_desc_lim->nb_max);
+ member_desc_lim->nb_max);
bond_desc_lim->nb_min = RTE_MAX(bond_desc_lim->nb_min,
- slave_desc_lim->nb_min);
+ member_desc_lim->nb_min);
bond_desc_lim->nb_align = RTE_MAX(bond_desc_lim->nb_align,
- slave_desc_lim->nb_align);
+ member_desc_lim->nb_align);
if (bond_desc_lim->nb_min > bond_desc_lim->nb_max ||
bond_desc_lim->nb_align > bond_desc_lim->nb_max) {
@@ -444,22 +446,22 @@ eth_bond_slave_inherit_desc_lim_next(struct rte_eth_desc_lim *bond_desc_lim,
}
/* Treat maximum number of segments equal to 0 as unspecified */
- if (slave_desc_lim->nb_seg_max != 0 &&
+ if (member_desc_lim->nb_seg_max != 0 &&
(bond_desc_lim->nb_seg_max == 0 ||
- slave_desc_lim->nb_seg_max < bond_desc_lim->nb_seg_max))
- bond_desc_lim->nb_seg_max = slave_desc_lim->nb_seg_max;
- if (slave_desc_lim->nb_mtu_seg_max != 0 &&
+ member_desc_lim->nb_seg_max < bond_desc_lim->nb_seg_max))
+ bond_desc_lim->nb_seg_max = member_desc_lim->nb_seg_max;
+ if (member_desc_lim->nb_mtu_seg_max != 0 &&
(bond_desc_lim->nb_mtu_seg_max == 0 ||
- slave_desc_lim->nb_mtu_seg_max < bond_desc_lim->nb_mtu_seg_max))
- bond_desc_lim->nb_mtu_seg_max = slave_desc_lim->nb_mtu_seg_max;
+ member_desc_lim->nb_mtu_seg_max < bond_desc_lim->nb_mtu_seg_max))
+ bond_desc_lim->nb_mtu_seg_max = member_desc_lim->nb_mtu_seg_max;
return 0;
}
static int
-__eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
+__eth_bond_member_add_lock_free(uint16_t bonded_port_id, uint16_t member_port_id)
{
- struct rte_eth_dev *bonded_eth_dev, *slave_eth_dev;
+ struct rte_eth_dev *bonded_eth_dev, *member_eth_dev;
struct bond_dev_private *internals;
struct rte_eth_link link_props;
struct rte_eth_dev_info dev_info;
@@ -468,78 +470,78 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
internals = bonded_eth_dev->data->dev_private;
- if (valid_slave_port_id(internals, slave_port_id) != 0)
+ if (valid_member_port_id(internals, member_port_id) != 0)
return -1;
- slave_eth_dev = &rte_eth_devices[slave_port_id];
- if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_BONDING_MEMBER) {
- RTE_BOND_LOG(ERR, "Slave device is already a slave of a bonded device");
+ member_eth_dev = &rte_eth_devices[member_port_id];
+ if (member_eth_dev->data->dev_flags & RTE_ETH_DEV_BONDING_MEMBER) {
+ RTE_BOND_LOG(ERR, "Member device is already a member of a bonded device");
return -1;
}
- ret = rte_eth_dev_info_get(slave_port_id, &dev_info);
+ ret = rte_eth_dev_info_get(member_port_id, &dev_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"%s: Error during getting device (port %u) info: %s\n",
- __func__, slave_port_id, strerror(-ret));
+ __func__, member_port_id, strerror(-ret));
return ret;
}
if (dev_info.max_rx_pktlen < internals->max_rx_pktlen) {
- RTE_BOND_LOG(ERR, "Slave (port %u) max_rx_pktlen too small",
- slave_port_id);
+ RTE_BOND_LOG(ERR, "Member (port %u) max_rx_pktlen too small",
+ member_port_id);
return -1;
}
- slave_add(internals, slave_eth_dev);
+ member_add(internals, member_eth_dev);
- /* We need to store slaves reta_size to be able to synchronize RETA for all
- * slave devices even if its sizes are different.
+ /* We need to store the members' reta_size to be able to synchronize RETA for all
+ * member devices even if their sizes differ.
*/
- internals->slaves[internals->slave_count].reta_size = dev_info.reta_size;
+ internals->members[internals->member_count].reta_size = dev_info.reta_size;
- if (internals->slave_count < 1) {
- /* if MAC is not user defined then use MAC of first slave add to
+ if (internals->member_count < 1) {
+ /* if MAC is not user defined then use MAC of first member added to
* bonded device */
if (!internals->user_defined_mac) {
if (mac_address_set(bonded_eth_dev,
- slave_eth_dev->data->mac_addrs)) {
+ member_eth_dev->data->mac_addrs)) {
RTE_BOND_LOG(ERR, "Failed to set MAC address");
return -1;
}
}
- /* Make primary slave */
- internals->primary_port = slave_port_id;
- internals->current_primary_port = slave_port_id;
+ /* Make primary member */
+ internals->primary_port = member_port_id;
+ internals->current_primary_port = member_port_id;
internals->speed_capa = dev_info.speed_capa;
- /* Inherit queues settings from first slave */
- internals->nb_rx_queues = slave_eth_dev->data->nb_rx_queues;
- internals->nb_tx_queues = slave_eth_dev->data->nb_tx_queues;
+ /* Inherit queues settings from first member */
+ internals->nb_rx_queues = member_eth_dev->data->nb_rx_queues;
+ internals->nb_tx_queues = member_eth_dev->data->nb_tx_queues;
- eth_bond_slave_inherit_dev_info_rx_first(internals, &dev_info);
- eth_bond_slave_inherit_dev_info_tx_first(internals, &dev_info);
+ eth_bond_member_inherit_dev_info_rx_first(internals, &dev_info);
+ eth_bond_member_inherit_dev_info_tx_first(internals, &dev_info);
- eth_bond_slave_inherit_desc_lim_first(&internals->rx_desc_lim,
+ eth_bond_member_inherit_desc_lim_first(&internals->rx_desc_lim,
&dev_info.rx_desc_lim);
- eth_bond_slave_inherit_desc_lim_first(&internals->tx_desc_lim,
+ eth_bond_member_inherit_desc_lim_first(&internals->tx_desc_lim,
&dev_info.tx_desc_lim);
} else {
int ret;
internals->speed_capa &= dev_info.speed_capa;
- eth_bond_slave_inherit_dev_info_rx_next(internals, &dev_info);
- eth_bond_slave_inherit_dev_info_tx_next(internals, &dev_info);
+ eth_bond_member_inherit_dev_info_rx_next(internals, &dev_info);
+ eth_bond_member_inherit_dev_info_tx_next(internals, &dev_info);
- ret = eth_bond_slave_inherit_desc_lim_next(
- &internals->rx_desc_lim, &dev_info.rx_desc_lim);
+ ret = eth_bond_member_inherit_desc_lim_next(&internals->rx_desc_lim,
+ &dev_info.rx_desc_lim);
if (ret != 0)
return ret;
- ret = eth_bond_slave_inherit_desc_lim_next(
- &internals->tx_desc_lim, &dev_info.tx_desc_lim);
+ ret = eth_bond_member_inherit_desc_lim_next(&internals->tx_desc_lim,
+ &dev_info.tx_desc_lim);
if (ret != 0)
return ret;
}
@@ -552,79 +554,81 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf &=
internals->flow_type_rss_offloads;
- if (slave_rte_flow_prepare(internals->slave_count, internals) != 0) {
- RTE_BOND_LOG(ERR, "Failed to prepare new slave flows: port=%d",
- slave_port_id);
+ if (member_rte_flow_prepare(internals->member_count, internals) != 0) {
+ RTE_BOND_LOG(ERR, "Failed to prepare new member flows: port=%d",
+ member_port_id);
return -1;
}
- /* Add additional MAC addresses to the slave */
- if (slave_add_mac_addresses(bonded_eth_dev, slave_port_id) != 0) {
- RTE_BOND_LOG(ERR, "Failed to add mac address(es) to slave %hu",
- slave_port_id);
+ /* Add additional MAC addresses to the member */
+ if (member_add_mac_addresses(bonded_eth_dev, member_port_id) != 0) {
+ RTE_BOND_LOG(ERR, "Failed to add mac address(es) to member %hu",
+ member_port_id);
return -1;
}
- internals->slave_count++;
+ internals->member_count++;
if (bonded_eth_dev->data->dev_started) {
- if (slave_configure(bonded_eth_dev, slave_eth_dev) != 0) {
- internals->slave_count--;
- RTE_BOND_LOG(ERR, "rte_bond_slaves_configure: port=%d",
- slave_port_id);
+ if (member_configure(bonded_eth_dev, member_eth_dev) != 0) {
+ internals->member_count--;
+ RTE_BOND_LOG(ERR, "rte_bond_members_configure: port=%d",
+ member_port_id);
return -1;
}
- if (slave_start(bonded_eth_dev, slave_eth_dev) != 0) {
- internals->slave_count--;
- RTE_BOND_LOG(ERR, "rte_bond_slaves_start: port=%d",
- slave_port_id);
+ if (member_start(bonded_eth_dev, member_eth_dev) != 0) {
+ internals->member_count--;
+ RTE_BOND_LOG(ERR, "rte_bond_members_start: port=%d",
+ member_port_id);
return -1;
}
}
- /* Update all slave devices MACs */
- mac_address_slaves_update(bonded_eth_dev);
+ /* Update all member devices' MACs */
+ mac_address_members_update(bonded_eth_dev);
/* Register link status change callback with bonded device pointer as
* argument*/
- rte_eth_dev_callback_register(slave_port_id, RTE_ETH_EVENT_INTR_LSC,
+ rte_eth_dev_callback_register(member_port_id, RTE_ETH_EVENT_INTR_LSC,
bond_ethdev_lsc_event_callback, &bonded_eth_dev->data->port_id);
- /* If bonded device is started then we can add the slave to our active
- * slave array */
+ /*
+ * If bonded device is started then we can add the member to our active
+ * member array.
+ */
if (bonded_eth_dev->data->dev_started) {
- ret = rte_eth_link_get_nowait(slave_port_id, &link_props);
+ ret = rte_eth_link_get_nowait(member_port_id, &link_props);
if (ret < 0) {
- rte_eth_dev_callback_unregister(slave_port_id,
+ rte_eth_dev_callback_unregister(member_port_id,
RTE_ETH_EVENT_INTR_LSC,
bond_ethdev_lsc_event_callback,
&bonded_eth_dev->data->port_id);
- internals->slave_count--;
+ internals->member_count--;
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s\n",
- slave_port_id, rte_strerror(-ret));
+ "Member (port %u) link get failed: %s\n",
+ member_port_id, rte_strerror(-ret));
return -1;
}
if (link_props.link_status == RTE_ETH_LINK_UP) {
- if (internals->active_slave_count == 0 &&
+ if (internals->active_member_count == 0 &&
!internals->user_defined_primary_port)
bond_ethdev_primary_set(internals,
- slave_port_id);
+ member_port_id);
}
}
- /* Add slave details to bonded device */
- slave_eth_dev->data->dev_flags |= RTE_ETH_DEV_BONDING_MEMBER;
+ /* Add member details to bonded device */
+ member_eth_dev->data->dev_flags |= RTE_ETH_DEV_BONDING_MEMBER;
- slave_vlan_filter_set(bonded_port_id, slave_port_id);
+ member_vlan_filter_set(bonded_port_id, member_port_id);
return 0;
}
int
-rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_member_add(uint16_t bonded_port_id, uint16_t member_port_id)
{
struct rte_eth_dev *bonded_eth_dev;
struct bond_dev_private *internals;
@@ -637,12 +641,12 @@ rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
internals = bonded_eth_dev->data->dev_private;
- if (valid_slave_port_id(internals, slave_port_id) != 0)
+ if (valid_member_port_id(internals, member_port_id) != 0)
return -1;
rte_spinlock_lock(&internals->lock);
- retval = __eth_bond_slave_add_lock_free(bonded_port_id, slave_port_id);
+ retval = __eth_bond_member_add_lock_free(bonded_port_id, member_port_id);
rte_spinlock_unlock(&internals->lock);
@@ -650,103 +654,105 @@ rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
}
static int
-__eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
- uint16_t slave_port_id)
+__eth_bond_member_remove_lock_free(uint16_t bonded_port_id,
+ uint16_t member_port_id)
{
struct rte_eth_dev *bonded_eth_dev;
struct bond_dev_private *internals;
- struct rte_eth_dev *slave_eth_dev;
+ struct rte_eth_dev *member_eth_dev;
struct rte_flow_error flow_error;
struct rte_flow *flow;
- int i, slave_idx;
+ int i, member_idx;
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
internals = bonded_eth_dev->data->dev_private;
- if (valid_slave_port_id(internals, slave_port_id) < 0)
+ if (valid_member_port_id(internals, member_port_id) < 0)
return -1;
- /* first remove from active slave list */
- slave_idx = find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_port_id);
+ /* first remove from active member list */
+ member_idx = find_member_by_id(internals->active_members,
+ internals->active_member_count, member_port_id);
- if (slave_idx < internals->active_slave_count)
- deactivate_slave(bonded_eth_dev, slave_port_id);
+ if (member_idx < internals->active_member_count)
+ deactivate_member(bonded_eth_dev, member_port_id);
- slave_idx = -1;
- /* now find in slave list */
- for (i = 0; i < internals->slave_count; i++)
- if (internals->slaves[i].port_id == slave_port_id) {
- slave_idx = i;
+ member_idx = -1;
+ /* now find in member list */
+ for (i = 0; i < internals->member_count; i++)
+ if (internals->members[i].port_id == member_port_id) {
+ member_idx = i;
break;
}
- if (slave_idx < 0) {
- RTE_BOND_LOG(ERR, "Couldn't find slave in port list, slave count %u",
- internals->slave_count);
+ if (member_idx < 0) {
+ RTE_BOND_LOG(ERR, "Could not find member in port list, member count %u",
+ internals->member_count);
return -1;
}
/* Un-register link status change callback with bonded device pointer as
* argument*/
- rte_eth_dev_callback_unregister(slave_port_id, RTE_ETH_EVENT_INTR_LSC,
+ rte_eth_dev_callback_unregister(member_port_id, RTE_ETH_EVENT_INTR_LSC,
bond_ethdev_lsc_event_callback,
&rte_eth_devices[bonded_port_id].data->port_id);
- /* Restore original MAC address of slave device */
- rte_eth_dev_default_mac_addr_set(slave_port_id,
- &(internals->slaves[slave_idx].persisted_mac_addr));
+ /* Restore original MAC address of member device */
+ rte_eth_dev_default_mac_addr_set(member_port_id,
+ &internals->members[member_idx].persisted_mac_addr);
- /* remove additional MAC addresses from the slave */
- slave_remove_mac_addresses(bonded_eth_dev, slave_port_id);
+ /* remove additional MAC addresses from the member */
+ member_remove_mac_addresses(bonded_eth_dev, member_port_id);
/*
- * Remove bond device flows from slave device.
+ * Remove bond device flows from member device.
* Note: don't restore flow isolate mode.
*/
TAILQ_FOREACH(flow, &internals->flow_list, next) {
- if (flow->flows[slave_idx] != NULL) {
- rte_flow_destroy(slave_port_id, flow->flows[slave_idx],
+ if (flow->flows[member_idx] != NULL) {
+ rte_flow_destroy(member_port_id, flow->flows[member_idx],
&flow_error);
- flow->flows[slave_idx] = NULL;
+ flow->flows[member_idx] = NULL;
}
}
/* Remove the dedicated queues flow */
if (internals->mode == BONDING_MODE_8023AD &&
internals->mode4.dedicated_queues.enabled == 1 &&
- internals->mode4.dedicated_queues.flow[slave_port_id] != NULL) {
- rte_flow_destroy(slave_port_id,
- internals->mode4.dedicated_queues.flow[slave_port_id],
+ internals->mode4.dedicated_queues.flow[member_port_id] != NULL) {
+ rte_flow_destroy(member_port_id,
+ internals->mode4.dedicated_queues.flow[member_port_id],
&flow_error);
- internals->mode4.dedicated_queues.flow[slave_port_id] = NULL;
+ internals->mode4.dedicated_queues.flow[member_port_id] = NULL;
}
- slave_eth_dev = &rte_eth_devices[slave_port_id];
- slave_remove(internals, slave_eth_dev);
- slave_eth_dev->data->dev_flags &= (~RTE_ETH_DEV_BONDING_MEMBER);
+ member_eth_dev = &rte_eth_devices[member_port_id];
+ member_remove(internals, member_eth_dev);
+ member_eth_dev->data->dev_flags &= (~RTE_ETH_DEV_BONDING_MEMBER);
- /* first slave in the active list will be the primary by default,
+ /* first member in the active list will be the primary by default,
* otherwise use first device in list */
- if (internals->current_primary_port == slave_port_id) {
- if (internals->active_slave_count > 0)
- internals->current_primary_port = internals->active_slaves[0];
- else if (internals->slave_count > 0)
- internals->current_primary_port = internals->slaves[0].port_id;
+ if (internals->current_primary_port == member_port_id) {
+ if (internals->active_member_count > 0)
+ internals->current_primary_port = internals->active_members[0];
+ else if (internals->member_count > 0)
+ internals->current_primary_port = internals->members[0].port_id;
else
internals->primary_port = 0;
- mac_address_slaves_update(bonded_eth_dev);
+ mac_address_members_update(bonded_eth_dev);
}
- if (internals->active_slave_count < 1) {
- /* if no slaves are any longer attached to bonded device and MAC is not
+ if (internals->active_member_count < 1) {
+ /*
+ * if no members remain attached to the bonded device and MAC is not
* user defined then clear MAC of bonded device as it will be reset
- * when a new slave is added */
- if (internals->slave_count < 1 && !internals->user_defined_mac)
+ * when a new member is added.
+ */
+ if (internals->member_count < 1 && !internals->user_defined_mac)
memset(rte_eth_devices[bonded_port_id].data->mac_addrs, 0,
sizeof(*(rte_eth_devices[bonded_port_id].data->mac_addrs)));
}
- if (internals->slave_count == 0) {
+ if (internals->member_count == 0) {
internals->rx_offload_capa = 0;
internals->tx_offload_capa = 0;
internals->rx_queue_offload_capa = 0;
@@ -760,7 +766,7 @@ __eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
}
int
-rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_member_remove(uint16_t bonded_port_id, uint16_t member_port_id)
{
struct rte_eth_dev *bonded_eth_dev;
struct bond_dev_private *internals;
@@ -774,7 +780,7 @@ rte_eth_bond_slave_remove(uint16_t bonded_port_id, uint16_t slave_port_id)
rte_spinlock_lock(&internals->lock);
- retval = __eth_bond_slave_remove_lock_free(bonded_port_id, slave_port_id);
+ retval = __eth_bond_member_remove_lock_free(bonded_port_id, member_port_id);
rte_spinlock_unlock(&internals->lock);
@@ -791,7 +797,7 @@ rte_eth_bond_mode_set(uint16_t bonded_port_id, uint8_t mode)
bonded_eth_dev = &rte_eth_devices[bonded_port_id];
- if (check_for_master_bonded_ethdev(bonded_eth_dev) != 0 &&
+ if (check_for_main_bonded_ethdev(bonded_eth_dev) != 0 &&
mode == BONDING_MODE_8023AD)
return -1;
@@ -812,7 +818,7 @@ rte_eth_bond_mode_get(uint16_t bonded_port_id)
}
int
-rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id)
+rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t member_port_id)
{
struct bond_dev_private *internals;
@@ -821,13 +827,13 @@ rte_eth_bond_primary_set(uint16_t bonded_port_id, uint16_t slave_port_id)
internals = rte_eth_devices[bonded_port_id].data->dev_private;
- if (valid_slave_port_id(internals, slave_port_id) != 0)
+ if (valid_member_port_id(internals, member_port_id) != 0)
return -1;
internals->user_defined_primary_port = 1;
- internals->primary_port = slave_port_id;
+ internals->primary_port = member_port_id;
- bond_ethdev_primary_set(internals, slave_port_id);
+ bond_ethdev_primary_set(internals, member_port_id);
return 0;
}
@@ -842,14 +848,14 @@ rte_eth_bond_primary_get(uint16_t bonded_port_id)
internals = rte_eth_devices[bonded_port_id].data->dev_private;
- if (internals->slave_count < 1)
+ if (internals->member_count < 1)
return -1;
return internals->current_primary_port;
}
int
-rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
+rte_eth_bond_members_get(uint16_t bonded_port_id, uint16_t members[],
uint16_t len)
{
struct bond_dev_private *internals;
@@ -858,22 +864,22 @@ rte_eth_bond_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
if (valid_bonded_port_id(bonded_port_id) != 0)
return -1;
- if (slaves == NULL)
+ if (members == NULL)
return -1;
internals = rte_eth_devices[bonded_port_id].data->dev_private;
- if (internals->slave_count > len)
+ if (internals->member_count > len)
return -1;
- for (i = 0; i < internals->slave_count; i++)
- slaves[i] = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++)
+ members[i] = internals->members[i].port_id;
- return internals->slave_count;
+ return internals->member_count;
}
int
-rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
+rte_eth_bond_active_members_get(uint16_t bonded_port_id, uint16_t members[],
uint16_t len)
{
struct bond_dev_private *internals;
@@ -881,18 +887,18 @@ rte_eth_bond_active_slaves_get(uint16_t bonded_port_id, uint16_t slaves[],
if (valid_bonded_port_id(bonded_port_id) != 0)
return -1;
- if (slaves == NULL)
+ if (members == NULL)
return -1;
internals = rte_eth_devices[bonded_port_id].data->dev_private;
- if (internals->active_slave_count > len)
+ if (internals->active_member_count > len)
return -1;
- memcpy(slaves, internals->active_slaves,
- internals->active_slave_count * sizeof(internals->active_slaves[0]));
+ memcpy(members, internals->active_members,
+ internals->active_member_count * sizeof(internals->active_members[0]));
- return internals->active_slave_count;
+ return internals->active_member_count;
}
int
@@ -914,9 +920,9 @@ rte_eth_bond_mac_address_set(uint16_t bonded_port_id,
internals->user_defined_mac = 1;
- /* Update all slave devices MACs*/
- if (internals->slave_count > 0)
- return mac_address_slaves_update(bonded_eth_dev);
+ /* Update all member devices' MACs */
+ if (internals->member_count > 0)
+ return mac_address_members_update(bonded_eth_dev);
return 0;
}
@@ -935,30 +941,30 @@ rte_eth_bond_mac_address_reset(uint16_t bonded_port_id)
internals->user_defined_mac = 0;
- if (internals->slave_count > 0) {
- int slave_port;
- /* Get the primary slave location based on the primary port
- * number as, while slave_add(), we will keep the primary
- * slave based on slave_count,but not based on the primary port.
+ if (internals->member_count > 0) {
+ int member_port;
+ /* Get the primary member location based on the primary port
+ * number because, during member_add(), we keep the primary
+ * member based on member_count, not on the primary port.
*/
- for (slave_port = 0; slave_port < internals->slave_count;
- slave_port++) {
- if (internals->slaves[slave_port].port_id ==
+ for (member_port = 0; member_port < internals->member_count;
+ member_port++) {
+ if (internals->members[member_port].port_id ==
internals->primary_port)
break;
}
/* Set MAC Address of Bonded Device */
if (mac_address_set(bonded_eth_dev,
- &internals->slaves[slave_port].persisted_mac_addr)
+ &internals->members[member_port].persisted_mac_addr)
!= 0) {
RTE_BOND_LOG(ERR, "Failed to set MAC address on bonded device");
return -1;
}
- /* Update all slave devices MAC addresses */
- return mac_address_slaves_update(bonded_eth_dev);
+ /* Update all member devices MAC addresses */
+ return mac_address_members_update(bonded_eth_dev);
}
- /* No need to update anything as no slaves present */
+ /* No need to update anything as no members present */
return 0;
}
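The removal path in __eth_bond_member_remove_lock_free() above re-elects a primary port when the departing member was primary: the first active member wins, else the first configured member, else port 0. A minimal, self-contained sketch of that selection order (the struct and function names here are illustrative, not the driver's):

```c
#include <stdint.h>

/* Illustrative mirror of the fields the driver consults. */
struct bond_state {
	uint16_t active_members[4];
	uint16_t active_member_count;
	uint16_t members[4];
	uint16_t member_count;
};

/* Same fallback order as the removal path: prefer the first active
 * member, then the first configured member, and default to port 0
 * when the bond is empty. */
static uint16_t
next_primary_port(const struct bond_state *s)
{
	if (s->active_member_count > 0)
		return s->active_members[0];
	if (s->member_count > 0)
		return s->members[0];
	return 0;
}
```

The driver performs this selection only when `current_primary_port` matches the removed member; otherwise the primary is left alone.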
diff --git a/drivers/net/bonding/rte_eth_bond_args.c b/drivers/net/bonding/rte_eth_bond_args.c
index c137efd55f..bdec5d61d4 100644
--- a/drivers/net/bonding/rte_eth_bond_args.c
+++ b/drivers/net/bonding/rte_eth_bond_args.c
@@ -12,8 +12,8 @@
#include "eth_bond_private.h"
const char *pmd_bond_init_valid_arguments[] = {
- PMD_BOND_SLAVE_PORT_KVARG,
- PMD_BOND_PRIMARY_SLAVE_KVARG,
+ PMD_BOND_MEMBER_PORT_KVARG,
+ PMD_BOND_PRIMARY_MEMBER_KVARG,
PMD_BOND_MODE_KVARG,
PMD_BOND_XMIT_POLICY_KVARG,
PMD_BOND_SOCKET_ID_KVARG,
@@ -109,31 +109,31 @@ parse_port_id(const char *port_str)
}
int
-bond_ethdev_parse_slave_port_kvarg(const char *key,
+bond_ethdev_parse_member_port_kvarg(const char *key,
const char *value, void *extra_args)
{
- struct bond_ethdev_slave_ports *slave_ports;
+ struct bond_ethdev_member_ports *member_ports;
if (value == NULL || extra_args == NULL)
return -1;
- slave_ports = extra_args;
+ member_ports = extra_args;
- if (strcmp(key, PMD_BOND_SLAVE_PORT_KVARG) == 0) {
+ if (strcmp(key, PMD_BOND_MEMBER_PORT_KVARG) == 0) {
int port_id = parse_port_id(value);
if (port_id < 0) {
- RTE_BOND_LOG(ERR, "Invalid slave port value (%s) specified",
+ RTE_BOND_LOG(ERR, "Invalid member port value (%s) specified",
value);
return -1;
} else
- slave_ports->slaves[slave_ports->slave_count++] =
+ member_ports->members[member_ports->member_count++] =
port_id;
}
return 0;
}
int
-bond_ethdev_parse_slave_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_member_mode_kvarg(const char *key __rte_unused,
const char *value, void *extra_args)
{
uint8_t *mode;
@@ -160,13 +160,13 @@ bond_ethdev_parse_slave_mode_kvarg(const char *key __rte_unused,
case BONDING_MODE_ALB:
return 0;
default:
- RTE_BOND_LOG(ERR, "Invalid slave mode value (%s) specified", value);
+ RTE_BOND_LOG(ERR, "Invalid member mode value (%s) specified", value);
return -1;
}
}
int
-bond_ethdev_parse_slave_agg_mode_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_member_agg_mode_kvarg(const char *key __rte_unused,
const char *value, void *extra_args)
{
uint8_t *agg_mode;
@@ -227,19 +227,19 @@ bond_ethdev_parse_socket_id_kvarg(const char *key __rte_unused,
}
int
-bond_ethdev_parse_primary_slave_port_id_kvarg(const char *key __rte_unused,
+bond_ethdev_parse_primary_member_port_id_kvarg(const char *key __rte_unused,
const char *value, void *extra_args)
{
- int primary_slave_port_id;
+ int primary_member_port_id;
if (value == NULL || extra_args == NULL)
return -1;
- primary_slave_port_id = parse_port_id(value);
- if (primary_slave_port_id < 0)
+ primary_member_port_id = parse_port_id(value);
+ if (primary_member_port_id < 0)
return -1;
- *(uint16_t *)extra_args = (uint16_t)primary_slave_port_id;
+ *(uint16_t *)extra_args = (uint16_t)primary_member_port_id;
return 0;
}
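With the kvarg keys renamed (PMD_BOND_MEMBER_PORT_KVARG, PMD_BOND_PRIMARY_MEMBER_KVARG), a bonding vdev is declared with a `member=` key rather than `slave=`. An illustrative invocation — the PCI addresses are placeholders, and the key string is assumed to follow the renamed macro:

```shell
# Round-robin (mode=0) bond over two member ports; addresses are examples only.
dpdk-testpmd --vdev 'net_bonding0,mode=0,member=0000:03:00.0,member=0000:03:00.1' -- -i
```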
diff --git a/drivers/net/bonding/rte_eth_bond_flow.c b/drivers/net/bonding/rte_eth_bond_flow.c
index 65b77faae7..71a91675f7 100644
--- a/drivers/net/bonding/rte_eth_bond_flow.c
+++ b/drivers/net/bonding/rte_eth_bond_flow.c
@@ -69,12 +69,12 @@ bond_flow_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
int i;
int ret;
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_flow_validate(internals->slaves[i].port_id, attr,
+ for (i = 0; i < internals->member_count; i++) {
+ ret = rte_flow_validate(internals->members[i].port_id, attr,
patterns, actions, err);
if (ret) {
RTE_BOND_LOG(ERR, "Operation rte_flow_validate failed"
- " for slave %d with error %d", i, ret);
+ " for member %d with error %d", i, ret);
return ret;
}
}
@@ -97,11 +97,11 @@ bond_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
NULL, rte_strerror(ENOMEM));
return NULL;
}
- for (i = 0; i < internals->slave_count; i++) {
- flow->flows[i] = rte_flow_create(internals->slaves[i].port_id,
+ for (i = 0; i < internals->member_count; i++) {
+ flow->flows[i] = rte_flow_create(internals->members[i].port_id,
attr, patterns, actions, err);
if (unlikely(flow->flows[i] == NULL)) {
- RTE_BOND_LOG(ERR, "Failed to create flow on slave %d",
+ RTE_BOND_LOG(ERR, "Failed to create flow on member %d",
i);
goto err;
}
@@ -109,10 +109,10 @@ bond_flow_create(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
TAILQ_INSERT_TAIL(&internals->flow_list, flow, next);
return flow;
err:
- /* Destroy all slaves flows. */
- for (i = 0; i < internals->slave_count; i++) {
+ /* Destroy all member flows. */
+ for (i = 0; i < internals->member_count; i++) {
if (flow->flows[i] != NULL)
- rte_flow_destroy(internals->slaves[i].port_id,
+ rte_flow_destroy(internals->members[i].port_id,
flow->flows[i], err);
}
bond_flow_release(&flow);
@@ -127,15 +127,15 @@ bond_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
int i;
int ret = 0;
- for (i = 0; i < internals->slave_count; i++) {
+ for (i = 0; i < internals->member_count; i++) {
int lret;
if (unlikely(flow->flows[i] == NULL))
continue;
- lret = rte_flow_destroy(internals->slaves[i].port_id,
+ lret = rte_flow_destroy(internals->members[i].port_id,
flow->flows[i], err);
if (unlikely(lret != 0)) {
- RTE_BOND_LOG(ERR, "Failed to destroy flow on slave %d:"
+ RTE_BOND_LOG(ERR, "Failed to destroy flow on member %d:"
" %d", i, lret);
ret = lret;
}
@@ -154,7 +154,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
int ret = 0;
int lret;
- /* Destroy all bond flows from its slaves instead of flushing them to
+ /* Destroy all bond flows from its members instead of flushing them to
* keep the LACP flow or any other external flows.
*/
RTE_TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
@@ -163,7 +163,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
ret = lret;
}
if (unlikely(ret != 0))
- RTE_BOND_LOG(ERR, "Failed to flush flow in all slaves");
+ RTE_BOND_LOG(ERR, "Failed to flush flow in all members");
return ret;
}
@@ -174,26 +174,26 @@ bond_flow_query_count(struct rte_eth_dev *dev, struct rte_flow *flow,
struct rte_flow_error *err)
{
struct bond_dev_private *internals = dev->data->dev_private;
- struct rte_flow_query_count slave_count;
+ struct rte_flow_query_count member_count;
int i;
int ret;
count->bytes = 0;
count->hits = 0;
- rte_memcpy(&slave_count, count, sizeof(slave_count));
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_flow_query(internals->slaves[i].port_id,
+ rte_memcpy(&member_count, count, sizeof(member_count));
+ for (i = 0; i < internals->member_count; i++) {
+ ret = rte_flow_query(internals->members[i].port_id,
flow->flows[i], action,
- &slave_count, err);
+ &member_count, err);
if (unlikely(ret != 0)) {
RTE_BOND_LOG(ERR, "Failed to query flow on"
- " slave %d: %d", i, ret);
+ " member %d: %d", i, ret);
return ret;
}
- count->bytes += slave_count.bytes;
- count->hits += slave_count.hits;
- slave_count.bytes = 0;
- slave_count.hits = 0;
+ count->bytes += member_count.bytes;
+ count->hits += member_count.hits;
+ member_count.bytes = 0;
+ member_count.hits = 0;
}
return 0;
}
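bond_flow_query_count() above builds one logical counter by querying each member flow and summing the per-member hits and bytes. The accumulation step reduces to the following sketch (illustrative types, not rte_flow's):

```c
#include <stddef.h>
#include <stdint.h>

struct count { uint64_t hits, bytes; };

/* Sum per-member counters into one bond-level counter, as the loop
 * in bond_flow_query_count() does after each rte_flow_query(). */
static struct count
aggregate_counts(const struct count *per_member, size_t n)
{
	struct count total = { 0, 0 };
	for (size_t i = 0; i < n; i++) {
		total.hits += per_member[i].hits;
		total.bytes += per_member[i].bytes;
	}
	return total;
}
```

In the driver the reusable scratch counter is zeroed between queries, which is what the `member_count.bytes = 0; member_count.hits = 0;` lines in the patch accomplish.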
@@ -221,11 +221,11 @@ bond_flow_isolate(struct rte_eth_dev *dev, int set,
int i;
int ret;
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_flow_isolate(internals->slaves[i].port_id, set, err);
+ for (i = 0; i < internals->member_count; i++) {
+ ret = rte_flow_isolate(internals->members[i].port_id, set, err);
if (unlikely(ret != 0)) {
RTE_BOND_LOG(ERR, "Operation rte_flow_isolate failed"
- " for slave %d with error %d", i, ret);
+ " for member %d with error %d", i, ret);
internals->flow_isolated_valid = 0;
return ret;
}
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 73205f78f4..499c980db8 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -61,33 +61,35 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
struct bond_dev_private *internals;
uint16_t num_rx_total = 0;
- uint16_t slave_count;
- uint16_t active_slave;
+ uint16_t member_count;
+ uint16_t active_member;
int i;
/* Cast to structure, containing bonded device's port id and queue id */
struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)queue;
internals = bd_rx_q->dev_private;
- slave_count = internals->active_slave_count;
- active_slave = bd_rx_q->active_slave;
+ member_count = internals->active_member_count;
+ active_member = bd_rx_q->active_member;
- for (i = 0; i < slave_count && nb_pkts; i++) {
- uint16_t num_rx_slave;
+ for (i = 0; i < member_count && nb_pkts; i++) {
+ uint16_t num_rx_member;
- /* Offset of pointer to *bufs increases as packets are received
- * from other slaves */
- num_rx_slave =
- rte_eth_rx_burst(internals->active_slaves[active_slave],
+ /*
+ * Offset of pointer to *bufs increases as packets are received
+ * from other members.
+ */
+ num_rx_member =
+ rte_eth_rx_burst(internals->active_members[active_member],
bd_rx_q->queue_id,
bufs + num_rx_total, nb_pkts);
- num_rx_total += num_rx_slave;
- nb_pkts -= num_rx_slave;
- if (++active_slave >= slave_count)
- active_slave = 0;
+ num_rx_total += num_rx_member;
+ nb_pkts -= num_rx_member;
+ if (++active_member >= member_count)
+ active_member = 0;
}
- if (++bd_rx_q->active_slave >= slave_count)
- bd_rx_q->active_slave = 0;
+ if (++bd_rx_q->active_member >= member_count)
+ bd_rx_q->active_member = 0;
return num_rx_total;
}
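The burst loop above walks the active members round-robin and saves the cursor in the queue so the next call resumes where this one stopped. The index arithmetic alone, as a self-contained sketch (function name is illustrative):

```c
#include <stdint.h>

/* Advance the per-queue cursor the way bond_ethdev_rx_burst() does:
 * wrap to 0 once the last active member has been visited. */
static uint16_t
next_active_member(uint16_t active_member, uint16_t member_count)
{
	if (member_count == 0)
		return 0;
	if (++active_member >= member_count)
		active_member = 0;
	return active_member;
}
```

Persisting the cursor per RX queue (`bd_rx_q->active_member`) rather than per device keeps concurrent queues from contending on one shared index.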
@@ -158,8 +160,8 @@ const struct rte_flow_attr flow_attr_8023ad = {
int
bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
- uint16_t slave_port) {
- struct rte_eth_dev_info slave_info;
+ uint16_t member_port) {
+ struct rte_eth_dev_info member_info;
struct rte_flow_error error;
struct bond_dev_private *internals = bond_dev->data->dev_private;
@@ -177,29 +179,29 @@ bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
}
};
- int ret = rte_flow_validate(slave_port, &flow_attr_8023ad,
+ int ret = rte_flow_validate(member_port, &flow_attr_8023ad,
flow_item_8023ad, actions, &error);
if (ret < 0) {
- RTE_BOND_LOG(ERR, "%s: %s (slave_port=%d queue_id=%d)",
- __func__, error.message, slave_port,
+ RTE_BOND_LOG(ERR, "%s: %s (member_port=%d queue_id=%d)",
+ __func__, error.message, member_port,
internals->mode4.dedicated_queues.rx_qid);
return -1;
}
- ret = rte_eth_dev_info_get(slave_port, &slave_info);
+ ret = rte_eth_dev_info_get(member_port, &member_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"%s: Error during getting device (port %u) info: %s\n",
- __func__, slave_port, strerror(-ret));
+ __func__, member_port, strerror(-ret));
return ret;
}
- if (slave_info.max_rx_queues < bond_dev->data->nb_rx_queues ||
- slave_info.max_tx_queues < bond_dev->data->nb_tx_queues) {
+ if (member_info.max_rx_queues < bond_dev->data->nb_rx_queues ||
+ member_info.max_tx_queues < bond_dev->data->nb_tx_queues) {
RTE_BOND_LOG(ERR,
- "%s: Slave %d capabilities doesn't allow allocating additional queues",
- __func__, slave_port);
+ "%s: Member %d capabilities don't allow allocating additional queues",
+ __func__, member_port);
return -1;
}
@@ -214,8 +216,8 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
uint16_t idx;
int ret;
- /* Verify if all slaves in bonding supports flow director and */
- if (internals->slave_count > 0) {
+ /* Verify that all members in the bonding support flow director */
+ if (internals->member_count > 0) {
ret = rte_eth_dev_info_get(bond_dev->data->port_id, &bond_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
@@ -229,9 +231,9 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
internals->mode4.dedicated_queues.rx_qid = bond_info.nb_rx_queues;
internals->mode4.dedicated_queues.tx_qid = bond_info.nb_tx_queues;
- for (idx = 0; idx < internals->slave_count; idx++) {
+ for (idx = 0; idx < internals->member_count; idx++) {
if (bond_ethdev_8023ad_flow_verify(bond_dev,
- internals->slaves[idx].port_id) != 0)
+ internals->members[idx].port_id) != 0)
return -1;
}
}
@@ -240,7 +242,7 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
}
int
-bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port) {
+bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t member_port) {
struct rte_flow_error error;
struct bond_dev_private *internals = bond_dev->data->dev_private;
@@ -258,12 +260,12 @@ bond_ethdev_8023ad_flow_set(struct rte_eth_dev *bond_dev, uint16_t slave_port) {
}
};
- internals->mode4.dedicated_queues.flow[slave_port] = rte_flow_create(slave_port,
+ internals->mode4.dedicated_queues.flow[member_port] = rte_flow_create(member_port,
&flow_attr_8023ad, flow_item_8023ad, actions, &error);
- if (internals->mode4.dedicated_queues.flow[slave_port] == NULL) {
+ if (internals->mode4.dedicated_queues.flow[member_port] == NULL) {
RTE_BOND_LOG(ERR, "bond_ethdev_8023ad_flow_set: %s "
- "(slave_port=%d queue_id=%d)",
- error.message, slave_port,
+ "(member_port=%d queue_id=%d)",
+ error.message, member_port,
internals->mode4.dedicated_queues.rx_qid);
return -1;
}
@@ -304,10 +306,10 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
const uint16_t ether_type_slow_be =
rte_be_to_cpu_16(RTE_ETHER_TYPE_SLOW);
uint16_t num_rx_total = 0; /* Total number of received packets */
- uint16_t slaves[RTE_MAX_ETHPORTS];
- uint16_t slave_count, idx;
+ uint16_t members[RTE_MAX_ETHPORTS];
+ uint16_t member_count, idx;
- uint8_t collecting; /* current slave collecting status */
+ uint8_t collecting; /* current member collecting status */
const uint8_t promisc = rte_eth_promiscuous_get(internals->port_id);
const uint8_t allmulti = rte_eth_allmulticast_get(internals->port_id);
uint8_t subtype;
@@ -315,24 +317,24 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
uint16_t j;
uint16_t k;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy member list to protect against member up/down changes during tx
* bursting */
- slave_count = internals->active_slave_count;
- memcpy(slaves, internals->active_slaves,
- sizeof(internals->active_slaves[0]) * slave_count);
+ member_count = internals->active_member_count;
+ memcpy(members, internals->active_members,
+ sizeof(internals->active_members[0]) * member_count);
- idx = bd_rx_q->active_slave;
- if (idx >= slave_count) {
- bd_rx_q->active_slave = 0;
+ idx = bd_rx_q->active_member;
+ if (idx >= member_count) {
+ bd_rx_q->active_member = 0;
idx = 0;
}
- for (i = 0; i < slave_count && num_rx_total < nb_pkts; i++) {
+ for (i = 0; i < member_count && num_rx_total < nb_pkts; i++) {
j = num_rx_total;
- collecting = ACTOR_STATE(&bond_mode_8023ad_ports[slaves[idx]],
+ collecting = ACTOR_STATE(&bond_mode_8023ad_ports[members[idx]],
COLLECTING);
- /* Read packets from this slave */
- num_rx_total += rte_eth_rx_burst(slaves[idx], bd_rx_q->queue_id,
+ /* Read packets from this member */
+ num_rx_total += rte_eth_rx_burst(members[idx], bd_rx_q->queue_id,
&bufs[num_rx_total], nb_pkts - num_rx_total);
for (k = j; k < 2 && k < num_rx_total; k++)
@@ -348,7 +350,7 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
/* Remove packet from array if:
* - it is slow packet but no dedicated rxq is present,
- * - slave is not in collecting state,
+ * - member is not in collecting state,
* - bonding interface is not in promiscuous mode and
* packet address isn't in mac_addrs array:
* - packet is unicast,
@@ -367,7 +369,7 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
!allmulti)))) {
if (hdr->ether_type == ether_type_slow_be) {
bond_mode_8023ad_handle_slow_pkt(
- internals, slaves[idx], bufs[j]);
+ internals, members[idx], bufs[j]);
} else
rte_pktmbuf_free(bufs[j]);
@@ -380,12 +382,12 @@ rx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts,
} else
j++;
}
- if (unlikely(++idx == slave_count))
+ if (unlikely(++idx == member_count))
idx = 0;
}
- if (++bd_rx_q->active_slave >= slave_count)
- bd_rx_q->active_slave = 0;
+ if (++bd_rx_q->active_member >= member_count)
+ bd_rx_q->active_member = 0;
return num_rx_total;
}
@@ -406,7 +408,7 @@ bond_ethdev_rx_burst_8023ad_fast_queue(void *queue, struct rte_mbuf **bufs,
#if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
uint32_t burstnumberRX;
-uint32_t burstnumberTX;
+uint32_t burst_number_TX;
#ifdef RTE_LIBRTE_BOND_DEBUG_ALB
@@ -583,59 +585,61 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
struct bond_dev_private *internals;
struct bond_tx_queue *bd_tx_q;
- struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS][nb_pkts];
- uint16_t slave_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
+ struct rte_mbuf *member_bufs[RTE_MAX_ETHPORTS][nb_pkts];
+ uint16_t member_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
- uint16_t num_of_slaves;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t num_of_members;
+ uint16_t members[RTE_MAX_ETHPORTS];
- uint16_t num_tx_total = 0, num_tx_slave;
+ uint16_t num_tx_total = 0, num_tx_member;
- static int slave_idx = 0;
- int i, cslave_idx = 0, tx_fail_total = 0;
+ static int member_idx;
+ int i, cmember_idx = 0, tx_fail_total = 0;
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy member list to protect against member up/down changes during tx
* bursting */
- num_of_slaves = internals->active_slave_count;
- memcpy(slaves, internals->active_slaves,
- sizeof(internals->active_slaves[0]) * num_of_slaves);
+ num_of_members = internals->active_member_count;
+ memcpy(members, internals->active_members,
+ sizeof(internals->active_members[0]) * num_of_members);
- if (num_of_slaves < 1)
+ if (num_of_members < 1)
return num_tx_total;
- /* Populate slaves mbuf with which packets are to be sent on it */
+ /* Populate each member's mbuf array with the packets to be sent on it */
for (i = 0; i < nb_pkts; i++) {
- cslave_idx = (slave_idx + i) % num_of_slaves;
- slave_bufs[cslave_idx][(slave_nb_pkts[cslave_idx])++] = bufs[i];
+ cmember_idx = (member_idx + i) % num_of_members;
+ member_bufs[cmember_idx][(member_nb_pkts[cmember_idx])++] = bufs[i];
}
- /* increment current slave index so the next call to tx burst starts on the
- * next slave */
- slave_idx = ++cslave_idx;
+ /*
+ * Increment the current member index so the next call to tx burst starts
+ * on the next member.
+ */
+ member_idx = ++cmember_idx;
- /* Send packet burst on each slave device */
- for (i = 0; i < num_of_slaves; i++) {
- if (slave_nb_pkts[i] > 0) {
- num_tx_slave = rte_eth_tx_prepare(slaves[i],
- bd_tx_q->queue_id, slave_bufs[i],
- slave_nb_pkts[i]);
- num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
- slave_bufs[i], num_tx_slave);
+ /* Send packet burst on each member device */
+ for (i = 0; i < num_of_members; i++) {
+ if (member_nb_pkts[i] > 0) {
+ num_tx_member = rte_eth_tx_prepare(members[i],
+ bd_tx_q->queue_id, member_bufs[i],
+ member_nb_pkts[i]);
+ num_tx_member = rte_eth_tx_burst(members[i], bd_tx_q->queue_id,
+ member_bufs[i], num_tx_member);
/* if tx burst fails move packets to end of bufs */
- if (unlikely(num_tx_slave < slave_nb_pkts[i])) {
- int tx_fail_slave = slave_nb_pkts[i] - num_tx_slave;
+ if (unlikely(num_tx_member < member_nb_pkts[i])) {
+ int tx_fail_member = member_nb_pkts[i] - num_tx_member;
- tx_fail_total += tx_fail_slave;
+ tx_fail_total += tx_fail_member;
memcpy(&bufs[nb_pkts - tx_fail_total],
- &slave_bufs[i][num_tx_slave],
- tx_fail_slave * sizeof(bufs[0]));
+ &member_bufs[i][num_tx_member],
+ tx_fail_member * sizeof(bufs[0]));
}
- num_tx_total += num_tx_slave;
+ num_tx_total += num_tx_member;
}
}
@@ -653,7 +657,7 @@ bond_ethdev_tx_burst_active_backup(void *queue,
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
- if (internals->active_slave_count < 1)
+ if (internals->active_member_count < 1)
return 0;
nb_prep_pkts = rte_eth_tx_prepare(internals->current_primary_port,
@@ -699,7 +703,7 @@ ipv6_hash(struct rte_ipv6_hdr *ipv6_hdr)
void
burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves)
+ uint16_t member_count, uint16_t *members)
{
struct rte_ether_hdr *eth_hdr;
uint32_t hash;
@@ -710,13 +714,13 @@ burst_xmit_l2_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
hash = ether_hash(eth_hdr);
- slaves[i] = (hash ^= hash >> 8) % slave_count;
+ members[i] = (hash ^= hash >> 8) % member_count;
}
}
void
burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves)
+ uint16_t member_count, uint16_t *members)
{
uint16_t i;
struct rte_ether_hdr *eth_hdr;
@@ -748,13 +752,13 @@ burst_xmit_l23_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
hash ^= hash >> 16;
hash ^= hash >> 8;
- slaves[i] = hash % slave_count;
+ members[i] = hash % member_count;
}
}
void
burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
- uint16_t slave_count, uint16_t *slaves)
+ uint16_t member_count, uint16_t *members)
{
struct rte_ether_hdr *eth_hdr;
uint16_t proto;
@@ -822,30 +826,29 @@ burst_xmit_l34_hash(struct rte_mbuf **buf, uint16_t nb_pkts,
hash ^= hash >> 16;
hash ^= hash >> 8;
- slaves[i] = hash % slave_count;
+ members[i] = hash % member_count;
}
}
-struct bwg_slave {
+struct bwg_member {
uint64_t bwg_left_int;
uint64_t bwg_left_remainder;
- uint16_t slave;
+ uint16_t member;
};
void
-bond_tlb_activate_slave(struct bond_dev_private *internals) {
+bond_tlb_activate_member(struct bond_dev_private *internals) {
int i;
- for (i = 0; i < internals->active_slave_count; i++) {
- tlb_last_obytets[internals->active_slaves[i]] = 0;
- }
+ for (i = 0; i < internals->active_member_count; i++)
+ tlb_last_obytets[internals->active_members[i]] = 0;
}
static int
bandwidth_cmp(const void *a, const void *b)
{
- const struct bwg_slave *bwg_a = a;
- const struct bwg_slave *bwg_b = b;
+ const struct bwg_member *bwg_a = a;
+ const struct bwg_member *bwg_b = b;
int64_t diff = (int64_t)bwg_b->bwg_left_int - (int64_t)bwg_a->bwg_left_int;
int64_t diff2 = (int64_t)bwg_b->bwg_left_remainder -
(int64_t)bwg_a->bwg_left_remainder;
@@ -863,14 +866,14 @@ bandwidth_cmp(const void *a, const void *b)
static void
bandwidth_left(uint16_t port_id, uint64_t load, uint8_t update_idx,
- struct bwg_slave *bwg_slave)
+ struct bwg_member *bwg_member)
{
struct rte_eth_link link_status;
int ret;
ret = rte_eth_link_get_nowait(port_id, &link_status);
if (ret < 0) {
- RTE_BOND_LOG(ERR, "Slave (port %u) link get failed: %s",
+ RTE_BOND_LOG(ERR, "Member (port %u) link get failed: %s",
port_id, rte_strerror(-ret));
return;
}
@@ -878,51 +881,51 @@ bandwidth_left(uint16_t port_id, uint64_t load, uint8_t update_idx,
if (link_bwg == 0)
return;
link_bwg = link_bwg * (update_idx+1) * REORDER_PERIOD_MS;
- bwg_slave->bwg_left_int = (link_bwg - 1000*load) / link_bwg;
- bwg_slave->bwg_left_remainder = (link_bwg - 1000*load) % link_bwg;
+ bwg_member->bwg_left_int = (link_bwg - 1000 * load) / link_bwg;
+ bwg_member->bwg_left_remainder = (link_bwg - 1000 * load) % link_bwg;
}
static void
-bond_ethdev_update_tlb_slave_cb(void *arg)
+bond_ethdev_update_tlb_member_cb(void *arg)
{
struct bond_dev_private *internals = arg;
- struct rte_eth_stats slave_stats;
- struct bwg_slave bwg_array[RTE_MAX_ETHPORTS];
- uint16_t slave_count;
+ struct rte_eth_stats member_stats;
+ struct bwg_member bwg_array[RTE_MAX_ETHPORTS];
+ uint16_t member_count;
uint64_t tx_bytes;
uint8_t update_stats = 0;
- uint16_t slave_id;
+ uint16_t member_id;
uint16_t i;
- internals->slave_update_idx++;
+ internals->member_update_idx++;
- if (internals->slave_update_idx >= REORDER_PERIOD_MS)
+ if (internals->member_update_idx >= REORDER_PERIOD_MS)
update_stats = 1;
- for (i = 0; i < internals->active_slave_count; i++) {
- slave_id = internals->active_slaves[i];
- rte_eth_stats_get(slave_id, &slave_stats);
- tx_bytes = slave_stats.obytes - tlb_last_obytets[slave_id];
- bandwidth_left(slave_id, tx_bytes,
- internals->slave_update_idx, &bwg_array[i]);
- bwg_array[i].slave = slave_id;
+ for (i = 0; i < internals->active_member_count; i++) {
+ member_id = internals->active_members[i];
+ rte_eth_stats_get(member_id, &member_stats);
+ tx_bytes = member_stats.obytes - tlb_last_obytets[member_id];
+ bandwidth_left(member_id, tx_bytes,
+ internals->member_update_idx, &bwg_array[i]);
+ bwg_array[i].member = member_id;
if (update_stats) {
- tlb_last_obytets[slave_id] = slave_stats.obytes;
+ tlb_last_obytets[member_id] = member_stats.obytes;
}
}
if (update_stats == 1)
- internals->slave_update_idx = 0;
+ internals->member_update_idx = 0;
- slave_count = i;
- qsort(bwg_array, slave_count, sizeof(bwg_array[0]), bandwidth_cmp);
- for (i = 0; i < slave_count; i++)
- internals->tlb_slaves_order[i] = bwg_array[i].slave;
+ member_count = i;
+ qsort(bwg_array, member_count, sizeof(bwg_array[0]), bandwidth_cmp);
+ for (i = 0; i < member_count; i++)
+ internals->tlb_members_order[i] = bwg_array[i].member;
- rte_eal_alarm_set(REORDER_PERIOD_MS * 1000, bond_ethdev_update_tlb_slave_cb,
+ rte_eal_alarm_set(REORDER_PERIOD_MS * 1000, bond_ethdev_update_tlb_member_cb,
(struct bond_dev_private *)internals);
}
@@ -937,29 +940,29 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
uint16_t num_tx_total = 0, num_tx_prep;
uint16_t i, j;
- uint16_t num_of_slaves = internals->active_slave_count;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t num_of_members = internals->active_member_count;
+ uint16_t members[RTE_MAX_ETHPORTS];
struct rte_ether_hdr *ether_hdr;
- struct rte_ether_addr primary_slave_addr;
- struct rte_ether_addr active_slave_addr;
+ struct rte_ether_addr primary_member_addr;
+ struct rte_ether_addr active_member_addr;
- if (num_of_slaves < 1)
+ if (num_of_members < 1)
return num_tx_total;
- memcpy(slaves, internals->tlb_slaves_order,
- sizeof(internals->tlb_slaves_order[0]) * num_of_slaves);
+ memcpy(members, internals->tlb_members_order,
+ sizeof(internals->tlb_members_order[0]) * num_of_members);
- rte_ether_addr_copy(primary_port->data->mac_addrs, &primary_slave_addr);
+ rte_ether_addr_copy(primary_port->data->mac_addrs, &primary_member_addr);
if (nb_pkts > 3) {
for (i = 0; i < 3; i++)
rte_prefetch0(rte_pktmbuf_mtod(bufs[i], void*));
}
- for (i = 0; i < num_of_slaves; i++) {
- rte_eth_macaddr_get(slaves[i], &active_slave_addr);
+ for (i = 0; i < num_of_members; i++) {
+ rte_eth_macaddr_get(members[i], &active_member_addr);
for (j = num_tx_total; j < nb_pkts; j++) {
if (j + 3 < nb_pkts)
rte_prefetch0(rte_pktmbuf_mtod(bufs[j+3], void*));
@@ -967,17 +970,18 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
ether_hdr = rte_pktmbuf_mtod(bufs[j],
struct rte_ether_hdr *);
if (rte_is_same_ether_addr(&ether_hdr->src_addr,
- &primary_slave_addr))
- rte_ether_addr_copy(&active_slave_addr,
+ &primary_member_addr))
+ rte_ether_addr_copy(&active_member_addr,
&ether_hdr->src_addr);
#if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
- mode6_debug("TX IPv4:", ether_hdr, slaves[i], &burstnumberTX);
+ mode6_debug("TX IPv4:", ether_hdr, members[i],
+ &burst_number_TX);
#endif
}
- num_tx_prep = rte_eth_tx_prepare(slaves[i], bd_tx_q->queue_id,
+ num_tx_prep = rte_eth_tx_prepare(members[i], bd_tx_q->queue_id,
bufs + num_tx_total, nb_pkts - num_tx_total);
- num_tx_total += rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+ num_tx_total += rte_eth_tx_burst(members[i], bd_tx_q->queue_id,
bufs + num_tx_total, num_tx_prep);
if (num_tx_total == nb_pkts)
@@ -990,13 +994,13 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
void
bond_tlb_disable(struct bond_dev_private *internals)
{
- rte_eal_alarm_cancel(bond_ethdev_update_tlb_slave_cb, internals);
+ rte_eal_alarm_cancel(bond_ethdev_update_tlb_member_cb, internals);
}
void
bond_tlb_enable(struct bond_dev_private *internals)
{
- bond_ethdev_update_tlb_slave_cb(internals);
+ bond_ethdev_update_tlb_member_cb(internals);
}
static uint16_t
@@ -1011,11 +1015,11 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
struct client_data *client_info;
/*
- * We create transmit buffers for every slave and one additional to send
+ * We create transmit buffers for every member and one additional to send
* through tlb. In worst case every packet will be send on one port.
*/
- struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS + 1][nb_pkts];
- uint16_t slave_bufs_pkts[RTE_MAX_ETHPORTS + 1] = { 0 };
+ struct rte_mbuf *member_bufs[RTE_MAX_ETHPORTS + 1][nb_pkts];
+ uint16_t member_bufs_pkts[RTE_MAX_ETHPORTS + 1] = { 0 };
/*
* We create separate transmit buffers for update packets as they won't
@@ -1029,7 +1033,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
uint16_t num_send, num_not_send = 0;
uint16_t num_tx_total = 0;
- uint16_t slave_idx;
+ uint16_t member_idx;
int i, j;
@@ -1040,19 +1044,19 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
offset = get_vlan_offset(eth_h, &ether_type);
if (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_ARP)) {
- slave_idx = bond_mode_alb_arp_xmit(eth_h, offset, internals);
+ member_idx = bond_mode_alb_arp_xmit(eth_h, offset, internals);
/* Change src mac in eth header */
- rte_eth_macaddr_get(slave_idx, &eth_h->src_addr);
+ rte_eth_macaddr_get(member_idx, &eth_h->src_addr);
- /* Add packet to slave tx buffer */
- slave_bufs[slave_idx][slave_bufs_pkts[slave_idx]] = bufs[i];
- slave_bufs_pkts[slave_idx]++;
+ /* Add packet to member tx buffer */
+ member_bufs[member_idx][member_bufs_pkts[member_idx]] = bufs[i];
+ member_bufs_pkts[member_idx]++;
} else {
/* If packet is not ARP, send it with TLB policy */
- slave_bufs[RTE_MAX_ETHPORTS][slave_bufs_pkts[RTE_MAX_ETHPORTS]] =
+ member_bufs[RTE_MAX_ETHPORTS][member_bufs_pkts[RTE_MAX_ETHPORTS]] =
bufs[i];
- slave_bufs_pkts[RTE_MAX_ETHPORTS]++;
+ member_bufs_pkts[RTE_MAX_ETHPORTS]++;
}
}
@@ -1062,7 +1066,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
client_info = &internals->mode6.client_table[i];
if (client_info->in_use) {
- /* Allocate new packet to send ARP update on current slave */
+ /* Allocate new packet to send ARP update on current member */
upd_pkt = rte_pktmbuf_alloc(internals->mode6.mempool);
if (upd_pkt == NULL) {
RTE_BOND_LOG(ERR,
@@ -1076,44 +1080,44 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
upd_pkt->data_len = pkt_size;
upd_pkt->pkt_len = pkt_size;
- slave_idx = bond_mode_alb_arp_upd(client_info, upd_pkt,
+ member_idx = bond_mode_alb_arp_upd(client_info, upd_pkt,
internals);
/* Add packet to update tx buffer */
- update_bufs[slave_idx][update_bufs_pkts[slave_idx]] = upd_pkt;
- update_bufs_pkts[slave_idx]++;
+ update_bufs[member_idx][update_bufs_pkts[member_idx]] = upd_pkt;
+ update_bufs_pkts[member_idx]++;
}
}
internals->mode6.ntt = 0;
}
- /* Send ARP packets on proper slaves */
+ /* Send ARP packets on proper members */
for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
- if (slave_bufs_pkts[i] > 0) {
+ if (member_bufs_pkts[i] > 0) {
num_send = rte_eth_tx_prepare(i, bd_tx_q->queue_id,
- slave_bufs[i], slave_bufs_pkts[i]);
+ member_bufs[i], member_bufs_pkts[i]);
num_send = rte_eth_tx_burst(i, bd_tx_q->queue_id,
- slave_bufs[i], num_send);
- for (j = 0; j < slave_bufs_pkts[i] - num_send; j++) {
+ member_bufs[i], num_send);
+ for (j = 0; j < member_bufs_pkts[i] - num_send; j++) {
bufs[nb_pkts - 1 - num_not_send - j] =
- slave_bufs[i][nb_pkts - 1 - j];
+ member_bufs[i][nb_pkts - 1 - j];
}
num_tx_total += num_send;
- num_not_send += slave_bufs_pkts[i] - num_send;
+ num_not_send += member_bufs_pkts[i] - num_send;
#if defined(RTE_LIBRTE_BOND_DEBUG_ALB) || defined(RTE_LIBRTE_BOND_DEBUG_ALB_L1)
/* Print TX stats including update packets */
- for (j = 0; j < slave_bufs_pkts[i]; j++) {
- eth_h = rte_pktmbuf_mtod(slave_bufs[i][j],
+ for (j = 0; j < member_bufs_pkts[i]; j++) {
+ eth_h = rte_pktmbuf_mtod(member_bufs[i][j],
struct rte_ether_hdr *);
- mode6_debug("TX ARP:", eth_h, i, &burstnumberTX);
+ mode6_debug("TX ARP:", eth_h, i, &burst_number_TX);
}
#endif
}
}
- /* Send update packets on proper slaves */
+ /* Send update packets on proper members */
for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
if (update_bufs_pkts[i] > 0) {
num_send = rte_eth_tx_prepare(i, bd_tx_q->queue_id,
@@ -1127,21 +1131,21 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
for (j = 0; j < update_bufs_pkts[i]; j++) {
eth_h = rte_pktmbuf_mtod(update_bufs[i][j],
struct rte_ether_hdr *);
- mode6_debug("TX ARPupd:", eth_h, i, &burstnumberTX);
+ mode6_debug("TX ARPupd:", eth_h, i, &burst_number_TX);
}
#endif
}
}
/* Send non-ARP packets using tlb policy */
- if (slave_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
+ if (member_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
num_send = bond_ethdev_tx_burst_tlb(queue,
- slave_bufs[RTE_MAX_ETHPORTS],
- slave_bufs_pkts[RTE_MAX_ETHPORTS]);
+ member_bufs[RTE_MAX_ETHPORTS],
+ member_bufs_pkts[RTE_MAX_ETHPORTS]);
- for (j = 0; j < slave_bufs_pkts[RTE_MAX_ETHPORTS]; j++) {
+ for (j = 0; j < member_bufs_pkts[RTE_MAX_ETHPORTS]; j++) {
bufs[nb_pkts - 1 - num_not_send - j] =
- slave_bufs[RTE_MAX_ETHPORTS][nb_pkts - 1 - j];
+ member_bufs[RTE_MAX_ETHPORTS][nb_pkts - 1 - j];
}
num_tx_total += num_send;
@@ -1152,59 +1156,59 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
static inline uint16_t
tx_burst_balance(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
- uint16_t *slave_port_ids, uint16_t slave_count)
+ uint16_t *member_port_ids, uint16_t member_count)
{
struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
struct bond_dev_private *internals = bd_tx_q->dev_private;
- /* Array to sort mbufs for transmission on each slave into */
- struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS][nb_bufs];
- /* Number of mbufs for transmission on each slave */
- uint16_t slave_nb_bufs[RTE_MAX_ETHPORTS] = { 0 };
- /* Mapping array generated by hash function to map mbufs to slaves */
- uint16_t bufs_slave_port_idxs[nb_bufs];
+ /* Array to sort mbufs for transmission on each member into */
+ struct rte_mbuf *member_bufs[RTE_MAX_ETHPORTS][nb_bufs];
+ /* Number of mbufs for transmission on each member */
+ uint16_t member_nb_bufs[RTE_MAX_ETHPORTS] = { 0 };
+ /* Mapping array generated by hash function to map mbufs to members */
+ uint16_t bufs_member_port_idxs[nb_bufs];
- uint16_t slave_tx_count;
+ uint16_t member_tx_count;
uint16_t total_tx_count = 0, total_tx_fail_count = 0;
uint16_t i;
/*
- * Populate slaves mbuf with the packets which are to be sent on it
- * selecting output slave using hash based on xmit policy
+ * Populate each member's mbuf array with the packets to be sent on it,
+ * selecting the output member with a hash based on the xmit policy
*/
- internals->burst_xmit_hash(bufs, nb_bufs, slave_count,
- bufs_slave_port_idxs);
+ internals->burst_xmit_hash(bufs, nb_bufs, member_count,
+ bufs_member_port_idxs);
for (i = 0; i < nb_bufs; i++) {
- /* Populate slave mbuf arrays with mbufs for that slave. */
- uint16_t slave_idx = bufs_slave_port_idxs[i];
+ /* Populate member mbuf arrays with mbufs for that member. */
+ uint16_t member_idx = bufs_member_port_idxs[i];
- slave_bufs[slave_idx][slave_nb_bufs[slave_idx]++] = bufs[i];
+ member_bufs[member_idx][member_nb_bufs[member_idx]++] = bufs[i];
}
- /* Send packet burst on each slave device */
- for (i = 0; i < slave_count; i++) {
- if (slave_nb_bufs[i] == 0)
+ /* Send packet burst on each member device */
+ for (i = 0; i < member_count; i++) {
+ if (member_nb_bufs[i] == 0)
continue;
- slave_tx_count = rte_eth_tx_prepare(slave_port_ids[i],
- bd_tx_q->queue_id, slave_bufs[i],
- slave_nb_bufs[i]);
- slave_tx_count = rte_eth_tx_burst(slave_port_ids[i],
- bd_tx_q->queue_id, slave_bufs[i],
- slave_tx_count);
+ member_tx_count = rte_eth_tx_prepare(member_port_ids[i],
+ bd_tx_q->queue_id, member_bufs[i],
+ member_nb_bufs[i]);
+ member_tx_count = rte_eth_tx_burst(member_port_ids[i],
+ bd_tx_q->queue_id, member_bufs[i],
+ member_tx_count);
- total_tx_count += slave_tx_count;
+ total_tx_count += member_tx_count;
/* If tx burst fails move packets to end of bufs */
- if (unlikely(slave_tx_count < slave_nb_bufs[i])) {
- int slave_tx_fail_count = slave_nb_bufs[i] -
- slave_tx_count;
- total_tx_fail_count += slave_tx_fail_count;
+ if (unlikely(member_tx_count < member_nb_bufs[i])) {
+ int member_tx_fail_count = member_nb_bufs[i] -
+ member_tx_count;
+ total_tx_fail_count += member_tx_fail_count;
memcpy(&bufs[nb_bufs - total_tx_fail_count],
- &slave_bufs[i][slave_tx_count],
- slave_tx_fail_count * sizeof(bufs[0]));
+ &member_bufs[i][member_tx_count],
+ member_tx_fail_count * sizeof(bufs[0]));
}
}
@@ -1218,23 +1222,23 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
struct bond_dev_private *internals = bd_tx_q->dev_private;
- uint16_t slave_port_ids[RTE_MAX_ETHPORTS];
- uint16_t slave_count;
+ uint16_t member_port_ids[RTE_MAX_ETHPORTS];
+ uint16_t member_count;
if (unlikely(nb_bufs == 0))
return 0;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy member list to protect against member up/down changes during tx
* bursting
*/
- slave_count = internals->active_slave_count;
- if (unlikely(slave_count < 1))
+ member_count = internals->active_member_count;
+ if (unlikely(member_count < 1))
return 0;
- memcpy(slave_port_ids, internals->active_slaves,
- sizeof(slave_port_ids[0]) * slave_count);
- return tx_burst_balance(queue, bufs, nb_bufs, slave_port_ids,
- slave_count);
+ memcpy(member_port_ids, internals->active_members,
+ sizeof(member_port_ids[0]) * member_count);
+ return tx_burst_balance(queue, bufs, nb_bufs, member_port_ids,
+ member_count);
}
static inline uint16_t
@@ -1244,31 +1248,31 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
struct bond_dev_private *internals = bd_tx_q->dev_private;
- uint16_t slave_port_ids[RTE_MAX_ETHPORTS];
- uint16_t slave_count;
+ uint16_t member_port_ids[RTE_MAX_ETHPORTS];
+ uint16_t member_count;
- uint16_t dist_slave_port_ids[RTE_MAX_ETHPORTS];
- uint16_t dist_slave_count;
+ uint16_t dist_member_port_ids[RTE_MAX_ETHPORTS];
+ uint16_t dist_member_count;
- uint16_t slave_tx_count;
+ uint16_t member_tx_count;
uint16_t i;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy member list to protect against member up/down changes during tx
* bursting */
- slave_count = internals->active_slave_count;
- if (unlikely(slave_count < 1))
+ member_count = internals->active_member_count;
+ if (unlikely(member_count < 1))
return 0;
- memcpy(slave_port_ids, internals->active_slaves,
- sizeof(slave_port_ids[0]) * slave_count);
+ memcpy(member_port_ids, internals->active_members,
+ sizeof(member_port_ids[0]) * member_count);
if (dedicated_txq)
goto skip_tx_ring;
/* Check for LACP control packets and send if available */
- for (i = 0; i < slave_count; i++) {
- struct port *port = &bond_mode_8023ad_ports[slave_port_ids[i]];
+ for (i = 0; i < member_count; i++) {
+ struct port *port = &bond_mode_8023ad_ports[member_port_ids[i]];
struct rte_mbuf *ctrl_pkt = NULL;
if (likely(rte_ring_empty(port->tx_ring)))
@@ -1276,15 +1280,15 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
if (rte_ring_dequeue(port->tx_ring,
(void **)&ctrl_pkt) != -ENOENT) {
- slave_tx_count = rte_eth_tx_prepare(slave_port_ids[i],
+ member_tx_count = rte_eth_tx_prepare(member_port_ids[i],
bd_tx_q->queue_id, &ctrl_pkt, 1);
- slave_tx_count = rte_eth_tx_burst(slave_port_ids[i],
- bd_tx_q->queue_id, &ctrl_pkt, slave_tx_count);
+ member_tx_count = rte_eth_tx_burst(member_port_ids[i],
+ bd_tx_q->queue_id, &ctrl_pkt, member_tx_count);
/*
* re-enqueue LAG control plane packets to buffering
* ring if transmission fails so the packet isn't lost.
*/
- if (slave_tx_count != 1)
+ if (member_tx_count != 1)
rte_ring_enqueue(port->tx_ring, ctrl_pkt);
}
}
@@ -1293,20 +1297,20 @@ tx_burst_8023ad(void *queue, struct rte_mbuf **bufs, uint16_t nb_bufs,
if (unlikely(nb_bufs == 0))
return 0;
- dist_slave_count = 0;
- for (i = 0; i < slave_count; i++) {
- struct port *port = &bond_mode_8023ad_ports[slave_port_ids[i]];
+ dist_member_count = 0;
+ for (i = 0; i < member_count; i++) {
+ struct port *port = &bond_mode_8023ad_ports[member_port_ids[i]];
if (ACTOR_STATE(port, DISTRIBUTING))
- dist_slave_port_ids[dist_slave_count++] =
- slave_port_ids[i];
+ dist_member_port_ids[dist_member_count++] =
+ member_port_ids[i];
}
- if (unlikely(dist_slave_count < 1))
+ if (unlikely(dist_member_count < 1))
return 0;
- return tx_burst_balance(queue, bufs, nb_bufs, dist_slave_port_ids,
- dist_slave_count);
+ return tx_burst_balance(queue, bufs, nb_bufs, dist_member_port_ids,
+ dist_member_count);
}
static uint16_t
@@ -1330,78 +1334,78 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
struct bond_dev_private *internals;
struct bond_tx_queue *bd_tx_q;
- uint16_t slaves[RTE_MAX_ETHPORTS];
+ uint16_t members[RTE_MAX_ETHPORTS];
uint8_t tx_failed_flag = 0;
- uint16_t num_of_slaves;
+ uint16_t num_of_members;
uint16_t max_nb_of_tx_pkts = 0;
- int slave_tx_total[RTE_MAX_ETHPORTS];
- int i, most_successful_tx_slave = -1;
+ int member_tx_total[RTE_MAX_ETHPORTS];
+ int i, most_successful_tx_member = -1;
bd_tx_q = (struct bond_tx_queue *)queue;
internals = bd_tx_q->dev_private;
- /* Copy slave list to protect against slave up/down changes during tx
+ /* Copy member list to protect against member up/down changes during tx
* bursting */
- num_of_slaves = internals->active_slave_count;
- memcpy(slaves, internals->active_slaves,
- sizeof(internals->active_slaves[0]) * num_of_slaves);
+ num_of_members = internals->active_member_count;
+ memcpy(members, internals->active_members,
+ sizeof(internals->active_members[0]) * num_of_members);
- if (num_of_slaves < 1)
+ if (num_of_members < 1)
return 0;
/* It is rare that bond different PMDs together, so just call tx-prepare once */
- nb_pkts = rte_eth_tx_prepare(slaves[0], bd_tx_q->queue_id, bufs, nb_pkts);
+ nb_pkts = rte_eth_tx_prepare(members[0], bd_tx_q->queue_id, bufs, nb_pkts);
/* Increment reference count on mbufs */
for (i = 0; i < nb_pkts; i++)
- rte_pktmbuf_refcnt_update(bufs[i], num_of_slaves - 1);
+ rte_pktmbuf_refcnt_update(bufs[i], num_of_members - 1);
- /* Transmit burst on each active slave */
- for (i = 0; i < num_of_slaves; i++) {
- slave_tx_total[i] = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+ /* Transmit burst on each active member */
+ for (i = 0; i < num_of_members; i++) {
+ member_tx_total[i] = rte_eth_tx_burst(members[i], bd_tx_q->queue_id,
bufs, nb_pkts);
- if (unlikely(slave_tx_total[i] < nb_pkts))
+ if (unlikely(member_tx_total[i] < nb_pkts))
tx_failed_flag = 1;
- /* record the value and slave index for the slave which transmits the
+ /* Record the count and member index for the member that transmits the
* maximum number of packets */
- if (slave_tx_total[i] > max_nb_of_tx_pkts) {
- max_nb_of_tx_pkts = slave_tx_total[i];
- most_successful_tx_slave = i;
+ if (member_tx_total[i] > max_nb_of_tx_pkts) {
+ max_nb_of_tx_pkts = member_tx_total[i];
+ most_successful_tx_member = i;
}
}
- /* if slaves fail to transmit packets from burst, the calling application
+ /* if members fail to transmit packets from burst, the calling application
* is not expected to know about multiple references to packets so we must
- * handle failures of all packets except those of the most successful slave
+ * handle failures of all packets except those of the most successful member
*/
if (unlikely(tx_failed_flag))
- for (i = 0; i < num_of_slaves; i++)
- if (i != most_successful_tx_slave)
- while (slave_tx_total[i] < nb_pkts)
- rte_pktmbuf_free(bufs[slave_tx_total[i]++]);
+ for (i = 0; i < num_of_members; i++)
+ if (i != most_successful_tx_member)
+ while (member_tx_total[i] < nb_pkts)
+ rte_pktmbuf_free(bufs[member_tx_total[i]++]);
return max_nb_of_tx_pkts;
}
static void
-link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
+link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *member_link)
{
struct bond_dev_private *bond_ctx = ethdev->data->dev_private;
if (bond_ctx->mode == BONDING_MODE_8023AD) {
/**
* If in mode 4 then save the link properties of the first
- * slave, all subsequent slaves must match these properties
+ * member; all subsequent members must match these properties
*/
- struct rte_eth_link *bond_link = &bond_ctx->mode4.slave_link;
+ struct rte_eth_link *bond_link = &bond_ctx->mode4.member_link;
- bond_link->link_autoneg = slave_link->link_autoneg;
- bond_link->link_duplex = slave_link->link_duplex;
- bond_link->link_speed = slave_link->link_speed;
+ bond_link->link_autoneg = member_link->link_autoneg;
+ bond_link->link_duplex = member_link->link_duplex;
+ bond_link->link_speed = member_link->link_speed;
} else {
/**
* In any other mode the link properties are set to default
@@ -1414,16 +1418,16 @@ link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
static int
link_properties_valid(struct rte_eth_dev *ethdev,
- struct rte_eth_link *slave_link)
+ struct rte_eth_link *member_link)
{
struct bond_dev_private *bond_ctx = ethdev->data->dev_private;
if (bond_ctx->mode == BONDING_MODE_8023AD) {
- struct rte_eth_link *bond_link = &bond_ctx->mode4.slave_link;
+ struct rte_eth_link *bond_link = &bond_ctx->mode4.member_link;
- if (bond_link->link_duplex != slave_link->link_duplex ||
- bond_link->link_autoneg != slave_link->link_autoneg ||
- bond_link->link_speed != slave_link->link_speed)
+ if (bond_link->link_duplex != member_link->link_duplex ||
+ bond_link->link_autoneg != member_link->link_autoneg ||
+ bond_link->link_speed != member_link->link_speed)
return -1;
}
@@ -1480,11 +1484,11 @@ mac_address_set(struct rte_eth_dev *eth_dev,
static const struct rte_ether_addr null_mac_addr;
/*
- * Add additional MAC addresses to the slave
+ * Add additional MAC addresses to the member
*/
int
-slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
- uint16_t slave_port_id)
+member_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+ uint16_t member_port_id)
{
int i, ret;
struct rte_ether_addr *mac_addr;
@@ -1494,11 +1498,11 @@ slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
if (rte_is_same_ether_addr(mac_addr, &null_mac_addr))
break;
- ret = rte_eth_dev_mac_addr_add(slave_port_id, mac_addr, 0);
+ ret = rte_eth_dev_mac_addr_add(member_port_id, mac_addr, 0);
if (ret < 0) {
/* rollback */
for (i--; i > 0; i--)
- rte_eth_dev_mac_addr_remove(slave_port_id,
+ rte_eth_dev_mac_addr_remove(member_port_id,
&bonded_eth_dev->data->mac_addrs[i]);
return ret;
}
@@ -1508,11 +1512,11 @@ slave_add_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
}
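The rollback in `member_add_mac_addresses()` undoes every address added before the failing one, stopping above slot 0 (the primary MAC, which is managed separately via the default-MAC path). A self-contained sketch of that add-with-rollback pattern, using toy `mac_add()`/`mac_remove()` stubs; the loop starting at index 1 is an assumption based on how the rollback loop shown above is bounded:

```c
#include <assert.h>

#define N_SLOTS 4

/* Toy port state: which secondary MAC slots are programmed. */
static int programmed[N_SLOTS];
static int fail_at = -1; /* slot at which mac_add() should fail */

static int mac_add(int slot)
{
    if (slot == fail_at)
        return -1;
    programmed[slot] = 1;
    return 0;
}

static void mac_remove(int slot) { programmed[slot] = 0; }

/* Add secondary MACs in slots 1..n-1; on failure, remove every slot
 * added so far (but never slot 0) and propagate the error. */
static int add_secondary_macs(int n)
{
    int i;

    for (i = 1; i < n; i++) {
        if (mac_add(i) < 0) {
            for (i--; i > 0; i--)
                mac_remove(i);
            return -1;
        }
    }
    return 0;
}
```

Either the whole set of secondary addresses is programmed on the member port or none of it is, which keeps the member consistent with the bonding device.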
/*
- * Remove additional MAC addresses from the slave
+ * Remove additional MAC addresses from the member
*/
int
-slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
- uint16_t slave_port_id)
+member_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
+ uint16_t member_port_id)
{
int i, rc, ret;
struct rte_ether_addr *mac_addr;
@@ -1523,7 +1527,7 @@ slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
if (rte_is_same_ether_addr(mac_addr, &null_mac_addr))
break;
- ret = rte_eth_dev_mac_addr_remove(slave_port_id, mac_addr);
+ ret = rte_eth_dev_mac_addr_remove(member_port_id, mac_addr);
/* save only the first error */
if (ret < 0 && rc == 0)
rc = ret;
@@ -1533,26 +1537,26 @@ slave_remove_mac_addresses(struct rte_eth_dev *bonded_eth_dev,
}
int
-mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
+mac_address_members_update(struct rte_eth_dev *bonded_eth_dev)
{
struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
bool set;
int i;
- /* Update slave devices MAC addresses */
- if (internals->slave_count < 1)
+ /* Update member devices MAC addresses */
+ if (internals->member_count < 1)
return -1;
switch (internals->mode) {
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
- for (i = 0; i < internals->slave_count; i++) {
+ for (i = 0; i < internals->member_count; i++) {
if (rte_eth_dev_default_mac_addr_set(
- internals->slaves[i].port_id,
+ internals->members[i].port_id,
bonded_eth_dev->data->mac_addrs)) {
RTE_BOND_LOG(ERR, "Failed to update port Id %d MAC address",
- internals->slaves[i].port_id);
+ internals->members[i].port_id);
return -1;
}
}
@@ -1565,8 +1569,8 @@ mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
case BONDING_MODE_ALB:
default:
set = true;
- for (i = 0; i < internals->slave_count; i++) {
- if (internals->slaves[i].port_id ==
+ for (i = 0; i < internals->member_count; i++) {
+ if (internals->members[i].port_id ==
internals->current_primary_port) {
if (rte_eth_dev_default_mac_addr_set(
internals->current_primary_port,
@@ -1577,10 +1581,10 @@ mac_address_slaves_update(struct rte_eth_dev *bonded_eth_dev)
}
} else {
if (rte_eth_dev_default_mac_addr_set(
- internals->slaves[i].port_id,
- &internals->slaves[i].persisted_mac_addr)) {
+ internals->members[i].port_id,
+ &internals->members[i].persisted_mac_addr)) {
RTE_BOND_LOG(ERR, "Failed to update port Id %d MAC address",
- internals->slaves[i].port_id);
+ internals->members[i].port_id);
}
}
}
@@ -1655,55 +1659,55 @@ bond_ethdev_mode_set(struct rte_eth_dev *eth_dev, uint8_t mode)
static int
-slave_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev)
+member_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *member_eth_dev)
{
int errval = 0;
struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
- struct port *port = &bond_mode_8023ad_ports[slave_eth_dev->data->port_id];
+ struct port *port = &bond_mode_8023ad_ports[member_eth_dev->data->port_id];
if (port->slow_pool == NULL) {
char mem_name[256];
- int slave_id = slave_eth_dev->data->port_id;
+ int member_id = member_eth_dev->data->port_id;
- snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_slow_pool",
- slave_id);
+ snprintf(mem_name, RTE_DIM(mem_name), "member_port%u_slow_pool",
+ member_id);
port->slow_pool = rte_pktmbuf_pool_create(mem_name, 8191,
250, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
- slave_eth_dev->data->numa_node);
+ member_eth_dev->data->numa_node);
/* Any memory allocation failure in initialization is critical because
* resources can't be freed, so reinitialization is impossible. */
if (port->slow_pool == NULL) {
- rte_panic("Slave %u: Failed to create memory pool '%s': %s\n",
- slave_id, mem_name, rte_strerror(rte_errno));
+ rte_panic("Member %u: Failed to create memory pool '%s': %s\n",
+ member_id, mem_name, rte_strerror(rte_errno));
}
}
if (internals->mode4.dedicated_queues.enabled == 1) {
/* Configure slow Rx queue */
- errval = rte_eth_rx_queue_setup(slave_eth_dev->data->port_id,
+ errval = rte_eth_rx_queue_setup(member_eth_dev->data->port_id,
internals->mode4.dedicated_queues.rx_qid, 128,
- rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+ rte_eth_dev_socket_id(member_eth_dev->data->port_id),
NULL, port->slow_pool);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"rte_eth_rx_queue_setup: port=%d queue_id %d, err (%d)",
- slave_eth_dev->data->port_id,
+ member_eth_dev->data->port_id,
internals->mode4.dedicated_queues.rx_qid,
errval);
return errval;
}
- errval = rte_eth_tx_queue_setup(slave_eth_dev->data->port_id,
+ errval = rte_eth_tx_queue_setup(member_eth_dev->data->port_id,
internals->mode4.dedicated_queues.tx_qid, 512,
- rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+ rte_eth_dev_socket_id(member_eth_dev->data->port_id),
NULL);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"rte_eth_tx_queue_setup: port=%d queue_id %d, err (%d)",
- slave_eth_dev->data->port_id,
+ member_eth_dev->data->port_id,
internals->mode4.dedicated_queues.tx_qid,
errval);
return errval;
@@ -1713,8 +1717,8 @@ slave_configure_slow_queue(struct rte_eth_dev *bonded_eth_dev,
}
int
-slave_configure(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev)
+member_configure(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *member_eth_dev)
{
uint16_t nb_rx_queues;
uint16_t nb_tx_queues;
@@ -1723,45 +1727,45 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
- /* Stop slave */
- errval = rte_eth_dev_stop(slave_eth_dev->data->port_id);
+ /* Stop member */
+ errval = rte_eth_dev_stop(member_eth_dev->data->port_id);
if (errval != 0)
RTE_BOND_LOG(ERR, "rte_eth_dev_stop: port %u, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ member_eth_dev->data->port_id, errval);
- /* Enable interrupts on slave device if supported */
- if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
- slave_eth_dev->data->dev_conf.intr_conf.lsc = 1;
+ /* Enable interrupts on member device if supported */
+ if (member_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
+ member_eth_dev->data->dev_conf.intr_conf.lsc = 1;
- /* If RSS is enabled for bonding, try to enable it for slaves */
+ /* If RSS is enabled for bonding, try to enable it for members */
if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
/* rss_key won't be empty if RSS is configured in bonded dev */
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
+ member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
internals->rss_key_len;
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key =
+ member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key =
internals->rss_key;
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf =
+ member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf =
bonded_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
- slave_eth_dev->data->dev_conf.rxmode.mq_mode =
+ member_eth_dev->data->dev_conf.rxmode.mq_mode =
bonded_eth_dev->data->dev_conf.rxmode.mq_mode;
} else {
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len = 0;
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
- slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = 0;
- slave_eth_dev->data->dev_conf.rxmode.mq_mode =
+ member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len = 0;
+ member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key = NULL;
+ member_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf = 0;
+ member_eth_dev->data->dev_conf.rxmode.mq_mode =
bonded_eth_dev->data->dev_conf.rxmode.mq_mode;
}
- slave_eth_dev->data->dev_conf.rxmode.mtu =
+ member_eth_dev->data->dev_conf.rxmode.mtu =
bonded_eth_dev->data->dev_conf.rxmode.mtu;
- slave_eth_dev->data->dev_conf.link_speeds =
+ member_eth_dev->data->dev_conf.link_speeds =
bonded_eth_dev->data->dev_conf.link_speeds;
- slave_eth_dev->data->dev_conf.txmode.offloads =
+ member_eth_dev->data->dev_conf.txmode.offloads =
bonded_eth_dev->data->dev_conf.txmode.offloads;
- slave_eth_dev->data->dev_conf.rxmode.offloads =
+ member_eth_dev->data->dev_conf.rxmode.offloads =
bonded_eth_dev->data->dev_conf.rxmode.offloads;
nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
@@ -1775,28 +1779,28 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
}
/* Configure device */
- errval = rte_eth_dev_configure(slave_eth_dev->data->port_id,
+ errval = rte_eth_dev_configure(member_eth_dev->data->port_id,
nb_rx_queues, nb_tx_queues,
- &(slave_eth_dev->data->dev_conf));
+ &member_eth_dev->data->dev_conf);
if (errval != 0) {
- RTE_BOND_LOG(ERR, "Cannot configure slave device: port %u, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ RTE_BOND_LOG(ERR, "Cannot configure member device: port %u, err (%d)",
+ member_eth_dev->data->port_id, errval);
return errval;
}
- errval = rte_eth_dev_set_mtu(slave_eth_dev->data->port_id,
+ errval = rte_eth_dev_set_mtu(member_eth_dev->data->port_id,
bonded_eth_dev->data->mtu);
if (errval != 0 && errval != -ENOTSUP) {
RTE_BOND_LOG(ERR, "rte_eth_dev_set_mtu: port %u, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ member_eth_dev->data->port_id, errval);
return errval;
}
return 0;
}
int
-slave_start(struct rte_eth_dev *bonded_eth_dev,
- struct rte_eth_dev *slave_eth_dev)
+member_start(struct rte_eth_dev *bonded_eth_dev,
+ struct rte_eth_dev *member_eth_dev)
{
int errval = 0;
struct bond_rx_queue *bd_rx_q;
@@ -1804,19 +1808,20 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
uint16_t q_id;
struct rte_flow_error flow_error;
struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
+ uint16_t member_port_id = member_eth_dev->data->port_id;
/* Setup Rx Queues */
for (q_id = 0; q_id < bonded_eth_dev->data->nb_rx_queues; q_id++) {
bd_rx_q = (struct bond_rx_queue *)bonded_eth_dev->data->rx_queues[q_id];
- errval = rte_eth_rx_queue_setup(slave_eth_dev->data->port_id, q_id,
+ errval = rte_eth_rx_queue_setup(member_port_id, q_id,
bd_rx_q->nb_rx_desc,
- rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+ rte_eth_dev_socket_id(member_port_id),
&(bd_rx_q->rx_conf), bd_rx_q->mb_pool);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"rte_eth_rx_queue_setup: port=%d queue_id %d, err (%d)",
- slave_eth_dev->data->port_id, q_id, errval);
+ member_port_id, q_id, errval);
return errval;
}
}
@@ -1825,58 +1830,58 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
for (q_id = 0; q_id < bonded_eth_dev->data->nb_tx_queues; q_id++) {
bd_tx_q = (struct bond_tx_queue *)bonded_eth_dev->data->tx_queues[q_id];
- errval = rte_eth_tx_queue_setup(slave_eth_dev->data->port_id, q_id,
+ errval = rte_eth_tx_queue_setup(member_port_id, q_id,
bd_tx_q->nb_tx_desc,
- rte_eth_dev_socket_id(slave_eth_dev->data->port_id),
+ rte_eth_dev_socket_id(member_port_id),
&bd_tx_q->tx_conf);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"rte_eth_tx_queue_setup: port=%d queue_id %d, err (%d)",
- slave_eth_dev->data->port_id, q_id, errval);
+ member_port_id, q_id, errval);
return errval;
}
}
if (internals->mode == BONDING_MODE_8023AD &&
internals->mode4.dedicated_queues.enabled == 1) {
- if (slave_configure_slow_queue(bonded_eth_dev, slave_eth_dev)
+ if (member_configure_slow_queue(bonded_eth_dev, member_eth_dev)
!= 0)
return errval;
errval = bond_ethdev_8023ad_flow_verify(bonded_eth_dev,
- slave_eth_dev->data->port_id);
+ member_port_id);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"bond_ethdev_8023ad_flow_verify: port=%d, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ member_port_id, errval);
return errval;
}
- if (internals->mode4.dedicated_queues.flow[slave_eth_dev->data->port_id] != NULL) {
- errval = rte_flow_destroy(slave_eth_dev->data->port_id,
- internals->mode4.dedicated_queues.flow[slave_eth_dev->data->port_id],
+ if (internals->mode4.dedicated_queues.flow[member_port_id] != NULL) {
+ errval = rte_flow_destroy(member_port_id,
+ internals->mode4.dedicated_queues.flow[member_port_id],
&flow_error);
RTE_BOND_LOG(ERR, "bond_ethdev_8023ad_flow_destroy: port=%d, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ member_port_id, errval);
}
}
/* Start device */
- errval = rte_eth_dev_start(slave_eth_dev->data->port_id);
+ errval = rte_eth_dev_start(member_port_id);
if (errval != 0) {
RTE_BOND_LOG(ERR, "rte_eth_dev_start: port=%u, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ member_port_id, errval);
return -1;
}
if (internals->mode == BONDING_MODE_8023AD &&
internals->mode4.dedicated_queues.enabled == 1) {
errval = bond_ethdev_8023ad_flow_set(bonded_eth_dev,
- slave_eth_dev->data->port_id);
+ member_port_id);
if (errval != 0) {
RTE_BOND_LOG(ERR,
"bond_ethdev_8023ad_flow_set: port=%d, err (%d)",
- slave_eth_dev->data->port_id, errval);
+ member_port_id, errval);
return errval;
}
}
@@ -1888,27 +1893,27 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
internals = bonded_eth_dev->data->dev_private;
- for (i = 0; i < internals->slave_count; i++) {
- if (internals->slaves[i].port_id == slave_eth_dev->data->port_id) {
+ for (i = 0; i < internals->member_count; i++) {
+ if (internals->members[i].port_id == member_port_id) {
errval = rte_eth_dev_rss_reta_update(
- slave_eth_dev->data->port_id,
+ member_port_id,
&internals->reta_conf[0],
- internals->slaves[i].reta_size);
+ internals->members[i].reta_size);
if (errval != 0) {
RTE_BOND_LOG(WARNING,
- "rte_eth_dev_rss_reta_update on slave port %d fails (err %d)."
+ "rte_eth_dev_rss_reta_update on member port %d fails (err %d)."
" RSS Configuration for bonding may be inconsistent.",
- slave_eth_dev->data->port_id, errval);
+ member_port_id, errval);
}
break;
}
}
}
- /* If lsc interrupt is set, check initial slave's link status */
- if (slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
- slave_eth_dev->dev_ops->link_update(slave_eth_dev, 0);
- bond_ethdev_lsc_event_callback(slave_eth_dev->data->port_id,
+ /* If lsc interrupt is set, check initial member's link status */
+ if (member_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC) {
+ member_eth_dev->dev_ops->link_update(member_eth_dev, 0);
+ bond_ethdev_lsc_event_callback(member_port_id,
RTE_ETH_EVENT_INTR_LSC, &bonded_eth_dev->data->port_id,
NULL);
}
@@ -1917,75 +1922,74 @@ slave_start(struct rte_eth_dev *bonded_eth_dev,
}
void
-slave_remove(struct bond_dev_private *internals,
- struct rte_eth_dev *slave_eth_dev)
+member_remove(struct bond_dev_private *internals,
+ struct rte_eth_dev *member_eth_dev)
{
uint16_t i;
- for (i = 0; i < internals->slave_count; i++)
- if (internals->slaves[i].port_id ==
- slave_eth_dev->data->port_id)
+ for (i = 0; i < internals->member_count; i++)
+ if (internals->members[i].port_id ==
+ member_eth_dev->data->port_id)
break;
- if (i < (internals->slave_count - 1)) {
+ if (i < (internals->member_count - 1)) {
struct rte_flow *flow;
- memmove(&internals->slaves[i], &internals->slaves[i + 1],
- sizeof(internals->slaves[0]) *
- (internals->slave_count - i - 1));
+ memmove(&internals->members[i], &internals->members[i + 1],
+ sizeof(internals->members[0]) *
+ (internals->member_count - i - 1));
TAILQ_FOREACH(flow, &internals->flow_list, next) {
memmove(&flow->flows[i], &flow->flows[i + 1],
sizeof(flow->flows[0]) *
- (internals->slave_count - i - 1));
- flow->flows[internals->slave_count - 1] = NULL;
+ (internals->member_count - i - 1));
+ flow->flows[internals->member_count - 1] = NULL;
}
}
- internals->slave_count--;
+ internals->member_count--;
- /* force reconfiguration of slave interfaces */
- rte_eth_dev_internal_reset(slave_eth_dev);
+ /* force reconfiguration of member interfaces */
+ rte_eth_dev_internal_reset(member_eth_dev);
}
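`member_remove()` compacts the member table by shifting the tail entries left with `memmove()` and decrementing the count. The same compaction in isolation, with a minimal `struct member` standing in for `bond_member_details`:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Minimal stand-in for the member table entry. */
struct member { uint16_t port_id; };

/* Remove the entry holding port_id and close the gap, mirroring the
 * memmove() compaction in member_remove() above. */
static void
member_table_remove(struct member *members, uint16_t *count,
                    uint16_t port_id)
{
    uint16_t i;

    for (i = 0; i < *count; i++)
        if (members[i].port_id == port_id)
            break;
    if (i == *count)
        return; /* not found */
    if (i < *count - 1)
        memmove(&members[i], &members[i + 1],
                sizeof(members[0]) * (*count - i - 1));
    (*count)--;
}
```

The driver applies the same shift to every per-flow `flows[]` array so flow handles stay aligned with member indices.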
static void
-bond_ethdev_slave_link_status_change_monitor(void *cb_arg);
+bond_ethdev_member_link_status_change_monitor(void *cb_arg);
void
-slave_add(struct bond_dev_private *internals,
- struct rte_eth_dev *slave_eth_dev)
+member_add(struct bond_dev_private *internals,
+ struct rte_eth_dev *member_eth_dev)
{
- struct bond_slave_details *slave_details =
- &internals->slaves[internals->slave_count];
+ struct bond_member_details *member_details =
+ &internals->members[internals->member_count];
- slave_details->port_id = slave_eth_dev->data->port_id;
- slave_details->last_link_status = 0;
+ member_details->port_id = member_eth_dev->data->port_id;
+ member_details->last_link_status = 0;
- /* Mark slave devices that don't support interrupts so we can
+ /* Mark member devices that don't support interrupts so we can
* compensate when we start the bond
*/
- if (!(slave_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)) {
- slave_details->link_status_poll_enabled = 1;
- }
+ if (!(member_eth_dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC))
+ member_details->link_status_poll_enabled = 1;
- slave_details->link_status_wait_to_complete = 0;
+ member_details->link_status_wait_to_complete = 0;
/* clean tlb_last_obytes when adding port for bonding device */
- memcpy(&(slave_details->persisted_mac_addr), slave_eth_dev->data->mac_addrs,
+ memcpy(&member_details->persisted_mac_addr, member_eth_dev->data->mac_addrs,
sizeof(struct rte_ether_addr));
}
void
bond_ethdev_primary_set(struct bond_dev_private *internals,
- uint16_t slave_port_id)
+ uint16_t member_port_id)
{
int i;
- if (internals->active_slave_count < 1)
- internals->current_primary_port = slave_port_id;
+ if (internals->active_member_count < 1)
+ internals->current_primary_port = member_port_id;
else
- /* Search bonded device slave ports for new proposed primary port */
- for (i = 0; i < internals->active_slave_count; i++) {
- if (internals->active_slaves[i] == slave_port_id)
- internals->current_primary_port = slave_port_id;
+ /* Search bonded device member ports for new proposed primary port */
+ for (i = 0; i < internals->active_member_count; i++) {
+ if (internals->active_members[i] == member_port_id)
+ internals->current_primary_port = member_port_id;
}
}
@@ -1998,9 +2002,9 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
struct bond_dev_private *internals;
int i;
- /* slave eth dev will be started by bonded device */
+ /* member eth dev will be started by bonded device */
if (check_for_bonded_ethdev(eth_dev)) {
- RTE_BOND_LOG(ERR, "User tried to explicitly start a slave eth_dev (%d)",
+ RTE_BOND_LOG(ERR, "User tried to explicitly start a member eth_dev (%d)",
eth_dev->data->port_id);
return -1;
}
@@ -2010,17 +2014,17 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
internals = eth_dev->data->dev_private;
- if (internals->slave_count == 0) {
- RTE_BOND_LOG(ERR, "Cannot start port since there are no slave devices");
+ if (internals->member_count == 0) {
+ RTE_BOND_LOG(ERR, "Cannot start port since there are no member devices");
goto out_err;
}
if (internals->user_defined_mac == 0) {
struct rte_ether_addr *new_mac_addr = NULL;
- for (i = 0; i < internals->slave_count; i++)
- if (internals->slaves[i].port_id == internals->primary_port)
- new_mac_addr = &internals->slaves[i].persisted_mac_addr;
+ for (i = 0; i < internals->member_count; i++)
+ if (internals->members[i].port_id == internals->primary_port)
+ new_mac_addr = &internals->members[i].persisted_mac_addr;
if (new_mac_addr == NULL)
goto out_err;
@@ -2042,28 +2046,28 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
}
- /* Reconfigure each slave device if starting bonded device */
- for (i = 0; i < internals->slave_count; i++) {
- struct rte_eth_dev *slave_ethdev =
- &(rte_eth_devices[internals->slaves[i].port_id]);
- if (slave_configure(eth_dev, slave_ethdev) != 0) {
+ /* Reconfigure each member device if starting bonded device */
+ for (i = 0; i < internals->member_count; i++) {
+ struct rte_eth_dev *member_ethdev =
+ &(rte_eth_devices[internals->members[i].port_id]);
+ if (member_configure(eth_dev, member_ethdev) != 0) {
RTE_BOND_LOG(ERR,
- "bonded port (%d) failed to reconfigure slave device (%d)",
+ "bonded port (%d) failed to reconfigure member device (%d)",
eth_dev->data->port_id,
- internals->slaves[i].port_id);
+ internals->members[i].port_id);
goto out_err;
}
- if (slave_start(eth_dev, slave_ethdev) != 0) {
+ if (member_start(eth_dev, member_ethdev) != 0) {
RTE_BOND_LOG(ERR,
- "bonded port (%d) failed to start slave device (%d)",
+ "bonded port (%d) failed to start member device (%d)",
eth_dev->data->port_id,
- internals->slaves[i].port_id);
+ internals->members[i].port_id);
goto out_err;
}
- /* We will need to poll for link status if any slave doesn't
+ /* We will need to poll for link status if any member doesn't
* support interrupts
*/
- if (internals->slaves[i].link_status_poll_enabled)
+ if (internals->members[i].link_status_poll_enabled)
internals->link_status_polling_enabled = 1;
}
@@ -2071,12 +2075,12 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
if (internals->link_status_polling_enabled) {
rte_eal_alarm_set(
internals->link_status_polling_interval_ms * 1000,
- bond_ethdev_slave_link_status_change_monitor,
+ bond_ethdev_member_link_status_change_monitor,
(void *)&rte_eth_devices[internals->port_id]);
}
- /* Update all slave devices MACs*/
- if (mac_address_slaves_update(eth_dev) != 0)
+ /* Update all member devices' MACs */
+ if (mac_address_members_update(eth_dev) != 0)
goto out_err;
if (internals->user_defined_primary_port)
@@ -2132,8 +2136,8 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
bond_mode_8023ad_stop(eth_dev);
/* Discard all messages to/from mode 4 state machines */
- for (i = 0; i < internals->active_slave_count; i++) {
- port = &bond_mode_8023ad_ports[internals->active_slaves[i]];
+ for (i = 0; i < internals->active_member_count; i++) {
+ port = &bond_mode_8023ad_ports[internals->active_members[i]];
RTE_ASSERT(port->rx_ring != NULL);
while (rte_ring_dequeue(port->rx_ring, &pkt) != -ENOENT)
@@ -2148,30 +2152,30 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
if (internals->mode == BONDING_MODE_TLB ||
internals->mode == BONDING_MODE_ALB) {
bond_tlb_disable(internals);
- for (i = 0; i < internals->active_slave_count; i++)
- tlb_last_obytets[internals->active_slaves[i]] = 0;
+ for (i = 0; i < internals->active_member_count; i++)
+ tlb_last_obytets[internals->active_members[i]] = 0;
}
eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
eth_dev->data->dev_started = 0;
internals->link_status_polling_enabled = 0;
- for (i = 0; i < internals->slave_count; i++) {
- uint16_t slave_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++) {
+ uint16_t member_id = internals->members[i].port_id;
- internals->slaves[i].last_link_status = 0;
- ret = rte_eth_dev_stop(slave_id);
+ internals->members[i].last_link_status = 0;
+ ret = rte_eth_dev_stop(member_id);
if (ret != 0) {
RTE_BOND_LOG(ERR, "Failed to stop device on port %u",
- slave_id);
+ member_id);
return ret;
}
- /* active slaves need to be deactivated. */
- if (find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, slave_id) !=
- internals->active_slave_count)
- deactivate_slave(eth_dev, slave_id);
+ /* active members need to be deactivated. */
+ if (find_member_by_id(internals->active_members,
+ internals->active_member_count, member_id) !=
+ internals->active_member_count)
+ deactivate_member(eth_dev, member_id);
}
return 0;
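The stop path above deactivates a member only when `find_member_by_id()` returns something other than the active count, i.e. the lookup reports "not found" by returning the count itself. A sketch of that lookup convention (the exact signature is an assumption based on how it is called here):

```c
#include <assert.h>
#include <stdint.h>

/* Return the index of port_id in the active-member list, or `count`
 * itself when the port is not active -- the convention the stop path
 * above tests against. */
static uint16_t
find_member_by_id(const uint16_t *members, uint16_t count,
                  uint16_t port_id)
{
    uint16_t i;

    for (i = 0; i < count; i++)
        if (members[i] == port_id)
            break;
    return i;
}
```

Callers can then write `if (find_member_by_id(...) != count)` to mean "the port is currently active".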
@@ -2188,8 +2192,8 @@ bond_ethdev_cfg_cleanup(struct rte_eth_dev *dev, bool remove)
/* Flush flows in all back-end devices before removing them */
bond_flow_ops.flush(dev, &ferror);
- while (internals->slave_count != skipped) {
- uint16_t port_id = internals->slaves[skipped].port_id;
+ while (internals->member_count != skipped) {
+ uint16_t port_id = internals->members[skipped].port_id;
int ret;
ret = rte_eth_dev_stop(port_id);
@@ -2203,7 +2207,7 @@ bond_ethdev_cfg_cleanup(struct rte_eth_dev *dev, bool remove)
continue;
}
- if (rte_eth_bond_slave_remove(bond_port_id, port_id) != 0) {
+ if (rte_eth_bond_member_remove(bond_port_id, port_id) != 0) {
RTE_BOND_LOG(ERR,
"Failed to remove port %d from bonded device %s",
port_id, dev->device->name);
@@ -2246,7 +2250,7 @@ static int
bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
{
struct bond_dev_private *internals = dev->data->dev_private;
- struct bond_slave_details slave;
+ struct bond_member_details member;
int ret;
uint16_t max_nb_rx_queues = UINT16_MAX;
@@ -2259,31 +2263,31 @@ bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
RTE_ETHER_MAX_JUMBO_FRAME_LEN;
/* Max number of tx/rx queues that the bonded device can support is the
- * minimum values of the bonded slaves, as all slaves must be capable
+ * minimum values of the bonded members, as all members must be capable
* of supporting the same number of tx/rx queues.
*/
- if (internals->slave_count > 0) {
- struct rte_eth_dev_info slave_info;
+ if (internals->member_count > 0) {
+ struct rte_eth_dev_info member_info;
uint16_t idx;
- for (idx = 0; idx < internals->slave_count; idx++) {
- slave = internals->slaves[idx];
- ret = rte_eth_dev_info_get(slave.port_id, &slave_info);
+ for (idx = 0; idx < internals->member_count; idx++) {
+ member = internals->members[idx];
+ ret = rte_eth_dev_info_get(member.port_id, &member_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
"%s: Error during getting device (port %u) info: %s\n",
__func__,
- slave.port_id,
+ member.port_id,
strerror(-ret));
return ret;
}
- if (slave_info.max_rx_queues < max_nb_rx_queues)
- max_nb_rx_queues = slave_info.max_rx_queues;
+ if (member_info.max_rx_queues < max_nb_rx_queues)
+ max_nb_rx_queues = member_info.max_rx_queues;
- if (slave_info.max_tx_queues < max_nb_tx_queues)
- max_nb_tx_queues = slave_info.max_tx_queues;
+ if (member_info.max_tx_queues < max_nb_tx_queues)
+ max_nb_tx_queues = member_info.max_tx_queues;
}
}
@@ -2332,7 +2336,7 @@ bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
uint16_t i;
struct bond_dev_private *internals = dev->data->dev_private;
- /* don't do this while a slave is being added */
+ /* don't do this while a member is being added */
rte_spinlock_lock(&internals->lock);
if (on)
@@ -2340,13 +2344,13 @@ bond_ethdev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
else
rte_bitmap_clear(internals->vlan_filter_bmp, vlan_id);
- for (i = 0; i < internals->slave_count; i++) {
- uint16_t port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++) {
+ uint16_t port_id = internals->members[i].port_id;
res = rte_eth_dev_vlan_filter(port_id, vlan_id, on);
if (res == ENOTSUP)
RTE_BOND_LOG(WARNING,
- "Setting VLAN filter on slave port %u not supported.",
+ "Setting VLAN filter on member port %u not supported.",
port_id);
}
@@ -2424,14 +2428,14 @@ bond_ethdev_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_id)
}
static void
-bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
+bond_ethdev_member_link_status_change_monitor(void *cb_arg)
{
- struct rte_eth_dev *bonded_ethdev, *slave_ethdev;
+ struct rte_eth_dev *bonded_ethdev, *member_ethdev;
struct bond_dev_private *internals;
- /* Default value for polling slave found is true as we don't want to
+ /* Default value for polling member found is true as we don't want to
* disable the polling thread if we cannot get the lock */
- int i, polling_slave_found = 1;
+ int i, polling_member_found = 1;
if (cb_arg == NULL)
return;
@@ -2443,28 +2447,28 @@ bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
!internals->link_status_polling_enabled)
return;
- /* If device is currently being configured then don't check slaves link
+ /* If device is currently being configured then don't check members link
* status, wait until next period */
if (rte_spinlock_trylock(&internals->lock)) {
- if (internals->slave_count > 0)
- polling_slave_found = 0;
+ if (internals->member_count > 0)
+ polling_member_found = 0;
- for (i = 0; i < internals->slave_count; i++) {
- if (!internals->slaves[i].link_status_poll_enabled)
+ for (i = 0; i < internals->member_count; i++) {
+ if (!internals->members[i].link_status_poll_enabled)
continue;
- slave_ethdev = &rte_eth_devices[internals->slaves[i].port_id];
- polling_slave_found = 1;
+ member_ethdev = &rte_eth_devices[internals->members[i].port_id];
+ polling_member_found = 1;
- /* Update slave link status */
- (*slave_ethdev->dev_ops->link_update)(slave_ethdev,
- internals->slaves[i].link_status_wait_to_complete);
+ /* Update member link status */
+ (*member_ethdev->dev_ops->link_update)(member_ethdev,
+ internals->members[i].link_status_wait_to_complete);
/* if link status has changed since last checked then call lsc
* event callback */
- if (slave_ethdev->data->dev_link.link_status !=
- internals->slaves[i].last_link_status) {
- bond_ethdev_lsc_event_callback(internals->slaves[i].port_id,
+ if (member_ethdev->data->dev_link.link_status !=
+ internals->members[i].last_link_status) {
+ bond_ethdev_lsc_event_callback(internals->members[i].port_id,
RTE_ETH_EVENT_INTR_LSC,
&bonded_ethdev->data->port_id,
NULL);
@@ -2473,10 +2477,10 @@ bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
rte_spinlock_unlock(&internals->lock);
}
- if (polling_slave_found)
- /* Set alarm to continue monitoring link status of slave ethdev's */
+ if (polling_member_found)
+ /* Set alarm to continue monitoring link status of member ethdevs */
rte_eal_alarm_set(internals->link_status_polling_interval_ms * 1000,
- bond_ethdev_slave_link_status_change_monitor, cb_arg);
+ bond_ethdev_member_link_status_change_monitor, cb_arg);
}
static int
@@ -2485,7 +2489,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
int (*link_update)(uint16_t port_id, struct rte_eth_link *eth_link);
struct bond_dev_private *bond_ctx;
- struct rte_eth_link slave_link;
+ struct rte_eth_link member_link;
bool one_link_update_succeeded;
uint32_t idx;
@@ -2496,7 +2500,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
if (ethdev->data->dev_started == 0 ||
- bond_ctx->active_slave_count == 0) {
+ bond_ctx->active_member_count == 0) {
ethdev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
return 0;
}
@@ -2512,51 +2516,51 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
case BONDING_MODE_BROADCAST:
/**
* Setting link speed to UINT32_MAX to ensure we pick up the
- * value of the first active slave
+ * value of the first active member
*/
ethdev->data->dev_link.link_speed = UINT32_MAX;
/**
- * link speed is minimum value of all the slaves link speed as
- * packet loss will occur on this slave if transmission at rates
+ * link speed is the minimum of all the members' link speeds, as
+ * packet loss will occur on this member if transmission at rates
* greater than this are attempted
*/
- for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
- ret = link_update(bond_ctx->active_slaves[idx],
- &slave_link);
+ for (idx = 0; idx < bond_ctx->active_member_count; idx++) {
+ ret = link_update(bond_ctx->active_members[idx],
+ &member_link);
if (ret < 0) {
ethdev->data->dev_link.link_speed =
RTE_ETH_SPEED_NUM_NONE;
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s",
- bond_ctx->active_slaves[idx],
+ "Member (port %u) link get failed: %s",
+ bond_ctx->active_members[idx],
rte_strerror(-ret));
return 0;
}
- if (slave_link.link_speed <
+ if (member_link.link_speed <
ethdev->data->dev_link.link_speed)
ethdev->data->dev_link.link_speed =
- slave_link.link_speed;
+ member_link.link_speed;
}
break;
case BONDING_MODE_ACTIVE_BACKUP:
- /* Current primary slave */
- ret = link_update(bond_ctx->current_primary_port, &slave_link);
+ /* Current primary member */
+ ret = link_update(bond_ctx->current_primary_port, &member_link);
if (ret < 0) {
- RTE_BOND_LOG(ERR, "Slave (port %u) link get failed: %s",
+ RTE_BOND_LOG(ERR, "Member (port %u) link get failed: %s",
bond_ctx->current_primary_port,
rte_strerror(-ret));
return 0;
}
- ethdev->data->dev_link.link_speed = slave_link.link_speed;
+ ethdev->data->dev_link.link_speed = member_link.link_speed;
break;
case BONDING_MODE_8023AD:
ethdev->data->dev_link.link_autoneg =
- bond_ctx->mode4.slave_link.link_autoneg;
+ bond_ctx->mode4.member_link.link_autoneg;
ethdev->data->dev_link.link_duplex =
- bond_ctx->mode4.slave_link.link_duplex;
+ bond_ctx->mode4.member_link.link_duplex;
/* fall through */
/* to update link speed */
case BONDING_MODE_ROUND_ROBIN:
@@ -2566,29 +2570,29 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
default:
/**
* In theses mode the maximum theoretical link speed is the sum
- * of all the slaves
+ * of all the members
*/
ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
one_link_update_succeeded = false;
- for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
- ret = link_update(bond_ctx->active_slaves[idx],
- &slave_link);
+ for (idx = 0; idx < bond_ctx->active_member_count; idx++) {
+ ret = link_update(bond_ctx->active_members[idx],
+ &member_link);
if (ret < 0) {
RTE_BOND_LOG(ERR,
- "Slave (port %u) link get failed: %s",
- bond_ctx->active_slaves[idx],
+ "Member (port %u) link get failed: %s",
+ bond_ctx->active_members[idx],
rte_strerror(-ret));
continue;
}
one_link_update_succeeded = true;
ethdev->data->dev_link.link_speed +=
- slave_link.link_speed;
+ member_link.link_speed;
}
if (!one_link_update_succeeded) {
- RTE_BOND_LOG(ERR, "All slaves link get failed");
+ RTE_BOND_LOG(ERR, "All members link get failed");
return 0;
}
}
@@ -2602,27 +2606,27 @@ static int
bond_ethdev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
{
struct bond_dev_private *internals = dev->data->dev_private;
- struct rte_eth_stats slave_stats;
+ struct rte_eth_stats member_stats;
int i, j;
- for (i = 0; i < internals->slave_count; i++) {
- rte_eth_stats_get(internals->slaves[i].port_id, &slave_stats);
+ for (i = 0; i < internals->member_count; i++) {
+ rte_eth_stats_get(internals->members[i].port_id, &member_stats);
- stats->ipackets += slave_stats.ipackets;
- stats->opackets += slave_stats.opackets;
- stats->ibytes += slave_stats.ibytes;
- stats->obytes += slave_stats.obytes;
- stats->imissed += slave_stats.imissed;
- stats->ierrors += slave_stats.ierrors;
- stats->oerrors += slave_stats.oerrors;
- stats->rx_nombuf += slave_stats.rx_nombuf;
+ stats->ipackets += member_stats.ipackets;
+ stats->opackets += member_stats.opackets;
+ stats->ibytes += member_stats.ibytes;
+ stats->obytes += member_stats.obytes;
+ stats->imissed += member_stats.imissed;
+ stats->ierrors += member_stats.ierrors;
+ stats->oerrors += member_stats.oerrors;
+ stats->rx_nombuf += member_stats.rx_nombuf;
for (j = 0; j < RTE_ETHDEV_QUEUE_STAT_CNTRS; j++) {
- stats->q_ipackets[j] += slave_stats.q_ipackets[j];
- stats->q_opackets[j] += slave_stats.q_opackets[j];
- stats->q_ibytes[j] += slave_stats.q_ibytes[j];
- stats->q_obytes[j] += slave_stats.q_obytes[j];
- stats->q_errors[j] += slave_stats.q_errors[j];
+ stats->q_ipackets[j] += member_stats.q_ipackets[j];
+ stats->q_opackets[j] += member_stats.q_opackets[j];
+ stats->q_ibytes[j] += member_stats.q_ibytes[j];
+ stats->q_obytes[j] += member_stats.q_obytes[j];
+ stats->q_errors[j] += member_stats.q_errors[j];
}
}
@@ -2638,8 +2642,8 @@ bond_ethdev_stats_reset(struct rte_eth_dev *dev)
int err;
int ret;
- for (i = 0, err = 0; i < internals->slave_count; i++) {
- ret = rte_eth_stats_reset(internals->slaves[i].port_id);
+ for (i = 0, err = 0; i < internals->member_count; i++) {
+ ret = rte_eth_stats_reset(internals->members[i].port_id);
if (ret != 0)
err = ret;
}
@@ -2656,15 +2660,15 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev)
uint16_t port_id;
switch (internals->mode) {
- /* Promiscuous mode is propagated to all slaves */
+ /* Promiscuous mode is propagated to all members */
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD: {
- unsigned int slave_ok = 0;
+ unsigned int member_ok = 0;
- for (i = 0; i < internals->slave_count; i++) {
- port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++) {
+ port_id = internals->members[i].port_id;
ret = rte_eth_promiscuous_enable(port_id);
if (ret != 0)
@@ -2672,23 +2676,23 @@ bond_ethdev_promiscuous_enable(struct rte_eth_dev *eth_dev)
"Failed to enable promiscuous mode for port %u: %s",
port_id, rte_strerror(-ret));
else
- slave_ok++;
+ member_ok++;
}
/*
* Report success if operation is successful on at least
- * on one slave. Otherwise return last error code.
+ * one member. Otherwise return the last error code.
*/
- if (slave_ok > 0)
+ if (member_ok > 0)
ret = 0;
break;
}
- /* Promiscuous mode is propagated only to primary slave */
+ /* Promiscuous mode is propagated only to primary member */
case BONDING_MODE_ACTIVE_BACKUP:
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
/* Do not touch promisc when there cannot be primary ports */
- if (internals->slave_count == 0)
+ if (internals->member_count == 0)
break;
port_id = internals->current_primary_port;
ret = rte_eth_promiscuous_enable(port_id);
@@ -2710,20 +2714,20 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev)
uint16_t port_id;
switch (internals->mode) {
- /* Promiscuous mode is propagated to all slaves */
+ /* Promiscuous mode is propagated to all members */
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD: {
- unsigned int slave_ok = 0;
+ unsigned int member_ok = 0;
- for (i = 0; i < internals->slave_count; i++) {
- port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++) {
+ port_id = internals->members[i].port_id;
if (internals->mode == BONDING_MODE_8023AD &&
bond_mode_8023ad_ports[port_id].forced_rx_flags ==
BOND_8023AD_FORCED_PROMISC) {
- slave_ok++;
+ member_ok++;
continue;
}
ret = rte_eth_promiscuous_disable(port_id);
@@ -2732,23 +2736,23 @@ bond_ethdev_promiscuous_disable(struct rte_eth_dev *dev)
"Failed to disable promiscuous mode for port %u: %s",
port_id, rte_strerror(-ret));
else
- slave_ok++;
+ member_ok++;
}
/*
* Report success if operation is successful on at least
- * on one slave. Otherwise return last error code.
+ * one member. Otherwise return the last error code.
*/
- if (slave_ok > 0)
+ if (member_ok > 0)
ret = 0;
break;
}
- /* Promiscuous mode is propagated only to primary slave */
+ /* Promiscuous mode is propagated only to primary member */
case BONDING_MODE_ACTIVE_BACKUP:
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
/* Do not touch promisc when there cannot be primary ports */
- if (internals->slave_count == 0)
+ if (internals->member_count == 0)
break;
port_id = internals->current_primary_port;
ret = rte_eth_promiscuous_disable(port_id);
@@ -2772,7 +2776,7 @@ bond_ethdev_promiscuous_update(struct rte_eth_dev *dev)
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD:
- /* As promiscuous mode is propagated to all slaves for these
+ /* As promiscuous mode is propagated to all members for these
* mode, no need to update for bonding device.
*/
break;
@@ -2780,9 +2784,9 @@ bond_ethdev_promiscuous_update(struct rte_eth_dev *dev)
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
- /* As promiscuous mode is propagated only to primary slave
+ /* As promiscuous mode is propagated only to primary member
* for these mode. When active/standby switchover, promiscuous
- * mode should be set to new primary slave according to bonding
+ * mode should be set to new primary member according to bonding
* device.
*/
if (rte_eth_promiscuous_get(internals->port_id) == 1)
@@ -2803,15 +2807,15 @@ bond_ethdev_allmulticast_enable(struct rte_eth_dev *eth_dev)
uint16_t port_id;
switch (internals->mode) {
- /* allmulti mode is propagated to all slaves */
+ /* allmulti mode is propagated to all members */
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD: {
- unsigned int slave_ok = 0;
+ unsigned int member_ok = 0;
- for (i = 0; i < internals->slave_count; i++) {
- port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++) {
+ port_id = internals->members[i].port_id;
ret = rte_eth_allmulticast_enable(port_id);
if (ret != 0)
@@ -2819,23 +2823,23 @@ bond_ethdev_allmulticast_enable(struct rte_eth_dev *eth_dev)
"Failed to enable allmulti mode for port %u: %s",
port_id, rte_strerror(-ret));
else
- slave_ok++;
+ member_ok++;
}
/*
* Report success if operation is successful on at least
- * on one slave. Otherwise return last error code.
+ * one member. Otherwise return the last error code.
*/
- if (slave_ok > 0)
+ if (member_ok > 0)
ret = 0;
break;
}
- /* allmulti mode is propagated only to primary slave */
+ /* allmulti mode is propagated only to primary member */
case BONDING_MODE_ACTIVE_BACKUP:
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
/* Do not touch allmulti when there cannot be primary ports */
- if (internals->slave_count == 0)
+ if (internals->member_count == 0)
break;
port_id = internals->current_primary_port;
ret = rte_eth_allmulticast_enable(port_id);
@@ -2857,15 +2861,15 @@ bond_ethdev_allmulticast_disable(struct rte_eth_dev *eth_dev)
uint16_t port_id;
switch (internals->mode) {
- /* allmulti mode is propagated to all slaves */
+ /* allmulti mode is propagated to all members */
case BONDING_MODE_ROUND_ROBIN:
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD: {
- unsigned int slave_ok = 0;
+ unsigned int member_ok = 0;
- for (i = 0; i < internals->slave_count; i++) {
- uint16_t port_id = internals->slaves[i].port_id;
+ for (i = 0; i < internals->member_count; i++) {
+ uint16_t port_id = internals->members[i].port_id;
if (internals->mode == BONDING_MODE_8023AD &&
bond_mode_8023ad_ports[port_id].forced_rx_flags ==
@@ -2878,23 +2882,23 @@ bond_ethdev_allmulticast_disable(struct rte_eth_dev *eth_dev)
"Failed to disable allmulti mode for port %u: %s",
port_id, rte_strerror(-ret));
else
- slave_ok++;
+ member_ok++;
}
/*
* Report success if operation is successful on at least
- * on one slave. Otherwise return last error code.
+ * one member. Otherwise return the last error code.
*/
- if (slave_ok > 0)
+ if (member_ok > 0)
ret = 0;
break;
}
- /* allmulti mode is propagated only to primary slave */
+ /* allmulti mode is propagated only to primary member */
case BONDING_MODE_ACTIVE_BACKUP:
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
/* Do not touch allmulti when there cannot be primary ports */
- if (internals->slave_count == 0)
+ if (internals->member_count == 0)
break;
port_id = internals->current_primary_port;
ret = rte_eth_allmulticast_disable(port_id);
@@ -2918,7 +2922,7 @@ bond_ethdev_allmulticast_update(struct rte_eth_dev *dev)
case BONDING_MODE_BALANCE:
case BONDING_MODE_BROADCAST:
case BONDING_MODE_8023AD:
- /* As allmulticast mode is propagated to all slaves for these
+ /* As allmulticast mode is propagated to all members for these
* mode, no need to update for bonding device.
*/
break;
@@ -2926,9 +2930,9 @@ bond_ethdev_allmulticast_update(struct rte_eth_dev *dev)
case BONDING_MODE_TLB:
case BONDING_MODE_ALB:
default:
- /* As allmulticast mode is propagated only to primary slave
+ /* As allmulticast mode is propagated only to primary member
* for these mode. When active/standby switchover, allmulticast
- * mode should be set to new primary slave according to bonding
+ * mode should be set to new primary member according to bonding
* device.
*/
if (rte_eth_allmulticast_get(internals->port_id) == 1)
@@ -2961,8 +2965,8 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
int ret;
uint8_t lsc_flag = 0;
- int valid_slave = 0;
- uint16_t active_pos, slave_idx;
+ int valid_member = 0;
+ uint16_t active_pos, member_idx;
uint16_t i;
if (type != RTE_ETH_EVENT_INTR_LSC || param == NULL)
@@ -2979,62 +2983,62 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
if (!bonded_eth_dev->data->dev_started)
return rc;
- /* verify that port_id is a valid slave of bonded port */
- for (i = 0; i < internals->slave_count; i++) {
- if (internals->slaves[i].port_id == port_id) {
- valid_slave = 1;
- slave_idx = i;
+ /* verify that port_id is a valid member of the bonded port */
+ for (i = 0; i < internals->member_count; i++) {
+ if (internals->members[i].port_id == port_id) {
+ valid_member = 1;
+ member_idx = i;
break;
}
}
- if (!valid_slave)
+ if (!valid_member)
return rc;
/* Synchronize lsc callback parallel calls either by real link event
- * from the slaves PMDs or by the bonding PMD itself.
+ * from the member PMDs or by the bonding PMD itself.
*/
rte_spinlock_lock(&internals->lsc_lock);
/* Search for port in active port list */
- active_pos = find_slave_by_id(internals->active_slaves,
- internals->active_slave_count, port_id);
+ active_pos = find_member_by_id(internals->active_members,
+ internals->active_member_count, port_id);
ret = rte_eth_link_get_nowait(port_id, &link);
if (ret < 0)
- RTE_BOND_LOG(ERR, "Slave (port %u) link get failed", port_id);
+ RTE_BOND_LOG(ERR, "Member (port %u) link get failed", port_id);
if (ret == 0 && link.link_status) {
- if (active_pos < internals->active_slave_count)
+ if (active_pos < internals->active_member_count)
goto link_update;
/* check link state properties if bonded link is up*/
if (bonded_eth_dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
if (link_properties_valid(bonded_eth_dev, &link) != 0)
RTE_BOND_LOG(ERR, "Invalid link properties "
- "for slave %d in bonding mode %d",
+ "for member %d in bonding mode %d",
port_id, internals->mode);
} else {
- /* inherit slave link properties */
+ /* inherit member link properties */
link_properties_set(bonded_eth_dev, &link);
}
- /* If no active slave ports then set this port to be
+ /* If no active member ports then set this port to be
* the primary port.
*/
- if (internals->active_slave_count < 1) {
- /* If first active slave, then change link status */
+ if (internals->active_member_count < 1) {
+ /* If first active member, then change link status */
bonded_eth_dev->data->dev_link.link_status =
RTE_ETH_LINK_UP;
internals->current_primary_port = port_id;
lsc_flag = 1;
- mac_address_slaves_update(bonded_eth_dev);
+ mac_address_members_update(bonded_eth_dev);
bond_ethdev_promiscuous_update(bonded_eth_dev);
bond_ethdev_allmulticast_update(bonded_eth_dev);
}
- activate_slave(bonded_eth_dev, port_id);
+ activate_member(bonded_eth_dev, port_id);
/* If the user has defined the primary port then default to
* using it.
@@ -3043,24 +3047,24 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
internals->primary_port == port_id)
bond_ethdev_primary_set(internals, port_id);
} else {
- if (active_pos == internals->active_slave_count)
+ if (active_pos == internals->active_member_count)
goto link_update;
- /* Remove from active slave list */
- deactivate_slave(bonded_eth_dev, port_id);
+ /* Remove from active member list */
+ deactivate_member(bonded_eth_dev, port_id);
- if (internals->active_slave_count < 1)
+ if (internals->active_member_count < 1)
lsc_flag = 1;
- /* Update primary id, take first active slave from list or if none
+ /* Update primary id, take first active member from list or if none
* available set to -1 */
if (port_id == internals->current_primary_port) {
- if (internals->active_slave_count > 0)
+ if (internals->active_member_count > 0)
bond_ethdev_primary_set(internals,
- internals->active_slaves[0]);
+ internals->active_members[0]);
else
internals->current_primary_port = internals->primary_port;
- mac_address_slaves_update(bonded_eth_dev);
+ mac_address_members_update(bonded_eth_dev);
bond_ethdev_promiscuous_update(bonded_eth_dev);
bond_ethdev_allmulticast_update(bonded_eth_dev);
}
@@ -3069,10 +3073,10 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
link_update:
/**
* Update bonded device link properties after any change to active
- * slaves
+ * members
*/
bond_ethdev_link_update(bonded_eth_dev, 0);
- internals->slaves[slave_idx].last_link_status = link.link_status;
+ internals->members[member_idx].last_link_status = link.link_status;
if (lsc_flag) {
/* Cancel any possible outstanding interrupts if delays are enabled */
@@ -3114,7 +3118,7 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
{
unsigned i, j;
int result = 0;
- int slave_reta_size;
+ int member_reta_size;
unsigned reta_count;
struct bond_dev_private *internals = dev->data->dev_private;
@@ -3137,11 +3141,11 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
memcpy(&internals->reta_conf[i], &internals->reta_conf[0],
sizeof(internals->reta_conf[0]) * reta_count);
- /* Propagate RETA over slaves */
- for (i = 0; i < internals->slave_count; i++) {
- slave_reta_size = internals->slaves[i].reta_size;
- result = rte_eth_dev_rss_reta_update(internals->slaves[i].port_id,
- &internals->reta_conf[0], slave_reta_size);
+ /* Propagate RETA over members */
+ for (i = 0; i < internals->member_count; i++) {
+ member_reta_size = internals->members[i].reta_size;
+ result = rte_eth_dev_rss_reta_update(internals->members[i].port_id,
+ &internals->reta_conf[0], member_reta_size);
if (result < 0)
return result;
}
@@ -3194,8 +3198,8 @@ bond_ethdev_rss_hash_update(struct rte_eth_dev *dev,
bond_rss_conf.rss_key_len = internals->rss_key_len;
}
- for (i = 0; i < internals->slave_count; i++) {
- result = rte_eth_dev_rss_hash_update(internals->slaves[i].port_id,
+ for (i = 0; i < internals->member_count; i++) {
+ result = rte_eth_dev_rss_hash_update(internals->members[i].port_id,
&bond_rss_conf);
if (result < 0)
return result;
@@ -3221,21 +3225,21 @@ bond_ethdev_rss_hash_conf_get(struct rte_eth_dev *dev,
static int
bond_ethdev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
{
- struct rte_eth_dev *slave_eth_dev;
+ struct rte_eth_dev *member_eth_dev;
struct bond_dev_private *internals = dev->data->dev_private;
int ret, i;
rte_spinlock_lock(&internals->lock);
- for (i = 0; i < internals->slave_count; i++) {
- slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
- if (*slave_eth_dev->dev_ops->mtu_set == NULL) {
+ for (i = 0; i < internals->member_count; i++) {
+ member_eth_dev = &rte_eth_devices[internals->members[i].port_id];
+ if (*member_eth_dev->dev_ops->mtu_set == NULL) {
rte_spinlock_unlock(&internals->lock);
return -ENOTSUP;
}
}
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_eth_dev_set_mtu(internals->slaves[i].port_id, mtu);
+ for (i = 0; i < internals->member_count; i++) {
+ ret = rte_eth_dev_set_mtu(internals->members[i].port_id, mtu);
if (ret < 0) {
rte_spinlock_unlock(&internals->lock);
return ret;
@@ -3271,29 +3275,29 @@ bond_ethdev_mac_addr_add(struct rte_eth_dev *dev,
struct rte_ether_addr *mac_addr,
__rte_unused uint32_t index, uint32_t vmdq)
{
- struct rte_eth_dev *slave_eth_dev;
+ struct rte_eth_dev *member_eth_dev;
struct bond_dev_private *internals = dev->data->dev_private;
int ret, i;
rte_spinlock_lock(&internals->lock);
- for (i = 0; i < internals->slave_count; i++) {
- slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
- if (*slave_eth_dev->dev_ops->mac_addr_add == NULL ||
- *slave_eth_dev->dev_ops->mac_addr_remove == NULL) {
+ for (i = 0; i < internals->member_count; i++) {
+ member_eth_dev = &rte_eth_devices[internals->members[i].port_id];
+ if (*member_eth_dev->dev_ops->mac_addr_add == NULL ||
+ *member_eth_dev->dev_ops->mac_addr_remove == NULL) {
ret = -ENOTSUP;
goto end;
}
}
- for (i = 0; i < internals->slave_count; i++) {
- ret = rte_eth_dev_mac_addr_add(internals->slaves[i].port_id,
+ for (i = 0; i < internals->member_count; i++) {
+ ret = rte_eth_dev_mac_addr_add(internals->members[i].port_id,
mac_addr, vmdq);
if (ret < 0) {
/* rollback */
for (i--; i >= 0; i--)
rte_eth_dev_mac_addr_remove(
- internals->slaves[i].port_id, mac_addr);
+ internals->members[i].port_id, mac_addr);
goto end;
}
}
@@ -3307,22 +3311,22 @@ bond_ethdev_mac_addr_add(struct rte_eth_dev *dev,
static void
bond_ethdev_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index)
{
- struct rte_eth_dev *slave_eth_dev;
+ struct rte_eth_dev *member_eth_dev;
struct bond_dev_private *internals = dev->data->dev_private;
int i;
rte_spinlock_lock(&internals->lock);
- for (i = 0; i < internals->slave_count; i++) {
- slave_eth_dev = &rte_eth_devices[internals->slaves[i].port_id];
- if (*slave_eth_dev->dev_ops->mac_addr_remove == NULL)
+ for (i = 0; i < internals->member_count; i++) {
+ member_eth_dev = &rte_eth_devices[internals->members[i].port_id];
+ if (*member_eth_dev->dev_ops->mac_addr_remove == NULL)
goto end;
}
struct rte_ether_addr *mac_addr = &dev->data->mac_addrs[index];
- for (i = 0; i < internals->slave_count; i++)
- rte_eth_dev_mac_addr_remove(internals->slaves[i].port_id,
+ for (i = 0; i < internals->member_count; i++)
+ rte_eth_dev_mac_addr_remove(internals->members[i].port_id,
mac_addr);
end:
@@ -3402,30 +3406,30 @@ dump_basic(const struct rte_eth_dev *dev, FILE *f)
fprintf(f, "\n");
}
- if (internals->slave_count > 0) {
- fprintf(f, "\tSlaves (%u): [", internals->slave_count);
- for (i = 0; i < internals->slave_count - 1; i++)
- fprintf(f, "%u ", internals->slaves[i].port_id);
+ if (internals->member_count > 0) {
+ fprintf(f, "\tMembers (%u): [", internals->member_count);
+ for (i = 0; i < internals->member_count - 1; i++)
+ fprintf(f, "%u ", internals->members[i].port_id);
- fprintf(f, "%u]\n", internals->slaves[internals->slave_count - 1].port_id);
+ fprintf(f, "%u]\n", internals->members[internals->member_count - 1].port_id);
} else {
- fprintf(f, "\tSlaves: []\n");
+ fprintf(f, "\tMembers: []\n");
}
- if (internals->active_slave_count > 0) {
- fprintf(f, "\tActive Slaves (%u): [", internals->active_slave_count);
- for (i = 0; i < internals->active_slave_count - 1; i++)
- fprintf(f, "%u ", internals->active_slaves[i]);
+ if (internals->active_member_count > 0) {
+ fprintf(f, "\tActive Members (%u): [", internals->active_member_count);
+ for (i = 0; i < internals->active_member_count - 1; i++)
+ fprintf(f, "%u ", internals->active_members[i]);
- fprintf(f, "%u]\n", internals->active_slaves[internals->active_slave_count - 1]);
+ fprintf(f, "%u]\n", internals->active_members[internals->active_member_count - 1]);
} else {
- fprintf(f, "\tActive Slaves: []\n");
+ fprintf(f, "\tActive Members: []\n");
}
if (internals->user_defined_primary_port)
fprintf(f, "\tUser Defined Primary: [%u]\n", internals->primary_port);
- if (internals->slave_count > 0)
+ if (internals->member_count > 0)
fprintf(f, "\tCurrent Primary: [%u]\n", internals->current_primary_port);
}
@@ -3471,7 +3475,7 @@ dump_lacp_port_param(const struct port_params *params, FILE *f)
}
static void
-dump_lacp_slave(const struct rte_eth_bond_8023ad_slave_info *info, FILE *f)
+dump_lacp_member(const struct rte_eth_bond_8023ad_member_info *info, FILE *f)
{
char a_state[256] = { 0 };
char p_state[256] = { 0 };
@@ -3520,18 +3524,18 @@ dump_lacp_slave(const struct rte_eth_bond_8023ad_slave_info *info, FILE *f)
static void
dump_lacp(uint16_t port_id, FILE *f)
{
- struct rte_eth_bond_8023ad_slave_info slave_info;
+ struct rte_eth_bond_8023ad_member_info member_info;
struct rte_eth_bond_8023ad_conf port_conf;
- uint16_t slaves[RTE_MAX_ETHPORTS];
- int num_active_slaves;
+ uint16_t members[RTE_MAX_ETHPORTS];
+ int num_active_members;
int i, ret;
fprintf(f, " - Lacp info:\n");
- num_active_slaves = rte_eth_bond_active_slaves_get(port_id, slaves,
+ num_active_members = rte_eth_bond_active_members_get(port_id, members,
RTE_MAX_ETHPORTS);
- if (num_active_slaves < 0) {
- fprintf(f, "\tFailed to get active slave list for port %u\n",
+ if (num_active_members < 0) {
+ fprintf(f, "\tFailed to get active member list for port %u\n",
port_id);
return;
}
@@ -3545,16 +3549,16 @@ dump_lacp(uint16_t port_id, FILE *f)
}
dump_lacp_conf(&port_conf, f);
- for (i = 0; i < num_active_slaves; i++) {
- ret = rte_eth_bond_8023ad_slave_info(port_id, slaves[i],
- &slave_info);
+ for (i = 0; i < num_active_members; i++) {
+ ret = rte_eth_bond_8023ad_member_info(port_id, members[i],
+ &member_info);
if (ret) {
- fprintf(f, "\tGet slave device %u 8023ad info failed\n",
- slaves[i]);
+ fprintf(f, "\tGet member device %u 8023ad info failed\n",
+ members[i]);
return;
}
- fprintf(f, "\tSlave Port: %u\n", slaves[i]);
- dump_lacp_slave(&slave_info, f);
+ fprintf(f, "\tMember Port: %u\n", members[i]);
+ dump_lacp_member(&member_info, f);
}
}
@@ -3655,8 +3659,8 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
internals->link_down_delay_ms = 0;
internals->link_up_delay_ms = 0;
- internals->slave_count = 0;
- internals->active_slave_count = 0;
+ internals->member_count = 0;
+ internals->active_member_count = 0;
internals->rx_offload_capa = 0;
internals->tx_offload_capa = 0;
internals->rx_queue_offload_capa = 0;
@@ -3684,8 +3688,8 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
internals->rx_desc_lim.nb_align = 1;
internals->tx_desc_lim.nb_align = 1;
- memset(internals->active_slaves, 0, sizeof(internals->active_slaves));
- memset(internals->slaves, 0, sizeof(internals->slaves));
+ memset(internals->active_members, 0, sizeof(internals->active_members));
+ memset(internals->members, 0, sizeof(internals->members));
TAILQ_INIT(&internals->flow_list);
internals->flow_isolated_valid = 0;
@@ -3770,7 +3774,7 @@ bond_probe(struct rte_vdev_device *dev)
/* Parse link bonding mode */
if (rte_kvargs_count(kvlist, PMD_BOND_MODE_KVARG) == 1) {
if (rte_kvargs_process(kvlist, PMD_BOND_MODE_KVARG,
- &bond_ethdev_parse_slave_mode_kvarg,
+ &bond_ethdev_parse_member_mode_kvarg,
&bonding_mode) != 0) {
RTE_BOND_LOG(ERR, "Invalid mode for bonded device %s",
name);
@@ -3815,7 +3819,7 @@ bond_probe(struct rte_vdev_device *dev)
if (rte_kvargs_count(kvlist, PMD_BOND_AGG_MODE_KVARG) == 1) {
if (rte_kvargs_process(kvlist,
PMD_BOND_AGG_MODE_KVARG,
- &bond_ethdev_parse_slave_agg_mode_kvarg,
+ &bond_ethdev_parse_member_agg_mode_kvarg,
&agg_mode) != 0) {
RTE_BOND_LOG(ERR,
"Failed to parse agg selection mode for bonded device %s",
@@ -3865,7 +3869,7 @@ bond_remove(struct rte_vdev_device *dev)
RTE_ASSERT(eth_dev->device == &dev->device);
internals = eth_dev->data->dev_private;
- if (internals->slave_count != 0)
+ if (internals->member_count != 0)
return -EBUSY;
if (eth_dev->data->dev_started == 1) {
@@ -3877,7 +3881,7 @@ bond_remove(struct rte_vdev_device *dev)
return ret;
}
-/* this part will resolve the slave portids after all the other pdev and vdev
+/* this part will resolve the member portids after all the other pdev and vdev
* have been allocated */
static int
bond_ethdev_configure(struct rte_eth_dev *dev)
@@ -3959,7 +3963,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
if (link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
if ((link_speeds &
(internals->speed_capa & ~RTE_ETH_LINK_SPEED_FIXED)) == 0) {
- RTE_BOND_LOG(ERR, "the fixed speed is not supported by all slave devices.");
+ RTE_BOND_LOG(ERR, "the fixed speed is not supported by all member devices.");
return -EINVAL;
}
/*
@@ -4041,7 +4045,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
if (rte_kvargs_count(kvlist, PMD_BOND_AGG_MODE_KVARG) == 1) {
if (rte_kvargs_process(kvlist,
PMD_BOND_AGG_MODE_KVARG,
- &bond_ethdev_parse_slave_agg_mode_kvarg,
+ &bond_ethdev_parse_member_agg_mode_kvarg,
&agg_mode) != 0) {
RTE_BOND_LOG(ERR,
"Failed to parse agg selection mode for bonded device %s",
@@ -4059,60 +4063,60 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
}
}
- /* Parse/add slave ports to bonded device */
- if (rte_kvargs_count(kvlist, PMD_BOND_SLAVE_PORT_KVARG) > 0) {
- struct bond_ethdev_slave_ports slave_ports;
+ /* Parse/add member ports to bonded device */
+ if (rte_kvargs_count(kvlist, PMD_BOND_MEMBER_PORT_KVARG) > 0) {
+ struct bond_ethdev_member_ports member_ports;
unsigned i;
- memset(&slave_ports, 0, sizeof(slave_ports));
+ memset(&member_ports, 0, sizeof(member_ports));
- if (rte_kvargs_process(kvlist, PMD_BOND_SLAVE_PORT_KVARG,
- &bond_ethdev_parse_slave_port_kvarg, &slave_ports) != 0) {
+ if (rte_kvargs_process(kvlist, PMD_BOND_MEMBER_PORT_KVARG,
+ &bond_ethdev_parse_member_port_kvarg, &member_ports) != 0) {
RTE_BOND_LOG(ERR,
- "Failed to parse slave ports for bonded device %s",
+ "Failed to parse member ports for bonded device %s",
name);
return -1;
}
- for (i = 0; i < slave_ports.slave_count; i++) {
- if (rte_eth_bond_slave_add(port_id, slave_ports.slaves[i]) != 0) {
+ for (i = 0; i < member_ports.member_count; i++) {
+ if (rte_eth_bond_member_add(port_id, member_ports.members[i]) != 0) {
RTE_BOND_LOG(ERR,
- "Failed to add port %d as slave to bonded device %s",
- slave_ports.slaves[i], name);
+ "Failed to add port %d as member to bonded device %s",
+ member_ports.members[i], name);
}
}
} else {
- RTE_BOND_LOG(INFO, "No slaves specified for bonded device %s", name);
+ RTE_BOND_LOG(INFO, "No members specified for bonded device %s", name);
return -1;
}
- /* Parse/set primary slave port id*/
- arg_count = rte_kvargs_count(kvlist, PMD_BOND_PRIMARY_SLAVE_KVARG);
+ /* Parse/set primary member port id */
+ arg_count = rte_kvargs_count(kvlist, PMD_BOND_PRIMARY_MEMBER_KVARG);
if (arg_count == 1) {
- uint16_t primary_slave_port_id;
+ uint16_t primary_member_port_id;
if (rte_kvargs_process(kvlist,
- PMD_BOND_PRIMARY_SLAVE_KVARG,
- &bond_ethdev_parse_primary_slave_port_id_kvarg,
- &primary_slave_port_id) < 0) {
+ PMD_BOND_PRIMARY_MEMBER_KVARG,
+ &bond_ethdev_parse_primary_member_port_id_kvarg,
+ &primary_member_port_id) < 0) {
RTE_BOND_LOG(INFO,
- "Invalid primary slave port id specified for bonded device %s",
+ "Invalid primary member port id specified for bonded device %s",
name);
return -1;
}
/* Set balance mode transmit policy*/
- if (rte_eth_bond_primary_set(port_id, primary_slave_port_id)
+ if (rte_eth_bond_primary_set(port_id, primary_member_port_id)
!= 0) {
RTE_BOND_LOG(ERR,
- "Failed to set primary slave port %d on bonded device %s",
- primary_slave_port_id, name);
+ "Failed to set primary member port %d on bonded device %s",
+ primary_member_port_id, name);
return -1;
}
} else if (arg_count > 1) {
RTE_BOND_LOG(INFO,
- "Primary slave can be specified only once for bonded device %s",
+ "Primary member can be specified only once for bonded device %s",
name);
return -1;
}
@@ -4206,15 +4210,15 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
return -1;
}
- /* configure slaves so we can pass mtu setting */
- for (i = 0; i < internals->slave_count; i++) {
- struct rte_eth_dev *slave_ethdev =
- &(rte_eth_devices[internals->slaves[i].port_id]);
- if (slave_configure(dev, slave_ethdev) != 0) {
+ /* configure members so we can pass mtu setting */
+ for (i = 0; i < internals->member_count; i++) {
+ struct rte_eth_dev *member_ethdev =
+ &(rte_eth_devices[internals->members[i].port_id]);
+ if (member_configure(dev, member_ethdev) != 0) {
RTE_BOND_LOG(ERR,
- "bonded port (%d) failed to configure slave device (%d)",
+ "bonded port (%d) failed to configure member device (%d)",
dev->data->port_id,
- internals->slaves[i].port_id);
+ internals->members[i].port_id);
return -1;
}
}
@@ -4230,7 +4234,7 @@ RTE_PMD_REGISTER_VDEV(net_bonding, pmd_bond_drv);
RTE_PMD_REGISTER_ALIAS(net_bonding, eth_bond);
RTE_PMD_REGISTER_PARAM_STRING(net_bonding,
- "slave=<ifc> "
+ "member=<ifc> "
"primary=<ifc> "
"mode=[0-6] "
"xmit_policy=[l2 | l23 | l34] "
diff --git a/drivers/net/bonding/version.map b/drivers/net/bonding/version.map
index bd28ee78a5..09ee21c55f 100644
--- a/drivers/net/bonding/version.map
+++ b/drivers/net/bonding/version.map
@@ -12,8 +12,6 @@ DPDK_24 {
rte_eth_bond_8023ad_ext_distrib_get;
rte_eth_bond_8023ad_ext_slowtx;
rte_eth_bond_8023ad_setup;
- rte_eth_bond_8023ad_slave_info;
- rte_eth_bond_active_slaves_get;
rte_eth_bond_create;
rte_eth_bond_free;
rte_eth_bond_link_monitoring_set;
@@ -23,11 +21,18 @@ DPDK_24 {
rte_eth_bond_mode_set;
rte_eth_bond_primary_get;
rte_eth_bond_primary_set;
- rte_eth_bond_slave_add;
- rte_eth_bond_slave_remove;
- rte_eth_bond_slaves_get;
rte_eth_bond_xmit_policy_get;
rte_eth_bond_xmit_policy_set;
local: *;
};
+
+EXPERIMENTAL {
+ # added in 23.11
+ global:
+ rte_eth_bond_8023ad_member_info;
+ rte_eth_bond_active_members_get;
+ rte_eth_bond_member_add;
+ rte_eth_bond_member_remove;
+ rte_eth_bond_members_get;
+};
diff --git a/examples/bond/main.c b/examples/bond/main.c
index 9b076bb39f..90f422ec11 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -105,8 +105,8 @@
":%02"PRIx8":%02"PRIx8":%02"PRIx8, \
RTE_ETHER_ADDR_BYTES(&addr))
-uint16_t slaves[RTE_MAX_ETHPORTS];
-uint16_t slaves_count;
+uint16_t members[RTE_MAX_ETHPORTS];
+uint16_t members_count;
static uint16_t BOND_PORT = 0xffff;
@@ -128,7 +128,7 @@ static struct rte_eth_conf port_conf = {
};
static void
-slave_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
+member_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
{
int retval;
uint16_t nb_rxd = RTE_RX_DESC_DEFAULT;
@@ -252,10 +252,10 @@ bond_port_init(struct rte_mempool *mbuf_pool)
rte_exit(EXIT_FAILURE, "port %u: rte_eth_dev_adjust_nb_rx_tx_desc "
"failed (res=%d)\n", BOND_PORT, retval);
- for (i = 0; i < slaves_count; i++) {
- if (rte_eth_bond_slave_add(BOND_PORT, slaves[i]) == -1)
- rte_exit(-1, "Oooops! adding slave (%u) to bond (%u) failed!\n",
- slaves[i], BOND_PORT);
+ for (i = 0; i < members_count; i++) {
+ if (rte_eth_bond_member_add(BOND_PORT, members[i]) == -1)
+ rte_exit(-1, "Oooops! adding member (%u) to bond (%u) failed!\n",
+ members[i], BOND_PORT);
}
@@ -283,18 +283,18 @@ bond_port_init(struct rte_mempool *mbuf_pool)
if (retval < 0)
rte_exit(retval, "Start port %d failed (res=%d)", BOND_PORT, retval);
- printf("Waiting for slaves to become active...");
+ printf("Waiting for members to become active...");
while (wait_counter) {
- uint16_t act_slaves[16] = {0};
- if (rte_eth_bond_active_slaves_get(BOND_PORT, act_slaves, 16) ==
- slaves_count) {
+ uint16_t act_members[16] = {0};
+ if (rte_eth_bond_active_members_get(BOND_PORT, act_members, 16) ==
+ members_count) {
printf("\n");
break;
}
sleep(1);
printf("...");
if (--wait_counter == 0)
- rte_exit(-1, "\nFailed to activate slaves\n");
+ rte_exit(-1, "\nFailed to activate members\n");
}
retval = rte_eth_promiscuous_enable(BOND_PORT);
@@ -631,7 +631,7 @@ static void cmd_help_parsed(__rte_unused void *parsed_result,
"send IP - sends one ARPrequest through bonding for IP.\n"
"start - starts listening ARPs.\n"
"stop - stops lcore_main.\n"
- "show - shows some bond info: ex. active slaves etc.\n"
+ "show - shows some bond info: ex. active members etc.\n"
"help - prints help.\n"
"quit - terminate all threads and quit.\n"
);
@@ -742,13 +742,13 @@ static void cmd_show_parsed(__rte_unused void *parsed_result,
struct cmdline *cl,
__rte_unused void *data)
{
- uint16_t slaves[16] = {0};
+ uint16_t members[16] = {0};
uint8_t len = 16;
struct rte_ether_addr addr;
uint16_t i;
int ret;
- for (i = 0; i < slaves_count; i++) {
+ for (i = 0; i < members_count; i++) {
ret = rte_eth_macaddr_get(i, &addr);
if (ret != 0) {
cmdline_printf(cl,
@@ -763,9 +763,9 @@ static void cmd_show_parsed(__rte_unused void *parsed_result,
rte_spinlock_lock(&global_flag_stru_p->lock);
cmdline_printf(cl,
- "Active_slaves:%d "
+ "Active_members:%d "
"packets received:Tot:%d Arp:%d IPv4:%d\n",
- rte_eth_bond_active_slaves_get(BOND_PORT, slaves, len),
+ rte_eth_bond_active_members_get(BOND_PORT, members, len),
global_flag_stru_p->port_packets[0],
global_flag_stru_p->port_packets[1],
global_flag_stru_p->port_packets[2]);
@@ -836,10 +836,10 @@ main(int argc, char *argv[])
rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
/* initialize all ports */
- slaves_count = nb_ports;
+ members_count = nb_ports;
RTE_ETH_FOREACH_DEV(i) {
- slave_port_init(i, mbuf_pool);
- slaves[i] = i;
+ member_port_init(i, mbuf_pool);
+ members[i] = i;
}
bond_port_init(mbuf_pool);
--
2.39.1
* Re: [RFC] ethdev: introduce maximum Rx buffer size
2023-08-11 12:07 3% ` Andrew Rybchenko
@ 2023-08-15 8:16 0% ` lihuisong (C)
0 siblings, 0 replies; 200+ results
From: lihuisong (C) @ 2023-08-15 8:16 UTC (permalink / raw)
To: Andrew Rybchenko, dev; +Cc: thomas, ferruh.yigit, liuyonglong
Hi Andrew,
Thanks for your review.
On 2023/8/11 20:07, Andrew Rybchenko wrote:
> On 8/8/23 07:02, Huisong Li wrote:
>> The Rx buffer size stands for the size hardware supported to receive
>> packets in one mbuf. The "min_rx_bufsize" is the minimum buffer hardware
>> supported in Rx. Actually, some engines also have the maximum buffer
>> specification, like, hns3. For these engines, the available data size
>> of one mbuf in Rx also depends on the maximum buffer hardware supported.
>> So introduce maximum Rx buffer size in struct rte_eth_dev_info to report
>> user to avoid memory waste.
>
> I think that the field should be defined as for informational purposes
> only (highlighted in comments). I.e. if application specifies larger Rx
> buffer, driver should accept it and just pass a smaller value to HW.
Ok, will add it.
> Also I think it would be useful to log warning from Rx queue setup
> if provided Rx buffer is larger than maximum reported by the driver.
Ack
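To illustrate the behaviour agreed on above, here is a minimal sketch (hypothetical names; the real check would live in Rx queue setup and use the ethdev logging macros): a buffer larger than the reported maximum is accepted, a warning is emitted, and the clamped value is what would be passed to hardware.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for the driver-reported limits. */
struct dev_info {
	uint32_t min_rx_bufsize;
	uint32_t max_rx_bufsize; /* proposed field, UINT32_MAX if unlimited */
};

/* Return the buffer size actually programmed into hardware:
 * larger application buffers are accepted but clamped, with a warning. */
static uint32_t
rx_bufsize_for_hw(const struct dev_info *info, uint32_t mbuf_data_size)
{
	if (mbuf_data_size > info->max_rx_bufsize) {
		fprintf(stderr,
			"Rx buffer size %u larger than device maximum %u, clamping\n",
			mbuf_data_size, info->max_rx_bufsize);
		return info->max_rx_bufsize;
	}
	return mbuf_data_size;
}
```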
>
>>
>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>> ---
>> lib/ethdev/rte_ethdev.c | 1 +
>> lib/ethdev/rte_ethdev.h | 4 ++--
>> 2 files changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>> index 0840d2b594..6d1b92e607 100644
>> --- a/lib/ethdev/rte_ethdev.c
>> +++ b/lib/ethdev/rte_ethdev.c
>> @@ -3689,6 +3689,7 @@ rte_eth_dev_info_get(uint16_t port_id, struct
>> rte_eth_dev_info *dev_info)
>> dev_info->min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN -
>> RTE_ETHER_CRC_LEN;
>> dev_info->max_mtu = UINT16_MAX;
>> + dev_info->max_rx_bufsize = UINT32_MAX;
>> if (*dev->dev_ops->dev_infos_get == NULL)
>> return -ENOTSUP;
>> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
>> index 04a2564f22..1f0ab9c5d8 100644
>> --- a/lib/ethdev/rte_ethdev.h
>> +++ b/lib/ethdev/rte_ethdev.h
>> @@ -1779,8 +1779,8 @@ struct rte_eth_dev_info {
>> struct rte_eth_switch_info switch_info;
>> /** Supported error handling mode. */
>> enum rte_eth_err_handle_mode err_handle_mode;
>> -
>> - uint64_t reserved_64s[2]; /**< Reserved for future fields */
>> + uint32_t max_rx_bufsize; /**< Maximum size of Rx buffer. */
>
> IMHO, comment should be aligned similar to comments below.
> Since the next release is ABI breaking, I think it should be put
> nearby min_rx_bufsize to make it easier to notice it.
Yes, let's put min/max_rx_bufsize together.
>
>> + uint32_t reserved_32s[3]; /**< Reserved for future fields */
>> void *reserved_ptrs[2]; /**< Reserved for future fields */
>> };
>
> .
* Re: [PATCH v2 2/2] eal: remove NUMFLAGS enumeration
2023-08-11 6:07 2% ` [PATCH v2 2/2] eal: remove NUMFLAGS enumeration Sivaprasad Tummala
@ 2023-08-15 6:10 3% ` Stanisław Kardach
0 siblings, 0 replies; 200+ results
From: Stanisław Kardach @ 2023-08-15 6:10 UTC (permalink / raw)
To: Sivaprasad Tummala
Cc: Ruifeng Wang, Min Zhou, David Christensen, Bruce Richardson,
Konstantin Ananyev, dev
On Fri, Aug 11, 2023, 08:08 Sivaprasad Tummala <sivaprasad.tummala@amd.com>
wrote:
> This patch removes RTE_CPUFLAG_NUMFLAGS to allow new CPU
> features without breaking ABI each time.
>
> Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> ---
> lib/eal/arm/include/rte_cpuflags_32.h | 1 -
> lib/eal/arm/include/rte_cpuflags_64.h | 1 -
> lib/eal/arm/rte_cpuflags.c | 7 +++++--
> lib/eal/loongarch/include/rte_cpuflags.h | 1 -
> lib/eal/loongarch/rte_cpuflags.c | 7 +++++--
> lib/eal/ppc/include/rte_cpuflags.h | 1 -
> lib/eal/ppc/rte_cpuflags.c | 7 +++++--
> lib/eal/riscv/include/rte_cpuflags.h | 1 -
> lib/eal/riscv/rte_cpuflags.c | 7 +++++--
> lib/eal/x86/include/rte_cpuflags.h | 1 -
> lib/eal/x86/rte_cpuflags.c | 7 +++++--
> 11 files changed, 25 insertions(+), 16 deletions(-)
>
> diff --git a/lib/eal/arm/include/rte_cpuflags_32.h
> b/lib/eal/arm/include/rte_cpuflags_32.h
> index 4e254428a2..41ab0d5f21 100644
> --- a/lib/eal/arm/include/rte_cpuflags_32.h
> +++ b/lib/eal/arm/include/rte_cpuflags_32.h
> @@ -43,7 +43,6 @@ enum rte_cpu_flag_t {
> RTE_CPUFLAG_V7L,
> RTE_CPUFLAG_V8L,
> /* The last item */
> - RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
> };
>
Since there is no description of V1 to V2 changes, could you point me to
what has changed?
Also I see you're still removing the RTE_CPUFLAG_NUMFLAGS (what I call a
last element canary). Why? If you're concerned with ABI, then we're talking
about an application linking dynamically with DPDK or talking via some RPC
channel with another DPDK application. So clashing with this definition
does not come into question. One should rather use
rte_cpu_get_flag_enabled().
Also, if you want to introduce new features, one would add them to the
rte_cpuflags headers, unless you'd like to not add those and keep an
undocumented list "above" the last defined element.
Could you explain your use case a bit more?
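For context, the bounds-check pattern under discussion can be sketched as follows (hypothetical flag names and table, not the actual DPDK ones): the valid range is derived from the feature-table size instead of a trailing NUMFLAGS canary, so appending new flags leaves existing enumerator values untouched.

```c
#include <assert.h>
#include <stddef.h>

#define DIM(a) (sizeof(a) / sizeof((a)[0])) /* like RTE_DIM */

/* No trailing NUMFLAGS canary: new flags simply append here. */
enum cpu_flag { FLAG_A, FLAG_B, FLAG_C };

static const char *const feature_names[] = { "A", "B", "C" };

static const char *
flag_name(enum cpu_flag f)
{
	/* Bounds check against the table size, not a last-element enum. */
	if ((unsigned int)f >= DIM(feature_names))
		return NULL;
	return feature_names[f];
}
```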
>
> #include "generic/rte_cpuflags.h"
> diff --git a/lib/eal/arm/include/rte_cpuflags_64.h
> b/lib/eal/arm/include/rte_cpuflags_64.h
> index aa7a56d491..ea5193e510 100644
> --- a/lib/eal/arm/include/rte_cpuflags_64.h
> +++ b/lib/eal/arm/include/rte_cpuflags_64.h
> @@ -37,7 +37,6 @@ enum rte_cpu_flag_t {
> RTE_CPUFLAG_SVEBF16,
> RTE_CPUFLAG_AARCH64,
> /* The last item */
> - RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
> };
>
> #include "generic/rte_cpuflags.h"
> diff --git a/lib/eal/arm/rte_cpuflags.c b/lib/eal/arm/rte_cpuflags.c
> index 56e7b2e689..f33fee242b 100644
> --- a/lib/eal/arm/rte_cpuflags.c
> +++ b/lib/eal/arm/rte_cpuflags.c
> @@ -139,8 +139,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> {
> const struct feature_entry *feat;
> hwcap_registers_t regs = {0};
> + unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
>
> - if (feature >= RTE_CPUFLAG_NUMFLAGS)
> + if ((unsigned int)feature >= num_flags)
> return -ENOENT;
>
> feat = &rte_cpu_feature_table[feature];
> @@ -154,7 +155,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> const char *
> rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
> {
> - if (feature >= RTE_CPUFLAG_NUMFLAGS)
> + unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
> +
> + if ((unsigned int)feature >= num_flags)
> return NULL;
> return rte_cpu_feature_table[feature].name;
> }
> diff --git a/lib/eal/loongarch/include/rte_cpuflags.h
> b/lib/eal/loongarch/include/rte_cpuflags.h
> index 1c80779262..9ff8baaa3c 100644
> --- a/lib/eal/loongarch/include/rte_cpuflags.h
> +++ b/lib/eal/loongarch/include/rte_cpuflags.h
> @@ -27,7 +27,6 @@ enum rte_cpu_flag_t {
> RTE_CPUFLAG_LBT_ARM,
> RTE_CPUFLAG_LBT_MIPS,
> /* The last item */
> - RTE_CPUFLAG_NUMFLAGS /**< This should always be the last! */
> };
>
> #include "generic/rte_cpuflags.h"
> diff --git a/lib/eal/loongarch/rte_cpuflags.c
> b/lib/eal/loongarch/rte_cpuflags.c
> index 0a75ca58d4..73b53b8a3a 100644
> --- a/lib/eal/loongarch/rte_cpuflags.c
> +++ b/lib/eal/loongarch/rte_cpuflags.c
> @@ -66,8 +66,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> {
> const struct feature_entry *feat;
> hwcap_registers_t regs = {0};
> + unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
>
> - if (feature >= RTE_CPUFLAG_NUMFLAGS)
> + if ((unsigned int)feature >= num_flags)
> return -ENOENT;
>
> feat = &rte_cpu_feature_table[feature];
> @@ -81,7 +82,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> const char *
> rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
> {
> - if (feature >= RTE_CPUFLAG_NUMFLAGS)
> + unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
> +
> + if ((unsigned int)feature >= num_flags)
> return NULL;
> return rte_cpu_feature_table[feature].name;
> }
> diff --git a/lib/eal/ppc/include/rte_cpuflags.h
> b/lib/eal/ppc/include/rte_cpuflags.h
> index a88355d170..b74e7a73ee 100644
> --- a/lib/eal/ppc/include/rte_cpuflags.h
> +++ b/lib/eal/ppc/include/rte_cpuflags.h
> @@ -49,7 +49,6 @@ enum rte_cpu_flag_t {
> RTE_CPUFLAG_HTM,
> RTE_CPUFLAG_ARCH_2_07,
> /* The last item */
> - RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
> };
>
> #include "generic/rte_cpuflags.h"
> diff --git a/lib/eal/ppc/rte_cpuflags.c b/lib/eal/ppc/rte_cpuflags.c
> index 61db5c216d..a173c62631 100644
> --- a/lib/eal/ppc/rte_cpuflags.c
> +++ b/lib/eal/ppc/rte_cpuflags.c
> @@ -90,8 +90,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> {
> const struct feature_entry *feat;
> hwcap_registers_t regs = {0};
> + unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
>
> - if (feature >= RTE_CPUFLAG_NUMFLAGS)
> + if ((unsigned int)feature >= num_flags)
> return -ENOENT;
>
> feat = &rte_cpu_feature_table[feature];
> @@ -105,7 +106,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> const char *
> rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
> {
> - if (feature >= RTE_CPUFLAG_NUMFLAGS)
> + unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
> +
> + if ((unsigned int)feature >= num_flags)
> return NULL;
> return rte_cpu_feature_table[feature].name;
> }
> diff --git a/lib/eal/riscv/include/rte_cpuflags.h
> b/lib/eal/riscv/include/rte_cpuflags.h
> index 66e787f898..803c3655ae 100644
> --- a/lib/eal/riscv/include/rte_cpuflags.h
> +++ b/lib/eal/riscv/include/rte_cpuflags.h
> @@ -43,7 +43,6 @@ enum rte_cpu_flag_t {
> RTE_CPUFLAG_RISCV_ISA_Y, /* Reserved */
> RTE_CPUFLAG_RISCV_ISA_Z, /* Reserved */
> /* The last item */
> - RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
> };
>
> #include "generic/rte_cpuflags.h"
> diff --git a/lib/eal/riscv/rte_cpuflags.c b/lib/eal/riscv/rte_cpuflags.c
> index 4f6d29b947..6d3f8f16cc 100644
> --- a/lib/eal/riscv/rte_cpuflags.c
> +++ b/lib/eal/riscv/rte_cpuflags.c
> @@ -95,8 +95,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> {
> const struct feature_entry *feat;
> hwcap_registers_t regs = {0};
> + unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
>
> - if (feature >= RTE_CPUFLAG_NUMFLAGS)
> + if ((unsigned int)feature >= num_flags)
> return -ENOENT;
>
> feat = &rte_cpu_feature_table[feature];
> @@ -110,7 +111,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> const char *
> rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
> {
> - if (feature >= RTE_CPUFLAG_NUMFLAGS)
> + unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
> +
> + if ((unsigned int)feature >= num_flags)
> return NULL;
> return rte_cpu_feature_table[feature].name;
> }
> diff --git a/lib/eal/x86/include/rte_cpuflags.h
> b/lib/eal/x86/include/rte_cpuflags.h
> index 92e90fb6e0..7fc6117243 100644
> --- a/lib/eal/x86/include/rte_cpuflags.h
> +++ b/lib/eal/x86/include/rte_cpuflags.h
> @@ -135,7 +135,6 @@ enum rte_cpu_flag_t {
> RTE_CPUFLAG_WAITPKG, /**< UMONITOR/UMWAIT/TPAUSE */
>
> /* The last item */
> - RTE_CPUFLAG_NUMFLAGS, /**< This should always be the
> last! */
> };
>
> #include "generic/rte_cpuflags.h"
> diff --git a/lib/eal/x86/rte_cpuflags.c b/lib/eal/x86/rte_cpuflags.c
> index d6b518251b..22061cb6d3 100644
> --- a/lib/eal/x86/rte_cpuflags.c
> +++ b/lib/eal/x86/rte_cpuflags.c
> @@ -149,8 +149,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> const struct feature_entry *feat;
> cpuid_registers_t regs;
> unsigned int maxleaf;
> + unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
>
> - if (feature >= RTE_CPUFLAG_NUMFLAGS)
> + if ((unsigned int)feature >= num_flags)
> /* Flag does not match anything in the feature tables */
> return -ENOENT;
>
> @@ -176,7 +177,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> const char *
> rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
> {
> - if (feature >= RTE_CPUFLAG_NUMFLAGS)
> + unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
> +
> + if ((unsigned int)feature >= num_flags)
> return NULL;
> return rte_cpu_feature_table[feature].name;
> }
> --
> 2.34.1
>
>
* RE: C11 atomics adoption blocked
@ 2023-08-14 15:13 3% ` Morten Brørup
2023-08-16 17:25 0% ` Tyler Retzlaff
0 siblings, 1 reply; 200+ results
From: Morten Brørup @ 2023-08-14 15:13 UTC (permalink / raw)
To: Thomas Monjalon, Tyler Retzlaff
Cc: Bruce Richardson, dev, techboard, david.marchand, Honnappa.Nagarahalli
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Monday, 14 August 2023 15.46
>
> mercredi 9 août 2023, Morten Brørup:
> > > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > > Sent: Tuesday, 8 August 2023 22.50
> > >
> > > On Tue, Aug 08, 2023 at 10:22:09PM +0200, Morten Brørup wrote:
> > > > > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > > > > Sent: Tuesday, 8 August 2023 21.20
> > > > >
> > > > > On Tue, Aug 08, 2023 at 07:23:41PM +0100, Bruce Richardson
> wrote:
> > > > > > On Tue, Aug 08, 2023 at 10:53:03AM -0700, Tyler Retzlaff
> wrote:
> > > > > > > Hi folks,
> > > > > > >
> > > > > > > Moving this discussion to the dev mailing list for broader
> > > comment.
> > > > > > >
> > > > > > > Unfortunately, we've hit a roadblock with integrating C11
> > > atomics
> > > > > > > for DPDK. The main issue is that GNU C++ prior to -
> std=c++23
> > > > > explicitly
> > > > > > > cannot be integrated with C11 stdatomic.h. Basically, you
> can't
> > > > > include
> > > > > > > the header and you can't use `_Atomic' type specifier to
> declare
> > > > > atomic
> > > > > > > types. This is not a problem with LLVM or MSVC as they both
> > > allow
> > > > > > > integration with C11 stdatomic.h, but going forward with C11
> > > atomics
> > > > > > > would break using DPDK in C++ programs when building with
> GNU
> > > g++.
> > > > > > >
> > > > > > > Essentially you cannot compile the following with g++.
> > > > > > >
> > > > > > > #include <stdatomic.h>
> > > > > > >
> > > > > > > int main(int argc, char *argv[]) { return 0; }
> > > > > > >
> > > > > > > In file included from atomic.cpp:1:
> > > > > > > /usr/lib/gcc/x86_64-pc-cygwin/11/include/stdatomic.h:40:9:
> > > error:
> > > > > > > ‘_Atomic’ does not name a type
> > > > > > > 40 | typedef _Atomic _Bool atomic_bool;
> > > > > > >
> > > > > > > ... more errors of same ...
> > > > > > >
> > > > > > > It's also acknowledged as something known and won't fix by
> GNU
> > > g++
> > > > > > > maintainers.
> > > > > > >
> > > > > > > https://gcc.gnu.org/bugzilla/show_bug.cgi?id=60932
> > > > > > >
> > > > > > > Given the timeframe I would like to propose the minimally
> > > invasive,
> > > > > > > lowest risk solution as follows.
> > > > > > >
> > > > > > > 1. Adopt stdatomic.h for all Windows targets, leave all
> > > Linux/BSD
> > > > > targets
> > > > > > > using GCC builtin C++11 memory model atomics.
> > > > > > > 2. Introduce a macro that allows _Atomic type specifier to
> be
> > > > > applied to
> > > > > > > function parameter, structure field types and variable
> > > > > declarations.
> > > > > > >
> > > > > > > * The macro would expand empty for Linux/BSD targets.
> > > > > > > * The macro would expand to C11 _Atomic keyword for
> Windows
> > > > > targets.
> > > > > > >
> > > > > > > 3. Introduce basic macro that allows __atomic_xxx for
> > > normalized
> > > > > use
> > > > > > > internal to DPDK.
> > > > > > >
> > > > > > > * The macro would not be defined for Linux/BSD targets.
> > > > > > > * The macro would expand __atomic_xxx to corresponding
> > > > > stdatomic.h
> > > > > > > atomic_xxx operations for Windows targets.
> > > > > > >
> > > >
> > > > Regarding naming of these macros (suggested in 2. and 3.), they
> should
> > > probably bear the rte_ prefix instead of overlapping existing names,
> so
> > > applications can also use them directly.
> > > >
> > > > E.g.:
> > > > #define rte_atomic for _Atomic or nothing,
> > > > #define rte_atomic_fetch_add() for atomic_fetch_add() or
> > > __atomic_fetch_add(), and
> > > > #define RTE_MEMORY_ORDER_SEQ_CST for memory_order_seq_cst or
> > > __ATOMIC_SEQ_CST.
> > > >
> > > > Maybe that is what you meant already. I'm not sure of the scope
> and
> > > details of your suggestion here.
> > >
> > > I'm shy to do anything in the rte_ namespace because I don't want to
> > > formalize it as an API.
> > >
> > > I was envisioning the following.
> > >
> > > Internally DPDK code just uses __atomic_fetch_add directly, the
> macros
> > > are provided for Windows targets to expand to __atomic_fetch_add.
> > >
> > > Externally DPDK applications that don't care about being portable
> may
> > > use __atomic_fetch_add (BSD/Linux) or atomic_fetch_add (Windows)
> > > directly.
> > >
> > > Externally DPDK applications that care to be portable may do what is
> > > done Internally and <<use>> the __atomic_fetch_add directly. By
> > > including say rte_stdatomic.h indirectly (Windows) gets the macros
> > > expanded to atomic_fetch_add and for BSD/Linux it's a noop include.
> > >
> > > Basically I'm placing a little ugly into Windows built code and in
> trade
> > > we don't end up with a bunch of rte_ APIs that were strongly
> objected to
> > > previously.
> > >
> > > It's a compromise.
> >
> > OK, we probably need to offer a public header file to wrap the
> atomics, using either names prefixed with rte_ or names similar to the
> gcc builtin atomics.
> >
> > I guess the objections were based on the assumption that we were
> switching to C11 atomics with DPDK 23.11, so the rte_ prefixed atomic
> APIs would be very short lived (DPDK 23.07 to 23.11 only). But with this
> new information about GNU C++ incompatibility, that seems not to be the
> case, so the naming discussion can be reopened.
> >
> > If we don't introduce such a wrapper header, all portable code needs
> to surround the use of atomics with #ifdef USE_STDATOMIC_H.
> >
> > BTW: Can the compilers that understand both builtin atomics and C11
> stdatomics.h handle code with #define __atomic_fetch_add
> atomic_fetch_add and #define __ATOMIC_SEQ_CST memory_order_seq_cst? If
> not, then we need to use rte_ prefixed atomics.
> >
> > And what about C++ atomics... Do we want (or need?) a third variant
> using C++ atomics, e.g. "atomic<int> x;" instead of "_Atomic int x;"? (I
> hope not!) For reference, the "atomic_int" type is "_Atomic int" in C,
> but "std::atomic<int>" in C++.
> >
> > C++23 provides the C11 compatibility macro "_Atomic(T)", which means
> "_Atomic T" in C and "std::atomic<T>" in C++. Perhaps we can somewhat
> rely on this, and update our coding standards to require using e.g.
> "_Atomic(int)" for atomic types, and disallow using "_Atomic int".
>
> You mean the syntax _Atomic(T) is working well in both C and C++?
This syntax is API compatible across C11 and C++23, so it would work with (C11 and C++23) applications building DPDK from scratch.
But it is only "recommended" ABI compatible for compilers [1], so DPDK in distros cannot rely on it.
[1]: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p0943r6.html
It would be future-proofing for the benefit of C++23 based applications... I was mainly mentioning it for completeness, now that we are switching to a new standard for atomics.
Realistically, considering that 1. such a coding standard (requiring "_Atomic(T)" instead of "_Atomic T") would only be relevant for a 2023 standard, and 2. that we are now upgrading to a standard from 2011, we would probably have to wait for a very distant future (12 years?) before C++ applications can reap the benefits of such a coding standard.
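A small, self-contained C11 illustration of the syntax in question (plain stdatomic.h, nothing DPDK-specific): the parenthesised `_Atomic(T)` specifier is the form C++23 also accepts, where it expands to `std::atomic<T>`; the bare `_Atomic T` form is C-only.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* _Atomic(T) specifier form: valid C11, and valid C++23 where it
 * maps to std::atomic<T>; `_Atomic uint32_t` would be C-only. */
static _Atomic(uint32_t) counter;

/* Atomically increment and return the previous value. */
static uint32_t
bump(void)
{
	return atomic_fetch_add_explicit(&counter, 1, memory_order_seq_cst);
}
```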
* RE: [PATCH v2 3/6] eal: add rte atomic qualifier with casts
2023-08-11 17:32 2% ` [PATCH v2 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
@ 2023-08-14 8:05 0% ` Morten Brørup
0 siblings, 0 replies; 200+ results
From: Morten Brørup @ 2023-08-14 8:05 UTC (permalink / raw)
To: Tyler Retzlaff, dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Friday, 11 August 2023 19.32
>
> Introduce __rte_atomic qualifying casts in rte_optional atomics inline
> functions to prevent cascading the need to pass __rte_atomic qualified
> arguments.
>
> Warning, this is really implementation dependent and being done
> temporarily to avoid having to convert more of the libraries and tests in
> DPDK in the initial series that introduces the API. The consequence of the
> assumption of the ABI of the types in question not being ``the same'' is
> only a risk that may be realized when enable_stdatomic=true.
>
> Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> ---
> lib/eal/include/generic/rte_atomic.h | 48 ++++++++++++++++++++++++-----------
> -
> lib/eal/include/generic/rte_pause.h | 9 ++++---
> lib/eal/x86/rte_power_intrinsics.c | 7 +++---
> 3 files changed, 42 insertions(+), 22 deletions(-)
>
> diff --git a/lib/eal/include/generic/rte_atomic.h
> b/lib/eal/include/generic/rte_atomic.h
> index f6c4b3e..4f954e0 100644
> --- a/lib/eal/include/generic/rte_atomic.h
> +++ b/lib/eal/include/generic/rte_atomic.h
> @@ -274,7 +274,8 @@
> static inline void
> rte_atomic16_add(rte_atomic16_t *v, int16_t inc)
> {
> - rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
> + rte_atomic_fetch_add_explicit((volatile int16_t __rte_atomic *)&v->cnt,
> inc,
> + rte_memory_order_seq_cst);
As mentioned in my review of the 2/6 patch, I think __rte_atomic should come before the type, like this:
(volatile __rte_atomic int16_t *)
Same with all the changes.
Otherwise good.
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
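For reference, the placement question is stylistic in C: type qualifiers and the `_Atomic` qualifier may appear in any order in a declaration, so both spellings name the same pointer type. A minimal sketch in plain C11 (not the DPDK wrappers):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

static volatile _Atomic int16_t cnt16;

/* Atomically add `inc` and return the previous value. */
static int16_t
fetch_add16(int16_t inc)
{
	/* `volatile _Atomic int16_t *` and `volatile int16_t _Atomic *`
	 * declare the same pointer type; only readability differs. */
	volatile _Atomic int16_t *p = &cnt16;
	return atomic_fetch_add_explicit(p, inc, memory_order_seq_cst);
}
```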
* [PATCH v11 10/16] eal: expand most macros to empty when using MSVC
@ 2023-08-11 19:20 5% ` Tyler Retzlaff
0 siblings, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-08-11 19:20 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Konstantin Ananyev, Ciara Power, thomas,
david.marchand, mb, Tyler Retzlaff
For now, expand a lot of common rte macros to empty. The catch here is
that we need to test that most of the macros do what they should, but at
the same time they are blocking work needed to bootstrap the unit tests.
Later we will return and provide (where possible) expansions that work
correctly for MSVC, and where not possible, provide alternate macros
to achieve the same outcome.
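The conditional pattern used throughout the patch boils down to a small sketch (hypothetical macro name; MSVC has no `__builtin_expect`, so the hint degrades to a plain boolean test):

```c
#include <assert.h>

/* Degrade a branch-prediction hint gracefully on compilers that
 * lack __builtin_expect, keeping the truth value intact. */
#if defined(_MSC_VER) && !defined(__clang__)
#define my_likely(x) (!!(x))
#else
#define my_likely(x) __builtin_expect(!!(x), 1)
#endif

static int
classify(int v)
{
	if (my_likely(v >= 0))
		return 1; /* expected common path */
	return 0;
}
```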
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/eal/include/rte_branch_prediction.h | 8 +++++
lib/eal/include/rte_common.h | 54 +++++++++++++++++++++++++++++++++
lib/eal/include/rte_compat.h | 20 ++++++++++++
3 files changed, 82 insertions(+)
diff --git a/lib/eal/include/rte_branch_prediction.h b/lib/eal/include/rte_branch_prediction.h
index 414cd92..c0356ca 100644
--- a/lib/eal/include/rte_branch_prediction.h
+++ b/lib/eal/include/rte_branch_prediction.h
@@ -24,7 +24,11 @@
* do_stuff();
*/
#ifndef likely
+#ifdef RTE_TOOLCHAIN_MSVC
+#define likely(x) (!!(x))
+#else
#define likely(x) __builtin_expect(!!(x), 1)
+#endif
#endif /* likely */
/**
@@ -37,7 +41,11 @@
* do_stuff();
*/
#ifndef unlikely
+#ifdef RTE_TOOLCHAIN_MSVC
+#define unlikely(x) (!!(x))
+#else
#define unlikely(x) __builtin_expect(!!(x), 0)
+#endif
#endif /* unlikely */
#ifdef __cplusplus
diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index 2f464e3..b087532 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -41,6 +41,10 @@
#define RTE_STD_C11
#endif
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __extension__
+#endif
+
/*
* RTE_TOOLCHAIN_GCC is defined if the target is built with GCC,
* while a host application (like pmdinfogen) may have another compiler.
@@ -65,7 +69,11 @@
/**
* Force alignment
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_aligned(a)
+#else
#define __rte_aligned(a) __attribute__((__aligned__(a)))
+#endif
#ifdef RTE_ARCH_STRICT_ALIGN
typedef uint64_t unaligned_uint64_t __rte_aligned(1);
@@ -80,16 +88,29 @@
/**
* Force a structure to be packed
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_packed
+#else
#define __rte_packed __attribute__((__packed__))
+#endif
/**
* Macro to mark a type that is not subject to type-based aliasing rules
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_may_alias
+#else
#define __rte_may_alias __attribute__((__may_alias__))
+#endif
/******* Macro to mark functions and fields scheduled for removal *****/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_deprecated
+#define __rte_deprecated_msg(msg)
+#else
#define __rte_deprecated __attribute__((__deprecated__))
#define __rte_deprecated_msg(msg) __attribute__((__deprecated__(msg)))
+#endif
/**
* Macro to mark macros and defines scheduled for removal
@@ -110,14 +131,22 @@
/**
* Force symbol to be generated even if it appears to be unused.
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_used
+#else
#define __rte_used __attribute__((used))
+#endif
/*********** Macros to eliminate unused variable warnings ********/
/**
* short definition to mark a function parameter unused
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_unused
+#else
#define __rte_unused __attribute__((__unused__))
+#endif
/**
* Mark pointer as restricted with regard to pointer aliasing.
@@ -141,6 +170,9 @@
* even if the underlying stdio implementation is ANSI-compliant,
* so this must be overridden.
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_format_printf(format_index, first_arg)
+#else
#if RTE_CC_IS_GNU
#define __rte_format_printf(format_index, first_arg) \
__attribute__((format(gnu_printf, format_index, first_arg)))
@@ -148,6 +180,7 @@
#define __rte_format_printf(format_index, first_arg) \
__attribute__((format(printf, format_index, first_arg)))
#endif
+#endif
/**
* Tells compiler that the function returns a value that points to
@@ -222,7 +255,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
/**
* Hint never returning function
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_noreturn
+#else
#define __rte_noreturn __attribute__((noreturn))
+#endif
/**
* Issue a warning in case the function's return value is ignored.
@@ -247,12 +284,20 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
* }
* @endcode
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_warn_unused_result
+#else
#define __rte_warn_unused_result __attribute__((warn_unused_result))
+#endif
/**
* Force a function to be inlined
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_always_inline
+#else
#define __rte_always_inline inline __attribute__((always_inline))
+#endif
/**
* Force a function to be noinlined
@@ -437,7 +482,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
#define RTE_CACHE_LINE_MIN_SIZE 64
/** Force alignment to cache line. */
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_cache_aligned
+#else
#define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+#endif
/** Force minimum cache line alignment. */
#define __rte_cache_min_aligned __rte_aligned(RTE_CACHE_LINE_MIN_SIZE)
@@ -812,6 +861,10 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
* struct wrapper *w = container_of(x, struct wrapper, c);
*/
#ifndef container_of
+#ifdef RTE_TOOLCHAIN_MSVC
+#define container_of(ptr, type, member) \
+ ((type *)((uintptr_t)(ptr) - offsetof(type, member)))
+#else
#define container_of(ptr, type, member) __extension__ ({ \
const typeof(((type *)0)->member) *_ptr = (ptr); \
__rte_unused type *_target_ptr = \
@@ -819,6 +872,7 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
(type *)(((uintptr_t)_ptr) - offsetof(type, member)); \
})
#endif
+#endif
/** Swap two variables. */
#define RTE_SWAP(a, b) \
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index fc9fbaa..716bc03 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -12,14 +12,22 @@
#ifndef ALLOW_EXPERIMENTAL_API
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_experimental
+#else
#define __rte_experimental \
__attribute__((deprecated("Symbol is not yet part of stable ABI"), \
section(".text.experimental")))
+#endif
#else
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_experimental
+#else
#define __rte_experimental \
__attribute__((section(".text.experimental")))
+#endif
#endif
@@ -30,23 +38,35 @@
#if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_internal
+#else
#define __rte_internal \
__attribute__((error("Symbol is not public ABI"), \
section(".text.internal")))
+#endif
#elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_internal
+#else
#define __rte_internal \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
section(".text.internal"))) \
_Pragma("GCC diagnostic pop")
+#endif
#else
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_internal
+#else
#define __rte_internal \
__attribute__((section(".text.internal")))
+#endif
#endif
--
1.8.3.1
^ permalink raw reply [relevance 5%]
* [PATCH v2 3/6] eal: add rte atomic qualifier with casts
2023-08-11 17:32 3% ` [PATCH v2 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
@ 2023-08-11 17:32 2% ` Tyler Retzlaff
2023-08-14 8:05 0% ` Morten Brørup
0 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-08-11 17:32 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Introduce __rte_atomic qualifying casts in rte_optional atomics inline
functions to prevent cascading the need to pass __rte_atomic qualified
arguments.
Warning: this is really implementation dependent and is being done
temporarily to avoid having to convert more of the DPDK libraries and tests
in the initial series that introduces the API. The assumption that the ABI
of the types in question is ``the same'' carries a risk that may only be
realized when enable_stdatomic=true.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
lib/eal/include/generic/rte_atomic.h | 48 ++++++++++++++++++++++++------------
lib/eal/include/generic/rte_pause.h | 9 ++++---
lib/eal/x86/rte_power_intrinsics.c | 7 +++---
3 files changed, 42 insertions(+), 22 deletions(-)
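In isolation, the qualifying-cast pattern the diff applies to each legacy helper looks like this — a simplified sketch using plain C11 stdatomic names rather than the rte_ wrappers, with an illustrative `legacy_atomic16_t` standing in for `rte_atomic16_t`:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Legacy counter type: the member is volatile but NOT _Atomic-qualified,
 * mirroring the shape of rte_atomic16_t. */
typedef struct { volatile int16_t cnt; } legacy_atomic16_t;

/* The patch's pattern: cast the plain member to an _Atomic-qualified
 * pointer at the call site so the C11 generic functions accept it.
 * This relies on the non-atomic and _Atomic types sharing a
 * representation, which is implementation dependent -- exactly the
 * risk the commit message warns about. */
static inline int16_t
legacy_add_return(legacy_atomic16_t *v, int16_t inc)
{
	return atomic_fetch_add_explicit(
		(volatile _Atomic int16_t *)&v->cnt, inc,
		memory_order_seq_cst) + inc;
}
```

The cast keeps the legacy struct layout unchanged while letting its helpers call into the stdatomic-backed generic functions.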
diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
index f6c4b3e..4f954e0 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -274,7 +274,8 @@
static inline void
rte_atomic16_add(rte_atomic16_t *v, int16_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile int16_t __rte_atomic *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
/**
@@ -288,7 +289,8 @@
static inline void
rte_atomic16_sub(rte_atomic16_t *v, int16_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile int16_t __rte_atomic *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
/**
@@ -341,7 +343,8 @@
static inline int16_t
rte_atomic16_add_return(rte_atomic16_t *v, int16_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile int16_t __rte_atomic *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
/**
@@ -361,7 +364,8 @@
static inline int16_t
rte_atomic16_sub_return(rte_atomic16_t *v, int16_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile int16_t __rte_atomic *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
/**
@@ -380,7 +384,8 @@
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
+ return rte_atomic_fetch_add_explicit((volatile int16_t __rte_atomic *)&v->cnt, 1,
+ rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -400,7 +405,8 @@ static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit((volatile int16_t __rte_atomic *)&v->cnt, 1,
+ rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -553,7 +559,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_add(rte_atomic32_t *v, int32_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile int32_t __rte_atomic *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
/**
@@ -567,7 +574,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_sub(rte_atomic32_t *v, int32_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile int32_t __rte_atomic *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
/**
@@ -620,7 +628,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_add_return(rte_atomic32_t *v, int32_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile int32_t __rte_atomic *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
/**
@@ -640,7 +649,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_sub_return(rte_atomic32_t *v, int32_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile int32_t __rte_atomic *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
/**
@@ -659,7 +669,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
+ return rte_atomic_fetch_add_explicit((volatile int32_t __rte_atomic *)&v->cnt, 1,
+ rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -679,7 +690,8 @@ static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit((volatile int32_t __rte_atomic *)&v->cnt, 1,
+ rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -885,7 +897,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile int64_t __rte_atomic *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
#endif
@@ -904,7 +917,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile int64_t __rte_atomic *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
#endif
@@ -962,7 +976,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile int64_t __rte_atomic *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
#endif
@@ -986,7 +1001,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile int64_t __rte_atomic *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
#endif
diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
index c816e7d..c261689 100644
--- a/lib/eal/include/generic/rte_pause.h
+++ b/lib/eal/include/generic/rte_pause.h
@@ -87,7 +87,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile uint16_t __rte_atomic *)addr, memorder)
+ != expected)
rte_pause();
}
@@ -97,7 +98,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile uint32_t __rte_atomic *)addr, memorder)
+ != expected)
rte_pause();
}
@@ -107,7 +109,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile uint64_t __rte_atomic *)addr, memorder)
+ != expected)
rte_pause();
}
diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index cf70e33..6c192f0 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -23,9 +23,10 @@
uint64_t val;
/* trigger a write but don't change the value */
- val = rte_atomic_load_explicit((volatile uint64_t *)addr, rte_memory_order_relaxed);
- rte_atomic_compare_exchange_strong_explicit((volatile uint64_t *)addr, &val, val,
- rte_memory_order_relaxed, rte_memory_order_relaxed);
+ val = rte_atomic_load_explicit((volatile uint64_t __rte_atomic *)addr,
+ rte_memory_order_relaxed);
+ rte_atomic_compare_exchange_strong_explicit((volatile uint64_t __rte_atomic *)addr,
+ &val, val, rte_memory_order_relaxed, rte_memory_order_relaxed);
}
static bool wait_supported;
--
1.8.3.1
^ permalink raw reply [relevance 2%]
* [PATCH v2 0/6] RFC optional rte optional stdatomics API
2023-08-11 1:31 4% [PATCH 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
2023-08-11 1:31 2% ` [PATCH 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
@ 2023-08-11 17:32 3% ` Tyler Retzlaff
2023-08-11 17:32 2% ` [PATCH v2 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
2023-08-16 19:19 3% ` [PATCH v3 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (3 subsequent siblings)
5 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-08-11 17:32 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
This series introduces API additions, prefixed in the rte namespace, that
allow the optional use of C11 stdatomic.h via enable_stdatomic=true; for
targets where enable_stdatomic=false, no functional change is intended.
Be aware this does not contain all of the changes needed to use stdatomics
across the DPDK tree; it only introduces the minimum needed to allow the
option to be used, which is a prerequisite for a clean CI (probably using
clang) that can be run with enable_stdatomic=true.
It is planned that subsequent series will be introduced per lib/driver as
appropriate to further enable stdatomics use when enable_stdatomic=true.
Notes:
* Additional libraries beyond EAL make atomics visible across the
API/ABI surface; they will be converted in subsequent series.
* The "eal: add rte atomic qualifier with casts" patch needs some discussion
as to whether or not the legacy rte_atomic APIs should be converted to
work with enable_stdatomic=true. Right now some implementation-dependent
casts are used to prevent cascading / having to convert too much in
the initial series.
* Windows will obviously need complete conversion of libraries, including
atomics that are not crossing API/ABI boundaries. Those conversions will be
introduced in separate series alongside the existing MSVC series.
Please keep in mind that we would like to prioritize the review / acceptance
of this series since it needs to be completed in the 23.11 merge window.
Thank you all for the discussion that lead to the formation of this series.
v2:
* Wrap meson_options.txt option description to newline and indent to
be consistent with other options.
* Provide separate typedefs of rte_memory_order for enable_stdatomic=true
vs enable_stdatomic=false instead of a single typedef to int.
Note: as a slight tweak to reviewer feedback, I've chosen to use a typedef
for both enable_stdatomic={true,false} (it just seemed more consistent)
* Bring in assert.h and use static_assert macro instead of _Static_assert
keyword to better interoperate with c/c++
* Directly include rte_stdatomic.h where it is consumed
instead of hacking it globally into rte_config.h
* Provide and use __rte_atomic_thread_fence to allow conditional expansion
within the body of existing rte_atomic_thread_fence inline function to
maintain per-arch optimizations when enable_stdatomic=false
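The dual typedef described in the first v2 note might be sketched like this — an assumed shape for illustration, not the actual rte_stdatomic.h contents; `RTE_ENABLE_STDATOMIC` is a hypothetical configuration macro and `__ATOMIC_SEQ_CST` is the GCC/clang builtin constant:

```c
#include <stdint.h>

#ifdef RTE_ENABLE_STDATOMIC
/* enable_stdatomic=true: alias the standard C11 enumeration. */
#include <stdatomic.h>
typedef memory_order rte_memory_order;
#define rte_memory_order_seq_cst memory_order_seq_cst
#else
/* enable_stdatomic=false: a distinct typedef mapped onto the GCC
 * builtin constants, so existing __atomic-based code keeps working. */
typedef int rte_memory_order;
#define rte_memory_order_seq_cst __ATOMIC_SEQ_CST
#endif
```

Using a typedef in both branches (rather than a typedef only in one) keeps the two configurations symmetric, which matches the "just seemed more consistent" rationale above.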
Tyler Retzlaff (6):
eal: provide rte stdatomics optional atomics API
eal: adapt EAL to present rte optional atomics API
eal: add rte atomic qualifier with casts
distributor: adapt for EAL optional atomics API changes
bpf: adapt for EAL optional atomics API changes
devtools: forbid new direct use of GCC atomic builtins
app/test/test_mcslock.c | 6 +-
config/meson.build | 1 +
devtools/checkpatches.sh | 6 +-
lib/bpf/bpf_pkt.c | 6 +-
lib/distributor/distributor_private.h | 2 +-
lib/distributor/rte_distributor_single.c | 44 ++++----
lib/eal/arm/include/rte_atomic_32.h | 4 +-
lib/eal/arm/include/rte_atomic_64.h | 36 +++---
lib/eal/arm/include/rte_pause_64.h | 26 ++---
lib/eal/arm/rte_power_intrinsics.c | 8 +-
lib/eal/common/eal_common_trace.c | 16 +--
lib/eal/include/generic/rte_atomic.h | 67 +++++++-----
lib/eal/include/generic/rte_pause.h | 42 +++----
lib/eal/include/generic/rte_rwlock.h | 48 ++++----
lib/eal/include/generic/rte_spinlock.h | 20 ++--
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_mcslock.h | 51 ++++-----
lib/eal/include/rte_pflock.h | 25 +++--
lib/eal/include/rte_seqcount.h | 19 ++--
lib/eal/include/rte_stdatomic.h | 182 +++++++++++++++++++++++++++++++
lib/eal/include/rte_ticketlock.h | 43 ++++----
lib/eal/include/rte_trace_point.h | 5 +-
lib/eal/loongarch/include/rte_atomic.h | 4 +-
lib/eal/ppc/include/rte_atomic.h | 54 ++++-----
lib/eal/riscv/include/rte_atomic.h | 4 +-
lib/eal/x86/include/rte_atomic.h | 8 +-
lib/eal/x86/include/rte_spinlock.h | 2 +-
lib/eal/x86/rte_power_intrinsics.c | 7 +-
meson_options.txt | 2 +
29 files changed, 481 insertions(+), 258 deletions(-)
create mode 100644 lib/eal/include/rte_stdatomic.h
--
1.8.3.1
^ permalink raw reply [relevance 3%]
* Re: [PATCH v8 0/3] Split logging functionality out of EAL
2023-08-09 13:35 3% ` [PATCH v8 " Bruce Richardson
@ 2023-08-11 12:46 4% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2023-08-11 12:46 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, Thomas Monjalon, Morten Brørup
On Wed, Aug 9, 2023 at 3:36 PM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> There is a general desire to reduce the size and scope of EAL. To this
> end, this patchset makes a (very) small step in that direction by taking
> the logging functionality out of EAL and putting it into its own library
> that can be built and maintained separately.
>
> As with the first RFC for this, the main obstacle is the "fnmatch"
> function which is needed by both EAL and the new log function when
> building on windows. While the function cannot stay in EAL - or we would
> have a circular dependency, moving it to a new library or just putting
> it in the log library have the disadvantages that it then "leaks" into
> the public namespace without an rte_prefix, which could cause issues.
> Since only a single function is involved, subsequent versions take a
> different approach to v1, and just moves the offending function to be a
> static function in a header file. This allows use by multiple libs
> without conflicting names or making it public.
>
> The other complication, as explained in v1 RFC was that of multiple
> implementations for different OS's. This is solved here in the same
> way as v1, by including the OS in the name and having meson pick the
> correct file for each build. Since only one file is involved, there
> seemed little need for replicating EAL's separate subdirectories
> per-OS.
Series applied, thanks Bruce for this first step.
As mentioned during the maintainers weekly call yesterday, this is
only a first "easy" step but, thinking of next steps, more splitting
may not be that easy.
At least, on the libabigail topic, as we need the ABI check to handle
libraries splits, a new feature has been cooked in (not yet released)
2.4 libabigail.
https://sourceware.org/git/?p=libabigail.git;a=commitdiff;h=0b338dfaf690993e123b6433201b3a8b8204d662
Hopefully, we will have a libabigail release available by the time we
start the v24.03 release (and re-enable ABI checks).
--
David Marchand
^ permalink raw reply [relevance 4%]
* Re: [RFC] ethdev: introduce maximum Rx buffer size
@ 2023-08-11 12:07 3% ` Andrew Rybchenko
2023-08-15 8:16 0% ` lihuisong (C)
0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2023-08-11 12:07 UTC (permalink / raw)
To: Huisong Li, dev; +Cc: thomas, ferruh.yigit, liuyonglong
On 8/8/23 07:02, Huisong Li wrote:
> The Rx buffer size stands for the size the hardware supports for receiving
> packets into one mbuf. The "min_rx_bufsize" is the minimum buffer size the
> hardware supports in Rx. Actually, some engines also have a maximum buffer
> specification, like hns3. For these engines, the available data size
> of one mbuf in Rx also depends on the maximum buffer the hardware supports.
> So introduce a maximum Rx buffer size in struct rte_eth_dev_info to report
> it to the user and avoid memory waste.
I think that the field should be defined as for informational purposes
only (highlighted in comments). I.e. if the application specifies a larger Rx
buffer, the driver should accept it and just pass the smaller value to HW.
Also I think it would be useful to log a warning from Rx queue setup
if the provided Rx buffer is larger than the maximum reported by the driver.
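The suggested driver-side handling could be sketched as follows — the helper name and the `warn` out-parameter are hypothetical, not part of the ethdev API:

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative sketch only: accept whatever Rx buffer size the
 * application configured, but clamp the value actually programmed
 * into hardware to the driver-reported maximum, and flag when a
 * warning should be logged from Rx queue setup. */
static inline uint32_t
rx_bufsize_for_hw(uint32_t requested, uint32_t max_rx_bufsize, bool *warn)
{
	*warn = requested > max_rx_bufsize;
	return *warn ? max_rx_bufsize : requested;
}
```

This keeps the new `max_rx_bufsize` field purely informational: setup never fails because of it, but the application learns that part of its mbuf data room would go unused.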
>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> ---
> lib/ethdev/rte_ethdev.c | 1 +
> lib/ethdev/rte_ethdev.h | 4 ++--
> 2 files changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 0840d2b594..6d1b92e607 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -3689,6 +3689,7 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info)
> dev_info->min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN -
> RTE_ETHER_CRC_LEN;
> dev_info->max_mtu = UINT16_MAX;
> + dev_info->max_rx_bufsize = UINT32_MAX;
>
> if (*dev->dev_ops->dev_infos_get == NULL)
> return -ENOTSUP;
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index 04a2564f22..1f0ab9c5d8 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -1779,8 +1779,8 @@ struct rte_eth_dev_info {
> struct rte_eth_switch_info switch_info;
> /** Supported error handling mode. */
> enum rte_eth_err_handle_mode err_handle_mode;
> -
> - uint64_t reserved_64s[2]; /**< Reserved for future fields */
> + uint32_t max_rx_bufsize; /**< Maximum size of Rx buffer. */
IMHO, the comment should be aligned similarly to the comments below.
Since the next release is ABI breaking, I think the field should be put
near min_rx_bufsize to make it easier to notice.
> + uint32_t reserved_32s[3]; /**< Reserved for future fields */
> void *reserved_ptrs[2]; /**< Reserved for future fields */
> };
>
^ permalink raw reply [relevance 3%]
* [PATCH v2 1/2] test/cpuflags: removed test for NUMFLAGS
2023-08-02 21:11 2% [PATCH 1/2] eal: remove RTE_CPUFLAG_NUMFLAGS Sivaprasad Tummala
2023-08-02 21:11 3% ` [PATCH 2/2] test/cpuflags: " Sivaprasad Tummala
2023-08-02 23:50 0% ` [PATCH 1/2] eal: " Stanisław Kardach
@ 2023-08-11 6:07 3% ` Sivaprasad Tummala
2023-08-11 6:07 2% ` [PATCH v2 2/2] eal: remove NUMFLAGS enumeration Sivaprasad Tummala
2 siblings, 1 reply; 200+ results
From: Sivaprasad Tummala @ 2023-08-11 6:07 UTC (permalink / raw)
To: ruifeng.wang, zhoumin, drc, kda, bruce.richardson, konstantin.v.ananyev
Cc: dev
This patch removes RTE_CPUFLAG_NUMFLAGS to allow new CPU
features without breaking ABI each time.
Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
---
app/test/test_cpuflags.c | 9 ---------
1 file changed, 9 deletions(-)
diff --git a/app/test/test_cpuflags.c b/app/test/test_cpuflags.c
index a0e342ae48..2b8563602c 100644
--- a/app/test/test_cpuflags.c
+++ b/app/test/test_cpuflags.c
@@ -322,15 +322,6 @@ test_cpuflags(void)
CHECK_FOR_FLAG(RTE_CPUFLAG_LBT_MIPS);
#endif
- /*
- * Check if invalid data is handled properly
- */
- printf("\nCheck for invalid flag:\t");
- result = rte_cpu_get_flag_enabled(RTE_CPUFLAG_NUMFLAGS);
- printf("%s\n", cpu_flag_result(result));
- if (result != -ENOENT)
- return -1;
-
return 0;
}
--
2.34.1
^ permalink raw reply [relevance 3%]
* [PATCH v2 2/2] eal: remove NUMFLAGS enumeration
2023-08-11 6:07 3% ` [PATCH v2 1/2] test/cpuflags: removed test for NUMFLAGS Sivaprasad Tummala
@ 2023-08-11 6:07 2% ` Sivaprasad Tummala
2023-08-15 6:10 3% ` Stanisław Kardach
0 siblings, 1 reply; 200+ results
From: Sivaprasad Tummala @ 2023-08-11 6:07 UTC (permalink / raw)
To: ruifeng.wang, zhoumin, drc, kda, bruce.richardson, konstantin.v.ananyev
Cc: dev
This patch removes RTE_CPUFLAG_NUMFLAGS to allow new CPU
features without breaking ABI each time.
Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
---
lib/eal/arm/include/rte_cpuflags_32.h | 1 -
lib/eal/arm/include/rte_cpuflags_64.h | 1 -
lib/eal/arm/rte_cpuflags.c | 7 +++++--
lib/eal/loongarch/include/rte_cpuflags.h | 1 -
lib/eal/loongarch/rte_cpuflags.c | 7 +++++--
lib/eal/ppc/include/rte_cpuflags.h | 1 -
lib/eal/ppc/rte_cpuflags.c | 7 +++++--
lib/eal/riscv/include/rte_cpuflags.h | 1 -
lib/eal/riscv/rte_cpuflags.c | 7 +++++--
lib/eal/x86/include/rte_cpuflags.h | 1 -
lib/eal/x86/rte_cpuflags.c | 7 +++++--
11 files changed, 25 insertions(+), 16 deletions(-)
diff --git a/lib/eal/arm/include/rte_cpuflags_32.h b/lib/eal/arm/include/rte_cpuflags_32.h
index 4e254428a2..41ab0d5f21 100644
--- a/lib/eal/arm/include/rte_cpuflags_32.h
+++ b/lib/eal/arm/include/rte_cpuflags_32.h
@@ -43,7 +43,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_V7L,
RTE_CPUFLAG_V8L,
/* The last item */
- RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
};
#include "generic/rte_cpuflags.h"
diff --git a/lib/eal/arm/include/rte_cpuflags_64.h b/lib/eal/arm/include/rte_cpuflags_64.h
index aa7a56d491..ea5193e510 100644
--- a/lib/eal/arm/include/rte_cpuflags_64.h
+++ b/lib/eal/arm/include/rte_cpuflags_64.h
@@ -37,7 +37,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_SVEBF16,
RTE_CPUFLAG_AARCH64,
/* The last item */
- RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
};
#include "generic/rte_cpuflags.h"
diff --git a/lib/eal/arm/rte_cpuflags.c b/lib/eal/arm/rte_cpuflags.c
index 56e7b2e689..f33fee242b 100644
--- a/lib/eal/arm/rte_cpuflags.c
+++ b/lib/eal/arm/rte_cpuflags.c
@@ -139,8 +139,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
{
const struct feature_entry *feat;
hwcap_registers_t regs = {0};
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ if ((unsigned int)feature >= num_flags)
return -ENOENT;
feat = &rte_cpu_feature_table[feature];
@@ -154,7 +155,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
const char *
rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
{
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
+
+ if ((unsigned int)feature >= num_flags)
return NULL;
return rte_cpu_feature_table[feature].name;
}
diff --git a/lib/eal/loongarch/include/rte_cpuflags.h b/lib/eal/loongarch/include/rte_cpuflags.h
index 1c80779262..9ff8baaa3c 100644
--- a/lib/eal/loongarch/include/rte_cpuflags.h
+++ b/lib/eal/loongarch/include/rte_cpuflags.h
@@ -27,7 +27,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_LBT_ARM,
RTE_CPUFLAG_LBT_MIPS,
/* The last item */
- RTE_CPUFLAG_NUMFLAGS /**< This should always be the last! */
};
#include "generic/rte_cpuflags.h"
diff --git a/lib/eal/loongarch/rte_cpuflags.c b/lib/eal/loongarch/rte_cpuflags.c
index 0a75ca58d4..73b53b8a3a 100644
--- a/lib/eal/loongarch/rte_cpuflags.c
+++ b/lib/eal/loongarch/rte_cpuflags.c
@@ -66,8 +66,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
{
const struct feature_entry *feat;
hwcap_registers_t regs = {0};
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ if ((unsigned int)feature >= num_flags)
return -ENOENT;
feat = &rte_cpu_feature_table[feature];
@@ -81,7 +82,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
const char *
rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
{
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
+
+ if ((unsigned int)feature >= num_flags)
return NULL;
return rte_cpu_feature_table[feature].name;
}
diff --git a/lib/eal/ppc/include/rte_cpuflags.h b/lib/eal/ppc/include/rte_cpuflags.h
index a88355d170..b74e7a73ee 100644
--- a/lib/eal/ppc/include/rte_cpuflags.h
+++ b/lib/eal/ppc/include/rte_cpuflags.h
@@ -49,7 +49,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_HTM,
RTE_CPUFLAG_ARCH_2_07,
/* The last item */
- RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
};
#include "generic/rte_cpuflags.h"
diff --git a/lib/eal/ppc/rte_cpuflags.c b/lib/eal/ppc/rte_cpuflags.c
index 61db5c216d..a173c62631 100644
--- a/lib/eal/ppc/rte_cpuflags.c
+++ b/lib/eal/ppc/rte_cpuflags.c
@@ -90,8 +90,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
{
const struct feature_entry *feat;
hwcap_registers_t regs = {0};
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ if ((unsigned int)feature >= num_flags)
return -ENOENT;
feat = &rte_cpu_feature_table[feature];
@@ -105,7 +106,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
const char *
rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
{
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
+
+ if ((unsigned int)feature >= num_flags)
return NULL;
return rte_cpu_feature_table[feature].name;
}
diff --git a/lib/eal/riscv/include/rte_cpuflags.h b/lib/eal/riscv/include/rte_cpuflags.h
index 66e787f898..803c3655ae 100644
--- a/lib/eal/riscv/include/rte_cpuflags.h
+++ b/lib/eal/riscv/include/rte_cpuflags.h
@@ -43,7 +43,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_RISCV_ISA_Y, /* Reserved */
RTE_CPUFLAG_RISCV_ISA_Z, /* Reserved */
/* The last item */
- RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
};
#include "generic/rte_cpuflags.h"
diff --git a/lib/eal/riscv/rte_cpuflags.c b/lib/eal/riscv/rte_cpuflags.c
index 4f6d29b947..6d3f8f16cc 100644
--- a/lib/eal/riscv/rte_cpuflags.c
+++ b/lib/eal/riscv/rte_cpuflags.c
@@ -95,8 +95,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
{
const struct feature_entry *feat;
hwcap_registers_t regs = {0};
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ if ((unsigned int)feature >= num_flags)
return -ENOENT;
feat = &rte_cpu_feature_table[feature];
@@ -110,7 +111,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
const char *
rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
{
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
+
+ if ((unsigned int)feature >= num_flags)
return NULL;
return rte_cpu_feature_table[feature].name;
}
diff --git a/lib/eal/x86/include/rte_cpuflags.h b/lib/eal/x86/include/rte_cpuflags.h
index 92e90fb6e0..7fc6117243 100644
--- a/lib/eal/x86/include/rte_cpuflags.h
+++ b/lib/eal/x86/include/rte_cpuflags.h
@@ -135,7 +135,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_WAITPKG, /**< UMONITOR/UMWAIT/TPAUSE */
/* The last item */
- RTE_CPUFLAG_NUMFLAGS, /**< This should always be the last! */
};
#include "generic/rte_cpuflags.h"
diff --git a/lib/eal/x86/rte_cpuflags.c b/lib/eal/x86/rte_cpuflags.c
index d6b518251b..22061cb6d3 100644
--- a/lib/eal/x86/rte_cpuflags.c
+++ b/lib/eal/x86/rte_cpuflags.c
@@ -149,8 +149,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
const struct feature_entry *feat;
cpuid_registers_t regs;
unsigned int maxleaf;
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ if ((unsigned int)feature >= num_flags)
/* Flag does not match anything in the feature tables */
return -ENOENT;
@@ -176,7 +177,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
const char *
rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
{
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
+
+ if ((unsigned int)feature >= num_flags)
return NULL;
return rte_cpu_feature_table[feature].name;
}
--
2.34.1
^ permalink raw reply [relevance 2%]
* RE: [PATCH 1/2] eal: remove RTE_CPUFLAG_NUMFLAGS
2023-08-02 23:50 0% ` [PATCH 1/2] eal: " Stanisław Kardach
@ 2023-08-11 4:02 2% ` Tummala, Sivaprasad
0 siblings, 0 replies; 200+ results
From: Tummala, Sivaprasad @ 2023-08-11 4:02 UTC (permalink / raw)
To: Stanisław Kardach
Cc: Ruifeng Wang, Min Zhou, David Christensen, Bruce Richardson,
Konstantin Ananyev, dev
From: Stanisław Kardach <kda@semihalf.com>
Sent: Thursday, August 3, 2023 5:20 AM
To: Tummala, Sivaprasad <Sivaprasad.Tummala@amd.com>
Cc: Ruifeng Wang <ruifeng.wang@arm.com>; Min Zhou <zhoumin@loongson.cn>; David Christensen <drc@linux.vnet.ibm.com>; Bruce Richardson <bruce.richardson@intel.com>; Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>; dev <dev@dpdk.org>
Subject: Re: [PATCH 1/2] eal: remove RTE_CPUFLAG_NUMFLAGS
On Wed, Aug 2, 2023, 23:12 Sivaprasad Tummala <sivaprasad.tummala@amd.com> wrote:
This patch removes RTE_CPUFLAG_NUMFLAGS to allow new CPU
features without breaking ABI each time.
I'm not sure I understand the reason for removing the last-element canary. It's quite useful in the code that you're refactoring.
Isn't it so that you want to essentially remove the test (other commit in this series)?
Because that I can understand as a forward compatibility measure.
Yes, I will fix this in v2.
Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
---
lib/eal/arm/include/rte_cpuflags_32.h | 1 -
lib/eal/arm/include/rte_cpuflags_64.h | 1 -
lib/eal/arm/rte_cpuflags.c | 7 +++++--
lib/eal/loongarch/include/rte_cpuflags.h | 1 -
lib/eal/loongarch/rte_cpuflags.c | 7 +++++--
lib/eal/ppc/include/rte_cpuflags.h | 1 -
lib/eal/ppc/rte_cpuflags.c | 7 +++++--
lib/eal/riscv/include/rte_cpuflags.h | 1 -
lib/eal/riscv/rte_cpuflags.c | 7 +++++--
lib/eal/x86/include/rte_cpuflags.h | 1 -
lib/eal/x86/rte_cpuflags.c | 7 +++++--
11 files changed, 25 insertions(+), 16 deletions(-)
diff --git a/lib/eal/arm/include/rte_cpuflags_32.h b/lib/eal/arm/include/rte_cpuflags_32.h
index 4e254428a2..41ab0d5f21 100644
--- a/lib/eal/arm/include/rte_cpuflags_32.h
+++ b/lib/eal/arm/include/rte_cpuflags_32.h
@@ -43,7 +43,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_V7L,
RTE_CPUFLAG_V8L,
/* The last item */
- RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
};
#include "generic/rte_cpuflags.h"
diff --git a/lib/eal/arm/include/rte_cpuflags_64.h b/lib/eal/arm/include/rte_cpuflags_64.h
index aa7a56d491..ea5193e510 100644
--- a/lib/eal/arm/include/rte_cpuflags_64.h
+++ b/lib/eal/arm/include/rte_cpuflags_64.h
@@ -37,7 +37,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_SVEBF16,
RTE_CPUFLAG_AARCH64,
/* The last item */
- RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
};
#include "generic/rte_cpuflags.h"
diff --git a/lib/eal/arm/rte_cpuflags.c b/lib/eal/arm/rte_cpuflags.c
index 56e7b2e689..447a8d9f9f 100644
--- a/lib/eal/arm/rte_cpuflags.c
+++ b/lib/eal/arm/rte_cpuflags.c
@@ -139,8 +139,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
{
const struct feature_entry *feat;
hwcap_registers_t regs = {0};
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ if (feature >= num_flags)
return -ENOENT;
feat = &rte_cpu_feature_table[feature];
@@ -154,7 +155,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
const char *
rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
{
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
+
+ if (feature >= num_flags)
return NULL;
return rte_cpu_feature_table[feature].name;
}
diff --git a/lib/eal/loongarch/include/rte_cpuflags.h b/lib/eal/loongarch/include/rte_cpuflags.h
index 1c80779262..9ff8baaa3c 100644
--- a/lib/eal/loongarch/include/rte_cpuflags.h
+++ b/lib/eal/loongarch/include/rte_cpuflags.h
@@ -27,7 +27,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_LBT_ARM,
RTE_CPUFLAG_LBT_MIPS,
/* The last item */
- RTE_CPUFLAG_NUMFLAGS /**< This should always be the last! */
};
#include "generic/rte_cpuflags.h"
diff --git a/lib/eal/loongarch/rte_cpuflags.c b/lib/eal/loongarch/rte_cpuflags.c
index 0a75ca58d4..642eb42509 100644
--- a/lib/eal/loongarch/rte_cpuflags.c
+++ b/lib/eal/loongarch/rte_cpuflags.c
@@ -66,8 +66,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
{
const struct feature_entry *feat;
hwcap_registers_t regs = {0};
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ if (feature >= num_flags)
return -ENOENT;
feat = &rte_cpu_feature_table[feature];
@@ -81,7 +82,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
const char *
rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
{
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
+
+ if (feature >= num_flags)
return NULL;
return rte_cpu_feature_table[feature].name;
}
diff --git a/lib/eal/ppc/include/rte_cpuflags.h b/lib/eal/ppc/include/rte_cpuflags.h
index a88355d170..b74e7a73ee 100644
--- a/lib/eal/ppc/include/rte_cpuflags.h
+++ b/lib/eal/ppc/include/rte_cpuflags.h
@@ -49,7 +49,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_HTM,
RTE_CPUFLAG_ARCH_2_07,
/* The last item */
- RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
};
#include "generic/rte_cpuflags.h"
diff --git a/lib/eal/ppc/rte_cpuflags.c b/lib/eal/ppc/rte_cpuflags.c
index 61db5c216d..3a639ef45a 100644
--- a/lib/eal/ppc/rte_cpuflags.c
+++ b/lib/eal/ppc/rte_cpuflags.c
@@ -90,8 +90,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
{
const struct feature_entry *feat;
hwcap_registers_t regs = {0};
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ if (feature >= num_flags)
return -ENOENT;
feat = &rte_cpu_feature_table[feature];
@@ -105,7 +106,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
const char *
rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
{
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
+
+ if (feature >= num_flags)
return NULL;
return rte_cpu_feature_table[feature].name;
}
diff --git a/lib/eal/riscv/include/rte_cpuflags.h b/lib/eal/riscv/include/rte_cpuflags.h
index 66e787f898..803c3655ae 100644
--- a/lib/eal/riscv/include/rte_cpuflags.h
+++ b/lib/eal/riscv/include/rte_cpuflags.h
@@ -43,7 +43,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_RISCV_ISA_Y, /* Reserved */
RTE_CPUFLAG_RISCV_ISA_Z, /* Reserved */
/* The last item */
- RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
};
#include "generic/rte_cpuflags.h"
diff --git a/lib/eal/riscv/rte_cpuflags.c b/lib/eal/riscv/rte_cpuflags.c
index 4f6d29b947..a452261188 100644
--- a/lib/eal/riscv/rte_cpuflags.c
+++ b/lib/eal/riscv/rte_cpuflags.c
@@ -95,8 +95,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
{
const struct feature_entry *feat;
hwcap_registers_t regs = {0};
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ if (feature >= num_flags)
return -ENOENT;
feat = &rte_cpu_feature_table[feature];
@@ -110,7 +111,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
const char *
rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
{
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
+
+ if (feature >= num_flags)
return NULL;
return rte_cpu_feature_table[feature].name;
}
diff --git a/lib/eal/x86/include/rte_cpuflags.h b/lib/eal/x86/include/rte_cpuflags.h
index 92e90fb6e0..7fc6117243 100644
--- a/lib/eal/x86/include/rte_cpuflags.h
+++ b/lib/eal/x86/include/rte_cpuflags.h
@@ -135,7 +135,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_WAITPKG, /**< UMONITOR/UMWAIT/TPAUSE */
/* The last item */
- RTE_CPUFLAG_NUMFLAGS, /**< This should always be the last! */
};
#include "generic/rte_cpuflags.h"
diff --git a/lib/eal/x86/rte_cpuflags.c b/lib/eal/x86/rte_cpuflags.c
index d6b518251b..00d17c7515 100644
--- a/lib/eal/x86/rte_cpuflags.c
+++ b/lib/eal/x86/rte_cpuflags.c
@@ -149,8 +149,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
const struct feature_entry *feat;
cpuid_registers_t regs;
unsigned int maxleaf;
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ if (feature >= num_flags)
/* Flag does not match anything in the feature tables */
return -ENOENT;
@@ -176,7 +177,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
const char *
rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
{
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
+
+ if (feature >= num_flags)
return NULL;
return rte_cpu_feature_table[feature].name;
}
--
2.34.1
```
^ permalink raw reply [relevance 2%]
* [PATCH 3/6] eal: add rte atomic qualifier with casts
2023-08-11 1:31 4% [PATCH 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
@ 2023-08-11 1:31 2% ` Tyler Retzlaff
2023-08-11 17:32 3% ` [PATCH v2 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (4 subsequent siblings)
5 siblings, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-08-11 1:31 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Introduce __rte_atomic qualifying casts in rte_optional atomics inline
functions to prevent cascading the need to pass __rte_atomic qualified
arguments.
Warning: this is really implementation dependent and is being done
temporarily to avoid having to convert more of the libraries and tests in
DPDK in the initial series that introduces the API. The consequence of the
types in question not having ``the same'' ABI is only a risk that may be
realized when enable_stdatomic=true.
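A minimal sketch of the qualifying-cast pattern this patch applies, written here against plain C11 stdatomic.h rather than the rte_ wrappers (the type and function names are illustrative, not the real DPDK definitions):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Stand-in for a legacy counter type: the member is NOT declared
 * _Atomic, mirroring rte_atomic16_t whose cnt field predates the
 * stdatomic conversion. */
typedef struct {
	volatile int16_t cnt;
} legacy_atomic16;

/* The cast pattern from the patch: qualify the pointer at the call
 * site so callers need not change their own types.  This relies on
 * the plain and _Atomic representations being compatible, which is
 * implementation dependent, exactly the caveat in the commit message. */
static inline int16_t
legacy_add_return(legacy_atomic16 *v, int16_t inc)
{
	return atomic_fetch_add_explicit(
			(volatile _Atomic int16_t *)&v->cnt, inc,
			memory_order_seq_cst) + inc;
}
```

The cast keeps the churn local to the inline wrappers instead of cascading an __rte_atomic qualifier through every caller's struct definitions.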
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
lib/eal/include/generic/rte_atomic.h | 48 ++++++++++++++++++++++++------------
lib/eal/include/generic/rte_pause.h | 9 ++++---
lib/eal/x86/rte_power_intrinsics.c | 7 +++---
3 files changed, 42 insertions(+), 22 deletions(-)
diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
index 15a36f3..2c65304 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -273,7 +273,8 @@
static inline void
rte_atomic16_add(rte_atomic16_t *v, int16_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile int16_t __rte_atomic *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
/**
@@ -287,7 +288,8 @@
static inline void
rte_atomic16_sub(rte_atomic16_t *v, int16_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile int16_t __rte_atomic *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
/**
@@ -340,7 +342,8 @@
static inline int16_t
rte_atomic16_add_return(rte_atomic16_t *v, int16_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile int16_t __rte_atomic *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
/**
@@ -360,7 +363,8 @@
static inline int16_t
rte_atomic16_sub_return(rte_atomic16_t *v, int16_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile int16_t __rte_atomic *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
/**
@@ -379,7 +383,8 @@
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
+ return rte_atomic_fetch_add_explicit((volatile int16_t __rte_atomic *)&v->cnt, 1,
+ rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -399,7 +404,8 @@ static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit((volatile int16_t __rte_atomic *)&v->cnt, 1,
+ rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -552,7 +558,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_add(rte_atomic32_t *v, int32_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile int32_t __rte_atomic *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
/**
@@ -566,7 +573,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_sub(rte_atomic32_t *v, int32_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile int32_t __rte_atomic *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
/**
@@ -619,7 +627,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_add_return(rte_atomic32_t *v, int32_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile int32_t __rte_atomic *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
/**
@@ -639,7 +648,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_sub_return(rte_atomic32_t *v, int32_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile int32_t __rte_atomic *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
/**
@@ -658,7 +668,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
+ return rte_atomic_fetch_add_explicit((volatile int32_t __rte_atomic *)&v->cnt, 1,
+ rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -678,7 +689,8 @@ static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit((volatile int32_t __rte_atomic *)&v->cnt, 1,
+ rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -884,7 +896,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile int64_t __rte_atomic *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
#endif
@@ -903,7 +916,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile int64_t __rte_atomic *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
#endif
@@ -961,7 +975,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile int64_t __rte_atomic *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
#endif
@@ -985,7 +1000,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile int64_t __rte_atomic *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
#endif
diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
index 3ea1553..db8a1f8 100644
--- a/lib/eal/include/generic/rte_pause.h
+++ b/lib/eal/include/generic/rte_pause.h
@@ -86,7 +86,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile uint16_t __rte_atomic *)addr, memorder)
+ != expected)
rte_pause();
}
@@ -96,7 +97,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile uint32_t __rte_atomic *)addr, memorder)
+ != expected)
rte_pause();
}
@@ -106,7 +108,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile uint64_t __rte_atomic *)addr, memorder)
+ != expected)
rte_pause();
}
diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index cf70e33..6c192f0 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -23,9 +23,10 @@
uint64_t val;
/* trigger a write but don't change the value */
- val = rte_atomic_load_explicit((volatile uint64_t *)addr, rte_memory_order_relaxed);
- rte_atomic_compare_exchange_strong_explicit((volatile uint64_t *)addr, &val, val,
- rte_memory_order_relaxed, rte_memory_order_relaxed);
+ val = rte_atomic_load_explicit((volatile uint64_t __rte_atomic *)addr,
+ rte_memory_order_relaxed);
+ rte_atomic_compare_exchange_strong_explicit((volatile uint64_t __rte_atomic *)addr,
+ &val, val, rte_memory_order_relaxed, rte_memory_order_relaxed);
}
static bool wait_supported;
--
1.8.3.1
^ permalink raw reply [relevance 2%]
* [PATCH 0/6] RFC optional rte optional stdatomics API
@ 2023-08-11 1:31 4% Tyler Retzlaff
2023-08-11 1:31 2% ` [PATCH 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
` (5 more replies)
0 siblings, 6 replies; 200+ results
From: Tyler Retzlaff @ 2023-08-11 1:31 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
This series introduces API additions prefixed in the rte namespace that allow
the optional use of C11 stdatomic.h via enable_stdatomics=true; for targets
built with enable_stdatomics=false, no functional change is intended.
Be aware this does not contain all changes needed to use stdatomics across the
DPDK tree; it only introduces the minimum to allow the option to be used,
which is a prerequisite for a clean CI (probably using clang) that can be run
with enable_stdatomics=true.
It is planned that subsequent series will be introduced per lib/driver as
appropriate to further enable stdatomics use when enable_stdatomics=true.
Notes:
* additional libraries beyond EAL that make atomics visible across the
API/ABI surface will be converted in the next series.
* the eal: add rte atomic qualifier with casts patch needs some discussion
as to whether or not the legacy rte_atomic APIs should be converted to
work with enable_stdatomic=true; right now some implementation-dependent
casts are used to prevent cascading / having to convert too much in
the initial series.
* windows will obviously need complete conversion of libraries, including
atomics that are not crossing API/ABI boundaries. Those conversions will be
introduced in separate series alongside the existing msvc series.
Please keep in mind we would like to prioritize the review / acceptance of
this patch since it needs to be completed in the 23.11 merge window.
Thank you all for the discussion that lead to the formation of this series.
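One way to picture the optional indirection this series provides is a header that maps a single set of prefixed names onto either C11 stdatomic.h or the GCC __atomic builtins, selected at build time. The macro names below are illustrative stand-ins, not the actual rte_stdatomic.h contents:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of the build-time selection: one spelling for
 * callers, two possible expansions. */
#ifdef USE_STDATOMIC /* enable_stdatomics=true */
#include <stdatomic.h>
#define my_memory_order_relaxed memory_order_relaxed
#define my_atomic_fetch_add_explicit(ptr, val, memorder) \
	atomic_fetch_add_explicit(ptr, val, memorder)
#else /* fall back to GCC __atomic builtins; no functional change */
#define my_memory_order_relaxed __ATOMIC_RELAXED
#define my_atomic_fetch_add_explicit(ptr, val, memorder) \
	__atomic_fetch_add(ptr, val, memorder)
#endif
```

With this shape, converting a library is a mechanical rename to the prefixed spellings, and flipping enable_stdatomics only changes which expansion the compiler sees.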
Tyler Retzlaff (6):
eal: provide rte stdatomics optional atomics API
eal: adapt EAL to present rte optional atomics API
eal: add rte atomic qualifier with casts
distributor: adapt for EAL optional atomics API changes
bpf: adapt for EAL optional atomics API changes
devtools: forbid new direct use of GCC atomic builtins
app/test/test_mcslock.c | 6 +-
config/meson.build | 1 +
config/rte_config.h | 1 +
devtools/checkpatches.sh | 8 ++
lib/bpf/bpf_pkt.c | 6 +-
lib/distributor/distributor_private.h | 2 +-
lib/distributor/rte_distributor_single.c | 44 ++++-----
lib/eal/arm/include/rte_atomic_64.h | 32 +++---
lib/eal/arm/include/rte_pause_64.h | 26 ++---
lib/eal/arm/rte_power_intrinsics.c | 8 +-
lib/eal/common/eal_common_trace.c | 16 +--
lib/eal/include/generic/rte_atomic.h | 66 ++++++++-----
lib/eal/include/generic/rte_pause.h | 41 ++++----
lib/eal/include/generic/rte_rwlock.h | 47 ++++-----
lib/eal/include/generic/rte_spinlock.h | 19 ++--
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_mcslock.h | 50 +++++-----
lib/eal/include/rte_pflock.h | 24 ++---
lib/eal/include/rte_seqcount.h | 18 ++--
lib/eal/include/rte_stdatomic.h | 162 +++++++++++++++++++++++++++++++
lib/eal/include/rte_ticketlock.h | 42 ++++----
lib/eal/include/rte_trace_point.h | 4 +-
lib/eal/ppc/include/rte_atomic.h | 50 +++++-----
lib/eal/x86/include/rte_atomic.h | 4 +-
lib/eal/x86/include/rte_spinlock.h | 2 +-
lib/eal/x86/rte_power_intrinsics.c | 7 +-
meson_options.txt | 1 +
27 files changed, 445 insertions(+), 243 deletions(-)
create mode 100644 lib/eal/include/rte_stdatomic.h
--
1.8.3.1
^ permalink raw reply [relevance 4%]
* RE: [PATCH v2] eventdev/eth_rx: update adapter create APIs
2023-08-10 8:07 0% ` Jerin Jacob
@ 2023-08-10 11:58 0% ` Naga Harish K, S V
0 siblings, 0 replies; 200+ results
From: Naga Harish K, S V @ 2023-08-10 11:58 UTC (permalink / raw)
To: Jerin Jacob; +Cc: dev, Jayatheerthan, Jay
Hi Jerin,
I am thinking of another approach for this patch.
Instead of changing all create APIs, update rte_event_eth_rx_adapter_create_ext() alone with additional parameters.
The rte_event_eth_rx_adapter_create() and rte_event_eth_rx_adapter_create_with_params() APIs will remain untouched.
How about this approach?
-Harish
> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Thursday, August 10, 2023 1:37 PM
> To: Naga Harish K, S V <s.v.naga.harish.k@intel.com>
> Cc: dev@dpdk.org; Jayatheerthan, Jay <jay.jayatheerthan@intel.com>
> Subject: Re: [PATCH v2] eventdev/eth_rx: update adapter create APIs
>
> On Thu, Aug 10, 2023 at 1:09 PM Naga Harish K, S V
> <s.v.naga.harish.k@intel.com> wrote:
> >
> > Hi Jerin,
> > As per DPDK Guidelines, API changes or ABI breakage is allowed during LTS
> releases
> >
> > (https://doc.dpdk.org/guides/contributing/abi_policy.html#abi-breakage
> > s)
>
> Yes, provided a deprecation notice has been sent and approved, and the changes
> are absolutely needed.
>
> >
> > Also, there are previous instances where API changes happened, some of them
> are mentioned below.
>
> These are not cases where existing APIs were removed and a prototype was
> changed to cover for the removed function.
>
> >
> > In DPDK 22.11, the cryptodev library had undergone the following API
> changes.
> > * rte_cryptodev_sym_session_create() and
> rte_cryptodev_asym_session_create() API parameters changed.
> > rte_cryptodev_sym_session_free() and rte_cryptodev_asym_session_free()
> API parameters changed.
> > rte_cryptodev_sym_session_init() and rte_cryptodev_asym_session_init()
> APIs are removed.
> >
> > * eventdev: The function ``rte_event_crypto_adapter_queue_pair_add`` was
> updated
> > to accept configuration of type ``rte_event_crypto_adapter_queue_conf``
> > instead of ``rte_event``,
> > similar to ``rte_event_eth_rx_adapter_queue_add`` signature.
> > Event will be one of the configuration fields,
> > together with additional vector parameters.
> >
> > Applications have to change to accommodate the above API changes.
> >
> > As discussed earlier, fewer adapter-create APIs are useful for the application
> design.
> > Please let us know your thoughts on the same.
>
>
> mempool has different variants of the create API. IMO, different variants of
> the _create API are OK, and the application can pick the correct one as needed.
> It is OK to break the API prototype if absolutely needed; in this case it is not.
^ permalink raw reply [relevance 0%]
* Re: [PATCH v2] eventdev/eth_rx: update adapter create APIs
2023-08-10 7:38 4% ` Naga Harish K, S V
@ 2023-08-10 8:07 0% ` Jerin Jacob
2023-08-10 11:58 0% ` Naga Harish K, S V
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2023-08-10 8:07 UTC (permalink / raw)
To: Naga Harish K, S V; +Cc: dev, Jayatheerthan, Jay
On Thu, Aug 10, 2023 at 1:09 PM Naga Harish K, S V
<s.v.naga.harish.k@intel.com> wrote:
>
> Hi Jerin,
> As per DPDK Guidelines, API changes or ABI breakage is allowed during LTS releases
> (https://doc.dpdk.org/guides/contributing/abi_policy.html#abi-breakages)
Yes, provided a deprecation notice has been sent and approved, and the
changes are absolutely needed.
>
> Also, there are previous instances where API changes happened, some of them are mentioned below.
These are not cases where existing APIs were removed and a prototype
was changed to cover for the removed function.
>
> In DPDK 22.11, the cryptodev library had undergone the following API changes.
> * rte_cryptodev_sym_session_create() and rte_cryptodev_asym_session_create() API parameters changed.
> rte_cryptodev_sym_session_free() and rte_cryptodev_asym_session_free() API parameters changed.
> rte_cryptodev_sym_session_init() and rte_cryptodev_asym_session_init() APIs are removed.
>
> * eventdev: The function ``rte_event_crypto_adapter_queue_pair_add`` was updated
> to accept configuration of type ``rte_event_crypto_adapter_queue_conf``
> instead of ``rte_event``,
> similar to ``rte_event_eth_rx_adapter_queue_add`` signature.
> Event will be one of the configuration fields,
> together with additional vector parameters.
>
> Applications have to change to accommodate the above API changes.
>
> As discussed earlier, fewer adapter-create APIs are useful for the application design.
> Please let us know your thoughts on the same.
mempool has different variants of the create API. IMO, different variants
of the _create API are OK, and the application can pick the correct one
as needed.
It is OK to break the API prototype if absolutely needed; in this case
it is not.
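The design choice described here, keeping an existing prototype stable and offering a specialized variant that carries the extra configuration, can be sketched as follows (all types and names are illustrative, not the real eventdev API):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins, not the real eventdev structures. */
struct adapter_params {
	unsigned int event_buf_size;
};

struct adapter {
	unsigned int event_buf_size;
};

static struct adapter g_adapter;

/* Specialized variant: callers that need extra configuration use this. */
static int
adapter_create_with_params(const struct adapter_params *params)
{
	g_adapter.event_buf_size =
		(params != NULL) ? params->event_buf_size : 1024;
	return 0;
}

/* The original API keeps its prototype, so existing callers rebuild
 * against a newer release without source changes or #ifdef version
 * checks; internally it delegates to the specialized variant. */
static int
adapter_create(void)
{
	return adapter_create_with_params(NULL);
}
```

This mirrors the mempool precedent Jerin cites: multiple create variants coexist, and no established prototype changes underneath existing applications.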
^ permalink raw reply [relevance 0%]
* RE: [PATCH v2] eventdev/eth_rx: update adapter create APIs
@ 2023-08-10 7:38 4% ` Naga Harish K, S V
2023-08-10 8:07 0% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Naga Harish K, S V @ 2023-08-10 7:38 UTC (permalink / raw)
To: Jerin Jacob; +Cc: dev, Jayatheerthan, Jay
Hi Jerin,
As per DPDK Guidelines, API changes or ABI breakage is allowed during LTS releases
(https://doc.dpdk.org/guides/contributing/abi_policy.html#abi-breakages)
Also, there are previous instances where API changes happened, some of them are mentioned below.
In DPDK 22.11, the cryptodev library had undergone the following API changes.
* rte_cryptodev_sym_session_create() and rte_cryptodev_asym_session_create() API parameters changed.
rte_cryptodev_sym_session_free() and rte_cryptodev_asym_session_free() API parameters changed.
rte_cryptodev_sym_session_init() and rte_cryptodev_asym_session_init() APIs are removed.
* eventdev: The function ``rte_event_crypto_adapter_queue_pair_add`` was updated
to accept configuration of type ``rte_event_crypto_adapter_queue_conf``
instead of ``rte_event``,
similar to ``rte_event_eth_rx_adapter_queue_add`` signature.
Event will be one of the configuration fields,
together with additional vector parameters.
Applications have to change to accommodate the above API changes.
As discussed earlier, fewer adapter-create APIs are useful for the application design.
Please let us know your thoughts on the same.
-Harish
> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Wednesday, August 2, 2023 9:42 PM
> To: Naga Harish K, S V <s.v.naga.harish.k@intel.com>
> Cc: dev@dpdk.org; Jayatheerthan, Jay <jay.jayatheerthan@intel.com>
> Subject: Re: [PATCH v2] eventdev/eth_rx: update adapter create APIs
>
> On Wed, Aug 2, 2023 at 7:58 PM Naga Harish K, S V
> <s.v.naga.harish.k@intel.com> wrote:
> >
> > Hi Jerin,
>
>
> Hi Harish,
>
> >
> > The API “rte_event_eth_rx_adapter_create_with_params()” is an extension to
> rte_event_eth_rx_adapter_create() with an additional adapter configuration
> params structure.
> > There is no equivalent API existing today for the
> “rte_event_eth_rx_adapter_create_ext()” API which takes additional adapter
> params.
> > There are use cases where create_ext() version of create API with additional
> parameters is needed. We may need to have one more adapter create API for
> this.
> > That makes so many Adapter create APIs (4 in number) and will be confusing
> for the user.
> >
> > That's why proposed the following changes to the Rx adapter create APIs
> which will consolidate the create APIs to 2 in number with all possible
> combinations.
> > The applications that are currently using
> > rte_event_eth_rx_adapter_create_ext() or rte_event_eth_rx_adapter_create()
> APIs for creating Rx Adapter can pass NULL argument for the newly added
> argument which will behave the same as before.
> >
> Trying to understand what the concerns are from your perspective with this
> consolidated API approach.
>
> If a single application code base needs to support both versions of DPDK, then
> it needs #ifdef clutter based on DPDK version checks, as we are changing the
> function prototype.
> IMO, we should change an API prototype only as a last resort. It is quite common
> to have two API versions of a single operation with more specialized parameters.
>
>
>
> >
> > -Harish
> >
> > > -----Original Message-----
> > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > Sent: Tuesday, August 1, 2023 8:54 PM
> > > To: Naga Harish K, S V <s.v.naga.harish.k@intel.com>
> > > Cc: dev@dpdk.org; Jayatheerthan, Jay <jay.jayatheerthan@intel.com>
> > > Subject: Re: [PATCH v2] eventdev/eth_rx: update adapter create APIs
> > >
> > > On Tue, Aug 1, 2023 at 7:22 PM Naga Harish K S V
> > > <s.v.naga.harish.k@intel.com> wrote:
> > > >
> > > > The adapter create APIs such as
> > > > rte_event_eth_rx_adapter_create_ext()
> > > > and
> > > > rte_event_eth_rx_adapter_create() are updated to take additional
> > > > argument as a pointer of type struct rte_event_eth_rx_adapter_params.
> > > >
> > > > The API rte_event_eth_rx_adapter_create_with_params() is deprecated.
> > > >
> > > > Signed-off-by: Naga Harish K S V <s.v.naga.harish.k@intel.com>
> > >
> > > Please check v1 comment
> > > http://mails.dpdk.org/archives/dev/2023-August/273602.html
> > >
> > > > ---
> > > > v2:
> > > > * Fix doxygen compile issue and warning
> > > > ---
> > > > ---
> > > > app/test-eventdev/test_perf_common.c | 2 +-
> > > > app/test-eventdev/test_pipeline_common.c | 2 +-
> > > > app/test/test_event_eth_rx_adapter.c | 22 ++--
> > > > app/test/test_security_inline_proto.c | 2 +-
> > > > .../pipeline_worker_generic.c | 2 +-
> > > > .../eventdev_pipeline/pipeline_worker_tx.c | 2 +-
> > > > examples/ipsec-secgw/event_helper.c | 2 +-
> > > > examples/l2fwd-event/l2fwd_event_generic.c | 2 +-
> > > > .../l2fwd-event/l2fwd_event_internal_port.c | 2 +-
> > > > examples/l3fwd/l3fwd_event_generic.c | 2 +-
> > > > examples/l3fwd/l3fwd_event_internal_port.c | 2 +-
> > > > lib/eventdev/rte_event_eth_rx_adapter.c | 100 ++++++++----------
> > > > lib/eventdev/rte_event_eth_rx_adapter.h | 36 ++-----
> > > > lib/eventdev/version.map | 1 -
> > > > 14 files changed, 74 insertions(+), 105 deletions(-)
> > > >
> > > > diff --git a/app/test-eventdev/test_perf_common.c
> > > > b/app/test-eventdev/test_perf_common.c
> > > > index 5e0255cfeb..0c6c252f7d 100644
> > > > --- a/app/test-eventdev/test_perf_common.c
> > > > +++ b/app/test-eventdev/test_perf_common.c
> > > > @@ -1002,7 +1002,7 @@ perf_event_rx_adapter_setup(struct
> > > evt_options *opt, uint8_t stride,
> > > > }
> > > > queue_conf.ev.queue_id = prod * stride;
> > > > ret = rte_event_eth_rx_adapter_create(prod, opt->dev_id,
> > > > - &prod_conf);
> > > > + &prod_conf, NULL);
> > > > if (ret) {
> > > > evt_err("failed to create rx adapter[%d]", prod);
> > > > return ret; diff --git
> > > > a/app/test-eventdev/test_pipeline_common.c
> > > > b/app/test-eventdev/test_pipeline_common.c
> > > > index b111690b7c..5ae175f2c7 100644
> > > > --- a/app/test-eventdev/test_pipeline_common.c
> > > > +++ b/app/test-eventdev/test_pipeline_common.c
> > > > @@ -571,7 +571,7 @@ pipeline_event_rx_adapter_setup(struct
> > > evt_options *opt, uint8_t stride,
> > > > }
> > > > queue_conf.ev.queue_id = prod * stride;
> > > > ret = rte_event_eth_rx_adapter_create(prod, opt->dev_id,
> > > > - &prod_conf);
> > > > + &prod_conf, NULL);
> > > > if (ret) {
> > > > evt_err("failed to create rx adapter[%d]", prod);
> > > > return ret; diff --git
> > > > a/app/test/test_event_eth_rx_adapter.c
> > > > b/app/test/test_event_eth_rx_adapter.c
> > > > index 52d146f97c..42edcb625a 100644
> > > > --- a/app/test/test_event_eth_rx_adapter.c
> > > > +++ b/app/test/test_event_eth_rx_adapter.c
> > > > @@ -401,7 +401,7 @@ adapter_create(void)
> > > > rx_p_conf.dequeue_depth =
> > > dev_info.max_event_port_dequeue_depth;
> > > > rx_p_conf.enqueue_depth =
> > > dev_info.max_event_port_enqueue_depth;
> > > > err = rte_event_eth_rx_adapter_create(TEST_INST_ID, TEST_DEV_ID,
> > > > - &rx_p_conf);
> > > > + &rx_p_conf, NULL);
> > > > TEST_ASSERT(err == 0, "Expected 0 got %d", err);
> > > >
> > > > return err;
> > > > @@ -427,17 +427,17 @@ adapter_create_with_params(void)
> > > > rxa_params.use_queue_event_buf = false;
> > > > rxa_params.event_buf_size = 0;
> > > >
> > > > - err = rte_event_eth_rx_adapter_create_with_params(TEST_INST_ID,
> > > > + err = rte_event_eth_rx_adapter_create(TEST_INST_ID,
> > > > TEST_DEV_ID, &rx_p_conf, &rxa_params);
> > > > TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d",
> > > > err);
> > > >
> > > > rxa_params.use_queue_event_buf = true;
> > > >
> > > > - err = rte_event_eth_rx_adapter_create_with_params(TEST_INST_ID,
> > > > + err = rte_event_eth_rx_adapter_create(TEST_INST_ID,
> > > > TEST_DEV_ID, &rx_p_conf, &rxa_params);
> > > > TEST_ASSERT(err == 0, "Expected 0 got %d", err);
> > > >
> > > > - err = rte_event_eth_rx_adapter_create_with_params(TEST_INST_ID,
> > > > + err = rte_event_eth_rx_adapter_create(TEST_INST_ID,
> > > > TEST_DEV_ID, &rx_p_conf, &rxa_params);
> > > > TEST_ASSERT(err == -EEXIST, "Expected -EEXIST got %d",
> > > > err);
> > > >
> > > > @@ -567,15 +567,15 @@ adapter_create_free(void)
> > > > };
> > > >
> > > > err = rte_event_eth_rx_adapter_create(TEST_INST_ID, TEST_DEV_ID,
> > > > - NULL);
> > > > + NULL, NULL);
> > > > TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d",
> > > > err);
> > > >
> > > > err = rte_event_eth_rx_adapter_create(TEST_INST_ID, TEST_DEV_ID,
> > > > - &rx_p_conf);
> > > > + &rx_p_conf, NULL);
> > > > TEST_ASSERT(err == 0, "Expected 0 got %d", err);
> > > >
> > > > err = rte_event_eth_rx_adapter_create(TEST_INST_ID,
> > > > - TEST_DEV_ID, &rx_p_conf);
> > > > + TEST_DEV_ID, &rx_p_conf,
> > > > + NULL);
> > > > TEST_ASSERT(err == -EEXIST, "Expected -EEXIST %d got %d",
> > > > -EEXIST, err);
> > > >
> > > > err = rte_event_eth_rx_adapter_free(TEST_INST_ID);
> > > > @@ -605,20 +605,20 @@ adapter_create_free_with_params(void)
> > > > .event_buf_size = 1024
> > > > };
> > > >
> > > > - err = rte_event_eth_rx_adapter_create_with_params(TEST_INST_ID,
> > > > + err = rte_event_eth_rx_adapter_create(TEST_INST_ID,
> > > > TEST_DEV_ID, NULL, NULL);
> > > > TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d",
> > > > err);
> > > >
> > > > - err = rte_event_eth_rx_adapter_create_with_params(TEST_INST_ID,
> > > > + err = rte_event_eth_rx_adapter_create(TEST_INST_ID,
> > > > TEST_DEV_ID, &rx_p_conf, &rxa_params);
> > > > TEST_ASSERT(err == 0, "Expected 0 got %d", err);
> > > >
> > > > - err = rte_event_eth_rx_adapter_create_with_params(TEST_INST_ID,
> > > > + err = rte_event_eth_rx_adapter_create(TEST_INST_ID,
> > > > TEST_DEV_ID, &rx_p_conf, &rxa_params);
> > > > TEST_ASSERT(err == -EEXIST, "Expected -EEXIST %d got %d",
> > > > -EEXIST, err);
> > > >
> > > > rxa_params.event_buf_size = 0;
> > > > - err = rte_event_eth_rx_adapter_create_with_params(TEST_INST_ID,
> > > > + err = rte_event_eth_rx_adapter_create(TEST_INST_ID,
> > > > TEST_DEV_ID, &rx_p_conf, &rxa_params);
> > > > TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d",
> > > > err);
> > > >
> > > > diff --git a/app/test/test_security_inline_proto.c
> > > > b/app/test/test_security_inline_proto.c
> > > > index 45aa742c6b..fc240201a3 100644
> > > > --- a/app/test/test_security_inline_proto.c
> > > > +++ b/app/test/test_security_inline_proto.c
> > > > @@ -1872,7 +1872,7 @@ event_inline_ipsec_testsuite_setup(void)
> > > >
> > > > /* Create Rx adapter */
> > > > ret = rte_event_eth_rx_adapter_create(rx_adapter_id, eventdev_id,
> > > > - &ev_port_conf);
> > > > + &ev_port_conf, NULL);
> > > > if (ret < 0) {
> > > > printf("Failed to create rx adapter %d\n", ret);
> > > > return ret;
> > > > diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
> > > > index 783f68c91e..74510338ba 100644
> > > > --- a/examples/eventdev_pipeline/pipeline_worker_generic.c
> > > > +++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
> > > > @@ -436,7 +436,7 @@ init_adapters(uint16_t nb_ports)
> > > > init_ports(nb_ports);
> > > > /* Create one adapter for all the ethernet ports. */
> > > > 	ret = rte_event_eth_rx_adapter_create(cdata.rx_adapter_id, evdev_id,
> > > > - &adptr_p_conf);
> > > > + &adptr_p_conf, NULL);
> > > > if (ret)
> > > > rte_exit(EXIT_FAILURE, "failed to create rx adapter[%d]",
> > > > 			cdata.rx_adapter_id);
> > > > diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
> > > > index 98a52f3892..88619d6c2e 100644
> > > > --- a/examples/eventdev_pipeline/pipeline_worker_tx.c
> > > > +++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
> > > > @@ -793,7 +793,7 @@ init_adapters(uint16_t nb_ports)
> > > > uint32_t service_id;
> > > >
> > > > ret = rte_event_eth_rx_adapter_create(i, evdev_id,
> > > > - &adptr_p_conf);
> > > > + &adptr_p_conf, NULL);
> > > > if (ret)
> > > > rte_exit(EXIT_FAILURE,
> > > > 			"failed to create rx adapter[%d]", i);
> > > > diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c
> > > > index 89fb7e62a5..28d6778134 100644
> > > > --- a/examples/ipsec-secgw/event_helper.c
> > > > +++ b/examples/ipsec-secgw/event_helper.c
> > > > @@ -1035,7 +1035,7 @@ eh_rx_adapter_configure(struct eventmode_conf *em_conf,
> > > >
> > > > /* Create Rx adapter */
> > > > ret = rte_event_eth_rx_adapter_create(adapter->adapter_id,
> > > > - adapter->eventdev_id, &port_conf);
> > > > + adapter->eventdev_id, &port_conf, NULL);
> > > > if (ret < 0) {
> > > > EH_LOG_ERR("Failed to create rx adapter %d", ret);
> > > > return ret;
> > > > diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
> > > > index 1977e23261..4360b20aa0 100644
> > > > --- a/examples/l2fwd-event/l2fwd_event_generic.c
> > > > +++ b/examples/l2fwd-event/l2fwd_event_generic.c
> > > > @@ -235,7 +235,7 @@ l2fwd_rx_tx_adapter_setup_generic(struct l2fwd_resources *rsrc)
> > > > }
> > > >
> > > > ret = rte_event_eth_rx_adapter_create(rx_adptr_id, event_d_id,
> > > > - &evt_rsrc->def_p_conf);
> > > > +						&evt_rsrc->def_p_conf, NULL);
> > > > if (ret)
> > > > rte_panic("Failed to create rx adapter\n");
> > > >
> > > > diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
> > > > index 717a7bceb8..542890f354 100644
> > > > --- a/examples/l2fwd-event/l2fwd_event_internal_port.c
> > > > +++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
> > > > @@ -253,7 +253,7 @@ l2fwd_rx_tx_adapter_setup_internal_port(struct l2fwd_resources *rsrc)
> > > > }
> > > >
> > > > ret = rte_event_eth_rx_adapter_create(adapter_id, event_d_id,
> > > > - &evt_rsrc->def_p_conf);
> > > > +						&evt_rsrc->def_p_conf, NULL);
> > > > if (ret)
> > > > rte_panic("Failed to create rx adapter[%d]\n",
> > > > 			adapter_id);
> > > > diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
> > > > index c80573fc58..88e7af538e 100644
> > > > --- a/examples/l3fwd/l3fwd_event_generic.c
> > > > +++ b/examples/l3fwd/l3fwd_event_generic.c
> > > > @@ -217,7 +217,7 @@ l3fwd_rx_tx_adapter_setup_generic(void)
> > > > }
> > > >
> > > > ret = rte_event_eth_rx_adapter_create(rx_adptr_id, event_d_id,
> > > > - &evt_rsrc->def_p_conf);
> > > > +						&evt_rsrc->def_p_conf, NULL);
> > > > if (ret)
> > > > rte_panic("Failed to create rx adapter\n");
> > > >
> > > > diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
> > > > index 32cf657148..dc8b5013cb 100644
> > > > --- a/examples/l3fwd/l3fwd_event_internal_port.c
> > > > +++ b/examples/l3fwd/l3fwd_event_internal_port.c
> > > > @@ -246,7 +246,7 @@ l3fwd_rx_tx_adapter_setup_internal_port(void)
> > > > }
> > > >
> > > > ret = rte_event_eth_rx_adapter_create(adapter_id, event_d_id,
> > > > - &evt_rsrc->def_p_conf);
> > > > +						&evt_rsrc->def_p_conf, NULL);
> > > > if (ret)
> > > > rte_panic("Failed to create rx adapter[%d]\n",
> > > > 			adapter_id);
> > > > diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
> > > > index f7f93ccdfd..ce203a5e4b 100644
> > > > --- a/lib/eventdev/rte_event_eth_rx_adapter.c
> > > > +++ b/lib/eventdev/rte_event_eth_rx_adapter.c
> > > > @@ -2485,90 +2485,78 @@ rxa_create(uint8_t id, uint8_t dev_id,
> > > > return 0;
> > > > }
> > > >
> > > > -int
> > > > -rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id,
> > > > - rte_event_eth_rx_adapter_conf_cb conf_cb,
> > > > - void *conf_arg)
> > > > +static int __rte_cold
> > > > +rxa_config_params_validate(struct rte_event_eth_rx_adapter_params *rxa_params,
> > > > +		struct rte_event_eth_rx_adapter_params *temp_params)
> > > > {
> > > > - struct rte_event_eth_rx_adapter_params rxa_params = {0};
> > > > + if (rxa_params == NULL) {
> > > > + /* use default values if rxa_params is NULL */
> > > > + temp_params->event_buf_size = ETH_EVENT_BUFFER_SIZE;
> > > > + temp_params->use_queue_event_buf = false;
> > > > + } else if (!rxa_params->use_queue_event_buf &&
> > > > + rxa_params->event_buf_size == 0) {
> > > > + RTE_EDEV_LOG_ERR("event buffer size can't be zero\n");
> > > > + return -EINVAL;
> > > > + } else if (rxa_params->use_queue_event_buf &&
> > > > + rxa_params->event_buf_size != 0) {
> > > > + RTE_EDEV_LOG_ERR("event buffer size needs to be configured "
> > > > + "as part of queue add\n");
> > > > + return -EINVAL;
> > > > + }
> > > >
> > > > - /* use default values for adapter params */
> > > > - rxa_params.event_buf_size = ETH_EVENT_BUFFER_SIZE;
> > > > - rxa_params.use_queue_event_buf = false;
> > > > + *temp_params = *rxa_params;
> > > > + /* adjust event buff size with BATCH_SIZE used for fetching
> > > > + * packets from NIC rx queues to get full buffer utilization
> > > > + * and prevent unnecessary rollovers.
> > > > + */
> > > > + if (!temp_params->use_queue_event_buf) {
> > > > + temp_params->event_buf_size =
> > > > + RTE_ALIGN(temp_params->event_buf_size, BATCH_SIZE);
> > > > + temp_params->event_buf_size += (BATCH_SIZE + BATCH_SIZE);
> > > > + }
> > > >
> > > > - return rxa_create(id, dev_id, &rxa_params, conf_cb, conf_arg);
> > > > + return 0;
> > > > }
> > > >
> > > > int
> > > > -rte_event_eth_rx_adapter_create_with_params(uint8_t id, uint8_t dev_id,
> > > > - struct rte_event_port_conf *port_config,
> > > > +rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id,
> > > > + rte_event_eth_rx_adapter_conf_cb conf_cb,
> > > > + void *conf_arg,
> > > > 		struct rte_event_eth_rx_adapter_params *rxa_params)
> > > >  {
> > > > - struct rte_event_port_conf *pc;
> > > > - int ret;
> > > > struct rte_event_eth_rx_adapter_params temp_params = {0};
> > > > + int ret;
> > > >
> > > > - if (port_config == NULL)
> > > > - return -EINVAL;
> > > > -
> > > > - if (rxa_params == NULL) {
> > > > - /* use default values if rxa_params is NULL */
> > > > - rxa_params = &temp_params;
> > > > - rxa_params->event_buf_size = ETH_EVENT_BUFFER_SIZE;
> > > > - rxa_params->use_queue_event_buf = false;
> > > > - } else if ((!rxa_params->use_queue_event_buf &&
> > > > - rxa_params->event_buf_size == 0) ||
> > > > - (rxa_params->use_queue_event_buf &&
> > > > - rxa_params->event_buf_size != 0)) {
> > > > - RTE_EDEV_LOG_ERR("Invalid adapter params\n");
> > > > - return -EINVAL;
> > > > - } else if (!rxa_params->use_queue_event_buf) {
> > > > - /* adjust event buff size with BATCH_SIZE used for fetching
> > > > - * packets from NIC rx queues to get full buffer utilization
> > > > - * and prevent unnecessary rollovers.
> > > > - */
> > > > -
> > > > - rxa_params->event_buf_size =
> > > > - RTE_ALIGN(rxa_params->event_buf_size, BATCH_SIZE);
> > > > - rxa_params->event_buf_size += (BATCH_SIZE + BATCH_SIZE);
> > > > - }
> > > > -
> > > > - pc = rte_malloc(NULL, sizeof(*pc), 0);
> > > > - if (pc == NULL)
> > > > - return -ENOMEM;
> > > > -
> > > > - *pc = *port_config;
> > > > -
> > > > - ret = rxa_create(id, dev_id, rxa_params, rxa_default_conf_cb, pc);
> > > > - if (ret)
> > > > - rte_free(pc);
> > > > -
> > > > - rte_eventdev_trace_eth_rx_adapter_create_with_params(id, dev_id,
> > > > - port_config, rxa_params, ret);
> > > > + ret = rxa_config_params_validate(rxa_params, &temp_params);
> > > > + if (ret != 0)
> > > > + return ret;
> > > >
> > > > - return ret;
> > > > +	return rxa_create(id, dev_id, &temp_params, conf_cb, conf_arg);
> > > > }
> > > >
> > > > int
> > > > rte_event_eth_rx_adapter_create(uint8_t id, uint8_t dev_id,
> > > > - struct rte_event_port_conf *port_config)
> > > > + struct rte_event_port_conf *port_config,
> > > > +	struct rte_event_eth_rx_adapter_params *rxa_params)
> > > > {
> > > > struct rte_event_port_conf *pc;
> > > > int ret;
> > > > + struct rte_event_eth_rx_adapter_params temp_params = {0};
> > > >
> > > > if (port_config == NULL)
> > > > return -EINVAL;
> > > >
> > > > - RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
> > > > + ret = rxa_config_params_validate(rxa_params, &temp_params);
> > > > + if (ret != 0)
> > > > + return ret;
> > > >
> > > > pc = rte_malloc(NULL, sizeof(*pc), 0);
> > > > if (pc == NULL)
> > > > return -ENOMEM;
> > > > +
> > > > *pc = *port_config;
> > > >
> > > > - ret = rte_event_eth_rx_adapter_create_ext(id, dev_id,
> > > > - rxa_default_conf_cb,
> > > > - pc);
> > > > + ret = rxa_create(id, dev_id, &temp_params,
> > > > + rxa_default_conf_cb, pc);
> > > > if (ret)
> > > > rte_free(pc);
> > > > return ret;
> > > > diff --git a/lib/eventdev/rte_event_eth_rx_adapter.h b/lib/eventdev/rte_event_eth_rx_adapter.h
> > > > index fe2a6bdd2c..793e3cedad 100644
> > > > --- a/lib/eventdev/rte_event_eth_rx_adapter.h
> > > > +++ b/lib/eventdev/rte_event_eth_rx_adapter.h
> > > > @@ -26,7 +26,6 @@
> > > > * The ethernet Rx event adapter's functions are:
> > > > * - rte_event_eth_rx_adapter_create_ext()
> > > > * - rte_event_eth_rx_adapter_create()
> > > > - * - rte_event_eth_rx_adapter_create_with_params()
> > > > * - rte_event_eth_rx_adapter_free()
> > > > * - rte_event_eth_rx_adapter_queue_add()
> > > > * - rte_event_eth_rx_adapter_queue_del()
> > > > @@ -45,7 +44,7 @@
> > > > *
> > > > * The application creates an ethernet to event adapter using
> > > > * rte_event_eth_rx_adapter_create_ext() or rte_event_eth_rx_adapter_create()
> > > > - * or rte_event_eth_rx_adapter_create_with_params() functions.
> > > > + * functions.
> > > > *
> > > > * The adapter needs to know which ethernet rx queues to poll for mbufs as well
> > > > * as event device parameters such as the event queue identifier, event
> > > > @@ -394,13 +393,18 @@ typedef uint16_t (*rte_event_eth_rx_adapter_cb_fn)(uint16_t eth_dev_id,
> > > > * @param conf_arg
> > > > * Argument that is passed to the conf_cb function.
> > > > *
> > > > + * @param rxa_params
> > > > + * Pointer to struct rte_event_eth_rx_adapter_params.
> > > > + * In case of NULL, default values are used.
> > > > + *
> > > > * @return
> > > > * - 0: Success
> > > > * - <0: Error code on failure
> > > > */
> > > > int rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id,
> > > > - rte_event_eth_rx_adapter_conf_cb conf_cb,
> > > > - void *conf_arg);
> > > > + rte_event_eth_rx_adapter_conf_cb conf_cb,
> > > > + void *conf_arg,
> > > > +	struct rte_event_eth_rx_adapter_params *rxa_params);
> > > >
> > > > /**
> > > > * Create a new ethernet Rx event adapter with the specified identifier.
> > > > @@ -435,27 +439,6 @@ int rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id,
> > > > * Argument of type *rte_event_port_conf* that is passed to the conf_cb
> > > > * function.
> > > > *
> > > > - * @return
> > > > - * - 0: Success
> > > > - * - <0: Error code on failure
> > > > - */
> > > > -int rte_event_eth_rx_adapter_create(uint8_t id, uint8_t dev_id,
> > > > - struct rte_event_port_conf *port_config);
> > > > -
> > > > -/**
> > > > - * This is a variant of rte_event_eth_rx_adapter_create() with
> > > > additional
> > > > - * adapter params specified in ``struct rte_event_eth_rx_adapter_params``.
> > > > - *
> > > > - * @param id
> > > > - * The identifier of the ethernet Rx event adapter.
> > > > - *
> > > > - * @param dev_id
> > > > - * The identifier of the event device to configure.
> > > > - *
> > > > - * @param port_config
> > > > - * Argument of type *rte_event_port_conf* that is passed to the conf_cb
> > > > - * function.
> > > > - *
> > > > * @param rxa_params
> > > > * Pointer to struct rte_event_eth_rx_adapter_params.
> > > > * In case of NULL, default values are used.
> > > > @@ -464,8 +447,7 @@ int rte_event_eth_rx_adapter_create(uint8_t id, uint8_t dev_id,
> > > > * - 0: Success
> > > > * - <0: Error code on failure
> > > > */
> > > > -__rte_experimental
> > > > -int rte_event_eth_rx_adapter_create_with_params(uint8_t id, uint8_t dev_id,
> > > > +int rte_event_eth_rx_adapter_create(uint8_t id, uint8_t dev_id,
> > > > struct rte_event_port_conf *port_config,
> > > > 		struct rte_event_eth_rx_adapter_params *rxa_params);
> > > >
> > > > diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
> > > > index b03c10d99f..1cf58f0d6c 100644
> > > > --- a/lib/eventdev/version.map
> > > > +++ b/lib/eventdev/version.map
> > > > @@ -101,7 +101,6 @@ EXPERIMENTAL {
> > > > global:
> > > >
> > > > # added in 21.11
> > > > - rte_event_eth_rx_adapter_create_with_params;
> > > > rte_event_eth_rx_adapter_queue_conf_get;
> > > > rte_event_eth_rx_adapter_queue_stats_get;
> > > > rte_event_eth_rx_adapter_queue_stats_reset;
> > > > --
> > > > 2.25.1
> > > >
* [PATCH v8 0/3] Split logging functionality out of EAL
2023-07-31 10:17 3% ` [PATCH v6 0/3] Split logging functionality " Bruce Richardson
2023-07-31 15:38 4% ` [PATCH v7 " Bruce Richardson
@ 2023-08-09 13:35 3% ` Bruce Richardson
2023-08-11 12:46 4% ` David Marchand
2 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2023-08-09 13:35 UTC (permalink / raw)
To: dev; +Cc: david.marchand, Bruce Richardson
There is a general desire to reduce the size and scope of EAL. To this
end, this patchset makes a (very) small step in that direction by taking
the logging functionality out of EAL and putting it into its own library
that can be built and maintained separately.
As with the first RFC for this, the main obstacle is the "fnmatch"
function, which is needed by both EAL and the new log library when
building on Windows. The function cannot stay in EAL - or we would
have a circular dependency - but moving it to a new library, or just
putting it in the log library, has the disadvantage that it then
"leaks" into the public namespace without an rte_ prefix, which could
cause issues. Since only a single function is involved, subsequent
versions take a different approach to v1, and just move the offending
function to be a static function in a header file. This allows use by
multiple libraries without conflicting names or making it public.
The other complication, as explained in the v1 RFC, was that of
multiple implementations for different OSes. This is solved here in
the same way as in v1: by including the OS in the file name and having
meson pick the correct file for each build. Since only one file is
involved, there seemed little need to replicate EAL's separate per-OS
subdirectories.
V8:
* Added "inline" to static functions in fnmatch header
* Removed SCCS tag as unneeded carryover from .c file
* Corrected doc cross-references and headers
* Added maintainers entry
V7:
* re-submit to re-run CI with ABI checks disabled
v6:
* Updated ABI version to DPDK_24 for new log library for 23.11 release.
v5:
* rebased to latest main branch
* fixed trailing whitespace issues in new doc section
v4:
* Fixed windows build error, due to missing strdup (_strdup on windows)
* Added doc updates to programmers guide.
v3:
* Fixed missing log file for BSD
* Removed "eal" from the filenames of files in the log directory
* added prefixes to elements in the fnmatch header to avoid conflicts
* fixed space indentation in new lines in telemetry.c (checkpatch)
* removed "extern int logtype" definition in telemetry.c (checkpatch)
* added log directory to list for doxygen scanning
Bruce Richardson (3):
eal/windows: move fnmatch function to header file
log: separate logging functions out of EAL
telemetry: use standard logging
MAINTAINERS | 6 +-
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/env_abstraction_layer.rst | 4 +-
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/log_lib.rst | 113 ++++++++++++
lib/eal/common/eal_common_options.c | 2 +-
lib/eal/common/eal_private.h | 7 -
lib/eal/common/meson.build | 1 -
lib/eal/freebsd/eal.c | 6 +-
lib/eal/include/meson.build | 1 -
lib/eal/linux/eal.c | 8 +-
lib/eal/linux/meson.build | 1 -
lib/eal/meson.build | 2 +-
lib/eal/version.map | 17 --
lib/eal/windows/eal.c | 2 +-
lib/eal/windows/fnmatch.c | 172 ------------------
lib/eal/windows/include/fnmatch.h | 169 +++++++++++++++--
lib/eal/windows/meson.build | 2 -
lib/kvargs/meson.build | 3 +-
.../common/eal_common_log.c => log/log.c} | 7 +-
lib/log/log_freebsd.c | 12 ++
.../common/eal_log.h => log/log_internal.h} | 18 +-
lib/{eal/linux/eal_log.c => log/log_linux.c} | 2 +-
.../windows/eal_log.c => log/log_windows.c} | 2 +-
lib/log/meson.build | 9 +
lib/{eal/include => log}/rte_log.h | 0
lib/log/version.map | 34 ++++
lib/meson.build | 1 +
lib/telemetry/meson.build | 3 +-
lib/telemetry/telemetry.c | 11 +-
lib/telemetry/telemetry_internal.h | 3 +-
31 files changed, 367 insertions(+), 253 deletions(-)
create mode 100644 doc/guides/prog_guide/log_lib.rst
delete mode 100644 lib/eal/windows/fnmatch.c
rename lib/{eal/common/eal_common_log.c => log/log.c} (99%)
create mode 100644 lib/log/log_freebsd.c
rename lib/{eal/common/eal_log.h => log/log_internal.h} (69%)
rename lib/{eal/linux/eal_log.c => log/log_linux.c} (97%)
rename lib/{eal/windows/eal_log.c => log/log_windows.c} (93%)
create mode 100644 lib/log/meson.build
rename lib/{eal/include => log}/rte_log.h (100%)
create mode 100644 lib/log/version.map
--
2.39.2
* RE: [PATCH] ethdev: add new symmetric hash function
2023-08-08 1:43 3% ` fengchengwen
@ 2023-08-09 12:00 0% ` Xueming(Steven) Li
0 siblings, 0 replies; 200+ results
From: Xueming(Steven) Li @ 2023-08-09 12:00 UTC (permalink / raw)
To: fengchengwen, Ivan Malov; +Cc: Ori Kam, dev
> -----Original Message-----
> From: fengchengwen <fengchengwen@huawei.com>
> Sent: 8/8/2023 9:43
> To: Ivan Malov <ivan.malov@arknetworks.am>; Xueming(Steven) Li
> <xuemingl@nvidia.com>
> Cc: Ori Kam <orika@nvidia.com>; dev@dpdk.org
> Subject: Re: [PATCH] ethdev: add new symmetric hash function
>
> On 2023/8/8 6:32, Ivan Malov wrote:
> > Hi,
> >
> > Please see my notes below.
> >
> > On Mon, 7 Aug 2023, Xueming Li wrote:
> >
> >> The new symmetric hash function swaps the src/dst L3 addresses and
> >> L4 ports automatically by sorting them.
> >>
> >> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> >> ---
> >> lib/ethdev/rte_flow.h | 5 +++++
> >> 1 file changed, 5 insertions(+)
> >>
> >> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index
> >> 86ed98c562..ec6dd170b5 100644
> >> --- a/lib/ethdev/rte_flow.h
> >> +++ b/lib/ethdev/rte_flow.h
> >> @@ -3204,6 +3204,11 @@ enum rte_eth_hash_function {
> >> * src or dst address will xor with zero pair.
> >> */
> >> RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ,
> >> + /**
> >> + * Symmetric Toeplitz: src, dst will be swapped
> >> + * automatically by sorting.
> >
> > This is very vague. Consider:
> >
> > For symmetric Toeplitz, four inputs are prepared as follows:
> > - src_addr | dst_addr
> > - src_addr ^ dst_addr
> > - src_port | dst_port
> > - src_port ^ dst_port
> > and then passed to the regular Toeplitz function.
> >
> > It is important to be as specific as possible so that readers don't
> > have to guess.
>
> +1 for this, I try to understand and google it, but can't find useful info.
>
> Also, how does this new algo behave with src/dst only?
>
Thanks for taking care of this.
When set, the L3 and L4 fields are sorted prior to the hash function:
If src_ip > dst_ip, swap src_ip and dst_ip.
If src_port > dst_port, swap src_port and dst_port.
> >
> > Thank you.
> >
> >> + */
> >> + RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ_SORT,
> >> RTE_ETH_HASH_FUNCTION_MAX,
>
> The new value will break the definition of MAX (though it may remain
> ABI compatible), but I found only the hns3 driver uses
> RTE_ETH_HASH_FUNCTION_MAX; I am not sure whether applications use it.
>
> >> };
> >>
> >> --
> >> 2.25.1
> >>
> >>
> >
> > .
* RE: C11 atomics adoption blocked
2023-08-08 20:49 0% ` Tyler Retzlaff
@ 2023-08-09 8:48 0% ` Morten Brørup
0 siblings, 1 reply; 200+ results
From: Morten Brørup @ 2023-08-09 8:48 UTC (permalink / raw)
To: Tyler Retzlaff
Cc: Bruce Richardson, dev, techboard, thomas, david.marchand,
Honnappa.Nagarahalli
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Tuesday, 8 August 2023 22.50
>
> On Tue, Aug 08, 2023 at 10:22:09PM +0200, Morten Brørup wrote:
> > > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > > Sent: Tuesday, 8 August 2023 21.20
> > >
> > > On Tue, Aug 08, 2023 at 07:23:41PM +0100, Bruce Richardson wrote:
> > > > On Tue, Aug 08, 2023 at 10:53:03AM -0700, Tyler Retzlaff wrote:
> > > > > Hi folks,
> > > > >
> > > > > Moving this discussion to the dev mailing list for broader
> comment.
> > > > >
> > > > > Unfortunately, we've hit a roadblock with integrating C11
> atomics
> > > > > for DPDK. The main issue is that GNU C++ prior to -std=c++23
> > > explicitly
> > > > > cannot be integrated with C11 stdatomic.h. Basically, you can't
> > > include
> > > > > the header and you can't use `_Atomic' type specifier to declare
> > > atomic
> > > > > types. This is not a problem with LLVM or MSVC as they both
> allow
> > > > > integration with C11 stdatomic.h, but going forward with C11
> atomics
> > > > > would break using DPDK in C++ programs when building with GNU
> g++.
> > > > >
> > > > > Essentially you cannot compile the following with g++.
> > > > >
> > > > > #include <stdatomic.h>
> > > > >
> > > > > int main(int argc, char *argv[]) { return 0; }
> > > > >
> > > > > In file included from atomic.cpp:1:
> > > > > /usr/lib/gcc/x86_64-pc-cygwin/11/include/stdatomic.h:40:9:
> error:
> > > > > ‘_Atomic’ does not name a type
> > > > > 40 | typedef _Atomic _Bool atomic_bool;
> > > > >
> > > > > ... more errors of same ...
> > > > >
> > > > > It's also acknowledged as something known and won't fix by GNU
> g++
> > > > > maintainers.
> > > > >
> > > > > https://gcc.gnu.org/bugzilla/show_bug.cgi?id=60932
> > > > >
> > > > > Given the timeframe I would like to propose the minimally
> invasive,
> > > > > lowest risk solution as follows.
> > > > >
> > > > > 1. Adopt stdatomic.h for all Windows targets, leave all
> Linux/BSD
> > > targets
> > > > > using GCC builtin C++11 memory model atomics.
> > > > > 2. Introduce a macro that allows _Atomic type specifier to be
> > > applied to
> > > > > function parameter, structure field types and variable
> > > declarations.
> > > > >
> > > > > * The macro would expand empty for Linux/BSD targets.
> > > > > * The macro would expand to C11 _Atomic keyword for Windows
> > > targets.
> > > > >
> > > > > 3. Introduce basic macro that allows __atomic_xxx for
> normalized
> > > use
> > > > > internal to DPDK.
> > > > >
> > > > > * The macro would not be defined for Linux/BSD targets.
> > > > > * The macro would expand __atomic_xxx to corresponding
> > > stdatomic.h
> > > > > atomic_xxx operations for Windows targets.
> > > > >
> >
> > Regarding naming of these macros (suggested in 2. and 3.), they should
> probably bear the rte_ prefix instead of overlapping existing names, so
> applications can also use them directly.
> >
> > E.g.:
> > #define rte_atomic for _Atomic or nothing,
> > #define rte_atomic_fetch_add() for atomic_fetch_add() or
> __atomic_fetch_add(), and
> > #define RTE_MEMORY_ORDER_SEQ_CST for memory_order_seq_cst or
> __ATOMIC_SEQ_CST.
> >
> > Maybe that is what you meant already. I'm not sure of the scope and
> details of your suggestion here.
>
> I'm shy to do anything in the rte_ namespace because I don't want to
> formalize it as an API.
>
> I was envisioning the following.
>
> Internally DPDK code just uses __atomic_fetch_add directly, the macros
> are provided for Windows targets to expand to __atomic_fetch_add.
>
> Externally DPDK applications that don't care about being portable may
> use __atomic_fetch_add (BSD/Linux) or atomic_fetch_add (Windows)
> directly.
>
> Externally DPDK applications that care to be portable may do what is
> done Internally and <<use>> the __atomic_fetch_add directly. By
> including say rte_stdatomic.h indirectly (Windows) gets the macros
> expanded to atomic_fetch_add and for BSD/Linux it's a noop include.
>
> Basically I'm placing a little ugly into Windows built code and in trade
> we don't end up with a bunch of rte_ APIs that were strongly objected to
> previously.
>
> It's a compromise.
OK, we probably need to offer a public header file to wrap the atomics, using either names prefixed with rte_ or names similar to the gcc builtin atomics.
I guess the objections were based on the assumption that we were switching to C11 atomics with DPDK 23.11, so the rte_ prefixed atomic APIs would be very short lived (DPDK 23.07 to 23.11 only). But with this new information about GNU C++ incompatibility, that seems not to be the case, so the naming discussion can be reopened.
If we don't introduce such a wrapper header, all portable code needs to surround the use of atomics with #ifdef USE_STDATOMIC_H.
BTW: Can the compilers that understand both builtin atomics and C11 stdatomic.h handle code with #define __atomic_fetch_add atomic_fetch_add and #define __ATOMIC_SEQ_CST memory_order_seq_cst? If not, then we need to use rte_ prefixed atomics.
And what about C++ atomics... Do we want (or need?) a third variant using C++ atomics, e.g. "atomic<int> x;" instead of "_Atomic int x;"? (I hope not!) For reference, the "atomic_int" type is "_Atomic int" in C, but "std::atomic<int>" in C++.
C++23 provides the C11 compatibility macro "_Atomic(T)", which means "_Atomic T" in C and "std::atomic<T>" in C++. Perhaps we can somewhat rely on this, and update our coding standards to require using e.g. "_Atomic(int)" for atomic types, and disallow using "_Atomic int".
>
> >
> > > > > 4. We re-evaluate adoption of C11 atomics and corresponding
> > > requirement of
> > > > > -std=c++23 compliant compiler at the next long term ABI
> promise
> > > release.
> > > > >
> > > > > Q: Why not define macros that look like the standard and expand
> > > those
> > > > > names to builtins?
> > > > > A: Because introducing the names is a violation of the C
> standard,
> > > we
> > > > > can't / shouldn't define atomic_xxx names in the applications
> > > namespace
> > > > > as we are not ``the implementation''.
> > > > > A: Because the builtins offer a subset of stdatomic.h capability
> > > they
> > > > > can only operate on pointer and integer types. If we
> presented
> > > the
> > > > > stdatomic.h names there might be some confusion attempting to
> > > perform
> > > > > atomic operations on e.g. _Atomic specified struct would fail
> but
> > > only
> > > > > on BSD/Linux builds (with the proposed solution).
> > > > >
> > > >
> > > > Out of interest, rather than splitting on Windows vs *nix OS for
> the
> > > > atomics, what would it look like if we split behaviour based on C
> vs
> > > C++
> > > > use? Would such a thing work?
> > >
> > > Unfortunately no. The reason is binary packages and we don't know
> which
> > > toolchain consumes them.
> > >
> > > For example.
> > >
> > > Assume we build libeal-dev package with gcc. We'll end up with
> headers
> > > that contain the _Atomic specifier.
> > >
> > > Now we write an application and build it with
> > > * gcc, sure works fine it knows about _Atomic
> > > * clang, same as gcc
> > > * clang++, works but is implementation detail that it works (it
> isn't
> > > standard)
> > > * g++, does not work
> > >
> > > So the LCD is build package without _Atomic i.e. what we already
> have
> > > today
> > > on BSD/Linux.
> > >
> >
> > I agree with Tyler's conceptual solution as proposed in the first
> email in this thread, but with a twist:
> >
> > Instead of splitting Windows vs. Linux/BSD, the split should be a
> build time configuration parameter, e.g. USE_STDATOMIC_H. This would be
> default true for Windows, and default false for Linux/BSD distros - i.e.
> behave exactly as Tyler described.
>
> Interesting, so the intention here is default stdatomic off for
> BSD/Linux and default on for Windows. Binary packagers could then choose
> if they wanted to build binary packages incompatible with g++ < -
> std=c++23
> by overriding the default and enabling stdatomic.
>
> I don't object to this if no one else does, and it does seem to give more
> options to packagers and users to decide for their distribution
> channels. One note I'll make is that we would only commit to testing the
> defaults in the CI to avoid blowing out the test matrix with non-default
> options.
Yes, I think everyone agrees about this for CI.
Another thing: I just learned that FreeBSD uses Clang as its default compiler:
https://docs.freebsd.org/en/books/developers-handbook/tools/#tools-compiling
So should the default be that Windows and BSD use stdatomic.h, and only Linux uses the GCC builtin atomics? We are using "the default compiler in the distro" as the argument, so I think yes.
>
> >
> > Having a build time configuration parameter would also allow the use
> of stdatomic.h for applications that build DPDK from scratch, instead of
> using the DPDK included with the distro. This could be C applications
> built with the distro's C compiler or some other C compiler, or C++
> applications built with a newer GNU C++ compiler or CLANG++.
> >
> > It might also allow building C++ applications using an old GNU C++
> compiler on Windows (where the application is built with DPDK from
> scratch). Not really an argument, just mentioning it.
>
> Yes, it seems like this would solve that problem in that on Windows the
> default could be similarly overridden and turn stdatomic off if building
> with GNU g++ on Windows.
>
> >
> > > > Also, just wondering about the scope of the changes here. How many
> > > header
> > > > files are affected where we publicly expose atomics?
> > >
> > > So what is impacted is roughly what is in my v4 series that raised
> my
> > > attention to the issue.
> > >
> > > https://patchwork.dpdk.org/project/dpdk/list/?series=29086
> > >
> > > We really can't solve the problem by not talking about atomics in
> the
> > > API because of the performance requirements of the APIs in question.
> > >
> > > e.g. It is stuff like rte_rwlock, rte_spinlock, rte_pause all stuff
> that
> > > can't have additional levels of indirection because of the overhead
> > > involved.
> >
> > I strongly support this position. Hiding all atomic operations in non-
> inline functions will have an unacceptable performance impact! (I know
> Bruce wasn't suggesting this with his question; but someone once
> suggested this on a techboard meeting, arguing that the overhead of
> calling a function is very small.)
>
> Yeah, I think Bruce is aware but was just curious.
>
> The overhead of calling a function is in fact not-small on ports that
> have
> security features similar to Windows control flow guard (which is
> required
> to be enabled for some of our customers including our own (Microsoft)
> shipped code).
Very good to know. We should keep this in mind when someone suggests de-inlining fast path functions.
Approximately how many CPU cycles does it cost to call a simple function with this security feature enabled (vs. inlining the function)?
>
> >
> > >
> > > >
> > > > Thanks,
> > > > /Bruce
^ permalink raw reply [relevance 0%]
* [PATCH v2 24/29] compressdev: remove experimental flag
2023-08-09 0:09 3% ` [PATCH v2 00/29] promote many API's to stable Stephen Hemminger
@ 2023-08-09 0:10 2% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-08-09 0:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Thomas Monjalon, Fan Zhang, Ashish Gupta
The compressdev library cannot hide under the experimental flag forever.
Remove the experimental flag and require the ABI to be stable.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
MAINTAINERS | 2 +-
lib/compressdev/rte_comp.h | 6 ------
lib/compressdev/rte_compressdev.h | 26 --------------------------
lib/compressdev/rte_compressdev_pmd.h | 6 ------
lib/compressdev/version.map | 2 +-
5 files changed, 2 insertions(+), 40 deletions(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index f020972b609b..75e020892471 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -458,7 +458,7 @@ F: lib/security/
F: doc/guides/prog_guide/rte_security.rst
F: app/test/test_security*
-Compression API - EXPERIMENTAL
+Compression API
M: Fan Zhang <fanzhang.oss@gmail.com>
M: Ashish Gupta <ashish.gupta@marvell.com>
T: git://dpdk.org/next/dpdk-next-crypto
diff --git a/lib/compressdev/rte_comp.h b/lib/compressdev/rte_comp.h
index bf896d07223c..232564cf5e9a 100644
--- a/lib/compressdev/rte_comp.h
+++ b/lib/compressdev/rte_comp.h
@@ -499,7 +499,6 @@ struct rte_comp_op {
* - On success pointer to mempool
* - On failure NULL
*/
-__rte_experimental
struct rte_mempool *
rte_comp_op_pool_create(const char *name,
unsigned int nb_elts, unsigned int cache_size,
@@ -515,7 +514,6 @@ rte_comp_op_pool_create(const char *name,
* - On success returns a valid rte_comp_op structure
* - On failure returns NULL
*/
-__rte_experimental
struct rte_comp_op *
rte_comp_op_alloc(struct rte_mempool *mempool);
@@ -532,7 +530,6 @@ rte_comp_op_alloc(struct rte_mempool *mempool);
* - nb_ops: Success, the nb_ops requested was allocated
* - 0: Not enough entries in the mempool; no ops are retrieved.
*/
-__rte_experimental
int
rte_comp_op_bulk_alloc(struct rte_mempool *mempool,
struct rte_comp_op **ops, uint16_t nb_ops);
@@ -546,7 +543,6 @@ rte_comp_op_bulk_alloc(struct rte_mempool *mempool,
* Compress operation pointer allocated from rte_comp_op_alloc()
* If op is NULL, no operation is performed.
*/
-__rte_experimental
void
rte_comp_op_free(struct rte_comp_op *op);
@@ -561,7 +557,6 @@ rte_comp_op_free(struct rte_comp_op *op);
* @param nb_ops
* Number of operations to free
*/
-__rte_experimental
void
rte_comp_op_bulk_free(struct rte_comp_op **ops, uint16_t nb_ops);
@@ -574,7 +569,6 @@ rte_comp_op_bulk_free(struct rte_comp_op **ops, uint16_t nb_ops);
* @return
* The name of this flag, or NULL if it's not a valid feature flag.
*/
-__rte_experimental
const char *
rte_comp_get_feature_name(uint64_t flag);
diff --git a/lib/compressdev/rte_compressdev.h b/lib/compressdev/rte_compressdev.h
index 13a418631893..8cb5db0e3f7d 100644
--- a/lib/compressdev/rte_compressdev.h
+++ b/lib/compressdev/rte_compressdev.h
@@ -10,10 +10,6 @@
*
* RTE Compression Device APIs.
*
- * @warning
- * @b EXPERIMENTAL:
- * All functions in this file may be changed or removed without prior notice.
- *
* Defines comp device APIs for the provisioning of compression operations.
*/
@@ -54,7 +50,6 @@ struct rte_compressdev_capabilities {
#define RTE_COMP_END_OF_CAPABILITIES_LIST() \
{ RTE_COMP_ALGO_UNSPECIFIED }
-__rte_experimental
const struct rte_compressdev_capabilities *
rte_compressdev_capability_get(uint8_t dev_id,
enum rte_comp_algorithm algo);
@@ -94,7 +89,6 @@ rte_compressdev_capability_get(uint8_t dev_id,
* @return
* The name of this flag, or NULL if it's not a valid feature flag.
*/
-__rte_experimental
const char *
rte_compressdev_get_feature_name(uint64_t flag);
@@ -133,7 +127,6 @@ struct rte_compressdev_stats {
* - Returns compress device identifier on success.
* - Return -1 on failure to find named compress device.
*/
-__rte_experimental
int
rte_compressdev_get_dev_id(const char *name);
@@ -146,7 +139,6 @@ rte_compressdev_get_dev_id(const char *name);
* - Returns compress device name.
* - Returns NULL if compress device is not present.
*/
-__rte_experimental
const char *
rte_compressdev_name_get(uint8_t dev_id);
@@ -157,7 +149,6 @@ rte_compressdev_name_get(uint8_t dev_id);
* @return
* - The total number of usable compress devices.
*/
-__rte_experimental
uint8_t
rte_compressdev_count(void);
@@ -175,7 +166,6 @@ rte_compressdev_count(void);
* @return
* Returns number of attached compress devices.
*/
-__rte_experimental
uint8_t
rte_compressdev_devices_get(const char *driver_name, uint8_t *devices,
uint8_t nb_devices);
@@ -190,7 +180,6 @@ rte_compressdev_devices_get(const char *driver_name, uint8_t *devices,
* a default of zero if the socket could not be determined.
* -1 if returned is the dev_id value is out of range.
*/
-__rte_experimental
int
rte_compressdev_socket_id(uint8_t dev_id);
@@ -221,7 +210,6 @@ struct rte_compressdev_config {
* - 0: Success, device configured.
* - <0: Error code returned by the driver configuration function.
*/
-__rte_experimental
int
rte_compressdev_configure(uint8_t dev_id,
struct rte_compressdev_config *config);
@@ -240,7 +228,6 @@ rte_compressdev_configure(uint8_t dev_id,
* - 0: Success, device started.
* - <0: Error code of the driver device start function.
*/
-__rte_experimental
int
rte_compressdev_start(uint8_t dev_id);
@@ -251,7 +238,6 @@ rte_compressdev_start(uint8_t dev_id);
* @param dev_id
* Compress device identifier
*/
-__rte_experimental
void
rte_compressdev_stop(uint8_t dev_id);
@@ -269,7 +255,6 @@ rte_compressdev_stop(uint8_t dev_id);
* - 0 on successfully closing device
* - <0 on failure to close device
*/
-__rte_experimental
int
rte_compressdev_close(uint8_t dev_id);
@@ -296,7 +281,6 @@ rte_compressdev_close(uint8_t dev_id);
* - 0: Success, queue pair correctly set up.
* - <0: Queue pair configuration failed
*/
-__rte_experimental
int
rte_compressdev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
uint32_t max_inflight_ops, int socket_id);
@@ -309,7 +293,6 @@ rte_compressdev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
* @return
* - The number of configured queue pairs.
*/
-__rte_experimental
uint16_t
rte_compressdev_queue_pair_count(uint8_t dev_id);
@@ -327,7 +310,6 @@ rte_compressdev_queue_pair_count(uint8_t dev_id);
* - Zero if successful.
* - Non-zero otherwise.
*/
-__rte_experimental
int
rte_compressdev_stats_get(uint8_t dev_id, struct rte_compressdev_stats *stats);
@@ -337,7 +319,6 @@ rte_compressdev_stats_get(uint8_t dev_id, struct rte_compressdev_stats *stats);
* @param dev_id
* The identifier of the device.
*/
-__rte_experimental
void
rte_compressdev_stats_reset(uint8_t dev_id);
@@ -355,7 +336,6 @@ rte_compressdev_stats_reset(uint8_t dev_id);
* The element after the last valid element has it's op field set to
* RTE_COMP_ALGO_UNSPECIFIED.
*/
-__rte_experimental
void
rte_compressdev_info_get(uint8_t dev_id, struct rte_compressdev_info *dev_info);
@@ -413,7 +393,6 @@ rte_compressdev_info_get(uint8_t dev_id, struct rte_compressdev_info *dev_info);
* of pointers to *rte_comp_op* structures effectively supplied to the
* *ops* array.
*/
-__rte_experimental
uint16_t
rte_compressdev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
struct rte_comp_op **ops, uint16_t nb_ops);
@@ -468,7 +447,6 @@ rte_compressdev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
* comp devices queue is full or if invalid parameters are specified in
* a *rte_comp_op*.
*/
-__rte_experimental
uint16_t
rte_compressdev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
struct rte_comp_op **ops, uint16_t nb_ops);
@@ -496,7 +474,6 @@ rte_compressdev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
* - Returns -ENOTSUP if comp device does not support the comp transform.
* - Returns -ENOMEM if the private stream could not be allocated.
*/
-__rte_experimental
int
rte_compressdev_stream_create(uint8_t dev_id,
const struct rte_comp_xform *xform,
@@ -518,7 +495,6 @@ rte_compressdev_stream_create(uint8_t dev_id,
* - Returns -ENOTSUP if comp device does not support STATEFUL operations.
* - Returns -EBUSY if can't free stream as there are inflight operations
*/
-__rte_experimental
int
rte_compressdev_stream_free(uint8_t dev_id, void *stream);
@@ -545,7 +521,6 @@ rte_compressdev_stream_free(uint8_t dev_id, void *stream);
* - Returns -ENOTSUP if comp device does not support the comp transform.
* - Returns -ENOMEM if the private_xform could not be allocated.
*/
-__rte_experimental
int
rte_compressdev_private_xform_create(uint8_t dev_id,
const struct rte_comp_xform *xform,
@@ -567,7 +542,6 @@ rte_compressdev_private_xform_create(uint8_t dev_id,
* - <0 in error cases
* - Returns -EINVAL if input parameters are invalid.
*/
-__rte_experimental
int
rte_compressdev_private_xform_free(uint8_t dev_id, void *private_xform);
diff --git a/lib/compressdev/rte_compressdev_pmd.h b/lib/compressdev/rte_compressdev_pmd.h
index ea012908b783..fa233492fe1f 100644
--- a/lib/compressdev/rte_compressdev_pmd.h
+++ b/lib/compressdev/rte_compressdev_pmd.h
@@ -59,7 +59,6 @@ struct rte_compressdev_global {
* @return
* - The rte_compressdev structure pointer for the given device identifier.
*/
-__rte_experimental
struct rte_compressdev *
rte_compressdev_pmd_get_named_dev(const char *name);
@@ -292,7 +291,6 @@ struct rte_compressdev_ops {
* @return
* - Slot in the rte_dev_devices array for a new device;
*/
-__rte_experimental
struct rte_compressdev *
rte_compressdev_pmd_allocate(const char *name, int socket_id);
@@ -308,7 +306,6 @@ rte_compressdev_pmd_allocate(const char *name, int socket_id);
* @return
* - 0 on success, negative on error
*/
-__rte_experimental
int
rte_compressdev_pmd_release_device(struct rte_compressdev *dev);
@@ -331,7 +328,6 @@ rte_compressdev_pmd_release_device(struct rte_compressdev *dev);
* - 0 on success
* - errno on failure
*/
-__rte_experimental
int
rte_compressdev_pmd_parse_input_args(
struct rte_compressdev_pmd_init_params *params,
@@ -353,7 +349,6 @@ rte_compressdev_pmd_parse_input_args(
* - comp device instance on success
* - NULL on creation failure
*/
-__rte_experimental
struct rte_compressdev *
rte_compressdev_pmd_create(const char *name,
struct rte_device *device,
@@ -372,7 +367,6 @@ rte_compressdev_pmd_create(const char *name,
* - 0 on success
* - errno on failure
*/
-__rte_experimental
int
rte_compressdev_pmd_destroy(struct rte_compressdev *dev);
diff --git a/lib/compressdev/version.map b/lib/compressdev/version.map
index e2a108b6509f..fa891f669b5d 100644
--- a/lib/compressdev/version.map
+++ b/lib/compressdev/version.map
@@ -1,4 +1,4 @@
-EXPERIMENTAL {
+DPDK_24 {
global:
rte_compressdev_capability_get;
--
2.39.2
* [PATCH v2 00/29] promote many API's to stable
2023-08-08 17:35 3% [PATCH 00/20] remove experimental flag from some API's Stephen Hemminger
2023-08-08 18:19 0% ` Tyler Retzlaff
@ 2023-08-09 0:09 3% ` Stephen Hemminger
2023-08-09 0:10 2% ` [PATCH v2 24/29] compressdev: remove experimental flag Stephen Hemminger
1 sibling, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-08-09 0:09 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Since 23.11 is an LTS release, it is time to take the experimental
bandaid off many API's. There are about 850 API's marked experimental
on the current main branch. This addresses the easy-to-remove ones and
gets the count down to about 690 places.
The rule is that any API that has been in since 22.11 needs to have
experimental removed (or be deleted). The experimental flag is
intended to be temporary, not a "get out of ABI stability for free" card.
v2 - add more libraries to the mix
- remove EXPERIMENTAL where tagged in MAINTAINERS
Stephen Hemminger (29):
bpf: make rte_bpf_dump and rte_bpf_convert stable API's
cmdline: make experimental API's stable
ethdev: mark rte_mtr API's as stable
ethdev: mark rte_tm API's as stable
pdump: make API's stable
pcapng: mark API's as stable
net: remove experimental from functions
rcu: remove experimental from rte_rcu_qbsr
lpm: remove experimental
mbuf: remove experimental from create_extbuf
hash: remove experimental from toeplitz hash
timer: remove experimental from rte_timer_next_ticks
sched: remove experimental
dmadev: mark API's as not experimental
meter: remove experimental warning from comments
power: remove experimental from API's
kvargs: remove experimental flag
ip_frag: mark a couple of functions stable
member: remove experimental tag
security: remove experimental flag
vhost: remove experimental from some API's
bbdev: remove experimental tag
ipsec: remove experimental from SA API
compressdev: remove experimental flag
regexdev: remove experimental tag
node: remove experimental tag
cryptodev: remove experimental from more API's
table: remove experimental from API
port: make API's stable
MAINTAINERS | 10 +-
doc/guides/rel_notes/deprecation.rst | 6 --
lib/bbdev/rte_bbdev.h | 4 -
lib/bbdev/rte_bbdev_op.h | 2 -
lib/bbdev/version.map | 18 ++--
lib/bpf/rte_bpf.h | 2 -
lib/bpf/version.map | 9 +-
lib/cmdline/cmdline.h | 1 -
lib/cmdline/cmdline_parse.h | 4 -
lib/cmdline/cmdline_rdline.h | 4 -
lib/cmdline/version.map | 26 ++---
lib/compressdev/rte_comp.h | 6 --
lib/compressdev/rte_compressdev.h | 26 -----
lib/compressdev/rte_compressdev_pmd.h | 6 --
lib/compressdev/version.map | 2 +-
lib/cryptodev/rte_crypto_sym.h | 1 -
lib/cryptodev/rte_cryptodev.h | 32 ------
lib/cryptodev/version.map | 77 ++++++--------
lib/dmadev/rte_dmadev.h | 85 ----------------
lib/dmadev/version.map | 2 +-
lib/ethdev/rte_mtr.h | 25 +----
lib/ethdev/rte_tm.h | 34 -------
lib/ethdev/version.map | 88 ++++++++--------
lib/hash/rte_thash.h | 44 --------
lib/hash/rte_thash_gfni.h | 8 --
lib/hash/rte_thash_x86_gfni.h | 8 --
lib/hash/version.map | 16 +--
lib/ip_frag/rte_ip_frag.h | 2 -
lib/ip_frag/version.map | 9 +-
lib/ipsec/rte_ipsec.h | 2 -
lib/ipsec/version.map | 9 +-
lib/kvargs/rte_kvargs.h | 4 -
lib/kvargs/version.map | 8 +-
lib/lpm/rte_lpm.h | 4 -
lib/lpm/version.map | 7 +-
lib/mbuf/rte_mbuf.h | 1 -
lib/mbuf/version.map | 8 +-
lib/member/rte_member.h | 54 ----------
lib/member/version.map | 12 +--
lib/meter/rte_meter.h | 12 ---
lib/net/rte_ip.h | 19 ----
lib/node/rte_node_eth_api.h | 5 -
lib/node/rte_node_ip4_api.h | 6 --
lib/node/rte_node_ip6_api.h | 6 --
lib/node/version.map | 2 +-
lib/pcapng/rte_pcapng.h | 11 --
lib/pcapng/version.map | 6 +-
lib/pdump/rte_pdump.h | 12 ---
lib/pdump/version.map | 11 +-
lib/pipeline/rte_port_in_action.h | 8 --
lib/pipeline/rte_swx_ctl.h | 57 -----------
lib/pipeline/rte_swx_pipeline.h | 29 ------
lib/pipeline/rte_table_action.h | 16 ---
lib/pipeline/version.map | 140 ++++++++++++--------------
lib/port/version.map | 24 ++---
lib/power/rte_power.h | 4 -
lib/power/rte_power_guest_channel.h | 4 -
lib/power/rte_power_intel_uncore.h | 9 --
lib/power/rte_power_pmd_mgmt.h | 40 --------
lib/power/version.map | 33 ++----
lib/rcu/rte_rcu_qsbr.h | 20 ----
lib/rcu/version.map | 15 +--
lib/regexdev/rte_regexdev.h | 92 -----------------
lib/regexdev/version.map | 2 +-
lib/sched/rte_pie.h | 8 --
lib/sched/rte_sched.h | 5 -
lib/sched/version.map | 18 +---
lib/security/rte_security.h | 35 -------
lib/security/version.map | 17 ++--
lib/table/rte_swx_table_learner.h | 10 --
lib/table/rte_swx_table_selector.h | 6 --
lib/table/rte_table_hash_func.h | 9 --
lib/table/version.map | 18 +---
lib/timer/rte_timer.h | 4 -
lib/timer/version.map | 7 +-
lib/vhost/rte_vhost.h | 5 -
lib/vhost/rte_vhost_async.h | 19 ----
lib/vhost/rte_vhost_crypto.h | 1 -
lib/vhost/version.map | 51 ++++------
79 files changed, 234 insertions(+), 1228 deletions(-)
--
2.39.2
* Re: [PATCH 00/20] remove experimental flag from some API's
2023-08-08 21:33 0% ` Stephen Hemminger
@ 2023-08-08 23:23 0% ` Tyler Retzlaff
0 siblings, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-08-08 23:23 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev
On Tue, Aug 08, 2023 at 02:33:52PM -0700, Stephen Hemminger wrote:
> On Tue, 8 Aug 2023 11:19:12 -0700
> Tyler Retzlaff <roretzla@linux.microsoft.com> wrote:
>
> > On Tue, Aug 08, 2023 at 10:35:07AM -0700, Stephen Hemminger wrote:
> > > Since 23.11 is an LTS release it is time to remove the experimental
> > > bandaid off many API's. There are about 850 API's marked with experimental
> > > on current main branch. This addresses the easy to remove ones and
> > > gets it down to about 690 places.
> > >
> > > The rule is any API that has been in since 22.11 needs to have
> > > experimental removed (or deleted). The experimental flag is not a
> > > "get out of ABI stability for free" card.
> >
> > For the libraries here that are enabled for Windows are the APIs being
> > marked stable have real implementations or just stubs on Windows?
> >
> > If they are just stubs then i think more review is necessary for the
> > stubbed APIs to understand that they *can* be implemented on Windows.
> >
> > I would prefer not to have to encounter this later and have to go
> > through the overhead of deprecation like with rte_thread_ctrl_create
> > again.
> >
> > This obviously doesn't apply to libraries that are not currently enabled
> > for Windows. If the implementations aren't stubs then that's okay too.
>
> I don't see any stubs when looking.
>
> bpf: not built on Windows. Needs some libelf.
> pdump: not built on Windows. Needs bpf for filtering
> rte_tm: ok
> rte_mtr: ok
> cmdline: ok
> pcapng: ok
> net: ok
> rcu: ok
> lpm: ok
> mbuf: ok
> hash: ok
> timer: ok
> dmadev: ok
> meter: ok
> power: not on Windows, probably needs special API's
> kvargs: ok
> ip_frag: ok
> member: not built on Windows, not sure why
> security: ok
> vhost: not built on Windows, not sure why
> regexdev: not built on Windows, not sure why
> node: not built on Windows, not sure why
>
> Changes to eal need to be more selective.
Thanks Stephen I appreciate you checking it out it helps a lot.
Series-acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
>
>
* Re: [PATCH 00/20] remove experimental flag from some API's
2023-08-08 18:19 0% ` Tyler Retzlaff
@ 2023-08-08 21:33 0% ` Stephen Hemminger
2023-08-08 23:23 0% ` Tyler Retzlaff
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-08-08 21:33 UTC (permalink / raw)
To: Tyler Retzlaff; +Cc: dev
On Tue, 8 Aug 2023 11:19:12 -0700
Tyler Retzlaff <roretzla@linux.microsoft.com> wrote:
> On Tue, Aug 08, 2023 at 10:35:07AM -0700, Stephen Hemminger wrote:
> > Since 23.11 is an LTS release it is time to remove the experimental
> > bandaid off many API's. There are about 850 API's marked with experimental
> > on current main branch. This addresses the easy to remove ones and
> > gets it down to about 690 places.
> >
> > The rule is any API that has been in since 22.11 needs to have
> > experimental removed (or deleted). The experimental flag is not a
> > "get out of ABI stability for free" card.
>
> For the libraries here that are enabled for Windows are the APIs being
> marked stable have real implementations or just stubs on Windows?
>
> If they are just stubs then i think more review is necessary for the
> stubbed APIs to understand that they *can* be implemented on Windows.
>
> I would prefer not to have to encounter this later and have to go
> through the overhead of deprecation like with rte_thread_ctrl_create
> again.
>
> This obviously doesn't apply to libraries that are not currently enabled
> for Windows. If the implementations aren't stubs then that's okay too.
I don't see any stubs when looking.
bpf: not built on Windows. Needs some libelf.
pdump: not built on Windows. Needs bpf for filtering
rte_tm: ok
rte_mtr: ok
cmdline: ok
pcapng: ok
net: ok
rcu: ok
lpm: ok
mbuf: ok
hash: ok
timer: ok
dmadev: ok
meter: ok
power: not on Windows, probably needs special API's
kvargs: ok
ip_frag: ok
member: not built on Windows, not sure why
security: ok
vhost: not built on Windows, not sure why
regexdev: not built on Windows, not sure why
node: not built on Windows, not sure why
Changes to eal need to be more selective.
* Re: C11 atomics adoption blocked
2023-08-08 20:22 0% ` Morten Brørup
@ 2023-08-08 20:49 0% ` Tyler Retzlaff
2023-08-09 8:48 0% ` Morten Brørup
0 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-08-08 20:49 UTC (permalink / raw)
To: Morten Brørup
Cc: Bruce Richardson, dev, techboard, thomas, david.marchand,
Honnappa.Nagarahalli
On Tue, Aug 08, 2023 at 10:22:09PM +0200, Morten Brørup wrote:
> > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > Sent: Tuesday, 8 August 2023 21.20
> >
> > On Tue, Aug 08, 2023 at 07:23:41PM +0100, Bruce Richardson wrote:
> > > On Tue, Aug 08, 2023 at 10:53:03AM -0700, Tyler Retzlaff wrote:
> > > > Hi folks,
> > > >
> > > > Moving this discussion to the dev mailing list for broader comment.
> > > >
> > > > Unfortunately, we've hit a roadblock with integrating C11 atomics
> > > > for DPDK. The main issue is that GNU C++ prior to -std=c++23
> > explicitly
> > > > cannot be integrated with C11 stdatomic.h. Basically, you can't
> > include
> > > > the header and you can't use `_Atomic' type specifier to declare
> > atomic
> > > > types. This is not a problem with LLVM or MSVC as they both allow
> > > > integration with C11 stdatomic.h, but going forward with C11 atomics
> > > > would break using DPDK in C++ programs when building with GNU g++.
> > > >
> > > > Essentially you cannot compile the following with g++.
> > > >
> > > > #include <stdatomic.h>
> > > >
> > > > int main(int argc, char *argv[]) { return 0; }
> > > >
> > > > In file included from atomic.cpp:1:
> > > > /usr/lib/gcc/x86_64-pc-cygwin/11/include/stdatomic.h:40:9: error:
> > > > ‘_Atomic’ does not name a type
> > > > 40 | typedef _Atomic _Bool atomic_bool;
> > > >
> > > > ... more errors of same ...
> > > >
> > > > It's also acknowledged as something known and won't fix by GNU g++
> > > > maintainers.
> > > >
> > > > https://gcc.gnu.org/bugzilla/show_bug.cgi?id=60932
> > > >
> > > > Given the timeframe I would like to propose the minimally invasive,
> > > > lowest risk solution as follows.
> > > >
> > > > 1. Adopt stdatomic.h for all Windows targets, leave all Linux/BSD
> > targets
> > > > using GCC builtin C++11 memory model atomics.
> > > > 2. Introduce a macro that allows _Atomic type specifier to be
> > applied to
> > > > function parameter, structure field types and variable
> > declarations.
> > > >
> > > > * The macro would expand empty for Linux/BSD targets.
> > > > * The macro would expand to C11 _Atomic keyword for Windows
> > targets.
> > > >
> > > > 3. Introduce basic macro that allows __atomic_xxx for normalized
> > use
> > > > internal to DPDK.
> > > >
> > > > * The macro would not be defined for Linux/BSD targets.
> > > > * The macro would expand __atomic_xxx to corresponding
> > stdatomic.h
> > > > atomic_xxx operations for Windows targets.
> > > >
>
> Regarding naming of these macros (suggested in 2. and 3.), they should probably bear the rte_ prefix instead of overlapping existing names, so applications can also use them directly.
>
> E.g.:
> #define rte_atomic for _Atomic or nothing,
> #define rte_atomic_fetch_add() for atomic_fetch_add() or __atomic_fetch_add(), and
> #define RTE_MEMORY_ORDER_SEQ_CST for memory_order_seq_cst or __ATOMIC_SEQ_CST.
>
> Maybe that is what you meant already. I'm not sure of the scope and details of your suggestion here.
I'm shy to do anything in the rte_ namespace because I don't want to
formalize it as an API.
I was envisioning the following.
Internally, DPDK code just uses __atomic_fetch_add directly; the macros
provided for Windows targets expand __atomic_fetch_add to the
stdatomic.h equivalent.
Externally, DPDK applications that don't care about being portable may
use __atomic_fetch_add (BSD/Linux) or atomic_fetch_add (Windows)
directly.
Externally, DPDK applications that care to be portable may do what is
done internally and <<use>> __atomic_fetch_add directly. By including,
say, rte_stdatomic.h, Windows builds indirectly get the macros expanded
to atomic_fetch_add, while for BSD/Linux it is a no-op include.
Basically I'm placing a little ugly into Windows built code and in trade
we don't end up with a bunch of rte_ APIs that were strongly objected to
previously.
It's a compromise.
>
> > > > 4. We re-evaluate adoption of C11 atomics and corresponding
> > requirement of
> > > > -std=c++23 compliant compiler at the next long term ABI promise
> > release.
> > > >
> > > > Q: Why not define macros that look like the standard and expand
> > those
> > > > names to builtins?
> > > > A: Because introducing the names is a violation of the C standard,
> > we
> > > > can't / shouldn't define atomic_xxx names in the applications
> > namespace
> > > > as we are not ``the implementation''.
> > > > A: Because the builtins offer a subset of stdatomic.h capability
> > they
> > > > can only operate on pointer and integer types. If we presented
> > the
> > > > stdatomic.h names there might be some confusion attempting to
> > perform
> > > > atomic operations on e.g. _Atomic specified struct would fail but
> > only
> > > > on BSD/Linux builds (with the proposed solution).
> > > >
> > >
> > > Out of interest, rather than splitting on Windows vs *nix OS for the
> > > atomics, what would it look like if we split behaviour based on C vs
> > C++
> > > use? Would such a thing work?
> >
> > Unfortunately no. The reason is binary packages and we don't know which
> > toolchain consumes them.
> >
> > For example.
> >
> > Assume we build libeal-dev package with gcc. We'll end up with headers
> > that contain the _Atomic specifier.
> >
> > Now we write an application and build it with
> > * gcc, sure works fine it knows about _Atomic
> > * clang, same as gcc
> > * clang++, works but is implementation detail that it works (it isn't
> > standard)
> > * g++, does not work
> >
> > So the LCD is build package without _Atomic i.e. what we already have
> > today
> > on BSD/Linux.
> >
>
> I agree with Tyler's conceptual solution as proposed in the first email in this thread, but with a twist:
>
> Instead of splitting Windows vs. Linux/BSD, the split should be a build time configuration parameter, e.g. USE_STDATOMIC_H. This would be default true for Windows, and default false for Linux/BSD distros - i.e. behave exactly as Tyler described.
Interesting, so the intention here is default stdatomic off for
BSD/Linux and default on for Windows. Binary packagers could then choose
if they wanted to build binary packages incompatible with g++ < -std=c++23
by overriding the default and enabling stdatomic.
I don't object to this if no one else does, and it does seem to give more
options to packagers and users to decide for their distribution
channels. One note I'll make is that we would only commit to testing the
defaults in the CI to avoid blowing out the test matrix with non-default
options.
>
> Having a build time configuration parameter would also allow the use of stdatomic.h for applications that build DPDK from scratch, instead of using the DPDK included with the distro. This could be C applications built with the distro's C compiler or some other C compiler, or C++ applications built with a newer GNU C++ compiler or CLANG++.
>
> It might also allow building C++ applications using an old GNU C++ compiler on Windows (where the application is built with DPDK from scratch). Not really an argument, just mentioning it.
Yes, it seems like this would solve that problem in that on Windows the
default could be similarly overridden and turn stdatomic off if building
with GNU g++ on Windows.
>
> > > Also, just wondering about the scope of the changes here. How many
> > header
> > > files are affected where we publicly expose atomics?
> >
> > So what is impacted is roughly what is in my v4 series that raised my
> > attention to the issue.
> >
> > https://patchwork.dpdk.org/project/dpdk/list/?series=29086
> >
> > We really can't solve the problem by not talking about atomics in the
> > API because of the performance requirements of the APIs in question.
> >
> > e.g. It is stuff like rte_rwlock, rte_spinlock, rte_pause all stuff that
> > can't have additional levels of indirection because of the overhead
> > involved.
>
> I strongly support this position. Hiding all atomic operations in non-inline functions will have an unacceptable performance impact! (I know Bruce wasn't suggesting this with his question; but someone once suggested this on a techboard meeting, arguing that the overhead of calling a function is very small.)
Yeah, I think Bruce is aware but was just curious.
The overhead of calling a function is in fact not small on platforms
that have security features similar to Windows Control Flow Guard
(which is required to be enabled for some of our customers, including
our own (Microsoft) shipped code).
>
> >
> > >
> > > Thanks,
> > > /Bruce
^ permalink raw reply [relevance 0%]
* RE: C11 atomics adoption blocked
2023-08-08 19:19 0% ` Tyler Retzlaff
@ 2023-08-08 20:22 0% ` Morten Brørup
2023-08-08 20:49 0% ` Tyler Retzlaff
0 siblings, 1 reply; 200+ results
From: Morten Brørup @ 2023-08-08 20:22 UTC (permalink / raw)
To: Tyler Retzlaff, Bruce Richardson
Cc: dev, techboard, thomas, david.marchand, Honnappa.Nagarahalli
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Tuesday, 8 August 2023 21.20
>
> On Tue, Aug 08, 2023 at 07:23:41PM +0100, Bruce Richardson wrote:
> > On Tue, Aug 08, 2023 at 10:53:03AM -0700, Tyler Retzlaff wrote:
> > > Hi folks,
> > >
> > > Moving this discussion to the dev mailing list for broader comment.
> > >
> > > Unfortunately, we've hit a roadblock with integrating C11 atomics
> > > for DPDK. The main issue is that GNU C++ prior to -std=c++23
> explicitly
> > > cannot be integrated with C11 stdatomic.h. Basically, you can't
> include
> > > the header and you can't use `_Atomic' type specifier to declare
> atomic
> > > types. This is not a problem with LLVM or MSVC as they both allow
> > > integration with C11 stdatomic.h, but going forward with C11 atomics
> > > would break using DPDK in C++ programs when building with GNU g++.
> > >
> > > Essentially you cannot compile the following with g++.
> > >
> > > #include <stdatomic.h>
> > >
> > > int main(int argc, char *argv[]) { return 0; }
> > >
> > > In file included from atomic.cpp:1:
> > > /usr/lib/gcc/x86_64-pc-cygwin/11/include/stdatomic.h:40:9: error:
> > > ‘_Atomic’ does not name a type
> > > 40 | typedef _Atomic _Bool atomic_bool;
> > >
> > > ... more errors of same ...
> > >
> > > It's also acknowledged as something known and won't fix by GNU g++
> > > maintainers.
> > >
> > > https://gcc.gnu.org/bugzilla/show_bug.cgi?id=60932
> > >
> > > Given the timeframe I would like to propose the minimally invasive,
> > > lowest risk solution as follows.
> > >
> > > 1. Adopt stdatomic.h for all Windows targets, leave all Linux/BSD
> targets
> > > using GCC builtin C++11 memory model atomics.
> > > 2. Introduce a macro that allows _Atomic type specifier to be
> applied to
> > > function parameter, structure field types and variable
> declarations.
> > >
> > > * The macro would expand empty for Linux/BSD targets.
> > > * The macro would expand to C11 _Atomic keyword for Windows
> targets.
> > >
> > > 3. Introduce basic macro that allows __atomic_xxx for normalized
> use
> > > internal to DPDK.
> > >
> > > * The macro would not be defined for Linux/BSD targets.
> > > * The macro would expand __atomic_xxx to corresponding
> stdatomic.h
> > > atomic_xxx operations for Windows targets.
> > >
Regarding naming of these macros (suggested in 2. and 3.), they should probably bear the rte_ prefix instead of overlapping existing names, so applications can also use them directly.
E.g.:
#define rte_atomic for _Atomic or nothing,
#define rte_atomic_fetch_add() for atomic_fetch_add() or __atomic_fetch_add(), and
#define RTE_MEMORY_ORDER_SEQ_CST for memory_order_seq_cst or __ATOMIC_SEQ_CST.
Maybe that is what you meant already. I'm not sure of the scope and details of your suggestion here.
> > > 4. We re-evaluate adoption of C11 atomics and corresponding
> requirement of
> > > -std=c++23 compliant compiler at the next long term ABI promise
> release.
> > >
> > > Q: Why not define macros that look like the standard and expand
> those
> > > names to builtins?
> > > A: Because introducing the names is a violation of the C standard,
> we
> > > can't / shouldn't define atomic_xxx names in the applications
> namespace
> > > as we are not ``the implementation''.
> > > A: Because the builtins offer a subset of stdatomic.h capability
> they
> > > can only operate on pointer and integer types. If we presented
> the
> > > stdatomic.h names there might be some confusion attempting to
> perform
> > > atomic operations on e.g. _Atomic specified struct would fail but
> only
> > > on BSD/Linux builds (with the proposed solution).
> > >
> >
> > Out of interest, rather than splitting on Windows vs *nix OS for the
> > atomics, what would it look like if we split behaviour based on C vs
> C++
> > use? Would such a thing work?
>
> Unfortunately no. The reason is binary packages and we don't know which
> toolchain consumes them.
>
> For example.
>
> Assume we build libeal-dev package with gcc. We'll end up with headers
> that contain the _Atomic specifier.
>
> Now we write an application and build it with
> * gcc, sure works fine it knows about _Atomic
> * clang, same as gcc
> * clang++, works but is implementation detail that it works (it isn't
> standard)
> * g++, does not work
>
> So the LCD is build package without _Atomic i.e. what we already have
> today
> on BSD/Linux.
>
I agree with Tyler's conceptual solution as proposed in the first email in this thread, but with a twist:
Instead of splitting Windows vs. Linux/BSD, the split should be a build time configuration parameter, e.g. USE_STDATOMIC_H. This would be default true for Windows, and default false for Linux/BSD distros - i.e. behave exactly as Tyler described.
Having a build time configuration parameter would also allow the use of stdatomic.h for applications that build DPDK from scratch, instead of using the DPDK included with the distro. This could be C applications built with the distro's C compiler or some other C compiler, or C++ applications built with a newer GNU C++ compiler or CLANG++.
It might also allow building C++ applications using an old GNU C++ compiler on Windows (where the application is built with DPDK from scratch). Not really an argument, just mentioning it.
> > Also, just wondering about the scope of the changes here. How many
> header
> > files are affected where we publicly expose atomics?
>
> So what is impacted is roughly what is in my v4 series that raised my
> attention to the issue.
>
> https://patchwork.dpdk.org/project/dpdk/list/?series=29086
>
> We really can't solve the problem by not talking about atomics in the
> API because of the performance requirements of the APIs in question.
>
> e.g. It is stuff like rte_rwlock, rte_spinlock, rte_pause all stuff that
> can't have additional levels of indirection because of the overhead
> involved.
I strongly support this position. Hiding all atomic operations in non-inline functions will have an unacceptable performance impact! (I know Bruce wasn't suggesting this with his question; but someone once suggested this on a techboard meeting, arguing that the overhead of calling a function is very small.)
>
> >
> > Thanks,
> > /Bruce
* Re: C11 atomics adoption blocked
2023-08-08 18:23 0% ` Bruce Richardson
@ 2023-08-08 19:19 0% ` Tyler Retzlaff
2023-08-08 20:22 0% ` Morten Brørup
0 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-08-08 19:19 UTC (permalink / raw)
To: Bruce Richardson
Cc: dev, techboard, thomas, david.marchand, Honnappa.Nagarahalli, mb
On Tue, Aug 08, 2023 at 07:23:41PM +0100, Bruce Richardson wrote:
> On Tue, Aug 08, 2023 at 10:53:03AM -0700, Tyler Retzlaff wrote:
> > Hi folks,
> >
> > Moving this discussion to the dev mailing list for broader comment.
> >
> > Unfortunately, we've hit a roadblock with integrating C11 atomics
> > for DPDK. The main issue is that GNU C++ prior to -std=c++23 explicitly
> > cannot be integrated with C11 stdatomic.h. Basically, you can't include
> > the header and you can't use `_Atomic' type specifier to declare atomic
> > types. This is not a problem with LLVM or MSVC as they both allow
> > integration with C11 stdatomic.h, but going forward with C11 atomics
> > would break using DPDK in C++ programs when building with GNU g++.
> >
> > Essentially you cannot compile the following with g++.
> >
> > #include <stdatomic.h>
> >
> > int main(int argc, char *argv[]) { return 0; }
> >
> > In file included from atomic.cpp:1:
> > /usr/lib/gcc/x86_64-pc-cygwin/11/include/stdatomic.h:40:9: error:
> > ‘_Atomic’ does not name a type
> > 40 | typedef _Atomic _Bool atomic_bool;
> >
> > ... more errors of same ...
> >
> > It's also acknowledged as something known and won't fix by GNU g++
> > maintainers.
> >
> > https://gcc.gnu.org/bugzilla/show_bug.cgi?id=60932
> >
> > Given the timeframe I would like to propose the minimally invasive,
> > lowest risk solution as follows.
> >
> > 1. Adopt stdatomic.h for all Windows targets, leave all Linux/BSD targets
> > using GCC builtin C++11 memory model atomics.
> > 2. Introduce a macro that allows _Atomic type specifier to be applied to
> > function parameter, structure field types and variable declarations.
> >
> > * The macro would expand empty for Linux/BSD targets.
> > * The macro would expand to C11 _Atomic keyword for Windows targets.
> >
> > 3. Introduce basic macro that allows __atomic_xxx for normalized use
> > internal to DPDK.
> >
> > * The macro would not be defined for Linux/BSD targets.
> > * The macro would expand __atomic_xxx to corresponding stdatomic.h
> > atomic_xxx operations for Windows targets.
> >
> > 4. We re-evaluate adoption of C11 atomics and corresponding requirement of
> > -std=c++23 compliant compiler at the next long term ABI promise release.
> >
> > Q: Why not define macros that look like the standard and expand those
> > names to builtins?
> > A: Because introducing the names is a violation of the C standard, we
> > can't / shouldn't define atomic_xxx names in the applications namespace
> > as we are not ``the implementation''.
> > A: Because the builtins offer a subset of stdatomic.h capability they
> > can only operate on pointer and integer types. If we presented the
> > stdatomic.h names there might be some confusion attempting to perform
> > atomic operations on e.g. _Atomic specified struct would fail but only
> > on BSD/Linux builds (with the proposed solution).
> >
>
> Out of interest, rather than splitting on Windows vs *nix OS for the
> atomics, what would it look like if we split behaviour based on C vs C++
> use? Would such a thing work?
Unfortunately no. The reason is binary packages and we don't know which
toolchain consumes them.
For example.
Assume we build libeal-dev package with gcc. We'll end up with headers
that contain the _Atomic specifier.
Now we write an application and build it with
* gcc, sure works fine it knows about _Atomic
* clang, same as gcc
* clang++, works, but it is an implementation detail that it works (it isn't standard)
* g++, does not work
So the lowest common denominator is to build the package without
_Atomic, i.e. what we already have today on BSD/Linux.
> Also, just wondering about the scope of the changes here. How many header
> files are affected where we publicly expose atomics?
So what is impacted is roughly what is in my v4 series that raised my
attention to the issue.
https://patchwork.dpdk.org/project/dpdk/list/?series=29086
We really can't solve the problem by not talking about atomics in the
API because of the performance requirements of the APIs in question.
e.g. It is stuff like rte_rwlock, rte_spinlock, rte_pause all stuff that
can't have additional levels of indirection because of the overhead
involved.
>
> Thanks,
> /Bruce
* Re: C11 atomics adoption blocked
2023-08-08 17:53 3% C11 atomics adoption blocked Tyler Retzlaff
@ 2023-08-08 18:23 0% ` Bruce Richardson
2023-08-08 19:19 0% ` Tyler Retzlaff
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2023-08-08 18:23 UTC (permalink / raw)
To: Tyler Retzlaff
Cc: dev, techboard, thomas, david.marchand, Honnappa.Nagarahalli, mb
On Tue, Aug 08, 2023 at 10:53:03AM -0700, Tyler Retzlaff wrote:
> Hi folks,
>
> Moving this discussion to the dev mailing list for broader comment.
>
> Unfortunately, we've hit a roadblock with integrating C11 atomics
> for DPDK. The main issue is that GNU C++ prior to -std=c++23 explicitly
> cannot be integrated with C11 stdatomic.h. Basically, you can't include
> the header and you can't use `_Atomic' type specifier to declare atomic
> types. This is not a problem with LLVM or MSVC as they both allow
> integration with C11 stdatomic.h, but going forward with C11 atomics
> would break using DPDK in C++ programs when building with GNU g++.
>
> Essentially you cannot compile the following with g++.
>
> #include <stdatomic.h>
>
> int main(int argc, char *argv[]) { return 0; }
>
> In file included from atomic.cpp:1:
> /usr/lib/gcc/x86_64-pc-cygwin/11/include/stdatomic.h:40:9: error:
> ‘_Atomic’ does not name a type
> 40 | typedef _Atomic _Bool atomic_bool;
>
> ... more errors of same ...
>
> It's also acknowledged as something known and won't fix by GNU g++
> maintainers.
>
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=60932
>
> Given the timeframe I would like to propose the minimally invasive,
> lowest risk solution as follows.
>
> 1. Adopt stdatomic.h for all Windows targets, leave all Linux/BSD targets
> using GCC builtin C++11 memory model atomics.
> 2. Introduce a macro that allows _Atomic type specifier to be applied to
> function parameter, structure field types and variable declarations.
>
> * The macro would expand empty for Linux/BSD targets.
> * The macro would expand to C11 _Atomic keyword for Windows targets.
>
> 3. Introduce basic macro that allows __atomic_xxx for normalized use
> internal to DPDK.
>
> * The macro would not be defined for Linux/BSD targets.
> * The macro would expand __atomic_xxx to corresponding stdatomic.h
> atomic_xxx operations for Windows targets.
>
> 4. We re-evaluate adoption of C11 atomics and corresponding requirement of
> -std=c++23 compliant compiler at the next long term ABI promise release.
>
> Q: Why not define macros that look like the standard and expand those
> names to builtins?
> A: Because introducing the names is a violation of the C standard, we
> can't / shouldn't define atomic_xxx names in the applications namespace
> as we are not ``the implementation''.
> A: Because the builtins offer a subset of stdatomic.h capability they
> can only operate on pointer and integer types. If we presented the
> stdatomic.h names there might be some confusion attempting to perform
> atomic operations on e.g. _Atomic specified struct would fail but only
> on BSD/Linux builds (with the proposed solution).
>
Out of interest, rather than splitting on Windows vs *nix OS for the
atomics, what would it look like if we split behaviour based on C vs C++
use? Would such a thing work?
Also, just wondering about the scope of the changes here. How many header
files are affected where we publicly expose atomics?
Thanks,
/Bruce
* Re: [PATCH 00/20] remove experimental flag from some API's
2023-08-08 17:35 3% [PATCH 00/20] remove experimental flag from some API's Stephen Hemminger
@ 2023-08-08 18:19 0% ` Tyler Retzlaff
2023-08-08 21:33 0% ` Stephen Hemminger
2023-08-09 0:09 3% ` [PATCH v2 00/29] promote many API's to stable Stephen Hemminger
1 sibling, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-08-08 18:19 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev
On Tue, Aug 08, 2023 at 10:35:07AM -0700, Stephen Hemminger wrote:
> Since 23.11 is an LTS release it is time to remove the experimental
> bandaid off many API's. There are about 850 API's marked with experimental
> on current main branch. This addresses the easy to remove ones and
> gets it down to about 690 places.
>
> The rule is any API that has been in since 22.11 needs to have
> experimental removed (or deleted). The experimental flag is not a
> "get out of ABI stability for free" card.
For the libraries here that are enabled for Windows, do the APIs being
marked stable have real implementations, or just stubs on Windows?
If they are just stubs, then I think more review is necessary for the
stubbed APIs, to understand whether they *can* be implemented on Windows.
I would prefer not to have to encounter this later and have to go
through the overhead of deprecation like with rte_thread_ctrl_create
again.
This obviously doesn't apply to libraries that are not currently enabled
for Windows. If the implementations aren't stubs then that's okay too.
Ty
* C11 atomics adoption blocked
@ 2023-08-08 17:53 3% Tyler Retzlaff
2023-08-08 18:23 0% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-08-08 17:53 UTC (permalink / raw)
To: dev, techboard; +Cc: thomas, david.marchand, Honnappa.Nagarahalli, mb
Hi folks,
Moving this discussion to the dev mailing list for broader comment.
Unfortunately, we've hit a roadblock with integrating C11 atomics
for DPDK. The main issue is that GNU C++ prior to -std=c++23 explicitly
cannot be integrated with C11 stdatomic.h. Basically, you can't include
the header and you can't use `_Atomic' type specifier to declare atomic
types. This is not a problem with LLVM or MSVC as they both allow
integration with C11 stdatomic.h, but going forward with C11 atomics
would break using DPDK in C++ programs when building with GNU g++.
Essentially you cannot compile the following with g++.
#include <stdatomic.h>
int main(int argc, char *argv[]) { return 0; }
In file included from atomic.cpp:1:
/usr/lib/gcc/x86_64-pc-cygwin/11/include/stdatomic.h:40:9: error:
‘_Atomic’ does not name a type
40 | typedef _Atomic _Bool atomic_bool;
... more errors of same ...
It's also acknowledged as something known and won't fix by GNU g++
maintainers.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=60932
Given the timeframe I would like to propose the minimally invasive,
lowest risk solution as follows.
1. Adopt stdatomic.h for all Windows targets, leave all Linux/BSD targets
using GCC builtin C++11 memory model atomics.
2. Introduce a macro that allows _Atomic type specifier to be applied to
function parameter, structure field types and variable declarations.
* The macro would expand empty for Linux/BSD targets.
* The macro would expand to C11 _Atomic keyword for Windows targets.
3. Introduce basic macro that allows __atomic_xxx for normalized use
internal to DPDK.
* The macro would not be defined for Linux/BSD targets.
* The macro would expand __atomic_xxx to corresponding stdatomic.h
atomic_xxx operations for Windows targets.
4. We re-evaluate adoption of C11 atomics and corresponding requirement of
-std=c++23 compliant compiler at the next long term ABI promise release.
Q: Why not define macros that look like the standard and expand those
names to builtins?
A: Because introducing the names is a violation of the C standard; we
can't / shouldn't define atomic_xxx names in the application's namespace
as we are not ``the implementation''.
A: Because the builtins offer only a subset of stdatomic.h capability:
they can only operate on pointer and integer types. If we presented the
stdatomic.h names, there might be some confusion: attempting to perform
atomic operations on e.g. an _Atomic specified struct would fail, but
only on BSD/Linux builds (with the proposed solution).
Please comment asap as we have limited time to define the path forward
within the 23.11 merge window.
Your help is appreciated.
Thanks
* [PATCH 00/20] remove experimental flag from some API's
@ 2023-08-08 17:35 3% Stephen Hemminger
2023-08-08 18:19 0% ` Tyler Retzlaff
2023-08-09 0:09 3% ` [PATCH v2 00/29] promote many API's to stable Stephen Hemminger
0 siblings, 2 replies; 200+ results
From: Stephen Hemminger @ 2023-08-08 17:35 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Since 23.11 is an LTS release it is time to remove the experimental
bandaid off many API's. There are about 850 API's marked with experimental
on current main branch. This addresses the easy to remove ones and
gets it down to about 690 places.
The rule is any API that has been in since 22.11 needs to have
experimental removed (or deleted). The experimental flag is not a
"get out of ABI stability for free" card.
Stephen Hemminger (20):
bpf: make rte_bpf_dump and rte_bpf_convert stable API's
cmdline: make experimental API's stable
ethdev: mark rte_mtr API's as stable
ethdev: mark rte_tm API's as stable
pdump: make API's stable
pcapng: mark API's as stable
net: remove experimental from functions
rcu: remove experimental from rte_rcu_qbsr
lpm: remove experimental
mbuf: remove experimental from create_extbuf
hash: remove experimental from toeplitz hash
timer: remove experimental from rte_timer_next_ticks
sched: remove experimental
dmadev: mark API's as not experimental
meter: remove experimental warning from comments
power: remove experimental from API's
kvargs: remove experimental flag
ip_frag: mark a couple of functions stable
member: remove experimental tag
security: remove experimental flag
lib/bpf/rte_bpf.h | 2 -
lib/bpf/version.map | 9 +--
lib/cmdline/cmdline.h | 1 -
lib/cmdline/cmdline_parse.h | 4 --
lib/cmdline/cmdline_rdline.h | 4 --
lib/cmdline/version.map | 26 +++------
lib/dmadev/rte_dmadev.h | 85 ----------------------------
lib/dmadev/version.map | 2 +-
lib/ethdev/rte_mtr.h | 25 +-------
lib/ethdev/rte_tm.h | 34 -----------
lib/ethdev/version.map | 88 ++++++++++++++---------------
lib/hash/rte_thash.h | 44 ---------------
lib/hash/rte_thash_gfni.h | 8 ---
lib/hash/rte_thash_x86_gfni.h | 8 ---
lib/hash/version.map | 16 ++----
lib/ip_frag/rte_ip_frag.h | 2 -
lib/ip_frag/version.map | 9 +--
lib/kvargs/rte_kvargs.h | 4 --
lib/kvargs/version.map | 8 +--
lib/lpm/rte_lpm.h | 4 --
lib/lpm/version.map | 7 +--
lib/mbuf/rte_mbuf.h | 1 -
lib/mbuf/version.map | 8 +--
lib/member/rte_member.h | 54 ------------------
lib/member/version.map | 12 +---
lib/meter/rte_meter.h | 12 ----
lib/net/rte_ip.h | 19 -------
lib/pcapng/rte_pcapng.h | 11 ----
lib/pcapng/version.map | 6 +-
lib/pdump/rte_pdump.h | 12 ----
lib/pdump/version.map | 11 +---
lib/power/rte_power.h | 4 --
lib/power/rte_power_guest_channel.h | 4 --
lib/power/rte_power_intel_uncore.h | 9 ---
lib/power/rte_power_pmd_mgmt.h | 40 -------------
lib/power/version.map | 33 ++++-------
lib/rcu/rte_rcu_qsbr.h | 20 -------
lib/rcu/version.map | 15 ++---
lib/sched/rte_pie.h | 8 ---
lib/sched/rte_sched.h | 5 --
lib/sched/version.map | 18 ++----
lib/security/rte_security.h | 35 ------------
lib/security/version.map | 17 ++----
lib/timer/rte_timer.h | 4 --
lib/timer/version.map | 7 +--
45 files changed, 97 insertions(+), 658 deletions(-)
--
2.39.2
* Re: [PATCH] eventdev: fix alignment padding
2023-08-08 10:24 0% ` Jerin Jacob
@ 2023-08-08 10:25 0% ` Jerin Jacob
0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2023-08-08 10:25 UTC (permalink / raw)
To: Morten Brørup; +Cc: Mattias Rönnblom, Sivaprasad Tummala, jerinj, dev
On Tue, Aug 8, 2023 at 3:54 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>
> On Wed, Aug 2, 2023 at 9:49 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
> >
> > On Tue, May 23, 2023 at 8:45 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
> > >
> > > On Wed, May 17, 2023 at 7:05 PM Morten Brørup <mb@smartsharesystems.com> wrote:
> > > >
> >
> > > Shiva,
> > >
> > > Please send ABI change notice for this for 23.11 NOW.
> > > Once it is Acked and merged. I will merge the patch for 23.11 release.
> > >
> > > I am marking the patch as DEFERRED in patchwork and next release
> > > window it will come as NEW in patchwork.
> >
> >
> > Any objection to merge this?
>
>
> pahole output after the change,
>
> [for-main]dell[dpdk-next-eventdev] $ pahole build/app/test/dpdk-test
> -C rte_event_fp_ops
> struct rte_event_fp_ops {
> void * * data; /* 0 8 */
> event_enqueue_t enqueue; /* 8 8 */
> event_enqueue_burst_t enqueue_burst; /* 16 8 */
> event_enqueue_burst_t enqueue_new_burst; /* 24 8 */
> event_enqueue_burst_t enqueue_forward_burst; /* 32 8 */
> event_dequeue_t dequeue; /* 40 8 */
> event_dequeue_burst_t dequeue_burst; /* 48 8 */
> event_maintain_t maintain; /* 56 8 */
> /* --- cacheline 1 boundary (64 bytes) --- */
> event_tx_adapter_enqueue_t txa_enqueue; /* 64 8 */
> event_tx_adapter_enqueue_t txa_enqueue_same_dest; /* 72 8 */
> event_crypto_adapter_enqueue_t ca_enqueue; /* 80 8 */
> uintptr_t reserved[5]; /* 88 40 */
>
> /* size: 128, cachelines: 2, members: 12 */
> } __attribute__((__aligned__(64)));
>
> Acked-by: Jerin Jacob <jerinj@marvell.com>
Applied to dpdk-next-net-eventdev/for-main. Thanks
* Re: [PATCH] eventdev: fix alignment padding
2023-08-02 16:19 0% ` Jerin Jacob
@ 2023-08-08 10:24 0% ` Jerin Jacob
2023-08-08 10:25 0% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2023-08-08 10:24 UTC (permalink / raw)
To: Morten Brørup; +Cc: Mattias Rönnblom, Sivaprasad Tummala, jerinj, dev
On Wed, Aug 2, 2023 at 9:49 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>
> On Tue, May 23, 2023 at 8:45 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
> >
> > On Wed, May 17, 2023 at 7:05 PM Morten Brørup <mb@smartsharesystems.com> wrote:
> > >
>
> > Shiva,
> >
> > Please send ABI change notice for this for 23.11 NOW.
> > Once it is Acked and merged. I will merge the patch for 23.11 release.
> >
> > I am marking the patch as DEFERRED in patchwork and next release
> > window it will come as NEW in patchwork.
>
>
> Any objection to merge this?
pahole output after the change,
[for-main]dell[dpdk-next-eventdev] $ pahole build/app/test/dpdk-test
-C rte_event_fp_ops
struct rte_event_fp_ops {
void * * data; /* 0 8 */
event_enqueue_t enqueue; /* 8 8 */
event_enqueue_burst_t enqueue_burst; /* 16 8 */
event_enqueue_burst_t enqueue_new_burst; /* 24 8 */
event_enqueue_burst_t enqueue_forward_burst; /* 32 8 */
event_dequeue_t dequeue; /* 40 8 */
event_dequeue_burst_t dequeue_burst; /* 48 8 */
event_maintain_t maintain; /* 56 8 */
/* --- cacheline 1 boundary (64 bytes) --- */
event_tx_adapter_enqueue_t txa_enqueue; /* 64 8 */
event_tx_adapter_enqueue_t txa_enqueue_same_dest; /* 72 8 */
event_crypto_adapter_enqueue_t ca_enqueue; /* 80 8 */
uintptr_t reserved[5]; /* 88 40 */
/* size: 128, cachelines: 2, members: 12 */
} __attribute__((__aligned__(64)));
Acked-by: Jerin Jacob <jerinj@marvell.com>
* Re: [PATCH] ethdev: add new symmetric hash function
@ 2023-08-08 1:43 3% ` fengchengwen
2023-08-09 12:00 0% ` Xueming(Steven) Li
0 siblings, 1 reply; 200+ results
From: fengchengwen @ 2023-08-08 1:43 UTC (permalink / raw)
To: Ivan Malov, Xueming Li; +Cc: Ori Kam, dev
On 2023/8/8 6:32, Ivan Malov wrote:
> Hi,
>
> Please see my notes below.
>
> On Mon, 7 Aug 2023, Xueming Li wrote:
>
>> The new symmetric hash function swap src/dst L3 address and
>> L4 ports automatically by sorting.
>>
>> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
>> ---
>> lib/ethdev/rte_flow.h | 5 +++++
>> 1 file changed, 5 insertions(+)
>>
>> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
>> index 86ed98c562..ec6dd170b5 100644
>> --- a/lib/ethdev/rte_flow.h
>> +++ b/lib/ethdev/rte_flow.h
>> @@ -3204,6 +3204,11 @@ enum rte_eth_hash_function {
>> * src or dst address will xor with zero pair.
>> */
>> RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ,
>> + /**
>> + * Symmetric Toeplitz: src, dst will be swapped
>> + * automatically by sorting.
>
> This is very vague. Consider:
>
> For symmetric Toeplitz, four inputs are prepared as follows:
> - src_addr | dst_addr
> - src_addr ^ dst_addr
> - src_port | dst_port
> - src_port ^ dst_port
> and then passed to the regular Toeplitz function.
>
> It is important to be as specific as possible
> so that readers don't have to guess.
+1 for this. I tried to understand it and searched for it, but couldn't
find useful info.
Also, how does this new algo behave when only src or only dst is used?
>
> Thank you.
>
>> + */
>> + RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ_SORT,
>> RTE_ETH_HASH_FUNCTION_MAX,
The new value will break the definition of MAX (though it is maybe ABI
compatible). But I found that only the hns3 driver uses
RTE_ETH_HASH_FUNCTION_MAX; I'm not sure whether applications use it.
>> };
>>
>> --
>> 2.25.1
>>
>>
>
> .
* RE: [PATCH v2 0/5] bbdev: API extension for 23.11
2023-07-17 22:28 0% ` Chautru, Nicolas
@ 2023-08-04 16:14 0% ` Vargas, Hernan
0 siblings, 0 replies; 200+ results
From: Vargas, Hernan @ 2023-08-04 16:14 UTC (permalink / raw)
To: Chautru, Nicolas, dev, maxime.coquelin
Cc: Rix, Tom, hemant.agrawal, david.marchand
Hi Maxime,
Kind reminder to get a review on this series:
https://patchwork.dpdk.org/project/dpdk/list/?series=28544
Thanks,
Hernan
> -----Original Message-----
> From: Chautru, Nicolas <nicolas.chautru@intel.com>
> Sent: Monday, July 17, 2023 5:29 PM
> To: dev@dpdk.org; maxime.coquelin@redhat.com
> Cc: Rix, Tom <trix@redhat.com>; hemant.agrawal@nxp.com;
> david.marchand@redhat.com; Vargas, Hernan <hernan.vargas@intel.com>
> Subject: RE: [PATCH v2 0/5] bbdev: API extension for 23.11
>
> Hi Maxime, Hemant,
> Can I get some review/ack for this serie please.
> Thanks
> Nic
>
> > -----Original Message-----
> > From: Chautru, Nicolas <nicolas.chautru@intel.com>
> > Sent: Thursday, June 15, 2023 9:49 AM
> > To: dev@dpdk.org; maxime.coquelin@redhat.com
> > Cc: Rix, Tom <trix@redhat.com>; hemant.agrawal@nxp.com;
> > david.marchand@redhat.com; Vargas, Hernan <hernan.vargas@intel.com>;
> > Chautru, Nicolas <nicolas.chautru@intel.com>
> > Subject: [PATCH v2 0/5] bbdev: API extension for 23.11
> >
> > v2: moving the new mld functions at the end of struct rte_bbdev to
> > avoid ABI offset changes based on feedback with Maxime.
> > Adding a commit to waive the FFT ABI warning since that experimental
> > function could break ABI (let me know if preferred to be merged with
> > the FFT commit causing the FFT change).
> >
> >
> > Including v1 for extending the bbdev api for 23.11.
> > The new MLD-TS is expected to be non ABI compatible, the other ones
> > should not break ABI.
> > I will send a deprecation notice in parallel.
> >
> > This introduces a new operation (on top of FEC and FFT) to support
> > equalization for MLD-TS. There also more modular API extension for
> > existing FFT and FEC operation.
> >
> > Thanks
> > Nic
> >
> >
> > Nicolas Chautru (5):
> > bbdev: add operation type for MLDTS procession
> > bbdev: add new capabilities for FFT processing
> > bbdev: add new capability for FEC 5G UL processing
> > bbdev: improving error handling for queue configuration
> > devtools: ignore changes into bbdev experimental API
> >
> > devtools/libabigail.abignore | 4 +-
> > doc/guides/prog_guide/bbdev.rst | 83 ++++++++++++++++++
> > lib/bbdev/rte_bbdev.c | 26 +++---
> > lib/bbdev/rte_bbdev.h | 76 +++++++++++++++++
> > lib/bbdev/rte_bbdev_op.h | 143
> > +++++++++++++++++++++++++++++++-
> > lib/bbdev/version.map | 5 ++
> > 6 files changed, 323 insertions(+), 14 deletions(-)
> >
> > --
> > 2.34.1
* [PATCH v10 1/4] ethdev: add API for mbufs recycle mode
@ 2023-08-04 9:24 3% ` Feifei Wang
0 siblings, 0 replies; 200+ results
From: Feifei Wang @ 2023-08-04 9:24 UTC (permalink / raw)
To: Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Cc: dev, nd, Feifei Wang, Honnappa Nagarahalli, Ruifeng Wang,
Morten Brørup
Add 'rte_eth_recycle_rx_queue_info_get' and 'rte_eth_recycle_mbufs'
APIs to recycle used mbufs from a transmit queue of an Ethernet device,
and move these mbufs into a mbuf ring for a receive queue of an Ethernet
device. This can bypass mempool 'put/get' operations hence saving CPU
cycles.
When recycling mbufs, the rte_eth_recycle_mbufs() function performs
the following operations:
- Copy used *rte_mbuf* buffer pointers from Tx mbuf ring into Rx mbuf
ring.
- Replenish the Rx descriptors with the recycled *rte_mbuf* mbufs freed
from the Tx mbuf ring.
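The two operations above can be sketched as a self-contained toy model (illustrative names only; this is not the DPDK API, which operates on the PMDs' real descriptor rings):

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the two-step recycle path described above, NOT the DPDK
 * API. Buffer pointers completed by "Tx" are handed straight to "Rx"
 * instead of going through a central pool. All names are illustrative. */
static uint16_t
toy_recycle_mbufs(void **tx_ring, uint16_t tx_used,
		  void **rx_ring, uint16_t rx_free)
{
	/* Limited both by completed Tx buffers and by Rx ring space. */
	uint16_t n = tx_used < rx_free ? tx_used : rx_free;

	/* Step 1: copy used buffer pointers from the Tx ring into the
	 * Rx ring. */
	for (uint16_t i = 0; i < n; i++)
		rx_ring[i] = tx_ring[i];

	/* Step 2 (descriptor refill) would advance the Rx refill head
	 * by n; here we just report how many buffers were recycled. */
	return n;
}
```

The point of the model is that no buffer ever visits a shared pool: recycling is a bounded pointer copy between two per-queue rings.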
Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Suggested-by: Ruifeng Wang <ruifeng.wang@arm.com>
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
doc/guides/rel_notes/release_23_11.rst | 15 ++
lib/ethdev/ethdev_driver.h | 10 ++
lib/ethdev/ethdev_private.c | 2 +
lib/ethdev/rte_ethdev.c | 31 +++++
lib/ethdev/rte_ethdev.h | 181 +++++++++++++++++++++++++
lib/ethdev/rte_ethdev_core.h | 23 +++-
lib/ethdev/version.map | 3 +
7 files changed, 259 insertions(+), 6 deletions(-)
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 6b4dd21fd0..fd16d267ae 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -55,6 +55,13 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Added mbuf recycling support.**
+
+ Added ``rte_eth_recycle_rx_queue_info_get`` and ``rte_eth_recycle_mbufs``
+ APIs which allow the user to copy used mbufs from the Tx mbuf ring
+ into the Rx mbuf ring. The Rx Ethernet device may differ from the
+ Tx Ethernet device, with the respective driver callback functions
+ invoked through ``rte_eth_recycle_mbufs``.
Removed Items
-------------
@@ -100,6 +107,14 @@ ABI Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* ethdev: Added ``recycle_tx_mbufs_reuse`` and ``recycle_rx_descriptors_refill``
+ fields to ``rte_eth_dev`` structure.
+
+* ethdev: Structure ``rte_eth_fp_ops`` was changed: the
+ ``recycle_tx_mbufs_reuse`` and ``recycle_rx_descriptors_refill``
+ fields were added, the ``rxq`` and ``txq`` fields were moved, and the
+ ``reserved1`` and ``reserved2`` fields were resized.
+
Known Issues
------------
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 980f837ab6..b0c55a8523 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -58,6 +58,10 @@ struct rte_eth_dev {
eth_rx_descriptor_status_t rx_descriptor_status;
/** Check the status of a Tx descriptor */
eth_tx_descriptor_status_t tx_descriptor_status;
+ /** Pointer to PMD transmit mbufs reuse function */
+ eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
+ /** Pointer to PMD receive descriptors refill function */
+ eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
/**
* Device data that is shared between primary and secondary processes
@@ -507,6 +511,10 @@ typedef void (*eth_rxq_info_get_t)(struct rte_eth_dev *dev,
typedef void (*eth_txq_info_get_t)(struct rte_eth_dev *dev,
uint16_t tx_queue_id, struct rte_eth_txq_info *qinfo);
+typedef void (*eth_recycle_rxq_info_get_t)(struct rte_eth_dev *dev,
+ uint16_t rx_queue_id,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info);
+
typedef int (*eth_burst_mode_get_t)(struct rte_eth_dev *dev,
uint16_t queue_id, struct rte_eth_burst_mode *mode);
@@ -1250,6 +1258,8 @@ struct eth_dev_ops {
eth_rxq_info_get_t rxq_info_get;
/** Retrieve Tx queue information */
eth_txq_info_get_t txq_info_get;
+ /** Retrieve mbufs recycle Rx queue information */
+ eth_recycle_rxq_info_get_t recycle_rxq_info_get;
eth_burst_mode_get_t rx_burst_mode_get; /**< Get Rx burst mode */
eth_burst_mode_get_t tx_burst_mode_get; /**< Get Tx burst mode */
eth_fw_version_get_t fw_version_get; /**< Get firmware version */
diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 14ec8c6ccf..f8ab64f195 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -277,6 +277,8 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
fpo->rx_queue_count = dev->rx_queue_count;
fpo->rx_descriptor_status = dev->rx_descriptor_status;
fpo->tx_descriptor_status = dev->tx_descriptor_status;
+ fpo->recycle_tx_mbufs_reuse = dev->recycle_tx_mbufs_reuse;
+ fpo->recycle_rx_descriptors_refill = dev->recycle_rx_descriptors_refill;
fpo->rxq.data = dev->data->rx_queues;
fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 0840d2b594..ea89a101a1 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -5876,6 +5876,37 @@ rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
return 0;
}
+int
+rte_eth_recycle_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info)
+{
+ struct rte_eth_dev *dev;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
+
+ if (queue_id >= dev->data->nb_rx_queues) {
+ RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id);
+ return -EINVAL;
+ }
+
+ if (dev->data->rx_queues == NULL ||
+ dev->data->rx_queues[queue_id] == NULL) {
+ RTE_ETHDEV_LOG(ERR,
+ "Rx queue %"PRIu16" of device with port_id=%"
+ PRIu16" has not been setup\n",
+ queue_id, port_id);
+ return -EINVAL;
+ }
+
+ if (*dev->dev_ops->recycle_rxq_info_get == NULL)
+ return -ENOTSUP;
+
+ dev->dev_ops->recycle_rxq_info_get(dev, queue_id, recycle_rxq_info);
+
+ return 0;
+}
+
int
rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
struct rte_eth_burst_mode *mode)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 04a2564f22..9dc5749d83 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1820,6 +1820,30 @@ struct rte_eth_txq_info {
uint8_t queue_state; /**< one of RTE_ETH_QUEUE_STATE_*. */
} __rte_cache_min_aligned;
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice.
+ *
+ * Ethernet device Rx queue information structure for recycling mbufs.
+ * Used to retrieve Rx queue information when the Tx queue is reusing mbufs
+ * and moving them into the Rx mbuf ring.
+ */
+struct rte_eth_recycle_rxq_info {
+ struct rte_mbuf **mbuf_ring; /**< mbuf ring of Rx queue. */
+ struct rte_mempool *mp; /**< mempool of Rx queue. */
+ uint16_t *refill_head; /**< head of Rx queue refilling mbufs. */
+ uint16_t *receive_tail; /**< tail of Rx queue receiving pkts. */
+ uint16_t mbuf_ring_size; /**< configured size of the mbuf ring. */
+ /**
+ * Requirement on the batch size used to refill the Rx mbuf ring.
+ * For some PMDs, the number of mbufs refilled into the Rx mbuf ring
+ * should be aligned with the mbuf ring size, in order to simplify
+ * ring wrap-around handling.
+ * Value 0 means the PMD has no such requirement.
+ */
+ uint16_t refill_requirement;
+} __rte_cache_min_aligned;
+
/* Generic Burst mode flag definition, values can be ORed. */
/**
@@ -4853,6 +4877,31 @@ int rte_eth_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
int rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
struct rte_eth_txq_info *qinfo);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Retrieve information about the given port's Rx queue for recycling mbufs.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param queue_id
+ * The Rx queue on the Ethernet device for which information
+ * will be retrieved.
+ * @param recycle_rxq_info
+ * A pointer to a structure of type *rte_eth_recycle_rxq_info* to be filled.
+ *
+ * @return
+ * - 0: Success
+ * - -ENODEV: If *port_id* is invalid.
+ * - -ENOTSUP: routine is not supported by the device PMD.
+ * - -EINVAL: The queue_id is out of range.
+ */
+__rte_experimental
+int rte_eth_recycle_rx_queue_info_get(uint16_t port_id,
+ uint16_t queue_id,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info);
+
/**
* Retrieve information about the Rx packet burst mode.
*
@@ -6527,6 +6576,138 @@ rte_eth_tx_buffer(uint16_t port_id, uint16_t queue_id,
return rte_eth_tx_buffer_flush(port_id, queue_id, buffer);
}
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Recycle used mbufs from a transmit queue of an Ethernet device, and move
+ * these mbufs into a mbuf ring for a receive queue of an Ethernet device.
+ * This can bypass the mempool path to save CPU cycles.
+ *
+ * The rte_eth_recycle_mbufs() function runs in a loop together with the
+ * rte_eth_rx_burst() and rte_eth_tx_burst() functions, freeing used Tx
+ * mbufs and replenishing Rx descriptors. The number of recycled mbufs
+ * depends on what the Rx mbuf ring requests, constrained by how many used
+ * mbufs are available in the Tx mbuf ring.
+ *
+ * For each recycled mbuf, the rte_eth_recycle_mbufs() function performs
+ * the following operations:
+ *
+ * - Copy used *rte_mbuf* buffer pointers from Tx mbuf ring into Rx mbuf ring.
+ *
+ * - Replenish the Rx descriptors with the recycled *rte_mbuf* mbufs freed
+ * from the Tx mbuf ring.
+ *
+ * This function splits the Rx and Tx paths using different callback
+ * functions: recycle_tx_mbufs_reuse for the Tx driver and
+ * recycle_rx_descriptors_refill for the Rx driver. This split allows
+ * rte_eth_recycle_mbufs() to support the case where the Rx Ethernet
+ * device differs from the Tx Ethernet device.
+ *
+ * It is the responsibility of the user to select the Rx/Tx queue pair
+ * used to recycle mbufs. Before calling this function, the user must call
+ * the rte_eth_recycle_rx_queue_info_get() function to retrieve the
+ * selected Rx queue information.
+ * @see rte_eth_recycle_rx_queue_info_get, struct rte_eth_recycle_rxq_info
+ *
+ * Currently, the rte_eth_recycle_mbufs() function supports feeding one Rx
+ * queue from two Tx queues in the same thread. Do not pair an Rx queue
+ * with a Tx queue belonging to a different thread, in order to avoid
+ * concurrent memory overwrites.
+ *
+ * @param rx_port_id
+ * Port identifying the receive side.
+ * @param rx_queue_id
+ * The index of the receive queue identifying the receive side.
+ * The value must be in the range [0, nb_rx_queue - 1] previously supplied
+ * to rte_eth_dev_configure().
+ * @param tx_port_id
+ * Port identifying the transmit side.
+ * @param tx_queue_id
+ * The index of the transmit queue identifying the transmit side.
+ * The value must be in the range [0, nb_tx_queue - 1] previously supplied
+ * to rte_eth_dev_configure().
+ * @param recycle_rxq_info
+ * A pointer to a structure of type *rte_eth_recycle_rxq_info* which contains
+ * the information of the Rx queue mbuf ring.
+ * @return
+ * The number of recycled mbufs.
+ */
+__rte_experimental
+static inline uint16_t
+rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id,
+ uint16_t tx_port_id, uint16_t tx_queue_id,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info)
+{
+ struct rte_eth_fp_ops *p;
+ void *qd;
+ uint16_t nb_mbufs;
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+ if (tx_port_id >= RTE_MAX_ETHPORTS ||
+ tx_queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid tx_port_id=%u or tx_queue_id=%u\n",
+ tx_port_id, tx_queue_id);
+ return 0;
+ }
+#endif
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[tx_port_id];
+ qd = p->txq.data[tx_queue_id];
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(tx_port_id, 0);
+
+ if (qd == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n",
+ tx_queue_id, tx_port_id);
+ return 0;
+ }
+#endif
+ if (p->recycle_tx_mbufs_reuse == NULL)
+ return 0;
+
+ /* Copy used *rte_mbuf* buffer pointers from Tx mbuf ring
+ * into Rx mbuf ring.
+ */
+ nb_mbufs = p->recycle_tx_mbufs_reuse(qd, recycle_rxq_info);
+
+ /* If no mbufs were recycled, return 0. */
+ if (nb_mbufs == 0)
+ return 0;
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+ if (rx_port_id >= RTE_MAX_ETHPORTS ||
+ rx_queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR, "Invalid rx_port_id=%u or rx_queue_id=%u\n",
+ rx_port_id, rx_queue_id);
+ return 0;
+ }
+#endif
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[rx_port_id];
+ qd = p->rxq.data[rx_queue_id];
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(rx_port_id, 0);
+
+ if (qd == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n",
+ rx_queue_id, rx_port_id);
+ return 0;
+ }
+#endif
+
+ if (p->recycle_rx_descriptors_refill == NULL)
+ return 0;
+
+ /* Replenish the Rx descriptors with the recycled mbufs
+ * in the Rx mbuf ring.
+ */
+ p->recycle_rx_descriptors_refill(qd, nb_mbufs);
+
+ return nb_mbufs;
+}
+
/**
* @warning
* @b EXPERIMENTAL: this API may change without prior notice
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 46e9721e07..a24ad7a6b2 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -55,6 +55,13 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
/** @internal Check the status of a Tx descriptor */
typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
+/** @internal Copy used mbufs from Tx mbuf ring into Rx mbuf ring */
+typedef uint16_t (*eth_recycle_tx_mbufs_reuse_t)(void *txq,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info);
+
+/** @internal Refill Rx descriptors with the recycling mbufs */
+typedef void (*eth_recycle_rx_descriptors_refill_t)(void *rxq, uint16_t nb);
+
/**
* @internal
* Structure used to hold opaque pointers to internal ethdev Rx/Tx
@@ -83,15 +90,17 @@ struct rte_eth_fp_ops {
* Rx fast-path functions and related data.
* 64-bit systems: occupies first 64B line
*/
+ /** Rx queues data. */
+ struct rte_ethdev_qdata rxq;
/** PMD receive function. */
eth_rx_burst_t rx_pkt_burst;
/** Get the number of used Rx descriptors. */
eth_rx_queue_count_t rx_queue_count;
/** Check the status of a Rx descriptor. */
eth_rx_descriptor_status_t rx_descriptor_status;
- /** Rx queues data. */
- struct rte_ethdev_qdata rxq;
- uintptr_t reserved1[3];
+ /** Refill Rx descriptors with the recycling mbufs. */
+ eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
+ uintptr_t reserved1[2];
/**@}*/
/**@{*/
@@ -99,15 +108,17 @@ struct rte_eth_fp_ops {
* Tx fast-path functions and related data.
* 64-bit systems: occupies second 64B line
*/
+ /** Tx queues data. */
+ struct rte_ethdev_qdata txq;
/** PMD transmit function. */
eth_tx_burst_t tx_pkt_burst;
/** PMD transmit prepare function. */
eth_tx_prep_t tx_pkt_prepare;
/** Check the status of a Tx descriptor. */
eth_tx_descriptor_status_t tx_descriptor_status;
- /** Tx queues data. */
- struct rte_ethdev_qdata txq;
- uintptr_t reserved2[3];
+ /** Copy used mbufs from Tx mbuf ring into Rx. */
+ eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
+ uintptr_t reserved2[2];
/**@}*/
} __rte_cache_aligned;
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index b965d6aa52..eec159dfdd 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -312,6 +312,9 @@ EXPERIMENTAL {
rte_flow_async_action_list_handle_query_update;
rte_flow_async_actions_update;
rte_flow_restore_info_dynflag;
+
+ # added in 23.11
+ rte_eth_recycle_rx_queue_info_get;
};
INTERNAL {
--
2.25.1
^ permalink raw reply [relevance 3%]
* Re: [PATCH] eal/windows: resolve conversion and truncation warnings
2023-08-02 23:44 0% ` Dmitry Kozlyuk
@ 2023-08-03 0:30 0% ` Tyler Retzlaff
0 siblings, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-08-03 0:30 UTC (permalink / raw)
To: Dmitry Kozlyuk
Cc: dev, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam
On Thu, Aug 03, 2023 at 02:44:45AM +0300, Dmitry Kozlyuk wrote:
> 2023-08-02 15:41 (UTC-0700), Tyler Retzlaff:
> > one thing that confuses me a little and this change won't break how the
> > code already works (just makes the cast redundant) is that for mingw
> > sizeof(long) is being reported as 8 bytes.
> >
> > this is in spec relative to the C standard but it does leave me somewhat
> > concerned if struct timespec as defined in the windows headers crosses
> > an abi boundary.
>
> MinGW-w64 shows sizeof(long) == 4 in my tests, both native and cross build.
> Which MinGW setup reports sizeof(long) == 8 on Windows target?
it must have been a dream, i just checked and i get the results you do.
ignore me i'm tired, thanks for checking though.
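The data-model distinction behind this exchange can be probed with a few lines of C (a sketch; is_llp64() and is_lp64() are illustrative helpers, and only the static assertions below are guaranteed by the C standard itself):

```c
#include <assert.h>
#include <stddef.h>

/* Windows targets use LLP64 (long stays 32-bit even in 64-bit builds),
 * while most 64-bit Unix targets use LP64 (64-bit long). The C standard
 * only guarantees the minimum widths asserted here. */
_Static_assert(sizeof(long) >= 4, "long is at least 32 bits");
_Static_assert(sizeof(long long) >= 8, "long long is at least 64 bits");

/* Returns nonzero on a 64-bit target where long remains 32-bit. */
static int
is_llp64(void)
{
	return sizeof(void *) == 8 && sizeof(long) == 4;
}

/* Returns nonzero on a 64-bit target where long is 64-bit. */
static int
is_lp64(void)
{
	return sizeof(void *) == 8 && sizeof(long) == 8;
}
```

This is why a `long` field crossing an ABI boundary between MinGW-built and MSVC-built code is safe only because both follow the same Windows (LLP64) data model.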
^ permalink raw reply [relevance 0%]
* Re: [PATCH 1/2] eal: remove RTE_CPUFLAG_NUMFLAGS
2023-08-02 21:11 2% [PATCH 1/2] eal: remove RTE_CPUFLAG_NUMFLAGS Sivaprasad Tummala
2023-08-02 21:11 3% ` [PATCH 2/2] test/cpuflags: " Sivaprasad Tummala
@ 2023-08-02 23:50 0% ` Stanisław Kardach
2023-08-11 4:02 2% ` Tummala, Sivaprasad
2023-08-11 6:07 3% ` [PATCH v2 1/2] test/cpuflags: removed test for NUMFLAGS Sivaprasad Tummala
2 siblings, 1 reply; 200+ results
From: Stanisław Kardach @ 2023-08-02 23:50 UTC (permalink / raw)
To: Sivaprasad Tummala
Cc: Ruifeng Wang, Min Zhou, David Christensen, Bruce Richardson,
Konstantin Ananyev, dev
On Wed, Aug 2, 2023, 23:12 Sivaprasad Tummala <sivaprasad.tummala@amd.com>
wrote:
> This patch removes RTE_CPUFLAG_NUMFLAGS to allow new CPU
> features without breaking ABI each time.
>
I'm not sure I understand the reason for removing the last element canary.
It's quite useful in the code that you're refactoring.
Isn't it so that you want to essentially remove the test (other commit in
this series)?
Because that I can understand as a forward compatibility measure.
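The replacement pattern under discussion, bounding lookups by the feature-table size instead of a trailing enum sentinel, can be sketched in isolation (all names are illustrative, not the actual EAL code):

```c
#include <assert.h>
#include <stddef.h>

/* Equivalent of DPDK's RTE_DIM(): element count of a static array. */
#define DIM(a) (sizeof(a) / sizeof((a)[0]))

/* Illustrative flag enum with no trailing NUMFLAGS canary: adding a
 * new flag appends an enumerator without shifting any sentinel value,
 * so the enum's ABI footprint never changes. */
enum toy_flag { TOY_FLAG_A, TOY_FLAG_B, TOY_FLAG_C };

static const char *const toy_flag_names[] = { "A", "B", "C" };

static const char *
toy_flag_name(unsigned int feature)
{
	/* Bound the lookup by the table size rather than by an enum
	 * sentinel; out-of-range values are still rejected. */
	if (feature >= DIM(toy_flag_names))
		return NULL;
	return toy_flag_names[feature];
}
```

The trade-off raised in the review is that the canary also served as a compile-time cross-check between the enum and the table, which this table-bounded form gives up.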
>
> Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> ---
> lib/eal/arm/include/rte_cpuflags_32.h | 1 -
> lib/eal/arm/include/rte_cpuflags_64.h | 1 -
> lib/eal/arm/rte_cpuflags.c | 7 +++++--
> lib/eal/loongarch/include/rte_cpuflags.h | 1 -
> lib/eal/loongarch/rte_cpuflags.c | 7 +++++--
> lib/eal/ppc/include/rte_cpuflags.h | 1 -
> lib/eal/ppc/rte_cpuflags.c | 7 +++++--
> lib/eal/riscv/include/rte_cpuflags.h | 1 -
> lib/eal/riscv/rte_cpuflags.c | 7 +++++--
> lib/eal/x86/include/rte_cpuflags.h | 1 -
> lib/eal/x86/rte_cpuflags.c | 7 +++++--
> 11 files changed, 25 insertions(+), 16 deletions(-)
>
> diff --git a/lib/eal/arm/include/rte_cpuflags_32.h
> b/lib/eal/arm/include/rte_cpuflags_32.h
> index 4e254428a2..41ab0d5f21 100644
> --- a/lib/eal/arm/include/rte_cpuflags_32.h
> +++ b/lib/eal/arm/include/rte_cpuflags_32.h
> @@ -43,7 +43,6 @@ enum rte_cpu_flag_t {
> RTE_CPUFLAG_V7L,
> RTE_CPUFLAG_V8L,
> /* The last item */
> - RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
> };
>
> #include "generic/rte_cpuflags.h"
> diff --git a/lib/eal/arm/include/rte_cpuflags_64.h
> b/lib/eal/arm/include/rte_cpuflags_64.h
> index aa7a56d491..ea5193e510 100644
> --- a/lib/eal/arm/include/rte_cpuflags_64.h
> +++ b/lib/eal/arm/include/rte_cpuflags_64.h
> @@ -37,7 +37,6 @@ enum rte_cpu_flag_t {
> RTE_CPUFLAG_SVEBF16,
> RTE_CPUFLAG_AARCH64,
> /* The last item */
> - RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
> };
>
> #include "generic/rte_cpuflags.h"
> diff --git a/lib/eal/arm/rte_cpuflags.c b/lib/eal/arm/rte_cpuflags.c
> index 56e7b2e689..447a8d9f9f 100644
> --- a/lib/eal/arm/rte_cpuflags.c
> +++ b/lib/eal/arm/rte_cpuflags.c
> @@ -139,8 +139,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> {
> const struct feature_entry *feat;
> hwcap_registers_t regs = {0};
> + unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
>
> - if (feature >= RTE_CPUFLAG_NUMFLAGS)
> + if (feature >= num_flags)
> return -ENOENT;
>
> feat = &rte_cpu_feature_table[feature];
> @@ -154,7 +155,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> const char *
> rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
> {
> - if (feature >= RTE_CPUFLAG_NUMFLAGS)
> + unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
> +
> + if (feature >= num_flags)
> return NULL;
> return rte_cpu_feature_table[feature].name;
> }
> diff --git a/lib/eal/loongarch/include/rte_cpuflags.h
> b/lib/eal/loongarch/include/rte_cpuflags.h
> index 1c80779262..9ff8baaa3c 100644
> --- a/lib/eal/loongarch/include/rte_cpuflags.h
> +++ b/lib/eal/loongarch/include/rte_cpuflags.h
> @@ -27,7 +27,6 @@ enum rte_cpu_flag_t {
> RTE_CPUFLAG_LBT_ARM,
> RTE_CPUFLAG_LBT_MIPS,
> /* The last item */
> - RTE_CPUFLAG_NUMFLAGS /**< This should always be the last! */
> };
>
> #include "generic/rte_cpuflags.h"
> diff --git a/lib/eal/loongarch/rte_cpuflags.c
> b/lib/eal/loongarch/rte_cpuflags.c
> index 0a75ca58d4..642eb42509 100644
> --- a/lib/eal/loongarch/rte_cpuflags.c
> +++ b/lib/eal/loongarch/rte_cpuflags.c
> @@ -66,8 +66,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> {
> const struct feature_entry *feat;
> hwcap_registers_t regs = {0};
> + unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
>
> - if (feature >= RTE_CPUFLAG_NUMFLAGS)
> + if (feature >= num_flags)
> return -ENOENT;
>
> feat = &rte_cpu_feature_table[feature];
> @@ -81,7 +82,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> const char *
> rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
> {
> - if (feature >= RTE_CPUFLAG_NUMFLAGS)
> + unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
> +
> + if (feature >= num_flags)
> return NULL;
> return rte_cpu_feature_table[feature].name;
> }
> diff --git a/lib/eal/ppc/include/rte_cpuflags.h
> b/lib/eal/ppc/include/rte_cpuflags.h
> index a88355d170..b74e7a73ee 100644
> --- a/lib/eal/ppc/include/rte_cpuflags.h
> +++ b/lib/eal/ppc/include/rte_cpuflags.h
> @@ -49,7 +49,6 @@ enum rte_cpu_flag_t {
> RTE_CPUFLAG_HTM,
> RTE_CPUFLAG_ARCH_2_07,
> /* The last item */
> - RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
> };
>
> #include "generic/rte_cpuflags.h"
> diff --git a/lib/eal/ppc/rte_cpuflags.c b/lib/eal/ppc/rte_cpuflags.c
> index 61db5c216d..3a639ef45a 100644
> --- a/lib/eal/ppc/rte_cpuflags.c
> +++ b/lib/eal/ppc/rte_cpuflags.c
> @@ -90,8 +90,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> {
> const struct feature_entry *feat;
> hwcap_registers_t regs = {0};
> + unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
>
> - if (feature >= RTE_CPUFLAG_NUMFLAGS)
> + if (feature >= num_flags)
> return -ENOENT;
>
> feat = &rte_cpu_feature_table[feature];
> @@ -105,7 +106,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> const char *
> rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
> {
> - if (feature >= RTE_CPUFLAG_NUMFLAGS)
> + unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
> +
> + if (feature >= num_flags)
> return NULL;
> return rte_cpu_feature_table[feature].name;
> }
> diff --git a/lib/eal/riscv/include/rte_cpuflags.h
> b/lib/eal/riscv/include/rte_cpuflags.h
> index 66e787f898..803c3655ae 100644
> --- a/lib/eal/riscv/include/rte_cpuflags.h
> +++ b/lib/eal/riscv/include/rte_cpuflags.h
> @@ -43,7 +43,6 @@ enum rte_cpu_flag_t {
> RTE_CPUFLAG_RISCV_ISA_Y, /* Reserved */
> RTE_CPUFLAG_RISCV_ISA_Z, /* Reserved */
> /* The last item */
> - RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
> };
>
> #include "generic/rte_cpuflags.h"
> diff --git a/lib/eal/riscv/rte_cpuflags.c b/lib/eal/riscv/rte_cpuflags.c
> index 4f6d29b947..a452261188 100644
> --- a/lib/eal/riscv/rte_cpuflags.c
> +++ b/lib/eal/riscv/rte_cpuflags.c
> @@ -95,8 +95,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> {
> const struct feature_entry *feat;
> hwcap_registers_t regs = {0};
> + unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
>
> - if (feature >= RTE_CPUFLAG_NUMFLAGS)
> + if (feature >= num_flags)
> return -ENOENT;
>
> feat = &rte_cpu_feature_table[feature];
> @@ -110,7 +111,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> const char *
> rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
> {
> - if (feature >= RTE_CPUFLAG_NUMFLAGS)
> + unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
> +
> + if (feature >= num_flags)
> return NULL;
> return rte_cpu_feature_table[feature].name;
> }
> diff --git a/lib/eal/x86/include/rte_cpuflags.h
> b/lib/eal/x86/include/rte_cpuflags.h
> index 92e90fb6e0..7fc6117243 100644
> --- a/lib/eal/x86/include/rte_cpuflags.h
> +++ b/lib/eal/x86/include/rte_cpuflags.h
> @@ -135,7 +135,6 @@ enum rte_cpu_flag_t {
> RTE_CPUFLAG_WAITPKG, /**< UMONITOR/UMWAIT/TPAUSE */
>
> /* The last item */
> - RTE_CPUFLAG_NUMFLAGS, /**< This should always be the
> last! */
> };
>
> #include "generic/rte_cpuflags.h"
> diff --git a/lib/eal/x86/rte_cpuflags.c b/lib/eal/x86/rte_cpuflags.c
> index d6b518251b..00d17c7515 100644
> --- a/lib/eal/x86/rte_cpuflags.c
> +++ b/lib/eal/x86/rte_cpuflags.c
> @@ -149,8 +149,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> const struct feature_entry *feat;
> cpuid_registers_t regs;
> unsigned int maxleaf;
> + unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
>
> - if (feature >= RTE_CPUFLAG_NUMFLAGS)
> + if (feature >= num_flags)
> /* Flag does not match anything in the feature tables */
> return -ENOENT;
>
> @@ -176,7 +177,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
> const char *
> rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
> {
> - if (feature >= RTE_CPUFLAG_NUMFLAGS)
> + unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
> +
> + if (feature >= num_flags)
> return NULL;
> return rte_cpu_feature_table[feature].name;
> }
> --
> 2.34.1
>
>
^ permalink raw reply [relevance 0%]
* Re: [PATCH] eal/windows: resolve conversion and truncation warnings
2023-08-02 22:41 3% ` Tyler Retzlaff
@ 2023-08-02 23:44 0% ` Dmitry Kozlyuk
2023-08-03 0:30 0% ` Tyler Retzlaff
0 siblings, 1 reply; 200+ results
From: Dmitry Kozlyuk @ 2023-08-02 23:44 UTC (permalink / raw)
To: Tyler Retzlaff
Cc: dev, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam
2023-08-02 15:41 (UTC-0700), Tyler Retzlaff:
> one thing that confuses me a little and this change won't break how the
> code already works (just makes the cast redundant) is that for mingw
> sizeof(long) is being reported as 8 bytes.
>
> this is in spec relative to the C standard but it does leave me somewhat
> concerned if struct timespec as defined in the windows headers crosses
> an abi boundary.
MinGW-w64 shows sizeof(long) == 4 in my tests, both native and cross build.
Which MinGW setup reports sizeof(long) == 8 on Windows target?
^ permalink raw reply [relevance 0%]
* [PATCH v4 01/19] mbuf: replace term sanity check
@ 2023-08-02 23:25 5% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-08-02 23:25 UTC (permalink / raw)
To: dev
Cc: Stephen Hemminger, Andrew Rybchenko, Morten Brørup,
Olivier Matz, Steven Webster, Matt Peters
Replace rte_mbuf_sanity_check() with rte_mbuf_verify()
to match the similar macro RTE_VERIFY() in rte_debug.h
The term sanity check is on the Tier 2 list of words
that should be replaced.
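One conventional way to stage such a rename is to keep the old name for a release as a deprecated wrapper. A simplified sketch with made-up names (the actual patch instead tracks the rename through the deprecation notice and the version map):

```c
#include <assert.h>
#include <stddef.h>

struct toy_mbuf { int refcnt; };

/* New name: check basic invariants of the buffer. */
static inline int
toy_mbuf_verify(const struct toy_mbuf *m)
{
	return m != NULL && m->refcnt > 0;
}

/* Old name kept for one release so existing callers still compile,
 * but tagged so every use emits a warning naming the replacement. */
__attribute__((deprecated("use toy_mbuf_verify() instead")))
static inline int
toy_mbuf_sanity_check(const struct toy_mbuf *m)
{
	return toy_mbuf_verify(m);
}
```

Callers migrate at their own pace during the deprecation window, and the wrapper is deleted in the release named by the deprecation notice.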
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
app/test/test_mbuf.c | 28 +++++-----
doc/guides/prog_guide/mbuf_lib.rst | 4 +-
doc/guides/rel_notes/deprecation.rst | 3 ++
doc/guides/rel_notes/release_23_11.rst | 4 ++
drivers/net/avp/avp_ethdev.c | 18 +++----
drivers/net/sfc/sfc_ef100_rx.c | 6 +--
drivers/net/sfc/sfc_ef10_essb_rx.c | 4 +-
drivers/net/sfc/sfc_ef10_rx.c | 4 +-
drivers/net/sfc/sfc_rx.c | 2 +-
examples/ipv4_multicast/main.c | 2 +-
lib/mbuf/rte_mbuf.c | 23 ++++++---
lib/mbuf/rte_mbuf.h | 71 ++++++++++++++------------
lib/mbuf/version.map | 1 +
13 files changed, 94 insertions(+), 76 deletions(-)
diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index efac01806bee..f3f5400e2eca 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -261,8 +261,8 @@ test_one_pktmbuf(struct rte_mempool *pktmbuf_pool)
GOTO_FAIL("Buffer should be continuous");
memset(hdr, 0x55, MBUF_TEST_HDR2_LEN);
- rte_mbuf_sanity_check(m, 1);
- rte_mbuf_sanity_check(m, 0);
+ rte_mbuf_verify(m, 1);
+ rte_mbuf_verify(m, 0);
rte_pktmbuf_dump(stdout, m, 0);
/* this prepend should fail */
@@ -1161,7 +1161,7 @@ test_refcnt_mbuf(void)
#ifdef RTE_EXEC_ENV_WINDOWS
static int
-test_failing_mbuf_sanity_check(struct rte_mempool *pktmbuf_pool)
+test_failing_mbuf_verify(struct rte_mempool *pktmbuf_pool)
{
RTE_SET_USED(pktmbuf_pool);
return TEST_SKIPPED;
@@ -1180,12 +1180,12 @@ mbuf_check_pass(struct rte_mbuf *buf)
}
static int
-test_failing_mbuf_sanity_check(struct rte_mempool *pktmbuf_pool)
+test_failing_mbuf_verify(struct rte_mempool *pktmbuf_pool)
{
struct rte_mbuf *buf;
struct rte_mbuf badbuf;
- printf("Checking rte_mbuf_sanity_check for failure conditions\n");
+ printf("Checking rte_mbuf_verify for failure conditions\n");
/* get a good mbuf to use to make copies */
buf = rte_pktmbuf_alloc(pktmbuf_pool);
@@ -1707,7 +1707,7 @@ test_mbuf_validate_tx_offload(const char *test_name,
GOTO_FAIL("%s: mbuf allocation failed!\n", __func__);
if (rte_pktmbuf_pkt_len(m) != 0)
GOTO_FAIL("%s: Bad packet length\n", __func__);
- rte_mbuf_sanity_check(m, 0);
+ rte_mbuf_verify(m, 0);
m->ol_flags = ol_flags;
m->tso_segsz = segsize;
ret = rte_validate_tx_offload(m);
@@ -1914,7 +1914,7 @@ test_pktmbuf_read(struct rte_mempool *pktmbuf_pool)
GOTO_FAIL("%s: mbuf allocation failed!\n", __func__);
if (rte_pktmbuf_pkt_len(m) != 0)
GOTO_FAIL("%s: Bad packet length\n", __func__);
- rte_mbuf_sanity_check(m, 0);
+ rte_mbuf_verify(m, 0);
data = rte_pktmbuf_append(m, MBUF_TEST_DATA_LEN2);
if (data == NULL)
@@ -1963,7 +1963,7 @@ test_pktmbuf_read_from_offset(struct rte_mempool *pktmbuf_pool)
if (rte_pktmbuf_pkt_len(m) != 0)
GOTO_FAIL("%s: Bad packet length\n", __func__);
- rte_mbuf_sanity_check(m, 0);
+ rte_mbuf_verify(m, 0);
/* prepend an ethernet header */
hdr = (struct ether_hdr *)rte_pktmbuf_prepend(m, hdr_len);
@@ -2108,7 +2108,7 @@ create_packet(struct rte_mempool *pktmbuf_pool,
GOTO_FAIL("%s: mbuf allocation failed!\n", __func__);
if (rte_pktmbuf_pkt_len(pkt_seg) != 0)
GOTO_FAIL("%s: Bad packet length\n", __func__);
- rte_mbuf_sanity_check(pkt_seg, 0);
+ rte_mbuf_verify(pkt_seg, 0);
/* Add header only for the first segment */
if (test_data->flags == MBUF_HEADER && seg == 0) {
hdr_len = sizeof(struct rte_ether_hdr);
@@ -2320,7 +2320,7 @@ test_pktmbuf_ext_shinfo_init_helper(struct rte_mempool *pktmbuf_pool)
GOTO_FAIL("%s: mbuf allocation failed!\n", __func__);
if (rte_pktmbuf_pkt_len(m) != 0)
GOTO_FAIL("%s: Bad packet length\n", __func__);
- rte_mbuf_sanity_check(m, 0);
+ rte_mbuf_verify(m, 0);
ext_buf_addr = rte_malloc("External buffer", buf_len,
RTE_CACHE_LINE_SIZE);
@@ -2484,8 +2484,8 @@ test_pktmbuf_ext_pinned_buffer(struct rte_mempool *std_pool)
GOTO_FAIL("%s: test_pktmbuf_copy(pinned) failed\n",
__func__);
- if (test_failing_mbuf_sanity_check(pinned_pool) < 0)
- GOTO_FAIL("%s: test_failing_mbuf_sanity_check(pinned)"
+ if (test_failing_mbuf_verify(pinned_pool) < 0)
+ GOTO_FAIL("%s: test_failing_mbuf_verify(pinned)"
" failed\n", __func__);
if (test_mbuf_linearize_check(pinned_pool) < 0)
@@ -2859,8 +2859,8 @@ test_mbuf(void)
goto err;
}
- if (test_failing_mbuf_sanity_check(pktmbuf_pool) < 0) {
- printf("test_failing_mbuf_sanity_check() failed\n");
+ if (test_failing_mbuf_verify(pktmbuf_pool) < 0) {
+ printf("test_failing_mbuf_verify() failed\n");
goto err;
}
diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
index 049357c75563..0accb51a98c7 100644
--- a/doc/guides/prog_guide/mbuf_lib.rst
+++ b/doc/guides/prog_guide/mbuf_lib.rst
@@ -266,8 +266,8 @@ can be found in several of the sample applications, for example, the IPv4 Multic
Debug
-----
-In debug mode, the functions of the mbuf library perform sanity checks before any operation (such as, buffer corruption,
-bad type, and so on).
+In debug mode, the functions of the mbuf library perform consistency checks
+before any operation (such as, buffer corruption, bad type, and so on).
Use Cases
---------
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 494b401cda4b..1d8bbb0fd5f6 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -220,3 +220,6 @@ Deprecation Notices
will be deprecated and subsequently removed in DPDK 24.11 release.
Before this, the new port library API (functions rte_swx_port_*)
will gradually transition from experimental to stable status.
+
+* mbuf: The function ``rte_mbuf_sanity_check`` is deprecated as of the
+  DPDK 23.11 release and is replaced by ``rte_mbuf_verify``.
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 6b4dd21fd0e1..8ff07c4a075f 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -84,6 +84,10 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* mbuf: function ``rte_mbuf_sanity_check`` has been renamed to
+ ``rte_mbuf_verify``. The old function name is deprecated
+ and will be removed in DPDK 24.11.
+
ABI Changes
-----------
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index b2a08f563542..b402c7a2ad16 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -1231,7 +1231,7 @@ _avp_mac_filter(struct avp_dev *avp, struct rte_mbuf *m)
#ifdef RTE_LIBRTE_AVP_DEBUG_BUFFERS
static inline void
-__avp_dev_buffer_sanity_check(struct avp_dev *avp, struct rte_avp_desc *buf)
+__avp_dev_buffer_check(struct avp_dev *avp, struct rte_avp_desc *buf)
{
struct rte_avp_desc *first_buf;
struct rte_avp_desc *pkt_buf;
@@ -1272,12 +1272,12 @@ __avp_dev_buffer_sanity_check(struct avp_dev *avp, struct rte_avp_desc *buf)
first_buf->pkt_len, pkt_len);
}
-#define avp_dev_buffer_sanity_check(a, b) \
- __avp_dev_buffer_sanity_check((a), (b))
+#define avp_dev_buffer_check(a, b) \
+ __avp_dev_buffer_check((a), (b))
#else /* RTE_LIBRTE_AVP_DEBUG_BUFFERS */
-#define avp_dev_buffer_sanity_check(a, b) do {} while (0)
+#define avp_dev_buffer_check(a, b) do {} while (0)
#endif
@@ -1302,7 +1302,7 @@ avp_dev_copy_from_buffers(struct avp_dev *avp,
void *pkt_data;
unsigned int i;
- avp_dev_buffer_sanity_check(avp, buf);
+ avp_dev_buffer_check(avp, buf);
/* setup the first source buffer */
pkt_buf = avp_dev_translate_buffer(avp, buf);
@@ -1370,7 +1370,7 @@ avp_dev_copy_from_buffers(struct avp_dev *avp,
rte_pktmbuf_pkt_len(m) = total_length;
m->vlan_tci = vlan_tci;
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
return m;
}
@@ -1614,7 +1614,7 @@ avp_dev_copy_to_buffers(struct avp_dev *avp,
char *pkt_data;
unsigned int i;
- __rte_mbuf_sanity_check(mbuf, 1);
+ __rte_mbuf_verify(mbuf, 1);
m = mbuf;
src_offset = 0;
@@ -1680,7 +1680,7 @@ avp_dev_copy_to_buffers(struct avp_dev *avp,
first_buf->vlan_tci = mbuf->vlan_tci;
}
- avp_dev_buffer_sanity_check(avp, buffers[0]);
+ avp_dev_buffer_check(avp, buffers[0]);
return total_length;
}
@@ -1798,7 +1798,7 @@ avp_xmit_scattered_pkts(void *tx_queue,
#ifdef RTE_LIBRTE_AVP_DEBUG_BUFFERS
for (i = 0; i < nb_pkts; i++)
- avp_dev_buffer_sanity_check(avp, tx_bufs[i]);
+ avp_dev_buffer_check(avp, tx_bufs[i]);
#endif
/* send the packets */
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index 2677003da326..8199b56f2740 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -179,7 +179,7 @@ sfc_ef100_rx_qrefill(struct sfc_ef100_rxq *rxq)
struct sfc_ef100_rx_sw_desc *rxd;
rte_iova_t dma_addr;
- __rte_mbuf_raw_sanity_check(m);
+ __rte_mbuf_raw_verify(m);
dma_addr = rte_mbuf_data_iova_default(m);
if (rxq->flags & SFC_EF100_RXQ_NIC_DMA_MAP) {
@@ -551,7 +551,7 @@ sfc_ef100_rx_process_ready_pkts(struct sfc_ef100_rxq *rxq,
rxq->ready_pkts--;
pkt = sfc_ef100_rx_next_mbuf(rxq);
- __rte_mbuf_raw_sanity_check(pkt);
+ __rte_mbuf_raw_verify(pkt);
RTE_BUILD_BUG_ON(sizeof(pkt->rearm_data[0]) !=
sizeof(rxq->rearm_data));
@@ -575,7 +575,7 @@ sfc_ef100_rx_process_ready_pkts(struct sfc_ef100_rxq *rxq,
struct rte_mbuf *seg;
seg = sfc_ef100_rx_next_mbuf(rxq);
- __rte_mbuf_raw_sanity_check(seg);
+ __rte_mbuf_raw_verify(seg);
seg->data_off = RTE_PKTMBUF_HEADROOM;
diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
index 78bd430363b1..74647e2792b1 100644
--- a/drivers/net/sfc/sfc_ef10_essb_rx.c
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -125,7 +125,7 @@ sfc_ef10_essb_next_mbuf(const struct sfc_ef10_essb_rxq *rxq,
struct rte_mbuf *m;
m = (struct rte_mbuf *)((uintptr_t)mbuf + rxq->buf_stride);
- __rte_mbuf_raw_sanity_check(m);
+ __rte_mbuf_raw_verify(m);
return m;
}
@@ -136,7 +136,7 @@ sfc_ef10_essb_mbuf_by_index(const struct sfc_ef10_essb_rxq *rxq,
struct rte_mbuf *m;
m = (struct rte_mbuf *)((uintptr_t)mbuf + idx * rxq->buf_stride);
- __rte_mbuf_raw_sanity_check(m);
+ __rte_mbuf_raw_verify(m);
return m;
}
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 30a320d0791c..72b03b3bba7a 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -148,7 +148,7 @@ sfc_ef10_rx_qrefill(struct sfc_ef10_rxq *rxq)
struct sfc_ef10_rx_sw_desc *rxd;
rte_iova_t phys_addr;
- __rte_mbuf_raw_sanity_check(m);
+ __rte_mbuf_raw_verify(m);
SFC_ASSERT((id & ~ptr_mask) == 0);
rxd = &rxq->sw_ring[id];
@@ -297,7 +297,7 @@ sfc_ef10_rx_process_event(struct sfc_ef10_rxq *rxq, efx_qword_t rx_ev,
rxd = &rxq->sw_ring[pending++ & ptr_mask];
m = rxd->mbuf;
- __rte_mbuf_raw_sanity_check(m);
+ __rte_mbuf_raw_verify(m);
m->data_off = RTE_PKTMBUF_HEADROOM;
rte_pktmbuf_data_len(m) = seg_len;
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 1dde2c111001..645c8643d1c1 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -120,7 +120,7 @@ sfc_efx_rx_qrefill(struct sfc_efx_rxq *rxq)
++i, id = (id + 1) & rxq->ptr_mask) {
m = objs[i];
- __rte_mbuf_raw_sanity_check(m);
+ __rte_mbuf_raw_verify(m);
rxd = &rxq->sw_desc[id];
rxd->mbuf = m;
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index 6d0a8501eff5..f39658f4e249 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -258,7 +258,7 @@ mcast_out_pkt(struct rte_mbuf *pkt, int use_clone)
hdr->pkt_len = (uint16_t)(hdr->data_len + pkt->pkt_len);
hdr->nb_segs = pkt->nb_segs + 1;
- __rte_mbuf_sanity_check(hdr, 1);
+ __rte_mbuf_verify(hdr, 1);
return hdr;
}
/* >8 End of mcast_out_kt. */
diff --git a/lib/mbuf/rte_mbuf.c b/lib/mbuf/rte_mbuf.c
index 686e797c80c4..91cb2f84f6a1 100644
--- a/lib/mbuf/rte_mbuf.c
+++ b/lib/mbuf/rte_mbuf.c
@@ -363,9 +363,9 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n,
return mp;
}
-/* do some sanity checks on a mbuf: panic if it fails */
+/* do some checks on an mbuf: panic if they fail */
void
-rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
+rte_mbuf_verify(const struct rte_mbuf *m, int is_header)
{
const char *reason;
@@ -373,6 +373,13 @@ rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
rte_panic("%s\n", reason);
}
+/* For ABI compatibility, to be removed in next release */
+void
+rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header)
+{
+ rte_mbuf_verify(m, is_header);
+}
+
int rte_mbuf_check(const struct rte_mbuf *m, int is_header,
const char **reason)
{
@@ -492,7 +499,7 @@ void rte_pktmbuf_free_bulk(struct rte_mbuf **mbufs, unsigned int count)
if (unlikely(m == NULL))
continue;
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
do {
m_next = m->next;
@@ -542,7 +549,7 @@ rte_pktmbuf_clone(struct rte_mbuf *md, struct rte_mempool *mp)
return NULL;
}
- __rte_mbuf_sanity_check(mc, 1);
+ __rte_mbuf_verify(mc, 1);
return mc;
}
@@ -592,7 +599,7 @@ rte_pktmbuf_copy(const struct rte_mbuf *m, struct rte_mempool *mp,
struct rte_mbuf *mc, *m_last, **prev;
/* garbage in check */
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
/* check for request to copy at offset past end of mbuf */
if (unlikely(off >= m->pkt_len))
@@ -656,7 +663,7 @@ rte_pktmbuf_copy(const struct rte_mbuf *m, struct rte_mempool *mp,
}
/* garbage out check */
- __rte_mbuf_sanity_check(mc, 1);
+ __rte_mbuf_verify(mc, 1);
return mc;
}
@@ -667,7 +674,7 @@ rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
unsigned int len;
unsigned int nb_segs;
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
fprintf(f, "dump mbuf at %p, iova=%#" PRIx64 ", buf_len=%u\n", m, rte_mbuf_iova_get(m),
m->buf_len);
@@ -685,7 +692,7 @@ rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len)
nb_segs = m->nb_segs;
while (m && nb_segs != 0) {
- __rte_mbuf_sanity_check(m, 0);
+ __rte_mbuf_verify(m, 0);
fprintf(f, " segment at %p, data=%p, len=%u, off=%u, refcnt=%u\n",
m, rte_pktmbuf_mtod(m, void *),
diff --git a/lib/mbuf/rte_mbuf.h b/lib/mbuf/rte_mbuf.h
index 913c459b1cc6..3bd50d7307b3 100644
--- a/lib/mbuf/rte_mbuf.h
+++ b/lib/mbuf/rte_mbuf.h
@@ -339,13 +339,13 @@ rte_pktmbuf_priv_flags(struct rte_mempool *mp)
#ifdef RTE_LIBRTE_MBUF_DEBUG
-/** check mbuf type in debug mode */
-#define __rte_mbuf_sanity_check(m, is_h) rte_mbuf_sanity_check(m, is_h)
+/** check mbuf type in debug mode */
+#define __rte_mbuf_verify(m, is_h) rte_mbuf_verify(m, is_h)
#else /* RTE_LIBRTE_MBUF_DEBUG */
-/** check mbuf type in debug mode */
-#define __rte_mbuf_sanity_check(m, is_h) do { } while (0)
+/** ignore mbuf checks if not in debug mode */
+#define __rte_mbuf_verify(m, is_h) do { } while (0)
#endif /* RTE_LIBRTE_MBUF_DEBUG */
@@ -514,10 +514,9 @@ rte_mbuf_ext_refcnt_update(struct rte_mbuf_ext_shared_info *shinfo,
/**
- * Sanity checks on an mbuf.
+ * Check that the mbuf is valid and panic if corrupted.
*
- * Check the consistency of the given mbuf. The function will cause a
- * panic if corruption is detected.
+ * Acts as an assertion that the mbuf is consistent; if not, it calls rte_panic().
*
* @param m
* The mbuf to be checked.
@@ -526,13 +525,17 @@ rte_mbuf_ext_refcnt_update(struct rte_mbuf_ext_shared_info *shinfo,
* of a packet (in this case, some fields like nb_segs are not checked)
*/
void
+rte_mbuf_verify(const struct rte_mbuf *m, int is_header);
+
+/* Older deprecated name for rte_mbuf_verify() */
+void __rte_deprecated
rte_mbuf_sanity_check(const struct rte_mbuf *m, int is_header);
/**
- * Sanity checks on a mbuf.
+ * Do consistency checks on an mbuf.
*
- * Almost like rte_mbuf_sanity_check(), but this function gives the reason
- * if corruption is detected rather than panic.
+ * Check the consistency of the given mbuf and, if it is not valid,
+ * return the reason.
*
* @param m
* The mbuf to be checked.
@@ -551,7 +554,7 @@ int rte_mbuf_check(const struct rte_mbuf *m, int is_header,
const char **reason);
/**
- * Sanity checks on a reinitialized mbuf in debug mode.
+ * Do checks on a reinitialized mbuf in debug mode.
*
* Check the consistency of the given reinitialized mbuf.
* The function will cause a panic if corruption is detected.
@@ -563,16 +566,16 @@ int rte_mbuf_check(const struct rte_mbuf *m, int is_header,
* The mbuf to be checked.
*/
static __rte_always_inline void
-__rte_mbuf_raw_sanity_check(__rte_unused const struct rte_mbuf *m)
+__rte_mbuf_raw_verify(__rte_unused const struct rte_mbuf *m)
{
RTE_ASSERT(rte_mbuf_refcnt_read(m) == 1);
RTE_ASSERT(m->next == NULL);
RTE_ASSERT(m->nb_segs == 1);
- __rte_mbuf_sanity_check(m, 0);
+ __rte_mbuf_verify(m, 0);
}
/** For backwards compatibility. */
-#define MBUF_RAW_ALLOC_CHECK(m) __rte_mbuf_raw_sanity_check(m)
+#define MBUF_RAW_ALLOC_CHECK(m) __rte_mbuf_raw_verify(m)
/**
* Allocate an uninitialized mbuf from mempool *mp*.
@@ -599,7 +602,7 @@ static inline struct rte_mbuf *rte_mbuf_raw_alloc(struct rte_mempool *mp)
if (rte_mempool_get(mp, (void **)&m) < 0)
return NULL;
- __rte_mbuf_raw_sanity_check(m);
+ __rte_mbuf_raw_verify(m);
return m;
}
@@ -622,7 +625,7 @@ rte_mbuf_raw_free(struct rte_mbuf *m)
{
RTE_ASSERT(!RTE_MBUF_CLONED(m) &&
(!RTE_MBUF_HAS_EXTBUF(m) || RTE_MBUF_HAS_PINNED_EXTBUF(m)));
- __rte_mbuf_raw_sanity_check(m);
+ __rte_mbuf_raw_verify(m);
rte_mempool_put(m->pool, m);
}
@@ -886,7 +889,7 @@ static inline void rte_pktmbuf_reset(struct rte_mbuf *m)
rte_pktmbuf_reset_headroom(m);
m->data_len = 0;
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
}
/**
@@ -942,22 +945,22 @@ static inline int rte_pktmbuf_alloc_bulk(struct rte_mempool *pool,
switch (count % 4) {
case 0:
while (idx != count) {
- __rte_mbuf_raw_sanity_check(mbufs[idx]);
+ __rte_mbuf_raw_verify(mbufs[idx]);
rte_pktmbuf_reset(mbufs[idx]);
idx++;
/* fall-through */
case 3:
- __rte_mbuf_raw_sanity_check(mbufs[idx]);
+ __rte_mbuf_raw_verify(mbufs[idx]);
rte_pktmbuf_reset(mbufs[idx]);
idx++;
/* fall-through */
case 2:
- __rte_mbuf_raw_sanity_check(mbufs[idx]);
+ __rte_mbuf_raw_verify(mbufs[idx]);
rte_pktmbuf_reset(mbufs[idx]);
idx++;
/* fall-through */
case 1:
- __rte_mbuf_raw_sanity_check(mbufs[idx]);
+ __rte_mbuf_raw_verify(mbufs[idx]);
rte_pktmbuf_reset(mbufs[idx]);
idx++;
/* fall-through */
@@ -1185,8 +1188,8 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *m)
mi->pkt_len = mi->data_len;
mi->nb_segs = 1;
- __rte_mbuf_sanity_check(mi, 1);
- __rte_mbuf_sanity_check(m, 0);
+ __rte_mbuf_verify(mi, 1);
+ __rte_mbuf_verify(m, 0);
}
/**
@@ -1341,7 +1344,7 @@ static inline int __rte_pktmbuf_pinned_extbuf_decref(struct rte_mbuf *m)
static __rte_always_inline struct rte_mbuf *
rte_pktmbuf_prefree_seg(struct rte_mbuf *m)
{
- __rte_mbuf_sanity_check(m, 0);
+ __rte_mbuf_verify(m, 0);
if (likely(rte_mbuf_refcnt_read(m) == 1)) {
@@ -1412,7 +1415,7 @@ static inline void rte_pktmbuf_free(struct rte_mbuf *m)
struct rte_mbuf *m_next;
if (m != NULL)
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
while (m != NULL) {
m_next = m->next;
@@ -1493,7 +1496,7 @@ rte_pktmbuf_copy(const struct rte_mbuf *m, struct rte_mempool *mp,
*/
static inline void rte_pktmbuf_refcnt_update(struct rte_mbuf *m, int16_t v)
{
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
do {
rte_mbuf_refcnt_update(m, v);
@@ -1510,7 +1513,7 @@ static inline void rte_pktmbuf_refcnt_update(struct rte_mbuf *m, int16_t v)
*/
static inline uint16_t rte_pktmbuf_headroom(const struct rte_mbuf *m)
{
- __rte_mbuf_sanity_check(m, 0);
+ __rte_mbuf_verify(m, 0);
return m->data_off;
}
@@ -1524,7 +1527,7 @@ static inline uint16_t rte_pktmbuf_headroom(const struct rte_mbuf *m)
*/
static inline uint16_t rte_pktmbuf_tailroom(const struct rte_mbuf *m)
{
- __rte_mbuf_sanity_check(m, 0);
+ __rte_mbuf_verify(m, 0);
return (uint16_t)(m->buf_len - rte_pktmbuf_headroom(m) -
m->data_len);
}
@@ -1539,7 +1542,7 @@ static inline uint16_t rte_pktmbuf_tailroom(const struct rte_mbuf *m)
*/
static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
{
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
while (m->next != NULL)
m = m->next;
return m;
@@ -1583,7 +1586,7 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)
static inline char *rte_pktmbuf_prepend(struct rte_mbuf *m,
uint16_t len)
{
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
if (unlikely(len > rte_pktmbuf_headroom(m)))
return NULL;
@@ -1618,7 +1621,7 @@ static inline char *rte_pktmbuf_append(struct rte_mbuf *m, uint16_t len)
void *tail;
struct rte_mbuf *m_last;
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
m_last = rte_pktmbuf_lastseg(m);
if (unlikely(len > rte_pktmbuf_tailroom(m_last)))
@@ -1646,7 +1649,7 @@ static inline char *rte_pktmbuf_append(struct rte_mbuf *m, uint16_t len)
*/
static inline char *rte_pktmbuf_adj(struct rte_mbuf *m, uint16_t len)
{
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
if (unlikely(len > m->data_len))
return NULL;
@@ -1678,7 +1681,7 @@ static inline int rte_pktmbuf_trim(struct rte_mbuf *m, uint16_t len)
{
struct rte_mbuf *m_last;
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
m_last = rte_pktmbuf_lastseg(m);
if (unlikely(len > m_last->data_len))
@@ -1700,7 +1703,7 @@ static inline int rte_pktmbuf_trim(struct rte_mbuf *m, uint16_t len)
*/
static inline int rte_pktmbuf_is_contiguous(const struct rte_mbuf *m)
{
- __rte_mbuf_sanity_check(m, 1);
+ __rte_mbuf_verify(m, 1);
return m->nb_segs == 1;
}
diff --git a/lib/mbuf/version.map b/lib/mbuf/version.map
index f010d4692e3e..04d9ffc1bdb3 100644
--- a/lib/mbuf/version.map
+++ b/lib/mbuf/version.map
@@ -31,6 +31,7 @@ DPDK_24 {
rte_mbuf_set_platform_mempool_ops;
rte_mbuf_set_user_mempool_ops;
rte_mbuf_user_mempool_ops;
+ rte_mbuf_verify;
rte_pktmbuf_clone;
rte_pktmbuf_copy;
rte_pktmbuf_dump;
--
2.39.2
^ permalink raw reply [relevance 5%]
* Re: [PATCH] eal/windows: resolve conversion and truncation warnings
@ 2023-08-02 22:41 3% ` Tyler Retzlaff
2023-08-02 23:44 0% ` Dmitry Kozlyuk
0 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-08-02 22:41 UTC (permalink / raw)
To: Dmitry Kozlyuk
Cc: dev, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam
On Thu, Aug 03, 2023 at 01:29:00AM +0300, Dmitry Kozlyuk wrote:
> 2023-08-02 13:48 (UTC-0700), Tyler Retzlaff:
> > * Initialize const int NS_PER_SEC with an integer literal instead of
> > double thereby avoiding implicit conversion from double to int.
> >
> > * Cast the result of the expression assigned to timspec.tv_nsec to long.
>
> Typo: "timespec".
oops
>
> > Windows builds generate integer truncation warning for this assignment
> > since the result of the expression was 8 bytes (LONGLONG) but
> > on Windows targets is 4 bytes.
>
> Probably "but **tv_nsec** on Windows targets is 4 bytes".
thanks i'll update the wording.
one thing that confuses me a little and this change won't break how the
code already works (just makes the cast redundant) is that for mingw
sizeof(long) is being reported as 8 bytes.
this is in spec relative to the C standard but it does leave me somewhat
concerned if struct timespec as defined in the windows headers crosses
an abi boundary.
have you ever noticed this? any thoughts on it?
>
> > The value produced for the expression should safely fit in the long.
> >
> > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > ---
> > lib/eal/windows/include/rte_os_shim.h | 4 ++--
> > 1 file changed, 2 insertions(+), 2 deletions(-)
>
> Acked-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
thanks!
^ permalink raw reply [relevance 3%]
* [PATCH v10 10/13] eal: expand most macros to empty when using MSVC
@ 2023-08-02 21:35 5% ` Tyler Retzlaff
0 siblings, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-08-02 21:35 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Konstantin Ananyev, Ciara Power, thomas,
david.marchand, mb, Tyler Retzlaff
For now, expand a lot of common rte macros to empty. The catch here is
that we need to test that most of the macros do what they should, but at
the same time they are blocking work needed to bootstrap the unit tests.
Later we will return and provide (where possible) expansions that work
correctly for MSVC, and where that is not possible, provide some
alternate macros to achieve the same outcome.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/eal/include/rte_branch_prediction.h | 8 +++++
lib/eal/include/rte_common.h | 54 +++++++++++++++++++++++++++++++++
lib/eal/include/rte_compat.h | 20 ++++++++++++
3 files changed, 82 insertions(+)
diff --git a/lib/eal/include/rte_branch_prediction.h b/lib/eal/include/rte_branch_prediction.h
index 414cd92..c0356ca 100644
--- a/lib/eal/include/rte_branch_prediction.h
+++ b/lib/eal/include/rte_branch_prediction.h
@@ -24,7 +24,11 @@
* do_stuff();
*/
#ifndef likely
+#ifdef RTE_TOOLCHAIN_MSVC
+#define likely(x) (!!(x))
+#else
#define likely(x) __builtin_expect(!!(x), 1)
+#endif
#endif /* likely */
/**
@@ -37,7 +41,11 @@
* do_stuff();
*/
#ifndef unlikely
+#ifdef RTE_TOOLCHAIN_MSVC
+#define unlikely(x) (!!(x))
+#else
#define unlikely(x) __builtin_expect(!!(x), 0)
+#endif
#endif /* unlikely */
#ifdef __cplusplus
diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index 2f464e3..b087532 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -41,6 +41,10 @@
#define RTE_STD_C11
#endif
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __extension__
+#endif
+
/*
* RTE_TOOLCHAIN_GCC is defined if the target is built with GCC,
* while a host application (like pmdinfogen) may have another compiler.
@@ -65,7 +69,11 @@
/**
* Force alignment
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_aligned(a)
+#else
#define __rte_aligned(a) __attribute__((__aligned__(a)))
+#endif
#ifdef RTE_ARCH_STRICT_ALIGN
typedef uint64_t unaligned_uint64_t __rte_aligned(1);
@@ -80,16 +88,29 @@
/**
* Force a structure to be packed
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_packed
+#else
#define __rte_packed __attribute__((__packed__))
+#endif
/**
* Macro to mark a type that is not subject to type-based aliasing rules
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_may_alias
+#else
#define __rte_may_alias __attribute__((__may_alias__))
+#endif
/******* Macro to mark functions and fields scheduled for removal *****/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_deprecated
+#define __rte_deprecated_msg(msg)
+#else
#define __rte_deprecated __attribute__((__deprecated__))
#define __rte_deprecated_msg(msg) __attribute__((__deprecated__(msg)))
+#endif
/**
* Macro to mark macros and defines scheduled for removal
@@ -110,14 +131,22 @@
/**
* Force symbol to be generated even if it appears to be unused.
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_used
+#else
#define __rte_used __attribute__((used))
+#endif
/*********** Macros to eliminate unused variable warnings ********/
/**
* short definition to mark a function parameter unused
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_unused
+#else
#define __rte_unused __attribute__((__unused__))
+#endif
/**
* Mark pointer as restricted with regard to pointer aliasing.
@@ -141,6 +170,9 @@
* even if the underlying stdio implementation is ANSI-compliant,
* so this must be overridden.
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_format_printf(format_index, first_arg)
+#else
#if RTE_CC_IS_GNU
#define __rte_format_printf(format_index, first_arg) \
__attribute__((format(gnu_printf, format_index, first_arg)))
@@ -148,6 +180,7 @@
#define __rte_format_printf(format_index, first_arg) \
__attribute__((format(printf, format_index, first_arg)))
#endif
+#endif
/**
* Tells compiler that the function returns a value that points to
@@ -222,7 +255,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
/**
* Hint never returning function
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_noreturn
+#else
#define __rte_noreturn __attribute__((noreturn))
+#endif
/**
* Issue a warning in case the function's return value is ignored.
@@ -247,12 +284,20 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
* }
* @endcode
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_warn_unused_result
+#else
#define __rte_warn_unused_result __attribute__((warn_unused_result))
+#endif
/**
* Force a function to be inlined
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_always_inline
+#else
#define __rte_always_inline inline __attribute__((always_inline))
+#endif
/**
* Force a function to be noinlined
@@ -437,7 +482,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
#define RTE_CACHE_LINE_MIN_SIZE 64
/** Force alignment to cache line. */
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_cache_aligned
+#else
#define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+#endif
/** Force minimum cache line alignment. */
#define __rte_cache_min_aligned __rte_aligned(RTE_CACHE_LINE_MIN_SIZE)
@@ -812,6 +861,10 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
* struct wrapper *w = container_of(x, struct wrapper, c);
*/
#ifndef container_of
+#ifdef RTE_TOOLCHAIN_MSVC
+#define container_of(ptr, type, member) \
+ ((type *)((uintptr_t)(ptr) - offsetof(type, member)))
+#else
#define container_of(ptr, type, member) __extension__ ({ \
const typeof(((type *)0)->member) *_ptr = (ptr); \
__rte_unused type *_target_ptr = \
@@ -819,6 +872,7 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
(type *)(((uintptr_t)_ptr) - offsetof(type, member)); \
})
#endif
+#endif
/** Swap two variables. */
#define RTE_SWAP(a, b) \
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index fc9fbaa..716bc03 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -12,14 +12,22 @@
#ifndef ALLOW_EXPERIMENTAL_API
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_experimental
+#else
#define __rte_experimental \
__attribute__((deprecated("Symbol is not yet part of stable ABI"), \
section(".text.experimental")))
+#endif
#else
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_experimental
+#else
#define __rte_experimental \
__attribute__((section(".text.experimental")))
+#endif
#endif
@@ -30,23 +38,35 @@
#if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_internal
+#else
#define __rte_internal \
__attribute__((error("Symbol is not public ABI"), \
section(".text.internal")))
+#endif
#elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_internal
+#else
#define __rte_internal \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
section(".text.internal"))) \
_Pragma("GCC diagnostic pop")
+#endif
#else
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_internal
+#else
#define __rte_internal \
__attribute__((section(".text.internal")))
+#endif
#endif
--
1.8.3.1
^ permalink raw reply [relevance 5%]
* [PATCH 1/2] eal: remove RTE_CPUFLAG_NUMFLAGS
@ 2023-08-02 21:11 2% Sivaprasad Tummala
2023-08-02 21:11 3% ` [PATCH 2/2] test/cpuflags: " Sivaprasad Tummala
` (2 more replies)
0 siblings, 3 replies; 200+ results
From: Sivaprasad Tummala @ 2023-08-02 21:11 UTC (permalink / raw)
To: ruifeng.wang, zhoumin, drc, kda, bruce.richardson, konstantin.v.ananyev
Cc: dev
This patch removes RTE_CPUFLAG_NUMFLAGS so that new CPU features
can be added without breaking the ABI each time.
Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
---
lib/eal/arm/include/rte_cpuflags_32.h | 1 -
lib/eal/arm/include/rte_cpuflags_64.h | 1 -
lib/eal/arm/rte_cpuflags.c | 7 +++++--
lib/eal/loongarch/include/rte_cpuflags.h | 1 -
lib/eal/loongarch/rte_cpuflags.c | 7 +++++--
lib/eal/ppc/include/rte_cpuflags.h | 1 -
lib/eal/ppc/rte_cpuflags.c | 7 +++++--
lib/eal/riscv/include/rte_cpuflags.h | 1 -
lib/eal/riscv/rte_cpuflags.c | 7 +++++--
lib/eal/x86/include/rte_cpuflags.h | 1 -
lib/eal/x86/rte_cpuflags.c | 7 +++++--
11 files changed, 25 insertions(+), 16 deletions(-)
diff --git a/lib/eal/arm/include/rte_cpuflags_32.h b/lib/eal/arm/include/rte_cpuflags_32.h
index 4e254428a2..41ab0d5f21 100644
--- a/lib/eal/arm/include/rte_cpuflags_32.h
+++ b/lib/eal/arm/include/rte_cpuflags_32.h
@@ -43,7 +43,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_V7L,
RTE_CPUFLAG_V8L,
/* The last item */
- RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
};
#include "generic/rte_cpuflags.h"
diff --git a/lib/eal/arm/include/rte_cpuflags_64.h b/lib/eal/arm/include/rte_cpuflags_64.h
index aa7a56d491..ea5193e510 100644
--- a/lib/eal/arm/include/rte_cpuflags_64.h
+++ b/lib/eal/arm/include/rte_cpuflags_64.h
@@ -37,7 +37,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_SVEBF16,
RTE_CPUFLAG_AARCH64,
/* The last item */
- RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
};
#include "generic/rte_cpuflags.h"
diff --git a/lib/eal/arm/rte_cpuflags.c b/lib/eal/arm/rte_cpuflags.c
index 56e7b2e689..447a8d9f9f 100644
--- a/lib/eal/arm/rte_cpuflags.c
+++ b/lib/eal/arm/rte_cpuflags.c
@@ -139,8 +139,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
{
const struct feature_entry *feat;
hwcap_registers_t regs = {0};
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ if (feature >= num_flags)
return -ENOENT;
feat = &rte_cpu_feature_table[feature];
@@ -154,7 +155,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
const char *
rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
{
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
+
+ if (feature >= num_flags)
return NULL;
return rte_cpu_feature_table[feature].name;
}
diff --git a/lib/eal/loongarch/include/rte_cpuflags.h b/lib/eal/loongarch/include/rte_cpuflags.h
index 1c80779262..9ff8baaa3c 100644
--- a/lib/eal/loongarch/include/rte_cpuflags.h
+++ b/lib/eal/loongarch/include/rte_cpuflags.h
@@ -27,7 +27,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_LBT_ARM,
RTE_CPUFLAG_LBT_MIPS,
/* The last item */
- RTE_CPUFLAG_NUMFLAGS /**< This should always be the last! */
};
#include "generic/rte_cpuflags.h"
diff --git a/lib/eal/loongarch/rte_cpuflags.c b/lib/eal/loongarch/rte_cpuflags.c
index 0a75ca58d4..642eb42509 100644
--- a/lib/eal/loongarch/rte_cpuflags.c
+++ b/lib/eal/loongarch/rte_cpuflags.c
@@ -66,8 +66,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
{
const struct feature_entry *feat;
hwcap_registers_t regs = {0};
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ if (feature >= num_flags)
return -ENOENT;
feat = &rte_cpu_feature_table[feature];
@@ -81,7 +82,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
const char *
rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
{
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
+
+ if (feature >= num_flags)
return NULL;
return rte_cpu_feature_table[feature].name;
}
diff --git a/lib/eal/ppc/include/rte_cpuflags.h b/lib/eal/ppc/include/rte_cpuflags.h
index a88355d170..b74e7a73ee 100644
--- a/lib/eal/ppc/include/rte_cpuflags.h
+++ b/lib/eal/ppc/include/rte_cpuflags.h
@@ -49,7 +49,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_HTM,
RTE_CPUFLAG_ARCH_2_07,
/* The last item */
- RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
};
#include "generic/rte_cpuflags.h"
diff --git a/lib/eal/ppc/rte_cpuflags.c b/lib/eal/ppc/rte_cpuflags.c
index 61db5c216d..3a639ef45a 100644
--- a/lib/eal/ppc/rte_cpuflags.c
+++ b/lib/eal/ppc/rte_cpuflags.c
@@ -90,8 +90,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
{
const struct feature_entry *feat;
hwcap_registers_t regs = {0};
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ if (feature >= num_flags)
return -ENOENT;
feat = &rte_cpu_feature_table[feature];
@@ -105,7 +106,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
const char *
rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
{
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
+
+ if (feature >= num_flags)
return NULL;
return rte_cpu_feature_table[feature].name;
}
diff --git a/lib/eal/riscv/include/rte_cpuflags.h b/lib/eal/riscv/include/rte_cpuflags.h
index 66e787f898..803c3655ae 100644
--- a/lib/eal/riscv/include/rte_cpuflags.h
+++ b/lib/eal/riscv/include/rte_cpuflags.h
@@ -43,7 +43,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_RISCV_ISA_Y, /* Reserved */
RTE_CPUFLAG_RISCV_ISA_Z, /* Reserved */
/* The last item */
- RTE_CPUFLAG_NUMFLAGS,/**< This should always be the last! */
};
#include "generic/rte_cpuflags.h"
diff --git a/lib/eal/riscv/rte_cpuflags.c b/lib/eal/riscv/rte_cpuflags.c
index 4f6d29b947..a452261188 100644
--- a/lib/eal/riscv/rte_cpuflags.c
+++ b/lib/eal/riscv/rte_cpuflags.c
@@ -95,8 +95,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
{
const struct feature_entry *feat;
hwcap_registers_t regs = {0};
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ if (feature >= num_flags)
return -ENOENT;
feat = &rte_cpu_feature_table[feature];
@@ -110,7 +111,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
const char *
rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
{
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
+
+ if (feature >= num_flags)
return NULL;
return rte_cpu_feature_table[feature].name;
}
diff --git a/lib/eal/x86/include/rte_cpuflags.h b/lib/eal/x86/include/rte_cpuflags.h
index 92e90fb6e0..7fc6117243 100644
--- a/lib/eal/x86/include/rte_cpuflags.h
+++ b/lib/eal/x86/include/rte_cpuflags.h
@@ -135,7 +135,6 @@ enum rte_cpu_flag_t {
RTE_CPUFLAG_WAITPKG, /**< UMONITOR/UMWAIT/TPAUSE */
/* The last item */
- RTE_CPUFLAG_NUMFLAGS, /**< This should always be the last! */
};
#include "generic/rte_cpuflags.h"
diff --git a/lib/eal/x86/rte_cpuflags.c b/lib/eal/x86/rte_cpuflags.c
index d6b518251b..00d17c7515 100644
--- a/lib/eal/x86/rte_cpuflags.c
+++ b/lib/eal/x86/rte_cpuflags.c
@@ -149,8 +149,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
const struct feature_entry *feat;
cpuid_registers_t regs;
unsigned int maxleaf;
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ if (feature >= num_flags)
/* Flag does not match anything in the feature tables */
return -ENOENT;
@@ -176,7 +177,9 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature)
const char *
rte_cpu_get_flag_name(enum rte_cpu_flag_t feature)
{
- if (feature >= RTE_CPUFLAG_NUMFLAGS)
+ unsigned int num_flags = RTE_DIM(rte_cpu_feature_table);
+
+ if (feature >= num_flags)
return NULL;
return rte_cpu_feature_table[feature].name;
}
--
2.34.1
^ permalink raw reply [relevance 2%]
* [PATCH 2/2] test/cpuflags: remove RTE_CPUFLAG_NUMFLAGS
2023-08-02 21:11 2% [PATCH 1/2] eal: remove RTE_CPUFLAG_NUMFLAGS Sivaprasad Tummala
@ 2023-08-02 21:11 3% ` Sivaprasad Tummala
2023-08-02 23:50 0% ` [PATCH 1/2] eal: " Stanisław Kardach
2023-08-11 6:07 3% ` [PATCH v2 1/2] test/cpuflags: removed test for NUMFLAGS Sivaprasad Tummala
2 siblings, 0 replies; 200+ results
From: Sivaprasad Tummala @ 2023-08-02 21:11 UTC (permalink / raw)
To: ruifeng.wang, zhoumin, drc, kda, bruce.richardson, konstantin.v.ananyev
Cc: dev
This patch removes RTE_CPUFLAG_NUMFLAGS to allow adding new CPU
features without breaking the ABI each time.
Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
---
app/test/test_cpuflags.c | 9 ---------
1 file changed, 9 deletions(-)
diff --git a/app/test/test_cpuflags.c b/app/test/test_cpuflags.c
index a0e342ae48..2b8563602c 100644
--- a/app/test/test_cpuflags.c
+++ b/app/test/test_cpuflags.c
@@ -322,15 +322,6 @@ test_cpuflags(void)
CHECK_FOR_FLAG(RTE_CPUFLAG_LBT_MIPS);
#endif
- /*
- * Check if invalid data is handled properly
- */
- printf("\nCheck for invalid flag:\t");
- result = rte_cpu_get_flag_enabled(RTE_CPUFLAG_NUMFLAGS);
- printf("%s\n", cpu_flag_result(result));
- if (result != -ENOENT)
- return -1;
-
return 0;
}
--
2.34.1
^ permalink raw reply [relevance 3%]
* Re: [PATCH] eventdev: fix alignment padding
@ 2023-08-02 16:19 0% ` Jerin Jacob
2023-08-08 10:24 0% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2023-08-02 16:19 UTC (permalink / raw)
To: Morten Brørup; +Cc: Mattias Rönnblom, Sivaprasad Tummala, jerinj, dev
On Tue, May 23, 2023 at 8:45 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>
> On Wed, May 17, 2023 at 7:05 PM Morten Brørup <mb@smartsharesystems.com> wrote:
> >
> Shiva,
>
> Please send ABI change notice for this for 23.11 NOW.
> Once it is Acked and merged. I will merge the patch for 23.11 release.
>
> I am marking the patch as DEFERRED in patchwork and next release
> window it will come as NEW in patchwork.
Any objection to merge this?
^ permalink raw reply [relevance 0%]
* Re: [PATCH v3] Add support for IBM Z s390x
2023-08-02 15:34 3% ` David Miller
@ 2023-08-02 15:48 3% ` David Miller
0 siblings, 0 replies; 200+ results
From: David Miller @ 2023-08-02 15:48 UTC (permalink / raw)
To: David Marchand
Cc: dev, Mathew S Thoennes, Konstantin Ananyev, Olivier Matz,
Yipeng Wang, Sameh Gobriel, Bruce Richardson, Vladimir Medvedkin,
Dmitry Kozlyuk, Yuying Zhang, Beilei Xing, Matan Azrad,
Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Qiming Yang,
Wenjun Wu, Jakub Grajciar, Harman Kalra, Thomas Monjalon,
David Christensen
It looks like this is still from v2; v3 (which fixes the build issue, a
missing operator) was submitted the same day.
The abi-test failure referenced is present on v3 (which the bot has
only accepted today).
Patch v4 will be split as requested.
Thanks.
- David Miller
On Wed, Aug 2, 2023 at 10:34 AM David Miller <dmiller423@gmail.com> wrote:
>
> Hello,
>
> I'm happy to split it, I will resubmit when these changes are made.
> I was planning to spend some time to figure out why the CI abi test is
> failing / it had previously passed all tests locally.
> The (one) long term maintainer will be Mathew S Thoennes <tardis@us.ibm.com>.
> I will relay your concerns about CI and have him speak with David Christensen.
>
> - David Miller
>
> On Wed, Aug 2, 2023 at 10:25 AM David Marchand
> <david.marchand@redhat.com> wrote:
> >
> > Hello David,
> >
> > On Wed, Jul 26, 2023 at 3:35 AM David Miller <dmiller423@gmail.com> wrote:
> > >
> > > Minimal changes to drivers and app to support the IBM s390x.
> >
> > This seems a bit more than "minimal changes" :-).
> >
> > >
> > > Signed-off-by: David Miller <dmiller423@gmail.com>
> > > Reviewed-by: Mathew S Thoennes <tardis@us.ibm.com>
> > > ---
> > > app/test-acl/main.c | 4 +
> > > app/test/test_acl.c | 1 +
> > > app/test/test_atomic.c | 7 +-
> > > app/test/test_cmdline_ipaddr.c | 12 +-
> > > app/test/test_cmdline_num.c | 110 ++++
> > > app/test/test_hash_functions.c | 29 +
> > > app/test/test_xmmt_ops.h | 14 +
> > > buildtools/pmdinfogen.py | 11 +-
> > > config/meson.build | 2 +
> > > config/s390x/meson.build | 51 ++
> > > config/s390x/s390x_linux_clang_ubuntu | 19 +
> > > doc/guides/nics/features/i40e.ini | 1 +
> > > drivers/common/mlx5/mlx5_common.h | 9 +
> > > drivers/net/i40e/i40e_rxtx_vec_s390x.c | 630 +++++++++++++++++++
> > > drivers/net/i40e/meson.build | 2 +
> > > drivers/net/ixgbe/ixgbe_rxtx.c | 8 +-
> > > drivers/net/memif/rte_eth_memif.h | 2 +
> > > drivers/net/mlx5/mlx5_rx.c | 24 +-
> > > drivers/net/octeontx/base/octeontx_pki_var.h | 6 +
> > > examples/l3fwd/l3fwd_em.c | 8 +
> > > examples/l3fwd/l3fwd_lpm_s390x.h | 137 ++++
> > > examples/l3fwd/l3fwd_s390x.h | 261 ++++++++
> > > lib/acl/acl_bld.c | 3 +
> > > lib/acl/acl_gen.c | 9 +
> > > lib/acl/acl_run_scalar.c | 8 +
> > > lib/acl/rte_acl.c | 27 +
> > > lib/acl/rte_acl.h | 5 +-
> > > lib/eal/s390x/include/meson.build | 16 +
> > > lib/eal/s390x/include/rte_atomic.h | 44 ++
> > > lib/eal/s390x/include/rte_byteorder.h | 43 ++
> > > lib/eal/s390x/include/rte_cpuflags.h | 41 ++
> > > lib/eal/s390x/include/rte_cycles.h | 44 ++
> > > lib/eal/s390x/include/rte_io.h | 184 ++++++
> > > lib/eal/s390x/include/rte_mcslock.h | 18 +
> > > lib/eal/s390x/include/rte_memcpy.h | 55 ++
> > > lib/eal/s390x/include/rte_pause.h | 22 +
> > > lib/eal/s390x/include/rte_power_intrinsics.h | 20 +
> > > lib/eal/s390x/include/rte_prefetch.h | 46 ++
> > > lib/eal/s390x/include/rte_rwlock.h | 42 ++
> > > lib/eal/s390x/include/rte_spinlock.h | 85 +++
> > > lib/eal/s390x/include/rte_ticketlock.h | 18 +
> > > lib/eal/s390x/include/rte_vect.h | 35 ++
> > > lib/eal/s390x/meson.build | 16 +
> > > lib/eal/s390x/rte_cpuflags.c | 91 +++
> > > lib/eal/s390x/rte_cycles.c | 11 +
> > > lib/eal/s390x/rte_hypervisor.c | 11 +
> > > lib/eal/s390x/rte_power_intrinsics.c | 51 ++
> > > lib/hash/rte_fbk_hash.h | 7 +
> > > lib/lpm/meson.build | 1 +
> > > lib/lpm/rte_lpm.h | 2 +
> > > lib/lpm/rte_lpm6.c | 18 +
> > > lib/lpm/rte_lpm_s390x.h | 130 ++++
> > > meson.build | 2 +
> > > 53 files changed, 2439 insertions(+), 14 deletions(-)
> >
> > - This is too big to review.
> > Please split this patch separating the really minimum support (getting
> > EAL and main libraries to build, disabling the rest that is "broken"
> > for s390x) then adding more components support in later patches.
> >
> > RISC V and LoongArch "recent" additions are good examples.
> > https://patchwork.dpdk.org/project/dpdk/list/?series=23380&state=%2A&archive=both
> > https://patchwork.dpdk.org/project/dpdk/list/?series=24969&state=%2A&archive=both
> >
> > - We need one maintainer for this new architecture.
> >
> > - You'll notice that the DPDK CI reported issues, please fix them.
> >
> > - What are the plans in terms of CI? We need some compilation testing
> > and ideally some regular runtime testing.
> > Maybe you can reach out to IBM PPC DPDK guys, like David Christensen,
> > to see what they are doing.
> >
> >
> > --
> > David Marchand
> >
^ permalink raw reply [relevance 3%]
* Re: [PATCH v3] Add support for IBM Z s390x
@ 2023-08-02 15:34 3% ` David Miller
2023-08-02 15:48 3% ` David Miller
0 siblings, 1 reply; 200+ results
From: David Miller @ 2023-08-02 15:34 UTC (permalink / raw)
To: David Marchand
Cc: dev, Mathew S Thoennes, Konstantin Ananyev, Olivier Matz,
Yipeng Wang, Sameh Gobriel, Bruce Richardson, Vladimir Medvedkin,
Dmitry Kozlyuk, Yuying Zhang, Beilei Xing, Matan Azrad,
Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Qiming Yang,
Wenjun Wu, Jakub Grajciar, Harman Kalra, Thomas Monjalon,
David Christensen
Hello,
I'm happy to split it, I will resubmit when these changes are made.
I was planning to spend some time to figure out why the CI abi test is
failing / it had previously passed all tests locally.
The (one) long term maintainer will be Mathew S Thoennes <tardis@us.ibm.com>.
I will relay your concerns about CI and have him speak with David Christensen.
- David Miller
On Wed, Aug 2, 2023 at 10:25 AM David Marchand
<david.marchand@redhat.com> wrote:
>
> Hello David,
>
> On Wed, Jul 26, 2023 at 3:35 AM David Miller <dmiller423@gmail.com> wrote:
> >
> > Minimal changes to drivers and app to support the IBM s390x.
>
> This seems a bit more than "minimal changes" :-).
>
> >
> > Signed-off-by: David Miller <dmiller423@gmail.com>
> > Reviewed-by: Mathew S Thoennes <tardis@us.ibm.com>
> > ---
> > app/test-acl/main.c | 4 +
> > app/test/test_acl.c | 1 +
> > app/test/test_atomic.c | 7 +-
> > app/test/test_cmdline_ipaddr.c | 12 +-
> > app/test/test_cmdline_num.c | 110 ++++
> > app/test/test_hash_functions.c | 29 +
> > app/test/test_xmmt_ops.h | 14 +
> > buildtools/pmdinfogen.py | 11 +-
> > config/meson.build | 2 +
> > config/s390x/meson.build | 51 ++
> > config/s390x/s390x_linux_clang_ubuntu | 19 +
> > doc/guides/nics/features/i40e.ini | 1 +
> > drivers/common/mlx5/mlx5_common.h | 9 +
> > drivers/net/i40e/i40e_rxtx_vec_s390x.c | 630 +++++++++++++++++++
> > drivers/net/i40e/meson.build | 2 +
> > drivers/net/ixgbe/ixgbe_rxtx.c | 8 +-
> > drivers/net/memif/rte_eth_memif.h | 2 +
> > drivers/net/mlx5/mlx5_rx.c | 24 +-
> > drivers/net/octeontx/base/octeontx_pki_var.h | 6 +
> > examples/l3fwd/l3fwd_em.c | 8 +
> > examples/l3fwd/l3fwd_lpm_s390x.h | 137 ++++
> > examples/l3fwd/l3fwd_s390x.h | 261 ++++++++
> > lib/acl/acl_bld.c | 3 +
> > lib/acl/acl_gen.c | 9 +
> > lib/acl/acl_run_scalar.c | 8 +
> > lib/acl/rte_acl.c | 27 +
> > lib/acl/rte_acl.h | 5 +-
> > lib/eal/s390x/include/meson.build | 16 +
> > lib/eal/s390x/include/rte_atomic.h | 44 ++
> > lib/eal/s390x/include/rte_byteorder.h | 43 ++
> > lib/eal/s390x/include/rte_cpuflags.h | 41 ++
> > lib/eal/s390x/include/rte_cycles.h | 44 ++
> > lib/eal/s390x/include/rte_io.h | 184 ++++++
> > lib/eal/s390x/include/rte_mcslock.h | 18 +
> > lib/eal/s390x/include/rte_memcpy.h | 55 ++
> > lib/eal/s390x/include/rte_pause.h | 22 +
> > lib/eal/s390x/include/rte_power_intrinsics.h | 20 +
> > lib/eal/s390x/include/rte_prefetch.h | 46 ++
> > lib/eal/s390x/include/rte_rwlock.h | 42 ++
> > lib/eal/s390x/include/rte_spinlock.h | 85 +++
> > lib/eal/s390x/include/rte_ticketlock.h | 18 +
> > lib/eal/s390x/include/rte_vect.h | 35 ++
> > lib/eal/s390x/meson.build | 16 +
> > lib/eal/s390x/rte_cpuflags.c | 91 +++
> > lib/eal/s390x/rte_cycles.c | 11 +
> > lib/eal/s390x/rte_hypervisor.c | 11 +
> > lib/eal/s390x/rte_power_intrinsics.c | 51 ++
> > lib/hash/rte_fbk_hash.h | 7 +
> > lib/lpm/meson.build | 1 +
> > lib/lpm/rte_lpm.h | 2 +
> > lib/lpm/rte_lpm6.c | 18 +
> > lib/lpm/rte_lpm_s390x.h | 130 ++++
> > meson.build | 2 +
> > 53 files changed, 2439 insertions(+), 14 deletions(-)
>
> - This is too big to review.
> Please split this patch separating the really minimum support (getting
> EAL and main libraries to build, disabling the rest that is "broken"
> for s390x) then adding more components support in later patches.
>
> RISC V and LoongArch "recent" additions are good examples.
> https://patchwork.dpdk.org/project/dpdk/list/?series=23380&state=%2A&archive=both
> https://patchwork.dpdk.org/project/dpdk/list/?series=24969&state=%2A&archive=both
>
> - We need one maintainer for this new architecture.
>
> - You'll notice that the DPDK CI reported issues, please fix them.
>
> - What are the plans in terms of CI? We need some compilation testing
> and ideally some regular runtime testing.
> Maybe you can reach out to IBM PPC DPDK guys, like David Christensen,
> to see what they are doing.
>
>
> --
> David Marchand
>
^ permalink raw reply [relevance 3%]
* [PATCH v5] build: update DPDK to use C11 standard
2023-07-31 10:38 4% [PATCH] build: update DPDK to use C11 standard Bruce Richardson
` (2 preceding siblings ...)
2023-08-01 13:15 4% ` [PATCH v4] " Bruce Richardson
@ 2023-08-02 12:31 4% ` Bruce Richardson
3 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2023-08-02 12:31 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Morten Brørup, Tyler Retzlaff
As previously announced, DPDK 23.11 will require a C11 supporting
compiler and will use the C11 standard in all builds.
Forcing use of the C standard, rather than the standard with
GNU extensions, means that some posix definitions which are not in
the C standard are unavailable by default. We fix this by ensuring
the correct defines or cflags are passed to the components that
need them.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
V5:
* Fix build issues with bool type in altivec code, due to bool type
being in C11. Use __bool for altivec-specific version instead.
V4:
* pass cflags to the structure and definition checks in mlx* drivers
to ensure posix definitions - as well as C-standard ones - are
available.
V3:
* remove (now unneeded) use of -std=gnu99 in failsafe net driver.
V2:
* Resubmit now that 23.11-rc0 patch applied
* Add _POSIX_C_SOURCE macro to eal_common_errno.c to get POSIX
definition of strerror_r() with c11 standard.
---
doc/guides/linux_gsg/sys_reqs.rst | 3 ++-
doc/guides/rel_notes/deprecation.rst | 18 ------------------
doc/guides/rel_notes/release_23_11.rst | 17 +++++++++++++++++
drivers/common/mlx5/linux/meson.build | 5 +++--
drivers/net/failsafe/meson.build | 1 -
drivers/net/mlx4/meson.build | 4 ++--
lib/acl/acl_run_altivec.h | 4 ++--
lib/eal/common/eal_common_errno.c | 1 +
meson.build | 1 +
9 files changed, 28 insertions(+), 26 deletions(-)
diff --git a/doc/guides/linux_gsg/sys_reqs.rst b/doc/guides/linux_gsg/sys_reqs.rst
index dfeaf4e1c5..13be715933 100644
--- a/doc/guides/linux_gsg/sys_reqs.rst
+++ b/doc/guides/linux_gsg/sys_reqs.rst
@@ -27,7 +27,8 @@ Compilation of the DPDK
The setup commands and installed packages needed on various systems may be different.
For details on Linux distributions and the versions tested, please consult the DPDK Release Notes.
-* General development tools including a supported C compiler such as gcc (version 4.9+) or clang (version 3.4+),
+* General development tools including a C compiler supporting the C11 standard,
+ including standard atomics, for example: GCC (version 5.0+) or Clang (version 3.6+),
and ``pkg-config`` or ``pkgconf`` to be used when building end-user binaries against DPDK.
* For RHEL/Fedora systems these can be installed using ``dnf groupinstall "Development Tools"``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 494b401cda..cc939d3c67 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -17,24 +17,6 @@ Other API and ABI deprecation notices are to be posted below.
Deprecation Notices
-------------------
-* C Compiler: From DPDK 23.11 onwards,
- building DPDK will require a C compiler which supports the C11 standard,
- including support for C11 standard atomics.
-
- More specifically, the requirements will be:
-
- * Support for flag "-std=c11" (or similar)
- * __STDC_NO_ATOMICS__ is *not defined* when using c11 flag
-
- Please note:
-
- * C11, including standard atomics, is supported from GCC version 5 onwards,
- and is the default language version in that release
- (Ref: https://gcc.gnu.org/gcc-5/changes.html)
- * C11 is the default compilation mode in Clang from version 3.6,
- which also added support for standard atomics
- (Ref: https://releases.llvm.org/3.6.0/tools/clang/docs/ReleaseNotes.html)
-
* build: Enabling deprecated libraries (``flow_classify``, ``kni``)
won't be possible anymore through the use of the ``disable_libs`` build option.
A new build option for deprecated libraries will be introduced instead.
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 6b4dd21fd0..c8b9ed456c 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -20,6 +20,23 @@ DPDK Release 23.11
ninja -C build doc
xdg-open build/doc/guides/html/rel_notes/release_23_11.html
+* Build Requirements: From DPDK 23.11 onwards,
+ building DPDK will require a C compiler which supports the C11 standard,
+ including support for C11 standard atomics.
+
+ More specifically, the requirements will be:
+
+ * Support for flag "-std=c11" (or similar)
+ * __STDC_NO_ATOMICS__ is *not defined* when using c11 flag
+
+ Please note:
+
+ * C11, including standard atomics, is supported from GCC version 5 onwards,
+ and is the default language version in that release
+ (Ref: https://gcc.gnu.org/gcc-5/changes.html)
+ * C11 is the default compilation mode in Clang from version 3.6,
+ which also added support for standard atomics
+ (Ref: https://releases.llvm.org/3.6.0/tools/clang/docs/ReleaseNotes.html)
New Features
------------
diff --git a/drivers/common/mlx5/linux/meson.build b/drivers/common/mlx5/linux/meson.build
index 15edc13041..b3a64547c5 100644
--- a/drivers/common/mlx5/linux/meson.build
+++ b/drivers/common/mlx5/linux/meson.build
@@ -231,11 +231,12 @@ if libmtcr_ul_found
endif
foreach arg:has_sym_args
- mlx5_config.set(arg[0], cc.has_header_symbol(arg[1], arg[2], dependencies: libs))
+ mlx5_config.set(arg[0], cc.has_header_symbol(arg[1], arg[2], dependencies: libs, args: cflags))
endforeach
foreach arg:has_member_args
file_prefix = '#include <' + arg[1] + '>'
- mlx5_config.set(arg[0], cc.has_member(arg[2], arg[3], prefix : file_prefix, dependencies: libs))
+ mlx5_config.set(arg[0],
+ cc.has_member(arg[2], arg[3], prefix : file_prefix, dependencies: libs, args: cflags))
endforeach
# Build Glue Library
diff --git a/drivers/net/failsafe/meson.build b/drivers/net/failsafe/meson.build
index 6013e13722..c1d361083b 100644
--- a/drivers/net/failsafe/meson.build
+++ b/drivers/net/failsafe/meson.build
@@ -7,7 +7,6 @@ if is_windows
subdir_done()
endif
-cflags += '-std=gnu99'
cflags += '-D_DEFAULT_SOURCE'
cflags += '-D_XOPEN_SOURCE=700'
cflags += '-pedantic'
diff --git a/drivers/net/mlx4/meson.build b/drivers/net/mlx4/meson.build
index a038c1ec1b..3c5ee24186 100644
--- a/drivers/net/mlx4/meson.build
+++ b/drivers/net/mlx4/meson.build
@@ -103,12 +103,12 @@ has_sym_args = [
config = configuration_data()
foreach arg:has_sym_args
config.set(arg[0], cc.has_header_symbol(arg[1], arg[2],
- dependencies: libs))
+ dependencies: libs, args: cflags))
endforeach
foreach arg:has_member_args
file_prefix = '#include <' + arg[1] + '>'
config.set(arg[0], cc.has_member(arg[2], arg[3],
- prefix: file_prefix, dependencies: libs))
+ prefix: file_prefix, dependencies: libs, args: cflags))
endforeach
configure_file(output : 'mlx4_autoconf.h', configuration : config)
diff --git a/lib/acl/acl_run_altivec.h b/lib/acl/acl_run_altivec.h
index 4556e1503b..3c30466d2d 100644
--- a/lib/acl/acl_run_altivec.h
+++ b/lib/acl/acl_run_altivec.h
@@ -41,7 +41,7 @@ resolve_priority_altivec(uint64_t transition, int n,
{
uint32_t x;
xmm_t results, priority, results1, priority1;
- __vector bool int selector;
+ __vector __bool int selector;
xmm_t *saved_results, *saved_priority;
for (x = 0; x < categories; x += RTE_ACL_RESULTS_MULTIPLIER) {
@@ -110,7 +110,7 @@ transition4(xmm_t next_input, const uint64_t *trans,
xmm_t in, node_type, r, t;
xmm_t dfa_ofs, quad_ofs;
xmm_t *index_mask, *tp;
- __vector bool int dfa_msk;
+ __vector __bool int dfa_msk;
__vector signed char zeroes = {};
union {
uint64_t d64[2];
diff --git a/lib/eal/common/eal_common_errno.c b/lib/eal/common/eal_common_errno.c
index ef8f782abb..b30e2f0ad4 100644
--- a/lib/eal/common/eal_common_errno.c
+++ b/lib/eal/common/eal_common_errno.c
@@ -4,6 +4,7 @@
/* Use XSI-compliant portable version of strerror_r() */
#undef _GNU_SOURCE
+#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <string.h>
diff --git a/meson.build b/meson.build
index 39cb73846d..70b54f0c98 100644
--- a/meson.build
+++ b/meson.build
@@ -9,6 +9,7 @@ project('DPDK', 'c',
license: 'BSD',
default_options: [
'buildtype=release',
+ 'c_std=c11',
'default_library=static',
'warning_level=2',
],
--
2.39.2
^ permalink raw reply [relevance 4%]
* [PATCH v9 1/4] ethdev: add API for mbufs recycle mode
@ 2023-08-02 8:08 3% ` Feifei Wang
0 siblings, 0 replies; 200+ results
From: Feifei Wang @ 2023-08-02 8:08 UTC (permalink / raw)
To: Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Cc: dev, nd, Feifei Wang, Honnappa Nagarahalli, Ruifeng Wang,
Morten Brørup
Add 'rte_eth_recycle_rx_queue_info_get' and 'rte_eth_recycle_mbufs'
APIs to recycle used mbufs from a transmit queue of an Ethernet device,
and move these mbufs into a mbuf ring for a receive queue of an Ethernet
device. This can bypass mempool 'put/get' operations hence saving CPU
cycles.
For each burst of recycled mbufs, the rte_eth_recycle_mbufs() function
performs the following operations:
- Copy used *rte_mbuf* buffer pointers from Tx mbuf ring into Rx mbuf
ring.
- Replenish the Rx descriptors with the recycling *rte_mbuf* mbufs freed
from the Tx mbuf ring.
Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Suggested-by: Ruifeng Wang <ruifeng.wang@arm.com>
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
doc/guides/rel_notes/release_23_11.rst | 15 ++
lib/ethdev/ethdev_driver.h | 10 ++
lib/ethdev/ethdev_private.c | 2 +
lib/ethdev/rte_ethdev.c | 31 +++++
lib/ethdev/rte_ethdev.h | 181 +++++++++++++++++++++++++
lib/ethdev/rte_ethdev_core.h | 23 +++-
lib/ethdev/version.map | 4 +
7 files changed, 260 insertions(+), 6 deletions(-)
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 6b4dd21fd0..fd16d267ae 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -55,6 +55,13 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Added mbufs recycling support.**
+
+ Added ``rte_eth_recycle_rx_queue_info_get`` and ``rte_eth_recycle_mbufs``
+ APIs which allow the user to copy used mbufs from the Tx mbuf ring
+ into the Rx mbuf ring. This feature supports the case where the Rx Ethernet
+ device is different from the Tx Ethernet device with respective driver
+ callback functions in ``rte_eth_recycle_mbufs``.
Removed Items
-------------
@@ -100,6 +107,14 @@ ABI Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* ethdev: Added ``recycle_tx_mbufs_reuse`` and ``recycle_rx_descriptors_refill``
+ fields to ``rte_eth_dev`` structure.
+
+* ethdev: Structure ``rte_eth_fp_ops`` was affected to add
+ ``recycle_tx_mbufs_reuse`` and ``recycle_rx_descriptors_refill``
+ fields, to move ``rxq`` and ``txq`` fields, to change the size of
+ ``reserved1`` and ``reserved2`` fields.
+
Known Issues
------------
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 980f837ab6..b0c55a8523 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -58,6 +58,10 @@ struct rte_eth_dev {
eth_rx_descriptor_status_t rx_descriptor_status;
/** Check the status of a Tx descriptor */
eth_tx_descriptor_status_t tx_descriptor_status;
+ /** Pointer to PMD transmit mbufs reuse function */
+ eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
+ /** Pointer to PMD receive descriptors refill function */
+ eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
/**
* Device data that is shared between primary and secondary processes
@@ -507,6 +511,10 @@ typedef void (*eth_rxq_info_get_t)(struct rte_eth_dev *dev,
typedef void (*eth_txq_info_get_t)(struct rte_eth_dev *dev,
uint16_t tx_queue_id, struct rte_eth_txq_info *qinfo);
+typedef void (*eth_recycle_rxq_info_get_t)(struct rte_eth_dev *dev,
+ uint16_t rx_queue_id,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info);
+
typedef int (*eth_burst_mode_get_t)(struct rte_eth_dev *dev,
uint16_t queue_id, struct rte_eth_burst_mode *mode);
@@ -1250,6 +1258,8 @@ struct eth_dev_ops {
eth_rxq_info_get_t rxq_info_get;
/** Retrieve Tx queue information */
eth_txq_info_get_t txq_info_get;
+ /** Retrieve mbufs recycle Rx queue information */
+ eth_recycle_rxq_info_get_t recycle_rxq_info_get;
eth_burst_mode_get_t rx_burst_mode_get; /**< Get Rx burst mode */
eth_burst_mode_get_t tx_burst_mode_get; /**< Get Tx burst mode */
eth_fw_version_get_t fw_version_get; /**< Get firmware version */
diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 14ec8c6ccf..f8ab64f195 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -277,6 +277,8 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
fpo->rx_queue_count = dev->rx_queue_count;
fpo->rx_descriptor_status = dev->rx_descriptor_status;
fpo->tx_descriptor_status = dev->tx_descriptor_status;
+ fpo->recycle_tx_mbufs_reuse = dev->recycle_tx_mbufs_reuse;
+ fpo->recycle_rx_descriptors_refill = dev->recycle_rx_descriptors_refill;
fpo->rxq.data = dev->data->rx_queues;
fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 0840d2b594..ea89a101a1 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -5876,6 +5876,37 @@ rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
return 0;
}
+int
+rte_eth_recycle_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info)
+{
+ struct rte_eth_dev *dev;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
+
+ if (queue_id >= dev->data->nb_rx_queues) {
+ RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id);
+ return -EINVAL;
+ }
+
+ if (dev->data->rx_queues == NULL ||
+ dev->data->rx_queues[queue_id] == NULL) {
+ RTE_ETHDEV_LOG(ERR,
+ "Rx queue %"PRIu16" of device with port_id=%"
+ PRIu16" has not been setup\n",
+ queue_id, port_id);
+ return -EINVAL;
+ }
+
+ if (*dev->dev_ops->recycle_rxq_info_get == NULL)
+ return -ENOTSUP;
+
+ dev->dev_ops->recycle_rxq_info_get(dev, queue_id, recycle_rxq_info);
+
+ return 0;
+}
+
int
rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
struct rte_eth_burst_mode *mode)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 04a2564f22..9dc5749d83 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1820,6 +1820,30 @@ struct rte_eth_txq_info {
uint8_t queue_state; /**< one of RTE_ETH_QUEUE_STATE_*. */
} __rte_cache_min_aligned;
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice.
+ *
+ * Ethernet device Rx queue information structure for recycling mbufs.
+ * Used to retrieve Rx queue information when Tx queue reusing mbufs and moving
+ * them into Rx mbuf ring.
+ */
+struct rte_eth_recycle_rxq_info {
+ struct rte_mbuf **mbuf_ring; /**< mbuf ring of Rx queue. */
+ struct rte_mempool *mp; /**< mempool of Rx queue. */
+ uint16_t *refill_head; /**< head of Rx queue refilling mbufs. */
+ uint16_t *receive_tail; /**< tail of Rx queue receiving pkts. */
+ uint16_t mbuf_ring_size; /**< configured number of mbuf ring size. */
+ /**
+ * Requirement on mbuf refilling batch size of Rx mbuf ring.
+ * For some PMD drivers, the number of Rx mbuf ring refilling mbufs
+ * should be aligned with mbuf ring size, in order to simplify
+ * ring wrapping around.
+ * Value 0 means that PMD drivers have no requirement for this.
+ */
+ uint16_t refill_requirement;
+} __rte_cache_min_aligned;
+
/* Generic Burst mode flag definition, values can be ORed. */
/**
@@ -4853,6 +4877,31 @@ int rte_eth_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
int rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
struct rte_eth_txq_info *qinfo);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Retrieve information about a given port's Rx queue for recycling mbufs.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param queue_id
+ * The Rx queue on the Ethernet device for which information
+ * will be retrieved.
+ * @param recycle_rxq_info
+ * A pointer to a structure of type *rte_eth_recycle_rxq_info* to be filled.
+ *
+ * @return
+ * - 0: Success
+ * - -ENODEV: If *port_id* is invalid.
+ * - -ENOTSUP: routine is not supported by the device PMD.
+ * - -EINVAL: The queue_id is out of range.
+ */
+__rte_experimental
+int rte_eth_recycle_rx_queue_info_get(uint16_t port_id,
+ uint16_t queue_id,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info);
+
/**
* Retrieve information about the Rx packet burst mode.
*
@@ -6527,6 +6576,138 @@ rte_eth_tx_buffer(uint16_t port_id, uint16_t queue_id,
return rte_eth_tx_buffer_flush(port_id, queue_id, buffer);
}
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Recycle used mbufs from a transmit queue of an Ethernet device, and move
+ * these mbufs into a mbuf ring for a receive queue of an Ethernet device.
+ * This can bypass mempool path to save CPU cycles.
+ *
+ * The rte_eth_recycle_mbufs() function loops, with rte_eth_rx_burst() and
+ * rte_eth_tx_burst() functions, freeing Tx used mbufs and replenishing Rx
+ * descriptors. The number of recycling mbufs depends on the request of Rx mbuf
+ * ring, with the constraint of enough used mbufs from Tx mbuf ring.
+ *
+ * For each burst of recycled mbufs, the rte_eth_recycle_mbufs() function
+ * performs the following operations:
+ *
+ * - Copy used *rte_mbuf* buffer pointers from Tx mbuf ring into Rx mbuf ring.
+ *
+ * - Replenish the Rx descriptors with the recycling *rte_mbuf* mbufs freed
+ * from the Tx mbuf ring.
+ *
+ * This function splits the Rx and Tx paths with different callback functions.
+ * The callback function recycle_tx_mbufs_reuse is for the Tx driver. The
+ * callback function recycle_rx_descriptors_refill is for the Rx driver.
+ * rte_eth_recycle_mbufs() can support the case where the Rx Ethernet device
+ * is different from the Tx Ethernet device.
+ *
+ * It is the responsibility of users to select the Rx/Tx queue pair to recycle
+ * mbufs. Before calling this function, users must call the
+ * rte_eth_recycle_rxq_info_get function to retrieve the selected Rx queue
+ * information.
+ * @see rte_eth_recycle_rxq_info_get, struct rte_eth_recycle_rxq_info
+ *
+ * Currently, the rte_eth_recycle_mbufs() function can support feeding 1 Rx
+ * queue from 2 Tx queues in the same thread. Do not pair the Rx queue and
+ * Tx queue across different threads, in order to avoid memory corruption.
+ *
+ * @param rx_port_id
+ * Port identifying the receive side.
+ * @param rx_queue_id
+ * The index of the receive queue identifying the receive side.
+ * The value must be in the range [0, nb_rx_queue - 1] previously supplied
+ * to rte_eth_dev_configure().
+ * @param tx_port_id
+ * Port identifying the transmit side.
+ * @param tx_queue_id
+ * The index of the transmit queue identifying the transmit side.
+ * The value must be in the range [0, nb_tx_queue - 1] previously supplied
+ * to rte_eth_dev_configure().
+ * @param recycle_rxq_info
+ * A pointer to a structure of type *rte_eth_recycle_rxq_info* which contains
+ * the information of the Rx queue mbuf ring.
+ * @return
+ * The number of recycling mbufs.
+ */
+__rte_experimental
+static inline uint16_t
+rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id,
+ uint16_t tx_port_id, uint16_t tx_queue_id,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info)
+{
+ struct rte_eth_fp_ops *p;
+ void *qd;
+ uint16_t nb_mbufs;
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+ if (tx_port_id >= RTE_MAX_ETHPORTS ||
+ tx_queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid tx_port_id=%u or tx_queue_id=%u\n",
+ tx_port_id, tx_queue_id);
+ return 0;
+ }
+#endif
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[tx_port_id];
+ qd = p->txq.data[tx_queue_id];
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(tx_port_id, 0);
+
+ if (qd == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n",
+ tx_queue_id, tx_port_id);
+ return 0;
+ }
+#endif
+ if (p->recycle_tx_mbufs_reuse == NULL)
+ return 0;
+
+ /* Copy used *rte_mbuf* buffer pointers from Tx mbuf ring
+ * into Rx mbuf ring.
+ */
+ nb_mbufs = p->recycle_tx_mbufs_reuse(qd, recycle_rxq_info);
+
+ /* If no mbufs were recycled, return 0. */
+ if (nb_mbufs == 0)
+ return 0;
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+ if (rx_port_id >= RTE_MAX_ETHPORTS ||
+ rx_queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR, "Invalid rx_port_id=%u or rx_queue_id=%u\n",
+ rx_port_id, rx_queue_id);
+ return 0;
+ }
+#endif
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[rx_port_id];
+ qd = p->rxq.data[rx_queue_id];
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(rx_port_id, 0);
+
+ if (qd == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n",
+ rx_queue_id, rx_port_id);
+ return 0;
+ }
+#endif
+
+ if (p->recycle_rx_descriptors_refill == NULL)
+ return 0;
+
+ /* Replenish the Rx descriptors with the recycled mbufs
+ * in the Rx mbuf ring.
+ */
+ p->recycle_rx_descriptors_refill(qd, nb_mbufs);
+
+ return nb_mbufs;
+}
+
/**
* @warning
* @b EXPERIMENTAL: this API may change without prior notice
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 46e9721e07..a24ad7a6b2 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -55,6 +55,13 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
/** @internal Check the status of a Tx descriptor */
typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
+/** @internal Copy used mbufs from Tx mbuf ring into Rx mbuf ring */
+typedef uint16_t (*eth_recycle_tx_mbufs_reuse_t)(void *txq,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info);
+
+/** @internal Refill Rx descriptors with the recycling mbufs */
+typedef void (*eth_recycle_rx_descriptors_refill_t)(void *rxq, uint16_t nb);
+
/**
* @internal
* Structure used to hold opaque pointers to internal ethdev Rx/Tx
@@ -83,15 +90,17 @@ struct rte_eth_fp_ops {
* Rx fast-path functions and related data.
* 64-bit systems: occupies first 64B line
*/
+ /** Rx queues data. */
+ struct rte_ethdev_qdata rxq;
/** PMD receive function. */
eth_rx_burst_t rx_pkt_burst;
/** Get the number of used Rx descriptors. */
eth_rx_queue_count_t rx_queue_count;
/** Check the status of a Rx descriptor. */
eth_rx_descriptor_status_t rx_descriptor_status;
- /** Rx queues data. */
- struct rte_ethdev_qdata rxq;
- uintptr_t reserved1[3];
+ /** Refill Rx descriptors with the recycling mbufs. */
+ eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
+ uintptr_t reserved1[2];
/**@}*/
/**@{*/
@@ -99,15 +108,17 @@ struct rte_eth_fp_ops {
* Tx fast-path functions and related data.
* 64-bit systems: occupies second 64B line
*/
+ /** Tx queues data. */
+ struct rte_ethdev_qdata txq;
/** PMD transmit function. */
eth_tx_burst_t tx_pkt_burst;
/** PMD transmit prepare function. */
eth_tx_prep_t tx_pkt_prepare;
/** Check the status of a Tx descriptor. */
eth_tx_descriptor_status_t tx_descriptor_status;
- /** Tx queues data. */
- struct rte_ethdev_qdata txq;
- uintptr_t reserved2[3];
+ /** Copy used mbufs from Tx mbuf ring into Rx. */
+ eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
+ uintptr_t reserved2[2];
/**@}*/
} __rte_cache_aligned;
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index b965d6aa52..e52c1563b4 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -312,6 +312,10 @@ EXPERIMENTAL {
rte_flow_async_action_list_handle_query_update;
rte_flow_async_actions_update;
rte_flow_restore_info_dynflag;
+
+ # added in 23.11
+ rte_eth_recycle_mbufs;
+ rte_eth_recycle_rx_queue_info_get;
};
INTERNAL {
--
2.25.1
^ permalink raw reply [relevance 3%]
* [PATCH v8 1/4] ethdev: add API for mbufs recycle mode
@ 2023-08-02 7:38 3% ` Feifei Wang
0 siblings, 0 replies; 200+ results
From: Feifei Wang @ 2023-08-02 7:38 UTC (permalink / raw)
To: Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Cc: dev, nd, Feifei Wang, Honnappa Nagarahalli, Ruifeng Wang,
Morten Brørup
[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/plain; charset=yes, Size: 15883 bytes --]
Add 'rte_eth_recycle_rx_queue_info_get' and 'rte_eth_recycle_mbufs'
APIs to recycle used mbufs from a transmit queue of an Ethernet device,
and move these mbufs into a mbuf ring for a receive queue of an Ethernet
device. This can bypass mempool 'put/get' operations, hence saving CPU
cycles.
For each recycled mbuf, the rte_eth_recycle_mbufs() function performs
the following operations:
- Copy used *rte_mbuf* buffer pointers from Tx mbuf ring into Rx mbuf
ring.
- Replenish the Rx descriptors with the recycling *rte_mbuf* mbufs freed
from the Tx mbuf ring.
Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Suggested-by: Ruifeng Wang <ruifeng.wang@arm.com>
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
doc/guides/rel_notes/release_23_11.rst | 15 ++
lib/ethdev/ethdev_driver.h | 10 ++
lib/ethdev/ethdev_private.c | 2 +
lib/ethdev/rte_ethdev.c | 31 +++++
lib/ethdev/rte_ethdev.h | 181 +++++++++++++++++++++++++
lib/ethdev/rte_ethdev_core.h | 23 +++-
lib/ethdev/version.map | 4 +
7 files changed, 260 insertions(+), 6 deletions(-)
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 6b4dd21fd0..fd16d267ae 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -55,6 +55,13 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Added mbufs recycling support.**
+
+ Added ``rte_eth_recycle_rx_queue_info_get`` and ``rte_eth_recycle_mbufs``
+ APIs which allow the user to copy used mbufs from the Tx mbuf ring
+ into the Rx mbuf ring. This feature supports the case where the Rx Ethernet
+ device is different from the Tx Ethernet device, using the respective driver
+ callback functions in ``rte_eth_recycle_mbufs``.
Removed Items
-------------
@@ -100,6 +107,14 @@ ABI Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* ethdev: Added ``recycle_tx_mbufs_reuse`` and ``recycle_rx_descriptors_refill``
+ fields to ``rte_eth_dev`` structure.
+
+* ethdev: Structure ``rte_eth_fp_ops`` was modified to add the
+ ``recycle_tx_mbufs_reuse`` and ``recycle_rx_descriptors_refill``
+ fields, move the ``rxq`` and ``txq`` fields, and change the size of the
+ ``reserved1`` and ``reserved2`` fields.
+
Known Issues
------------
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 980f837ab6..b0c55a8523 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -58,6 +58,10 @@ struct rte_eth_dev {
eth_rx_descriptor_status_t rx_descriptor_status;
/** Check the status of a Tx descriptor */
eth_tx_descriptor_status_t tx_descriptor_status;
+ /** Pointer to PMD transmit mbufs reuse function */
+ eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
+ /** Pointer to PMD receive descriptors refill function */
+ eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
/**
* Device data that is shared between primary and secondary processes
@@ -507,6 +511,10 @@ typedef void (*eth_rxq_info_get_t)(struct rte_eth_dev *dev,
typedef void (*eth_txq_info_get_t)(struct rte_eth_dev *dev,
uint16_t tx_queue_id, struct rte_eth_txq_info *qinfo);
+typedef void (*eth_recycle_rxq_info_get_t)(struct rte_eth_dev *dev,
+ uint16_t rx_queue_id,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info);
+
typedef int (*eth_burst_mode_get_t)(struct rte_eth_dev *dev,
uint16_t queue_id, struct rte_eth_burst_mode *mode);
@@ -1250,6 +1258,8 @@ struct eth_dev_ops {
eth_rxq_info_get_t rxq_info_get;
/** Retrieve Tx queue information */
eth_txq_info_get_t txq_info_get;
+ /** Retrieve mbufs recycle Rx queue information */
+ eth_recycle_rxq_info_get_t recycle_rxq_info_get;
eth_burst_mode_get_t rx_burst_mode_get; /**< Get Rx burst mode */
eth_burst_mode_get_t tx_burst_mode_get; /**< Get Tx burst mode */
eth_fw_version_get_t fw_version_get; /**< Get firmware version */
diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 14ec8c6ccf..f8ab64f195 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -277,6 +277,8 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo,
fpo->rx_queue_count = dev->rx_queue_count;
fpo->rx_descriptor_status = dev->rx_descriptor_status;
fpo->tx_descriptor_status = dev->tx_descriptor_status;
+ fpo->recycle_tx_mbufs_reuse = dev->recycle_tx_mbufs_reuse;
+ fpo->recycle_rx_descriptors_refill = dev->recycle_rx_descriptors_refill;
fpo->rxq.data = dev->data->rx_queues;
fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs;
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 0840d2b594..ea89a101a1 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -5876,6 +5876,37 @@ rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
return 0;
}
+int
+rte_eth_recycle_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info)
+{
+ struct rte_eth_dev *dev;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
+
+ if (queue_id >= dev->data->nb_rx_queues) {
+ RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id);
+ return -EINVAL;
+ }
+
+ if (dev->data->rx_queues == NULL ||
+ dev->data->rx_queues[queue_id] == NULL) {
+ RTE_ETHDEV_LOG(ERR,
+ "Rx queue %"PRIu16" of device with port_id=%"
+ PRIu16" has not been setup\n",
+ queue_id, port_id);
+ return -EINVAL;
+ }
+
+ if (*dev->dev_ops->recycle_rxq_info_get == NULL)
+ return -ENOTSUP;
+
+ dev->dev_ops->recycle_rxq_info_get(dev, queue_id, recycle_rxq_info);
+
+ return 0;
+}
+
int
rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id,
struct rte_eth_burst_mode *mode)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 04a2564f22..9dc5749d83 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1820,6 +1820,30 @@ struct rte_eth_txq_info {
uint8_t queue_state; /**< one of RTE_ETH_QUEUE_STATE_*. */
} __rte_cache_min_aligned;
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice.
+ *
+ * Ethernet device Rx queue information structure for recycling mbufs.
+ * Used to retrieve Rx queue information when the Tx queue is reusing mbufs and
+ * moving them into the Rx mbuf ring.
+ */
+struct rte_eth_recycle_rxq_info {
+ struct rte_mbuf **mbuf_ring; /**< mbuf ring of Rx queue. */
+ struct rte_mempool *mp; /**< mempool of Rx queue. */
+ uint16_t *refill_head; /**< head of Rx queue refilling mbufs. */
+ uint16_t *receive_tail; /**< tail of Rx queue receiving pkts. */
+ uint16_t mbuf_ring_size; /**< configured size of the mbuf ring. */
+ /**
+ * Requirement on mbuf refilling batch size of Rx mbuf ring.
+ * For some PMD drivers, the number of Rx mbuf ring refilling mbufs
+ * should be aligned with mbuf ring size, in order to simplify
+ * ring wrapping around.
+ * Value 0 means that PMD drivers have no requirement for this.
+ */
+ uint16_t refill_requirement;
+} __rte_cache_min_aligned;
+
/* Generic Burst mode flag definition, values can be ORed. */
/**
@@ -4853,6 +4877,31 @@ int rte_eth_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
int rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
struct rte_eth_txq_info *qinfo);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Retrieve information about a given port's Rx queue for recycling mbufs.
+ *
+ * @param port_id
+ * The port identifier of the Ethernet device.
+ * @param queue_id
+ * The Rx queue on the Ethernet device for which information
+ * will be retrieved.
+ * @param recycle_rxq_info
+ * A pointer to a structure of type *rte_eth_recycle_rxq_info* to be filled.
+ *
+ * @return
+ * - 0: Success
+ * - -ENODEV: If *port_id* is invalid.
+ * - -ENOTSUP: routine is not supported by the device PMD.
+ * - -EINVAL: The queue_id is out of range.
+ */
+__rte_experimental
+int rte_eth_recycle_rx_queue_info_get(uint16_t port_id,
+ uint16_t queue_id,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info);
+
/**
* Retrieve information about the Rx packet burst mode.
*
@@ -6527,6 +6576,138 @@ rte_eth_tx_buffer(uint16_t port_id, uint16_t queue_id,
return rte_eth_tx_buffer_flush(port_id, queue_id, buffer);
}
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Recycle used mbufs from a transmit queue of an Ethernet device, and move
+ * these mbufs into a mbuf ring for a receive queue of an Ethernet device.
+ * This can bypass the mempool path to save CPU cycles.
+ *
+ * The rte_eth_recycle_mbufs() function is meant to run in a loop with the
+ * rte_eth_rx_burst() and rte_eth_tx_burst() functions, freeing used Tx mbufs and
+ * replenishing Rx descriptors. The number of recycled mbufs depends on the demand
+ * of the Rx mbuf ring, constrained by the used mbufs available in the Tx mbuf ring.
+ *
+ * For each recycled mbuf, the rte_eth_recycle_mbufs() function performs the
+ * following operations:
+ *
+ * - Copy used *rte_mbuf* buffer pointers from Tx mbuf ring into Rx mbuf ring.
+ *
+ * - Replenish the Rx descriptors with the recycling *rte_mbuf* mbufs freed
+ * from the Tx mbuf ring.
+ *
+ * This function splits the Rx and Tx paths with different callback functions: the
+ * recycle_tx_mbufs_reuse callback is for the Tx driver, and the
+ * recycle_rx_descriptors_refill callback is for the Rx driver. rte_eth_recycle_mbufs()
+ * supports the case where the Rx Ethernet device is different from the Tx Ethernet device.
+ *
+ * It is the responsibility of users to select the Rx/Tx queue pair to recycle
+ * mbufs. Before calling this function, users must call the
+ * rte_eth_recycle_rx_queue_info_get function to retrieve the selected Rx queue information.
+ * @see rte_eth_recycle_rx_queue_info_get, struct rte_eth_recycle_rxq_info
+ *
+ * Currently, the rte_eth_recycle_mbufs() function can feed 1 Rx queue from
+ * 2 Tx queues in the same thread. Do not pair the Rx queue and Tx queue in different
+ * threads, in order to avoid memory errors caused by concurrent rewriting.
+ *
+ * @param rx_port_id
+ * Port identifying the receive side.
+ * @param rx_queue_id
+ * The index of the receive queue identifying the receive side.
+ * The value must be in the range [0, nb_rx_queue - 1] previously supplied
+ * to rte_eth_dev_configure().
+ * @param tx_port_id
+ * Port identifying the transmit side.
+ * @param tx_queue_id
+ * The index of the transmit queue identifying the transmit side.
+ * The value must be in the range [0, nb_tx_queue - 1] previously supplied
+ * to rte_eth_dev_configure().
+ * @param recycle_rxq_info
+ * A pointer to a structure of type *rte_eth_recycle_rxq_info* which contains
+ * the information of the Rx queue mbuf ring.
+ * @return
+ * The number of recycling mbufs.
+ */
+__rte_experimental
+static inline uint16_t
+rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id,
+ uint16_t tx_port_id, uint16_t tx_queue_id,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info)
+{
+ struct rte_eth_fp_ops *p;
+ void *qd;
+ uint16_t nb_mbufs;
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+ if (tx_port_id >= RTE_MAX_ETHPORTS ||
+ tx_queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid tx_port_id=%u or tx_queue_id=%u\n",
+ tx_port_id, tx_queue_id);
+ return 0;
+ }
+#endif
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[tx_port_id];
+ qd = p->txq.data[tx_queue_id];
+
+#ifdef RTE_ETHDEV_DEBUG_TX
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(tx_port_id, 0);
+
+ if (qd == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n",
+ tx_queue_id, tx_port_id);
+ return 0;
+ }
+#endif
+ if (p->recycle_tx_mbufs_reuse == NULL)
+ return 0;
+
+ /* Copy used *rte_mbuf* buffer pointers from Tx mbuf ring
+ * into Rx mbuf ring.
+ */
+ nb_mbufs = p->recycle_tx_mbufs_reuse(qd, recycle_rxq_info);
+
+ /* If no mbufs were recycled, return 0. */
+ if (nb_mbufs == 0)
+ return 0;
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+ if (rx_port_id >= RTE_MAX_ETHPORTS ||
+ rx_queue_id >= RTE_MAX_QUEUES_PER_PORT) {
+ RTE_ETHDEV_LOG(ERR, "Invalid rx_port_id=%u or rx_queue_id=%u\n",
+ rx_port_id, rx_queue_id);
+ return 0;
+ }
+#endif
+
+ /* fetch pointer to queue data */
+ p = &rte_eth_fp_ops[rx_port_id];
+ qd = p->rxq.data[rx_queue_id];
+
+#ifdef RTE_ETHDEV_DEBUG_RX
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(rx_port_id, 0);
+
+ if (qd == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n",
+ rx_queue_id, rx_port_id);
+ return 0;
+ }
+#endif
+
+ if (p->recycle_rx_descriptors_refill == NULL)
+ return 0;
+
+ /* Replenish the Rx descriptors with the recycled mbufs
+ * in the Rx mbuf ring.
+ */
+ p->recycle_rx_descriptors_refill(qd, nb_mbufs);
+
+ return nb_mbufs;
+}
+
/**
* @warning
* @b EXPERIMENTAL: this API may change without prior notice
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index 46e9721e07..a24ad7a6b2 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -55,6 +55,13 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
/** @internal Check the status of a Tx descriptor */
typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
+/** @internal Copy used mbufs from Tx mbuf ring into Rx mbuf ring */
+typedef uint16_t (*eth_recycle_tx_mbufs_reuse_t)(void *txq,
+ struct rte_eth_recycle_rxq_info *recycle_rxq_info);
+
+/** @internal Refill Rx descriptors with the recycling mbufs */
+typedef void (*eth_recycle_rx_descriptors_refill_t)(void *rxq, uint16_t nb);
+
/**
* @internal
* Structure used to hold opaque pointers to internal ethdev Rx/Tx
@@ -83,15 +90,17 @@ struct rte_eth_fp_ops {
* Rx fast-path functions and related data.
* 64-bit systems: occupies first 64B line
*/
+ /** Rx queues data. */
+ struct rte_ethdev_qdata rxq;
/** PMD receive function. */
eth_rx_burst_t rx_pkt_burst;
/** Get the number of used Rx descriptors. */
eth_rx_queue_count_t rx_queue_count;
/** Check the status of a Rx descriptor. */
eth_rx_descriptor_status_t rx_descriptor_status;
- /** Rx queues data. */
- struct rte_ethdev_qdata rxq;
- uintptr_t reserved1[3];
+ /** Refill Rx descriptors with the recycling mbufs. */
+ eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
+ uintptr_t reserved1[2];
/**@}*/
/**@{*/
@@ -99,15 +108,17 @@ struct rte_eth_fp_ops {
* Tx fast-path functions and related data.
* 64-bit systems: occupies second 64B line
*/
+ /** Tx queues data. */
+ struct rte_ethdev_qdata txq;
/** PMD transmit function. */
eth_tx_burst_t tx_pkt_burst;
/** PMD transmit prepare function. */
eth_tx_prep_t tx_pkt_prepare;
/** Check the status of a Tx descriptor. */
eth_tx_descriptor_status_t tx_descriptor_status;
- /** Tx queues data. */
- struct rte_ethdev_qdata txq;
- uintptr_t reserved2[3];
+ /** Copy used mbufs from Tx mbuf ring into Rx. */
+ eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
+ uintptr_t reserved2[2];
/**@}*/
} __rte_cache_aligned;
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index b965d6aa52..e52c1563b4 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -312,6 +312,10 @@ EXPERIMENTAL {
rte_flow_async_action_list_handle_query_update;
rte_flow_async_actions_update;
rte_flow_restore_info_dynflag;
+
+ # added in 23.11
+ rte_eth_recycle_mbufs;
+ rte_eth_recycle_rx_queue_info_get;
};
INTERNAL {
--
2.25.1
^ permalink raw reply [relevance 3%]
* [PATCH RESEND v6 2/5] ethdev: fix skip valid port in probing callback
2023-08-02 3:15 3% ` [PATCH RESEND v6 " Huisong Li
@ 2023-08-02 3:15 2% ` Huisong Li
0 siblings, 0 replies; 200+ results
From: Huisong Li @ 2023-08-02 3:15 UTC (permalink / raw)
To: dev
Cc: thomas, ferruh.yigit, andrew.rybchenko, fengchengwen,
liudongdong3, liuyonglong, lihuisong
The event callback in an application may use the macro RTE_ETH_FOREACH_DEV to
iterate over all enabled ports to do something (like verifying the port id
validity) when it receives a probing event. If the ethdev state of a port is
not RTE_ETH_DEV_UNUSED, this port will be considered a valid port.
However, this state is set to RTE_ETH_DEV_ATTACHED after pushing the probing
event, which means that the probing callback will skip this port. But this
assignment cannot be moved before the probing notification. See
commit be8cd210379a ("ethdev: fix port probing notification")
So this patch has to add a new state, RTE_ETH_DEV_ALLOCATED. The ethdev
state is set to RTE_ETH_DEV_ALLOCATED before pushing the probing event and to
RTE_ETH_DEV_ATTACHED once the device is definitely probed. A port is valid if
its device state is 'ALLOCATED' or 'ATTACHED'.
In addition, the new state has to be placed behind 'REMOVED' to avoid an ABI
break. Fortunately, this ethdev state is internal and applications cannot
access it directly. So this patch encapsulates an API, rte_eth_dev_is_used,
for ethdev or PMDs to call, eliminating concerns about comparing this state
enum value directly.
Fixes: be8cd210379a ("ethdev: fix port probing notification")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
drivers/net/bnxt/bnxt_ethdev.c | 3 ++-
drivers/net/mlx5/mlx5.c | 2 +-
lib/ethdev/ethdev_driver.c | 13 ++++++++++---
lib/ethdev/ethdev_driver.h | 12 ++++++++++++
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_class_eth.c | 2 +-
lib/ethdev/rte_ethdev.c | 4 ++--
lib/ethdev/rte_ethdev.h | 4 +++-
lib/ethdev/version.map | 1 +
9 files changed, 33 insertions(+), 10 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index ee1552452a..bf1910709b 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -6108,7 +6108,8 @@ bnxt_dev_uninit(struct rte_eth_dev *eth_dev)
PMD_DRV_LOG(DEBUG, "Calling Device uninit\n");
- if (eth_dev->state != RTE_ETH_DEV_UNUSED)
+
+ if (rte_eth_dev_is_used(eth_dev->state))
bnxt_dev_close_op(eth_dev);
return 0;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index b373306f98..54c6fff889 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -3152,7 +3152,7 @@ mlx5_eth_find_next(uint16_t port_id, struct rte_device *odev)
while (port_id < RTE_MAX_ETHPORTS) {
struct rte_eth_dev *dev = &rte_eth_devices[port_id];
- if (dev->state != RTE_ETH_DEV_UNUSED &&
+ if (rte_eth_dev_is_used(dev->state) &&
dev->device &&
(dev->device == odev ||
(dev->device->driver &&
diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c
index 0be1e8ca04..29e9417bea 100644
--- a/lib/ethdev/ethdev_driver.c
+++ b/lib/ethdev/ethdev_driver.c
@@ -50,8 +50,8 @@ eth_dev_find_free_port(void)
for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
/* Using shared name field to find a free port. */
if (eth_dev_shared_data->data[i].name[0] == '\0') {
- RTE_ASSERT(rte_eth_devices[i].state ==
- RTE_ETH_DEV_UNUSED);
+ RTE_ASSERT(!rte_eth_dev_is_used(
+ rte_eth_devices[i].state));
return i;
}
}
@@ -208,11 +208,18 @@ rte_eth_dev_probing_finish(struct rte_eth_dev *dev)
if (rte_eal_process_type() == RTE_PROC_SECONDARY)
eth_dev_fp_ops_setup(rte_eth_fp_ops + dev->data->port_id, dev);
+ dev->state = RTE_ETH_DEV_ALLOCATED;
rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_NEW, NULL);
dev->state = RTE_ETH_DEV_ATTACHED;
}
+bool rte_eth_dev_is_used(uint16_t dev_state)
+{
+ return dev_state == RTE_ETH_DEV_ALLOCATED ||
+ dev_state == RTE_ETH_DEV_ATTACHED;
+}
+
int
rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
{
@@ -221,7 +228,7 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
eth_dev_shared_data_prepare();
- if (eth_dev->state != RTE_ETH_DEV_UNUSED)
+ if (rte_eth_dev_is_used(eth_dev->state))
rte_eth_dev_callback_process(eth_dev,
RTE_ETH_EVENT_DESTROY, NULL);
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 980f837ab6..5bd2780643 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1582,6 +1582,18 @@ int rte_eth_dev_callback_process(struct rte_eth_dev *dev,
__rte_internal
void rte_eth_dev_probing_finish(struct rte_eth_dev *dev);
+/**
+ * Check if an Ethernet device state is used or not
+ *
+ * @param dev_state
+ * The state of the Ethernet device
+ * @return
+ * - true if the state of the Ethernet device is allocated or attached
+ * - false if this state is neither allocated nor attached
+ */
+__rte_internal
+bool rte_eth_dev_is_used(uint16_t dev_state);
+
/**
* Create memzone for HW rings.
* malloc can't be used as the physical address is needed.
diff --git a/lib/ethdev/ethdev_pci.h b/lib/ethdev/ethdev_pci.h
index 320e3e0093..efe20be1a7 100644
--- a/lib/ethdev/ethdev_pci.h
+++ b/lib/ethdev/ethdev_pci.h
@@ -165,7 +165,7 @@ rte_eth_dev_pci_generic_remove(struct rte_pci_device *pci_dev,
* eth device has been released.
*/
if (rte_eal_process_type() == RTE_PROC_SECONDARY &&
- eth_dev->state == RTE_ETH_DEV_UNUSED)
+ !rte_eth_dev_is_used(eth_dev->state))
return 0;
if (dev_uninit) {
diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c
index b61dae849d..88e56dd9a4 100644
--- a/lib/ethdev/rte_class_eth.c
+++ b/lib/ethdev/rte_class_eth.c
@@ -118,7 +118,7 @@ eth_dev_match(const struct rte_eth_dev *edev,
const struct rte_kvargs *kvlist = arg->kvlist;
unsigned int pair;
- if (edev->state == RTE_ETH_DEV_UNUSED)
+ if (!rte_eth_dev_is_used(edev->state))
return -1;
if (arg->device != NULL && arg->device != edev->device)
return -1;
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 0840d2b594..ec44490cde 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -338,7 +338,7 @@ uint16_t
rte_eth_find_next(uint16_t port_id)
{
while (port_id < RTE_MAX_ETHPORTS &&
- rte_eth_devices[port_id].state == RTE_ETH_DEV_UNUSED)
+ !rte_eth_dev_is_used(rte_eth_devices[port_id].state))
port_id++;
if (port_id >= RTE_MAX_ETHPORTS)
@@ -397,7 +397,7 @@ rte_eth_dev_is_valid_port(uint16_t port_id)
int is_valid;
if (port_id >= RTE_MAX_ETHPORTS ||
- (rte_eth_devices[port_id].state == RTE_ETH_DEV_UNUSED))
+ !rte_eth_dev_is_used(rte_eth_devices[port_id].state))
is_valid = 0;
else
is_valid = 1;
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 04a2564f22..e7e521efc4 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -2003,10 +2003,12 @@ typedef uint16_t (*rte_tx_callback_fn)(uint16_t port_id, uint16_t queue,
enum rte_eth_dev_state {
/** Device is unused before being probed. */
RTE_ETH_DEV_UNUSED = 0,
- /** Device is attached when allocated in probing. */
+ /** Device is attached when definitely probed. */
RTE_ETH_DEV_ATTACHED,
/** Device is in removed state when plug-out is detected. */
RTE_ETH_DEV_REMOVED,
+ /** Device is allocated and is set before reporting new event. */
+ RTE_ETH_DEV_ALLOCATED,
};
struct rte_eth_dev_sriov {
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index b965d6aa52..ad95329b57 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -326,6 +326,7 @@ INTERNAL {
rte_eth_dev_get_by_name;
rte_eth_dev_is_rx_hairpin_queue;
rte_eth_dev_is_tx_hairpin_queue;
+ rte_eth_dev_is_used;
rte_eth_dev_probing_finish;
rte_eth_dev_release_port;
rte_eth_dev_internal_reset;
--
2.22.0
^ permalink raw reply [relevance 2%]
* [PATCH RESEND v6 0/5] app/testpmd: support multiple process attach and detach port
[not found] <20220825024425.10534-1-lihuisong@huawei.com>
@ 2023-08-02 3:15 3% ` Huisong Li
2023-08-02 3:15 2% ` [PATCH RESEND v6 2/5] ethdev: fix skip valid port in probing callback Huisong Li
1 sibling, 1 reply; 200+ results
From: Huisong Li @ 2023-08-02 3:15 UTC (permalink / raw)
To: dev
Cc: thomas, ferruh.yigit, andrew.rybchenko, fengchengwen,
liudongdong3, liuyonglong, lihuisong
This patchset fix some bugs and support attaching and detaching port
in primary and secondary.
---
-v6: adjust rte_eth_dev_is_used position based on alphabetical order
in version.map
-v5: move 'ALLOCATED' state to the back of 'REMOVED' to avoid ABI break.
-v4: fix a misspelling.
-v3:
#1 merge patch 1/6 and patch 2/6 into patch 1/5, and add modification
for other bus type.
#2 add a RTE_ETH_DEV_ALLOCATED state in rte_eth_dev_state to resolve
the problem in patch 2/5.
-v2: resend due to CI unexplained failure.
Huisong Li (5):
drivers/bus: restore driver assignment at front of probing
ethdev: fix skip valid port in probing callback
app/testpmd: check the validity of the port
app/testpmd: add attach and detach port for multiple process
app/testpmd: stop forwarding in new or destroy event
app/test-pmd/testpmd.c | 47 +++++++++++++++---------
app/test-pmd/testpmd.h | 1 -
drivers/bus/auxiliary/auxiliary_common.c | 9 ++++-
drivers/bus/dpaa/dpaa_bus.c | 9 ++++-
drivers/bus/fslmc/fslmc_bus.c | 8 +++-
drivers/bus/ifpga/ifpga_bus.c | 12 ++++--
drivers/bus/pci/pci_common.c | 9 ++++-
drivers/bus/vdev/vdev.c | 10 ++++-
drivers/bus/vmbus/vmbus_common.c | 9 ++++-
drivers/net/bnxt/bnxt_ethdev.c | 3 +-
drivers/net/bonding/bonding_testpmd.c | 1 -
drivers/net/mlx5/mlx5.c | 2 +-
lib/ethdev/ethdev_driver.c | 13 +++++--
lib/ethdev/ethdev_driver.h | 12 ++++++
lib/ethdev/ethdev_pci.h | 2 +-
lib/ethdev/rte_class_eth.c | 2 +-
lib/ethdev/rte_ethdev.c | 4 +-
lib/ethdev/rte_ethdev.h | 4 +-
lib/ethdev/version.map | 1 +
19 files changed, 114 insertions(+), 44 deletions(-)
--
2.22.0
^ permalink raw reply [relevance 3%]
* [PATCH v2 2/2] kni: remove deprecated kernel network interface
@ 2023-08-01 16:05 1% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-08-01 16:05 UTC (permalink / raw)
To: dev
Cc: Stephen Hemminger, Thomas Monjalon, Maxime Coquelin, Chenbo Xia,
Anatoly Burakov, Cristian Dumitrescu, Nithin Dabilpuram,
Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Bruce Richardson
The KNI driver had design flaws, such as calling userspace with a kernel
mutex held, that made it prone to deadlock. The design also introduced
security risks because the kernel driver trusted the userspace
(DPDK) KNI interface. The kernel driver was never reviewed by
the upstream kernel community and would never have been accepted.
And since the Linux kernel API is not stable, it was a continual
source of maintenance issues, especially with distribution kernels.
There are better ways to inject packets into the kernel, such as the
virtio_user, tap and XDP drivers. None of these need out-of-tree
kernel drivers.
The deprecation was announced in the 22.11 release, and users were
directed to alternatives there.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
MAINTAINERS | 10 -
app/test/meson.build | 2 -
app/test/test_kni.c | 740 ---------------
doc/api/doxy-api-index.md | 2 -
doc/api/doxy-api.conf.in | 1 -
doc/guides/contributing/documentation.rst | 4 +-
doc/guides/howto/flow_bifurcation.rst | 3 +-
doc/guides/nics/index.rst | 1 -
doc/guides/nics/kni.rst | 170 ----
doc/guides/nics/virtio.rst | 92 +-
.../prog_guide/env_abstraction_layer.rst | 2 -
doc/guides/prog_guide/glossary.rst | 3 -
doc/guides/prog_guide/index.rst | 1 -
.../prog_guide/kernel_nic_interface.rst | 423 ---------
doc/guides/prog_guide/packet_framework.rst | 9 +-
doc/guides/rel_notes/deprecation.rst | 9 +-
doc/guides/rel_notes/release_23_11.rst | 2 +
doc/guides/sample_app_ug/ip_pipeline.rst | 22 -
drivers/net/cnxk/cnxk_ethdev.c | 2 +-
drivers/net/kni/meson.build | 11 -
drivers/net/kni/rte_eth_kni.c | 524 -----------
drivers/net/meson.build | 1 -
examples/ip_pipeline/Makefile | 1 -
examples/ip_pipeline/cli.c | 95 --
examples/ip_pipeline/examples/kni.cli | 69 --
examples/ip_pipeline/kni.c | 168 ----
examples/ip_pipeline/kni.h | 46 -
examples/ip_pipeline/main.c | 10 -
examples/ip_pipeline/meson.build | 1 -
examples/ip_pipeline/pipeline.c | 57 --
examples/ip_pipeline/pipeline.h | 2 -
kernel/linux/kni/Kbuild | 6 -
kernel/linux/kni/compat.h | 157 ----
kernel/linux/kni/kni_dev.h | 137 ---
kernel/linux/kni/kni_fifo.h | 87 --
kernel/linux/kni/kni_misc.c | 719 --------------
kernel/linux/kni/kni_net.c | 878 ------------------
kernel/linux/kni/meson.build | 41 -
kernel/linux/meson.build | 2 +-
lib/eal/common/eal_common_log.c | 1 -
lib/eal/include/rte_log.h | 2 +-
lib/eal/linux/eal.c | 19 -
lib/kni/meson.build | 21 -
lib/kni/rte_kni.c | 843 -----------------
lib/kni/rte_kni.h | 269 ------
lib/kni/rte_kni_common.h | 147 ---
lib/kni/rte_kni_fifo.h | 117 ---
lib/kni/version.map | 24 -
lib/meson.build | 6 -
lib/port/meson.build | 6 -
lib/port/rte_port_kni.c | 515 ----------
lib/port/rte_port_kni.h | 63 --
lib/port/version.map | 3 -
meson_options.txt | 2 +-
54 files changed, 14 insertions(+), 6534 deletions(-)
delete mode 100644 app/test/test_kni.c
delete mode 100644 doc/guides/nics/kni.rst
delete mode 100644 doc/guides/prog_guide/kernel_nic_interface.rst
delete mode 100644 drivers/net/kni/meson.build
delete mode 100644 drivers/net/kni/rte_eth_kni.c
delete mode 100644 examples/ip_pipeline/examples/kni.cli
delete mode 100644 examples/ip_pipeline/kni.c
delete mode 100644 examples/ip_pipeline/kni.h
delete mode 100644 kernel/linux/kni/Kbuild
delete mode 100644 kernel/linux/kni/compat.h
delete mode 100644 kernel/linux/kni/kni_dev.h
delete mode 100644 kernel/linux/kni/kni_fifo.h
delete mode 100644 kernel/linux/kni/kni_misc.c
delete mode 100644 kernel/linux/kni/kni_net.c
delete mode 100644 kernel/linux/kni/meson.build
delete mode 100644 lib/kni/meson.build
delete mode 100644 lib/kni/rte_kni.c
delete mode 100644 lib/kni/rte_kni.h
delete mode 100644 lib/kni/rte_kni_common.h
delete mode 100644 lib/kni/rte_kni_fifo.h
delete mode 100644 lib/kni/version.map
delete mode 100644 lib/port/rte_port_kni.c
delete mode 100644 lib/port/rte_port_kni.h
diff --git a/MAINTAINERS b/MAINTAINERS
index dbb25211c367..6345e7f8a65d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -617,12 +617,6 @@ F: doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
F: app/test/test_link_bonding*
F: examples/bond/
-Linux KNI
-F: kernel/linux/kni/
-F: lib/kni/
-F: doc/guides/prog_guide/kernel_nic_interface.rst
-F: app/test/test_kni.c
-
Linux AF_PACKET
M: John W. Linville <linville@tuxdriver.com>
F: drivers/net/af_packet/
@@ -1027,10 +1021,6 @@ F: drivers/net/tap/
F: doc/guides/nics/tap.rst
F: doc/guides/nics/features/tap.ini
-KNI PMD
-F: drivers/net/kni/
-F: doc/guides/nics/kni.rst
-
Ring PMD
M: Bruce Richardson <bruce.richardson@intel.com>
F: drivers/net/ring/
diff --git a/app/test/meson.build b/app/test/meson.build
index 90a2e350c7ae..66897c14a399 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -72,7 +72,6 @@ test_sources = files(
'test_ipsec.c',
'test_ipsec_sad.c',
'test_ipsec_perf.c',
- 'test_kni.c',
'test_kvargs.c',
'test_lcores.c',
'test_logs.c',
@@ -237,7 +236,6 @@ fast_tests = [
['fbarray_autotest', true, true],
['hash_readwrite_func_autotest', false, true],
['ipsec_autotest', true, true],
- ['kni_autotest', false, true],
['kvargs_autotest', true, true],
['member_autotest', true, true],
['power_cpufreq_autotest', false, true],
diff --git a/app/test/test_kni.c b/app/test/test_kni.c
deleted file mode 100644
index 4039da0b080c..000000000000
--- a/app/test/test_kni.c
+++ /dev/null
@@ -1,740 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#include "test.h"
-
-#include <stdio.h>
-#include <stdint.h>
-#include <unistd.h>
-#include <string.h>
-#if !defined(RTE_EXEC_ENV_LINUX) || !defined(RTE_LIB_KNI)
-
-static int
-test_kni(void)
-{
- printf("KNI not supported, skipping test\n");
- return TEST_SKIPPED;
-}
-
-#else
-
-#include <sys/wait.h>
-#include <dirent.h>
-
-#include <rte_string_fns.h>
-#include <rte_mempool.h>
-#include <rte_ethdev.h>
-#include <rte_cycles.h>
-#include <rte_kni.h>
-
-#define NB_MBUF 8192
-#define MAX_PACKET_SZ 2048
-#define MBUF_DATA_SZ (MAX_PACKET_SZ + RTE_PKTMBUF_HEADROOM)
-#define PKT_BURST_SZ 32
-#define MEMPOOL_CACHE_SZ PKT_BURST_SZ
-#define SOCKET 0
-#define NB_RXD 1024
-#define NB_TXD 1024
-#define KNI_TIMEOUT_MS 5000 /* ms */
-
-#define IFCONFIG "/sbin/ifconfig "
-#define TEST_KNI_PORT "test_kni_port"
-#define KNI_MODULE_PATH "/sys/module/rte_kni"
-#define KNI_MODULE_PARAM_LO KNI_MODULE_PATH"/parameters/lo_mode"
-#define KNI_TEST_MAX_PORTS 4
-/* The threshold number of mbufs to be transmitted or received. */
-#define KNI_NUM_MBUF_THRESHOLD 100
-static int kni_pkt_mtu = 0;
-
-struct test_kni_stats {
- volatile uint64_t ingress;
- volatile uint64_t egress;
-};
-
-static const struct rte_eth_rxconf rx_conf = {
- .rx_thresh = {
- .pthresh = 8,
- .hthresh = 8,
- .wthresh = 4,
- },
- .rx_free_thresh = 0,
-};
-
-static const struct rte_eth_txconf tx_conf = {
- .tx_thresh = {
- .pthresh = 36,
- .hthresh = 0,
- .wthresh = 0,
- },
- .tx_free_thresh = 0,
- .tx_rs_thresh = 0,
-};
-
-static const struct rte_eth_conf port_conf = {
- .txmode = {
- .mq_mode = RTE_ETH_MQ_TX_NONE,
- },
-};
-
-static struct rte_kni_ops kni_ops = {
- .change_mtu = NULL,
- .config_network_if = NULL,
- .config_mac_address = NULL,
- .config_promiscusity = NULL,
-};
-
-static unsigned int lcore_main, lcore_ingress, lcore_egress;
-static struct rte_kni *test_kni_ctx;
-static struct test_kni_stats stats;
-
-static volatile uint32_t test_kni_processing_flag;
-
-static struct rte_mempool *
-test_kni_create_mempool(void)
-{
- struct rte_mempool * mp;
-
- mp = rte_mempool_lookup("kni_mempool");
- if (!mp)
- mp = rte_pktmbuf_pool_create("kni_mempool",
- NB_MBUF,
- MEMPOOL_CACHE_SZ, 0, MBUF_DATA_SZ,
- SOCKET);
-
- return mp;
-}
-
-static struct rte_mempool *
-test_kni_lookup_mempool(void)
-{
- return rte_mempool_lookup("kni_mempool");
-}
-/* Callback for request of changing MTU */
-static int
-kni_change_mtu(uint16_t port_id, unsigned int new_mtu)
-{
- printf("Change MTU of port %d to %u\n", port_id, new_mtu);
- kni_pkt_mtu = new_mtu;
- printf("Change MTU of port %d to %i successfully.\n",
- port_id, kni_pkt_mtu);
- return 0;
-}
-
-static int
-test_kni_link_change(void)
-{
- int ret;
- int pid;
-
- pid = fork();
- if (pid < 0) {
- printf("Error: Failed to fork a process\n");
- return -1;
- }
-
- if (pid == 0) {
- printf("Starting KNI Link status change tests.\n");
- if (system(IFCONFIG TEST_KNI_PORT" up") == -1) {
- ret = -1;
- goto error;
- }
-
- ret = rte_kni_update_link(test_kni_ctx, 1);
- if (ret < 0) {
- printf("Failed to change link state to Up ret=%d.\n",
- ret);
- goto error;
- }
- rte_delay_ms(1000);
- printf("KNI: Set LINKUP, previous state=%d\n", ret);
-
- ret = rte_kni_update_link(test_kni_ctx, 0);
- if (ret != 1) {
- printf(
- "Failed! Previous link state should be 1, returned %d.\n",
- ret);
- goto error;
- }
- rte_delay_ms(1000);
- printf("KNI: Set LINKDOWN, previous state=%d\n", ret);
-
- ret = rte_kni_update_link(test_kni_ctx, 1);
- if (ret != 0) {
- printf(
- "Failed! Previous link state should be 0, returned %d.\n",
- ret);
- goto error;
- }
- printf("KNI: Set LINKUP, previous state=%d\n", ret);
-
- ret = 0;
- rte_delay_ms(1000);
-
-error:
- if (system(IFCONFIG TEST_KNI_PORT" down") == -1)
- ret = -1;
-
- printf("KNI: Link status change tests: %s.\n",
- (ret == 0) ? "Passed" : "Failed");
- exit(ret);
- } else {
- int p_ret, status;
-
- while (1) {
- p_ret = waitpid(pid, &status, WNOHANG);
- if (p_ret != 0) {
- if (WIFEXITED(status))
- return WEXITSTATUS(status);
- return -1;
- }
- rte_delay_ms(10);
- rte_kni_handle_request(test_kni_ctx);
- }
- }
-}
-/**
- * This loop fully tests the basic functions of KNI. e.g. transmitting,
- * receiving to, from kernel space, and kernel requests.
- *
- * This is the loop to transmit/receive mbufs to/from kernel interface with
- * supported by KNI kernel module. The ingress lcore will allocate mbufs and
- * transmit them to kernel space; while the egress lcore will receive the mbufs
- * from kernel space and free them.
- * On the main lcore, several commands will be run to check handling the
- * kernel requests. And it will finally set the flag to exit the KNI
- * transmitting/receiving to/from the kernel space.
- *
- * Note: To support this testing, the KNI kernel module needs to be insmodded
- * in one of its loopback modes.
- */
-static int
-test_kni_loop(__rte_unused void *arg)
-{
- int ret = 0;
- unsigned nb_rx, nb_tx, num, i;
- const unsigned lcore_id = rte_lcore_id();
- struct rte_mbuf *pkts_burst[PKT_BURST_SZ];
-
- if (lcore_id == lcore_main) {
- rte_delay_ms(KNI_TIMEOUT_MS);
- /* tests of handling kernel request */
- if (system(IFCONFIG TEST_KNI_PORT" up") == -1)
- ret = -1;
- if (system(IFCONFIG TEST_KNI_PORT" mtu 1400") == -1)
- ret = -1;
- if (system(IFCONFIG TEST_KNI_PORT" down") == -1)
- ret = -1;
- rte_delay_ms(KNI_TIMEOUT_MS);
- test_kni_processing_flag = 1;
- } else if (lcore_id == lcore_ingress) {
- struct rte_mempool *mp = test_kni_lookup_mempool();
-
- if (mp == NULL)
- return -1;
-
- while (1) {
- if (test_kni_processing_flag)
- break;
-
- for (nb_rx = 0; nb_rx < PKT_BURST_SZ; nb_rx++) {
- pkts_burst[nb_rx] = rte_pktmbuf_alloc(mp);
- if (!pkts_burst[nb_rx])
- break;
- }
-
- num = rte_kni_tx_burst(test_kni_ctx, pkts_burst,
- nb_rx);
- stats.ingress += num;
- rte_kni_handle_request(test_kni_ctx);
- if (num < nb_rx) {
- for (i = num; i < nb_rx; i++) {
- rte_pktmbuf_free(pkts_burst[i]);
- }
- }
- rte_delay_ms(10);
- }
- } else if (lcore_id == lcore_egress) {
- while (1) {
- if (test_kni_processing_flag)
- break;
- num = rte_kni_rx_burst(test_kni_ctx, pkts_burst,
- PKT_BURST_SZ);
- stats.egress += num;
- for (nb_tx = 0; nb_tx < num; nb_tx++)
- rte_pktmbuf_free(pkts_burst[nb_tx]);
- rte_delay_ms(10);
- }
- }
-
- return ret;
-}
-
-static int
-test_kni_allocate_lcores(void)
-{
- unsigned i, count = 0;
-
- lcore_main = rte_get_main_lcore();
- printf("main lcore: %u\n", lcore_main);
- for (i = 0; i < RTE_MAX_LCORE; i++) {
- if (count >=2 )
- break;
- if (rte_lcore_is_enabled(i) && i != lcore_main) {
- count ++;
- if (count == 1)
- lcore_ingress = i;
- else if (count == 2)
- lcore_egress = i;
- }
- }
- printf("count: %u\n", count);
-
- return count == 2 ? 0 : -1;
-}
-
-static int
-test_kni_register_handler_mp(void)
-{
-#define TEST_KNI_HANDLE_REQ_COUNT 10 /* 5s */
-#define TEST_KNI_HANDLE_REQ_INTERVAL 500 /* ms */
-#define TEST_KNI_MTU 1450
-#define TEST_KNI_MTU_STR " 1450"
- int pid;
-
- pid = fork();
- if (pid < 0) {
- printf("Failed to fork a process\n");
- return -1;
- } else if (pid == 0) {
- int i;
- struct rte_kni *kni = rte_kni_get(TEST_KNI_PORT);
- struct rte_kni_ops ops = {
- .change_mtu = kni_change_mtu,
- .config_network_if = NULL,
- .config_mac_address = NULL,
- .config_promiscusity = NULL,
- };
-
- if (!kni) {
- printf("Failed to get KNI named %s\n", TEST_KNI_PORT);
- exit(-1);
- }
-
- kni_pkt_mtu = 0;
-
- /* Check with the invalid parameters */
- if (rte_kni_register_handlers(kni, NULL) == 0) {
- printf("Unexpectedly register successfully "
- "with NULL ops pointer\n");
- exit(-1);
- }
- if (rte_kni_register_handlers(NULL, &ops) == 0) {
- printf("Unexpectedly register successfully "
- "to NULL KNI device pointer\n");
- exit(-1);
- }
-
- if (rte_kni_register_handlers(kni, &ops)) {
- printf("Fail to register ops\n");
- exit(-1);
- }
-
- /* Check registering again after it has been registered */
- if (rte_kni_register_handlers(kni, &ops) == 0) {
- printf("Unexpectedly register successfully after "
- "it has already been registered\n");
- exit(-1);
- }
-
- /**
- * Handle the request of setting MTU,
- * with registered handlers.
- */
- for (i = 0; i < TEST_KNI_HANDLE_REQ_COUNT; i++) {
- rte_kni_handle_request(kni);
- if (kni_pkt_mtu == TEST_KNI_MTU)
- break;
- rte_delay_ms(TEST_KNI_HANDLE_REQ_INTERVAL);
- }
- if (i >= TEST_KNI_HANDLE_REQ_COUNT) {
- printf("MTU has not been set\n");
- exit(-1);
- }
-
- kni_pkt_mtu = 0;
- if (rte_kni_unregister_handlers(kni) < 0) {
- printf("Fail to unregister ops\n");
- exit(-1);
- }
-
- /* Check with invalid parameter */
- if (rte_kni_unregister_handlers(NULL) == 0) {
- exit(-1);
- }
-
- /**
- * Handle the request of setting MTU,
- * without registered handlers.
- */
- for (i = 0; i < TEST_KNI_HANDLE_REQ_COUNT; i++) {
- rte_kni_handle_request(kni);
- if (kni_pkt_mtu != 0)
- break;
- rte_delay_ms(TEST_KNI_HANDLE_REQ_INTERVAL);
- }
- if (kni_pkt_mtu != 0) {
- printf("MTU shouldn't be set\n");
- exit(-1);
- }
-
- exit(0);
- } else {
- int p_ret, status;
-
- rte_delay_ms(1000);
- if (system(IFCONFIG TEST_KNI_PORT " mtu" TEST_KNI_MTU_STR)
- == -1)
- return -1;
-
- rte_delay_ms(1000);
- if (system(IFCONFIG TEST_KNI_PORT " mtu" TEST_KNI_MTU_STR)
- == -1)
- return -1;
-
- p_ret = wait(&status);
- if (!WIFEXITED(status)) {
- printf("Child process (%d) exit abnormally\n", p_ret);
- return -1;
- }
- if (WEXITSTATUS(status) != 0) {
- printf("Child process exit with failure\n");
- return -1;
- }
- }
-
- return 0;
-}
-
-static int
-test_kni_processing(uint16_t port_id, struct rte_mempool *mp)
-{
- int ret = 0;
- unsigned i;
- struct rte_kni *kni;
- struct rte_kni_conf conf;
- struct rte_eth_dev_info info;
- struct rte_kni_ops ops;
-
- if (!mp)
- return -1;
-
- memset(&conf, 0, sizeof(conf));
- memset(&info, 0, sizeof(info));
- memset(&ops, 0, sizeof(ops));
-
- ret = rte_eth_dev_info_get(port_id, &info);
- if (ret != 0) {
- printf("Error during getting device (port %u) info: %s\n",
- port_id, strerror(-ret));
- return -1;
- }
-
- snprintf(conf.name, sizeof(conf.name), TEST_KNI_PORT);
-
- /* core id 1 configured for kernel thread */
- conf.core_id = 1;
- conf.force_bind = 1;
- conf.mbuf_size = MAX_PACKET_SZ;
- conf.group_id = port_id;
-
- ops = kni_ops;
- ops.port_id = port_id;
-
- /* basic test of kni processing */
- kni = rte_kni_alloc(mp, &conf, &ops);
- if (!kni) {
- printf("fail to create kni\n");
- return -1;
- }
-
- test_kni_ctx = kni;
- test_kni_processing_flag = 0;
- stats.ingress = 0;
- stats.egress = 0;
-
- /**
- * Check multiple processes support on
- * registering/unregistering handlers.
- */
- if (test_kni_register_handler_mp() < 0) {
- printf("fail to check multiple process support\n");
- ret = -1;
- goto fail_kni;
- }
-
- ret = test_kni_link_change();
- if (ret != 0)
- goto fail_kni;
-
- rte_eal_mp_remote_launch(test_kni_loop, NULL, CALL_MAIN);
- RTE_LCORE_FOREACH_WORKER(i) {
- if (rte_eal_wait_lcore(i) < 0) {
- ret = -1;
- goto fail_kni;
- }
- }
- /**
- * Check if the number of mbufs received from kernel space is equal
- * to that of transmitted to kernel space
- */
- if (stats.ingress < KNI_NUM_MBUF_THRESHOLD ||
- stats.egress < KNI_NUM_MBUF_THRESHOLD) {
- printf("The ingress/egress number should not be "
- "less than %u\n", (unsigned)KNI_NUM_MBUF_THRESHOLD);
- ret = -1;
- goto fail_kni;
- }
-
- if (rte_kni_release(kni) < 0) {
- printf("fail to release kni\n");
- return -1;
- }
- test_kni_ctx = NULL;
-
- /* test of reusing memzone */
- kni = rte_kni_alloc(mp, &conf, &ops);
- if (!kni) {
- printf("fail to create kni\n");
- return -1;
- }
-
- /* Release the kni for following testing */
- if (rte_kni_release(kni) < 0) {
- printf("fail to release kni\n");
- return -1;
- }
-
- return ret;
-fail_kni:
- if (rte_kni_release(kni) < 0) {
- printf("fail to release kni\n");
- ret = -1;
- }
-
- return ret;
-}
-
-static int
-test_kni(void)
-{
- int ret = -1;
- uint16_t port_id;
- struct rte_kni *kni;
- struct rte_mempool *mp;
- struct rte_kni_conf conf;
- struct rte_eth_dev_info info;
- struct rte_kni_ops ops;
- FILE *fd;
- DIR *dir;
- char buf[16];
-
- dir = opendir(KNI_MODULE_PATH);
- if (!dir) {
- if (errno == ENOENT) {
- printf("Cannot run UT due to missing rte_kni module\n");
- return TEST_SKIPPED;
- }
- printf("opendir: %s", strerror(errno));
- return -1;
- }
- closedir(dir);
-
- /* Initialize KNI subsystem */
- ret = rte_kni_init(KNI_TEST_MAX_PORTS);
- if (ret < 0) {
- printf("fail to initialize KNI subsystem\n");
- return -1;
- }
-
- if (test_kni_allocate_lcores() < 0) {
- printf("No enough lcores for kni processing\n");
- return -1;
- }
-
- mp = test_kni_create_mempool();
- if (!mp) {
- printf("fail to create mempool for kni\n");
- return -1;
- }
-
- /* configuring port 0 for the test is enough */
- port_id = 0;
- ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
- if (ret < 0) {
- printf("fail to configure port %d\n", port_id);
- return -1;
- }
-
- ret = rte_eth_rx_queue_setup(port_id, 0, NB_RXD, SOCKET, &rx_conf, mp);
- if (ret < 0) {
- printf("fail to setup rx queue for port %d\n", port_id);
- return -1;
- }
-
- ret = rte_eth_tx_queue_setup(port_id, 0, NB_TXD, SOCKET, &tx_conf);
- if (ret < 0) {
- printf("fail to setup tx queue for port %d\n", port_id);
- return -1;
- }
-
- ret = rte_eth_dev_start(port_id);
- if (ret < 0) {
- printf("fail to start port %d\n", port_id);
- return -1;
- }
- ret = rte_eth_promiscuous_enable(port_id);
- if (ret != 0) {
- printf("fail to enable promiscuous mode for port %d: %s\n",
- port_id, rte_strerror(-ret));
- return -1;
- }
-
- /* basic test of kni processing */
- fd = fopen(KNI_MODULE_PARAM_LO, "r");
- if (fd == NULL) {
- printf("fopen: %s", strerror(errno));
- return -1;
- }
- memset(&buf, 0, sizeof(buf));
- if (fgets(buf, sizeof(buf), fd)) {
- if (!strncmp(buf, "lo_mode_fifo", strlen("lo_mode_fifo")) ||
- !strncmp(buf, "lo_mode_fifo_skb",
- strlen("lo_mode_fifo_skb"))) {
- ret = test_kni_processing(port_id, mp);
- if (ret < 0) {
- fclose(fd);
- goto fail;
- }
- } else
- printf("test_kni_processing skipped because of missing rte_kni module lo_mode argument\n");
- }
- fclose(fd);
-
- /* test of allocating KNI with NULL mempool pointer */
- memset(&info, 0, sizeof(info));
- memset(&conf, 0, sizeof(conf));
- memset(&ops, 0, sizeof(ops));
-
- ret = rte_eth_dev_info_get(port_id, &info);
- if (ret != 0) {
- printf("Error during getting device (port %u) info: %s\n",
- port_id, strerror(-ret));
- return -1;
- }
-
- conf.group_id = port_id;
- conf.mbuf_size = MAX_PACKET_SZ;
-
- ops = kni_ops;
- ops.port_id = port_id;
- kni = rte_kni_alloc(NULL, &conf, &ops);
- if (kni) {
- ret = -1;
- printf("unexpectedly creates kni successfully with NULL "
- "mempool pointer\n");
- goto fail;
- }
-
- /* test of allocating KNI without configurations */
- kni = rte_kni_alloc(mp, NULL, NULL);
- if (kni) {
- ret = -1;
- printf("Unexpectedly allocate KNI device successfully "
- "without configurations\n");
- goto fail;
- }
-
- /* test of allocating KNI without a name */
- memset(&conf, 0, sizeof(conf));
- memset(&info, 0, sizeof(info));
- memset(&ops, 0, sizeof(ops));
-
- ret = rte_eth_dev_info_get(port_id, &info);
- if (ret != 0) {
- printf("Error during getting device (port %u) info: %s\n",
- port_id, strerror(-ret));
- ret = -1;
- goto fail;
- }
-
- conf.group_id = port_id;
- conf.mbuf_size = MAX_PACKET_SZ;
-
- ops = kni_ops;
- ops.port_id = port_id;
- kni = rte_kni_alloc(mp, &conf, &ops);
- if (kni) {
- ret = -1;
- printf("Unexpectedly allocate a KNI device successfully "
- "without a name\n");
- goto fail;
- }
-
- /* test of releasing NULL kni context */
- ret = rte_kni_release(NULL);
- if (ret == 0) {
- ret = -1;
- printf("unexpectedly release kni successfully\n");
- goto fail;
- }
-
- /* test of handling request on NULL device pointer */
- ret = rte_kni_handle_request(NULL);
- if (ret == 0) {
- ret = -1;
- printf("Unexpectedly handle request on NULL device pointer\n");
- goto fail;
- }
-
- /* test of getting KNI device with pointer to NULL */
- kni = rte_kni_get(NULL);
- if (kni) {
- ret = -1;
- printf("Unexpectedly get a KNI device with "
- "NULL name pointer\n");
- goto fail;
- }
-
- /* test of getting KNI device with an zero length name string */
- memset(&conf, 0, sizeof(conf));
- kni = rte_kni_get(conf.name);
- if (kni) {
- ret = -1;
- printf("Unexpectedly get a KNI device with "
- "zero length name string\n");
- goto fail;
- }
-
- /* test of getting KNI device with an invalid string name */
- memset(&conf, 0, sizeof(conf));
- snprintf(conf.name, sizeof(conf.name), "testing");
- kni = rte_kni_get(conf.name);
- if (kni) {
- ret = -1;
- printf("Unexpectedly get a KNI device with "
- "a never used name string\n");
- goto fail;
- }
- ret = 0;
-
-fail:
- if (rte_eth_dev_stop(port_id) != 0)
- printf("Failed to stop port %u\n", port_id);
-
- return ret;
-}
-
-#endif
-
-REGISTER_TEST_COMMAND(kni_autotest, test_kni);
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 5cd8c9de8105..fdeda139329e 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -43,7 +43,6 @@ The public API headers are grouped by topics:
[bond](@ref rte_eth_bond.h),
[vhost](@ref rte_vhost.h),
[vdpa](@ref rte_vdpa.h),
- [KNI](@ref rte_kni.h),
[ixgbe](@ref rte_pmd_ixgbe.h),
[i40e](@ref rte_pmd_i40e.h),
[iavf](@ref rte_pmd_iavf.h),
@@ -177,7 +176,6 @@ The public API headers are grouped by topics:
[frag](@ref rte_port_frag.h),
[reass](@ref rte_port_ras.h),
[sched](@ref rte_port_sched.h),
- [kni](@ref rte_port_kni.h),
[src/sink](@ref rte_port_source_sink.h)
* [table](@ref rte_table.h):
[lpm IPv4](@ref rte_table_lpm.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 9a9c52e5569c..31885039c768 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -48,7 +48,6 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/ip_frag \
@TOPDIR@/lib/ipsec \
@TOPDIR@/lib/jobstats \
- @TOPDIR@/lib/kni \
@TOPDIR@/lib/kvargs \
@TOPDIR@/lib/latencystats \
@TOPDIR@/lib/lpm \
diff --git a/doc/guides/contributing/documentation.rst b/doc/guides/contributing/documentation.rst
index 7fcbb7fc43b2..38e184a130ee 100644
--- a/doc/guides/contributing/documentation.rst
+++ b/doc/guides/contributing/documentation.rst
@@ -94,8 +94,8 @@ added to by the developer.
* **The Programmers Guide**
- The Programmers Guide explains how the API components of DPDK such as the EAL, Memzone, Rings and the Hash Library work.
- It also explains how some higher level functionality such as Packet Distributor, Packet Framework and KNI work.
+ The Programmers Guide explains how the API components of the DPDK such as the EAL, Memzone, Rings and the Hash Library work.
+ It also describes some of the higher level functionality, such as the Packet Distributor and the Packet Framework.
It also shows the build system and explains how to add applications.
The Programmers Guide should be expanded when new functionality is added to DPDK.
diff --git a/doc/guides/howto/flow_bifurcation.rst b/doc/guides/howto/flow_bifurcation.rst
index 838eb2a4cc89..554dd24c32c5 100644
--- a/doc/guides/howto/flow_bifurcation.rst
+++ b/doc/guides/howto/flow_bifurcation.rst
@@ -7,8 +7,7 @@ Flow Bifurcation How-to Guide
Flow Bifurcation is a mechanism which uses hardware capable Ethernet devices
to split traffic between Linux user space and kernel space. Since it is a
hardware assisted feature this approach can provide line rate processing
-capability. Other than :ref:`KNI <kni>`, the software is just required to
-enable device configuration, there is no need to take care of the packet
+capability. There is no need to take care of the packet
movement during the traffic split. This can yield better performance with
less CPU overhead.
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 31296822e5ec..7bfcac880f44 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -43,7 +43,6 @@ Network Interface Controller Drivers
ionic
ipn3ke
ixgbe
- kni
mana
memif
mlx4
diff --git a/doc/guides/nics/kni.rst b/doc/guides/nics/kni.rst
deleted file mode 100644
index bd3033bb585c..000000000000
--- a/doc/guides/nics/kni.rst
+++ /dev/null
@@ -1,170 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2017 Intel Corporation.
-
-KNI Poll Mode Driver
-======================
-
-KNI PMD is wrapper to the :ref:`librte_kni <kni>` library.
-
-This PMD enables using KNI without having a KNI specific application,
-any forwarding application can use PMD interface for KNI.
-
-Sending packets to any DPDK controlled interface or sending to the
-Linux networking stack will be transparent to the DPDK application.
-
-To create a KNI device ``net_kni#`` device name should be used, and this
-will create ``kni#`` Linux virtual network interface.
-
-There is no physical device backend for the virtual KNI device.
-
-Packets sent to the KNI Linux interface will be received by the DPDK
-application, and DPDK application may forward packets to a physical NIC
-or to a virtual device (like another KNI interface or PCAP interface).
-
-To forward any traffic from physical NIC to the Linux networking stack,
-an application should control a physical port and create one virtual KNI port,
-and forward between two.
-
-Using this PMD requires KNI kernel module be inserted.
-
-
-Usage
------
-
-EAL ``--vdev`` argument can be used to create KNI device instance, like::
-
- dpdk-testpmd --vdev=net_kni0 --vdev=net_kni1 -- -i
-
-Above command will create ``kni0`` and ``kni1`` Linux network interfaces,
-those interfaces can be controlled by standard Linux tools.
-
-When testpmd forwarding starts, any packets sent to ``kni0`` interface
-forwarded to the ``kni1`` interface and vice versa.
-
-There is no hard limit on number of interfaces that can be created.
-
-
-Default interface configuration
--------------------------------
-
-``librte_kni`` can create Linux network interfaces with different features,
-feature set controlled by a configuration struct, and KNI PMD uses a fixed
-configuration:
-
- .. code-block:: console
-
- Interface name: kni#
- force bind kernel thread to a core : NO
- mbuf size: (rte_pktmbuf_data_room_size(pktmbuf_pool) - RTE_PKTMBUF_HEADROOM)
- mtu: (conf.mbuf_size - RTE_ETHER_HDR_LEN)
-
-KNI control path is not supported with the PMD, since there is no physical
-backend device by default.
-
-
-Runtime Configuration
----------------------
-
-``no_request_thread``, by default PMD creates a pthread for each KNI interface
-to handle Linux network interface control commands, like ``ifconfig kni0 up``
-
-With ``no_request_thread`` option, pthread is not created and control commands
-not handled by PMD.
-
-By default request thread is enabled. And this argument should not be used
-most of the time, unless this PMD used with customized DPDK application to handle
-requests itself.
-
-Argument usage::
-
- dpdk-testpmd --vdev "net_kni0,no_request_thread=1" -- -i
-
-
-PMD log messages
-----------------
-
-If KNI kernel module (rte_kni.ko) not inserted, following error log printed::
-
- "KNI: KNI subsystem has not been initialized. Invoke rte_kni_init() first"
-
-
-PMD testing
------------
-
-It is possible to test PMD quickly using KNI kernel module loopback feature:
-
-* Insert KNI kernel module with loopback support:
-
- .. code-block:: console
-
- insmod <build_dir>/kernel/linux/kni/rte_kni.ko lo_mode=lo_mode_fifo_skb
-
-* Start testpmd with no physical device but two KNI virtual devices:
-
- .. code-block:: console
-
- ./dpdk-testpmd --vdev net_kni0 --vdev net_kni1 -- -i
-
- .. code-block:: console
-
- ...
- Configuring Port 0 (socket 0)
- KNI: pci: 00:00:00 c580:b8
- Port 0: 1A:4A:5B:7C:A2:8C
- Configuring Port 1 (socket 0)
- KNI: pci: 00:00:00 600:b9
- Port 1: AE:95:21:07:93:DD
- Checking link statuses...
- Port 0 Link Up - speed 10000 Mbps - full-duplex
- Port 1 Link Up - speed 10000 Mbps - full-duplex
- Done
- testpmd>
-
-* Observe Linux interfaces
-
- .. code-block:: console
-
- $ ifconfig kni0 && ifconfig kni1
- kni0: flags=4098<BROADCAST,MULTICAST> mtu 1500
- ether ae:8e:79:8e:9b:c8 txqueuelen 1000 (Ethernet)
- RX packets 0 bytes 0 (0.0 B)
- RX errors 0 dropped 0 overruns 0 frame 0
- TX packets 0 bytes 0 (0.0 B)
- TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
-
- kni1: flags=4098<BROADCAST,MULTICAST> mtu 1500
- ether 9e:76:43:53:3e:9b txqueuelen 1000 (Ethernet)
- RX packets 0 bytes 0 (0.0 B)
- RX errors 0 dropped 0 overruns 0 frame 0
- TX packets 0 bytes 0 (0.0 B)
- TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
-
-
-* Start forwarding with tx_first:
-
- .. code-block:: console
-
- testpmd> start tx_first
-
-* Quit and check forwarding stats:
-
- .. code-block:: console
-
- testpmd> quit
- Telling cores to stop...
- Waiting for lcores to finish...
-
- ---------------------- Forward statistics for port 0 ----------------------
- RX-packets: 35637905 RX-dropped: 0 RX-total: 35637905
- TX-packets: 35637947 TX-dropped: 0 TX-total: 35637947
- ----------------------------------------------------------------------------
-
- ---------------------- Forward statistics for port 1 ----------------------
- RX-packets: 35637915 RX-dropped: 0 RX-total: 35637915
- TX-packets: 35637937 TX-dropped: 0 TX-total: 35637937
- ----------------------------------------------------------------------------
-
- +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
- RX-packets: 71275820 RX-dropped: 0 RX-total: 71275820
- TX-packets: 71275884 TX-dropped: 0 TX-total: 71275884
- ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
diff --git a/doc/guides/nics/virtio.rst b/doc/guides/nics/virtio.rst
index f5e54a5e9cfd..ba6247170dbb 100644
--- a/doc/guides/nics/virtio.rst
+++ b/doc/guides/nics/virtio.rst
@@ -10,15 +10,12 @@ we provide a virtio Poll Mode Driver (PMD) as a software solution, comparing to
for fast guest VM to guest VM communication and guest VM to host communication.
Vhost is a kernel acceleration module for virtio qemu backend.
-The DPDK extends kni to support vhost raw socket interface,
-which enables vhost to directly read/ write packets from/to a physical port.
-With this enhancement, virtio could achieve quite promising performance.
For basic qemu-KVM installation and other Intel EM poll mode driver in guest VM,
please refer to Chapter "Driver for VM Emulated Devices".
In this chapter, we will demonstrate usage of virtio PMD with two backends,
-standard qemu vhost back end and vhost kni back end.
+standard qemu vhost back end.
Virtio Implementation in DPDK
-----------------------------
@@ -89,93 +86,6 @@ The following prerequisites apply:
* When using legacy interface, ``SYS_RAWIO`` capability is required
for ``iopl()`` call to enable access to PCI I/O ports.
-Virtio with kni vhost Back End
-------------------------------
-
-This section demonstrates kni vhost back end example setup for Phy-VM Communication.
-
-.. _figure_host_vm_comms:
-
-.. figure:: img/host_vm_comms.*
-
- Host2VM Communication Example Using kni vhost Back End
-
-
-Host2VM communication example
-
-#. Load the kni kernel module:
-
- .. code-block:: console
-
- insmod rte_kni.ko
-
- Other basic DPDK preparations like hugepage enabling,
- UIO port binding are not listed here.
- Please refer to the *DPDK Getting Started Guide* for detailed instructions.
-
-#. Launch the kni user application:
-
- .. code-block:: console
-
- <build_dir>/examples/dpdk-kni -l 0-3 -n 4 -- -p 0x1 -P --config="(0,1,3)"
-
- This command generates one network device vEth0 for physical port.
- If specify more physical ports, the generated network device will be vEth1, vEth2, and so on.
-
- For each physical port, kni creates two user threads.
- One thread loops to fetch packets from the physical NIC port into the kni receive queue.
- The other user thread loops to send packets in the kni transmit queue.
-
- For each physical port, kni also creates a kernel thread that retrieves packets from the kni receive queue,
- place them onto kni's raw socket's queue and wake up the vhost kernel thread to exchange packets with the virtio virt queue.
-
- For more details about kni, please refer to :ref:`kni`.
-
-#. Enable the kni raw socket functionality for the specified physical NIC port,
- get the generated file descriptor and set it in the qemu command line parameter.
- Always remember to set ioeventfd_on and vhost_on.
-
- Example:
-
- .. code-block:: console
-
- echo 1 > /sys/class/net/vEth0/sock_en
- fd=`cat /sys/class/net/vEth0/sock_fd`
- exec qemu-system-x86_64 -enable-kvm -cpu host \
- -m 2048 -smp 4 -name dpdk-test1-vm1 \
- -drive file=/data/DPDKVMS/dpdk-vm.img \
- -netdev tap, fd=$fd,id=mynet_kni, script=no,vhost=on \
- -device virtio-net-pci,netdev=mynet_kni,bus=pci.0,addr=0x3,ioeventfd=on \
- -vnc:1 -daemonize
-
- In the above example, virtio port 0 in the guest VM will be associated with vEth0, which in turns corresponds to a physical port,
- which means received packets come from vEth0, and transmitted packets is sent to vEth0.
-
-#. In the guest, bind the virtio device to the uio_pci_generic kernel module and start the forwarding application.
- When the virtio port in guest bursts Rx, it is getting packets from the
- raw socket's receive queue.
- When the virtio port bursts Tx, it is sending packet to the tx_q.
-
- .. code-block:: console
-
- modprobe uio
- dpdk-hugepages.py --setup 1G
- modprobe uio_pci_generic
- ./usertools/dpdk-devbind.py -b uio_pci_generic 00:03.0
-
- We use testpmd as the forwarding application in this example.
-
- .. figure:: img/console.*
-
- Running testpmd
-
-#. Use IXIA packet generator to inject a packet stream into the KNI physical port.
-
- The packet reception and transmission flow path is:
-
- IXIA packet generator->82599 PF->KNI Rx queue->KNI raw socket queue->Guest
- VM virtio port 0 Rx burst->Guest VM virtio port 0 Tx burst-> KNI Tx queue
- ->82599 PF-> IXIA packet generator
Virtio with qemu virtio Back End
--------------------------------
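
Note for readers migrating off the removed kni vhost setup: the in-kernel
replacement recommended by the deprecation notice is virtio-user backed by
vhost-net. A minimal sketch, assuming the documented ``virtio_user`` vdev
syntax; the core list, channel count and queue size below are illustrative
assumptions, not values taken from this patch:

```shell
# Load the in-kernel vhost-net backend (replaces rte_kni.ko).
modprobe vhost-net

# Attach a virtio-user port to vhost-net as the exception path.
# Adjust cores (-l), memory channels (-n) and queue_size to your setup.
dpdk-testpmd -l 0-3 -n 4 \
    --vdev=virtio_user0,path=/dev/vhost-net,queue_size=1024 \
    -- -i
```

Unlike KNI, this path needs no out-of-tree kernel module, so it survives
kernel upgrades without rebuilding a driver.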
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 93c8a031be56..5d382fdd9032 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -610,8 +610,6 @@ devices would fail anyway.
``RTE_PCI_DRV_NEED_IOVA_AS_VA`` flag is used to dictate that this PCI
driver can only work in RTE_IOVA_VA mode.
- When the KNI kernel module is detected, RTE_IOVA_PA mode is preferred as a
- performance penalty is expected in RTE_IOVA_VA mode.
IOVA Mode Configuration
~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/prog_guide/glossary.rst b/doc/guides/prog_guide/glossary.rst
index fb0910ba5b3f..8d6349701e43 100644
--- a/doc/guides/prog_guide/glossary.rst
+++ b/doc/guides/prog_guide/glossary.rst
@@ -103,9 +103,6 @@ lcore
A logical execution unit of the processor, sometimes called a *hardware
thread*.
-KNI
- Kernel Network Interface
-
L1
Layer 1
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index c04847bfa148..2c47d9d010f4 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -53,7 +53,6 @@ Programmer's Guide
pcapng_lib
pdump_lib
multi_proc_support
- kernel_nic_interface
thread_safety_dpdk_functions
eventdev
event_ethernet_rx_adapter
diff --git a/doc/guides/prog_guide/kernel_nic_interface.rst b/doc/guides/prog_guide/kernel_nic_interface.rst
deleted file mode 100644
index 392e5df75fcf..000000000000
--- a/doc/guides/prog_guide/kernel_nic_interface.rst
+++ /dev/null
@@ -1,423 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2010-2015 Intel Corporation.
-
-.. _kni:
-
-Kernel NIC Interface
-====================
-
-.. note::
-
- KNI is deprecated and will be removed in future.
- See :doc:`../rel_notes/deprecation`.
-
- :ref:`virtio_user_as_exception_path` alternative is the preferred way
- for interfacing with the Linux network stack
- as it is an in-kernel solution and has similar performance expectations.
-
-.. note::
-
- KNI is disabled by default in the DPDK build.
- To re-enable the library, remove 'kni' from the "disable_libs" meson option when configuring a build.
-
-The DPDK Kernel NIC Interface (KNI) allows userspace applications access to the Linux* control plane.
-
-KNI provides an interface with the kernel network stack
-and allows management of DPDK ports using standard Linux net tools
-such as ``ethtool``, ``iproute2`` and ``tcpdump``.
-
-The main use case of KNI is to get/receive exception packets from/to Linux network stack
-while main datapath IO is done bypassing the networking stack.
-
-There are other alternatives to KNI, all are available in the upstream Linux:
-
-#. :ref:`virtio_user_as_exception_path`
-
-#. :doc:`../nics/tap` as wrapper to `Linux tun/tap
- <https://www.kernel.org/doc/Documentation/networking/tuntap.txt>`_
-
-The benefits of using the KNI against alternatives are:
-
-* Faster than existing Linux TUN/TAP interfaces
- (by eliminating system calls and copy_to_user()/copy_from_user() operations.
-
-The disadvantages of the KNI are:
-
-* It is out-of-tree Linux kernel module
- which makes updating and distributing the driver more difficult.
- Most users end up building the KNI driver from source
- which requires the packages and tools to build kernel modules.
-
-* As it shares memory between userspace and kernelspace,
- and kernel part directly uses input provided by userspace, it is not safe.
- This makes hard to upstream the module.
-
-* Requires dedicated kernel cores.
-
-* Only a subset of net devices control commands are supported by KNI.
-
-The components of an application using the DPDK Kernel NIC Interface are shown in :numref:`figure_kernel_nic_intf`.
-
-.. _figure_kernel_nic_intf:
-
-.. figure:: img/kernel_nic_intf.*
-
- Components of a DPDK KNI Application
-
-
-The DPDK KNI Kernel Module
---------------------------
-
-The KNI kernel loadable module ``rte_kni`` provides the kernel interface
-for DPDK applications.
-
-When the ``rte_kni`` module is loaded, it will create a device ``/dev/kni``
-that is used by the DPDK KNI API functions to control and communicate with
-the kernel module.
-
-The ``rte_kni`` kernel module contains several optional parameters which
-can be specified when the module is loaded to control its behavior:
-
-.. code-block:: console
-
- # modinfo rte_kni.ko
- <snip>
- parm: lo_mode: KNI loopback mode (default=lo_mode_none):
- lo_mode_none Kernel loopback disabled
- lo_mode_fifo Enable kernel loopback with fifo
- lo_mode_fifo_skb Enable kernel loopback with fifo and skb buffer
- (charp)
- parm: kthread_mode: Kernel thread mode (default=single):
- single Single kernel thread mode enabled.
- multiple Multiple kernel thread mode enabled.
- (charp)
- parm: carrier: Default carrier state for KNI interface (default=off):
- off Interfaces will be created with carrier state set to off.
- on Interfaces will be created with carrier state set to on.
- (charp)
- parm: enable_bifurcated: Enable request processing support for
- bifurcated drivers, which means releasing rtnl_lock before calling
- userspace callback and supporting async requests (default=off):
- on Enable request processing support for bifurcated drivers.
- (charp)
- parm: min_scheduling_interval: KNI thread min scheduling interval (default=100 microseconds)
- (long)
- parm: max_scheduling_interval: KNI thread max scheduling interval (default=200 microseconds)
- (long)
-
-
-Loading the ``rte_kni`` kernel module without any optional parameters is
-the typical way a DPDK application gets packets into and out of the kernel
-network stack. Without any parameters, only one kernel thread is created
-for all KNI devices for packet receiving in kernel side, loopback mode is
-disabled, and the default carrier state of KNI interfaces is set to *off*.
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko
-
-.. _kni_loopback_mode:
-
-Loopback Mode
-~~~~~~~~~~~~~
-
-For testing, the ``rte_kni`` kernel module can be loaded in loopback mode
-by specifying the ``lo_mode`` parameter:
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko lo_mode=lo_mode_fifo
-
-The ``lo_mode_fifo`` loopback option will loop back ring enqueue/dequeue
-operations in kernel space.
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko lo_mode=lo_mode_fifo_skb
-
-The ``lo_mode_fifo_skb`` loopback option will loop back ring enqueue/dequeue
-operations and sk buffer copies in kernel space.
-
-If the ``lo_mode`` parameter is not specified, loopback mode is disabled.
-
-.. _kni_kernel_thread_mode:
-
-Kernel Thread Mode
-~~~~~~~~~~~~~~~~~~
-
-To provide flexibility of performance, the ``rte_kni`` KNI kernel module
-can be loaded with the ``kthread_mode`` parameter. The ``rte_kni`` kernel
-module supports two options: "single kernel thread" mode and "multiple
-kernel thread" mode.
-
-Single kernel thread mode is enabled as follows:
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko kthread_mode=single
-
-This mode will create only one kernel thread for all KNI interfaces to
-receive data on the kernel side. By default, this kernel thread is not
-bound to any particular core, but the user can set the core affinity for
-this kernel thread by setting the ``core_id`` and ``force_bind`` parameters
-in ``struct rte_kni_conf`` when the first KNI interface is created:
-
-For optimum performance, the kernel thread should be bound to a core in
-on the same socket as the DPDK lcores used in the application.
-
-The KNI kernel module can also be configured to start a separate kernel
-thread for each KNI interface created by the DPDK application. Multiple
-kernel thread mode is enabled as follows:
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko kthread_mode=multiple
-
-This mode will create a separate kernel thread for each KNI interface to
-receive data on the kernel side. The core affinity of each ``kni_thread``
-kernel thread can be specified by setting the ``core_id`` and ``force_bind``
-parameters in ``struct rte_kni_conf`` when each KNI interface is created.
-
-Multiple kernel thread mode can provide scalable higher performance if
-sufficient unused cores are available on the host system.
-
-If the ``kthread_mode`` parameter is not specified, the "single kernel
-thread" mode is used.
-
-.. _kni_default_carrier_state:
-
-Default Carrier State
-~~~~~~~~~~~~~~~~~~~~~
-
-The default carrier state of KNI interfaces created by the ``rte_kni``
-kernel module is controlled via the ``carrier`` option when the module
-is loaded.
-
-If ``carrier=off`` is specified, the kernel module will leave the carrier
-state of the interface *down* when the interface is management enabled.
-The DPDK application can set the carrier state of the KNI interface using the
-``rte_kni_update_link()`` function. This is useful for DPDK applications
-which require that the carrier state of the KNI interface reflect the
-actual link state of the corresponding physical NIC port.
-
-If ``carrier=on`` is specified, the kernel module will automatically set
-the carrier state of the interface to *up* when the interface is management
-enabled. This is useful for DPDK applications which use the KNI interface as
-a purely virtual interface that does not correspond to any physical hardware
-and do not wish to explicitly set the carrier state of the interface with
-``rte_kni_update_link()``. It is also useful for testing in loopback mode
-where the NIC port may not be physically connected to anything.
-
-To set the default carrier state to *on*:
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko carrier=on
-
-To set the default carrier state to *off*:
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko carrier=off
-
-If the ``carrier`` parameter is not specified, the default carrier state
-of KNI interfaces will be set to *off*.
-
-.. _kni_bifurcated_device_support:
-
-Bifurcated Device Support
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-User callbacks are executed while kernel module holds the ``rtnl`` lock, this
-causes a deadlock when callbacks run control commands on another Linux kernel
-network interface.
-
-Bifurcated devices has kernel network driver part and to prevent deadlock for
-them ``enable_bifurcated`` is used.
-
-To enable bifurcated device support:
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko enable_bifurcated=on
-
-Enabling bifurcated device support releases ``rtnl`` lock before calling
-callback and locks it back after callback. Also enables asynchronous request to
-support callbacks that requires rtnl lock to work (interface down).
-
-KNI Kthread Scheduling
-~~~~~~~~~~~~~~~~~~~~~~
-
-The ``min_scheduling_interval`` and ``max_scheduling_interval`` parameters
-control the rescheduling interval of the KNI kthreads.
-
-This might be useful if we have use cases in which we require improved
-latency or performance for control plane traffic.
-
-The implementation is backed by Linux High Precision Timers, and uses ``usleep_range``.
-Hence, it will have the same granularity constraints as this Linux subsystem.
-
-For Linux High Precision Timers, you can check the following resource: `Kernel Timers <http://www.kernel.org/doc/Documentation/timers/timers-howto.txt>`_
-
-To set the ``min_scheduling_interval`` to a value of 100 microseconds:
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko min_scheduling_interval=100
-
-To set the ``max_scheduling_interval`` to a value of 200 microseconds:
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko max_scheduling_interval=200
-
-If the ``min_scheduling_interval`` and ``max_scheduling_interval`` parameters are
-not specified, the default interval limits will be set to *100* and *200* respectively.
-
-KNI Creation and Deletion
--------------------------
-
-Before any KNI interfaces can be created, the ``rte_kni`` kernel module must
-be loaded into the kernel and configured with the ``rte_kni_init()`` function.
-
-The KNI interfaces are created by a DPDK application dynamically via the
-``rte_kni_alloc()`` function.
-
-The ``struct rte_kni_conf`` structure contains fields which allow the
-user to specify the interface name, set the MTU size, set an explicit or
-random MAC address and control the affinity of the kernel Rx thread(s)
-(both single and multi-threaded modes).
-By default the KNI sample example gets the MTU from the matching device,
-and in case of KNI PMD it is derived from mbuf buffer length.
-
-The ``struct rte_kni_ops`` structure contains pointers to functions to
-handle requests from the ``rte_kni`` kernel module. These functions
-allow DPDK applications to perform actions when the KNI interfaces are
-manipulated by control commands or functions external to the application.
-
-For example, the DPDK application may wish to enabled/disable a physical
-NIC port when a user enabled/disables a KNI interface with ``ip link set
-[up|down] dev <ifaceX>``. The DPDK application can register a callback for
-``config_network_if`` which will be called when the interface management
-state changes.
-
-There are currently four callbacks for which the user can register
-application functions:
-
-``config_network_if``:
-
- Called when the management state of the KNI interface changes.
- For example, when the user runs ``ip link set [up|down] dev <ifaceX>``.
-
-``change_mtu``:
-
- Called when the user changes the MTU size of the KNI
- interface. For example, when the user runs ``ip link set mtu <size>
- dev <ifaceX>``.
-
-``config_mac_address``:
-
- Called when the user changes the MAC address of the KNI interface.
- For example, when the user runs ``ip link set address <MAC>
- dev <ifaceX>``. If the user sets this callback function to NULL,
- but sets the ``port_id`` field to a value other than -1, a default
- callback handler in the rte_kni library ``kni_config_mac_address()``
- will be called which calls ``rte_eth_dev_default_mac_addr_set()``
- on the specified ``port_id``.
-
-``config_promiscusity``:
-
- Called when the user changes the promiscuity state of the KNI
- interface. For example, when the user runs ``ip link set promisc
- [on|off] dev <ifaceX>``. If the user sets this callback function to
- NULL, but sets the ``port_id`` field to a value other than -1, a default
- callback handler in the rte_kni library ``kni_config_promiscusity()``
- will be called which calls ``rte_eth_promiscuous_<enable|disable>()``
- on the specified ``port_id``.
-
-``config_allmulticast``:
-
- Called when the user changes the allmulticast state of the KNI interface.
- For example, when the user runs ``ifconfig <ifaceX> [-]allmulti``. If the
- user sets this callback function to NULL, but sets the ``port_id`` field to
- a value other than -1, a default callback handler in the rte_kni library
- ``kni_config_allmulticast()`` will be called which calls
- ``rte_eth_allmulticast_<enable|disable>()`` on the specified ``port_id``.
-
-In order to run these callbacks, the application must periodically call
-the ``rte_kni_handle_request()`` function. Any user callback function
-registered will be called directly from ``rte_kni_handle_request()`` so
-care must be taken to prevent deadlock and to not block any DPDK fastpath
-tasks. Typically DPDK applications which use these callbacks will need
-to create a separate thread or secondary process to periodically call
-``rte_kni_handle_request()``.
-
-The KNI interfaces can be deleted by a DPDK application with
-``rte_kni_release()``. All KNI interfaces not explicitly deleted will be
-deleted when the ``/dev/kni`` device is closed, either explicitly with
-``rte_kni_close()`` or when the DPDK application is closed.
-
-DPDK mbuf Flow
---------------
-
-To minimize the amount of DPDK code running in kernel space, the mbuf mempool is managed in userspace only.
-The kernel module will be aware of mbufs,
-but all mbuf allocation and free operations will be handled by the DPDK application only.
-
-:numref:`figure_pkt_flow_kni` shows a typical scenario with packets sent in both directions.
-
-.. _figure_pkt_flow_kni:
-
-.. figure:: img/pkt_flow_kni.*
-
- Packet Flow via mbufs in the DPDK KNI
-
-
-Use Case: Ingress
------------------
-
-On the DPDK RX side, the mbuf is allocated by the PMD in the RX thread context.
-This thread will enqueue the mbuf in the rx_q FIFO,
-and the next pointers in mbuf-chain will convert to physical address.
-The KNI thread will poll all KNI active devices for the rx_q.
-If an mbuf is dequeued, it will be converted to a sk_buff and sent to the net stack via netif_rx().
-The dequeued mbuf must be freed, so the same pointer is sent back in the free_q FIFO,
-and next pointers must convert back to virtual address if exists before put in the free_q FIFO.
-
-The RX thread, in the same main loop, polls this FIFO and frees the mbuf after dequeuing it.
-The address conversion of the next pointer is to prevent the chained mbuf
-in different hugepage segments from causing kernel crash.
-
-Use Case: Egress
-----------------
-
-For packet egress the DPDK application must first enqueue several mbufs to create an mbuf cache on the kernel side.
-
-The packet is received from the Linux net stack, by calling the kni_net_tx() callback.
-The mbuf is dequeued (without waiting due the cache) and filled with data from sk_buff.
-The sk_buff is then freed and the mbuf sent in the tx_q FIFO.
-
-The DPDK TX thread dequeues the mbuf and sends it to the PMD via ``rte_eth_tx_burst()``.
-It then puts the mbuf back in the cache.
-
-IOVA = VA: Support
-------------------
-
-KNI operates in IOVA_VA scheme when
-
-- LINUX_VERSION_CODE >= KERNEL_VERSION(4, 10, 0) and
-- EAL option `iova-mode=va` is passed or bus IOVA scheme in the DPDK is selected
- as RTE_IOVA_VA.
-
-Due to IOVA to KVA address translations, based on the KNI use case there
-can be a performance impact. For mitigation, forcing IOVA to PA via EAL
-"--iova-mode=pa" option can be used, IOVA_DC bus iommu scheme can also
-result in IOVA as PA.
-
-Ethtool
--------
-
-Ethtool is a Linux-specific tool with corresponding support in the kernel.
-The current version of kni provides minimal ethtool functionality
-including querying version and link state. It does not support link
-control, statistics, or dumping device registers.
diff --git a/doc/guides/prog_guide/packet_framework.rst b/doc/guides/prog_guide/packet_framework.rst
index 3d4e3b66cc5c..ebc69d8c3e75 100644
--- a/doc/guides/prog_guide/packet_framework.rst
+++ b/doc/guides/prog_guide/packet_framework.rst
@@ -87,18 +87,15 @@ Port Types
| | | management and hierarchical scheduling according to pre-defined SLAs. |
| | | |
+---+------------------+---------------------------------------------------------------------------------------+
- | 6 | KNI | Send/receive packets to/from Linux kernel space. |
- | | | |
- +---+------------------+---------------------------------------------------------------------------------------+
- | 7 | Source | Input port used as packet generator. Similar to Linux kernel /dev/zero character |
+ | 6 | Source | Input port used as packet generator. Similar to Linux kernel /dev/zero character |
| | | device. |
| | | |
+---+------------------+---------------------------------------------------------------------------------------+
- | 8 | Sink | Output port used to drop all input packets. Similar to Linux kernel /dev/null |
+ | 7 | Sink | Output port used to drop all input packets. Similar to Linux kernel /dev/null |
| | | character device. |
| | | |
+---+------------------+---------------------------------------------------------------------------------------+
- | 9 | Sym_crypto | Output port used to extract DPDK Cryptodev operations from a fixed offset of the |
+ | 8 | Sym_crypto | Output port used to extract DPDK Cryptodev operations from a fixed offset of the |
| | | packet and then enqueue to the Cryptodev PMD. Input port used to dequeue the |
| | | Cryptodev operations from the Cryptodev PMD and then retrieve the packets from them. |
+---+------------------+---------------------------------------------------------------------------------------+
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index ce5a8f0361cb..bb5d23c87669 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -35,7 +35,7 @@ Deprecation Notices
which also added support for standard atomics
(Ref: https://releases.llvm.org/3.6.0/tools/clang/docs/ReleaseNotes.html)
-* build: Enabling deprecated libraries (``kni``)
+* build: Enabling deprecated libraries
won't be possible anymore through the use of the ``disable_libs`` build option.
A new build option for deprecated libraries will be introduced instead.
@@ -78,13 +78,6 @@ Deprecation Notices
``__atomic_thread_fence`` must be used for patches that need to be merged in
20.08 onwards. This change will not introduce any performance degradation.
-* kni: The KNI kernel module and library are not recommended for use by new
- applications - other technologies such as virtio-user are recommended instead.
- Following the DPDK technical board
- `decision <https://mails.dpdk.org/archives/dev/2021-January/197077.html>`_
- and `refinement <https://mails.dpdk.org/archives/dev/2022-June/243596.html>`_,
- the KNI kernel module, library and PMD will be removed from the DPDK 23.11 release.
-
* lib: will fix extending some enum/define breaking the ABI. There are multiple
samples in DPDK that enum/define terminated with a ``.*MAX.*`` value which is
used by iterators, and arrays holding these values are sized with this
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 9d96dbdcd302..0d5c4a60d020 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -70,6 +70,8 @@ Removed Items
* flow_classify: Removed flow classification library and examples.
+* kni: Removed the Kernel Network Interface (KNI) library and driver.
+
API Changes
-----------
diff --git a/doc/guides/sample_app_ug/ip_pipeline.rst b/doc/guides/sample_app_ug/ip_pipeline.rst
index b521d3b8be20..f30ac5e19db7 100644
--- a/doc/guides/sample_app_ug/ip_pipeline.rst
+++ b/doc/guides/sample_app_ug/ip_pipeline.rst
@@ -164,15 +164,6 @@ Examples
| | | | 8. Pipeline table rule add default |
| | | | 9. Pipeline table rule add |
+-----------------------+----------------------+----------------+------------------------------------+
- | KNI | Stub | Forward | 1. Mempool create |
- | | | | 2. Link create |
- | | | | 3. Pipeline create |
- | | | | 4. Pipeline port in/out |
- | | | | 5. Pipeline table |
- | | | | 6. Pipeline port in table |
- | | | | 7. Pipeline enable |
- | | | | 8. Pipeline table rule add |
- +-----------------------+----------------------+----------------+------------------------------------+
| Firewall | ACL | Allow/Drop | 1. Mempool create |
| | | | 2. Link create |
| | * Key = n-tuple | | 3. Pipeline create |
@@ -297,17 +288,6 @@ Tap
tap <name>
-Kni
-~~~
-
- Create kni port ::
-
- kni <kni_name>
- link <link_name>
- mempool <mempool_name>
- [thread <thread_id>]
-
-
Cryptodev
~~~~~~~~~
@@ -366,7 +346,6 @@ Create pipeline input port ::
| swq <swq_name>
| tmgr <tmgr_name>
| tap <tap_name> mempool <mempool_name> mtu <mtu>
- | kni <kni_name>
| source mempool <mempool_name> file <file_name> bpp <n_bytes_per_pkt>
[action <port_in_action_profile_name>]
[disabled]
@@ -379,7 +358,6 @@ Create pipeline output port ::
| swq <swq_name>
| tmgr <tmgr_name>
| tap <tap_name>
- | kni <kni_name>
| sink [file <file_name> pkts <max_n_pkts>]
Create pipeline table ::
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 4b98faa72980..01b707b6c4ac 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1130,7 +1130,7 @@ nix_set_nop_rxtx_function(struct rte_eth_dev *eth_dev)
{
/* These dummy functions are required for supporting
* some applications which reconfigure queues without
- * stopping tx burst and rx burst threads(eg kni app)
+ * stopping tx burst and rx burst threads.
* When the queues context is saved, txq/rxqs are released
* which caused app crash since rx/tx burst is still
* on different lcores
diff --git a/drivers/net/kni/meson.build b/drivers/net/kni/meson.build
deleted file mode 100644
index 2acc98969426..000000000000
--- a/drivers/net/kni/meson.build
+++ /dev/null
@@ -1,11 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Intel Corporation
-
-if is_windows
- build = false
- reason = 'not supported on Windows'
- subdir_done()
-endif
-
-deps += 'kni'
-sources = files('rte_eth_kni.c')
diff --git a/drivers/net/kni/rte_eth_kni.c b/drivers/net/kni/rte_eth_kni.c
deleted file mode 100644
index c0e1f8db409e..000000000000
--- a/drivers/net/kni/rte_eth_kni.c
+++ /dev/null
@@ -1,524 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Intel Corporation
- */
-
-#include <fcntl.h>
-#include <pthread.h>
-#include <unistd.h>
-
-#include <rte_string_fns.h>
-#include <ethdev_driver.h>
-#include <ethdev_vdev.h>
-#include <rte_kni.h>
-#include <rte_kvargs.h>
-#include <rte_malloc.h>
-#include <bus_vdev_driver.h>
-
-/* Only single queue supported */
-#define KNI_MAX_QUEUE_PER_PORT 1
-
-#define MAX_KNI_PORTS 8
-
-#define KNI_ETHER_MTU(mbuf_size) \
- ((mbuf_size) - RTE_ETHER_HDR_LEN) /**< Ethernet MTU. */
-
-#define ETH_KNI_NO_REQUEST_THREAD_ARG "no_request_thread"
-static const char * const valid_arguments[] = {
- ETH_KNI_NO_REQUEST_THREAD_ARG,
- NULL
-};
-
-struct eth_kni_args {
- int no_request_thread;
-};
-
-struct pmd_queue_stats {
- uint64_t pkts;
- uint64_t bytes;
-};
-
-struct pmd_queue {
- struct pmd_internals *internals;
- struct rte_mempool *mb_pool;
-
- struct pmd_queue_stats rx;
- struct pmd_queue_stats tx;
-};
-
-struct pmd_internals {
- struct rte_kni *kni;
- uint16_t port_id;
- int is_kni_started;
-
- pthread_t thread;
- int stop_thread;
- int no_request_thread;
-
- struct rte_ether_addr eth_addr;
-
- struct pmd_queue rx_queues[KNI_MAX_QUEUE_PER_PORT];
- struct pmd_queue tx_queues[KNI_MAX_QUEUE_PER_PORT];
-};
-
-static const struct rte_eth_link pmd_link = {
- .link_speed = RTE_ETH_SPEED_NUM_10G,
- .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
- .link_status = RTE_ETH_LINK_DOWN,
- .link_autoneg = RTE_ETH_LINK_FIXED,
-};
-static int is_kni_initialized;
-
-RTE_LOG_REGISTER_DEFAULT(eth_kni_logtype, NOTICE);
-
-#define PMD_LOG(level, fmt, args...) \
- rte_log(RTE_LOG_ ## level, eth_kni_logtype, \
- "%s(): " fmt "\n", __func__, ##args)
-static uint16_t
-eth_kni_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
-{
- struct pmd_queue *kni_q = q;
- struct rte_kni *kni = kni_q->internals->kni;
- uint16_t nb_pkts;
- int i;
-
- nb_pkts = rte_kni_rx_burst(kni, bufs, nb_bufs);
- for (i = 0; i < nb_pkts; i++)
- bufs[i]->port = kni_q->internals->port_id;
-
- kni_q->rx.pkts += nb_pkts;
-
- return nb_pkts;
-}
-
-static uint16_t
-eth_kni_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
-{
- struct pmd_queue *kni_q = q;
- struct rte_kni *kni = kni_q->internals->kni;
- uint16_t nb_pkts;
-
- nb_pkts = rte_kni_tx_burst(kni, bufs, nb_bufs);
-
- kni_q->tx.pkts += nb_pkts;
-
- return nb_pkts;
-}
-
-static void *
-kni_handle_request(void *param)
-{
- struct pmd_internals *internals = param;
-#define MS 1000
-
- while (!internals->stop_thread) {
- rte_kni_handle_request(internals->kni);
- usleep(500 * MS);
- }
-
- return param;
-}
-
-static int
-eth_kni_start(struct rte_eth_dev *dev)
-{
- struct pmd_internals *internals = dev->data->dev_private;
- uint16_t port_id = dev->data->port_id;
- struct rte_mempool *mb_pool;
- struct rte_kni_conf conf = {{0}};
- const char *name = dev->device->name + 4; /* remove net_ */
-
- mb_pool = internals->rx_queues[0].mb_pool;
- strlcpy(conf.name, name, RTE_KNI_NAMESIZE);
- conf.force_bind = 0;
- conf.group_id = port_id;
- conf.mbuf_size =
- rte_pktmbuf_data_room_size(mb_pool) - RTE_PKTMBUF_HEADROOM;
- conf.mtu = KNI_ETHER_MTU(conf.mbuf_size);
-
- internals->kni = rte_kni_alloc(mb_pool, &conf, NULL);
- if (internals->kni == NULL) {
- PMD_LOG(ERR,
- "Fail to create kni interface for port: %d",
- port_id);
- return -1;
- }
-
- return 0;
-}
-
-static int
-eth_kni_dev_start(struct rte_eth_dev *dev)
-{
- struct pmd_internals *internals = dev->data->dev_private;
- int ret;
-
- if (internals->is_kni_started == 0) {
- ret = eth_kni_start(dev);
- if (ret)
- return -1;
- internals->is_kni_started = 1;
- }
-
- if (internals->no_request_thread == 0) {
- internals->stop_thread = 0;
-
- ret = rte_ctrl_thread_create(&internals->thread,
- "kni_handle_req", NULL,
- kni_handle_request, internals);
- if (ret) {
- PMD_LOG(ERR,
- "Fail to create kni request thread");
- return -1;
- }
- }
-
- dev->data->dev_link.link_status = 1;
-
- return 0;
-}
-
-static int
-eth_kni_dev_stop(struct rte_eth_dev *dev)
-{
- struct pmd_internals *internals = dev->data->dev_private;
- int ret;
-
- if (internals->no_request_thread == 0 && internals->stop_thread == 0) {
- internals->stop_thread = 1;
-
- ret = pthread_cancel(internals->thread);
- if (ret)
- PMD_LOG(ERR, "Can't cancel the thread");
-
- ret = pthread_join(internals->thread, NULL);
- if (ret)
- PMD_LOG(ERR, "Can't join the thread");
- }
-
- dev->data->dev_link.link_status = 0;
- dev->data->dev_started = 0;
-
- return 0;
-}
-
-static int
-eth_kni_close(struct rte_eth_dev *eth_dev)
-{
- struct pmd_internals *internals;
- int ret;
-
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- ret = eth_kni_dev_stop(eth_dev);
- if (ret)
- PMD_LOG(WARNING, "Not able to stop kni for %s",
- eth_dev->data->name);
-
- /* mac_addrs must not be freed alone because part of dev_private */
- eth_dev->data->mac_addrs = NULL;
-
- internals = eth_dev->data->dev_private;
- ret = rte_kni_release(internals->kni);
- if (ret)
- PMD_LOG(WARNING, "Not able to release kni for %s",
- eth_dev->data->name);
-
- return ret;
-}
-
-static int
-eth_kni_dev_configure(struct rte_eth_dev *dev __rte_unused)
-{
- return 0;
-}
-
-static int
-eth_kni_dev_info(struct rte_eth_dev *dev __rte_unused,
- struct rte_eth_dev_info *dev_info)
-{
- dev_info->max_mac_addrs = 1;
- dev_info->max_rx_pktlen = UINT32_MAX;
- dev_info->max_rx_queues = KNI_MAX_QUEUE_PER_PORT;
- dev_info->max_tx_queues = KNI_MAX_QUEUE_PER_PORT;
- dev_info->min_rx_bufsize = 0;
-
- return 0;
-}
-
-static int
-eth_kni_rx_queue_setup(struct rte_eth_dev *dev,
- uint16_t rx_queue_id,
- uint16_t nb_rx_desc __rte_unused,
- unsigned int socket_id __rte_unused,
- const struct rte_eth_rxconf *rx_conf __rte_unused,
- struct rte_mempool *mb_pool)
-{
- struct pmd_internals *internals = dev->data->dev_private;
- struct pmd_queue *q;
-
- q = &internals->rx_queues[rx_queue_id];
- q->internals = internals;
- q->mb_pool = mb_pool;
-
- dev->data->rx_queues[rx_queue_id] = q;
-
- return 0;
-}
-
-static int
-eth_kni_tx_queue_setup(struct rte_eth_dev *dev,
- uint16_t tx_queue_id,
- uint16_t nb_tx_desc __rte_unused,
- unsigned int socket_id __rte_unused,
- const struct rte_eth_txconf *tx_conf __rte_unused)
-{
- struct pmd_internals *internals = dev->data->dev_private;
- struct pmd_queue *q;
-
- q = &internals->tx_queues[tx_queue_id];
- q->internals = internals;
-
- dev->data->tx_queues[tx_queue_id] = q;
-
- return 0;
-}
-
-static int
-eth_kni_link_update(struct rte_eth_dev *dev __rte_unused,
- int wait_to_complete __rte_unused)
-{
- return 0;
-}
-
-static int
-eth_kni_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
-{
- unsigned long rx_packets_total = 0, rx_bytes_total = 0;
- unsigned long tx_packets_total = 0, tx_bytes_total = 0;
- struct rte_eth_dev_data *data = dev->data;
- unsigned int i, num_stats;
- struct pmd_queue *q;
-
- num_stats = RTE_MIN((unsigned int)RTE_ETHDEV_QUEUE_STAT_CNTRS,
- data->nb_rx_queues);
- for (i = 0; i < num_stats; i++) {
- q = data->rx_queues[i];
- stats->q_ipackets[i] = q->rx.pkts;
- stats->q_ibytes[i] = q->rx.bytes;
- rx_packets_total += stats->q_ipackets[i];
- rx_bytes_total += stats->q_ibytes[i];
- }
-
- num_stats = RTE_MIN((unsigned int)RTE_ETHDEV_QUEUE_STAT_CNTRS,
- data->nb_tx_queues);
- for (i = 0; i < num_stats; i++) {
- q = data->tx_queues[i];
- stats->q_opackets[i] = q->tx.pkts;
- stats->q_obytes[i] = q->tx.bytes;
- tx_packets_total += stats->q_opackets[i];
- tx_bytes_total += stats->q_obytes[i];
- }
-
- stats->ipackets = rx_packets_total;
- stats->ibytes = rx_bytes_total;
- stats->opackets = tx_packets_total;
- stats->obytes = tx_bytes_total;
-
- return 0;
-}
-
-static int
-eth_kni_stats_reset(struct rte_eth_dev *dev)
-{
- struct rte_eth_dev_data *data = dev->data;
- struct pmd_queue *q;
- unsigned int i;
-
- for (i = 0; i < data->nb_rx_queues; i++) {
- q = data->rx_queues[i];
- q->rx.pkts = 0;
- q->rx.bytes = 0;
- }
- for (i = 0; i < data->nb_tx_queues; i++) {
- q = data->tx_queues[i];
- q->tx.pkts = 0;
- q->tx.bytes = 0;
- }
-
- return 0;
-}
-
-static const struct eth_dev_ops eth_kni_ops = {
- .dev_start = eth_kni_dev_start,
- .dev_stop = eth_kni_dev_stop,
- .dev_close = eth_kni_close,
- .dev_configure = eth_kni_dev_configure,
- .dev_infos_get = eth_kni_dev_info,
- .rx_queue_setup = eth_kni_rx_queue_setup,
- .tx_queue_setup = eth_kni_tx_queue_setup,
- .link_update = eth_kni_link_update,
- .stats_get = eth_kni_stats_get,
- .stats_reset = eth_kni_stats_reset,
-};
-
-static struct rte_eth_dev *
-eth_kni_create(struct rte_vdev_device *vdev,
- struct eth_kni_args *args,
- unsigned int numa_node)
-{
- struct pmd_internals *internals;
- struct rte_eth_dev_data *data;
- struct rte_eth_dev *eth_dev;
-
- PMD_LOG(INFO, "Creating kni ethdev on numa socket %u",
- numa_node);
-
- /* reserve an ethdev entry */
- eth_dev = rte_eth_vdev_allocate(vdev, sizeof(*internals));
- if (!eth_dev)
- return NULL;
-
- internals = eth_dev->data->dev_private;
- internals->port_id = eth_dev->data->port_id;
- data = eth_dev->data;
- data->nb_rx_queues = 1;
- data->nb_tx_queues = 1;
- data->dev_link = pmd_link;
- data->mac_addrs = &internals->eth_addr;
- data->promiscuous = 1;
- data->all_multicast = 1;
- data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
-
- rte_eth_random_addr(internals->eth_addr.addr_bytes);
-
- eth_dev->dev_ops = &eth_kni_ops;
-
- internals->no_request_thread = args->no_request_thread;
-
- return eth_dev;
-}
-
-static int
-kni_init(void)
-{
- int ret;
-
- if (is_kni_initialized == 0) {
- ret = rte_kni_init(MAX_KNI_PORTS);
- if (ret < 0)
- return ret;
- }
-
- is_kni_initialized++;
-
- return 0;
-}
-
-static int
-eth_kni_kvargs_process(struct eth_kni_args *args, const char *params)
-{
- struct rte_kvargs *kvlist;
-
- kvlist = rte_kvargs_parse(params, valid_arguments);
- if (kvlist == NULL)
- return -1;
-
- memset(args, 0, sizeof(struct eth_kni_args));
-
- if (rte_kvargs_count(kvlist, ETH_KNI_NO_REQUEST_THREAD_ARG) == 1)
- args->no_request_thread = 1;
-
- rte_kvargs_free(kvlist);
-
- return 0;
-}
-
-static int
-eth_kni_probe(struct rte_vdev_device *vdev)
-{
- struct rte_eth_dev *eth_dev;
- struct eth_kni_args args;
- const char *name;
- const char *params;
- int ret;
-
- name = rte_vdev_device_name(vdev);
- params = rte_vdev_device_args(vdev);
- PMD_LOG(INFO, "Initializing eth_kni for %s", name);
-
- if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
- eth_dev = rte_eth_dev_attach_secondary(name);
- if (!eth_dev) {
- PMD_LOG(ERR, "Failed to probe %s", name);
- return -1;
- }
- /* TODO: request info from primary to set up Rx and Tx */
- eth_dev->dev_ops = &eth_kni_ops;
- eth_dev->device = &vdev->device;
- rte_eth_dev_probing_finish(eth_dev);
- return 0;
- }
-
- ret = eth_kni_kvargs_process(&args, params);
- if (ret < 0)
- return ret;
-
- ret = kni_init();
- if (ret < 0)
- return ret;
-
- eth_dev = eth_kni_create(vdev, &args, rte_socket_id());
- if (eth_dev == NULL)
- goto kni_uninit;
-
- eth_dev->rx_pkt_burst = eth_kni_rx;
- eth_dev->tx_pkt_burst = eth_kni_tx;
-
- rte_eth_dev_probing_finish(eth_dev);
- return 0;
-
-kni_uninit:
- is_kni_initialized--;
- if (is_kni_initialized == 0)
- rte_kni_close();
- return -1;
-}
-
-static int
-eth_kni_remove(struct rte_vdev_device *vdev)
-{
- struct rte_eth_dev *eth_dev;
- const char *name;
- int ret;
-
- name = rte_vdev_device_name(vdev);
- PMD_LOG(INFO, "Un-Initializing eth_kni for %s", name);
-
- /* find the ethdev entry */
- eth_dev = rte_eth_dev_allocated(name);
- if (eth_dev != NULL) {
- if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- ret = eth_kni_dev_stop(eth_dev);
- if (ret != 0)
- return ret;
- return rte_eth_dev_release_port(eth_dev);
- }
- eth_kni_close(eth_dev);
- rte_eth_dev_release_port(eth_dev);
- }
-
- is_kni_initialized--;
- if (is_kni_initialized == 0)
- rte_kni_close();
-
- return 0;
-}
-
-static struct rte_vdev_driver eth_kni_drv = {
- .probe = eth_kni_probe,
- .remove = eth_kni_remove,
-};
-
-RTE_PMD_REGISTER_VDEV(net_kni, eth_kni_drv);
-RTE_PMD_REGISTER_PARAM_STRING(net_kni, ETH_KNI_NO_REQUEST_THREAD_ARG "=<int>");
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index f68bbc27a784..bd38b533c573 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -35,7 +35,6 @@ drivers = [
'ionic',
'ipn3ke',
'ixgbe',
- 'kni',
'mana',
'memif',
'mlx4',
diff --git a/examples/ip_pipeline/Makefile b/examples/ip_pipeline/Makefile
index 785c7ee38ce5..bc5e0a9f1800 100644
--- a/examples/ip_pipeline/Makefile
+++ b/examples/ip_pipeline/Makefile
@@ -8,7 +8,6 @@ APP = ip_pipeline
SRCS-y := action.c
SRCS-y += cli.c
SRCS-y += conn.c
-SRCS-y += kni.c
SRCS-y += link.c
SRCS-y += main.c
SRCS-y += mempool.c
diff --git a/examples/ip_pipeline/cli.c b/examples/ip_pipeline/cli.c
index c918f30e06f3..e8269ea90c11 100644
--- a/examples/ip_pipeline/cli.c
+++ b/examples/ip_pipeline/cli.c
@@ -14,7 +14,6 @@
#include "cli.h"
#include "cryptodev.h"
-#include "kni.h"
#include "link.h"
#include "mempool.h"
#include "parser.h"
@@ -728,65 +727,6 @@ cmd_tap(char **tokens,
}
}
-static const char cmd_kni_help[] =
-"kni <kni_name>\n"
-" link <link_name>\n"
-" mempool <mempool_name>\n"
-" [thread <thread_id>]\n";
-
-static void
-cmd_kni(char **tokens,
- uint32_t n_tokens,
- char *out,
- size_t out_size)
-{
- struct kni_params p;
- char *name;
- struct kni *kni;
-
- memset(&p, 0, sizeof(p));
- if ((n_tokens != 6) && (n_tokens != 8)) {
- snprintf(out, out_size, MSG_ARG_MISMATCH, tokens[0]);
- return;
- }
-
- name = tokens[1];
-
- if (strcmp(tokens[2], "link") != 0) {
- snprintf(out, out_size, MSG_ARG_NOT_FOUND, "link");
- return;
- }
-
- p.link_name = tokens[3];
-
- if (strcmp(tokens[4], "mempool") != 0) {
- snprintf(out, out_size, MSG_ARG_NOT_FOUND, "mempool");
- return;
- }
-
- p.mempool_name = tokens[5];
-
- if (n_tokens == 8) {
- if (strcmp(tokens[6], "thread") != 0) {
- snprintf(out, out_size, MSG_ARG_NOT_FOUND, "thread");
- return;
- }
-
- if (parser_read_uint32(&p.thread_id, tokens[7]) != 0) {
- snprintf(out, out_size, MSG_ARG_INVALID, "thread_id");
- return;
- }
-
- p.force_bind = 1;
- } else
- p.force_bind = 0;
-
- kni = kni_create(name, &p);
- if (kni == NULL) {
- snprintf(out, out_size, MSG_CMD_FAIL, tokens[0]);
- return;
- }
-}
static const char cmd_cryptodev_help[] =
"cryptodev <cryptodev_name>\n"
@@ -1541,7 +1481,6 @@ static const char cmd_pipeline_port_in_help[] =
" | swq <swq_name>\n"
" | tmgr <tmgr_name>\n"
" | tap <tap_name> mempool <mempool_name> mtu <mtu>\n"
-" | kni <kni_name>\n"
" | source mempool <mempool_name> file <file_name> bpp <n_bytes_per_pkt>\n"
" | cryptodev <cryptodev_name> rxq <queue_id>\n"
" [action <port_in_action_profile_name>]\n"
@@ -1664,18 +1603,6 @@ cmd_pipeline_port_in(char **tokens,
}
t0 += 6;
- } else if (strcmp(tokens[t0], "kni") == 0) {
- if (n_tokens < t0 + 2) {
- snprintf(out, out_size, MSG_ARG_MISMATCH,
- "pipeline port in kni");
- return;
- }
-
- p.type = PORT_IN_KNI;
-
- p.dev_name = tokens[t0 + 1];
-
- t0 += 2;
} else if (strcmp(tokens[t0], "source") == 0) {
if (n_tokens < t0 + 6) {
snprintf(out, out_size, MSG_ARG_MISMATCH,
@@ -1781,7 +1708,6 @@ static const char cmd_pipeline_port_out_help[] =
" | swq <swq_name>\n"
" | tmgr <tmgr_name>\n"
" | tap <tap_name>\n"
-" | kni <kni_name>\n"
" | sink [file <file_name> pkts <max_n_pkts>]\n"
" | cryptodev <cryptodev_name> txq <txq_id> offset <crypto_op_offset>\n";
@@ -1873,16 +1799,6 @@ cmd_pipeline_port_out(char **tokens,
p.type = PORT_OUT_TAP;
- p.dev_name = tokens[7];
- } else if (strcmp(tokens[6], "kni") == 0) {
- if (n_tokens != 8) {
- snprintf(out, out_size, MSG_ARG_MISMATCH,
- "pipeline port out kni");
- return;
- }
-
- p.type = PORT_OUT_KNI;
-
p.dev_name = tokens[7];
} else if (strcmp(tokens[6], "sink") == 0) {
if ((n_tokens != 7) && (n_tokens != 11)) {
@@ -6038,7 +5954,6 @@ cmd_help(char **tokens, uint32_t n_tokens, char *out, size_t out_size)
"\ttmgr subport\n"
"\ttmgr subport pipe\n"
"\ttap\n"
- "\tkni\n"
"\tport in action profile\n"
"\ttable action profile\n"
"\tpipeline\n"
@@ -6124,11 +6039,6 @@ cmd_help(char **tokens, uint32_t n_tokens, char *out, size_t out_size)
return;
}
- if (strcmp(tokens[0], "kni") == 0) {
- snprintf(out, out_size, "\n%s\n", cmd_kni_help);
- return;
- }
-
if (strcmp(tokens[0], "cryptodev") == 0) {
snprintf(out, out_size, "\n%s\n", cmd_cryptodev_help);
return;
@@ -6436,11 +6346,6 @@ cli_process(char *in, char *out, size_t out_size)
return;
}
- if (strcmp(tokens[0], "kni") == 0) {
- cmd_kni(tokens, n_tokens, out, out_size);
- return;
- }
-
if (strcmp(tokens[0], "cryptodev") == 0) {
cmd_cryptodev(tokens, n_tokens, out, out_size);
return;
diff --git a/examples/ip_pipeline/examples/kni.cli b/examples/ip_pipeline/examples/kni.cli
deleted file mode 100644
index 143834093d4d..000000000000
--- a/examples/ip_pipeline/examples/kni.cli
+++ /dev/null
@@ -1,69 +0,0 @@
-; SPDX-License-Identifier: BSD-3-Clause
-; Copyright(c) 2010-2018 Intel Corporation
-
-; _______________ ______________________
-; | | KNI0 | |
-; LINK0 RXQ0 --->|...............|------->|--+ |
-; | | KNI1 | | br0 |
-; LINK1 TXQ0 <---|...............|<-------|<-+ |
-; | | | Linux Kernel |
-; | PIPELINE0 | | Network Stack |
-; | | KNI1 | |
-; LINK1 RXQ0 --->|...............|------->|--+ |
-; | | KNI0 | | br0 |
-; LINK0 TXQ0 <---|...............|<-------|<-+ |
-; |_______________| |______________________|
-;
-; Insert Linux kernel KNI module:
-; [Linux]$ insmod rte_kni.ko
-;
-; Configure Linux kernel bridge between KNI0 and KNI1 interfaces:
-; [Linux]$ brctl addbr br0
-; [Linux]$ brctl addif br0 KNI0
-; [Linux]$ brctl addif br0 KNI1
-; [Linux]$ ifconfig br0 up
-; [Linux]$ ifconfig KNI0 up
-; [Linux]$ ifconfig KNI1 up
-;
-; Monitor packet forwarding performed by Linux kernel between KNI0 and KNI1:
-; [Linux]$ tcpdump -i KNI0
-; [Linux]$ tcpdump -i KNI1
-
-mempool MEMPOOL0 buffer 2304 pool 32K cache 256 cpu 0
-
-link LINK0 dev 0000:02:00.0 rxq 1 128 MEMPOOL0 txq 1 512 promiscuous on
-link LINK1 dev 0000:02:00.1 rxq 1 128 MEMPOOL0 txq 1 512 promiscuous on
-
-kni KNI0 link LINK0 mempool MEMPOOL0
-kni KNI1 link LINK1 mempool MEMPOOL0
-
-table action profile AP0 ipv4 offset 270 fwd
-
-pipeline PIPELINE0 period 10 offset_port_id 0 cpu 0
-
-pipeline PIPELINE0 port in bsz 32 link LINK0 rxq 0
-pipeline PIPELINE0 port in bsz 32 kni KNI1
-pipeline PIPELINE0 port in bsz 32 link LINK1 rxq 0
-pipeline PIPELINE0 port in bsz 32 kni KNI0
-
-pipeline PIPELINE0 port out bsz 32 kni KNI0
-pipeline PIPELINE0 port out bsz 32 link LINK1 txq 0
-pipeline PIPELINE0 port out bsz 32 kni KNI1
-pipeline PIPELINE0 port out bsz 32 link LINK0 txq 0
-
-pipeline PIPELINE0 table match stub action AP0
-pipeline PIPELINE0 table match stub action AP0
-pipeline PIPELINE0 table match stub action AP0
-pipeline PIPELINE0 table match stub action AP0
-
-pipeline PIPELINE0 port in 0 table 0
-pipeline PIPELINE0 port in 1 table 1
-pipeline PIPELINE0 port in 2 table 2
-pipeline PIPELINE0 port in 3 table 3
-
-thread 1 pipeline PIPELINE0 enable
-
-pipeline PIPELINE0 table 0 rule add match default action fwd port 0
-pipeline PIPELINE0 table 1 rule add match default action fwd port 1
-pipeline PIPELINE0 table 2 rule add match default action fwd port 2
-pipeline PIPELINE0 table 3 rule add match default action fwd port 3
diff --git a/examples/ip_pipeline/kni.c b/examples/ip_pipeline/kni.c
deleted file mode 100644
index cd02c3947827..000000000000
--- a/examples/ip_pipeline/kni.c
+++ /dev/null
@@ -1,168 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#include <stdlib.h>
-#include <string.h>
-
-#include <rte_ethdev.h>
-#include <rte_string_fns.h>
-
-#include "kni.h"
-#include "mempool.h"
-#include "link.h"
-
-static struct kni_list kni_list;
-
-#ifndef KNI_MAX
-#define KNI_MAX 16
-#endif
-
-int
-kni_init(void)
-{
- TAILQ_INIT(&kni_list);
-
-#ifdef RTE_LIB_KNI
- rte_kni_init(KNI_MAX);
-#endif
-
- return 0;
-}
-
-struct kni *
-kni_find(const char *name)
-{
- struct kni *kni;
-
- if (name == NULL)
- return NULL;
-
- TAILQ_FOREACH(kni, &kni_list, node)
- if (strcmp(kni->name, name) == 0)
- return kni;
-
- return NULL;
-}
-
-#ifndef RTE_LIB_KNI
-
-struct kni *
-kni_create(const char *name __rte_unused,
- struct kni_params *params __rte_unused)
-{
- return NULL;
-}
-
-void
-kni_handle_request(void)
-{
- return;
-}
-
-#else
-
-static int
-kni_config_network_interface(uint16_t port_id, uint8_t if_up)
-{
- int ret = 0;
-
- if (!rte_eth_dev_is_valid_port(port_id))
- return -EINVAL;
-
- ret = (if_up) ?
- rte_eth_dev_set_link_up(port_id) :
- rte_eth_dev_set_link_down(port_id);
-
- return ret;
-}
-
-static int
-kni_change_mtu(uint16_t port_id, unsigned int new_mtu)
-{
- int ret;
-
- if (!rte_eth_dev_is_valid_port(port_id))
- return -EINVAL;
-
- if (new_mtu > RTE_ETHER_MAX_LEN)
- return -EINVAL;
-
- /* Set new MTU */
- ret = rte_eth_dev_set_mtu(port_id, new_mtu);
- if (ret < 0)
- return ret;
-
- return 0;
-}
-
-struct kni *
-kni_create(const char *name, struct kni_params *params)
-{
- struct rte_eth_dev_info dev_info;
- struct rte_kni_conf kni_conf;
- struct rte_kni_ops kni_ops;
- struct kni *kni;
- struct mempool *mempool;
- struct link *link;
- struct rte_kni *k;
- int ret;
-
- /* Check input params */
- if ((name == NULL) ||
- kni_find(name) ||
- (params == NULL))
- return NULL;
-
- mempool = mempool_find(params->mempool_name);
- link = link_find(params->link_name);
- if ((mempool == NULL) ||
- (link == NULL))
- return NULL;
-
- /* Resource create */
- ret = rte_eth_dev_info_get(link->port_id, &dev_info);
- if (ret != 0)
- return NULL;
-
- memset(&kni_conf, 0, sizeof(kni_conf));
- strlcpy(kni_conf.name, name, RTE_KNI_NAMESIZE);
- kni_conf.force_bind = params->force_bind;
- kni_conf.core_id = params->thread_id;
- kni_conf.group_id = link->port_id;
- kni_conf.mbuf_size = mempool->buffer_size;
-
- memset(&kni_ops, 0, sizeof(kni_ops));
- kni_ops.port_id = link->port_id;
- kni_ops.config_network_if = kni_config_network_interface;
- kni_ops.change_mtu = kni_change_mtu;
-
- k = rte_kni_alloc(mempool->m, &kni_conf, &kni_ops);
- if (k == NULL)
- return NULL;
-
- /* Node allocation */
- kni = calloc(1, sizeof(struct kni));
- if (kni == NULL)
- return NULL;
-
- /* Node fill in */
- strlcpy(kni->name, name, sizeof(kni->name));
- kni->k = k;
-
- /* Node add to list */
- TAILQ_INSERT_TAIL(&kni_list, kni, node);
-
- return kni;
-}
-
-void
-kni_handle_request(void)
-{
- struct kni *kni;
-
- TAILQ_FOREACH(kni, &kni_list, node)
- rte_kni_handle_request(kni->k);
-}
-
-#endif
diff --git a/examples/ip_pipeline/kni.h b/examples/ip_pipeline/kni.h
deleted file mode 100644
index 118f48df73d8..000000000000
--- a/examples/ip_pipeline/kni.h
+++ /dev/null
@@ -1,46 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#ifndef _INCLUDE_KNI_H_
-#define _INCLUDE_KNI_H_
-
-#include <stdint.h>
-#include <sys/queue.h>
-
-#ifdef RTE_LIB_KNI
-#include <rte_kni.h>
-#endif
-
-#include "common.h"
-
-struct kni {
- TAILQ_ENTRY(kni) node;
- char name[NAME_SIZE];
-#ifdef RTE_LIB_KNI
- struct rte_kni *k;
-#endif
-};
-
-TAILQ_HEAD(kni_list, kni);
-
-int
-kni_init(void);
-
-struct kni *
-kni_find(const char *name);
-
-struct kni_params {
- const char *link_name;
- const char *mempool_name;
- int force_bind;
- uint32_t thread_id;
-};
-
-struct kni *
-kni_create(const char *name, struct kni_params *params);
-
-void
-kni_handle_request(void);
-
-#endif /* _INCLUDE_KNI_H_ */
diff --git a/examples/ip_pipeline/main.c b/examples/ip_pipeline/main.c
index e35d9bce3984..663f538f024a 100644
--- a/examples/ip_pipeline/main.c
+++ b/examples/ip_pipeline/main.c
@@ -14,7 +14,6 @@
#include "cli.h"
#include "conn.h"
-#include "kni.h"
#include "cryptodev.h"
#include "link.h"
#include "mempool.h"
@@ -205,13 +204,6 @@ main(int argc, char **argv)
return status;
}
- /* KNI */
- status = kni_init();
- if (status) {
- printf("Error: KNI initialization failed (%d)\n", status);
- return status;
- }
-
/* Sym Crypto */
status = cryptodev_init();
if (status) {
@@ -264,7 +256,5 @@ main(int argc, char **argv)
conn_poll_for_conn(conn);
conn_poll_for_msg(conn);
-
- kni_handle_request();
}
}
diff --git a/examples/ip_pipeline/meson.build b/examples/ip_pipeline/meson.build
index 57f522c24cf9..68049157e429 100644
--- a/examples/ip_pipeline/meson.build
+++ b/examples/ip_pipeline/meson.build
@@ -18,7 +18,6 @@ sources = files(
'cli.c',
'conn.c',
'cryptodev.c',
- 'kni.c',
'link.c',
'main.c',
'mempool.c',
diff --git a/examples/ip_pipeline/pipeline.c b/examples/ip_pipeline/pipeline.c
index 7ebabcae984d..63352257c6e9 100644
--- a/examples/ip_pipeline/pipeline.c
+++ b/examples/ip_pipeline/pipeline.c
@@ -11,9 +11,6 @@
#include <rte_string_fns.h>
#include <rte_port_ethdev.h>
-#ifdef RTE_LIB_KNI
-#include <rte_port_kni.h>
-#endif
#include <rte_port_ring.h>
#include <rte_port_source_sink.h>
#include <rte_port_fd.h>
@@ -28,9 +25,6 @@
#include <rte_table_lpm_ipv6.h>
#include <rte_table_stub.h>
-#ifdef RTE_LIB_KNI
-#include "kni.h"
-#endif
#include "link.h"
#include "mempool.h"
#include "pipeline.h"
@@ -160,9 +154,6 @@ pipeline_port_in_create(const char *pipeline_name,
struct rte_port_ring_reader_params ring;
struct rte_port_sched_reader_params sched;
struct rte_port_fd_reader_params fd;
-#ifdef RTE_LIB_KNI
- struct rte_port_kni_reader_params kni;
-#endif
struct rte_port_source_params source;
struct rte_port_sym_crypto_reader_params sym_crypto;
} pp;
@@ -264,22 +255,6 @@ pipeline_port_in_create(const char *pipeline_name,
break;
}
-#ifdef RTE_LIB_KNI
- case PORT_IN_KNI:
- {
- struct kni *kni;
-
- kni = kni_find(params->dev_name);
- if (kni == NULL)
- return -1;
-
- pp.kni.kni = kni->k;
-
- p.ops = &rte_port_kni_reader_ops;
- p.arg_create = &pp.kni;
- break;
- }
-#endif
case PORT_IN_SOURCE:
{
@@ -404,9 +379,6 @@ pipeline_port_out_create(const char *pipeline_name,
struct rte_port_ring_writer_params ring;
struct rte_port_sched_writer_params sched;
struct rte_port_fd_writer_params fd;
-#ifdef RTE_LIB_KNI
- struct rte_port_kni_writer_params kni;
-#endif
struct rte_port_sink_params sink;
struct rte_port_sym_crypto_writer_params sym_crypto;
} pp;
@@ -415,9 +387,6 @@ pipeline_port_out_create(const char *pipeline_name,
struct rte_port_ethdev_writer_nodrop_params ethdev;
struct rte_port_ring_writer_nodrop_params ring;
struct rte_port_fd_writer_nodrop_params fd;
-#ifdef RTE_LIB_KNI
- struct rte_port_kni_writer_nodrop_params kni;
-#endif
struct rte_port_sym_crypto_writer_nodrop_params sym_crypto;
} pp_nodrop;
@@ -537,32 +506,6 @@ pipeline_port_out_create(const char *pipeline_name,
break;
}
-#ifdef RTE_LIB_KNI
- case PORT_OUT_KNI:
- {
- struct kni *kni;
-
- kni = kni_find(params->dev_name);
- if (kni == NULL)
- return -1;
-
- pp.kni.kni = kni->k;
- pp.kni.tx_burst_sz = params->burst_size;
-
- pp_nodrop.kni.kni = kni->k;
- pp_nodrop.kni.tx_burst_sz = params->burst_size;
- pp_nodrop.kni.n_retries = params->n_retries;
-
- if (params->retry == 0) {
- p.ops = &rte_port_kni_writer_ops;
- p.arg_create = &pp.kni;
- } else {
- p.ops = &rte_port_kni_writer_nodrop_ops;
- p.arg_create = &pp_nodrop.kni;
- }
- break;
- }
-#endif
case PORT_OUT_SINK:
{
diff --git a/examples/ip_pipeline/pipeline.h b/examples/ip_pipeline/pipeline.h
index 4d2ee29a54c7..083d5e852421 100644
--- a/examples/ip_pipeline/pipeline.h
+++ b/examples/ip_pipeline/pipeline.h
@@ -25,7 +25,6 @@ enum port_in_type {
PORT_IN_SWQ,
PORT_IN_TMGR,
PORT_IN_TAP,
- PORT_IN_KNI,
PORT_IN_SOURCE,
PORT_IN_CRYPTODEV,
};
@@ -67,7 +66,6 @@ enum port_out_type {
PORT_OUT_SWQ,
PORT_OUT_TMGR,
PORT_OUT_TAP,
- PORT_OUT_KNI,
PORT_OUT_SINK,
PORT_OUT_CRYPTODEV,
};
diff --git a/kernel/linux/kni/Kbuild b/kernel/linux/kni/Kbuild
deleted file mode 100644
index e5452d6c00db..000000000000
--- a/kernel/linux/kni/Kbuild
+++ /dev/null
@@ -1,6 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Luca Boccassi <bluca@debian.org>
-
-ccflags-y := $(MODULE_CFLAGS)
-obj-m := rte_kni.o
-rte_kni-y := $(patsubst $(src)/%.c,%.o,$(wildcard $(src)/*.c))
diff --git a/kernel/linux/kni/compat.h b/kernel/linux/kni/compat.h
deleted file mode 100644
index 8beb67046577..000000000000
--- a/kernel/linux/kni/compat.h
+++ /dev/null
@@ -1,157 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Minimal wrappers to allow compiling kni on older kernels.
- */
-
-#include <linux/version.h>
-
-#ifndef RHEL_RELEASE_VERSION
-#define RHEL_RELEASE_VERSION(a, b) (((a) << 8) + (b))
-#endif
-
-/* SuSE version macro is the same as Linux kernel version */
-#ifndef SLE_VERSION
-#define SLE_VERSION(a, b, c) KERNEL_VERSION(a, b, c)
-#endif
-#ifdef CONFIG_SUSE_KERNEL
-#if (LINUX_VERSION_CODE >= KERNEL_VERSION(4, 4, 57))
-/* SLES12SP3 is at least 4.4.57+ based */
-#define SLE_VERSION_CODE SLE_VERSION(12, 3, 0)
-#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 12, 28))
-/* SLES12 is at least 3.12.28+ based */
-#define SLE_VERSION_CODE SLE_VERSION(12, 0, 0)
-#elif ((LINUX_VERSION_CODE >= KERNEL_VERSION(3, 0, 61)) && \
- (LINUX_VERSION_CODE < KERNEL_VERSION(3, 1, 0)))
-/* SLES11 SP3 is at least 3.0.61+ based */
-#define SLE_VERSION_CODE SLE_VERSION(11, 3, 0)
-#elif (LINUX_VERSION_CODE == KERNEL_VERSION(2, 6, 32))
-/* SLES11 SP1 is 2.6.32 based */
-#define SLE_VERSION_CODE SLE_VERSION(11, 1, 0)
-#elif (LINUX_VERSION_CODE == KERNEL_VERSION(2, 6, 27))
-/* SLES11 GA is 2.6.27 based */
-#define SLE_VERSION_CODE SLE_VERSION(11, 0, 0)
-#endif /* LINUX_VERSION_CODE == KERNEL_VERSION(x,y,z) */
-#endif /* CONFIG_SUSE_KERNEL */
-#ifndef SLE_VERSION_CODE
-#define SLE_VERSION_CODE 0
-#endif /* SLE_VERSION_CODE */
-
-
-#if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 39) && \
- (!(defined(RHEL_RELEASE_CODE) && \
- RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(6, 4)))
-
-#define kstrtoul strict_strtoul
-
-#endif /* < 2.6.39 */
-
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 33)
-#define HAVE_SIMPLIFIED_PERNET_OPERATIONS
-#endif
-
-#if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 35)
-#define sk_sleep(s) ((s)->sk_sleep)
-#else
-#define HAVE_SOCKET_WQ
-#endif
-
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 7, 0)
-#define HAVE_STATIC_SOCK_MAP_FD
-#else
-#define kni_sock_map_fd(s) sock_map_fd(s, 0)
-#endif
-
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 9, 0)
-#define HAVE_CHANGE_CARRIER_CB
-#endif
-
-#if LINUX_VERSION_CODE < KERNEL_VERSION(3, 14, 0)
-#define ether_addr_copy(dst, src) memcpy(dst, src, ETH_ALEN)
-#endif
-
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 19, 0)
-#define HAVE_IOV_ITER_MSGHDR
-#endif
-
-#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 1, 0)
-#define HAVE_KIOCB_MSG_PARAM
-#define HAVE_REBUILD_HEADER
-#endif
-
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 2, 0)
-#define HAVE_SK_ALLOC_KERN_PARAM
-#endif
-
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 7, 0) || \
- (defined(RHEL_RELEASE_CODE) && \
- RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7, 4)) || \
- (SLE_VERSION_CODE && SLE_VERSION_CODE == SLE_VERSION(12, 3, 0))
-#define HAVE_TRANS_START_HELPER
-#endif
-
-/*
- * KNI uses NET_NAME_UNKNOWN macro to select correct version of alloc_netdev()
- * For old kernels just backported the commit that enables the macro
- * (685343fc3ba6) but still uses old API, it is required to undefine macro to
- * select correct version of API, this is safe since KNI doesn't use the value.
- * This fix is specific to RedHat/CentOS kernels.
- */
-#if (defined(RHEL_RELEASE_CODE) && \
- (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(6, 8)) && \
- (LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 34)))
-#undef NET_NAME_UNKNOWN
-#endif
-
-/*
- * RHEL has two different version with different kernel version:
- * 3.10 is for AMD, Intel, IBM POWER7 and POWER8;
- * 4.14 is for ARM and IBM POWER9
- */
-#if (defined(RHEL_RELEASE_CODE) && \
- (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7, 5)) && \
- (RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(8, 0)) && \
- (LINUX_VERSION_CODE < KERNEL_VERSION(4, 14, 0)))
-#define ndo_change_mtu ndo_change_mtu_rh74
-#endif
-
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 10, 0)
-#define HAVE_MAX_MTU_PARAM
-#endif
-
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 11, 0)
-#define HAVE_SIGNAL_FUNCTIONS_OWN_HEADER
-#endif
-
-/*
- * iova to kva mapping support can be provided since 4.6.0, but required
- * kernel version increased to >= 4.10.0 because of the updates in
- * get_user_pages_remote() kernel API
- */
-#if KERNEL_VERSION(4, 10, 0) <= LINUX_VERSION_CODE
-#define HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
-#endif
-
-#if KERNEL_VERSION(5, 6, 0) <= LINUX_VERSION_CODE || \
- (defined(RHEL_RELEASE_CODE) && \
- RHEL_RELEASE_VERSION(8, 3) <= RHEL_RELEASE_CODE) || \
- (defined(CONFIG_SUSE_KERNEL) && defined(HAVE_ARG_TX_QUEUE))
-#define HAVE_TX_TIMEOUT_TXQUEUE
-#endif
-
-#if KERNEL_VERSION(5, 9, 0) > LINUX_VERSION_CODE
-#define HAVE_TSK_IN_GUP
-#endif
-
-#if KERNEL_VERSION(5, 15, 0) <= LINUX_VERSION_CODE
-#define HAVE_ETH_HW_ADDR_SET
-#endif
-
-#if KERNEL_VERSION(5, 18, 0) > LINUX_VERSION_CODE && \
- (!(defined(RHEL_RELEASE_CODE) && \
- RHEL_RELEASE_VERSION(9, 1) <= RHEL_RELEASE_CODE))
-#define HAVE_NETIF_RX_NI
-#endif
-
-#if KERNEL_VERSION(6, 5, 0) > LINUX_VERSION_CODE
-#define HAVE_VMA_IN_GUP
-#endif
diff --git a/kernel/linux/kni/kni_dev.h b/kernel/linux/kni/kni_dev.h
deleted file mode 100644
index 975379825b2d..000000000000
--- a/kernel/linux/kni/kni_dev.h
+++ /dev/null
@@ -1,137 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Copyright(c) 2010-2014 Intel Corporation.
- */
-
-#ifndef _KNI_DEV_H_
-#define _KNI_DEV_H_
-
-#ifdef pr_fmt
-#undef pr_fmt
-#endif
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
-#define KNI_VERSION "1.0"
-
-#include "compat.h"
-
-#include <linux/if.h>
-#include <linux/wait.h>
-#ifdef HAVE_SIGNAL_FUNCTIONS_OWN_HEADER
-#include <linux/sched/signal.h>
-#else
-#include <linux/sched.h>
-#endif
-#include <linux/netdevice.h>
-#include <linux/spinlock.h>
-#include <linux/list.h>
-
-#include <rte_kni_common.h>
-#define KNI_KTHREAD_MAX_RESCHEDULE_INTERVAL 1000000 /* us */
-
-#define MBUF_BURST_SZ 32
-
-/* Default carrier state for created KNI network interfaces */
-extern uint32_t kni_dflt_carrier;
-
-/* Request processing support for bifurcated drivers. */
-extern uint32_t bifurcated_support;
-
-/**
- * A structure describing the private information for a kni device.
- */
-struct kni_dev {
- /* kni list */
- struct list_head list;
-
- uint8_t iova_mode;
-
- uint32_t core_id; /* Core ID to bind */
- char name[RTE_KNI_NAMESIZE]; /* Network device name */
- struct task_struct *pthread;
-
- /* wait queue for req/resp */
- wait_queue_head_t wq;
- struct mutex sync_lock;
-
- /* kni device */
- struct net_device *net_dev;
-
- /* queue for packets to be sent out */
- struct rte_kni_fifo *tx_q;
-
- /* queue for the packets received */
- struct rte_kni_fifo *rx_q;
-
- /* queue for the allocated mbufs those can be used to save sk buffs */
- struct rte_kni_fifo *alloc_q;
-
- /* free queue for the mbufs to be freed */
- struct rte_kni_fifo *free_q;
-
- /* request queue */
- struct rte_kni_fifo *req_q;
-
- /* response queue */
- struct rte_kni_fifo *resp_q;
-
- void *sync_kva;
- void *sync_va;
-
- void *mbuf_kva;
- void *mbuf_va;
-
- /* mbuf size */
- uint32_t mbuf_size;
-
- /* buffers */
- void *pa[MBUF_BURST_SZ];
- void *va[MBUF_BURST_SZ];
- void *alloc_pa[MBUF_BURST_SZ];
- void *alloc_va[MBUF_BURST_SZ];
-
- struct task_struct *usr_tsk;
-};
-
-#ifdef HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
-static inline phys_addr_t iova_to_phys(struct task_struct *tsk,
- unsigned long iova)
-{
- phys_addr_t offset, phys_addr;
- struct page *page = NULL;
- long ret;
-
- offset = iova & (PAGE_SIZE - 1);
-
- /* Read one page struct info */
-#ifdef HAVE_TSK_IN_GUP
- ret = get_user_pages_remote(tsk, tsk->mm, iova, 1, 0, &page, NULL, NULL);
-#else
- #ifdef HAVE_VMA_IN_GUP
- ret = get_user_pages_remote(tsk->mm, iova, 1, 0, &page, NULL, NULL);
- #else
- ret = get_user_pages_remote(tsk->mm, iova, 1, 0, &page, NULL);
- #endif
-#endif
- if (ret < 0)
- return 0;
-
- phys_addr = page_to_phys(page) | offset;
- put_page(page);
-
- return phys_addr;
-}
-
-static inline void *iova_to_kva(struct task_struct *tsk, unsigned long iova)
-{
- return phys_to_virt(iova_to_phys(tsk, iova));
-}
-#endif
-
-void kni_net_release_fifo_phy(struct kni_dev *kni);
-void kni_net_rx(struct kni_dev *kni);
-void kni_net_init(struct net_device *dev);
-void kni_net_config_lo_mode(char *lo_str);
-void kni_net_poll_resp(struct kni_dev *kni);
-
-#endif
diff --git a/kernel/linux/kni/kni_fifo.h b/kernel/linux/kni/kni_fifo.h
deleted file mode 100644
index 1ba5172002b6..000000000000
--- a/kernel/linux/kni/kni_fifo.h
+++ /dev/null
@@ -1,87 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Copyright(c) 2010-2014 Intel Corporation.
- */
-
-#ifndef _KNI_FIFO_H_
-#define _KNI_FIFO_H_
-
-#include <rte_kni_common.h>
-
-/* Skip some memory barriers on Linux < 3.14 */
-#ifndef smp_load_acquire
-#define smp_load_acquire(a) (*(a))
-#endif
-#ifndef smp_store_release
-#define smp_store_release(a, b) *(a) = (b)
-#endif
-
-/**
- * Adds num elements into the fifo. Return the number actually written
- */
-static inline uint32_t
-kni_fifo_put(struct rte_kni_fifo *fifo, void **data, uint32_t num)
-{
- uint32_t i = 0;
- uint32_t fifo_write = fifo->write;
- uint32_t fifo_read = smp_load_acquire(&fifo->read);
- uint32_t new_write = fifo_write;
-
- for (i = 0; i < num; i++) {
- new_write = (new_write + 1) & (fifo->len - 1);
-
- if (new_write == fifo_read)
- break;
- fifo->buffer[fifo_write] = data[i];
- fifo_write = new_write;
- }
- smp_store_release(&fifo->write, fifo_write);
-
- return i;
-}
-
-/**
- * Get up to num elements from the FIFO. Return the number actually read
- */
-static inline uint32_t
-kni_fifo_get(struct rte_kni_fifo *fifo, void **data, uint32_t num)
-{
- uint32_t i = 0;
- uint32_t new_read = fifo->read;
- uint32_t fifo_write = smp_load_acquire(&fifo->write);
-
- for (i = 0; i < num; i++) {
- if (new_read == fifo_write)
- break;
-
- data[i] = fifo->buffer[new_read];
- new_read = (new_read + 1) & (fifo->len - 1);
- }
- smp_store_release(&fifo->read, new_read);
-
- return i;
-}
-
-/**
- * Get the num of elements in the fifo
- */
-static inline uint32_t
-kni_fifo_count(struct rte_kni_fifo *fifo)
-{
- uint32_t fifo_write = smp_load_acquire(&fifo->write);
- uint32_t fifo_read = smp_load_acquire(&fifo->read);
- return (fifo->len + fifo_write - fifo_read) & (fifo->len - 1);
-}
-
-/**
- * Get the num of available elements in the fifo
- */
-static inline uint32_t
-kni_fifo_free_count(struct rte_kni_fifo *fifo)
-{
- uint32_t fifo_write = smp_load_acquire(&fifo->write);
- uint32_t fifo_read = smp_load_acquire(&fifo->read);
- return (fifo_read - fifo_write - 1) & (fifo->len - 1);
-}
-
-#endif /* _KNI_FIFO_H_ */
diff --git a/kernel/linux/kni/kni_misc.c b/kernel/linux/kni/kni_misc.c
deleted file mode 100644
index 0c3a86ee352e..000000000000
--- a/kernel/linux/kni/kni_misc.c
+++ /dev/null
@@ -1,719 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Copyright(c) 2010-2014 Intel Corporation.
- */
-
-#include <linux/version.h>
-#include <linux/module.h>
-#include <linux/miscdevice.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/pci.h>
-#include <linux/kthread.h>
-#include <linux/rwsem.h>
-#include <linux/mutex.h>
-#include <linux/nsproxy.h>
-#include <net/net_namespace.h>
-#include <net/netns/generic.h>
-
-#include <rte_kni_common.h>
-
-#include "compat.h"
-#include "kni_dev.h"
-
-MODULE_VERSION(KNI_VERSION);
-MODULE_LICENSE("Dual BSD/GPL");
-MODULE_AUTHOR("Intel Corporation");
-MODULE_DESCRIPTION("Kernel Module for managing kni devices");
-
-#define KNI_RX_LOOP_NUM 1000
-
-#define KNI_MAX_DEVICES 32
-
-/* loopback mode */
-static char *lo_mode;
-
-/* Kernel thread mode */
-static char *kthread_mode;
-static uint32_t multiple_kthread_on;
-
-/* Default carrier state for created KNI network interfaces */
-static char *carrier;
-uint32_t kni_dflt_carrier;
-
-/* Request processing support for bifurcated drivers. */
-static char *enable_bifurcated;
-uint32_t bifurcated_support;
-
-/* KNI thread scheduling interval */
-static long min_scheduling_interval = 100; /* us */
-static long max_scheduling_interval = 200; /* us */
-
-#define KNI_DEV_IN_USE_BIT_NUM 0 /* Bit number for device in use */
-
-static int kni_net_id;
-
-struct kni_net {
- unsigned long device_in_use; /* device in use flag */
- struct mutex kni_kthread_lock;
- struct task_struct *kni_kthread;
- struct rw_semaphore kni_list_lock;
- struct list_head kni_list_head;
-};
-
-static int __net_init
-kni_init_net(struct net *net)
-{
-#ifdef HAVE_SIMPLIFIED_PERNET_OPERATIONS
- struct kni_net *knet = net_generic(net, kni_net_id);
-
- memset(knet, 0, sizeof(*knet));
-#else
- struct kni_net *knet;
- int ret;
-
- knet = kzalloc(sizeof(struct kni_net), GFP_KERNEL);
- if (!knet) {
- ret = -ENOMEM;
- return ret;
- }
-#endif
-
- /* Clear the bit of device in use */
- clear_bit(KNI_DEV_IN_USE_BIT_NUM, &knet->device_in_use);
-
- mutex_init(&knet->kni_kthread_lock);
-
- init_rwsem(&knet->kni_list_lock);
- INIT_LIST_HEAD(&knet->kni_list_head);
-
-#ifdef HAVE_SIMPLIFIED_PERNET_OPERATIONS
- return 0;
-#else
- ret = net_assign_generic(net, kni_net_id, knet);
- if (ret < 0)
- kfree(knet);
-
- return ret;
-#endif
-}
-
-static void __net_exit
-kni_exit_net(struct net *net)
-{
- struct kni_net *knet __maybe_unused;
-
- knet = net_generic(net, kni_net_id);
- mutex_destroy(&knet->kni_kthread_lock);
-
-#ifndef HAVE_SIMPLIFIED_PERNET_OPERATIONS
- kfree(knet);
-#endif
-}
-
-static struct pernet_operations kni_net_ops = {
- .init = kni_init_net,
- .exit = kni_exit_net,
-#ifdef HAVE_SIMPLIFIED_PERNET_OPERATIONS
- .id = &kni_net_id,
- .size = sizeof(struct kni_net),
-#endif
-};
-
-static int
-kni_thread_single(void *data)
-{
- struct kni_net *knet = data;
- int j;
- struct kni_dev *dev;
-
- while (!kthread_should_stop()) {
- down_read(&knet->kni_list_lock);
- for (j = 0; j < KNI_RX_LOOP_NUM; j++) {
- list_for_each_entry(dev, &knet->kni_list_head, list) {
- kni_net_rx(dev);
- kni_net_poll_resp(dev);
- }
- }
- up_read(&knet->kni_list_lock);
- /* reschedule out for a while */
- usleep_range(min_scheduling_interval, max_scheduling_interval);
- }
-
- return 0;
-}
-
-static int
-kni_thread_multiple(void *param)
-{
- int j;
- struct kni_dev *dev = param;
-
- while (!kthread_should_stop()) {
- for (j = 0; j < KNI_RX_LOOP_NUM; j++) {
- kni_net_rx(dev);
- kni_net_poll_resp(dev);
- }
- usleep_range(min_scheduling_interval, max_scheduling_interval);
- }
-
- return 0;
-}
-
-static int
-kni_open(struct inode *inode, struct file *file)
-{
- struct net *net = current->nsproxy->net_ns;
- struct kni_net *knet = net_generic(net, kni_net_id);
-
- /* kni device can be opened by one user only per netns */
- if (test_and_set_bit(KNI_DEV_IN_USE_BIT_NUM, &knet->device_in_use))
- return -EBUSY;
-
- file->private_data = get_net(net);
- pr_debug("/dev/kni opened\n");
-
- return 0;
-}
-
-static int
-kni_dev_remove(struct kni_dev *dev)
-{
- if (!dev)
- return -ENODEV;
-
- /*
- * The memory of kni device is allocated and released together
- * with net device. Release mbuf before freeing net device.
- */
- kni_net_release_fifo_phy(dev);
-
- if (dev->net_dev) {
- unregister_netdev(dev->net_dev);
- free_netdev(dev->net_dev);
- }
-
- return 0;
-}
-
-static int
-kni_release(struct inode *inode, struct file *file)
-{
- struct net *net = file->private_data;
- struct kni_net *knet = net_generic(net, kni_net_id);
- struct kni_dev *dev, *n;
-
- /* Stop kernel thread for single mode */
- if (multiple_kthread_on == 0) {
- mutex_lock(&knet->kni_kthread_lock);
- /* Stop kernel thread */
- if (knet->kni_kthread != NULL) {
- kthread_stop(knet->kni_kthread);
- knet->kni_kthread = NULL;
- }
- mutex_unlock(&knet->kni_kthread_lock);
- }
-
- down_write(&knet->kni_list_lock);
- list_for_each_entry_safe(dev, n, &knet->kni_list_head, list) {
- /* Stop kernel thread for multiple mode */
- if (multiple_kthread_on && dev->pthread != NULL) {
- kthread_stop(dev->pthread);
- dev->pthread = NULL;
- }
-
- list_del(&dev->list);
- kni_dev_remove(dev);
- }
- up_write(&knet->kni_list_lock);
-
- /* Clear the bit of device in use */
- clear_bit(KNI_DEV_IN_USE_BIT_NUM, &knet->device_in_use);
-
- put_net(net);
- pr_debug("/dev/kni closed\n");
-
- return 0;
-}
-
-static int
-kni_check_param(struct kni_dev *kni, struct rte_kni_device_info *dev)
-{
- if (!kni || !dev)
- return -1;
-
- /* Check if network name has been used */
- if (!strncmp(kni->name, dev->name, RTE_KNI_NAMESIZE)) {
- pr_err("KNI name %s duplicated\n", dev->name);
- return -1;
- }
-
- return 0;
-}
-
-static int
-kni_run_thread(struct kni_net *knet, struct kni_dev *kni, uint8_t force_bind)
-{
- /**
- * Create a new kernel thread for multiple mode, set its core affinity,
- * and finally wake it up.
- */
- if (multiple_kthread_on) {
- kni->pthread = kthread_create(kni_thread_multiple,
- (void *)kni, "kni_%s", kni->name);
- if (IS_ERR(kni->pthread)) {
- kni_dev_remove(kni);
- return -ECANCELED;
- }
-
- if (force_bind)
- kthread_bind(kni->pthread, kni->core_id);
- wake_up_process(kni->pthread);
- } else {
- mutex_lock(&knet->kni_kthread_lock);
-
- if (knet->kni_kthread == NULL) {
- knet->kni_kthread = kthread_create(kni_thread_single,
- (void *)knet, "kni_single");
- if (IS_ERR(knet->kni_kthread)) {
- mutex_unlock(&knet->kni_kthread_lock);
- kni_dev_remove(kni);
- return -ECANCELED;
- }
-
- if (force_bind)
- kthread_bind(knet->kni_kthread, kni->core_id);
- wake_up_process(knet->kni_kthread);
- }
-
- mutex_unlock(&knet->kni_kthread_lock);
- }
-
- return 0;
-}
-
-static int
-kni_ioctl_create(struct net *net, uint32_t ioctl_num,
- unsigned long ioctl_param)
-{
- struct kni_net *knet = net_generic(net, kni_net_id);
- int ret;
- struct rte_kni_device_info dev_info;
- struct net_device *net_dev = NULL;
- struct kni_dev *kni, *dev, *n;
-
- pr_info("Creating kni...\n");
- /* Check the buffer size, to avoid warning */
- if (_IOC_SIZE(ioctl_num) > sizeof(dev_info))
- return -EINVAL;
-
- /* Copy kni info from user space */
- if (copy_from_user(&dev_info, (void *)ioctl_param, sizeof(dev_info)))
- return -EFAULT;
-
- /* Check if name is zero-ended */
- if (strnlen(dev_info.name, sizeof(dev_info.name)) == sizeof(dev_info.name)) {
- pr_err("kni.name not zero-terminated");
- return -EINVAL;
- }
-
- /**
- * Check if the cpu core id is valid for binding.
- */
- if (dev_info.force_bind && !cpu_online(dev_info.core_id)) {
- pr_err("cpu %u is not online\n", dev_info.core_id);
- return -EINVAL;
- }
-
- /* Check if it has been created */
- down_read(&knet->kni_list_lock);
- list_for_each_entry_safe(dev, n, &knet->kni_list_head, list) {
- if (kni_check_param(dev, &dev_info) < 0) {
- up_read(&knet->kni_list_lock);
- return -EINVAL;
- }
- }
- up_read(&knet->kni_list_lock);
-
- net_dev = alloc_netdev(sizeof(struct kni_dev), dev_info.name,
-#ifdef NET_NAME_USER
- NET_NAME_USER,
-#endif
- kni_net_init);
- if (net_dev == NULL) {
- pr_err("error allocating device \"%s\"\n", dev_info.name);
- return -EBUSY;
- }
-
- dev_net_set(net_dev, net);
-
- kni = netdev_priv(net_dev);
-
- kni->net_dev = net_dev;
- kni->core_id = dev_info.core_id;
- strncpy(kni->name, dev_info.name, RTE_KNI_NAMESIZE);
-
- /* Translate user space info into kernel space info */
- if (dev_info.iova_mode) {
-#ifdef HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
- kni->tx_q = iova_to_kva(current, dev_info.tx_phys);
- kni->rx_q = iova_to_kva(current, dev_info.rx_phys);
- kni->alloc_q = iova_to_kva(current, dev_info.alloc_phys);
- kni->free_q = iova_to_kva(current, dev_info.free_phys);
-
- kni->req_q = iova_to_kva(current, dev_info.req_phys);
- kni->resp_q = iova_to_kva(current, dev_info.resp_phys);
- kni->sync_va = dev_info.sync_va;
- kni->sync_kva = iova_to_kva(current, dev_info.sync_phys);
- kni->usr_tsk = current;
- kni->iova_mode = 1;
-#else
- pr_err("KNI module does not support IOVA to VA translation\n");
- return -EINVAL;
-#endif
- } else {
-
- kni->tx_q = phys_to_virt(dev_info.tx_phys);
- kni->rx_q = phys_to_virt(dev_info.rx_phys);
- kni->alloc_q = phys_to_virt(dev_info.alloc_phys);
- kni->free_q = phys_to_virt(dev_info.free_phys);
-
- kni->req_q = phys_to_virt(dev_info.req_phys);
- kni->resp_q = phys_to_virt(dev_info.resp_phys);
- kni->sync_va = dev_info.sync_va;
- kni->sync_kva = phys_to_virt(dev_info.sync_phys);
- kni->iova_mode = 0;
- }
-
- kni->mbuf_size = dev_info.mbuf_size;
-
- pr_debug("tx_phys: 0x%016llx, tx_q addr: 0x%p\n",
- (unsigned long long) dev_info.tx_phys, kni->tx_q);
- pr_debug("rx_phys: 0x%016llx, rx_q addr: 0x%p\n",
- (unsigned long long) dev_info.rx_phys, kni->rx_q);
- pr_debug("alloc_phys: 0x%016llx, alloc_q addr: 0x%p\n",
- (unsigned long long) dev_info.alloc_phys, kni->alloc_q);
- pr_debug("free_phys: 0x%016llx, free_q addr: 0x%p\n",
- (unsigned long long) dev_info.free_phys, kni->free_q);
- pr_debug("req_phys: 0x%016llx, req_q addr: 0x%p\n",
- (unsigned long long) dev_info.req_phys, kni->req_q);
- pr_debug("resp_phys: 0x%016llx, resp_q addr: 0x%p\n",
- (unsigned long long) dev_info.resp_phys, kni->resp_q);
- pr_debug("mbuf_size: %u\n", kni->mbuf_size);
-
- /* if user has provided a valid mac address */
- if (is_valid_ether_addr(dev_info.mac_addr)) {
-#ifdef HAVE_ETH_HW_ADDR_SET
- eth_hw_addr_set(net_dev, dev_info.mac_addr);
-#else
- memcpy(net_dev->dev_addr, dev_info.mac_addr, ETH_ALEN);
-#endif
- } else {
- /* Assign random MAC address. */
- eth_hw_addr_random(net_dev);
- }
-
- if (dev_info.mtu)
- net_dev->mtu = dev_info.mtu;
-#ifdef HAVE_MAX_MTU_PARAM
- net_dev->max_mtu = net_dev->mtu;
-
- if (dev_info.min_mtu)
- net_dev->min_mtu = dev_info.min_mtu;
-
- if (dev_info.max_mtu)
- net_dev->max_mtu = dev_info.max_mtu;
-#endif
-
- ret = register_netdev(net_dev);
- if (ret) {
- pr_err("error %i registering device \"%s\"\n",
- ret, dev_info.name);
- kni->net_dev = NULL;
- kni_dev_remove(kni);
- free_netdev(net_dev);
- return -ENODEV;
- }
-
- netif_carrier_off(net_dev);
-
- ret = kni_run_thread(knet, kni, dev_info.force_bind);
- if (ret != 0)
- return ret;
-
- down_write(&knet->kni_list_lock);
- list_add(&kni->list, &knet->kni_list_head);
- up_write(&knet->kni_list_lock);
-
- return 0;
-}
-
-static int
-kni_ioctl_release(struct net *net, uint32_t ioctl_num,
- unsigned long ioctl_param)
-{
- struct kni_net *knet = net_generic(net, kni_net_id);
- int ret = -EINVAL;
- struct kni_dev *dev, *n;
- struct rte_kni_device_info dev_info;
-
- if (_IOC_SIZE(ioctl_num) > sizeof(dev_info))
- return -EINVAL;
-
- if (copy_from_user(&dev_info, (void *)ioctl_param, sizeof(dev_info)))
- return -EFAULT;
-
- /* Release the network device according to its name */
- if (strlen(dev_info.name) == 0)
- return -EINVAL;
-
- down_write(&knet->kni_list_lock);
- list_for_each_entry_safe(dev, n, &knet->kni_list_head, list) {
- if (strncmp(dev->name, dev_info.name, RTE_KNI_NAMESIZE) != 0)
- continue;
-
- if (multiple_kthread_on && dev->pthread != NULL) {
- kthread_stop(dev->pthread);
- dev->pthread = NULL;
- }
-
- list_del(&dev->list);
- kni_dev_remove(dev);
- ret = 0;
- break;
- }
- up_write(&knet->kni_list_lock);
- pr_info("%s release kni named %s\n",
- (ret == 0 ? "Successfully" : "Unsuccessfully"), dev_info.name);
-
- return ret;
-}
-
-static long
-kni_ioctl(struct file *file, unsigned int ioctl_num, unsigned long ioctl_param)
-{
- long ret = -EINVAL;
- struct net *net = current->nsproxy->net_ns;
-
- pr_debug("IOCTL num=0x%0x param=0x%0lx\n", ioctl_num, ioctl_param);
-
- /*
- * Switch according to the ioctl called
- */
- switch (_IOC_NR(ioctl_num)) {
- case _IOC_NR(RTE_KNI_IOCTL_TEST):
- /* For test only, not used */
- break;
- case _IOC_NR(RTE_KNI_IOCTL_CREATE):
- ret = kni_ioctl_create(net, ioctl_num, ioctl_param);
- break;
- case _IOC_NR(RTE_KNI_IOCTL_RELEASE):
- ret = kni_ioctl_release(net, ioctl_num, ioctl_param);
- break;
- default:
- pr_debug("IOCTL default\n");
- break;
- }
-
- return ret;
-}
-
-static long
-kni_compat_ioctl(struct file *file, unsigned int ioctl_num,
- unsigned long ioctl_param)
-{
- /* 32 bits app on 64 bits OS to be supported later */
- pr_debug("Not implemented.\n");
-
- return -EINVAL;
-}
-
-static const struct file_operations kni_fops = {
- .owner = THIS_MODULE,
- .open = kni_open,
- .release = kni_release,
- .unlocked_ioctl = kni_ioctl,
- .compat_ioctl = kni_compat_ioctl,
-};
-
-static struct miscdevice kni_misc = {
- .minor = MISC_DYNAMIC_MINOR,
- .name = KNI_DEVICE,
- .fops = &kni_fops,
-};
-
-static int __init
-kni_parse_kthread_mode(void)
-{
- if (!kthread_mode)
- return 0;
-
- if (strcmp(kthread_mode, "single") == 0)
- return 0;
- else if (strcmp(kthread_mode, "multiple") == 0)
- multiple_kthread_on = 1;
- else
- return -1;
-
- return 0;
-}
-
-static int __init
-kni_parse_carrier_state(void)
-{
- if (!carrier) {
- kni_dflt_carrier = 0;
- return 0;
- }
-
- if (strcmp(carrier, "off") == 0)
- kni_dflt_carrier = 0;
- else if (strcmp(carrier, "on") == 0)
- kni_dflt_carrier = 1;
- else
- return -1;
-
- return 0;
-}
-
-static int __init
-kni_parse_bifurcated_support(void)
-{
- if (!enable_bifurcated) {
- bifurcated_support = 0;
- return 0;
- }
-
- if (strcmp(enable_bifurcated, "on") == 0)
- bifurcated_support = 1;
- else
- return -1;
-
- return 0;
-}
-
-static int __init
-kni_init(void)
-{
- int rc;
-
- if (kni_parse_kthread_mode() < 0) {
- pr_err("Invalid parameter for kthread_mode\n");
- return -EINVAL;
- }
-
- if (multiple_kthread_on == 0)
- pr_debug("Single kernel thread for all KNI devices\n");
- else
- pr_debug("Multiple kernel thread mode enabled\n");
-
- if (kni_parse_carrier_state() < 0) {
- pr_err("Invalid parameter for carrier\n");
- return -EINVAL;
- }
-
- if (kni_dflt_carrier == 0)
- pr_debug("Default carrier state set to off.\n");
- else
- pr_debug("Default carrier state set to on.\n");
-
- if (kni_parse_bifurcated_support() < 0) {
- pr_err("Invalid parameter for bifurcated support\n");
- return -EINVAL;
- }
- if (bifurcated_support == 1)
- pr_debug("bifurcated support is enabled.\n");
-
- if (min_scheduling_interval < 0 || max_scheduling_interval < 0 ||
- min_scheduling_interval > KNI_KTHREAD_MAX_RESCHEDULE_INTERVAL ||
- max_scheduling_interval > KNI_KTHREAD_MAX_RESCHEDULE_INTERVAL ||
- min_scheduling_interval >= max_scheduling_interval) {
- pr_err("Invalid parameters for scheduling interval\n");
- return -EINVAL;
- }
-
-#ifdef HAVE_SIMPLIFIED_PERNET_OPERATIONS
- rc = register_pernet_subsys(&kni_net_ops);
-#else
- rc = register_pernet_gen_subsys(&kni_net_id, &kni_net_ops);
-#endif
- if (rc)
- return -EPERM;
-
- rc = misc_register(&kni_misc);
- if (rc != 0) {
- pr_err("Misc registration failed\n");
- goto out;
- }
-
- /* Configure the lo mode according to the input parameter */
- kni_net_config_lo_mode(lo_mode);
-
- return 0;
-
-out:
-#ifdef HAVE_SIMPLIFIED_PERNET_OPERATIONS
- unregister_pernet_subsys(&kni_net_ops);
-#else
- unregister_pernet_gen_subsys(kni_net_id, &kni_net_ops);
-#endif
- return rc;
-}
-
-static void __exit
-kni_exit(void)
-{
- misc_deregister(&kni_misc);
-#ifdef HAVE_SIMPLIFIED_PERNET_OPERATIONS
- unregister_pernet_subsys(&kni_net_ops);
-#else
- unregister_pernet_gen_subsys(kni_net_id, &kni_net_ops);
-#endif
-}
-
-module_init(kni_init);
-module_exit(kni_exit);
-
-module_param(lo_mode, charp, 0644);
-MODULE_PARM_DESC(lo_mode,
-"KNI loopback mode (default=lo_mode_none):\n"
-"\t\tlo_mode_none Kernel loopback disabled\n"
-"\t\tlo_mode_fifo Enable kernel loopback with fifo\n"
-"\t\tlo_mode_fifo_skb Enable kernel loopback with fifo and skb buffer\n"
-"\t\t"
-);
-
-module_param(kthread_mode, charp, 0644);
-MODULE_PARM_DESC(kthread_mode,
-"Kernel thread mode (default=single):\n"
-"\t\tsingle Single kernel thread mode enabled.\n"
-"\t\tmultiple Multiple kernel thread mode enabled.\n"
-"\t\t"
-);
-
-module_param(carrier, charp, 0644);
-MODULE_PARM_DESC(carrier,
-"Default carrier state for KNI interface (default=off):\n"
-"\t\toff Interfaces will be created with carrier state set to off.\n"
-"\t\ton Interfaces will be created with carrier state set to on.\n"
-"\t\t"
-);
-
-module_param(enable_bifurcated, charp, 0644);
-MODULE_PARM_DESC(enable_bifurcated,
-"Enable request processing support for bifurcated drivers, "
-"which means releasing rtnl_lock before calling userspace callback and "
-"supporting async requests (default=off):\n"
-"\t\ton Enable request processing support for bifurcated drivers.\n"
-"\t\t"
-);
-
-module_param(min_scheduling_interval, long, 0644);
-MODULE_PARM_DESC(min_scheduling_interval,
-"KNI thread min scheduling interval (default=100 microseconds)"
-);
-
-module_param(max_scheduling_interval, long, 0644);
-MODULE_PARM_DESC(max_scheduling_interval,
-"KNI thread max scheduling interval (default=200 microseconds)"
-);
diff --git a/kernel/linux/kni/kni_net.c b/kernel/linux/kni/kni_net.c
deleted file mode 100644
index 779ee3451a4c..000000000000
--- a/kernel/linux/kni/kni_net.c
+++ /dev/null
@@ -1,878 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Copyright(c) 2010-2014 Intel Corporation.
- */
-
-/*
- * This code is inspired from the book "Linux Device Drivers" by
- * Alessandro Rubini and Jonathan Corbet, published by O'Reilly & Associates
- */
-
-#include <linux/device.h>
-#include <linux/module.h>
-#include <linux/version.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h> /* eth_type_trans */
-#include <linux/ethtool.h>
-#include <linux/skbuff.h>
-#include <linux/kthread.h>
-#include <linux/delay.h>
-#include <linux/rtnetlink.h>
-
-#include <rte_kni_common.h>
-#include <kni_fifo.h>
-
-#include "compat.h"
-#include "kni_dev.h"
-
-#define WD_TIMEOUT 5 /*jiffies */
-
-#define KNI_WAIT_RESPONSE_TIMEOUT 300 /* 3 seconds */
-
-/* typedef for rx function */
-typedef void (*kni_net_rx_t)(struct kni_dev *kni);
-
-static void kni_net_rx_normal(struct kni_dev *kni);
-
-/* kni rx function pointer, with default to normal rx */
-static kni_net_rx_t kni_net_rx_func = kni_net_rx_normal;
-
-#ifdef HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
-/* iova to kernel virtual address */
-static inline void *
-iova2kva(struct kni_dev *kni, void *iova)
-{
- return phys_to_virt(iova_to_phys(kni->usr_tsk, (unsigned long)iova));
-}
-
-static inline void *
-iova2data_kva(struct kni_dev *kni, struct rte_kni_mbuf *m)
-{
- return phys_to_virt(iova_to_phys(kni->usr_tsk, m->buf_iova) +
- m->data_off);
-}
-#endif
-
-/* physical address to kernel virtual address */
-static void *
-pa2kva(void *pa)
-{
- return phys_to_virt((unsigned long)pa);
-}
-
-/* physical address to virtual address */
-static void *
-pa2va(void *pa, struct rte_kni_mbuf *m)
-{
- void *va;
-
- va = (void *)((unsigned long)pa +
- (unsigned long)m->buf_addr -
- (unsigned long)m->buf_iova);
- return va;
-}
-
-/* mbuf data kernel virtual address from mbuf kernel virtual address */
-static void *
-kva2data_kva(struct rte_kni_mbuf *m)
-{
- return phys_to_virt(m->buf_iova + m->data_off);
-}
-
-static inline void *
-get_kva(struct kni_dev *kni, void *pa)
-{
-#ifdef HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
- if (kni->iova_mode == 1)
- return iova2kva(kni, pa);
-#endif
- return pa2kva(pa);
-}
-
-static inline void *
-get_data_kva(struct kni_dev *kni, void *pkt_kva)
-{
-#ifdef HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
- if (kni->iova_mode == 1)
- return iova2data_kva(kni, pkt_kva);
-#endif
- return kva2data_kva(pkt_kva);
-}
-
-/*
- * It can be called to process the request.
- */
-static int
-kni_net_process_request(struct net_device *dev, struct rte_kni_request *req)
-{
- struct kni_dev *kni = netdev_priv(dev);
- int ret = -1;
- void *resp_va;
- uint32_t num;
- int ret_val;
-
- ASSERT_RTNL();
-
- if (bifurcated_support) {
- /* If we need to wait and RTNL mutex is held
- * drop the mutex and hold reference to keep device
- */
- if (req->async == 0) {
- dev_hold(dev);
- rtnl_unlock();
- }
- }
-
- mutex_lock(&kni->sync_lock);
-
- /* Construct data */
- memcpy(kni->sync_kva, req, sizeof(struct rte_kni_request));
- num = kni_fifo_put(kni->req_q, &kni->sync_va, 1);
- if (num < 1) {
- pr_err("Cannot send to req_q\n");
- ret = -EBUSY;
- goto fail;
- }
-
- if (bifurcated_support) {
- /* No result available since request is handled
- * asynchronously. set response to success.
- */
- if (req->async != 0) {
- req->result = 0;
- goto async;
- }
- }
-
- ret_val = wait_event_interruptible_timeout(kni->wq,
- kni_fifo_count(kni->resp_q), 3 * HZ);
- if (signal_pending(current) || ret_val <= 0) {
- ret = -ETIME;
- goto fail;
- }
- num = kni_fifo_get(kni->resp_q, (void **)&resp_va, 1);
- if (num != 1 || resp_va != kni->sync_va) {
- /* This should never happen */
- pr_err("No data in resp_q\n");
- ret = -ENODATA;
- goto fail;
- }
-
- memcpy(req, kni->sync_kva, sizeof(struct rte_kni_request));
-async:
- ret = 0;
-
-fail:
- mutex_unlock(&kni->sync_lock);
- if (bifurcated_support) {
- if (req->async == 0) {
- rtnl_lock();
- dev_put(dev);
- }
- }
- return ret;
-}
-
-/*
- * Open and close
- */
-static int
-kni_net_open(struct net_device *dev)
-{
- int ret;
- struct rte_kni_request req;
-
- netif_start_queue(dev);
- if (kni_dflt_carrier == 1)
- netif_carrier_on(dev);
- else
- netif_carrier_off(dev);
-
- memset(&req, 0, sizeof(req));
- req.req_id = RTE_KNI_REQ_CFG_NETWORK_IF;
-
- /* Setting if_up to non-zero means up */
- req.if_up = 1;
- ret = kni_net_process_request(dev, &req);
-
- return (ret == 0) ? req.result : ret;
-}
-
-static int
-kni_net_release(struct net_device *dev)
-{
- int ret;
- struct rte_kni_request req;
-
- netif_stop_queue(dev); /* can't transmit any more */
- netif_carrier_off(dev);
-
- memset(&req, 0, sizeof(req));
- req.req_id = RTE_KNI_REQ_CFG_NETWORK_IF;
-
- /* Setting if_up to 0 means down */
- req.if_up = 0;
-
- if (bifurcated_support) {
- /* request async because of the deadlock problem */
- req.async = 1;
- }
-
- ret = kni_net_process_request(dev, &req);
-
- return (ret == 0) ? req.result : ret;
-}
-
-static void
-kni_fifo_trans_pa2va(struct kni_dev *kni,
- struct rte_kni_fifo *src_pa, struct rte_kni_fifo *dst_va)
-{
- uint32_t ret, i, num_dst, num_rx;
- struct rte_kni_mbuf *kva, *prev_kva;
- int nb_segs;
- int kva_nb_segs;
-
- do {
- num_dst = kni_fifo_free_count(dst_va);
- if (num_dst == 0)
- return;
-
- num_rx = min_t(uint32_t, num_dst, MBUF_BURST_SZ);
-
- num_rx = kni_fifo_get(src_pa, kni->pa, num_rx);
- if (num_rx == 0)
- return;
-
- for (i = 0; i < num_rx; i++) {
- kva = get_kva(kni, kni->pa[i]);
- kni->va[i] = pa2va(kni->pa[i], kva);
-
- kva_nb_segs = kva->nb_segs;
- for (nb_segs = 0; nb_segs < kva_nb_segs; nb_segs++) {
- if (!kva->next)
- break;
-
- prev_kva = kva;
- kva = get_kva(kni, kva->next);
- /* Convert physical address to virtual address */
- prev_kva->next = pa2va(prev_kva->next, kva);
- }
- }
-
- ret = kni_fifo_put(dst_va, kni->va, num_rx);
- if (ret != num_rx) {
- /* Failing should not happen */
- pr_err("Fail to enqueue entries into dst_va\n");
- return;
- }
- } while (1);
-}
-
-/* Try to release mbufs when kni release */
-void kni_net_release_fifo_phy(struct kni_dev *kni)
-{
- /* release rx_q first, because it can't release in userspace */
- kni_fifo_trans_pa2va(kni, kni->rx_q, kni->free_q);
- /* release alloc_q for speeding up kni release in userspace */
- kni_fifo_trans_pa2va(kni, kni->alloc_q, kni->free_q);
-}
-
-/*
- * Configuration changes (passed on by ifconfig)
- */
-static int
-kni_net_config(struct net_device *dev, struct ifmap *map)
-{
- if (dev->flags & IFF_UP) /* can't act on a running interface */
- return -EBUSY;
-
- /* ignore other fields */
- return 0;
-}
-
-/*
- * Transmit a packet (called by the kernel)
- */
-static int
-kni_net_tx(struct sk_buff *skb, struct net_device *dev)
-{
- int len = 0;
- uint32_t ret;
- struct kni_dev *kni = netdev_priv(dev);
- struct rte_kni_mbuf *pkt_kva = NULL;
- void *pkt_pa = NULL;
- void *pkt_va = NULL;
-
- /* save the timestamp */
-#ifdef HAVE_TRANS_START_HELPER
- netif_trans_update(dev);
-#else
- dev->trans_start = jiffies;
-#endif
-
- /* Check if the length of skb is less than mbuf size */
- if (skb->len > kni->mbuf_size)
- goto drop;
-
- /**
- * Check if it has at least one free entry in tx_q and
- * one entry in alloc_q.
- */
- if (kni_fifo_free_count(kni->tx_q) == 0 ||
- kni_fifo_count(kni->alloc_q) == 0) {
- /**
- * If no free entry in tx_q or no entry in alloc_q,
- * drops skb and goes out.
- */
- goto drop;
- }
-
- /* dequeue a mbuf from alloc_q */
- ret = kni_fifo_get(kni->alloc_q, &pkt_pa, 1);
- if (likely(ret == 1)) {
- void *data_kva;
-
- pkt_kva = get_kva(kni, pkt_pa);
- data_kva = get_data_kva(kni, pkt_kva);
- pkt_va = pa2va(pkt_pa, pkt_kva);
-
- len = skb->len;
- memcpy(data_kva, skb->data, len);
- if (unlikely(len < ETH_ZLEN)) {
- memset(data_kva + len, 0, ETH_ZLEN - len);
- len = ETH_ZLEN;
- }
- pkt_kva->pkt_len = len;
- pkt_kva->data_len = len;
-
- /* enqueue mbuf into tx_q */
- ret = kni_fifo_put(kni->tx_q, &pkt_va, 1);
- if (unlikely(ret != 1)) {
- /* Failing should not happen */
- pr_err("Fail to enqueue mbuf into tx_q\n");
- goto drop;
- }
- } else {
- /* Failing should not happen */
- pr_err("Fail to dequeue mbuf from alloc_q\n");
- goto drop;
- }
-
- /* Free skb and update statistics */
- dev_kfree_skb(skb);
- dev->stats.tx_bytes += len;
- dev->stats.tx_packets++;
-
- return NETDEV_TX_OK;
-
-drop:
- /* Free skb and update statistics */
- dev_kfree_skb(skb);
- dev->stats.tx_dropped++;
-
- return NETDEV_TX_OK;
-}
-
-/*
- * RX: normal working mode
- */
-static void
-kni_net_rx_normal(struct kni_dev *kni)
-{
- uint32_t ret;
- uint32_t len;
- uint32_t i, num_rx, num_fq;
- struct rte_kni_mbuf *kva, *prev_kva;
- void *data_kva;
- struct sk_buff *skb;
- struct net_device *dev = kni->net_dev;
-
- /* Get the number of free entries in free_q */
- num_fq = kni_fifo_free_count(kni->free_q);
- if (num_fq == 0) {
- /* No room on the free_q, bail out */
- return;
- }
-
- /* Calculate the number of entries to dequeue from rx_q */
- num_rx = min_t(uint32_t, num_fq, MBUF_BURST_SZ);
-
- /* Burst dequeue from rx_q */
- num_rx = kni_fifo_get(kni->rx_q, kni->pa, num_rx);
- if (num_rx == 0)
- return;
-
- /* Transfer received packets to netif */
- for (i = 0; i < num_rx; i++) {
- kva = get_kva(kni, kni->pa[i]);
- len = kva->pkt_len;
- data_kva = get_data_kva(kni, kva);
- kni->va[i] = pa2va(kni->pa[i], kva);
-
- skb = netdev_alloc_skb(dev, len);
- if (!skb) {
- /* Update statistics */
- dev->stats.rx_dropped++;
- continue;
- }
-
- if (kva->nb_segs == 1) {
- memcpy(skb_put(skb, len), data_kva, len);
- } else {
- int nb_segs;
- int kva_nb_segs = kva->nb_segs;
-
- for (nb_segs = 0; nb_segs < kva_nb_segs; nb_segs++) {
- memcpy(skb_put(skb, kva->data_len),
- data_kva, kva->data_len);
-
- if (!kva->next)
- break;
-
- prev_kva = kva;
- kva = get_kva(kni, kva->next);
- data_kva = kva2data_kva(kva);
- /* Convert physical address to virtual address */
- prev_kva->next = pa2va(prev_kva->next, kva);
- }
- }
-
- skb->protocol = eth_type_trans(skb, dev);
- skb->ip_summed = CHECKSUM_UNNECESSARY;
-
- /* Call netif interface */
-#ifdef HAVE_NETIF_RX_NI
- netif_rx_ni(skb);
-#else
- netif_rx(skb);
-#endif
-
- /* Update statistics */
- dev->stats.rx_bytes += len;
- dev->stats.rx_packets++;
- }
-
- /* Burst enqueue mbufs into free_q */
- ret = kni_fifo_put(kni->free_q, kni->va, num_rx);
- if (ret != num_rx)
- /* Failing should not happen */
- pr_err("Fail to enqueue entries into free_q\n");
-}
-
-/*
- * RX: loopback with enqueue/dequeue fifos.
- */
-static void
-kni_net_rx_lo_fifo(struct kni_dev *kni)
-{
- uint32_t ret;
- uint32_t len;
- uint32_t i, num, num_rq, num_tq, num_aq, num_fq;
- struct rte_kni_mbuf *kva, *next_kva;
- void *data_kva;
- struct rte_kni_mbuf *alloc_kva;
- void *alloc_data_kva;
- struct net_device *dev = kni->net_dev;
-
- /* Get the number of entries in rx_q */
- num_rq = kni_fifo_count(kni->rx_q);
-
- /* Get the number of free entries in tx_q */
- num_tq = kni_fifo_free_count(kni->tx_q);
-
- /* Get the number of entries in alloc_q */
- num_aq = kni_fifo_count(kni->alloc_q);
-
- /* Get the number of free entries in free_q */
- num_fq = kni_fifo_free_count(kni->free_q);
-
- /* Calculate the number of entries to be dequeued from rx_q */
- num = min(num_rq, num_tq);
- num = min(num, num_aq);
- num = min(num, num_fq);
- num = min_t(uint32_t, num, MBUF_BURST_SZ);
-
- /* Return if no entry to dequeue from rx_q */
- if (num == 0)
- return;
-
- /* Burst dequeue from rx_q */
- ret = kni_fifo_get(kni->rx_q, kni->pa, num);
- if (ret == 0)
- return; /* Failing should not happen */
-
- /* Dequeue entries from alloc_q */
- ret = kni_fifo_get(kni->alloc_q, kni->alloc_pa, num);
- if (ret) {
- num = ret;
- /* Copy mbufs */
- for (i = 0; i < num; i++) {
- kva = get_kva(kni, kni->pa[i]);
- len = kva->data_len;
- data_kva = get_data_kva(kni, kva);
- kni->va[i] = pa2va(kni->pa[i], kva);
-
- while (kva->next) {
- next_kva = get_kva(kni, kva->next);
- /* Convert physical address to virtual address */
- kva->next = pa2va(kva->next, next_kva);
- kva = next_kva;
- }
-
- alloc_kva = get_kva(kni, kni->alloc_pa[i]);
- alloc_data_kva = get_data_kva(kni, alloc_kva);
- kni->alloc_va[i] = pa2va(kni->alloc_pa[i], alloc_kva);
-
- memcpy(alloc_data_kva, data_kva, len);
- alloc_kva->pkt_len = len;
- alloc_kva->data_len = len;
-
- dev->stats.tx_bytes += len;
- dev->stats.rx_bytes += len;
- }
-
- /* Burst enqueue mbufs into tx_q */
- ret = kni_fifo_put(kni->tx_q, kni->alloc_va, num);
- if (ret != num)
- /* Failing should not happen */
- pr_err("Fail to enqueue mbufs into tx_q\n");
- }
-
- /* Burst enqueue mbufs into free_q */
- ret = kni_fifo_put(kni->free_q, kni->va, num);
- if (ret != num)
- /* Failing should not happen */
- pr_err("Fail to enqueue mbufs into free_q\n");
-
- /**
- * Update statistics; enqueue/dequeue failure is impossible,
- * as all queues were checked first.
- */
- dev->stats.tx_packets += num;
- dev->stats.rx_packets += num;
-}
-
-/*
- * RX: loopback with enqueue/dequeue fifos and sk buffer copies.
- */
-static void
-kni_net_rx_lo_fifo_skb(struct kni_dev *kni)
-{
- uint32_t ret;
- uint32_t len;
- uint32_t i, num_rq, num_fq, num;
- struct rte_kni_mbuf *kva, *prev_kva;
- void *data_kva;
- struct sk_buff *skb;
- struct net_device *dev = kni->net_dev;
-
- /* Get the number of entries in rx_q */
- num_rq = kni_fifo_count(kni->rx_q);
-
- /* Get the number of free entries in free_q */
- num_fq = kni_fifo_free_count(kni->free_q);
-
- /* Calculate the number of entries to dequeue from rx_q */
- num = min(num_rq, num_fq);
- num = min_t(uint32_t, num, MBUF_BURST_SZ);
-
- /* Return if no entry to dequeue from rx_q */
- if (num == 0)
- return;
-
- /* Burst dequeue mbufs from rx_q */
- ret = kni_fifo_get(kni->rx_q, kni->pa, num);
- if (ret == 0)
- return;
-
- /* Copy mbufs to sk buffer and then call tx interface */
- for (i = 0; i < num; i++) {
- kva = get_kva(kni, kni->pa[i]);
- len = kva->pkt_len;
- data_kva = get_data_kva(kni, kva);
- kni->va[i] = pa2va(kni->pa[i], kva);
-
- skb = netdev_alloc_skb(dev, len);
- if (skb) {
- memcpy(skb_put(skb, len), data_kva, len);
- skb->ip_summed = CHECKSUM_UNNECESSARY;
- dev_kfree_skb(skb);
- }
-
- /* Simulate real usage, allocate/copy skb twice */
- skb = netdev_alloc_skb(dev, len);
- if (skb == NULL) {
- dev->stats.rx_dropped++;
- continue;
- }
-
- if (kva->nb_segs == 1) {
- memcpy(skb_put(skb, len), data_kva, len);
- } else {
- int nb_segs;
- int kva_nb_segs = kva->nb_segs;
-
- for (nb_segs = 0; nb_segs < kva_nb_segs; nb_segs++) {
- memcpy(skb_put(skb, kva->data_len),
- data_kva, kva->data_len);
-
- if (!kva->next)
- break;
-
- prev_kva = kva;
- kva = get_kva(kni, kva->next);
- data_kva = get_data_kva(kni, kva);
- /* Convert physical address to virtual address */
- prev_kva->next = pa2va(prev_kva->next, kva);
- }
- }
-
- skb->ip_summed = CHECKSUM_UNNECESSARY;
-
- dev->stats.rx_bytes += len;
- dev->stats.rx_packets++;
-
- /* call tx interface */
- kni_net_tx(skb, dev);
- }
-
- /* enqueue all the mbufs from rx_q into free_q */
- ret = kni_fifo_put(kni->free_q, kni->va, num);
- if (ret != num)
- /* Failing should not happen */
- pr_err("Fail to enqueue mbufs into free_q\n");
-}
-
-/* rx interface */
-void
-kni_net_rx(struct kni_dev *kni)
-{
- /**
- * No need to check for a NULL pointer,
- * as the function pointer has a default value
- */
- (*kni_net_rx_func)(kni);
-}
-
-/*
- * Deal with a transmit timeout.
- */
-#ifdef HAVE_TX_TIMEOUT_TXQUEUE
-static void
-kni_net_tx_timeout(struct net_device *dev, unsigned int txqueue)
-#else
-static void
-kni_net_tx_timeout(struct net_device *dev)
-#endif
-{
- pr_debug("Transmit timeout at %ld, latency %ld\n", jiffies,
- jiffies - dev_trans_start(dev));
-
- dev->stats.tx_errors++;
- netif_wake_queue(dev);
-}
-
-static int
-kni_net_change_mtu(struct net_device *dev, int new_mtu)
-{
- int ret;
- struct rte_kni_request req;
-
- pr_debug("kni_net_change_mtu new mtu %d to be set\n", new_mtu);
-
- memset(&req, 0, sizeof(req));
- req.req_id = RTE_KNI_REQ_CHANGE_MTU;
- req.new_mtu = new_mtu;
- ret = kni_net_process_request(dev, &req);
- if (ret == 0 && req.result == 0)
- dev->mtu = new_mtu;
-
- return (ret == 0) ? req.result : ret;
-}
-
-static void
-kni_net_change_rx_flags(struct net_device *netdev, int flags)
-{
- struct rte_kni_request req;
-
- memset(&req, 0, sizeof(req));
-
- if (flags & IFF_ALLMULTI) {
- req.req_id = RTE_KNI_REQ_CHANGE_ALLMULTI;
-
- if (netdev->flags & IFF_ALLMULTI)
- req.allmulti = 1;
- else
- req.allmulti = 0;
- }
-
- if (flags & IFF_PROMISC) {
- req.req_id = RTE_KNI_REQ_CHANGE_PROMISC;
-
- if (netdev->flags & IFF_PROMISC)
- req.promiscusity = 1;
- else
- req.promiscusity = 0;
- }
-
- kni_net_process_request(netdev, &req);
-}
-
-/*
- * Checks if the user space application provided the resp message
- */
-void
-kni_net_poll_resp(struct kni_dev *kni)
-{
- if (kni_fifo_count(kni->resp_q))
- wake_up_interruptible(&kni->wq);
-}
-
-/*
- * Fill the eth header
- */
-static int
-kni_net_header(struct sk_buff *skb, struct net_device *dev,
- unsigned short type, const void *daddr,
- const void *saddr, uint32_t len)
-{
- struct ethhdr *eth = (struct ethhdr *) skb_push(skb, ETH_HLEN);
-
- memcpy(eth->h_source, saddr ? saddr : dev->dev_addr, dev->addr_len);
- memcpy(eth->h_dest, daddr ? daddr : dev->dev_addr, dev->addr_len);
- eth->h_proto = htons(type);
-
- return dev->hard_header_len;
-}
-
-/*
- * Re-fill the eth header
- */
-#ifdef HAVE_REBUILD_HEADER
-static int
-kni_net_rebuild_header(struct sk_buff *skb)
-{
- struct net_device *dev = skb->dev;
- struct ethhdr *eth = (struct ethhdr *) skb->data;
-
- memcpy(eth->h_source, dev->dev_addr, dev->addr_len);
- memcpy(eth->h_dest, dev->dev_addr, dev->addr_len);
-
- return 0;
-}
-#endif /* < 4.1.0 */
-
-/**
- * kni_net_set_mac - Change the Ethernet Address of the KNI NIC
- * @netdev: network interface device structure
- * @p: pointer to an address structure
- *
- * Returns 0 on success, negative on failure
- **/
-static int
-kni_net_set_mac(struct net_device *netdev, void *p)
-{
- int ret;
- struct rte_kni_request req;
- struct sockaddr *addr = p;
-
- memset(&req, 0, sizeof(req));
- req.req_id = RTE_KNI_REQ_CHANGE_MAC_ADDR;
-
- if (!is_valid_ether_addr((unsigned char *)(addr->sa_data)))
- return -EADDRNOTAVAIL;
-
- memcpy(req.mac_addr, addr->sa_data, netdev->addr_len);
-#ifdef HAVE_ETH_HW_ADDR_SET
- eth_hw_addr_set(netdev, addr->sa_data);
-#else
- memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len);
-#endif
-
- ret = kni_net_process_request(netdev, &req);
-
- return (ret == 0 ? req.result : ret);
-}
-
-#ifdef HAVE_CHANGE_CARRIER_CB
-static int
-kni_net_change_carrier(struct net_device *dev, bool new_carrier)
-{
- if (new_carrier)
- netif_carrier_on(dev);
- else
- netif_carrier_off(dev);
- return 0;
-}
-#endif
-
-static const struct header_ops kni_net_header_ops = {
- .create = kni_net_header,
- .parse = eth_header_parse,
-#ifdef HAVE_REBUILD_HEADER
- .rebuild = kni_net_rebuild_header,
-#endif /* < 4.1.0 */
- .cache = NULL, /* disable caching */
-};
-
-static const struct net_device_ops kni_net_netdev_ops = {
- .ndo_open = kni_net_open,
- .ndo_stop = kni_net_release,
- .ndo_set_config = kni_net_config,
- .ndo_change_rx_flags = kni_net_change_rx_flags,
- .ndo_start_xmit = kni_net_tx,
- .ndo_change_mtu = kni_net_change_mtu,
- .ndo_tx_timeout = kni_net_tx_timeout,
- .ndo_set_mac_address = kni_net_set_mac,
-#ifdef HAVE_CHANGE_CARRIER_CB
- .ndo_change_carrier = kni_net_change_carrier,
-#endif
-};
-
-static void kni_get_drvinfo(struct net_device *dev,
- struct ethtool_drvinfo *info)
-{
- strlcpy(info->version, KNI_VERSION, sizeof(info->version));
- strlcpy(info->driver, "kni", sizeof(info->driver));
-}
-
-static const struct ethtool_ops kni_net_ethtool_ops = {
- .get_drvinfo = kni_get_drvinfo,
- .get_link = ethtool_op_get_link,
-};
-
-void
-kni_net_init(struct net_device *dev)
-{
- struct kni_dev *kni = netdev_priv(dev);
-
- init_waitqueue_head(&kni->wq);
- mutex_init(&kni->sync_lock);
-
- ether_setup(dev); /* assign some of the fields */
- dev->netdev_ops = &kni_net_netdev_ops;
- dev->header_ops = &kni_net_header_ops;
- dev->ethtool_ops = &kni_net_ethtool_ops;
- dev->watchdog_timeo = WD_TIMEOUT;
-}
-
-void
-kni_net_config_lo_mode(char *lo_str)
-{
- if (!lo_str) {
- pr_debug("loopback disabled");
- return;
- }
-
- if (!strcmp(lo_str, "lo_mode_none"))
- pr_debug("loopback disabled");
- else if (!strcmp(lo_str, "lo_mode_fifo")) {
- pr_debug("loopback mode=lo_mode_fifo enabled");
- kni_net_rx_func = kni_net_rx_lo_fifo;
- } else if (!strcmp(lo_str, "lo_mode_fifo_skb")) {
- pr_debug("loopback mode=lo_mode_fifo_skb enabled");
- kni_net_rx_func = kni_net_rx_lo_fifo_skb;
- } else {
- pr_debug("Unknown loopback parameter, disabled");
- }
-}
diff --git a/kernel/linux/kni/meson.build b/kernel/linux/kni/meson.build
deleted file mode 100644
index 4c90069e9989..000000000000
--- a/kernel/linux/kni/meson.build
+++ /dev/null
@@ -1,41 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Luca Boccassi <bluca@debian.org>
-
-# For SUSE build check function arguments of ndo_tx_timeout API
-# Ref: https://jira.devtools.intel.com/browse/DPDK-29263
-kmod_cflags = ''
-file_path = kernel_source_dir + '/include/linux/netdevice.h'
-run_cmd = run_command('grep', 'ndo_tx_timeout', file_path, check: false)
-
-if run_cmd.stdout().contains('txqueue') == true
- kmod_cflags = '-DHAVE_ARG_TX_QUEUE'
-endif
-
-
-kni_mkfile = custom_target('rte_kni_makefile',
- output: 'Makefile',
- command: ['touch', '@OUTPUT@'])
-
-kni_sources = files(
- 'kni_misc.c',
- 'kni_net.c',
- 'Kbuild',
-)
-
-custom_target('rte_kni',
- input: kni_sources,
- output: 'rte_kni.ko',
- command: ['make', '-j4', '-C', kernel_build_dir,
- 'M=' + meson.current_build_dir(),
- 'src=' + meson.current_source_dir(),
- ' '.join(['MODULE_CFLAGS=', kmod_cflags,'-include '])
- + dpdk_source_root + '/config/rte_config.h' +
- ' -I' + dpdk_source_root + '/lib/eal/include' +
- ' -I' + dpdk_source_root + '/lib/kni' +
- ' -I' + dpdk_build_root +
- ' -I' + meson.current_source_dir(),
- 'modules'] + cross_args,
- depends: kni_mkfile,
- install: install,
- install_dir: kernel_install_dir,
- build_by_default: get_option('enable_kmods'))
diff --git a/kernel/linux/meson.build b/kernel/linux/meson.build
index 16a094899446..8d47074621f7 100644
--- a/kernel/linux/meson.build
+++ b/kernel/linux/meson.build
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2018 Intel Corporation
-subdirs = ['kni']
+subdirs = []
kernel_build_dir = get_option('kernel_dir')
kernel_source_dir = get_option('kernel_dir')
diff --git a/lib/eal/common/eal_common_log.c b/lib/eal/common/eal_common_log.c
index bd7b188ceb4a..0a1d219d6924 100644
--- a/lib/eal/common/eal_common_log.c
+++ b/lib/eal/common/eal_common_log.c
@@ -356,7 +356,6 @@ static const struct logtype logtype_strings[] = {
{RTE_LOGTYPE_PMD, "pmd"},
{RTE_LOGTYPE_HASH, "lib.hash"},
{RTE_LOGTYPE_LPM, "lib.lpm"},
- {RTE_LOGTYPE_KNI, "lib.kni"},
{RTE_LOGTYPE_ACL, "lib.acl"},
{RTE_LOGTYPE_POWER, "lib.power"},
{RTE_LOGTYPE_METER, "lib.meter"},
diff --git a/lib/eal/include/rte_log.h b/lib/eal/include/rte_log.h
index 6d2b0856a565..bdefff2a5933 100644
--- a/lib/eal/include/rte_log.h
+++ b/lib/eal/include/rte_log.h
@@ -34,7 +34,7 @@ extern "C" {
#define RTE_LOGTYPE_PMD 5 /**< Log related to poll mode driver. */
#define RTE_LOGTYPE_HASH 6 /**< Log related to hash table. */
#define RTE_LOGTYPE_LPM 7 /**< Log related to LPM. */
-#define RTE_LOGTYPE_KNI 8 /**< Log related to KNI. */
+ /* was RTE_LOGTYPE_KNI */
#define RTE_LOGTYPE_ACL 9 /**< Log related to ACL. */
#define RTE_LOGTYPE_POWER 10 /**< Log related to power. */
#define RTE_LOGTYPE_METER 11 /**< Log related to QoS meter. */
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index c6efd920145c..a1fefcd9d83a 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -1084,11 +1084,6 @@ rte_eal_init(int argc, char **argv)
*/
iova_mode = RTE_IOVA_VA;
RTE_LOG(DEBUG, EAL, "Physical addresses are unavailable, selecting IOVA as VA mode.\n");
-#if defined(RTE_LIB_KNI) && LINUX_VERSION_CODE >= KERNEL_VERSION(4, 10, 0)
- } else if (rte_eal_check_module("rte_kni") == 1) {
- iova_mode = RTE_IOVA_PA;
- RTE_LOG(DEBUG, EAL, "KNI is loaded, selecting IOVA as PA mode for better KNI performance.\n");
-#endif
} else if (is_iommu_enabled()) {
/* we have an IOMMU, pick IOVA as VA mode */
iova_mode = RTE_IOVA_VA;
@@ -1101,20 +1096,6 @@ rte_eal_init(int argc, char **argv)
RTE_LOG(DEBUG, EAL, "IOMMU is not available, selecting IOVA as PA mode.\n");
}
}
-#if defined(RTE_LIB_KNI) && LINUX_VERSION_CODE < KERNEL_VERSION(4, 10, 0)
- /* Workaround for KNI which requires physical address to work
- * in kernels < 4.10
- */
- if (iova_mode == RTE_IOVA_VA &&
- rte_eal_check_module("rte_kni") == 1) {
- if (phys_addrs) {
- iova_mode = RTE_IOVA_PA;
- RTE_LOG(WARNING, EAL, "Forcing IOVA as 'PA' because KNI module is loaded\n");
- } else {
- RTE_LOG(DEBUG, EAL, "KNI can not work since physical addresses are unavailable\n");
- }
- }
-#endif
rte_eal_get_configuration()->iova_mode = iova_mode;
} else {
rte_eal_get_configuration()->iova_mode =
diff --git a/lib/kni/meson.build b/lib/kni/meson.build
deleted file mode 100644
index 5ce410f7f2d2..000000000000
--- a/lib/kni/meson.build
+++ /dev/null
@@ -1,21 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017 Intel Corporation
-
-if is_windows
- build = false
- reason = 'not supported on Windows'
- subdir_done()
-endif
-
-if dpdk_conf.get('RTE_IOVA_IN_MBUF') == 0
- build = false
- reason = 'requires IOVA in mbuf (set enable_iova_as_pa option)'
-endif
-
-if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
- build = false
- reason = 'only supported on 64-bit Linux'
-endif
-sources = files('rte_kni.c')
-headers = files('rte_kni.h', 'rte_kni_common.h')
-deps += ['ethdev', 'pci']
diff --git a/lib/kni/rte_kni.c b/lib/kni/rte_kni.c
deleted file mode 100644
index bfa6a001ff59..000000000000
--- a/lib/kni/rte_kni.c
+++ /dev/null
@@ -1,843 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#ifndef RTE_EXEC_ENV_LINUX
-#error "KNI is not supported"
-#endif
-
-#include <string.h>
-#include <fcntl.h>
-#include <unistd.h>
-#include <sys/ioctl.h>
-#include <linux/version.h>
-
-#include <rte_string_fns.h>
-#include <rte_ethdev.h>
-#include <rte_malloc.h>
-#include <rte_log.h>
-#include <rte_kni.h>
-#include <rte_memzone.h>
-#include <rte_tailq.h>
-#include <rte_eal_memconfig.h>
-#include <rte_kni_common.h>
-#include "rte_kni_fifo.h"
-
-#define MAX_MBUF_BURST_NUM 32
-
-/* Maximum number of ring entries */
-#define KNI_FIFO_COUNT_MAX 1024
-#define KNI_FIFO_SIZE (KNI_FIFO_COUNT_MAX * sizeof(void *) + \
- sizeof(struct rte_kni_fifo))
-
-#define KNI_REQUEST_MBUF_NUM_MAX 32
-
-#define KNI_MEM_CHECK(cond, fail) do { if (cond) goto fail; } while (0)
-
-#define KNI_MZ_NAME_FMT "kni_info_%s"
-#define KNI_TX_Q_MZ_NAME_FMT "kni_tx_%s"
-#define KNI_RX_Q_MZ_NAME_FMT "kni_rx_%s"
-#define KNI_ALLOC_Q_MZ_NAME_FMT "kni_alloc_%s"
-#define KNI_FREE_Q_MZ_NAME_FMT "kni_free_%s"
-#define KNI_REQ_Q_MZ_NAME_FMT "kni_req_%s"
-#define KNI_RESP_Q_MZ_NAME_FMT "kni_resp_%s"
-#define KNI_SYNC_ADDR_MZ_NAME_FMT "kni_sync_%s"
-
-TAILQ_HEAD(rte_kni_list, rte_tailq_entry);
-
-static struct rte_tailq_elem rte_kni_tailq = {
- .name = "RTE_KNI",
-};
-EAL_REGISTER_TAILQ(rte_kni_tailq)
-
-/**
- * KNI context
- */
-struct rte_kni {
- char name[RTE_KNI_NAMESIZE]; /**< KNI interface name */
- uint16_t group_id; /**< Group ID of KNI devices */
- uint32_t slot_id; /**< KNI pool slot ID */
- struct rte_mempool *pktmbuf_pool; /**< pkt mbuf mempool */
- unsigned int mbuf_size; /**< mbuf size */
-
- const struct rte_memzone *m_tx_q; /**< TX queue memzone */
- const struct rte_memzone *m_rx_q; /**< RX queue memzone */
- const struct rte_memzone *m_alloc_q;/**< Alloc queue memzone */
- const struct rte_memzone *m_free_q; /**< Free queue memzone */
-
- struct rte_kni_fifo *tx_q; /**< TX queue */
- struct rte_kni_fifo *rx_q; /**< RX queue */
- struct rte_kni_fifo *alloc_q; /**< Allocated mbufs queue */
- struct rte_kni_fifo *free_q; /**< To be freed mbufs queue */
-
- const struct rte_memzone *m_req_q; /**< Request queue memzone */
- const struct rte_memzone *m_resp_q; /**< Response queue memzone */
- const struct rte_memzone *m_sync_addr;/**< Sync addr memzone */
-
- /* For request & response */
- struct rte_kni_fifo *req_q; /**< Request queue */
- struct rte_kni_fifo *resp_q; /**< Response queue */
- void *sync_addr; /**< Req/Resp Mem address */
-
- struct rte_kni_ops ops; /**< operations for request */
-};
-
-enum kni_ops_status {
- KNI_REQ_NO_REGISTER = 0,
- KNI_REQ_REGISTERED,
-};
-
-static void kni_free_mbufs(struct rte_kni *kni);
-static void kni_allocate_mbufs(struct rte_kni *kni);
-
-static volatile int kni_fd = -1;
-
-/* Shall be called before any allocation happens */
-int
-rte_kni_init(unsigned int max_kni_ifaces __rte_unused)
-{
- RTE_LOG(WARNING, KNI, "WARNING: KNI is deprecated and will be removed in DPDK 23.11\n");
-
-#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 10, 0)
- if (rte_eal_iova_mode() != RTE_IOVA_PA) {
- RTE_LOG(ERR, KNI, "KNI requires IOVA as PA\n");
- return -1;
- }
-#endif
-
- /* Check FD and open */
- if (kni_fd < 0) {
- kni_fd = open("/dev/" KNI_DEVICE, O_RDWR);
- if (kni_fd < 0) {
- RTE_LOG(ERR, KNI,
- "Can not open /dev/%s\n", KNI_DEVICE);
- return -1;
- }
- }
-
- return 0;
-}
-
-static struct rte_kni *
-__rte_kni_get(const char *name)
-{
- struct rte_kni *kni;
- struct rte_tailq_entry *te;
- struct rte_kni_list *kni_list;
-
- kni_list = RTE_TAILQ_CAST(rte_kni_tailq.head, rte_kni_list);
-
- TAILQ_FOREACH(te, kni_list, next) {
- kni = te->data;
- if (strncmp(name, kni->name, RTE_KNI_NAMESIZE) == 0)
- break;
- }
-
- if (te == NULL)
- kni = NULL;
-
- return kni;
-}
-
-static int
-kni_reserve_mz(struct rte_kni *kni)
-{
- char mz_name[RTE_MEMZONE_NAMESIZE];
-
- snprintf(mz_name, RTE_MEMZONE_NAMESIZE, KNI_TX_Q_MZ_NAME_FMT, kni->name);
- kni->m_tx_q = rte_memzone_reserve(mz_name, KNI_FIFO_SIZE, SOCKET_ID_ANY,
- RTE_MEMZONE_IOVA_CONTIG);
- KNI_MEM_CHECK(kni->m_tx_q == NULL, tx_q_fail);
-
- snprintf(mz_name, RTE_MEMZONE_NAMESIZE, KNI_RX_Q_MZ_NAME_FMT, kni->name);
- kni->m_rx_q = rte_memzone_reserve(mz_name, KNI_FIFO_SIZE, SOCKET_ID_ANY,
- RTE_MEMZONE_IOVA_CONTIG);
- KNI_MEM_CHECK(kni->m_rx_q == NULL, rx_q_fail);
-
- snprintf(mz_name, RTE_MEMZONE_NAMESIZE, KNI_ALLOC_Q_MZ_NAME_FMT, kni->name);
- kni->m_alloc_q = rte_memzone_reserve(mz_name, KNI_FIFO_SIZE, SOCKET_ID_ANY,
- RTE_MEMZONE_IOVA_CONTIG);
- KNI_MEM_CHECK(kni->m_alloc_q == NULL, alloc_q_fail);
-
- snprintf(mz_name, RTE_MEMZONE_NAMESIZE, KNI_FREE_Q_MZ_NAME_FMT, kni->name);
- kni->m_free_q = rte_memzone_reserve(mz_name, KNI_FIFO_SIZE, SOCKET_ID_ANY,
- RTE_MEMZONE_IOVA_CONTIG);
- KNI_MEM_CHECK(kni->m_free_q == NULL, free_q_fail);
-
- snprintf(mz_name, RTE_MEMZONE_NAMESIZE, KNI_REQ_Q_MZ_NAME_FMT, kni->name);
- kni->m_req_q = rte_memzone_reserve(mz_name, KNI_FIFO_SIZE, SOCKET_ID_ANY,
- RTE_MEMZONE_IOVA_CONTIG);
- KNI_MEM_CHECK(kni->m_req_q == NULL, req_q_fail);
-
- snprintf(mz_name, RTE_MEMZONE_NAMESIZE, KNI_RESP_Q_MZ_NAME_FMT, kni->name);
- kni->m_resp_q = rte_memzone_reserve(mz_name, KNI_FIFO_SIZE, SOCKET_ID_ANY,
- RTE_MEMZONE_IOVA_CONTIG);
- KNI_MEM_CHECK(kni->m_resp_q == NULL, resp_q_fail);
-
- snprintf(mz_name, RTE_MEMZONE_NAMESIZE, KNI_SYNC_ADDR_MZ_NAME_FMT, kni->name);
- kni->m_sync_addr = rte_memzone_reserve(mz_name, KNI_FIFO_SIZE, SOCKET_ID_ANY,
- RTE_MEMZONE_IOVA_CONTIG);
- KNI_MEM_CHECK(kni->m_sync_addr == NULL, sync_addr_fail);
-
- return 0;
-
-sync_addr_fail:
- rte_memzone_free(kni->m_resp_q);
-resp_q_fail:
- rte_memzone_free(kni->m_req_q);
-req_q_fail:
- rte_memzone_free(kni->m_free_q);
-free_q_fail:
- rte_memzone_free(kni->m_alloc_q);
-alloc_q_fail:
- rte_memzone_free(kni->m_rx_q);
-rx_q_fail:
- rte_memzone_free(kni->m_tx_q);
-tx_q_fail:
- return -1;
-}
-
-static void
-kni_release_mz(struct rte_kni *kni)
-{
- rte_memzone_free(kni->m_tx_q);
- rte_memzone_free(kni->m_rx_q);
- rte_memzone_free(kni->m_alloc_q);
- rte_memzone_free(kni->m_free_q);
- rte_memzone_free(kni->m_req_q);
- rte_memzone_free(kni->m_resp_q);
- rte_memzone_free(kni->m_sync_addr);
-}
-
-struct rte_kni *
-rte_kni_alloc(struct rte_mempool *pktmbuf_pool,
- const struct rte_kni_conf *conf,
- struct rte_kni_ops *ops)
-{
- int ret;
- struct rte_kni_device_info dev_info;
- struct rte_kni *kni;
- struct rte_tailq_entry *te;
- struct rte_kni_list *kni_list;
-
- if (!pktmbuf_pool || !conf || !conf->name[0])
- return NULL;
-
- /* Check if KNI subsystem has been initialized */
- if (kni_fd < 0) {
- RTE_LOG(ERR, KNI, "KNI subsystem has not been initialized. Invoke rte_kni_init() first\n");
- return NULL;
- }
-
- rte_mcfg_tailq_write_lock();
-
- kni = __rte_kni_get(conf->name);
- if (kni != NULL) {
- RTE_LOG(ERR, KNI, "KNI already exists\n");
- goto unlock;
- }
-
- te = rte_zmalloc("KNI_TAILQ_ENTRY", sizeof(*te), 0);
- if (te == NULL) {
- RTE_LOG(ERR, KNI, "Failed to allocate tailq entry\n");
- goto unlock;
- }
-
- kni = rte_zmalloc("KNI", sizeof(struct rte_kni), RTE_CACHE_LINE_SIZE);
- if (kni == NULL) {
- RTE_LOG(ERR, KNI, "KNI memory allocation failed\n");
- goto kni_fail;
- }
-
- strlcpy(kni->name, conf->name, RTE_KNI_NAMESIZE);
-
- if (ops)
- memcpy(&kni->ops, ops, sizeof(struct rte_kni_ops));
- else
- kni->ops.port_id = UINT16_MAX;
-
- memset(&dev_info, 0, sizeof(dev_info));
- dev_info.core_id = conf->core_id;
- dev_info.force_bind = conf->force_bind;
- dev_info.group_id = conf->group_id;
- dev_info.mbuf_size = conf->mbuf_size;
- dev_info.mtu = conf->mtu;
- dev_info.min_mtu = conf->min_mtu;
- dev_info.max_mtu = conf->max_mtu;
-
- memcpy(dev_info.mac_addr, conf->mac_addr, RTE_ETHER_ADDR_LEN);
-
- strlcpy(dev_info.name, conf->name, RTE_KNI_NAMESIZE);
-
- ret = kni_reserve_mz(kni);
- if (ret < 0)
- goto mz_fail;
-
- /* TX RING */
- kni->tx_q = kni->m_tx_q->addr;
- kni_fifo_init(kni->tx_q, KNI_FIFO_COUNT_MAX);
- dev_info.tx_phys = kni->m_tx_q->iova;
-
- /* RX RING */
- kni->rx_q = kni->m_rx_q->addr;
- kni_fifo_init(kni->rx_q, KNI_FIFO_COUNT_MAX);
- dev_info.rx_phys = kni->m_rx_q->iova;
-
- /* ALLOC RING */
- kni->alloc_q = kni->m_alloc_q->addr;
- kni_fifo_init(kni->alloc_q, KNI_FIFO_COUNT_MAX);
- dev_info.alloc_phys = kni->m_alloc_q->iova;
-
- /* FREE RING */
- kni->free_q = kni->m_free_q->addr;
- kni_fifo_init(kni->free_q, KNI_FIFO_COUNT_MAX);
- dev_info.free_phys = kni->m_free_q->iova;
-
- /* Request RING */
- kni->req_q = kni->m_req_q->addr;
- kni_fifo_init(kni->req_q, KNI_FIFO_COUNT_MAX);
- dev_info.req_phys = kni->m_req_q->iova;
-
- /* Response RING */
- kni->resp_q = kni->m_resp_q->addr;
- kni_fifo_init(kni->resp_q, KNI_FIFO_COUNT_MAX);
- dev_info.resp_phys = kni->m_resp_q->iova;
-
- /* Req/Resp sync mem area */
- kni->sync_addr = kni->m_sync_addr->addr;
- dev_info.sync_va = kni->m_sync_addr->addr;
- dev_info.sync_phys = kni->m_sync_addr->iova;
-
- kni->pktmbuf_pool = pktmbuf_pool;
- kni->group_id = conf->group_id;
- kni->mbuf_size = conf->mbuf_size;
-
- dev_info.iova_mode = (rte_eal_iova_mode() == RTE_IOVA_VA) ? 1 : 0;
-
- ret = ioctl(kni_fd, RTE_KNI_IOCTL_CREATE, &dev_info);
- if (ret < 0)
- goto ioctl_fail;
-
- te->data = kni;
-
- kni_list = RTE_TAILQ_CAST(rte_kni_tailq.head, rte_kni_list);
- TAILQ_INSERT_TAIL(kni_list, te, next);
-
- rte_mcfg_tailq_write_unlock();
-
- /* Allocate mbufs and then put them into alloc_q */
- kni_allocate_mbufs(kni);
-
- return kni;
-
-ioctl_fail:
- kni_release_mz(kni);
-mz_fail:
- rte_free(kni);
-kni_fail:
- rte_free(te);
-unlock:
- rte_mcfg_tailq_write_unlock();
-
- return NULL;
-}
-
-static void
-kni_free_fifo(struct rte_kni_fifo *fifo)
-{
- int ret;
- struct rte_mbuf *pkt;
-
- do {
- ret = kni_fifo_get(fifo, (void **)&pkt, 1);
- if (ret)
- rte_pktmbuf_free(pkt);
- } while (ret);
-}
-
-static void *
-va2pa(struct rte_mbuf *m)
-{
- return (void *)((unsigned long)m -
- ((unsigned long)m->buf_addr - (unsigned long)rte_mbuf_iova_get(m)));
-}
-
-static void *
-va2pa_all(struct rte_mbuf *mbuf)
-{
- void *phy_mbuf = va2pa(mbuf);
- struct rte_mbuf *next = mbuf->next;
- while (next) {
- mbuf->next = va2pa(next);
- mbuf = next;
- next = mbuf->next;
- }
- return phy_mbuf;
-}
-
-static void
-obj_free(struct rte_mempool *mp __rte_unused, void *opaque, void *obj,
- unsigned obj_idx __rte_unused)
-{
- struct rte_mbuf *m = obj;
- void *mbuf_phys = opaque;
-
- if (va2pa(m) == mbuf_phys)
- rte_pktmbuf_free(m);
-}
-
-static void
-kni_free_fifo_phy(struct rte_mempool *mp, struct rte_kni_fifo *fifo)
-{
- void *mbuf_phys;
- int ret;
-
- do {
- ret = kni_fifo_get(fifo, &mbuf_phys, 1);
- if (ret)
- rte_mempool_obj_iter(mp, obj_free, mbuf_phys);
- } while (ret);
-}
-
-int
-rte_kni_release(struct rte_kni *kni)
-{
- struct rte_tailq_entry *te;
- struct rte_kni_list *kni_list;
- struct rte_kni_device_info dev_info;
- uint32_t retry = 5;
-
- if (!kni)
- return -1;
-
- kni_list = RTE_TAILQ_CAST(rte_kni_tailq.head, rte_kni_list);
-
- rte_mcfg_tailq_write_lock();
-
- TAILQ_FOREACH(te, kni_list, next) {
- if (te->data == kni)
- break;
- }
-
- if (te == NULL)
- goto unlock;
-
- strlcpy(dev_info.name, kni->name, sizeof(dev_info.name));
- if (ioctl(kni_fd, RTE_KNI_IOCTL_RELEASE, &dev_info) < 0) {
- RTE_LOG(ERR, KNI, "Fail to release kni device\n");
- goto unlock;
- }
-
- TAILQ_REMOVE(kni_list, te, next);
-
- rte_mcfg_tailq_write_unlock();
-
- /* mbufs in all fifos should be released, except request/response */
-
- /* wait until all rxq packets processed by kernel */
- while (kni_fifo_count(kni->rx_q) && retry--)
- usleep(1000);
-
- if (kni_fifo_count(kni->rx_q))
- RTE_LOG(ERR, KNI, "Fail to free all Rx-q items\n");
-
- kni_free_fifo_phy(kni->pktmbuf_pool, kni->alloc_q);
- kni_free_fifo(kni->tx_q);
- kni_free_fifo(kni->free_q);
-
- kni_release_mz(kni);
-
- rte_free(kni);
-
- rte_free(te);
-
- return 0;
-
-unlock:
- rte_mcfg_tailq_write_unlock();
-
- return -1;
-}
-
-/* default callback for request of configuring device mac address */
-static int
-kni_config_mac_address(uint16_t port_id, uint8_t mac_addr[])
-{
- int ret = 0;
-
- if (!rte_eth_dev_is_valid_port(port_id)) {
- RTE_LOG(ERR, KNI, "Invalid port id %d\n", port_id);
- return -EINVAL;
- }
-
- RTE_LOG(INFO, KNI, "Configure mac address of %d", port_id);
-
- ret = rte_eth_dev_default_mac_addr_set(port_id,
- (struct rte_ether_addr *)mac_addr);
- if (ret < 0)
- RTE_LOG(ERR, KNI, "Failed to config mac_addr for port %d\n",
- port_id);
-
- return ret;
-}
-
-/* default callback for request of configuring promiscuous mode */
-static int
-kni_config_promiscusity(uint16_t port_id, uint8_t to_on)
-{
- int ret;
-
- if (!rte_eth_dev_is_valid_port(port_id)) {
- RTE_LOG(ERR, KNI, "Invalid port id %d\n", port_id);
- return -EINVAL;
- }
-
- RTE_LOG(INFO, KNI, "Configure promiscuous mode of %d to %d\n",
- port_id, to_on);
-
- if (to_on)
- ret = rte_eth_promiscuous_enable(port_id);
- else
- ret = rte_eth_promiscuous_disable(port_id);
-
- if (ret != 0)
- RTE_LOG(ERR, KNI,
- "Failed to %s promiscuous mode for port %u: %s\n",
- to_on ? "enable" : "disable", port_id,
- rte_strerror(-ret));
-
- return ret;
-}
-
-/* default callback for request of configuring allmulticast mode */
-static int
-kni_config_allmulticast(uint16_t port_id, uint8_t to_on)
-{
- int ret;
-
- if (!rte_eth_dev_is_valid_port(port_id)) {
- RTE_LOG(ERR, KNI, "Invalid port id %d\n", port_id);
- return -EINVAL;
- }
-
- RTE_LOG(INFO, KNI, "Configure allmulticast mode of %d to %d\n",
- port_id, to_on);
-
- if (to_on)
- ret = rte_eth_allmulticast_enable(port_id);
- else
- ret = rte_eth_allmulticast_disable(port_id);
- if (ret != 0)
- RTE_LOG(ERR, KNI,
- "Failed to %s allmulticast mode for port %u: %s\n",
- to_on ? "enable" : "disable", port_id,
- rte_strerror(-ret));
-
- return ret;
-}
-
-int
-rte_kni_handle_request(struct rte_kni *kni)
-{
- unsigned int ret;
- struct rte_kni_request *req = NULL;
-
- if (kni == NULL)
- return -1;
-
- /* Get request mbuf */
- ret = kni_fifo_get(kni->req_q, (void **)&req, 1);
- if (ret != 1)
- return 0; /* It is OK if the request mbuf cannot be fetched */
-
- if (req != kni->sync_addr) {
- RTE_LOG(ERR, KNI, "Wrong req pointer %p\n", req);
- return -1;
- }
-
- /* Analyze the request and call the relevant actions for it */
- switch (req->req_id) {
- case RTE_KNI_REQ_CHANGE_MTU: /* Change MTU */
- if (kni->ops.change_mtu)
- req->result = kni->ops.change_mtu(kni->ops.port_id,
- req->new_mtu);
- break;
- case RTE_KNI_REQ_CFG_NETWORK_IF: /* Set network interface up/down */
- if (kni->ops.config_network_if)
- req->result = kni->ops.config_network_if(kni->ops.port_id,
- req->if_up);
- break;
- case RTE_KNI_REQ_CHANGE_MAC_ADDR: /* Change MAC Address */
- if (kni->ops.config_mac_address)
- req->result = kni->ops.config_mac_address(
- kni->ops.port_id, req->mac_addr);
- else if (kni->ops.port_id != UINT16_MAX)
- req->result = kni_config_mac_address(
- kni->ops.port_id, req->mac_addr);
- break;
- case RTE_KNI_REQ_CHANGE_PROMISC: /* Change PROMISCUOUS MODE */
- if (kni->ops.config_promiscusity)
- req->result = kni->ops.config_promiscusity(
- kni->ops.port_id, req->promiscusity);
- else if (kni->ops.port_id != UINT16_MAX)
- req->result = kni_config_promiscusity(
- kni->ops.port_id, req->promiscusity);
- break;
- case RTE_KNI_REQ_CHANGE_ALLMULTI: /* Change ALLMULTICAST MODE */
- if (kni->ops.config_allmulticast)
- req->result = kni->ops.config_allmulticast(
- kni->ops.port_id, req->allmulti);
- else if (kni->ops.port_id != UINT16_MAX)
- req->result = kni_config_allmulticast(
- kni->ops.port_id, req->allmulti);
- break;
- default:
- RTE_LOG(ERR, KNI, "Unknown request id %u\n", req->req_id);
- req->result = -EINVAL;
- break;
- }
-
- /* if needed, construct response buffer and put it back to resp_q */
- if (!req->async)
- ret = kni_fifo_put(kni->resp_q, (void **)&req, 1);
- else
- ret = 1;
- if (ret != 1) {
- RTE_LOG(ERR, KNI, "Fail to put the mbuf back to resp_q\n");
- return -1; /* It is an error if the mbuf cannot be put back */
- }
-
- return 0;
-}
-
-unsigned
-rte_kni_tx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned int num)
-{
- num = RTE_MIN(kni_fifo_free_count(kni->rx_q), num);
- void *phy_mbufs[num];
- unsigned int ret;
- unsigned int i;
-
- for (i = 0; i < num; i++)
- phy_mbufs[i] = va2pa_all(mbufs[i]);
-
- ret = kni_fifo_put(kni->rx_q, phy_mbufs, num);
-
- /* Get mbufs from free_q and then free them */
- kni_free_mbufs(kni);
-
- return ret;
-}
-
-unsigned
-rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned int num)
-{
- unsigned int ret = kni_fifo_get(kni->tx_q, (void **)mbufs, num);
-
- /* If buffers removed or alloc_q is empty, allocate mbufs and then put them into alloc_q */
- if (ret || (kni_fifo_count(kni->alloc_q) == 0))
- kni_allocate_mbufs(kni);
-
- return ret;
-}
-
-static void
-kni_free_mbufs(struct rte_kni *kni)
-{
- int i, ret;
- struct rte_mbuf *pkts[MAX_MBUF_BURST_NUM];
-
- ret = kni_fifo_get(kni->free_q, (void **)pkts, MAX_MBUF_BURST_NUM);
- if (likely(ret > 0)) {
- for (i = 0; i < ret; i++)
- rte_pktmbuf_free(pkts[i]);
- }
-}
-
-static void
-kni_allocate_mbufs(struct rte_kni *kni)
-{
- int i, ret;
- struct rte_mbuf *pkts[MAX_MBUF_BURST_NUM];
- void *phys[MAX_MBUF_BURST_NUM];
- int allocq_free;
-
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pool) !=
- offsetof(struct rte_kni_mbuf, pool));
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_addr) !=
- offsetof(struct rte_kni_mbuf, buf_addr));
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, next) !=
- offsetof(struct rte_kni_mbuf, next));
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_off) !=
- offsetof(struct rte_kni_mbuf, data_off));
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
- offsetof(struct rte_kni_mbuf, data_len));
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
- offsetof(struct rte_kni_mbuf, pkt_len));
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) !=
- offsetof(struct rte_kni_mbuf, ol_flags));
-
- /* Check if pktmbuf pool has been configured */
- if (kni->pktmbuf_pool == NULL) {
- RTE_LOG(ERR, KNI, "No valid mempool for allocating mbufs\n");
- return;
- }
-
- allocq_free = kni_fifo_free_count(kni->alloc_q);
- allocq_free = (allocq_free > MAX_MBUF_BURST_NUM) ?
- MAX_MBUF_BURST_NUM : allocq_free;
- for (i = 0; i < allocq_free; i++) {
- pkts[i] = rte_pktmbuf_alloc(kni->pktmbuf_pool);
- if (unlikely(pkts[i] == NULL)) {
- /* Out of memory */
- RTE_LOG(ERR, KNI, "Out of memory\n");
- break;
- }
- phys[i] = va2pa(pkts[i]);
- }
-
- /* No pkt mbuf allocated */
- if (i <= 0)
- return;
-
- ret = kni_fifo_put(kni->alloc_q, phys, i);
-
- /* Check if any mbufs not put into alloc_q, and then free them */
- if (ret >= 0 && ret < i && ret < MAX_MBUF_BURST_NUM) {
- int j;
-
- for (j = ret; j < i; j++)
- rte_pktmbuf_free(pkts[j]);
- }
-}
-
-struct rte_kni *
-rte_kni_get(const char *name)
-{
- struct rte_kni *kni;
-
- if (name == NULL || name[0] == '\0')
- return NULL;
-
- rte_mcfg_tailq_read_lock();
-
- kni = __rte_kni_get(name);
-
- rte_mcfg_tailq_read_unlock();
-
- return kni;
-}
-
-const char *
-rte_kni_get_name(const struct rte_kni *kni)
-{
- return kni->name;
-}
-
-static enum kni_ops_status
-kni_check_request_register(struct rte_kni_ops *ops)
-{
- /* check if KNI request ops has been registered*/
- if (ops == NULL)
- return KNI_REQ_NO_REGISTER;
-
- if (ops->change_mtu == NULL
- && ops->config_network_if == NULL
- && ops->config_mac_address == NULL
- && ops->config_promiscusity == NULL
- && ops->config_allmulticast == NULL)
- return KNI_REQ_NO_REGISTER;
-
- return KNI_REQ_REGISTERED;
-}
-
-int
-rte_kni_register_handlers(struct rte_kni *kni, struct rte_kni_ops *ops)
-{
- enum kni_ops_status req_status;
-
- if (ops == NULL) {
- RTE_LOG(ERR, KNI, "Invalid KNI request operation.\n");
- return -1;
- }
-
- if (kni == NULL) {
- RTE_LOG(ERR, KNI, "Invalid kni info.\n");
- return -1;
- }
-
- req_status = kni_check_request_register(&kni->ops);
- if (req_status == KNI_REQ_REGISTERED) {
- RTE_LOG(ERR, KNI, "The KNI request operation has already registered.\n");
- return -1;
- }
-
- memcpy(&kni->ops, ops, sizeof(struct rte_kni_ops));
- return 0;
-}
-
-int
-rte_kni_unregister_handlers(struct rte_kni *kni)
-{
- if (kni == NULL) {
- RTE_LOG(ERR, KNI, "Invalid kni info.\n");
- return -1;
- }
-
- memset(&kni->ops, 0, sizeof(struct rte_kni_ops));
-
- return 0;
-}
-
-int
-rte_kni_update_link(struct rte_kni *kni, unsigned int linkup)
-{
- char path[64];
- char old_carrier[2];
- const char *new_carrier;
- int old_linkup;
- int fd, ret;
-
- if (kni == NULL)
- return -1;
-
- snprintf(path, sizeof(path), "/sys/devices/virtual/net/%s/carrier",
- kni->name);
-
- fd = open(path, O_RDWR);
- if (fd == -1) {
- RTE_LOG(ERR, KNI, "Failed to open file: %s.\n", path);
- return -1;
- }
-
- ret = read(fd, old_carrier, 2);
- if (ret < 1) {
- close(fd);
- return -1;
- }
- old_linkup = (old_carrier[0] == '1');
-
- if (old_linkup == (int)linkup)
- goto out;
-
- new_carrier = linkup ? "1" : "0";
- ret = write(fd, new_carrier, 1);
- if (ret < 1) {
- RTE_LOG(ERR, KNI, "Failed to write file: %s.\n", path);
- close(fd);
- return -1;
- }
-out:
- close(fd);
- return old_linkup;
-}
-
-void
-rte_kni_close(void)
-{
- if (kni_fd < 0)
- return;
-
- close(kni_fd);
- kni_fd = -1;
-}
diff --git a/lib/kni/rte_kni.h b/lib/kni/rte_kni.h
deleted file mode 100644
index 1e508acc829b..000000000000
--- a/lib/kni/rte_kni.h
+++ /dev/null
@@ -1,269 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#ifndef _RTE_KNI_H_
-#define _RTE_KNI_H_
-
-/**
- * @file
- * RTE KNI
- *
- * The KNI library provides the ability to create and destroy kernel NIC
- * interfaces that may be used by the RTE application to receive/transmit
- * packets from/to Linux kernel net interfaces.
- *
- * This library provides two APIs to burst receive packets from KNI interfaces,
- * and burst transmit packets to KNI interfaces.
- */
-
-#include <rte_compat.h>
-#include <rte_pci.h>
-#include <rte_ether.h>
-
-#include <rte_kni_common.h>
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-struct rte_kni;
-struct rte_mbuf;
-
-/**
- * Structure which has the function pointers for KNI interface.
- */
-struct rte_kni_ops {
- uint16_t port_id; /* Port ID */
-
- /* Pointer to function of changing MTU */
- int (*change_mtu)(uint16_t port_id, unsigned int new_mtu);
-
- /* Pointer to function of configuring network interface */
- int (*config_network_if)(uint16_t port_id, uint8_t if_up);
-
- /* Pointer to function of configuring mac address */
- int (*config_mac_address)(uint16_t port_id, uint8_t mac_addr[]);
-
- /* Pointer to function of configuring promiscuous mode */
- int (*config_promiscusity)(uint16_t port_id, uint8_t to_on);
-
- /* Pointer to function of configuring allmulticast mode */
- int (*config_allmulticast)(uint16_t port_id, uint8_t to_on);
-};
-
-/**
- * Structure for configuring KNI device.
- */
-struct rte_kni_conf {
- /*
- * KNI name which will be used in relevant network device.
- * Let the name as short as possible, as it will be part of
- * memzone name.
- */
- char name[RTE_KNI_NAMESIZE];
- uint32_t core_id; /* Core ID to bind kernel thread on */
- uint16_t group_id; /* Group ID */
- unsigned mbuf_size; /* mbuf size */
- struct rte_pci_addr addr; /* deprecated */
- struct rte_pci_id id; /* deprecated */
-
- __extension__
- uint8_t force_bind : 1; /* Flag to bind kernel thread */
- uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; /* MAC address assigned to KNI */
- uint16_t mtu;
- uint16_t min_mtu;
- uint16_t max_mtu;
-};
-
-/**
- * Initialize and preallocate KNI subsystem
- *
- * This function is to be executed on the main lcore only, after EAL
- * initialization and before any KNI interface is attempted to be
- * allocated
- *
- * @param max_kni_ifaces
- * The maximum number of KNI interfaces that can coexist concurrently
- *
- * @return
- * - 0 indicates success.
- * - negative value indicates failure.
- */
-int rte_kni_init(unsigned int max_kni_ifaces);
-
-
-/**
- * Allocate KNI interface according to the port id, mbuf size, mbuf pool,
- * configurations and callbacks for kernel requests.The KNI interface created
- * in the kernel space is the net interface the traditional Linux application
- * talking to.
- *
- * The rte_kni_alloc shall not be called before rte_kni_init() has been
- * called. rte_kni_alloc is thread safe.
- *
- * The mempool should have capacity of more than "2 x KNI_FIFO_COUNT_MAX"
- * elements for each KNI interface allocated.
- *
- * @param pktmbuf_pool
- * The mempool for allocating mbufs for packets.
- * @param conf
- * The pointer to the configurations of the KNI device.
- * @param ops
- * The pointer to the callbacks for the KNI kernel requests.
- *
- * @return
- * - The pointer to the context of a KNI interface.
- * - NULL indicate error.
- */
-struct rte_kni *rte_kni_alloc(struct rte_mempool *pktmbuf_pool,
- const struct rte_kni_conf *conf, struct rte_kni_ops *ops);
-
-/**
- * Release KNI interface according to the context. It will also release the
- * paired KNI interface in kernel space. All processing on the specific KNI
- * context need to be stopped before calling this interface.
- *
- * rte_kni_release is thread safe.
- *
- * @param kni
- * The pointer to the context of an existent KNI interface.
- *
- * @return
- * - 0 indicates success.
- * - negative value indicates failure.
- */
-int rte_kni_release(struct rte_kni *kni);
-
-/**
- * It is used to handle the request mbufs sent from kernel space.
- * Then analyzes it and calls the specific actions for the specific requests.
- * Finally constructs the response mbuf and puts it back to the resp_q.
- *
- * @param kni
- * The pointer to the context of an existent KNI interface.
- *
- * @return
- * - 0
- * - negative value indicates failure.
- */
-int rte_kni_handle_request(struct rte_kni *kni);
-
-/**
- * Retrieve a burst of packets from a KNI interface. The retrieved packets are
- * stored in rte_mbuf structures whose pointers are supplied in the array of
- * mbufs, and the maximum number is indicated by num. It handles allocating
- * the mbufs for KNI interface alloc queue.
- *
- * @param kni
- * The KNI interface context.
- * @param mbufs
- * The array to store the pointers of mbufs.
- * @param num
- * The maximum number per burst.
- *
- * @return
- * The actual number of packets retrieved.
- */
-unsigned rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs,
- unsigned num);
-
-/**
- * Send a burst of packets to a KNI interface. The packets to be sent out are
- * stored in rte_mbuf structures whose pointers are supplied in the array of
- * mbufs, and the maximum number is indicated by num. It handles the freeing of
- * the mbufs in the free queue of KNI interface.
- *
- * @param kni
- * The KNI interface context.
- * @param mbufs
- * The array to store the pointers of mbufs.
- * @param num
- * The maximum number per burst.
- *
- * @return
- * The actual number of packets sent.
- */
-unsigned rte_kni_tx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs,
- unsigned num);
-
-/**
- * Get the KNI context of its name.
- *
- * @param name
- * pointer to the KNI device name.
- *
- * @return
- * On success: Pointer to KNI interface.
- * On failure: NULL.
- */
-struct rte_kni *rte_kni_get(const char *name);
-
-/**
- * Get the name given to a KNI device
- *
- * @param kni
- * The KNI instance to query
- * @return
- * The pointer to the KNI name
- */
-const char *rte_kni_get_name(const struct rte_kni *kni);
-
-/**
- * Register KNI request handling for a specified port,and it can
- * be called by primary process or secondary process.
- *
- * @param kni
- * pointer to struct rte_kni.
- * @param ops
- * pointer to struct rte_kni_ops.
- *
- * @return
- * On success: 0
- * On failure: -1
- */
-int rte_kni_register_handlers(struct rte_kni *kni, struct rte_kni_ops *ops);
-
-/**
- * Unregister KNI request handling for a specified port.
- *
- * @param kni
- * pointer to struct rte_kni.
- *
- * @return
- * On success: 0
- * On failure: -1
- */
-int rte_kni_unregister_handlers(struct rte_kni *kni);
-
-/**
- * Update link carrier state for KNI port.
- *
- * Update the linkup/linkdown state of a KNI interface in the kernel.
- *
- * @param kni
- * pointer to struct rte_kni.
- * @param linkup
- * New link state:
- * 0 for linkdown.
- * > 0 for linkup.
- *
- * @return
- * On failure: -1
- * Previous link state == linkdown: 0
- * Previous link state == linkup: 1
- */
-__rte_experimental
-int
-rte_kni_update_link(struct rte_kni *kni, unsigned int linkup);
-
-/**
- * Close KNI device.
- */
-void rte_kni_close(void);
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif /* _RTE_KNI_H_ */
diff --git a/lib/kni/rte_kni_common.h b/lib/kni/rte_kni_common.h
deleted file mode 100644
index 8d3ee0fa4fc2..000000000000
--- a/lib/kni/rte_kni_common.h
+++ /dev/null
@@ -1,147 +0,0 @@
-/* SPDX-License-Identifier: (BSD-3-Clause OR LGPL-2.1) */
-/*
- * Copyright(c) 2007-2014 Intel Corporation.
- */
-
-#ifndef _RTE_KNI_COMMON_H_
-#define _RTE_KNI_COMMON_H_
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-#ifdef __KERNEL__
-#include <linux/if.h>
-#include <asm/barrier.h>
-#define RTE_STD_C11
-#else
-#include <rte_common.h>
-#include <rte_config.h>
-#endif
-
-/*
- * KNI name is part of memzone name. Must not exceed IFNAMSIZ.
- */
-#define RTE_KNI_NAMESIZE 16
-
-#define RTE_CACHE_LINE_MIN_SIZE 64
-
-/*
- * Request id.
- */
-enum rte_kni_req_id {
- RTE_KNI_REQ_UNKNOWN = 0,
- RTE_KNI_REQ_CHANGE_MTU,
- RTE_KNI_REQ_CFG_NETWORK_IF,
- RTE_KNI_REQ_CHANGE_MAC_ADDR,
- RTE_KNI_REQ_CHANGE_PROMISC,
- RTE_KNI_REQ_CHANGE_ALLMULTI,
- RTE_KNI_REQ_MAX,
-};
-
-/*
- * Structure for KNI request.
- */
-struct rte_kni_request {
- uint32_t req_id; /**< Request id */
- RTE_STD_C11
- union {
- uint32_t new_mtu; /**< New MTU */
- uint8_t if_up; /**< 1: interface up, 0: interface down */
- uint8_t mac_addr[6]; /**< MAC address for interface */
- uint8_t promiscusity;/**< 1: promisc mode enable, 0: disable */
- uint8_t allmulti; /**< 1: all-multicast mode enable, 0: disable */
- };
- int32_t async : 1; /**< 1: request is asynchronous */
- int32_t result; /**< Result for processing request */
-} __attribute__((__packed__));
-
-/*
- * Fifo struct mapped in a shared memory. It describes a circular buffer FIFO
- * Write and read should wrap around. Fifo is empty when write == read
- * Writing should never overwrite the read position
- */
-struct rte_kni_fifo {
-#ifdef RTE_USE_C11_MEM_MODEL
- unsigned write; /**< Next position to be written*/
- unsigned read; /**< Next position to be read */
-#else
- volatile unsigned write; /**< Next position to be written*/
- volatile unsigned read; /**< Next position to be read */
-#endif
- unsigned len; /**< Circular buffer length */
- unsigned elem_size; /**< Pointer size - for 32/64 bit OS */
- void *volatile buffer[]; /**< The buffer contains mbuf pointers */
-};
-
-/*
- * The kernel image of the rte_mbuf struct, with only the relevant fields.
- * Padding is necessary to assure the offsets of these fields
- */
-struct rte_kni_mbuf {
- void *buf_addr __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
- uint64_t buf_iova;
- uint16_t data_off; /**< Start address of data in segment buffer. */
- char pad1[2];
- uint16_t nb_segs; /**< Number of segments. */
- char pad4[2];
- uint64_t ol_flags; /**< Offload features. */
- char pad2[4];
- uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
- uint16_t data_len; /**< Amount of data in segment buffer. */
- char pad3[14];
- void *pool;
-
- /* fields on second cache line */
- __attribute__((__aligned__(RTE_CACHE_LINE_MIN_SIZE)))
- void *next; /**< Physical address of next mbuf in kernel. */
-};
-
-/*
- * Struct used to create a KNI device. Passed to the kernel in IOCTL call
- */
-
-struct rte_kni_device_info {
- char name[RTE_KNI_NAMESIZE]; /**< Network device name for KNI */
-
- phys_addr_t tx_phys;
- phys_addr_t rx_phys;
- phys_addr_t alloc_phys;
- phys_addr_t free_phys;
-
- /* Used by Ethtool */
- phys_addr_t req_phys;
- phys_addr_t resp_phys;
- phys_addr_t sync_phys;
- void * sync_va;
-
- /* mbuf mempool */
- void * mbuf_va;
- phys_addr_t mbuf_phys;
-
- uint16_t group_id; /**< Group ID */
- uint32_t core_id; /**< core ID to bind for kernel thread */
-
- __extension__
- uint8_t force_bind : 1; /**< Flag for kernel thread binding */
-
- /* mbuf size */
- unsigned mbuf_size;
- unsigned int mtu;
- unsigned int min_mtu;
- unsigned int max_mtu;
- uint8_t mac_addr[6];
- uint8_t iova_mode;
-};
-
-#define KNI_DEVICE "kni"
-
-#define RTE_KNI_IOCTL_TEST _IOWR(0, 1, int)
-#define RTE_KNI_IOCTL_CREATE _IOWR(0, 2, struct rte_kni_device_info)
-#define RTE_KNI_IOCTL_RELEASE _IOWR(0, 3, struct rte_kni_device_info)
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif /* _RTE_KNI_COMMON_H_ */
diff --git a/lib/kni/rte_kni_fifo.h b/lib/kni/rte_kni_fifo.h
deleted file mode 100644
index d2ec82fe87fc..000000000000
--- a/lib/kni/rte_kni_fifo.h
+++ /dev/null
@@ -1,117 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-
-
-/**
- * @internal when c11 memory model enabled use c11 atomic memory barrier.
- * when under non c11 memory model use rte_smp_* memory barrier.
- *
- * @param src
- * Pointer to the source data.
- * @param dst
- * Pointer to the destination data.
- * @param value
- * Data value.
- */
-#ifdef RTE_USE_C11_MEM_MODEL
-#define __KNI_LOAD_ACQUIRE(src) ({ \
- __atomic_load_n((src), __ATOMIC_ACQUIRE); \
- })
-#define __KNI_STORE_RELEASE(dst, value) do { \
- __atomic_store_n((dst), value, __ATOMIC_RELEASE); \
- } while(0)
-#else
-#define __KNI_LOAD_ACQUIRE(src) ({ \
- typeof (*(src)) val = *(src); \
- rte_smp_rmb(); \
- val; \
- })
-#define __KNI_STORE_RELEASE(dst, value) do { \
- *(dst) = value; \
- rte_smp_wmb(); \
- } while(0)
-#endif
-
-/**
- * Initializes the kni fifo structure
- */
-static void
-kni_fifo_init(struct rte_kni_fifo *fifo, unsigned size)
-{
- /* Ensure size is power of 2 */
- if (size & (size - 1))
- rte_panic("KNI fifo size must be power of 2\n");
-
- fifo->write = 0;
- fifo->read = 0;
- fifo->len = size;
- fifo->elem_size = sizeof(void *);
-}
-
-/**
- * Adds num elements into the fifo. Return the number actually written
- */
-static inline unsigned
-kni_fifo_put(struct rte_kni_fifo *fifo, void **data, unsigned num)
-{
- unsigned i = 0;
- unsigned fifo_write = fifo->write;
- unsigned new_write = fifo_write;
- unsigned fifo_read = __KNI_LOAD_ACQUIRE(&fifo->read);
-
- for (i = 0; i < num; i++) {
- new_write = (new_write + 1) & (fifo->len - 1);
-
- if (new_write == fifo_read)
- break;
- fifo->buffer[fifo_write] = data[i];
- fifo_write = new_write;
- }
- __KNI_STORE_RELEASE(&fifo->write, fifo_write);
- return i;
-}
-
-/**
- * Get up to num elements from the fifo. Return the number actually read
- */
-static inline unsigned
-kni_fifo_get(struct rte_kni_fifo *fifo, void **data, unsigned num)
-{
- unsigned i = 0;
- unsigned new_read = fifo->read;
- unsigned fifo_write = __KNI_LOAD_ACQUIRE(&fifo->write);
-
- for (i = 0; i < num; i++) {
- if (new_read == fifo_write)
- break;
-
- data[i] = fifo->buffer[new_read];
- new_read = (new_read + 1) & (fifo->len - 1);
- }
- __KNI_STORE_RELEASE(&fifo->read, new_read);
- return i;
-}
-
-/**
- * Get the num of elements in the fifo
- */
-static inline uint32_t
-kni_fifo_count(struct rte_kni_fifo *fifo)
-{
- unsigned fifo_write = __KNI_LOAD_ACQUIRE(&fifo->write);
- unsigned fifo_read = __KNI_LOAD_ACQUIRE(&fifo->read);
- return (fifo->len + fifo_write - fifo_read) & (fifo->len - 1);
-}
-
-/**
- * Get the num of available elements in the fifo
- */
-static inline uint32_t
-kni_fifo_free_count(struct rte_kni_fifo *fifo)
-{
- uint32_t fifo_write = __KNI_LOAD_ACQUIRE(&fifo->write);
- uint32_t fifo_read = __KNI_LOAD_ACQUIRE(&fifo->read);
- return (fifo_read - fifo_write - 1) & (fifo->len - 1);
-}
diff --git a/lib/kni/version.map b/lib/kni/version.map
deleted file mode 100644
index 13ffaa5bfd65..000000000000
--- a/lib/kni/version.map
+++ /dev/null
@@ -1,24 +0,0 @@
-DPDK_24 {
- global:
-
- rte_kni_alloc;
- rte_kni_close;
- rte_kni_get;
- rte_kni_get_name;
- rte_kni_handle_request;
- rte_kni_init;
- rte_kni_register_handlers;
- rte_kni_release;
- rte_kni_rx_burst;
- rte_kni_tx_burst;
- rte_kni_unregister_handlers;
-
- local: *;
-};
-
-EXPERIMENTAL {
- global:
-
- # updated in v21.08
- rte_kni_update_link;
-};
diff --git a/lib/meson.build b/lib/meson.build
index ecac701161c8..bbfa28ba86dd 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -39,7 +39,6 @@ libraries = [
'gso',
'ip_frag',
'jobstats',
- 'kni',
'latencystats',
'lpm',
'member',
@@ -73,7 +72,6 @@ optional_libs = [
'graph',
'gro',
'gso',
- 'kni',
'jobstats',
'latencystats',
'metrics',
@@ -86,10 +84,6 @@ optional_libs = [
'vhost',
]
-dpdk_libs_deprecated += [
- 'kni',
-]
-
disabled_libs = []
opt_disabled_libs = run_command(list_dir_globs, get_option('disable_libs'),
check: true).stdout().split()
diff --git a/lib/port/meson.build b/lib/port/meson.build
index 3ab37e2cb4b7..b0af2b185b39 100644
--- a/lib/port/meson.build
+++ b/lib/port/meson.build
@@ -45,9 +45,3 @@ if dpdk_conf.has('RTE_HAS_LIBPCAP')
dpdk_conf.set('RTE_PORT_PCAP', 1)
ext_deps += pcap_dep # dependency provided in config/meson.build
endif
-
-if dpdk_conf.has('RTE_LIB_KNI')
- sources += files('rte_port_kni.c')
- headers += files('rte_port_kni.h')
- deps += 'kni'
-endif
diff --git a/lib/port/rte_port_kni.c b/lib/port/rte_port_kni.c
deleted file mode 100644
index 1c7a6cb200ea..000000000000
--- a/lib/port/rte_port_kni.c
+++ /dev/null
@@ -1,515 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016 Ethan Zhuang <zhuangwj@gmail.com>.
- * Copyright(c) 2016 Intel Corporation.
- */
-#include <string.h>
-
-#include <rte_malloc.h>
-#include <rte_kni.h>
-
-#include "rte_port_kni.h"
-
-/*
- * Port KNI Reader
- */
-#ifdef RTE_PORT_STATS_COLLECT
-
-#define RTE_PORT_KNI_READER_STATS_PKTS_IN_ADD(port, val) \
- port->stats.n_pkts_in += val
-#define RTE_PORT_KNI_READER_STATS_PKTS_DROP_ADD(port, val) \
- port->stats.n_pkts_drop += val
-
-#else
-
-#define RTE_PORT_KNI_READER_STATS_PKTS_IN_ADD(port, val)
-#define RTE_PORT_KNI_READER_STATS_PKTS_DROP_ADD(port, val)
-
-#endif
-
-struct rte_port_kni_reader {
- struct rte_port_in_stats stats;
-
- struct rte_kni *kni;
-};
-
-static void *
-rte_port_kni_reader_create(void *params, int socket_id)
-{
- struct rte_port_kni_reader_params *conf =
- params;
- struct rte_port_kni_reader *port;
-
- /* Check input parameters */
- if (conf == NULL) {
- RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__);
- return NULL;
- }
-
- /* Memory allocation */
- port = rte_zmalloc_socket("PORT", sizeof(*port),
- RTE_CACHE_LINE_SIZE, socket_id);
- if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
- return NULL;
- }
-
- /* Initialization */
- port->kni = conf->kni;
-
- return port;
-}
-
-static int
-rte_port_kni_reader_rx(void *port, struct rte_mbuf **pkts, uint32_t n_pkts)
-{
- struct rte_port_kni_reader *p =
- port;
- uint16_t rx_pkt_cnt;
-
- rx_pkt_cnt = rte_kni_rx_burst(p->kni, pkts, n_pkts);
- RTE_PORT_KNI_READER_STATS_PKTS_IN_ADD(p, rx_pkt_cnt);
- return rx_pkt_cnt;
-}
-
-static int
-rte_port_kni_reader_free(void *port)
-{
- if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__);
- return -EINVAL;
- }
-
- rte_free(port);
-
- return 0;
-}
-
-static int rte_port_kni_reader_stats_read(void *port,
- struct rte_port_in_stats *stats, int clear)
-{
- struct rte_port_kni_reader *p =
- port;
-
- if (stats != NULL)
- memcpy(stats, &p->stats, sizeof(p->stats));
-
- if (clear)
- memset(&p->stats, 0, sizeof(p->stats));
-
- return 0;
-}
-
-/*
- * Port KNI Writer
- */
-#ifdef RTE_PORT_STATS_COLLECT
-
-#define RTE_PORT_KNI_WRITER_STATS_PKTS_IN_ADD(port, val) \
- port->stats.n_pkts_in += val
-#define RTE_PORT_KNI_WRITER_STATS_PKTS_DROP_ADD(port, val) \
- port->stats.n_pkts_drop += val
-
-#else
-
-#define RTE_PORT_KNI_WRITER_STATS_PKTS_IN_ADD(port, val)
-#define RTE_PORT_KNI_WRITER_STATS_PKTS_DROP_ADD(port, val)
-
-#endif
-
-struct rte_port_kni_writer {
- struct rte_port_out_stats stats;
-
- struct rte_mbuf *tx_buf[2 * RTE_PORT_IN_BURST_SIZE_MAX];
- uint32_t tx_burst_sz;
- uint32_t tx_buf_count;
- uint64_t bsz_mask;
- struct rte_kni *kni;
-};
-
-static void *
-rte_port_kni_writer_create(void *params, int socket_id)
-{
- struct rte_port_kni_writer_params *conf =
- params;
- struct rte_port_kni_writer *port;
-
- /* Check input parameters */
- if ((conf == NULL) ||
- (conf->tx_burst_sz == 0) ||
- (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) ||
- (!rte_is_power_of_2(conf->tx_burst_sz))) {
- RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__);
- return NULL;
- }
-
- /* Memory allocation */
- port = rte_zmalloc_socket("PORT", sizeof(*port),
- RTE_CACHE_LINE_SIZE, socket_id);
- if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
- return NULL;
- }
-
- /* Initialization */
- port->kni = conf->kni;
- port->tx_burst_sz = conf->tx_burst_sz;
- port->tx_buf_count = 0;
- port->bsz_mask = 1LLU << (conf->tx_burst_sz - 1);
-
- return port;
-}
-
-static inline void
-send_burst(struct rte_port_kni_writer *p)
-{
- uint32_t nb_tx;
-
- nb_tx = rte_kni_tx_burst(p->kni, p->tx_buf, p->tx_buf_count);
-
- RTE_PORT_KNI_WRITER_STATS_PKTS_DROP_ADD(p, p->tx_buf_count - nb_tx);
- for (; nb_tx < p->tx_buf_count; nb_tx++)
- rte_pktmbuf_free(p->tx_buf[nb_tx]);
-
- p->tx_buf_count = 0;
-}
-
-static int
-rte_port_kni_writer_tx(void *port, struct rte_mbuf *pkt)
-{
- struct rte_port_kni_writer *p =
- port;
-
- p->tx_buf[p->tx_buf_count++] = pkt;
- RTE_PORT_KNI_WRITER_STATS_PKTS_IN_ADD(p, 1);
- if (p->tx_buf_count >= p->tx_burst_sz)
- send_burst(p);
-
- return 0;
-}
-
-static int
-rte_port_kni_writer_tx_bulk(void *port,
- struct rte_mbuf **pkts,
- uint64_t pkts_mask)
-{
- struct rte_port_kni_writer *p =
- port;
- uint64_t bsz_mask = p->bsz_mask;
- uint32_t tx_buf_count = p->tx_buf_count;
- uint64_t expr = (pkts_mask & (pkts_mask + 1)) |
- ((pkts_mask & bsz_mask) ^ bsz_mask);
-
- if (expr == 0) {
- uint64_t n_pkts = __builtin_popcountll(pkts_mask);
- uint32_t n_pkts_ok;
-
- if (tx_buf_count)
- send_burst(p);
-
- RTE_PORT_KNI_WRITER_STATS_PKTS_IN_ADD(p, n_pkts);
- n_pkts_ok = rte_kni_tx_burst(p->kni, pkts, n_pkts);
-
- RTE_PORT_KNI_WRITER_STATS_PKTS_DROP_ADD(p, n_pkts - n_pkts_ok);
- for (; n_pkts_ok < n_pkts; n_pkts_ok++) {
- struct rte_mbuf *pkt = pkts[n_pkts_ok];
-
- rte_pktmbuf_free(pkt);
- }
- } else {
- for (; pkts_mask;) {
- uint32_t pkt_index = __builtin_ctzll(pkts_mask);
- uint64_t pkt_mask = 1LLU << pkt_index;
- struct rte_mbuf *pkt = pkts[pkt_index];
-
- p->tx_buf[tx_buf_count++] = pkt;
- RTE_PORT_KNI_WRITER_STATS_PKTS_IN_ADD(p, 1);
- pkts_mask &= ~pkt_mask;
- }
-
- p->tx_buf_count = tx_buf_count;
- if (tx_buf_count >= p->tx_burst_sz)
- send_burst(p);
- }
-
- return 0;
-}
-
-static int
-rte_port_kni_writer_flush(void *port)
-{
- struct rte_port_kni_writer *p =
- port;
-
- if (p->tx_buf_count > 0)
- send_burst(p);
-
- return 0;
-}
-
-static int
-rte_port_kni_writer_free(void *port)
-{
- if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__);
- return -EINVAL;
- }
-
- rte_port_kni_writer_flush(port);
- rte_free(port);
-
- return 0;
-}
-
-static int rte_port_kni_writer_stats_read(void *port,
- struct rte_port_out_stats *stats, int clear)
-{
- struct rte_port_kni_writer *p =
- port;
-
- if (stats != NULL)
- memcpy(stats, &p->stats, sizeof(p->stats));
-
- if (clear)
- memset(&p->stats, 0, sizeof(p->stats));
-
- return 0;
-}
-
-/*
- * Port KNI Writer Nodrop
- */
-#ifdef RTE_PORT_STATS_COLLECT
-
-#define RTE_PORT_KNI_WRITER_NODROP_STATS_PKTS_IN_ADD(port, val) \
- port->stats.n_pkts_in += val
-#define RTE_PORT_KNI_WRITER_NODROP_STATS_PKTS_DROP_ADD(port, val) \
- port->stats.n_pkts_drop += val
-
-#else
-
-#define RTE_PORT_KNI_WRITER_NODROP_STATS_PKTS_IN_ADD(port, val)
-#define RTE_PORT_KNI_WRITER_NODROP_STATS_PKTS_DROP_ADD(port, val)
-
-#endif
-
-struct rte_port_kni_writer_nodrop {
- struct rte_port_out_stats stats;
-
- struct rte_mbuf *tx_buf[2 * RTE_PORT_IN_BURST_SIZE_MAX];
- uint32_t tx_burst_sz;
- uint32_t tx_buf_count;
- uint64_t bsz_mask;
- uint64_t n_retries;
- struct rte_kni *kni;
-};
-
-static void *
-rte_port_kni_writer_nodrop_create(void *params, int socket_id)
-{
- struct rte_port_kni_writer_nodrop_params *conf =
- params;
- struct rte_port_kni_writer_nodrop *port;
-
- /* Check input parameters */
- if ((conf == NULL) ||
- (conf->tx_burst_sz == 0) ||
- (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) ||
- (!rte_is_power_of_2(conf->tx_burst_sz))) {
- RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__);
- return NULL;
- }
-
- /* Memory allocation */
- port = rte_zmalloc_socket("PORT", sizeof(*port),
- RTE_CACHE_LINE_SIZE, socket_id);
- if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
- return NULL;
- }
-
- /* Initialization */
- port->kni = conf->kni;
- port->tx_burst_sz = conf->tx_burst_sz;
- port->tx_buf_count = 0;
- port->bsz_mask = 1LLU << (conf->tx_burst_sz - 1);
-
- /*
- * When n_retries is 0 it means that we should wait for every packet to
- * send no matter how many retries should it take. To limit number of
- * branches in fast path, we use UINT64_MAX instead of branching.
- */
- port->n_retries = (conf->n_retries == 0) ? UINT64_MAX : conf->n_retries;
-
- return port;
-}
-
-static inline void
-send_burst_nodrop(struct rte_port_kni_writer_nodrop *p)
-{
- uint32_t nb_tx = 0, i;
-
- nb_tx = rte_kni_tx_burst(p->kni, p->tx_buf, p->tx_buf_count);
-
- /* We sent all the packets in a first try */
- if (nb_tx >= p->tx_buf_count) {
- p->tx_buf_count = 0;
- return;
- }
-
- for (i = 0; i < p->n_retries; i++) {
- nb_tx += rte_kni_tx_burst(p->kni,
- p->tx_buf + nb_tx,
- p->tx_buf_count - nb_tx);
-
- /* We sent all the packets in more than one try */
- if (nb_tx >= p->tx_buf_count) {
- p->tx_buf_count = 0;
- return;
- }
- }
-
- /* We didn't send the packets in maximum allowed attempts */
- RTE_PORT_KNI_WRITER_NODROP_STATS_PKTS_DROP_ADD(p, p->tx_buf_count - nb_tx);
- for ( ; nb_tx < p->tx_buf_count; nb_tx++)
- rte_pktmbuf_free(p->tx_buf[nb_tx]);
-
- p->tx_buf_count = 0;
-}
-
-static int
-rte_port_kni_writer_nodrop_tx(void *port, struct rte_mbuf *pkt)
-{
- struct rte_port_kni_writer_nodrop *p =
- port;
-
- p->tx_buf[p->tx_buf_count++] = pkt;
- RTE_PORT_KNI_WRITER_STATS_PKTS_IN_ADD(p, 1);
- if (p->tx_buf_count >= p->tx_burst_sz)
- send_burst_nodrop(p);
-
- return 0;
-}
-
-static int
-rte_port_kni_writer_nodrop_tx_bulk(void *port,
- struct rte_mbuf **pkts,
- uint64_t pkts_mask)
-{
- struct rte_port_kni_writer_nodrop *p =
- port;
-
- uint64_t bsz_mask = p->bsz_mask;
- uint32_t tx_buf_count = p->tx_buf_count;
- uint64_t expr = (pkts_mask & (pkts_mask + 1)) |
- ((pkts_mask & bsz_mask) ^ bsz_mask);
-
- if (expr == 0) {
- uint64_t n_pkts = __builtin_popcountll(pkts_mask);
- uint32_t n_pkts_ok;
-
- if (tx_buf_count)
- send_burst_nodrop(p);
-
- RTE_PORT_KNI_WRITER_NODROP_STATS_PKTS_IN_ADD(p, n_pkts);
- n_pkts_ok = rte_kni_tx_burst(p->kni, pkts, n_pkts);
-
- if (n_pkts_ok >= n_pkts)
- return 0;
-
- /*
- * If we didn't manage to send all packets in single burst, move
- * remaining packets to the buffer and call send burst.
- */
- for (; n_pkts_ok < n_pkts; n_pkts_ok++) {
- struct rte_mbuf *pkt = pkts[n_pkts_ok];
- p->tx_buf[p->tx_buf_count++] = pkt;
- }
- send_burst_nodrop(p);
- } else {
- for ( ; pkts_mask; ) {
- uint32_t pkt_index = __builtin_ctzll(pkts_mask);
- uint64_t pkt_mask = 1LLU << pkt_index;
- struct rte_mbuf *pkt = pkts[pkt_index];
-
- p->tx_buf[tx_buf_count++] = pkt;
- RTE_PORT_KNI_WRITER_NODROP_STATS_PKTS_IN_ADD(p, 1);
- pkts_mask &= ~pkt_mask;
- }
-
- p->tx_buf_count = tx_buf_count;
- if (tx_buf_count >= p->tx_burst_sz)
- send_burst_nodrop(p);
- }
-
- return 0;
-}
-
-static int
-rte_port_kni_writer_nodrop_flush(void *port)
-{
- struct rte_port_kni_writer_nodrop *p =
- port;
-
- if (p->tx_buf_count > 0)
- send_burst_nodrop(p);
-
- return 0;
-}
-
-static int
-rte_port_kni_writer_nodrop_free(void *port)
-{
- if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__);
- return -EINVAL;
- }
-
- rte_port_kni_writer_nodrop_flush(port);
- rte_free(port);
-
- return 0;
-}
-
-static int rte_port_kni_writer_nodrop_stats_read(void *port,
- struct rte_port_out_stats *stats, int clear)
-{
- struct rte_port_kni_writer_nodrop *p =
- port;
-
- if (stats != NULL)
- memcpy(stats, &p->stats, sizeof(p->stats));
-
- if (clear)
- memset(&p->stats, 0, sizeof(p->stats));
-
- return 0;
-}
-
-
-/*
- * Summary of port operations
- */
-struct rte_port_in_ops rte_port_kni_reader_ops = {
- .f_create = rte_port_kni_reader_create,
- .f_free = rte_port_kni_reader_free,
- .f_rx = rte_port_kni_reader_rx,
- .f_stats = rte_port_kni_reader_stats_read,
-};
-
-struct rte_port_out_ops rte_port_kni_writer_ops = {
- .f_create = rte_port_kni_writer_create,
- .f_free = rte_port_kni_writer_free,
- .f_tx = rte_port_kni_writer_tx,
- .f_tx_bulk = rte_port_kni_writer_tx_bulk,
- .f_flush = rte_port_kni_writer_flush,
- .f_stats = rte_port_kni_writer_stats_read,
-};
-
-struct rte_port_out_ops rte_port_kni_writer_nodrop_ops = {
- .f_create = rte_port_kni_writer_nodrop_create,
- .f_free = rte_port_kni_writer_nodrop_free,
- .f_tx = rte_port_kni_writer_nodrop_tx,
- .f_tx_bulk = rte_port_kni_writer_nodrop_tx_bulk,
- .f_flush = rte_port_kni_writer_nodrop_flush,
- .f_stats = rte_port_kni_writer_nodrop_stats_read,
-};
diff --git a/lib/port/rte_port_kni.h b/lib/port/rte_port_kni.h
deleted file mode 100644
index 280f58c121e2..000000000000
--- a/lib/port/rte_port_kni.h
+++ /dev/null
@@ -1,63 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016 Ethan Zhuang <zhuangwj@gmail.com>.
- * Copyright(c) 2016 Intel Corporation.
- */
-
-#ifndef __INCLUDE_RTE_PORT_KNI_H__
-#define __INCLUDE_RTE_PORT_KNI_H__
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-/**
- * @file
- * RTE Port KNI Interface
- *
- * kni_reader: input port built on top of pre-initialized KNI interface
- * kni_writer: output port built on top of pre-initialized KNI interface
- */
-
-#include <stdint.h>
-
-#include "rte_port.h"
-
-/** kni_reader port parameters */
-struct rte_port_kni_reader_params {
- /** KNI interface reference */
- struct rte_kni *kni;
-};
-
-/** kni_reader port operations */
-extern struct rte_port_in_ops rte_port_kni_reader_ops;
-
-
-/** kni_writer port parameters */
-struct rte_port_kni_writer_params {
- /** KNI interface reference */
- struct rte_kni *kni;
- /** Burst size to KNI interface. */
- uint32_t tx_burst_sz;
-};
-
-/** kni_writer port operations */
-extern struct rte_port_out_ops rte_port_kni_writer_ops;
-
-/** kni_writer_nodrop port parameters */
-struct rte_port_kni_writer_nodrop_params {
- /** KNI interface reference */
- struct rte_kni *kni;
- /** Burst size to KNI interface. */
- uint32_t tx_burst_sz;
- /** Maximum number of retries, 0 for no limit */
- uint32_t n_retries;
-};
-
-/** kni_writer_nodrop port operations */
-extern struct rte_port_out_ops rte_port_kni_writer_nodrop_ops;
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif
diff --git a/lib/port/version.map b/lib/port/version.map
index 83dbec7b01d2..fefcf29063f6 100644
--- a/lib/port/version.map
+++ b/lib/port/version.map
@@ -7,9 +7,6 @@ DPDK_24 {
rte_port_fd_reader_ops;
rte_port_fd_writer_nodrop_ops;
rte_port_fd_writer_ops;
- rte_port_kni_reader_ops;
- rte_port_kni_writer_nodrop_ops;
- rte_port_kni_writer_ops;
rte_port_ring_multi_reader_ops;
rte_port_ring_multi_writer_nodrop_ops;
rte_port_ring_multi_writer_ops;
diff --git a/meson_options.txt b/meson_options.txt
index 95e22e0ce70c..621e1ca9ba8c 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -10,7 +10,7 @@ option('disable_apps', type: 'string', value: '', description:
'Comma-separated list of apps to explicitly disable.')
option('disable_drivers', type: 'string', value: '', description:
'Comma-separated list of drivers to explicitly disable.')
-option('disable_libs', type: 'string', value: 'kni', description:
+option('disable_libs', type: 'string', value: '', description:
'Comma-separated list of libraries to explicitly disable. [NOTE: not all libs can be disabled]')
option('drivers_install_subdir', type: 'string', value: 'dpdk/pmds-<VERSION>', description:
'Subdirectory of libdir where to install PMDs. Defaults to using a versioned subdirectory.')
--
2.39.2
^ permalink raw reply [relevance 1%]
* [PATCH v4] build: update DPDK to use C11 standard
2023-07-31 10:38 4% [PATCH] build: update DPDK to use C11 standard Bruce Richardson
2023-07-31 15:58 4% ` [PATCH v2] " Bruce Richardson
2023-07-31 16:58 4% ` [PATCH v3] " Bruce Richardson
@ 2023-08-01 13:15 4% ` Bruce Richardson
2023-08-02 12:31 4% ` [PATCH v5] " Bruce Richardson
3 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2023-08-01 13:15 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Morten Brørup, Tyler Retzlaff
As previously announced, DPDK 23.11 will require a C11 supporting
compiler and will use the C11 standard in all builds.
Forcing use of the C standard, rather than the standard with
GNU extensions, means that some posix definitions which are not in
the C standard are unavailable by default. We fix this by ensuring
the correct defines or cflags are passed to the components that
need them.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
V4:
* pass cflags to the structure and definition checks in mlx* drivers
to ensure posix definitions - as well as C-standard ones - are
available.
V3:
* remove (now unneeded) use of -std=gnu99 in failsafe net driver.
V2:
* Resubmit now that 23.11-rc0 patch applied
* Add _POSIX_C_SOURCE macro to eal_common_errno.c to get POSIX
definition of strerror_r() with c11 standard.
---
doc/guides/linux_gsg/sys_reqs.rst | 3 ++-
doc/guides/rel_notes/deprecation.rst | 18 ------------------
doc/guides/rel_notes/release_23_11.rst | 17 +++++++++++++++++
drivers/common/mlx5/linux/meson.build | 5 +++--
drivers/net/failsafe/meson.build | 1 -
drivers/net/mlx4/meson.build | 4 ++--
lib/eal/common/eal_common_errno.c | 1 +
meson.build | 1 +
8 files changed, 26 insertions(+), 24 deletions(-)
diff --git a/doc/guides/linux_gsg/sys_reqs.rst b/doc/guides/linux_gsg/sys_reqs.rst
index dfeaf4e1c5..13be715933 100644
--- a/doc/guides/linux_gsg/sys_reqs.rst
+++ b/doc/guides/linux_gsg/sys_reqs.rst
@@ -27,7 +27,8 @@ Compilation of the DPDK
The setup commands and installed packages needed on various systems may be different.
For details on Linux distributions and the versions tested, please consult the DPDK Release Notes.
-* General development tools including a supported C compiler such as gcc (version 4.9+) or clang (version 3.4+),
+* General development tools including a C compiler supporting the C11 standard,
+ including standard atomics, for example: GCC (version 5.0+) or Clang (version 3.6+),
and ``pkg-config`` or ``pkgconf`` to be used when building end-user binaries against DPDK.
* For RHEL/Fedora systems these can be installed using ``dnf groupinstall "Development Tools"``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 494b401cda..cc939d3c67 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -17,24 +17,6 @@ Other API and ABI deprecation notices are to be posted below.
Deprecation Notices
-------------------
-* C Compiler: From DPDK 23.11 onwards,
- building DPDK will require a C compiler which supports the C11 standard,
- including support for C11 standard atomics.
-
- More specifically, the requirements will be:
-
- * Support for flag "-std=c11" (or similar)
- * __STDC_NO_ATOMICS__ is *not defined* when using c11 flag
-
- Please note:
-
- * C11, including standard atomics, is supported from GCC version 5 onwards,
- and is the default language version in that release
- (Ref: https://gcc.gnu.org/gcc-5/changes.html)
- * C11 is the default compilation mode in Clang from version 3.6,
- which also added support for standard atomics
- (Ref: https://releases.llvm.org/3.6.0/tools/clang/docs/ReleaseNotes.html)
-
* build: Enabling deprecated libraries (``flow_classify``, ``kni``)
won't be possible anymore through the use of the ``disable_libs`` build option.
A new build option for deprecated libraries will be introduced instead.
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 6b4dd21fd0..c8b9ed456c 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -20,6 +20,23 @@ DPDK Release 23.11
ninja -C build doc
xdg-open build/doc/guides/html/rel_notes/release_23_11.html
+* Build Requirements: From DPDK 23.11 onwards,
+ building DPDK will require a C compiler which supports the C11 standard,
+ including support for C11 standard atomics.
+
+ More specifically, the requirements will be:
+
+ * Support for flag "-std=c11" (or similar)
+ * __STDC_NO_ATOMICS__ is *not defined* when using c11 flag
+
+ Please note:
+
+ * C11, including standard atomics, is supported from GCC version 5 onwards,
+ and is the default language version in that release
+ (Ref: https://gcc.gnu.org/gcc-5/changes.html)
+ * C11 is the default compilation mode in Clang from version 3.6,
+ which also added support for standard atomics
+ (Ref: https://releases.llvm.org/3.6.0/tools/clang/docs/ReleaseNotes.html)
New Features
------------
diff --git a/drivers/common/mlx5/linux/meson.build b/drivers/common/mlx5/linux/meson.build
index 15edc13041..b3a64547c5 100644
--- a/drivers/common/mlx5/linux/meson.build
+++ b/drivers/common/mlx5/linux/meson.build
@@ -231,11 +231,12 @@ if libmtcr_ul_found
endif
foreach arg:has_sym_args
- mlx5_config.set(arg[0], cc.has_header_symbol(arg[1], arg[2], dependencies: libs))
+ mlx5_config.set(arg[0], cc.has_header_symbol(arg[1], arg[2], dependencies: libs, args: cflags))
endforeach
foreach arg:has_member_args
file_prefix = '#include <' + arg[1] + '>'
- mlx5_config.set(arg[0], cc.has_member(arg[2], arg[3], prefix : file_prefix, dependencies: libs))
+ mlx5_config.set(arg[0],
+ cc.has_member(arg[2], arg[3], prefix : file_prefix, dependencies: libs, args: cflags))
endforeach
# Build Glue Library
diff --git a/drivers/net/failsafe/meson.build b/drivers/net/failsafe/meson.build
index 6013e13722..c1d361083b 100644
--- a/drivers/net/failsafe/meson.build
+++ b/drivers/net/failsafe/meson.build
@@ -7,7 +7,6 @@ if is_windows
subdir_done()
endif
-cflags += '-std=gnu99'
cflags += '-D_DEFAULT_SOURCE'
cflags += '-D_XOPEN_SOURCE=700'
cflags += '-pedantic'
diff --git a/drivers/net/mlx4/meson.build b/drivers/net/mlx4/meson.build
index a038c1ec1b..3c5ee24186 100644
--- a/drivers/net/mlx4/meson.build
+++ b/drivers/net/mlx4/meson.build
@@ -103,12 +103,12 @@ has_sym_args = [
config = configuration_data()
foreach arg:has_sym_args
config.set(arg[0], cc.has_header_symbol(arg[1], arg[2],
- dependencies: libs))
+ dependencies: libs, args: cflags))
endforeach
foreach arg:has_member_args
file_prefix = '#include <' + arg[1] + '>'
config.set(arg[0], cc.has_member(arg[2], arg[3],
- prefix: file_prefix, dependencies: libs))
+ prefix: file_prefix, dependencies: libs, args: cflags))
endforeach
configure_file(output : 'mlx4_autoconf.h', configuration : config)
diff --git a/lib/eal/common/eal_common_errno.c b/lib/eal/common/eal_common_errno.c
index ef8f782abb..b30e2f0ad4 100644
--- a/lib/eal/common/eal_common_errno.c
+++ b/lib/eal/common/eal_common_errno.c
@@ -4,6 +4,7 @@
/* Use XSI-compliant portable version of strerror_r() */
#undef _GNU_SOURCE
+#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <string.h>
diff --git a/meson.build b/meson.build
index 39cb73846d..70b54f0c98 100644
--- a/meson.build
+++ b/meson.build
@@ -9,6 +9,7 @@ project('DPDK', 'c',
license: 'BSD',
default_options: [
'buildtype=release',
+ 'c_std=c11',
'default_library=static',
'warning_level=2',
],
--
2.39.2
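The eal_common_errno.c change above can be illustrated in isolation: under a strict -std=c11 build, the XSI declaration of strerror_r() is only exposed when a POSIX feature-test macro is defined before the first include. A minimal sketch (the helper name is illustrative, not DPDK code):

```c
/* Compile with: cc -std=c11 ...  Without the feature-test macro,
 * strict C11 mode hides the POSIX declaration of strerror_r(). */
#define _POSIX_C_SOURCE 200809L

#include <errno.h>
#include <string.h>

/* XSI-compliant strerror_r() returns 0 on success and fills buf. */
int describe_errno(int err, char *buf, size_t len)
{
    return strerror_r(err, buf, len);
}
```

With _GNU_SOURCE in effect instead, glibc substitutes a different strerror_r() variant that returns char *, which is why the file first #undefs _GNU_SOURCE before defining _POSIX_C_SOURCE.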
* RE: [PATCH] doc: announce ethdev operation struct changes
2023-07-28 15:55 0% ` Morten Brørup
@ 2023-08-01 3:19 0% ` Feifei Wang
0 siblings, 0 replies; 200+ results
From: Feifei Wang @ 2023-08-01 3:19 UTC (permalink / raw)
To: Morten Brørup, thomas
Cc: dev, nd, Honnappa Nagarahalli, Ruifeng Wang, ferruh.yigit,
konstantin.ananyev, andrew.rybchenko, nd
> -----Original Message-----
> From: Morten Brørup <mb@smartsharesystems.com>
> Sent: Friday, July 28, 2023 11:55 PM
> To: thomas@monjalon.net
> Cc: dev@dpdk.org; nd <nd@arm.com>; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>; Feifei Wang <Feifei.Wang2@arm.com>;
> Ruifeng Wang <Ruifeng.Wang@arm.com>; Feifei Wang
> <Feifei.Wang2@arm.com>; ferruh.yigit@amd.com;
> konstantin.ananyev@huawei.com; andrew.rybchenko@oktetlabs.ru
> Subject: RE: [PATCH] doc: announce ethdev operation struct changes
>
> > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > Sent: Friday, 28 July 2023 17.38
> >
> > 28/07/2023 17:33, Morten Brørup:
> > > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > > Sent: Friday, 28 July 2023 17.20
> > > >
> > > > 28/07/2023 17:08, Morten Brørup:
> > > > > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > > > > Sent: Friday, 28 July 2023 16.57
> > > > > >
> > > > > > 04/07/2023 10:10, Feifei Wang:
> > > > > > > To support mbufs recycle mode, announce the coming ABI
> > > > > > > changes from DPDK 23.11.
> > > > > > >
> > > > > > > Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> > > > > > > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > > > > > > ---
> > > > > > > doc/guides/rel_notes/deprecation.rst | 4 ++++
> > > > > > > 1 file changed, 4 insertions(+)
> > > > > > >
> > > > > > > diff --git a/doc/guides/rel_notes/deprecation.rst
> > > > > > b/doc/guides/rel_notes/deprecation.rst
> > > > > > > index 66431789b0..c7e1ffafb2 100644
> > > > > > > --- a/doc/guides/rel_notes/deprecation.rst
> > > > > > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > > > > > @@ -118,6 +118,10 @@ Deprecation Notices
> > > > > > > The legacy actions should be removed
> > > > > > > once ``MODIFY_FIELD`` alternative is implemented in drivers.
> > > > > > >
> > > > > > > +* ethdev: The Ethernet device data structure ``struct
> > > > > > > +rte_eth_dev``
> > and
> > > > > > > + the fast-path ethdev flat array ``struct rte_eth_fp_ops``
> > > > > > > + will be
> > > > updated
> > > > > > > + with new fields to support mbufs recycle mode from DPDK 23.11.
> > > > >
> > > > > Existing fields will also be moved around [1]:
> > > > >
> > > > > @@ -83,15 +90,17 @@ struct rte_eth_fp_ops {
> > > > > * Rx fast-path functions and related data.
> > > > > * 64-bit systems: occupies first 64B line
> > > > > */
> > > > > + /** Rx queues data. */
> > > > > + struct rte_ethdev_qdata rxq;
> > > > > /** PMD receive function. */
> > > > > eth_rx_burst_t rx_pkt_burst;
> > > > > /** Get the number of used Rx descriptors. */
> > > > > eth_rx_queue_count_t rx_queue_count;
> > > > > /** Check the status of a Rx descriptor. */
> > > > > eth_rx_descriptor_status_t rx_descriptor_status;
> > > > > - /** Rx queues data. */
> > > > > - struct rte_ethdev_qdata rxq;
> > > > > - uintptr_t reserved1[3];
> > > > > + /** Refill Rx descriptors with the recycling mbufs. */
> > > > > + eth_recycle_rx_descriptors_refill_t
> > recycle_rx_descriptors_refill;
> > > > > + uintptr_t reserved1[2];
> > > > > /**@}*/
> > > > >
> > > > > /**@{*/
> > > > > @@ -99,15 +108,17 @@ struct rte_eth_fp_ops {
> > > > > * Tx fast-path functions and related data.
> > > > > * 64-bit systems: occupies second 64B line
> > > > > */
> > > > > + /** Tx queues data. */
> > > > > + struct rte_ethdev_qdata txq;
> > > > > /** PMD transmit function. */
> > > > > eth_tx_burst_t tx_pkt_burst;
> > > > > /** PMD transmit prepare function. */
> > > > > eth_tx_prep_t tx_pkt_prepare;
> > > > > /** Check the status of a Tx descriptor. */
> > > > > eth_tx_descriptor_status_t tx_descriptor_status;
> > > > > - /** Tx queues data. */
> > > > > - struct rte_ethdev_qdata txq;
> > > > > - uintptr_t reserved2[3];
> > > > > + /** Copy used mbufs from Tx mbuf ring into Rx. */
> > > > > + eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
> > > > > + uintptr_t reserved2[2];
> > > > > /**@}*/
> > > >
> > > > Removing existing fields should be announced explicitly.
> > >
> > > Agreed. And the patch misses this. The "rxq" and "txq" fields are
> > > not being
> > removed, they are being moved up in the structures. Your comment about
> > explicit mentioning still applies!
> > >
> > > If there's no time to wait for a new patch version from Feifei,
> > > perhaps you
> > improve the description while merging.
> >
> > If it's only moving fields, we can skip.
>
> OK. Thank you for elaborating.
>
> > The real change is the size of the reserved fields, so it looks
> > acceptable without notice.
>
> Agree.
Sorry for my late reply. Agree with this change. I will then send a new version
of the recycle mbufs mode for DPDK 23.11.
>
> Thoughts for later: We should perhaps document that changing the size of
> reserved fields is acceptable. And with that, if completely removing a reserved
> field is also acceptable or not.
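The layout change being discussed rests on a general ABI technique: consuming pointer-sized reserved slots keeps the structure size stable across releases (field offsets still move here, which is why the change lands in an ABI-breaking release). A hedged sketch with hypothetical structures, not the real ``rte_eth_fp_ops``:

```c
#include <stdint.h>

/* "v1": one fast-path callback plus three reserved slots. */
struct fp_ops_v1 {
    void (*rx_pkt_burst)(void);
    uintptr_t reserved[3];
};

/* "v2": a new callback consumes one reserved slot; the total
 * size, and hence the ABI footprint, is unchanged. */
struct fp_ops_v2 {
    void (*rx_pkt_burst)(void);
    void (*recycle_rx_descriptors_refill)(void);
    uintptr_t reserved[2];
};
```

This is also why the thread closes by noting that shrinking a reserved array in step with a new field is acceptable without a separate deprecation notice.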
* Re: [PATCH 1/3] version: 23.11-rc0
2023-07-31 9:43 12% ` [PATCH 1/3] " David Marchand
2023-07-31 10:00 0% ` Bruce Richardson
@ 2023-07-31 19:03 0% ` Aaron Conole
1 sibling, 0 replies; 200+ results
From: Aaron Conole @ 2023-07-31 19:03 UTC (permalink / raw)
To: David Marchand
Cc: dev, thomas, Michael Santana, Nicolas Chautru, Hemant Agrawal,
Sachin Saxena, Chenbo Xia, Nipun Gupta, Tomasz Duszynski,
Long Li, Anoob Joseph, Kai Ji, Gagandeep Singh, Timothy McDaniel,
Ashwin Sekhar T K, Pavan Nikhilesh, Igor Russkikh, Ajit Khaparde,
Somnath Kotur, Chas Williams, Min Hu (Connor),
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Yuying Zhang, Beilei Xing, Jingjing Wu, Qiming Yang, Qi Zhang,
Rosen Xu, Wenjun Wu, Matan Azrad, Viacheslav Ovsiienko, Ori Kam,
Suanming Mou, Harman Kalra, Bruce Richardson,
Cristian Dumitrescu, Maxime Coquelin, Tianfei Zhang,
Konstantin Ananyev, Olivier Matz, Akhil Goyal, Fan Zhang,
David Hunt, Byron Marohn, Yipeng Wang, Ferruh Yigit,
Andrew Rybchenko, Jerin Jacob, Vladimir Medvedkin, Jiayu Hu,
Sameh Gobriel, Reshma Pattan, Gaetan Rivet, Stephen Hemminger,
Anatoly Burakov, Honnappa Nagarahalli, Volodymyr Fialko,
Erik Gabriel Carrillo
David Marchand <david.marchand@redhat.com> writes:
> Start a new release cycle with empty release notes.
>
> The ABI version becomes 24.0.
> The map files are updated to the new ABI major number (24).
> The ABI exceptions are dropped and CI ABI checks are disabled because
> compatibility is not preserved.
>
> The telemetry and vhost libraries compat code is cleaned up in next
> commits.
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
Acked-by: Aaron Conole <aconole@redhat.com>
* [PATCH v3] build: update DPDK to use C11 standard
2023-07-31 10:38 4% [PATCH] build: update DPDK to use C11 standard Bruce Richardson
2023-07-31 15:58 4% ` [PATCH v2] " Bruce Richardson
@ 2023-07-31 16:58 4% ` Bruce Richardson
2023-08-01 13:15 4% ` [PATCH v4] " Bruce Richardson
2023-08-02 12:31 4% ` [PATCH v5] " Bruce Richardson
3 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2023-07-31 16:58 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Morten Brørup, Tyler Retzlaff
As previously announced, DPDK 23.11 will require a C11 supporting
compiler and will use the C11 standard in all builds.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
V3:
* remove (now unneeded) use of -std=gnu99 in failsafe net driver.
V2:
* Resubmit now that 23.11-rc0 patch applied
* Add _POSIX_C_SOURCE macro to eal_common_errno.c to get POSIX
definition of strerror_r() with c11 standard.
---
doc/guides/linux_gsg/sys_reqs.rst | 3 ++-
doc/guides/rel_notes/deprecation.rst | 18 ------------------
doc/guides/rel_notes/release_23_11.rst | 17 +++++++++++++++++
drivers/net/failsafe/meson.build | 1 -
lib/eal/common/eal_common_errno.c | 1 +
meson.build | 1 +
6 files changed, 21 insertions(+), 20 deletions(-)
diff --git a/doc/guides/linux_gsg/sys_reqs.rst b/doc/guides/linux_gsg/sys_reqs.rst
index dfeaf4e1c5..13be715933 100644
--- a/doc/guides/linux_gsg/sys_reqs.rst
+++ b/doc/guides/linux_gsg/sys_reqs.rst
@@ -27,7 +27,8 @@ Compilation of the DPDK
The setup commands and installed packages needed on various systems may be different.
For details on Linux distributions and the versions tested, please consult the DPDK Release Notes.
-* General development tools including a supported C compiler such as gcc (version 4.9+) or clang (version 3.4+),
+* General development tools including a C compiler supporting the C11 standard,
+ including standard atomics, for example: GCC (version 5.0+) or Clang (version 3.6+),
and ``pkg-config`` or ``pkgconf`` to be used when building end-user binaries against DPDK.
* For RHEL/Fedora systems these can be installed using ``dnf groupinstall "Development Tools"``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 494b401cda..cc939d3c67 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -17,24 +17,6 @@ Other API and ABI deprecation notices are to be posted below.
Deprecation Notices
-------------------
-* C Compiler: From DPDK 23.11 onwards,
- building DPDK will require a C compiler which supports the C11 standard,
- including support for C11 standard atomics.
-
- More specifically, the requirements will be:
-
- * Support for flag "-std=c11" (or similar)
- * __STDC_NO_ATOMICS__ is *not defined* when using c11 flag
-
- Please note:
-
- * C11, including standard atomics, is supported from GCC version 5 onwards,
- and is the default language version in that release
- (Ref: https://gcc.gnu.org/gcc-5/changes.html)
- * C11 is the default compilation mode in Clang from version 3.6,
- which also added support for standard atomics
- (Ref: https://releases.llvm.org/3.6.0/tools/clang/docs/ReleaseNotes.html)
-
* build: Enabling deprecated libraries (``flow_classify``, ``kni``)
won't be possible anymore through the use of the ``disable_libs`` build option.
A new build option for deprecated libraries will be introduced instead.
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 6b4dd21fd0..c8b9ed456c 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -20,6 +20,23 @@ DPDK Release 23.11
ninja -C build doc
xdg-open build/doc/guides/html/rel_notes/release_23_11.html
+* Build Requirements: From DPDK 23.11 onwards,
+ building DPDK will require a C compiler which supports the C11 standard,
+ including support for C11 standard atomics.
+
+ More specifically, the requirements will be:
+
+ * Support for flag "-std=c11" (or similar)
+ * __STDC_NO_ATOMICS__ is *not defined* when using c11 flag
+
+ Please note:
+
+ * C11, including standard atomics, is supported from GCC version 5 onwards,
+ and is the default language version in that release
+ (Ref: https://gcc.gnu.org/gcc-5/changes.html)
+ * C11 is the default compilation mode in Clang from version 3.6,
+ which also added support for standard atomics
+ (Ref: https://releases.llvm.org/3.6.0/tools/clang/docs/ReleaseNotes.html)
New Features
------------
diff --git a/drivers/net/failsafe/meson.build b/drivers/net/failsafe/meson.build
index 6013e13722..c1d361083b 100644
--- a/drivers/net/failsafe/meson.build
+++ b/drivers/net/failsafe/meson.build
@@ -7,7 +7,6 @@ if is_windows
subdir_done()
endif
-cflags += '-std=gnu99'
cflags += '-D_DEFAULT_SOURCE'
cflags += '-D_XOPEN_SOURCE=700'
cflags += '-pedantic'
diff --git a/lib/eal/common/eal_common_errno.c b/lib/eal/common/eal_common_errno.c
index ef8f782abb..b30e2f0ad4 100644
--- a/lib/eal/common/eal_common_errno.c
+++ b/lib/eal/common/eal_common_errno.c
@@ -4,6 +4,7 @@
/* Use XSI-compliant portable version of strerror_r() */
#undef _GNU_SOURCE
+#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <string.h>
diff --git a/meson.build b/meson.build
index 39cb73846d..70b54f0c98 100644
--- a/meson.build
+++ b/meson.build
@@ -9,6 +9,7 @@ project('DPDK', 'c',
license: 'BSD',
default_options: [
'buildtype=release',
+ 'c_std=c11',
'default_library=static',
'warning_level=2',
],
--
2.39.2
* Re: [PATCH v2] build: update DPDK to use C11 standard
2023-07-31 15:58 4% ` [PATCH v2] " Bruce Richardson
@ 2023-07-31 16:42 0% ` Tyler Retzlaff
0 siblings, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-07-31 16:42 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, Morten Brørup
On Mon, Jul 31, 2023 at 04:58:02PM +0100, Bruce Richardson wrote:
> As previously announced, DPDK 23.11 will require a C11 supporting
> compiler and will use the C11 standard in all builds.
>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
>
> ---
> V2:
> * Resubmit now that 23.11-rc0 patch applied
> * Add _POSIX_C_SOURCE macro to eal_common_errno.c to get POSIX
> definition of strerror_r() with c11 standard.
> ---
> doc/guides/linux_gsg/sys_reqs.rst | 3 ++-
> doc/guides/rel_notes/deprecation.rst | 18 ------------------
> doc/guides/rel_notes/release_23_11.rst | 17 +++++++++++++++++
> lib/eal/common/eal_common_errno.c | 1 +
> meson.build | 1 +
> 5 files changed, 21 insertions(+), 19 deletions(-)
>
> diff --git a/doc/guides/linux_gsg/sys_reqs.rst b/doc/guides/linux_gsg/sys_reqs.rst
> index dfeaf4e1c5..13be715933 100644
> --- a/doc/guides/linux_gsg/sys_reqs.rst
> +++ b/doc/guides/linux_gsg/sys_reqs.rst
> @@ -27,7 +27,8 @@ Compilation of the DPDK
> The setup commands and installed packages needed on various systems may be different.
> For details on Linux distributions and the versions tested, please consult the DPDK Release Notes.
>
> -* General development tools including a supported C compiler such as gcc (version 4.9+) or clang (version 3.4+),
> +* General development tools including a C compiler supporting the C11 standard,
> + including standard atomics, for example: GCC (version 5.0+) or Clang (version 3.6+),
> and ``pkg-config`` or ``pkgconf`` to be used when building end-user binaries against DPDK.
>
> * For RHEL/Fedora systems these can be installed using ``dnf groupinstall "Development Tools"``
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 494b401cda..cc939d3c67 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -17,24 +17,6 @@ Other API and ABI deprecation notices are to be posted below.
> Deprecation Notices
> -------------------
>
> -* C Compiler: From DPDK 23.11 onwards,
> - building DPDK will require a C compiler which supports the C11 standard,
> - including support for C11 standard atomics.
> -
> - More specifically, the requirements will be:
> -
> - * Support for flag "-std=c11" (or similar)
> - * __STDC_NO_ATOMICS__ is *not defined* when using c11 flag
> -
> - Please note:
> -
> - * C11, including standard atomics, is supported from GCC version 5 onwards,
> - and is the default language version in that release
> - (Ref: https://gcc.gnu.org/gcc-5/changes.html)
> - * C11 is the default compilation mode in Clang from version 3.6,
> - which also added support for standard atomics
> - (Ref: https://releases.llvm.org/3.6.0/tools/clang/docs/ReleaseNotes.html)
> -
> * build: Enabling deprecated libraries (``flow_classify``, ``kni``)
> won't be possible anymore through the use of the ``disable_libs`` build option.
> A new build option for deprecated libraries will be introduced instead.
> diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
> index 6b4dd21fd0..c8b9ed456c 100644
> --- a/doc/guides/rel_notes/release_23_11.rst
> +++ b/doc/guides/rel_notes/release_23_11.rst
> @@ -20,6 +20,23 @@ DPDK Release 23.11
> ninja -C build doc
> xdg-open build/doc/guides/html/rel_notes/release_23_11.html
>
> +* Build Requirements: From DPDK 23.11 onwards,
> + building DPDK will require a C compiler which supports the C11 standard,
> + including support for C11 standard atomics.
> +
> + More specifically, the requirements will be:
> +
> + * Support for flag "-std=c11" (or similar)
> + * __STDC_NO_ATOMICS__ is *not defined* when using c11 flag
> +
> + Please note:
> +
> + * C11, including standard atomics, is supported from GCC version 5 onwards,
> + and is the default language version in that release
> + (Ref: https://gcc.gnu.org/gcc-5/changes.html)
> + * C11 is the default compilation mode in Clang from version 3.6,
> + which also added support for standard atomics
> + (Ref: https://releases.llvm.org/3.6.0/tools/clang/docs/ReleaseNotes.html)
>
> New Features
> ------------
> diff --git a/lib/eal/common/eal_common_errno.c b/lib/eal/common/eal_common_errno.c
> index ef8f782abb..b30e2f0ad4 100644
> --- a/lib/eal/common/eal_common_errno.c
> +++ b/lib/eal/common/eal_common_errno.c
> @@ -4,6 +4,7 @@
>
> /* Use XSI-compliant portable version of strerror_r() */
> #undef _GNU_SOURCE
> +#define _POSIX_C_SOURCE 200809L
>
> #include <stdio.h>
> #include <string.h>
> diff --git a/meson.build b/meson.build
> index 39cb73846d..70b54f0c98 100644
> --- a/meson.build
> +++ b/meson.build
> @@ -9,6 +9,7 @@ project('DPDK', 'c',
> license: 'BSD',
> default_options: [
> 'buildtype=release',
> + 'c_std=c11',
> 'default_library=static',
> 'warning_level=2',
> ],
> --
oh I acked v2 (and you can maintain that ack) but one additional removal
of forced -std=gnu99 is maybe necessary?
drivers/net/failsafe/meson.build
probably should remove cflags += '-std=gnu99'
if we remove it we inherit the -std=c11 from meson project configuration
and define a _POSIX_C_SOURCE narrowly where necessary (if it is needed).
* [PATCH v2] build: update DPDK to use C11 standard
2023-07-31 10:38 4% [PATCH] build: update DPDK to use C11 standard Bruce Richardson
@ 2023-07-31 15:58 4% ` Bruce Richardson
2023-07-31 16:42 0% ` Tyler Retzlaff
2023-07-31 16:58 4% ` [PATCH v3] " Bruce Richardson
` (2 subsequent siblings)
3 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2023-07-31 15:58 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Morten Brørup
As previously announced, DPDK 23.11 will require a C11 supporting
compiler and will use the C11 standard in all builds.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
V2:
* Resubmit now that 23.11-rc0 patch applied
* Add _POSIX_C_SOURCE macro to eal_common_errno.c to get POSIX
definition of strerror_r() with c11 standard.
---
doc/guides/linux_gsg/sys_reqs.rst | 3 ++-
doc/guides/rel_notes/deprecation.rst | 18 ------------------
doc/guides/rel_notes/release_23_11.rst | 17 +++++++++++++++++
lib/eal/common/eal_common_errno.c | 1 +
meson.build | 1 +
5 files changed, 21 insertions(+), 19 deletions(-)
diff --git a/doc/guides/linux_gsg/sys_reqs.rst b/doc/guides/linux_gsg/sys_reqs.rst
index dfeaf4e1c5..13be715933 100644
--- a/doc/guides/linux_gsg/sys_reqs.rst
+++ b/doc/guides/linux_gsg/sys_reqs.rst
@@ -27,7 +27,8 @@ Compilation of the DPDK
The setup commands and installed packages needed on various systems may be different.
For details on Linux distributions and the versions tested, please consult the DPDK Release Notes.
-* General development tools including a supported C compiler such as gcc (version 4.9+) or clang (version 3.4+),
+* General development tools including a C compiler supporting the C11 standard,
+ including standard atomics, for example: GCC (version 5.0+) or Clang (version 3.6+),
and ``pkg-config`` or ``pkgconf`` to be used when building end-user binaries against DPDK.
* For RHEL/Fedora systems these can be installed using ``dnf groupinstall "Development Tools"``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 494b401cda..cc939d3c67 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -17,24 +17,6 @@ Other API and ABI deprecation notices are to be posted below.
Deprecation Notices
-------------------
-* C Compiler: From DPDK 23.11 onwards,
- building DPDK will require a C compiler which supports the C11 standard,
- including support for C11 standard atomics.
-
- More specifically, the requirements will be:
-
- * Support for flag "-std=c11" (or similar)
- * __STDC_NO_ATOMICS__ is *not defined* when using c11 flag
-
- Please note:
-
- * C11, including standard atomics, is supported from GCC version 5 onwards,
- and is the default language version in that release
- (Ref: https://gcc.gnu.org/gcc-5/changes.html)
- * C11 is the default compilation mode in Clang from version 3.6,
- which also added support for standard atomics
- (Ref: https://releases.llvm.org/3.6.0/tools/clang/docs/ReleaseNotes.html)
-
* build: Enabling deprecated libraries (``flow_classify``, ``kni``)
won't be possible anymore through the use of the ``disable_libs`` build option.
A new build option for deprecated libraries will be introduced instead.
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 6b4dd21fd0..c8b9ed456c 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -20,6 +20,23 @@ DPDK Release 23.11
ninja -C build doc
xdg-open build/doc/guides/html/rel_notes/release_23_11.html
+* Build Requirements: From DPDK 23.11 onwards,
+ building DPDK will require a C compiler which supports the C11 standard,
+ including support for C11 standard atomics.
+
+ More specifically, the requirements will be:
+
+ * Support for flag "-std=c11" (or similar)
+ * __STDC_NO_ATOMICS__ is *not defined* when using c11 flag
+
+ Please note:
+
+ * C11, including standard atomics, is supported from GCC version 5 onwards,
+ and is the default language version in that release
+ (Ref: https://gcc.gnu.org/gcc-5/changes.html)
+ * C11 is the default compilation mode in Clang from version 3.6,
+ which also added support for standard atomics
+ (Ref: https://releases.llvm.org/3.6.0/tools/clang/docs/ReleaseNotes.html)
New Features
------------
diff --git a/lib/eal/common/eal_common_errno.c b/lib/eal/common/eal_common_errno.c
index ef8f782abb..b30e2f0ad4 100644
--- a/lib/eal/common/eal_common_errno.c
+++ b/lib/eal/common/eal_common_errno.c
@@ -4,6 +4,7 @@
/* Use XSI-compliant portable version of strerror_r() */
#undef _GNU_SOURCE
+#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <string.h>
diff --git a/meson.build b/meson.build
index 39cb73846d..70b54f0c98 100644
--- a/meson.build
+++ b/meson.build
@@ -9,6 +9,7 @@ project('DPDK', 'c',
license: 'BSD',
default_options: [
'buildtype=release',
+ 'c_std=c11',
'default_library=static',
'warning_level=2',
],
--
2.39.2
* cmdline programmer documentation
@ 2023-07-31 15:41 3% Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-07-31 15:41 UTC (permalink / raw)
To: Olivier Matz; +Cc: dev
Noticed that the cmdline library is missing from the programmer's guide.
The only documentation for it is in the examples.
Should it be made part of the guide?
Is it really a stable ABI at this point - probably yes.
Although the API is awkward to use and there are not many tests,
it does work and has been used for years.
* [PATCH v7 0/3] Split logging functionality out of EAL
2023-07-31 10:17 3% ` [PATCH v6 0/3] Split logging functionality " Bruce Richardson
@ 2023-07-31 15:38 4% ` Bruce Richardson
2023-08-09 13:35 3% ` [PATCH v8 " Bruce Richardson
2 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2023-07-31 15:38 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson
There is a general desire to reduce the size and scope of EAL. To this
end, this patchset makes a (very) small step in that direction by taking
the logging functionality out of EAL and putting it into its own library
that can be built and maintained separately.
As with the first RFC for this, the main obstacle is the "fnmatch"
function which is needed by both EAL and the new log library when
building on Windows. The function cannot stay in EAL, or we would
have a circular dependency; moving it to a new library, or just putting
it in the log library, has the disadvantage that it then "leaks" into
the public namespace without an rte_ prefix, which could cause issues.
Since only a single function is involved, subsequent versions take a
different approach to v1, and just move the offending function to be a
static function in a header file. This allows use by multiple libs
without conflicting names or making it public.
The other complication, as explained in the v1 RFC, was that of multiple
implementations for different OSes. This is solved here in the same
way as in v1, by including the OS in the filename and having meson pick
the correct file for each build. Since only one file is involved, there
seemed little need to replicate EAL's separate per-OS subdirectories.
V7:
* re-submit to re-run CI with ABI checks disabled
v6:
* Updated ABI version to DPDK_24 for new log library for 23.11 release.
v5:
* rebased to latest main branch
* fixed trailing whitespace issues in new doc section
v4:
* Fixed windows build error, due to missing strdup (_strdup on windows)
* Added doc updates to programmers guide.
v3:
* Fixed missing log file for BSD
* Removed "eal" from the filenames of files in the log directory
* added prefixes to elements in the fnmatch header to avoid conflicts
* fixed space indentation in new lines in telemetry.c (checkpatch)
* removed "extern int logtype" definition in telemetry.c (checkpatch)
* added log directory to list for doxygen scanning
Bruce Richardson (3):
eal/windows: move fnmatch function to header file
log: separate logging functions out of EAL
telemetry: use standard logging
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/env_abstraction_layer.rst | 4 +-
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/log_lib.rst | 115 ++++++++++++
lib/eal/common/eal_common_options.c | 2 +-
lib/eal/common/eal_private.h | 7 -
lib/eal/common/meson.build | 1 -
lib/eal/freebsd/eal.c | 6 +-
lib/eal/include/meson.build | 1 -
lib/eal/linux/eal.c | 8 +-
lib/eal/linux/meson.build | 1 -
lib/eal/meson.build | 2 +-
lib/eal/version.map | 17 --
lib/eal/windows/eal.c | 2 +-
lib/eal/windows/fnmatch.c | 172 -----------------
lib/eal/windows/include/fnmatch.h | 175 ++++++++++++++++--
lib/eal/windows/meson.build | 2 -
lib/kvargs/meson.build | 3 +-
.../common/eal_common_log.c => log/log.c} | 7 +-
lib/log/log_freebsd.c | 12 ++
.../common/eal_log.h => log/log_internal.h} | 18 +-
lib/{eal/linux/eal_log.c => log/log_linux.c} | 2 +-
.../windows/eal_log.c => log/log_windows.c} | 2 +-
lib/log/meson.build | 9 +
lib/{eal/include => log}/rte_log.h | 0
lib/log/version.map | 34 ++++
lib/meson.build | 1 +
lib/telemetry/meson.build | 3 +-
lib/telemetry/telemetry.c | 11 +-
lib/telemetry/telemetry_internal.h | 3 +-
30 files changed, 370 insertions(+), 252 deletions(-)
create mode 100644 doc/guides/prog_guide/log_lib.rst
delete mode 100644 lib/eal/windows/fnmatch.c
rename lib/{eal/common/eal_common_log.c => log/log.c} (99%)
create mode 100644 lib/log/log_freebsd.c
rename lib/{eal/common/eal_log.h => log/log_internal.h} (69%)
rename lib/{eal/linux/eal_log.c => log/log_linux.c} (97%)
rename lib/{eal/windows/eal_log.c => log/log_windows.c} (93%)
create mode 100644 lib/log/meson.build
rename lib/{eal/include => log}/rte_log.h (100%)
create mode 100644 lib/log/version.map
--
2.39.2
* Re: [PATCH v2] kni: remove deprecated kernel network interface
2023-07-31 15:13 3% ` Stephen Hemminger
@ 2023-07-31 15:21 4% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2023-07-31 15:21 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Thomas Monjalon, dev, Maxime Coquelin, Chenbo Xia,
Anatoly Burakov, Cristian Dumitrescu, Nithin Dabilpuram,
Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Bruce Richardson
On Mon, Jul 31, 2023 at 5:13 PM Stephen Hemminger
<stephen@networkplumber.org> wrote:
> > > 2. The OVSrobot is looking into the port library to see the kni symbols.
> > > But port is marked as deprecated already.
> > > Perhaps we should just pull out port first?
> >
> > No we must support it until it is removed.
> > You should either disable or remove KNI from the port library.
>
> The patch removed from port library and it builds but the ABI
> robot was looking for symbols.
The ABI check was still active at the time you submitted this series.
Please resend it now that the ABI check is disabled with rc0 merged.
--
David Marchand
* Re: [PATCH v2] kni: remove deprecated kernel network interface
@ 2023-07-31 15:13 3% ` Stephen Hemminger
2023-07-31 15:21 4% ` David Marchand
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-07-31 15:13 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, Maxime Coquelin, Chenbo Xia, Anatoly Burakov,
Cristian Dumitrescu, Nithin Dabilpuram, Kiran Kumar K,
Sunil Kumar Kori, Satha Rao, Bruce Richardson, david.marchand
On Mon, 31 Jul 2023 10:40:35 +0200
Thomas Monjalon <thomas@monjalon.net> wrote:
> 30/07/2023 19:12, Stephen Hemminger:
> > On Sat, 29 Jul 2023 19:12:05 -0700
> > Stephen Hemminger <stephen@networkplumber.org> wrote:
> >
> > > Deprecation and removal was announced in 22.11.
> > > Make it so.
>
> Would be good to summarize the reason here,
> and name replacements.
>
> Also it should not be completely removed.
> I think we were supposed to move KNI into the kmod repository?
The decision I remember was to remove it completely.
KNI has several issues which make it unstable and even a potential
security problem. Moving it doesn't stop usage.
> > > Leave kernel/linux with empty directory because
> > > CI is trying to directly build it. At some later date,
> > > kernel/linux can be removed.
> > >
> > > Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> > > ---
> >
> > Want suggestions on this.
> > 1. The release notes gets coding style warning because checkpatch
> > is checking that release note matches current release, and release number
> > hasn't change yet. Should I just wait?
>
> Yes the release notes file for 23.11 should be created today.
>
>
> > 2. The OVSrobot is looking into the port library to see the kni symbols.
> > But port is marked as deprecated already.
> > Perhaps we should just pull out port first?
>
> No we must support it until it is removed.
> You should either disable or remove KNI from the port library.
The patch removed it from the port library, and it builds, but the ABI
robot was still looking for the symbols.
Will just remove the port library first.
* [PATCH] build: update DPDK to use C11 standard
@ 2023-07-31 10:38 4% Bruce Richardson
2023-07-31 15:58 4% ` [PATCH v2] " Bruce Richardson
` (3 more replies)
0 siblings, 4 replies; 200+ results
From: Bruce Richardson @ 2023-07-31 10:38 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson
As previously announced, DPDK 23.11 will require a C11 supporting
compiler and will use the C11 standard in all builds.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
When moving the information about the new requirement to the release
notes, a change like this doesn't seem to fit into any existing section.
Given its global scope and importance, I've therefore just put it at the
top of the file, rather than in any section.
---
doc/guides/linux_gsg/sys_reqs.rst | 3 ++-
doc/guides/rel_notes/deprecation.rst | 18 ------------------
doc/guides/rel_notes/release_23_11.rst | 17 +++++++++++++++++
meson.build | 1 +
4 files changed, 20 insertions(+), 19 deletions(-)
diff --git a/doc/guides/linux_gsg/sys_reqs.rst b/doc/guides/linux_gsg/sys_reqs.rst
index dfeaf4e1c5..13be715933 100644
--- a/doc/guides/linux_gsg/sys_reqs.rst
+++ b/doc/guides/linux_gsg/sys_reqs.rst
@@ -27,7 +27,8 @@ Compilation of the DPDK
The setup commands and installed packages needed on various systems may be different.
For details on Linux distributions and the versions tested, please consult the DPDK Release Notes.
-* General development tools including a supported C compiler such as gcc (version 4.9+) or clang (version 3.4+),
+* General development tools including a C compiler supporting the C11 standard,
+ including standard atomics, for example: GCC (version 5.0+) or Clang (version 3.6+),
and ``pkg-config`` or ``pkgconf`` to be used when building end-user binaries against DPDK.
* For RHEL/Fedora systems these can be installed using ``dnf groupinstall "Development Tools"``
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 494b401cda..cc939d3c67 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -17,24 +17,6 @@ Other API and ABI deprecation notices are to be posted below.
Deprecation Notices
-------------------
-* C Compiler: From DPDK 23.11 onwards,
- building DPDK will require a C compiler which supports the C11 standard,
- including support for C11 standard atomics.
-
- More specifically, the requirements will be:
-
- * Support for flag "-std=c11" (or similar)
- * __STDC_NO_ATOMICS__ is *not defined* when using c11 flag
-
- Please note:
-
- * C11, including standard atomics, is supported from GCC version 5 onwards,
- and is the default language version in that release
- (Ref: https://gcc.gnu.org/gcc-5/changes.html)
- * C11 is the default compilation mode in Clang from version 3.6,
- which also added support for standard atomics
- (Ref: https://releases.llvm.org/3.6.0/tools/clang/docs/ReleaseNotes.html)
-
* build: Enabling deprecated libraries (``flow_classify``, ``kni``)
won't be possible anymore through the use of the ``disable_libs`` build option.
A new build option for deprecated libraries will be introduced instead.
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 6b4dd21fd0..c8b9ed456c 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -20,6 +20,23 @@ DPDK Release 23.11
ninja -C build doc
xdg-open build/doc/guides/html/rel_notes/release_23_11.html
+* Build Requirements: From DPDK 23.11 onwards,
+ building DPDK will require a C compiler which supports the C11 standard,
+ including support for C11 standard atomics.
+
+ More specifically, the requirements will be:
+
+ * Support for flag "-std=c11" (or similar)
+ * __STDC_NO_ATOMICS__ is *not defined* when using c11 flag
+
+ Please note:
+
+ * C11, including standard atomics, is supported from GCC version 5 onwards,
+ and is the default language version in that release
+ (Ref: https://gcc.gnu.org/gcc-5/changes.html)
+ * C11 is the default compilation mode in Clang from version 3.6,
+ which also added support for standard atomics
+ (Ref: https://releases.llvm.org/3.6.0/tools/clang/docs/ReleaseNotes.html)
New Features
------------
diff --git a/meson.build b/meson.build
index 39cb73846d..70b54f0c98 100644
--- a/meson.build
+++ b/meson.build
@@ -9,6 +9,7 @@ project('DPDK', 'c',
license: 'BSD',
default_options: [
'buildtype=release',
+ 'c_std=c11',
'default_library=static',
'warning_level=2',
],
--
2.39.2
* [PATCH v6 0/3] Split logging functionality out of EAL
@ 2023-07-31 10:17 3% ` Bruce Richardson
2023-07-31 15:38 4% ` [PATCH v7 " Bruce Richardson
2023-08-09 13:35 3% ` [PATCH v8 " Bruce Richardson
2 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2023-07-31 10:17 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson
There is a general desire to reduce the size and scope of EAL. To this
end, this patchset makes a (very) small step in that direction by taking
the logging functionality out of EAL and putting it into its own library
that can be built and maintained separately.
As with the first RFC for this, the main obstacle is the "fnmatch"
function which is needed by both EAL and the new log library when
building on Windows. The function cannot stay in EAL, or we would
have a circular dependency; moving it to a new library, or just putting
it in the log library, has the disadvantage that it then "leaks" into
the public namespace without an rte_ prefix, which could cause issues.
Since only a single function is involved, subsequent versions take a
different approach to v1, and just move the offending function to be a
static function in a header file. This allows use by multiple libs
without conflicting names or making it public.
The other complication, as explained in the v1 RFC, was that of multiple
implementations for different OSes. This is solved here in the same
way as in v1, by including the OS in the filename and having meson pick
the correct file for each build. Since only one file is involved, there
seemed little need to replicate EAL's separate per-OS subdirectories.
v6:
* Updated ABI version to DPDK_24 for new log library for 23.11 release.
v5:
* rebased to latest main branch
* fixed trailing whitespace issues in new doc section
v4:
* Fixed windows build error, due to missing strdup (_strdup on windows)
* Added doc updates to programmers guide.
v3:
* Fixed missing log file for BSD
* Removed "eal" from the filenames of files in the log directory
* added prefixes to elements in the fnmatch header to avoid conflicts
* fixed space indentation in new lines in telemetry.c (checkpatch)
* removed "extern int logtype" definition in telemetry.c (checkpatch)
* added log directory to list for doxygen scanning
Bruce Richardson (3):
eal/windows: move fnmatch function to header file
log: separate logging functions out of EAL
telemetry: use standard logging
doc/api/doxy-api.conf.in | 1 +
.../prog_guide/env_abstraction_layer.rst | 4 +-
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/log_lib.rst | 115 ++++++++++++
lib/eal/common/eal_common_options.c | 2 +-
lib/eal/common/eal_private.h | 7 -
lib/eal/common/meson.build | 1 -
lib/eal/freebsd/eal.c | 6 +-
lib/eal/include/meson.build | 1 -
lib/eal/linux/eal.c | 8 +-
lib/eal/linux/meson.build | 1 -
lib/eal/meson.build | 2 +-
lib/eal/version.map | 17 --
lib/eal/windows/eal.c | 2 +-
lib/eal/windows/fnmatch.c | 172 -----------------
lib/eal/windows/include/fnmatch.h | 175 ++++++++++++++++--
lib/eal/windows/meson.build | 2 -
lib/kvargs/meson.build | 3 +-
.../common/eal_common_log.c => log/log.c} | 7 +-
lib/log/log_freebsd.c | 12 ++
.../common/eal_log.h => log/log_internal.h} | 18 +-
lib/{eal/linux/eal_log.c => log/log_linux.c} | 2 +-
.../windows/eal_log.c => log/log_windows.c} | 2 +-
lib/log/meson.build | 9 +
lib/{eal/include => log}/rte_log.h | 0
lib/log/version.map | 34 ++++
lib/meson.build | 1 +
lib/telemetry/meson.build | 3 +-
lib/telemetry/telemetry.c | 11 +-
lib/telemetry/telemetry_internal.h | 3 +-
30 files changed, 370 insertions(+), 252 deletions(-)
create mode 100644 doc/guides/prog_guide/log_lib.rst
delete mode 100644 lib/eal/windows/fnmatch.c
rename lib/{eal/common/eal_common_log.c => log/log.c} (99%)
create mode 100644 lib/log/log_freebsd.c
rename lib/{eal/common/eal_log.h => log/log_internal.h} (69%)
rename lib/{eal/linux/eal_log.c => log/log_linux.c} (97%)
rename lib/{eal/windows/eal_log.c => log/log_windows.c} (93%)
create mode 100644 lib/log/meson.build
rename lib/{eal/include => log}/rte_log.h (100%)
create mode 100644 lib/log/version.map
--
2.39.2
* Re: [PATCH 2/3] telemetry: remove v23 ABI compatibility
2023-07-31 9:43 8% ` [PATCH 2/3] telemetry: remove v23 ABI compatibility David Marchand
@ 2023-07-31 10:01 4% ` Bruce Richardson
0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2023-07-31 10:01 UTC (permalink / raw)
To: David Marchand; +Cc: dev, thomas, Ciara Power
On Mon, Jul 31, 2023 at 11:43:54AM +0200, David Marchand wrote:
> v23.11 is a ABI breaking release, remove compatibility code for the
> previous major ABI version.
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
* Re: [PATCH 1/3] version: 23.11-rc0
2023-07-31 9:43 12% ` [PATCH 1/3] " David Marchand
@ 2023-07-31 10:00 0% ` Bruce Richardson
2023-07-31 19:03 0% ` Aaron Conole
1 sibling, 0 replies; 200+ results
From: Bruce Richardson @ 2023-07-31 10:00 UTC (permalink / raw)
To: David Marchand
Cc: dev, thomas, Aaron Conole, Michael Santana, Nicolas Chautru,
Hemant Agrawal, Sachin Saxena, Chenbo Xia, Nipun Gupta,
Tomasz Duszynski, Long Li, Anoob Joseph, Kai Ji, Gagandeep Singh,
Timothy McDaniel, Ashwin Sekhar T K, Pavan Nikhilesh,
Igor Russkikh, Ajit Khaparde, Somnath Kotur, Chas Williams,
Min Hu (Connor),
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Yuying Zhang, Beilei Xing, Jingjing Wu, Qiming Yang, Qi Zhang,
Rosen Xu, Wenjun Wu, Matan Azrad, Viacheslav Ovsiienko, Ori Kam,
Suanming Mou, Harman Kalra, Cristian Dumitrescu, Maxime Coquelin,
Tianfei Zhang, Konstantin Ananyev, Olivier Matz, Akhil Goyal,
Fan Zhang, David Hunt, Byron Marohn, Yipeng Wang, Ferruh Yigit,
Andrew Rybchenko, Jerin Jacob, Vladimir Medvedkin, Jiayu Hu,
Sameh Gobriel, Reshma Pattan, Gaetan Rivet, Stephen Hemminger,
Anatoly Burakov, Honnappa Nagarahalli, Volodymyr Fialko,
Erik Gabriel Carrillo
On Mon, Jul 31, 2023 at 11:43:53AM +0200, David Marchand wrote:
> Start a new release cycle with empty release notes.
>
> The ABI version becomes 24.0.
> The map files are updated to the new ABI major number (24).
> The ABI exceptions are dropped and CI ABI checks are disabled because
> compatibility is not preserved.
>
> The telemetry and vhost libraries compat code is cleaned up in next
> commits.
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
* [PATCH 3/3] vhost: remove v23 ABI compatibility
2023-07-31 9:43 4% [PATCH 0/3] version: 23.11-rc0 David Marchand
2023-07-31 9:43 12% ` [PATCH 1/3] " David Marchand
2023-07-31 9:43 8% ` [PATCH 2/3] telemetry: remove v23 ABI compatibility David Marchand
@ 2023-07-31 9:43 8% ` David Marchand
2 siblings, 0 replies; 200+ results
From: David Marchand @ 2023-07-31 9:43 UTC (permalink / raw)
To: dev; +Cc: thomas, Maxime Coquelin, Chenbo Xia
v23.11 is an ABI-breaking release; remove compatibility code for the
previous major ABI version.
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
lib/vhost/meson.build | 2 --
lib/vhost/socket.c | 59 +++----------------------------------------
lib/vhost/version.map | 8 +-----
lib/vhost/vhost.h | 6 -----
4 files changed, 5 insertions(+), 70 deletions(-)
diff --git a/lib/vhost/meson.build b/lib/vhost/meson.build
index 94f3d2535e..41b622a9be 100644
--- a/lib/vhost/meson.build
+++ b/lib/vhost/meson.build
@@ -43,5 +43,3 @@ driver_sdk_headers = files(
'vdpa_driver.h',
)
deps += ['ethdev', 'cryptodev', 'hash', 'pci', 'dmadev']
-
-use_function_versioning = true
diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
index 033f4b3b75..fefe60fae6 100644
--- a/lib/vhost/socket.c
+++ b/lib/vhost/socket.c
@@ -15,7 +15,6 @@
#include <fcntl.h>
#include <pthread.h>
-#include <rte_function_versioning.h>
#include <rte_log.h>
#include "fd_man.h"
@@ -64,7 +63,6 @@ struct vhost_user_socket {
struct rte_vdpa_device *vdpa_dev;
struct rte_vhost_device_ops const *notify_ops;
- struct rte_vhost_device_ops *malloc_notify_ops;
};
struct vhost_user_connection {
@@ -880,7 +878,6 @@ vhost_user_socket_mem_free(struct vhost_user_socket *vsocket)
return;
free(vsocket->path);
- free(vsocket->malloc_notify_ops);
free(vsocket);
}
@@ -1146,69 +1143,21 @@ rte_vhost_driver_unregister(const char *path)
/*
* Register ops so that we can add/remove device to data core.
*/
-static int
-vhost_driver_callback_register(const char *path,
- struct rte_vhost_device_ops const * const ops,
- struct rte_vhost_device_ops *malloc_ops)
+int
+rte_vhost_driver_callback_register(const char *path,
+ struct rte_vhost_device_ops const * const ops)
{
struct vhost_user_socket *vsocket;
pthread_mutex_lock(&vhost_user.mutex);
vsocket = find_vhost_user_socket(path);
- if (vsocket) {
+ if (vsocket)
vsocket->notify_ops = ops;
- free(vsocket->malloc_notify_ops);
- vsocket->malloc_notify_ops = malloc_ops;
- }
pthread_mutex_unlock(&vhost_user.mutex);
return vsocket ? 0 : -1;
}
-int __vsym
-rte_vhost_driver_callback_register_v24(const char *path,
- struct rte_vhost_device_ops const * const ops)
-{
- return vhost_driver_callback_register(path, ops, NULL);
-}
-
-int __vsym
-rte_vhost_driver_callback_register_v23(const char *path,
- struct rte_vhost_device_ops const * const ops)
-{
- int ret;
-
- /*
- * Although the ops structure is a const structure, we do need to
- * override the guest_notify operation. This is because with the
- * previous APIs it was "reserved" and if any garbage value was passed,
- * it could crash the application.
- */
- if (ops && !ops->guest_notify) {
- struct rte_vhost_device_ops *new_ops;
-
- new_ops = malloc(sizeof(*new_ops));
- if (new_ops == NULL)
- return -1;
-
- memcpy(new_ops, ops, sizeof(*new_ops));
- new_ops->guest_notify = NULL;
-
- ret = vhost_driver_callback_register(path, new_ops, new_ops);
- } else {
- ret = vhost_driver_callback_register(path, ops, NULL);
- }
-
- return ret;
-}
-
-/* Mark the v23 function as the old version, and v24 as the default version. */
-VERSION_SYMBOL(rte_vhost_driver_callback_register, _v23, 23);
-BIND_DEFAULT_SYMBOL(rte_vhost_driver_callback_register, _v24, 24);
-MAP_STATIC_SYMBOL(int rte_vhost_driver_callback_register(const char *path,
- struct rte_vhost_device_ops const * const ops),
- rte_vhost_driver_callback_register_v24);
-
struct rte_vhost_device_ops const *
vhost_driver_callback_get(const char *path)
{
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index f5d9d68e2c..5bc133dafd 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_vdpa_find_device_by_name;
@@ -64,12 +64,6 @@ DPDK_23 {
local: *;
};
-DPDK_24 {
- global:
-
- rte_vhost_driver_callback_register;
-} DPDK_23;
-
EXPERIMENTAL {
global:
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index f49ce943b0..9723429b1c 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -1046,10 +1046,4 @@ mbuf_is_consumed(struct rte_mbuf *m)
void mem_set_dump(void *ptr, size_t size, bool enable, uint64_t alignment);
-/* Versioned functions */
-int rte_vhost_driver_callback_register_v23(const char *path,
- struct rte_vhost_device_ops const * const ops);
-int rte_vhost_driver_callback_register_v24(const char *path,
- struct rte_vhost_device_ops const * const ops);
-
#endif /* _VHOST_NET_CDEV_H_ */
--
2.41.0
* [PATCH 2/3] telemetry: remove v23 ABI compatibility
2023-07-31 9:43 4% [PATCH 0/3] version: 23.11-rc0 David Marchand
2023-07-31 9:43 12% ` [PATCH 1/3] " David Marchand
@ 2023-07-31 9:43 8% ` David Marchand
2023-07-31 10:01 4% ` Bruce Richardson
2023-07-31 9:43 8% ` [PATCH 3/3] vhost: " David Marchand
2 siblings, 1 reply; 200+ results
From: David Marchand @ 2023-07-31 9:43 UTC (permalink / raw)
To: dev; +Cc: thomas, Ciara Power
v23.11 is an ABI-breaking release; remove compatibility code for the
previous major ABI version.
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
lib/telemetry/meson.build | 1 -
lib/telemetry/telemetry_data.c | 33 ++++-----------------------------
lib/telemetry/telemetry_data.h | 6 ------
lib/telemetry/version.map | 9 +--------
4 files changed, 5 insertions(+), 44 deletions(-)
diff --git a/lib/telemetry/meson.build b/lib/telemetry/meson.build
index 73750d9ef4..f84c9aa3be 100644
--- a/lib/telemetry/meson.build
+++ b/lib/telemetry/meson.build
@@ -6,4 +6,3 @@ includes = [global_inc]
sources = files('telemetry.c', 'telemetry_data.c', 'telemetry_legacy.c')
headers = files('rte_telemetry.h')
includes += include_directories('../metrics')
-use_function_versioning = true
diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
index 0c7187bec1..3b1a2408df 100644
--- a/lib/telemetry/telemetry_data.c
+++ b/lib/telemetry/telemetry_data.c
@@ -10,7 +10,6 @@
#undef RTE_USE_LIBBSD
#include <stdbool.h>
-#include <rte_function_versioning.h>
#include <rte_string_fns.h>
#include "telemetry_data.h"
@@ -63,8 +62,8 @@ rte_tel_data_add_array_string(struct rte_tel_data *d, const char *str)
return bytes < RTE_TEL_MAX_STRING_LEN ? 0 : E2BIG;
}
-int __vsym
-rte_tel_data_add_array_int_v24(struct rte_tel_data *d, int64_t x)
+int
+rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
{
if (d->type != TEL_ARRAY_INT)
return -EINVAL;
@@ -74,18 +73,6 @@ rte_tel_data_add_array_int_v24(struct rte_tel_data *d, int64_t x)
return 0;
}
-int __vsym
-rte_tel_data_add_array_int_v23(struct rte_tel_data *d, int x)
-{
- return rte_tel_data_add_array_int_v24(d, x);
-}
-
-/* mark the v23 function as the older version, and v24 as the default version */
-VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
-BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
-MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
- int64_t x), rte_tel_data_add_array_int_v24);
-
int
rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
{
@@ -190,8 +177,8 @@ rte_tel_data_add_dict_string(struct rte_tel_data *d, const char *name,
return 0;
}
-int __vsym
-rte_tel_data_add_dict_int_v24(struct rte_tel_data *d, const char *name, int64_t val)
+int
+rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
{
struct tel_dict_entry *e = &d->data.dict[d->data_len];
if (d->type != TEL_DICT)
@@ -209,18 +196,6 @@ rte_tel_data_add_dict_int_v24(struct rte_tel_data *d, const char *name, int64_t
return bytes < RTE_TEL_MAX_STRING_LEN ? 0 : E2BIG;
}
-int __vsym
-rte_tel_data_add_dict_int_v23(struct rte_tel_data *d, const char *name, int val)
-{
- return rte_tel_data_add_dict_int_v24(d, name, val);
-}
-
-/* mark the v23 function as the older version, and v24 as the default version */
-VERSION_SYMBOL(rte_tel_data_add_dict_int, _v23, 23);
-BIND_DEFAULT_SYMBOL(rte_tel_data_add_dict_int, _v24, 24);
-MAP_STATIC_SYMBOL(int rte_tel_data_add_dict_int(struct rte_tel_data *d,
- const char *name, int64_t val), rte_tel_data_add_dict_int_v24);
-
int
rte_tel_data_add_dict_uint(struct rte_tel_data *d,
const char *name, uint64_t val)
diff --git a/lib/telemetry/telemetry_data.h b/lib/telemetry/telemetry_data.h
index 53e4cabea5..205509c5a2 100644
--- a/lib/telemetry/telemetry_data.h
+++ b/lib/telemetry/telemetry_data.h
@@ -49,10 +49,4 @@ struct rte_tel_data {
} data; /* data container */
};
-/* versioned functions */
-int rte_tel_data_add_array_int_v23(struct rte_tel_data *d, int val);
-int rte_tel_data_add_array_int_v24(struct rte_tel_data *d, int64_t val);
-int rte_tel_data_add_dict_int_v23(struct rte_tel_data *d, const char *name, int val);
-int rte_tel_data_add_dict_int_v24(struct rte_tel_data *d, const char *name, int64_t val);
-
#endif
diff --git a/lib/telemetry/version.map b/lib/telemetry/version.map
index af978b883d..7d12c92905 100644
--- a/lib/telemetry/version.map
+++ b/lib/telemetry/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_tel_data_add_array_container;
@@ -31,13 +31,6 @@ EXPERIMENTAL {
local: *;
};
-DPDK_24 {
- global:
-
- rte_tel_data_add_array_int;
- rte_tel_data_add_dict_int;
-} DPDK_23;
-
INTERNAL {
rte_telemetry_legacy_register;
rte_telemetry_init;
--
2.41.0
* [PATCH 1/3] version: 23.11-rc0
2023-07-31 9:43 4% [PATCH 0/3] version: 23.11-rc0 David Marchand
@ 2023-07-31 9:43 12% ` David Marchand
2023-07-31 10:00 0% ` Bruce Richardson
2023-07-31 19:03 0% ` Aaron Conole
2023-07-31 9:43 8% ` [PATCH 2/3] telemetry: remove v23 ABI compatibility David Marchand
2023-07-31 9:43 8% ` [PATCH 3/3] vhost: " David Marchand
2 siblings, 2 replies; 200+ results
From: David Marchand @ 2023-07-31 9:43 UTC (permalink / raw)
To: dev
Cc: thomas, Aaron Conole, Michael Santana, Nicolas Chautru,
Hemant Agrawal, Sachin Saxena, Chenbo Xia, Nipun Gupta,
Tomasz Duszynski, Long Li, Anoob Joseph, Kai Ji, Gagandeep Singh,
Timothy McDaniel, Ashwin Sekhar T K, Pavan Nikhilesh,
Igor Russkikh, Ajit Khaparde, Somnath Kotur, Chas Williams,
Min Hu (Connor),
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Yuying Zhang, Beilei Xing, Jingjing Wu, Qiming Yang, Qi Zhang,
Rosen Xu, Wenjun Wu, Matan Azrad, Viacheslav Ovsiienko, Ori Kam,
Suanming Mou, Harman Kalra, Bruce Richardson,
Cristian Dumitrescu, Maxime Coquelin, Tianfei Zhang,
Konstantin Ananyev, Olivier Matz, Akhil Goyal, Fan Zhang,
David Hunt, Byron Marohn, Yipeng Wang, Ferruh Yigit,
Andrew Rybchenko, Jerin Jacob, Vladimir Medvedkin, Jiayu Hu,
Sameh Gobriel, Reshma Pattan, Gaetan Rivet, Stephen Hemminger,
Anatoly Burakov, Honnappa Nagarahalli, Volodymyr Fialko,
Erik Gabriel Carrillo
Start a new release cycle with empty release notes.
The ABI version becomes 24.0.
The map files are updated to the new ABI major number (24).
The ABI exceptions are dropped and CI ABI checks are disabled because
compatibility is not preserved.
The telemetry and vhost libraries compat code is cleaned up in next
commits.
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
.github/workflows/build.yml | 4 +-
ABI_VERSION | 2 +-
VERSION | 2 +-
devtools/libabigail.abignore | 5 -
doc/guides/rel_notes/index.rst | 1 +
doc/guides/rel_notes/release_23_11.rst | 136 +++++++++++++++++++++
drivers/baseband/acc/version.map | 2 +-
drivers/baseband/fpga_5gnr_fec/version.map | 2 +-
drivers/baseband/fpga_lte_fec/version.map | 2 +-
drivers/bus/fslmc/version.map | 2 +-
drivers/bus/pci/version.map | 2 +-
drivers/bus/platform/version.map | 2 +-
drivers/bus/vdev/version.map | 2 +-
drivers/bus/vmbus/version.map | 2 +-
drivers/crypto/octeontx/version.map | 2 +-
drivers/crypto/scheduler/version.map | 2 +-
drivers/dma/dpaa2/version.map | 2 +-
drivers/event/dlb2/version.map | 2 +-
drivers/mempool/cnxk/version.map | 8 +-
drivers/mempool/dpaa2/version.map | 2 +-
drivers/net/atlantic/version.map | 2 +-
drivers/net/bnxt/version.map | 2 +-
drivers/net/bonding/version.map | 2 +-
drivers/net/cnxk/version.map | 2 +-
drivers/net/dpaa/version.map | 2 +-
drivers/net/dpaa2/version.map | 2 +-
drivers/net/i40e/version.map | 2 +-
drivers/net/iavf/version.map | 2 +-
drivers/net/ice/version.map | 2 +-
drivers/net/ipn3ke/version.map | 2 +-
drivers/net/ixgbe/version.map | 2 +-
drivers/net/mlx5/version.map | 2 +-
drivers/net/octeontx/version.map | 2 +-
drivers/net/ring/version.map | 2 +-
drivers/net/softnic/version.map | 2 +-
drivers/net/vhost/version.map | 2 +-
drivers/raw/ifpga/version.map | 2 +-
drivers/version.map | 2 +-
lib/acl/version.map | 2 +-
lib/bbdev/version.map | 2 +-
lib/bitratestats/version.map | 2 +-
lib/bpf/version.map | 2 +-
lib/cfgfile/version.map | 2 +-
lib/cmdline/version.map | 2 +-
lib/cryptodev/version.map | 2 +-
lib/distributor/version.map | 2 +-
lib/eal/version.map | 2 +-
lib/efd/version.map | 2 +-
lib/ethdev/version.map | 2 +-
lib/eventdev/version.map | 2 +-
lib/fib/version.map | 2 +-
lib/gro/version.map | 2 +-
lib/gso/version.map | 2 +-
lib/hash/version.map | 2 +-
lib/ip_frag/version.map | 2 +-
lib/ipsec/version.map | 2 +-
lib/jobstats/version.map | 2 +-
lib/kni/version.map | 2 +-
lib/kvargs/version.map | 2 +-
lib/latencystats/version.map | 2 +-
lib/lpm/version.map | 2 +-
lib/mbuf/version.map | 2 +-
lib/member/version.map | 2 +-
lib/mempool/version.map | 2 +-
lib/meter/version.map | 2 +-
lib/metrics/version.map | 2 +-
lib/net/version.map | 2 +-
lib/pci/version.map | 2 +-
lib/pdump/version.map | 2 +-
lib/pipeline/version.map | 2 +-
lib/port/version.map | 2 +-
lib/power/version.map | 2 +-
lib/rawdev/version.map | 2 +-
lib/rcu/version.map | 2 +-
lib/reorder/version.map | 2 +-
lib/rib/version.map | 2 +-
lib/ring/version.map | 2 +-
lib/sched/version.map | 2 +-
lib/security/version.map | 2 +-
lib/stack/version.map | 2 +-
lib/table/version.map | 2 +-
lib/timer/version.map | 2 +-
82 files changed, 220 insertions(+), 88 deletions(-)
create mode 100644 doc/guides/rel_notes/release_23_11.rst
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index d3bcb160cf..2c1eda9b18 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -27,7 +27,7 @@ jobs:
MINGW: ${{ matrix.config.cross == 'mingw' }}
MINI: ${{ matrix.config.mini != '' }}
PPC64LE: ${{ matrix.config.cross == 'ppc64le' }}
- REF_GIT_TAG: v23.03
+ REF_GIT_TAG: none
RISCV64: ${{ matrix.config.cross == 'riscv64' }}
RUN_TESTS: ${{ contains(matrix.config.checks, 'tests') }}
@@ -40,7 +40,7 @@ jobs:
mini: mini
- os: ubuntu-20.04
compiler: gcc
- checks: abi+debug+doc+examples+tests
+ checks: debug+doc+examples+tests
- os: ubuntu-20.04
compiler: clang
checks: asan+doc+tests
diff --git a/ABI_VERSION b/ABI_VERSION
index 3c8ce91a46..d9133a54b6 100644
--- a/ABI_VERSION
+++ b/ABI_VERSION
@@ -1 +1 @@
-23.2
+24.0
diff --git a/VERSION b/VERSION
index 942d403ae8..1d4e4e7927 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-23.07.0
+23.11.0-rc0
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 03bfbce259..3ff51509de 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -25,7 +25,6 @@
;
; SKIP_LIBRARY=librte_common_mlx5_glue
; SKIP_LIBRARY=librte_net_mlx4_glue
-; SKIP_LIBRARY=librte_net_liquidio
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; Experimental APIs exceptions ;
@@ -41,7 +40,3 @@
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; Temporary exceptions till next major ABI version ;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
-
-; Ignore changes to rte_security_ops which are internal to PMD.
-[suppress_type]
- name = rte_security_ops
diff --git a/doc/guides/rel_notes/index.rst b/doc/guides/rel_notes/index.rst
index d8dfa621ec..d072815279 100644
--- a/doc/guides/rel_notes/index.rst
+++ b/doc/guides/rel_notes/index.rst
@@ -8,6 +8,7 @@ Release Notes
:maxdepth: 1
:numbered:
+ release_23_11
release_23_07
release_23_03
release_22_11
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
new file mode 100644
index 0000000000..6b4dd21fd0
--- /dev/null
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -0,0 +1,136 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2023 The DPDK contributors
+
+.. include:: <isonum.txt>
+
+DPDK Release 23.11
+==================
+
+.. **Read this first.**
+
+ The text in the sections below explains how to update the release notes.
+
+ Use proper spelling, capitalization and punctuation in all sections.
+
+ Variable and config names should be quoted as fixed width text:
+ ``LIKE_THIS``.
+
+ Build the docs and view the output file to ensure the changes are correct::
+
+ ninja -C build doc
+ xdg-open build/doc/guides/html/rel_notes/release_23_11.html
+
+
+New Features
+------------
+
+.. This section should contain new features added in this release.
+ Sample format:
+
+ * **Add a title in the past tense with a full stop.**
+
+ Add a short 1-2 sentence description in the past tense.
+ The description should be enough to allow someone scanning
+ the release notes to understand the new feature.
+
+ If the feature adds a lot of sub-features you can use a bullet list
+ like this:
+
+ * Added feature foo to do something.
+ * Enhanced feature bar to do something else.
+
+ Refer to the previous release notes for examples.
+
+ Suggested order in release notes items:
+ * Core libs (EAL, mempool, ring, mbuf, buses)
+ * Device abstraction libs and PMDs (ordered alphabetically by vendor name)
+ - ethdev (lib, PMDs)
+ - cryptodev (lib, PMDs)
+ - eventdev (lib, PMDs)
+ - etc
+ * Other libs
+ * Apps, Examples, Tools (if significant)
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+Removed Items
+-------------
+
+.. This section should contain removed items in this release. Sample format:
+
+ * Add a short 1-2 sentence description of the removed item
+ in the past tense.
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+API Changes
+-----------
+
+.. This section should contain API changes. Sample format:
+
+ * sample: Add a short 1-2 sentence description of the API change
+ which was announced in the previous releases and made in this release.
+ Start with a scope label like "ethdev:".
+ Use fixed width quotes for ``function_names`` or ``struct_names``.
+ Use the past tense.
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+ABI Changes
+-----------
+
+.. This section should contain ABI changes. Sample format:
+
+ * sample: Add a short 1-2 sentence description of the ABI change
+ which was announced in the previous releases and made in this release.
+ Start with a scope label like "ethdev:".
+ Use fixed width quotes for ``function_names`` or ``struct_names``.
+ Use the past tense.
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+Known Issues
+------------
+
+.. This section should contain new known issues in this release. Sample format:
+
+ * **Add title in present tense with full stop.**
+
+ Add a short 1-2 sentence description of the known issue
+ in the present tense. Add information on any known workarounds.
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+Tested Platforms
+----------------
+
+.. This section should contain a list of platforms that were tested
+ with this release.
+
+ The format is:
+
+ * <vendor> platform with <vendor> <type of devices> combinations
+
+ * List of CPU
+ * List of OS
+ * List of devices
+ * Other relevant details...
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
diff --git a/drivers/baseband/acc/version.map b/drivers/baseband/acc/version.map
index 95ae74dd35..1b6b1cd10d 100644
--- a/drivers/baseband/acc/version.map
+++ b/drivers/baseband/acc/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
local: *;
};
diff --git a/drivers/baseband/fpga_5gnr_fec/version.map b/drivers/baseband/fpga_5gnr_fec/version.map
index 6b191cf330..2da20cabc1 100644
--- a/drivers/baseband/fpga_5gnr_fec/version.map
+++ b/drivers/baseband/fpga_5gnr_fec/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
local: *;
};
diff --git a/drivers/baseband/fpga_lte_fec/version.map b/drivers/baseband/fpga_lte_fec/version.map
index aab28a9976..83f3a8a267 100644
--- a/drivers/baseband/fpga_lte_fec/version.map
+++ b/drivers/baseband/fpga_lte_fec/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
local: *;
};
diff --git a/drivers/bus/fslmc/version.map b/drivers/bus/fslmc/version.map
index a25a9e8ca0..f6bdf877bf 100644
--- a/drivers/bus/fslmc/version.map
+++ b/drivers/bus/fslmc/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_fslmc_vfio_mem_dmamap;
diff --git a/drivers/bus/pci/version.map b/drivers/bus/pci/version.map
index 92fcaca094..a0000f7938 100644
--- a/drivers/bus/pci/version.map
+++ b/drivers/bus/pci/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_pci_dump;
diff --git a/drivers/bus/platform/version.map b/drivers/bus/platform/version.map
index bacce4da08..9e7111dd38 100644
--- a/drivers/bus/platform/version.map
+++ b/drivers/bus/platform/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
local: *;
};
diff --git a/drivers/bus/vdev/version.map b/drivers/bus/vdev/version.map
index 594c48c3db..16f187734b 100644
--- a/drivers/bus/vdev/version.map
+++ b/drivers/bus/vdev/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_vdev_add_custom_scan;
diff --git a/drivers/bus/vmbus/version.map b/drivers/bus/vmbus/version.map
index 430781b29b..08b008b311 100644
--- a/drivers/bus/vmbus/version.map
+++ b/drivers/bus/vmbus/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_vmbus_chan_close;
diff --git a/drivers/crypto/octeontx/version.map b/drivers/crypto/octeontx/version.map
index cc4b6b0970..54a0912e76 100644
--- a/drivers/crypto/octeontx/version.map
+++ b/drivers/crypto/octeontx/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
local: *;
};
diff --git a/drivers/crypto/scheduler/version.map b/drivers/crypto/scheduler/version.map
index 74491beabb..23380fb3c5 100644
--- a/drivers/crypto/scheduler/version.map
+++ b/drivers/crypto/scheduler/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_cryptodev_scheduler_load_user_scheduler;
diff --git a/drivers/dma/dpaa2/version.map b/drivers/dma/dpaa2/version.map
index 0c020e5249..7dc2d6e185 100644
--- a/drivers/dma/dpaa2/version.map
+++ b/drivers/dma/dpaa2/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
local: *;
};
diff --git a/drivers/event/dlb2/version.map b/drivers/event/dlb2/version.map
index 1327e3e335..8aabf8b727 100644
--- a/drivers/event/dlb2/version.map
+++ b/drivers/event/dlb2/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
local: *;
};
diff --git a/drivers/mempool/cnxk/version.map b/drivers/mempool/cnxk/version.map
index 755731e3b5..775d46d934 100644
--- a/drivers/mempool/cnxk/version.map
+++ b/drivers/mempool/cnxk/version.map
@@ -1,10 +1,10 @@
- DPDK_23 {
+DPDK_24 {
local: *;
- };
+};
- EXPERIMENTAL {
+EXPERIMENTAL {
global:
rte_pmd_cnxk_mempool_is_hwpool;
rte_pmd_cnxk_mempool_mbuf_exchange;
rte_pmd_cnxk_mempool_range_check_disable;
- };
+};
diff --git a/drivers/mempool/dpaa2/version.map b/drivers/mempool/dpaa2/version.map
index 0023765843..b2bf63eb79 100644
--- a/drivers/mempool/dpaa2/version.map
+++ b/drivers/mempool/dpaa2/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_dpaa2_mbuf_from_buf_addr;
diff --git a/drivers/net/atlantic/version.map b/drivers/net/atlantic/version.map
index e301b105fe..b063baa7a4 100644
--- a/drivers/net/atlantic/version.map
+++ b/drivers/net/atlantic/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
local: *;
};
diff --git a/drivers/net/bnxt/version.map b/drivers/net/bnxt/version.map
index 075bb37a36..ff82396ca1 100644
--- a/drivers/net/bnxt/version.map
+++ b/drivers/net/bnxt/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_pmd_bnxt_get_vf_rx_status;
diff --git a/drivers/net/bonding/version.map b/drivers/net/bonding/version.map
index 9333923b4e..bd28ee78a5 100644
--- a/drivers/net/bonding/version.map
+++ b/drivers/net/bonding/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_eth_bond_8023ad_agg_selection_get;
diff --git a/drivers/net/cnxk/version.map b/drivers/net/cnxk/version.map
index 3ef3e76bb0..7ae6d80bf0 100644
--- a/drivers/net/cnxk/version.map
+++ b/drivers/net/cnxk/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
local: *;
};
diff --git a/drivers/net/dpaa/version.map b/drivers/net/dpaa/version.map
index 5268d39ef6..c06f4a56de 100644
--- a/drivers/net/dpaa/version.map
+++ b/drivers/net/dpaa/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_pmd_dpaa_set_tx_loopback;
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index d6535343b1..283bcb42c1 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_pmd_dpaa2_mux_flow_create;
diff --git a/drivers/net/i40e/version.map b/drivers/net/i40e/version.map
index 4d1ac59226..3ba31f4768 100644
--- a/drivers/net/i40e/version.map
+++ b/drivers/net/i40e/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_pmd_i40e_add_vf_mac_addr;
diff --git a/drivers/net/iavf/version.map b/drivers/net/iavf/version.map
index 4796c2884f..135a4ccd3d 100644
--- a/drivers/net/iavf/version.map
+++ b/drivers/net/iavf/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
local: *;
};
diff --git a/drivers/net/ice/version.map b/drivers/net/ice/version.map
index d70c250e9a..4e924c8f4d 100644
--- a/drivers/net/ice/version.map
+++ b/drivers/net/ice/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
local: *;
};
diff --git a/drivers/net/ipn3ke/version.map b/drivers/net/ipn3ke/version.map
index 4c48499993..4a8f5e499a 100644
--- a/drivers/net/ipn3ke/version.map
+++ b/drivers/net/ipn3ke/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
local: *;
};
diff --git a/drivers/net/ixgbe/version.map b/drivers/net/ixgbe/version.map
index 94693ccc1a..2c9d977f5c 100644
--- a/drivers/net/ixgbe/version.map
+++ b/drivers/net/ixgbe/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_pmd_ixgbe_bypass_event_show;
diff --git a/drivers/net/mlx5/version.map b/drivers/net/mlx5/version.map
index 7ef598027b..99f5ab754a 100644
--- a/drivers/net/mlx5/version.map
+++ b/drivers/net/mlx5/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
local: *;
};
diff --git a/drivers/net/octeontx/version.map b/drivers/net/octeontx/version.map
index ae37d32d04..219933550d 100644
--- a/drivers/net/octeontx/version.map
+++ b/drivers/net/octeontx/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_octeontx_pchan_map;
diff --git a/drivers/net/ring/version.map b/drivers/net/ring/version.map
index 84e52064e0..62d9a77f9c 100644
--- a/drivers/net/ring/version.map
+++ b/drivers/net/ring/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_eth_from_ring;
diff --git a/drivers/net/softnic/version.map b/drivers/net/softnic/version.map
index 4dac46ecd5..f67475684c 100644
--- a/drivers/net/softnic/version.map
+++ b/drivers/net/softnic/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_pmd_softnic_manage;
diff --git a/drivers/net/vhost/version.map b/drivers/net/vhost/version.map
index e42c89f1eb..4825afd411 100644
--- a/drivers/net/vhost/version.map
+++ b/drivers/net/vhost/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_eth_vhost_get_queue_event;
diff --git a/drivers/raw/ifpga/version.map b/drivers/raw/ifpga/version.map
index 916da8a4f2..7fc1b5e8ae 100644
--- a/drivers/raw/ifpga/version.map
+++ b/drivers/raw/ifpga/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_pmd_ifpga_cleanup;
diff --git a/drivers/version.map b/drivers/version.map
index 78c3585d7c..5535c79061 100644
--- a/drivers/version.map
+++ b/drivers/version.map
@@ -1,3 +1,3 @@
-DPDK_23 {
+DPDK_24 {
local: *;
};
diff --git a/lib/acl/version.map b/lib/acl/version.map
index 4c15dbbb36..fe3127a3a9 100644
--- a/lib/acl/version.map
+++ b/lib/acl/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_acl_add_rules;
diff --git a/lib/bbdev/version.map b/lib/bbdev/version.map
index d0bb835255..4f4bfbbd5e 100644
--- a/lib/bbdev/version.map
+++ b/lib/bbdev/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_bbdev_allocate;
diff --git a/lib/bitratestats/version.map b/lib/bitratestats/version.map
index dc110440e0..08831a62f4 100644
--- a/lib/bitratestats/version.map
+++ b/lib/bitratestats/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_stats_bitrate_calc;
diff --git a/lib/bpf/version.map b/lib/bpf/version.map
index 04bd657a85..c49bf1701f 100644
--- a/lib/bpf/version.map
+++ b/lib/bpf/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_bpf_destroy;
diff --git a/lib/cfgfile/version.map b/lib/cfgfile/version.map
index fdb0f13040..a3fe9b62f3 100644
--- a/lib/cfgfile/version.map
+++ b/lib/cfgfile/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_cfgfile_add_entry;
diff --git a/lib/cmdline/version.map b/lib/cmdline/version.map
index e3d59aaf8d..db4d904ffb 100644
--- a/lib/cmdline/version.map
+++ b/lib/cmdline/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
cirbuf_add_buf_head;
diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
index 24ff90799c..209806cf24 100644
--- a/lib/cryptodev/version.map
+++ b/lib/cryptodev/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_crypto_aead_algorithm_strings;
diff --git a/lib/distributor/version.map b/lib/distributor/version.map
index 7a34dfa2f2..2670c4201c 100644
--- a/lib/distributor/version.map
+++ b/lib/distributor/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_distributor_clear_returns;
diff --git a/lib/eal/version.map b/lib/eal/version.map
index ea1b1a7d0a..bdb98cf479 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
__rte_panic;
diff --git a/lib/efd/version.map b/lib/efd/version.map
index 67886414ab..baac60f7bc 100644
--- a/lib/efd/version.map
+++ b/lib/efd/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_efd_create;
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index fc492ee839..b965d6aa52 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_eth_add_first_rx_callback;
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index 89068a5713..b03c10d99f 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
__rte_eventdev_trace_crypto_adapter_enqueue;
diff --git a/lib/fib/version.map b/lib/fib/version.map
index a867d2b7d8..62dbada6bc 100644
--- a/lib/fib/version.map
+++ b/lib/fib/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_fib6_add;
diff --git a/lib/gro/version.map b/lib/gro/version.map
index 105aa64ca3..13803ec814 100644
--- a/lib/gro/version.map
+++ b/lib/gro/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_gro_ctx_create;
diff --git a/lib/gso/version.map b/lib/gso/version.map
index f6b552de6d..f159b3f199 100644
--- a/lib/gso/version.map
+++ b/lib/gso/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_gso_segment;
diff --git a/lib/hash/version.map b/lib/hash/version.map
index bdcebd19c2..daaa9a8901 100644
--- a/lib/hash/version.map
+++ b/lib/hash/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_fbk_hash_create;
diff --git a/lib/ip_frag/version.map b/lib/ip_frag/version.map
index 8aad83957d..7ba446c993 100644
--- a/lib/ip_frag/version.map
+++ b/lib/ip_frag/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_ip_frag_free_death_row;
diff --git a/lib/ipsec/version.map b/lib/ipsec/version.map
index f17a49dd26..f0063af354 100644
--- a/lib/ipsec/version.map
+++ b/lib/ipsec/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_ipsec_pkt_crypto_group;
diff --git a/lib/jobstats/version.map b/lib/jobstats/version.map
index bca7480afb..3b8f9d6ac4 100644
--- a/lib/jobstats/version.map
+++ b/lib/jobstats/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_jobstats_abort;
diff --git a/lib/kni/version.map b/lib/kni/version.map
index 83bbbe880f..13ffaa5bfd 100644
--- a/lib/kni/version.map
+++ b/lib/kni/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_kni_alloc;
diff --git a/lib/kvargs/version.map b/lib/kvargs/version.map
index 781f71cf23..387a94e725 100644
--- a/lib/kvargs/version.map
+++ b/lib/kvargs/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_kvargs_count;
diff --git a/lib/latencystats/version.map b/lib/latencystats/version.map
index 79b8395f12..86ded322cb 100644
--- a/lib/latencystats/version.map
+++ b/lib/latencystats/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_latencystats_get;
diff --git a/lib/lpm/version.map b/lib/lpm/version.map
index e1a7aaedbb..9ba73b2f93 100644
--- a/lib/lpm/version.map
+++ b/lib/lpm/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_lpm6_add;
diff --git a/lib/mbuf/version.map b/lib/mbuf/version.map
index ed486ed14e..f010d4692e 100644
--- a/lib/mbuf/version.map
+++ b/lib/mbuf/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
__rte_pktmbuf_linearize;
diff --git a/lib/member/version.map b/lib/member/version.map
index 35199270ff..9be5068d68 100644
--- a/lib/member/version.map
+++ b/lib/member/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_member_add;
diff --git a/lib/mempool/version.map b/lib/mempool/version.map
index dff2d1cb55..d0bfedd1d8 100644
--- a/lib/mempool/version.map
+++ b/lib/mempool/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_mempool_audit;
diff --git a/lib/meter/version.map b/lib/meter/version.map
index b10b544641..9628bd8cd9 100644
--- a/lib/meter/version.map
+++ b/lib/meter/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_meter_srtcm_config;
diff --git a/lib/metrics/version.map b/lib/metrics/version.map
index 89ffa9be80..4763ac6b8b 100644
--- a/lib/metrics/version.map
+++ b/lib/metrics/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_metrics_deinit;
diff --git a/lib/net/version.map b/lib/net/version.map
index e8fe2b7635..3e293c4715 100644
--- a/lib/net/version.map
+++ b/lib/net/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_eth_random_addr;
diff --git a/lib/pci/version.map b/lib/pci/version.map
index e9282ff49c..aeca8a1c9e 100644
--- a/lib/pci/version.map
+++ b/lib/pci/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_pci_addr_cmp;
diff --git a/lib/pdump/version.map b/lib/pdump/version.map
index 25df5a82c2..225830dc85 100644
--- a/lib/pdump/version.map
+++ b/lib/pdump/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_pdump_disable;
diff --git a/lib/pipeline/version.map b/lib/pipeline/version.map
index 3a4488cd0e..6e3f5b7e80 100644
--- a/lib/pipeline/version.map
+++ b/lib/pipeline/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_pipeline_ah_packet_drop;
diff --git a/lib/port/version.map b/lib/port/version.map
index af6cf696fd..83dbec7b01 100644
--- a/lib/port/version.map
+++ b/lib/port/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_port_ethdev_reader_ops;
diff --git a/lib/power/version.map b/lib/power/version.map
index 05d544e947..b8b54f768e 100644
--- a/lib/power/version.map
+++ b/lib/power/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_power_exit;
diff --git a/lib/rawdev/version.map b/lib/rawdev/version.map
index 8278aacdea..21064a889b 100644
--- a/lib/rawdev/version.map
+++ b/lib/rawdev/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_rawdev_close;
diff --git a/lib/rcu/version.map b/lib/rcu/version.map
index cabed64fca..9218ed1f33 100644
--- a/lib/rcu/version.map
+++ b/lib/rcu/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_rcu_log_type;
diff --git a/lib/reorder/version.map b/lib/reorder/version.map
index 0b3d4d5685..ea60759106 100644
--- a/lib/reorder/version.map
+++ b/lib/reorder/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_reorder_create;
diff --git a/lib/rib/version.map b/lib/rib/version.map
index ca2815e44b..39da637f75 100644
--- a/lib/rib/version.map
+++ b/lib/rib/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_rib6_create;
diff --git a/lib/ring/version.map b/lib/ring/version.map
index 4d7c27a6d9..9eb6e254c8 100644
--- a/lib/ring/version.map
+++ b/lib/ring/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_ring_create;
diff --git a/lib/sched/version.map b/lib/sched/version.map
index 2f64834c8f..d9ce68be14 100644
--- a/lib/sched/version.map
+++ b/lib/sched/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_approx;
diff --git a/lib/security/version.map b/lib/security/version.map
index 07dcce9ffb..b2097a969d 100644
--- a/lib/security/version.map
+++ b/lib/security/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_security_capabilities_get;
diff --git a/lib/stack/version.map b/lib/stack/version.map
index c0250f5cdf..d191ef7791 100644
--- a/lib/stack/version.map
+++ b/lib/stack/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_stack_create;
diff --git a/lib/table/version.map b/lib/table/version.map
index e32e15a5fc..05ed820119 100644
--- a/lib/table/version.map
+++ b/lib/table/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_table_acl_ops;
diff --git a/lib/timer/version.map b/lib/timer/version.map
index 101f5c18b5..e3d5a04303 100644
--- a/lib/timer/version.map
+++ b/lib/timer/version.map
@@ -1,4 +1,4 @@
-DPDK_23 {
+DPDK_24 {
global:
rte_timer_alt_dump_stats;
--
2.41.0
^ permalink raw reply [relevance 12%]
* [PATCH 0/3] version: 23.11-rc0
@ 2023-07-31 9:43 4% David Marchand
2023-07-31 9:43 12% ` [PATCH 1/3] " David Marchand
` (2 more replies)
0 siblings, 3 replies; 200+ results
From: David Marchand @ 2023-07-31 9:43 UTC (permalink / raw)
To: dev; +Cc: thomas
Prepare the new release.
I chose to separate the compat code cleanup in the telemetry and vhost
libraries to make it easier to review, though the 3 patches could be
squashed into a single change.
--
David Marchand
David Marchand (3):
version: 23.11-rc0
telemetry: remove v23 ABI compatibility
vhost: remove v23 ABI compatibility
.github/workflows/build.yml | 4 +-
ABI_VERSION | 2 +-
VERSION | 2 +-
devtools/libabigail.abignore | 5 -
doc/guides/rel_notes/index.rst | 1 +
doc/guides/rel_notes/release_23_11.rst | 136 +++++++++++++++++++++
drivers/baseband/acc/version.map | 2 +-
drivers/baseband/fpga_5gnr_fec/version.map | 2 +-
drivers/baseband/fpga_lte_fec/version.map | 2 +-
drivers/bus/fslmc/version.map | 2 +-
drivers/bus/pci/version.map | 2 +-
drivers/bus/platform/version.map | 2 +-
drivers/bus/vdev/version.map | 2 +-
drivers/bus/vmbus/version.map | 2 +-
drivers/crypto/octeontx/version.map | 2 +-
drivers/crypto/scheduler/version.map | 2 +-
drivers/dma/dpaa2/version.map | 2 +-
drivers/event/dlb2/version.map | 2 +-
drivers/mempool/cnxk/version.map | 8 +-
drivers/mempool/dpaa2/version.map | 2 +-
drivers/net/atlantic/version.map | 2 +-
drivers/net/bnxt/version.map | 2 +-
drivers/net/bonding/version.map | 2 +-
drivers/net/cnxk/version.map | 2 +-
drivers/net/dpaa/version.map | 2 +-
drivers/net/dpaa2/version.map | 2 +-
drivers/net/i40e/version.map | 2 +-
drivers/net/iavf/version.map | 2 +-
drivers/net/ice/version.map | 2 +-
drivers/net/ipn3ke/version.map | 2 +-
drivers/net/ixgbe/version.map | 2 +-
drivers/net/mlx5/version.map | 2 +-
drivers/net/octeontx/version.map | 2 +-
drivers/net/ring/version.map | 2 +-
drivers/net/softnic/version.map | 2 +-
drivers/net/vhost/version.map | 2 +-
drivers/raw/ifpga/version.map | 2 +-
drivers/version.map | 2 +-
lib/acl/version.map | 2 +-
lib/bbdev/version.map | 2 +-
lib/bitratestats/version.map | 2 +-
lib/bpf/version.map | 2 +-
lib/cfgfile/version.map | 2 +-
lib/cmdline/version.map | 2 +-
lib/cryptodev/version.map | 2 +-
lib/distributor/version.map | 2 +-
lib/eal/version.map | 2 +-
lib/efd/version.map | 2 +-
lib/ethdev/version.map | 2 +-
lib/eventdev/version.map | 2 +-
lib/fib/version.map | 2 +-
lib/gro/version.map | 2 +-
lib/gso/version.map | 2 +-
lib/hash/version.map | 2 +-
lib/ip_frag/version.map | 2 +-
lib/ipsec/version.map | 2 +-
lib/jobstats/version.map | 2 +-
lib/kni/version.map | 2 +-
lib/kvargs/version.map | 2 +-
lib/latencystats/version.map | 2 +-
lib/lpm/version.map | 2 +-
lib/mbuf/version.map | 2 +-
lib/member/version.map | 2 +-
lib/mempool/version.map | 2 +-
lib/meter/version.map | 2 +-
lib/metrics/version.map | 2 +-
lib/net/version.map | 2 +-
lib/pci/version.map | 2 +-
lib/pdump/version.map | 2 +-
lib/pipeline/version.map | 2 +-
lib/port/version.map | 2 +-
lib/power/version.map | 2 +-
lib/rawdev/version.map | 2 +-
lib/rcu/version.map | 2 +-
lib/reorder/version.map | 2 +-
lib/rib/version.map | 2 +-
lib/ring/version.map | 2 +-
lib/sched/version.map | 2 +-
lib/security/version.map | 2 +-
lib/stack/version.map | 2 +-
lib/table/version.map | 2 +-
lib/telemetry/meson.build | 1 -
lib/telemetry/telemetry_data.c | 33 +----
lib/telemetry/telemetry_data.h | 6 -
lib/telemetry/version.map | 9 +-
lib/timer/version.map | 2 +-
lib/vhost/meson.build | 2 -
lib/vhost/socket.c | 59 +--------
lib/vhost/version.map | 8 +-
lib/vhost/vhost.h | 6 -
90 files changed, 230 insertions(+), 202 deletions(-)
create mode 100644 doc/guides/rel_notes/release_23_11.rst
--
2.41.0
^ permalink raw reply [relevance 4%]
* RE: [EXT] Re: [PATCH v2] doc: announce new major ABI version
2023-07-28 17:02 7% ` Patrick Robb
2023-07-28 17:33 4% ` Thomas Monjalon
@ 2023-07-31 4:42 8% ` Akhil Goyal
1 sibling, 0 replies; 200+ results
From: Akhil Goyal @ 2023-07-31 4:42 UTC (permalink / raw)
To: Patrick Robb, Thomas Monjalon; +Cc: Bruce Richardson, dev
I believe it is still enabled in some checks.
http://mails.dpdk.org/archives/test-report/2023-July/432810.html
This is reported for today’s patch.
From: Patrick Robb <probb@iol.unh.edu>
Sent: Friday, July 28, 2023 10:32 PM
To: Thomas Monjalon <thomas@monjalon.net>
Cc: Bruce Richardson <bruce.richardson@intel.com>; dev@dpdk.org
Subject: [EXT] Re: [PATCH v2] doc: announce new major ABI version
The Community Lab's ABI testing on new patchseries is now disabled until the 23.11 release. Thanks.
^ permalink raw reply [relevance 8%]
* [PATCH v2] kni: remove deprecated kernel network interface
2023-07-29 22:54 1% [PATCH] kni: remove deprecated kernel network interface Stephen Hemminger
@ 2023-07-30 2:12 1% ` Stephen Hemminger
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-07-30 2:12 UTC (permalink / raw)
To: dev
Cc: Stephen Hemminger, Thomas Monjalon, Maxime Coquelin, Chenbo Xia,
Anatoly Burakov, Cristian Dumitrescu, Nithin Dabilpuram,
Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Bruce Richardson
Deprecation and removal were announced in 22.11.
Make it so.
Leave kernel/linux as an empty directory because
CI tries to build it directly. At some later date,
kernel/linux can be removed.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
v2 - fix doc and CI build
MAINTAINERS | 10 -
app/test/meson.build | 2 -
app/test/test_kni.c | 740 ---------------
doc/api/doxy-api-index.md | 2 -
doc/api/doxy-api.conf.in | 1 -
doc/guides/contributing/documentation.rst | 2 +-
doc/guides/howto/flow_bifurcation.rst | 3 +-
doc/guides/nics/index.rst | 1 -
doc/guides/nics/kni.rst | 170 ----
doc/guides/nics/virtio.rst | 92 +-
.../prog_guide/env_abstraction_layer.rst | 2 -
doc/guides/prog_guide/glossary.rst | 3 -
doc/guides/prog_guide/index.rst | 1 -
.../prog_guide/kernel_nic_interface.rst | 423 ---------
doc/guides/prog_guide/packet_framework.rst | 9 +-
doc/guides/rel_notes/deprecation.rst | 9 +-
doc/guides/rel_notes/index.rst | 1 +
doc/guides/rel_notes/release_23_11.rst | 16 +
doc/guides/sample_app_ug/ip_pipeline.rst | 22 -
drivers/net/cnxk/cnxk_ethdev.c | 2 +-
drivers/net/kni/meson.build | 11 -
drivers/net/kni/rte_eth_kni.c | 524 -----------
drivers/net/meson.build | 1 -
examples/ip_pipeline/Makefile | 1 -
examples/ip_pipeline/cli.c | 95 --
examples/ip_pipeline/examples/kni.cli | 69 --
examples/ip_pipeline/kni.c | 168 ----
examples/ip_pipeline/kni.h | 46 -
examples/ip_pipeline/main.c | 10 -
examples/ip_pipeline/meson.build | 1 -
examples/ip_pipeline/pipeline.c | 57 --
examples/ip_pipeline/pipeline.h | 2 -
kernel/linux/kni/Kbuild | 6 -
kernel/linux/kni/compat.h | 157 ----
kernel/linux/kni/kni_dev.h | 137 ---
kernel/linux/kni/kni_fifo.h | 87 --
kernel/linux/kni/kni_misc.c | 719 --------------
kernel/linux/kni/kni_net.c | 878 ------------------
kernel/linux/kni/meson.build | 41 -
kernel/linux/meson.build | 2 +-
lib/eal/common/eal_common_log.c | 1 -
lib/eal/include/rte_log.h | 2 +-
lib/eal/linux/eal.c | 19 -
lib/kni/meson.build | 21 -
lib/kni/rte_kni.c | 843 -----------------
lib/kni/rte_kni.h | 269 ------
lib/kni/rte_kni_common.h | 147 ---
lib/kni/rte_kni_fifo.h | 117 ---
lib/kni/version.map | 24 -
lib/meson.build | 3 -
lib/port/meson.build | 6 -
lib/port/rte_port_kni.c | 515 ----------
lib/port/rte_port_kni.h | 63 --
lib/port/version.map | 3 -
meson_options.txt | 2 +-
55 files changed, 28 insertions(+), 6530 deletions(-)
delete mode 100644 app/test/test_kni.c
delete mode 100644 doc/guides/nics/kni.rst
delete mode 100644 doc/guides/prog_guide/kernel_nic_interface.rst
create mode 100644 doc/guides/rel_notes/release_23_11.rst
delete mode 100644 drivers/net/kni/meson.build
delete mode 100644 drivers/net/kni/rte_eth_kni.c
delete mode 100644 examples/ip_pipeline/examples/kni.cli
delete mode 100644 examples/ip_pipeline/kni.c
delete mode 100644 examples/ip_pipeline/kni.h
delete mode 100644 kernel/linux/kni/Kbuild
delete mode 100644 kernel/linux/kni/compat.h
delete mode 100644 kernel/linux/kni/kni_dev.h
delete mode 100644 kernel/linux/kni/kni_fifo.h
delete mode 100644 kernel/linux/kni/kni_misc.c
delete mode 100644 kernel/linux/kni/kni_net.c
delete mode 100644 kernel/linux/kni/meson.build
delete mode 100644 lib/kni/meson.build
delete mode 100644 lib/kni/rte_kni.c
delete mode 100644 lib/kni/rte_kni.h
delete mode 100644 lib/kni/rte_kni_common.h
delete mode 100644 lib/kni/rte_kni_fifo.h
delete mode 100644 lib/kni/version.map
delete mode 100644 lib/port/rte_port_kni.c
delete mode 100644 lib/port/rte_port_kni.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 18bc05fccd0d..6ad45569bcd2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -617,12 +617,6 @@ F: doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
F: app/test/test_link_bonding*
F: examples/bond/
-Linux KNI
-F: kernel/linux/kni/
-F: lib/kni/
-F: doc/guides/prog_guide/kernel_nic_interface.rst
-F: app/test/test_kni.c
-
Linux AF_PACKET
M: John W. Linville <linville@tuxdriver.com>
F: drivers/net/af_packet/
@@ -1027,10 +1021,6 @@ F: drivers/net/tap/
F: doc/guides/nics/tap.rst
F: doc/guides/nics/features/tap.ini
-KNI PMD
-F: drivers/net/kni/
-F: doc/guides/nics/kni.rst
-
Ring PMD
M: Bruce Richardson <bruce.richardson@intel.com>
F: drivers/net/ring/
diff --git a/app/test/meson.build b/app/test/meson.build
index b89cf0368fb5..de895cc8fc52 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -72,7 +72,6 @@ test_sources = files(
'test_ipsec.c',
'test_ipsec_sad.c',
'test_ipsec_perf.c',
- 'test_kni.c',
'test_kvargs.c',
'test_lcores.c',
'test_logs.c',
@@ -237,7 +236,6 @@ fast_tests = [
['fbarray_autotest', true, true],
['hash_readwrite_func_autotest', false, true],
['ipsec_autotest', true, true],
- ['kni_autotest', false, true],
['kvargs_autotest', true, true],
['member_autotest', true, true],
['power_cpufreq_autotest', false, true],
diff --git a/app/test/test_kni.c b/app/test/test_kni.c
deleted file mode 100644
index 4039da0b080c..000000000000
--- a/app/test/test_kni.c
+++ /dev/null
@@ -1,740 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#include "test.h"
-
-#include <stdio.h>
-#include <stdint.h>
-#include <unistd.h>
-#include <string.h>
-#if !defined(RTE_EXEC_ENV_LINUX) || !defined(RTE_LIB_KNI)
-
-static int
-test_kni(void)
-{
- printf("KNI not supported, skipping test\n");
- return TEST_SKIPPED;
-}
-
-#else
-
-#include <sys/wait.h>
-#include <dirent.h>
-
-#include <rte_string_fns.h>
-#include <rte_mempool.h>
-#include <rte_ethdev.h>
-#include <rte_cycles.h>
-#include <rte_kni.h>
-
-#define NB_MBUF 8192
-#define MAX_PACKET_SZ 2048
-#define MBUF_DATA_SZ (MAX_PACKET_SZ + RTE_PKTMBUF_HEADROOM)
-#define PKT_BURST_SZ 32
-#define MEMPOOL_CACHE_SZ PKT_BURST_SZ
-#define SOCKET 0
-#define NB_RXD 1024
-#define NB_TXD 1024
-#define KNI_TIMEOUT_MS 5000 /* ms */
-
-#define IFCONFIG "/sbin/ifconfig "
-#define TEST_KNI_PORT "test_kni_port"
-#define KNI_MODULE_PATH "/sys/module/rte_kni"
-#define KNI_MODULE_PARAM_LO KNI_MODULE_PATH"/parameters/lo_mode"
-#define KNI_TEST_MAX_PORTS 4
-/* The threshold number of mbufs to be transmitted or received. */
-#define KNI_NUM_MBUF_THRESHOLD 100
-static int kni_pkt_mtu = 0;
-
-struct test_kni_stats {
- volatile uint64_t ingress;
- volatile uint64_t egress;
-};
-
-static const struct rte_eth_rxconf rx_conf = {
- .rx_thresh = {
- .pthresh = 8,
- .hthresh = 8,
- .wthresh = 4,
- },
- .rx_free_thresh = 0,
-};
-
-static const struct rte_eth_txconf tx_conf = {
- .tx_thresh = {
- .pthresh = 36,
- .hthresh = 0,
- .wthresh = 0,
- },
- .tx_free_thresh = 0,
- .tx_rs_thresh = 0,
-};
-
-static const struct rte_eth_conf port_conf = {
- .txmode = {
- .mq_mode = RTE_ETH_MQ_TX_NONE,
- },
-};
-
-static struct rte_kni_ops kni_ops = {
- .change_mtu = NULL,
- .config_network_if = NULL,
- .config_mac_address = NULL,
- .config_promiscusity = NULL,
-};
-
-static unsigned int lcore_main, lcore_ingress, lcore_egress;
-static struct rte_kni *test_kni_ctx;
-static struct test_kni_stats stats;
-
-static volatile uint32_t test_kni_processing_flag;
-
-static struct rte_mempool *
-test_kni_create_mempool(void)
-{
- struct rte_mempool * mp;
-
- mp = rte_mempool_lookup("kni_mempool");
- if (!mp)
- mp = rte_pktmbuf_pool_create("kni_mempool",
- NB_MBUF,
- MEMPOOL_CACHE_SZ, 0, MBUF_DATA_SZ,
- SOCKET);
-
- return mp;
-}
-
-static struct rte_mempool *
-test_kni_lookup_mempool(void)
-{
- return rte_mempool_lookup("kni_mempool");
-}
-/* Callback for request of changing MTU */
-static int
-kni_change_mtu(uint16_t port_id, unsigned int new_mtu)
-{
- printf("Change MTU of port %d to %u\n", port_id, new_mtu);
- kni_pkt_mtu = new_mtu;
- printf("Change MTU of port %d to %i successfully.\n",
- port_id, kni_pkt_mtu);
- return 0;
-}
-
-static int
-test_kni_link_change(void)
-{
- int ret;
- int pid;
-
- pid = fork();
- if (pid < 0) {
- printf("Error: Failed to fork a process\n");
- return -1;
- }
-
- if (pid == 0) {
- printf("Starting KNI Link status change tests.\n");
- if (system(IFCONFIG TEST_KNI_PORT" up") == -1) {
- ret = -1;
- goto error;
- }
-
- ret = rte_kni_update_link(test_kni_ctx, 1);
- if (ret < 0) {
- printf("Failed to change link state to Up ret=%d.\n",
- ret);
- goto error;
- }
- rte_delay_ms(1000);
- printf("KNI: Set LINKUP, previous state=%d\n", ret);
-
- ret = rte_kni_update_link(test_kni_ctx, 0);
- if (ret != 1) {
- printf(
- "Failed! Previous link state should be 1, returned %d.\n",
- ret);
- goto error;
- }
- rte_delay_ms(1000);
- printf("KNI: Set LINKDOWN, previous state=%d\n", ret);
-
- ret = rte_kni_update_link(test_kni_ctx, 1);
- if (ret != 0) {
- printf(
- "Failed! Previous link state should be 0, returned %d.\n",
- ret);
- goto error;
- }
- printf("KNI: Set LINKUP, previous state=%d\n", ret);
-
- ret = 0;
- rte_delay_ms(1000);
-
-error:
- if (system(IFCONFIG TEST_KNI_PORT" down") == -1)
- ret = -1;
-
- printf("KNI: Link status change tests: %s.\n",
- (ret == 0) ? "Passed" : "Failed");
- exit(ret);
- } else {
- int p_ret, status;
-
- while (1) {
- p_ret = waitpid(pid, &status, WNOHANG);
- if (p_ret != 0) {
- if (WIFEXITED(status))
- return WEXITSTATUS(status);
- return -1;
- }
- rte_delay_ms(10);
- rte_kni_handle_request(test_kni_ctx);
- }
- }
-}
-/**
- * This loop fully tests the basic functions of KNI. e.g. transmitting,
- * receiving to, from kernel space, and kernel requests.
- *
- * This is the loop to transmit/receive mbufs to/from kernel interface with
- * supported by KNI kernel module. The ingress lcore will allocate mbufs and
- * transmit them to kernel space; while the egress lcore will receive the mbufs
- * from kernel space and free them.
- * On the main lcore, several commands will be run to check handling the
- * kernel requests. And it will finally set the flag to exit the KNI
- * transmitting/receiving to/from the kernel space.
- *
- * Note: To support this testing, the KNI kernel module needs to be insmodded
- * in one of its loopback modes.
- */
-static int
-test_kni_loop(__rte_unused void *arg)
-{
- int ret = 0;
- unsigned nb_rx, nb_tx, num, i;
- const unsigned lcore_id = rte_lcore_id();
- struct rte_mbuf *pkts_burst[PKT_BURST_SZ];
-
- if (lcore_id == lcore_main) {
- rte_delay_ms(KNI_TIMEOUT_MS);
- /* tests of handling kernel request */
- if (system(IFCONFIG TEST_KNI_PORT" up") == -1)
- ret = -1;
- if (system(IFCONFIG TEST_KNI_PORT" mtu 1400") == -1)
- ret = -1;
- if (system(IFCONFIG TEST_KNI_PORT" down") == -1)
- ret = -1;
- rte_delay_ms(KNI_TIMEOUT_MS);
- test_kni_processing_flag = 1;
- } else if (lcore_id == lcore_ingress) {
- struct rte_mempool *mp = test_kni_lookup_mempool();
-
- if (mp == NULL)
- return -1;
-
- while (1) {
- if (test_kni_processing_flag)
- break;
-
- for (nb_rx = 0; nb_rx < PKT_BURST_SZ; nb_rx++) {
- pkts_burst[nb_rx] = rte_pktmbuf_alloc(mp);
- if (!pkts_burst[nb_rx])
- break;
- }
-
- num = rte_kni_tx_burst(test_kni_ctx, pkts_burst,
- nb_rx);
- stats.ingress += num;
- rte_kni_handle_request(test_kni_ctx);
- if (num < nb_rx) {
- for (i = num; i < nb_rx; i++) {
- rte_pktmbuf_free(pkts_burst[i]);
- }
- }
- rte_delay_ms(10);
- }
- } else if (lcore_id == lcore_egress) {
- while (1) {
- if (test_kni_processing_flag)
- break;
- num = rte_kni_rx_burst(test_kni_ctx, pkts_burst,
- PKT_BURST_SZ);
- stats.egress += num;
- for (nb_tx = 0; nb_tx < num; nb_tx++)
- rte_pktmbuf_free(pkts_burst[nb_tx]);
- rte_delay_ms(10);
- }
- }
-
- return ret;
-}
-
-static int
-test_kni_allocate_lcores(void)
-{
- unsigned i, count = 0;
-
- lcore_main = rte_get_main_lcore();
- printf("main lcore: %u\n", lcore_main);
- for (i = 0; i < RTE_MAX_LCORE; i++) {
- if (count >=2 )
- break;
- if (rte_lcore_is_enabled(i) && i != lcore_main) {
- count ++;
- if (count == 1)
- lcore_ingress = i;
- else if (count == 2)
- lcore_egress = i;
- }
- }
- printf("count: %u\n", count);
-
- return count == 2 ? 0 : -1;
-}
-
-static int
-test_kni_register_handler_mp(void)
-{
-#define TEST_KNI_HANDLE_REQ_COUNT 10 /* 5s */
-#define TEST_KNI_HANDLE_REQ_INTERVAL 500 /* ms */
-#define TEST_KNI_MTU 1450
-#define TEST_KNI_MTU_STR " 1450"
- int pid;
-
- pid = fork();
- if (pid < 0) {
- printf("Failed to fork a process\n");
- return -1;
- } else if (pid == 0) {
- int i;
- struct rte_kni *kni = rte_kni_get(TEST_KNI_PORT);
- struct rte_kni_ops ops = {
- .change_mtu = kni_change_mtu,
- .config_network_if = NULL,
- .config_mac_address = NULL,
- .config_promiscusity = NULL,
- };
-
- if (!kni) {
- printf("Failed to get KNI named %s\n", TEST_KNI_PORT);
- exit(-1);
- }
-
- kni_pkt_mtu = 0;
-
- /* Check with the invalid parameters */
- if (rte_kni_register_handlers(kni, NULL) == 0) {
- printf("Unexpectedly register successfully "
- "with NULL ops pointer\n");
- exit(-1);
- }
- if (rte_kni_register_handlers(NULL, &ops) == 0) {
- printf("Unexpectedly register successfully "
- "to NULL KNI device pointer\n");
- exit(-1);
- }
-
- if (rte_kni_register_handlers(kni, &ops)) {
- printf("Fail to register ops\n");
- exit(-1);
- }
-
- /* Check registering again after it has been registered */
- if (rte_kni_register_handlers(kni, &ops) == 0) {
- printf("Unexpectedly register successfully after "
- "it has already been registered\n");
- exit(-1);
- }
-
- /**
- * Handle the request of setting MTU,
- * with registered handlers.
- */
- for (i = 0; i < TEST_KNI_HANDLE_REQ_COUNT; i++) {
- rte_kni_handle_request(kni);
- if (kni_pkt_mtu == TEST_KNI_MTU)
- break;
- rte_delay_ms(TEST_KNI_HANDLE_REQ_INTERVAL);
- }
- if (i >= TEST_KNI_HANDLE_REQ_COUNT) {
- printf("MTU has not been set\n");
- exit(-1);
- }
-
- kni_pkt_mtu = 0;
- if (rte_kni_unregister_handlers(kni) < 0) {
- printf("Fail to unregister ops\n");
- exit(-1);
- }
-
- /* Check with invalid parameter */
- if (rte_kni_unregister_handlers(NULL) == 0) {
- exit(-1);
- }
-
- /**
- * Handle the request of setting MTU,
- * without registered handlers.
- */
- for (i = 0; i < TEST_KNI_HANDLE_REQ_COUNT; i++) {
- rte_kni_handle_request(kni);
- if (kni_pkt_mtu != 0)
- break;
- rte_delay_ms(TEST_KNI_HANDLE_REQ_INTERVAL);
- }
- if (kni_pkt_mtu != 0) {
- printf("MTU shouldn't be set\n");
- exit(-1);
- }
-
- exit(0);
- } else {
- int p_ret, status;
-
- rte_delay_ms(1000);
- if (system(IFCONFIG TEST_KNI_PORT " mtu" TEST_KNI_MTU_STR)
- == -1)
- return -1;
-
- rte_delay_ms(1000);
- if (system(IFCONFIG TEST_KNI_PORT " mtu" TEST_KNI_MTU_STR)
- == -1)
- return -1;
-
- p_ret = wait(&status);
- if (!WIFEXITED(status)) {
- printf("Child process (%d) exit abnormally\n", p_ret);
- return -1;
- }
- if (WEXITSTATUS(status) != 0) {
- printf("Child process exit with failure\n");
- return -1;
- }
- }
-
- return 0;
-}
-
-static int
-test_kni_processing(uint16_t port_id, struct rte_mempool *mp)
-{
- int ret = 0;
- unsigned i;
- struct rte_kni *kni;
- struct rte_kni_conf conf;
- struct rte_eth_dev_info info;
- struct rte_kni_ops ops;
-
- if (!mp)
- return -1;
-
- memset(&conf, 0, sizeof(conf));
- memset(&info, 0, sizeof(info));
- memset(&ops, 0, sizeof(ops));
-
- ret = rte_eth_dev_info_get(port_id, &info);
- if (ret != 0) {
- printf("Error during getting device (port %u) info: %s\n",
- port_id, strerror(-ret));
- return -1;
- }
-
- snprintf(conf.name, sizeof(conf.name), TEST_KNI_PORT);
-
- /* core id 1 configured for kernel thread */
- conf.core_id = 1;
- conf.force_bind = 1;
- conf.mbuf_size = MAX_PACKET_SZ;
- conf.group_id = port_id;
-
- ops = kni_ops;
- ops.port_id = port_id;
-
- /* basic test of kni processing */
- kni = rte_kni_alloc(mp, &conf, &ops);
- if (!kni) {
- printf("fail to create kni\n");
- return -1;
- }
-
- test_kni_ctx = kni;
- test_kni_processing_flag = 0;
- stats.ingress = 0;
- stats.egress = 0;
-
- /**
- * Check multiple processes support on
- * registering/unregistering handlers.
- */
- if (test_kni_register_handler_mp() < 0) {
- printf("fail to check multiple process support\n");
- ret = -1;
- goto fail_kni;
- }
-
- ret = test_kni_link_change();
- if (ret != 0)
- goto fail_kni;
-
- rte_eal_mp_remote_launch(test_kni_loop, NULL, CALL_MAIN);
- RTE_LCORE_FOREACH_WORKER(i) {
- if (rte_eal_wait_lcore(i) < 0) {
- ret = -1;
- goto fail_kni;
- }
- }
- /**
- * Check if the number of mbufs received from kernel space is equal
- * to that of transmitted to kernel space
- */
- if (stats.ingress < KNI_NUM_MBUF_THRESHOLD ||
- stats.egress < KNI_NUM_MBUF_THRESHOLD) {
- printf("The ingress/egress number should not be "
- "less than %u\n", (unsigned)KNI_NUM_MBUF_THRESHOLD);
- ret = -1;
- goto fail_kni;
- }
-
- if (rte_kni_release(kni) < 0) {
- printf("fail to release kni\n");
- return -1;
- }
- test_kni_ctx = NULL;
-
- /* test of reusing memzone */
- kni = rte_kni_alloc(mp, &conf, &ops);
- if (!kni) {
- printf("fail to create kni\n");
- return -1;
- }
-
- /* Release the kni for following testing */
- if (rte_kni_release(kni) < 0) {
- printf("fail to release kni\n");
- return -1;
- }
-
- return ret;
-fail_kni:
- if (rte_kni_release(kni) < 0) {
- printf("fail to release kni\n");
- ret = -1;
- }
-
- return ret;
-}
-
-static int
-test_kni(void)
-{
- int ret = -1;
- uint16_t port_id;
- struct rte_kni *kni;
- struct rte_mempool *mp;
- struct rte_kni_conf conf;
- struct rte_eth_dev_info info;
- struct rte_kni_ops ops;
- FILE *fd;
- DIR *dir;
- char buf[16];
-
- dir = opendir(KNI_MODULE_PATH);
- if (!dir) {
- if (errno == ENOENT) {
- printf("Cannot run UT due to missing rte_kni module\n");
- return TEST_SKIPPED;
- }
- printf("opendir: %s", strerror(errno));
- return -1;
- }
- closedir(dir);
-
- /* Initialize KNI subsystem */
- ret = rte_kni_init(KNI_TEST_MAX_PORTS);
- if (ret < 0) {
- printf("fail to initialize KNI subsystem\n");
- return -1;
- }
-
- if (test_kni_allocate_lcores() < 0) {
- printf("No enough lcores for kni processing\n");
- return -1;
- }
-
- mp = test_kni_create_mempool();
- if (!mp) {
- printf("fail to create mempool for kni\n");
- return -1;
- }
-
- /* configuring port 0 for the test is enough */
- port_id = 0;
- ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
- if (ret < 0) {
- printf("fail to configure port %d\n", port_id);
- return -1;
- }
-
- ret = rte_eth_rx_queue_setup(port_id, 0, NB_RXD, SOCKET, &rx_conf, mp);
- if (ret < 0) {
- printf("fail to setup rx queue for port %d\n", port_id);
- return -1;
- }
-
- ret = rte_eth_tx_queue_setup(port_id, 0, NB_TXD, SOCKET, &tx_conf);
- if (ret < 0) {
- printf("fail to setup tx queue for port %d\n", port_id);
- return -1;
- }
-
- ret = rte_eth_dev_start(port_id);
- if (ret < 0) {
- printf("fail to start port %d\n", port_id);
- return -1;
- }
- ret = rte_eth_promiscuous_enable(port_id);
- if (ret != 0) {
- printf("fail to enable promiscuous mode for port %d: %s\n",
- port_id, rte_strerror(-ret));
- return -1;
- }
-
- /* basic test of kni processing */
- fd = fopen(KNI_MODULE_PARAM_LO, "r");
- if (fd == NULL) {
- printf("fopen: %s", strerror(errno));
- return -1;
- }
- memset(&buf, 0, sizeof(buf));
- if (fgets(buf, sizeof(buf), fd)) {
- if (!strncmp(buf, "lo_mode_fifo", strlen("lo_mode_fifo")) ||
- !strncmp(buf, "lo_mode_fifo_skb",
- strlen("lo_mode_fifo_skb"))) {
- ret = test_kni_processing(port_id, mp);
- if (ret < 0) {
- fclose(fd);
- goto fail;
- }
- } else
- printf("test_kni_processing skipped because of missing rte_kni module lo_mode argument\n");
- }
- fclose(fd);
-
- /* test of allocating KNI with NULL mempool pointer */
- memset(&info, 0, sizeof(info));
- memset(&conf, 0, sizeof(conf));
- memset(&ops, 0, sizeof(ops));
-
- ret = rte_eth_dev_info_get(port_id, &info);
- if (ret != 0) {
- printf("Error during getting device (port %u) info: %s\n",
- port_id, strerror(-ret));
- return -1;
- }
-
- conf.group_id = port_id;
- conf.mbuf_size = MAX_PACKET_SZ;
-
- ops = kni_ops;
- ops.port_id = port_id;
- kni = rte_kni_alloc(NULL, &conf, &ops);
- if (kni) {
- ret = -1;
- printf("unexpectedly creates kni successfully with NULL "
- "mempool pointer\n");
- goto fail;
- }
-
- /* test of allocating KNI without configurations */
- kni = rte_kni_alloc(mp, NULL, NULL);
- if (kni) {
- ret = -1;
- printf("Unexpectedly allocate KNI device successfully "
- "without configurations\n");
- goto fail;
- }
-
- /* test of allocating KNI without a name */
- memset(&conf, 0, sizeof(conf));
- memset(&info, 0, sizeof(info));
- memset(&ops, 0, sizeof(ops));
-
- ret = rte_eth_dev_info_get(port_id, &info);
- if (ret != 0) {
- printf("Error during getting device (port %u) info: %s\n",
- port_id, strerror(-ret));
- ret = -1;
- goto fail;
- }
-
- conf.group_id = port_id;
- conf.mbuf_size = MAX_PACKET_SZ;
-
- ops = kni_ops;
- ops.port_id = port_id;
- kni = rte_kni_alloc(mp, &conf, &ops);
- if (kni) {
- ret = -1;
- printf("Unexpectedly allocate a KNI device successfully "
- "without a name\n");
- goto fail;
- }
-
- /* test of releasing NULL kni context */
- ret = rte_kni_release(NULL);
- if (ret == 0) {
- ret = -1;
- printf("unexpectedly release kni successfully\n");
- goto fail;
- }
-
- /* test of handling request on NULL device pointer */
- ret = rte_kni_handle_request(NULL);
- if (ret == 0) {
- ret = -1;
- printf("Unexpectedly handle request on NULL device pointer\n");
- goto fail;
- }
-
- /* test of getting KNI device with pointer to NULL */
- kni = rte_kni_get(NULL);
- if (kni) {
- ret = -1;
- printf("Unexpectedly get a KNI device with "
- "NULL name pointer\n");
- goto fail;
- }
-
- /* test of getting KNI device with an zero length name string */
- memset(&conf, 0, sizeof(conf));
- kni = rte_kni_get(conf.name);
- if (kni) {
- ret = -1;
- printf("Unexpectedly get a KNI device with "
- "zero length name string\n");
- goto fail;
- }
-
- /* test of getting KNI device with an invalid string name */
- memset(&conf, 0, sizeof(conf));
- snprintf(conf.name, sizeof(conf.name), "testing");
- kni = rte_kni_get(conf.name);
- if (kni) {
- ret = -1;
- printf("Unexpectedly get a KNI device with "
- "a never used name string\n");
- goto fail;
- }
- ret = 0;
-
-fail:
- if (rte_eth_dev_stop(port_id) != 0)
- printf("Failed to stop port %u\n", port_id);
-
- return ret;
-}
-
-#endif
-
-REGISTER_TEST_COMMAND(kni_autotest, test_kni);
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 3bc8778981f6..7bba67d58586 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -43,7 +43,6 @@ The public API headers are grouped by topics:
[bond](@ref rte_eth_bond.h),
[vhost](@ref rte_vhost.h),
[vdpa](@ref rte_vdpa.h),
- [KNI](@ref rte_kni.h),
[ixgbe](@ref rte_pmd_ixgbe.h),
[i40e](@ref rte_pmd_i40e.h),
[iavf](@ref rte_pmd_iavf.h),
@@ -178,7 +177,6 @@ The public API headers are grouped by topics:
[frag](@ref rte_port_frag.h),
[reass](@ref rte_port_ras.h),
[sched](@ref rte_port_sched.h),
- [kni](@ref rte_port_kni.h),
[src/sink](@ref rte_port_source_sink.h)
* [table](@ref rte_table.h):
[lpm IPv4](@ref rte_table_lpm.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 1a4210b948a8..90dcf232dffd 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -49,7 +49,6 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/ip_frag \
@TOPDIR@/lib/ipsec \
@TOPDIR@/lib/jobstats \
- @TOPDIR@/lib/kni \
@TOPDIR@/lib/kvargs \
@TOPDIR@/lib/latencystats \
@TOPDIR@/lib/lpm \
diff --git a/doc/guides/contributing/documentation.rst b/doc/guides/contributing/documentation.rst
index 7fcbb7fc43b2..f16c94e9768b 100644
--- a/doc/guides/contributing/documentation.rst
+++ b/doc/guides/contributing/documentation.rst
@@ -95,7 +95,7 @@ added to by the developer.
* **The Programmers Guide**
The Programmers Guide explains how the API components of DPDK such as the EAL, Memzone, Rings and the Hash Library work.
- It also explains how some higher level functionality such as Packet Distributor, Packet Framework and KNI work.
+ It also explains how some higher level functionality such as Packet Distributor and Packet Framework work.
It also shows the build system and explains how to add applications.
The Programmers Guide should be expanded when new functionality is added to DPDK.
diff --git a/doc/guides/howto/flow_bifurcation.rst b/doc/guides/howto/flow_bifurcation.rst
index 838eb2a4cc89..554dd24c32c5 100644
--- a/doc/guides/howto/flow_bifurcation.rst
+++ b/doc/guides/howto/flow_bifurcation.rst
@@ -7,8 +7,7 @@ Flow Bifurcation How-to Guide
Flow Bifurcation is a mechanism which uses hardware capable Ethernet devices
to split traffic between Linux user space and kernel space. Since it is a
hardware assisted feature this approach can provide line rate processing
-capability. Other than :ref:`KNI <kni>`, the software is just required to
-enable device configuration, there is no need to take care of the packet
+capability. There is no need to take care of the packet
movement during the traffic split. This can yield better performance with
less CPU overhead.
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 31296822e5ec..7bfcac880f44 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -43,7 +43,6 @@ Network Interface Controller Drivers
ionic
ipn3ke
ixgbe
- kni
mana
memif
mlx4
diff --git a/doc/guides/nics/kni.rst b/doc/guides/nics/kni.rst
deleted file mode 100644
index bd3033bb585c..000000000000
--- a/doc/guides/nics/kni.rst
+++ /dev/null
@@ -1,170 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2017 Intel Corporation.
-
-KNI Poll Mode Driver
-======================
-
-KNI PMD is wrapper to the :ref:`librte_kni <kni>` library.
-
-This PMD enables using KNI without having a KNI specific application,
-any forwarding application can use PMD interface for KNI.
-
-Sending packets to any DPDK controlled interface or sending to the
-Linux networking stack will be transparent to the DPDK application.
-
-To create a KNI device ``net_kni#`` device name should be used, and this
-will create ``kni#`` Linux virtual network interface.
-
-There is no physical device backend for the virtual KNI device.
-
-Packets sent to the KNI Linux interface will be received by the DPDK
-application, and DPDK application may forward packets to a physical NIC
-or to a virtual device (like another KNI interface or PCAP interface).
-
-To forward any traffic from physical NIC to the Linux networking stack,
-an application should control a physical port and create one virtual KNI port,
-and forward between two.
-
-Using this PMD requires KNI kernel module be inserted.
-
-
-Usage
------
-
-EAL ``--vdev`` argument can be used to create KNI device instance, like::
-
- dpdk-testpmd --vdev=net_kni0 --vdev=net_kni1 -- -i
-
-Above command will create ``kni0`` and ``kni1`` Linux network interfaces,
-those interfaces can be controlled by standard Linux tools.
-
-When testpmd forwarding starts, any packets sent to ``kni0`` interface
-forwarded to the ``kni1`` interface and vice versa.
-
-There is no hard limit on number of interfaces that can be created.
-
-
-Default interface configuration
--------------------------------
-
-``librte_kni`` can create Linux network interfaces with different features,
-feature set controlled by a configuration struct, and KNI PMD uses a fixed
-configuration:
-
- .. code-block:: console
-
- Interface name: kni#
- force bind kernel thread to a core : NO
- mbuf size: (rte_pktmbuf_data_room_size(pktmbuf_pool) - RTE_PKTMBUF_HEADROOM)
- mtu: (conf.mbuf_size - RTE_ETHER_HDR_LEN)
-
-KNI control path is not supported with the PMD, since there is no physical
-backend device by default.
-
-
-Runtime Configuration
----------------------
-
-``no_request_thread``, by default PMD creates a pthread for each KNI interface
-to handle Linux network interface control commands, like ``ifconfig kni0 up``
-
-With ``no_request_thread`` option, pthread is not created and control commands
-not handled by PMD.
-
-By default request thread is enabled. And this argument should not be used
-most of the time, unless this PMD used with customized DPDK application to handle
-requests itself.
-
-Argument usage::
-
- dpdk-testpmd --vdev "net_kni0,no_request_thread=1" -- -i
-
-
-PMD log messages
-----------------
-
-If KNI kernel module (rte_kni.ko) not inserted, following error log printed::
-
- "KNI: KNI subsystem has not been initialized. Invoke rte_kni_init() first"
-
-
-PMD testing
------------
-
-It is possible to test PMD quickly using KNI kernel module loopback feature:
-
-* Insert KNI kernel module with loopback support:
-
- .. code-block:: console
-
- insmod <build_dir>/kernel/linux/kni/rte_kni.ko lo_mode=lo_mode_fifo_skb
-
-* Start testpmd with no physical device but two KNI virtual devices:
-
- .. code-block:: console
-
- ./dpdk-testpmd --vdev net_kni0 --vdev net_kni1 -- -i
-
- .. code-block:: console
-
- ...
- Configuring Port 0 (socket 0)
- KNI: pci: 00:00:00 c580:b8
- Port 0: 1A:4A:5B:7C:A2:8C
- Configuring Port 1 (socket 0)
- KNI: pci: 00:00:00 600:b9
- Port 1: AE:95:21:07:93:DD
- Checking link statuses...
- Port 0 Link Up - speed 10000 Mbps - full-duplex
- Port 1 Link Up - speed 10000 Mbps - full-duplex
- Done
- testpmd>
-
-* Observe Linux interfaces
-
- .. code-block:: console
-
- $ ifconfig kni0 && ifconfig kni1
- kni0: flags=4098<BROADCAST,MULTICAST> mtu 1500
- ether ae:8e:79:8e:9b:c8 txqueuelen 1000 (Ethernet)
- RX packets 0 bytes 0 (0.0 B)
- RX errors 0 dropped 0 overruns 0 frame 0
- TX packets 0 bytes 0 (0.0 B)
- TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
-
- kni1: flags=4098<BROADCAST,MULTICAST> mtu 1500
- ether 9e:76:43:53:3e:9b txqueuelen 1000 (Ethernet)
- RX packets 0 bytes 0 (0.0 B)
- RX errors 0 dropped 0 overruns 0 frame 0
- TX packets 0 bytes 0 (0.0 B)
- TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
-
-
-* Start forwarding with tx_first:
-
- .. code-block:: console
-
- testpmd> start tx_first
-
-* Quit and check forwarding stats:
-
- .. code-block:: console
-
- testpmd> quit
- Telling cores to stop...
- Waiting for lcores to finish...
-
- ---------------------- Forward statistics for port 0 ----------------------
- RX-packets: 35637905 RX-dropped: 0 RX-total: 35637905
- TX-packets: 35637947 TX-dropped: 0 TX-total: 35637947
- ----------------------------------------------------------------------------
-
- ---------------------- Forward statistics for port 1 ----------------------
- RX-packets: 35637915 RX-dropped: 0 RX-total: 35637915
- TX-packets: 35637937 TX-dropped: 0 TX-total: 35637937
- ----------------------------------------------------------------------------
-
- +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
- RX-packets: 71275820 RX-dropped: 0 RX-total: 71275820
- TX-packets: 71275884 TX-dropped: 0 TX-total: 71275884
- ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
diff --git a/doc/guides/nics/virtio.rst b/doc/guides/nics/virtio.rst
index f5e54a5e9cfd..ba6247170dbb 100644
--- a/doc/guides/nics/virtio.rst
+++ b/doc/guides/nics/virtio.rst
@@ -10,15 +10,12 @@ we provide a virtio Poll Mode Driver (PMD) as a software solution, comparing to
for fast guest VM to guest VM communication and guest VM to host communication.
Vhost is a kernel acceleration module for virtio qemu backend.
-The DPDK extends kni to support vhost raw socket interface,
-which enables vhost to directly read/ write packets from/to a physical port.
-With this enhancement, virtio could achieve quite promising performance.
For basic qemu-KVM installation and other Intel EM poll mode driver in guest VM,
please refer to Chapter "Driver for VM Emulated Devices".
In this chapter, we will demonstrate usage of virtio PMD with two backends,
-standard qemu vhost back end and vhost kni back end.
+standard qemu vhost back end.
Virtio Implementation in DPDK
-----------------------------
@@ -89,93 +86,6 @@ The following prerequisites apply:
* When using legacy interface, ``SYS_RAWIO`` capability is required
for ``iopl()`` call to enable access to PCI I/O ports.
-Virtio with kni vhost Back End
-------------------------------
-
-This section demonstrates kni vhost back end example setup for Phy-VM Communication.
-
-.. _figure_host_vm_comms:
-
-.. figure:: img/host_vm_comms.*
-
- Host2VM Communication Example Using kni vhost Back End
-
-
-Host2VM communication example
-
-#. Load the kni kernel module:
-
- .. code-block:: console
-
- insmod rte_kni.ko
-
- Other basic DPDK preparations like hugepage enabling,
- UIO port binding are not listed here.
- Please refer to the *DPDK Getting Started Guide* for detailed instructions.
-
-#. Launch the kni user application:
-
- .. code-block:: console
-
- <build_dir>/examples/dpdk-kni -l 0-3 -n 4 -- -p 0x1 -P --config="(0,1,3)"
-
- This command generates one network device vEth0 for physical port.
- If specify more physical ports, the generated network device will be vEth1, vEth2, and so on.
-
- For each physical port, kni creates two user threads.
- One thread loops to fetch packets from the physical NIC port into the kni receive queue.
- The other user thread loops to send packets in the kni transmit queue.
-
- For each physical port, kni also creates a kernel thread that retrieves packets from the kni receive queue,
- place them onto kni's raw socket's queue and wake up the vhost kernel thread to exchange packets with the virtio virt queue.
-
- For more details about kni, please refer to :ref:`kni`.
-
-#. Enable the kni raw socket functionality for the specified physical NIC port,
- get the generated file descriptor and set it in the qemu command line parameter.
- Always remember to set ``ioeventfd=on`` and ``vhost=on``.
-
- Example:
-
- .. code-block:: console
-
- echo 1 > /sys/class/net/vEth0/sock_en
- fd=`cat /sys/class/net/vEth0/sock_fd`
- exec qemu-system-x86_64 -enable-kvm -cpu host \
- -m 2048 -smp 4 -name dpdk-test1-vm1 \
- -drive file=/data/DPDKVMS/dpdk-vm.img \
- -netdev tap,fd=$fd,id=mynet_kni,script=no,vhost=on \
- -device virtio-net-pci,netdev=mynet_kni,bus=pci.0,addr=0x3,ioeventfd=on \
- -vnc:1 -daemonize
-
- In the above example, virtio port 0 in the guest VM will be associated with vEth0, which in turn corresponds to a physical port;
- received packets come from vEth0, and transmitted packets are sent to vEth0.
-
-#. In the guest, bind the virtio device to the uio_pci_generic kernel module and start the forwarding application.
- When the virtio port in the guest bursts Rx, it gets packets from the
- raw socket's receive queue.
- When the virtio port bursts Tx, it sends packets to the tx_q.
-
- .. code-block:: console
-
- modprobe uio
- dpdk-hugepages.py --setup 1G
- modprobe uio_pci_generic
- ./usertools/dpdk-devbind.py -b uio_pci_generic 00:03.0
-
- We use testpmd as the forwarding application in this example.
-
- .. figure:: img/console.*
-
- Running testpmd
-
-#. Use IXIA packet generator to inject a packet stream into the KNI physical port.
-
- The packet reception and transmission flow path is:
-
- IXIA packet generator->82599 PF->KNI Rx queue->KNI raw socket queue->Guest
- VM virtio port 0 Rx burst->Guest VM virtio port 0 Tx burst-> KNI Tx queue
- ->82599 PF-> IXIA packet generator
Virtio with qemu virtio Back End
--------------------------------
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 93c8a031be56..5d382fdd9032 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -610,8 +610,6 @@ devices would fail anyway.
``RTE_PCI_DRV_NEED_IOVA_AS_VA`` flag is used to dictate that this PCI
driver can only work in RTE_IOVA_VA mode.
- When the KNI kernel module is detected, RTE_IOVA_PA mode is preferred as a
- performance penalty is expected in RTE_IOVA_VA mode.
IOVA Mode Configuration
~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/prog_guide/glossary.rst b/doc/guides/prog_guide/glossary.rst
index fb0910ba5b3f..8d6349701e43 100644
--- a/doc/guides/prog_guide/glossary.rst
+++ b/doc/guides/prog_guide/glossary.rst
@@ -103,9 +103,6 @@ lcore
A logical execution unit of the processor, sometimes called a *hardware
thread*.
-KNI
- Kernel Network Interface
-
L1
Layer 1
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index d89cd3edb63c..1be6a3d6d9b6 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -54,7 +54,6 @@ Programmer's Guide
pcapng_lib
pdump_lib
multi_proc_support
- kernel_nic_interface
thread_safety_dpdk_functions
eventdev
event_ethernet_rx_adapter
diff --git a/doc/guides/prog_guide/kernel_nic_interface.rst b/doc/guides/prog_guide/kernel_nic_interface.rst
deleted file mode 100644
index 392e5df75fcf..000000000000
--- a/doc/guides/prog_guide/kernel_nic_interface.rst
+++ /dev/null
@@ -1,423 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2010-2015 Intel Corporation.
-
-.. _kni:
-
-Kernel NIC Interface
-====================
-
-.. note::
-
- KNI is deprecated and will be removed in future.
- See :doc:`../rel_notes/deprecation`.
-
- :ref:`virtio_user_as_exception_path` alternative is the preferred way
- for interfacing with the Linux network stack
- as it is an in-kernel solution and has similar performance expectations.
-
-.. note::
-
- KNI is disabled by default in the DPDK build.
- To re-enable the library, remove 'kni' from the "disable_libs" meson option when configuring a build.
-
-The DPDK Kernel NIC Interface (KNI) allows userspace applications access to the Linux* control plane.
-
-KNI provides an interface with the kernel network stack
-and allows management of DPDK ports using standard Linux net tools
-such as ``ethtool``, ``iproute2`` and ``tcpdump``.
-
-The main use case of KNI is to send/receive exception packets to/from the Linux
-network stack while the main datapath IO is done bypassing the networking stack.
-
-There are other alternatives to KNI, all are available in the upstream Linux:
-
-#. :ref:`virtio_user_as_exception_path`
-
-#. :doc:`../nics/tap` as wrapper to `Linux tun/tap
- <https://www.kernel.org/doc/Documentation/networking/tuntap.txt>`_
-
-The benefits of using the KNI against alternatives are:
-
-* Faster than existing Linux TUN/TAP interfaces
- (by eliminating system calls and copy_to_user()/copy_from_user() operations).
-
-The disadvantages of the KNI are:
-
-* It is out-of-tree Linux kernel module
- which makes updating and distributing the driver more difficult.
- Most users end up building the KNI driver from source
- which requires the packages and tools to build kernel modules.
-
-* As it shares memory between userspace and kernelspace,
- and the kernel part directly uses input provided by userspace, it is not safe.
- This makes it hard to upstream the module.
-
-* Requires dedicated kernel cores.
-
-* Only a subset of net device control commands is supported by KNI.
-
-The components of an application using the DPDK Kernel NIC Interface are shown in :numref:`figure_kernel_nic_intf`.
-
-.. _figure_kernel_nic_intf:
-
-.. figure:: img/kernel_nic_intf.*
-
- Components of a DPDK KNI Application
-
-
-The DPDK KNI Kernel Module
---------------------------
-
-The KNI kernel loadable module ``rte_kni`` provides the kernel interface
-for DPDK applications.
-
-When the ``rte_kni`` module is loaded, it will create a device ``/dev/kni``
-that is used by the DPDK KNI API functions to control and communicate with
-the kernel module.
-
-The ``rte_kni`` kernel module contains several optional parameters which
-can be specified when the module is loaded to control its behavior:
-
-.. code-block:: console
-
- # modinfo rte_kni.ko
- <snip>
- parm: lo_mode: KNI loopback mode (default=lo_mode_none):
- lo_mode_none Kernel loopback disabled
- lo_mode_fifo Enable kernel loopback with fifo
- lo_mode_fifo_skb Enable kernel loopback with fifo and skb buffer
- (charp)
- parm: kthread_mode: Kernel thread mode (default=single):
- single Single kernel thread mode enabled.
- multiple Multiple kernel thread mode enabled.
- (charp)
- parm: carrier: Default carrier state for KNI interface (default=off):
- off Interfaces will be created with carrier state set to off.
- on Interfaces will be created with carrier state set to on.
- (charp)
- parm: enable_bifurcated: Enable request processing support for
- bifurcated drivers, which means releasing rtnl_lock before calling
- userspace callback and supporting async requests (default=off):
- on Enable request processing support for bifurcated drivers.
- (charp)
- parm: min_scheduling_interval: KNI thread min scheduling interval (default=100 microseconds)
- (long)
- parm: max_scheduling_interval: KNI thread max scheduling interval (default=200 microseconds)
- (long)
-
-
-Loading the ``rte_kni`` kernel module without any optional parameters is
-the typical way a DPDK application gets packets into and out of the kernel
-network stack. Without any parameters, only one kernel thread is created
-for all KNI devices for packet receiving on the kernel side, loopback mode is
-disabled, and the default carrier state of KNI interfaces is set to *off*.
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko
-
-.. _kni_loopback_mode:
-
-Loopback Mode
-~~~~~~~~~~~~~
-
-For testing, the ``rte_kni`` kernel module can be loaded in loopback mode
-by specifying the ``lo_mode`` parameter:
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko lo_mode=lo_mode_fifo
-
-The ``lo_mode_fifo`` loopback option will loop back ring enqueue/dequeue
-operations in kernel space.
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko lo_mode=lo_mode_fifo_skb
-
-The ``lo_mode_fifo_skb`` loopback option will loop back ring enqueue/dequeue
-operations and sk buffer copies in kernel space.
-
-If the ``lo_mode`` parameter is not specified, loopback mode is disabled.
-
-.. _kni_kernel_thread_mode:
-
-Kernel Thread Mode
-~~~~~~~~~~~~~~~~~~
-
-To provide flexibility of performance, the ``rte_kni`` KNI kernel module
-can be loaded with the ``kthread_mode`` parameter. The ``rte_kni`` kernel
-module supports two options: "single kernel thread" mode and "multiple
-kernel thread" mode.
-
-Single kernel thread mode is enabled as follows:
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko kthread_mode=single
-
-This mode will create only one kernel thread for all KNI interfaces to
-receive data on the kernel side. By default, this kernel thread is not
-bound to any particular core, but the user can set the core affinity for
-this kernel thread by setting the ``core_id`` and ``force_bind`` parameters
-in ``struct rte_kni_conf`` when the first KNI interface is created:
-
-For optimum performance, the kernel thread should be bound to a core
-on the same socket as the DPDK lcores used in the application.
-
-The KNI kernel module can also be configured to start a separate kernel
-thread for each KNI interface created by the DPDK application. Multiple
-kernel thread mode is enabled as follows:
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko kthread_mode=multiple
-
-This mode will create a separate kernel thread for each KNI interface to
-receive data on the kernel side. The core affinity of each ``kni_thread``
-kernel thread can be specified by setting the ``core_id`` and ``force_bind``
-parameters in ``struct rte_kni_conf`` when each KNI interface is created.
-
-Multiple kernel thread mode can provide scalable higher performance if
-sufficient unused cores are available on the host system.
-
-If the ``kthread_mode`` parameter is not specified, the "single kernel
-thread" mode is used.
-
-.. _kni_default_carrier_state:
-
-Default Carrier State
-~~~~~~~~~~~~~~~~~~~~~
-
-The default carrier state of KNI interfaces created by the ``rte_kni``
-kernel module is controlled via the ``carrier`` option when the module
-is loaded.
-
-If ``carrier=off`` is specified, the kernel module will leave the carrier
-state of the interface *down* when the interface is management enabled.
-The DPDK application can set the carrier state of the KNI interface using the
-``rte_kni_update_link()`` function. This is useful for DPDK applications
-which require that the carrier state of the KNI interface reflect the
-actual link state of the corresponding physical NIC port.
-
-If ``carrier=on`` is specified, the kernel module will automatically set
-the carrier state of the interface to *up* when the interface is management
-enabled. This is useful for DPDK applications which use the KNI interface as
-a purely virtual interface that does not correspond to any physical hardware
-and do not wish to explicitly set the carrier state of the interface with
-``rte_kni_update_link()``. It is also useful for testing in loopback mode
-where the NIC port may not be physically connected to anything.
-
-To set the default carrier state to *on*:
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko carrier=on
-
-To set the default carrier state to *off*:
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko carrier=off
-
-If the ``carrier`` parameter is not specified, the default carrier state
-of KNI interfaces will be set to *off*.
-
-.. _kni_bifurcated_device_support:
-
-Bifurcated Device Support
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-User callbacks are executed while the kernel module holds the ``rtnl`` lock; this
-causes a deadlock when callbacks run control commands on another Linux kernel
-network interface.
-
-Bifurcated devices have a kernel network driver part; to prevent deadlock for
-them, ``enable_bifurcated`` is used.
-
-To enable bifurcated device support:
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko enable_bifurcated=on
-
-Enabling bifurcated device support releases the ``rtnl`` lock before calling the
-callback and locks it back afterwards. It also enables asynchronous requests to
-support callbacks that require the rtnl lock to work (e.g. interface down).
-
-KNI Kthread Scheduling
-~~~~~~~~~~~~~~~~~~~~~~
-
-The ``min_scheduling_interval`` and ``max_scheduling_interval`` parameters
-control the rescheduling interval of the KNI kthreads.
-
-This might be useful if we have use cases in which we require improved
-latency or performance for control plane traffic.
-
-The implementation is backed by Linux High Precision Timers, and uses ``usleep_range``.
-Hence, it will have the same granularity constraints as this Linux subsystem.
-
-For Linux High Precision Timers, you can check the following resource: `Kernel Timers <http://www.kernel.org/doc/Documentation/timers/timers-howto.txt>`_
-
-To set the ``min_scheduling_interval`` to a value of 100 microseconds:
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko min_scheduling_interval=100
-
-To set the ``max_scheduling_interval`` to a value of 200 microseconds:
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko max_scheduling_interval=200
-
-If the ``min_scheduling_interval`` and ``max_scheduling_interval`` parameters are
-not specified, the default interval limits will be set to *100* and *200* respectively.
-
-KNI Creation and Deletion
--------------------------
-
-Before any KNI interfaces can be created, the ``rte_kni`` kernel module must
-be loaded into the kernel and configured with the ``rte_kni_init()`` function.
-
-The KNI interfaces are created by a DPDK application dynamically via the
-``rte_kni_alloc()`` function.
-
-The ``struct rte_kni_conf`` structure contains fields which allow the
-user to specify the interface name, set the MTU size, set an explicit or
-random MAC address and control the affinity of the kernel Rx thread(s)
-(both single and multi-threaded modes).
-By default the KNI sample example gets the MTU from the matching device,
-and in case of KNI PMD it is derived from mbuf buffer length.
-
-The ``struct rte_kni_ops`` structure contains pointers to functions to
-handle requests from the ``rte_kni`` kernel module. These functions
-allow DPDK applications to perform actions when the KNI interfaces are
-manipulated by control commands or functions external to the application.
-
-For example, the DPDK application may wish to enable/disable a physical
-NIC port when a user enables/disables a KNI interface with ``ip link set
-[up|down] dev <ifaceX>``. The DPDK application can register a callback for
-``config_network_if`` which will be called when the interface management
-state changes.
-
-There are currently four callbacks for which the user can register
-application functions:
-
-``config_network_if``:
-
- Called when the management state of the KNI interface changes.
- For example, when the user runs ``ip link set [up|down] dev <ifaceX>``.
-
-``change_mtu``:
-
- Called when the user changes the MTU size of the KNI
- interface. For example, when the user runs ``ip link set mtu <size>
- dev <ifaceX>``.
-
-``config_mac_address``:
-
- Called when the user changes the MAC address of the KNI interface.
- For example, when the user runs ``ip link set address <MAC>
- dev <ifaceX>``. If the user sets this callback function to NULL,
- but sets the ``port_id`` field to a value other than -1, a default
- callback handler in the rte_kni library ``kni_config_mac_address()``
- will be called which calls ``rte_eth_dev_default_mac_addr_set()``
- on the specified ``port_id``.
-
-``config_promiscusity``:
-
- Called when the user changes the promiscuity state of the KNI
- interface. For example, when the user runs ``ip link set promisc
- [on|off] dev <ifaceX>``. If the user sets this callback function to
- NULL, but sets the ``port_id`` field to a value other than -1, a default
- callback handler in the rte_kni library ``kni_config_promiscusity()``
- will be called which calls ``rte_eth_promiscuous_<enable|disable>()``
- on the specified ``port_id``.
-
-``config_allmulticast``:
-
- Called when the user changes the allmulticast state of the KNI interface.
- For example, when the user runs ``ifconfig <ifaceX> [-]allmulti``. If the
- user sets this callback function to NULL, but sets the ``port_id`` field to
- a value other than -1, a default callback handler in the rte_kni library
- ``kni_config_allmulticast()`` will be called which calls
- ``rte_eth_allmulticast_<enable|disable>()`` on the specified ``port_id``.
-
-In order to run these callbacks, the application must periodically call
-the ``rte_kni_handle_request()`` function. Any user callback function
-registered will be called directly from ``rte_kni_handle_request()`` so
-care must be taken to prevent deadlock and to not block any DPDK fastpath
-tasks. Typically DPDK applications which use these callbacks will need
-to create a separate thread or secondary process to periodically call
-``rte_kni_handle_request()``.
-
-The KNI interfaces can be deleted by a DPDK application with
-``rte_kni_release()``. All KNI interfaces not explicitly deleted will be
-deleted when the ``/dev/kni`` device is closed, either explicitly with
-``rte_kni_close()`` or when the DPDK application is closed.
-
-DPDK mbuf Flow
---------------
-
-To minimize the amount of DPDK code running in kernel space, the mbuf mempool is managed in userspace only.
-The kernel module will be aware of mbufs,
-but all mbuf allocation and free operations will be handled by the DPDK application only.
-
-:numref:`figure_pkt_flow_kni` shows a typical scenario with packets sent in both directions.
-
-.. _figure_pkt_flow_kni:
-
-.. figure:: img/pkt_flow_kni.*
-
- Packet Flow via mbufs in the DPDK KNI
-
-
-Use Case: Ingress
------------------
-
-On the DPDK RX side, the mbuf is allocated by the PMD in the RX thread context.
-This thread will enqueue the mbuf in the rx_q FIFO,
-and the next pointers in the mbuf chain are converted to physical addresses.
-The KNI thread will poll all KNI active devices for the rx_q.
-If an mbuf is dequeued, it will be converted to a sk_buff and sent to the net stack via netif_rx().
-The dequeued mbuf must be freed, so the same pointer is sent back in the free_q FIFO,
-and the next pointers must be converted back to virtual addresses, if present, before being put in the free_q FIFO.
-
-The RX thread, in the same main loop, polls this FIFO and frees the mbuf after dequeuing it.
-The address conversion of the next pointer is to prevent the chained mbuf
-in different hugepage segments from causing a kernel crash.
-
-Use Case: Egress
-----------------
-
-For packet egress the DPDK application must first enqueue several mbufs to create an mbuf cache on the kernel side.
-
-The packet is received from the Linux net stack, by calling the kni_net_tx() callback.
-The mbuf is dequeued (without waiting, due to the cache) and filled with data from the sk_buff.
-The sk_buff is then freed and the mbuf sent in the tx_q FIFO.
-
-The DPDK TX thread dequeues the mbuf and sends it to the PMD via ``rte_eth_tx_burst()``.
-It then puts the mbuf back in the cache.
-
-IOVA = VA: Support
-------------------
-
-KNI operates in IOVA_VA scheme when
-
-- LINUX_VERSION_CODE >= KERNEL_VERSION(4, 10, 0) and
-- EAL option `iova-mode=va` is passed or bus IOVA scheme in the DPDK is selected
- as RTE_IOVA_VA.
-
-Due to IOVA to KVA address translations, there can be a performance impact
-depending on the KNI use case. To mitigate this, IOVA can be forced to PA via
-the EAL ``--iova-mode=pa`` option; the IOVA_DC bus iommu scheme can also
-result in IOVA as PA.
-
-Ethtool
--------
-
-Ethtool is a Linux-specific tool with corresponding support in the kernel.
-The current version of kni provides minimal ethtool functionality
-including querying version and link state. It does not support link
-control, statistics, or dumping device registers.
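The guide deleted above points KNI users to :ref:`virtio_user_as_exception_path` as the replacement. As a hedged sketch only (the testpmd parameters below reflect typical virtio-user usage, not anything introduced by this patch), an exception-path port can be created with a ``virtio_user`` vdev backed by the kernel ``vhost-net`` device:

.. code-block:: console

   # Create a virtio-user port backed by /dev/vhost-net; the kernel side
   # exposes a tap netdev that carries exception traffic to the stack.
   dpdk-testpmd -l 0-1 --vdev=virtio_user0,path=/dev/vhost-net,queue_size=1024 -- -i --txd=1024 --rxd=1024

The tap netdev created by vhost-net can then be managed with standard Linux tools (``ip``, ``tcpdump``), which is the same control-plane role KNI used to fill.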
diff --git a/doc/guides/prog_guide/packet_framework.rst b/doc/guides/prog_guide/packet_framework.rst
index 3d4e3b66cc5c..ebc69d8c3e75 100644
--- a/doc/guides/prog_guide/packet_framework.rst
+++ b/doc/guides/prog_guide/packet_framework.rst
@@ -87,18 +87,15 @@ Port Types
| | | management and hierarchical scheduling according to pre-defined SLAs. |
| | | |
+---+------------------+---------------------------------------------------------------------------------------+
- | 6 | KNI | Send/receive packets to/from Linux kernel space. |
- | | | |
- +---+------------------+---------------------------------------------------------------------------------------+
- | 7 | Source | Input port used as packet generator. Similar to Linux kernel /dev/zero character |
+ | 6 | Source | Input port used as packet generator. Similar to Linux kernel /dev/zero character |
| | | device. |
| | | |
+---+------------------+---------------------------------------------------------------------------------------+
- | 8 | Sink | Output port used to drop all input packets. Similar to Linux kernel /dev/null |
+ | 7 | Sink | Output port used to drop all input packets. Similar to Linux kernel /dev/null |
| | | character device. |
| | | |
+---+------------------+---------------------------------------------------------------------------------------+
- | 9 | Sym_crypto | Output port used to extract DPDK Cryptodev operations from a fixed offset of the |
+ | 8 | Sym_crypto | Output port used to extract DPDK Cryptodev operations from a fixed offset of the |
| | | packet and then enqueue to the Cryptodev PMD. Input port used to dequeue the |
| | | Cryptodev operations from the Cryptodev PMD and then retrieve the packets from them. |
+---+------------------+---------------------------------------------------------------------------------------+
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 494b401cda4b..fa619514fd64 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -35,7 +35,7 @@ Deprecation Notices
which also added support for standard atomics
(Ref: https://releases.llvm.org/3.6.0/tools/clang/docs/ReleaseNotes.html)
-* build: Enabling deprecated libraries (``flow_classify``, ``kni``)
+* build: Enabling deprecated libraries (``flow_classify``)
won't be possible anymore through the use of the ``disable_libs`` build option.
A new build option for deprecated libraries will be introduced instead.
@@ -78,13 +78,6 @@ Deprecation Notices
``__atomic_thread_fence`` must be used for patches that need to be merged in
20.08 onwards. This change will not introduce any performance degradation.
-* kni: The KNI kernel module and library are not recommended for use by new
- applications - other technologies such as virtio-user are recommended instead.
- Following the DPDK technical board
- `decision <https://mails.dpdk.org/archives/dev/2021-January/197077.html>`_
- and `refinement <https://mails.dpdk.org/archives/dev/2022-June/243596.html>`_,
- the KNI kernel module, library and PMD will be removed from the DPDK 23.11 release.
-
* lib: will fix extending some enum/define breaking the ABI. There are multiple
samples in DPDK that enum/define terminated with a ``.*MAX.*`` value which is
used by iterators, and arrays holding these values are sized with this
diff --git a/doc/guides/rel_notes/index.rst b/doc/guides/rel_notes/index.rst
index d8dfa621ecf2..d07281527991 100644
--- a/doc/guides/rel_notes/index.rst
+++ b/doc/guides/rel_notes/index.rst
@@ -8,6 +8,7 @@ Release Notes
:maxdepth: 1
:numbered:
+ release_23_11
release_23_07
release_23_03
release_22_11
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
new file mode 100644
index 000000000000..e2158934751f
--- /dev/null
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -0,0 +1,16 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2023 The DPDK contributors
+
+.. include:: <isonum.txt>
+
+DPDK Release 23.11
+==================
+
+New Features
+------------
+
+
+Removed Items
+-------------
+
+* kni: Removed the deprecated Kernel Network Interface (KNI) driver, libraries and examples.
diff --git a/doc/guides/sample_app_ug/ip_pipeline.rst b/doc/guides/sample_app_ug/ip_pipeline.rst
index b521d3b8be20..f30ac5e19db7 100644
--- a/doc/guides/sample_app_ug/ip_pipeline.rst
+++ b/doc/guides/sample_app_ug/ip_pipeline.rst
@@ -164,15 +164,6 @@ Examples
| | | | 8. Pipeline table rule add default |
| | | | 9. Pipeline table rule add |
+-----------------------+----------------------+----------------+------------------------------------+
- | KNI | Stub | Forward | 1. Mempool create |
- | | | | 2. Link create |
- | | | | 3. Pipeline create |
- | | | | 4. Pipeline port in/out |
- | | | | 5. Pipeline table |
- | | | | 6. Pipeline port in table |
- | | | | 7. Pipeline enable |
- | | | | 8. Pipeline table rule add |
- +-----------------------+----------------------+----------------+------------------------------------+
| Firewall | ACL | Allow/Drop | 1. Mempool create |
| | | | 2. Link create |
| | * Key = n-tuple | | 3. Pipeline create |
@@ -297,17 +288,6 @@ Tap
tap <name>
-Kni
-~~~
-
- Create kni port ::
-
- kni <kni_name>
- link <link_name>
- mempool <mempool_name>
- [thread <thread_id>]
-
-
Cryptodev
~~~~~~~~~
@@ -366,7 +346,6 @@ Create pipeline input port ::
| swq <swq_name>
| tmgr <tmgr_name>
| tap <tap_name> mempool <mempool_name> mtu <mtu>
- | kni <kni_name>
| source mempool <mempool_name> file <file_name> bpp <n_bytes_per_pkt>
[action <port_in_action_profile_name>]
[disabled]
@@ -379,7 +358,6 @@ Create pipeline output port ::
| swq <swq_name>
| tmgr <tmgr_name>
| tap <tap_name>
- | kni <kni_name>
| sink [file <file_name> pkts <max_n_pkts>]
Create pipeline table ::
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 4b98faa72980..01b707b6c4ac 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1130,7 +1130,7 @@ nix_set_nop_rxtx_function(struct rte_eth_dev *eth_dev)
{
/* These dummy functions are required for supporting
* some applications which reconfigure queues without
- * stopping tx burst and rx burst threads(eg kni app)
+ * stopping tx burst and rx burst threads.
* When the queues context is saved, txq/rxqs are released
* which caused app crash since rx/tx burst is still
* on different lcores
diff --git a/drivers/net/kni/meson.build b/drivers/net/kni/meson.build
deleted file mode 100644
index 2acc98969426..000000000000
--- a/drivers/net/kni/meson.build
+++ /dev/null
@@ -1,11 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Intel Corporation
-
-if is_windows
- build = false
- reason = 'not supported on Windows'
- subdir_done()
-endif
-
-deps += 'kni'
-sources = files('rte_eth_kni.c')
diff --git a/drivers/net/kni/rte_eth_kni.c b/drivers/net/kni/rte_eth_kni.c
deleted file mode 100644
index c0e1f8db409e..000000000000
--- a/drivers/net/kni/rte_eth_kni.c
+++ /dev/null
@@ -1,524 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Intel Corporation
- */
-
-#include <fcntl.h>
-#include <pthread.h>
-#include <unistd.h>
-
-#include <rte_string_fns.h>
-#include <ethdev_driver.h>
-#include <ethdev_vdev.h>
-#include <rte_kni.h>
-#include <rte_kvargs.h>
-#include <rte_malloc.h>
-#include <bus_vdev_driver.h>
-
-/* Only single queue supported */
-#define KNI_MAX_QUEUE_PER_PORT 1
-
-#define MAX_KNI_PORTS 8
-
-#define KNI_ETHER_MTU(mbuf_size) \
- ((mbuf_size) - RTE_ETHER_HDR_LEN) /**< Ethernet MTU. */
-
-#define ETH_KNI_NO_REQUEST_THREAD_ARG "no_request_thread"
-static const char * const valid_arguments[] = {
- ETH_KNI_NO_REQUEST_THREAD_ARG,
- NULL
-};
-
-struct eth_kni_args {
- int no_request_thread;
-};
-
-struct pmd_queue_stats {
- uint64_t pkts;
- uint64_t bytes;
-};
-
-struct pmd_queue {
- struct pmd_internals *internals;
- struct rte_mempool *mb_pool;
-
- struct pmd_queue_stats rx;
- struct pmd_queue_stats tx;
-};
-
-struct pmd_internals {
- struct rte_kni *kni;
- uint16_t port_id;
- int is_kni_started;
-
- pthread_t thread;
- int stop_thread;
- int no_request_thread;
-
- struct rte_ether_addr eth_addr;
-
- struct pmd_queue rx_queues[KNI_MAX_QUEUE_PER_PORT];
- struct pmd_queue tx_queues[KNI_MAX_QUEUE_PER_PORT];
-};
-
-static const struct rte_eth_link pmd_link = {
- .link_speed = RTE_ETH_SPEED_NUM_10G,
- .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
- .link_status = RTE_ETH_LINK_DOWN,
- .link_autoneg = RTE_ETH_LINK_FIXED,
-};
-static int is_kni_initialized;
-
-RTE_LOG_REGISTER_DEFAULT(eth_kni_logtype, NOTICE);
-
-#define PMD_LOG(level, fmt, args...) \
- rte_log(RTE_LOG_ ## level, eth_kni_logtype, \
- "%s(): " fmt "\n", __func__, ##args)
-static uint16_t
-eth_kni_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
-{
- struct pmd_queue *kni_q = q;
- struct rte_kni *kni = kni_q->internals->kni;
- uint16_t nb_pkts;
- int i;
-
- nb_pkts = rte_kni_rx_burst(kni, bufs, nb_bufs);
- for (i = 0; i < nb_pkts; i++)
- bufs[i]->port = kni_q->internals->port_id;
-
- kni_q->rx.pkts += nb_pkts;
-
- return nb_pkts;
-}
-
-static uint16_t
-eth_kni_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
-{
- struct pmd_queue *kni_q = q;
- struct rte_kni *kni = kni_q->internals->kni;
- uint16_t nb_pkts;
-
- nb_pkts = rte_kni_tx_burst(kni, bufs, nb_bufs);
-
- kni_q->tx.pkts += nb_pkts;
-
- return nb_pkts;
-}
-
-static void *
-kni_handle_request(void *param)
-{
- struct pmd_internals *internals = param;
-#define MS 1000
-
- while (!internals->stop_thread) {
- rte_kni_handle_request(internals->kni);
- usleep(500 * MS);
- }
-
- return param;
-}
-
-static int
-eth_kni_start(struct rte_eth_dev *dev)
-{
- struct pmd_internals *internals = dev->data->dev_private;
- uint16_t port_id = dev->data->port_id;
- struct rte_mempool *mb_pool;
- struct rte_kni_conf conf = {{0}};
- const char *name = dev->device->name + 4; /* remove net_ */
-
- mb_pool = internals->rx_queues[0].mb_pool;
- strlcpy(conf.name, name, RTE_KNI_NAMESIZE);
- conf.force_bind = 0;
- conf.group_id = port_id;
- conf.mbuf_size =
- rte_pktmbuf_data_room_size(mb_pool) - RTE_PKTMBUF_HEADROOM;
- conf.mtu = KNI_ETHER_MTU(conf.mbuf_size);
-
- internals->kni = rte_kni_alloc(mb_pool, &conf, NULL);
- if (internals->kni == NULL) {
- PMD_LOG(ERR,
- "Fail to create kni interface for port: %d",
- port_id);
- return -1;
- }
-
- return 0;
-}
-
-static int
-eth_kni_dev_start(struct rte_eth_dev *dev)
-{
- struct pmd_internals *internals = dev->data->dev_private;
- int ret;
-
- if (internals->is_kni_started == 0) {
- ret = eth_kni_start(dev);
- if (ret)
- return -1;
- internals->is_kni_started = 1;
- }
-
- if (internals->no_request_thread == 0) {
- internals->stop_thread = 0;
-
- ret = rte_ctrl_thread_create(&internals->thread,
- "kni_handle_req", NULL,
- kni_handle_request, internals);
- if (ret) {
- PMD_LOG(ERR,
- "Fail to create kni request thread");
- return -1;
- }
- }
-
- dev->data->dev_link.link_status = 1;
-
- return 0;
-}
-
-static int
-eth_kni_dev_stop(struct rte_eth_dev *dev)
-{
- struct pmd_internals *internals = dev->data->dev_private;
- int ret;
-
- if (internals->no_request_thread == 0 && internals->stop_thread == 0) {
- internals->stop_thread = 1;
-
- ret = pthread_cancel(internals->thread);
- if (ret)
- PMD_LOG(ERR, "Can't cancel the thread");
-
- ret = pthread_join(internals->thread, NULL);
- if (ret)
- PMD_LOG(ERR, "Can't join the thread");
- }
-
- dev->data->dev_link.link_status = 0;
- dev->data->dev_started = 0;
-
- return 0;
-}
-
-static int
-eth_kni_close(struct rte_eth_dev *eth_dev)
-{
- struct pmd_internals *internals;
- int ret;
-
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- ret = eth_kni_dev_stop(eth_dev);
- if (ret)
- PMD_LOG(WARNING, "Not able to stop kni for %s",
- eth_dev->data->name);
-
- /* mac_addrs must not be freed alone because part of dev_private */
- eth_dev->data->mac_addrs = NULL;
-
- internals = eth_dev->data->dev_private;
- ret = rte_kni_release(internals->kni);
- if (ret)
- PMD_LOG(WARNING, "Not able to release kni for %s",
- eth_dev->data->name);
-
- return ret;
-}
-
-static int
-eth_kni_dev_configure(struct rte_eth_dev *dev __rte_unused)
-{
- return 0;
-}
-
-static int
-eth_kni_dev_info(struct rte_eth_dev *dev __rte_unused,
- struct rte_eth_dev_info *dev_info)
-{
- dev_info->max_mac_addrs = 1;
- dev_info->max_rx_pktlen = UINT32_MAX;
- dev_info->max_rx_queues = KNI_MAX_QUEUE_PER_PORT;
- dev_info->max_tx_queues = KNI_MAX_QUEUE_PER_PORT;
- dev_info->min_rx_bufsize = 0;
-
- return 0;
-}
-
-static int
-eth_kni_rx_queue_setup(struct rte_eth_dev *dev,
- uint16_t rx_queue_id,
- uint16_t nb_rx_desc __rte_unused,
- unsigned int socket_id __rte_unused,
- const struct rte_eth_rxconf *rx_conf __rte_unused,
- struct rte_mempool *mb_pool)
-{
- struct pmd_internals *internals = dev->data->dev_private;
- struct pmd_queue *q;
-
- q = &internals->rx_queues[rx_queue_id];
- q->internals = internals;
- q->mb_pool = mb_pool;
-
- dev->data->rx_queues[rx_queue_id] = q;
-
- return 0;
-}
-
-static int
-eth_kni_tx_queue_setup(struct rte_eth_dev *dev,
- uint16_t tx_queue_id,
- uint16_t nb_tx_desc __rte_unused,
- unsigned int socket_id __rte_unused,
- const struct rte_eth_txconf *tx_conf __rte_unused)
-{
- struct pmd_internals *internals = dev->data->dev_private;
- struct pmd_queue *q;
-
- q = &internals->tx_queues[tx_queue_id];
- q->internals = internals;
-
- dev->data->tx_queues[tx_queue_id] = q;
-
- return 0;
-}
-
-static int
-eth_kni_link_update(struct rte_eth_dev *dev __rte_unused,
- int wait_to_complete __rte_unused)
-{
- return 0;
-}
-
-static int
-eth_kni_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
-{
- unsigned long rx_packets_total = 0, rx_bytes_total = 0;
- unsigned long tx_packets_total = 0, tx_bytes_total = 0;
- struct rte_eth_dev_data *data = dev->data;
- unsigned int i, num_stats;
- struct pmd_queue *q;
-
- num_stats = RTE_MIN((unsigned int)RTE_ETHDEV_QUEUE_STAT_CNTRS,
- data->nb_rx_queues);
- for (i = 0; i < num_stats; i++) {
- q = data->rx_queues[i];
- stats->q_ipackets[i] = q->rx.pkts;
- stats->q_ibytes[i] = q->rx.bytes;
- rx_packets_total += stats->q_ipackets[i];
- rx_bytes_total += stats->q_ibytes[i];
- }
-
- num_stats = RTE_MIN((unsigned int)RTE_ETHDEV_QUEUE_STAT_CNTRS,
- data->nb_tx_queues);
- for (i = 0; i < num_stats; i++) {
- q = data->tx_queues[i];
- stats->q_opackets[i] = q->tx.pkts;
- stats->q_obytes[i] = q->tx.bytes;
- tx_packets_total += stats->q_opackets[i];
- tx_bytes_total += stats->q_obytes[i];
- }
-
- stats->ipackets = rx_packets_total;
- stats->ibytes = rx_bytes_total;
- stats->opackets = tx_packets_total;
- stats->obytes = tx_bytes_total;
-
- return 0;
-}
-
-static int
-eth_kni_stats_reset(struct rte_eth_dev *dev)
-{
- struct rte_eth_dev_data *data = dev->data;
- struct pmd_queue *q;
- unsigned int i;
-
- for (i = 0; i < data->nb_rx_queues; i++) {
- q = data->rx_queues[i];
- q->rx.pkts = 0;
- q->rx.bytes = 0;
- }
- for (i = 0; i < data->nb_tx_queues; i++) {
- q = data->tx_queues[i];
- q->tx.pkts = 0;
- q->tx.bytes = 0;
- }
-
- return 0;
-}
-
-static const struct eth_dev_ops eth_kni_ops = {
- .dev_start = eth_kni_dev_start,
- .dev_stop = eth_kni_dev_stop,
- .dev_close = eth_kni_close,
- .dev_configure = eth_kni_dev_configure,
- .dev_infos_get = eth_kni_dev_info,
- .rx_queue_setup = eth_kni_rx_queue_setup,
- .tx_queue_setup = eth_kni_tx_queue_setup,
- .link_update = eth_kni_link_update,
- .stats_get = eth_kni_stats_get,
- .stats_reset = eth_kni_stats_reset,
-};
-
-static struct rte_eth_dev *
-eth_kni_create(struct rte_vdev_device *vdev,
- struct eth_kni_args *args,
- unsigned int numa_node)
-{
- struct pmd_internals *internals;
- struct rte_eth_dev_data *data;
- struct rte_eth_dev *eth_dev;
-
- PMD_LOG(INFO, "Creating kni ethdev on numa socket %u",
- numa_node);
-
- /* reserve an ethdev entry */
- eth_dev = rte_eth_vdev_allocate(vdev, sizeof(*internals));
- if (!eth_dev)
- return NULL;
-
- internals = eth_dev->data->dev_private;
- internals->port_id = eth_dev->data->port_id;
- data = eth_dev->data;
- data->nb_rx_queues = 1;
- data->nb_tx_queues = 1;
- data->dev_link = pmd_link;
- data->mac_addrs = &internals->eth_addr;
- data->promiscuous = 1;
- data->all_multicast = 1;
- data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
-
- rte_eth_random_addr(internals->eth_addr.addr_bytes);
-
- eth_dev->dev_ops = &eth_kni_ops;
-
- internals->no_request_thread = args->no_request_thread;
-
- return eth_dev;
-}
-
-static int
-kni_init(void)
-{
- int ret;
-
- if (is_kni_initialized == 0) {
- ret = rte_kni_init(MAX_KNI_PORTS);
- if (ret < 0)
- return ret;
- }
-
- is_kni_initialized++;
-
- return 0;
-}
-
-static int
-eth_kni_kvargs_process(struct eth_kni_args *args, const char *params)
-{
- struct rte_kvargs *kvlist;
-
- kvlist = rte_kvargs_parse(params, valid_arguments);
- if (kvlist == NULL)
- return -1;
-
- memset(args, 0, sizeof(struct eth_kni_args));
-
- if (rte_kvargs_count(kvlist, ETH_KNI_NO_REQUEST_THREAD_ARG) == 1)
- args->no_request_thread = 1;
-
- rte_kvargs_free(kvlist);
-
- return 0;
-}
-
-static int
-eth_kni_probe(struct rte_vdev_device *vdev)
-{
- struct rte_eth_dev *eth_dev;
- struct eth_kni_args args;
- const char *name;
- const char *params;
- int ret;
-
- name = rte_vdev_device_name(vdev);
- params = rte_vdev_device_args(vdev);
- PMD_LOG(INFO, "Initializing eth_kni for %s", name);
-
- if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
- eth_dev = rte_eth_dev_attach_secondary(name);
- if (!eth_dev) {
- PMD_LOG(ERR, "Failed to probe %s", name);
- return -1;
- }
- /* TODO: request info from primary to set up Rx and Tx */
- eth_dev->dev_ops = &eth_kni_ops;
- eth_dev->device = &vdev->device;
- rte_eth_dev_probing_finish(eth_dev);
- return 0;
- }
-
- ret = eth_kni_kvargs_process(&args, params);
- if (ret < 0)
- return ret;
-
- ret = kni_init();
- if (ret < 0)
- return ret;
-
- eth_dev = eth_kni_create(vdev, &args, rte_socket_id());
- if (eth_dev == NULL)
- goto kni_uninit;
-
- eth_dev->rx_pkt_burst = eth_kni_rx;
- eth_dev->tx_pkt_burst = eth_kni_tx;
-
- rte_eth_dev_probing_finish(eth_dev);
- return 0;
-
-kni_uninit:
- is_kni_initialized--;
- if (is_kni_initialized == 0)
- rte_kni_close();
- return -1;
-}
-
-static int
-eth_kni_remove(struct rte_vdev_device *vdev)
-{
- struct rte_eth_dev *eth_dev;
- const char *name;
- int ret;
-
- name = rte_vdev_device_name(vdev);
- PMD_LOG(INFO, "Un-Initializing eth_kni for %s", name);
-
- /* find the ethdev entry */
- eth_dev = rte_eth_dev_allocated(name);
- if (eth_dev != NULL) {
- if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- ret = eth_kni_dev_stop(eth_dev);
- if (ret != 0)
- return ret;
- return rte_eth_dev_release_port(eth_dev);
- }
- eth_kni_close(eth_dev);
- rte_eth_dev_release_port(eth_dev);
- }
-
- is_kni_initialized--;
- if (is_kni_initialized == 0)
- rte_kni_close();
-
- return 0;
-}
-
-static struct rte_vdev_driver eth_kni_drv = {
- .probe = eth_kni_probe,
- .remove = eth_kni_remove,
-};
-
-RTE_PMD_REGISTER_VDEV(net_kni, eth_kni_drv);
-RTE_PMD_REGISTER_PARAM_STRING(net_kni, ETH_KNI_NO_REQUEST_THREAD_ARG "=<int>");
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index f68bbc27a784..bd38b533c573 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -35,7 +35,6 @@ drivers = [
'ionic',
'ipn3ke',
'ixgbe',
- 'kni',
'mana',
'memif',
'mlx4',
diff --git a/examples/ip_pipeline/Makefile b/examples/ip_pipeline/Makefile
index 785c7ee38ce5..bc5e0a9f1800 100644
--- a/examples/ip_pipeline/Makefile
+++ b/examples/ip_pipeline/Makefile
@@ -8,7 +8,6 @@ APP = ip_pipeline
SRCS-y := action.c
SRCS-y += cli.c
SRCS-y += conn.c
-SRCS-y += kni.c
SRCS-y += link.c
SRCS-y += main.c
SRCS-y += mempool.c
diff --git a/examples/ip_pipeline/cli.c b/examples/ip_pipeline/cli.c
index c918f30e06f3..e8269ea90c11 100644
--- a/examples/ip_pipeline/cli.c
+++ b/examples/ip_pipeline/cli.c
@@ -14,7 +14,6 @@
#include "cli.h"
#include "cryptodev.h"
-#include "kni.h"
#include "link.h"
#include "mempool.h"
#include "parser.h"
@@ -728,65 +727,6 @@ cmd_tap(char **tokens,
}
}
-static const char cmd_kni_help[] =
-"kni <kni_name>\n"
-" link <link_name>\n"
-" mempool <mempool_name>\n"
-" [thread <thread_id>]\n";
-
-static void
-cmd_kni(char **tokens,
- uint32_t n_tokens,
- char *out,
- size_t out_size)
-{
- struct kni_params p;
- char *name;
- struct kni *kni;
-
- memset(&p, 0, sizeof(p));
- if ((n_tokens != 6) && (n_tokens != 8)) {
- snprintf(out, out_size, MSG_ARG_MISMATCH, tokens[0]);
- return;
- }
-
- name = tokens[1];
-
- if (strcmp(tokens[2], "link") != 0) {
- snprintf(out, out_size, MSG_ARG_NOT_FOUND, "link");
- return;
- }
-
- p.link_name = tokens[3];
-
- if (strcmp(tokens[4], "mempool") != 0) {
- snprintf(out, out_size, MSG_ARG_NOT_FOUND, "mempool");
- return;
- }
-
- p.mempool_name = tokens[5];
-
- if (n_tokens == 8) {
- if (strcmp(tokens[6], "thread") != 0) {
- snprintf(out, out_size, MSG_ARG_NOT_FOUND, "thread");
- return;
- }
-
- if (parser_read_uint32(&p.thread_id, tokens[7]) != 0) {
- snprintf(out, out_size, MSG_ARG_INVALID, "thread_id");
- return;
- }
-
- p.force_bind = 1;
- } else
- p.force_bind = 0;
-
- kni = kni_create(name, &p);
- if (kni == NULL) {
- snprintf(out, out_size, MSG_CMD_FAIL, tokens[0]);
- return;
- }
-}
static const char cmd_cryptodev_help[] =
"cryptodev <cryptodev_name>\n"
@@ -1541,7 +1481,6 @@ static const char cmd_pipeline_port_in_help[] =
" | swq <swq_name>\n"
" | tmgr <tmgr_name>\n"
" | tap <tap_name> mempool <mempool_name> mtu <mtu>\n"
-" | kni <kni_name>\n"
" | source mempool <mempool_name> file <file_name> bpp <n_bytes_per_pkt>\n"
" | cryptodev <cryptodev_name> rxq <queue_id>\n"
" [action <port_in_action_profile_name>]\n"
@@ -1664,18 +1603,6 @@ cmd_pipeline_port_in(char **tokens,
}
t0 += 6;
- } else if (strcmp(tokens[t0], "kni") == 0) {
- if (n_tokens < t0 + 2) {
- snprintf(out, out_size, MSG_ARG_MISMATCH,
- "pipeline port in kni");
- return;
- }
-
- p.type = PORT_IN_KNI;
-
- p.dev_name = tokens[t0 + 1];
-
- t0 += 2;
} else if (strcmp(tokens[t0], "source") == 0) {
if (n_tokens < t0 + 6) {
snprintf(out, out_size, MSG_ARG_MISMATCH,
@@ -1781,7 +1708,6 @@ static const char cmd_pipeline_port_out_help[] =
" | swq <swq_name>\n"
" | tmgr <tmgr_name>\n"
" | tap <tap_name>\n"
-" | kni <kni_name>\n"
" | sink [file <file_name> pkts <max_n_pkts>]\n"
" | cryptodev <cryptodev_name> txq <txq_id> offset <crypto_op_offset>\n";
@@ -1873,16 +1799,6 @@ cmd_pipeline_port_out(char **tokens,
p.type = PORT_OUT_TAP;
- p.dev_name = tokens[7];
- } else if (strcmp(tokens[6], "kni") == 0) {
- if (n_tokens != 8) {
- snprintf(out, out_size, MSG_ARG_MISMATCH,
- "pipeline port out kni");
- return;
- }
-
- p.type = PORT_OUT_KNI;
-
p.dev_name = tokens[7];
} else if (strcmp(tokens[6], "sink") == 0) {
if ((n_tokens != 7) && (n_tokens != 11)) {
@@ -6038,7 +5954,6 @@ cmd_help(char **tokens, uint32_t n_tokens, char *out, size_t out_size)
"\ttmgr subport\n"
"\ttmgr subport pipe\n"
"\ttap\n"
- "\tkni\n"
"\tport in action profile\n"
"\ttable action profile\n"
"\tpipeline\n"
@@ -6124,11 +6039,6 @@ cmd_help(char **tokens, uint32_t n_tokens, char *out, size_t out_size)
return;
}
- if (strcmp(tokens[0], "kni") == 0) {
- snprintf(out, out_size, "\n%s\n", cmd_kni_help);
- return;
- }
-
if (strcmp(tokens[0], "cryptodev") == 0) {
snprintf(out, out_size, "\n%s\n", cmd_cryptodev_help);
return;
@@ -6436,11 +6346,6 @@ cli_process(char *in, char *out, size_t out_size)
return;
}
- if (strcmp(tokens[0], "kni") == 0) {
- cmd_kni(tokens, n_tokens, out, out_size);
- return;
- }
-
if (strcmp(tokens[0], "cryptodev") == 0) {
cmd_cryptodev(tokens, n_tokens, out, out_size);
return;
diff --git a/examples/ip_pipeline/examples/kni.cli b/examples/ip_pipeline/examples/kni.cli
deleted file mode 100644
index 143834093d4d..000000000000
--- a/examples/ip_pipeline/examples/kni.cli
+++ /dev/null
@@ -1,69 +0,0 @@
-; SPDX-License-Identifier: BSD-3-Clause
-; Copyright(c) 2010-2018 Intel Corporation
-
-; _______________ ______________________
-; | | KNI0 | |
-; LINK0 RXQ0 --->|...............|------->|--+ |
-; | | KNI1 | | br0 |
-; LINK1 TXQ0 <---|...............|<-------|<-+ |
-; | | | Linux Kernel |
-; | PIPELINE0 | | Network Stack |
-; | | KNI1 | |
-; LINK1 RXQ0 --->|...............|------->|--+ |
-; | | KNI0 | | br0 |
-; LINK0 TXQ0 <---|...............|<-------|<-+ |
-; |_______________| |______________________|
-;
-; Insert Linux kernel KNI module:
-; [Linux]$ insmod rte_kni.ko
-;
-; Configure Linux kernel bridge between KNI0 and KNI1 interfaces:
-; [Linux]$ brctl addbr br0
-; [Linux]$ brctl addif br0 KNI0
-; [Linux]$ brctl addif br0 KNI1
-; [Linux]$ ifconfig br0 up
-; [Linux]$ ifconfig KNI0 up
-; [Linux]$ ifconfig KNI1 up
-;
-; Monitor packet forwarding performed by Linux kernel between KNI0 and KNI1:
-; [Linux]$ tcpdump -i KNI0
-; [Linux]$ tcpdump -i KNI1
-
-mempool MEMPOOL0 buffer 2304 pool 32K cache 256 cpu 0
-
-link LINK0 dev 0000:02:00.0 rxq 1 128 MEMPOOL0 txq 1 512 promiscuous on
-link LINK1 dev 0000:02:00.1 rxq 1 128 MEMPOOL0 txq 1 512 promiscuous on
-
-kni KNI0 link LINK0 mempool MEMPOOL0
-kni KNI1 link LINK1 mempool MEMPOOL0
-
-table action profile AP0 ipv4 offset 270 fwd
-
-pipeline PIPELINE0 period 10 offset_port_id 0 cpu 0
-
-pipeline PIPELINE0 port in bsz 32 link LINK0 rxq 0
-pipeline PIPELINE0 port in bsz 32 kni KNI1
-pipeline PIPELINE0 port in bsz 32 link LINK1 rxq 0
-pipeline PIPELINE0 port in bsz 32 kni KNI0
-
-pipeline PIPELINE0 port out bsz 32 kni KNI0
-pipeline PIPELINE0 port out bsz 32 link LINK1 txq 0
-pipeline PIPELINE0 port out bsz 32 kni KNI1
-pipeline PIPELINE0 port out bsz 32 link LINK0 txq 0
-
-pipeline PIPELINE0 table match stub action AP0
-pipeline PIPELINE0 table match stub action AP0
-pipeline PIPELINE0 table match stub action AP0
-pipeline PIPELINE0 table match stub action AP0
-
-pipeline PIPELINE0 port in 0 table 0
-pipeline PIPELINE0 port in 1 table 1
-pipeline PIPELINE0 port in 2 table 2
-pipeline PIPELINE0 port in 3 table 3
-
-thread 1 pipeline PIPELINE0 enable
-
-pipeline PIPELINE0 table 0 rule add match default action fwd port 0
-pipeline PIPELINE0 table 1 rule add match default action fwd port 1
-pipeline PIPELINE0 table 2 rule add match default action fwd port 2
-pipeline PIPELINE0 table 3 rule add match default action fwd port 3
diff --git a/examples/ip_pipeline/kni.c b/examples/ip_pipeline/kni.c
deleted file mode 100644
index cd02c3947827..000000000000
--- a/examples/ip_pipeline/kni.c
+++ /dev/null
@@ -1,168 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#include <stdlib.h>
-#include <string.h>
-
-#include <rte_ethdev.h>
-#include <rte_string_fns.h>
-
-#include "kni.h"
-#include "mempool.h"
-#include "link.h"
-
-static struct kni_list kni_list;
-
-#ifndef KNI_MAX
-#define KNI_MAX 16
-#endif
-
-int
-kni_init(void)
-{
- TAILQ_INIT(&kni_list);
-
-#ifdef RTE_LIB_KNI
- rte_kni_init(KNI_MAX);
-#endif
-
- return 0;
-}
-
-struct kni *
-kni_find(const char *name)
-{
- struct kni *kni;
-
- if (name == NULL)
- return NULL;
-
- TAILQ_FOREACH(kni, &kni_list, node)
- if (strcmp(kni->name, name) == 0)
- return kni;
-
- return NULL;
-}
-
-#ifndef RTE_LIB_KNI
-
-struct kni *
-kni_create(const char *name __rte_unused,
- struct kni_params *params __rte_unused)
-{
- return NULL;
-}
-
-void
-kni_handle_request(void)
-{
- return;
-}
-
-#else
-
-static int
-kni_config_network_interface(uint16_t port_id, uint8_t if_up)
-{
- int ret = 0;
-
- if (!rte_eth_dev_is_valid_port(port_id))
- return -EINVAL;
-
- ret = (if_up) ?
- rte_eth_dev_set_link_up(port_id) :
- rte_eth_dev_set_link_down(port_id);
-
- return ret;
-}
-
-static int
-kni_change_mtu(uint16_t port_id, unsigned int new_mtu)
-{
- int ret;
-
- if (!rte_eth_dev_is_valid_port(port_id))
- return -EINVAL;
-
- if (new_mtu > RTE_ETHER_MAX_LEN)
- return -EINVAL;
-
- /* Set new MTU */
- ret = rte_eth_dev_set_mtu(port_id, new_mtu);
- if (ret < 0)
- return ret;
-
- return 0;
-}
-
-struct kni *
-kni_create(const char *name, struct kni_params *params)
-{
- struct rte_eth_dev_info dev_info;
- struct rte_kni_conf kni_conf;
- struct rte_kni_ops kni_ops;
- struct kni *kni;
- struct mempool *mempool;
- struct link *link;
- struct rte_kni *k;
- int ret;
-
- /* Check input params */
- if ((name == NULL) ||
- kni_find(name) ||
- (params == NULL))
- return NULL;
-
- mempool = mempool_find(params->mempool_name);
- link = link_find(params->link_name);
- if ((mempool == NULL) ||
- (link == NULL))
- return NULL;
-
- /* Resource create */
- ret = rte_eth_dev_info_get(link->port_id, &dev_info);
- if (ret != 0)
- return NULL;
-
- memset(&kni_conf, 0, sizeof(kni_conf));
- strlcpy(kni_conf.name, name, RTE_KNI_NAMESIZE);
- kni_conf.force_bind = params->force_bind;
- kni_conf.core_id = params->thread_id;
- kni_conf.group_id = link->port_id;
- kni_conf.mbuf_size = mempool->buffer_size;
-
- memset(&kni_ops, 0, sizeof(kni_ops));
- kni_ops.port_id = link->port_id;
- kni_ops.config_network_if = kni_config_network_interface;
- kni_ops.change_mtu = kni_change_mtu;
-
- k = rte_kni_alloc(mempool->m, &kni_conf, &kni_ops);
- if (k == NULL)
- return NULL;
-
- /* Node allocation */
- kni = calloc(1, sizeof(struct kni));
- if (kni == NULL)
- return NULL;
-
- /* Node fill in */
- strlcpy(kni->name, name, sizeof(kni->name));
- kni->k = k;
-
- /* Node add to list */
- TAILQ_INSERT_TAIL(&kni_list, kni, node);
-
- return kni;
-}
-
-void
-kni_handle_request(void)
-{
- struct kni *kni;
-
- TAILQ_FOREACH(kni, &kni_list, node)
- rte_kni_handle_request(kni->k);
-}
-
-#endif
diff --git a/examples/ip_pipeline/kni.h b/examples/ip_pipeline/kni.h
deleted file mode 100644
index 118f48df73d8..000000000000
--- a/examples/ip_pipeline/kni.h
+++ /dev/null
@@ -1,46 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#ifndef _INCLUDE_KNI_H_
-#define _INCLUDE_KNI_H_
-
-#include <stdint.h>
-#include <sys/queue.h>
-
-#ifdef RTE_LIB_KNI
-#include <rte_kni.h>
-#endif
-
-#include "common.h"
-
-struct kni {
- TAILQ_ENTRY(kni) node;
- char name[NAME_SIZE];
-#ifdef RTE_LIB_KNI
- struct rte_kni *k;
-#endif
-};
-
-TAILQ_HEAD(kni_list, kni);
-
-int
-kni_init(void);
-
-struct kni *
-kni_find(const char *name);
-
-struct kni_params {
- const char *link_name;
- const char *mempool_name;
- int force_bind;
- uint32_t thread_id;
-};
-
-struct kni *
-kni_create(const char *name, struct kni_params *params);
-
-void
-kni_handle_request(void);
-
-#endif /* _INCLUDE_KNI_H_ */
diff --git a/examples/ip_pipeline/main.c b/examples/ip_pipeline/main.c
index e35d9bce3984..663f538f024a 100644
--- a/examples/ip_pipeline/main.c
+++ b/examples/ip_pipeline/main.c
@@ -14,7 +14,6 @@
#include "cli.h"
#include "conn.h"
-#include "kni.h"
#include "cryptodev.h"
#include "link.h"
#include "mempool.h"
@@ -205,13 +204,6 @@ main(int argc, char **argv)
return status;
}
- /* KNI */
- status = kni_init();
- if (status) {
- printf("Error: KNI initialization failed (%d)\n", status);
- return status;
- }
-
/* Sym Crypto */
status = cryptodev_init();
if (status) {
@@ -264,7 +256,5 @@ main(int argc, char **argv)
conn_poll_for_conn(conn);
conn_poll_for_msg(conn);
-
- kni_handle_request();
}
}
diff --git a/examples/ip_pipeline/meson.build b/examples/ip_pipeline/meson.build
index 57f522c24cf9..68049157e429 100644
--- a/examples/ip_pipeline/meson.build
+++ b/examples/ip_pipeline/meson.build
@@ -18,7 +18,6 @@ sources = files(
'cli.c',
'conn.c',
'cryptodev.c',
- 'kni.c',
'link.c',
'main.c',
'mempool.c',
diff --git a/examples/ip_pipeline/pipeline.c b/examples/ip_pipeline/pipeline.c
index 7ebabcae984d..63352257c6e9 100644
--- a/examples/ip_pipeline/pipeline.c
+++ b/examples/ip_pipeline/pipeline.c
@@ -11,9 +11,6 @@
#include <rte_string_fns.h>
#include <rte_port_ethdev.h>
-#ifdef RTE_LIB_KNI
-#include <rte_port_kni.h>
-#endif
#include <rte_port_ring.h>
#include <rte_port_source_sink.h>
#include <rte_port_fd.h>
@@ -28,9 +25,6 @@
#include <rte_table_lpm_ipv6.h>
#include <rte_table_stub.h>
-#ifdef RTE_LIB_KNI
-#include "kni.h"
-#endif
#include "link.h"
#include "mempool.h"
#include "pipeline.h"
@@ -160,9 +154,6 @@ pipeline_port_in_create(const char *pipeline_name,
struct rte_port_ring_reader_params ring;
struct rte_port_sched_reader_params sched;
struct rte_port_fd_reader_params fd;
-#ifdef RTE_LIB_KNI
- struct rte_port_kni_reader_params kni;
-#endif
struct rte_port_source_params source;
struct rte_port_sym_crypto_reader_params sym_crypto;
} pp;
@@ -264,22 +255,6 @@ pipeline_port_in_create(const char *pipeline_name,
break;
}
-#ifdef RTE_LIB_KNI
- case PORT_IN_KNI:
- {
- struct kni *kni;
-
- kni = kni_find(params->dev_name);
- if (kni == NULL)
- return -1;
-
- pp.kni.kni = kni->k;
-
- p.ops = &rte_port_kni_reader_ops;
- p.arg_create = &pp.kni;
- break;
- }
-#endif
case PORT_IN_SOURCE:
{
@@ -404,9 +379,6 @@ pipeline_port_out_create(const char *pipeline_name,
struct rte_port_ring_writer_params ring;
struct rte_port_sched_writer_params sched;
struct rte_port_fd_writer_params fd;
-#ifdef RTE_LIB_KNI
- struct rte_port_kni_writer_params kni;
-#endif
struct rte_port_sink_params sink;
struct rte_port_sym_crypto_writer_params sym_crypto;
} pp;
@@ -415,9 +387,6 @@ pipeline_port_out_create(const char *pipeline_name,
struct rte_port_ethdev_writer_nodrop_params ethdev;
struct rte_port_ring_writer_nodrop_params ring;
struct rte_port_fd_writer_nodrop_params fd;
-#ifdef RTE_LIB_KNI
- struct rte_port_kni_writer_nodrop_params kni;
-#endif
struct rte_port_sym_crypto_writer_nodrop_params sym_crypto;
} pp_nodrop;
@@ -537,32 +506,6 @@ pipeline_port_out_create(const char *pipeline_name,
break;
}
-#ifdef RTE_LIB_KNI
- case PORT_OUT_KNI:
- {
- struct kni *kni;
-
- kni = kni_find(params->dev_name);
- if (kni == NULL)
- return -1;
-
- pp.kni.kni = kni->k;
- pp.kni.tx_burst_sz = params->burst_size;
-
- pp_nodrop.kni.kni = kni->k;
- pp_nodrop.kni.tx_burst_sz = params->burst_size;
- pp_nodrop.kni.n_retries = params->n_retries;
-
- if (params->retry == 0) {
- p.ops = &rte_port_kni_writer_ops;
- p.arg_create = &pp.kni;
- } else {
- p.ops = &rte_port_kni_writer_nodrop_ops;
- p.arg_create = &pp_nodrop.kni;
- }
- break;
- }
-#endif
case PORT_OUT_SINK:
{
diff --git a/examples/ip_pipeline/pipeline.h b/examples/ip_pipeline/pipeline.h
index 4d2ee29a54c7..083d5e852421 100644
--- a/examples/ip_pipeline/pipeline.h
+++ b/examples/ip_pipeline/pipeline.h
@@ -25,7 +25,6 @@ enum port_in_type {
PORT_IN_SWQ,
PORT_IN_TMGR,
PORT_IN_TAP,
- PORT_IN_KNI,
PORT_IN_SOURCE,
PORT_IN_CRYPTODEV,
};
@@ -67,7 +66,6 @@ enum port_out_type {
PORT_OUT_SWQ,
PORT_OUT_TMGR,
PORT_OUT_TAP,
- PORT_OUT_KNI,
PORT_OUT_SINK,
PORT_OUT_CRYPTODEV,
};
diff --git a/kernel/linux/kni/Kbuild b/kernel/linux/kni/Kbuild
deleted file mode 100644
index e5452d6c00db..000000000000
--- a/kernel/linux/kni/Kbuild
+++ /dev/null
@@ -1,6 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Luca Boccassi <bluca@debian.org>
-
-ccflags-y := $(MODULE_CFLAGS)
-obj-m := rte_kni.o
-rte_kni-y := $(patsubst $(src)/%.c,%.o,$(wildcard $(src)/*.c))
diff --git a/kernel/linux/kni/compat.h b/kernel/linux/kni/compat.h
deleted file mode 100644
index 8beb67046577..000000000000
--- a/kernel/linux/kni/compat.h
+++ /dev/null
@@ -1,157 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Minimal wrappers to allow compiling kni on older kernels.
- */
-
-#include <linux/version.h>
-
-#ifndef RHEL_RELEASE_VERSION
-#define RHEL_RELEASE_VERSION(a, b) (((a) << 8) + (b))
-#endif
-
-/* SuSE version macro is the same as Linux kernel version */
-#ifndef SLE_VERSION
-#define SLE_VERSION(a, b, c) KERNEL_VERSION(a, b, c)
-#endif
-#ifdef CONFIG_SUSE_KERNEL
-#if (LINUX_VERSION_CODE >= KERNEL_VERSION(4, 4, 57))
-/* SLES12SP3 is at least 4.4.57+ based */
-#define SLE_VERSION_CODE SLE_VERSION(12, 3, 0)
-#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 12, 28))
-/* SLES12 is at least 3.12.28+ based */
-#define SLE_VERSION_CODE SLE_VERSION(12, 0, 0)
-#elif ((LINUX_VERSION_CODE >= KERNEL_VERSION(3, 0, 61)) && \
- (LINUX_VERSION_CODE < KERNEL_VERSION(3, 1, 0)))
-/* SLES11 SP3 is at least 3.0.61+ based */
-#define SLE_VERSION_CODE SLE_VERSION(11, 3, 0)
-#elif (LINUX_VERSION_CODE == KERNEL_VERSION(2, 6, 32))
-/* SLES11 SP1 is 2.6.32 based */
-#define SLE_VERSION_CODE SLE_VERSION(11, 1, 0)
-#elif (LINUX_VERSION_CODE == KERNEL_VERSION(2, 6, 27))
-/* SLES11 GA is 2.6.27 based */
-#define SLE_VERSION_CODE SLE_VERSION(11, 0, 0)
-#endif /* LINUX_VERSION_CODE == KERNEL_VERSION(x,y,z) */
-#endif /* CONFIG_SUSE_KERNEL */
-#ifndef SLE_VERSION_CODE
-#define SLE_VERSION_CODE 0
-#endif /* SLE_VERSION_CODE */
-
-
-#if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 39) && \
- (!(defined(RHEL_RELEASE_CODE) && \
- RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(6, 4)))
-
-#define kstrtoul strict_strtoul
-
-#endif /* < 2.6.39 */
-
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 33)
-#define HAVE_SIMPLIFIED_PERNET_OPERATIONS
-#endif
-
-#if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 35)
-#define sk_sleep(s) ((s)->sk_sleep)
-#else
-#define HAVE_SOCKET_WQ
-#endif
-
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 7, 0)
-#define HAVE_STATIC_SOCK_MAP_FD
-#else
-#define kni_sock_map_fd(s) sock_map_fd(s, 0)
-#endif
-
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 9, 0)
-#define HAVE_CHANGE_CARRIER_CB
-#endif
-
-#if LINUX_VERSION_CODE < KERNEL_VERSION(3, 14, 0)
-#define ether_addr_copy(dst, src) memcpy(dst, src, ETH_ALEN)
-#endif
-
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 19, 0)
-#define HAVE_IOV_ITER_MSGHDR
-#endif
-
-#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 1, 0)
-#define HAVE_KIOCB_MSG_PARAM
-#define HAVE_REBUILD_HEADER
-#endif
-
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 2, 0)
-#define HAVE_SK_ALLOC_KERN_PARAM
-#endif
-
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 7, 0) || \
- (defined(RHEL_RELEASE_CODE) && \
- RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7, 4)) || \
- (SLE_VERSION_CODE && SLE_VERSION_CODE == SLE_VERSION(12, 3, 0))
-#define HAVE_TRANS_START_HELPER
-#endif
-
-/*
- * KNI uses NET_NAME_UNKNOWN macro to select correct version of alloc_netdev()
- * For old kernels just backported the commit that enables the macro
- * (685343fc3ba6) but still uses old API, it is required to undefine macro to
- * select correct version of API, this is safe since KNI doesn't use the value.
- * This fix is specific to RedHat/CentOS kernels.
- */
-#if (defined(RHEL_RELEASE_CODE) && \
- (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(6, 8)) && \
- (LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 34)))
-#undef NET_NAME_UNKNOWN
-#endif
-
-/*
- * RHEL has two different version with different kernel version:
- * 3.10 is for AMD, Intel, IBM POWER7 and POWER8;
- * 4.14 is for ARM and IBM POWER9
- */
-#if (defined(RHEL_RELEASE_CODE) && \
- (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7, 5)) && \
- (RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(8, 0)) && \
- (LINUX_VERSION_CODE < KERNEL_VERSION(4, 14, 0)))
-#define ndo_change_mtu ndo_change_mtu_rh74
-#endif
-
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 10, 0)
-#define HAVE_MAX_MTU_PARAM
-#endif
-
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 11, 0)
-#define HAVE_SIGNAL_FUNCTIONS_OWN_HEADER
-#endif
-
-/*
- * iova to kva mapping support can be provided since 4.6.0, but required
- * kernel version increased to >= 4.10.0 because of the updates in
- * get_user_pages_remote() kernel API
- */
-#if KERNEL_VERSION(4, 10, 0) <= LINUX_VERSION_CODE
-#define HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
-#endif
-
-#if KERNEL_VERSION(5, 6, 0) <= LINUX_VERSION_CODE || \
- (defined(RHEL_RELEASE_CODE) && \
- RHEL_RELEASE_VERSION(8, 3) <= RHEL_RELEASE_CODE) || \
- (defined(CONFIG_SUSE_KERNEL) && defined(HAVE_ARG_TX_QUEUE))
-#define HAVE_TX_TIMEOUT_TXQUEUE
-#endif
-
-#if KERNEL_VERSION(5, 9, 0) > LINUX_VERSION_CODE
-#define HAVE_TSK_IN_GUP
-#endif
-
-#if KERNEL_VERSION(5, 15, 0) <= LINUX_VERSION_CODE
-#define HAVE_ETH_HW_ADDR_SET
-#endif
-
-#if KERNEL_VERSION(5, 18, 0) > LINUX_VERSION_CODE && \
- (!(defined(RHEL_RELEASE_CODE) && \
- RHEL_RELEASE_VERSION(9, 1) <= RHEL_RELEASE_CODE))
-#define HAVE_NETIF_RX_NI
-#endif
-
-#if KERNEL_VERSION(6, 5, 0) > LINUX_VERSION_CODE
-#define HAVE_VMA_IN_GUP
-#endif
diff --git a/kernel/linux/kni/kni_dev.h b/kernel/linux/kni/kni_dev.h
deleted file mode 100644
index 975379825b2d..000000000000
--- a/kernel/linux/kni/kni_dev.h
+++ /dev/null
@@ -1,137 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Copyright(c) 2010-2014 Intel Corporation.
- */
-
-#ifndef _KNI_DEV_H_
-#define _KNI_DEV_H_
-
-#ifdef pr_fmt
-#undef pr_fmt
-#endif
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
-#define KNI_VERSION "1.0"
-
-#include "compat.h"
-
-#include <linux/if.h>
-#include <linux/wait.h>
-#ifdef HAVE_SIGNAL_FUNCTIONS_OWN_HEADER
-#include <linux/sched/signal.h>
-#else
-#include <linux/sched.h>
-#endif
-#include <linux/netdevice.h>
-#include <linux/spinlock.h>
-#include <linux/list.h>
-
-#include <rte_kni_common.h>
-#define KNI_KTHREAD_MAX_RESCHEDULE_INTERVAL 1000000 /* us */
-
-#define MBUF_BURST_SZ 32
-
-/* Default carrier state for created KNI network interfaces */
-extern uint32_t kni_dflt_carrier;
-
-/* Request processing support for bifurcated drivers. */
-extern uint32_t bifurcated_support;
-
-/**
- * A structure describing the private information for a kni device.
- */
-struct kni_dev {
- /* kni list */
- struct list_head list;
-
- uint8_t iova_mode;
-
- uint32_t core_id; /* Core ID to bind */
- char name[RTE_KNI_NAMESIZE]; /* Network device name */
- struct task_struct *pthread;
-
- /* wait queue for req/resp */
- wait_queue_head_t wq;
- struct mutex sync_lock;
-
- /* kni device */
- struct net_device *net_dev;
-
- /* queue for packets to be sent out */
- struct rte_kni_fifo *tx_q;
-
- /* queue for the packets received */
- struct rte_kni_fifo *rx_q;
-
- /* queue for the allocated mbufs those can be used to save sk buffs */
- struct rte_kni_fifo *alloc_q;
-
- /* free queue for the mbufs to be freed */
- struct rte_kni_fifo *free_q;
-
- /* request queue */
- struct rte_kni_fifo *req_q;
-
- /* response queue */
- struct rte_kni_fifo *resp_q;
-
- void *sync_kva;
- void *sync_va;
-
- void *mbuf_kva;
- void *mbuf_va;
-
- /* mbuf size */
- uint32_t mbuf_size;
-
- /* buffers */
- void *pa[MBUF_BURST_SZ];
- void *va[MBUF_BURST_SZ];
- void *alloc_pa[MBUF_BURST_SZ];
- void *alloc_va[MBUF_BURST_SZ];
-
- struct task_struct *usr_tsk;
-};
-
-#ifdef HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
-static inline phys_addr_t iova_to_phys(struct task_struct *tsk,
- unsigned long iova)
-{
- phys_addr_t offset, phys_addr;
- struct page *page = NULL;
- long ret;
-
- offset = iova & (PAGE_SIZE - 1);
-
- /* Read one page struct info */
-#ifdef HAVE_TSK_IN_GUP
- ret = get_user_pages_remote(tsk, tsk->mm, iova, 1, 0, &page, NULL, NULL);
-#else
- #ifdef HAVE_VMA_IN_GUP
- ret = get_user_pages_remote(tsk->mm, iova, 1, 0, &page, NULL, NULL);
- #else
- ret = get_user_pages_remote(tsk->mm, iova, 1, 0, &page, NULL);
- #endif
-#endif
- if (ret < 0)
- return 0;
-
- phys_addr = page_to_phys(page) | offset;
- put_page(page);
-
- return phys_addr;
-}
-
-static inline void *iova_to_kva(struct task_struct *tsk, unsigned long iova)
-{
- return phys_to_virt(iova_to_phys(tsk, iova));
-}
-#endif
-
-void kni_net_release_fifo_phy(struct kni_dev *kni);
-void kni_net_rx(struct kni_dev *kni);
-void kni_net_init(struct net_device *dev);
-void kni_net_config_lo_mode(char *lo_str);
-void kni_net_poll_resp(struct kni_dev *kni);
-
-#endif
diff --git a/kernel/linux/kni/kni_fifo.h b/kernel/linux/kni/kni_fifo.h
deleted file mode 100644
index 1ba5172002b6..000000000000
--- a/kernel/linux/kni/kni_fifo.h
+++ /dev/null
@@ -1,87 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Copyright(c) 2010-2014 Intel Corporation.
- */
-
-#ifndef _KNI_FIFO_H_
-#define _KNI_FIFO_H_
-
-#include <rte_kni_common.h>
-
-/* Skip some memory barriers on Linux < 3.14 */
-#ifndef smp_load_acquire
-#define smp_load_acquire(a) (*(a))
-#endif
-#ifndef smp_store_release
-#define smp_store_release(a, b) *(a) = (b)
-#endif
-
-/**
- * Adds num elements into the fifo. Return the number actually written
- */
-static inline uint32_t
-kni_fifo_put(struct rte_kni_fifo *fifo, void **data, uint32_t num)
-{
- uint32_t i = 0;
- uint32_t fifo_write = fifo->write;
- uint32_t fifo_read = smp_load_acquire(&fifo->read);
- uint32_t new_write = fifo_write;
-
- for (i = 0; i < num; i++) {
- new_write = (new_write + 1) & (fifo->len - 1);
-
- if (new_write == fifo_read)
- break;
- fifo->buffer[fifo_write] = data[i];
- fifo_write = new_write;
- }
- smp_store_release(&fifo->write, fifo_write);
-
- return i;
-}
-
-/**
- * Get up to num elements from the FIFO. Return the number actually read
- */
-static inline uint32_t
-kni_fifo_get(struct rte_kni_fifo *fifo, void **data, uint32_t num)
-{
- uint32_t i = 0;
- uint32_t new_read = fifo->read;
- uint32_t fifo_write = smp_load_acquire(&fifo->write);
-
- for (i = 0; i < num; i++) {
- if (new_read == fifo_write)
- break;
-
- data[i] = fifo->buffer[new_read];
- new_read = (new_read + 1) & (fifo->len - 1);
- }
- smp_store_release(&fifo->read, new_read);
-
- return i;
-}
-
-/**
- * Get the num of elements in the fifo
- */
-static inline uint32_t
-kni_fifo_count(struct rte_kni_fifo *fifo)
-{
- uint32_t fifo_write = smp_load_acquire(&fifo->write);
- uint32_t fifo_read = smp_load_acquire(&fifo->read);
- return (fifo->len + fifo_write - fifo_read) & (fifo->len - 1);
-}
-
-/**
- * Get the num of available elements in the fifo
- */
-static inline uint32_t
-kni_fifo_free_count(struct rte_kni_fifo *fifo)
-{
- uint32_t fifo_write = smp_load_acquire(&fifo->write);
- uint32_t fifo_read = smp_load_acquire(&fifo->read);
- return (fifo_read - fifo_write - 1) & (fifo->len - 1);
-}
-
-#endif /* _KNI_FIFO_H_ */
diff --git a/kernel/linux/kni/kni_misc.c b/kernel/linux/kni/kni_misc.c
deleted file mode 100644
index 0c3a86ee352e..000000000000
--- a/kernel/linux/kni/kni_misc.c
+++ /dev/null
@@ -1,719 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Copyright(c) 2010-2014 Intel Corporation.
- */
-
-#include <linux/version.h>
-#include <linux/module.h>
-#include <linux/miscdevice.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/pci.h>
-#include <linux/kthread.h>
-#include <linux/rwsem.h>
-#include <linux/mutex.h>
-#include <linux/nsproxy.h>
-#include <net/net_namespace.h>
-#include <net/netns/generic.h>
-
-#include <rte_kni_common.h>
-
-#include "compat.h"
-#include "kni_dev.h"
-
-MODULE_VERSION(KNI_VERSION);
-MODULE_LICENSE("Dual BSD/GPL");
-MODULE_AUTHOR("Intel Corporation");
-MODULE_DESCRIPTION("Kernel Module for managing kni devices");
-
-#define KNI_RX_LOOP_NUM 1000
-
-#define KNI_MAX_DEVICES 32
-
-/* loopback mode */
-static char *lo_mode;
-
-/* Kernel thread mode */
-static char *kthread_mode;
-static uint32_t multiple_kthread_on;
-
-/* Default carrier state for created KNI network interfaces */
-static char *carrier;
-uint32_t kni_dflt_carrier;
-
-/* Request processing support for bifurcated drivers. */
-static char *enable_bifurcated;
-uint32_t bifurcated_support;
-
-/* KNI thread scheduling interval */
-static long min_scheduling_interval = 100; /* us */
-static long max_scheduling_interval = 200; /* us */
-
-#define KNI_DEV_IN_USE_BIT_NUM 0 /* Bit number for device in use */
-
-static int kni_net_id;
-
-struct kni_net {
- unsigned long device_in_use; /* device in use flag */
- struct mutex kni_kthread_lock;
- struct task_struct *kni_kthread;
- struct rw_semaphore kni_list_lock;
- struct list_head kni_list_head;
-};
-
-static int __net_init
-kni_init_net(struct net *net)
-{
-#ifdef HAVE_SIMPLIFIED_PERNET_OPERATIONS
- struct kni_net *knet = net_generic(net, kni_net_id);
-
- memset(knet, 0, sizeof(*knet));
-#else
- struct kni_net *knet;
- int ret;
-
- knet = kzalloc(sizeof(struct kni_net), GFP_KERNEL);
- if (!knet) {
- ret = -ENOMEM;
- return ret;
- }
-#endif
-
- /* Clear the bit of device in use */
- clear_bit(KNI_DEV_IN_USE_BIT_NUM, &knet->device_in_use);
-
- mutex_init(&knet->kni_kthread_lock);
-
- init_rwsem(&knet->kni_list_lock);
- INIT_LIST_HEAD(&knet->kni_list_head);
-
-#ifdef HAVE_SIMPLIFIED_PERNET_OPERATIONS
- return 0;
-#else
- ret = net_assign_generic(net, kni_net_id, knet);
- if (ret < 0)
- kfree(knet);
-
- return ret;
-#endif
-}
-
-static void __net_exit
-kni_exit_net(struct net *net)
-{
- struct kni_net *knet __maybe_unused;
-
- knet = net_generic(net, kni_net_id);
- mutex_destroy(&knet->kni_kthread_lock);
-
-#ifndef HAVE_SIMPLIFIED_PERNET_OPERATIONS
- kfree(knet);
-#endif
-}
-
-static struct pernet_operations kni_net_ops = {
- .init = kni_init_net,
- .exit = kni_exit_net,
-#ifdef HAVE_SIMPLIFIED_PERNET_OPERATIONS
- .id = &kni_net_id,
- .size = sizeof(struct kni_net),
-#endif
-};
-
-static int
-kni_thread_single(void *data)
-{
- struct kni_net *knet = data;
- int j;
- struct kni_dev *dev;
-
- while (!kthread_should_stop()) {
- down_read(&knet->kni_list_lock);
- for (j = 0; j < KNI_RX_LOOP_NUM; j++) {
- list_for_each_entry(dev, &knet->kni_list_head, list) {
- kni_net_rx(dev);
- kni_net_poll_resp(dev);
- }
- }
- up_read(&knet->kni_list_lock);
- /* reschedule out for a while */
- usleep_range(min_scheduling_interval, max_scheduling_interval);
- }
-
- return 0;
-}
-
-static int
-kni_thread_multiple(void *param)
-{
- int j;
- struct kni_dev *dev = param;
-
- while (!kthread_should_stop()) {
- for (j = 0; j < KNI_RX_LOOP_NUM; j++) {
- kni_net_rx(dev);
- kni_net_poll_resp(dev);
- }
- usleep_range(min_scheduling_interval, max_scheduling_interval);
- }
-
- return 0;
-}
-
-static int
-kni_open(struct inode *inode, struct file *file)
-{
- struct net *net = current->nsproxy->net_ns;
- struct kni_net *knet = net_generic(net, kni_net_id);
-
- /* kni device can be opened by one user only per netns */
- if (test_and_set_bit(KNI_DEV_IN_USE_BIT_NUM, &knet->device_in_use))
- return -EBUSY;
-
- file->private_data = get_net(net);
- pr_debug("/dev/kni opened\n");
-
- return 0;
-}
-
-static int
-kni_dev_remove(struct kni_dev *dev)
-{
- if (!dev)
- return -ENODEV;
-
- /*
- * The memory of kni device is allocated and released together
- * with net device. Release mbuf before freeing net device.
- */
- kni_net_release_fifo_phy(dev);
-
- if (dev->net_dev) {
- unregister_netdev(dev->net_dev);
- free_netdev(dev->net_dev);
- }
-
- return 0;
-}
-
-static int
-kni_release(struct inode *inode, struct file *file)
-{
- struct net *net = file->private_data;
- struct kni_net *knet = net_generic(net, kni_net_id);
- struct kni_dev *dev, *n;
-
- /* Stop kernel thread for single mode */
- if (multiple_kthread_on == 0) {
- mutex_lock(&knet->kni_kthread_lock);
- /* Stop kernel thread */
- if (knet->kni_kthread != NULL) {
- kthread_stop(knet->kni_kthread);
- knet->kni_kthread = NULL;
- }
- mutex_unlock(&knet->kni_kthread_lock);
- }
-
- down_write(&knet->kni_list_lock);
- list_for_each_entry_safe(dev, n, &knet->kni_list_head, list) {
- /* Stop kernel thread for multiple mode */
- if (multiple_kthread_on && dev->pthread != NULL) {
- kthread_stop(dev->pthread);
- dev->pthread = NULL;
- }
-
- list_del(&dev->list);
- kni_dev_remove(dev);
- }
- up_write(&knet->kni_list_lock);
-
- /* Clear the bit of device in use */
- clear_bit(KNI_DEV_IN_USE_BIT_NUM, &knet->device_in_use);
-
- put_net(net);
- pr_debug("/dev/kni closed\n");
-
- return 0;
-}
-
-static int
-kni_check_param(struct kni_dev *kni, struct rte_kni_device_info *dev)
-{
- if (!kni || !dev)
- return -1;
-
- /* Check if network name has been used */
- if (!strncmp(kni->name, dev->name, RTE_KNI_NAMESIZE)) {
- pr_err("KNI name %s duplicated\n", dev->name);
- return -1;
- }
-
- return 0;
-}
-
-static int
-kni_run_thread(struct kni_net *knet, struct kni_dev *kni, uint8_t force_bind)
-{
- /**
- * Create a new kernel thread for multiple mode, set its core affinity,
- * and finally wake it up.
- */
- if (multiple_kthread_on) {
- kni->pthread = kthread_create(kni_thread_multiple,
- (void *)kni, "kni_%s", kni->name);
- if (IS_ERR(kni->pthread)) {
- kni_dev_remove(kni);
- return -ECANCELED;
- }
-
- if (force_bind)
- kthread_bind(kni->pthread, kni->core_id);
- wake_up_process(kni->pthread);
- } else {
- mutex_lock(&knet->kni_kthread_lock);
-
- if (knet->kni_kthread == NULL) {
- knet->kni_kthread = kthread_create(kni_thread_single,
- (void *)knet, "kni_single");
- if (IS_ERR(knet->kni_kthread)) {
- mutex_unlock(&knet->kni_kthread_lock);
- kni_dev_remove(kni);
- return -ECANCELED;
- }
-
- if (force_bind)
- kthread_bind(knet->kni_kthread, kni->core_id);
- wake_up_process(knet->kni_kthread);
- }
-
- mutex_unlock(&knet->kni_kthread_lock);
- }
-
- return 0;
-}
-
-static int
-kni_ioctl_create(struct net *net, uint32_t ioctl_num,
- unsigned long ioctl_param)
-{
- struct kni_net *knet = net_generic(net, kni_net_id);
- int ret;
- struct rte_kni_device_info dev_info;
- struct net_device *net_dev = NULL;
- struct kni_dev *kni, *dev, *n;
-
- pr_info("Creating kni...\n");
- /* Check the buffer size, to avoid warning */
- if (_IOC_SIZE(ioctl_num) > sizeof(dev_info))
- return -EINVAL;
-
- /* Copy kni info from user space */
- if (copy_from_user(&dev_info, (void *)ioctl_param, sizeof(dev_info)))
- return -EFAULT;
-
- /* Check if name is zero-ended */
- if (strnlen(dev_info.name, sizeof(dev_info.name)) == sizeof(dev_info.name)) {
- pr_err("kni.name not zero-terminated");
- return -EINVAL;
- }
-
- /**
- * Check if the cpu core id is valid for binding.
- */
- if (dev_info.force_bind && !cpu_online(dev_info.core_id)) {
- pr_err("cpu %u is not online\n", dev_info.core_id);
- return -EINVAL;
- }
-
- /* Check if it has been created */
- down_read(&knet->kni_list_lock);
- list_for_each_entry_safe(dev, n, &knet->kni_list_head, list) {
- if (kni_check_param(dev, &dev_info) < 0) {
- up_read(&knet->kni_list_lock);
- return -EINVAL;
- }
- }
- up_read(&knet->kni_list_lock);
-
- net_dev = alloc_netdev(sizeof(struct kni_dev), dev_info.name,
-#ifdef NET_NAME_USER
- NET_NAME_USER,
-#endif
- kni_net_init);
- if (net_dev == NULL) {
- pr_err("error allocating device \"%s\"\n", dev_info.name);
- return -EBUSY;
- }
-
- dev_net_set(net_dev, net);
-
- kni = netdev_priv(net_dev);
-
- kni->net_dev = net_dev;
- kni->core_id = dev_info.core_id;
- strncpy(kni->name, dev_info.name, RTE_KNI_NAMESIZE);
-
- /* Translate user space info into kernel space info */
- if (dev_info.iova_mode) {
-#ifdef HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
- kni->tx_q = iova_to_kva(current, dev_info.tx_phys);
- kni->rx_q = iova_to_kva(current, dev_info.rx_phys);
- kni->alloc_q = iova_to_kva(current, dev_info.alloc_phys);
- kni->free_q = iova_to_kva(current, dev_info.free_phys);
-
- kni->req_q = iova_to_kva(current, dev_info.req_phys);
- kni->resp_q = iova_to_kva(current, dev_info.resp_phys);
- kni->sync_va = dev_info.sync_va;
- kni->sync_kva = iova_to_kva(current, dev_info.sync_phys);
- kni->usr_tsk = current;
- kni->iova_mode = 1;
-#else
- pr_err("KNI module does not support IOVA to VA translation\n");
- return -EINVAL;
-#endif
- } else {
-
- kni->tx_q = phys_to_virt(dev_info.tx_phys);
- kni->rx_q = phys_to_virt(dev_info.rx_phys);
- kni->alloc_q = phys_to_virt(dev_info.alloc_phys);
- kni->free_q = phys_to_virt(dev_info.free_phys);
-
- kni->req_q = phys_to_virt(dev_info.req_phys);
- kni->resp_q = phys_to_virt(dev_info.resp_phys);
- kni->sync_va = dev_info.sync_va;
- kni->sync_kva = phys_to_virt(dev_info.sync_phys);
- kni->iova_mode = 0;
- }
-
- kni->mbuf_size = dev_info.mbuf_size;
-
- pr_debug("tx_phys: 0x%016llx, tx_q addr: 0x%p\n",
- (unsigned long long) dev_info.tx_phys, kni->tx_q);
- pr_debug("rx_phys: 0x%016llx, rx_q addr: 0x%p\n",
- (unsigned long long) dev_info.rx_phys, kni->rx_q);
- pr_debug("alloc_phys: 0x%016llx, alloc_q addr: 0x%p\n",
- (unsigned long long) dev_info.alloc_phys, kni->alloc_q);
- pr_debug("free_phys: 0x%016llx, free_q addr: 0x%p\n",
- (unsigned long long) dev_info.free_phys, kni->free_q);
- pr_debug("req_phys: 0x%016llx, req_q addr: 0x%p\n",
- (unsigned long long) dev_info.req_phys, kni->req_q);
- pr_debug("resp_phys: 0x%016llx, resp_q addr: 0x%p\n",
- (unsigned long long) dev_info.resp_phys, kni->resp_q);
- pr_debug("mbuf_size: %u\n", kni->mbuf_size);
-
- /* if user has provided a valid mac address */
- if (is_valid_ether_addr(dev_info.mac_addr)) {
-#ifdef HAVE_ETH_HW_ADDR_SET
- eth_hw_addr_set(net_dev, dev_info.mac_addr);
-#else
- memcpy(net_dev->dev_addr, dev_info.mac_addr, ETH_ALEN);
-#endif
- } else {
- /* Assign random MAC address. */
- eth_hw_addr_random(net_dev);
- }
-
- if (dev_info.mtu)
- net_dev->mtu = dev_info.mtu;
-#ifdef HAVE_MAX_MTU_PARAM
- net_dev->max_mtu = net_dev->mtu;
-
- if (dev_info.min_mtu)
- net_dev->min_mtu = dev_info.min_mtu;
-
- if (dev_info.max_mtu)
- net_dev->max_mtu = dev_info.max_mtu;
-#endif
-
- ret = register_netdev(net_dev);
- if (ret) {
- pr_err("error %i registering device \"%s\"\n",
- ret, dev_info.name);
- kni->net_dev = NULL;
- kni_dev_remove(kni);
- free_netdev(net_dev);
- return -ENODEV;
- }
-
- netif_carrier_off(net_dev);
-
- ret = kni_run_thread(knet, kni, dev_info.force_bind);
- if (ret != 0)
- return ret;
-
- down_write(&knet->kni_list_lock);
- list_add(&kni->list, &knet->kni_list_head);
- up_write(&knet->kni_list_lock);
-
- return 0;
-}
-
-static int
-kni_ioctl_release(struct net *net, uint32_t ioctl_num,
- unsigned long ioctl_param)
-{
- struct kni_net *knet = net_generic(net, kni_net_id);
- int ret = -EINVAL;
- struct kni_dev *dev, *n;
- struct rte_kni_device_info dev_info;
-
- if (_IOC_SIZE(ioctl_num) > sizeof(dev_info))
- return -EINVAL;
-
- if (copy_from_user(&dev_info, (void *)ioctl_param, sizeof(dev_info)))
- return -EFAULT;
-
- /* Release the network device according to its name */
- if (strlen(dev_info.name) == 0)
- return -EINVAL;
-
- down_write(&knet->kni_list_lock);
- list_for_each_entry_safe(dev, n, &knet->kni_list_head, list) {
- if (strncmp(dev->name, dev_info.name, RTE_KNI_NAMESIZE) != 0)
- continue;
-
- if (multiple_kthread_on && dev->pthread != NULL) {
- kthread_stop(dev->pthread);
- dev->pthread = NULL;
- }
-
- list_del(&dev->list);
- kni_dev_remove(dev);
- ret = 0;
- break;
- }
- up_write(&knet->kni_list_lock);
- pr_info("%s release kni named %s\n",
- (ret == 0 ? "Successfully" : "Unsuccessfully"), dev_info.name);
-
- return ret;
-}
-
-static long
-kni_ioctl(struct file *file, unsigned int ioctl_num, unsigned long ioctl_param)
-{
- long ret = -EINVAL;
- struct net *net = current->nsproxy->net_ns;
-
- pr_debug("IOCTL num=0x%0x param=0x%0lx\n", ioctl_num, ioctl_param);
-
- /*
- * Switch according to the ioctl called
- */
- switch (_IOC_NR(ioctl_num)) {
- case _IOC_NR(RTE_KNI_IOCTL_TEST):
- /* For test only, not used */
- break;
- case _IOC_NR(RTE_KNI_IOCTL_CREATE):
- ret = kni_ioctl_create(net, ioctl_num, ioctl_param);
- break;
- case _IOC_NR(RTE_KNI_IOCTL_RELEASE):
- ret = kni_ioctl_release(net, ioctl_num, ioctl_param);
- break;
- default:
- pr_debug("IOCTL default\n");
- break;
- }
-
- return ret;
-}
-
-static long
-kni_compat_ioctl(struct file *file, unsigned int ioctl_num,
- unsigned long ioctl_param)
-{
- /* 32 bits app on 64 bits OS to be supported later */
- pr_debug("Not implemented.\n");
-
- return -EINVAL;
-}
-
-static const struct file_operations kni_fops = {
- .owner = THIS_MODULE,
- .open = kni_open,
- .release = kni_release,
- .unlocked_ioctl = kni_ioctl,
- .compat_ioctl = kni_compat_ioctl,
-};
-
-static struct miscdevice kni_misc = {
- .minor = MISC_DYNAMIC_MINOR,
- .name = KNI_DEVICE,
- .fops = &kni_fops,
-};
-
-static int __init
-kni_parse_kthread_mode(void)
-{
- if (!kthread_mode)
- return 0;
-
- if (strcmp(kthread_mode, "single") == 0)
- return 0;
- else if (strcmp(kthread_mode, "multiple") == 0)
- multiple_kthread_on = 1;
- else
- return -1;
-
- return 0;
-}
-
-static int __init
-kni_parse_carrier_state(void)
-{
- if (!carrier) {
- kni_dflt_carrier = 0;
- return 0;
- }
-
- if (strcmp(carrier, "off") == 0)
- kni_dflt_carrier = 0;
- else if (strcmp(carrier, "on") == 0)
- kni_dflt_carrier = 1;
- else
- return -1;
-
- return 0;
-}
-
-static int __init
-kni_parse_bifurcated_support(void)
-{
- if (!enable_bifurcated) {
- bifurcated_support = 0;
- return 0;
- }
-
- if (strcmp(enable_bifurcated, "on") == 0)
- bifurcated_support = 1;
- else
- return -1;
-
- return 0;
-}
-
-static int __init
-kni_init(void)
-{
- int rc;
-
- if (kni_parse_kthread_mode() < 0) {
- pr_err("Invalid parameter for kthread_mode\n");
- return -EINVAL;
- }
-
- if (multiple_kthread_on == 0)
- pr_debug("Single kernel thread for all KNI devices\n");
- else
- pr_debug("Multiple kernel thread mode enabled\n");
-
- if (kni_parse_carrier_state() < 0) {
- pr_err("Invalid parameter for carrier\n");
- return -EINVAL;
- }
-
- if (kni_dflt_carrier == 0)
- pr_debug("Default carrier state set to off.\n");
- else
- pr_debug("Default carrier state set to on.\n");
-
- if (kni_parse_bifurcated_support() < 0) {
- pr_err("Invalid parameter for bifurcated support\n");
- return -EINVAL;
- }
- if (bifurcated_support == 1)
- pr_debug("bifurcated support is enabled.\n");
-
- if (min_scheduling_interval < 0 || max_scheduling_interval < 0 ||
- min_scheduling_interval > KNI_KTHREAD_MAX_RESCHEDULE_INTERVAL ||
- max_scheduling_interval > KNI_KTHREAD_MAX_RESCHEDULE_INTERVAL ||
- min_scheduling_interval >= max_scheduling_interval) {
- pr_err("Invalid parameters for scheduling interval\n");
- return -EINVAL;
- }
-
-#ifdef HAVE_SIMPLIFIED_PERNET_OPERATIONS
- rc = register_pernet_subsys(&kni_net_ops);
-#else
- rc = register_pernet_gen_subsys(&kni_net_id, &kni_net_ops);
-#endif
- if (rc)
- return -EPERM;
-
- rc = misc_register(&kni_misc);
- if (rc != 0) {
- pr_err("Misc registration failed\n");
- goto out;
- }
-
- /* Configure the lo mode according to the input parameter */
- kni_net_config_lo_mode(lo_mode);
-
- return 0;
-
-out:
-#ifdef HAVE_SIMPLIFIED_PERNET_OPERATIONS
- unregister_pernet_subsys(&kni_net_ops);
-#else
- unregister_pernet_gen_subsys(kni_net_id, &kni_net_ops);
-#endif
- return rc;
-}
-
-static void __exit
-kni_exit(void)
-{
- misc_deregister(&kni_misc);
-#ifdef HAVE_SIMPLIFIED_PERNET_OPERATIONS
- unregister_pernet_subsys(&kni_net_ops);
-#else
- unregister_pernet_gen_subsys(kni_net_id, &kni_net_ops);
-#endif
-}
-
-module_init(kni_init);
-module_exit(kni_exit);
-
-module_param(lo_mode, charp, 0644);
-MODULE_PARM_DESC(lo_mode,
-"KNI loopback mode (default=lo_mode_none):\n"
-"\t\tlo_mode_none Kernel loopback disabled\n"
-"\t\tlo_mode_fifo Enable kernel loopback with fifo\n"
-"\t\tlo_mode_fifo_skb Enable kernel loopback with fifo and skb buffer\n"
-"\t\t"
-);
-
-module_param(kthread_mode, charp, 0644);
-MODULE_PARM_DESC(kthread_mode,
-"Kernel thread mode (default=single):\n"
-"\t\tsingle Single kernel thread mode enabled.\n"
-"\t\tmultiple Multiple kernel thread mode enabled.\n"
-"\t\t"
-);
-
-module_param(carrier, charp, 0644);
-MODULE_PARM_DESC(carrier,
-"Default carrier state for KNI interface (default=off):\n"
-"\t\toff Interfaces will be created with carrier state set to off.\n"
-"\t\ton Interfaces will be created with carrier state set to on.\n"
-"\t\t"
-);
-
-module_param(enable_bifurcated, charp, 0644);
-MODULE_PARM_DESC(enable_bifurcated,
-"Enable request processing support for bifurcated drivers, "
-"which means releasing rtnl_lock before calling userspace callback and "
-"supporting async requests (default=off):\n"
-"\t\ton Enable request processing support for bifurcated drivers.\n"
-"\t\t"
-);
-
-module_param(min_scheduling_interval, long, 0644);
-MODULE_PARM_DESC(min_scheduling_interval,
-"KNI thread min scheduling interval (default=100 microseconds)"
-);
-
-module_param(max_scheduling_interval, long, 0644);
-MODULE_PARM_DESC(max_scheduling_interval,
-"KNI thread max scheduling interval (default=200 microseconds)"
-);
diff --git a/kernel/linux/kni/kni_net.c b/kernel/linux/kni/kni_net.c
deleted file mode 100644
index 779ee3451a4c..000000000000
--- a/kernel/linux/kni/kni_net.c
+++ /dev/null
@@ -1,878 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Copyright(c) 2010-2014 Intel Corporation.
- */
-
-/*
- * This code is inspired from the book "Linux Device Drivers" by
- * Alessandro Rubini and Jonathan Corbet, published by O'Reilly & Associates
- */
-
-#include <linux/device.h>
-#include <linux/module.h>
-#include <linux/version.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h> /* eth_type_trans */
-#include <linux/ethtool.h>
-#include <linux/skbuff.h>
-#include <linux/kthread.h>
-#include <linux/delay.h>
-#include <linux/rtnetlink.h>
-
-#include <rte_kni_common.h>
-#include <kni_fifo.h>
-
-#include "compat.h"
-#include "kni_dev.h"
-
-#define WD_TIMEOUT 5 /*jiffies */
-
-#define KNI_WAIT_RESPONSE_TIMEOUT 300 /* 3 seconds */
-
-/* typedef for rx function */
-typedef void (*kni_net_rx_t)(struct kni_dev *kni);
-
-static void kni_net_rx_normal(struct kni_dev *kni);
-
-/* kni rx function pointer, with default to normal rx */
-static kni_net_rx_t kni_net_rx_func = kni_net_rx_normal;
-
-#ifdef HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
-/* iova to kernel virtual address */
-static inline void *
-iova2kva(struct kni_dev *kni, void *iova)
-{
- return phys_to_virt(iova_to_phys(kni->usr_tsk, (unsigned long)iova));
-}
-
-static inline void *
-iova2data_kva(struct kni_dev *kni, struct rte_kni_mbuf *m)
-{
- return phys_to_virt(iova_to_phys(kni->usr_tsk, m->buf_iova) +
- m->data_off);
-}
-#endif
-
-/* physical address to kernel virtual address */
-static void *
-pa2kva(void *pa)
-{
- return phys_to_virt((unsigned long)pa);
-}
-
-/* physical address to virtual address */
-static void *
-pa2va(void *pa, struct rte_kni_mbuf *m)
-{
- void *va;
-
- va = (void *)((unsigned long)pa +
- (unsigned long)m->buf_addr -
- (unsigned long)m->buf_iova);
- return va;
-}
-
-/* mbuf data kernel virtual address from mbuf kernel virtual address */
-static void *
-kva2data_kva(struct rte_kni_mbuf *m)
-{
- return phys_to_virt(m->buf_iova + m->data_off);
-}
-
-static inline void *
-get_kva(struct kni_dev *kni, void *pa)
-{
-#ifdef HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
- if (kni->iova_mode == 1)
- return iova2kva(kni, pa);
-#endif
- return pa2kva(pa);
-}
-
-static inline void *
-get_data_kva(struct kni_dev *kni, void *pkt_kva)
-{
-#ifdef HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
- if (kni->iova_mode == 1)
- return iova2data_kva(kni, pkt_kva);
-#endif
- return kva2data_kva(pkt_kva);
-}
-
-/*
- * It can be called to process the request.
- */
-static int
-kni_net_process_request(struct net_device *dev, struct rte_kni_request *req)
-{
- struct kni_dev *kni = netdev_priv(dev);
- int ret = -1;
- void *resp_va;
- uint32_t num;
- int ret_val;
-
- ASSERT_RTNL();
-
- if (bifurcated_support) {
- /* If we need to wait and RTNL mutex is held
- * drop the mutex and hold reference to keep device
- */
- if (req->async == 0) {
- dev_hold(dev);
- rtnl_unlock();
- }
- }
-
- mutex_lock(&kni->sync_lock);
-
- /* Construct data */
- memcpy(kni->sync_kva, req, sizeof(struct rte_kni_request));
- num = kni_fifo_put(kni->req_q, &kni->sync_va, 1);
- if (num < 1) {
- pr_err("Cannot send to req_q\n");
- ret = -EBUSY;
- goto fail;
- }
-
- if (bifurcated_support) {
- /* No result available since request is handled
- * asynchronously. set response to success.
- */
- if (req->async != 0) {
- req->result = 0;
- goto async;
- }
- }
-
- ret_val = wait_event_interruptible_timeout(kni->wq,
- kni_fifo_count(kni->resp_q), 3 * HZ);
- if (signal_pending(current) || ret_val <= 0) {
- ret = -ETIME;
- goto fail;
- }
- num = kni_fifo_get(kni->resp_q, (void **)&resp_va, 1);
- if (num != 1 || resp_va != kni->sync_va) {
- /* This should never happen */
- pr_err("No data in resp_q\n");
- ret = -ENODATA;
- goto fail;
- }
-
- memcpy(req, kni->sync_kva, sizeof(struct rte_kni_request));
-async:
- ret = 0;
-
-fail:
- mutex_unlock(&kni->sync_lock);
- if (bifurcated_support) {
- if (req->async == 0) {
- rtnl_lock();
- dev_put(dev);
- }
- }
- return ret;
-}
-
-/*
- * Open and close
- */
-static int
-kni_net_open(struct net_device *dev)
-{
- int ret;
- struct rte_kni_request req;
-
- netif_start_queue(dev);
- if (kni_dflt_carrier == 1)
- netif_carrier_on(dev);
- else
- netif_carrier_off(dev);
-
- memset(&req, 0, sizeof(req));
- req.req_id = RTE_KNI_REQ_CFG_NETWORK_IF;
-
- /* Setting if_up to non-zero means up */
- req.if_up = 1;
- ret = kni_net_process_request(dev, &req);
-
- return (ret == 0) ? req.result : ret;
-}
-
-static int
-kni_net_release(struct net_device *dev)
-{
- int ret;
- struct rte_kni_request req;
-
- netif_stop_queue(dev); /* can't transmit any more */
- netif_carrier_off(dev);
-
- memset(&req, 0, sizeof(req));
- req.req_id = RTE_KNI_REQ_CFG_NETWORK_IF;
-
- /* Setting if_up to 0 means down */
- req.if_up = 0;
-
- if (bifurcated_support) {
- /* request async because of the deadlock problem */
- req.async = 1;
- }
-
- ret = kni_net_process_request(dev, &req);
-
- return (ret == 0) ? req.result : ret;
-}
-
-static void
-kni_fifo_trans_pa2va(struct kni_dev *kni,
- struct rte_kni_fifo *src_pa, struct rte_kni_fifo *dst_va)
-{
- uint32_t ret, i, num_dst, num_rx;
- struct rte_kni_mbuf *kva, *prev_kva;
- int nb_segs;
- int kva_nb_segs;
-
- do {
- num_dst = kni_fifo_free_count(dst_va);
- if (num_dst == 0)
- return;
-
- num_rx = min_t(uint32_t, num_dst, MBUF_BURST_SZ);
-
- num_rx = kni_fifo_get(src_pa, kni->pa, num_rx);
- if (num_rx == 0)
- return;
-
- for (i = 0; i < num_rx; i++) {
- kva = get_kva(kni, kni->pa[i]);
- kni->va[i] = pa2va(kni->pa[i], kva);
-
- kva_nb_segs = kva->nb_segs;
- for (nb_segs = 0; nb_segs < kva_nb_segs; nb_segs++) {
- if (!kva->next)
- break;
-
- prev_kva = kva;
- kva = get_kva(kni, kva->next);
- /* Convert physical address to virtual address */
- prev_kva->next = pa2va(prev_kva->next, kva);
- }
- }
-
- ret = kni_fifo_put(dst_va, kni->va, num_rx);
- if (ret != num_rx) {
- /* Failing should not happen */
- pr_err("Fail to enqueue entries into dst_va\n");
- return;
- }
- } while (1);
-}
-
-/* Try to release mbufs when kni release */
-void kni_net_release_fifo_phy(struct kni_dev *kni)
-{
- /* release rx_q first, because it can't release in userspace */
- kni_fifo_trans_pa2va(kni, kni->rx_q, kni->free_q);
- /* release alloc_q for speeding up kni release in userspace */
- kni_fifo_trans_pa2va(kni, kni->alloc_q, kni->free_q);
-}
-
-/*
- * Configuration changes (passed on by ifconfig)
- */
-static int
-kni_net_config(struct net_device *dev, struct ifmap *map)
-{
- if (dev->flags & IFF_UP) /* can't act on a running interface */
- return -EBUSY;
-
- /* ignore other fields */
- return 0;
-}
-
-/*
- * Transmit a packet (called by the kernel)
- */
-static int
-kni_net_tx(struct sk_buff *skb, struct net_device *dev)
-{
- int len = 0;
- uint32_t ret;
- struct kni_dev *kni = netdev_priv(dev);
- struct rte_kni_mbuf *pkt_kva = NULL;
- void *pkt_pa = NULL;
- void *pkt_va = NULL;
-
- /* save the timestamp */
-#ifdef HAVE_TRANS_START_HELPER
- netif_trans_update(dev);
-#else
- dev->trans_start = jiffies;
-#endif
-
- /* Check if the length of skb is less than mbuf size */
- if (skb->len > kni->mbuf_size)
- goto drop;
-
- /**
- * Check if it has at least one free entry in tx_q and
- * one entry in alloc_q.
- */
- if (kni_fifo_free_count(kni->tx_q) == 0 ||
- kni_fifo_count(kni->alloc_q) == 0) {
- /**
- * If no free entry in tx_q or no entry in alloc_q,
- * drops skb and goes out.
- */
- goto drop;
- }
-
- /* dequeue a mbuf from alloc_q */
- ret = kni_fifo_get(kni->alloc_q, &pkt_pa, 1);
- if (likely(ret == 1)) {
- void *data_kva;
-
- pkt_kva = get_kva(kni, pkt_pa);
- data_kva = get_data_kva(kni, pkt_kva);
- pkt_va = pa2va(pkt_pa, pkt_kva);
-
- len = skb->len;
- memcpy(data_kva, skb->data, len);
- if (unlikely(len < ETH_ZLEN)) {
- memset(data_kva + len, 0, ETH_ZLEN - len);
- len = ETH_ZLEN;
- }
- pkt_kva->pkt_len = len;
- pkt_kva->data_len = len;
-
- /* enqueue mbuf into tx_q */
- ret = kni_fifo_put(kni->tx_q, &pkt_va, 1);
- if (unlikely(ret != 1)) {
- /* Failing should not happen */
- pr_err("Fail to enqueue mbuf into tx_q\n");
- goto drop;
- }
- } else {
- /* Failing should not happen */
- pr_err("Fail to dequeue mbuf from alloc_q\n");
- goto drop;
- }
-
- /* Free skb and update statistics */
- dev_kfree_skb(skb);
- dev->stats.tx_bytes += len;
- dev->stats.tx_packets++;
-
- return NETDEV_TX_OK;
-
-drop:
- /* Free skb and update statistics */
- dev_kfree_skb(skb);
- dev->stats.tx_dropped++;
-
- return NETDEV_TX_OK;
-}
-
-/*
- * RX: normal working mode
- */
-static void
-kni_net_rx_normal(struct kni_dev *kni)
-{
- uint32_t ret;
- uint32_t len;
- uint32_t i, num_rx, num_fq;
- struct rte_kni_mbuf *kva, *prev_kva;
- void *data_kva;
- struct sk_buff *skb;
- struct net_device *dev = kni->net_dev;
-
- /* Get the number of free entries in free_q */
- num_fq = kni_fifo_free_count(kni->free_q);
- if (num_fq == 0) {
- /* No room on the free_q, bail out */
- return;
- }
-
- /* Calculate the number of entries to dequeue from rx_q */
- num_rx = min_t(uint32_t, num_fq, MBUF_BURST_SZ);
-
- /* Burst dequeue from rx_q */
- num_rx = kni_fifo_get(kni->rx_q, kni->pa, num_rx);
- if (num_rx == 0)
- return;
-
- /* Transfer received packets to netif */
- for (i = 0; i < num_rx; i++) {
- kva = get_kva(kni, kni->pa[i]);
- len = kva->pkt_len;
- data_kva = get_data_kva(kni, kva);
- kni->va[i] = pa2va(kni->pa[i], kva);
-
- skb = netdev_alloc_skb(dev, len);
- if (!skb) {
- /* Update statistics */
- dev->stats.rx_dropped++;
- continue;
- }
-
- if (kva->nb_segs == 1) {
- memcpy(skb_put(skb, len), data_kva, len);
- } else {
- int nb_segs;
- int kva_nb_segs = kva->nb_segs;
-
- for (nb_segs = 0; nb_segs < kva_nb_segs; nb_segs++) {
- memcpy(skb_put(skb, kva->data_len),
- data_kva, kva->data_len);
-
- if (!kva->next)
- break;
-
- prev_kva = kva;
- kva = get_kva(kni, kva->next);
- data_kva = kva2data_kva(kva);
- /* Convert physical address to virtual address */
- prev_kva->next = pa2va(prev_kva->next, kva);
- }
- }
-
- skb->protocol = eth_type_trans(skb, dev);
- skb->ip_summed = CHECKSUM_UNNECESSARY;
-
- /* Call netif interface */
-#ifdef HAVE_NETIF_RX_NI
- netif_rx_ni(skb);
-#else
- netif_rx(skb);
-#endif
-
- /* Update statistics */
- dev->stats.rx_bytes += len;
- dev->stats.rx_packets++;
- }
-
- /* Burst enqueue mbufs into free_q */
- ret = kni_fifo_put(kni->free_q, kni->va, num_rx);
- if (ret != num_rx)
- /* Failing should not happen */
- pr_err("Fail to enqueue entries into free_q\n");
-}
-
-/*
- * RX: loopback with enqueue/dequeue fifos.
- */
-static void
-kni_net_rx_lo_fifo(struct kni_dev *kni)
-{
- uint32_t ret;
- uint32_t len;
- uint32_t i, num, num_rq, num_tq, num_aq, num_fq;
- struct rte_kni_mbuf *kva, *next_kva;
- void *data_kva;
- struct rte_kni_mbuf *alloc_kva;
- void *alloc_data_kva;
- struct net_device *dev = kni->net_dev;
-
- /* Get the number of entries in rx_q */
- num_rq = kni_fifo_count(kni->rx_q);
-
- /* Get the number of free entries in tx_q */
- num_tq = kni_fifo_free_count(kni->tx_q);
-
- /* Get the number of entries in alloc_q */
- num_aq = kni_fifo_count(kni->alloc_q);
-
- /* Get the number of free entries in free_q */
- num_fq = kni_fifo_free_count(kni->free_q);
-
- /* Calculate the number of entries to be dequeued from rx_q */
- num = min(num_rq, num_tq);
- num = min(num, num_aq);
- num = min(num, num_fq);
- num = min_t(uint32_t, num, MBUF_BURST_SZ);
-
- /* Return if no entry to dequeue from rx_q */
- if (num == 0)
- return;
-
- /* Burst dequeue from rx_q */
- ret = kni_fifo_get(kni->rx_q, kni->pa, num);
- if (ret == 0)
- return; /* Failing should not happen */
-
- /* Dequeue entries from alloc_q */
- ret = kni_fifo_get(kni->alloc_q, kni->alloc_pa, num);
- if (ret) {
- num = ret;
- /* Copy mbufs */
- for (i = 0; i < num; i++) {
- kva = get_kva(kni, kni->pa[i]);
- len = kva->data_len;
- data_kva = get_data_kva(kni, kva);
- kni->va[i] = pa2va(kni->pa[i], kva);
-
- while (kva->next) {
- next_kva = get_kva(kni, kva->next);
- /* Convert physical address to virtual address */
- kva->next = pa2va(kva->next, next_kva);
- kva = next_kva;
- }
-
- alloc_kva = get_kva(kni, kni->alloc_pa[i]);
- alloc_data_kva = get_data_kva(kni, alloc_kva);
- kni->alloc_va[i] = pa2va(kni->alloc_pa[i], alloc_kva);
-
- memcpy(alloc_data_kva, data_kva, len);
- alloc_kva->pkt_len = len;
- alloc_kva->data_len = len;
-
- dev->stats.tx_bytes += len;
- dev->stats.rx_bytes += len;
- }
-
- /* Burst enqueue mbufs into tx_q */
- ret = kni_fifo_put(kni->tx_q, kni->alloc_va, num);
- if (ret != num)
- /* Failing should not happen */
- pr_err("Fail to enqueue mbufs into tx_q\n");
- }
-
- /* Burst enqueue mbufs into free_q */
- ret = kni_fifo_put(kni->free_q, kni->va, num);
- if (ret != num)
- /* Failing should not happen */
- pr_err("Fail to enqueue mbufs into free_q\n");
-
- /**
- * Update statistic, and enqueue/dequeue failure is impossible,
- * as all queues are checked at first.
- */
- dev->stats.tx_packets += num;
- dev->stats.rx_packets += num;
-}
-
-/*
- * RX: loopback with enqueue/dequeue fifos and sk buffer copies.
- */
-static void
-kni_net_rx_lo_fifo_skb(struct kni_dev *kni)
-{
- uint32_t ret;
- uint32_t len;
- uint32_t i, num_rq, num_fq, num;
- struct rte_kni_mbuf *kva, *prev_kva;
- void *data_kva;
- struct sk_buff *skb;
- struct net_device *dev = kni->net_dev;
-
- /* Get the number of entries in rx_q */
- num_rq = kni_fifo_count(kni->rx_q);
-
- /* Get the number of free entries in free_q */
- num_fq = kni_fifo_free_count(kni->free_q);
-
- /* Calculate the number of entries to dequeue from rx_q */
- num = min(num_rq, num_fq);
- num = min_t(uint32_t, num, MBUF_BURST_SZ);
-
- /* Return if no entry to dequeue from rx_q */
- if (num == 0)
- return;
-
- /* Burst dequeue mbufs from rx_q */
- ret = kni_fifo_get(kni->rx_q, kni->pa, num);
- if (ret == 0)
- return;
-
- /* Copy mbufs to sk buffer and then call tx interface */
- for (i = 0; i < num; i++) {
- kva = get_kva(kni, kni->pa[i]);
- len = kva->pkt_len;
- data_kva = get_data_kva(kni, kva);
- kni->va[i] = pa2va(kni->pa[i], kva);
-
- skb = netdev_alloc_skb(dev, len);
- if (skb) {
- memcpy(skb_put(skb, len), data_kva, len);
- skb->ip_summed = CHECKSUM_UNNECESSARY;
- dev_kfree_skb(skb);
- }
-
- /* Simulate real usage, allocate/copy skb twice */
- skb = netdev_alloc_skb(dev, len);
- if (skb == NULL) {
- dev->stats.rx_dropped++;
- continue;
- }
-
- if (kva->nb_segs == 1) {
- memcpy(skb_put(skb, len), data_kva, len);
- } else {
- int nb_segs;
- int kva_nb_segs = kva->nb_segs;
-
- for (nb_segs = 0; nb_segs < kva_nb_segs; nb_segs++) {
- memcpy(skb_put(skb, kva->data_len),
- data_kva, kva->data_len);
-
- if (!kva->next)
- break;
-
- prev_kva = kva;
- kva = get_kva(kni, kva->next);
- data_kva = get_data_kva(kni, kva);
- /* Convert physical address to virtual address */
- prev_kva->next = pa2va(prev_kva->next, kva);
- }
- }
-
- skb->ip_summed = CHECKSUM_UNNECESSARY;
-
- dev->stats.rx_bytes += len;
- dev->stats.rx_packets++;
-
- /* call tx interface */
- kni_net_tx(skb, dev);
- }
-
- /* enqueue all the mbufs from rx_q into free_q */
- ret = kni_fifo_put(kni->free_q, kni->va, num);
- if (ret != num)
- /* Failing should not happen */
- pr_err("Fail to enqueue mbufs into free_q\n");
-}
-
-/* rx interface */
-void
-kni_net_rx(struct kni_dev *kni)
-{
- /**
- * It doesn't need to check if it is NULL pointer,
- * as it has a default value
- */
- (*kni_net_rx_func)(kni);
-}
-
-/*
- * Deal with a transmit timeout.
- */
-#ifdef HAVE_TX_TIMEOUT_TXQUEUE
-static void
-kni_net_tx_timeout(struct net_device *dev, unsigned int txqueue)
-#else
-static void
-kni_net_tx_timeout(struct net_device *dev)
-#endif
-{
- pr_debug("Transmit timeout at %ld, latency %ld\n", jiffies,
- jiffies - dev_trans_start(dev));
-
- dev->stats.tx_errors++;
- netif_wake_queue(dev);
-}
-
-static int
-kni_net_change_mtu(struct net_device *dev, int new_mtu)
-{
- int ret;
- struct rte_kni_request req;
-
- pr_debug("kni_net_change_mtu new mtu %d to be set\n", new_mtu);
-
- memset(&req, 0, sizeof(req));
- req.req_id = RTE_KNI_REQ_CHANGE_MTU;
- req.new_mtu = new_mtu;
- ret = kni_net_process_request(dev, &req);
- if (ret == 0 && req.result == 0)
- dev->mtu = new_mtu;
-
- return (ret == 0) ? req.result : ret;
-}
-
-static void
-kni_net_change_rx_flags(struct net_device *netdev, int flags)
-{
- struct rte_kni_request req;
-
- memset(&req, 0, sizeof(req));
-
- if (flags & IFF_ALLMULTI) {
- req.req_id = RTE_KNI_REQ_CHANGE_ALLMULTI;
-
- if (netdev->flags & IFF_ALLMULTI)
- req.allmulti = 1;
- else
- req.allmulti = 0;
- }
-
- if (flags & IFF_PROMISC) {
- req.req_id = RTE_KNI_REQ_CHANGE_PROMISC;
-
- if (netdev->flags & IFF_PROMISC)
- req.promiscusity = 1;
- else
- req.promiscusity = 0;
- }
-
- kni_net_process_request(netdev, &req);
-}
-
-/*
- * Checks if the user space application provided the resp message
- */
-void
-kni_net_poll_resp(struct kni_dev *kni)
-{
- if (kni_fifo_count(kni->resp_q))
- wake_up_interruptible(&kni->wq);
-}
-
-/*
- * Fill the eth header
- */
-static int
-kni_net_header(struct sk_buff *skb, struct net_device *dev,
- unsigned short type, const void *daddr,
- const void *saddr, uint32_t len)
-{
- struct ethhdr *eth = (struct ethhdr *) skb_push(skb, ETH_HLEN);
-
- memcpy(eth->h_source, saddr ? saddr : dev->dev_addr, dev->addr_len);
- memcpy(eth->h_dest, daddr ? daddr : dev->dev_addr, dev->addr_len);
- eth->h_proto = htons(type);
-
- return dev->hard_header_len;
-}
-
-/*
- * Re-fill the eth header
- */
-#ifdef HAVE_REBUILD_HEADER
-static int
-kni_net_rebuild_header(struct sk_buff *skb)
-{
- struct net_device *dev = skb->dev;
- struct ethhdr *eth = (struct ethhdr *) skb->data;
-
- memcpy(eth->h_source, dev->dev_addr, dev->addr_len);
- memcpy(eth->h_dest, dev->dev_addr, dev->addr_len);
-
- return 0;
-}
-#endif /* < 4.1.0 */
-
-/**
- * kni_net_set_mac - Change the Ethernet Address of the KNI NIC
- * @netdev: network interface device structure
- * @p: pointer to an address structure
- *
- * Returns 0 on success, negative on failure
- **/
-static int
-kni_net_set_mac(struct net_device *netdev, void *p)
-{
- int ret;
- struct rte_kni_request req;
- struct sockaddr *addr = p;
-
- memset(&req, 0, sizeof(req));
- req.req_id = RTE_KNI_REQ_CHANGE_MAC_ADDR;
-
- if (!is_valid_ether_addr((unsigned char *)(addr->sa_data)))
- return -EADDRNOTAVAIL;
-
- memcpy(req.mac_addr, addr->sa_data, netdev->addr_len);
-#ifdef HAVE_ETH_HW_ADDR_SET
- eth_hw_addr_set(netdev, addr->sa_data);
-#else
- memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len);
-#endif
-
- ret = kni_net_process_request(netdev, &req);
-
- return (ret == 0 ? req.result : ret);
-}
-
-#ifdef HAVE_CHANGE_CARRIER_CB
-static int
-kni_net_change_carrier(struct net_device *dev, bool new_carrier)
-{
- if (new_carrier)
- netif_carrier_on(dev);
- else
- netif_carrier_off(dev);
- return 0;
-}
-#endif
-
-static const struct header_ops kni_net_header_ops = {
- .create = kni_net_header,
- .parse = eth_header_parse,
-#ifdef HAVE_REBUILD_HEADER
- .rebuild = kni_net_rebuild_header,
-#endif /* < 4.1.0 */
- .cache = NULL, /* disable caching */
-};
-
-static const struct net_device_ops kni_net_netdev_ops = {
- .ndo_open = kni_net_open,
- .ndo_stop = kni_net_release,
- .ndo_set_config = kni_net_config,
- .ndo_change_rx_flags = kni_net_change_rx_flags,
- .ndo_start_xmit = kni_net_tx,
- .ndo_change_mtu = kni_net_change_mtu,
- .ndo_tx_timeout = kni_net_tx_timeout,
- .ndo_set_mac_address = kni_net_set_mac,
-#ifdef HAVE_CHANGE_CARRIER_CB
- .ndo_change_carrier = kni_net_change_carrier,
-#endif
-};
-
-static void kni_get_drvinfo(struct net_device *dev,
- struct ethtool_drvinfo *info)
-{
- strlcpy(info->version, KNI_VERSION, sizeof(info->version));
- strlcpy(info->driver, "kni", sizeof(info->driver));
-}
-
-static const struct ethtool_ops kni_net_ethtool_ops = {
- .get_drvinfo = kni_get_drvinfo,
- .get_link = ethtool_op_get_link,
-};
-
-void
-kni_net_init(struct net_device *dev)
-{
- struct kni_dev *kni = netdev_priv(dev);
-
- init_waitqueue_head(&kni->wq);
- mutex_init(&kni->sync_lock);
-
- ether_setup(dev); /* assign some of the fields */
- dev->netdev_ops = &kni_net_netdev_ops;
- dev->header_ops = &kni_net_header_ops;
- dev->ethtool_ops = &kni_net_ethtool_ops;
- dev->watchdog_timeo = WD_TIMEOUT;
-}
-
-void
-kni_net_config_lo_mode(char *lo_str)
-{
- if (!lo_str) {
- pr_debug("loopback disabled");
- return;
- }
-
- if (!strcmp(lo_str, "lo_mode_none"))
- pr_debug("loopback disabled");
- else if (!strcmp(lo_str, "lo_mode_fifo")) {
- pr_debug("loopback mode=lo_mode_fifo enabled");
- kni_net_rx_func = kni_net_rx_lo_fifo;
- } else if (!strcmp(lo_str, "lo_mode_fifo_skb")) {
- pr_debug("loopback mode=lo_mode_fifo_skb enabled");
- kni_net_rx_func = kni_net_rx_lo_fifo_skb;
- } else {
- pr_debug("Unknown loopback parameter, disabled");
- }
-}
diff --git a/kernel/linux/kni/meson.build b/kernel/linux/kni/meson.build
deleted file mode 100644
index 4c90069e9989..000000000000
--- a/kernel/linux/kni/meson.build
+++ /dev/null
@@ -1,41 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Luca Boccassi <bluca@debian.org>
-
-# For SUSE build check function arguments of ndo_tx_timeout API
-# Ref: https://jira.devtools.intel.com/browse/DPDK-29263
-kmod_cflags = ''
-file_path = kernel_source_dir + '/include/linux/netdevice.h'
-run_cmd = run_command('grep', 'ndo_tx_timeout', file_path, check: false)
-
-if run_cmd.stdout().contains('txqueue') == true
- kmod_cflags = '-DHAVE_ARG_TX_QUEUE'
-endif
-
-
-kni_mkfile = custom_target('rte_kni_makefile',
- output: 'Makefile',
- command: ['touch', '@OUTPUT@'])
-
-kni_sources = files(
- 'kni_misc.c',
- 'kni_net.c',
- 'Kbuild',
-)
-
-custom_target('rte_kni',
- input: kni_sources,
- output: 'rte_kni.ko',
- command: ['make', '-j4', '-C', kernel_build_dir,
- 'M=' + meson.current_build_dir(),
- 'src=' + meson.current_source_dir(),
- ' '.join(['MODULE_CFLAGS=', kmod_cflags,'-include '])
- + dpdk_source_root + '/config/rte_config.h' +
- ' -I' + dpdk_source_root + '/lib/eal/include' +
- ' -I' + dpdk_source_root + '/lib/kni' +
- ' -I' + dpdk_build_root +
- ' -I' + meson.current_source_dir(),
- 'modules'] + cross_args,
- depends: kni_mkfile,
- install: install,
- install_dir: kernel_install_dir,
- build_by_default: get_option('enable_kmods'))
diff --git a/kernel/linux/meson.build b/kernel/linux/meson.build
index 16a094899446..8d47074621f7 100644
--- a/kernel/linux/meson.build
+++ b/kernel/linux/meson.build
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2018 Intel Corporation
-subdirs = ['kni']
+subdirs = []
kernel_build_dir = get_option('kernel_dir')
kernel_source_dir = get_option('kernel_dir')
diff --git a/lib/eal/common/eal_common_log.c b/lib/eal/common/eal_common_log.c
index bd7b188ceb4a..0a1d219d6924 100644
--- a/lib/eal/common/eal_common_log.c
+++ b/lib/eal/common/eal_common_log.c
@@ -356,7 +356,6 @@ static const struct logtype logtype_strings[] = {
{RTE_LOGTYPE_PMD, "pmd"},
{RTE_LOGTYPE_HASH, "lib.hash"},
{RTE_LOGTYPE_LPM, "lib.lpm"},
- {RTE_LOGTYPE_KNI, "lib.kni"},
{RTE_LOGTYPE_ACL, "lib.acl"},
{RTE_LOGTYPE_POWER, "lib.power"},
{RTE_LOGTYPE_METER, "lib.meter"},
diff --git a/lib/eal/include/rte_log.h b/lib/eal/include/rte_log.h
index 6d2b0856a565..bdefff2a5933 100644
--- a/lib/eal/include/rte_log.h
+++ b/lib/eal/include/rte_log.h
@@ -34,7 +34,7 @@ extern "C" {
#define RTE_LOGTYPE_PMD 5 /**< Log related to poll mode driver. */
#define RTE_LOGTYPE_HASH 6 /**< Log related to hash table. */
#define RTE_LOGTYPE_LPM 7 /**< Log related to LPM. */
-#define RTE_LOGTYPE_KNI 8 /**< Log related to KNI. */
+ /* was RTE_LOGTYPE_KNI */
#define RTE_LOGTYPE_ACL 9 /**< Log related to ACL. */
#define RTE_LOGTYPE_POWER 10 /**< Log related to power. */
#define RTE_LOGTYPE_METER 11 /**< Log related to QoS meter. */
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index c6efd920145c..a1fefcd9d83a 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -1084,11 +1084,6 @@ rte_eal_init(int argc, char **argv)
*/
iova_mode = RTE_IOVA_VA;
RTE_LOG(DEBUG, EAL, "Physical addresses are unavailable, selecting IOVA as VA mode.\n");
-#if defined(RTE_LIB_KNI) && LINUX_VERSION_CODE >= KERNEL_VERSION(4, 10, 0)
- } else if (rte_eal_check_module("rte_kni") == 1) {
- iova_mode = RTE_IOVA_PA;
- RTE_LOG(DEBUG, EAL, "KNI is loaded, selecting IOVA as PA mode for better KNI performance.\n");
-#endif
} else if (is_iommu_enabled()) {
/* we have an IOMMU, pick IOVA as VA mode */
iova_mode = RTE_IOVA_VA;
@@ -1101,20 +1096,6 @@ rte_eal_init(int argc, char **argv)
RTE_LOG(DEBUG, EAL, "IOMMU is not available, selecting IOVA as PA mode.\n");
}
}
-#if defined(RTE_LIB_KNI) && LINUX_VERSION_CODE < KERNEL_VERSION(4, 10, 0)
- /* Workaround for KNI which requires physical address to work
- * in kernels < 4.10
- */
- if (iova_mode == RTE_IOVA_VA &&
- rte_eal_check_module("rte_kni") == 1) {
- if (phys_addrs) {
- iova_mode = RTE_IOVA_PA;
- RTE_LOG(WARNING, EAL, "Forcing IOVA as 'PA' because KNI module is loaded\n");
- } else {
- RTE_LOG(DEBUG, EAL, "KNI can not work since physical addresses are unavailable\n");
- }
- }
-#endif
rte_eal_get_configuration()->iova_mode = iova_mode;
} else {
rte_eal_get_configuration()->iova_mode =
diff --git a/lib/kni/meson.build b/lib/kni/meson.build
deleted file mode 100644
index 5ce410f7f2d2..000000000000
--- a/lib/kni/meson.build
+++ /dev/null
@@ -1,21 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017 Intel Corporation
-
-if is_windows
- build = false
- reason = 'not supported on Windows'
- subdir_done()
-endif
-
-if dpdk_conf.get('RTE_IOVA_IN_MBUF') == 0
- build = false
- reason = 'requires IOVA in mbuf (set enable_iova_as_pa option)'
-endif
-
-if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
- build = false
- reason = 'only supported on 64-bit Linux'
-endif
-sources = files('rte_kni.c')
-headers = files('rte_kni.h', 'rte_kni_common.h')
-deps += ['ethdev', 'pci']
diff --git a/lib/kni/rte_kni.c b/lib/kni/rte_kni.c
deleted file mode 100644
index bfa6a001ff59..000000000000
--- a/lib/kni/rte_kni.c
+++ /dev/null
@@ -1,843 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#ifndef RTE_EXEC_ENV_LINUX
-#error "KNI is not supported"
-#endif
-
-#include <string.h>
-#include <fcntl.h>
-#include <unistd.h>
-#include <sys/ioctl.h>
-#include <linux/version.h>
-
-#include <rte_string_fns.h>
-#include <rte_ethdev.h>
-#include <rte_malloc.h>
-#include <rte_log.h>
-#include <rte_kni.h>
-#include <rte_memzone.h>
-#include <rte_tailq.h>
-#include <rte_eal_memconfig.h>
-#include <rte_kni_common.h>
-#include "rte_kni_fifo.h"
-
-#define MAX_MBUF_BURST_NUM 32
-
-/* Maximum number of ring entries */
-#define KNI_FIFO_COUNT_MAX 1024
-#define KNI_FIFO_SIZE (KNI_FIFO_COUNT_MAX * sizeof(void *) + \
- sizeof(struct rte_kni_fifo))
-
-#define KNI_REQUEST_MBUF_NUM_MAX 32
-
-#define KNI_MEM_CHECK(cond, fail) do { if (cond) goto fail; } while (0)
-
-#define KNI_MZ_NAME_FMT "kni_info_%s"
-#define KNI_TX_Q_MZ_NAME_FMT "kni_tx_%s"
-#define KNI_RX_Q_MZ_NAME_FMT "kni_rx_%s"
-#define KNI_ALLOC_Q_MZ_NAME_FMT "kni_alloc_%s"
-#define KNI_FREE_Q_MZ_NAME_FMT "kni_free_%s"
-#define KNI_REQ_Q_MZ_NAME_FMT "kni_req_%s"
-#define KNI_RESP_Q_MZ_NAME_FMT "kni_resp_%s"
-#define KNI_SYNC_ADDR_MZ_NAME_FMT "kni_sync_%s"
-
-TAILQ_HEAD(rte_kni_list, rte_tailq_entry);
-
-static struct rte_tailq_elem rte_kni_tailq = {
- .name = "RTE_KNI",
-};
-EAL_REGISTER_TAILQ(rte_kni_tailq)
-
-/**
- * KNI context
- */
-struct rte_kni {
- char name[RTE_KNI_NAMESIZE]; /**< KNI interface name */
- uint16_t group_id; /**< Group ID of KNI devices */
- uint32_t slot_id; /**< KNI pool slot ID */
- struct rte_mempool *pktmbuf_pool; /**< pkt mbuf mempool */
- unsigned int mbuf_size; /**< mbuf size */
-
- const struct rte_memzone *m_tx_q; /**< TX queue memzone */
- const struct rte_memzone *m_rx_q; /**< RX queue memzone */
- const struct rte_memzone *m_alloc_q;/**< Alloc queue memzone */
- const struct rte_memzone *m_free_q; /**< Free queue memzone */
-
- struct rte_kni_fifo *tx_q; /**< TX queue */
- struct rte_kni_fifo *rx_q; /**< RX queue */
- struct rte_kni_fifo *alloc_q; /**< Allocated mbufs queue */
- struct rte_kni_fifo *free_q; /**< To be freed mbufs queue */
-
- const struct rte_memzone *m_req_q; /**< Request queue memzone */
- const struct rte_memzone *m_resp_q; /**< Response queue memzone */
- const struct rte_memzone *m_sync_addr;/**< Sync addr memzone */
-
- /* For request & response */
- struct rte_kni_fifo *req_q; /**< Request queue */
- struct rte_kni_fifo *resp_q; /**< Response queue */
- void *sync_addr; /**< Req/Resp Mem address */
-
- struct rte_kni_ops ops; /**< operations for request */
-};
-
-enum kni_ops_status {
- KNI_REQ_NO_REGISTER = 0,
- KNI_REQ_REGISTERED,
-};
-
-static void kni_free_mbufs(struct rte_kni *kni);
-static void kni_allocate_mbufs(struct rte_kni *kni);
-
-static volatile int kni_fd = -1;
-
-/* Shall be called before any allocation happens */
-int
-rte_kni_init(unsigned int max_kni_ifaces __rte_unused)
-{
- RTE_LOG(WARNING, KNI, "WARNING: KNI is deprecated and will be removed in DPDK 23.11\n");
-
-#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 10, 0)
- if (rte_eal_iova_mode() != RTE_IOVA_PA) {
- RTE_LOG(ERR, KNI, "KNI requires IOVA as PA\n");
- return -1;
- }
-#endif
-
- /* Check FD and open */
- if (kni_fd < 0) {
- kni_fd = open("/dev/" KNI_DEVICE, O_RDWR);
- if (kni_fd < 0) {
- RTE_LOG(ERR, KNI,
- "Can not open /dev/%s\n", KNI_DEVICE);
- return -1;
- }
- }
-
- return 0;
-}
-
-static struct rte_kni *
-__rte_kni_get(const char *name)
-{
- struct rte_kni *kni;
- struct rte_tailq_entry *te;
- struct rte_kni_list *kni_list;
-
- kni_list = RTE_TAILQ_CAST(rte_kni_tailq.head, rte_kni_list);
-
- TAILQ_FOREACH(te, kni_list, next) {
- kni = te->data;
- if (strncmp(name, kni->name, RTE_KNI_NAMESIZE) == 0)
- break;
- }
-
- if (te == NULL)
- kni = NULL;
-
- return kni;
-}
-
-static int
-kni_reserve_mz(struct rte_kni *kni)
-{
- char mz_name[RTE_MEMZONE_NAMESIZE];
-
- snprintf(mz_name, RTE_MEMZONE_NAMESIZE, KNI_TX_Q_MZ_NAME_FMT, kni->name);
- kni->m_tx_q = rte_memzone_reserve(mz_name, KNI_FIFO_SIZE, SOCKET_ID_ANY,
- RTE_MEMZONE_IOVA_CONTIG);
- KNI_MEM_CHECK(kni->m_tx_q == NULL, tx_q_fail);
-
- snprintf(mz_name, RTE_MEMZONE_NAMESIZE, KNI_RX_Q_MZ_NAME_FMT, kni->name);
- kni->m_rx_q = rte_memzone_reserve(mz_name, KNI_FIFO_SIZE, SOCKET_ID_ANY,
- RTE_MEMZONE_IOVA_CONTIG);
- KNI_MEM_CHECK(kni->m_rx_q == NULL, rx_q_fail);
-
- snprintf(mz_name, RTE_MEMZONE_NAMESIZE, KNI_ALLOC_Q_MZ_NAME_FMT, kni->name);
- kni->m_alloc_q = rte_memzone_reserve(mz_name, KNI_FIFO_SIZE, SOCKET_ID_ANY,
- RTE_MEMZONE_IOVA_CONTIG);
- KNI_MEM_CHECK(kni->m_alloc_q == NULL, alloc_q_fail);
-
- snprintf(mz_name, RTE_MEMZONE_NAMESIZE, KNI_FREE_Q_MZ_NAME_FMT, kni->name);
- kni->m_free_q = rte_memzone_reserve(mz_name, KNI_FIFO_SIZE, SOCKET_ID_ANY,
- RTE_MEMZONE_IOVA_CONTIG);
- KNI_MEM_CHECK(kni->m_free_q == NULL, free_q_fail);
-
- snprintf(mz_name, RTE_MEMZONE_NAMESIZE, KNI_REQ_Q_MZ_NAME_FMT, kni->name);
- kni->m_req_q = rte_memzone_reserve(mz_name, KNI_FIFO_SIZE, SOCKET_ID_ANY,
- RTE_MEMZONE_IOVA_CONTIG);
- KNI_MEM_CHECK(kni->m_req_q == NULL, req_q_fail);
-
- snprintf(mz_name, RTE_MEMZONE_NAMESIZE, KNI_RESP_Q_MZ_NAME_FMT, kni->name);
- kni->m_resp_q = rte_memzone_reserve(mz_name, KNI_FIFO_SIZE, SOCKET_ID_ANY,
- RTE_MEMZONE_IOVA_CONTIG);
- KNI_MEM_CHECK(kni->m_resp_q == NULL, resp_q_fail);
-
- snprintf(mz_name, RTE_MEMZONE_NAMESIZE, KNI_SYNC_ADDR_MZ_NAME_FMT, kni->name);
- kni->m_sync_addr = rte_memzone_reserve(mz_name, KNI_FIFO_SIZE, SOCKET_ID_ANY,
- RTE_MEMZONE_IOVA_CONTIG);
- KNI_MEM_CHECK(kni->m_sync_addr == NULL, sync_addr_fail);
-
- return 0;
-
-sync_addr_fail:
- rte_memzone_free(kni->m_resp_q);
-resp_q_fail:
- rte_memzone_free(kni->m_req_q);
-req_q_fail:
- rte_memzone_free(kni->m_free_q);
-free_q_fail:
- rte_memzone_free(kni->m_alloc_q);
-alloc_q_fail:
- rte_memzone_free(kni->m_rx_q);
-rx_q_fail:
- rte_memzone_free(kni->m_tx_q);
-tx_q_fail:
- return -1;
-}
-
-static void
-kni_release_mz(struct rte_kni *kni)
-{
- rte_memzone_free(kni->m_tx_q);
- rte_memzone_free(kni->m_rx_q);
- rte_memzone_free(kni->m_alloc_q);
- rte_memzone_free(kni->m_free_q);
- rte_memzone_free(kni->m_req_q);
- rte_memzone_free(kni->m_resp_q);
- rte_memzone_free(kni->m_sync_addr);
-}
-
-struct rte_kni *
-rte_kni_alloc(struct rte_mempool *pktmbuf_pool,
- const struct rte_kni_conf *conf,
- struct rte_kni_ops *ops)
-{
- int ret;
- struct rte_kni_device_info dev_info;
- struct rte_kni *kni;
- struct rte_tailq_entry *te;
- struct rte_kni_list *kni_list;
-
- if (!pktmbuf_pool || !conf || !conf->name[0])
- return NULL;
-
- /* Check if KNI subsystem has been initialized */
- if (kni_fd < 0) {
- RTE_LOG(ERR, KNI, "KNI subsystem has not been initialized. Invoke rte_kni_init() first\n");
- return NULL;
- }
-
- rte_mcfg_tailq_write_lock();
-
- kni = __rte_kni_get(conf->name);
- if (kni != NULL) {
- RTE_LOG(ERR, KNI, "KNI already exists\n");
- goto unlock;
- }
-
- te = rte_zmalloc("KNI_TAILQ_ENTRY", sizeof(*te), 0);
- if (te == NULL) {
- RTE_LOG(ERR, KNI, "Failed to allocate tailq entry\n");
- goto unlock;
- }
-
- kni = rte_zmalloc("KNI", sizeof(struct rte_kni), RTE_CACHE_LINE_SIZE);
- if (kni == NULL) {
- RTE_LOG(ERR, KNI, "KNI memory allocation failed\n");
- goto kni_fail;
- }
-
- strlcpy(kni->name, conf->name, RTE_KNI_NAMESIZE);
-
- if (ops)
- memcpy(&kni->ops, ops, sizeof(struct rte_kni_ops));
- else
- kni->ops.port_id = UINT16_MAX;
-
- memset(&dev_info, 0, sizeof(dev_info));
- dev_info.core_id = conf->core_id;
- dev_info.force_bind = conf->force_bind;
- dev_info.group_id = conf->group_id;
- dev_info.mbuf_size = conf->mbuf_size;
- dev_info.mtu = conf->mtu;
- dev_info.min_mtu = conf->min_mtu;
- dev_info.max_mtu = conf->max_mtu;
-
- memcpy(dev_info.mac_addr, conf->mac_addr, RTE_ETHER_ADDR_LEN);
-
- strlcpy(dev_info.name, conf->name, RTE_KNI_NAMESIZE);
-
- ret = kni_reserve_mz(kni);
- if (ret < 0)
- goto mz_fail;
-
- /* TX RING */
- kni->tx_q = kni->m_tx_q->addr;
- kni_fifo_init(kni->tx_q, KNI_FIFO_COUNT_MAX);
- dev_info.tx_phys = kni->m_tx_q->iova;
-
- /* RX RING */
- kni->rx_q = kni->m_rx_q->addr;
- kni_fifo_init(kni->rx_q, KNI_FIFO_COUNT_MAX);
- dev_info.rx_phys = kni->m_rx_q->iova;
-
- /* ALLOC RING */
- kni->alloc_q = kni->m_alloc_q->addr;
- kni_fifo_init(kni->alloc_q, KNI_FIFO_COUNT_MAX);
- dev_info.alloc_phys = kni->m_alloc_q->iova;
-
- /* FREE RING */
- kni->free_q = kni->m_free_q->addr;
- kni_fifo_init(kni->free_q, KNI_FIFO_COUNT_MAX);
- dev_info.free_phys = kni->m_free_q->iova;
-
- /* Request RING */
- kni->req_q = kni->m_req_q->addr;
- kni_fifo_init(kni->req_q, KNI_FIFO_COUNT_MAX);
- dev_info.req_phys = kni->m_req_q->iova;
-
- /* Response RING */
- kni->resp_q = kni->m_resp_q->addr;
- kni_fifo_init(kni->resp_q, KNI_FIFO_COUNT_MAX);
- dev_info.resp_phys = kni->m_resp_q->iova;
-
- /* Req/Resp sync mem area */
- kni->sync_addr = kni->m_sync_addr->addr;
- dev_info.sync_va = kni->m_sync_addr->addr;
- dev_info.sync_phys = kni->m_sync_addr->iova;
-
- kni->pktmbuf_pool = pktmbuf_pool;
- kni->group_id = conf->group_id;
- kni->mbuf_size = conf->mbuf_size;
-
- dev_info.iova_mode = (rte_eal_iova_mode() == RTE_IOVA_VA) ? 1 : 0;
-
- ret = ioctl(kni_fd, RTE_KNI_IOCTL_CREATE, &dev_info);
- if (ret < 0)
- goto ioctl_fail;
-
- te->data = kni;
-
- kni_list = RTE_TAILQ_CAST(rte_kni_tailq.head, rte_kni_list);
- TAILQ_INSERT_TAIL(kni_list, te, next);
-
- rte_mcfg_tailq_write_unlock();
-
- /* Allocate mbufs and then put them into alloc_q */
- kni_allocate_mbufs(kni);
-
- return kni;
-
-ioctl_fail:
- kni_release_mz(kni);
-mz_fail:
- rte_free(kni);
-kni_fail:
- rte_free(te);
-unlock:
- rte_mcfg_tailq_write_unlock();
-
- return NULL;
-}
-
-static void
-kni_free_fifo(struct rte_kni_fifo *fifo)
-{
- int ret;
- struct rte_mbuf *pkt;
-
- do {
- ret = kni_fifo_get(fifo, (void **)&pkt, 1);
- if (ret)
- rte_pktmbuf_free(pkt);
- } while (ret);
-}
-
-static void *
-va2pa(struct rte_mbuf *m)
-{
- return (void *)((unsigned long)m -
- ((unsigned long)m->buf_addr - (unsigned long)rte_mbuf_iova_get(m)));
-}
-
-static void *
-va2pa_all(struct rte_mbuf *mbuf)
-{
- void *phy_mbuf = va2pa(mbuf);
- struct rte_mbuf *next = mbuf->next;
- while (next) {
- mbuf->next = va2pa(next);
- mbuf = next;
- next = mbuf->next;
- }
- return phy_mbuf;
-}
-
-static void
-obj_free(struct rte_mempool *mp __rte_unused, void *opaque, void *obj,
- unsigned obj_idx __rte_unused)
-{
- struct rte_mbuf *m = obj;
- void *mbuf_phys = opaque;
-
- if (va2pa(m) == mbuf_phys)
- rte_pktmbuf_free(m);
-}
-
-static void
-kni_free_fifo_phy(struct rte_mempool *mp, struct rte_kni_fifo *fifo)
-{
- void *mbuf_phys;
- int ret;
-
- do {
- ret = kni_fifo_get(fifo, &mbuf_phys, 1);
- if (ret)
- rte_mempool_obj_iter(mp, obj_free, mbuf_phys);
- } while (ret);
-}
-
-int
-rte_kni_release(struct rte_kni *kni)
-{
- struct rte_tailq_entry *te;
- struct rte_kni_list *kni_list;
- struct rte_kni_device_info dev_info;
- uint32_t retry = 5;
-
- if (!kni)
- return -1;
-
- kni_list = RTE_TAILQ_CAST(rte_kni_tailq.head, rte_kni_list);
-
- rte_mcfg_tailq_write_lock();
-
- TAILQ_FOREACH(te, kni_list, next) {
- if (te->data == kni)
- break;
- }
-
- if (te == NULL)
- goto unlock;
-
- strlcpy(dev_info.name, kni->name, sizeof(dev_info.name));
- if (ioctl(kni_fd, RTE_KNI_IOCTL_RELEASE, &dev_info) < 0) {
- RTE_LOG(ERR, KNI, "Fail to release kni device\n");
- goto unlock;
- }
-
- TAILQ_REMOVE(kni_list, te, next);
-
- rte_mcfg_tailq_write_unlock();
-
- /* mbufs in all fifo should be released, except request/response */
-
- /* wait until all rxq packets processed by kernel */
- while (kni_fifo_count(kni->rx_q) && retry--)
- usleep(1000);
-
- if (kni_fifo_count(kni->rx_q))
- RTE_LOG(ERR, KNI, "Fail to free all Rx-q items\n");
-
- kni_free_fifo_phy(kni->pktmbuf_pool, kni->alloc_q);
- kni_free_fifo(kni->tx_q);
- kni_free_fifo(kni->free_q);
-
- kni_release_mz(kni);
-
- rte_free(kni);
-
- rte_free(te);
-
- return 0;
-
-unlock:
- rte_mcfg_tailq_write_unlock();
-
- return -1;
-}
-
-/* default callback for request of configuring device mac address */
-static int
-kni_config_mac_address(uint16_t port_id, uint8_t mac_addr[])
-{
- int ret = 0;
-
- if (!rte_eth_dev_is_valid_port(port_id)) {
- RTE_LOG(ERR, KNI, "Invalid port id %d\n", port_id);
- return -EINVAL;
- }
-
- RTE_LOG(INFO, KNI, "Configure mac address of %d", port_id);
-
- ret = rte_eth_dev_default_mac_addr_set(port_id,
- (struct rte_ether_addr *)mac_addr);
- if (ret < 0)
- RTE_LOG(ERR, KNI, "Failed to config mac_addr for port %d\n",
- port_id);
-
- return ret;
-}
-
-/* default callback for request of configuring promiscuous mode */
-static int
-kni_config_promiscusity(uint16_t port_id, uint8_t to_on)
-{
- int ret;
-
- if (!rte_eth_dev_is_valid_port(port_id)) {
- RTE_LOG(ERR, KNI, "Invalid port id %d\n", port_id);
- return -EINVAL;
- }
-
- RTE_LOG(INFO, KNI, "Configure promiscuous mode of %d to %d\n",
- port_id, to_on);
-
- if (to_on)
- ret = rte_eth_promiscuous_enable(port_id);
- else
- ret = rte_eth_promiscuous_disable(port_id);
-
- if (ret != 0)
- RTE_LOG(ERR, KNI,
- "Failed to %s promiscuous mode for port %u: %s\n",
- to_on ? "enable" : "disable", port_id,
- rte_strerror(-ret));
-
- return ret;
-}
-
-/* default callback for request of configuring allmulticast mode */
-static int
-kni_config_allmulticast(uint16_t port_id, uint8_t to_on)
-{
- int ret;
-
- if (!rte_eth_dev_is_valid_port(port_id)) {
- RTE_LOG(ERR, KNI, "Invalid port id %d\n", port_id);
- return -EINVAL;
- }
-
- RTE_LOG(INFO, KNI, "Configure allmulticast mode of %d to %d\n",
- port_id, to_on);
-
- if (to_on)
- ret = rte_eth_allmulticast_enable(port_id);
- else
- ret = rte_eth_allmulticast_disable(port_id);
- if (ret != 0)
- RTE_LOG(ERR, KNI,
- "Failed to %s allmulticast mode for port %u: %s\n",
- to_on ? "enable" : "disable", port_id,
- rte_strerror(-ret));
-
- return ret;
-}
-
-int
-rte_kni_handle_request(struct rte_kni *kni)
-{
- unsigned int ret;
- struct rte_kni_request *req = NULL;
-
- if (kni == NULL)
- return -1;
-
- /* Get request mbuf */
- ret = kni_fifo_get(kni->req_q, (void **)&req, 1);
- if (ret != 1)
- return 0; /* It is OK of can not getting the request mbuf */
-
- if (req != kni->sync_addr) {
- RTE_LOG(ERR, KNI, "Wrong req pointer %p\n", req);
- return -1;
- }
-
- /* Analyze the request and call the relevant actions for it */
- switch (req->req_id) {
- case RTE_KNI_REQ_CHANGE_MTU: /* Change MTU */
- if (kni->ops.change_mtu)
- req->result = kni->ops.change_mtu(kni->ops.port_id,
- req->new_mtu);
- break;
- case RTE_KNI_REQ_CFG_NETWORK_IF: /* Set network interface up/down */
- if (kni->ops.config_network_if)
- req->result = kni->ops.config_network_if(kni->ops.port_id,
- req->if_up);
- break;
- case RTE_KNI_REQ_CHANGE_MAC_ADDR: /* Change MAC Address */
- if (kni->ops.config_mac_address)
- req->result = kni->ops.config_mac_address(
- kni->ops.port_id, req->mac_addr);
- else if (kni->ops.port_id != UINT16_MAX)
- req->result = kni_config_mac_address(
- kni->ops.port_id, req->mac_addr);
- break;
- case RTE_KNI_REQ_CHANGE_PROMISC: /* Change PROMISCUOUS MODE */
- if (kni->ops.config_promiscusity)
- req->result = kni->ops.config_promiscusity(
- kni->ops.port_id, req->promiscusity);
- else if (kni->ops.port_id != UINT16_MAX)
- req->result = kni_config_promiscusity(
- kni->ops.port_id, req->promiscusity);
- break;
- case RTE_KNI_REQ_CHANGE_ALLMULTI: /* Change ALLMULTICAST MODE */
- if (kni->ops.config_allmulticast)
- req->result = kni->ops.config_allmulticast(
- kni->ops.port_id, req->allmulti);
- else if (kni->ops.port_id != UINT16_MAX)
- req->result = kni_config_allmulticast(
- kni->ops.port_id, req->allmulti);
- break;
- default:
- RTE_LOG(ERR, KNI, "Unknown request id %u\n", req->req_id);
- req->result = -EINVAL;
- break;
- }
-
- /* if needed, construct response buffer and put it back to resp_q */
- if (!req->async)
- ret = kni_fifo_put(kni->resp_q, (void **)&req, 1);
- else
- ret = 1;
- if (ret != 1) {
- RTE_LOG(ERR, KNI, "Fail to put the muf back to resp_q\n");
- return -1; /* It is an error of can't putting the mbuf back */
- }
-
- return 0;
-}
-
-unsigned
-rte_kni_tx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned int num)
-{
- num = RTE_MIN(kni_fifo_free_count(kni->rx_q), num);
- void *phy_mbufs[num];
- unsigned int ret;
- unsigned int i;
-
- for (i = 0; i < num; i++)
- phy_mbufs[i] = va2pa_all(mbufs[i]);
-
- ret = kni_fifo_put(kni->rx_q, phy_mbufs, num);
-
- /* Get mbufs from free_q and then free them */
- kni_free_mbufs(kni);
-
- return ret;
-}
-
-unsigned
-rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned int num)
-{
- unsigned int ret = kni_fifo_get(kni->tx_q, (void **)mbufs, num);
-
- /* If buffers removed or alloc_q is empty, allocate mbufs and then put them into alloc_q */
- if (ret || (kni_fifo_count(kni->alloc_q) == 0))
- kni_allocate_mbufs(kni);
-
- return ret;
-}
-
-static void
-kni_free_mbufs(struct rte_kni *kni)
-{
- int i, ret;
- struct rte_mbuf *pkts[MAX_MBUF_BURST_NUM];
-
- ret = kni_fifo_get(kni->free_q, (void **)pkts, MAX_MBUF_BURST_NUM);
- if (likely(ret > 0)) {
- for (i = 0; i < ret; i++)
- rte_pktmbuf_free(pkts[i]);
- }
-}
-
-static void
-kni_allocate_mbufs(struct rte_kni *kni)
-{
- int i, ret;
- struct rte_mbuf *pkts[MAX_MBUF_BURST_NUM];
- void *phys[MAX_MBUF_BURST_NUM];
- int allocq_free;
-
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pool) !=
- offsetof(struct rte_kni_mbuf, pool));
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_addr) !=
- offsetof(struct rte_kni_mbuf, buf_addr));
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, next) !=
- offsetof(struct rte_kni_mbuf, next));
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_off) !=
- offsetof(struct rte_kni_mbuf, data_off));
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
- offsetof(struct rte_kni_mbuf, data_len));
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
- offsetof(struct rte_kni_mbuf, pkt_len));
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) !=
- offsetof(struct rte_kni_mbuf, ol_flags));
-
- /* Check if pktmbuf pool has been configured */
- if (kni->pktmbuf_pool == NULL) {
- RTE_LOG(ERR, KNI, "No valid mempool for allocating mbufs\n");
- return;
- }
-
- allocq_free = kni_fifo_free_count(kni->alloc_q);
- allocq_free = (allocq_free > MAX_MBUF_BURST_NUM) ?
- MAX_MBUF_BURST_NUM : allocq_free;
- for (i = 0; i < allocq_free; i++) {
- pkts[i] = rte_pktmbuf_alloc(kni->pktmbuf_pool);
- if (unlikely(pkts[i] == NULL)) {
- /* Out of memory */
- RTE_LOG(ERR, KNI, "Out of memory\n");
- break;
- }
- phys[i] = va2pa(pkts[i]);
- }
-
- /* No pkt mbuf allocated */
- if (i <= 0)
- return;
-
- ret = kni_fifo_put(kni->alloc_q, phys, i);
-
- /* Check if any mbufs not put into alloc_q, and then free them */
- if (ret >= 0 && ret < i && ret < MAX_MBUF_BURST_NUM) {
- int j;
-
- for (j = ret; j < i; j++)
- rte_pktmbuf_free(pkts[j]);
- }
-}
-
-struct rte_kni *
-rte_kni_get(const char *name)
-{
- struct rte_kni *kni;
-
- if (name == NULL || name[0] == '\0')
- return NULL;
-
- rte_mcfg_tailq_read_lock();
-
- kni = __rte_kni_get(name);
-
- rte_mcfg_tailq_read_unlock();
-
- return kni;
-}
-
-const char *
-rte_kni_get_name(const struct rte_kni *kni)
-{
- return kni->name;
-}
-
-static enum kni_ops_status
-kni_check_request_register(struct rte_kni_ops *ops)
-{
- /* check if KNI request ops has been registered*/
- if (ops == NULL)
- return KNI_REQ_NO_REGISTER;
-
- if (ops->change_mtu == NULL
- && ops->config_network_if == NULL
- && ops->config_mac_address == NULL
- && ops->config_promiscusity == NULL
- && ops->config_allmulticast == NULL)
- return KNI_REQ_NO_REGISTER;
-
- return KNI_REQ_REGISTERED;
-}
-
-int
-rte_kni_register_handlers(struct rte_kni *kni, struct rte_kni_ops *ops)
-{
- enum kni_ops_status req_status;
-
- if (ops == NULL) {
- RTE_LOG(ERR, KNI, "Invalid KNI request operation.\n");
- return -1;
- }
-
- if (kni == NULL) {
- RTE_LOG(ERR, KNI, "Invalid kni info.\n");
- return -1;
- }
-
- req_status = kni_check_request_register(&kni->ops);
- if (req_status == KNI_REQ_REGISTERED) {
- RTE_LOG(ERR, KNI, "The KNI request operation has already registered.\n");
- return -1;
- }
-
- memcpy(&kni->ops, ops, sizeof(struct rte_kni_ops));
- return 0;
-}
-
-int
-rte_kni_unregister_handlers(struct rte_kni *kni)
-{
- if (kni == NULL) {
- RTE_LOG(ERR, KNI, "Invalid kni info.\n");
- return -1;
- }
-
- memset(&kni->ops, 0, sizeof(struct rte_kni_ops));
-
- return 0;
-}
-
-int
-rte_kni_update_link(struct rte_kni *kni, unsigned int linkup)
-{
- char path[64];
- char old_carrier[2];
- const char *new_carrier;
- int old_linkup;
- int fd, ret;
-
- if (kni == NULL)
- return -1;
-
- snprintf(path, sizeof(path), "/sys/devices/virtual/net/%s/carrier",
- kni->name);
-
- fd = open(path, O_RDWR);
- if (fd == -1) {
- RTE_LOG(ERR, KNI, "Failed to open file: %s.\n", path);
- return -1;
- }
-
- ret = read(fd, old_carrier, 2);
- if (ret < 1) {
- close(fd);
- return -1;
- }
- old_linkup = (old_carrier[0] == '1');
-
- if (old_linkup == (int)linkup)
- goto out;
-
- new_carrier = linkup ? "1" : "0";
- ret = write(fd, new_carrier, 1);
- if (ret < 1) {
- RTE_LOG(ERR, KNI, "Failed to write file: %s.\n", path);
- close(fd);
- return -1;
- }
-out:
- close(fd);
- return old_linkup;
-}
-
-void
-rte_kni_close(void)
-{
- if (kni_fd < 0)
- return;
-
- close(kni_fd);
- kni_fd = -1;
-}
diff --git a/lib/kni/rte_kni.h b/lib/kni/rte_kni.h
deleted file mode 100644
index 1e508acc829b..000000000000
--- a/lib/kni/rte_kni.h
+++ /dev/null
@@ -1,269 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#ifndef _RTE_KNI_H_
-#define _RTE_KNI_H_
-
-/**
- * @file
- * RTE KNI
- *
- * The KNI library provides the ability to create and destroy kernel NIC
- * interfaces that may be used by the RTE application to receive/transmit
- * packets from/to Linux kernel net interfaces.
- *
- * This library provides two APIs to burst receive packets from KNI interfaces,
- * and burst transmit packets to KNI interfaces.
- */
-
-#include <rte_compat.h>
-#include <rte_pci.h>
-#include <rte_ether.h>
-
-#include <rte_kni_common.h>
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-struct rte_kni;
-struct rte_mbuf;
-
-/**
- * Structure which has the function pointers for KNI interface.
- */
-struct rte_kni_ops {
- uint16_t port_id; /* Port ID */
-
- /* Pointer to function of changing MTU */
- int (*change_mtu)(uint16_t port_id, unsigned int new_mtu);
-
- /* Pointer to function of configuring network interface */
- int (*config_network_if)(uint16_t port_id, uint8_t if_up);
-
- /* Pointer to function of configuring mac address */
- int (*config_mac_address)(uint16_t port_id, uint8_t mac_addr[]);
-
- /* Pointer to function of configuring promiscuous mode */
- int (*config_promiscusity)(uint16_t port_id, uint8_t to_on);
-
- /* Pointer to function of configuring allmulticast mode */
- int (*config_allmulticast)(uint16_t port_id, uint8_t to_on);
-};
-
-/**
- * Structure for configuring KNI device.
- */
-struct rte_kni_conf {
- /*
- * KNI name which will be used in relevant network device.
- * Let the name as short as possible, as it will be part of
- * memzone name.
- */
- char name[RTE_KNI_NAMESIZE];
- uint32_t core_id; /* Core ID to bind kernel thread on */
- uint16_t group_id; /* Group ID */
- unsigned mbuf_size; /* mbuf size */
- struct rte_pci_addr addr; /* deprecated */
- struct rte_pci_id id; /* deprecated */
-
- __extension__
- uint8_t force_bind : 1; /* Flag to bind kernel thread */
- uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; /* MAC address assigned to KNI */
- uint16_t mtu;
- uint16_t min_mtu;
- uint16_t max_mtu;
-};
-
-/**
- * Initialize and preallocate KNI subsystem
- *
- * This function is to be executed on the main lcore only, after EAL
- * initialization and before any KNI interface is attempted to be
- * allocated
- *
- * @param max_kni_ifaces
- * The maximum number of KNI interfaces that can coexist concurrently
- *
- * @return
- * - 0 indicates success.
- * - negative value indicates failure.
- */
-int rte_kni_init(unsigned int max_kni_ifaces);
-
-
-/**
- * Allocate KNI interface according to the port id, mbuf size, mbuf pool,
- * configurations and callbacks for kernel requests.The KNI interface created
- * in the kernel space is the net interface the traditional Linux application
- * talking to.
- *
- * The rte_kni_alloc shall not be called before rte_kni_init() has been
- * called. rte_kni_alloc is thread safe.
- *
- * The mempool should have capacity of more than "2 x KNI_FIFO_COUNT_MAX"
- * elements for each KNI interface allocated.
- *
- * @param pktmbuf_pool
- * The mempool for allocating mbufs for packets.
- * @param conf
- * The pointer to the configurations of the KNI device.
- * @param ops
- * The pointer to the callbacks for the KNI kernel requests.
- *
- * @return
- * - The pointer to the context of a KNI interface.
- * - NULL indicate error.
- */
-struct rte_kni *rte_kni_alloc(struct rte_mempool *pktmbuf_pool,
- const struct rte_kni_conf *conf, struct rte_kni_ops *ops);
-
-/**
- * Release KNI interface according to the context. It will also release the
- * paired KNI interface in kernel space. All processing on the specific KNI
- * context need to be stopped before calling this interface.
- *
- * rte_kni_release is thread safe.
- *
- * @param kni
- * The pointer to the context of an existent KNI interface.
- *
- * @return
- * - 0 indicates success.
- * - negative value indicates failure.
- */
-int rte_kni_release(struct rte_kni *kni);
-
-/**
- * It is used to handle the request mbufs sent from kernel space.
- * Then analyzes it and calls the specific actions for the specific requests.
- * Finally constructs the response mbuf and puts it back to the resp_q.
- *
- * @param kni
- * The pointer to the context of an existent KNI interface.
- *
- * @return
- * - 0
- * - negative value indicates failure.
- */
-int rte_kni_handle_request(struct rte_kni *kni);
-
-/**
- * Retrieve a burst of packets from a KNI interface. The retrieved packets are
- * stored in rte_mbuf structures whose pointers are supplied in the array of
- * mbufs, and the maximum number is indicated by num. It handles allocating
- * the mbufs for KNI interface alloc queue.
- *
- * @param kni
- * The KNI interface context.
- * @param mbufs
- * The array to store the pointers of mbufs.
- * @param num
- * The maximum number per burst.
- *
- * @return
- * The actual number of packets retrieved.
- */
-unsigned rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs,
- unsigned num);
-
-/**
- * Send a burst of packets to a KNI interface. The packets to be sent out are
- * stored in rte_mbuf structures whose pointers are supplied in the array of
- * mbufs, and the maximum number is indicated by num. It handles the freeing of
- * the mbufs in the free queue of KNI interface.
- *
- * @param kni
- * The KNI interface context.
- * @param mbufs
- * The array to store the pointers of mbufs.
- * @param num
- * The maximum number per burst.
- *
- * @return
- * The actual number of packets sent.
- */
-unsigned rte_kni_tx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs,
- unsigned num);
-
-/**
- * Get the KNI context of its name.
- *
- * @param name
- * pointer to the KNI device name.
- *
- * @return
- * On success: Pointer to KNI interface.
- * On failure: NULL.
- */
-struct rte_kni *rte_kni_get(const char *name);
-
-/**
- * Get the name given to a KNI device
- *
- * @param kni
- * The KNI instance to query
- * @return
- * The pointer to the KNI name
- */
-const char *rte_kni_get_name(const struct rte_kni *kni);
-
-/**
- * Register KNI request handling for a specified port,and it can
- * be called by primary process or secondary process.
- *
- * @param kni
- * pointer to struct rte_kni.
- * @param ops
- * pointer to struct rte_kni_ops.
- *
- * @return
- * On success: 0
- * On failure: -1
- */
-int rte_kni_register_handlers(struct rte_kni *kni, struct rte_kni_ops *ops);
-
-/**
- * Unregister KNI request handling for a specified port.
- *
- * @param kni
- * pointer to struct rte_kni.
- *
- * @return
- * On success: 0
- * On failure: -1
- */
-int rte_kni_unregister_handlers(struct rte_kni *kni);
-
-/**
- * Update link carrier state for KNI port.
- *
- * Update the linkup/linkdown state of a KNI interface in the kernel.
- *
- * @param kni
- * pointer to struct rte_kni.
- * @param linkup
- * New link state:
- * 0 for linkdown.
- * > 0 for linkup.
- *
- * @return
- * On failure: -1
- * Previous link state == linkdown: 0
- * Previous link state == linkup: 1
- */
-__rte_experimental
-int
-rte_kni_update_link(struct rte_kni *kni, unsigned int linkup);
-
-/**
- * Close KNI device.
- */
-void rte_kni_close(void);
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif /* _RTE_KNI_H_ */
diff --git a/lib/kni/rte_kni_common.h b/lib/kni/rte_kni_common.h
deleted file mode 100644
index 8d3ee0fa4fc2..000000000000
--- a/lib/kni/rte_kni_common.h
+++ /dev/null
@@ -1,147 +0,0 @@
-/* SPDX-License-Identifier: (BSD-3-Clause OR LGPL-2.1) */
-/*
- * Copyright(c) 2007-2014 Intel Corporation.
- */
-
-#ifndef _RTE_KNI_COMMON_H_
-#define _RTE_KNI_COMMON_H_
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-#ifdef __KERNEL__
-#include <linux/if.h>
-#include <asm/barrier.h>
-#define RTE_STD_C11
-#else
-#include <rte_common.h>
-#include <rte_config.h>
-#endif
-
-/*
- * KNI name is part of memzone name. Must not exceed IFNAMSIZ.
- */
-#define RTE_KNI_NAMESIZE 16
-
-#define RTE_CACHE_LINE_MIN_SIZE 64
-
-/*
- * Request id.
- */
-enum rte_kni_req_id {
- RTE_KNI_REQ_UNKNOWN = 0,
- RTE_KNI_REQ_CHANGE_MTU,
- RTE_KNI_REQ_CFG_NETWORK_IF,
- RTE_KNI_REQ_CHANGE_MAC_ADDR,
- RTE_KNI_REQ_CHANGE_PROMISC,
- RTE_KNI_REQ_CHANGE_ALLMULTI,
- RTE_KNI_REQ_MAX,
-};
-
-/*
- * Structure for KNI request.
- */
-struct rte_kni_request {
- uint32_t req_id; /**< Request id */
- RTE_STD_C11
- union {
- uint32_t new_mtu; /**< New MTU */
- uint8_t if_up; /**< 1: interface up, 0: interface down */
- uint8_t mac_addr[6]; /**< MAC address for interface */
- uint8_t promiscusity;/**< 1: promisc mode enable, 0: disable */
- uint8_t allmulti; /**< 1: all-multicast mode enable, 0: disable */
- };
- int32_t async : 1; /**< 1: request is asynchronous */
- int32_t result; /**< Result for processing request */
-} __attribute__((__packed__));
-
-/*
- * Fifo struct mapped in a shared memory. It describes a circular buffer FIFO
- * Write and read should wrap around. Fifo is empty when write == read
- * Writing should never overwrite the read position
- */
-struct rte_kni_fifo {
-#ifdef RTE_USE_C11_MEM_MODEL
- unsigned write; /**< Next position to be written*/
- unsigned read; /**< Next position to be read */
-#else
- volatile unsigned write; /**< Next position to be written*/
- volatile unsigned read; /**< Next position to be read */
-#endif
- unsigned len; /**< Circular buffer length */
- unsigned elem_size; /**< Pointer size - for 32/64 bit OS */
- void *volatile buffer[]; /**< The buffer contains mbuf pointers */
-};
-
-/*
- * The kernel image of the rte_mbuf struct, with only the relevant fields.
- * Padding is necessary to assure the offsets of these fields
- */
-struct rte_kni_mbuf {
- void *buf_addr __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
- uint64_t buf_iova;
- uint16_t data_off; /**< Start address of data in segment buffer. */
- char pad1[2];
- uint16_t nb_segs; /**< Number of segments. */
- char pad4[2];
- uint64_t ol_flags; /**< Offload features. */
- char pad2[4];
- uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
- uint16_t data_len; /**< Amount of data in segment buffer. */
- char pad3[14];
- void *pool;
-
- /* fields on second cache line */
- __attribute__((__aligned__(RTE_CACHE_LINE_MIN_SIZE)))
- void *next; /**< Physical address of next mbuf in kernel. */
-};
-
-/*
- * Struct used to create a KNI device. Passed to the kernel in IOCTL call
- */
-
-struct rte_kni_device_info {
- char name[RTE_KNI_NAMESIZE]; /**< Network device name for KNI */
-
- phys_addr_t tx_phys;
- phys_addr_t rx_phys;
- phys_addr_t alloc_phys;
- phys_addr_t free_phys;
-
- /* Used by Ethtool */
- phys_addr_t req_phys;
- phys_addr_t resp_phys;
- phys_addr_t sync_phys;
- void * sync_va;
-
- /* mbuf mempool */
- void * mbuf_va;
- phys_addr_t mbuf_phys;
-
- uint16_t group_id; /**< Group ID */
- uint32_t core_id; /**< core ID to bind for kernel thread */
-
- __extension__
- uint8_t force_bind : 1; /**< Flag for kernel thread binding */
-
- /* mbuf size */
- unsigned mbuf_size;
- unsigned int mtu;
- unsigned int min_mtu;
- unsigned int max_mtu;
- uint8_t mac_addr[6];
- uint8_t iova_mode;
-};
-
-#define KNI_DEVICE "kni"
-
-#define RTE_KNI_IOCTL_TEST _IOWR(0, 1, int)
-#define RTE_KNI_IOCTL_CREATE _IOWR(0, 2, struct rte_kni_device_info)
-#define RTE_KNI_IOCTL_RELEASE _IOWR(0, 3, struct rte_kni_device_info)
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif /* _RTE_KNI_COMMON_H_ */
diff --git a/lib/kni/rte_kni_fifo.h b/lib/kni/rte_kni_fifo.h
deleted file mode 100644
index d2ec82fe87fc..000000000000
--- a/lib/kni/rte_kni_fifo.h
+++ /dev/null
@@ -1,117 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-
-
-/**
- * @internal when c11 memory model enabled use c11 atomic memory barrier.
- * when under non c11 memory model use rte_smp_* memory barrier.
- *
- * @param src
- * Pointer to the source data.
- * @param dst
- * Pointer to the destination data.
- * @param value
- * Data value.
- */
-#ifdef RTE_USE_C11_MEM_MODEL
-#define __KNI_LOAD_ACQUIRE(src) ({ \
- __atomic_load_n((src), __ATOMIC_ACQUIRE); \
- })
-#define __KNI_STORE_RELEASE(dst, value) do { \
- __atomic_store_n((dst), value, __ATOMIC_RELEASE); \
- } while(0)
-#else
-#define __KNI_LOAD_ACQUIRE(src) ({ \
- typeof (*(src)) val = *(src); \
- rte_smp_rmb(); \
- val; \
- })
-#define __KNI_STORE_RELEASE(dst, value) do { \
- *(dst) = value; \
- rte_smp_wmb(); \
- } while(0)
-#endif
-
-/**
- * Initializes the kni fifo structure
- */
-static void
-kni_fifo_init(struct rte_kni_fifo *fifo, unsigned size)
-{
- /* Ensure size is power of 2 */
- if (size & (size - 1))
- rte_panic("KNI fifo size must be power of 2\n");
-
- fifo->write = 0;
- fifo->read = 0;
- fifo->len = size;
- fifo->elem_size = sizeof(void *);
-}
-
-/**
- * Adds num elements into the fifo. Return the number actually written
- */
-static inline unsigned
-kni_fifo_put(struct rte_kni_fifo *fifo, void **data, unsigned num)
-{
- unsigned i = 0;
- unsigned fifo_write = fifo->write;
- unsigned new_write = fifo_write;
- unsigned fifo_read = __KNI_LOAD_ACQUIRE(&fifo->read);
-
- for (i = 0; i < num; i++) {
- new_write = (new_write + 1) & (fifo->len - 1);
-
- if (new_write == fifo_read)
- break;
- fifo->buffer[fifo_write] = data[i];
- fifo_write = new_write;
- }
- __KNI_STORE_RELEASE(&fifo->write, fifo_write);
- return i;
-}
-
-/**
- * Get up to num elements from the fifo. Return the number actually read
- */
-static inline unsigned
-kni_fifo_get(struct rte_kni_fifo *fifo, void **data, unsigned num)
-{
- unsigned i = 0;
- unsigned new_read = fifo->read;
- unsigned fifo_write = __KNI_LOAD_ACQUIRE(&fifo->write);
-
- for (i = 0; i < num; i++) {
- if (new_read == fifo_write)
- break;
-
- data[i] = fifo->buffer[new_read];
- new_read = (new_read + 1) & (fifo->len - 1);
- }
- __KNI_STORE_RELEASE(&fifo->read, new_read);
- return i;
-}
-
-/**
- * Get the num of elements in the fifo
- */
-static inline uint32_t
-kni_fifo_count(struct rte_kni_fifo *fifo)
-{
- unsigned fifo_write = __KNI_LOAD_ACQUIRE(&fifo->write);
- unsigned fifo_read = __KNI_LOAD_ACQUIRE(&fifo->read);
- return (fifo->len + fifo_write - fifo_read) & (fifo->len - 1);
-}
-
-/**
- * Get the num of available elements in the fifo
- */
-static inline uint32_t
-kni_fifo_free_count(struct rte_kni_fifo *fifo)
-{
- uint32_t fifo_write = __KNI_LOAD_ACQUIRE(&fifo->write);
- uint32_t fifo_read = __KNI_LOAD_ACQUIRE(&fifo->read);
- return (fifo_read - fifo_write - 1) & (fifo->len - 1);
-}
diff --git a/lib/kni/version.map b/lib/kni/version.map
deleted file mode 100644
index 83bbbe880f43..000000000000
--- a/lib/kni/version.map
+++ /dev/null
@@ -1,24 +0,0 @@
-DPDK_23 {
- global:
-
- rte_kni_alloc;
- rte_kni_close;
- rte_kni_get;
- rte_kni_get_name;
- rte_kni_handle_request;
- rte_kni_init;
- rte_kni_register_handlers;
- rte_kni_release;
- rte_kni_rx_burst;
- rte_kni_tx_burst;
- rte_kni_unregister_handlers;
-
- local: *;
-};
-
-EXPERIMENTAL {
- global:
-
- # updated in v21.08
- rte_kni_update_link;
-};
diff --git a/lib/meson.build b/lib/meson.build
index fac2f52cad4f..06df4f57ad6e 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -39,7 +39,6 @@ libraries = [
'gso',
'ip_frag',
'jobstats',
- 'kni',
'latencystats',
'lpm',
'member',
@@ -75,7 +74,6 @@ optional_libs = [
'graph',
'gro',
'gso',
- 'kni',
'jobstats',
'latencystats',
'metrics',
@@ -90,7 +88,6 @@ optional_libs = [
dpdk_libs_deprecated += [
'flow_classify',
- 'kni',
]
disabled_libs = []
diff --git a/lib/port/meson.build b/lib/port/meson.build
index 3ab37e2cb4b7..b0af2b185b39 100644
--- a/lib/port/meson.build
+++ b/lib/port/meson.build
@@ -45,9 +45,3 @@ if dpdk_conf.has('RTE_HAS_LIBPCAP')
dpdk_conf.set('RTE_PORT_PCAP', 1)
ext_deps += pcap_dep # dependency provided in config/meson.build
endif
-
-if dpdk_conf.has('RTE_LIB_KNI')
- sources += files('rte_port_kni.c')
- headers += files('rte_port_kni.h')
- deps += 'kni'
-endif
diff --git a/lib/port/rte_port_kni.c b/lib/port/rte_port_kni.c
deleted file mode 100644
index 1c7a6cb200ea..000000000000
--- a/lib/port/rte_port_kni.c
+++ /dev/null
@@ -1,515 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016 Ethan Zhuang <zhuangwj@gmail.com>.
- * Copyright(c) 2016 Intel Corporation.
- */
-#include <string.h>
-
-#include <rte_malloc.h>
-#include <rte_kni.h>
-
-#include "rte_port_kni.h"
-
-/*
- * Port KNI Reader
- */
-#ifdef RTE_PORT_STATS_COLLECT
-
-#define RTE_PORT_KNI_READER_STATS_PKTS_IN_ADD(port, val) \
- port->stats.n_pkts_in += val
-#define RTE_PORT_KNI_READER_STATS_PKTS_DROP_ADD(port, val) \
- port->stats.n_pkts_drop += val
-
-#else
-
-#define RTE_PORT_KNI_READER_STATS_PKTS_IN_ADD(port, val)
-#define RTE_PORT_KNI_READER_STATS_PKTS_DROP_ADD(port, val)
-
-#endif
-
-struct rte_port_kni_reader {
- struct rte_port_in_stats stats;
-
- struct rte_kni *kni;
-};
-
-static void *
-rte_port_kni_reader_create(void *params, int socket_id)
-{
- struct rte_port_kni_reader_params *conf =
- params;
- struct rte_port_kni_reader *port;
-
- /* Check input parameters */
- if (conf == NULL) {
- RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__);
- return NULL;
- }
-
- /* Memory allocation */
- port = rte_zmalloc_socket("PORT", sizeof(*port),
- RTE_CACHE_LINE_SIZE, socket_id);
- if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
- return NULL;
- }
-
- /* Initialization */
- port->kni = conf->kni;
-
- return port;
-}
-
-static int
-rte_port_kni_reader_rx(void *port, struct rte_mbuf **pkts, uint32_t n_pkts)
-{
- struct rte_port_kni_reader *p =
- port;
- uint16_t rx_pkt_cnt;
-
- rx_pkt_cnt = rte_kni_rx_burst(p->kni, pkts, n_pkts);
- RTE_PORT_KNI_READER_STATS_PKTS_IN_ADD(p, rx_pkt_cnt);
- return rx_pkt_cnt;
-}
-
-static int
-rte_port_kni_reader_free(void *port)
-{
- if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__);
- return -EINVAL;
- }
-
- rte_free(port);
-
- return 0;
-}
-
-static int rte_port_kni_reader_stats_read(void *port,
- struct rte_port_in_stats *stats, int clear)
-{
- struct rte_port_kni_reader *p =
- port;
-
- if (stats != NULL)
- memcpy(stats, &p->stats, sizeof(p->stats));
-
- if (clear)
- memset(&p->stats, 0, sizeof(p->stats));
-
- return 0;
-}
-
-/*
- * Port KNI Writer
- */
-#ifdef RTE_PORT_STATS_COLLECT
-
-#define RTE_PORT_KNI_WRITER_STATS_PKTS_IN_ADD(port, val) \
- port->stats.n_pkts_in += val
-#define RTE_PORT_KNI_WRITER_STATS_PKTS_DROP_ADD(port, val) \
- port->stats.n_pkts_drop += val
-
-#else
-
-#define RTE_PORT_KNI_WRITER_STATS_PKTS_IN_ADD(port, val)
-#define RTE_PORT_KNI_WRITER_STATS_PKTS_DROP_ADD(port, val)
-
-#endif
-
-struct rte_port_kni_writer {
- struct rte_port_out_stats stats;
-
- struct rte_mbuf *tx_buf[2 * RTE_PORT_IN_BURST_SIZE_MAX];
- uint32_t tx_burst_sz;
- uint32_t tx_buf_count;
- uint64_t bsz_mask;
- struct rte_kni *kni;
-};
-
-static void *
-rte_port_kni_writer_create(void *params, int socket_id)
-{
- struct rte_port_kni_writer_params *conf =
- params;
- struct rte_port_kni_writer *port;
-
- /* Check input parameters */
- if ((conf == NULL) ||
- (conf->tx_burst_sz == 0) ||
- (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) ||
- (!rte_is_power_of_2(conf->tx_burst_sz))) {
- RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__);
- return NULL;
- }
-
- /* Memory allocation */
- port = rte_zmalloc_socket("PORT", sizeof(*port),
- RTE_CACHE_LINE_SIZE, socket_id);
- if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
- return NULL;
- }
-
- /* Initialization */
- port->kni = conf->kni;
- port->tx_burst_sz = conf->tx_burst_sz;
- port->tx_buf_count = 0;
- port->bsz_mask = 1LLU << (conf->tx_burst_sz - 1);
-
- return port;
-}
-
-static inline void
-send_burst(struct rte_port_kni_writer *p)
-{
- uint32_t nb_tx;
-
- nb_tx = rte_kni_tx_burst(p->kni, p->tx_buf, p->tx_buf_count);
-
- RTE_PORT_KNI_WRITER_STATS_PKTS_DROP_ADD(p, p->tx_buf_count - nb_tx);
- for (; nb_tx < p->tx_buf_count; nb_tx++)
- rte_pktmbuf_free(p->tx_buf[nb_tx]);
-
- p->tx_buf_count = 0;
-}
-
-static int
-rte_port_kni_writer_tx(void *port, struct rte_mbuf *pkt)
-{
- struct rte_port_kni_writer *p =
- port;
-
- p->tx_buf[p->tx_buf_count++] = pkt;
- RTE_PORT_KNI_WRITER_STATS_PKTS_IN_ADD(p, 1);
- if (p->tx_buf_count >= p->tx_burst_sz)
- send_burst(p);
-
- return 0;
-}
-
-static int
-rte_port_kni_writer_tx_bulk(void *port,
- struct rte_mbuf **pkts,
- uint64_t pkts_mask)
-{
- struct rte_port_kni_writer *p =
- port;
- uint64_t bsz_mask = p->bsz_mask;
- uint32_t tx_buf_count = p->tx_buf_count;
- uint64_t expr = (pkts_mask & (pkts_mask + 1)) |
- ((pkts_mask & bsz_mask) ^ bsz_mask);
-
- if (expr == 0) {
- uint64_t n_pkts = __builtin_popcountll(pkts_mask);
- uint32_t n_pkts_ok;
-
- if (tx_buf_count)
- send_burst(p);
-
- RTE_PORT_KNI_WRITER_STATS_PKTS_IN_ADD(p, n_pkts);
- n_pkts_ok = rte_kni_tx_burst(p->kni, pkts, n_pkts);
-
- RTE_PORT_KNI_WRITER_STATS_PKTS_DROP_ADD(p, n_pkts - n_pkts_ok);
- for (; n_pkts_ok < n_pkts; n_pkts_ok++) {
- struct rte_mbuf *pkt = pkts[n_pkts_ok];
-
- rte_pktmbuf_free(pkt);
- }
- } else {
- for (; pkts_mask;) {
- uint32_t pkt_index = __builtin_ctzll(pkts_mask);
- uint64_t pkt_mask = 1LLU << pkt_index;
- struct rte_mbuf *pkt = pkts[pkt_index];
-
- p->tx_buf[tx_buf_count++] = pkt;
- RTE_PORT_KNI_WRITER_STATS_PKTS_IN_ADD(p, 1);
- pkts_mask &= ~pkt_mask;
- }
-
- p->tx_buf_count = tx_buf_count;
- if (tx_buf_count >= p->tx_burst_sz)
- send_burst(p);
- }
-
- return 0;
-}
-
-static int
-rte_port_kni_writer_flush(void *port)
-{
- struct rte_port_kni_writer *p =
- port;
-
- if (p->tx_buf_count > 0)
- send_burst(p);
-
- return 0;
-}
-
-static int
-rte_port_kni_writer_free(void *port)
-{
- if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__);
- return -EINVAL;
- }
-
- rte_port_kni_writer_flush(port);
- rte_free(port);
-
- return 0;
-}
-
-static int rte_port_kni_writer_stats_read(void *port,
- struct rte_port_out_stats *stats, int clear)
-{
- struct rte_port_kni_writer *p =
- port;
-
- if (stats != NULL)
- memcpy(stats, &p->stats, sizeof(p->stats));
-
- if (clear)
- memset(&p->stats, 0, sizeof(p->stats));
-
- return 0;
-}
-
-/*
- * Port KNI Writer Nodrop
- */
-#ifdef RTE_PORT_STATS_COLLECT
-
-#define RTE_PORT_KNI_WRITER_NODROP_STATS_PKTS_IN_ADD(port, val) \
- port->stats.n_pkts_in += val
-#define RTE_PORT_KNI_WRITER_NODROP_STATS_PKTS_DROP_ADD(port, val) \
- port->stats.n_pkts_drop += val
-
-#else
-
-#define RTE_PORT_KNI_WRITER_NODROP_STATS_PKTS_IN_ADD(port, val)
-#define RTE_PORT_KNI_WRITER_NODROP_STATS_PKTS_DROP_ADD(port, val)
-
-#endif
-
-struct rte_port_kni_writer_nodrop {
- struct rte_port_out_stats stats;
-
- struct rte_mbuf *tx_buf[2 * RTE_PORT_IN_BURST_SIZE_MAX];
- uint32_t tx_burst_sz;
- uint32_t tx_buf_count;
- uint64_t bsz_mask;
- uint64_t n_retries;
- struct rte_kni *kni;
-};
-
-static void *
-rte_port_kni_writer_nodrop_create(void *params, int socket_id)
-{
- struct rte_port_kni_writer_nodrop_params *conf =
- params;
- struct rte_port_kni_writer_nodrop *port;
-
- /* Check input parameters */
- if ((conf == NULL) ||
- (conf->tx_burst_sz == 0) ||
- (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) ||
- (!rte_is_power_of_2(conf->tx_burst_sz))) {
- RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__);
- return NULL;
- }
-
- /* Memory allocation */
- port = rte_zmalloc_socket("PORT", sizeof(*port),
- RTE_CACHE_LINE_SIZE, socket_id);
- if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
- return NULL;
- }
-
- /* Initialization */
- port->kni = conf->kni;
- port->tx_burst_sz = conf->tx_burst_sz;
- port->tx_buf_count = 0;
- port->bsz_mask = 1LLU << (conf->tx_burst_sz - 1);
-
- /*
- * When n_retries is 0 it means that we should wait for every packet to
- * send no matter how many retries should it take. To limit number of
- * branches in fast path, we use UINT64_MAX instead of branching.
- */
- port->n_retries = (conf->n_retries == 0) ? UINT64_MAX : conf->n_retries;
-
- return port;
-}
-
-static inline void
-send_burst_nodrop(struct rte_port_kni_writer_nodrop *p)
-{
- uint32_t nb_tx = 0, i;
-
- nb_tx = rte_kni_tx_burst(p->kni, p->tx_buf, p->tx_buf_count);
-
- /* We sent all the packets in a first try */
- if (nb_tx >= p->tx_buf_count) {
- p->tx_buf_count = 0;
- return;
- }
-
- for (i = 0; i < p->n_retries; i++) {
- nb_tx += rte_kni_tx_burst(p->kni,
- p->tx_buf + nb_tx,
- p->tx_buf_count - nb_tx);
-
- /* We sent all the packets in more than one try */
- if (nb_tx >= p->tx_buf_count) {
- p->tx_buf_count = 0;
- return;
- }
- }
-
- /* We didn't send the packets in maximum allowed attempts */
- RTE_PORT_KNI_WRITER_NODROP_STATS_PKTS_DROP_ADD(p, p->tx_buf_count - nb_tx);
- for ( ; nb_tx < p->tx_buf_count; nb_tx++)
- rte_pktmbuf_free(p->tx_buf[nb_tx]);
-
- p->tx_buf_count = 0;
-}
-
-static int
-rte_port_kni_writer_nodrop_tx(void *port, struct rte_mbuf *pkt)
-{
- struct rte_port_kni_writer_nodrop *p =
- port;
-
- p->tx_buf[p->tx_buf_count++] = pkt;
- RTE_PORT_KNI_WRITER_STATS_PKTS_IN_ADD(p, 1);
- if (p->tx_buf_count >= p->tx_burst_sz)
- send_burst_nodrop(p);
-
- return 0;
-}
-
-static int
-rte_port_kni_writer_nodrop_tx_bulk(void *port,
- struct rte_mbuf **pkts,
- uint64_t pkts_mask)
-{
- struct rte_port_kni_writer_nodrop *p =
- port;
-
- uint64_t bsz_mask = p->bsz_mask;
- uint32_t tx_buf_count = p->tx_buf_count;
- uint64_t expr = (pkts_mask & (pkts_mask + 1)) |
- ((pkts_mask & bsz_mask) ^ bsz_mask);
-
- if (expr == 0) {
- uint64_t n_pkts = __builtin_popcountll(pkts_mask);
- uint32_t n_pkts_ok;
-
- if (tx_buf_count)
- send_burst_nodrop(p);
-
- RTE_PORT_KNI_WRITER_NODROP_STATS_PKTS_IN_ADD(p, n_pkts);
- n_pkts_ok = rte_kni_tx_burst(p->kni, pkts, n_pkts);
-
- if (n_pkts_ok >= n_pkts)
- return 0;
-
- /*
- * If we didn't manage to send all packets in single burst, move
- * remaining packets to the buffer and call send burst.
- */
- for (; n_pkts_ok < n_pkts; n_pkts_ok++) {
- struct rte_mbuf *pkt = pkts[n_pkts_ok];
- p->tx_buf[p->tx_buf_count++] = pkt;
- }
- send_burst_nodrop(p);
- } else {
- for ( ; pkts_mask; ) {
- uint32_t pkt_index = __builtin_ctzll(pkts_mask);
- uint64_t pkt_mask = 1LLU << pkt_index;
- struct rte_mbuf *pkt = pkts[pkt_index];
-
- p->tx_buf[tx_buf_count++] = pkt;
- RTE_PORT_KNI_WRITER_NODROP_STATS_PKTS_IN_ADD(p, 1);
- pkts_mask &= ~pkt_mask;
- }
-
- p->tx_buf_count = tx_buf_count;
- if (tx_buf_count >= p->tx_burst_sz)
- send_burst_nodrop(p);
- }
-
- return 0;
-}
-
-static int
-rte_port_kni_writer_nodrop_flush(void *port)
-{
- struct rte_port_kni_writer_nodrop *p =
- port;
-
- if (p->tx_buf_count > 0)
- send_burst_nodrop(p);
-
- return 0;
-}
-
-static int
-rte_port_kni_writer_nodrop_free(void *port)
-{
- if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__);
- return -EINVAL;
- }
-
- rte_port_kni_writer_nodrop_flush(port);
- rte_free(port);
-
- return 0;
-}
-
-static int rte_port_kni_writer_nodrop_stats_read(void *port,
- struct rte_port_out_stats *stats, int clear)
-{
- struct rte_port_kni_writer_nodrop *p =
- port;
-
- if (stats != NULL)
- memcpy(stats, &p->stats, sizeof(p->stats));
-
- if (clear)
- memset(&p->stats, 0, sizeof(p->stats));
-
- return 0;
-}
-
-
-/*
- * Summary of port operations
- */
-struct rte_port_in_ops rte_port_kni_reader_ops = {
- .f_create = rte_port_kni_reader_create,
- .f_free = rte_port_kni_reader_free,
- .f_rx = rte_port_kni_reader_rx,
- .f_stats = rte_port_kni_reader_stats_read,
-};
-
-struct rte_port_out_ops rte_port_kni_writer_ops = {
- .f_create = rte_port_kni_writer_create,
- .f_free = rte_port_kni_writer_free,
- .f_tx = rte_port_kni_writer_tx,
- .f_tx_bulk = rte_port_kni_writer_tx_bulk,
- .f_flush = rte_port_kni_writer_flush,
- .f_stats = rte_port_kni_writer_stats_read,
-};
-
-struct rte_port_out_ops rte_port_kni_writer_nodrop_ops = {
- .f_create = rte_port_kni_writer_nodrop_create,
- .f_free = rte_port_kni_writer_nodrop_free,
- .f_tx = rte_port_kni_writer_nodrop_tx,
- .f_tx_bulk = rte_port_kni_writer_nodrop_tx_bulk,
- .f_flush = rte_port_kni_writer_nodrop_flush,
- .f_stats = rte_port_kni_writer_nodrop_stats_read,
-};
diff --git a/lib/port/rte_port_kni.h b/lib/port/rte_port_kni.h
deleted file mode 100644
index 280f58c121e2..000000000000
--- a/lib/port/rte_port_kni.h
+++ /dev/null
@@ -1,63 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016 Ethan Zhuang <zhuangwj@gmail.com>.
- * Copyright(c) 2016 Intel Corporation.
- */
-
-#ifndef __INCLUDE_RTE_PORT_KNI_H__
-#define __INCLUDE_RTE_PORT_KNI_H__
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-/**
- * @file
- * RTE Port KNI Interface
- *
- * kni_reader: input port built on top of pre-initialized KNI interface
- * kni_writer: output port built on top of pre-initialized KNI interface
- */
-
-#include <stdint.h>
-
-#include "rte_port.h"
-
-/** kni_reader port parameters */
-struct rte_port_kni_reader_params {
- /** KNI interface reference */
- struct rte_kni *kni;
-};
-
-/** kni_reader port operations */
-extern struct rte_port_in_ops rte_port_kni_reader_ops;
-
-
-/** kni_writer port parameters */
-struct rte_port_kni_writer_params {
- /** KNI interface reference */
- struct rte_kni *kni;
- /** Burst size to KNI interface. */
- uint32_t tx_burst_sz;
-};
-
-/** kni_writer port operations */
-extern struct rte_port_out_ops rte_port_kni_writer_ops;
-
-/** kni_writer_nodrop port parameters */
-struct rte_port_kni_writer_nodrop_params {
- /** KNI interface reference */
- struct rte_kni *kni;
- /** Burst size to KNI interface. */
- uint32_t tx_burst_sz;
- /** Maximum number of retries, 0 for no limit */
- uint32_t n_retries;
-};
-
-/** kni_writer_nodrop port operations */
-extern struct rte_port_out_ops rte_port_kni_writer_nodrop_ops;
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif
diff --git a/lib/port/version.map b/lib/port/version.map
index af6cf696fd54..d67a03650d8b 100644
--- a/lib/port/version.map
+++ b/lib/port/version.map
@@ -7,9 +7,6 @@ DPDK_23 {
rte_port_fd_reader_ops;
rte_port_fd_writer_nodrop_ops;
rte_port_fd_writer_ops;
- rte_port_kni_reader_ops;
- rte_port_kni_writer_nodrop_ops;
- rte_port_kni_writer_ops;
rte_port_ring_multi_reader_ops;
rte_port_ring_multi_writer_nodrop_ops;
rte_port_ring_multi_writer_ops;
diff --git a/meson_options.txt b/meson_options.txt
index 82c8297065f0..7b67e0203f8f 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -10,7 +10,7 @@ option('disable_apps', type: 'string', value: '', description:
'Comma-separated list of apps to explicitly disable.')
option('disable_drivers', type: 'string', value: '', description:
'Comma-separated list of drivers to explicitly disable.')
-option('disable_libs', type: 'string', value: 'flow_classify,kni', description:
+option('disable_libs', type: 'string', value: 'flow_classify', description:
'Comma-separated list of libraries to explicitly disable. [NOTE: not all libs can be disabled]')
option('drivers_install_subdir', type: 'string', value: 'dpdk/pmds-<VERSION>', description:
'Subdirectory of libdir where to install PMDs. Defaults to using a versioned subdirectory.')
--
2.39.2
* [PATCH] kni: remove deprecated kernel network interface
@ 2023-07-29 22:54 1% Stephen Hemminger
2023-07-30 2:12 1% ` [PATCH v2] " Stephen Hemminger
From: Stephen Hemminger @ 2023-07-29 22:54 UTC (permalink / raw)
To: dev
Cc: Stephen Hemminger, Thomas Monjalon, Maxime Coquelin, Chenbo Xia,
Anatoly Burakov, Cristian Dumitrescu, Nithin Dabilpuram,
Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Bruce Richardson
Deprecation and removal were announced in 22.11.
Make it so.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
MAINTAINERS | 10 -
app/test/meson.build | 2 -
app/test/test_kni.c | 740 ---------------
doc/api/doxy-api-index.md | 2 -
doc/api/doxy-api.conf.in | 1 -
doc/guides/contributing/documentation.rst | 2 +-
doc/guides/howto/flow_bifurcation.rst | 3 +-
doc/guides/nics/index.rst | 1 -
doc/guides/nics/kni.rst | 170 ----
doc/guides/nics/virtio.rst | 92 +-
.../prog_guide/env_abstraction_layer.rst | 2 -
doc/guides/prog_guide/glossary.rst | 3 -
doc/guides/prog_guide/index.rst | 1 -
.../prog_guide/kernel_nic_interface.rst | 423 ---------
doc/guides/prog_guide/packet_framework.rst | 9 +-
doc/guides/rel_notes/deprecation.rst | 9 +-
doc/guides/rel_notes/release_23_11.rst | 16 +
doc/guides/sample_app_ug/ip_pipeline.rst | 22 -
drivers/net/cnxk/cnxk_ethdev.c | 2 +-
drivers/net/kni/meson.build | 11 -
drivers/net/kni/rte_eth_kni.c | 524 -----------
drivers/net/meson.build | 1 -
examples/ip_pipeline/Makefile | 1 -
examples/ip_pipeline/cli.c | 95 --
examples/ip_pipeline/examples/kni.cli | 69 --
examples/ip_pipeline/kni.c | 168 ----
examples/ip_pipeline/kni.h | 46 -
examples/ip_pipeline/main.c | 10 -
examples/ip_pipeline/meson.build | 1 -
examples/ip_pipeline/pipeline.c | 57 --
examples/ip_pipeline/pipeline.h | 2 -
kernel/linux/kni/Kbuild | 6 -
kernel/linux/kni/compat.h | 157 ----
kernel/linux/kni/kni_dev.h | 137 ---
kernel/linux/kni/kni_fifo.h | 87 --
kernel/linux/kni/kni_misc.c | 719 --------------
kernel/linux/kni/kni_net.c | 878 ------------------
kernel/linux/kni/meson.build | 41 -
kernel/linux/meson.build | 103 --
lib/eal/common/eal_common_log.c | 1 -
lib/eal/include/rte_log.h | 2 +-
lib/eal/linux/eal.c | 19 -
lib/kni/meson.build | 21 -
lib/kni/rte_kni.c | 843 -----------------
lib/kni/rte_kni.h | 269 ------
lib/kni/rte_kni_common.h | 147 ---
lib/kni/rte_kni_fifo.h | 117 ---
lib/kni/version.map | 24 -
lib/meson.build | 3 -
lib/port/meson.build | 6 -
lib/port/rte_port_kni.c | 515 ----------
lib/port/rte_port_kni.h | 63 --
lib/port/version.map | 3 -
meson_options.txt | 2 +-
54 files changed, 26 insertions(+), 6632 deletions(-)
delete mode 100644 app/test/test_kni.c
delete mode 100644 doc/guides/nics/kni.rst
delete mode 100644 doc/guides/prog_guide/kernel_nic_interface.rst
create mode 100644 doc/guides/rel_notes/release_23_11.rst
delete mode 100644 drivers/net/kni/meson.build
delete mode 100644 drivers/net/kni/rte_eth_kni.c
delete mode 100644 examples/ip_pipeline/examples/kni.cli
delete mode 100644 examples/ip_pipeline/kni.c
delete mode 100644 examples/ip_pipeline/kni.h
delete mode 100644 kernel/linux/kni/Kbuild
delete mode 100644 kernel/linux/kni/compat.h
delete mode 100644 kernel/linux/kni/kni_dev.h
delete mode 100644 kernel/linux/kni/kni_fifo.h
delete mode 100644 kernel/linux/kni/kni_misc.c
delete mode 100644 kernel/linux/kni/kni_net.c
delete mode 100644 kernel/linux/kni/meson.build
delete mode 100644 kernel/linux/meson.build
delete mode 100644 lib/kni/meson.build
delete mode 100644 lib/kni/rte_kni.c
delete mode 100644 lib/kni/rte_kni.h
delete mode 100644 lib/kni/rte_kni_common.h
delete mode 100644 lib/kni/rte_kni_fifo.h
delete mode 100644 lib/kni/version.map
delete mode 100644 lib/port/rte_port_kni.c
delete mode 100644 lib/port/rte_port_kni.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 18bc05fccd0d..6ad45569bcd2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -617,12 +617,6 @@ F: doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
F: app/test/test_link_bonding*
F: examples/bond/
-Linux KNI
-F: kernel/linux/kni/
-F: lib/kni/
-F: doc/guides/prog_guide/kernel_nic_interface.rst
-F: app/test/test_kni.c
-
Linux AF_PACKET
M: John W. Linville <linville@tuxdriver.com>
F: drivers/net/af_packet/
@@ -1027,10 +1021,6 @@ F: drivers/net/tap/
F: doc/guides/nics/tap.rst
F: doc/guides/nics/features/tap.ini
-KNI PMD
-F: drivers/net/kni/
-F: doc/guides/nics/kni.rst
-
Ring PMD
M: Bruce Richardson <bruce.richardson@intel.com>
F: drivers/net/ring/
diff --git a/app/test/meson.build b/app/test/meson.build
index b89cf0368fb5..de895cc8fc52 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -72,7 +72,6 @@ test_sources = files(
'test_ipsec.c',
'test_ipsec_sad.c',
'test_ipsec_perf.c',
- 'test_kni.c',
'test_kvargs.c',
'test_lcores.c',
'test_logs.c',
@@ -237,7 +236,6 @@ fast_tests = [
['fbarray_autotest', true, true],
['hash_readwrite_func_autotest', false, true],
['ipsec_autotest', true, true],
- ['kni_autotest', false, true],
['kvargs_autotest', true, true],
['member_autotest', true, true],
['power_cpufreq_autotest', false, true],
diff --git a/app/test/test_kni.c b/app/test/test_kni.c
deleted file mode 100644
index 4039da0b080c..000000000000
--- a/app/test/test_kni.c
+++ /dev/null
@@ -1,740 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#include "test.h"
-
-#include <stdio.h>
-#include <stdint.h>
-#include <unistd.h>
-#include <string.h>
-#if !defined(RTE_EXEC_ENV_LINUX) || !defined(RTE_LIB_KNI)
-
-static int
-test_kni(void)
-{
- printf("KNI not supported, skipping test\n");
- return TEST_SKIPPED;
-}
-
-#else
-
-#include <sys/wait.h>
-#include <dirent.h>
-
-#include <rte_string_fns.h>
-#include <rte_mempool.h>
-#include <rte_ethdev.h>
-#include <rte_cycles.h>
-#include <rte_kni.h>
-
-#define NB_MBUF 8192
-#define MAX_PACKET_SZ 2048
-#define MBUF_DATA_SZ (MAX_PACKET_SZ + RTE_PKTMBUF_HEADROOM)
-#define PKT_BURST_SZ 32
-#define MEMPOOL_CACHE_SZ PKT_BURST_SZ
-#define SOCKET 0
-#define NB_RXD 1024
-#define NB_TXD 1024
-#define KNI_TIMEOUT_MS 5000 /* ms */
-
-#define IFCONFIG "/sbin/ifconfig "
-#define TEST_KNI_PORT "test_kni_port"
-#define KNI_MODULE_PATH "/sys/module/rte_kni"
-#define KNI_MODULE_PARAM_LO KNI_MODULE_PATH"/parameters/lo_mode"
-#define KNI_TEST_MAX_PORTS 4
-/* The threshold number of mbufs to be transmitted or received. */
-#define KNI_NUM_MBUF_THRESHOLD 100
-static int kni_pkt_mtu = 0;
-
-struct test_kni_stats {
- volatile uint64_t ingress;
- volatile uint64_t egress;
-};
-
-static const struct rte_eth_rxconf rx_conf = {
- .rx_thresh = {
- .pthresh = 8,
- .hthresh = 8,
- .wthresh = 4,
- },
- .rx_free_thresh = 0,
-};
-
-static const struct rte_eth_txconf tx_conf = {
- .tx_thresh = {
- .pthresh = 36,
- .hthresh = 0,
- .wthresh = 0,
- },
- .tx_free_thresh = 0,
- .tx_rs_thresh = 0,
-};
-
-static const struct rte_eth_conf port_conf = {
- .txmode = {
- .mq_mode = RTE_ETH_MQ_TX_NONE,
- },
-};
-
-static struct rte_kni_ops kni_ops = {
- .change_mtu = NULL,
- .config_network_if = NULL,
- .config_mac_address = NULL,
- .config_promiscusity = NULL,
-};
-
-static unsigned int lcore_main, lcore_ingress, lcore_egress;
-static struct rte_kni *test_kni_ctx;
-static struct test_kni_stats stats;
-
-static volatile uint32_t test_kni_processing_flag;
-
-static struct rte_mempool *
-test_kni_create_mempool(void)
-{
- struct rte_mempool * mp;
-
- mp = rte_mempool_lookup("kni_mempool");
- if (!mp)
- mp = rte_pktmbuf_pool_create("kni_mempool",
- NB_MBUF,
- MEMPOOL_CACHE_SZ, 0, MBUF_DATA_SZ,
- SOCKET);
-
- return mp;
-}
-
-static struct rte_mempool *
-test_kni_lookup_mempool(void)
-{
- return rte_mempool_lookup("kni_mempool");
-}
-/* Callback for request of changing MTU */
-static int
-kni_change_mtu(uint16_t port_id, unsigned int new_mtu)
-{
- printf("Change MTU of port %d to %u\n", port_id, new_mtu);
- kni_pkt_mtu = new_mtu;
- printf("Change MTU of port %d to %i successfully.\n",
- port_id, kni_pkt_mtu);
- return 0;
-}
-
-static int
-test_kni_link_change(void)
-{
- int ret;
- int pid;
-
- pid = fork();
- if (pid < 0) {
- printf("Error: Failed to fork a process\n");
- return -1;
- }
-
- if (pid == 0) {
- printf("Starting KNI Link status change tests.\n");
- if (system(IFCONFIG TEST_KNI_PORT" up") == -1) {
- ret = -1;
- goto error;
- }
-
- ret = rte_kni_update_link(test_kni_ctx, 1);
- if (ret < 0) {
- printf("Failed to change link state to Up ret=%d.\n",
- ret);
- goto error;
- }
- rte_delay_ms(1000);
- printf("KNI: Set LINKUP, previous state=%d\n", ret);
-
- ret = rte_kni_update_link(test_kni_ctx, 0);
- if (ret != 1) {
- printf(
- "Failed! Previous link state should be 1, returned %d.\n",
- ret);
- goto error;
- }
- rte_delay_ms(1000);
- printf("KNI: Set LINKDOWN, previous state=%d\n", ret);
-
- ret = rte_kni_update_link(test_kni_ctx, 1);
- if (ret != 0) {
- printf(
- "Failed! Previous link state should be 0, returned %d.\n",
- ret);
- goto error;
- }
- printf("KNI: Set LINKUP, previous state=%d\n", ret);
-
- ret = 0;
- rte_delay_ms(1000);
-
-error:
- if (system(IFCONFIG TEST_KNI_PORT" down") == -1)
- ret = -1;
-
- printf("KNI: Link status change tests: %s.\n",
- (ret == 0) ? "Passed" : "Failed");
- exit(ret);
- } else {
- int p_ret, status;
-
- while (1) {
- p_ret = waitpid(pid, &status, WNOHANG);
- if (p_ret != 0) {
- if (WIFEXITED(status))
- return WEXITSTATUS(status);
- return -1;
- }
- rte_delay_ms(10);
- rte_kni_handle_request(test_kni_ctx);
- }
- }
-}
-/**
- * This loop fully tests the basic functions of KNI. e.g. transmitting,
- * receiving to, from kernel space, and kernel requests.
- *
- * This is the loop to transmit/receive mbufs to/from kernel interface with
- * supported by KNI kernel module. The ingress lcore will allocate mbufs and
- * transmit them to kernel space; while the egress lcore will receive the mbufs
- * from kernel space and free them.
- * On the main lcore, several commands will be run to check handling the
- * kernel requests. And it will finally set the flag to exit the KNI
- * transmitting/receiving to/from the kernel space.
- *
- * Note: To support this testing, the KNI kernel module needs to be insmodded
- * in one of its loopback modes.
- */
-static int
-test_kni_loop(__rte_unused void *arg)
-{
- int ret = 0;
- unsigned nb_rx, nb_tx, num, i;
- const unsigned lcore_id = rte_lcore_id();
- struct rte_mbuf *pkts_burst[PKT_BURST_SZ];
-
- if (lcore_id == lcore_main) {
- rte_delay_ms(KNI_TIMEOUT_MS);
- /* tests of handling kernel request */
- if (system(IFCONFIG TEST_KNI_PORT" up") == -1)
- ret = -1;
- if (system(IFCONFIG TEST_KNI_PORT" mtu 1400") == -1)
- ret = -1;
- if (system(IFCONFIG TEST_KNI_PORT" down") == -1)
- ret = -1;
- rte_delay_ms(KNI_TIMEOUT_MS);
- test_kni_processing_flag = 1;
- } else if (lcore_id == lcore_ingress) {
- struct rte_mempool *mp = test_kni_lookup_mempool();
-
- if (mp == NULL)
- return -1;
-
- while (1) {
- if (test_kni_processing_flag)
- break;
-
- for (nb_rx = 0; nb_rx < PKT_BURST_SZ; nb_rx++) {
- pkts_burst[nb_rx] = rte_pktmbuf_alloc(mp);
- if (!pkts_burst[nb_rx])
- break;
- }
-
- num = rte_kni_tx_burst(test_kni_ctx, pkts_burst,
- nb_rx);
- stats.ingress += num;
- rte_kni_handle_request(test_kni_ctx);
- if (num < nb_rx) {
- for (i = num; i < nb_rx; i++) {
- rte_pktmbuf_free(pkts_burst[i]);
- }
- }
- rte_delay_ms(10);
- }
- } else if (lcore_id == lcore_egress) {
- while (1) {
- if (test_kni_processing_flag)
- break;
- num = rte_kni_rx_burst(test_kni_ctx, pkts_burst,
- PKT_BURST_SZ);
- stats.egress += num;
- for (nb_tx = 0; nb_tx < num; nb_tx++)
- rte_pktmbuf_free(pkts_burst[nb_tx]);
- rte_delay_ms(10);
- }
- }
-
- return ret;
-}
-
-static int
-test_kni_allocate_lcores(void)
-{
- unsigned i, count = 0;
-
- lcore_main = rte_get_main_lcore();
- printf("main lcore: %u\n", lcore_main);
- for (i = 0; i < RTE_MAX_LCORE; i++) {
- if (count >=2 )
- break;
- if (rte_lcore_is_enabled(i) && i != lcore_main) {
- count ++;
- if (count == 1)
- lcore_ingress = i;
- else if (count == 2)
- lcore_egress = i;
- }
- }
- printf("count: %u\n", count);
-
- return count == 2 ? 0 : -1;
-}
-
-static int
-test_kni_register_handler_mp(void)
-{
-#define TEST_KNI_HANDLE_REQ_COUNT 10 /* 5s */
-#define TEST_KNI_HANDLE_REQ_INTERVAL 500 /* ms */
-#define TEST_KNI_MTU 1450
-#define TEST_KNI_MTU_STR " 1450"
- int pid;
-
- pid = fork();
- if (pid < 0) {
- printf("Failed to fork a process\n");
- return -1;
- } else if (pid == 0) {
- int i;
- struct rte_kni *kni = rte_kni_get(TEST_KNI_PORT);
- struct rte_kni_ops ops = {
- .change_mtu = kni_change_mtu,
- .config_network_if = NULL,
- .config_mac_address = NULL,
- .config_promiscusity = NULL,
- };
-
- if (!kni) {
- printf("Failed to get KNI named %s\n", TEST_KNI_PORT);
- exit(-1);
- }
-
- kni_pkt_mtu = 0;
-
- /* Check with the invalid parameters */
- if (rte_kni_register_handlers(kni, NULL) == 0) {
- printf("Unexpectedly register successfully "
- "with NULL ops pointer\n");
- exit(-1);
- }
- if (rte_kni_register_handlers(NULL, &ops) == 0) {
- printf("Unexpectedly register successfully "
- "to NULL KNI device pointer\n");
- exit(-1);
- }
-
- if (rte_kni_register_handlers(kni, &ops)) {
- printf("Fail to register ops\n");
- exit(-1);
- }
-
- /* Check registering again after it has been registered */
- if (rte_kni_register_handlers(kni, &ops) == 0) {
- printf("Unexpectedly register successfully after "
- "it has already been registered\n");
- exit(-1);
- }
-
- /**
- * Handle the request of setting MTU,
- * with registered handlers.
- */
- for (i = 0; i < TEST_KNI_HANDLE_REQ_COUNT; i++) {
- rte_kni_handle_request(kni);
- if (kni_pkt_mtu == TEST_KNI_MTU)
- break;
- rte_delay_ms(TEST_KNI_HANDLE_REQ_INTERVAL);
- }
- if (i >= TEST_KNI_HANDLE_REQ_COUNT) {
- printf("MTU has not been set\n");
- exit(-1);
- }
-
- kni_pkt_mtu = 0;
- if (rte_kni_unregister_handlers(kni) < 0) {
- printf("Fail to unregister ops\n");
- exit(-1);
- }
-
- /* Check with invalid parameter */
- if (rte_kni_unregister_handlers(NULL) == 0) {
- exit(-1);
- }
-
- /**
- * Handle the request of setting MTU,
- * without registered handlers.
- */
- for (i = 0; i < TEST_KNI_HANDLE_REQ_COUNT; i++) {
- rte_kni_handle_request(kni);
- if (kni_pkt_mtu != 0)
- break;
- rte_delay_ms(TEST_KNI_HANDLE_REQ_INTERVAL);
- }
- if (kni_pkt_mtu != 0) {
- printf("MTU shouldn't be set\n");
- exit(-1);
- }
-
- exit(0);
- } else {
- int p_ret, status;
-
- rte_delay_ms(1000);
- if (system(IFCONFIG TEST_KNI_PORT " mtu" TEST_KNI_MTU_STR)
- == -1)
- return -1;
-
- rte_delay_ms(1000);
- if (system(IFCONFIG TEST_KNI_PORT " mtu" TEST_KNI_MTU_STR)
- == -1)
- return -1;
-
- p_ret = wait(&status);
- if (!WIFEXITED(status)) {
- printf("Child process (%d) exit abnormally\n", p_ret);
- return -1;
- }
- if (WEXITSTATUS(status) != 0) {
- printf("Child process exit with failure\n");
- return -1;
- }
- }
-
- return 0;
-}
-
-static int
-test_kni_processing(uint16_t port_id, struct rte_mempool *mp)
-{
- int ret = 0;
- unsigned i;
- struct rte_kni *kni;
- struct rte_kni_conf conf;
- struct rte_eth_dev_info info;
- struct rte_kni_ops ops;
-
- if (!mp)
- return -1;
-
- memset(&conf, 0, sizeof(conf));
- memset(&info, 0, sizeof(info));
- memset(&ops, 0, sizeof(ops));
-
- ret = rte_eth_dev_info_get(port_id, &info);
- if (ret != 0) {
- printf("Error during getting device (port %u) info: %s\n",
- port_id, strerror(-ret));
- return -1;
- }
-
- snprintf(conf.name, sizeof(conf.name), TEST_KNI_PORT);
-
- /* core id 1 configured for kernel thread */
- conf.core_id = 1;
- conf.force_bind = 1;
- conf.mbuf_size = MAX_PACKET_SZ;
- conf.group_id = port_id;
-
- ops = kni_ops;
- ops.port_id = port_id;
-
- /* basic test of kni processing */
- kni = rte_kni_alloc(mp, &conf, &ops);
- if (!kni) {
- printf("fail to create kni\n");
- return -1;
- }
-
- test_kni_ctx = kni;
- test_kni_processing_flag = 0;
- stats.ingress = 0;
- stats.egress = 0;
-
- /**
- * Check multiple processes support on
- * registering/unregistering handlers.
- */
- if (test_kni_register_handler_mp() < 0) {
- printf("fail to check multiple process support\n");
- ret = -1;
- goto fail_kni;
- }
-
- ret = test_kni_link_change();
- if (ret != 0)
- goto fail_kni;
-
- rte_eal_mp_remote_launch(test_kni_loop, NULL, CALL_MAIN);
- RTE_LCORE_FOREACH_WORKER(i) {
- if (rte_eal_wait_lcore(i) < 0) {
- ret = -1;
- goto fail_kni;
- }
- }
- /**
- * Check if the number of mbufs received from kernel space is equal
- * to that of transmitted to kernel space
- */
- if (stats.ingress < KNI_NUM_MBUF_THRESHOLD ||
- stats.egress < KNI_NUM_MBUF_THRESHOLD) {
- printf("The ingress/egress number should not be "
- "less than %u\n", (unsigned)KNI_NUM_MBUF_THRESHOLD);
- ret = -1;
- goto fail_kni;
- }
-
- if (rte_kni_release(kni) < 0) {
- printf("fail to release kni\n");
- return -1;
- }
- test_kni_ctx = NULL;
-
- /* test of reusing memzone */
- kni = rte_kni_alloc(mp, &conf, &ops);
- if (!kni) {
- printf("fail to create kni\n");
- return -1;
- }
-
- /* Release the kni for following testing */
- if (rte_kni_release(kni) < 0) {
- printf("fail to release kni\n");
- return -1;
- }
-
- return ret;
-fail_kni:
- if (rte_kni_release(kni) < 0) {
- printf("fail to release kni\n");
- ret = -1;
- }
-
- return ret;
-}
-
-static int
-test_kni(void)
-{
- int ret = -1;
- uint16_t port_id;
- struct rte_kni *kni;
- struct rte_mempool *mp;
- struct rte_kni_conf conf;
- struct rte_eth_dev_info info;
- struct rte_kni_ops ops;
- FILE *fd;
- DIR *dir;
- char buf[16];
-
- dir = opendir(KNI_MODULE_PATH);
- if (!dir) {
- if (errno == ENOENT) {
- printf("Cannot run UT due to missing rte_kni module\n");
- return TEST_SKIPPED;
- }
- printf("opendir: %s", strerror(errno));
- return -1;
- }
- closedir(dir);
-
- /* Initialize KNI subsystem */
- ret = rte_kni_init(KNI_TEST_MAX_PORTS);
- if (ret < 0) {
- printf("fail to initialize KNI subsystem\n");
- return -1;
- }
-
- if (test_kni_allocate_lcores() < 0) {
- printf("No enough lcores for kni processing\n");
- return -1;
- }
-
- mp = test_kni_create_mempool();
- if (!mp) {
- printf("fail to create mempool for kni\n");
- return -1;
- }
-
- /* configuring port 0 for the test is enough */
- port_id = 0;
- ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
- if (ret < 0) {
- printf("fail to configure port %d\n", port_id);
- return -1;
- }
-
- ret = rte_eth_rx_queue_setup(port_id, 0, NB_RXD, SOCKET, &rx_conf, mp);
- if (ret < 0) {
- printf("fail to setup rx queue for port %d\n", port_id);
- return -1;
- }
-
- ret = rte_eth_tx_queue_setup(port_id, 0, NB_TXD, SOCKET, &tx_conf);
- if (ret < 0) {
- printf("fail to setup tx queue for port %d\n", port_id);
- return -1;
- }
-
- ret = rte_eth_dev_start(port_id);
- if (ret < 0) {
- printf("fail to start port %d\n", port_id);
- return -1;
- }
- ret = rte_eth_promiscuous_enable(port_id);
- if (ret != 0) {
- printf("fail to enable promiscuous mode for port %d: %s\n",
- port_id, rte_strerror(-ret));
- return -1;
- }
-
- /* basic test of kni processing */
- fd = fopen(KNI_MODULE_PARAM_LO, "r");
- if (fd == NULL) {
- printf("fopen: %s", strerror(errno));
- return -1;
- }
- memset(&buf, 0, sizeof(buf));
- if (fgets(buf, sizeof(buf), fd)) {
- if (!strncmp(buf, "lo_mode_fifo", strlen("lo_mode_fifo")) ||
- !strncmp(buf, "lo_mode_fifo_skb",
- strlen("lo_mode_fifo_skb"))) {
- ret = test_kni_processing(port_id, mp);
- if (ret < 0) {
- fclose(fd);
- goto fail;
- }
- } else
- printf("test_kni_processing skipped because of missing rte_kni module lo_mode argument\n");
- }
- fclose(fd);
-
- /* test of allocating KNI with NULL mempool pointer */
- memset(&info, 0, sizeof(info));
- memset(&conf, 0, sizeof(conf));
- memset(&ops, 0, sizeof(ops));
-
- ret = rte_eth_dev_info_get(port_id, &info);
- if (ret != 0) {
- printf("Error during getting device (port %u) info: %s\n",
- port_id, strerror(-ret));
- return -1;
- }
-
- conf.group_id = port_id;
- conf.mbuf_size = MAX_PACKET_SZ;
-
- ops = kni_ops;
- ops.port_id = port_id;
- kni = rte_kni_alloc(NULL, &conf, &ops);
- if (kni) {
- ret = -1;
- printf("unexpectedly creates kni successfully with NULL "
- "mempool pointer\n");
- goto fail;
- }
-
- /* test of allocating KNI without configurations */
- kni = rte_kni_alloc(mp, NULL, NULL);
- if (kni) {
- ret = -1;
- printf("Unexpectedly allocate KNI device successfully "
- "without configurations\n");
- goto fail;
- }
-
- /* test of allocating KNI without a name */
- memset(&conf, 0, sizeof(conf));
- memset(&info, 0, sizeof(info));
- memset(&ops, 0, sizeof(ops));
-
- ret = rte_eth_dev_info_get(port_id, &info);
- if (ret != 0) {
- printf("Error during getting device (port %u) info: %s\n",
- port_id, strerror(-ret));
- ret = -1;
- goto fail;
- }
-
- conf.group_id = port_id;
- conf.mbuf_size = MAX_PACKET_SZ;
-
- ops = kni_ops;
- ops.port_id = port_id;
- kni = rte_kni_alloc(mp, &conf, &ops);
- if (kni) {
- ret = -1;
- printf("Unexpectedly allocate a KNI device successfully "
- "without a name\n");
- goto fail;
- }
-
- /* test of releasing NULL kni context */
- ret = rte_kni_release(NULL);
- if (ret == 0) {
- ret = -1;
- printf("unexpectedly release kni successfully\n");
- goto fail;
- }
-
- /* test of handling request on NULL device pointer */
- ret = rte_kni_handle_request(NULL);
- if (ret == 0) {
- ret = -1;
- printf("Unexpectedly handle request on NULL device pointer\n");
- goto fail;
- }
-
- /* test of getting KNI device with pointer to NULL */
- kni = rte_kni_get(NULL);
- if (kni) {
- ret = -1;
- printf("Unexpectedly get a KNI device with "
- "NULL name pointer\n");
- goto fail;
- }
-
- /* test of getting KNI device with an zero length name string */
- memset(&conf, 0, sizeof(conf));
- kni = rte_kni_get(conf.name);
- if (kni) {
- ret = -1;
- printf("Unexpectedly get a KNI device with "
- "zero length name string\n");
- goto fail;
- }
-
- /* test of getting KNI device with an invalid string name */
- memset(&conf, 0, sizeof(conf));
- snprintf(conf.name, sizeof(conf.name), "testing");
- kni = rte_kni_get(conf.name);
- if (kni) {
- ret = -1;
- printf("Unexpectedly get a KNI device with "
- "a never used name string\n");
- goto fail;
- }
- ret = 0;
-
-fail:
- if (rte_eth_dev_stop(port_id) != 0)
- printf("Failed to stop port %u\n", port_id);
-
- return ret;
-}
-
-#endif
-
-REGISTER_TEST_COMMAND(kni_autotest, test_kni);
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 3bc8778981f6..7bba67d58586 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -43,7 +43,6 @@ The public API headers are grouped by topics:
[bond](@ref rte_eth_bond.h),
[vhost](@ref rte_vhost.h),
[vdpa](@ref rte_vdpa.h),
- [KNI](@ref rte_kni.h),
[ixgbe](@ref rte_pmd_ixgbe.h),
[i40e](@ref rte_pmd_i40e.h),
[iavf](@ref rte_pmd_iavf.h),
@@ -178,7 +177,6 @@ The public API headers are grouped by topics:
[frag](@ref rte_port_frag.h),
[reass](@ref rte_port_ras.h),
[sched](@ref rte_port_sched.h),
- [kni](@ref rte_port_kni.h),
[src/sink](@ref rte_port_source_sink.h)
* [table](@ref rte_table.h):
[lpm IPv4](@ref rte_table_lpm.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 1a4210b948a8..90dcf232dffd 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -49,7 +49,6 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/ip_frag \
@TOPDIR@/lib/ipsec \
@TOPDIR@/lib/jobstats \
- @TOPDIR@/lib/kni \
@TOPDIR@/lib/kvargs \
@TOPDIR@/lib/latencystats \
@TOPDIR@/lib/lpm \
diff --git a/doc/guides/contributing/documentation.rst b/doc/guides/contributing/documentation.rst
index 7fcbb7fc43b2..f16c94e9768b 100644
--- a/doc/guides/contributing/documentation.rst
+++ b/doc/guides/contributing/documentation.rst
@@ -95,7 +95,7 @@ added to by the developer.
* **The Programmers Guide**
The Programmers Guide explains how the API components of DPDK such as the EAL, Memzone, Rings and the Hash Library work.
- It also explains how some higher level functionality such as Packet Distributor, Packet Framework and KNI work.
+ It also explains how some higher level functionality such as Packet Distributor and Packet Framework work.
It also shows the build system and explains how to add applications.
The Programmers Guide should be expanded when new functionality is added to DPDK.
diff --git a/doc/guides/howto/flow_bifurcation.rst b/doc/guides/howto/flow_bifurcation.rst
index 838eb2a4cc89..554dd24c32c5 100644
--- a/doc/guides/howto/flow_bifurcation.rst
+++ b/doc/guides/howto/flow_bifurcation.rst
@@ -7,8 +7,7 @@ Flow Bifurcation How-to Guide
Flow Bifurcation is a mechanism which uses hardware capable Ethernet devices
to split traffic between Linux user space and kernel space. Since it is a
hardware assisted feature this approach can provide line rate processing
-capability. Other than :ref:`KNI <kni>`, the software is just required to
-enable device configuration, there is no need to take care of the packet
+capability. There is no need to take care of the packet
movement during the traffic split. This can yield better performance with
less CPU overhead.
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 31296822e5ec..7bfcac880f44 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -43,7 +43,6 @@ Network Interface Controller Drivers
ionic
ipn3ke
ixgbe
- kni
mana
memif
mlx4
diff --git a/doc/guides/nics/kni.rst b/doc/guides/nics/kni.rst
deleted file mode 100644
index bd3033bb585c..000000000000
--- a/doc/guides/nics/kni.rst
+++ /dev/null
@@ -1,170 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2017 Intel Corporation.
-
-KNI Poll Mode Driver
-======================
-
-KNI PMD is wrapper to the :ref:`librte_kni <kni>` library.
-
-This PMD enables using KNI without having a KNI specific application,
-any forwarding application can use PMD interface for KNI.
-
-Sending packets to any DPDK controlled interface or sending to the
-Linux networking stack will be transparent to the DPDK application.
-
-To create a KNI device ``net_kni#`` device name should be used, and this
-will create ``kni#`` Linux virtual network interface.
-
-There is no physical device backend for the virtual KNI device.
-
-Packets sent to the KNI Linux interface will be received by the DPDK
-application, and DPDK application may forward packets to a physical NIC
-or to a virtual device (like another KNI interface or PCAP interface).
-
-To forward any traffic from physical NIC to the Linux networking stack,
-an application should control a physical port and create one virtual KNI port,
-and forward between two.
-
-Using this PMD requires KNI kernel module be inserted.
-
-
-Usage
------
-
-EAL ``--vdev`` argument can be used to create KNI device instance, like::
-
- dpdk-testpmd --vdev=net_kni0 --vdev=net_kni1 -- -i
-
-Above command will create ``kni0`` and ``kni1`` Linux network interfaces,
-those interfaces can be controlled by standard Linux tools.
-
-When testpmd forwarding starts, any packets sent to ``kni0`` interface
-forwarded to the ``kni1`` interface and vice versa.
-
-There is no hard limit on number of interfaces that can be created.
-
-
-Default interface configuration
--------------------------------
-
-``librte_kni`` can create Linux network interfaces with different features,
-feature set controlled by a configuration struct, and KNI PMD uses a fixed
-configuration:
-
- .. code-block:: console
-
- Interface name: kni#
- force bind kernel thread to a core : NO
- mbuf size: (rte_pktmbuf_data_room_size(pktmbuf_pool) - RTE_PKTMBUF_HEADROOM)
- mtu: (conf.mbuf_size - RTE_ETHER_HDR_LEN)
-
-KNI control path is not supported with the PMD, since there is no physical
-backend device by default.
-
-
-Runtime Configuration
----------------------
-
-``no_request_thread``, by default PMD creates a pthread for each KNI interface
-to handle Linux network interface control commands, like ``ifconfig kni0 up``
-
-With ``no_request_thread`` option, pthread is not created and control commands
-not handled by PMD.
-
-By default request thread is enabled. And this argument should not be used
-most of the time, unless this PMD used with customized DPDK application to handle
-requests itself.
-
-Argument usage::
-
- dpdk-testpmd --vdev "net_kni0,no_request_thread=1" -- -i
-
-
-PMD log messages
-----------------
-
-If KNI kernel module (rte_kni.ko) not inserted, following error log printed::
-
- "KNI: KNI subsystem has not been initialized. Invoke rte_kni_init() first"
-
-
-PMD testing
------------
-
-It is possible to test PMD quickly using KNI kernel module loopback feature:
-
-* Insert KNI kernel module with loopback support:
-
- .. code-block:: console
-
- insmod <build_dir>/kernel/linux/kni/rte_kni.ko lo_mode=lo_mode_fifo_skb
-
-* Start testpmd with no physical device but two KNI virtual devices:
-
- .. code-block:: console
-
- ./dpdk-testpmd --vdev net_kni0 --vdev net_kni1 -- -i
-
- .. code-block:: console
-
- ...
- Configuring Port 0 (socket 0)
- KNI: pci: 00:00:00 c580:b8
- Port 0: 1A:4A:5B:7C:A2:8C
- Configuring Port 1 (socket 0)
- KNI: pci: 00:00:00 600:b9
- Port 1: AE:95:21:07:93:DD
- Checking link statuses...
- Port 0 Link Up - speed 10000 Mbps - full-duplex
- Port 1 Link Up - speed 10000 Mbps - full-duplex
- Done
- testpmd>
-
-* Observe Linux interfaces
-
- .. code-block:: console
-
- $ ifconfig kni0 && ifconfig kni1
- kni0: flags=4098<BROADCAST,MULTICAST> mtu 1500
- ether ae:8e:79:8e:9b:c8 txqueuelen 1000 (Ethernet)
- RX packets 0 bytes 0 (0.0 B)
- RX errors 0 dropped 0 overruns 0 frame 0
- TX packets 0 bytes 0 (0.0 B)
- TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
-
- kni1: flags=4098<BROADCAST,MULTICAST> mtu 1500
- ether 9e:76:43:53:3e:9b txqueuelen 1000 (Ethernet)
- RX packets 0 bytes 0 (0.0 B)
- RX errors 0 dropped 0 overruns 0 frame 0
- TX packets 0 bytes 0 (0.0 B)
- TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
-
-
-* Start forwarding with tx_first:
-
- .. code-block:: console
-
- testpmd> start tx_first
-
-* Quit and check forwarding stats:
-
- .. code-block:: console
-
- testpmd> quit
- Telling cores to stop...
- Waiting for lcores to finish...
-
- ---------------------- Forward statistics for port 0 ----------------------
- RX-packets: 35637905 RX-dropped: 0 RX-total: 35637905
- TX-packets: 35637947 TX-dropped: 0 TX-total: 35637947
- ----------------------------------------------------------------------------
-
- ---------------------- Forward statistics for port 1 ----------------------
- RX-packets: 35637915 RX-dropped: 0 RX-total: 35637915
- TX-packets: 35637937 TX-dropped: 0 TX-total: 35637937
- ----------------------------------------------------------------------------
-
- +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
- RX-packets: 71275820 RX-dropped: 0 RX-total: 71275820
- TX-packets: 71275884 TX-dropped: 0 TX-total: 71275884
- ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
diff --git a/doc/guides/nics/virtio.rst b/doc/guides/nics/virtio.rst
index f5e54a5e9cfd..ba6247170dbb 100644
--- a/doc/guides/nics/virtio.rst
+++ b/doc/guides/nics/virtio.rst
@@ -10,15 +10,12 @@ we provide a virtio Poll Mode Driver (PMD) as a software solution, comparing to
for fast guest VM to guest VM communication and guest VM to host communication.
Vhost is a kernel acceleration module for virtio qemu backend.
-The DPDK extends kni to support vhost raw socket interface,
-which enables vhost to directly read/ write packets from/to a physical port.
-With this enhancement, virtio could achieve quite promising performance.
For basic qemu-KVM installation and other Intel EM poll mode driver in guest VM,
please refer to Chapter "Driver for VM Emulated Devices".
-In this chapter, we will demonstrate usage of virtio PMD with two backends,
-standard qemu vhost back end and vhost kni back end.
+In this chapter, we will demonstrate usage of virtio PMD with the
+standard qemu vhost back end.
Virtio Implementation in DPDK
-----------------------------
@@ -89,93 +86,6 @@ The following prerequisites apply:
* When using legacy interface, ``SYS_RAWIO`` capability is required
for ``iopl()`` call to enable access to PCI I/O ports.
-Virtio with kni vhost Back End
-------------------------------
-
-This section demonstrates kni vhost back end example setup for Phy-VM Communication.
-
-.. _figure_host_vm_comms:
-
-.. figure:: img/host_vm_comms.*
-
- Host2VM Communication Example Using kni vhost Back End
-
-
-Host2VM communication example
-
-#. Load the kni kernel module:
-
- .. code-block:: console
-
- insmod rte_kni.ko
-
- Other basic DPDK preparations like hugepage enabling,
- UIO port binding are not listed here.
- Please refer to the *DPDK Getting Started Guide* for detailed instructions.
-
-#. Launch the kni user application:
-
- .. code-block:: console
-
- <build_dir>/examples/dpdk-kni -l 0-3 -n 4 -- -p 0x1 -P --config="(0,1,3)"
-
- This command generates one network device, vEth0, for the physical port.
- If more physical ports are specified, the generated network devices will be vEth1, vEth2, and so on.
-
- For each physical port, kni creates two user threads.
- One thread loops to fetch packets from the physical NIC port into the kni receive queue.
- The other user thread loops to send packets in the kni transmit queue.
-
- For each physical port, kni also creates a kernel thread that retrieves packets from the kni receive queue,
- places them onto kni's raw socket queue and wakes up the vhost kernel thread to exchange packets with the virtio virt queue.
-
- For more details about kni, please refer to :ref:`kni`.
-
-#. Enable the kni raw socket functionality for the specified physical NIC port,
- get the generated file descriptor and set it in the qemu command line parameter.
- Always remember to set ioeventfd=on and vhost=on.
-
- Example:
-
- .. code-block:: console
-
- echo 1 > /sys/class/net/vEth0/sock_en
- fd=`cat /sys/class/net/vEth0/sock_fd`
- exec qemu-system-x86_64 -enable-kvm -cpu host \
- -m 2048 -smp 4 -name dpdk-test1-vm1 \
- -drive file=/data/DPDKVMS/dpdk-vm.img \
- -netdev tap,fd=$fd,id=mynet_kni,script=no,vhost=on \
- -device virtio-net-pci,netdev=mynet_kni,bus=pci.0,addr=0x3,ioeventfd=on \
- -vnc :1 -daemonize
-
- In the above example, virtio port 0 in the guest VM will be associated with vEth0, which in turn corresponds to a physical port;
- received packets come from vEth0, and transmitted packets are sent to vEth0.
-
-#. In the guest, bind the virtio device to the uio_pci_generic kernel module and start the forwarding application.
- When the virtio port in the guest bursts Rx, it gets packets from the
- raw socket's receive queue.
- When the virtio port bursts Tx, it sends packets to the tx_q.
-
- .. code-block:: console
-
- modprobe uio
- dpdk-hugepages.py --setup 1G
- modprobe uio_pci_generic
- ./usertools/dpdk-devbind.py -b uio_pci_generic 00:03.0
-
- We use testpmd as the forwarding application in this example.
-
- .. figure:: img/console.*
-
- Running testpmd
-
-#. Use IXIA packet generator to inject a packet stream into the KNI physical port.
-
- The packet reception and transmission flow path is:
-
- IXIA packet generator->82599 PF->KNI Rx queue->KNI raw socket queue->Guest
- VM virtio port 0 Rx burst->Guest VM virtio port 0 Tx burst-> KNI Tx queue
- ->82599 PF-> IXIA packet generator
Virtio with qemu virtio Back End
--------------------------------
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 93c8a031be56..5d382fdd9032 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -610,8 +610,6 @@ devices would fail anyway.
``RTE_PCI_DRV_NEED_IOVA_AS_VA`` flag is used to dictate that this PCI
driver can only work in RTE_IOVA_VA mode.
- When the KNI kernel module is detected, RTE_IOVA_PA mode is preferred as a
- performance penalty is expected in RTE_IOVA_VA mode.
IOVA Mode Configuration
~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/prog_guide/glossary.rst b/doc/guides/prog_guide/glossary.rst
index fb0910ba5b3f..8d6349701e43 100644
--- a/doc/guides/prog_guide/glossary.rst
+++ b/doc/guides/prog_guide/glossary.rst
@@ -103,9 +103,6 @@ lcore
A logical execution unit of the processor, sometimes called a *hardware
thread*.
-KNI
- Kernel Network Interface
-
L1
Layer 1
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index d89cd3edb63c..1be6a3d6d9b6 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -54,7 +54,6 @@ Programmer's Guide
pcapng_lib
pdump_lib
multi_proc_support
- kernel_nic_interface
thread_safety_dpdk_functions
eventdev
event_ethernet_rx_adapter
diff --git a/doc/guides/prog_guide/kernel_nic_interface.rst b/doc/guides/prog_guide/kernel_nic_interface.rst
deleted file mode 100644
index 392e5df75fcf..000000000000
--- a/doc/guides/prog_guide/kernel_nic_interface.rst
+++ /dev/null
@@ -1,423 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2010-2015 Intel Corporation.
-
-.. _kni:
-
-Kernel NIC Interface
-====================
-
-.. note::
-
- KNI is deprecated and will be removed in the future.
- See :doc:`../rel_notes/deprecation`.
-
- :ref:`virtio_user_as_exception_path` alternative is the preferred way
- for interfacing with the Linux network stack
- as it is an in-kernel solution and has similar performance expectations.
-
-.. note::
-
- KNI is disabled by default in the DPDK build.
- To re-enable the library, remove 'kni' from the "disable_libs" meson option when configuring a build.
-
-The DPDK Kernel NIC Interface (KNI) allows userspace applications access to the Linux* control plane.
-
-KNI provides an interface with the kernel network stack
-and allows management of DPDK ports using standard Linux net tools
-such as ``ethtool``, ``iproute2`` and ``tcpdump``.
-
-The main use case of KNI is to send/receive exception packets to/from the Linux network stack
-while the main datapath IO is done bypassing the networking stack.
-
-There are other alternatives to KNI, all of which are available in upstream Linux:
-
-#. :ref:`virtio_user_as_exception_path`
-
-#. :doc:`../nics/tap` as wrapper to `Linux tun/tap
- <https://www.kernel.org/doc/Documentation/networking/tuntap.txt>`_
-
-The benefits of using KNI over these alternatives are:
-
-* Faster than existing Linux TUN/TAP interfaces
- (by eliminating system calls and copy_to_user()/copy_from_user() operations).
-
-The disadvantages of the KNI are:
-
-* It is an out-of-tree Linux kernel module,
- which makes updating and distributing the driver more difficult.
- Most users end up building the KNI driver from source
- which requires the packages and tools to build kernel modules.
-
-* As it shares memory between userspace and kernelspace,
- and the kernel part directly uses input provided by userspace, it is not safe.
- This makes the module hard to upstream.
-
-* Requires dedicated kernel cores.
-
-* Only a subset of net device control commands is supported by KNI.
-
-The components of an application using the DPDK Kernel NIC Interface are shown in :numref:`figure_kernel_nic_intf`.
-
-.. _figure_kernel_nic_intf:
-
-.. figure:: img/kernel_nic_intf.*
-
- Components of a DPDK KNI Application
-
-
-The DPDK KNI Kernel Module
---------------------------
-
-The KNI kernel loadable module ``rte_kni`` provides the kernel interface
-for DPDK applications.
-
-When the ``rte_kni`` module is loaded, it will create a device ``/dev/kni``
-that is used by the DPDK KNI API functions to control and communicate with
-the kernel module.
-
-The ``rte_kni`` kernel module contains several optional parameters which
-can be specified when the module is loaded to control its behavior:
-
-.. code-block:: console
-
- # modinfo rte_kni.ko
- <snip>
- parm: lo_mode: KNI loopback mode (default=lo_mode_none):
- lo_mode_none Kernel loopback disabled
- lo_mode_fifo Enable kernel loopback with fifo
- lo_mode_fifo_skb Enable kernel loopback with fifo and skb buffer
- (charp)
- parm: kthread_mode: Kernel thread mode (default=single):
- single Single kernel thread mode enabled.
- multiple Multiple kernel thread mode enabled.
- (charp)
- parm: carrier: Default carrier state for KNI interface (default=off):
- off Interfaces will be created with carrier state set to off.
- on Interfaces will be created with carrier state set to on.
- (charp)
- parm: enable_bifurcated: Enable request processing support for
- bifurcated drivers, which means releasing rtnl_lock before calling
- userspace callback and supporting async requests (default=off):
- on Enable request processing support for bifurcated drivers.
- (charp)
- parm: min_scheduling_interval: KNI thread min scheduling interval (default=100 microseconds)
- (long)
- parm: max_scheduling_interval: KNI thread max scheduling interval (default=200 microseconds)
- (long)
-
-
-Loading the ``rte_kni`` kernel module without any optional parameters is
-the typical way a DPDK application gets packets into and out of the kernel
-network stack. Without any parameters, only one kernel thread is created
-for all KNI devices for packet receiving on the kernel side, loopback mode is
-disabled, and the default carrier state of KNI interfaces is set to *off*.
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko
-
-.. _kni_loopback_mode:
-
-Loopback Mode
-~~~~~~~~~~~~~
-
-For testing, the ``rte_kni`` kernel module can be loaded in loopback mode
-by specifying the ``lo_mode`` parameter:
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko lo_mode=lo_mode_fifo
-
-The ``lo_mode_fifo`` loopback option will loop back ring enqueue/dequeue
-operations in kernel space.
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko lo_mode=lo_mode_fifo_skb
-
-The ``lo_mode_fifo_skb`` loopback option will loop back ring enqueue/dequeue
-operations and sk buffer copies in kernel space.
-
-If the ``lo_mode`` parameter is not specified, loopback mode is disabled.
-
-.. _kni_kernel_thread_mode:
-
-Kernel Thread Mode
-~~~~~~~~~~~~~~~~~~
-
-To provide flexibility of performance, the ``rte_kni`` KNI kernel module
-can be loaded with the ``kthread_mode`` parameter. The ``rte_kni`` kernel
-module supports two options: "single kernel thread" mode and "multiple
-kernel thread" mode.
-
-Single kernel thread mode is enabled as follows:
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko kthread_mode=single
-
-This mode will create only one kernel thread for all KNI interfaces to
-receive data on the kernel side. By default, this kernel thread is not
-bound to any particular core, but the user can set the core affinity for
-this kernel thread by setting the ``core_id`` and ``force_bind`` parameters
-in ``struct rte_kni_conf`` when the first KNI interface is created.
-
-For optimum performance, the kernel thread should be bound to a core
-on the same socket as the DPDK lcores used in the application.
-
-The KNI kernel module can also be configured to start a separate kernel
-thread for each KNI interface created by the DPDK application. Multiple
-kernel thread mode is enabled as follows:
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko kthread_mode=multiple
-
-This mode will create a separate kernel thread for each KNI interface to
-receive data on the kernel side. The core affinity of each ``kni_thread``
-kernel thread can be specified by setting the ``core_id`` and ``force_bind``
-parameters in ``struct rte_kni_conf`` when each KNI interface is created.
-
-Multiple kernel thread mode can provide higher, scalable performance if
-sufficient unused cores are available on the host system.
-
-If the ``kthread_mode`` parameter is not specified, the "single kernel
-thread" mode is used.
-
-.. _kni_default_carrier_state:
-
-Default Carrier State
-~~~~~~~~~~~~~~~~~~~~~
-
-The default carrier state of KNI interfaces created by the ``rte_kni``
-kernel module is controlled via the ``carrier`` option when the module
-is loaded.
-
-If ``carrier=off`` is specified, the kernel module will leave the carrier
-state of the interface *down* when the interface is management enabled.
-The DPDK application can set the carrier state of the KNI interface using the
-``rte_kni_update_link()`` function. This is useful for DPDK applications
-which require that the carrier state of the KNI interface reflect the
-actual link state of the corresponding physical NIC port.
-
-If ``carrier=on`` is specified, the kernel module will automatically set
-the carrier state of the interface to *up* when the interface is management
-enabled. This is useful for DPDK applications which use the KNI interface as
-a purely virtual interface that does not correspond to any physical hardware
-and do not wish to explicitly set the carrier state of the interface with
-``rte_kni_update_link()``. It is also useful for testing in loopback mode
-where the NIC port may not be physically connected to anything.
-
-To set the default carrier state to *on*:
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko carrier=on
-
-To set the default carrier state to *off*:
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko carrier=off
-
-If the ``carrier`` parameter is not specified, the default carrier state
-of KNI interfaces will be set to *off*.
-
-.. _kni_bifurcated_device_support:
-
-Bifurcated Device Support
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-User callbacks are executed while the kernel module holds the ``rtnl`` lock,
-which causes a deadlock when callbacks run control commands on another Linux
-kernel network interface.
-
-Bifurcated devices have a kernel network driver part; to prevent this deadlock
-for them, ``enable_bifurcated`` is used.
-
-To enable bifurcated device support:
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko enable_bifurcated=on
-
-Enabling bifurcated device support releases the ``rtnl`` lock before calling the
-callback and locks it back afterwards. It also enables asynchronous requests to
-support callbacks that require the rtnl lock to work (e.g. interface down).
-
-KNI Kthread Scheduling
-~~~~~~~~~~~~~~~~~~~~~~
-
-The ``min_scheduling_interval`` and ``max_scheduling_interval`` parameters
-control the rescheduling interval of the KNI kthreads.
-
-This can be useful for use cases that require improved
-latency or performance for control plane traffic.
-
-The implementation is backed by Linux High Precision Timers, and uses ``usleep_range``.
-Hence, it will have the same granularity constraints as this Linux subsystem.
-
-For Linux High Precision Timers, you can check the following resource: `Kernel Timers <http://www.kernel.org/doc/Documentation/timers/timers-howto.txt>`_
-
-To set the ``min_scheduling_interval`` to a value of 100 microseconds:
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko min_scheduling_interval=100
-
-To set the ``max_scheduling_interval`` to a value of 200 microseconds:
-
-.. code-block:: console
-
- # insmod <build_dir>/kernel/linux/kni/rte_kni.ko max_scheduling_interval=200
-
-If the ``min_scheduling_interval`` and ``max_scheduling_interval`` parameters are
-not specified, the default interval limits will be set to *100* and *200* respectively.
-
-KNI Creation and Deletion
--------------------------
-
-Before any KNI interfaces can be created, the ``rte_kni`` kernel module must
-be loaded into the kernel and configured with the ``rte_kni_init()`` function.
-
-The KNI interfaces are created by a DPDK application dynamically via the
-``rte_kni_alloc()`` function.
-
-The ``struct rte_kni_conf`` structure contains fields which allow the
-user to specify the interface name, set the MTU size, set an explicit or
-random MAC address and control the affinity of the kernel Rx thread(s)
-(both single and multi-threaded modes).
-By default the KNI example application gets the MTU from the matching device,
-and in the case of the KNI PMD it is derived from the mbuf buffer length.
-
-The ``struct rte_kni_ops`` structure contains pointers to functions to
-handle requests from the ``rte_kni`` kernel module. These functions
-allow DPDK applications to perform actions when the KNI interfaces are
-manipulated by control commands or functions external to the application.
-
-For example, the DPDK application may wish to enable/disable a physical
-NIC port when a user enables/disables a KNI interface with ``ip link set
-[up|down] dev <ifaceX>``. The DPDK application can register a callback for
-``config_network_if`` which will be called when the interface management
-state changes.
-
-There are currently four callbacks for which the user can register
-application functions:
-
-``config_network_if``:
-
- Called when the management state of the KNI interface changes.
- For example, when the user runs ``ip link set [up|down] dev <ifaceX>``.
-
-``change_mtu``:
-
- Called when the user changes the MTU size of the KNI
- interface. For example, when the user runs ``ip link set mtu <size>
- dev <ifaceX>``.
-
-``config_mac_address``:
-
- Called when the user changes the MAC address of the KNI interface.
- For example, when the user runs ``ip link set address <MAC>
- dev <ifaceX>``. If the user sets this callback function to NULL,
- but sets the ``port_id`` field to a value other than -1, a default
- callback handler in the rte_kni library ``kni_config_mac_address()``
- will be called which calls ``rte_eth_dev_default_mac_addr_set()``
- on the specified ``port_id``.
-
-``config_promiscusity``:
-
- Called when the user changes the promiscuity state of the KNI
- interface. For example, when the user runs ``ip link set promisc
- [on|off] dev <ifaceX>``. If the user sets this callback function to
- NULL, but sets the ``port_id`` field to a value other than -1, a default
- callback handler in the rte_kni library ``kni_config_promiscusity()``
- will be called which calls ``rte_eth_promiscuous_<enable|disable>()``
- on the specified ``port_id``.
-
-``config_allmulticast``:
-
- Called when the user changes the allmulticast state of the KNI interface.
- For example, when the user runs ``ifconfig <ifaceX> [-]allmulti``. If the
- user sets this callback function to NULL, but sets the ``port_id`` field to
- a value other than -1, a default callback handler in the rte_kni library
- ``kni_config_allmulticast()`` will be called which calls
- ``rte_eth_allmulticast_<enable|disable>()`` on the specified ``port_id``.
-
-In order to run these callbacks, the application must periodically call
-the ``rte_kni_handle_request()`` function. Any user callback function
-registered will be called directly from ``rte_kni_handle_request()`` so
-care must be taken to prevent deadlock and to not block any DPDK fastpath
-tasks. Typically DPDK applications which use these callbacks will need
-to create a separate thread or secondary process to periodically call
-``rte_kni_handle_request()``.
-
-The KNI interfaces can be deleted by a DPDK application with
-``rte_kni_release()``. All KNI interfaces not explicitly deleted will be
-deleted when the ``/dev/kni`` device is closed, either explicitly with
-``rte_kni_close()`` or when the DPDK application is closed.
-
-DPDK mbuf Flow
---------------
-
-To minimize the amount of DPDK code running in kernel space, the mbuf mempool is managed in userspace only.
-The kernel module will be aware of mbufs,
-but all mbuf allocation and free operations will be handled by the DPDK application only.
-
-:numref:`figure_pkt_flow_kni` shows a typical scenario with packets sent in both directions.
-
-.. _figure_pkt_flow_kni:
-
-.. figure:: img/pkt_flow_kni.*
-
- Packet Flow via mbufs in the DPDK KNI
-
-
-Use Case: Ingress
------------------
-
-On the DPDK RX side, the mbuf is allocated by the PMD in the RX thread context.
-This thread will enqueue the mbuf in the rx_q FIFO,
-and the next pointers in the mbuf chain are converted to physical addresses.
-The KNI thread will poll all active KNI devices for the rx_q.
-If an mbuf is dequeued, it will be converted to a sk_buff and sent to the net stack via netif_rx().
-The dequeued mbuf must be freed, so the same pointer is sent back in the free_q FIFO,
-and any next pointers are converted back to virtual addresses before the mbuf is put in the free_q FIFO.
-
-The RX thread, in the same main loop, polls this FIFO and frees the mbuf after dequeuing it.
-The address conversion of the next pointers prevents chained mbufs
-spanning different hugepage segments from causing a kernel crash.
-
-Use Case: Egress
-----------------
-
-For packet egress the DPDK application must first enqueue several mbufs to create an mbuf cache on the kernel side.
-
-The packet is received from the Linux net stack by the kni_net_tx() callback.
-The mbuf is dequeued (without waiting, thanks to the cache) and filled with data from the sk_buff.
-The sk_buff is then freed and the mbuf is sent in the tx_q FIFO.
-
-The DPDK TX thread dequeues the mbuf and sends it to the PMD via ``rte_eth_tx_burst()``.
-It then puts the mbuf back in the cache.
-
-IOVA = VA: Support
-------------------
-
-KNI operates in IOVA_VA scheme when
-
-- LINUX_VERSION_CODE >= KERNEL_VERSION(4, 10, 0) and
-- EAL option `iova-mode=va` is passed or bus IOVA scheme in the DPDK is selected
- as RTE_IOVA_VA.
-
-Due to IOVA to KVA address translations, there can be a performance impact
-depending on the KNI use case. For mitigation, forcing IOVA to PA via the EAL
-"--iova-mode=pa" option can be used; the IOVA_DC bus iommu scheme can also
-result in IOVA as PA.
-
-Ethtool
--------
-
-Ethtool is a Linux-specific tool with corresponding support in the kernel.
-The current version of kni provides minimal ethtool functionality
-including querying version and link state. It does not support link
-control, statistics, or dumping device registers.
diff --git a/doc/guides/prog_guide/packet_framework.rst b/doc/guides/prog_guide/packet_framework.rst
index 3d4e3b66cc5c..ebc69d8c3e75 100644
--- a/doc/guides/prog_guide/packet_framework.rst
+++ b/doc/guides/prog_guide/packet_framework.rst
@@ -87,18 +87,15 @@ Port Types
| | | management and hierarchical scheduling according to pre-defined SLAs. |
| | | |
+---+------------------+---------------------------------------------------------------------------------------+
- | 6 | KNI | Send/receive packets to/from Linux kernel space. |
- | | | |
- +---+------------------+---------------------------------------------------------------------------------------+
- | 7 | Source | Input port used as packet generator. Similar to Linux kernel /dev/zero character |
+ | 6 | Source | Input port used as packet generator. Similar to Linux kernel /dev/zero character |
| | | device. |
| | | |
+---+------------------+---------------------------------------------------------------------------------------+
- | 8 | Sink | Output port used to drop all input packets. Similar to Linux kernel /dev/null |
+ | 7 | Sink | Output port used to drop all input packets. Similar to Linux kernel /dev/null |
| | | character device. |
| | | |
+---+------------------+---------------------------------------------------------------------------------------+
- | 9 | Sym_crypto | Output port used to extract DPDK Cryptodev operations from a fixed offset of the |
+ | 8 | Sym_crypto | Output port used to extract DPDK Cryptodev operations from a fixed offset of the |
| | | packet and then enqueue to the Cryptodev PMD. Input port used to dequeue the |
| | | Cryptodev operations from the Cryptodev PMD and then retrieve the packets from them. |
+---+------------------+---------------------------------------------------------------------------------------+
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 494b401cda4b..fa619514fd64 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -35,7 +35,7 @@ Deprecation Notices
which also added support for standard atomics
(Ref: https://releases.llvm.org/3.6.0/tools/clang/docs/ReleaseNotes.html)
-* build: Enabling deprecated libraries (``flow_classify``, ``kni``)
+* build: Enabling deprecated libraries (``flow_classify``)
won't be possible anymore through the use of the ``disable_libs`` build option.
A new build option for deprecated libraries will be introduced instead.
@@ -78,13 +78,6 @@ Deprecation Notices
``__atomic_thread_fence`` must be used for patches that need to be merged in
20.08 onwards. This change will not introduce any performance degradation.
-* kni: The KNI kernel module and library are not recommended for use by new
- applications - other technologies such as virtio-user are recommended instead.
- Following the DPDK technical board
- `decision <https://mails.dpdk.org/archives/dev/2021-January/197077.html>`_
- and `refinement <https://mails.dpdk.org/archives/dev/2022-June/243596.html>`_,
- the KNI kernel module, library and PMD will be removed from the DPDK 23.11 release.
-
* lib: will fix extending some enum/define breaking the ABI. There are multiple
samples in DPDK that enum/define terminated with a ``.*MAX.*`` value which is
used by iterators, and arrays holding these values are sized with this
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
new file mode 100644
index 000000000000..e2158934751f
--- /dev/null
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -0,0 +1,16 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2023 The DPDK contributors
+
+.. include:: <isonum.txt>
+
+DPDK Release 23.11
+==================
+
+New Features
+------------
+
+
+Removed Items
+-------------
+
+* kni: Removed the deprecated Kernel Network Interface (KNI) driver, libraries and examples.
diff --git a/doc/guides/sample_app_ug/ip_pipeline.rst b/doc/guides/sample_app_ug/ip_pipeline.rst
index b521d3b8be20..f30ac5e19db7 100644
--- a/doc/guides/sample_app_ug/ip_pipeline.rst
+++ b/doc/guides/sample_app_ug/ip_pipeline.rst
@@ -164,15 +164,6 @@ Examples
| | | | 8. Pipeline table rule add default |
| | | | 9. Pipeline table rule add |
+-----------------------+----------------------+----------------+------------------------------------+
- | KNI | Stub | Forward | 1. Mempool create |
- | | | | 2. Link create |
- | | | | 3. Pipeline create |
- | | | | 4. Pipeline port in/out |
- | | | | 5. Pipeline table |
- | | | | 6. Pipeline port in table |
- | | | | 7. Pipeline enable |
- | | | | 8. Pipeline table rule add |
- +-----------------------+----------------------+----------------+------------------------------------+
| Firewall | ACL | Allow/Drop | 1. Mempool create |
| | | | 2. Link create |
| | * Key = n-tuple | | 3. Pipeline create |
@@ -297,17 +288,6 @@ Tap
tap <name>
-Kni
-~~~
-
- Create kni port ::
-
- kni <kni_name>
- link <link_name>
- mempool <mempool_name>
- [thread <thread_id>]
-
-
Cryptodev
~~~~~~~~~
@@ -366,7 +346,6 @@ Create pipeline input port ::
| swq <swq_name>
| tmgr <tmgr_name>
| tap <tap_name> mempool <mempool_name> mtu <mtu>
- | kni <kni_name>
| source mempool <mempool_name> file <file_name> bpp <n_bytes_per_pkt>
[action <port_in_action_profile_name>]
[disabled]
@@ -379,7 +358,6 @@ Create pipeline output port ::
| swq <swq_name>
| tmgr <tmgr_name>
| tap <tap_name>
- | kni <kni_name>
| sink [file <file_name> pkts <max_n_pkts>]
Create pipeline table ::
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 4b98faa72980..01b707b6c4ac 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1130,7 +1130,7 @@ nix_set_nop_rxtx_function(struct rte_eth_dev *eth_dev)
{
/* These dummy functions are required for supporting
* some applications which reconfigure queues without
- * stopping tx burst and rx burst threads(eg kni app)
+ * stopping tx burst and rx burst threads.
* When the queues context is saved, txq/rxqs are released
* which caused app crash since rx/tx burst is still
* on different lcores
diff --git a/drivers/net/kni/meson.build b/drivers/net/kni/meson.build
deleted file mode 100644
index 2acc98969426..000000000000
--- a/drivers/net/kni/meson.build
+++ /dev/null
@@ -1,11 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Intel Corporation
-
-if is_windows
- build = false
- reason = 'not supported on Windows'
- subdir_done()
-endif
-
-deps += 'kni'
-sources = files('rte_eth_kni.c')
diff --git a/drivers/net/kni/rte_eth_kni.c b/drivers/net/kni/rte_eth_kni.c
deleted file mode 100644
index c0e1f8db409e..000000000000
--- a/drivers/net/kni/rte_eth_kni.c
+++ /dev/null
@@ -1,524 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Intel Corporation
- */
-
-#include <fcntl.h>
-#include <pthread.h>
-#include <unistd.h>
-
-#include <rte_string_fns.h>
-#include <ethdev_driver.h>
-#include <ethdev_vdev.h>
-#include <rte_kni.h>
-#include <rte_kvargs.h>
-#include <rte_malloc.h>
-#include <bus_vdev_driver.h>
-
-/* Only single queue supported */
-#define KNI_MAX_QUEUE_PER_PORT 1
-
-#define MAX_KNI_PORTS 8
-
-#define KNI_ETHER_MTU(mbuf_size) \
- ((mbuf_size) - RTE_ETHER_HDR_LEN) /**< Ethernet MTU. */
-
-#define ETH_KNI_NO_REQUEST_THREAD_ARG "no_request_thread"
-static const char * const valid_arguments[] = {
- ETH_KNI_NO_REQUEST_THREAD_ARG,
- NULL
-};
-
-struct eth_kni_args {
- int no_request_thread;
-};
-
-struct pmd_queue_stats {
- uint64_t pkts;
- uint64_t bytes;
-};
-
-struct pmd_queue {
- struct pmd_internals *internals;
- struct rte_mempool *mb_pool;
-
- struct pmd_queue_stats rx;
- struct pmd_queue_stats tx;
-};
-
-struct pmd_internals {
- struct rte_kni *kni;
- uint16_t port_id;
- int is_kni_started;
-
- pthread_t thread;
- int stop_thread;
- int no_request_thread;
-
- struct rte_ether_addr eth_addr;
-
- struct pmd_queue rx_queues[KNI_MAX_QUEUE_PER_PORT];
- struct pmd_queue tx_queues[KNI_MAX_QUEUE_PER_PORT];
-};
-
-static const struct rte_eth_link pmd_link = {
- .link_speed = RTE_ETH_SPEED_NUM_10G,
- .link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
- .link_status = RTE_ETH_LINK_DOWN,
- .link_autoneg = RTE_ETH_LINK_FIXED,
-};
-static int is_kni_initialized;
-
-RTE_LOG_REGISTER_DEFAULT(eth_kni_logtype, NOTICE);
-
-#define PMD_LOG(level, fmt, args...) \
- rte_log(RTE_LOG_ ## level, eth_kni_logtype, \
- "%s(): " fmt "\n", __func__, ##args)
-static uint16_t
-eth_kni_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
-{
- struct pmd_queue *kni_q = q;
- struct rte_kni *kni = kni_q->internals->kni;
- uint16_t nb_pkts;
- int i;
-
- nb_pkts = rte_kni_rx_burst(kni, bufs, nb_bufs);
- for (i = 0; i < nb_pkts; i++)
- bufs[i]->port = kni_q->internals->port_id;
-
- kni_q->rx.pkts += nb_pkts;
-
- return nb_pkts;
-}
-
-static uint16_t
-eth_kni_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
-{
- struct pmd_queue *kni_q = q;
- struct rte_kni *kni = kni_q->internals->kni;
- uint16_t nb_pkts;
-
- nb_pkts = rte_kni_tx_burst(kni, bufs, nb_bufs);
-
- kni_q->tx.pkts += nb_pkts;
-
- return nb_pkts;
-}
-
-static void *
-kni_handle_request(void *param)
-{
- struct pmd_internals *internals = param;
-#define MS 1000
-
- while (!internals->stop_thread) {
- rte_kni_handle_request(internals->kni);
- usleep(500 * MS);
- }
-
- return param;
-}
-
-static int
-eth_kni_start(struct rte_eth_dev *dev)
-{
- struct pmd_internals *internals = dev->data->dev_private;
- uint16_t port_id = dev->data->port_id;
- struct rte_mempool *mb_pool;
- struct rte_kni_conf conf = {{0}};
- const char *name = dev->device->name + 4; /* remove net_ */
-
- mb_pool = internals->rx_queues[0].mb_pool;
- strlcpy(conf.name, name, RTE_KNI_NAMESIZE);
- conf.force_bind = 0;
- conf.group_id = port_id;
- conf.mbuf_size =
- rte_pktmbuf_data_room_size(mb_pool) - RTE_PKTMBUF_HEADROOM;
- conf.mtu = KNI_ETHER_MTU(conf.mbuf_size);
-
- internals->kni = rte_kni_alloc(mb_pool, &conf, NULL);
- if (internals->kni == NULL) {
- PMD_LOG(ERR,
- "Fail to create kni interface for port: %d",
- port_id);
- return -1;
- }
-
- return 0;
-}
-
-static int
-eth_kni_dev_start(struct rte_eth_dev *dev)
-{
- struct pmd_internals *internals = dev->data->dev_private;
- int ret;
-
- if (internals->is_kni_started == 0) {
- ret = eth_kni_start(dev);
- if (ret)
- return -1;
- internals->is_kni_started = 1;
- }
-
- if (internals->no_request_thread == 0) {
- internals->stop_thread = 0;
-
- ret = rte_ctrl_thread_create(&internals->thread,
- "kni_handle_req", NULL,
- kni_handle_request, internals);
- if (ret) {
- PMD_LOG(ERR,
- "Fail to create kni request thread");
- return -1;
- }
- }
-
- dev->data->dev_link.link_status = 1;
-
- return 0;
-}
-
-static int
-eth_kni_dev_stop(struct rte_eth_dev *dev)
-{
- struct pmd_internals *internals = dev->data->dev_private;
- int ret;
-
- if (internals->no_request_thread == 0 && internals->stop_thread == 0) {
- internals->stop_thread = 1;
-
- ret = pthread_cancel(internals->thread);
- if (ret)
- PMD_LOG(ERR, "Can't cancel the thread");
-
- ret = pthread_join(internals->thread, NULL);
- if (ret)
- PMD_LOG(ERR, "Can't join the thread");
- }
-
- dev->data->dev_link.link_status = 0;
- dev->data->dev_started = 0;
-
- return 0;
-}
-
-static int
-eth_kni_close(struct rte_eth_dev *eth_dev)
-{
- struct pmd_internals *internals;
- int ret;
-
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- ret = eth_kni_dev_stop(eth_dev);
- if (ret)
- PMD_LOG(WARNING, "Not able to stop kni for %s",
- eth_dev->data->name);
-
- /* mac_addrs must not be freed alone because part of dev_private */
- eth_dev->data->mac_addrs = NULL;
-
- internals = eth_dev->data->dev_private;
- ret = rte_kni_release(internals->kni);
- if (ret)
- PMD_LOG(WARNING, "Not able to release kni for %s",
- eth_dev->data->name);
-
- return ret;
-}
-
-static int
-eth_kni_dev_configure(struct rte_eth_dev *dev __rte_unused)
-{
- return 0;
-}
-
-static int
-eth_kni_dev_info(struct rte_eth_dev *dev __rte_unused,
- struct rte_eth_dev_info *dev_info)
-{
- dev_info->max_mac_addrs = 1;
- dev_info->max_rx_pktlen = UINT32_MAX;
- dev_info->max_rx_queues = KNI_MAX_QUEUE_PER_PORT;
- dev_info->max_tx_queues = KNI_MAX_QUEUE_PER_PORT;
- dev_info->min_rx_bufsize = 0;
-
- return 0;
-}
-
-static int
-eth_kni_rx_queue_setup(struct rte_eth_dev *dev,
- uint16_t rx_queue_id,
- uint16_t nb_rx_desc __rte_unused,
- unsigned int socket_id __rte_unused,
- const struct rte_eth_rxconf *rx_conf __rte_unused,
- struct rte_mempool *mb_pool)
-{
- struct pmd_internals *internals = dev->data->dev_private;
- struct pmd_queue *q;
-
- q = &internals->rx_queues[rx_queue_id];
- q->internals = internals;
- q->mb_pool = mb_pool;
-
- dev->data->rx_queues[rx_queue_id] = q;
-
- return 0;
-}
-
-static int
-eth_kni_tx_queue_setup(struct rte_eth_dev *dev,
- uint16_t tx_queue_id,
- uint16_t nb_tx_desc __rte_unused,
- unsigned int socket_id __rte_unused,
- const struct rte_eth_txconf *tx_conf __rte_unused)
-{
- struct pmd_internals *internals = dev->data->dev_private;
- struct pmd_queue *q;
-
- q = &internals->tx_queues[tx_queue_id];
- q->internals = internals;
-
- dev->data->tx_queues[tx_queue_id] = q;
-
- return 0;
-}
-
-static int
-eth_kni_link_update(struct rte_eth_dev *dev __rte_unused,
- int wait_to_complete __rte_unused)
-{
- return 0;
-}
-
-static int
-eth_kni_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
-{
- unsigned long rx_packets_total = 0, rx_bytes_total = 0;
- unsigned long tx_packets_total = 0, tx_bytes_total = 0;
- struct rte_eth_dev_data *data = dev->data;
- unsigned int i, num_stats;
- struct pmd_queue *q;
-
- num_stats = RTE_MIN((unsigned int)RTE_ETHDEV_QUEUE_STAT_CNTRS,
- data->nb_rx_queues);
- for (i = 0; i < num_stats; i++) {
- q = data->rx_queues[i];
- stats->q_ipackets[i] = q->rx.pkts;
- stats->q_ibytes[i] = q->rx.bytes;
- rx_packets_total += stats->q_ipackets[i];
- rx_bytes_total += stats->q_ibytes[i];
- }
-
- num_stats = RTE_MIN((unsigned int)RTE_ETHDEV_QUEUE_STAT_CNTRS,
- data->nb_tx_queues);
- for (i = 0; i < num_stats; i++) {
- q = data->tx_queues[i];
- stats->q_opackets[i] = q->tx.pkts;
- stats->q_obytes[i] = q->tx.bytes;
- tx_packets_total += stats->q_opackets[i];
- tx_bytes_total += stats->q_obytes[i];
- }
-
- stats->ipackets = rx_packets_total;
- stats->ibytes = rx_bytes_total;
- stats->opackets = tx_packets_total;
- stats->obytes = tx_bytes_total;
-
- return 0;
-}
-
-static int
-eth_kni_stats_reset(struct rte_eth_dev *dev)
-{
- struct rte_eth_dev_data *data = dev->data;
- struct pmd_queue *q;
- unsigned int i;
-
- for (i = 0; i < data->nb_rx_queues; i++) {
- q = data->rx_queues[i];
- q->rx.pkts = 0;
- q->rx.bytes = 0;
- }
- for (i = 0; i < data->nb_tx_queues; i++) {
- q = data->tx_queues[i];
- q->tx.pkts = 0;
- q->tx.bytes = 0;
- }
-
- return 0;
-}
-
-static const struct eth_dev_ops eth_kni_ops = {
- .dev_start = eth_kni_dev_start,
- .dev_stop = eth_kni_dev_stop,
- .dev_close = eth_kni_close,
- .dev_configure = eth_kni_dev_configure,
- .dev_infos_get = eth_kni_dev_info,
- .rx_queue_setup = eth_kni_rx_queue_setup,
- .tx_queue_setup = eth_kni_tx_queue_setup,
- .link_update = eth_kni_link_update,
- .stats_get = eth_kni_stats_get,
- .stats_reset = eth_kni_stats_reset,
-};
-
-static struct rte_eth_dev *
-eth_kni_create(struct rte_vdev_device *vdev,
- struct eth_kni_args *args,
- unsigned int numa_node)
-{
- struct pmd_internals *internals;
- struct rte_eth_dev_data *data;
- struct rte_eth_dev *eth_dev;
-
- PMD_LOG(INFO, "Creating kni ethdev on numa socket %u",
- numa_node);
-
- /* reserve an ethdev entry */
- eth_dev = rte_eth_vdev_allocate(vdev, sizeof(*internals));
- if (!eth_dev)
- return NULL;
-
- internals = eth_dev->data->dev_private;
- internals->port_id = eth_dev->data->port_id;
- data = eth_dev->data;
- data->nb_rx_queues = 1;
- data->nb_tx_queues = 1;
- data->dev_link = pmd_link;
- data->mac_addrs = &internals->eth_addr;
- data->promiscuous = 1;
- data->all_multicast = 1;
- data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
-
- rte_eth_random_addr(internals->eth_addr.addr_bytes);
-
- eth_dev->dev_ops = &eth_kni_ops;
-
- internals->no_request_thread = args->no_request_thread;
-
- return eth_dev;
-}
-
-static int
-kni_init(void)
-{
- int ret;
-
- if (is_kni_initialized == 0) {
- ret = rte_kni_init(MAX_KNI_PORTS);
- if (ret < 0)
- return ret;
- }
-
- is_kni_initialized++;
-
- return 0;
-}
-
-static int
-eth_kni_kvargs_process(struct eth_kni_args *args, const char *params)
-{
- struct rte_kvargs *kvlist;
-
- kvlist = rte_kvargs_parse(params, valid_arguments);
- if (kvlist == NULL)
- return -1;
-
- memset(args, 0, sizeof(struct eth_kni_args));
-
- if (rte_kvargs_count(kvlist, ETH_KNI_NO_REQUEST_THREAD_ARG) == 1)
- args->no_request_thread = 1;
-
- rte_kvargs_free(kvlist);
-
- return 0;
-}
-
-static int
-eth_kni_probe(struct rte_vdev_device *vdev)
-{
- struct rte_eth_dev *eth_dev;
- struct eth_kni_args args;
- const char *name;
- const char *params;
- int ret;
-
- name = rte_vdev_device_name(vdev);
- params = rte_vdev_device_args(vdev);
- PMD_LOG(INFO, "Initializing eth_kni for %s", name);
-
- if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
- eth_dev = rte_eth_dev_attach_secondary(name);
- if (!eth_dev) {
- PMD_LOG(ERR, "Failed to probe %s", name);
- return -1;
- }
- /* TODO: request info from primary to set up Rx and Tx */
- eth_dev->dev_ops = &eth_kni_ops;
- eth_dev->device = &vdev->device;
- rte_eth_dev_probing_finish(eth_dev);
- return 0;
- }
-
- ret = eth_kni_kvargs_process(&args, params);
- if (ret < 0)
- return ret;
-
- ret = kni_init();
- if (ret < 0)
- return ret;
-
- eth_dev = eth_kni_create(vdev, &args, rte_socket_id());
- if (eth_dev == NULL)
- goto kni_uninit;
-
- eth_dev->rx_pkt_burst = eth_kni_rx;
- eth_dev->tx_pkt_burst = eth_kni_tx;
-
- rte_eth_dev_probing_finish(eth_dev);
- return 0;
-
-kni_uninit:
- is_kni_initialized--;
- if (is_kni_initialized == 0)
- rte_kni_close();
- return -1;
-}
-
-static int
-eth_kni_remove(struct rte_vdev_device *vdev)
-{
- struct rte_eth_dev *eth_dev;
- const char *name;
- int ret;
-
- name = rte_vdev_device_name(vdev);
- PMD_LOG(INFO, "Un-Initializing eth_kni for %s", name);
-
- /* find the ethdev entry */
- eth_dev = rte_eth_dev_allocated(name);
- if (eth_dev != NULL) {
- if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- ret = eth_kni_dev_stop(eth_dev);
- if (ret != 0)
- return ret;
- return rte_eth_dev_release_port(eth_dev);
- }
- eth_kni_close(eth_dev);
- rte_eth_dev_release_port(eth_dev);
- }
-
- is_kni_initialized--;
- if (is_kni_initialized == 0)
- rte_kni_close();
-
- return 0;
-}
-
-static struct rte_vdev_driver eth_kni_drv = {
- .probe = eth_kni_probe,
- .remove = eth_kni_remove,
-};
-
-RTE_PMD_REGISTER_VDEV(net_kni, eth_kni_drv);
-RTE_PMD_REGISTER_PARAM_STRING(net_kni, ETH_KNI_NO_REQUEST_THREAD_ARG "=<int>");
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index f68bbc27a784..bd38b533c573 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -35,7 +35,6 @@ drivers = [
'ionic',
'ipn3ke',
'ixgbe',
- 'kni',
'mana',
'memif',
'mlx4',
diff --git a/examples/ip_pipeline/Makefile b/examples/ip_pipeline/Makefile
index 785c7ee38ce5..bc5e0a9f1800 100644
--- a/examples/ip_pipeline/Makefile
+++ b/examples/ip_pipeline/Makefile
@@ -8,7 +8,6 @@ APP = ip_pipeline
SRCS-y := action.c
SRCS-y += cli.c
SRCS-y += conn.c
-SRCS-y += kni.c
SRCS-y += link.c
SRCS-y += main.c
SRCS-y += mempool.c
diff --git a/examples/ip_pipeline/cli.c b/examples/ip_pipeline/cli.c
index c918f30e06f3..e8269ea90c11 100644
--- a/examples/ip_pipeline/cli.c
+++ b/examples/ip_pipeline/cli.c
@@ -14,7 +14,6 @@
#include "cli.h"
#include "cryptodev.h"
-#include "kni.h"
#include "link.h"
#include "mempool.h"
#include "parser.h"
@@ -728,65 +727,6 @@ cmd_tap(char **tokens,
}
}
-static const char cmd_kni_help[] =
-"kni <kni_name>\n"
-" link <link_name>\n"
-" mempool <mempool_name>\n"
-" [thread <thread_id>]\n";
-
-static void
-cmd_kni(char **tokens,
- uint32_t n_tokens,
- char *out,
- size_t out_size)
-{
- struct kni_params p;
- char *name;
- struct kni *kni;
-
- memset(&p, 0, sizeof(p));
- if ((n_tokens != 6) && (n_tokens != 8)) {
- snprintf(out, out_size, MSG_ARG_MISMATCH, tokens[0]);
- return;
- }
-
- name = tokens[1];
-
- if (strcmp(tokens[2], "link") != 0) {
- snprintf(out, out_size, MSG_ARG_NOT_FOUND, "link");
- return;
- }
-
- p.link_name = tokens[3];
-
- if (strcmp(tokens[4], "mempool") != 0) {
- snprintf(out, out_size, MSG_ARG_NOT_FOUND, "mempool");
- return;
- }
-
- p.mempool_name = tokens[5];
-
- if (n_tokens == 8) {
- if (strcmp(tokens[6], "thread") != 0) {
- snprintf(out, out_size, MSG_ARG_NOT_FOUND, "thread");
- return;
- }
-
- if (parser_read_uint32(&p.thread_id, tokens[7]) != 0) {
- snprintf(out, out_size, MSG_ARG_INVALID, "thread_id");
- return;
- }
-
- p.force_bind = 1;
- } else
- p.force_bind = 0;
-
- kni = kni_create(name, &p);
- if (kni == NULL) {
- snprintf(out, out_size, MSG_CMD_FAIL, tokens[0]);
- return;
- }
-}
static const char cmd_cryptodev_help[] =
"cryptodev <cryptodev_name>\n"
@@ -1541,7 +1481,6 @@ static const char cmd_pipeline_port_in_help[] =
" | swq <swq_name>\n"
" | tmgr <tmgr_name>\n"
" | tap <tap_name> mempool <mempool_name> mtu <mtu>\n"
-" | kni <kni_name>\n"
" | source mempool <mempool_name> file <file_name> bpp <n_bytes_per_pkt>\n"
" | cryptodev <cryptodev_name> rxq <queue_id>\n"
" [action <port_in_action_profile_name>]\n"
@@ -1664,18 +1603,6 @@ cmd_pipeline_port_in(char **tokens,
}
t0 += 6;
- } else if (strcmp(tokens[t0], "kni") == 0) {
- if (n_tokens < t0 + 2) {
- snprintf(out, out_size, MSG_ARG_MISMATCH,
- "pipeline port in kni");
- return;
- }
-
- p.type = PORT_IN_KNI;
-
- p.dev_name = tokens[t0 + 1];
-
- t0 += 2;
} else if (strcmp(tokens[t0], "source") == 0) {
if (n_tokens < t0 + 6) {
snprintf(out, out_size, MSG_ARG_MISMATCH,
@@ -1781,7 +1708,6 @@ static const char cmd_pipeline_port_out_help[] =
" | swq <swq_name>\n"
" | tmgr <tmgr_name>\n"
" | tap <tap_name>\n"
-" | kni <kni_name>\n"
" | sink [file <file_name> pkts <max_n_pkts>]\n"
" | cryptodev <cryptodev_name> txq <txq_id> offset <crypto_op_offset>\n";
@@ -1873,16 +1799,6 @@ cmd_pipeline_port_out(char **tokens,
p.type = PORT_OUT_TAP;
- p.dev_name = tokens[7];
- } else if (strcmp(tokens[6], "kni") == 0) {
- if (n_tokens != 8) {
- snprintf(out, out_size, MSG_ARG_MISMATCH,
- "pipeline port out kni");
- return;
- }
-
- p.type = PORT_OUT_KNI;
-
p.dev_name = tokens[7];
} else if (strcmp(tokens[6], "sink") == 0) {
if ((n_tokens != 7) && (n_tokens != 11)) {
@@ -6038,7 +5954,6 @@ cmd_help(char **tokens, uint32_t n_tokens, char *out, size_t out_size)
"\ttmgr subport\n"
"\ttmgr subport pipe\n"
"\ttap\n"
- "\tkni\n"
"\tport in action profile\n"
"\ttable action profile\n"
"\tpipeline\n"
@@ -6124,11 +6039,6 @@ cmd_help(char **tokens, uint32_t n_tokens, char *out, size_t out_size)
return;
}
- if (strcmp(tokens[0], "kni") == 0) {
- snprintf(out, out_size, "\n%s\n", cmd_kni_help);
- return;
- }
-
if (strcmp(tokens[0], "cryptodev") == 0) {
snprintf(out, out_size, "\n%s\n", cmd_cryptodev_help);
return;
@@ -6436,11 +6346,6 @@ cli_process(char *in, char *out, size_t out_size)
return;
}
- if (strcmp(tokens[0], "kni") == 0) {
- cmd_kni(tokens, n_tokens, out, out_size);
- return;
- }
-
if (strcmp(tokens[0], "cryptodev") == 0) {
cmd_cryptodev(tokens, n_tokens, out, out_size);
return;
diff --git a/examples/ip_pipeline/examples/kni.cli b/examples/ip_pipeline/examples/kni.cli
deleted file mode 100644
index 143834093d4d..000000000000
--- a/examples/ip_pipeline/examples/kni.cli
+++ /dev/null
@@ -1,69 +0,0 @@
-; SPDX-License-Identifier: BSD-3-Clause
-; Copyright(c) 2010-2018 Intel Corporation
-
-; _______________ ______________________
-; | | KNI0 | |
-; LINK0 RXQ0 --->|...............|------->|--+ |
-; | | KNI1 | | br0 |
-; LINK1 TXQ0 <---|...............|<-------|<-+ |
-; | | | Linux Kernel |
-; | PIPELINE0 | | Network Stack |
-; | | KNI1 | |
-; LINK1 RXQ0 --->|...............|------->|--+ |
-; | | KNI0 | | br0 |
-; LINK0 TXQ0 <---|...............|<-------|<-+ |
-; |_______________| |______________________|
-;
-; Insert Linux kernel KNI module:
-; [Linux]$ insmod rte_kni.ko
-;
-; Configure Linux kernel bridge between KNI0 and KNI1 interfaces:
-; [Linux]$ brctl addbr br0
-; [Linux]$ brctl addif br0 KNI0
-; [Linux]$ brctl addif br0 KNI1
-; [Linux]$ ifconfig br0 up
-; [Linux]$ ifconfig KNI0 up
-; [Linux]$ ifconfig KNI1 up
-;
-; Monitor packet forwarding performed by Linux kernel between KNI0 and KNI1:
-; [Linux]$ tcpdump -i KNI0
-; [Linux]$ tcpdump -i KNI1
-
-mempool MEMPOOL0 buffer 2304 pool 32K cache 256 cpu 0
-
-link LINK0 dev 0000:02:00.0 rxq 1 128 MEMPOOL0 txq 1 512 promiscuous on
-link LINK1 dev 0000:02:00.1 rxq 1 128 MEMPOOL0 txq 1 512 promiscuous on
-
-kni KNI0 link LINK0 mempool MEMPOOL0
-kni KNI1 link LINK1 mempool MEMPOOL0
-
-table action profile AP0 ipv4 offset 270 fwd
-
-pipeline PIPELINE0 period 10 offset_port_id 0 cpu 0
-
-pipeline PIPELINE0 port in bsz 32 link LINK0 rxq 0
-pipeline PIPELINE0 port in bsz 32 kni KNI1
-pipeline PIPELINE0 port in bsz 32 link LINK1 rxq 0
-pipeline PIPELINE0 port in bsz 32 kni KNI0
-
-pipeline PIPELINE0 port out bsz 32 kni KNI0
-pipeline PIPELINE0 port out bsz 32 link LINK1 txq 0
-pipeline PIPELINE0 port out bsz 32 kni KNI1
-pipeline PIPELINE0 port out bsz 32 link LINK0 txq 0
-
-pipeline PIPELINE0 table match stub action AP0
-pipeline PIPELINE0 table match stub action AP0
-pipeline PIPELINE0 table match stub action AP0
-pipeline PIPELINE0 table match stub action AP0
-
-pipeline PIPELINE0 port in 0 table 0
-pipeline PIPELINE0 port in 1 table 1
-pipeline PIPELINE0 port in 2 table 2
-pipeline PIPELINE0 port in 3 table 3
-
-thread 1 pipeline PIPELINE0 enable
-
-pipeline PIPELINE0 table 0 rule add match default action fwd port 0
-pipeline PIPELINE0 table 1 rule add match default action fwd port 1
-pipeline PIPELINE0 table 2 rule add match default action fwd port 2
-pipeline PIPELINE0 table 3 rule add match default action fwd port 3
diff --git a/examples/ip_pipeline/kni.c b/examples/ip_pipeline/kni.c
deleted file mode 100644
index cd02c3947827..000000000000
--- a/examples/ip_pipeline/kni.c
+++ /dev/null
@@ -1,168 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#include <stdlib.h>
-#include <string.h>
-
-#include <rte_ethdev.h>
-#include <rte_string_fns.h>
-
-#include "kni.h"
-#include "mempool.h"
-#include "link.h"
-
-static struct kni_list kni_list;
-
-#ifndef KNI_MAX
-#define KNI_MAX 16
-#endif
-
-int
-kni_init(void)
-{
- TAILQ_INIT(&kni_list);
-
-#ifdef RTE_LIB_KNI
- rte_kni_init(KNI_MAX);
-#endif
-
- return 0;
-}
-
-struct kni *
-kni_find(const char *name)
-{
- struct kni *kni;
-
- if (name == NULL)
- return NULL;
-
- TAILQ_FOREACH(kni, &kni_list, node)
- if (strcmp(kni->name, name) == 0)
- return kni;
-
- return NULL;
-}
-
-#ifndef RTE_LIB_KNI
-
-struct kni *
-kni_create(const char *name __rte_unused,
- struct kni_params *params __rte_unused)
-{
- return NULL;
-}
-
-void
-kni_handle_request(void)
-{
- return;
-}
-
-#else
-
-static int
-kni_config_network_interface(uint16_t port_id, uint8_t if_up)
-{
- int ret = 0;
-
- if (!rte_eth_dev_is_valid_port(port_id))
- return -EINVAL;
-
- ret = (if_up) ?
- rte_eth_dev_set_link_up(port_id) :
- rte_eth_dev_set_link_down(port_id);
-
- return ret;
-}
-
-static int
-kni_change_mtu(uint16_t port_id, unsigned int new_mtu)
-{
- int ret;
-
- if (!rte_eth_dev_is_valid_port(port_id))
- return -EINVAL;
-
- if (new_mtu > RTE_ETHER_MAX_LEN)
- return -EINVAL;
-
- /* Set new MTU */
- ret = rte_eth_dev_set_mtu(port_id, new_mtu);
- if (ret < 0)
- return ret;
-
- return 0;
-}
-
-struct kni *
-kni_create(const char *name, struct kni_params *params)
-{
- struct rte_eth_dev_info dev_info;
- struct rte_kni_conf kni_conf;
- struct rte_kni_ops kni_ops;
- struct kni *kni;
- struct mempool *mempool;
- struct link *link;
- struct rte_kni *k;
- int ret;
-
- /* Check input params */
- if ((name == NULL) ||
- kni_find(name) ||
- (params == NULL))
- return NULL;
-
- mempool = mempool_find(params->mempool_name);
- link = link_find(params->link_name);
- if ((mempool == NULL) ||
- (link == NULL))
- return NULL;
-
- /* Resource create */
- ret = rte_eth_dev_info_get(link->port_id, &dev_info);
- if (ret != 0)
- return NULL;
-
- memset(&kni_conf, 0, sizeof(kni_conf));
- strlcpy(kni_conf.name, name, RTE_KNI_NAMESIZE);
- kni_conf.force_bind = params->force_bind;
- kni_conf.core_id = params->thread_id;
- kni_conf.group_id = link->port_id;
- kni_conf.mbuf_size = mempool->buffer_size;
-
- memset(&kni_ops, 0, sizeof(kni_ops));
- kni_ops.port_id = link->port_id;
- kni_ops.config_network_if = kni_config_network_interface;
- kni_ops.change_mtu = kni_change_mtu;
-
- k = rte_kni_alloc(mempool->m, &kni_conf, &kni_ops);
- if (k == NULL)
- return NULL;
-
- /* Node allocation */
- kni = calloc(1, sizeof(struct kni));
- if (kni == NULL)
- return NULL;
-
- /* Node fill in */
- strlcpy(kni->name, name, sizeof(kni->name));
- kni->k = k;
-
- /* Node add to list */
- TAILQ_INSERT_TAIL(&kni_list, kni, node);
-
- return kni;
-}
-
-void
-kni_handle_request(void)
-{
- struct kni *kni;
-
- TAILQ_FOREACH(kni, &kni_list, node)
- rte_kni_handle_request(kni->k);
-}
-
-#endif
diff --git a/examples/ip_pipeline/kni.h b/examples/ip_pipeline/kni.h
deleted file mode 100644
index 118f48df73d8..000000000000
--- a/examples/ip_pipeline/kni.h
+++ /dev/null
@@ -1,46 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2018 Intel Corporation
- */
-
-#ifndef _INCLUDE_KNI_H_
-#define _INCLUDE_KNI_H_
-
-#include <stdint.h>
-#include <sys/queue.h>
-
-#ifdef RTE_LIB_KNI
-#include <rte_kni.h>
-#endif
-
-#include "common.h"
-
-struct kni {
- TAILQ_ENTRY(kni) node;
- char name[NAME_SIZE];
-#ifdef RTE_LIB_KNI
- struct rte_kni *k;
-#endif
-};
-
-TAILQ_HEAD(kni_list, kni);
-
-int
-kni_init(void);
-
-struct kni *
-kni_find(const char *name);
-
-struct kni_params {
- const char *link_name;
- const char *mempool_name;
- int force_bind;
- uint32_t thread_id;
-};
-
-struct kni *
-kni_create(const char *name, struct kni_params *params);
-
-void
-kni_handle_request(void);
-
-#endif /* _INCLUDE_KNI_H_ */
diff --git a/examples/ip_pipeline/main.c b/examples/ip_pipeline/main.c
index e35d9bce3984..663f538f024a 100644
--- a/examples/ip_pipeline/main.c
+++ b/examples/ip_pipeline/main.c
@@ -14,7 +14,6 @@
#include "cli.h"
#include "conn.h"
-#include "kni.h"
#include "cryptodev.h"
#include "link.h"
#include "mempool.h"
@@ -205,13 +204,6 @@ main(int argc, char **argv)
return status;
}
- /* KNI */
- status = kni_init();
- if (status) {
- printf("Error: KNI initialization failed (%d)\n", status);
- return status;
- }
-
/* Sym Crypto */
status = cryptodev_init();
if (status) {
@@ -264,7 +256,5 @@ main(int argc, char **argv)
conn_poll_for_conn(conn);
conn_poll_for_msg(conn);
-
- kni_handle_request();
}
}
diff --git a/examples/ip_pipeline/meson.build b/examples/ip_pipeline/meson.build
index 57f522c24cf9..68049157e429 100644
--- a/examples/ip_pipeline/meson.build
+++ b/examples/ip_pipeline/meson.build
@@ -18,7 +18,6 @@ sources = files(
'cli.c',
'conn.c',
'cryptodev.c',
- 'kni.c',
'link.c',
'main.c',
'mempool.c',
diff --git a/examples/ip_pipeline/pipeline.c b/examples/ip_pipeline/pipeline.c
index 7ebabcae984d..63352257c6e9 100644
--- a/examples/ip_pipeline/pipeline.c
+++ b/examples/ip_pipeline/pipeline.c
@@ -11,9 +11,6 @@
#include <rte_string_fns.h>
#include <rte_port_ethdev.h>
-#ifdef RTE_LIB_KNI
-#include <rte_port_kni.h>
-#endif
#include <rte_port_ring.h>
#include <rte_port_source_sink.h>
#include <rte_port_fd.h>
@@ -28,9 +25,6 @@
#include <rte_table_lpm_ipv6.h>
#include <rte_table_stub.h>
-#ifdef RTE_LIB_KNI
-#include "kni.h"
-#endif
#include "link.h"
#include "mempool.h"
#include "pipeline.h"
@@ -160,9 +154,6 @@ pipeline_port_in_create(const char *pipeline_name,
struct rte_port_ring_reader_params ring;
struct rte_port_sched_reader_params sched;
struct rte_port_fd_reader_params fd;
-#ifdef RTE_LIB_KNI
- struct rte_port_kni_reader_params kni;
-#endif
struct rte_port_source_params source;
struct rte_port_sym_crypto_reader_params sym_crypto;
} pp;
@@ -264,22 +255,6 @@ pipeline_port_in_create(const char *pipeline_name,
break;
}
-#ifdef RTE_LIB_KNI
- case PORT_IN_KNI:
- {
- struct kni *kni;
-
- kni = kni_find(params->dev_name);
- if (kni == NULL)
- return -1;
-
- pp.kni.kni = kni->k;
-
- p.ops = &rte_port_kni_reader_ops;
- p.arg_create = &pp.kni;
- break;
- }
-#endif
case PORT_IN_SOURCE:
{
@@ -404,9 +379,6 @@ pipeline_port_out_create(const char *pipeline_name,
struct rte_port_ring_writer_params ring;
struct rte_port_sched_writer_params sched;
struct rte_port_fd_writer_params fd;
-#ifdef RTE_LIB_KNI
- struct rte_port_kni_writer_params kni;
-#endif
struct rte_port_sink_params sink;
struct rte_port_sym_crypto_writer_params sym_crypto;
} pp;
@@ -415,9 +387,6 @@ pipeline_port_out_create(const char *pipeline_name,
struct rte_port_ethdev_writer_nodrop_params ethdev;
struct rte_port_ring_writer_nodrop_params ring;
struct rte_port_fd_writer_nodrop_params fd;
-#ifdef RTE_LIB_KNI
- struct rte_port_kni_writer_nodrop_params kni;
-#endif
struct rte_port_sym_crypto_writer_nodrop_params sym_crypto;
} pp_nodrop;
@@ -537,32 +506,6 @@ pipeline_port_out_create(const char *pipeline_name,
break;
}
-#ifdef RTE_LIB_KNI
- case PORT_OUT_KNI:
- {
- struct kni *kni;
-
- kni = kni_find(params->dev_name);
- if (kni == NULL)
- return -1;
-
- pp.kni.kni = kni->k;
- pp.kni.tx_burst_sz = params->burst_size;
-
- pp_nodrop.kni.kni = kni->k;
- pp_nodrop.kni.tx_burst_sz = params->burst_size;
- pp_nodrop.kni.n_retries = params->n_retries;
-
- if (params->retry == 0) {
- p.ops = &rte_port_kni_writer_ops;
- p.arg_create = &pp.kni;
- } else {
- p.ops = &rte_port_kni_writer_nodrop_ops;
- p.arg_create = &pp_nodrop.kni;
- }
- break;
- }
-#endif
case PORT_OUT_SINK:
{
diff --git a/examples/ip_pipeline/pipeline.h b/examples/ip_pipeline/pipeline.h
index 4d2ee29a54c7..083d5e852421 100644
--- a/examples/ip_pipeline/pipeline.h
+++ b/examples/ip_pipeline/pipeline.h
@@ -25,7 +25,6 @@ enum port_in_type {
PORT_IN_SWQ,
PORT_IN_TMGR,
PORT_IN_TAP,
- PORT_IN_KNI,
PORT_IN_SOURCE,
PORT_IN_CRYPTODEV,
};
@@ -67,7 +66,6 @@ enum port_out_type {
PORT_OUT_SWQ,
PORT_OUT_TMGR,
PORT_OUT_TAP,
- PORT_OUT_KNI,
PORT_OUT_SINK,
PORT_OUT_CRYPTODEV,
};
diff --git a/kernel/linux/kni/Kbuild b/kernel/linux/kni/Kbuild
deleted file mode 100644
index e5452d6c00db..000000000000
--- a/kernel/linux/kni/Kbuild
+++ /dev/null
@@ -1,6 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Luca Boccassi <bluca@debian.org>
-
-ccflags-y := $(MODULE_CFLAGS)
-obj-m := rte_kni.o
-rte_kni-y := $(patsubst $(src)/%.c,%.o,$(wildcard $(src)/*.c))
diff --git a/kernel/linux/kni/compat.h b/kernel/linux/kni/compat.h
deleted file mode 100644
index 8beb67046577..000000000000
--- a/kernel/linux/kni/compat.h
+++ /dev/null
@@ -1,157 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Minimal wrappers to allow compiling kni on older kernels.
- */
-
-#include <linux/version.h>
-
-#ifndef RHEL_RELEASE_VERSION
-#define RHEL_RELEASE_VERSION(a, b) (((a) << 8) + (b))
-#endif
-
-/* SuSE version macro is the same as Linux kernel version */
-#ifndef SLE_VERSION
-#define SLE_VERSION(a, b, c) KERNEL_VERSION(a, b, c)
-#endif
-#ifdef CONFIG_SUSE_KERNEL
-#if (LINUX_VERSION_CODE >= KERNEL_VERSION(4, 4, 57))
-/* SLES12SP3 is at least 4.4.57+ based */
-#define SLE_VERSION_CODE SLE_VERSION(12, 3, 0)
-#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(3, 12, 28))
-/* SLES12 is at least 3.12.28+ based */
-#define SLE_VERSION_CODE SLE_VERSION(12, 0, 0)
-#elif ((LINUX_VERSION_CODE >= KERNEL_VERSION(3, 0, 61)) && \
- (LINUX_VERSION_CODE < KERNEL_VERSION(3, 1, 0)))
-/* SLES11 SP3 is at least 3.0.61+ based */
-#define SLE_VERSION_CODE SLE_VERSION(11, 3, 0)
-#elif (LINUX_VERSION_CODE == KERNEL_VERSION(2, 6, 32))
-/* SLES11 SP1 is 2.6.32 based */
-#define SLE_VERSION_CODE SLE_VERSION(11, 1, 0)
-#elif (LINUX_VERSION_CODE == KERNEL_VERSION(2, 6, 27))
-/* SLES11 GA is 2.6.27 based */
-#define SLE_VERSION_CODE SLE_VERSION(11, 0, 0)
-#endif /* LINUX_VERSION_CODE == KERNEL_VERSION(x,y,z) */
-#endif /* CONFIG_SUSE_KERNEL */
-#ifndef SLE_VERSION_CODE
-#define SLE_VERSION_CODE 0
-#endif /* SLE_VERSION_CODE */
-
-
-#if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 39) && \
- (!(defined(RHEL_RELEASE_CODE) && \
- RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(6, 4)))
-
-#define kstrtoul strict_strtoul
-
-#endif /* < 2.6.39 */
-
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 33)
-#define HAVE_SIMPLIFIED_PERNET_OPERATIONS
-#endif
-
-#if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 35)
-#define sk_sleep(s) ((s)->sk_sleep)
-#else
-#define HAVE_SOCKET_WQ
-#endif
-
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 7, 0)
-#define HAVE_STATIC_SOCK_MAP_FD
-#else
-#define kni_sock_map_fd(s) sock_map_fd(s, 0)
-#endif
-
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 9, 0)
-#define HAVE_CHANGE_CARRIER_CB
-#endif
-
-#if LINUX_VERSION_CODE < KERNEL_VERSION(3, 14, 0)
-#define ether_addr_copy(dst, src) memcpy(dst, src, ETH_ALEN)
-#endif
-
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(3, 19, 0)
-#define HAVE_IOV_ITER_MSGHDR
-#endif
-
-#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 1, 0)
-#define HAVE_KIOCB_MSG_PARAM
-#define HAVE_REBUILD_HEADER
-#endif
-
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 2, 0)
-#define HAVE_SK_ALLOC_KERN_PARAM
-#endif
-
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 7, 0) || \
- (defined(RHEL_RELEASE_CODE) && \
- RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7, 4)) || \
- (SLE_VERSION_CODE && SLE_VERSION_CODE == SLE_VERSION(12, 3, 0))
-#define HAVE_TRANS_START_HELPER
-#endif
-
-/*
- * KNI uses NET_NAME_UNKNOWN macro to select correct version of alloc_netdev()
- * For old kernels just backported the commit that enables the macro
- * (685343fc3ba6) but still uses old API, it is required to undefine macro to
- * select correct version of API, this is safe since KNI doesn't use the value.
- * This fix is specific to RedHat/CentOS kernels.
- */
-#if (defined(RHEL_RELEASE_CODE) && \
- (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(6, 8)) && \
- (LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 34)))
-#undef NET_NAME_UNKNOWN
-#endif
-
-/*
- * RHEL has two different version with different kernel version:
- * 3.10 is for AMD, Intel, IBM POWER7 and POWER8;
- * 4.14 is for ARM and IBM POWER9
- */
-#if (defined(RHEL_RELEASE_CODE) && \
- (RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7, 5)) && \
- (RHEL_RELEASE_CODE < RHEL_RELEASE_VERSION(8, 0)) && \
- (LINUX_VERSION_CODE < KERNEL_VERSION(4, 14, 0)))
-#define ndo_change_mtu ndo_change_mtu_rh74
-#endif
-
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 10, 0)
-#define HAVE_MAX_MTU_PARAM
-#endif
-
-#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 11, 0)
-#define HAVE_SIGNAL_FUNCTIONS_OWN_HEADER
-#endif
-
-/*
- * iova to kva mapping support can be provided since 4.6.0, but required
- * kernel version increased to >= 4.10.0 because of the updates in
- * get_user_pages_remote() kernel API
- */
-#if KERNEL_VERSION(4, 10, 0) <= LINUX_VERSION_CODE
-#define HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
-#endif
-
-#if KERNEL_VERSION(5, 6, 0) <= LINUX_VERSION_CODE || \
- (defined(RHEL_RELEASE_CODE) && \
- RHEL_RELEASE_VERSION(8, 3) <= RHEL_RELEASE_CODE) || \
- (defined(CONFIG_SUSE_KERNEL) && defined(HAVE_ARG_TX_QUEUE))
-#define HAVE_TX_TIMEOUT_TXQUEUE
-#endif
-
-#if KERNEL_VERSION(5, 9, 0) > LINUX_VERSION_CODE
-#define HAVE_TSK_IN_GUP
-#endif
-
-#if KERNEL_VERSION(5, 15, 0) <= LINUX_VERSION_CODE
-#define HAVE_ETH_HW_ADDR_SET
-#endif
-
-#if KERNEL_VERSION(5, 18, 0) > LINUX_VERSION_CODE && \
- (!(defined(RHEL_RELEASE_CODE) && \
- RHEL_RELEASE_VERSION(9, 1) <= RHEL_RELEASE_CODE))
-#define HAVE_NETIF_RX_NI
-#endif
-
-#if KERNEL_VERSION(6, 5, 0) > LINUX_VERSION_CODE
-#define HAVE_VMA_IN_GUP
-#endif
diff --git a/kernel/linux/kni/kni_dev.h b/kernel/linux/kni/kni_dev.h
deleted file mode 100644
index 975379825b2d..000000000000
--- a/kernel/linux/kni/kni_dev.h
+++ /dev/null
@@ -1,137 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Copyright(c) 2010-2014 Intel Corporation.
- */
-
-#ifndef _KNI_DEV_H_
-#define _KNI_DEV_H_
-
-#ifdef pr_fmt
-#undef pr_fmt
-#endif
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
-#define KNI_VERSION "1.0"
-
-#include "compat.h"
-
-#include <linux/if.h>
-#include <linux/wait.h>
-#ifdef HAVE_SIGNAL_FUNCTIONS_OWN_HEADER
-#include <linux/sched/signal.h>
-#else
-#include <linux/sched.h>
-#endif
-#include <linux/netdevice.h>
-#include <linux/spinlock.h>
-#include <linux/list.h>
-
-#include <rte_kni_common.h>
-#define KNI_KTHREAD_MAX_RESCHEDULE_INTERVAL 1000000 /* us */
-
-#define MBUF_BURST_SZ 32
-
-/* Default carrier state for created KNI network interfaces */
-extern uint32_t kni_dflt_carrier;
-
-/* Request processing support for bifurcated drivers. */
-extern uint32_t bifurcated_support;
-
-/**
- * A structure describing the private information for a kni device.
- */
-struct kni_dev {
- /* kni list */
- struct list_head list;
-
- uint8_t iova_mode;
-
- uint32_t core_id; /* Core ID to bind */
- char name[RTE_KNI_NAMESIZE]; /* Network device name */
- struct task_struct *pthread;
-
- /* wait queue for req/resp */
- wait_queue_head_t wq;
- struct mutex sync_lock;
-
- /* kni device */
- struct net_device *net_dev;
-
- /* queue for packets to be sent out */
- struct rte_kni_fifo *tx_q;
-
- /* queue for the packets received */
- struct rte_kni_fifo *rx_q;
-
- /* queue for the allocated mbufs those can be used to save sk buffs */
- struct rte_kni_fifo *alloc_q;
-
- /* free queue for the mbufs to be freed */
- struct rte_kni_fifo *free_q;
-
- /* request queue */
- struct rte_kni_fifo *req_q;
-
- /* response queue */
- struct rte_kni_fifo *resp_q;
-
- void *sync_kva;
- void *sync_va;
-
- void *mbuf_kva;
- void *mbuf_va;
-
- /* mbuf size */
- uint32_t mbuf_size;
-
- /* buffers */
- void *pa[MBUF_BURST_SZ];
- void *va[MBUF_BURST_SZ];
- void *alloc_pa[MBUF_BURST_SZ];
- void *alloc_va[MBUF_BURST_SZ];
-
- struct task_struct *usr_tsk;
-};
-
-#ifdef HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
-static inline phys_addr_t iova_to_phys(struct task_struct *tsk,
- unsigned long iova)
-{
- phys_addr_t offset, phys_addr;
- struct page *page = NULL;
- long ret;
-
- offset = iova & (PAGE_SIZE - 1);
-
- /* Read one page struct info */
-#ifdef HAVE_TSK_IN_GUP
- ret = get_user_pages_remote(tsk, tsk->mm, iova, 1, 0, &page, NULL, NULL);
-#else
- #ifdef HAVE_VMA_IN_GUP
- ret = get_user_pages_remote(tsk->mm, iova, 1, 0, &page, NULL, NULL);
- #else
- ret = get_user_pages_remote(tsk->mm, iova, 1, 0, &page, NULL);
- #endif
-#endif
- if (ret < 0)
- return 0;
-
- phys_addr = page_to_phys(page) | offset;
- put_page(page);
-
- return phys_addr;
-}
-
-static inline void *iova_to_kva(struct task_struct *tsk, unsigned long iova)
-{
- return phys_to_virt(iova_to_phys(tsk, iova));
-}
-#endif
-
-void kni_net_release_fifo_phy(struct kni_dev *kni);
-void kni_net_rx(struct kni_dev *kni);
-void kni_net_init(struct net_device *dev);
-void kni_net_config_lo_mode(char *lo_str);
-void kni_net_poll_resp(struct kni_dev *kni);
-
-#endif
diff --git a/kernel/linux/kni/kni_fifo.h b/kernel/linux/kni/kni_fifo.h
deleted file mode 100644
index 1ba5172002b6..000000000000
--- a/kernel/linux/kni/kni_fifo.h
+++ /dev/null
@@ -1,87 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Copyright(c) 2010-2014 Intel Corporation.
- */
-
-#ifndef _KNI_FIFO_H_
-#define _KNI_FIFO_H_
-
-#include <rte_kni_common.h>
-
-/* Skip some memory barriers on Linux < 3.14 */
-#ifndef smp_load_acquire
-#define smp_load_acquire(a) (*(a))
-#endif
-#ifndef smp_store_release
-#define smp_store_release(a, b) *(a) = (b)
-#endif
-
-/**
- * Adds num elements into the fifo. Return the number actually written
- */
-static inline uint32_t
-kni_fifo_put(struct rte_kni_fifo *fifo, void **data, uint32_t num)
-{
- uint32_t i = 0;
- uint32_t fifo_write = fifo->write;
- uint32_t fifo_read = smp_load_acquire(&fifo->read);
- uint32_t new_write = fifo_write;
-
- for (i = 0; i < num; i++) {
- new_write = (new_write + 1) & (fifo->len - 1);
-
- if (new_write == fifo_read)
- break;
- fifo->buffer[fifo_write] = data[i];
- fifo_write = new_write;
- }
- smp_store_release(&fifo->write, fifo_write);
-
- return i;
-}
-
-/**
- * Get up to num elements from the FIFO. Return the number actually read
- */
-static inline uint32_t
-kni_fifo_get(struct rte_kni_fifo *fifo, void **data, uint32_t num)
-{
- uint32_t i = 0;
- uint32_t new_read = fifo->read;
- uint32_t fifo_write = smp_load_acquire(&fifo->write);
-
- for (i = 0; i < num; i++) {
- if (new_read == fifo_write)
- break;
-
- data[i] = fifo->buffer[new_read];
- new_read = (new_read + 1) & (fifo->len - 1);
- }
- smp_store_release(&fifo->read, new_read);
-
- return i;
-}
-
-/**
- * Get the num of elements in the fifo
- */
-static inline uint32_t
-kni_fifo_count(struct rte_kni_fifo *fifo)
-{
- uint32_t fifo_write = smp_load_acquire(&fifo->write);
- uint32_t fifo_read = smp_load_acquire(&fifo->read);
- return (fifo->len + fifo_write - fifo_read) & (fifo->len - 1);
-}
-
-/**
- * Get the num of available elements in the fifo
- */
-static inline uint32_t
-kni_fifo_free_count(struct rte_kni_fifo *fifo)
-{
- uint32_t fifo_write = smp_load_acquire(&fifo->write);
- uint32_t fifo_read = smp_load_acquire(&fifo->read);
- return (fifo_read - fifo_write - 1) & (fifo->len - 1);
-}
-
-#endif /* _KNI_FIFO_H_ */
diff --git a/kernel/linux/kni/kni_misc.c b/kernel/linux/kni/kni_misc.c
deleted file mode 100644
index 0c3a86ee352e..000000000000
--- a/kernel/linux/kni/kni_misc.c
+++ /dev/null
@@ -1,719 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Copyright(c) 2010-2014 Intel Corporation.
- */
-
-#include <linux/version.h>
-#include <linux/module.h>
-#include <linux/miscdevice.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/pci.h>
-#include <linux/kthread.h>
-#include <linux/rwsem.h>
-#include <linux/mutex.h>
-#include <linux/nsproxy.h>
-#include <net/net_namespace.h>
-#include <net/netns/generic.h>
-
-#include <rte_kni_common.h>
-
-#include "compat.h"
-#include "kni_dev.h"
-
-MODULE_VERSION(KNI_VERSION);
-MODULE_LICENSE("Dual BSD/GPL");
-MODULE_AUTHOR("Intel Corporation");
-MODULE_DESCRIPTION("Kernel Module for managing kni devices");
-
-#define KNI_RX_LOOP_NUM 1000
-
-#define KNI_MAX_DEVICES 32
-
-/* loopback mode */
-static char *lo_mode;
-
-/* Kernel thread mode */
-static char *kthread_mode;
-static uint32_t multiple_kthread_on;
-
-/* Default carrier state for created KNI network interfaces */
-static char *carrier;
-uint32_t kni_dflt_carrier;
-
-/* Request processing support for bifurcated drivers. */
-static char *enable_bifurcated;
-uint32_t bifurcated_support;
-
-/* KNI thread scheduling interval */
-static long min_scheduling_interval = 100; /* us */
-static long max_scheduling_interval = 200; /* us */
-
-#define KNI_DEV_IN_USE_BIT_NUM 0 /* Bit number for device in use */
-
-static int kni_net_id;
-
-struct kni_net {
- unsigned long device_in_use; /* device in use flag */
- struct mutex kni_kthread_lock;
- struct task_struct *kni_kthread;
- struct rw_semaphore kni_list_lock;
- struct list_head kni_list_head;
-};
-
-static int __net_init
-kni_init_net(struct net *net)
-{
-#ifdef HAVE_SIMPLIFIED_PERNET_OPERATIONS
- struct kni_net *knet = net_generic(net, kni_net_id);
-
- memset(knet, 0, sizeof(*knet));
-#else
- struct kni_net *knet;
- int ret;
-
- knet = kzalloc(sizeof(struct kni_net), GFP_KERNEL);
- if (!knet) {
- ret = -ENOMEM;
- return ret;
- }
-#endif
-
- /* Clear the bit of device in use */
- clear_bit(KNI_DEV_IN_USE_BIT_NUM, &knet->device_in_use);
-
- mutex_init(&knet->kni_kthread_lock);
-
- init_rwsem(&knet->kni_list_lock);
- INIT_LIST_HEAD(&knet->kni_list_head);
-
-#ifdef HAVE_SIMPLIFIED_PERNET_OPERATIONS
- return 0;
-#else
- ret = net_assign_generic(net, kni_net_id, knet);
- if (ret < 0)
- kfree(knet);
-
- return ret;
-#endif
-}
-
-static void __net_exit
-kni_exit_net(struct net *net)
-{
- struct kni_net *knet __maybe_unused;
-
- knet = net_generic(net, kni_net_id);
- mutex_destroy(&knet->kni_kthread_lock);
-
-#ifndef HAVE_SIMPLIFIED_PERNET_OPERATIONS
- kfree(knet);
-#endif
-}
-
-static struct pernet_operations kni_net_ops = {
- .init = kni_init_net,
- .exit = kni_exit_net,
-#ifdef HAVE_SIMPLIFIED_PERNET_OPERATIONS
- .id = &kni_net_id,
- .size = sizeof(struct kni_net),
-#endif
-};
-
-static int
-kni_thread_single(void *data)
-{
- struct kni_net *knet = data;
- int j;
- struct kni_dev *dev;
-
- while (!kthread_should_stop()) {
- down_read(&knet->kni_list_lock);
- for (j = 0; j < KNI_RX_LOOP_NUM; j++) {
- list_for_each_entry(dev, &knet->kni_list_head, list) {
- kni_net_rx(dev);
- kni_net_poll_resp(dev);
- }
- }
- up_read(&knet->kni_list_lock);
- /* reschedule out for a while */
- usleep_range(min_scheduling_interval, max_scheduling_interval);
- }
-
- return 0;
-}
-
-static int
-kni_thread_multiple(void *param)
-{
- int j;
- struct kni_dev *dev = param;
-
- while (!kthread_should_stop()) {
- for (j = 0; j < KNI_RX_LOOP_NUM; j++) {
- kni_net_rx(dev);
- kni_net_poll_resp(dev);
- }
- usleep_range(min_scheduling_interval, max_scheduling_interval);
- }
-
- return 0;
-}
-
-static int
-kni_open(struct inode *inode, struct file *file)
-{
- struct net *net = current->nsproxy->net_ns;
- struct kni_net *knet = net_generic(net, kni_net_id);
-
- /* kni device can be opened by one user only per netns */
- if (test_and_set_bit(KNI_DEV_IN_USE_BIT_NUM, &knet->device_in_use))
- return -EBUSY;
-
- file->private_data = get_net(net);
- pr_debug("/dev/kni opened\n");
-
- return 0;
-}
-
-static int
-kni_dev_remove(struct kni_dev *dev)
-{
- if (!dev)
- return -ENODEV;
-
- /*
- * The memory of kni device is allocated and released together
- * with net device. Release mbuf before freeing net device.
- */
- kni_net_release_fifo_phy(dev);
-
- if (dev->net_dev) {
- unregister_netdev(dev->net_dev);
- free_netdev(dev->net_dev);
- }
-
- return 0;
-}
-
-static int
-kni_release(struct inode *inode, struct file *file)
-{
- struct net *net = file->private_data;
- struct kni_net *knet = net_generic(net, kni_net_id);
- struct kni_dev *dev, *n;
-
- /* Stop kernel thread for single mode */
- if (multiple_kthread_on == 0) {
- mutex_lock(&knet->kni_kthread_lock);
- /* Stop kernel thread */
- if (knet->kni_kthread != NULL) {
- kthread_stop(knet->kni_kthread);
- knet->kni_kthread = NULL;
- }
- mutex_unlock(&knet->kni_kthread_lock);
- }
-
- down_write(&knet->kni_list_lock);
- list_for_each_entry_safe(dev, n, &knet->kni_list_head, list) {
- /* Stop kernel thread for multiple mode */
- if (multiple_kthread_on && dev->pthread != NULL) {
- kthread_stop(dev->pthread);
- dev->pthread = NULL;
- }
-
- list_del(&dev->list);
- kni_dev_remove(dev);
- }
- up_write(&knet->kni_list_lock);
-
- /* Clear the bit of device in use */
- clear_bit(KNI_DEV_IN_USE_BIT_NUM, &knet->device_in_use);
-
- put_net(net);
- pr_debug("/dev/kni closed\n");
-
- return 0;
-}
-
-static int
-kni_check_param(struct kni_dev *kni, struct rte_kni_device_info *dev)
-{
- if (!kni || !dev)
- return -1;
-
- /* Check if network name has been used */
- if (!strncmp(kni->name, dev->name, RTE_KNI_NAMESIZE)) {
- pr_err("KNI name %s duplicated\n", dev->name);
- return -1;
- }
-
- return 0;
-}
-
-static int
-kni_run_thread(struct kni_net *knet, struct kni_dev *kni, uint8_t force_bind)
-{
- /**
- * Create a new kernel thread for multiple mode, set its core affinity,
- * and finally wake it up.
- */
- if (multiple_kthread_on) {
- kni->pthread = kthread_create(kni_thread_multiple,
- (void *)kni, "kni_%s", kni->name);
- if (IS_ERR(kni->pthread)) {
- kni_dev_remove(kni);
- return -ECANCELED;
- }
-
- if (force_bind)
- kthread_bind(kni->pthread, kni->core_id);
- wake_up_process(kni->pthread);
- } else {
- mutex_lock(&knet->kni_kthread_lock);
-
- if (knet->kni_kthread == NULL) {
- knet->kni_kthread = kthread_create(kni_thread_single,
- (void *)knet, "kni_single");
- if (IS_ERR(knet->kni_kthread)) {
- mutex_unlock(&knet->kni_kthread_lock);
- kni_dev_remove(kni);
- return -ECANCELED;
- }
-
- if (force_bind)
- kthread_bind(knet->kni_kthread, kni->core_id);
- wake_up_process(knet->kni_kthread);
- }
-
- mutex_unlock(&knet->kni_kthread_lock);
- }
-
- return 0;
-}
-
-static int
-kni_ioctl_create(struct net *net, uint32_t ioctl_num,
- unsigned long ioctl_param)
-{
- struct kni_net *knet = net_generic(net, kni_net_id);
- int ret;
- struct rte_kni_device_info dev_info;
- struct net_device *net_dev = NULL;
- struct kni_dev *kni, *dev, *n;
-
- pr_info("Creating kni...\n");
- /* Check the buffer size, to avoid warning */
- if (_IOC_SIZE(ioctl_num) > sizeof(dev_info))
- return -EINVAL;
-
- /* Copy kni info from user space */
- if (copy_from_user(&dev_info, (void *)ioctl_param, sizeof(dev_info)))
- return -EFAULT;
-
- /* Check if name is zero-ended */
- if (strnlen(dev_info.name, sizeof(dev_info.name)) == sizeof(dev_info.name)) {
- pr_err("kni.name not zero-terminated");
- return -EINVAL;
- }
-
- /**
- * Check if the cpu core id is valid for binding.
- */
- if (dev_info.force_bind && !cpu_online(dev_info.core_id)) {
- pr_err("cpu %u is not online\n", dev_info.core_id);
- return -EINVAL;
- }
-
- /* Check if it has been created */
- down_read(&knet->kni_list_lock);
- list_for_each_entry_safe(dev, n, &knet->kni_list_head, list) {
- if (kni_check_param(dev, &dev_info) < 0) {
- up_read(&knet->kni_list_lock);
- return -EINVAL;
- }
- }
- up_read(&knet->kni_list_lock);
-
- net_dev = alloc_netdev(sizeof(struct kni_dev), dev_info.name,
-#ifdef NET_NAME_USER
- NET_NAME_USER,
-#endif
- kni_net_init);
- if (net_dev == NULL) {
- pr_err("error allocating device \"%s\"\n", dev_info.name);
- return -EBUSY;
- }
-
- dev_net_set(net_dev, net);
-
- kni = netdev_priv(net_dev);
-
- kni->net_dev = net_dev;
- kni->core_id = dev_info.core_id;
- strncpy(kni->name, dev_info.name, RTE_KNI_NAMESIZE);
-
- /* Translate user space info into kernel space info */
- if (dev_info.iova_mode) {
-#ifdef HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
- kni->tx_q = iova_to_kva(current, dev_info.tx_phys);
- kni->rx_q = iova_to_kva(current, dev_info.rx_phys);
- kni->alloc_q = iova_to_kva(current, dev_info.alloc_phys);
- kni->free_q = iova_to_kva(current, dev_info.free_phys);
-
- kni->req_q = iova_to_kva(current, dev_info.req_phys);
- kni->resp_q = iova_to_kva(current, dev_info.resp_phys);
- kni->sync_va = dev_info.sync_va;
- kni->sync_kva = iova_to_kva(current, dev_info.sync_phys);
- kni->usr_tsk = current;
- kni->iova_mode = 1;
-#else
- pr_err("KNI module does not support IOVA to VA translation\n");
- return -EINVAL;
-#endif
- } else {
-
- kni->tx_q = phys_to_virt(dev_info.tx_phys);
- kni->rx_q = phys_to_virt(dev_info.rx_phys);
- kni->alloc_q = phys_to_virt(dev_info.alloc_phys);
- kni->free_q = phys_to_virt(dev_info.free_phys);
-
- kni->req_q = phys_to_virt(dev_info.req_phys);
- kni->resp_q = phys_to_virt(dev_info.resp_phys);
- kni->sync_va = dev_info.sync_va;
- kni->sync_kva = phys_to_virt(dev_info.sync_phys);
- kni->iova_mode = 0;
- }
-
- kni->mbuf_size = dev_info.mbuf_size;
-
- pr_debug("tx_phys: 0x%016llx, tx_q addr: 0x%p\n",
- (unsigned long long) dev_info.tx_phys, kni->tx_q);
- pr_debug("rx_phys: 0x%016llx, rx_q addr: 0x%p\n",
- (unsigned long long) dev_info.rx_phys, kni->rx_q);
- pr_debug("alloc_phys: 0x%016llx, alloc_q addr: 0x%p\n",
- (unsigned long long) dev_info.alloc_phys, kni->alloc_q);
- pr_debug("free_phys: 0x%016llx, free_q addr: 0x%p\n",
- (unsigned long long) dev_info.free_phys, kni->free_q);
- pr_debug("req_phys: 0x%016llx, req_q addr: 0x%p\n",
- (unsigned long long) dev_info.req_phys, kni->req_q);
- pr_debug("resp_phys: 0x%016llx, resp_q addr: 0x%p\n",
- (unsigned long long) dev_info.resp_phys, kni->resp_q);
- pr_debug("mbuf_size: %u\n", kni->mbuf_size);
-
- /* if user has provided a valid mac address */
- if (is_valid_ether_addr(dev_info.mac_addr)) {
-#ifdef HAVE_ETH_HW_ADDR_SET
- eth_hw_addr_set(net_dev, dev_info.mac_addr);
-#else
- memcpy(net_dev->dev_addr, dev_info.mac_addr, ETH_ALEN);
-#endif
- } else {
- /* Assign random MAC address. */
- eth_hw_addr_random(net_dev);
- }
-
- if (dev_info.mtu)
- net_dev->mtu = dev_info.mtu;
-#ifdef HAVE_MAX_MTU_PARAM
- net_dev->max_mtu = net_dev->mtu;
-
- if (dev_info.min_mtu)
- net_dev->min_mtu = dev_info.min_mtu;
-
- if (dev_info.max_mtu)
- net_dev->max_mtu = dev_info.max_mtu;
-#endif
-
- ret = register_netdev(net_dev);
- if (ret) {
- pr_err("error %i registering device \"%s\"\n",
- ret, dev_info.name);
- kni->net_dev = NULL;
- kni_dev_remove(kni);
- free_netdev(net_dev);
- return -ENODEV;
- }
-
- netif_carrier_off(net_dev);
-
- ret = kni_run_thread(knet, kni, dev_info.force_bind);
- if (ret != 0)
- return ret;
-
- down_write(&knet->kni_list_lock);
- list_add(&kni->list, &knet->kni_list_head);
- up_write(&knet->kni_list_lock);
-
- return 0;
-}
-
-static int
-kni_ioctl_release(struct net *net, uint32_t ioctl_num,
- unsigned long ioctl_param)
-{
- struct kni_net *knet = net_generic(net, kni_net_id);
- int ret = -EINVAL;
- struct kni_dev *dev, *n;
- struct rte_kni_device_info dev_info;
-
- if (_IOC_SIZE(ioctl_num) > sizeof(dev_info))
- return -EINVAL;
-
- if (copy_from_user(&dev_info, (void *)ioctl_param, sizeof(dev_info)))
- return -EFAULT;
-
- /* Release the network device according to its name */
- if (strlen(dev_info.name) == 0)
- return -EINVAL;
-
- down_write(&knet->kni_list_lock);
- list_for_each_entry_safe(dev, n, &knet->kni_list_head, list) {
- if (strncmp(dev->name, dev_info.name, RTE_KNI_NAMESIZE) != 0)
- continue;
-
- if (multiple_kthread_on && dev->pthread != NULL) {
- kthread_stop(dev->pthread);
- dev->pthread = NULL;
- }
-
- list_del(&dev->list);
- kni_dev_remove(dev);
- ret = 0;
- break;
- }
- up_write(&knet->kni_list_lock);
- pr_info("%s release kni named %s\n",
- (ret == 0 ? "Successfully" : "Unsuccessfully"), dev_info.name);
-
- return ret;
-}
-
-static long
-kni_ioctl(struct file *file, unsigned int ioctl_num, unsigned long ioctl_param)
-{
- long ret = -EINVAL;
- struct net *net = current->nsproxy->net_ns;
-
- pr_debug("IOCTL num=0x%0x param=0x%0lx\n", ioctl_num, ioctl_param);
-
- /*
- * Switch according to the ioctl called
- */
- switch (_IOC_NR(ioctl_num)) {
- case _IOC_NR(RTE_KNI_IOCTL_TEST):
- /* For test only, not used */
- break;
- case _IOC_NR(RTE_KNI_IOCTL_CREATE):
- ret = kni_ioctl_create(net, ioctl_num, ioctl_param);
- break;
- case _IOC_NR(RTE_KNI_IOCTL_RELEASE):
- ret = kni_ioctl_release(net, ioctl_num, ioctl_param);
- break;
- default:
- pr_debug("IOCTL default\n");
- break;
- }
-
- return ret;
-}
-
-static long
-kni_compat_ioctl(struct file *file, unsigned int ioctl_num,
- unsigned long ioctl_param)
-{
- /* 32 bits app on 64 bits OS to be supported later */
- pr_debug("Not implemented.\n");
-
- return -EINVAL;
-}
-
-static const struct file_operations kni_fops = {
- .owner = THIS_MODULE,
- .open = kni_open,
- .release = kni_release,
- .unlocked_ioctl = kni_ioctl,
- .compat_ioctl = kni_compat_ioctl,
-};
-
-static struct miscdevice kni_misc = {
- .minor = MISC_DYNAMIC_MINOR,
- .name = KNI_DEVICE,
- .fops = &kni_fops,
-};
-
-static int __init
-kni_parse_kthread_mode(void)
-{
- if (!kthread_mode)
- return 0;
-
- if (strcmp(kthread_mode, "single") == 0)
- return 0;
- else if (strcmp(kthread_mode, "multiple") == 0)
- multiple_kthread_on = 1;
- else
- return -1;
-
- return 0;
-}
-
-static int __init
-kni_parse_carrier_state(void)
-{
- if (!carrier) {
- kni_dflt_carrier = 0;
- return 0;
- }
-
- if (strcmp(carrier, "off") == 0)
- kni_dflt_carrier = 0;
- else if (strcmp(carrier, "on") == 0)
- kni_dflt_carrier = 1;
- else
- return -1;
-
- return 0;
-}
-
-static int __init
-kni_parse_bifurcated_support(void)
-{
- if (!enable_bifurcated) {
- bifurcated_support = 0;
- return 0;
- }
-
- if (strcmp(enable_bifurcated, "on") == 0)
- bifurcated_support = 1;
- else
- return -1;
-
- return 0;
-}
-
-static int __init
-kni_init(void)
-{
- int rc;
-
- if (kni_parse_kthread_mode() < 0) {
- pr_err("Invalid parameter for kthread_mode\n");
- return -EINVAL;
- }
-
- if (multiple_kthread_on == 0)
- pr_debug("Single kernel thread for all KNI devices\n");
- else
- pr_debug("Multiple kernel thread mode enabled\n");
-
- if (kni_parse_carrier_state() < 0) {
- pr_err("Invalid parameter for carrier\n");
- return -EINVAL;
- }
-
- if (kni_dflt_carrier == 0)
- pr_debug("Default carrier state set to off.\n");
- else
- pr_debug("Default carrier state set to on.\n");
-
- if (kni_parse_bifurcated_support() < 0) {
- pr_err("Invalid parameter for bifurcated support\n");
- return -EINVAL;
- }
- if (bifurcated_support == 1)
- pr_debug("bifurcated support is enabled.\n");
-
- if (min_scheduling_interval < 0 || max_scheduling_interval < 0 ||
- min_scheduling_interval > KNI_KTHREAD_MAX_RESCHEDULE_INTERVAL ||
- max_scheduling_interval > KNI_KTHREAD_MAX_RESCHEDULE_INTERVAL ||
- min_scheduling_interval >= max_scheduling_interval) {
- pr_err("Invalid parameters for scheduling interval\n");
- return -EINVAL;
- }
-
-#ifdef HAVE_SIMPLIFIED_PERNET_OPERATIONS
- rc = register_pernet_subsys(&kni_net_ops);
-#else
- rc = register_pernet_gen_subsys(&kni_net_id, &kni_net_ops);
-#endif
- if (rc)
- return -EPERM;
-
- rc = misc_register(&kni_misc);
- if (rc != 0) {
- pr_err("Misc registration failed\n");
- goto out;
- }
-
- /* Configure the lo mode according to the input parameter */
- kni_net_config_lo_mode(lo_mode);
-
- return 0;
-
-out:
-#ifdef HAVE_SIMPLIFIED_PERNET_OPERATIONS
- unregister_pernet_subsys(&kni_net_ops);
-#else
- unregister_pernet_gen_subsys(kni_net_id, &kni_net_ops);
-#endif
- return rc;
-}
-
-static void __exit
-kni_exit(void)
-{
- misc_deregister(&kni_misc);
-#ifdef HAVE_SIMPLIFIED_PERNET_OPERATIONS
- unregister_pernet_subsys(&kni_net_ops);
-#else
- unregister_pernet_gen_subsys(kni_net_id, &kni_net_ops);
-#endif
-}
-
-module_init(kni_init);
-module_exit(kni_exit);
-
-module_param(lo_mode, charp, 0644);
-MODULE_PARM_DESC(lo_mode,
-"KNI loopback mode (default=lo_mode_none):\n"
-"\t\tlo_mode_none Kernel loopback disabled\n"
-"\t\tlo_mode_fifo Enable kernel loopback with fifo\n"
-"\t\tlo_mode_fifo_skb Enable kernel loopback with fifo and skb buffer\n"
-"\t\t"
-);
-
-module_param(kthread_mode, charp, 0644);
-MODULE_PARM_DESC(kthread_mode,
-"Kernel thread mode (default=single):\n"
-"\t\tsingle Single kernel thread mode enabled.\n"
-"\t\tmultiple Multiple kernel thread mode enabled.\n"
-"\t\t"
-);
-
-module_param(carrier, charp, 0644);
-MODULE_PARM_DESC(carrier,
-"Default carrier state for KNI interface (default=off):\n"
-"\t\toff Interfaces will be created with carrier state set to off.\n"
-"\t\ton Interfaces will be created with carrier state set to on.\n"
-"\t\t"
-);
-
-module_param(enable_bifurcated, charp, 0644);
-MODULE_PARM_DESC(enable_bifurcated,
-"Enable request processing support for bifurcated drivers, "
-"which means releasing rtnl_lock before calling userspace callback and "
-"supporting async requests (default=off):\n"
-"\t\ton Enable request processing support for bifurcated drivers.\n"
-"\t\t"
-);
-
-module_param(min_scheduling_interval, long, 0644);
-MODULE_PARM_DESC(min_scheduling_interval,
-"KNI thread min scheduling interval (default=100 microseconds)"
-);
-
-module_param(max_scheduling_interval, long, 0644);
-MODULE_PARM_DESC(max_scheduling_interval,
-"KNI thread max scheduling interval (default=200 microseconds)"
-);
diff --git a/kernel/linux/kni/kni_net.c b/kernel/linux/kni/kni_net.c
deleted file mode 100644
index 779ee3451a4c..000000000000
--- a/kernel/linux/kni/kni_net.c
+++ /dev/null
@@ -1,878 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Copyright(c) 2010-2014 Intel Corporation.
- */
-
-/*
- * This code is inspired from the book "Linux Device Drivers" by
- * Alessandro Rubini and Jonathan Corbet, published by O'Reilly & Associates
- */
-
-#include <linux/device.h>
-#include <linux/module.h>
-#include <linux/version.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h> /* eth_type_trans */
-#include <linux/ethtool.h>
-#include <linux/skbuff.h>
-#include <linux/kthread.h>
-#include <linux/delay.h>
-#include <linux/rtnetlink.h>
-
-#include <rte_kni_common.h>
-#include <kni_fifo.h>
-
-#include "compat.h"
-#include "kni_dev.h"
-
-#define WD_TIMEOUT 5 /*jiffies */
-
-#define KNI_WAIT_RESPONSE_TIMEOUT 300 /* 3 seconds */
-
-/* typedef for rx function */
-typedef void (*kni_net_rx_t)(struct kni_dev *kni);
-
-static void kni_net_rx_normal(struct kni_dev *kni);
-
-/* kni rx function pointer, with default to normal rx */
-static kni_net_rx_t kni_net_rx_func = kni_net_rx_normal;
-
-#ifdef HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
-/* iova to kernel virtual address */
-static inline void *
-iova2kva(struct kni_dev *kni, void *iova)
-{
- return phys_to_virt(iova_to_phys(kni->usr_tsk, (unsigned long)iova));
-}
-
-static inline void *
-iova2data_kva(struct kni_dev *kni, struct rte_kni_mbuf *m)
-{
- return phys_to_virt(iova_to_phys(kni->usr_tsk, m->buf_iova) +
- m->data_off);
-}
-#endif
-
-/* physical address to kernel virtual address */
-static void *
-pa2kva(void *pa)
-{
- return phys_to_virt((unsigned long)pa);
-}
-
-/* physical address to virtual address */
-static void *
-pa2va(void *pa, struct rte_kni_mbuf *m)
-{
- void *va;
-
- va = (void *)((unsigned long)pa +
- (unsigned long)m->buf_addr -
- (unsigned long)m->buf_iova);
- return va;
-}
-
-/* mbuf data kernel virtual address from mbuf kernel virtual address */
-static void *
-kva2data_kva(struct rte_kni_mbuf *m)
-{
- return phys_to_virt(m->buf_iova + m->data_off);
-}
-
-static inline void *
-get_kva(struct kni_dev *kni, void *pa)
-{
-#ifdef HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
- if (kni->iova_mode == 1)
- return iova2kva(kni, pa);
-#endif
- return pa2kva(pa);
-}
-
-static inline void *
-get_data_kva(struct kni_dev *kni, void *pkt_kva)
-{
-#ifdef HAVE_IOVA_TO_KVA_MAPPING_SUPPORT
- if (kni->iova_mode == 1)
- return iova2data_kva(kni, pkt_kva);
-#endif
- return kva2data_kva(pkt_kva);
-}
-
-/*
- * It can be called to process the request.
- */
-static int
-kni_net_process_request(struct net_device *dev, struct rte_kni_request *req)
-{
- struct kni_dev *kni = netdev_priv(dev);
- int ret = -1;
- void *resp_va;
- uint32_t num;
- int ret_val;
-
- ASSERT_RTNL();
-
- if (bifurcated_support) {
- /* If we need to wait and RTNL mutex is held
- * drop the mutex and hold reference to keep device
- */
- if (req->async == 0) {
- dev_hold(dev);
- rtnl_unlock();
- }
- }
-
- mutex_lock(&kni->sync_lock);
-
- /* Construct data */
- memcpy(kni->sync_kva, req, sizeof(struct rte_kni_request));
- num = kni_fifo_put(kni->req_q, &kni->sync_va, 1);
- if (num < 1) {
- pr_err("Cannot send to req_q\n");
- ret = -EBUSY;
- goto fail;
- }
-
- if (bifurcated_support) {
- /* No result available since request is handled
- * asynchronously. set response to success.
- */
- if (req->async != 0) {
- req->result = 0;
- goto async;
- }
- }
-
- ret_val = wait_event_interruptible_timeout(kni->wq,
- kni_fifo_count(kni->resp_q), 3 * HZ);
- if (signal_pending(current) || ret_val <= 0) {
- ret = -ETIME;
- goto fail;
- }
- num = kni_fifo_get(kni->resp_q, (void **)&resp_va, 1);
- if (num != 1 || resp_va != kni->sync_va) {
- /* This should never happen */
- pr_err("No data in resp_q\n");
- ret = -ENODATA;
- goto fail;
- }
-
- memcpy(req, kni->sync_kva, sizeof(struct rte_kni_request));
-async:
- ret = 0;
-
-fail:
- mutex_unlock(&kni->sync_lock);
- if (bifurcated_support) {
- if (req->async == 0) {
- rtnl_lock();
- dev_put(dev);
- }
- }
- return ret;
-}
-
-/*
- * Open and close
- */
-static int
-kni_net_open(struct net_device *dev)
-{
- int ret;
- struct rte_kni_request req;
-
- netif_start_queue(dev);
- if (kni_dflt_carrier == 1)
- netif_carrier_on(dev);
- else
- netif_carrier_off(dev);
-
- memset(&req, 0, sizeof(req));
- req.req_id = RTE_KNI_REQ_CFG_NETWORK_IF;
-
- /* Setting if_up to non-zero means up */
- req.if_up = 1;
- ret = kni_net_process_request(dev, &req);
-
- return (ret == 0) ? req.result : ret;
-}
-
-static int
-kni_net_release(struct net_device *dev)
-{
- int ret;
- struct rte_kni_request req;
-
- netif_stop_queue(dev); /* can't transmit any more */
- netif_carrier_off(dev);
-
- memset(&req, 0, sizeof(req));
- req.req_id = RTE_KNI_REQ_CFG_NETWORK_IF;
-
- /* Setting if_up to 0 means down */
- req.if_up = 0;
-
- if (bifurcated_support) {
- /* request async because of the deadlock problem */
- req.async = 1;
- }
-
- ret = kni_net_process_request(dev, &req);
-
- return (ret == 0) ? req.result : ret;
-}
-
-static void
-kni_fifo_trans_pa2va(struct kni_dev *kni,
- struct rte_kni_fifo *src_pa, struct rte_kni_fifo *dst_va)
-{
- uint32_t ret, i, num_dst, num_rx;
- struct rte_kni_mbuf *kva, *prev_kva;
- int nb_segs;
- int kva_nb_segs;
-
- do {
- num_dst = kni_fifo_free_count(dst_va);
- if (num_dst == 0)
- return;
-
- num_rx = min_t(uint32_t, num_dst, MBUF_BURST_SZ);
-
- num_rx = kni_fifo_get(src_pa, kni->pa, num_rx);
- if (num_rx == 0)
- return;
-
- for (i = 0; i < num_rx; i++) {
- kva = get_kva(kni, kni->pa[i]);
- kni->va[i] = pa2va(kni->pa[i], kva);
-
- kva_nb_segs = kva->nb_segs;
- for (nb_segs = 0; nb_segs < kva_nb_segs; nb_segs++) {
- if (!kva->next)
- break;
-
- prev_kva = kva;
- kva = get_kva(kni, kva->next);
- /* Convert physical address to virtual address */
- prev_kva->next = pa2va(prev_kva->next, kva);
- }
- }
-
- ret = kni_fifo_put(dst_va, kni->va, num_rx);
- if (ret != num_rx) {
- /* Failing should not happen */
- pr_err("Fail to enqueue entries into dst_va\n");
- return;
- }
- } while (1);
-}
-
-/* Try to release mbufs when kni release */
-void kni_net_release_fifo_phy(struct kni_dev *kni)
-{
- /* release rx_q first, because it can't release in userspace */
- kni_fifo_trans_pa2va(kni, kni->rx_q, kni->free_q);
- /* release alloc_q for speeding up kni release in userspace */
- kni_fifo_trans_pa2va(kni, kni->alloc_q, kni->free_q);
-}
-
-/*
- * Configuration changes (passed on by ifconfig)
- */
-static int
-kni_net_config(struct net_device *dev, struct ifmap *map)
-{
- if (dev->flags & IFF_UP) /* can't act on a running interface */
- return -EBUSY;
-
- /* ignore other fields */
- return 0;
-}
-
-/*
- * Transmit a packet (called by the kernel)
- */
-static int
-kni_net_tx(struct sk_buff *skb, struct net_device *dev)
-{
- int len = 0;
- uint32_t ret;
- struct kni_dev *kni = netdev_priv(dev);
- struct rte_kni_mbuf *pkt_kva = NULL;
- void *pkt_pa = NULL;
- void *pkt_va = NULL;
-
- /* save the timestamp */
-#ifdef HAVE_TRANS_START_HELPER
- netif_trans_update(dev);
-#else
- dev->trans_start = jiffies;
-#endif
-
- /* Check if the length of skb is less than mbuf size */
- if (skb->len > kni->mbuf_size)
- goto drop;
-
- /**
- * Check if it has at least one free entry in tx_q and
- * one entry in alloc_q.
- */
- if (kni_fifo_free_count(kni->tx_q) == 0 ||
- kni_fifo_count(kni->alloc_q) == 0) {
- /**
- * If no free entry in tx_q or no entry in alloc_q,
- * drops skb and goes out.
- */
- goto drop;
- }
-
- /* dequeue a mbuf from alloc_q */
- ret = kni_fifo_get(kni->alloc_q, &pkt_pa, 1);
- if (likely(ret == 1)) {
- void *data_kva;
-
- pkt_kva = get_kva(kni, pkt_pa);
- data_kva = get_data_kva(kni, pkt_kva);
- pkt_va = pa2va(pkt_pa, pkt_kva);
-
- len = skb->len;
- memcpy(data_kva, skb->data, len);
- if (unlikely(len < ETH_ZLEN)) {
- memset(data_kva + len, 0, ETH_ZLEN - len);
- len = ETH_ZLEN;
- }
- pkt_kva->pkt_len = len;
- pkt_kva->data_len = len;
-
- /* enqueue mbuf into tx_q */
- ret = kni_fifo_put(kni->tx_q, &pkt_va, 1);
- if (unlikely(ret != 1)) {
- /* Failing should not happen */
- pr_err("Fail to enqueue mbuf into tx_q\n");
- goto drop;
- }
- } else {
- /* Failing should not happen */
- pr_err("Fail to dequeue mbuf from alloc_q\n");
- goto drop;
- }
-
- /* Free skb and update statistics */
- dev_kfree_skb(skb);
- dev->stats.tx_bytes += len;
- dev->stats.tx_packets++;
-
- return NETDEV_TX_OK;
-
-drop:
- /* Free skb and update statistics */
- dev_kfree_skb(skb);
- dev->stats.tx_dropped++;
-
- return NETDEV_TX_OK;
-}
-
-/*
- * RX: normal working mode
- */
-static void
-kni_net_rx_normal(struct kni_dev *kni)
-{
- uint32_t ret;
- uint32_t len;
- uint32_t i, num_rx, num_fq;
- struct rte_kni_mbuf *kva, *prev_kva;
- void *data_kva;
- struct sk_buff *skb;
- struct net_device *dev = kni->net_dev;
-
- /* Get the number of free entries in free_q */
- num_fq = kni_fifo_free_count(kni->free_q);
- if (num_fq == 0) {
- /* No room on the free_q, bail out */
- return;
- }
-
- /* Calculate the number of entries to dequeue from rx_q */
- num_rx = min_t(uint32_t, num_fq, MBUF_BURST_SZ);
-
- /* Burst dequeue from rx_q */
- num_rx = kni_fifo_get(kni->rx_q, kni->pa, num_rx);
- if (num_rx == 0)
- return;
-
- /* Transfer received packets to netif */
- for (i = 0; i < num_rx; i++) {
- kva = get_kva(kni, kni->pa[i]);
- len = kva->pkt_len;
- data_kva = get_data_kva(kni, kva);
- kni->va[i] = pa2va(kni->pa[i], kva);
-
- skb = netdev_alloc_skb(dev, len);
- if (!skb) {
- /* Update statistics */
- dev->stats.rx_dropped++;
- continue;
- }
-
- if (kva->nb_segs == 1) {
- memcpy(skb_put(skb, len), data_kva, len);
- } else {
- int nb_segs;
- int kva_nb_segs = kva->nb_segs;
-
- for (nb_segs = 0; nb_segs < kva_nb_segs; nb_segs++) {
- memcpy(skb_put(skb, kva->data_len),
- data_kva, kva->data_len);
-
- if (!kva->next)
- break;
-
- prev_kva = kva;
- kva = get_kva(kni, kva->next);
- data_kva = kva2data_kva(kva);
- /* Convert physical address to virtual address */
- prev_kva->next = pa2va(prev_kva->next, kva);
- }
- }
-
- skb->protocol = eth_type_trans(skb, dev);
- skb->ip_summed = CHECKSUM_UNNECESSARY;
-
- /* Call netif interface */
-#ifdef HAVE_NETIF_RX_NI
- netif_rx_ni(skb);
-#else
- netif_rx(skb);
-#endif
-
- /* Update statistics */
- dev->stats.rx_bytes += len;
- dev->stats.rx_packets++;
- }
-
- /* Burst enqueue mbufs into free_q */
- ret = kni_fifo_put(kni->free_q, kni->va, num_rx);
- if (ret != num_rx)
- /* Failing should not happen */
- pr_err("Fail to enqueue entries into free_q\n");
-}
-
-/*
- * RX: loopback with enqueue/dequeue fifos.
- */
-static void
-kni_net_rx_lo_fifo(struct kni_dev *kni)
-{
- uint32_t ret;
- uint32_t len;
- uint32_t i, num, num_rq, num_tq, num_aq, num_fq;
- struct rte_kni_mbuf *kva, *next_kva;
- void *data_kva;
- struct rte_kni_mbuf *alloc_kva;
- void *alloc_data_kva;
- struct net_device *dev = kni->net_dev;
-
- /* Get the number of entries in rx_q */
- num_rq = kni_fifo_count(kni->rx_q);
-
- /* Get the number of free entries in tx_q */
- num_tq = kni_fifo_free_count(kni->tx_q);
-
- /* Get the number of entries in alloc_q */
- num_aq = kni_fifo_count(kni->alloc_q);
-
- /* Get the number of free entries in free_q */
- num_fq = kni_fifo_free_count(kni->free_q);
-
- /* Calculate the number of entries to be dequeued from rx_q */
- num = min(num_rq, num_tq);
- num = min(num, num_aq);
- num = min(num, num_fq);
- num = min_t(uint32_t, num, MBUF_BURST_SZ);
-
- /* Return if no entry to dequeue from rx_q */
- if (num == 0)
- return;
-
- /* Burst dequeue from rx_q */
- ret = kni_fifo_get(kni->rx_q, kni->pa, num);
- if (ret == 0)
- return; /* Failing should not happen */
-
- /* Dequeue entries from alloc_q */
- ret = kni_fifo_get(kni->alloc_q, kni->alloc_pa, num);
- if (ret) {
- num = ret;
- /* Copy mbufs */
- for (i = 0; i < num; i++) {
- kva = get_kva(kni, kni->pa[i]);
- len = kva->data_len;
- data_kva = get_data_kva(kni, kva);
- kni->va[i] = pa2va(kni->pa[i], kva);
-
- while (kva->next) {
- next_kva = get_kva(kni, kva->next);
- /* Convert physical address to virtual address */
- kva->next = pa2va(kva->next, next_kva);
- kva = next_kva;
- }
-
- alloc_kva = get_kva(kni, kni->alloc_pa[i]);
- alloc_data_kva = get_data_kva(kni, alloc_kva);
- kni->alloc_va[i] = pa2va(kni->alloc_pa[i], alloc_kva);
-
- memcpy(alloc_data_kva, data_kva, len);
- alloc_kva->pkt_len = len;
- alloc_kva->data_len = len;
-
- dev->stats.tx_bytes += len;
- dev->stats.rx_bytes += len;
- }
-
- /* Burst enqueue mbufs into tx_q */
- ret = kni_fifo_put(kni->tx_q, kni->alloc_va, num);
- if (ret != num)
- /* Failing should not happen */
- pr_err("Fail to enqueue mbufs into tx_q\n");
- }
-
- /* Burst enqueue mbufs into free_q */
- ret = kni_fifo_put(kni->free_q, kni->va, num);
- if (ret != num)
- /* Failing should not happen */
- pr_err("Fail to enqueue mbufs into free_q\n");
-
- /**
- * Update statistic, and enqueue/dequeue failure is impossible,
- * as all queues are checked at first.
- */
- dev->stats.tx_packets += num;
- dev->stats.rx_packets += num;
-}
-
-/*
- * RX: loopback with enqueue/dequeue fifos and sk buffer copies.
- */
-static void
-kni_net_rx_lo_fifo_skb(struct kni_dev *kni)
-{
- uint32_t ret;
- uint32_t len;
- uint32_t i, num_rq, num_fq, num;
- struct rte_kni_mbuf *kva, *prev_kva;
- void *data_kva;
- struct sk_buff *skb;
- struct net_device *dev = kni->net_dev;
-
- /* Get the number of entries in rx_q */
- num_rq = kni_fifo_count(kni->rx_q);
-
- /* Get the number of free entries in free_q */
- num_fq = kni_fifo_free_count(kni->free_q);
-
- /* Calculate the number of entries to dequeue from rx_q */
- num = min(num_rq, num_fq);
- num = min_t(uint32_t, num, MBUF_BURST_SZ);
-
- /* Return if no entry to dequeue from rx_q */
- if (num == 0)
- return;
-
- /* Burst dequeue mbufs from rx_q */
- ret = kni_fifo_get(kni->rx_q, kni->pa, num);
- if (ret == 0)
- return;
-
- /* Copy mbufs to sk buffer and then call tx interface */
- for (i = 0; i < num; i++) {
- kva = get_kva(kni, kni->pa[i]);
- len = kva->pkt_len;
- data_kva = get_data_kva(kni, kva);
- kni->va[i] = pa2va(kni->pa[i], kva);
-
- skb = netdev_alloc_skb(dev, len);
- if (skb) {
- memcpy(skb_put(skb, len), data_kva, len);
- skb->ip_summed = CHECKSUM_UNNECESSARY;
- dev_kfree_skb(skb);
- }
-
- /* Simulate real usage, allocate/copy skb twice */
- skb = netdev_alloc_skb(dev, len);
- if (skb == NULL) {
- dev->stats.rx_dropped++;
- continue;
- }
-
- if (kva->nb_segs == 1) {
- memcpy(skb_put(skb, len), data_kva, len);
- } else {
- int nb_segs;
- int kva_nb_segs = kva->nb_segs;
-
- for (nb_segs = 0; nb_segs < kva_nb_segs; nb_segs++) {
- memcpy(skb_put(skb, kva->data_len),
- data_kva, kva->data_len);
-
- if (!kva->next)
- break;
-
- prev_kva = kva;
- kva = get_kva(kni, kva->next);
- data_kva = get_data_kva(kni, kva);
- /* Convert physical address to virtual address */
- prev_kva->next = pa2va(prev_kva->next, kva);
- }
- }
-
- skb->ip_summed = CHECKSUM_UNNECESSARY;
-
- dev->stats.rx_bytes += len;
- dev->stats.rx_packets++;
-
- /* call tx interface */
- kni_net_tx(skb, dev);
- }
-
- /* enqueue all the mbufs from rx_q into free_q */
- ret = kni_fifo_put(kni->free_q, kni->va, num);
- if (ret != num)
- /* Failing should not happen */
- pr_err("Fail to enqueue mbufs into free_q\n");
-}
-
-/* rx interface */
-void
-kni_net_rx(struct kni_dev *kni)
-{
- /**
- * It doesn't need to check if it is NULL pointer,
- * as it has a default value
- */
- (*kni_net_rx_func)(kni);
-}
-
-/*
- * Deal with a transmit timeout.
- */
-#ifdef HAVE_TX_TIMEOUT_TXQUEUE
-static void
-kni_net_tx_timeout(struct net_device *dev, unsigned int txqueue)
-#else
-static void
-kni_net_tx_timeout(struct net_device *dev)
-#endif
-{
- pr_debug("Transmit timeout at %ld, latency %ld\n", jiffies,
- jiffies - dev_trans_start(dev));
-
- dev->stats.tx_errors++;
- netif_wake_queue(dev);
-}
-
-static int
-kni_net_change_mtu(struct net_device *dev, int new_mtu)
-{
- int ret;
- struct rte_kni_request req;
-
- pr_debug("kni_net_change_mtu new mtu %d to be set\n", new_mtu);
-
- memset(&req, 0, sizeof(req));
- req.req_id = RTE_KNI_REQ_CHANGE_MTU;
- req.new_mtu = new_mtu;
- ret = kni_net_process_request(dev, &req);
- if (ret == 0 && req.result == 0)
- dev->mtu = new_mtu;
-
- return (ret == 0) ? req.result : ret;
-}
-
-static void
-kni_net_change_rx_flags(struct net_device *netdev, int flags)
-{
- struct rte_kni_request req;
-
- memset(&req, 0, sizeof(req));
-
- if (flags & IFF_ALLMULTI) {
- req.req_id = RTE_KNI_REQ_CHANGE_ALLMULTI;
-
- if (netdev->flags & IFF_ALLMULTI)
- req.allmulti = 1;
- else
- req.allmulti = 0;
- }
-
- if (flags & IFF_PROMISC) {
- req.req_id = RTE_KNI_REQ_CHANGE_PROMISC;
-
- if (netdev->flags & IFF_PROMISC)
- req.promiscusity = 1;
- else
- req.promiscusity = 0;
- }
-
- kni_net_process_request(netdev, &req);
-}
-
-/*
- * Checks if the user space application provided the resp message
- */
-void
-kni_net_poll_resp(struct kni_dev *kni)
-{
- if (kni_fifo_count(kni->resp_q))
- wake_up_interruptible(&kni->wq);
-}
-
-/*
- * Fill the eth header
- */
-static int
-kni_net_header(struct sk_buff *skb, struct net_device *dev,
- unsigned short type, const void *daddr,
- const void *saddr, uint32_t len)
-{
- struct ethhdr *eth = (struct ethhdr *) skb_push(skb, ETH_HLEN);
-
- memcpy(eth->h_source, saddr ? saddr : dev->dev_addr, dev->addr_len);
- memcpy(eth->h_dest, daddr ? daddr : dev->dev_addr, dev->addr_len);
- eth->h_proto = htons(type);
-
- return dev->hard_header_len;
-}
-
-/*
- * Re-fill the eth header
- */
-#ifdef HAVE_REBUILD_HEADER
-static int
-kni_net_rebuild_header(struct sk_buff *skb)
-{
- struct net_device *dev = skb->dev;
- struct ethhdr *eth = (struct ethhdr *) skb->data;
-
- memcpy(eth->h_source, dev->dev_addr, dev->addr_len);
- memcpy(eth->h_dest, dev->dev_addr, dev->addr_len);
-
- return 0;
-}
-#endif /* < 4.1.0 */
-
-/**
- * kni_net_set_mac - Change the Ethernet Address of the KNI NIC
- * @netdev: network interface device structure
- * @p: pointer to an address structure
- *
- * Returns 0 on success, negative on failure
- **/
-static int
-kni_net_set_mac(struct net_device *netdev, void *p)
-{
- int ret;
- struct rte_kni_request req;
- struct sockaddr *addr = p;
-
- memset(&req, 0, sizeof(req));
- req.req_id = RTE_KNI_REQ_CHANGE_MAC_ADDR;
-
- if (!is_valid_ether_addr((unsigned char *)(addr->sa_data)))
- return -EADDRNOTAVAIL;
-
- memcpy(req.mac_addr, addr->sa_data, netdev->addr_len);
-#ifdef HAVE_ETH_HW_ADDR_SET
- eth_hw_addr_set(netdev, addr->sa_data);
-#else
- memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len);
-#endif
-
- ret = kni_net_process_request(netdev, &req);
-
- return (ret == 0 ? req.result : ret);
-}
-
-#ifdef HAVE_CHANGE_CARRIER_CB
-static int
-kni_net_change_carrier(struct net_device *dev, bool new_carrier)
-{
- if (new_carrier)
- netif_carrier_on(dev);
- else
- netif_carrier_off(dev);
- return 0;
-}
-#endif
-
-static const struct header_ops kni_net_header_ops = {
- .create = kni_net_header,
- .parse = eth_header_parse,
-#ifdef HAVE_REBUILD_HEADER
- .rebuild = kni_net_rebuild_header,
-#endif /* < 4.1.0 */
- .cache = NULL, /* disable caching */
-};
-
-static const struct net_device_ops kni_net_netdev_ops = {
- .ndo_open = kni_net_open,
- .ndo_stop = kni_net_release,
- .ndo_set_config = kni_net_config,
- .ndo_change_rx_flags = kni_net_change_rx_flags,
- .ndo_start_xmit = kni_net_tx,
- .ndo_change_mtu = kni_net_change_mtu,
- .ndo_tx_timeout = kni_net_tx_timeout,
- .ndo_set_mac_address = kni_net_set_mac,
-#ifdef HAVE_CHANGE_CARRIER_CB
- .ndo_change_carrier = kni_net_change_carrier,
-#endif
-};
-
-static void kni_get_drvinfo(struct net_device *dev,
- struct ethtool_drvinfo *info)
-{
- strlcpy(info->version, KNI_VERSION, sizeof(info->version));
- strlcpy(info->driver, "kni", sizeof(info->driver));
-}
-
-static const struct ethtool_ops kni_net_ethtool_ops = {
- .get_drvinfo = kni_get_drvinfo,
- .get_link = ethtool_op_get_link,
-};
-
-void
-kni_net_init(struct net_device *dev)
-{
- struct kni_dev *kni = netdev_priv(dev);
-
- init_waitqueue_head(&kni->wq);
- mutex_init(&kni->sync_lock);
-
- ether_setup(dev); /* assign some of the fields */
- dev->netdev_ops = &kni_net_netdev_ops;
- dev->header_ops = &kni_net_header_ops;
- dev->ethtool_ops = &kni_net_ethtool_ops;
- dev->watchdog_timeo = WD_TIMEOUT;
-}
-
-void
-kni_net_config_lo_mode(char *lo_str)
-{
- if (!lo_str) {
- pr_debug("loopback disabled");
- return;
- }
-
- if (!strcmp(lo_str, "lo_mode_none"))
- pr_debug("loopback disabled");
- else if (!strcmp(lo_str, "lo_mode_fifo")) {
- pr_debug("loopback mode=lo_mode_fifo enabled");
- kni_net_rx_func = kni_net_rx_lo_fifo;
- } else if (!strcmp(lo_str, "lo_mode_fifo_skb")) {
- pr_debug("loopback mode=lo_mode_fifo_skb enabled");
- kni_net_rx_func = kni_net_rx_lo_fifo_skb;
- } else {
- pr_debug("Unknown loopback parameter, disabled");
- }
-}
diff --git a/kernel/linux/kni/meson.build b/kernel/linux/kni/meson.build
deleted file mode 100644
index 4c90069e9989..000000000000
--- a/kernel/linux/kni/meson.build
+++ /dev/null
@@ -1,41 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Luca Boccassi <bluca@debian.org>
-
-# For SUSE build check function arguments of ndo_tx_timeout API
-# Ref: https://jira.devtools.intel.com/browse/DPDK-29263
-kmod_cflags = ''
-file_path = kernel_source_dir + '/include/linux/netdevice.h'
-run_cmd = run_command('grep', 'ndo_tx_timeout', file_path, check: false)
-
-if run_cmd.stdout().contains('txqueue') == true
- kmod_cflags = '-DHAVE_ARG_TX_QUEUE'
-endif
-
-
-kni_mkfile = custom_target('rte_kni_makefile',
- output: 'Makefile',
- command: ['touch', '@OUTPUT@'])
-
-kni_sources = files(
- 'kni_misc.c',
- 'kni_net.c',
- 'Kbuild',
-)
-
-custom_target('rte_kni',
- input: kni_sources,
- output: 'rte_kni.ko',
- command: ['make', '-j4', '-C', kernel_build_dir,
- 'M=' + meson.current_build_dir(),
- 'src=' + meson.current_source_dir(),
- ' '.join(['MODULE_CFLAGS=', kmod_cflags,'-include '])
- + dpdk_source_root + '/config/rte_config.h' +
- ' -I' + dpdk_source_root + '/lib/eal/include' +
- ' -I' + dpdk_source_root + '/lib/kni' +
- ' -I' + dpdk_build_root +
- ' -I' + meson.current_source_dir(),
- 'modules'] + cross_args,
- depends: kni_mkfile,
- install: install,
- install_dir: kernel_install_dir,
- build_by_default: get_option('enable_kmods'))
diff --git a/kernel/linux/meson.build b/kernel/linux/meson.build
deleted file mode 100644
index 16a094899446..000000000000
--- a/kernel/linux/meson.build
+++ /dev/null
@@ -1,103 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Intel Corporation
-
-subdirs = ['kni']
-
-kernel_build_dir = get_option('kernel_dir')
-kernel_source_dir = get_option('kernel_dir')
-kernel_install_dir = ''
-install = not meson.is_cross_build()
-cross_args = []
-
-if not meson.is_cross_build()
- # native build
- kernel_version = run_command('uname', '-r', check: true).stdout().strip()
- if kernel_source_dir != ''
- # Try kernel release from sources first
- r = run_command('make', '-s', '-C', kernel_source_dir, 'kernelrelease', check: false)
- if r.returncode() == 0
- kernel_version = r.stdout().strip()
- endif
- else
- # use default path for native builds
- kernel_source_dir = '/lib/modules/' + kernel_version + '/source'
- endif
- kernel_install_dir = '/lib/modules/' + kernel_version + '/extra/dpdk'
- if kernel_build_dir == ''
- # use default path for native builds
- kernel_build_dir = '/lib/modules/' + kernel_version + '/build'
- endif
-
- # test running make in kernel directory, using "make kernelversion"
- make_returncode = run_command('make', '-sC', kernel_build_dir,
- 'kernelversion', check: true).returncode()
- if make_returncode != 0
- # backward compatibility:
- # the headers could still be in the 'build' subdir
- if not kernel_build_dir.endswith('build') and not kernel_build_dir.endswith('build/')
- kernel_build_dir = join_paths(kernel_build_dir, 'build')
- make_returncode = run_command('make', '-sC', kernel_build_dir,
- 'kernelversion', check: true).returncode()
- endif
- endif
-
- if make_returncode != 0
- error('Cannot compile kernel modules as requested - are kernel headers installed?')
- endif
-
- # DO ACTUAL MODULE BUILDING
- foreach d:subdirs
- subdir(d)
- endforeach
-
- subdir_done()
-endif
-
-# cross build
-# if we are cross-compiling we need kernel_build_dir specified
-if kernel_build_dir == ''
- error('Need "kernel_dir" option for kmod compilation when cross-compiling')
-endif
-cross_compiler = find_program('c').path()
-if cross_compiler.endswith('gcc')
- cross_prefix = run_command([py3, '-c', 'print("' + cross_compiler + '"[:-3])'],
- check: true).stdout().strip()
-elif cross_compiler.endswith('clang')
- cross_prefix = ''
- found_target = false
- # search for '-target' and use the arg that follows
- # (i.e. the value of '-target') as cross_prefix
- foreach cross_c_arg : meson.get_cross_property('c_args')
- if found_target and cross_prefix == ''
- cross_prefix = cross_c_arg
- endif
- if cross_c_arg == '-target'
- found_target = true
- endif
- endforeach
- if cross_prefix == ''
- error('Did not find -target and its value in c_args in input cross-file.')
- endif
- linker = 'lld'
- foreach cross_c_link_arg : meson.get_cross_property('c_link_args')
- if cross_c_link_arg.startswith('-fuse-ld')
- linker = cross_c_link_arg.split('=')[1]
- endif
- endforeach
- cross_args += ['CC=@0@'.format(cross_compiler), 'LD=ld.@0@'.format(linker)]
-else
- error('Unsupported cross compiler: @0@'.format(cross_compiler))
-endif
-
-cross_arch = host_machine.cpu_family()
-if host_machine.cpu_family() == 'aarch64'
- cross_arch = 'arm64'
-endif
-
-cross_args += ['ARCH=@0@'.format(cross_arch),
- 'CROSS_COMPILE=@0@'.format(cross_prefix)]
-
-# DO ACTUAL MODULE BUILDING
-foreach d:subdirs
- subdir(d)
-endforeach
diff --git a/lib/eal/common/eal_common_log.c b/lib/eal/common/eal_common_log.c
index bd7b188ceb4a..0a1d219d6924 100644
--- a/lib/eal/common/eal_common_log.c
+++ b/lib/eal/common/eal_common_log.c
@@ -356,7 +356,6 @@ static const struct logtype logtype_strings[] = {
{RTE_LOGTYPE_PMD, "pmd"},
{RTE_LOGTYPE_HASH, "lib.hash"},
{RTE_LOGTYPE_LPM, "lib.lpm"},
- {RTE_LOGTYPE_KNI, "lib.kni"},
{RTE_LOGTYPE_ACL, "lib.acl"},
{RTE_LOGTYPE_POWER, "lib.power"},
{RTE_LOGTYPE_METER, "lib.meter"},
diff --git a/lib/eal/include/rte_log.h b/lib/eal/include/rte_log.h
index 6d2b0856a565..bdefff2a5933 100644
--- a/lib/eal/include/rte_log.h
+++ b/lib/eal/include/rte_log.h
@@ -34,7 +34,7 @@ extern "C" {
#define RTE_LOGTYPE_PMD 5 /**< Log related to poll mode driver. */
#define RTE_LOGTYPE_HASH 6 /**< Log related to hash table. */
#define RTE_LOGTYPE_LPM 7 /**< Log related to LPM. */
-#define RTE_LOGTYPE_KNI 8 /**< Log related to KNI. */
+ /* was RTE_LOGTYPE_KNI */
#define RTE_LOGTYPE_ACL 9 /**< Log related to ACL. */
#define RTE_LOGTYPE_POWER 10 /**< Log related to power. */
#define RTE_LOGTYPE_METER 11 /**< Log related to QoS meter. */
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index c6efd920145c..a1fefcd9d83a 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -1084,11 +1084,6 @@ rte_eal_init(int argc, char **argv)
*/
iova_mode = RTE_IOVA_VA;
RTE_LOG(DEBUG, EAL, "Physical addresses are unavailable, selecting IOVA as VA mode.\n");
-#if defined(RTE_LIB_KNI) && LINUX_VERSION_CODE >= KERNEL_VERSION(4, 10, 0)
- } else if (rte_eal_check_module("rte_kni") == 1) {
- iova_mode = RTE_IOVA_PA;
- RTE_LOG(DEBUG, EAL, "KNI is loaded, selecting IOVA as PA mode for better KNI performance.\n");
-#endif
} else if (is_iommu_enabled()) {
/* we have an IOMMU, pick IOVA as VA mode */
iova_mode = RTE_IOVA_VA;
@@ -1101,20 +1096,6 @@ rte_eal_init(int argc, char **argv)
RTE_LOG(DEBUG, EAL, "IOMMU is not available, selecting IOVA as PA mode.\n");
}
}
-#if defined(RTE_LIB_KNI) && LINUX_VERSION_CODE < KERNEL_VERSION(4, 10, 0)
- /* Workaround for KNI which requires physical address to work
- * in kernels < 4.10
- */
- if (iova_mode == RTE_IOVA_VA &&
- rte_eal_check_module("rte_kni") == 1) {
- if (phys_addrs) {
- iova_mode = RTE_IOVA_PA;
- RTE_LOG(WARNING, EAL, "Forcing IOVA as 'PA' because KNI module is loaded\n");
- } else {
- RTE_LOG(DEBUG, EAL, "KNI can not work since physical addresses are unavailable\n");
- }
- }
-#endif
rte_eal_get_configuration()->iova_mode = iova_mode;
} else {
rte_eal_get_configuration()->iova_mode =
diff --git a/lib/kni/meson.build b/lib/kni/meson.build
deleted file mode 100644
index 5ce410f7f2d2..000000000000
--- a/lib/kni/meson.build
+++ /dev/null
@@ -1,21 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017 Intel Corporation
-
-if is_windows
- build = false
- reason = 'not supported on Windows'
- subdir_done()
-endif
-
-if dpdk_conf.get('RTE_IOVA_IN_MBUF') == 0
- build = false
- reason = 'requires IOVA in mbuf (set enable_iova_as_pa option)'
-endif
-
-if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
- build = false
- reason = 'only supported on 64-bit Linux'
-endif
-sources = files('rte_kni.c')
-headers = files('rte_kni.h', 'rte_kni_common.h')
-deps += ['ethdev', 'pci']
diff --git a/lib/kni/rte_kni.c b/lib/kni/rte_kni.c
deleted file mode 100644
index bfa6a001ff59..000000000000
--- a/lib/kni/rte_kni.c
+++ /dev/null
@@ -1,843 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#ifndef RTE_EXEC_ENV_LINUX
-#error "KNI is not supported"
-#endif
-
-#include <string.h>
-#include <fcntl.h>
-#include <unistd.h>
-#include <sys/ioctl.h>
-#include <linux/version.h>
-
-#include <rte_string_fns.h>
-#include <rte_ethdev.h>
-#include <rte_malloc.h>
-#include <rte_log.h>
-#include <rte_kni.h>
-#include <rte_memzone.h>
-#include <rte_tailq.h>
-#include <rte_eal_memconfig.h>
-#include <rte_kni_common.h>
-#include "rte_kni_fifo.h"
-
-#define MAX_MBUF_BURST_NUM 32
-
-/* Maximum number of ring entries */
-#define KNI_FIFO_COUNT_MAX 1024
-#define KNI_FIFO_SIZE (KNI_FIFO_COUNT_MAX * sizeof(void *) + \
- sizeof(struct rte_kni_fifo))
-
-#define KNI_REQUEST_MBUF_NUM_MAX 32
-
-#define KNI_MEM_CHECK(cond, fail) do { if (cond) goto fail; } while (0)
-
-#define KNI_MZ_NAME_FMT "kni_info_%s"
-#define KNI_TX_Q_MZ_NAME_FMT "kni_tx_%s"
-#define KNI_RX_Q_MZ_NAME_FMT "kni_rx_%s"
-#define KNI_ALLOC_Q_MZ_NAME_FMT "kni_alloc_%s"
-#define KNI_FREE_Q_MZ_NAME_FMT "kni_free_%s"
-#define KNI_REQ_Q_MZ_NAME_FMT "kni_req_%s"
-#define KNI_RESP_Q_MZ_NAME_FMT "kni_resp_%s"
-#define KNI_SYNC_ADDR_MZ_NAME_FMT "kni_sync_%s"
-
-TAILQ_HEAD(rte_kni_list, rte_tailq_entry);
-
-static struct rte_tailq_elem rte_kni_tailq = {
- .name = "RTE_KNI",
-};
-EAL_REGISTER_TAILQ(rte_kni_tailq)
-
-/**
- * KNI context
- */
-struct rte_kni {
- char name[RTE_KNI_NAMESIZE]; /**< KNI interface name */
- uint16_t group_id; /**< Group ID of KNI devices */
- uint32_t slot_id; /**< KNI pool slot ID */
- struct rte_mempool *pktmbuf_pool; /**< pkt mbuf mempool */
- unsigned int mbuf_size; /**< mbuf size */
-
- const struct rte_memzone *m_tx_q; /**< TX queue memzone */
- const struct rte_memzone *m_rx_q; /**< RX queue memzone */
- const struct rte_memzone *m_alloc_q;/**< Alloc queue memzone */
- const struct rte_memzone *m_free_q; /**< Free queue memzone */
-
- struct rte_kni_fifo *tx_q; /**< TX queue */
- struct rte_kni_fifo *rx_q; /**< RX queue */
- struct rte_kni_fifo *alloc_q; /**< Allocated mbufs queue */
- struct rte_kni_fifo *free_q; /**< To be freed mbufs queue */
-
- const struct rte_memzone *m_req_q; /**< Request queue memzone */
- const struct rte_memzone *m_resp_q; /**< Response queue memzone */
- const struct rte_memzone *m_sync_addr;/**< Sync addr memzone */
-
- /* For request & response */
- struct rte_kni_fifo *req_q; /**< Request queue */
- struct rte_kni_fifo *resp_q; /**< Response queue */
- void *sync_addr; /**< Req/Resp Mem address */
-
- struct rte_kni_ops ops; /**< operations for request */
-};
-
-enum kni_ops_status {
- KNI_REQ_NO_REGISTER = 0,
- KNI_REQ_REGISTERED,
-};
-
-static void kni_free_mbufs(struct rte_kni *kni);
-static void kni_allocate_mbufs(struct rte_kni *kni);
-
-static volatile int kni_fd = -1;
-
-/* Shall be called before any allocation happens */
-int
-rte_kni_init(unsigned int max_kni_ifaces __rte_unused)
-{
- RTE_LOG(WARNING, KNI, "WARNING: KNI is deprecated and will be removed in DPDK 23.11\n");
-
-#if LINUX_VERSION_CODE < KERNEL_VERSION(4, 10, 0)
- if (rte_eal_iova_mode() != RTE_IOVA_PA) {
- RTE_LOG(ERR, KNI, "KNI requires IOVA as PA\n");
- return -1;
- }
-#endif
-
- /* Check FD and open */
- if (kni_fd < 0) {
- kni_fd = open("/dev/" KNI_DEVICE, O_RDWR);
- if (kni_fd < 0) {
- RTE_LOG(ERR, KNI,
- "Can not open /dev/%s\n", KNI_DEVICE);
- return -1;
- }
- }
-
- return 0;
-}
-
-static struct rte_kni *
-__rte_kni_get(const char *name)
-{
- struct rte_kni *kni;
- struct rte_tailq_entry *te;
- struct rte_kni_list *kni_list;
-
- kni_list = RTE_TAILQ_CAST(rte_kni_tailq.head, rte_kni_list);
-
- TAILQ_FOREACH(te, kni_list, next) {
- kni = te->data;
- if (strncmp(name, kni->name, RTE_KNI_NAMESIZE) == 0)
- break;
- }
-
- if (te == NULL)
- kni = NULL;
-
- return kni;
-}
-
-static int
-kni_reserve_mz(struct rte_kni *kni)
-{
- char mz_name[RTE_MEMZONE_NAMESIZE];
-
- snprintf(mz_name, RTE_MEMZONE_NAMESIZE, KNI_TX_Q_MZ_NAME_FMT, kni->name);
- kni->m_tx_q = rte_memzone_reserve(mz_name, KNI_FIFO_SIZE, SOCKET_ID_ANY,
- RTE_MEMZONE_IOVA_CONTIG);
- KNI_MEM_CHECK(kni->m_tx_q == NULL, tx_q_fail);
-
- snprintf(mz_name, RTE_MEMZONE_NAMESIZE, KNI_RX_Q_MZ_NAME_FMT, kni->name);
- kni->m_rx_q = rte_memzone_reserve(mz_name, KNI_FIFO_SIZE, SOCKET_ID_ANY,
- RTE_MEMZONE_IOVA_CONTIG);
- KNI_MEM_CHECK(kni->m_rx_q == NULL, rx_q_fail);
-
- snprintf(mz_name, RTE_MEMZONE_NAMESIZE, KNI_ALLOC_Q_MZ_NAME_FMT, kni->name);
- kni->m_alloc_q = rte_memzone_reserve(mz_name, KNI_FIFO_SIZE, SOCKET_ID_ANY,
- RTE_MEMZONE_IOVA_CONTIG);
- KNI_MEM_CHECK(kni->m_alloc_q == NULL, alloc_q_fail);
-
- snprintf(mz_name, RTE_MEMZONE_NAMESIZE, KNI_FREE_Q_MZ_NAME_FMT, kni->name);
- kni->m_free_q = rte_memzone_reserve(mz_name, KNI_FIFO_SIZE, SOCKET_ID_ANY,
- RTE_MEMZONE_IOVA_CONTIG);
- KNI_MEM_CHECK(kni->m_free_q == NULL, free_q_fail);
-
- snprintf(mz_name, RTE_MEMZONE_NAMESIZE, KNI_REQ_Q_MZ_NAME_FMT, kni->name);
- kni->m_req_q = rte_memzone_reserve(mz_name, KNI_FIFO_SIZE, SOCKET_ID_ANY,
- RTE_MEMZONE_IOVA_CONTIG);
- KNI_MEM_CHECK(kni->m_req_q == NULL, req_q_fail);
-
- snprintf(mz_name, RTE_MEMZONE_NAMESIZE, KNI_RESP_Q_MZ_NAME_FMT, kni->name);
- kni->m_resp_q = rte_memzone_reserve(mz_name, KNI_FIFO_SIZE, SOCKET_ID_ANY,
- RTE_MEMZONE_IOVA_CONTIG);
- KNI_MEM_CHECK(kni->m_resp_q == NULL, resp_q_fail);
-
- snprintf(mz_name, RTE_MEMZONE_NAMESIZE, KNI_SYNC_ADDR_MZ_NAME_FMT, kni->name);
- kni->m_sync_addr = rte_memzone_reserve(mz_name, KNI_FIFO_SIZE, SOCKET_ID_ANY,
- RTE_MEMZONE_IOVA_CONTIG);
- KNI_MEM_CHECK(kni->m_sync_addr == NULL, sync_addr_fail);
-
- return 0;
-
-sync_addr_fail:
- rte_memzone_free(kni->m_resp_q);
-resp_q_fail:
- rte_memzone_free(kni->m_req_q);
-req_q_fail:
- rte_memzone_free(kni->m_free_q);
-free_q_fail:
- rte_memzone_free(kni->m_alloc_q);
-alloc_q_fail:
- rte_memzone_free(kni->m_rx_q);
-rx_q_fail:
- rte_memzone_free(kni->m_tx_q);
-tx_q_fail:
- return -1;
-}
-
-static void
-kni_release_mz(struct rte_kni *kni)
-{
- rte_memzone_free(kni->m_tx_q);
- rte_memzone_free(kni->m_rx_q);
- rte_memzone_free(kni->m_alloc_q);
- rte_memzone_free(kni->m_free_q);
- rte_memzone_free(kni->m_req_q);
- rte_memzone_free(kni->m_resp_q);
- rte_memzone_free(kni->m_sync_addr);
-}
-
-struct rte_kni *
-rte_kni_alloc(struct rte_mempool *pktmbuf_pool,
- const struct rte_kni_conf *conf,
- struct rte_kni_ops *ops)
-{
- int ret;
- struct rte_kni_device_info dev_info;
- struct rte_kni *kni;
- struct rte_tailq_entry *te;
- struct rte_kni_list *kni_list;
-
- if (!pktmbuf_pool || !conf || !conf->name[0])
- return NULL;
-
- /* Check if KNI subsystem has been initialized */
- if (kni_fd < 0) {
- RTE_LOG(ERR, KNI, "KNI subsystem has not been initialized. Invoke rte_kni_init() first\n");
- return NULL;
- }
-
- rte_mcfg_tailq_write_lock();
-
- kni = __rte_kni_get(conf->name);
- if (kni != NULL) {
- RTE_LOG(ERR, KNI, "KNI already exists\n");
- goto unlock;
- }
-
- te = rte_zmalloc("KNI_TAILQ_ENTRY", sizeof(*te), 0);
- if (te == NULL) {
- RTE_LOG(ERR, KNI, "Failed to allocate tailq entry\n");
- goto unlock;
- }
-
- kni = rte_zmalloc("KNI", sizeof(struct rte_kni), RTE_CACHE_LINE_SIZE);
- if (kni == NULL) {
- RTE_LOG(ERR, KNI, "KNI memory allocation failed\n");
- goto kni_fail;
- }
-
- strlcpy(kni->name, conf->name, RTE_KNI_NAMESIZE);
-
- if (ops)
- memcpy(&kni->ops, ops, sizeof(struct rte_kni_ops));
- else
- kni->ops.port_id = UINT16_MAX;
-
- memset(&dev_info, 0, sizeof(dev_info));
- dev_info.core_id = conf->core_id;
- dev_info.force_bind = conf->force_bind;
- dev_info.group_id = conf->group_id;
- dev_info.mbuf_size = conf->mbuf_size;
- dev_info.mtu = conf->mtu;
- dev_info.min_mtu = conf->min_mtu;
- dev_info.max_mtu = conf->max_mtu;
-
- memcpy(dev_info.mac_addr, conf->mac_addr, RTE_ETHER_ADDR_LEN);
-
- strlcpy(dev_info.name, conf->name, RTE_KNI_NAMESIZE);
-
- ret = kni_reserve_mz(kni);
- if (ret < 0)
- goto mz_fail;
-
- /* TX RING */
- kni->tx_q = kni->m_tx_q->addr;
- kni_fifo_init(kni->tx_q, KNI_FIFO_COUNT_MAX);
- dev_info.tx_phys = kni->m_tx_q->iova;
-
- /* RX RING */
- kni->rx_q = kni->m_rx_q->addr;
- kni_fifo_init(kni->rx_q, KNI_FIFO_COUNT_MAX);
- dev_info.rx_phys = kni->m_rx_q->iova;
-
- /* ALLOC RING */
- kni->alloc_q = kni->m_alloc_q->addr;
- kni_fifo_init(kni->alloc_q, KNI_FIFO_COUNT_MAX);
- dev_info.alloc_phys = kni->m_alloc_q->iova;
-
- /* FREE RING */
- kni->free_q = kni->m_free_q->addr;
- kni_fifo_init(kni->free_q, KNI_FIFO_COUNT_MAX);
- dev_info.free_phys = kni->m_free_q->iova;
-
- /* Request RING */
- kni->req_q = kni->m_req_q->addr;
- kni_fifo_init(kni->req_q, KNI_FIFO_COUNT_MAX);
- dev_info.req_phys = kni->m_req_q->iova;
-
- /* Response RING */
- kni->resp_q = kni->m_resp_q->addr;
- kni_fifo_init(kni->resp_q, KNI_FIFO_COUNT_MAX);
- dev_info.resp_phys = kni->m_resp_q->iova;
-
- /* Req/Resp sync mem area */
- kni->sync_addr = kni->m_sync_addr->addr;
- dev_info.sync_va = kni->m_sync_addr->addr;
- dev_info.sync_phys = kni->m_sync_addr->iova;
-
- kni->pktmbuf_pool = pktmbuf_pool;
- kni->group_id = conf->group_id;
- kni->mbuf_size = conf->mbuf_size;
-
- dev_info.iova_mode = (rte_eal_iova_mode() == RTE_IOVA_VA) ? 1 : 0;
-
- ret = ioctl(kni_fd, RTE_KNI_IOCTL_CREATE, &dev_info);
- if (ret < 0)
- goto ioctl_fail;
-
- te->data = kni;
-
- kni_list = RTE_TAILQ_CAST(rte_kni_tailq.head, rte_kni_list);
- TAILQ_INSERT_TAIL(kni_list, te, next);
-
- rte_mcfg_tailq_write_unlock();
-
- /* Allocate mbufs and then put them into alloc_q */
- kni_allocate_mbufs(kni);
-
- return kni;
-
-ioctl_fail:
- kni_release_mz(kni);
-mz_fail:
- rte_free(kni);
-kni_fail:
- rte_free(te);
-unlock:
- rte_mcfg_tailq_write_unlock();
-
- return NULL;
-}
-
-static void
-kni_free_fifo(struct rte_kni_fifo *fifo)
-{
- int ret;
- struct rte_mbuf *pkt;
-
- do {
- ret = kni_fifo_get(fifo, (void **)&pkt, 1);
- if (ret)
- rte_pktmbuf_free(pkt);
- } while (ret);
-}
-
-static void *
-va2pa(struct rte_mbuf *m)
-{
- return (void *)((unsigned long)m -
- ((unsigned long)m->buf_addr - (unsigned long)rte_mbuf_iova_get(m)));
-}
-
-static void *
-va2pa_all(struct rte_mbuf *mbuf)
-{
- void *phy_mbuf = va2pa(mbuf);
- struct rte_mbuf *next = mbuf->next;
- while (next) {
- mbuf->next = va2pa(next);
- mbuf = next;
- next = mbuf->next;
- }
- return phy_mbuf;
-}
-
-static void
-obj_free(struct rte_mempool *mp __rte_unused, void *opaque, void *obj,
- unsigned obj_idx __rte_unused)
-{
- struct rte_mbuf *m = obj;
- void *mbuf_phys = opaque;
-
- if (va2pa(m) == mbuf_phys)
- rte_pktmbuf_free(m);
-}
-
-static void
-kni_free_fifo_phy(struct rte_mempool *mp, struct rte_kni_fifo *fifo)
-{
- void *mbuf_phys;
- int ret;
-
- do {
- ret = kni_fifo_get(fifo, &mbuf_phys, 1);
- if (ret)
- rte_mempool_obj_iter(mp, obj_free, mbuf_phys);
- } while (ret);
-}
-
-int
-rte_kni_release(struct rte_kni *kni)
-{
- struct rte_tailq_entry *te;
- struct rte_kni_list *kni_list;
- struct rte_kni_device_info dev_info;
- uint32_t retry = 5;
-
- if (!kni)
- return -1;
-
- kni_list = RTE_TAILQ_CAST(rte_kni_tailq.head, rte_kni_list);
-
- rte_mcfg_tailq_write_lock();
-
- TAILQ_FOREACH(te, kni_list, next) {
- if (te->data == kni)
- break;
- }
-
- if (te == NULL)
- goto unlock;
-
- strlcpy(dev_info.name, kni->name, sizeof(dev_info.name));
- if (ioctl(kni_fd, RTE_KNI_IOCTL_RELEASE, &dev_info) < 0) {
- RTE_LOG(ERR, KNI, "Fail to release kni device\n");
- goto unlock;
- }
-
- TAILQ_REMOVE(kni_list, te, next);
-
- rte_mcfg_tailq_write_unlock();
-
- /* mbufs in all fifo should be released, except request/response */
-
- /* wait until all rxq packets processed by kernel */
- while (kni_fifo_count(kni->rx_q) && retry--)
- usleep(1000);
-
- if (kni_fifo_count(kni->rx_q))
- RTE_LOG(ERR, KNI, "Fail to free all Rx-q items\n");
-
- kni_free_fifo_phy(kni->pktmbuf_pool, kni->alloc_q);
- kni_free_fifo(kni->tx_q);
- kni_free_fifo(kni->free_q);
-
- kni_release_mz(kni);
-
- rte_free(kni);
-
- rte_free(te);
-
- return 0;
-
-unlock:
- rte_mcfg_tailq_write_unlock();
-
- return -1;
-}
-
-/* default callback for request of configuring device mac address */
-static int
-kni_config_mac_address(uint16_t port_id, uint8_t mac_addr[])
-{
- int ret = 0;
-
- if (!rte_eth_dev_is_valid_port(port_id)) {
- RTE_LOG(ERR, KNI, "Invalid port id %d\n", port_id);
- return -EINVAL;
- }
-
- RTE_LOG(INFO, KNI, "Configure mac address of %d", port_id);
-
- ret = rte_eth_dev_default_mac_addr_set(port_id,
- (struct rte_ether_addr *)mac_addr);
- if (ret < 0)
- RTE_LOG(ERR, KNI, "Failed to config mac_addr for port %d\n",
- port_id);
-
- return ret;
-}
-
-/* default callback for request of configuring promiscuous mode */
-static int
-kni_config_promiscusity(uint16_t port_id, uint8_t to_on)
-{
- int ret;
-
- if (!rte_eth_dev_is_valid_port(port_id)) {
- RTE_LOG(ERR, KNI, "Invalid port id %d\n", port_id);
- return -EINVAL;
- }
-
- RTE_LOG(INFO, KNI, "Configure promiscuous mode of %d to %d\n",
- port_id, to_on);
-
- if (to_on)
- ret = rte_eth_promiscuous_enable(port_id);
- else
- ret = rte_eth_promiscuous_disable(port_id);
-
- if (ret != 0)
- RTE_LOG(ERR, KNI,
- "Failed to %s promiscuous mode for port %u: %s\n",
- to_on ? "enable" : "disable", port_id,
- rte_strerror(-ret));
-
- return ret;
-}
-
-/* default callback for request of configuring allmulticast mode */
-static int
-kni_config_allmulticast(uint16_t port_id, uint8_t to_on)
-{
- int ret;
-
- if (!rte_eth_dev_is_valid_port(port_id)) {
- RTE_LOG(ERR, KNI, "Invalid port id %d\n", port_id);
- return -EINVAL;
- }
-
- RTE_LOG(INFO, KNI, "Configure allmulticast mode of %d to %d\n",
- port_id, to_on);
-
- if (to_on)
- ret = rte_eth_allmulticast_enable(port_id);
- else
- ret = rte_eth_allmulticast_disable(port_id);
- if (ret != 0)
- RTE_LOG(ERR, KNI,
- "Failed to %s allmulticast mode for port %u: %s\n",
- to_on ? "enable" : "disable", port_id,
- rte_strerror(-ret));
-
- return ret;
-}
-
-int
-rte_kni_handle_request(struct rte_kni *kni)
-{
- unsigned int ret;
- struct rte_kni_request *req = NULL;
-
- if (kni == NULL)
- return -1;
-
- /* Get request mbuf */
- ret = kni_fifo_get(kni->req_q, (void **)&req, 1);
- if (ret != 1)
-		return 0; /* It is OK if the request mbuf cannot be fetched */
-
- if (req != kni->sync_addr) {
- RTE_LOG(ERR, KNI, "Wrong req pointer %p\n", req);
- return -1;
- }
-
- /* Analyze the request and call the relevant actions for it */
- switch (req->req_id) {
- case RTE_KNI_REQ_CHANGE_MTU: /* Change MTU */
- if (kni->ops.change_mtu)
- req->result = kni->ops.change_mtu(kni->ops.port_id,
- req->new_mtu);
- break;
- case RTE_KNI_REQ_CFG_NETWORK_IF: /* Set network interface up/down */
- if (kni->ops.config_network_if)
- req->result = kni->ops.config_network_if(kni->ops.port_id,
- req->if_up);
- break;
- case RTE_KNI_REQ_CHANGE_MAC_ADDR: /* Change MAC Address */
- if (kni->ops.config_mac_address)
- req->result = kni->ops.config_mac_address(
- kni->ops.port_id, req->mac_addr);
- else if (kni->ops.port_id != UINT16_MAX)
- req->result = kni_config_mac_address(
- kni->ops.port_id, req->mac_addr);
- break;
- case RTE_KNI_REQ_CHANGE_PROMISC: /* Change PROMISCUOUS MODE */
- if (kni->ops.config_promiscusity)
- req->result = kni->ops.config_promiscusity(
- kni->ops.port_id, req->promiscusity);
- else if (kni->ops.port_id != UINT16_MAX)
- req->result = kni_config_promiscusity(
- kni->ops.port_id, req->promiscusity);
- break;
- case RTE_KNI_REQ_CHANGE_ALLMULTI: /* Change ALLMULTICAST MODE */
- if (kni->ops.config_allmulticast)
- req->result = kni->ops.config_allmulticast(
- kni->ops.port_id, req->allmulti);
- else if (kni->ops.port_id != UINT16_MAX)
- req->result = kni_config_allmulticast(
- kni->ops.port_id, req->allmulti);
- break;
- default:
- RTE_LOG(ERR, KNI, "Unknown request id %u\n", req->req_id);
- req->result = -EINVAL;
- break;
- }
-
- /* if needed, construct response buffer and put it back to resp_q */
- if (!req->async)
- ret = kni_fifo_put(kni->resp_q, (void **)&req, 1);
- else
- ret = 1;
- if (ret != 1) {
-		RTE_LOG(ERR, KNI, "Fail to put the mbuf back to resp_q\n");
-		return -1; /* It is an error if the mbuf cannot be put back */
- }
-
- return 0;
-}
-
-unsigned
-rte_kni_tx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned int num)
-{
- num = RTE_MIN(kni_fifo_free_count(kni->rx_q), num);
- void *phy_mbufs[num];
- unsigned int ret;
- unsigned int i;
-
- for (i = 0; i < num; i++)
- phy_mbufs[i] = va2pa_all(mbufs[i]);
-
- ret = kni_fifo_put(kni->rx_q, phy_mbufs, num);
-
- /* Get mbufs from free_q and then free them */
- kni_free_mbufs(kni);
-
- return ret;
-}
-
-unsigned
-rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs, unsigned int num)
-{
- unsigned int ret = kni_fifo_get(kni->tx_q, (void **)mbufs, num);
-
-	/* If buffers were removed or alloc_q is empty, allocate mbufs and then put them into alloc_q */
- if (ret || (kni_fifo_count(kni->alloc_q) == 0))
- kni_allocate_mbufs(kni);
-
- return ret;
-}
-
-static void
-kni_free_mbufs(struct rte_kni *kni)
-{
- int i, ret;
- struct rte_mbuf *pkts[MAX_MBUF_BURST_NUM];
-
- ret = kni_fifo_get(kni->free_q, (void **)pkts, MAX_MBUF_BURST_NUM);
- if (likely(ret > 0)) {
- for (i = 0; i < ret; i++)
- rte_pktmbuf_free(pkts[i]);
- }
-}
-
-static void
-kni_allocate_mbufs(struct rte_kni *kni)
-{
- int i, ret;
- struct rte_mbuf *pkts[MAX_MBUF_BURST_NUM];
- void *phys[MAX_MBUF_BURST_NUM];
- int allocq_free;
-
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pool) !=
- offsetof(struct rte_kni_mbuf, pool));
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, buf_addr) !=
- offsetof(struct rte_kni_mbuf, buf_addr));
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, next) !=
- offsetof(struct rte_kni_mbuf, next));
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_off) !=
- offsetof(struct rte_kni_mbuf, data_off));
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_len) !=
- offsetof(struct rte_kni_mbuf, data_len));
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
- offsetof(struct rte_kni_mbuf, pkt_len));
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) !=
- offsetof(struct rte_kni_mbuf, ol_flags));
-
- /* Check if pktmbuf pool has been configured */
- if (kni->pktmbuf_pool == NULL) {
- RTE_LOG(ERR, KNI, "No valid mempool for allocating mbufs\n");
- return;
- }
-
- allocq_free = kni_fifo_free_count(kni->alloc_q);
- allocq_free = (allocq_free > MAX_MBUF_BURST_NUM) ?
- MAX_MBUF_BURST_NUM : allocq_free;
- for (i = 0; i < allocq_free; i++) {
- pkts[i] = rte_pktmbuf_alloc(kni->pktmbuf_pool);
- if (unlikely(pkts[i] == NULL)) {
- /* Out of memory */
- RTE_LOG(ERR, KNI, "Out of memory\n");
- break;
- }
- phys[i] = va2pa(pkts[i]);
- }
-
- /* No pkt mbuf allocated */
- if (i <= 0)
- return;
-
- ret = kni_fifo_put(kni->alloc_q, phys, i);
-
- /* Check if any mbufs not put into alloc_q, and then free them */
- if (ret >= 0 && ret < i && ret < MAX_MBUF_BURST_NUM) {
- int j;
-
- for (j = ret; j < i; j++)
- rte_pktmbuf_free(pkts[j]);
- }
-}
-
-struct rte_kni *
-rte_kni_get(const char *name)
-{
- struct rte_kni *kni;
-
- if (name == NULL || name[0] == '\0')
- return NULL;
-
- rte_mcfg_tailq_read_lock();
-
- kni = __rte_kni_get(name);
-
- rte_mcfg_tailq_read_unlock();
-
- return kni;
-}
-
-const char *
-rte_kni_get_name(const struct rte_kni *kni)
-{
- return kni->name;
-}
-
-static enum kni_ops_status
-kni_check_request_register(struct rte_kni_ops *ops)
-{
-	/* check if KNI request ops has been registered */
- if (ops == NULL)
- return KNI_REQ_NO_REGISTER;
-
- if (ops->change_mtu == NULL
- && ops->config_network_if == NULL
- && ops->config_mac_address == NULL
- && ops->config_promiscusity == NULL
- && ops->config_allmulticast == NULL)
- return KNI_REQ_NO_REGISTER;
-
- return KNI_REQ_REGISTERED;
-}
-
-int
-rte_kni_register_handlers(struct rte_kni *kni, struct rte_kni_ops *ops)
-{
- enum kni_ops_status req_status;
-
- if (ops == NULL) {
- RTE_LOG(ERR, KNI, "Invalid KNI request operation.\n");
- return -1;
- }
-
- if (kni == NULL) {
- RTE_LOG(ERR, KNI, "Invalid kni info.\n");
- return -1;
- }
-
- req_status = kni_check_request_register(&kni->ops);
- if (req_status == KNI_REQ_REGISTERED) {
- RTE_LOG(ERR, KNI, "The KNI request operation has already registered.\n");
- return -1;
- }
-
- memcpy(&kni->ops, ops, sizeof(struct rte_kni_ops));
- return 0;
-}
-
-int
-rte_kni_unregister_handlers(struct rte_kni *kni)
-{
- if (kni == NULL) {
- RTE_LOG(ERR, KNI, "Invalid kni info.\n");
- return -1;
- }
-
- memset(&kni->ops, 0, sizeof(struct rte_kni_ops));
-
- return 0;
-}
-
-int
-rte_kni_update_link(struct rte_kni *kni, unsigned int linkup)
-{
- char path[64];
- char old_carrier[2];
- const char *new_carrier;
- int old_linkup;
- int fd, ret;
-
- if (kni == NULL)
- return -1;
-
- snprintf(path, sizeof(path), "/sys/devices/virtual/net/%s/carrier",
- kni->name);
-
- fd = open(path, O_RDWR);
- if (fd == -1) {
- RTE_LOG(ERR, KNI, "Failed to open file: %s.\n", path);
- return -1;
- }
-
- ret = read(fd, old_carrier, 2);
- if (ret < 1) {
- close(fd);
- return -1;
- }
- old_linkup = (old_carrier[0] == '1');
-
- if (old_linkup == (int)linkup)
- goto out;
-
- new_carrier = linkup ? "1" : "0";
- ret = write(fd, new_carrier, 1);
- if (ret < 1) {
- RTE_LOG(ERR, KNI, "Failed to write file: %s.\n", path);
- close(fd);
- return -1;
- }
-out:
- close(fd);
- return old_linkup;
-}
-
-void
-rte_kni_close(void)
-{
- if (kni_fd < 0)
- return;
-
- close(kni_fd);
- kni_fd = -1;
-}
diff --git a/lib/kni/rte_kni.h b/lib/kni/rte_kni.h
deleted file mode 100644
index 1e508acc829b..000000000000
--- a/lib/kni/rte_kni.h
+++ /dev/null
@@ -1,269 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#ifndef _RTE_KNI_H_
-#define _RTE_KNI_H_
-
-/**
- * @file
- * RTE KNI
- *
- * The KNI library provides the ability to create and destroy kernel NIC
- * interfaces that may be used by the RTE application to receive/transmit
- * packets from/to Linux kernel net interfaces.
- *
- * This library provides two APIs to burst receive packets from KNI interfaces,
- * and burst transmit packets to KNI interfaces.
- */
-
-#include <rte_compat.h>
-#include <rte_pci.h>
-#include <rte_ether.h>
-
-#include <rte_kni_common.h>
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-struct rte_kni;
-struct rte_mbuf;
-
-/**
- * Structure which has the function pointers for KNI interface.
- */
-struct rte_kni_ops {
- uint16_t port_id; /* Port ID */
-
- /* Pointer to function of changing MTU */
- int (*change_mtu)(uint16_t port_id, unsigned int new_mtu);
-
- /* Pointer to function of configuring network interface */
- int (*config_network_if)(uint16_t port_id, uint8_t if_up);
-
- /* Pointer to function of configuring mac address */
- int (*config_mac_address)(uint16_t port_id, uint8_t mac_addr[]);
-
- /* Pointer to function of configuring promiscuous mode */
- int (*config_promiscusity)(uint16_t port_id, uint8_t to_on);
-
- /* Pointer to function of configuring allmulticast mode */
- int (*config_allmulticast)(uint16_t port_id, uint8_t to_on);
-};
-
-/**
- * Structure for configuring KNI device.
- */
-struct rte_kni_conf {
- /*
- * KNI name which will be used in relevant network device.
- * Keep the name as short as possible, as it will be part of
- * the memzone name.
- */
- char name[RTE_KNI_NAMESIZE];
- uint32_t core_id; /* Core ID to bind kernel thread on */
- uint16_t group_id; /* Group ID */
- unsigned mbuf_size; /* mbuf size */
- struct rte_pci_addr addr; /* deprecated */
- struct rte_pci_id id; /* deprecated */
-
- __extension__
- uint8_t force_bind : 1; /* Flag to bind kernel thread */
- uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; /* MAC address assigned to KNI */
- uint16_t mtu;
- uint16_t min_mtu;
- uint16_t max_mtu;
-};
-
-/**
- * Initialize and preallocate KNI subsystem
- *
- * This function is to be executed on the main lcore only, after EAL
- * initialization and before any KNI interface is attempted to be
- * allocated
- *
- * @param max_kni_ifaces
- * The maximum number of KNI interfaces that can coexist concurrently
- *
- * @return
- * - 0 indicates success.
- * - negative value indicates failure.
- */
-int rte_kni_init(unsigned int max_kni_ifaces);
-
-
-/**
- * Allocate KNI interface according to the port id, mbuf size, mbuf pool,
- * configurations and callbacks for kernel requests. The KNI interface created
- * in the kernel space is the net interface that a traditional Linux
- * application talks to.
- *
- * The rte_kni_alloc shall not be called before rte_kni_init() has been
- * called. rte_kni_alloc is thread safe.
- *
- * The mempool should have capacity of more than "2 x KNI_FIFO_COUNT_MAX"
- * elements for each KNI interface allocated.
- *
- * @param pktmbuf_pool
- * The mempool for allocating mbufs for packets.
- * @param conf
- * The pointer to the configurations of the KNI device.
- * @param ops
- * The pointer to the callbacks for the KNI kernel requests.
- *
- * @return
- * - The pointer to the context of a KNI interface.
- *  - NULL indicates error.
- */
-struct rte_kni *rte_kni_alloc(struct rte_mempool *pktmbuf_pool,
- const struct rte_kni_conf *conf, struct rte_kni_ops *ops);
-
-/**
- * Release KNI interface according to the context. It will also release the
- * paired KNI interface in kernel space. All processing on the specific KNI
- * context need to be stopped before calling this interface.
- *
- * rte_kni_release is thread safe.
- *
- * @param kni
- * The pointer to the context of an existent KNI interface.
- *
- * @return
- * - 0 indicates success.
- * - negative value indicates failure.
- */
-int rte_kni_release(struct rte_kni *kni);
-
-/**
- * Handle the request mbufs sent from kernel space: analyze each request,
- * call the specific action for it, then construct the response mbuf and
- * put it back to the resp_q.
- *
- * @param kni
- * The pointer to the context of an existent KNI interface.
- *
- * @return
- * - 0
- * - negative value indicates failure.
- */
-int rte_kni_handle_request(struct rte_kni *kni);
-
-/**
- * Retrieve a burst of packets from a KNI interface. The retrieved packets are
- * stored in rte_mbuf structures whose pointers are supplied in the array of
- * mbufs, and the maximum number is indicated by num. It handles allocating
- * the mbufs for KNI interface alloc queue.
- *
- * @param kni
- * The KNI interface context.
- * @param mbufs
- * The array to store the pointers of mbufs.
- * @param num
- * The maximum number per burst.
- *
- * @return
- * The actual number of packets retrieved.
- */
-unsigned rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs,
- unsigned num);
-
-/**
- * Send a burst of packets to a KNI interface. The packets to be sent out are
- * stored in rte_mbuf structures whose pointers are supplied in the array of
- * mbufs, and the maximum number is indicated by num. It handles the freeing of
- * the mbufs in the free queue of KNI interface.
- *
- * @param kni
- * The KNI interface context.
- * @param mbufs
- * The array to store the pointers of mbufs.
- * @param num
- * The maximum number per burst.
- *
- * @return
- * The actual number of packets sent.
- */
-unsigned rte_kni_tx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs,
- unsigned num);
-
-/**
- * Get the KNI context of its name.
- *
- * @param name
- * pointer to the KNI device name.
- *
- * @return
- * On success: Pointer to KNI interface.
- * On failure: NULL.
- */
-struct rte_kni *rte_kni_get(const char *name);
-
-/**
- * Get the name given to a KNI device
- *
- * @param kni
- * The KNI instance to query
- * @return
- * The pointer to the KNI name
- */
-const char *rte_kni_get_name(const struct rte_kni *kni);
-
-/**
- * Register KNI request handling for a specified port, and it can
- * be called by primary process or secondary process.
- *
- * @param kni
- * pointer to struct rte_kni.
- * @param ops
- * pointer to struct rte_kni_ops.
- *
- * @return
- * On success: 0
- * On failure: -1
- */
-int rte_kni_register_handlers(struct rte_kni *kni, struct rte_kni_ops *ops);
-
-/**
- * Unregister KNI request handling for a specified port.
- *
- * @param kni
- * pointer to struct rte_kni.
- *
- * @return
- * On success: 0
- * On failure: -1
- */
-int rte_kni_unregister_handlers(struct rte_kni *kni);
-
-/**
- * Update link carrier state for KNI port.
- *
- * Update the linkup/linkdown state of a KNI interface in the kernel.
- *
- * @param kni
- * pointer to struct rte_kni.
- * @param linkup
- * New link state:
- * 0 for linkdown.
- * > 0 for linkup.
- *
- * @return
- * On failure: -1
- * Previous link state == linkdown: 0
- * Previous link state == linkup: 1
- */
-__rte_experimental
-int
-rte_kni_update_link(struct rte_kni *kni, unsigned int linkup);
-
-/**
- * Close KNI device.
- */
-void rte_kni_close(void);
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif /* _RTE_KNI_H_ */
diff --git a/lib/kni/rte_kni_common.h b/lib/kni/rte_kni_common.h
deleted file mode 100644
index 8d3ee0fa4fc2..000000000000
--- a/lib/kni/rte_kni_common.h
+++ /dev/null
@@ -1,147 +0,0 @@
-/* SPDX-License-Identifier: (BSD-3-Clause OR LGPL-2.1) */
-/*
- * Copyright(c) 2007-2014 Intel Corporation.
- */
-
-#ifndef _RTE_KNI_COMMON_H_
-#define _RTE_KNI_COMMON_H_
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-#ifdef __KERNEL__
-#include <linux/if.h>
-#include <asm/barrier.h>
-#define RTE_STD_C11
-#else
-#include <rte_common.h>
-#include <rte_config.h>
-#endif
-
-/*
- * KNI name is part of memzone name. Must not exceed IFNAMSIZ.
- */
-#define RTE_KNI_NAMESIZE 16
-
-#define RTE_CACHE_LINE_MIN_SIZE 64
-
-/*
- * Request id.
- */
-enum rte_kni_req_id {
- RTE_KNI_REQ_UNKNOWN = 0,
- RTE_KNI_REQ_CHANGE_MTU,
- RTE_KNI_REQ_CFG_NETWORK_IF,
- RTE_KNI_REQ_CHANGE_MAC_ADDR,
- RTE_KNI_REQ_CHANGE_PROMISC,
- RTE_KNI_REQ_CHANGE_ALLMULTI,
- RTE_KNI_REQ_MAX,
-};
-
-/*
- * Structure for KNI request.
- */
-struct rte_kni_request {
- uint32_t req_id; /**< Request id */
- RTE_STD_C11
- union {
- uint32_t new_mtu; /**< New MTU */
- uint8_t if_up; /**< 1: interface up, 0: interface down */
- uint8_t mac_addr[6]; /**< MAC address for interface */
- uint8_t promiscusity;/**< 1: promisc mode enable, 0: disable */
- uint8_t allmulti; /**< 1: all-multicast mode enable, 0: disable */
- };
- int32_t async : 1; /**< 1: request is asynchronous */
- int32_t result; /**< Result for processing request */
-} __attribute__((__packed__));
-
-/*
- * Fifo struct mapped in a shared memory. It describes a circular buffer FIFO.
- * Write and read should wrap around. The fifo is empty when write == read.
- * Writing should never overwrite the read position.
- */
-struct rte_kni_fifo {
-#ifdef RTE_USE_C11_MEM_MODEL
- unsigned write; /**< Next position to be written*/
- unsigned read; /**< Next position to be read */
-#else
- volatile unsigned write; /**< Next position to be written*/
- volatile unsigned read; /**< Next position to be read */
-#endif
- unsigned len; /**< Circular buffer length */
- unsigned elem_size; /**< Pointer size - for 32/64 bit OS */
- void *volatile buffer[]; /**< The buffer contains mbuf pointers */
-};
-
-/*
- * The kernel image of the rte_mbuf struct, with only the relevant fields.
- * Padding is necessary to ensure the offsets of these fields.
- */
-struct rte_kni_mbuf {
- void *buf_addr __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)));
- uint64_t buf_iova;
- uint16_t data_off; /**< Start address of data in segment buffer. */
- char pad1[2];
- uint16_t nb_segs; /**< Number of segments. */
- char pad4[2];
- uint64_t ol_flags; /**< Offload features. */
- char pad2[4];
- uint32_t pkt_len; /**< Total pkt len: sum of all segment data_len. */
- uint16_t data_len; /**< Amount of data in segment buffer. */
- char pad3[14];
- void *pool;
-
- /* fields on second cache line */
- __attribute__((__aligned__(RTE_CACHE_LINE_MIN_SIZE)))
- void *next; /**< Physical address of next mbuf in kernel. */
-};
-
-/*
- * Struct used to create a KNI device. Passed to the kernel in IOCTL call
- */
-
-struct rte_kni_device_info {
- char name[RTE_KNI_NAMESIZE]; /**< Network device name for KNI */
-
- phys_addr_t tx_phys;
- phys_addr_t rx_phys;
- phys_addr_t alloc_phys;
- phys_addr_t free_phys;
-
- /* Used by Ethtool */
- phys_addr_t req_phys;
- phys_addr_t resp_phys;
- phys_addr_t sync_phys;
- void * sync_va;
-
- /* mbuf mempool */
- void * mbuf_va;
- phys_addr_t mbuf_phys;
-
- uint16_t group_id; /**< Group ID */
- uint32_t core_id; /**< core ID to bind for kernel thread */
-
- __extension__
- uint8_t force_bind : 1; /**< Flag for kernel thread binding */
-
- /* mbuf size */
- unsigned mbuf_size;
- unsigned int mtu;
- unsigned int min_mtu;
- unsigned int max_mtu;
- uint8_t mac_addr[6];
- uint8_t iova_mode;
-};
-
-#define KNI_DEVICE "kni"
-
-#define RTE_KNI_IOCTL_TEST _IOWR(0, 1, int)
-#define RTE_KNI_IOCTL_CREATE _IOWR(0, 2, struct rte_kni_device_info)
-#define RTE_KNI_IOCTL_RELEASE _IOWR(0, 3, struct rte_kni_device_info)
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif /* _RTE_KNI_COMMON_H_ */
diff --git a/lib/kni/rte_kni_fifo.h b/lib/kni/rte_kni_fifo.h
deleted file mode 100644
index d2ec82fe87fc..000000000000
--- a/lib/kni/rte_kni_fifo.h
+++ /dev/null
@@ -1,117 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-
-
-/**
- * @internal When the C11 memory model is enabled, use C11 atomic memory
- * barriers; otherwise use rte_smp_* memory barriers.
- *
- * @param src
- * Pointer to the source data.
- * @param dst
- * Pointer to the destination data.
- * @param value
- * Data value.
- */
-#ifdef RTE_USE_C11_MEM_MODEL
-#define __KNI_LOAD_ACQUIRE(src) ({ \
- __atomic_load_n((src), __ATOMIC_ACQUIRE); \
- })
-#define __KNI_STORE_RELEASE(dst, value) do { \
- __atomic_store_n((dst), value, __ATOMIC_RELEASE); \
- } while(0)
-#else
-#define __KNI_LOAD_ACQUIRE(src) ({ \
- typeof (*(src)) val = *(src); \
- rte_smp_rmb(); \
- val; \
- })
-#define __KNI_STORE_RELEASE(dst, value) do { \
- *(dst) = value; \
- rte_smp_wmb(); \
- } while(0)
-#endif
-
-/**
- * Initializes the kni fifo structure
- */
-static void
-kni_fifo_init(struct rte_kni_fifo *fifo, unsigned size)
-{
- /* Ensure size is power of 2 */
- if (size & (size - 1))
- rte_panic("KNI fifo size must be power of 2\n");
-
- fifo->write = 0;
- fifo->read = 0;
- fifo->len = size;
- fifo->elem_size = sizeof(void *);
-}
-
-/**
- * Adds num elements into the fifo. Return the number actually written
- */
-static inline unsigned
-kni_fifo_put(struct rte_kni_fifo *fifo, void **data, unsigned num)
-{
- unsigned i = 0;
- unsigned fifo_write = fifo->write;
- unsigned new_write = fifo_write;
- unsigned fifo_read = __KNI_LOAD_ACQUIRE(&fifo->read);
-
- for (i = 0; i < num; i++) {
- new_write = (new_write + 1) & (fifo->len - 1);
-
- if (new_write == fifo_read)
- break;
- fifo->buffer[fifo_write] = data[i];
- fifo_write = new_write;
- }
- __KNI_STORE_RELEASE(&fifo->write, fifo_write);
- return i;
-}
-
-/**
- * Get up to num elements from the fifo. Return the number actually read
- */
-static inline unsigned
-kni_fifo_get(struct rte_kni_fifo *fifo, void **data, unsigned num)
-{
- unsigned i = 0;
- unsigned new_read = fifo->read;
- unsigned fifo_write = __KNI_LOAD_ACQUIRE(&fifo->write);
-
- for (i = 0; i < num; i++) {
- if (new_read == fifo_write)
- break;
-
- data[i] = fifo->buffer[new_read];
- new_read = (new_read + 1) & (fifo->len - 1);
- }
- __KNI_STORE_RELEASE(&fifo->read, new_read);
- return i;
-}
-
-/**
- * Get the num of elements in the fifo
- */
-static inline uint32_t
-kni_fifo_count(struct rte_kni_fifo *fifo)
-{
- unsigned fifo_write = __KNI_LOAD_ACQUIRE(&fifo->write);
- unsigned fifo_read = __KNI_LOAD_ACQUIRE(&fifo->read);
- return (fifo->len + fifo_write - fifo_read) & (fifo->len - 1);
-}
-
-/**
- * Get the num of available elements in the fifo
- */
-static inline uint32_t
-kni_fifo_free_count(struct rte_kni_fifo *fifo)
-{
- uint32_t fifo_write = __KNI_LOAD_ACQUIRE(&fifo->write);
- uint32_t fifo_read = __KNI_LOAD_ACQUIRE(&fifo->read);
- return (fifo_read - fifo_write - 1) & (fifo->len - 1);
-}
diff --git a/lib/kni/version.map b/lib/kni/version.map
deleted file mode 100644
index 83bbbe880f43..000000000000
--- a/lib/kni/version.map
+++ /dev/null
@@ -1,24 +0,0 @@
-DPDK_23 {
- global:
-
- rte_kni_alloc;
- rte_kni_close;
- rte_kni_get;
- rte_kni_get_name;
- rte_kni_handle_request;
- rte_kni_init;
- rte_kni_register_handlers;
- rte_kni_release;
- rte_kni_rx_burst;
- rte_kni_tx_burst;
- rte_kni_unregister_handlers;
-
- local: *;
-};
-
-EXPERIMENTAL {
- global:
-
- # updated in v21.08
- rte_kni_update_link;
-};
diff --git a/lib/meson.build b/lib/meson.build
index fac2f52cad4f..06df4f57ad6e 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -39,7 +39,6 @@ libraries = [
'gso',
'ip_frag',
'jobstats',
- 'kni',
'latencystats',
'lpm',
'member',
@@ -75,7 +74,6 @@ optional_libs = [
'graph',
'gro',
'gso',
- 'kni',
'jobstats',
'latencystats',
'metrics',
@@ -90,7 +88,6 @@ optional_libs = [
dpdk_libs_deprecated += [
'flow_classify',
- 'kni',
]
disabled_libs = []
diff --git a/lib/port/meson.build b/lib/port/meson.build
index 3ab37e2cb4b7..b0af2b185b39 100644
--- a/lib/port/meson.build
+++ b/lib/port/meson.build
@@ -45,9 +45,3 @@ if dpdk_conf.has('RTE_HAS_LIBPCAP')
dpdk_conf.set('RTE_PORT_PCAP', 1)
ext_deps += pcap_dep # dependency provided in config/meson.build
endif
-
-if dpdk_conf.has('RTE_LIB_KNI')
- sources += files('rte_port_kni.c')
- headers += files('rte_port_kni.h')
- deps += 'kni'
-endif
diff --git a/lib/port/rte_port_kni.c b/lib/port/rte_port_kni.c
deleted file mode 100644
index 1c7a6cb200ea..000000000000
--- a/lib/port/rte_port_kni.c
+++ /dev/null
@@ -1,515 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016 Ethan Zhuang <zhuangwj@gmail.com>.
- * Copyright(c) 2016 Intel Corporation.
- */
-#include <string.h>
-
-#include <rte_malloc.h>
-#include <rte_kni.h>
-
-#include "rte_port_kni.h"
-
-/*
- * Port KNI Reader
- */
-#ifdef RTE_PORT_STATS_COLLECT
-
-#define RTE_PORT_KNI_READER_STATS_PKTS_IN_ADD(port, val) \
- port->stats.n_pkts_in += val
-#define RTE_PORT_KNI_READER_STATS_PKTS_DROP_ADD(port, val) \
- port->stats.n_pkts_drop += val
-
-#else
-
-#define RTE_PORT_KNI_READER_STATS_PKTS_IN_ADD(port, val)
-#define RTE_PORT_KNI_READER_STATS_PKTS_DROP_ADD(port, val)
-
-#endif
-
-struct rte_port_kni_reader {
- struct rte_port_in_stats stats;
-
- struct rte_kni *kni;
-};
-
-static void *
-rte_port_kni_reader_create(void *params, int socket_id)
-{
- struct rte_port_kni_reader_params *conf =
- params;
- struct rte_port_kni_reader *port;
-
- /* Check input parameters */
- if (conf == NULL) {
- RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__);
- return NULL;
- }
-
- /* Memory allocation */
- port = rte_zmalloc_socket("PORT", sizeof(*port),
- RTE_CACHE_LINE_SIZE, socket_id);
- if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
- return NULL;
- }
-
- /* Initialization */
- port->kni = conf->kni;
-
- return port;
-}
-
-static int
-rte_port_kni_reader_rx(void *port, struct rte_mbuf **pkts, uint32_t n_pkts)
-{
- struct rte_port_kni_reader *p =
- port;
- uint16_t rx_pkt_cnt;
-
- rx_pkt_cnt = rte_kni_rx_burst(p->kni, pkts, n_pkts);
- RTE_PORT_KNI_READER_STATS_PKTS_IN_ADD(p, rx_pkt_cnt);
- return rx_pkt_cnt;
-}
-
-static int
-rte_port_kni_reader_free(void *port)
-{
- if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__);
- return -EINVAL;
- }
-
- rte_free(port);
-
- return 0;
-}
-
-static int rte_port_kni_reader_stats_read(void *port,
- struct rte_port_in_stats *stats, int clear)
-{
- struct rte_port_kni_reader *p =
- port;
-
- if (stats != NULL)
- memcpy(stats, &p->stats, sizeof(p->stats));
-
- if (clear)
- memset(&p->stats, 0, sizeof(p->stats));
-
- return 0;
-}
-
-/*
- * Port KNI Writer
- */
-#ifdef RTE_PORT_STATS_COLLECT
-
-#define RTE_PORT_KNI_WRITER_STATS_PKTS_IN_ADD(port, val) \
- port->stats.n_pkts_in += val
-#define RTE_PORT_KNI_WRITER_STATS_PKTS_DROP_ADD(port, val) \
- port->stats.n_pkts_drop += val
-
-#else
-
-#define RTE_PORT_KNI_WRITER_STATS_PKTS_IN_ADD(port, val)
-#define RTE_PORT_KNI_WRITER_STATS_PKTS_DROP_ADD(port, val)
-
-#endif
-
-struct rte_port_kni_writer {
- struct rte_port_out_stats stats;
-
- struct rte_mbuf *tx_buf[2 * RTE_PORT_IN_BURST_SIZE_MAX];
- uint32_t tx_burst_sz;
- uint32_t tx_buf_count;
- uint64_t bsz_mask;
- struct rte_kni *kni;
-};
-
-static void *
-rte_port_kni_writer_create(void *params, int socket_id)
-{
- struct rte_port_kni_writer_params *conf =
- params;
- struct rte_port_kni_writer *port;
-
- /* Check input parameters */
- if ((conf == NULL) ||
- (conf->tx_burst_sz == 0) ||
- (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) ||
- (!rte_is_power_of_2(conf->tx_burst_sz))) {
- RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__);
- return NULL;
- }
-
- /* Memory allocation */
- port = rte_zmalloc_socket("PORT", sizeof(*port),
- RTE_CACHE_LINE_SIZE, socket_id);
- if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
- return NULL;
- }
-
- /* Initialization */
- port->kni = conf->kni;
- port->tx_burst_sz = conf->tx_burst_sz;
- port->tx_buf_count = 0;
- port->bsz_mask = 1LLU << (conf->tx_burst_sz - 1);
-
- return port;
-}
-
-static inline void
-send_burst(struct rte_port_kni_writer *p)
-{
- uint32_t nb_tx;
-
- nb_tx = rte_kni_tx_burst(p->kni, p->tx_buf, p->tx_buf_count);
-
- RTE_PORT_KNI_WRITER_STATS_PKTS_DROP_ADD(p, p->tx_buf_count - nb_tx);
- for (; nb_tx < p->tx_buf_count; nb_tx++)
- rte_pktmbuf_free(p->tx_buf[nb_tx]);
-
- p->tx_buf_count = 0;
-}
-
-static int
-rte_port_kni_writer_tx(void *port, struct rte_mbuf *pkt)
-{
- struct rte_port_kni_writer *p =
- port;
-
- p->tx_buf[p->tx_buf_count++] = pkt;
- RTE_PORT_KNI_WRITER_STATS_PKTS_IN_ADD(p, 1);
- if (p->tx_buf_count >= p->tx_burst_sz)
- send_burst(p);
-
- return 0;
-}
-
-static int
-rte_port_kni_writer_tx_bulk(void *port,
- struct rte_mbuf **pkts,
- uint64_t pkts_mask)
-{
- struct rte_port_kni_writer *p =
- port;
- uint64_t bsz_mask = p->bsz_mask;
- uint32_t tx_buf_count = p->tx_buf_count;
- uint64_t expr = (pkts_mask & (pkts_mask + 1)) |
- ((pkts_mask & bsz_mask) ^ bsz_mask);
-
- if (expr == 0) {
- uint64_t n_pkts = __builtin_popcountll(pkts_mask);
- uint32_t n_pkts_ok;
-
- if (tx_buf_count)
- send_burst(p);
-
- RTE_PORT_KNI_WRITER_STATS_PKTS_IN_ADD(p, n_pkts);
- n_pkts_ok = rte_kni_tx_burst(p->kni, pkts, n_pkts);
-
- RTE_PORT_KNI_WRITER_STATS_PKTS_DROP_ADD(p, n_pkts - n_pkts_ok);
- for (; n_pkts_ok < n_pkts; n_pkts_ok++) {
- struct rte_mbuf *pkt = pkts[n_pkts_ok];
-
- rte_pktmbuf_free(pkt);
- }
- } else {
- for (; pkts_mask;) {
- uint32_t pkt_index = __builtin_ctzll(pkts_mask);
- uint64_t pkt_mask = 1LLU << pkt_index;
- struct rte_mbuf *pkt = pkts[pkt_index];
-
- p->tx_buf[tx_buf_count++] = pkt;
- RTE_PORT_KNI_WRITER_STATS_PKTS_IN_ADD(p, 1);
- pkts_mask &= ~pkt_mask;
- }
-
- p->tx_buf_count = tx_buf_count;
- if (tx_buf_count >= p->tx_burst_sz)
- send_burst(p);
- }
-
- return 0;
-}
-
-static int
-rte_port_kni_writer_flush(void *port)
-{
- struct rte_port_kni_writer *p =
- port;
-
- if (p->tx_buf_count > 0)
- send_burst(p);
-
- return 0;
-}
-
-static int
-rte_port_kni_writer_free(void *port)
-{
- if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__);
- return -EINVAL;
- }
-
- rte_port_kni_writer_flush(port);
- rte_free(port);
-
- return 0;
-}
-
-static int rte_port_kni_writer_stats_read(void *port,
- struct rte_port_out_stats *stats, int clear)
-{
- struct rte_port_kni_writer *p =
- port;
-
- if (stats != NULL)
- memcpy(stats, &p->stats, sizeof(p->stats));
-
- if (clear)
- memset(&p->stats, 0, sizeof(p->stats));
-
- return 0;
-}
-
-/*
- * Port KNI Writer Nodrop
- */
-#ifdef RTE_PORT_STATS_COLLECT
-
-#define RTE_PORT_KNI_WRITER_NODROP_STATS_PKTS_IN_ADD(port, val) \
- port->stats.n_pkts_in += val
-#define RTE_PORT_KNI_WRITER_NODROP_STATS_PKTS_DROP_ADD(port, val) \
- port->stats.n_pkts_drop += val
-
-#else
-
-#define RTE_PORT_KNI_WRITER_NODROP_STATS_PKTS_IN_ADD(port, val)
-#define RTE_PORT_KNI_WRITER_NODROP_STATS_PKTS_DROP_ADD(port, val)
-
-#endif
-
-struct rte_port_kni_writer_nodrop {
- struct rte_port_out_stats stats;
-
- struct rte_mbuf *tx_buf[2 * RTE_PORT_IN_BURST_SIZE_MAX];
- uint32_t tx_burst_sz;
- uint32_t tx_buf_count;
- uint64_t bsz_mask;
- uint64_t n_retries;
- struct rte_kni *kni;
-};
-
-static void *
-rte_port_kni_writer_nodrop_create(void *params, int socket_id)
-{
- struct rte_port_kni_writer_nodrop_params *conf =
- params;
- struct rte_port_kni_writer_nodrop *port;
-
- /* Check input parameters */
- if ((conf == NULL) ||
- (conf->tx_burst_sz == 0) ||
- (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) ||
- (!rte_is_power_of_2(conf->tx_burst_sz))) {
- RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__);
- return NULL;
- }
-
- /* Memory allocation */
- port = rte_zmalloc_socket("PORT", sizeof(*port),
- RTE_CACHE_LINE_SIZE, socket_id);
- if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
- return NULL;
- }
-
- /* Initialization */
- port->kni = conf->kni;
- port->tx_burst_sz = conf->tx_burst_sz;
- port->tx_buf_count = 0;
- port->bsz_mask = 1LLU << (conf->tx_burst_sz - 1);
-
- /*
-	 * When n_retries is 0 it means that we should wait for every packet to
-	 * be sent no matter how many retries it takes. To limit the number of
-	 * branches in the fast path, we use UINT64_MAX instead of branching.
- */
- port->n_retries = (conf->n_retries == 0) ? UINT64_MAX : conf->n_retries;
-
- return port;
-}
-
-static inline void
-send_burst_nodrop(struct rte_port_kni_writer_nodrop *p)
-{
- uint32_t nb_tx = 0, i;
-
- nb_tx = rte_kni_tx_burst(p->kni, p->tx_buf, p->tx_buf_count);
-
- /* We sent all the packets in a first try */
- if (nb_tx >= p->tx_buf_count) {
- p->tx_buf_count = 0;
- return;
- }
-
- for (i = 0; i < p->n_retries; i++) {
- nb_tx += rte_kni_tx_burst(p->kni,
- p->tx_buf + nb_tx,
- p->tx_buf_count - nb_tx);
-
- /* We sent all the packets in more than one try */
- if (nb_tx >= p->tx_buf_count) {
- p->tx_buf_count = 0;
- return;
- }
- }
-
- /* We didn't send the packets in maximum allowed attempts */
- RTE_PORT_KNI_WRITER_NODROP_STATS_PKTS_DROP_ADD(p, p->tx_buf_count - nb_tx);
- for ( ; nb_tx < p->tx_buf_count; nb_tx++)
- rte_pktmbuf_free(p->tx_buf[nb_tx]);
-
- p->tx_buf_count = 0;
-}
-
-static int
-rte_port_kni_writer_nodrop_tx(void *port, struct rte_mbuf *pkt)
-{
- struct rte_port_kni_writer_nodrop *p =
- port;
-
- p->tx_buf[p->tx_buf_count++] = pkt;
- RTE_PORT_KNI_WRITER_STATS_PKTS_IN_ADD(p, 1);
- if (p->tx_buf_count >= p->tx_burst_sz)
- send_burst_nodrop(p);
-
- return 0;
-}
-
-static int
-rte_port_kni_writer_nodrop_tx_bulk(void *port,
- struct rte_mbuf **pkts,
- uint64_t pkts_mask)
-{
- struct rte_port_kni_writer_nodrop *p =
- port;
-
- uint64_t bsz_mask = p->bsz_mask;
- uint32_t tx_buf_count = p->tx_buf_count;
- uint64_t expr = (pkts_mask & (pkts_mask + 1)) |
- ((pkts_mask & bsz_mask) ^ bsz_mask);
-
- if (expr == 0) {
- uint64_t n_pkts = __builtin_popcountll(pkts_mask);
- uint32_t n_pkts_ok;
-
- if (tx_buf_count)
- send_burst_nodrop(p);
-
- RTE_PORT_KNI_WRITER_NODROP_STATS_PKTS_IN_ADD(p, n_pkts);
- n_pkts_ok = rte_kni_tx_burst(p->kni, pkts, n_pkts);
-
- if (n_pkts_ok >= n_pkts)
- return 0;
-
- /*
- * If we didn't manage to send all packets in single burst, move
- * remaining packets to the buffer and call send burst.
- */
- for (; n_pkts_ok < n_pkts; n_pkts_ok++) {
- struct rte_mbuf *pkt = pkts[n_pkts_ok];
- p->tx_buf[p->tx_buf_count++] = pkt;
- }
- send_burst_nodrop(p);
- } else {
- for ( ; pkts_mask; ) {
- uint32_t pkt_index = __builtin_ctzll(pkts_mask);
- uint64_t pkt_mask = 1LLU << pkt_index;
- struct rte_mbuf *pkt = pkts[pkt_index];
-
- p->tx_buf[tx_buf_count++] = pkt;
- RTE_PORT_KNI_WRITER_NODROP_STATS_PKTS_IN_ADD(p, 1);
- pkts_mask &= ~pkt_mask;
- }
-
- p->tx_buf_count = tx_buf_count;
- if (tx_buf_count >= p->tx_burst_sz)
- send_burst_nodrop(p);
- }
-
- return 0;
-}
-
-static int
-rte_port_kni_writer_nodrop_flush(void *port)
-{
- struct rte_port_kni_writer_nodrop *p =
- port;
-
- if (p->tx_buf_count > 0)
- send_burst_nodrop(p);
-
- return 0;
-}
-
-static int
-rte_port_kni_writer_nodrop_free(void *port)
-{
- if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__);
- return -EINVAL;
- }
-
- rte_port_kni_writer_nodrop_flush(port);
- rte_free(port);
-
- return 0;
-}
-
-static int rte_port_kni_writer_nodrop_stats_read(void *port,
- struct rte_port_out_stats *stats, int clear)
-{
- struct rte_port_kni_writer_nodrop *p =
- port;
-
- if (stats != NULL)
- memcpy(stats, &p->stats, sizeof(p->stats));
-
- if (clear)
- memset(&p->stats, 0, sizeof(p->stats));
-
- return 0;
-}
-
-
-/*
- * Summary of port operations
- */
-struct rte_port_in_ops rte_port_kni_reader_ops = {
- .f_create = rte_port_kni_reader_create,
- .f_free = rte_port_kni_reader_free,
- .f_rx = rte_port_kni_reader_rx,
- .f_stats = rte_port_kni_reader_stats_read,
-};
-
-struct rte_port_out_ops rte_port_kni_writer_ops = {
- .f_create = rte_port_kni_writer_create,
- .f_free = rte_port_kni_writer_free,
- .f_tx = rte_port_kni_writer_tx,
- .f_tx_bulk = rte_port_kni_writer_tx_bulk,
- .f_flush = rte_port_kni_writer_flush,
- .f_stats = rte_port_kni_writer_stats_read,
-};
-
-struct rte_port_out_ops rte_port_kni_writer_nodrop_ops = {
- .f_create = rte_port_kni_writer_nodrop_create,
- .f_free = rte_port_kni_writer_nodrop_free,
- .f_tx = rte_port_kni_writer_nodrop_tx,
- .f_tx_bulk = rte_port_kni_writer_nodrop_tx_bulk,
- .f_flush = rte_port_kni_writer_nodrop_flush,
- .f_stats = rte_port_kni_writer_nodrop_stats_read,
-};
diff --git a/lib/port/rte_port_kni.h b/lib/port/rte_port_kni.h
deleted file mode 100644
index 280f58c121e2..000000000000
--- a/lib/port/rte_port_kni.h
+++ /dev/null
@@ -1,63 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016 Ethan Zhuang <zhuangwj@gmail.com>.
- * Copyright(c) 2016 Intel Corporation.
- */
-
-#ifndef __INCLUDE_RTE_PORT_KNI_H__
-#define __INCLUDE_RTE_PORT_KNI_H__
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-/**
- * @file
- * RTE Port KNI Interface
- *
- * kni_reader: input port built on top of pre-initialized KNI interface
- * kni_writer: output port built on top of pre-initialized KNI interface
- */
-
-#include <stdint.h>
-
-#include "rte_port.h"
-
-/** kni_reader port parameters */
-struct rte_port_kni_reader_params {
- /** KNI interface reference */
- struct rte_kni *kni;
-};
-
-/** kni_reader port operations */
-extern struct rte_port_in_ops rte_port_kni_reader_ops;
-
-
-/** kni_writer port parameters */
-struct rte_port_kni_writer_params {
- /** KNI interface reference */
- struct rte_kni *kni;
- /** Burst size to KNI interface. */
- uint32_t tx_burst_sz;
-};
-
-/** kni_writer port operations */
-extern struct rte_port_out_ops rte_port_kni_writer_ops;
-
-/** kni_writer_nodrop port parameters */
-struct rte_port_kni_writer_nodrop_params {
- /** KNI interface reference */
- struct rte_kni *kni;
- /** Burst size to KNI interface. */
- uint32_t tx_burst_sz;
- /** Maximum number of retries, 0 for no limit */
- uint32_t n_retries;
-};
-
-/** kni_writer_nodrop port operations */
-extern struct rte_port_out_ops rte_port_kni_writer_nodrop_ops;
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif
diff --git a/lib/port/version.map b/lib/port/version.map
index af6cf696fd54..d67a03650d8b 100644
--- a/lib/port/version.map
+++ b/lib/port/version.map
@@ -7,9 +7,6 @@ DPDK_23 {
rte_port_fd_reader_ops;
rte_port_fd_writer_nodrop_ops;
rte_port_fd_writer_ops;
- rte_port_kni_reader_ops;
- rte_port_kni_writer_nodrop_ops;
- rte_port_kni_writer_ops;
rte_port_ring_multi_reader_ops;
rte_port_ring_multi_writer_nodrop_ops;
rte_port_ring_multi_writer_ops;
diff --git a/meson_options.txt b/meson_options.txt
index 82c8297065f0..7b67e0203f8f 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -10,7 +10,7 @@ option('disable_apps', type: 'string', value: '', description:
'Comma-separated list of apps to explicitly disable.')
option('disable_drivers', type: 'string', value: '', description:
'Comma-separated list of drivers to explicitly disable.')
-option('disable_libs', type: 'string', value: 'flow_classify,kni', description:
+option('disable_libs', type: 'string', value: 'flow_classify', description:
'Comma-separated list of libraries to explicitly disable. [NOTE: not all libs can be disabled]')
option('drivers_install_subdir', type: 'string', value: 'dpdk/pmds-<VERSION>', description:
'Subdirectory of libdir where to install PMDs. Defaults to using a versioned subdirectory.')
--
2.39.2
^ permalink raw reply [relevance 1%]
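Two fast-path idioms in the removed KNI writer code are worth preserving for reference. First, the `tx_bulk` path tests with two bit operations whether `pkts_mask` is a full, contiguous burst; second, `send_burst_nodrop()` maps a configured `n_retries` of 0 to `UINT64_MAX` so the retry loop needs no separate "unlimited" branch. A standalone sketch follows (hypothetical helper names, not part of any DPDK API; `per_try` models how many packets each `rte_kni_tx_burst()` call manages to send):

```c
#include <stdint.h>

/* Sketch of the removed tx_bulk fast-path test: returns 1 only when
 * pkts_mask is a contiguous run of set bits starting at bit 0 that
 * reaches the burst-size bit (bsz_mask == 1 << (tx_burst_sz - 1)). */
static int
is_full_contiguous_burst(uint64_t pkts_mask, uint64_t bsz_mask)
{
	/* (pkts_mask & (pkts_mask + 1)) == 0   <=>  mask == 2^k - 1   */
	/* ((pkts_mask & bsz_mask) ^ bsz_mask) == 0  <=>  burst bit set */
	uint64_t expr = (pkts_mask & (pkts_mask + 1)) |
			((pkts_mask & bsz_mask) ^ bsz_mask);

	return expr == 0;
}

/* Sketch of the nodrop retry pattern: a configured retry count of 0
 * means "retry until everything is sent"; UINT64_MAX as a sentinel
 * avoids an extra branch in the send loop. */
static uint32_t
send_with_retries(uint32_t count, uint32_t per_try, uint64_t n_retries_cfg)
{
	uint64_t n_retries = (n_retries_cfg == 0) ? UINT64_MAX : n_retries_cfg;
	uint32_t sent = (per_try < count) ? per_try : count;
	uint64_t i;

	for (i = 0; i < n_retries && sent < count; i++)
		sent += (per_try < count - sent) ? per_try : (count - sent);

	return sent;
}
```

With a burst size of 4 (`bsz_mask == 0x8`), `0xF` passes the contiguity test while `0x7` and the non-contiguous `0xB` do not; `send_with_retries(10, 3, 2)` gives up at 9 packets, while a configured retry count of 0 drains all 10.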
* DPDK 23.07 released
@ 2023-07-28 20:37 3% Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2023-07-28 20:37 UTC (permalink / raw)
To: announce
A new major release is available:
https://fast.dpdk.org/rel/dpdk-23.07.tar.xz
The number of commits is not that big
but the number of changed lines is quite significant:
1028 commits from 178 authors
1554 files changed, 157260 insertions(+), 58411 deletions(-)
This release happens on July 28, and 23.03 was on March 31.
Later would be too late :)
It looks like more help would be welcome at every stage of the process:
feel free to pitch in on testing, review or merge tasks.

It is not planned to start a maintenance branch for 23.07.
This version is ABI-compatible with 22.11 and 23.03.
Below are some new features:
- AMD CDX bus
- PCI MMIO read/write
- new flow patterns: Tx queue, Infiniband BTH
- new flow actions: push/remove IPv6 extension
- indirect flow rule list
- flow rule update
- vhost interrupt callback
- VDUSE in vhost library
- more ShangMi crypto algorithms
- PDCP library
- removed LiquidIO driver
- DMA device performance test application
- DTS basic UDP test
More details in the release notes:
https://doc.dpdk.org/guides/rel_notes/release_23_07.html
There are 37 new contributors (including authors, reviewers and testers).
Welcome to Abhijit Gangurde, Abhiram R N, Akihiko Odaki, Arnaud Fiorini,
Artemii Morozov, Bar Neuman, Bartosz Staszewski, Benjamin Mikailenko,
Charles Stoll, Dave Johnson, Dengdui Huang, Denis Pryazhennikov,
Eric Joyner, Heng Jiang, Itamar Gozlan, Jeroen de Borst, Jieqiang Wang,
Julien Aube, Kaijun Zeng, Kaisen You, Kaiyu Zhang, Kazatsker Kirill,
Lukasz Plachno, Manish Kurup, Nizan Zorea, Pavan Kumar Linga,
Pengfei Sun, Philip Prindeville, Pier Damouny, Priyalee Kushwaha,
Qin Ke, Ron Beider, Ronak Doshi, Samina Arshad, Sandilya Bhagi,
Vladimir Ratnikov, and Yutang Jiang.
Below is the number of commits per employer (with authors count):
252 Intel (45)
225 Marvell (29)
127 NVIDIA (28)
88 Red Hat (7)
73 Corigine (7)
54 Ark Networks (4)
53 Huawei (7)
32 Microsoft (2)
16 AMD (4)
13 Arm (3)
12 Broadcom (5)
11 VMware (2)
11 Trustnet (1)
...
A big thank you to all the courageous people who took on the unrewarding task
of reviewing others' work.
Based on Reviewed-by and Acked-by tags, the top non-PMD reviewers are:
44 Ferruh Yigit <ferruh.yigit@amd.com>
43 Akhil Goyal <gakhil@marvell.com>
38 David Marchand <david.marchand@redhat.com>
32 Chenbo Xia <chenbo.xia@intel.com>
31 Bruce Richardson <bruce.richardson@intel.com>
26 Jerin Jacob <jerinj@marvell.com>
21 Ori Kam <orika@nvidia.com>
20 Ciara Power <ciara.power@intel.com>
16 Morten Brørup <mb@smartsharesystems.com>
16 Anatoly Burakov <anatoly.burakov@intel.com>
More numbers? There are more than 300 open bugs in our Bugzilla.
The number of comments in half-done work (TODO, FIXME) keeps increasing,
especially in driver code (159 lines found). A complete report is coming.
We must put more effort into cleaning up such code.
The next version will be 23.11 in November.
The new features for 23.11 can be submitted during the next 2 weeks:
http://core.dpdk.org/roadmap#dates
Please share your roadmap.
Don't forget to register for the DPDK Summit in September:
https://events.linuxfoundation.org/dpdk-summit/
Thanks everyone, see you in Dublin
^ permalink raw reply [relevance 3%]
* Re: [PATCH v2] doc: announce new major ABI version
2023-07-28 17:02 7% ` Patrick Robb
@ 2023-07-28 17:33 4% ` Thomas Monjalon
2023-07-31 4:42 8% ` [EXT] " Akhil Goyal
1 sibling, 0 replies; 200+ results
From: Thomas Monjalon @ 2023-07-28 17:33 UTC (permalink / raw)
To: Patrick Robb; +Cc: Bruce Richardson, dev, david.marchand
28/07/2023 19:02, Patrick Robb:
> The Community Lab's ABI testing on new patchseries is now disabled until
> the 23.11 release. Thanks.
Perfect, thank you.
^ permalink raw reply [relevance 4%]
* Re: [PATCH v2] doc: announce new major ABI version
2023-07-28 16:03 4% ` Thomas Monjalon
@ 2023-07-28 17:02 7% ` Patrick Robb
2023-07-28 17:33 4% ` Thomas Monjalon
2023-07-31 4:42 8% ` [EXT] " Akhil Goyal
0 siblings, 2 replies; 200+ results
From: Patrick Robb @ 2023-07-28 17:02 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: Bruce Richardson, dev
The Community Lab's ABI testing on new patchseries is now disabled until
the 23.11 release. Thanks.
^ permalink raw reply [relevance 7%]
* Re: [PATCH v2] doc: announce new major ABI version
2023-07-28 15:23 4% ` Bruce Richardson
@ 2023-07-28 16:03 4% ` Thomas Monjalon
2023-07-28 17:02 7% ` Patrick Robb
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2023-07-28 16:03 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev
28/07/2023 17:23, Bruce Richardson:
> On Fri, Jul 28, 2023 at 05:18:40PM +0200, Thomas Monjalon wrote:
> > The next DPDK release 23.11 won't keep ABI compatibility.
> > Only the changes impacting the users should be announced in advance.
> >
> > Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> > ---
> > v2: improve wording (thanks Bruce)
> > ---
> > doc/guides/rel_notes/deprecation.rst | 12 +++++++++---
> > 1 file changed, 9 insertions(+), 3 deletions(-)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > index 91ac8f0229..18281d7304 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -4,9 +4,15 @@
> > ABI and API Deprecation
> > =======================
> >
> > -See the guidelines document for details of the :doc:`ABI policy
> > -<../contributing/abi_policy>`. API and ABI deprecation notices are to be posted
> > -here.
> > +See the guidelines document for details
> > +of the :doc:`ABI policy <../contributing/abi_policy>`.
> > +
> This has a strange line-break position. It can probably be a single line.
Will keep the original break, which looked better.
> > +With DPDK 23.11, there will be a new major ABI version: 24.
> > +This means that during the development of 23.11,
> > +new items may be added to structs or enums,
> > +even if those additions involve an ABI compatibility breakage.
> > +
> > +Other API and ABI deprecation notices are to be posted below.
>
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Applied
^ permalink raw reply [relevance 4%]
* Re: [PATCH] doc: postpone deprecation of pipeline legacy API
2023-07-20 10:37 0% ` Dumitrescu, Cristian
@ 2023-07-28 16:02 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2023-07-28 16:02 UTC (permalink / raw)
To: Dumitrescu, Cristian
Cc: Richardson, Bruce, dev, Nicolau, Radu, R, Kamalakannan,
Suresh Narayane, Harshad
> > > Postpone the deprecation of the legacy pipeline, table and port
> > > library API and gradual stabilization of the new API.
> > >
> > > Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
> > > ---
> > > doc/guides/rel_notes/deprecation.rst | 21 +++++++++------------
> > > 1 file changed, 9 insertions(+), 12 deletions(-)
> > >
> >
> > No objection to this, though it would be really good to get the new
> > functions stabilized in 23.11 when we lock down the 24 ABI.
> >
>
> Yes, fully agree, let's see if we can make this happen for 23.11
>
> > Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Applied, thanks.
^ permalink raw reply [relevance 0%]
* RE: [PATCH] doc: announce ethdev operation struct changes
2023-07-28 15:37 0% ` Thomas Monjalon
@ 2023-07-28 15:55 0% ` Morten Brørup
2023-08-01 3:19 0% ` Feifei Wang
0 siblings, 1 reply; 200+ results
From: Morten Brørup @ 2023-07-28 15:55 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, nd, Honnappa.Nagarahalli, Feifei Wang, Ruifeng Wang,
Feifei Wang, ferruh.yigit, konstantin.ananyev, andrew.rybchenko
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Friday, 28 July 2023 17.38
>
> 28/07/2023 17:33, Morten Brørup:
> > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > Sent: Friday, 28 July 2023 17.20
> > >
> > > 28/07/2023 17:08, Morten Brørup:
> > > > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > > > Sent: Friday, 28 July 2023 16.57
> > > > >
> > > > > 04/07/2023 10:10, Feifei Wang:
> > > > > > To support mbufs recycle mode, announce the coming ABI changes
> > > > > > from DPDK 23.11.
> > > > > >
> > > > > > Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> > > > > > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > > > > > ---
> > > > > > doc/guides/rel_notes/deprecation.rst | 4 ++++
> > > > > > 1 file changed, 4 insertions(+)
> > > > > >
> > > > > > diff --git a/doc/guides/rel_notes/deprecation.rst
> > > > > b/doc/guides/rel_notes/deprecation.rst
> > > > > > index 66431789b0..c7e1ffafb2 100644
> > > > > > --- a/doc/guides/rel_notes/deprecation.rst
> > > > > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > > > > @@ -118,6 +118,10 @@ Deprecation Notices
> > > > > > The legacy actions should be removed
> > > > > > once ``MODIFY_FIELD`` alternative is implemented in drivers.
> > > > > >
> > > > > > +* ethdev: The Ethernet device data structure ``struct rte_eth_dev``
> and
> > > > > > + the fast-path ethdev flat array ``struct rte_eth_fp_ops`` will be
> > > updated
> > > > > > + with new fields to support mbufs recycle mode from DPDK 23.11.
> > > >
> > > > Existing fields will also be moved around [1]:
> > > >
> > > > @@ -83,15 +90,17 @@ struct rte_eth_fp_ops {
> > > > * Rx fast-path functions and related data.
> > > > * 64-bit systems: occupies first 64B line
> > > > */
> > > > + /** Rx queues data. */
> > > > + struct rte_ethdev_qdata rxq;
> > > > /** PMD receive function. */
> > > > eth_rx_burst_t rx_pkt_burst;
> > > > /** Get the number of used Rx descriptors. */
> > > > eth_rx_queue_count_t rx_queue_count;
> > > > /** Check the status of a Rx descriptor. */
> > > > eth_rx_descriptor_status_t rx_descriptor_status;
> > > > - /** Rx queues data. */
> > > > - struct rte_ethdev_qdata rxq;
> > > > - uintptr_t reserved1[3];
> > > > + /** Refill Rx descriptors with the recycling mbufs. */
> > > > + eth_recycle_rx_descriptors_refill_t
> recycle_rx_descriptors_refill;
> > > > + uintptr_t reserved1[2];
> > > > /**@}*/
> > > >
> > > > /**@{*/
> > > > @@ -99,15 +108,17 @@ struct rte_eth_fp_ops {
> > > > * Tx fast-path functions and related data.
> > > > * 64-bit systems: occupies second 64B line
> > > > */
> > > > + /** Tx queues data. */
> > > > + struct rte_ethdev_qdata txq;
> > > > /** PMD transmit function. */
> > > > eth_tx_burst_t tx_pkt_burst;
> > > > /** PMD transmit prepare function. */
> > > > eth_tx_prep_t tx_pkt_prepare;
> > > > /** Check the status of a Tx descriptor. */
> > > > eth_tx_descriptor_status_t tx_descriptor_status;
> > > > - /** Tx queues data. */
> > > > - struct rte_ethdev_qdata txq;
> > > > - uintptr_t reserved2[3];
> > > > + /** Copy used mbufs from Tx mbuf ring into Rx. */
> > > > + eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
> > > > + uintptr_t reserved2[2];
> > > > /**@}*/
> > >
> > > Removing existing fields should be announced explicitly.
> >
> > Agreed. And the patch misses this. The "rxq" and "txq" fields are not being
> removed, they are being moved up in the structures. Your comment about
> explicit mentioning still applies!
> >
> > If there's no time to wait for a new patch version from Feifei, perhaps you
> improve the description while merging.
>
> If it's only moving fields, we can skip.
OK. Thank you for elaborating.
> The real change is the size of the reserved fields,
> so it looks acceptable without notice.
Agree.
Thoughts for later: We should perhaps document that changing the size of reserved fields is acceptable, and whether completely removing a reserved field is also acceptable.
^ permalink raw reply [relevance 0%]
* Re: [EXT] Re: [PATCH v2] doc: announce single-event enqueue/dequeue ABI change
2023-07-05 13:02 4% ` [EXT] " Pavan Nikhilesh Bhagavatula
@ 2023-07-28 15:51 4% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2023-07-28 15:51 UTC (permalink / raw)
To: Jerin Jacob, Mattias Rönnblom
Cc: dev, Jerin Jacob Kollanukkaran, hofors, dev, Timothy McDaniel,
Hemant Agrawal, Sachin Saxena, Harry van Haaren, Liang Ma,
Peter Mccarthy, Pavan Nikhilesh Bhagavatula
05/07/2023 15:02, Pavan Nikhilesh Bhagavatula:
> > On Wed, Jul 5, 2023 at 4:48 PM Mattias Rönnblom
> > <mattias.ronnblom@ericsson.com> wrote:
> > >
> > > Announce the removal of the single-event enqueue and dequeue
> > > operations from the eventdev ABI.
> > >
> > > Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> >
> > Acked-by: Jerin Jacob <jerinj@marvell.com>
>
> Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Applied, thanks.
^ permalink raw reply [relevance 4%]
* Re: [PATCH v1] doc: deprecation notice to add callback data to rte_event_fp_ops
2023-07-25 8:40 0% ` Ferruh Yigit
2023-07-25 16:46 0% ` Hemant Agrawal
@ 2023-07-28 15:42 3% ` Thomas Monjalon
1 sibling, 0 replies; 200+ results
From: Thomas Monjalon @ 2023-07-28 15:42 UTC (permalink / raw)
To: Sivaprasad Tummala
Cc: dev, bruce.richardson, david.marchand, jerinjacobk, techboard,
Ferruh Yigit
25/07/2023 10:40, Ferruh Yigit:
> On 7/17/2023 12:24 PM, Sivaprasad Tummala wrote:
> > Deprecation notice to add "rte_eventdev_port_data" field to
> > ``rte_event_fp_ops`` for callback support.
> >
> > Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> > ---
> > doc/guides/rel_notes/deprecation.rst | 7 +++++++
> > 1 file changed, 7 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > index fb771a0305..057f97ce5a 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -130,6 +130,13 @@ Deprecation Notices
> > ``rte_cryptodev_get_auth_algo_string``, ``rte_cryptodev_get_aead_algo_string`` and
> > ``rte_cryptodev_asym_get_xform_string`` respectively.
> >
> > +* eventdev: The struct rte_event_fp_ops will be updated with a new element
> > + rte_eventdev_port_data to support optional callbacks in DPDK 23.11.
> > + rte_eventdev_port_data is used to hold callbacks registered optionally
> > + per event device port and associated callback data. By adding rte_eventdev_port_data
> > + to rte_event_fp_ops, allows to fetch this data for fastpath eventdev inline functions
> > + in advance. This changes the size of rte_event_fp_ops and result in ABI change.
> > +
> > * security: Hide structures ``rte_security_ops`` and ``rte_security_ctx``
> > as these are internal to DPDK library and drivers.
> >
>
> +techboard,
>
> Request for review/ack, patch is to extend eventdev to support callbacks
> per packet.
It does not look necessary to announce adding new fields.
The ABI compatibility breakage should be covered by this patch:
https://patches.dpdk.org/project/dpdk/patch/20230728152052.1204486-1-thomas@monjalon.net/
Marking as superseded.
^ permalink raw reply [relevance 3%]
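The mechanism described in the notice, per-port callback data reachable from the fast-path ops table, can be sketched as follows (all names are hypothetical; this is not the real eventdev API):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of the proposed pattern: per-port callback data
 * is stored next to the fast-path ops so inline helpers can fetch it
 * without an extra lookup. None of these names are real eventdev API. */

typedef uint16_t (*enq_cb_t)(uint8_t port, uint16_t nb_events, void *user);

struct port_cb_data {
	enq_cb_t cb;   /* optional callback, NULL when not registered */
	void *user;    /* opaque data handed back to the callback */
};

struct event_fp_ops_sketch {
	struct port_cb_data *port_data; /* new: per-port callback slots */
};

static uint16_t
enqueue_with_cb(const struct event_fp_ops_sketch *ops, uint8_t port,
		uint16_t nb_events)
{
	const struct port_cb_data *pd = &ops->port_data[port];

	if (pd->cb != NULL)
		nb_events = pd->cb(port, nb_events, pd->user);
	return nb_events;
}

/* Example callback: pretend half of the burst must be filtered out. */
static uint16_t
drop_half_cb(uint8_t port, uint16_t nb_events, void *user)
{
	(void)port;
	(void)user;
	return nb_events / 2;
}

static uint16_t
demo(int with_cb)
{
	struct port_cb_data pd = { with_cb ? drop_half_cb : NULL, NULL };
	struct event_fp_ops_sketch ops = { &pd };

	return enqueue_with_cb(&ops, 0, 8);
}
```

Here `demo(1)` returns 4 (the registered callback trimmed the burst) and `demo(0)` returns 8 (no callback registered, so the burst passes through untouched).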
* Re: [PATCH] doc: announce ethdev operation struct changes
2023-07-28 15:33 0% ` Morten Brørup
@ 2023-07-28 15:37 0% ` Thomas Monjalon
2023-07-28 15:55 0% ` Morten Brørup
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2023-07-28 15:37 UTC (permalink / raw)
To: Morten Brørup
Cc: dev, nd, Honnappa.Nagarahalli, Feifei Wang, Ruifeng Wang,
Feifei Wang, ferruh.yigit, konstantin.ananyev, andrew.rybchenko
28/07/2023 17:33, Morten Brørup:
> > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > Sent: Friday, 28 July 2023 17.20
> >
> > 28/07/2023 17:08, Morten Brørup:
> > > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > > Sent: Friday, 28 July 2023 16.57
> > > >
> > > > 04/07/2023 10:10, Feifei Wang:
> > > > > To support mbufs recycle mode, announce the coming ABI changes
> > > > > from DPDK 23.11.
> > > > >
> > > > > Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> > > > > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > > > > ---
> > > > > doc/guides/rel_notes/deprecation.rst | 4 ++++
> > > > > 1 file changed, 4 insertions(+)
> > > > >
> > > > > diff --git a/doc/guides/rel_notes/deprecation.rst
> > > > b/doc/guides/rel_notes/deprecation.rst
> > > > > index 66431789b0..c7e1ffafb2 100644
> > > > > --- a/doc/guides/rel_notes/deprecation.rst
> > > > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > > > @@ -118,6 +118,10 @@ Deprecation Notices
> > > > > The legacy actions should be removed
> > > > > once ``MODIFY_FIELD`` alternative is implemented in drivers.
> > > > >
> > > > > +* ethdev: The Ethernet device data structure ``struct rte_eth_dev`` and
> > > > > + the fast-path ethdev flat array ``struct rte_eth_fp_ops`` will be
> > updated
> > > > > + with new fields to support mbufs recycle mode from DPDK 23.11.
> > >
> > > Existing fields will also be moved around [1]:
> > >
> > > @@ -83,15 +90,17 @@ struct rte_eth_fp_ops {
> > > * Rx fast-path functions and related data.
> > > * 64-bit systems: occupies first 64B line
> > > */
> > > + /** Rx queues data. */
> > > + struct rte_ethdev_qdata rxq;
> > > /** PMD receive function. */
> > > eth_rx_burst_t rx_pkt_burst;
> > > /** Get the number of used Rx descriptors. */
> > > eth_rx_queue_count_t rx_queue_count;
> > > /** Check the status of a Rx descriptor. */
> > > eth_rx_descriptor_status_t rx_descriptor_status;
> > > - /** Rx queues data. */
> > > - struct rte_ethdev_qdata rxq;
> > > - uintptr_t reserved1[3];
> > > + /** Refill Rx descriptors with the recycling mbufs. */
> > > + eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
> > > + uintptr_t reserved1[2];
> > > /**@}*/
> > >
> > > /**@{*/
> > > @@ -99,15 +108,17 @@ struct rte_eth_fp_ops {
> > > * Tx fast-path functions and related data.
> > > * 64-bit systems: occupies second 64B line
> > > */
> > > + /** Tx queues data. */
> > > + struct rte_ethdev_qdata txq;
> > > /** PMD transmit function. */
> > > eth_tx_burst_t tx_pkt_burst;
> > > /** PMD transmit prepare function. */
> > > eth_tx_prep_t tx_pkt_prepare;
> > > /** Check the status of a Tx descriptor. */
> > > eth_tx_descriptor_status_t tx_descriptor_status;
> > > - /** Tx queues data. */
> > > - struct rte_ethdev_qdata txq;
> > > - uintptr_t reserved2[3];
> > > + /** Copy used mbufs from Tx mbuf ring into Rx. */
> > > + eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
> > > + uintptr_t reserved2[2];
> > > /**@}*/
> >
> > Removing existing fields should be announced explicitly.
>
> Agreed. And the patch misses this. The "rxq" and "txq" fields are not being removed, they are being moved up in the structures. Your comment about explicit mentioning still applies!
>
> If there's no time to wait for a new patch version from Feifei, perhaps you improve the description while merging.
If it's only moving fields, we can skip.
The real change is the size of the reserved fields,
so it looks acceptable without notice.
^ permalink raw reply [relevance 0%]
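The conclusion above, that consuming a reserved slot keeps the struct size and hence the ABI footprint unchanged, can be checked with a reduced sketch (hypothetical types and layout, not the real ``rte_eth_fp_ops`` definition):

```c
#include <stdint.h>

/* Reduced sketch (hypothetical, not the real DPDK definitions) of the
 * discussed rte_eth_fp_ops change: the queue data moves to the front
 * of the cache line and the new callback takes one reserved slot, so
 * the total size, and thus the ABI footprint, is unchanged. */

typedef uint16_t (*burst_fn_t)(void *queue, void **pkts, uint16_t nb);

struct fp_ops_old {
	burst_fn_t rx_pkt_burst;
	burst_fn_t rx_queue_count;
	burst_fn_t rx_descriptor_status;
	void *rxq;                 /* queue data after the callbacks */
	uintptr_t reserved1[4];    /* 8 pointer-sized slots in total */
};

struct fp_ops_new {
	void *rxq;                 /* queue data moved to the front */
	burst_fn_t rx_pkt_burst;
	burst_fn_t rx_queue_count;
	burst_fn_t rx_descriptor_status;
	burst_fn_t recycle_rx_descriptors_refill; /* consumes a slot */
	uintptr_t reserved1[3];
};
```

Both structs occupy eight pointer-sized slots (one 64-byte cache line on a 64-bit target), which is why the change needs no size-related deprecation notice.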
* RE: [PATCH] doc: announce ethdev operation struct changes
2023-07-28 15:20 0% ` Thomas Monjalon
@ 2023-07-28 15:33 0% ` Morten Brørup
2023-07-28 15:37 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Morten Brørup @ 2023-07-28 15:33 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, nd, Honnappa.Nagarahalli, Feifei Wang, Ruifeng Wang,
Feifei Wang, ferruh.yigit, konstantin.ananyev, andrew.rybchenko
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Friday, 28 July 2023 17.20
>
> 28/07/2023 17:08, Morten Brørup:
> > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > Sent: Friday, 28 July 2023 16.57
> > >
> > > 04/07/2023 10:10, Feifei Wang:
> > > > To support mbufs recycle mode, announce the coming ABI changes
> > > > from DPDK 23.11.
> > > >
> > > > Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> > > > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > > > ---
> > > > doc/guides/rel_notes/deprecation.rst | 4 ++++
> > > > 1 file changed, 4 insertions(+)
> > > >
> > > > diff --git a/doc/guides/rel_notes/deprecation.rst
> > > b/doc/guides/rel_notes/deprecation.rst
> > > > index 66431789b0..c7e1ffafb2 100644
> > > > --- a/doc/guides/rel_notes/deprecation.rst
> > > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > > @@ -118,6 +118,10 @@ Deprecation Notices
> > > > The legacy actions should be removed
> > > > once ``MODIFY_FIELD`` alternative is implemented in drivers.
> > > >
> > > > +* ethdev: The Ethernet device data structure ``struct rte_eth_dev`` and
> > > > + the fast-path ethdev flat array ``struct rte_eth_fp_ops`` will be
> updated
> > > > + with new fields to support mbufs recycle mode from DPDK 23.11.
> >
> > Existing fields will also be moved around [1]:
> >
> > @@ -83,15 +90,17 @@ struct rte_eth_fp_ops {
> > * Rx fast-path functions and related data.
> > * 64-bit systems: occupies first 64B line
> > */
> > + /** Rx queues data. */
> > + struct rte_ethdev_qdata rxq;
> > /** PMD receive function. */
> > eth_rx_burst_t rx_pkt_burst;
> > /** Get the number of used Rx descriptors. */
> > eth_rx_queue_count_t rx_queue_count;
> > /** Check the status of a Rx descriptor. */
> > eth_rx_descriptor_status_t rx_descriptor_status;
> > - /** Rx queues data. */
> > - struct rte_ethdev_qdata rxq;
> > - uintptr_t reserved1[3];
> > + /** Refill Rx descriptors with the recycling mbufs. */
> > + eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
> > + uintptr_t reserved1[2];
> > /**@}*/
> >
> > /**@{*/
> > @@ -99,15 +108,17 @@ struct rte_eth_fp_ops {
> > * Tx fast-path functions and related data.
> > * 64-bit systems: occupies second 64B line
> > */
> > + /** Tx queues data. */
> > + struct rte_ethdev_qdata txq;
> > /** PMD transmit function. */
> > eth_tx_burst_t tx_pkt_burst;
> > /** PMD transmit prepare function. */
> > eth_tx_prep_t tx_pkt_prepare;
> > /** Check the status of a Tx descriptor. */
> > eth_tx_descriptor_status_t tx_descriptor_status;
> > - /** Tx queues data. */
> > - struct rte_ethdev_qdata txq;
> > - uintptr_t reserved2[3];
> > + /** Copy used mbufs from Tx mbuf ring into Rx. */
> > + eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
> > + uintptr_t reserved2[2];
> > /**@}*/
>
> Removing existing fields should be announced explicitly.
Agreed. And the patch misses this. The "rxq" and "txq" fields are not being removed, they are being moved up in the structures. Your comment about explicit mentioning still applies!
If there's no time to wait for a new patch version from Feifei, perhaps you improve the description while merging.
^ permalink raw reply [relevance 0%]
* RE: [PATCH v2] doc: announce new major ABI version
2023-07-28 15:18 27% ` [PATCH v2] " Thomas Monjalon
2023-07-28 15:23 4% ` Bruce Richardson
@ 2023-07-28 15:25 4% ` Morten Brørup
1 sibling, 0 replies; 200+ results
From: Morten Brørup @ 2023-07-28 15:25 UTC (permalink / raw)
To: Thomas Monjalon, dev
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Friday, 28 July 2023 17.19
>
> The next DPDK release 23.11 won't keep ABI compatibility.
> Only the changes impacting the users should be announced in advance.
>
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> ---
Acked-by: Morten Brørup <mb@smartsharesystems.com>
^ permalink raw reply [relevance 4%]
* Re: [PATCH v2] doc: announce new major ABI version
2023-07-28 15:18 27% ` [PATCH v2] " Thomas Monjalon
@ 2023-07-28 15:23 4% ` Bruce Richardson
2023-07-28 16:03 4% ` Thomas Monjalon
2023-07-28 15:25 4% ` Morten Brørup
1 sibling, 1 reply; 200+ results
From: Bruce Richardson @ 2023-07-28 15:23 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
On Fri, Jul 28, 2023 at 05:18:40PM +0200, Thomas Monjalon wrote:
> The next DPDK release 23.11 won't keep ABI compatibility.
> Only the changes impacting the users should be announced in advance.
>
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> ---
> v2: improve wording (thanks Bruce)
> ---
> doc/guides/rel_notes/deprecation.rst | 12 +++++++++---
> 1 file changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 91ac8f0229..18281d7304 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -4,9 +4,15 @@
> ABI and API Deprecation
> =======================
>
> -See the guidelines document for details of the :doc:`ABI policy
> -<../contributing/abi_policy>`. API and ABI deprecation notices are to be posted
> -here.
> +See the guidelines document for details
> +of the :doc:`ABI policy <../contributing/abi_policy>`.
> +
This has a strange line-break position. It can probably be a single line.
> +With DPDK 23.11, there will be a new major ABI version: 24.
> +This means that during the development of 23.11,
> +new items may be added to structs or enums,
> +even if those additions involve an ABI compatibility breakage.
> +
> +Other API and ABI deprecation notices are to be posted below.
>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
* [PATCH v2] doc: announce new major ABI version
2023-07-28 14:29 27% [PATCH] doc: announce new major ABI version Thomas Monjalon
@ 2023-07-28 15:18 27% ` Thomas Monjalon
2023-07-28 15:23 4% ` Bruce Richardson
2023-07-28 15:25 4% ` Morten Brørup
0 siblings, 2 replies; 200+ results
From: Thomas Monjalon @ 2023-07-28 15:18 UTC (permalink / raw)
To: dev
The next DPDK release 23.11 won't keep ABI compatibility.
Only the changes impacting the users should be announced in advance.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
v2: improve wording (thanks Bruce)
---
doc/guides/rel_notes/deprecation.rst | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 91ac8f0229..18281d7304 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -4,9 +4,15 @@
ABI and API Deprecation
=======================
-See the guidelines document for details of the :doc:`ABI policy
-<../contributing/abi_policy>`. API and ABI deprecation notices are to be posted
-here.
+See the guidelines document for details
+of the :doc:`ABI policy <../contributing/abi_policy>`.
+
+With DPDK 23.11, there will be a new major ABI version: 24.
+This means that during the development of 23.11,
+new items may be added to structs or enums,
+even if those additions involve an ABI compatibility breakage.
+
+Other API and ABI deprecation notices are to be posted below.
Deprecation Notices
-------------------
--
2.41.0
* Re: [PATCH] doc: announce ethdev operation struct changes
2023-07-28 15:08 0% ` Morten Brørup
@ 2023-07-28 15:20 0% ` Thomas Monjalon
2023-07-28 15:33 0% ` Morten Brørup
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2023-07-28 15:20 UTC (permalink / raw)
To: Morten Brørup
Cc: dev, nd, Honnappa.Nagarahalli, Feifei Wang, Ruifeng Wang,
Feifei Wang, ferruh.yigit, konstantin.ananyev, andrew.rybchenko
28/07/2023 17:08, Morten Brørup:
> > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > Sent: Friday, 28 July 2023 16.57
> >
> > 04/07/2023 10:10, Feifei Wang:
> > > To support mbufs recycle mode, announce the coming ABI changes
> > > from DPDK 23.11.
> > >
> > > Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> > > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > > ---
> > > doc/guides/rel_notes/deprecation.rst | 4 ++++
> > > 1 file changed, 4 insertions(+)
> > >
> > > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > > index 66431789b0..c7e1ffafb2 100644
> > > --- a/doc/guides/rel_notes/deprecation.rst
> > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > @@ -118,6 +118,10 @@ Deprecation Notices
> > > The legacy actions should be removed
> > > once ``MODIFY_FIELD`` alternative is implemented in drivers.
> > >
> > > +* ethdev: The Ethernet device data structure ``struct rte_eth_dev`` and
> > > + the fast-path ethdev flat array ``struct rte_eth_fp_ops`` will be updated
> > > + with new fields to support mbufs recycle mode from DPDK 23.11.
>
> Existing fields will also be moved around [1]:
>
> @@ -83,15 +90,17 @@ struct rte_eth_fp_ops {
> * Rx fast-path functions and related data.
> * 64-bit systems: occupies first 64B line
> */
> + /** Rx queues data. */
> + struct rte_ethdev_qdata rxq;
> /** PMD receive function. */
> eth_rx_burst_t rx_pkt_burst;
> /** Get the number of used Rx descriptors. */
> eth_rx_queue_count_t rx_queue_count;
> /** Check the status of a Rx descriptor. */
> eth_rx_descriptor_status_t rx_descriptor_status;
> - /** Rx queues data. */
> - struct rte_ethdev_qdata rxq;
> - uintptr_t reserved1[3];
> + /** Refill Rx descriptors with the recycling mbufs. */
> + eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
> + uintptr_t reserved1[2];
> /**@}*/
>
> /**@{*/
> @@ -99,15 +108,17 @@ struct rte_eth_fp_ops {
> * Tx fast-path functions and related data.
> * 64-bit systems: occupies second 64B line
> */
> + /** Tx queues data. */
> + struct rte_ethdev_qdata txq;
> /** PMD transmit function. */
> eth_tx_burst_t tx_pkt_burst;
> /** PMD transmit prepare function. */
> eth_tx_prep_t tx_pkt_prepare;
> /** Check the status of a Tx descriptor. */
> eth_tx_descriptor_status_t tx_descriptor_status;
> - /** Tx queues data. */
> - struct rte_ethdev_qdata txq;
> - uintptr_t reserved2[3];
> + /** Copy used mbufs from Tx mbuf ring into Rx. */
> + eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
> + uintptr_t reserved2[2];
> /**@}*/
Removing existing fields should be announced explicitly.
* Re: [PATCH v2] doc: announce changes to event device structures
@ 2023-07-28 15:14 3% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2023-07-28 15:14 UTC (permalink / raw)
To: pbhagavatula
Cc: dev, jerinj, jay.jayatheerthan, erik.g.carrillo,
abhinandan.gujjar, timothy.mcdaniel, sthotton, hemant.agrawal,
nipun.gupta, harry.van.haaren, mattias.ronnblom, liangma,
peter.mccarthy, Jerin Jacob
27/07/2023 11:01, Jerin Jacob:
> On Wed, Jul 26, 2023 at 9:25 PM <pbhagavatula@marvell.com> wrote:
> >
> > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >
> > The structures ``rte_event_dev_info``, ``rte_event_fp_ops`` will be
> > modified to add new elements to support link profiles.
> > A new field ``max_profiles_per_port`` will be added to
> > ``rte_event_dev_info`` and ``switch_profile`` will be added to
> > ``rte_event_fp_ops``.
> >
> > A profile is a unique identifier for a set of event queues linked to
> > an event port. The unique identifier spans from 0 to the value
> > advertised in ``rte_event_dev_info.max_profiles_per_port`` - 1.
> >
> > Two new experimental APIs will be added, one to associate a set of
> > event queues with a profile which can be linked to an event port and
> > another to switch the profile which would affect the next dequeue call.
> >
> > Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> > +
> > +* eventdev: The structures ``rte_event_dev_info``, ``rte_event_fp_ops`` will be
> > modified to add new elements to support link profiles. A new field
> > + ``max_profiles_per_port`` will be added to ``rte_event_dev_info`` and
> > + ``switch_profile`` will be added to ``rte_event_fp_ops``.
>
> There are other deprecation notices to update rte_event_fp_ops.
> Exact fields in rte_event_dev_info be decided later along with patch.
> With that
>
> Acked-by: Jerin Jacob <jerinj@marvell.com>
Actually it does not look necessary to announce adding new fields.
The ABI compatibility breakage should be covered by this patch:
https://patches.dpdk.org/project/dpdk/patch/20230728142946.1201459-1-thomas@monjalon.net/
Marking as superseded.
* RE: [PATCH] doc: announce ethdev operation struct changes
2023-07-28 14:56 3% ` Thomas Monjalon
2023-07-28 15:04 0% ` Thomas Monjalon
@ 2023-07-28 15:08 0% ` Morten Brørup
2023-07-28 15:20 0% ` Thomas Monjalon
1 sibling, 1 reply; 200+ results
From: Morten Brørup @ 2023-07-28 15:08 UTC (permalink / raw)
To: Thomas Monjalon, dev
Cc: nd, Honnappa.Nagarahalli, Feifei Wang, Ruifeng Wang, Feifei Wang,
ferruh.yigit, konstantin.ananyev, andrew.rybchenko
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Friday, 28 July 2023 16.57
>
> 04/07/2023 10:10, Feifei Wang:
> > To support mbufs recycle mode, announce the coming ABI changes
> > from DPDK 23.11.
> >
> > Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > ---
> > doc/guides/rel_notes/deprecation.rst | 4 ++++
> > 1 file changed, 4 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > index 66431789b0..c7e1ffafb2 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -118,6 +118,10 @@ Deprecation Notices
> > The legacy actions should be removed
> > once ``MODIFY_FIELD`` alternative is implemented in drivers.
> >
> > +* ethdev: The Ethernet device data structure ``struct rte_eth_dev`` and
> > + the fast-path ethdev flat array ``struct rte_eth_fp_ops`` will be updated
> > + with new fields to support mbufs recycle mode from DPDK 23.11.
Existing fields will also be moved around [1]:
@@ -83,15 +90,17 @@ struct rte_eth_fp_ops {
* Rx fast-path functions and related data.
* 64-bit systems: occupies first 64B line
*/
+ /** Rx queues data. */
+ struct rte_ethdev_qdata rxq;
/** PMD receive function. */
eth_rx_burst_t rx_pkt_burst;
/** Get the number of used Rx descriptors. */
eth_rx_queue_count_t rx_queue_count;
/** Check the status of a Rx descriptor. */
eth_rx_descriptor_status_t rx_descriptor_status;
- /** Rx queues data. */
- struct rte_ethdev_qdata rxq;
- uintptr_t reserved1[3];
+ /** Refill Rx descriptors with the recycling mbufs. */
+ eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
+ uintptr_t reserved1[2];
/**@}*/
/**@{*/
@@ -99,15 +108,17 @@ struct rte_eth_fp_ops {
* Tx fast-path functions and related data.
* 64-bit systems: occupies second 64B line
*/
+ /** Tx queues data. */
+ struct rte_ethdev_qdata txq;
/** PMD transmit function. */
eth_tx_burst_t tx_pkt_burst;
/** PMD transmit prepare function. */
eth_tx_prep_t tx_pkt_prepare;
/** Check the status of a Tx descriptor. */
eth_tx_descriptor_status_t tx_descriptor_status;
- /** Tx queues data. */
- struct rte_ethdev_qdata txq;
- uintptr_t reserved2[3];
+ /** Copy used mbufs from Tx mbuf ring into Rx. */
+ eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
+ uintptr_t reserved2[2];
/**@}*/
[1]: https://patchwork.dpdk.org/project/dpdk/patch/20230706095004.1848199-2-feifei.wang2@arm.com/
>
> It does seem to be an impacting change for existing applications,
> except that it is allowed only during ABI breakage window.
>
> I think my patch should be enough:
> https://patches.dpdk.org/project/dpdk/patch/20230728142946.1201459-1-
> thomas@monjalon.net/
>
* Re: [PATCH] doc: deprecation notice to add RSS hash algorithm field
@ 2023-07-28 15:06 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2023-07-28 15:06 UTC (permalink / raw)
To: Ferruh Yigit, Stephen Hemminger; +Cc: dev, Dongdong Liu
06/06/2023 18:35, Stephen Hemminger:
> On Tue, 6 Jun 2023 16:50:53 +0100
> Ferruh Yigit <ferruh.yigit@amd.com> wrote:
>
> > On 6/6/2023 4:39 PM, Stephen Hemminger wrote:
> > > On Tue, 6 Jun 2023 20:11:26 +0800
> > > Dongdong Liu <liudongdong3@huawei.com> wrote:
> > >
> > >> Deprecation notice to add "func" field to ``rte_eth_rss_conf``
> > >> structure for RSS hash algorithm.
> > >>
> > >> Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
> > >> ---
> > >
> > > New fields do not require deprecation notice.
> > > Since this seems to be a repeated issue, perhaps someone should
> > > add this to the documentation.
I've just sent such a patch:
https://patches.dpdk.org/project/dpdk/patch/20230728142946.1201459-1-thomas@monjalon.net/
> > Hi Stephen,
> >
> > This is follow up to an existing patchset:
> > https://patches.dpdk.org/project/dpdk/list/?series=27400&state=*
> >
> > Although field is addition to the "struct rte_eth_rss_conf" struct, it
> > is embedded into "struct rte_eth_conf" which is parameter to an API, so
> > change cause size increase in outer struct and causes ABI breakage,
> > requiring deprecation notice.
>
> It will change ABI so will have to wait for 23.11.
> But the purpose of deprecation notice is more about telling users that API
> will change.
>
> The automated tools may give false complaint. Ok to add to deprecation,
> but really not necessary.
This deprecation notice is marked as superseded,
given my patch above should be enough.
* Re: [PATCH] doc: announce ethdev operation struct changes
2023-07-28 14:56 3% ` Thomas Monjalon
@ 2023-07-28 15:04 0% ` Thomas Monjalon
2023-07-28 15:08 0% ` Morten Brørup
1 sibling, 0 replies; 200+ results
From: Thomas Monjalon @ 2023-07-28 15:04 UTC (permalink / raw)
To: dev
Cc: nd, Honnappa.Nagarahalli, Feifei Wang, Ruifeng Wang, Feifei Wang,
ferruh.yigit, konstantin.ananyev, Morten Brørup,
andrew.rybchenko
28/07/2023 16:56, Thomas Monjalon:
> 04/07/2023 10:10, Feifei Wang:
> > To support mbufs recycle mode, announce the coming ABI changes
> > from DPDK 23.11.
> >
> > Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > ---
> > doc/guides/rel_notes/deprecation.rst | 4 ++++
> > 1 file changed, 4 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > index 66431789b0..c7e1ffafb2 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -118,6 +118,10 @@ Deprecation Notices
> > The legacy actions should be removed
> > once ``MODIFY_FIELD`` alternative is implemented in drivers.
> >
> > +* ethdev: The Ethernet device data structure ``struct rte_eth_dev`` and
> > + the fast-path ethdev flat array ``struct rte_eth_fp_ops`` will be updated
> > + with new fields to support mbufs recycle mode from DPDK 23.11.
>
> It does seem to be an impacting change for existing applications,
I meant "It does NOT seem"
> except that it is allowed only during ABI breakage window.
>
> I think my patch should be enough:
> https://patches.dpdk.org/project/dpdk/patch/20230728142946.1201459-1-thomas@monjalon.net/
This deprecation notice is marked as superseded,
given my patch above should be enough.
* Re: [PATCH] doc: announce ethdev operation struct changes
2023-07-04 8:10 3% [PATCH] doc: announce ethdev operation struct changes Feifei Wang
2023-07-04 8:17 0% ` Feifei Wang
2023-07-05 11:32 0% ` Konstantin Ananyev
@ 2023-07-28 14:56 3% ` Thomas Monjalon
2023-07-28 15:04 0% ` Thomas Monjalon
2023-07-28 15:08 0% ` Morten Brørup
2 siblings, 2 replies; 200+ results
From: Thomas Monjalon @ 2023-07-28 14:56 UTC (permalink / raw)
To: dev
Cc: nd, Honnappa.Nagarahalli, Feifei Wang, Ruifeng Wang, Feifei Wang,
ferruh.yigit, konstantin.ananyev, Morten Brørup,
andrew.rybchenko
04/07/2023 10:10, Feifei Wang:
> To support mbufs recycle mode, announce the coming ABI changes
> from DPDK 23.11.
>
> Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 66431789b0..c7e1ffafb2 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -118,6 +118,10 @@ Deprecation Notices
> The legacy actions should be removed
> once ``MODIFY_FIELD`` alternative is implemented in drivers.
>
> +* ethdev: The Ethernet device data structure ``struct rte_eth_dev`` and
> + the fast-path ethdev flat array ``struct rte_eth_fp_ops`` will be updated
> + with new fields to support mbufs recycle mode from DPDK 23.11.
It does seem to be an impacting change for existing applications,
except that it is allowed only during ABI breakage window.
I think my patch should be enough:
https://patches.dpdk.org/project/dpdk/patch/20230728142946.1201459-1-thomas@monjalon.net/
* [PATCH] doc: announce new major ABI version
@ 2023-07-28 14:29 27% Thomas Monjalon
2023-07-28 15:18 27% ` [PATCH v2] " Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2023-07-28 14:29 UTC (permalink / raw)
To: dev
The next DPDK release 23.11 won't keep ABI compatibility.
Only the changes impacting the users should be announced in advance.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
doc/guides/rel_notes/deprecation.rst | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 91ac8f0229..55e4b2253e 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -4,9 +4,14 @@
ABI and API Deprecation
=======================
-See the guidelines document for details of the :doc:`ABI policy
-<../contributing/abi_policy>`. API and ABI deprecation notices are to be posted
-here.
+See the guidelines document for details
+of the :doc:`ABI policy <../contributing/abi_policy>`.
+
+With DPDK 23.11, there will be a new major ABI version: 24.
+It means that during the development of 23.11, it will be allowed
+to add new items in a struct or enum involving ABI compatibility breakage.
+
+Other API and ABI deprecation notices are to be posted here.
Deprecation Notices
-------------------
--
2.41.0
* Re: [PATCH] doc: announce deprecation of RTE_CPUFLAG_NUMFLAGS
2023-07-25 16:45 0% ` Hemant Agrawal
@ 2023-07-28 10:11 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2023-07-28 10:11 UTC (permalink / raw)
To: Sivaprasad Tummala
Cc: Jerin Jacob, Tyler Retzlaff, dev, Ferruh Yigit, bruce.richardson,
david.marchand, Hemant Agrawal
> > > > > To allow new cpu features to be added without ABI breakage,
> > > > > RTE_CPUFLAG_NUMFLAGS will be removed in DPDK 23.11 release.
> > > > >
> > > > > Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> > > >
> > > > +techboard,
> > > >
> > > > Request for review/ack, patch is to remove ABI restriction to add
> > > > new CPU flags.
> > >
> > > Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> >
> > Acked-by: Jerin Jacob <jerinj@marvell.com>
> Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Applied, thanks.
* Re: [PATCH v2] doc: announce single-event enqueue/dequeue ABI change
2023-07-05 13:00 4% ` Jerin Jacob
2023-07-05 13:02 4% ` [EXT] " Pavan Nikhilesh Bhagavatula
@ 2023-07-26 12:04 4% ` Jerin Jacob
1 sibling, 0 replies; 200+ results
From: Jerin Jacob @ 2023-07-26 12:04 UTC (permalink / raw)
To: Mattias Rönnblom
Cc: jerinj, Thomas Monjalon, hofors, dev, Pavan Nikhilesh,
Timothy McDaniel, Hemant Agrawal, Sachin Saxena,
Harry van Haaren, Liang Ma, Peter Mccarthy, techboard
On Wed, Jul 5, 2023 at 6:30 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>
> On Wed, Jul 5, 2023 at 4:48 PM Mattias Rönnblom
> <mattias.ronnblom@ericsson.com> wrote:
> >
> > Announce the removal of the single-event enqueue and dequeue
> > operations from the eventdev ABI.
> >
> > Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>
> Acked-by: Jerin Jacob <jerinj@marvell.com>
+ Techboard for review
>
>
> >
> > ---
> > PATCH v2: Fix commit subject prefix.
> > ---
> > doc/guides/rel_notes/deprecation.rst | 8 ++++++++
> > 1 file changed, 8 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > index 66431789b0..ca192d838d 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -153,3 +153,11 @@ Deprecation Notices
> > The new port library API (functions rte_swx_port_*)
> > will gradually transition from experimental to stable status
> > starting with DPDK 23.07 release.
> > +
> > +* eventdev: The single-event (non-burst) enqueue and dequeue
> > + operations, used by static inline burst enqueue and dequeue
> > + functions in <rte_eventdev.h>, will be removed in DPDK 23.11. This
> > + simplification includes changing the layout and potentially also the
> > + size of the public rte_event_fp_ops struct, breaking the ABI. Since
> > + these functions are not called directly by the application, the API
> > + remains unaffected.
> > --
> > 2.34.1
> >
* RE: [PATCH v1] doc: deprecation notice to add callback data to rte_event_fp_ops
2023-07-25 16:46 0% ` Hemant Agrawal
@ 2023-07-25 18:44 0% ` Pavan Nikhilesh Bhagavatula
0 siblings, 0 replies; 200+ results
From: Pavan Nikhilesh Bhagavatula @ 2023-07-25 18:44 UTC (permalink / raw)
To: Hemant Agrawal, Ferruh Yigit, Sivaprasad Tummala, dev
Cc: bruce.richardson, david.marchand, thomas, jerinjacobk, techboard
> > -----Original Message-----
> > From: Ferruh Yigit <ferruh.yigit@amd.com>
> > Sent: Tuesday, July 25, 2023 2:11 PM
> > To: Sivaprasad Tummala <sivaprasad.tummala@amd.com>; dev@dpdk.org
> > Cc: bruce.richardson@intel.com; david.marchand@redhat.com;
> > thomas@monjalon.net; jerinjacobk@gmail.com; techboard@dpdk.org
> > Subject: Re: [PATCH v1] doc: deprecation notice to add callback data to
> > rte_event_fp_ops
> >
> > On 7/17/2023 12:24 PM, Sivaprasad Tummala wrote:
> > > Deprecation notice to add "rte_eventdev_port_data" field to
> > > ``rte_event_fp_ops`` for callback support.
> > >
> > > Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> > > ---
> > > doc/guides/rel_notes/deprecation.rst | 7 +++++++
> > > 1 file changed, 7 insertions(+)
> > >
> > > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > > index fb771a0305..057f97ce5a 100644
> > > --- a/doc/guides/rel_notes/deprecation.rst
> > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > @@ -130,6 +130,13 @@ Deprecation Notices
> > > ``rte_cryptodev_get_auth_algo_string``, ``rte_cryptodev_get_aead_algo_string`` and
> > > ``rte_cryptodev_asym_get_xform_string`` respectively.
> > >
> > > +* eventdev: The struct rte_event_fp_ops will be updated with a new element
> > > + rte_eventdev_port_data to support optional callbacks in DPDK 23.11.
> > > + rte_eventdev_port_data is used to hold callbacks registered optionally
> > > + per event device port and associated callback data. By adding rte_eventdev_port_data
> > > + to rte_event_fp_ops, allows to fetch this data for fastpath eventdev inline functions
> > > + in advance. This changes the size of rte_event_fp_ops and result in ABI change.
> > > +
> > > * security: Hide structures ``rte_security_ops`` and ``rte_security_ctx``
> > > as these are internal to DPDK library and drivers.
> > >
> >
> > +techboard,
> >
> > Request for review/ack, patch is to extend eventdev to support callbacks per packet.
> Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
* RE: [PATCH v1] doc: deprecation notice to add callback data to rte_event_fp_ops
2023-07-25 8:40 0% ` Ferruh Yigit
@ 2023-07-25 16:46 0% ` Hemant Agrawal
2023-07-25 18:44 0% ` Pavan Nikhilesh Bhagavatula
2023-07-28 15:42 3% ` Thomas Monjalon
1 sibling, 1 reply; 200+ results
From: Hemant Agrawal @ 2023-07-25 16:46 UTC (permalink / raw)
To: Ferruh Yigit, Sivaprasad Tummala, dev
Cc: bruce.richardson, david.marchand, thomas, jerinjacobk, techboard
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Tuesday, July 25, 2023 2:11 PM
> To: Sivaprasad Tummala <sivaprasad.tummala@amd.com>; dev@dpdk.org
> Cc: bruce.richardson@intel.com; david.marchand@redhat.com;
> thomas@monjalon.net; jerinjacobk@gmail.com; techboard@dpdk.org
> Subject: Re: [PATCH v1] doc: deprecation notice to add callback data to
> rte_event_fp_ops
>
> On 7/17/2023 12:24 PM, Sivaprasad Tummala wrote:
> > Deprecation notice to add "rte_eventdev_port_data" field to
> > ``rte_event_fp_ops`` for callback support.
> >
> > Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> > ---
> > doc/guides/rel_notes/deprecation.rst | 7 +++++++
> > 1 file changed, 7 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > index fb771a0305..057f97ce5a 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -130,6 +130,13 @@ Deprecation Notices
> > ``rte_cryptodev_get_auth_algo_string``, ``rte_cryptodev_get_aead_algo_string`` and
> > ``rte_cryptodev_asym_get_xform_string`` respectively.
> >
> > +* eventdev: The struct rte_event_fp_ops will be updated with a new element
> > + rte_eventdev_port_data to support optional callbacks in DPDK 23.11.
> > + rte_eventdev_port_data is used to hold callbacks registered optionally
> > + per event device port and associated callback data. By adding rte_eventdev_port_data
> > + to rte_event_fp_ops, allows to fetch this data for fastpath eventdev inline functions
> > + in advance. This changes the size of rte_event_fp_ops and result in ABI change.
> > +
> > * security: Hide structures ``rte_security_ops`` and ``rte_security_ctx``
> > as these are internal to DPDK library and drivers.
> >
>
> +techboard,
>
> Request for review/ack, patch is to extend eventdev to support callbacks per
> packet.
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
* RE: [PATCH] doc: announce deprecation of RTE_CPUFLAG_NUMFLAGS
2023-07-25 14:24 0% ` Jerin Jacob
@ 2023-07-25 16:45 0% ` Hemant Agrawal
2023-07-28 10:11 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Hemant Agrawal @ 2023-07-25 16:45 UTC (permalink / raw)
To: Jerin Jacob, Tyler Retzlaff
Cc: Ferruh Yigit, Sivaprasad Tummala, dev, bruce.richardson,
david.marchand, thomas, techboard
> -----Original Message-----
>
> On Tue, Jul 25, 2023 at 7:48 PM Tyler Retzlaff <roretzla@linux.microsoft.com>
> wrote:
> >
> > On Tue, Jul 25, 2023 at 09:39:15AM +0100, Ferruh Yigit wrote:
> > > On 7/12/2023 11:18 AM, Sivaprasad Tummala wrote:
> > > > To allow new cpu features to be added without ABI breakage,
> > > > RTE_CPUFLAG_NUMFLAGS will be removed in DPDK 23.11 release.
> > > >
> > > > Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> > > > ---
> > > > doc/guides/rel_notes/deprecation.rst | 3 +++
> > > > 1 file changed, 3 insertions(+)
> > > >
> > > > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > > > index 8e1cdd677a..92db59d9c2 100644
> > > > --- a/doc/guides/rel_notes/deprecation.rst
> > > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > > @@ -28,6 +28,9 @@ Deprecation Notices
> > > > the replacement API rte_thread_set_name and rte_thread_create_control being
> > > > marked as stable, and planned to be removed by the 23.11 release.
> > > >
> > > > +* eal: RTE_CPUFLAG_NUMFLAGS will be removed in DPDK 23.11 release. This is
> > > > + to allow new cpu features to be added without ABI breakage.
> > > > +
> > > > * rte_atomicNN_xxx: These APIs do not take memory order parameter. This does
> > > > not allow for writing optimized code for all the CPU architectures supported
> > > > in DPDK. DPDK has adopted the atomic operations from
> > >
> > > +techboard,
> > >
> > > Request for review/ack, patch is to remove ABI restriction to add
> > > new CPU flags.
> >
> > Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
>
> Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
* Re: [PATCH] doc: announce deprecation of RTE_CPUFLAG_NUMFLAGS
2023-07-25 14:18 0% ` Tyler Retzlaff
@ 2023-07-25 14:24 0% ` Jerin Jacob
2023-07-25 16:45 0% ` Hemant Agrawal
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2023-07-25 14:24 UTC (permalink / raw)
To: Tyler Retzlaff
Cc: Ferruh Yigit, Sivaprasad Tummala, dev, bruce.richardson,
david.marchand, thomas, techboard
On Tue, Jul 25, 2023 at 7:48 PM Tyler Retzlaff
<roretzla@linux.microsoft.com> wrote:
>
> On Tue, Jul 25, 2023 at 09:39:15AM +0100, Ferruh Yigit wrote:
> > On 7/12/2023 11:18 AM, Sivaprasad Tummala wrote:
> > > To allow new cpu features to be added without ABI breakage,
> > > RTE_CPUFLAG_NUMFLAGS will be removed in DPDK 23.11 release.
> > >
> > > Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> > > ---
> > > doc/guides/rel_notes/deprecation.rst | 3 +++
> > > 1 file changed, 3 insertions(+)
> > >
> > > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > > index 8e1cdd677a..92db59d9c2 100644
> > > --- a/doc/guides/rel_notes/deprecation.rst
> > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > @@ -28,6 +28,9 @@ Deprecation Notices
> > > the replacement API rte_thread_set_name and rte_thread_create_control being
> > > marked as stable, and planned to be removed by the 23.11 release.
> > >
> > > +* eal: RTE_CPUFLAG_NUMFLAGS will be removed in DPDK 23.11 release. This is
> > > + to allow new cpu features to be added without ABI breakage.
> > > +
> > > * rte_atomicNN_xxx: These APIs do not take memory order parameter. This does
> > > not allow for writing optimized code for all the CPU architectures supported
> > > in DPDK. DPDK has adopted the atomic operations from
> >
> > +techboard,
> >
> > Request for review/ack, patch is to remove ABI restriction to add new
> > CPU flags.
>
> Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
* Re: [PATCH] doc: announce deprecation of RTE_CPUFLAG_NUMFLAGS
2023-07-25 8:39 3% ` Ferruh Yigit
` (2 preceding siblings ...)
2023-07-25 9:36 0% ` Kevin Traynor
@ 2023-07-25 14:18 0% ` Tyler Retzlaff
2023-07-25 14:24 0% ` Jerin Jacob
3 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2023-07-25 14:18 UTC (permalink / raw)
To: Ferruh Yigit
Cc: Sivaprasad Tummala, dev, bruce.richardson, david.marchand,
thomas, techboard
On Tue, Jul 25, 2023 at 09:39:15AM +0100, Ferruh Yigit wrote:
> On 7/12/2023 11:18 AM, Sivaprasad Tummala wrote:
> > To allow new cpu features to be added without ABI breakage,
> > RTE_CPUFLAG_NUMFLAGS will be removed in DPDK 23.11 release.
> >
> > Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> > ---
> > doc/guides/rel_notes/deprecation.rst | 3 +++
> > 1 file changed, 3 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > index 8e1cdd677a..92db59d9c2 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -28,6 +28,9 @@ Deprecation Notices
> > the replacement API rte_thread_set_name and rte_thread_create_control being
> > marked as stable, and planned to be removed by the 23.11 release.
> >
> > +* eal: RTE_CPUFLAG_NUMFLAGS will be removed in DPDK 23.11 release. This is
> > + to allow new cpu features to be added without ABI breakage.
> > +
> > * rte_atomicNN_xxx: These APIs do not take memory order parameter. This does
> > not allow for writing optimized code for all the CPU architectures supported
> > in DPDK. DPDK has adopted the atomic operations from
>
> +techboard,
>
> Request for review/ack, patch is to remove ABI restriction to add new
> CPU flags.
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
* Re: [PATCH] doc: announce deprecation of RTE_CPUFLAG_NUMFLAGS
2023-07-25 8:39 3% ` Ferruh Yigit
2023-07-25 8:40 0% ` Bruce Richardson
2023-07-25 9:24 0% ` Morten Brørup
@ 2023-07-25 9:36 0% ` Kevin Traynor
2023-07-25 14:18 0% ` Tyler Retzlaff
3 siblings, 0 replies; 200+ results
From: Kevin Traynor @ 2023-07-25 9:36 UTC (permalink / raw)
To: Ferruh Yigit, Sivaprasad Tummala, dev
Cc: bruce.richardson, david.marchand, thomas, techboard
On 25/07/2023 09:39, Ferruh Yigit wrote:
> On 7/12/2023 11:18 AM, Sivaprasad Tummala wrote:
>> To allow new cpu features to be added without ABI breakage,
>> RTE_CPUFLAG_NUMFLAGS will be removed in DPDK 23.11 release.
>>
>> Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
>> ---
>> doc/guides/rel_notes/deprecation.rst | 3 +++
>> 1 file changed, 3 insertions(+)
>>
>> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
>> index 8e1cdd677a..92db59d9c2 100644
>> --- a/doc/guides/rel_notes/deprecation.rst
>> +++ b/doc/guides/rel_notes/deprecation.rst
>> @@ -28,6 +28,9 @@ Deprecation Notices
>> the replacement API rte_thread_set_name and rte_thread_create_control being
>> marked as stable, and planned to be removed by the 23.11 release.
>>
>> +* eal: RTE_CPUFLAG_NUMFLAGS will be removed in DPDK 23.11 release. This is
>> + to allow new cpu features to be added without ABI breakage.
>> +
>> * rte_atomicNN_xxx: These APIs do not take memory order parameter. This does
>> not allow for writing optimized code for all the CPU architectures supported
>> in DPDK. DPDK has adopted the atomic operations from
>
> +techboard,
>
> Request for review/ack, patch is to remove ABI restriction to add new
> CPU flags.
>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
^ permalink raw reply [relevance 0%]
* RE: [PATCH] doc: announce deprecation of RTE_CPUFLAG_NUMFLAGS
2023-07-25 8:39 3% ` Ferruh Yigit
2023-07-25 8:40 0% ` Bruce Richardson
@ 2023-07-25 9:24 0% ` Morten Brørup
2023-07-25 9:36 0% ` Kevin Traynor
2023-07-25 14:18 0% ` Tyler Retzlaff
3 siblings, 0 replies; 200+ results
From: Morten Brørup @ 2023-07-25 9:24 UTC (permalink / raw)
To: Ferruh Yigit, Sivaprasad Tummala, dev
Cc: bruce.richardson, david.marchand, thomas, techboard
> From: Ferruh Yigit [mailto:ferruh.yigit@amd.com]
> Sent: Tuesday, 25 July 2023 10.39
>
> On 7/12/2023 11:18 AM, Sivaprasad Tummala wrote:
> > To allow new cpu features to be added without ABI breakage,
> > RTE_CPUFLAG_NUMFLAGS will be removed in DPDK 23.11 release.
> >
> > Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> > ---
Acked-by: Morten Brørup <mb@smartsharesystems.com>
^ permalink raw reply [relevance 0%]
* Re: [PATCH] doc: announce deprecation of RTE_CPUFLAG_NUMFLAGS
2023-07-25 8:39 3% ` Ferruh Yigit
@ 2023-07-25 8:40 0% ` Bruce Richardson
2023-07-25 9:24 0% ` Morten Brørup
` (2 subsequent siblings)
3 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2023-07-25 8:40 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: Sivaprasad Tummala, dev, david.marchand, thomas, techboard
On Tue, Jul 25, 2023 at 09:39:15AM +0100, Ferruh Yigit wrote:
> On 7/12/2023 11:18 AM, Sivaprasad Tummala wrote:
> > To allow new cpu features to be added without ABI breakage,
> > RTE_CPUFLAG_NUMFLAGS will be removed in DPDK 23.11 release.
> >
> > Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> > ---
> > doc/guides/rel_notes/deprecation.rst | 3 +++
> > 1 file changed, 3 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > index 8e1cdd677a..92db59d9c2 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -28,6 +28,9 @@ Deprecation Notices
> > the replacement API rte_thread_set_name and rte_thread_create_control being
> > marked as stable, and planned to be removed by the 23.11 release.
> >
> > +* eal: RTE_CPUFLAG_NUMFLAGS will be removed in DPDK 23.11 release. This is
> > + to allow new cpu features to be added without ABI breakage.
> > +
> > * rte_atomicNN_xxx: These APIs do not take memory order parameter. This does
> > not allow for writing optimized code for all the CPU architectures supported
> > in DPDK. DPDK has adopted the atomic operations from
>
> +techboard,
>
> Request for review/ack, patch is to remove ABI restriction to add new
> CPU flags.
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [relevance 0%]
* Re: [PATCH v1] doc: deprecation notice to add callback data to rte_event_fp_ops
2023-07-17 11:24 5% ` [PATCH v1] " Sivaprasad Tummala
2023-07-17 11:43 0% ` Jerin Jacob
@ 2023-07-25 8:40 0% ` Ferruh Yigit
2023-07-25 16:46 0% ` Hemant Agrawal
2023-07-28 15:42 3% ` Thomas Monjalon
1 sibling, 2 replies; 200+ results
From: Ferruh Yigit @ 2023-07-25 8:40 UTC (permalink / raw)
To: Sivaprasad Tummala, dev
Cc: bruce.richardson, david.marchand, thomas, jerinjacobk, techboard
On 7/17/2023 12:24 PM, Sivaprasad Tummala wrote:
> Deprecation notice to add "rte_eventdev_port_data" field to
> ``rte_event_fp_ops`` for callback support.
>
> Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index fb771a0305..057f97ce5a 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -130,6 +130,13 @@ Deprecation Notices
> ``rte_cryptodev_get_auth_algo_string``, ``rte_cryptodev_get_aead_algo_string`` and
> ``rte_cryptodev_asym_get_xform_string`` respectively.
>
> +* eventdev: The struct rte_event_fp_ops will be updated with a new element
> + rte_eventdev_port_data to support optional callbacks in DPDK 23.11.
> + rte_eventdev_port_data is used to hold callbacks registered optionally
> + per event device port and their associated callback data. Adding rte_eventdev_port_data
> + to rte_event_fp_ops allows this data to be fetched in advance by the fastpath eventdev
> + inline functions. This changes the size of rte_event_fp_ops and results in an ABI change.
> +
> * security: Hide structures ``rte_security_ops`` and ``rte_security_ctx``
> as these are internal to DPDK library and drivers.
>
+techboard,
Request for review/ack, patch is to extend eventdev to support callbacks
per packet.
^ permalink raw reply [relevance 0%]
* Re: [PATCH] doc: announce deprecation of RTE_CPUFLAG_NUMFLAGS
2023-07-12 10:18 8% [PATCH] doc: announce deprecation of RTE_CPUFLAG_NUMFLAGS Sivaprasad Tummala
2023-07-12 10:21 0% ` Ferruh Yigit
@ 2023-07-25 8:39 3% ` Ferruh Yigit
2023-07-25 8:40 0% ` Bruce Richardson
` (3 more replies)
1 sibling, 4 replies; 200+ results
From: Ferruh Yigit @ 2023-07-25 8:39 UTC (permalink / raw)
To: Sivaprasad Tummala, dev
Cc: bruce.richardson, david.marchand, thomas, techboard
On 7/12/2023 11:18 AM, Sivaprasad Tummala wrote:
> To allow new cpu features to be added without ABI breakage,
> RTE_CPUFLAG_NUMFLAGS will be removed in DPDK 23.11 release.
>
> Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 8e1cdd677a..92db59d9c2 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -28,6 +28,9 @@ Deprecation Notices
> the replacement API rte_thread_set_name and rte_thread_create_control being
> marked as stable, and planned to be removed by the 23.11 release.
>
> +* eal: RTE_CPUFLAG_NUMFLAGS will be removed in DPDK 23.11 release. This is
> + to allow new cpu features to be added without ABI breakage.
> +
> * rte_atomicNN_xxx: These APIs do not take memory order parameter. This does
> not allow for writing optimized code for all the CPU architectures supported
> in DPDK. DPDK has adopted the atomic operations from
+techboard,
Request for review/ack, patch is to remove ABI restriction to add new
CPU flags.
^ permalink raw reply [relevance 3%]
* [PATCH v4] tap: fix build of TAP BPF program
2023-07-16 21:25 1% [RFC] MAINTAINERS: add status information Stephen Hemminger
` (3 preceding siblings ...)
2023-07-20 23:25 4% ` [PATCH v3] " Stephen Hemminger
@ 2023-07-22 16:32 4% ` Stephen Hemminger
4 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-07-22 16:32 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Move the BPF program related code into a subdirectory.
And add a Makefile for building it.
The code depends on include files from iproute2,
but these are not public headers that iproute2
exports as a package API. Therefore, make a local copy here.
The standalone build was also broken by
commit ef5baf3486e0 ("replace packed attributes"),
which introduced __rte_packed into this code.
Add a python program to extract the resulting BPF into
a format that can be consumed by the TAP driver.
Update the documentation.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
v4 - update bpf_api.h and bpf_elf.h with SPDX headers from upstream
doc/guides/nics/tap.rst | 11 +-
drivers/net/tap/bpf/.gitignore | 1 +
drivers/net/tap/bpf/Makefile | 18 ++
drivers/net/tap/bpf/bpf_api.h | 275 ++++++++++++++++++++
drivers/net/tap/bpf/bpf_elf.h | 53 ++++
drivers/net/tap/bpf/bpf_extract.py | 80 ++++++
drivers/net/tap/{ => bpf}/tap_bpf_program.c | 9 +-
drivers/net/tap/tap_rss.h | 2 +-
8 files changed, 437 insertions(+), 12 deletions(-)
create mode 100644 drivers/net/tap/bpf/.gitignore
create mode 100644 drivers/net/tap/bpf/Makefile
create mode 100644 drivers/net/tap/bpf/bpf_api.h
create mode 100644 drivers/net/tap/bpf/bpf_elf.h
create mode 100644 drivers/net/tap/bpf/bpf_extract.py
rename drivers/net/tap/{ => bpf}/tap_bpf_program.c (97%)
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 07df0d35a2ec..449e747994bd 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -256,15 +256,12 @@ C functions under different ELF sections.
2. Install ``LLVM`` library and ``clang`` compiler versions 3.7 and above
-3. Compile ``tap_bpf_program.c`` via ``LLVM`` into an object file::
+3. Use make to compile ``tap_bpf_program.c`` via ``LLVM`` into an object file
+ and extract the resulting instructions into ``tap_bpf_insns.h``::
- clang -O2 -emit-llvm -c tap_bpf_program.c -o - | llc -march=bpf \
- -filetype=obj -o <tap_bpf_program.o>
+ cd bpf; make
-
-4. Use a tool that receives two parameters: an eBPF object file and a section
-name, and prints out the section as a C array of eBPF instructions.
-Embed the C array in your TAP PMD tree.
+4. Recompile the TAP PMD.
The C arrays are uploaded to the kernel using BPF system calls.
diff --git a/drivers/net/tap/bpf/.gitignore b/drivers/net/tap/bpf/.gitignore
new file mode 100644
index 000000000000..30a258f1af3b
--- /dev/null
+++ b/drivers/net/tap/bpf/.gitignore
@@ -0,0 +1 @@
+tap_bpf_program.o
diff --git a/drivers/net/tap/bpf/Makefile b/drivers/net/tap/bpf/Makefile
new file mode 100644
index 000000000000..e5ae4e1f5adc
--- /dev/null
+++ b/drivers/net/tap/bpf/Makefile
@@ -0,0 +1,18 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# This file is not built as part of normal DPDK build.
+# It is used to generate the eBPF code for TAP RSS.
+CLANG=clang
+CLANG_OPTS=-O2
+TARGET=../tap_bpf_insns.h
+
+all: $(TARGET)
+
+clean:
+ rm tap_bpf_program.o $(TARGET)
+
+tap_bpf_program.o: tap_bpf_program.c
+ $(CLANG) $(CLANG_OPTS) -emit-llvm -c $< -o - | \
+ llc -march=bpf -filetype=obj -o $@
+
+$(TARGET): bpf_extract.py tap_bpf_program.o
+ python3 bpf_extract.py tap_bpf_program.o $@
diff --git a/drivers/net/tap/bpf/bpf_api.h b/drivers/net/tap/bpf/bpf_api.h
new file mode 100644
index 000000000000..5887d3a851cf
--- /dev/null
+++ b/drivers/net/tap/bpf/bpf_api.h
@@ -0,0 +1,275 @@
+/* SPDX-License-Identifier: GPL-2.0 or BSD-3-Clause */
+#ifndef __BPF_API__
+#define __BPF_API__
+
+/* Note:
+ *
+ * This file can be included into eBPF kernel programs. It contains
+ * a couple of useful helper functions, map/section ABI (bpf_elf.h),
+ * misc macros and some eBPF specific LLVM built-ins.
+ */
+
+#include <stdint.h>
+
+#include <linux/pkt_cls.h>
+#include <linux/bpf.h>
+#include <linux/filter.h>
+
+#include <asm/byteorder.h>
+
+#include "bpf_elf.h"
+
+/** libbpf pin type. */
+enum libbpf_pin_type {
+ LIBBPF_PIN_NONE,
+ /* PIN_BY_NAME: pin maps by name (in /sys/fs/bpf by default) */
+ LIBBPF_PIN_BY_NAME,
+};
+
+/** Type helper macros. */
+
+#define __uint(name, val) int (*name)[val]
+#define __type(name, val) typeof(val) *name
+#define __array(name, val) typeof(val) *name[]
+
+/** Misc macros. */
+
+#ifndef __stringify
+# define __stringify(X) #X
+#endif
+
+#ifndef __maybe_unused
+# define __maybe_unused __attribute__((__unused__))
+#endif
+
+#ifndef offsetof
+# define offsetof(TYPE, MEMBER) __builtin_offsetof(TYPE, MEMBER)
+#endif
+
+#ifndef likely
+# define likely(X) __builtin_expect(!!(X), 1)
+#endif
+
+#ifndef unlikely
+# define unlikely(X) __builtin_expect(!!(X), 0)
+#endif
+
+#ifndef htons
+# define htons(X) __constant_htons((X))
+#endif
+
+#ifndef ntohs
+# define ntohs(X) __constant_ntohs((X))
+#endif
+
+#ifndef htonl
+# define htonl(X) __constant_htonl((X))
+#endif
+
+#ifndef ntohl
+# define ntohl(X) __constant_ntohl((X))
+#endif
+
+#ifndef __inline__
+# define __inline__ __attribute__((always_inline))
+#endif
+
+/** Section helper macros. */
+
+#ifndef __section
+# define __section(NAME) \
+ __attribute__((section(NAME), used))
+#endif
+
+#ifndef __section_tail
+# define __section_tail(ID, KEY) \
+ __section(__stringify(ID) "/" __stringify(KEY))
+#endif
+
+#ifndef __section_xdp_entry
+# define __section_xdp_entry \
+ __section(ELF_SECTION_PROG)
+#endif
+
+#ifndef __section_cls_entry
+# define __section_cls_entry \
+ __section(ELF_SECTION_CLASSIFIER)
+#endif
+
+#ifndef __section_act_entry
+# define __section_act_entry \
+ __section(ELF_SECTION_ACTION)
+#endif
+
+#ifndef __section_lwt_entry
+# define __section_lwt_entry \
+ __section(ELF_SECTION_PROG)
+#endif
+
+#ifndef __section_license
+# define __section_license \
+ __section(ELF_SECTION_LICENSE)
+#endif
+
+#ifndef __section_maps
+# define __section_maps \
+ __section(ELF_SECTION_MAPS)
+#endif
+
+/** Declaration helper macros. */
+
+#ifndef BPF_LICENSE
+# define BPF_LICENSE(NAME) \
+ char ____license[] __section_license = NAME
+#endif
+
+/** Classifier helper */
+
+#ifndef BPF_H_DEFAULT
+# define BPF_H_DEFAULT -1
+#endif
+
+/** BPF helper functions for tc. Individual flags are in linux/bpf.h */
+
+#ifndef __BPF_FUNC
+# define __BPF_FUNC(NAME, ...) \
+ (* NAME)(__VA_ARGS__) __maybe_unused
+#endif
+
+#ifndef BPF_FUNC
+# define BPF_FUNC(NAME, ...) \
+ __BPF_FUNC(NAME, __VA_ARGS__) = (void *) BPF_FUNC_##NAME
+#endif
+
+/* Map access/manipulation */
+static void *BPF_FUNC(map_lookup_elem, void *map, const void *key);
+static int BPF_FUNC(map_update_elem, void *map, const void *key,
+ const void *value, uint32_t flags);
+static int BPF_FUNC(map_delete_elem, void *map, const void *key);
+
+/* Time access */
+static uint64_t BPF_FUNC(ktime_get_ns);
+
+/* Debugging */
+
+/* FIXME: __attribute__ ((format(printf, 1, 3))) not possible unless
+ * llvm bug https://llvm.org/bugs/show_bug.cgi?id=26243 gets resolved.
+ * It would require ____fmt to be made const, which generates a reloc
+ * entry (non-map).
+ */
+static void BPF_FUNC(trace_printk, const char *fmt, int fmt_size, ...);
+
+#ifndef printt
+# define printt(fmt, ...) \
+ ({ \
+ char ____fmt[] = fmt; \
+ trace_printk(____fmt, sizeof(____fmt), ##__VA_ARGS__); \
+ })
+#endif
+
+/* Random numbers */
+static uint32_t BPF_FUNC(get_prandom_u32);
+
+/* Tail calls */
+static void BPF_FUNC(tail_call, struct __sk_buff *skb, void *map,
+ uint32_t index);
+
+/* System helpers */
+static uint32_t BPF_FUNC(get_smp_processor_id);
+static uint32_t BPF_FUNC(get_numa_node_id);
+
+/* Packet misc meta data */
+static uint32_t BPF_FUNC(get_cgroup_classid, struct __sk_buff *skb);
+static int BPF_FUNC(skb_under_cgroup, void *map, uint32_t index);
+
+static uint32_t BPF_FUNC(get_route_realm, struct __sk_buff *skb);
+static uint32_t BPF_FUNC(get_hash_recalc, struct __sk_buff *skb);
+static uint32_t BPF_FUNC(set_hash_invalid, struct __sk_buff *skb);
+
+/* Packet redirection */
+static int BPF_FUNC(redirect, int ifindex, uint32_t flags);
+static int BPF_FUNC(clone_redirect, struct __sk_buff *skb, int ifindex,
+ uint32_t flags);
+
+/* Packet manipulation */
+static int BPF_FUNC(skb_load_bytes, struct __sk_buff *skb, uint32_t off,
+ void *to, uint32_t len);
+static int BPF_FUNC(skb_store_bytes, struct __sk_buff *skb, uint32_t off,
+ const void *from, uint32_t len, uint32_t flags);
+
+static int BPF_FUNC(l3_csum_replace, struct __sk_buff *skb, uint32_t off,
+ uint32_t from, uint32_t to, uint32_t flags);
+static int BPF_FUNC(l4_csum_replace, struct __sk_buff *skb, uint32_t off,
+ uint32_t from, uint32_t to, uint32_t flags);
+static int BPF_FUNC(csum_diff, const void *from, uint32_t from_size,
+ const void *to, uint32_t to_size, uint32_t seed);
+static int BPF_FUNC(csum_update, struct __sk_buff *skb, uint32_t wsum);
+
+static int BPF_FUNC(skb_change_type, struct __sk_buff *skb, uint32_t type);
+static int BPF_FUNC(skb_change_proto, struct __sk_buff *skb, uint32_t proto,
+ uint32_t flags);
+static int BPF_FUNC(skb_change_tail, struct __sk_buff *skb, uint32_t nlen,
+ uint32_t flags);
+
+static int BPF_FUNC(skb_pull_data, struct __sk_buff *skb, uint32_t len);
+
+/* Event notification */
+static int __BPF_FUNC(skb_event_output, struct __sk_buff *skb, void *map,
+ uint64_t index, const void *data, uint32_t size) =
+ (void *) BPF_FUNC_perf_event_output;
+
+/* Packet vlan encap/decap */
+static int BPF_FUNC(skb_vlan_push, struct __sk_buff *skb, uint16_t proto,
+ uint16_t vlan_tci);
+static int BPF_FUNC(skb_vlan_pop, struct __sk_buff *skb);
+
+/* Packet tunnel encap/decap */
+static int BPF_FUNC(skb_get_tunnel_key, struct __sk_buff *skb,
+ struct bpf_tunnel_key *to, uint32_t size, uint32_t flags);
+static int BPF_FUNC(skb_set_tunnel_key, struct __sk_buff *skb,
+ const struct bpf_tunnel_key *from, uint32_t size,
+ uint32_t flags);
+
+static int BPF_FUNC(skb_get_tunnel_opt, struct __sk_buff *skb,
+ void *to, uint32_t size);
+static int BPF_FUNC(skb_set_tunnel_opt, struct __sk_buff *skb,
+ const void *from, uint32_t size);
+
+/** LLVM built-ins, mem*() routines work for constant size */
+
+#ifndef lock_xadd
+# define lock_xadd(ptr, val) ((void) __sync_fetch_and_add(ptr, val))
+#endif
+
+#ifndef memset
+# define memset(s, c, n) __builtin_memset((s), (c), (n))
+#endif
+
+#ifndef memcpy
+# define memcpy(d, s, n) __builtin_memcpy((d), (s), (n))
+#endif
+
+#ifndef memmove
+# define memmove(d, s, n) __builtin_memmove((d), (s), (n))
+#endif
+
+/* FIXME: __builtin_memcmp() is not yet fully useable unless llvm bug
+ * https://llvm.org/bugs/show_bug.cgi?id=26218 gets resolved. Also
+ * this one would generate a reloc entry (non-map), otherwise.
+ */
+#if 0
+#ifndef memcmp
+# define memcmp(a, b, n) __builtin_memcmp((a), (b), (n))
+#endif
+#endif
+
+unsigned long long load_byte(void *skb, unsigned long long off)
+ asm ("llvm.bpf.load.byte");
+
+unsigned long long load_half(void *skb, unsigned long long off)
+ asm ("llvm.bpf.load.half");
+
+unsigned long long load_word(void *skb, unsigned long long off)
+ asm ("llvm.bpf.load.word");
+
+#endif /* __BPF_API__ */
diff --git a/drivers/net/tap/bpf/bpf_elf.h b/drivers/net/tap/bpf/bpf_elf.h
new file mode 100644
index 000000000000..ea8a11c95c0f
--- /dev/null
+++ b/drivers/net/tap/bpf/bpf_elf.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: GPL-2.0 or BSD-3-Clause */
+#ifndef __BPF_ELF__
+#define __BPF_ELF__
+
+#include <asm/types.h>
+
+/* Note:
+ *
+ * Below ELF section names and bpf_elf_map structure definition
+ * are not (!) kernel ABI. It's rather a "contract" between the
+ * application and the BPF loader in tc. For compatibility, the
+ * section names should stay as-is. Introduction of aliases, if
+ * needed, are a possibility, though.
+ */
+
+/* ELF section names, etc */
+#define ELF_SECTION_LICENSE "license"
+#define ELF_SECTION_MAPS "maps"
+#define ELF_SECTION_PROG "prog"
+#define ELF_SECTION_CLASSIFIER "classifier"
+#define ELF_SECTION_ACTION "action"
+
+#define ELF_MAX_MAPS 64
+#define ELF_MAX_LICENSE_LEN 128
+
+/* Object pinning settings */
+#define PIN_NONE 0
+#define PIN_OBJECT_NS 1
+#define PIN_GLOBAL_NS 2
+
+/* ELF map definition */
+struct bpf_elf_map {
+ __u32 type;
+ __u32 size_key;
+ __u32 size_value;
+ __u32 max_elem;
+ __u32 flags;
+ __u32 id;
+ __u32 pinning;
+ __u32 inner_id;
+ __u32 inner_idx;
+};
+
+#define BPF_ANNOTATE_KV_PAIR(name, type_key, type_val) \
+ struct ____btf_map_##name { \
+ type_key key; \
+ type_val value; \
+ }; \
+ struct ____btf_map_##name \
+ __attribute__ ((section(".maps." #name), used)) \
+ ____btf_map_##name = { }
+
+#endif /* __BPF_ELF__ */
diff --git a/drivers/net/tap/bpf/bpf_extract.py b/drivers/net/tap/bpf/bpf_extract.py
new file mode 100644
index 000000000000..d79fc61020b3
--- /dev/null
+++ b/drivers/net/tap/bpf/bpf_extract.py
@@ -0,0 +1,80 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright (c) 2023 Stephen Hemminger <stephen@networkplumber.org>
+
+import argparse
+import sys
+import struct
+from tempfile import TemporaryFile
+from elftools.elf.elffile import ELFFile
+
+
+def load_sections(elffile):
+ result = []
+ DATA = [("cls_q", "cls_q_insns"), ("l3_l4", "l3_l4_hash_insns")]
+ for name, tag in DATA:
+ section = elffile.get_section_by_name(name)
+ if section:
+ insns = struct.iter_unpack('<BBhL', section.data())
+ result.append([tag, insns])
+ return result
+
+
+def dump_sections(sections, out):
+ for name, insns in sections:
+ print(f'\nstatic const struct bpf_insn {name}[] = {{', file=out)
+ for bpf in insns:
+ code = bpf[0]
+ src = bpf[1] >> 4
+ dst = bpf[1] & 0xf
+ off = bpf[2]
+ imm = bpf[3]
+ print('\t{{{:#02x}, {:4d}, {:4d}, {:8d}, {:#010x}}},'.format(
+ code, dst, src, off, imm),
+ file=out)
+ print('};', file=out)
+
+
+def parse_args():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("input",
+ nargs='+',
+ help="input object file path or '-' for stdin")
+ parser.add_argument("output", help="output C file path or '-' for stdout")
+ return parser.parse_args()
+
+
+def open_input(path):
+ if path == "-":
+ temp = TemporaryFile()
+ temp.write(sys.stdin.buffer.read())
+ return temp
+ return open(path, "rb")
+
+
+def open_output(path):
+ if path == "-":
+ return sys.stdout
+ return open(path, "w")
+
+
+def write_header(output):
+ print("/* SPDX-License-Identifier: BSD-3-Clause", file=output)
 print(" * Compiled BPF instructions, do not edit", file=output)
+ print(" */\n", file=output)
+ print("#include <tap_bpf.h>", file=output)
+
+
+def main():
+ args = parse_args()
+
+ output = open_output(args.output)
+ write_header(output)
+ for path in args.input:
+ elffile = ELFFile(open_input(path))
+ sections = load_sections(elffile)
+ dump_sections(sections, output)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/drivers/net/tap/tap_bpf_program.c b/drivers/net/tap/bpf/tap_bpf_program.c
similarity index 97%
rename from drivers/net/tap/tap_bpf_program.c
rename to drivers/net/tap/bpf/tap_bpf_program.c
index 20c310e5e7ba..ff6f1606fb38 100644
--- a/drivers/net/tap/tap_bpf_program.c
+++ b/drivers/net/tap/bpf/tap_bpf_program.c
@@ -14,9 +14,10 @@
#include <linux/ipv6.h>
#include <linux/if_tunnel.h>
#include <linux/filter.h>
-#include <linux/bpf.h>
-#include "tap_rss.h"
+#include "bpf_api.h"
+#include "bpf_elf.h"
+#include "../tap_rss.h"
/** Create IPv4 address */
#define IPv4(a, b, c, d) ((__u32)(((a) & 0xff) << 24) | \
@@ -75,14 +76,14 @@ struct ipv4_l3_l4_tuple {
__u32 dst_addr;
__u16 dport;
__u16 sport;
-} __rte_packed;
+} __attribute__((packed));
struct ipv6_l3_l4_tuple {
__u8 src_addr[16];
__u8 dst_addr[16];
__u16 dport;
__u16 sport;
-} __rte_packed;
+} __attribute__((packed));
static const __u8 def_rss_key[TAP_RSS_HASH_KEY_SIZE] = {
0xd1, 0x81, 0xc6, 0x2c,
diff --git a/drivers/net/tap/tap_rss.h b/drivers/net/tap/tap_rss.h
index 48c151cf6b68..dff46a012f94 100644
--- a/drivers/net/tap/tap_rss.h
+++ b/drivers/net/tap/tap_rss.h
@@ -35,6 +35,6 @@ struct rss_key {
__u32 key_size;
__u32 queues[TAP_MAX_QUEUES];
__u32 nb_queues;
-} __rte_packed;
+} __attribute__((packed));
#endif /* _TAP_RSS_H_ */
--
2.39.2
^ permalink raw reply [relevance 4%]
* [PATCH v3] tap: fix build of TAP BPF program
2023-07-16 21:25 1% [RFC] MAINTAINERS: add status information Stephen Hemminger
` (2 preceding siblings ...)
2023-07-20 17:45 5% ` [PATCH v2 ] tap: fix build of TAP BPF program Stephen Hemminger
@ 2023-07-20 23:25 4% ` Stephen Hemminger
2023-07-22 16:32 4% ` [PATCH v4] " Stephen Hemminger
4 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-07-20 23:25 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Move the BPF program related code into a subdirectory.
And add a Makefile for building it.
The code depended on old versions of headers from iproute2.
Include those headers here so that the build works.
The standalone build was also broken by
commit ef5baf3486e0 ("replace packed attributes"),
which introduced __rte_packed into this code.
Add a python program to extract the resulting BPF into
a format that can be consumed by the TAP driver.
Update the documentation.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/nics/tap.rst | 11 +-
drivers/net/tap/bpf/.gitignore | 1 +
drivers/net/tap/bpf/Makefile | 18 ++
drivers/net/tap/bpf/bpf_api.h | 261 ++++++++++++++++++++
drivers/net/tap/bpf/bpf_elf.h | 43 ++++
drivers/net/tap/bpf/bpf_extract.py | 80 ++++++
drivers/net/tap/{ => bpf}/tap_bpf_program.c | 9 +-
drivers/net/tap/tap_rss.h | 2 +-
8 files changed, 413 insertions(+), 12 deletions(-)
create mode 100644 drivers/net/tap/bpf/.gitignore
create mode 100644 drivers/net/tap/bpf/Makefile
create mode 100644 drivers/net/tap/bpf/bpf_api.h
create mode 100644 drivers/net/tap/bpf/bpf_elf.h
create mode 100644 drivers/net/tap/bpf/bpf_extract.py
rename drivers/net/tap/{ => bpf}/tap_bpf_program.c (97%)
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 07df0d35a2ec..449e747994bd 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -256,15 +256,12 @@ C functions under different ELF sections.
2. Install ``LLVM`` library and ``clang`` compiler versions 3.7 and above
-3. Compile ``tap_bpf_program.c`` via ``LLVM`` into an object file::
+3. Use make to compile ``tap_bpf_program.c`` via ``LLVM`` into an object file
+ and extract the resulting instructions into ``tap_bpf_insns.h``::
- clang -O2 -emit-llvm -c tap_bpf_program.c -o - | llc -march=bpf \
- -filetype=obj -o <tap_bpf_program.o>
+ cd bpf; make
-
-4. Use a tool that receives two parameters: an eBPF object file and a section
-name, and prints out the section as a C array of eBPF instructions.
-Embed the C array in your TAP PMD tree.
+4. Recompile the TAP PMD.
The C arrays are uploaded to the kernel using BPF system calls.
diff --git a/drivers/net/tap/bpf/.gitignore b/drivers/net/tap/bpf/.gitignore
new file mode 100644
index 000000000000..30a258f1af3b
--- /dev/null
+++ b/drivers/net/tap/bpf/.gitignore
@@ -0,0 +1 @@
+tap_bpf_program.o
diff --git a/drivers/net/tap/bpf/Makefile b/drivers/net/tap/bpf/Makefile
new file mode 100644
index 000000000000..e5ae4e1f5adc
--- /dev/null
+++ b/drivers/net/tap/bpf/Makefile
@@ -0,0 +1,18 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# This file is not built as part of normal DPDK build.
+# It is used to generate the eBPF code for TAP RSS.
+CLANG=clang
+CLANG_OPTS=-O2
+TARGET=../tap_bpf_insns.h
+
+all: $(TARGET)
+
+clean:
+ rm tap_bpf_program.o $(TARGET)
+
+tap_bpf_program.o: tap_bpf_program.c
+ $(CLANG) $(CLANG_OPTS) -emit-llvm -c $< -o - | \
+ llc -march=bpf -filetype=obj -o $@
+
+$(TARGET): bpf_extract.py tap_bpf_program.o
+ python3 bpf_extract.py tap_bpf_program.o $@
diff --git a/drivers/net/tap/bpf/bpf_api.h b/drivers/net/tap/bpf/bpf_api.h
new file mode 100644
index 000000000000..d13247199c9a
--- /dev/null
+++ b/drivers/net/tap/bpf/bpf_api.h
@@ -0,0 +1,261 @@
+#ifndef __BPF_API__
+#define __BPF_API__
+
+/* Note:
+ *
+ * This file can be included into eBPF kernel programs. It contains
+ * a couple of useful helper functions, map/section ABI (bpf_elf.h),
+ * misc macros and some eBPF specific LLVM built-ins.
+ */
+
+#include <stdint.h>
+
+#include <linux/pkt_cls.h>
+#include <linux/bpf.h>
+#include <linux/filter.h>
+
+#include <asm/byteorder.h>
+
+#include "bpf_elf.h"
+
+/** Misc macros. */
+
+#ifndef __stringify
+# define __stringify(X) #X
+#endif
+
+#ifndef __maybe_unused
+# define __maybe_unused __attribute__((__unused__))
+#endif
+
+#ifndef offsetof
+# define offsetof(TYPE, MEMBER) __builtin_offsetof(TYPE, MEMBER)
+#endif
+
+#ifndef likely
+# define likely(X) __builtin_expect(!!(X), 1)
+#endif
+
+#ifndef unlikely
+# define unlikely(X) __builtin_expect(!!(X), 0)
+#endif
+
+#ifndef htons
+# define htons(X) __constant_htons((X))
+#endif
+
+#ifndef ntohs
+# define ntohs(X) __constant_ntohs((X))
+#endif
+
+#ifndef htonl
+# define htonl(X) __constant_htonl((X))
+#endif
+
+#ifndef ntohl
+# define ntohl(X) __constant_ntohl((X))
+#endif
+
+#ifndef __inline__
+# define __inline__ __attribute__((always_inline))
+#endif
+
+/** Section helper macros. */
+
+#ifndef __section
+# define __section(NAME) \
+ __attribute__((section(NAME), used))
+#endif
+
+#ifndef __section_tail
+# define __section_tail(ID, KEY) \
+ __section(__stringify(ID) "/" __stringify(KEY))
+#endif
+
+#ifndef __section_xdp_entry
+# define __section_xdp_entry \
+ __section(ELF_SECTION_PROG)
+#endif
+
+#ifndef __section_cls_entry
+# define __section_cls_entry \
+ __section(ELF_SECTION_CLASSIFIER)
+#endif
+
+#ifndef __section_act_entry
+# define __section_act_entry \
+ __section(ELF_SECTION_ACTION)
+#endif
+
+#ifndef __section_lwt_entry
+# define __section_lwt_entry \
+ __section(ELF_SECTION_PROG)
+#endif
+
+#ifndef __section_license
+# define __section_license \
+ __section(ELF_SECTION_LICENSE)
+#endif
+
+#ifndef __section_maps
+# define __section_maps \
+ __section(ELF_SECTION_MAPS)
+#endif
+
+/** Declaration helper macros. */
+
+#ifndef BPF_LICENSE
+# define BPF_LICENSE(NAME) \
+ char ____license[] __section_license = NAME
+#endif
+
+/** Classifier helper */
+
+#ifndef BPF_H_DEFAULT
+# define BPF_H_DEFAULT -1
+#endif
+
+/** BPF helper functions for tc. Individual flags are in linux/bpf.h */
+
+#ifndef __BPF_FUNC
+# define __BPF_FUNC(NAME, ...) \
+ (* NAME)(__VA_ARGS__) __maybe_unused
+#endif
+
+#ifndef BPF_FUNC
+# define BPF_FUNC(NAME, ...) \
+ __BPF_FUNC(NAME, __VA_ARGS__) = (void *) BPF_FUNC_##NAME
+#endif
+
+/* Map access/manipulation */
+static void *BPF_FUNC(map_lookup_elem, void *map, const void *key);
+static int BPF_FUNC(map_update_elem, void *map, const void *key,
+ const void *value, uint32_t flags);
+static int BPF_FUNC(map_delete_elem, void *map, const void *key);
+
+/* Time access */
+static uint64_t BPF_FUNC(ktime_get_ns);
+
+/* Debugging */
+
+/* FIXME: __attribute__ ((format(printf, 1, 3))) not possible unless
+ * llvm bug https://llvm.org/bugs/show_bug.cgi?id=26243 gets resolved.
+ * It would require ____fmt to be made const, which generates a reloc
+ * entry (non-map).
+ */
+static void BPF_FUNC(trace_printk, const char *fmt, int fmt_size, ...);
+
+#ifndef printt
+# define printt(fmt, ...) \
+ ({ \
+ char ____fmt[] = fmt; \
+ trace_printk(____fmt, sizeof(____fmt), ##__VA_ARGS__); \
+ })
+#endif
+
+/* Random numbers */
+static uint32_t BPF_FUNC(get_prandom_u32);
+
+/* Tail calls */
+static void BPF_FUNC(tail_call, struct __sk_buff *skb, void *map,
+ uint32_t index);
+
+/* System helpers */
+static uint32_t BPF_FUNC(get_smp_processor_id);
+static uint32_t BPF_FUNC(get_numa_node_id);
+
+/* Packet misc meta data */
+static uint32_t BPF_FUNC(get_cgroup_classid, struct __sk_buff *skb);
+static int BPF_FUNC(skb_under_cgroup, void *map, uint32_t index);
+
+static uint32_t BPF_FUNC(get_route_realm, struct __sk_buff *skb);
+static uint32_t BPF_FUNC(get_hash_recalc, struct __sk_buff *skb);
+static uint32_t BPF_FUNC(set_hash_invalid, struct __sk_buff *skb);
+
+/* Packet redirection */
+static int BPF_FUNC(redirect, int ifindex, uint32_t flags);
+static int BPF_FUNC(clone_redirect, struct __sk_buff *skb, int ifindex,
+ uint32_t flags);
+
+/* Packet manipulation */
+static int BPF_FUNC(skb_load_bytes, struct __sk_buff *skb, uint32_t off,
+ void *to, uint32_t len);
+static int BPF_FUNC(skb_store_bytes, struct __sk_buff *skb, uint32_t off,
+ const void *from, uint32_t len, uint32_t flags);
+
+static int BPF_FUNC(l3_csum_replace, struct __sk_buff *skb, uint32_t off,
+ uint32_t from, uint32_t to, uint32_t flags);
+static int BPF_FUNC(l4_csum_replace, struct __sk_buff *skb, uint32_t off,
+ uint32_t from, uint32_t to, uint32_t flags);
+static int BPF_FUNC(csum_diff, const void *from, uint32_t from_size,
+ const void *to, uint32_t to_size, uint32_t seed);
+static int BPF_FUNC(csum_update, struct __sk_buff *skb, uint32_t wsum);
+
+static int BPF_FUNC(skb_change_type, struct __sk_buff *skb, uint32_t type);
+static int BPF_FUNC(skb_change_proto, struct __sk_buff *skb, uint32_t proto,
+ uint32_t flags);
+static int BPF_FUNC(skb_change_tail, struct __sk_buff *skb, uint32_t nlen,
+ uint32_t flags);
+
+static int BPF_FUNC(skb_pull_data, struct __sk_buff *skb, uint32_t len);
+
+/* Event notification */
+static int __BPF_FUNC(skb_event_output, struct __sk_buff *skb, void *map,
+ uint64_t index, const void *data, uint32_t size) =
+ (void *) BPF_FUNC_perf_event_output;
+
+/* Packet vlan encap/decap */
+static int BPF_FUNC(skb_vlan_push, struct __sk_buff *skb, uint16_t proto,
+ uint16_t vlan_tci);
+static int BPF_FUNC(skb_vlan_pop, struct __sk_buff *skb);
+
+/* Packet tunnel encap/decap */
+static int BPF_FUNC(skb_get_tunnel_key, struct __sk_buff *skb,
+ struct bpf_tunnel_key *to, uint32_t size, uint32_t flags);
+static int BPF_FUNC(skb_set_tunnel_key, struct __sk_buff *skb,
+ const struct bpf_tunnel_key *from, uint32_t size,
+ uint32_t flags);
+
+static int BPF_FUNC(skb_get_tunnel_opt, struct __sk_buff *skb,
+ void *to, uint32_t size);
+static int BPF_FUNC(skb_set_tunnel_opt, struct __sk_buff *skb,
+ const void *from, uint32_t size);
+
+/** LLVM built-ins, mem*() routines work for constant size */
+
+#ifndef lock_xadd
+# define lock_xadd(ptr, val) ((void) __sync_fetch_and_add(ptr, val))
+#endif
+
+#ifndef memset
+# define memset(s, c, n) __builtin_memset((s), (c), (n))
+#endif
+
+#ifndef memcpy
+# define memcpy(d, s, n) __builtin_memcpy((d), (s), (n))
+#endif
+
+#ifndef memmove
+# define memmove(d, s, n) __builtin_memmove((d), (s), (n))
+#endif
+
+/* FIXME: __builtin_memcmp() is not yet fully usable unless llvm bug
+ * https://llvm.org/bugs/show_bug.cgi?id=26218 gets resolved. Also
+ * this one would generate a reloc entry (non-map), otherwise.
+ */
+#if 0
+#ifndef memcmp
+# define memcmp(a, b, n) __builtin_memcmp((a), (b), (n))
+#endif
+#endif
+
+unsigned long long load_byte(void *skb, unsigned long long off)
+ asm ("llvm.bpf.load.byte");
+
+unsigned long long load_half(void *skb, unsigned long long off)
+ asm ("llvm.bpf.load.half");
+
+unsigned long long load_word(void *skb, unsigned long long off)
+ asm ("llvm.bpf.load.word");
+
+#endif /* __BPF_API__ */
diff --git a/drivers/net/tap/bpf/bpf_elf.h b/drivers/net/tap/bpf/bpf_elf.h
new file mode 100644
index 000000000000..406c30874ac3
--- /dev/null
+++ b/drivers/net/tap/bpf/bpf_elf.h
@@ -0,0 +1,43 @@
+#ifndef __BPF_ELF__
+#define __BPF_ELF__
+
+#include <asm/types.h>
+
+/* Note:
+ *
+ * Below ELF section names and bpf_elf_map structure definition
+ * are not (!) kernel ABI. It's rather a "contract" between the
+ * application and the BPF loader in tc. For compatibility, the
+ * section names should stay as-is. Introduction of aliases, if
+ * needed, is a possibility, though.
+ */
+
+/* ELF section names, etc */
+#define ELF_SECTION_LICENSE "license"
+#define ELF_SECTION_MAPS "maps"
+#define ELF_SECTION_PROG "prog"
+#define ELF_SECTION_CLASSIFIER "classifier"
+#define ELF_SECTION_ACTION "action"
+
+#define ELF_MAX_MAPS 64
+#define ELF_MAX_LICENSE_LEN 128
+
+/* Object pinning settings */
+#define PIN_NONE 0
+#define PIN_OBJECT_NS 1
+#define PIN_GLOBAL_NS 2
+
+/* ELF map definition */
+struct bpf_elf_map {
+ __u32 type;
+ __u32 size_key;
+ __u32 size_value;
+ __u32 max_elem;
+ __u32 flags;
+ __u32 id;
+ __u32 pinning;
+ __u32 inner_id;
+ __u32 inner_idx;
+};
+
+#endif /* __BPF_ELF__ */
diff --git a/drivers/net/tap/bpf/bpf_extract.py b/drivers/net/tap/bpf/bpf_extract.py
new file mode 100644
index 000000000000..d79fc61020b3
--- /dev/null
+++ b/drivers/net/tap/bpf/bpf_extract.py
@@ -0,0 +1,80 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright (c) 2023 Stephen Hemminger <stephen@networkplumber.org>
+
+import argparse
+import sys
+import struct
+from tempfile import TemporaryFile
+from elftools.elf.elffile import ELFFile
+
+
+def load_sections(elffile):
+ result = []
+ DATA = [("cls_q", "cls_q_insns"), ("l3_l4", "l3_l4_hash_insns")]
+ for name, tag in DATA:
+ section = elffile.get_section_by_name(name)
+ if section:
+ insns = struct.iter_unpack('<BBhL', section.data())
+ result.append([tag, insns])
+ return result
+
+
+def dump_sections(sections, out):
+ for name, insns in sections:
+ print(f'\nstatic const struct bpf_insn {name}[] = {{', file=out)
+ for bpf in insns:
+ code = bpf[0]
+ src = bpf[1] >> 4
+ dst = bpf[1] & 0xf
+ off = bpf[2]
+ imm = bpf[3]
+ print('\t{{{:#02x}, {:4d}, {:4d}, {:8d}, {:#010x}}},'.format(
+ code, dst, src, off, imm),
+ file=out)
+ print('};', file=out)
+
+
+def parse_args():
+ parser = argparse.ArgumentParser()
+ parser.add_argument("input",
+ nargs='+',
+ help="input object file path or '-' for stdin")
+ parser.add_argument("output", help="output C file path or '-' for stdout")
+ return parser.parse_args()
+
+
+def open_input(path):
+ if path == "-":
+ temp = TemporaryFile()
+ temp.write(sys.stdin.buffer.read())
+ return temp
+ return open(path, "rb")
+
+
+def open_output(path):
+ if path == "-":
+ return sys.stdout
+ return open(path, "w")
+
+
+def write_header(output):
+ print("/* SPDX-License-Identifier: BSD-3-Clause", file=output)
+ print(" * Compiled BPF instructions, do not edit", file=output)
+ print(" */\n", file=output)
+ print("#include <tap_bpf.h>", file=output)
+
+
+def main():
+ args = parse_args()
+
+ output = open_output(args.output)
+ write_header(output)
+ for path in args.input:
+ elffile = ELFFile(open_input(path))
+ sections = load_sections(elffile)
+ dump_sections(sections, output)
+
+
+if __name__ == "__main__":
+ main()
diff --git a/drivers/net/tap/tap_bpf_program.c b/drivers/net/tap/bpf/tap_bpf_program.c
similarity index 97%
rename from drivers/net/tap/tap_bpf_program.c
rename to drivers/net/tap/bpf/tap_bpf_program.c
index 20c310e5e7ba..ff6f1606fb38 100644
--- a/drivers/net/tap/tap_bpf_program.c
+++ b/drivers/net/tap/bpf/tap_bpf_program.c
@@ -14,9 +14,10 @@
#include <linux/ipv6.h>
#include <linux/if_tunnel.h>
#include <linux/filter.h>
-#include <linux/bpf.h>
-#include "tap_rss.h"
+#include "bpf_api.h"
+#include "bpf_elf.h"
+#include "../tap_rss.h"
/** Create IPv4 address */
#define IPv4(a, b, c, d) ((__u32)(((a) & 0xff) << 24) | \
@@ -75,14 +76,14 @@ struct ipv4_l3_l4_tuple {
__u32 dst_addr;
__u16 dport;
__u16 sport;
-} __rte_packed;
+} __attribute__((packed));
struct ipv6_l3_l4_tuple {
__u8 src_addr[16];
__u8 dst_addr[16];
__u16 dport;
__u16 sport;
-} __rte_packed;
+} __attribute__((packed));
static const __u8 def_rss_key[TAP_RSS_HASH_KEY_SIZE] = {
0xd1, 0x81, 0xc6, 0x2c,
diff --git a/drivers/net/tap/tap_rss.h b/drivers/net/tap/tap_rss.h
index 48c151cf6b68..dff46a012f94 100644
--- a/drivers/net/tap/tap_rss.h
+++ b/drivers/net/tap/tap_rss.h
@@ -35,6 +35,6 @@ struct rss_key {
__u32 key_size;
__u32 queues[TAP_MAX_QUEUES];
__u32 nb_queues;
-} __rte_packed;
+} __attribute__((packed));
#endif /* _TAP_RSS_H_ */
--
2.39.2
^ permalink raw reply [relevance 4%]
* [PATCH v2] tap: fix build of TAP BPF program
2023-07-16 21:25 1% [RFC] MAINTAINERS: add status information Stephen Hemminger
2023-07-19 16:07 1% ` [PATCH v2] " Stephen Hemminger
2023-07-20 17:21 1% ` [PATCH v3] " Stephen Hemminger
@ 2023-07-20 17:45 5% ` Stephen Hemminger
2023-07-20 23:25 4% ` [PATCH v3] " Stephen Hemminger
2023-07-22 16:32 4% ` [PATCH v4] " Stephen Hemminger
4 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-07-20 17:45 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
The code was depending on old versions of headers from iproute2.
Include those headers here so that build works.
The standalone build was also broken by
commit ef5baf3486e0 ("replace packed attributes")
which introduced __rte_packed into this code.
This patch does not address several other issues with this
BPF code. It should be using BTF and the conversion into
array is a mess.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
drivers/net/tap/bpf_api.h | 261 ++++++++++++++++++++++++++++++
drivers/net/tap/bpf_elf.h | 43 +++++
drivers/net/tap/tap_bpf_program.c | 14 +-
| 2 +-
4 files changed, 316 insertions(+), 4 deletions(-)
create mode 100644 drivers/net/tap/bpf_api.h
create mode 100644 drivers/net/tap/bpf_elf.h
diff --git a/drivers/net/tap/bpf_api.h b/drivers/net/tap/bpf_api.h
new file mode 100644
index 000000000000..d13247199c9a
--- /dev/null
+++ b/drivers/net/tap/bpf_api.h
@@ -0,0 +1,261 @@
+#ifndef __BPF_API__
+#define __BPF_API__
+
+/* Note:
+ *
+ * This file can be included into eBPF kernel programs. It contains
+ * a couple of useful helper functions, map/section ABI (bpf_elf.h),
+ * misc macros and some eBPF specific LLVM built-ins.
+ */
+
+#include <stdint.h>
+
+#include <linux/pkt_cls.h>
+#include <linux/bpf.h>
+#include <linux/filter.h>
+
+#include <asm/byteorder.h>
+
+#include "bpf_elf.h"
+
+/** Misc macros. */
+
+#ifndef __stringify
+# define __stringify(X) #X
+#endif
+
+#ifndef __maybe_unused
+# define __maybe_unused __attribute__((__unused__))
+#endif
+
+#ifndef offsetof
+# define offsetof(TYPE, MEMBER) __builtin_offsetof(TYPE, MEMBER)
+#endif
+
+#ifndef likely
+# define likely(X) __builtin_expect(!!(X), 1)
+#endif
+
+#ifndef unlikely
+# define unlikely(X) __builtin_expect(!!(X), 0)
+#endif
+
+#ifndef htons
+# define htons(X) __constant_htons((X))
+#endif
+
+#ifndef ntohs
+# define ntohs(X) __constant_ntohs((X))
+#endif
+
+#ifndef htonl
+# define htonl(X) __constant_htonl((X))
+#endif
+
+#ifndef ntohl
+# define ntohl(X) __constant_ntohl((X))
+#endif
+
+#ifndef __inline__
+# define __inline__ __attribute__((always_inline))
+#endif
+
+/** Section helper macros. */
+
+#ifndef __section
+# define __section(NAME) \
+ __attribute__((section(NAME), used))
+#endif
+
+#ifndef __section_tail
+# define __section_tail(ID, KEY) \
+ __section(__stringify(ID) "/" __stringify(KEY))
+#endif
+
+#ifndef __section_xdp_entry
+# define __section_xdp_entry \
+ __section(ELF_SECTION_PROG)
+#endif
+
+#ifndef __section_cls_entry
+# define __section_cls_entry \
+ __section(ELF_SECTION_CLASSIFIER)
+#endif
+
+#ifndef __section_act_entry
+# define __section_act_entry \
+ __section(ELF_SECTION_ACTION)
+#endif
+
+#ifndef __section_lwt_entry
+# define __section_lwt_entry \
+ __section(ELF_SECTION_PROG)
+#endif
+
+#ifndef __section_license
+# define __section_license \
+ __section(ELF_SECTION_LICENSE)
+#endif
+
+#ifndef __section_maps
+# define __section_maps \
+ __section(ELF_SECTION_MAPS)
+#endif
+
+/** Declaration helper macros. */
+
+#ifndef BPF_LICENSE
+# define BPF_LICENSE(NAME) \
+ char ____license[] __section_license = NAME
+#endif
+
+/** Classifier helper */
+
+#ifndef BPF_H_DEFAULT
+# define BPF_H_DEFAULT -1
+#endif
+
+/** BPF helper functions for tc. Individual flags are in linux/bpf.h */
+
+#ifndef __BPF_FUNC
+# define __BPF_FUNC(NAME, ...) \
+ (* NAME)(__VA_ARGS__) __maybe_unused
+#endif
+
+#ifndef BPF_FUNC
+# define BPF_FUNC(NAME, ...) \
+ __BPF_FUNC(NAME, __VA_ARGS__) = (void *) BPF_FUNC_##NAME
+#endif
+
+/* Map access/manipulation */
+static void *BPF_FUNC(map_lookup_elem, void *map, const void *key);
+static int BPF_FUNC(map_update_elem, void *map, const void *key,
+ const void *value, uint32_t flags);
+static int BPF_FUNC(map_delete_elem, void *map, const void *key);
+
+/* Time access */
+static uint64_t BPF_FUNC(ktime_get_ns);
+
+/* Debugging */
+
+/* FIXME: __attribute__ ((format(printf, 1, 3))) not possible unless
+ * llvm bug https://llvm.org/bugs/show_bug.cgi?id=26243 gets resolved.
+ * It would require ____fmt to be made const, which generates a reloc
+ * entry (non-map).
+ */
+static void BPF_FUNC(trace_printk, const char *fmt, int fmt_size, ...);
+
+#ifndef printt
+# define printt(fmt, ...) \
+ ({ \
+ char ____fmt[] = fmt; \
+ trace_printk(____fmt, sizeof(____fmt), ##__VA_ARGS__); \
+ })
+#endif
+
+/* Random numbers */
+static uint32_t BPF_FUNC(get_prandom_u32);
+
+/* Tail calls */
+static void BPF_FUNC(tail_call, struct __sk_buff *skb, void *map,
+ uint32_t index);
+
+/* System helpers */
+static uint32_t BPF_FUNC(get_smp_processor_id);
+static uint32_t BPF_FUNC(get_numa_node_id);
+
+/* Packet misc meta data */
+static uint32_t BPF_FUNC(get_cgroup_classid, struct __sk_buff *skb);
+static int BPF_FUNC(skb_under_cgroup, void *map, uint32_t index);
+
+static uint32_t BPF_FUNC(get_route_realm, struct __sk_buff *skb);
+static uint32_t BPF_FUNC(get_hash_recalc, struct __sk_buff *skb);
+static uint32_t BPF_FUNC(set_hash_invalid, struct __sk_buff *skb);
+
+/* Packet redirection */
+static int BPF_FUNC(redirect, int ifindex, uint32_t flags);
+static int BPF_FUNC(clone_redirect, struct __sk_buff *skb, int ifindex,
+ uint32_t flags);
+
+/* Packet manipulation */
+static int BPF_FUNC(skb_load_bytes, struct __sk_buff *skb, uint32_t off,
+ void *to, uint32_t len);
+static int BPF_FUNC(skb_store_bytes, struct __sk_buff *skb, uint32_t off,
+ const void *from, uint32_t len, uint32_t flags);
+
+static int BPF_FUNC(l3_csum_replace, struct __sk_buff *skb, uint32_t off,
+ uint32_t from, uint32_t to, uint32_t flags);
+static int BPF_FUNC(l4_csum_replace, struct __sk_buff *skb, uint32_t off,
+ uint32_t from, uint32_t to, uint32_t flags);
+static int BPF_FUNC(csum_diff, const void *from, uint32_t from_size,
+ const void *to, uint32_t to_size, uint32_t seed);
+static int BPF_FUNC(csum_update, struct __sk_buff *skb, uint32_t wsum);
+
+static int BPF_FUNC(skb_change_type, struct __sk_buff *skb, uint32_t type);
+static int BPF_FUNC(skb_change_proto, struct __sk_buff *skb, uint32_t proto,
+ uint32_t flags);
+static int BPF_FUNC(skb_change_tail, struct __sk_buff *skb, uint32_t nlen,
+ uint32_t flags);
+
+static int BPF_FUNC(skb_pull_data, struct __sk_buff *skb, uint32_t len);
+
+/* Event notification */
+static int __BPF_FUNC(skb_event_output, struct __sk_buff *skb, void *map,
+ uint64_t index, const void *data, uint32_t size) =
+ (void *) BPF_FUNC_perf_event_output;
+
+/* Packet vlan encap/decap */
+static int BPF_FUNC(skb_vlan_push, struct __sk_buff *skb, uint16_t proto,
+ uint16_t vlan_tci);
+static int BPF_FUNC(skb_vlan_pop, struct __sk_buff *skb);
+
+/* Packet tunnel encap/decap */
+static int BPF_FUNC(skb_get_tunnel_key, struct __sk_buff *skb,
+ struct bpf_tunnel_key *to, uint32_t size, uint32_t flags);
+static int BPF_FUNC(skb_set_tunnel_key, struct __sk_buff *skb,
+ const struct bpf_tunnel_key *from, uint32_t size,
+ uint32_t flags);
+
+static int BPF_FUNC(skb_get_tunnel_opt, struct __sk_buff *skb,
+ void *to, uint32_t size);
+static int BPF_FUNC(skb_set_tunnel_opt, struct __sk_buff *skb,
+ const void *from, uint32_t size);
+
+/** LLVM built-ins, mem*() routines work for constant size */
+
+#ifndef lock_xadd
+# define lock_xadd(ptr, val) ((void) __sync_fetch_and_add(ptr, val))
+#endif
+
+#ifndef memset
+# define memset(s, c, n) __builtin_memset((s), (c), (n))
+#endif
+
+#ifndef memcpy
+# define memcpy(d, s, n) __builtin_memcpy((d), (s), (n))
+#endif
+
+#ifndef memmove
+# define memmove(d, s, n) __builtin_memmove((d), (s), (n))
+#endif
+
+/* FIXME: __builtin_memcmp() is not yet fully usable unless llvm bug
+ * https://llvm.org/bugs/show_bug.cgi?id=26218 gets resolved. Also
+ * this one would generate a reloc entry (non-map), otherwise.
+ */
+#if 0
+#ifndef memcmp
+# define memcmp(a, b, n) __builtin_memcmp((a), (b), (n))
+#endif
+#endif
+
+unsigned long long load_byte(void *skb, unsigned long long off)
+ asm ("llvm.bpf.load.byte");
+
+unsigned long long load_half(void *skb, unsigned long long off)
+ asm ("llvm.bpf.load.half");
+
+unsigned long long load_word(void *skb, unsigned long long off)
+ asm ("llvm.bpf.load.word");
+
+#endif /* __BPF_API__ */
diff --git a/drivers/net/tap/bpf_elf.h b/drivers/net/tap/bpf_elf.h
new file mode 100644
index 000000000000..406c30874ac3
--- /dev/null
+++ b/drivers/net/tap/bpf_elf.h
@@ -0,0 +1,43 @@
+#ifndef __BPF_ELF__
+#define __BPF_ELF__
+
+#include <asm/types.h>
+
+/* Note:
+ *
+ * Below ELF section names and bpf_elf_map structure definition
+ * are not (!) kernel ABI. It's rather a "contract" between the
+ * application and the BPF loader in tc. For compatibility, the
+ * section names should stay as-is. Introduction of aliases, if
+ * needed, is a possibility, though.
+ */
+
+/* ELF section names, etc */
+#define ELF_SECTION_LICENSE "license"
+#define ELF_SECTION_MAPS "maps"
+#define ELF_SECTION_PROG "prog"
+#define ELF_SECTION_CLASSIFIER "classifier"
+#define ELF_SECTION_ACTION "action"
+
+#define ELF_MAX_MAPS 64
+#define ELF_MAX_LICENSE_LEN 128
+
+/* Object pinning settings */
+#define PIN_NONE 0
+#define PIN_OBJECT_NS 1
+#define PIN_GLOBAL_NS 2
+
+/* ELF map definition */
+struct bpf_elf_map {
+ __u32 type;
+ __u32 size_key;
+ __u32 size_value;
+ __u32 max_elem;
+ __u32 flags;
+ __u32 id;
+ __u32 pinning;
+ __u32 inner_id;
+ __u32 inner_idx;
+};
+
+#endif /* __BPF_ELF__ */
diff --git a/drivers/net/tap/tap_bpf_program.c b/drivers/net/tap/tap_bpf_program.c
index 20c310e5e7ba..daf30c4aba86 100644
--- a/drivers/net/tap/tap_bpf_program.c
+++ b/drivers/net/tap/tap_bpf_program.c
@@ -1,5 +1,12 @@
/* SPDX-License-Identifier: BSD-3-Clause OR GPL-2.0
* Copyright 2017 Mellanox Technologies, Ltd
+ *
+ * This file is not built as part of normal DPDK build.
+ * It is used to generate the eBPF code for TAP RSS.
+ *
+ * To build it use:
+ * clang -O2 -emit-llvm -c tap_bpf_program.c -o - | \
+ * llc -march=bpf -filetype=obj -o tap_bpf_program.o
*/
#include <stdint.h>
@@ -14,8 +21,9 @@
#include <linux/ipv6.h>
#include <linux/if_tunnel.h>
#include <linux/filter.h>
-#include <linux/bpf.h>
+#include "bpf_api.h"
+#include "bpf_elf.h"
#include "tap_rss.h"
/** Create IPv4 address */
@@ -75,14 +83,14 @@ struct ipv4_l3_l4_tuple {
__u32 dst_addr;
__u16 dport;
__u16 sport;
-} __rte_packed;
+} __attribute__((packed));
struct ipv6_l3_l4_tuple {
__u8 src_addr[16];
__u8 dst_addr[16];
__u16 dport;
__u16 sport;
-} __rte_packed;
+} __attribute__((packed));
static const __u8 def_rss_key[TAP_RSS_HASH_KEY_SIZE] = {
0xd1, 0x81, 0xc6, 0x2c,
diff --git a/drivers/net/tap/tap_rss.h b/drivers/net/tap/tap_rss.h
index 48c151cf6b68..dff46a012f94 100644
--- a/drivers/net/tap/tap_rss.h
+++ b/drivers/net/tap/tap_rss.h
@@ -35,6 +35,6 @@ struct rss_key {
__u32 key_size;
__u32 queues[TAP_MAX_QUEUES];
__u32 nb_queues;
-} __rte_packed;
+} __attribute__((packed));
#endif /* _TAP_RSS_H_ */
--
2.39.2
^ permalink raw reply [relevance 5%]
* [PATCH v3] MAINTAINERS: add status information
2023-07-16 21:25 1% [RFC] MAINTAINERS: add status information Stephen Hemminger
2023-07-19 16:07 1% ` [PATCH v2] " Stephen Hemminger
@ 2023-07-20 17:21 1% ` Stephen Hemminger
2023-07-20 17:45 5% ` [PATCH v2] tap: fix build of TAP BPF program Stephen Hemminger
` (2 subsequent siblings)
4 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-07-20 17:21 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Add a new field S: which indicates the status of support for
individual sub-trees. Almost everything is marked as supported
but components without any maintainer are listed as Orphan.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
v3 - add back Makefile
mark vdev_netvsc as Odd Fixes
MAINTAINERS | 266 ++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 266 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 18bc05fccd0d..42cc29e6c475 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -17,6 +17,16 @@ Descriptions of section entries:
X: Files and directories exclusion, same rules as F:
K: Keyword regex pattern to match content.
One regex pattern per line. Multiple K: lines acceptable.
+ S: *Status*, one of the following:
+ Supported: Someone is actually paid to look after this.
+ Maintained: Someone actually looks after it.
+ Odd Fixes: It has a maintainer but they don't have time to do
+ much other than throw the odd patch in. See below.
+ Orphan: No current maintainer [but maybe you could take the
+ role as you write your new code].
+ Obsolete: Old code. Something tagged obsolete generally means
+ it has been replaced by a better system and you
+ should be using that.
General Project Administration
@@ -25,44 +35,54 @@ General Project Administration
Main Branch
M: Thomas Monjalon <thomas@monjalon.net>
M: David Marchand <david.marchand@redhat.com>
+S: Supported
T: git://dpdk.org/dpdk
Next-net Tree
M: Ferruh Yigit <ferruh.yigit@amd.com>
M: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net
Next-net-brcm Tree
M: Ajit Khaparde <ajit.khaparde@broadcom.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-brcm
Next-net-intel Tree
M: Qi Zhang <qi.z.zhang@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
Next-net-mrvl Tree
M: Jerin Jacob <jerinj@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
Next-net-mlx Tree
M: Raslan Darawsheh <rasland@nvidia.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mlx
Next-virtio Tree
M: Maxime Coquelin <maxime.coquelin@redhat.com>
M: Chenbo Xia <chenbo.xia@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-virtio
Next-crypto Tree
M: Akhil Goyal <gakhil@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-crypto
Next-eventdev Tree
M: Jerin Jacob <jerinj@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
Next-baseband Tree
M: Maxime Coquelin <maxime.coquelin@redhat.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-baseband
Stable Branches
@@ -70,17 +90,21 @@ M: Luca Boccassi <bluca@debian.org>
M: Kevin Traynor <ktraynor@redhat.com>
M: Christian Ehrhardt <christian.ehrhardt@canonical.com>
M: Xueming Li <xuemingl@nvidia.com>
+S: Supported
T: git://dpdk.org/dpdk-stable
Security Issues
M: maintainers@dpdk.org
+S: Supported
Documentation (with overlaps)
F: README
F: doc/
+S: Supported
Developers and Maintainers Tools
M: Thomas Monjalon <thomas@monjalon.net>
+S: Supported
F: MAINTAINERS
F: devtools/build-dict.sh
F: devtools/check-abi.sh
@@ -110,6 +134,7 @@ F: .mailmap
Build System
M: Bruce Richardson <bruce.richardson@intel.com>
+S: Supported
F: Makefile
F: meson.build
F: meson_options.txt
@@ -130,11 +155,13 @@ F: devtools/check-meson.py
Public CI
M: Aaron Conole <aconole@redhat.com>
M: Michael Santana <maicolgabriel@hotmail.com>
+S: Supported
F: .github/workflows/build.yml
F: .ci/
Driver information
M: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
+S: Maintained
F: buildtools/coff.py
F: buildtools/gen-pmdinfo-cfile.py
F: buildtools/pmdinfogen.py
@@ -147,6 +174,7 @@ Environment Abstraction Layer
T: git://dpdk.org/dpdk
EAL API and common code
+S: Supported
F: lib/eal/common/
F: lib/eal/unix/
F: lib/eal/include/
@@ -180,6 +208,7 @@ F: app/test/test_version.c
Trace - EXPERIMENTAL
M: Jerin Jacob <jerinj@marvell.com>
M: Sunil Kumar Kori <skori@marvell.com>
+S: Supported
F: lib/eal/include/rte_trace*.h
F: lib/eal/common/eal_common_trace*.c
F: lib/eal/common/eal_trace.h
@@ -188,6 +217,7 @@ F: app/test/test_trace*
Memory Allocation
M: Anatoly Burakov <anatoly.burakov@intel.com>
+S: Supported
F: lib/eal/include/rte_fbarray.h
F: lib/eal/include/rte_mem*
F: lib/eal/include/rte_malloc.h
@@ -209,11 +239,13 @@ F: app/test/test_memzone.c
Interrupt Subsystem
M: Harman Kalra <hkalra@marvell.com>
+S: Supported
F: lib/eal/include/rte_epoll.h
F: lib/eal/*/*interrupts.*
F: app/test/test_interrupts.c
Keep alive
+S: Orphan
F: lib/eal/include/rte_keepalive.h
F: lib/eal/common/rte_keepalive.c
F: examples/l2fwd-keepalive/
@@ -221,6 +253,7 @@ F: doc/guides/sample_app_ug/keep_alive.rst
Secondary process
M: Anatoly Burakov <anatoly.burakov@intel.com>
+S: Maintained
K: RTE_PROC_
F: lib/eal/common/eal_common_proc.c
F: doc/guides/prog_guide/multi_proc_support.rst
@@ -230,6 +263,7 @@ F: doc/guides/sample_app_ug/multi_process.rst
Service Cores
M: Harry van Haaren <harry.van.haaren@intel.com>
+S: Supported
F: lib/eal/include/rte_service.h
F: lib/eal/include/rte_service_component.h
F: lib/eal/common/rte_service.c
@@ -240,44 +274,52 @@ F: doc/guides/sample_app_ug/service_cores.rst
Bitops
M: Joyce Kong <joyce.kong@arm.com>
+S: Supported
F: lib/eal/include/rte_bitops.h
F: app/test/test_bitops.c
Bitmap
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+S: Supported
F: lib/eal/include/rte_bitmap.h
F: app/test/test_bitmap.c
MCSlock
M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
+S: Supported
F: lib/eal/include/rte_mcslock.h
F: app/test/test_mcslock.c
Sequence Lock
M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
+S: Supported
F: lib/eal/include/rte_seqcount.h
F: lib/eal/include/rte_seqlock.h
F: app/test/test_seqlock.c
Ticketlock
M: Joyce Kong <joyce.kong@arm.com>
+S: Supported
F: lib/eal/include/rte_ticketlock.h
F: app/test/test_ticketlock.c
Pseudo-random Number Generation
M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
+S: Supported
F: lib/eal/include/rte_random.h
F: lib/eal/common/rte_random.c
F: app/test/test_rand_perf.c
ARM v7
M: Ruifeng Wang <ruifeng.wang@arm.com>
+S: Supported
F: config/arm/
F: lib/eal/arm/
X: lib/eal/arm/include/*_64.h
ARM v8
M: Ruifeng Wang <ruifeng.wang@arm.com>
+S: Supported
F: config/arm/
F: doc/guides/linux_gsg/cross_build_dpdk_for_arm64.rst
F: lib/eal/arm/
@@ -291,12 +333,14 @@ F: examples/common/neon/
LoongArch
M: Min Zhou <zhoumin@loongson.cn>
+S: Supported
F: config/loongarch/
F: doc/guides/linux_gsg/cross_build_dpdk_for_loongarch.rst
F: lib/eal/loongarch/
IBM POWER (alpha)
M: David Christensen <drc@linux.vnet.ibm.com>
+S: Supported
F: config/ppc/
F: lib/eal/ppc/
F: lib/*/*_altivec*
@@ -307,6 +351,7 @@ F: examples/common/altivec/
RISC-V
M: Stanislaw Kardach <kda@semihalf.com>
+S: Supported
F: config/riscv/
F: doc/guides/linux_gsg/cross_build_dpdk_for_riscv.rst
F: lib/eal/riscv/
@@ -314,6 +359,7 @@ F: lib/eal/riscv/
Intel x86
M: Bruce Richardson <bruce.richardson@intel.com>
M: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
+S: Supported
F: config/x86/
F: doc/guides/linux_gsg/nic_perf_intel_platform.rst
F: buildtools/binutils-avx512-check.py
@@ -330,28 +376,34 @@ F: examples/*/*_avx*
F: examples/common/sse/
Linux EAL (with overlaps)
+S: Supported
F: lib/eal/linux/
F: doc/guides/linux_gsg/
Linux UIO
+S: Supported
F: drivers/bus/pci/linux/*uio*
Linux VFIO
M: Anatoly Burakov <anatoly.burakov@intel.com>
+S: Supported
F: lib/eal/linux/*vfio*
F: drivers/bus/pci/linux/*vfio*
FreeBSD EAL (with overlaps)
M: Bruce Richardson <bruce.richardson@intel.com>
+S: Odd Fixes
F: lib/eal/freebsd/
F: doc/guides/freebsd_gsg/
FreeBSD contigmem
M: Bruce Richardson <bruce.richardson@intel.com>
+S: Odd Fixes
F: kernel/freebsd/contigmem/
FreeBSD UIO
M: Bruce Richardson <bruce.richardson@intel.com>
+S: Odd Fixes
F: kernel/freebsd/nic_uio/
Windows support
@@ -359,12 +411,14 @@ M: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
M: Narcisa Ana Maria Vasile <navasile@linux.microsoft.com>
M: Dmitry Malloy <dmitrym@microsoft.com>
M: Pallavi Kadam <pallavi.kadam@intel.com>
+S: Supported
F: lib/eal/windows/
F: buildtools/map_to_win.py
F: doc/guides/windows_gsg/
Windows memory allocation
M: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
+S: Supported
F: lib/eal/windows/eal_hugepages.c
F: lib/eal/windows/eal_mem*
@@ -372,10 +426,12 @@ F: lib/eal/windows/eal_mem*
Core Libraries
--------------
T: git://dpdk.org/dpdk
+S: Maintained
Memory pool
M: Olivier Matz <olivier.matz@6wind.com>
M: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
+S: Supported
F: lib/mempool/
F: drivers/mempool/ring/
F: doc/guides/prog_guide/mempool_lib.rst
@@ -385,6 +441,7 @@ F: app/test/test_func_reentrancy.c
Ring queue
M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
M: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
+S: Supported
F: lib/ring/
F: doc/guides/prog_guide/ring_lib.rst
F: app/test/test_ring*
@@ -392,6 +449,7 @@ F: app/test/test_func_reentrancy.c
Stack
M: Olivier Matz <olivier.matz@6wind.com>
+S: Supported
F: lib/stack/
F: drivers/mempool/stack/
F: app/test/test_stack*
@@ -399,6 +457,7 @@ F: doc/guides/prog_guide/stack_lib.rst
Packet buffer
M: Olivier Matz <olivier.matz@6wind.com>
+S: Supported
F: lib/mbuf/
F: doc/guides/prog_guide/mbuf_lib.rst
F: app/test/test_mbuf.c
@@ -407,6 +466,7 @@ Ethernet API
M: Thomas Monjalon <thomas@monjalon.net>
M: Ferruh Yigit <ferruh.yigit@amd.com>
M: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net
F: lib/ethdev/
F: app/test/test_ethdev*
@@ -415,6 +475,7 @@ F: doc/guides/prog_guide/switch_representation.rst
Flow API
M: Ori Kam <orika@nvidia.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net
F: app/test-pmd/cmdline_flow.c
F: doc/guides/prog_guide/rte_flow.rst
@@ -422,18 +483,21 @@ F: lib/ethdev/rte_flow*
Traffic Management API - EXPERIMENTAL
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net
F: lib/ethdev/rte_tm*
F: app/test-pmd/cmdline_tm.*
Traffic Metering and Policing API - EXPERIMENTAL
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net
F: lib/ethdev/rte_mtr*
F: app/test-pmd/cmdline_mtr.*
Baseband API
M: Nicolas Chautru <nicolas.chautru@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-baseband
F: lib/bbdev/
F: doc/guides/prog_guide/bbdev.rst
@@ -446,6 +510,7 @@ F: doc/guides/sample_app_ug/bbdev_app.rst
Crypto API
M: Akhil Goyal <gakhil@marvell.com>
M: Fan Zhang <fanzhang.oss@gmail.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/cryptodev/
F: app/test/test_cryptodev*
@@ -453,6 +518,7 @@ F: examples/l2fwd-crypto/
Security API
M: Akhil Goyal <gakhil@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/security/
F: doc/guides/prog_guide/rte_security.rst
@@ -461,6 +527,7 @@ F: app/test/test_security*
Compression API - EXPERIMENTAL
M: Fan Zhang <fanzhang.oss@gmail.com>
M: Ashish Gupta <ashish.gupta@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/compressdev/
F: drivers/compress/
@@ -470,6 +537,7 @@ F: doc/guides/compressdevs/features/default.ini
RegEx API - EXPERIMENTAL
M: Ori Kam <orika@nvidia.com>
+S: Supported
F: lib/regexdev/
F: app/test-regex/
F: doc/guides/prog_guide/regexdev.rst
@@ -477,6 +545,7 @@ F: doc/guides/regexdevs/features/default.ini
Machine Learning device API - EXPERIMENTAL
M: Srikanth Yalavarthi <syalavarthi@marvell.com>
+S: Supported
F: lib/mldev/
F: doc/guides/prog_guide/mldev.rst
F: app/test-mldev/
@@ -484,6 +553,7 @@ F: doc/guides/tools/testmldev.rst
DMA device API - EXPERIMENTAL
M: Chengwen Feng <fengchengwen@huawei.com>
+S: Supported
F: lib/dmadev/
F: drivers/dma/skeleton/
F: app/test/test_dmadev*
@@ -495,6 +565,7 @@ F: doc/guides/sample_app_ug/dma.rst
General-Purpose Graphics Processing Unit (GPU) API - EXPERIMENTAL
M: Elena Agostini <eagostini@nvidia.com>
+S: Supported
F: lib/gpudev/
F: doc/guides/prog_guide/gpudev.rst
F: doc/guides/gpus/features/default.ini
@@ -502,6 +573,7 @@ F: app/test-gpudev/
Eventdev API
M: Jerin Jacob <jerinj@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
F: lib/eventdev/
F: drivers/event/skeleton/
@@ -510,6 +582,7 @@ F: examples/l3fwd/l3fwd_event*
Eventdev Ethdev Rx Adapter API
M: Naga Harish K S V <s.v.naga.harish.k@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
F: lib/eventdev/*eth_rx_adapter*
F: app/test/test_event_eth_rx_adapter.c
@@ -517,6 +590,7 @@ F: doc/guides/prog_guide/event_ethernet_rx_adapter.rst
Eventdev Ethdev Tx Adapter API
M: Naga Harish K S V <s.v.naga.harish.k@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
F: lib/eventdev/*eth_tx_adapter*
F: app/test/test_event_eth_tx_adapter.c
@@ -524,6 +598,7 @@ F: doc/guides/prog_guide/event_ethernet_tx_adapter.rst
Eventdev Timer Adapter API
M: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
F: lib/eventdev/*timer_adapter*
F: app/test/test_event_timer_adapter.c
@@ -531,6 +606,7 @@ F: doc/guides/prog_guide/event_timer_adapter.rst
Eventdev Crypto Adapter API
M: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
F: lib/eventdev/*crypto_adapter*
F: app/test/test_event_crypto_adapter.c
@@ -539,6 +615,7 @@ F: doc/guides/prog_guide/event_crypto_adapter.rst
Raw device API
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
+S: Supported
F: lib/rawdev/
F: drivers/raw/skeleton/
F: app/test/test_rawdev.c
@@ -551,11 +628,13 @@ Memory Pool Drivers
Bucket memory pool
M: Artem V. Andreev <artem.andreev@oktetlabs.ru>
M: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
+S: Supported
F: drivers/mempool/bucket/
Marvell cnxk
M: Ashwin Sekhar T K <asekhar@marvell.com>
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/mempool/cnxk/
F: doc/guides/mempool/cnxk.rst
@@ -567,20 +646,24 @@ Bus Drivers
AMD CDX bus
M: Nipun Gupta <nipun.gupta@amd.com>
M: Nikhil Agarwal <nikhil.agarwal@amd.com>
+S: Supported
F: drivers/bus/cdx/
Auxiliary bus driver - EXPERIMENTAL
M: Parav Pandit <parav@nvidia.com>
M: Xueming Li <xuemingl@nvidia.com>
+S: Supported
F: drivers/bus/auxiliary/
Intel FPGA bus
M: Rosen Xu <rosen.xu@intel.com>
+S: Supported
F: drivers/bus/ifpga/
NXP buses
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
+S: Supported
F: drivers/common/dpaax/
F: drivers/bus/dpaa/
F: drivers/bus/fslmc/
@@ -588,36 +671,43 @@ F: drivers/bus/fslmc/
PCI bus driver
M: Chenbo Xia <chenbo.xia@intel.com>
M: Nipun Gupta <nipun.gupta@amd.com>
+S: Supported
F: drivers/bus/pci/
Platform bus driver
M: Tomasz Duszynski <tduszynski@marvell.com>
+S: Supported
F: drivers/bus/platform/
VDEV bus driver
+S: Maintained
F: drivers/bus/vdev/
F: app/test/test_vdev.c
VMBUS bus driver
M: Long Li <longli@microsoft.com>
+S: Supported
F: drivers/bus/vmbus/
Networking Drivers
------------------
M: Ferruh Yigit <ferruh.yigit@amd.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net
F: doc/guides/nics/features/default.ini
Link bonding
M: Chas Williams <chas3@att.com>
M: Min Hu (Connor) <humin29@huawei.com>
+S: Supported
F: drivers/net/bonding/
F: doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
F: app/test/test_link_bonding*
F: examples/bond/
Linux KNI
+S: Obsolete
F: kernel/linux/kni/
F: lib/kni/
F: doc/guides/prog_guide/kernel_nic_interface.rst
@@ -625,12 +715,14 @@ F: app/test/test_kni.c
Linux AF_PACKET
M: John W. Linville <linville@tuxdriver.com>
+S: Odd Fixes
F: drivers/net/af_packet/
F: doc/guides/nics/features/afpacket.ini
Linux AF_XDP
M: Ciara Loftus <ciara.loftus@intel.com>
M: Qi Zhang <qi.z.zhang@intel.com>
+S: Supported
F: drivers/net/af_xdp/
F: doc/guides/nics/af_xdp.rst
F: doc/guides/nics/features/af_xdp.ini
@@ -641,24 +733,28 @@ M: Shai Brandes <shaibran@amazon.com>
M: Evgeny Schemeilin <evgenys@amazon.com>
M: Igor Chauskin <igorch@amazon.com>
M: Ron Beider <rbeider@amazon.com>
+S: Supported
F: drivers/net/ena/
F: doc/guides/nics/ena.rst
F: doc/guides/nics/features/ena.ini
AMD axgbe
M: Chandubabu Namburu <chandu@amd.com>
+S: Supported
F: drivers/net/axgbe/
F: doc/guides/nics/axgbe.rst
F: doc/guides/nics/features/axgbe.ini
AMD Pensando ionic
M: Andrew Boyer <andrew.boyer@amd.com>
+S: Supported
F: drivers/net/ionic/
F: doc/guides/nics/ionic.rst
F: doc/guides/nics/features/ionic.ini
Marvell/Aquantia atlantic
M: Igor Russkikh <irusskikh@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/atlantic/
F: doc/guides/nics/atlantic.rst
@@ -668,6 +764,7 @@ Atomic Rules ARK
M: Shepard Siegel <shepard.siegel@atomicrules.com>
M: Ed Czeck <ed.czeck@atomicrules.com>
M: John Miller <john.miller@atomicrules.com>
+S: Supported
F: drivers/net/ark/
F: doc/guides/nics/ark.rst
F: doc/guides/nics/features/ark.ini
@@ -675,6 +772,7 @@ F: doc/guides/nics/features/ark.ini
Broadcom bnxt
M: Ajit Khaparde <ajit.khaparde@broadcom.com>
M: Somnath Kotur <somnath.kotur@broadcom.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-brcm
F: drivers/net/bnxt/
F: doc/guides/nics/bnxt.rst
@@ -683,6 +781,7 @@ F: doc/guides/nics/features/bnxt.ini
Cavium ThunderX nicvf
M: Jerin Jacob <jerinj@marvell.com>
M: Maciej Czekaj <mczekaj@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/thunderx/
F: doc/guides/nics/thunderx.rst
@@ -690,6 +789,7 @@ F: doc/guides/nics/features/thunderx.ini
Cavium OCTEON TX
M: Harman Kalra <hkalra@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/common/octeontx/
F: drivers/mempool/octeontx/
@@ -699,6 +799,7 @@ F: doc/guides/nics/features/octeontx.ini
Chelsio cxgbe
M: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
+S: Supported
F: drivers/net/cxgbe/
F: doc/guides/nics/cxgbe.rst
F: doc/guides/nics/features/cxgbe.ini
@@ -706,6 +807,7 @@ F: doc/guides/nics/features/cxgbe.ini
Cisco enic
M: John Daley <johndale@cisco.com>
M: Hyong Youb Kim <hyonkim@cisco.com>
+S: Supported
F: drivers/net/enic/
F: doc/guides/nics/enic.rst
F: doc/guides/nics/features/enic.ini
@@ -715,6 +817,7 @@ M: Junfeng Guo <junfeng.guo@intel.com>
M: Jeroen de Borst <jeroendb@google.com>
M: Rushil Gupta <rushilg@google.com>
M: Joshua Washington <joshwash@google.com>
+S: Supported
F: drivers/net/gve/
F: doc/guides/nics/gve.rst
F: doc/guides/nics/features/gve.ini
@@ -722,6 +825,7 @@ F: doc/guides/nics/features/gve.ini
Hisilicon hns3
M: Dongdong Liu <liudongdong3@huawei.com>
M: Yisen Zhuang <yisen.zhuang@huawei.com>
+S: Supported
F: drivers/net/hns3/
F: doc/guides/nics/hns3.rst
F: doc/guides/nics/features/hns3.ini
@@ -730,6 +834,7 @@ Huawei hinic
M: Ziyang Xuan <xuanziyang2@huawei.com>
M: Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>
M: Guoyang Zhou <zhouguoyang@huawei.com>
+S: Supported
F: drivers/net/hinic/
F: doc/guides/nics/hinic.rst
F: doc/guides/nics/features/hinic.ini
@@ -737,6 +842,7 @@ F: doc/guides/nics/features/hinic.ini
Intel e1000
M: Simei Su <simei.su@intel.com>
M: Wenjun Wu <wenjun1.wu@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/e1000/
F: doc/guides/nics/e1000em.rst
@@ -747,6 +853,7 @@ F: doc/guides/nics/features/igb*.ini
Intel ixgbe
M: Qiming Yang <qiming.yang@intel.com>
M: Wenjun Wu <wenjun1.wu@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/ixgbe/
F: doc/guides/nics/ixgbe.rst
@@ -756,6 +863,7 @@ F: doc/guides/nics/features/ixgbe*.ini
Intel i40e
M: Yuying Zhang <Yuying.Zhang@intel.com>
M: Beilei Xing <beilei.xing@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/i40e/
F: doc/guides/nics/i40e.rst
@@ -765,6 +873,7 @@ F: doc/guides/nics/features/i40e*.ini
Intel fm10k
M: Qi Zhang <qi.z.zhang@intel.com>
M: Xiao Wang <xiao.w.wang@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/fm10k/
F: doc/guides/nics/fm10k.rst
@@ -773,6 +882,7 @@ F: doc/guides/nics/features/fm10k*.ini
Intel iavf
M: Jingjing Wu <jingjing.wu@intel.com>
M: Beilei Xing <beilei.xing@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/iavf/
F: drivers/common/iavf/
@@ -781,6 +891,7 @@ F: doc/guides/nics/features/iavf*.ini
Intel ice
M: Qiming Yang <qiming.yang@intel.com>
M: Qi Zhang <qi.z.zhang@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/ice/
F: doc/guides/nics/ice.rst
@@ -789,6 +900,7 @@ F: doc/guides/nics/features/ice.ini
Intel idpf
M: Jingjing Wu <jingjing.wu@intel.com>
M: Beilei Xing <beilei.xing@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/idpf/
F: drivers/common/idpf/
@@ -798,6 +910,7 @@ F: doc/guides/nics/features/idpf.ini
Intel cpfl - EXPERIMENTAL
M: Yuying Zhang <yuying.zhang@intel.com>
M: Beilei Xing <beilei.xing@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/cpfl/
F: doc/guides/nics/cpfl.rst
@@ -806,6 +919,7 @@ F: doc/guides/nics/features/cpfl.ini
Intel igc
M: Junfeng Guo <junfeng.guo@intel.com>
M: Simei Su <simei.su@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/igc/
F: doc/guides/nics/igc.rst
@@ -814,6 +928,7 @@ F: doc/guides/nics/features/igc.ini
Intel ipn3ke
M: Rosen Xu <rosen.xu@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/ipn3ke/
F: doc/guides/nics/ipn3ke.rst
F: doc/guides/nics/features/ipn3ke.ini
@@ -823,6 +938,7 @@ M: Nithin Dabilpuram <ndabilpuram@marvell.com>
M: Kiran Kumar K <kirankumark@marvell.com>
M: Sunil Kumar Kori <skori@marvell.com>
M: Satha Rao <skoteshwar@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/common/cnxk/
F: drivers/net/cnxk/
@@ -832,6 +948,7 @@ F: doc/guides/platform/cnxk.rst
Marvell mvpp2
M: Liron Himi <lironh@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/common/mvep/
F: drivers/net/mvpp2/
@@ -841,6 +958,7 @@ F: doc/guides/nics/features/mvpp2.ini
Marvell mvneta
M: Zyta Szpak <zr@semihalf.com>
M: Liron Himi <lironh@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/mvneta/
F: doc/guides/nics/mvneta.rst
@@ -848,6 +966,7 @@ F: doc/guides/nics/features/mvneta.ini
Marvell OCTEON TX EP - endpoint
M: Vamsi Attunuru <vattunuru@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/octeon_ep/
F: doc/guides/nics/features/octeon_ep.ini
@@ -856,6 +975,7 @@ F: doc/guides/nics/octeon_ep.rst
NVIDIA mlx4
M: Matan Azrad <matan@nvidia.com>
M: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mlx
F: drivers/net/mlx4/
F: doc/guides/nics/mlx4.rst
@@ -866,6 +986,7 @@ M: Matan Azrad <matan@nvidia.com>
M: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
M: Ori Kam <orika@nvidia.com>
M: Suanming Mou <suanmingm@nvidia.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mlx
F: drivers/common/mlx5/
F: drivers/net/mlx5/
@@ -875,23 +996,27 @@ F: doc/guides/nics/features/mlx5.ini
Microsoft mana
M: Long Li <longli@microsoft.com>
+S: Supported
F: drivers/net/mana/
F: doc/guides/nics/mana.rst
F: doc/guides/nics/features/mana.ini
Microsoft vdev_netvsc - EXPERIMENTAL
M: Matan Azrad <matan@nvidia.com>
+S: Odd Fixes
F: drivers/net/vdev_netvsc/
F: doc/guides/nics/vdev_netvsc.rst
Microsoft Hyper-V netvsc
M: Long Li <longli@microsoft.com>
+S: Supported
F: drivers/net/netvsc/
F: doc/guides/nics/netvsc.rst
F: doc/guides/nics/features/netvsc.ini
Netcope nfb
M: Martin Spinler <spinler@cesnet.cz>
+S: Supported
F: drivers/net/nfb/
F: doc/guides/nics/nfb.rst
F: doc/guides/nics/features/nfb.ini
@@ -899,6 +1024,7 @@ F: doc/guides/nics/features/nfb.ini
Netronome nfp
M: Chaoyong He <chaoyong.he@corigine.com>
M: Niklas Soderlund <niklas.soderlund@corigine.com>
+S: Supported
F: drivers/net/nfp/
F: doc/guides/nics/nfp.rst
F: doc/guides/nics/features/nfp*.ini
@@ -906,6 +1032,7 @@ F: doc/guides/nics/features/nfp*.ini
NXP dpaa
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
+S: Supported
F: drivers/mempool/dpaa/
F: drivers/net/dpaa/
F: doc/guides/nics/dpaa.rst
@@ -914,6 +1041,7 @@ F: doc/guides/nics/features/dpaa.ini
NXP dpaa2
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
+S: Supported
F: drivers/mempool/dpaa2/
F: drivers/net/dpaa2/
F: doc/guides/nics/dpaa2.rst
@@ -922,6 +1050,7 @@ F: doc/guides/nics/features/dpaa2.ini
NXP enetc
M: Gagandeep Singh <g.singh@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
+S: Supported
F: drivers/net/enetc/
F: doc/guides/nics/enetc.rst
F: doc/guides/nics/features/enetc.ini
@@ -929,18 +1058,21 @@ F: doc/guides/nics/features/enetc.ini
NXP enetfec - EXPERIMENTAL
M: Apeksha Gupta <apeksha.gupta@nxp.com>
M: Sachin Saxena <sachin.saxena@nxp.com>
+S: Supported
F: drivers/net/enetfec/
F: doc/guides/nics/enetfec.rst
F: doc/guides/nics/features/enetfec.ini
NXP pfe
M: Gagandeep Singh <g.singh@nxp.com>
+S: Supported
F: doc/guides/nics/pfe.rst
F: drivers/net/pfe/
F: doc/guides/nics/features/pfe.ini
Marvell QLogic bnx2x
M: Julien Aube <julien_dpdk@jaube.fr>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/bnx2x/
F: doc/guides/nics/bnx2x.rst
@@ -949,6 +1081,7 @@ F: doc/guides/nics/features/bnx2x*.ini
Marvell QLogic qede PMD
M: Devendra Singh Rawat <dsinghrawat@marvell.com>
M: Alok Prasad <palok@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/qede/
F: doc/guides/nics/qede.rst
@@ -956,6 +1089,7 @@ F: doc/guides/nics/features/qede*.ini
Solarflare sfc_efx
M: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
+S: Supported
F: drivers/common/sfc_efx/
F: drivers/net/sfc/
F: doc/guides/nics/sfc_efx.rst
@@ -963,6 +1097,7 @@ F: doc/guides/nics/features/sfc.ini
Wangxun ngbe
M: Jiawen Wu <jiawenwu@trustnetic.com>
+S: Supported
F: drivers/net/ngbe/
F: doc/guides/nics/ngbe.rst
F: doc/guides/nics/features/ngbe.ini
@@ -970,12 +1105,14 @@ F: doc/guides/nics/features/ngbe.ini
Wangxun txgbe
M: Jiawen Wu <jiawenwu@trustnetic.com>
M: Jian Wang <jianwang@trustnetic.com>
+S: Supported
F: drivers/net/txgbe/
F: doc/guides/nics/txgbe.rst
F: doc/guides/nics/features/txgbe.ini
VMware vmxnet3
M: Jochen Behrens <jbehrens@vmware.com>
+S: Supported
F: drivers/net/vmxnet3/
F: doc/guides/nics/vmxnet3.rst
F: doc/guides/nics/features/vmxnet3.ini
@@ -983,6 +1120,7 @@ F: doc/guides/nics/features/vmxnet3.ini
Vhost-user
M: Maxime Coquelin <maxime.coquelin@redhat.com>
M: Chenbo Xia <chenbo.xia@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-virtio
F: lib/vhost/
F: doc/guides/prog_guide/vhost_lib.rst
@@ -997,6 +1135,7 @@ F: doc/guides/sample_app_ug/vdpa.rst
Vhost PMD
M: Maxime Coquelin <maxime.coquelin@redhat.com>
M: Chenbo Xia <chenbo.xia@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-virtio
F: drivers/net/vhost/
F: doc/guides/nics/vhost.rst
@@ -1005,6 +1144,7 @@ F: doc/guides/nics/features/vhost.ini
Virtio PMD
M: Maxime Coquelin <maxime.coquelin@redhat.com>
M: Chenbo Xia <chenbo.xia@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-virtio
F: drivers/net/virtio/
F: doc/guides/nics/virtio.rst
@@ -1013,26 +1153,31 @@ F: doc/guides/nics/features/virtio*.ini
Wind River AVP
M: Steven Webster <steven.webster@windriver.com>
M: Matt Peters <matt.peters@windriver.com>
+S: Supported
F: drivers/net/avp/
F: doc/guides/nics/avp.rst
F: doc/guides/nics/features/avp.ini
PCAP PMD
+S: Orphan
F: drivers/net/pcap/
F: doc/guides/nics/pcap_ring.rst
F: doc/guides/nics/features/pcap.ini
Tap PMD
+S: Orphan
F: drivers/net/tap/
F: doc/guides/nics/tap.rst
F: doc/guides/nics/features/tap.ini
KNI PMD
+S: Obsolete
F: drivers/net/kni/
F: doc/guides/nics/kni.rst
Ring PMD
M: Bruce Richardson <bruce.richardson@intel.com>
+S: Supported
F: drivers/net/ring/
F: doc/guides/nics/pcap_ring.rst
F: app/test/test_pmd_ring.c
@@ -1040,21 +1185,25 @@ F: app/test/test_pmd_ring_perf.c
Null Networking PMD
M: Tetsuya Mukawa <mtetsuyah@gmail.com>
+S: Supported
F: drivers/net/null/
Fail-safe PMD
M: Gaetan Rivet <grive@u256.net>
+S: Odd Fixes
F: drivers/net/failsafe/
F: doc/guides/nics/fail_safe.rst
F: doc/guides/nics/features/failsafe.ini
Softnic PMD
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+S: Supported
F: drivers/net/softnic/
F: doc/guides/nics/softnic.rst
Memif PMD
M: Jakub Grajciar <jgrajcia@cisco.com>
+S: Supported
F: drivers/net/memif/
F: doc/guides/nics/memif.rst
F: doc/guides/nics/features/memif.ini
@@ -1062,17 +1211,20 @@ F: doc/guides/nics/features/memif.ini
Crypto Drivers
--------------
+S: Maintained
T: git://dpdk.org/next/dpdk-next-crypto
F: doc/guides/cryptodevs/features/default.ini
AMD CCP Crypto
M: Sunil Uttarwar <sunilprakashrao.uttarwar@amd.com>
+S: Supported
F: drivers/crypto/ccp/
F: doc/guides/cryptodevs/ccp.rst
F: doc/guides/cryptodevs/features/ccp.ini
ARMv8 Crypto
M: Ruifeng Wang <ruifeng.wang@arm.com>
+S: Supported
F: drivers/crypto/armv8/
F: doc/guides/cryptodevs/armv8.rst
F: doc/guides/cryptodevs/features/armv8.ini
@@ -1081,12 +1233,14 @@ Broadcom FlexSparc
M: Ajit Khaparde <ajit.khaparde@broadcom.com>
M: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
M: Vikas Gupta <vikas.gupta@broadcom.com>
+S: Supported
F: drivers/crypto/bcmfs/
F: doc/guides/cryptodevs/bcmfs.rst
F: doc/guides/cryptodevs/features/bcmfs.ini
Cavium OCTEON TX crypto
M: Anoob Joseph <anoobj@marvell.com>
+S: Supported
F: drivers/common/cpt/
F: drivers/crypto/octeontx/
F: doc/guides/cryptodevs/octeontx.rst
@@ -1094,17 +1248,20 @@ F: doc/guides/cryptodevs/features/octeontx.ini
Crypto Scheduler
M: Kai Ji <kai.ji@intel.com>
+S: Supported
F: drivers/crypto/scheduler/
F: doc/guides/cryptodevs/scheduler.rst
HiSilicon UADK crypto
M: Zhangfei Gao <zhangfei.gao@linaro.org>
+S: Supported
F: drivers/crypto/uadk/
F: doc/guides/cryptodevs/uadk.rst
F: doc/guides/cryptodevs/features/uadk.ini
Intel QuickAssist
M: Kai Ji <kai.ji@intel.com>
+S: Supported
F: drivers/crypto/qat/
F: drivers/common/qat/
F: doc/guides/cryptodevs/qat.rst
@@ -1113,6 +1270,7 @@ F: doc/guides/cryptodevs/features/qat.ini
IPsec MB
M: Kai Ji <kai.ji@intel.com>
M: Pablo de Lara <pablo.de.lara.guarch@intel.com>
+S: Supported
F: drivers/crypto/ipsec_mb/
F: doc/guides/cryptodevs/aesni_gcm.rst
F: doc/guides/cryptodevs/aesni_mb.rst
@@ -1131,6 +1289,7 @@ Marvell cnxk crypto
M: Ankur Dwivedi <adwivedi@marvell.com>
M: Anoob Joseph <anoobj@marvell.com>
M: Tejasree Kondoj <ktejasree@marvell.com>
+S: Supported
F: drivers/crypto/cnxk/
F: doc/guides/cryptodevs/cnxk.rst
F: doc/guides/cryptodevs/features/cn9k.ini
@@ -1139,6 +1298,7 @@ F: doc/guides/cryptodevs/features/cn10k.ini
Marvell mvsam
M: Michael Shamis <michaelsh@marvell.com>
M: Liron Himi <lironh@marvell.com>
+S: Supported
F: drivers/crypto/mvsam/
F: doc/guides/cryptodevs/mvsam.rst
F: doc/guides/cryptodevs/features/mvsam.ini
@@ -1146,18 +1306,21 @@ F: doc/guides/cryptodevs/features/mvsam.ini
Marvell Nitrox
M: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
M: Srikanth Jampala <jsrikanth@marvell.com>
+S: Supported
F: drivers/crypto/nitrox/
F: doc/guides/cryptodevs/nitrox.rst
F: doc/guides/cryptodevs/features/nitrox.ini
NVIDIA mlx5
M: Matan Azrad <matan@nvidia.com>
+S: Supported
F: drivers/crypto/mlx5/
F: doc/guides/cryptodevs/mlx5.rst
F: doc/guides/cryptodevs/features/mlx5.ini
Null Crypto
M: Kai Ji <kai.ji@intel.com>
+S: Supported
F: drivers/crypto/null/
F: doc/guides/cryptodevs/null.rst
F: doc/guides/cryptodevs/features/null.ini
@@ -1165,6 +1328,7 @@ F: doc/guides/cryptodevs/features/null.ini
NXP CAAM JR
M: Gagandeep Singh <g.singh@nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
+S: Supported
F: drivers/crypto/caam_jr/
F: doc/guides/cryptodevs/caam_jr.rst
F: doc/guides/cryptodevs/features/caam_jr.ini
@@ -1172,6 +1336,7 @@ F: doc/guides/cryptodevs/features/caam_jr.ini
NXP DPAA_SEC
M: Gagandeep Singh <g.singh@nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
+S: Supported
F: drivers/crypto/dpaa_sec/
F: doc/guides/cryptodevs/dpaa_sec.rst
F: doc/guides/cryptodevs/features/dpaa_sec.ini
@@ -1179,18 +1344,21 @@ F: doc/guides/cryptodevs/features/dpaa_sec.ini
NXP DPAA2_SEC
M: Gagandeep Singh <g.singh@nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
+S: Supported
F: drivers/crypto/dpaa2_sec/
F: doc/guides/cryptodevs/dpaa2_sec.rst
F: doc/guides/cryptodevs/features/dpaa2_sec.ini
OpenSSL
M: Kai Ji <kai.ji@intel.com>
+S: Supported
F: drivers/crypto/openssl/
F: doc/guides/cryptodevs/openssl.rst
F: doc/guides/cryptodevs/features/openssl.ini
Virtio
M: Jay Zhou <jianjay.zhou@huawei.com>
+S: Supported
F: drivers/crypto/virtio/
F: doc/guides/cryptodevs/virtio.rst
F: doc/guides/cryptodevs/features/virtio.ini
@@ -1198,31 +1366,37 @@ F: doc/guides/cryptodevs/features/virtio.ini
Compression Drivers
-------------------
+S: Maintained
T: git://dpdk.org/next/dpdk-next-crypto
Cavium OCTEON TX zipvf
M: Ashish Gupta <ashish.gupta@marvell.com>
+S: Supported
F: drivers/compress/octeontx/
F: doc/guides/compressdevs/octeontx.rst
F: doc/guides/compressdevs/features/octeontx.ini
Intel QuickAssist
M: Kai Ji <kai.ji@intel.com>
+S: Supported
F: drivers/compress/qat/
F: drivers/common/qat/
ISA-L
M: Lee Daly <lee.daly@intel.com>
+S: Supported
F: drivers/compress/isal/
F: doc/guides/compressdevs/isal.rst
F: doc/guides/compressdevs/features/isal.ini
NVIDIA mlx5
M: Matan Azrad <matan@nvidia.com>
+S: Supported
F: drivers/compress/mlx5/
ZLIB
M: Sunila Sahu <ssahu@marvell.com>
+S: Supported
F: drivers/compress/zlib/
F: doc/guides/compressdevs/zlib.rst
F: doc/guides/compressdevs/features/zlib.ini
@@ -1234,34 +1408,40 @@ DMAdev Drivers
Intel IDXD - EXPERIMENTAL
M: Bruce Richardson <bruce.richardson@intel.com>
M: Kevin Laatz <kevin.laatz@intel.com>
+S: Supported
F: drivers/dma/idxd/
F: doc/guides/dmadevs/idxd.rst
Intel IOAT
M: Bruce Richardson <bruce.richardson@intel.com>
M: Conor Walsh <conor.walsh@intel.com>
+S: Supported
F: drivers/dma/ioat/
F: doc/guides/dmadevs/ioat.rst
HiSilicon DMA
M: Chengwen Feng <fengchengwen@huawei.com>
+S: Supported
F: drivers/dma/hisilicon/
F: doc/guides/dmadevs/hisilicon.rst
Marvell CNXK DPI DMA
M: Vamsi Attunuru <vattunuru@marvell.com>
+S: Supported
F: drivers/dma/cnxk/
F: doc/guides/dmadevs/cnxk.rst
NXP DPAA DMA
M: Gagandeep Singh <g.singh@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
+S: Supported
F: drivers/dma/dpaa/
F: doc/guides/dmadevs/dpaa.rst
NXP DPAA2 QDMA
M: Gagandeep Singh <g.singh@nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
+S: Supported
F: drivers/dma/dpaa2/
F: doc/guides/dmadevs/dpaa2.rst
@@ -1271,12 +1451,14 @@ RegEx Drivers
Marvell OCTEON CN9K regex
M: Liron Himi <lironh@marvell.com>
+S: Supported
F: drivers/regex/cn9k/
F: doc/guides/regexdevs/cn9k.rst
F: doc/guides/regexdevs/features/cn9k.ini
NVIDIA mlx5
M: Ori Kam <orika@nvidia.com>
+S: Supported
F: drivers/regex/mlx5/
F: doc/guides/regexdevs/mlx5.rst
F: doc/guides/regexdevs/features/mlx5.ini
@@ -1287,6 +1469,7 @@ MLdev Drivers
Marvell ML CNXK
M: Srikanth Yalavarthi <syalavarthi@marvell.com>
+S: Supported
F: drivers/common/cnxk/hw/ml.h
F: drivers/common/cnxk/roc_ml*
F: drivers/ml/cnxk/
@@ -1299,6 +1482,7 @@ T: git://dpdk.org/next/dpdk-next-virtio
Intel ifc
M: Xiao Wang <xiao.w.wang@intel.com>
+S: Supported
F: drivers/vdpa/ifc/
F: doc/guides/vdpadevs/ifc.rst
F: doc/guides/vdpadevs/features/ifcvf.ini
@@ -1306,12 +1490,14 @@ F: doc/guides/vdpadevs/features/ifcvf.ini
NVIDIA mlx5 vDPA
M: Matan Azrad <matan@nvidia.com>
M: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
+S: Supported
F: drivers/vdpa/mlx5/
F: doc/guides/vdpadevs/mlx5.rst
F: doc/guides/vdpadevs/features/mlx5.ini
Xilinx sfc vDPA
M: Vijay Kumar Srivastava <vsrivast@xilinx.com>
+S: Supported
F: drivers/vdpa/sfc/
F: doc/guides/vdpadevs/sfc.rst
F: doc/guides/vdpadevs/features/sfc.ini
@@ -1320,42 +1506,50 @@ F: doc/guides/vdpadevs/features/sfc.ini
Eventdev Drivers
----------------
M: Jerin Jacob <jerinj@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
Cavium OCTEON TX ssovf
M: Jerin Jacob <jerinj@marvell.com>
+S: Supported
F: drivers/event/octeontx/
F: doc/guides/eventdevs/octeontx.rst
Cavium OCTEON TX timvf
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
+S: Supported
F: drivers/event/octeontx/timvf_*
Intel DLB2
M: Timothy McDaniel <timothy.mcdaniel@intel.com>
+S: Supported
F: drivers/event/dlb2/
F: doc/guides/eventdevs/dlb2.rst
Marvell cnxk
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
M: Shijith Thotton <sthotton@marvell.com>
+S: Supported
F: drivers/event/cnxk/
F: doc/guides/eventdevs/cnxk.rst
NXP DPAA eventdev
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
+S: Supported
F: drivers/event/dpaa/
F: doc/guides/eventdevs/dpaa.rst
NXP DPAA2 eventdev
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
+S: Supported
F: drivers/event/dpaa2/
F: doc/guides/eventdevs/dpaa2.rst
Software Eventdev PMD
M: Harry van Haaren <harry.van.haaren@intel.com>
+S: Supported
F: drivers/event/sw/
F: doc/guides/eventdevs/sw.rst
F: examples/eventdev_pipeline/
@@ -1363,11 +1557,13 @@ F: doc/guides/sample_app_ug/eventdev_pipeline.rst
Distributed Software Eventdev PMD
M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
+S: Supported
F: drivers/event/dsw/
F: doc/guides/eventdevs/dsw.rst
Software OPDL Eventdev PMD
M: Liang Ma <liangma@liangbit.com>
M: Peter Mccarthy <peter.mccarthy@intel.com>
+S: Supported
F: drivers/event/opdl/
F: doc/guides/eventdevs/opdl.rst
@@ -1378,6 +1574,7 @@ Baseband Drivers
Intel baseband
M: Nicolas Chautru <nicolas.chautru@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-baseband
F: drivers/baseband/turbo_sw/
F: doc/guides/bbdevs/turbo_sw.rst
@@ -1397,6 +1594,7 @@ F: doc/guides/bbdevs/features/vrb1.ini
Null baseband
M: Nicolas Chautru <nicolas.chautru@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-baseband
F: drivers/baseband/null/
F: doc/guides/bbdevs/null.rst
@@ -1405,6 +1603,7 @@ F: doc/guides/bbdevs/features/null.ini
NXP LA12xx
M: Gagandeep Singh <g.singh@nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-baseband
F: drivers/baseband/la12xx/
F: doc/guides/bbdevs/la12xx.rst
@@ -1416,6 +1615,7 @@ GPU Drivers
NVIDIA CUDA
M: Elena Agostini <eagostini@nvidia.com>
+S: Supported
F: drivers/gpu/cuda/
F: doc/guides/gpus/cuda.rst
@@ -1426,6 +1626,7 @@ Rawdev Drivers
Intel FPGA
M: Rosen Xu <rosen.xu@intel.com>
M: Tianfei zhang <tianfei.zhang@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/raw/ifpga/
F: doc/guides/rawdevs/ifpga.rst
@@ -1433,18 +1634,21 @@ F: doc/guides/rawdevs/ifpga.rst
Marvell CNXK BPHY
M: Jakub Palider <jpalider@marvell.com>
M: Tomasz Duszynski <tduszynski@marvell.com>
+S: Supported
F: doc/guides/rawdevs/cnxk_bphy.rst
F: drivers/raw/cnxk_bphy/
Marvell CNXK GPIO
M: Jakub Palider <jpalider@marvell.com>
M: Tomasz Duszynski <tduszynski@marvell.com>
+S: Supported
F: doc/guides/rawdevs/cnxk_gpio.rst
F: drivers/raw/cnxk_gpio/
NTB
M: Jingjing Wu <jingjing.wu@intel.com>
M: Junfeng Guo <junfeng.guo@intel.com>
+S: Supported
F: drivers/raw/ntb/
F: doc/guides/rawdevs/ntb.rst
F: examples/ntb/
@@ -1452,6 +1656,7 @@ F: doc/guides/sample_app_ug/ntb.rst
NXP DPAA2 CMDIF
M: Gagandeep Singh <g.singh@nxp.com>
+S: Supported
F: drivers/raw/dpaa2_cmdif/
F: doc/guides/rawdevs/dpaa2_cmdif.rst
@@ -1461,12 +1666,14 @@ Packet processing
Network headers
M: Olivier Matz <olivier.matz@6wind.com>
+S: Supported
F: lib/net/
F: app/test/test_cksum.c
F: app/test/test_cksum_perf.c
Packet CRC
M: Jasvinder Singh <jasvinder.singh@intel.com>
+S: Supported
F: lib/net/net_crc.h
F: lib/net/rte_net_crc*
F: lib/net/net_crc_avx512.c
@@ -1475,6 +1682,7 @@ F: app/test/test_crc.c
IP fragmentation & reassembly
M: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
+S: Supported
F: lib/ip_frag/
F: doc/guides/prog_guide/ip_fragment_reassembly_lib.rst
F: app/test/test_ipfrag.c
@@ -1486,16 +1694,19 @@ F: doc/guides/sample_app_ug/ip_reassembly.rst
Generic Receive Offload - EXPERIMENTAL
M: Jiayu Hu <jiayu.hu@intel.com>
+S: Supported
F: lib/gro/
F: doc/guides/prog_guide/generic_receive_offload_lib.rst
Generic Segmentation Offload
M: Jiayu Hu <jiayu.hu@intel.com>
+S: Supported
F: lib/gso/
F: doc/guides/prog_guide/generic_segmentation_offload_lib.rst
IPsec
M: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
+S: Supported
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/ipsec/
F: app/test/test_ipsec*
@@ -1506,12 +1717,14 @@ F: app/test-sad/
PDCP - EXPERIMENTAL
M: Anoob Joseph <anoobj@marvell.com>
M: Volodymyr Fialko <vfialko@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/pdcp/
F: doc/guides/prog_guide/pdcp_lib.rst
F: app/test/test_pdcp*
Flow Classify - EXPERIMENTAL - UNMAINTAINED
+S: Orphan
F: lib/flow_classify/
F: app/test/test_flow_classify*
F: doc/guides/prog_guide/flow_classify_lib.rst
@@ -1520,6 +1733,7 @@ F: doc/guides/sample_app_ug/flow_classify.rst
Distributor
M: David Hunt <david.hunt@intel.com>
+S: Supported
F: lib/distributor/
F: doc/guides/prog_guide/packet_distrib_lib.rst
F: app/test/test_distributor*
@@ -1528,6 +1742,7 @@ F: doc/guides/sample_app_ug/dist_app.rst
Reorder
M: Volodymyr Fialko <vfialko@marvell.com>
+S: Supported
F: lib/reorder/
F: doc/guides/prog_guide/reorder_lib.rst
F: app/test/test_reorder*
@@ -1536,6 +1751,7 @@ F: doc/guides/sample_app_ug/packet_ordering.rst
Hierarchical scheduler
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+S: Supported
F: lib/sched/
F: doc/guides/prog_guide/qos_framework.rst
F: app/test/test_pie.c
@@ -1547,6 +1763,7 @@ F: doc/guides/sample_app_ug/qos_scheduler.rst
Packet capture
M: Reshma Pattan <reshma.pattan@intel.com>
M: Stephen Hemminger <stephen@networkplumber.org>
+S: Maintained
F: lib/pdump/
F: doc/guides/prog_guide/pdump_lib.rst
F: app/test/test_pdump.*
@@ -1562,6 +1779,7 @@ F: doc/guides/tools/dumpcap.rst
Packet Framework
----------------
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+S: Obsolete
F: lib/pipeline/
F: lib/port/
F: lib/table/
@@ -1579,6 +1797,7 @@ Algorithms
ACL
M: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
+S: Supported
F: lib/acl/
F: doc/guides/prog_guide/packet_classif_access_ctrl.rst
F: app/test-acl/
@@ -1587,6 +1806,7 @@ F: app/test/test_acl.*
EFD
M: Byron Marohn <byron.marohn@intel.com>
M: Yipeng Wang <yipeng1.wang@intel.com>
+S: Supported
F: lib/efd/
F: doc/guides/prog_guide/efd_lib.rst
F: app/test/test_efd*
@@ -1598,6 +1818,7 @@ M: Yipeng Wang <yipeng1.wang@intel.com>
M: Sameh Gobriel <sameh.gobriel@intel.com>
M: Bruce Richardson <bruce.richardson@intel.com>
M: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
+S: Supported
F: lib/hash/
F: doc/guides/prog_guide/hash_lib.rst
F: doc/guides/prog_guide/toeplitz_hash_lib.rst
@@ -1607,6 +1828,7 @@ F: app/test/test_func_reentrancy.c
LPM
M: Bruce Richardson <bruce.richardson@intel.com>
M: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
+S: Supported
F: lib/lpm/
F: doc/guides/prog_guide/lpm*
F: app/test/test_lpm*
@@ -1616,12 +1838,14 @@ F: app/test/test_xmmt_ops.h
Membership - EXPERIMENTAL
M: Yipeng Wang <yipeng1.wang@intel.com>
M: Sameh Gobriel <sameh.gobriel@intel.com>
+S: Supported
F: lib/member/
F: doc/guides/prog_guide/member_lib.rst
F: app/test/test_member*
RIB/FIB
M: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
+S: Supported
F: lib/rib/
F: app/test/test_rib*
F: lib/fib/
@@ -1630,6 +1854,7 @@ F: app/test-fib/
Traffic metering
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+S: Supported
F: lib/meter/
F: doc/guides/sample_app_ug/qos_scheduler.rst
F: app/test/test_meter.c
@@ -1642,12 +1867,14 @@ Other libraries
Configuration file
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+S: Supported
F: lib/cfgfile/
F: app/test/test_cfgfile.c
F: app/test/test_cfgfiles/
Interactive command line
M: Olivier Matz <olivier.matz@6wind.com>
+S: Supported
F: lib/cmdline/
F: app/test-cmdline/
F: app/test/test_cmdline*
@@ -1656,11 +1883,13 @@ F: doc/guides/sample_app_ug/cmd_line.rst
Key/Value parsing
M: Olivier Matz <olivier.matz@6wind.com>
+S: Supported
F: lib/kvargs/
F: app/test/test_kvargs.c
RCU
M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
+S: Supported
F: lib/rcu/
F: app/test/test_rcu*
F: doc/guides/prog_guide/rcu_lib.rst
@@ -1668,11 +1897,13 @@ F: doc/guides/prog_guide/rcu_lib.rst
PCI
M: Chenbo Xia <chenbo.xia@intel.com>
M: Gaetan Rivet <grive@u256.net>
+S: Supported
F: lib/pci/
Power management
M: Anatoly Burakov <anatoly.burakov@intel.com>
M: David Hunt <david.hunt@intel.com>
+S: Supported
F: lib/power/
F: doc/guides/prog_guide/power_man.rst
F: app/test/test_power*
@@ -1683,6 +1914,7 @@ F: doc/guides/sample_app_ug/vm_power_management.rst
Timers
M: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
+S: Supported
F: lib/timer/
F: doc/guides/prog_guide/timer_lib.rst
F: app/test/test_timer*
@@ -1690,25 +1922,30 @@ F: examples/timer/
F: doc/guides/sample_app_ug/timer.rst
Job statistics
+S: Orphan
F: lib/jobstats/
F: examples/l2fwd-jobstats/
F: doc/guides/sample_app_ug/l2_forward_job_stats.rst
Metrics
+S: Orphan
F: lib/metrics/
F: app/test/test_metrics.c
Bit-rate statistics
+S: Orphan
F: lib/bitratestats/
F: app/test/test_bitratestats.c
Latency statistics
M: Reshma Pattan <reshma.pattan@intel.com>
+S: Supported
F: lib/latencystats/
F: app/test/test_latencystats.c
Telemetry
M: Ciara Power <ciara.power@intel.com>
+S: Supported
F: lib/telemetry/
F: app/test/test_telemetry*
F: usertools/dpdk-telemetry*
@@ -1716,6 +1953,7 @@ F: doc/guides/howto/telemetry.rst
BPF
M: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
+S: Supported
F: lib/bpf/
F: examples/bpf/
F: app/test/test_bpf.c
@@ -1727,6 +1965,7 @@ M: Jerin Jacob <jerinj@marvell.com>
M: Kiran Kumar K <kirankumark@marvell.com>
M: Nithin Dabilpuram <ndabilpuram@marvell.com>
M: Zhirun Yan <zhirun.yan@intel.com>
+S: Supported
F: lib/graph/
F: doc/guides/prog_guide/graph_lib.rst
F: app/test/test_graph*
@@ -1736,6 +1975,7 @@ F: doc/guides/sample_app_ug/l3_forward_graph.rst
Nodes - EXPERIMENTAL
M: Nithin Dabilpuram <ndabilpuram@marvell.com>
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
+S: Supported
F: lib/node/
@@ -1743,6 +1983,7 @@ Test Applications
-----------------
Unit tests framework
+S: Maintained
F: app/test/commands.c
F: app/test/has_hugepage.py
F: app/test/packet_burst_generator.c
@@ -1758,45 +1999,53 @@ F: app/test/virtual_pmd.h
Sample packet helper functions for unit test
M: Reshma Pattan <reshma.pattan@intel.com>
+S: Supported
F: app/test/sample_packet_forward.c
F: app/test/sample_packet_forward.h
Networking drivers testing tool
M: Aman Singh <aman.deep.singh@intel.com>
M: Yuying Zhang <yuying.zhang@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net
F: app/test-pmd/
F: doc/guides/testpmd_app_ug/
DMA device performance tool
M: Cheng Jiang <cheng1.jiang@intel.com>
+S: Supported
F: app/test-dma-perf/
F: doc/guides/tools/dmaperf.rst
Flow performance tool
M: Wisam Jaddo <wisamm@nvidia.com>
+S: Supported
F: app/test-flow-perf/
F: doc/guides/tools/flow-perf.rst
Security performance tool
M: Anoob Joseph <anoobj@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-crypto
F: app/test-security-perf/
F: doc/guides/tools/securityperf.rst
Compression performance test application
T: git://dpdk.org/next/dpdk-next-crypto
+S: Orphan
F: app/test-compress-perf/
F: doc/guides/tools/comp_perf.rst
Crypto performance test application
M: Ciara Power <ciara.power@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-crypto
F: app/test-crypto-perf/
F: doc/guides/tools/cryptoperf.rst
Eventdev test application
M: Jerin Jacob <jerinj@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
F: app/test-eventdev/
F: doc/guides/tools/testeventdev.rst
@@ -1805,12 +2054,14 @@ F: app/test/test_event_ring.c
Procinfo tool
M: Reshma Pattan <reshma.pattan@intel.com>
+S: Supported
F: app/proc-info/
F: doc/guides/tools/proc_info.rst
DTS
M: Lijuan Tu <lijuan.tu@intel.com>
M: Juraj Linkeš <juraj.linkes@pantheon.tech>
+S: Supported
F: dts/
F: devtools/dts-check-format.sh
F: doc/guides/tools/dts.rst
@@ -1820,77 +2071,92 @@ Other Example Applications
--------------------------
Ethtool example
+S: Orphan
F: examples/ethtool/
F: doc/guides/sample_app_ug/ethtool.rst
FIPS validation example
M: Brian Dooley <brian.dooley@intel.com>
M: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
+S: Supported
F: examples/fips_validation/
F: doc/guides/sample_app_ug/fips_validation.rst
Flow filtering example
M: Ori Kam <orika@nvidia.com>
+S: Supported
F: examples/flow_filtering/
F: doc/guides/sample_app_ug/flow_filtering.rst
Helloworld example
M: Bruce Richardson <bruce.richardson@intel.com>
+S: Supported
F: examples/helloworld/
F: doc/guides/sample_app_ug/hello_world.rst
IPsec security gateway example
M: Radu Nicolau <radu.nicolau@intel.com>
M: Akhil Goyal <gakhil@marvell.com>
+S: Supported
F: examples/ipsec-secgw/
F: doc/guides/sample_app_ug/ipsec_secgw.rst
IPv4 multicast example
+S: Orphan
F: examples/ipv4_multicast/
F: doc/guides/sample_app_ug/ipv4_multicast.rst
L2 forwarding example
M: Bruce Richardson <bruce.richardson@intel.com>
+S: Supported
F: examples/l2fwd/
F: doc/guides/sample_app_ug/l2_forward_real_virtual.rst
L2 forwarding with cache allocation example
M: Tomasz Kantecki <tomasz.kantecki@intel.com>
+S: Supported
F: doc/guides/sample_app_ug/l2_forward_cat.rst
F: examples/l2fwd-cat/
L2 forwarding with eventdev example
M: Sunil Kumar Kori <skori@marvell.com>
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
F: examples/l2fwd-event/
F: doc/guides/sample_app_ug/l2_forward_event.rst
L3 forwarding example
+S: Maintained
F: examples/l3fwd/
F: doc/guides/sample_app_ug/l3_forward.rst
Link status interrupt example
+S: Maintained
F: examples/link_status_interrupt/
F: doc/guides/sample_app_ug/link_status_intr.rst
PTP client example
M: Kirill Rybalchenko <kirill.rybalchenko@intel.com>
+S: Supported
F: examples/ptpclient/
Rx/Tx callbacks example
M: Bruce Richardson <bruce.richardson@intel.com>
M: John McNamara <john.mcnamara@intel.com>
+S: Supported
F: examples/rxtx_callbacks/
F: doc/guides/sample_app_ug/rxtx_callbacks.rst
Skeleton example
M: Bruce Richardson <bruce.richardson@intel.com>
M: John McNamara <john.mcnamara@intel.com>
+S: Supported
F: examples/skeleton/
F: doc/guides/sample_app_ug/skeleton.rst
VMDq examples
+S: Orphan
F: examples/vmdq/
F: doc/guides/sample_app_ug/vmdq_forwarding.rst
F: examples/vmdq_dcb/
--
2.39.2
^ permalink raw reply [relevance 1%]
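The `S:` status entries added throughout the MAINTAINERS diff above lend themselves to simple tooling. Below is a minimal sketch (a hypothetical helper, not part of DPDK or its devtools) that tallies `S:` values in MAINTAINERS-style text and flags any value outside the set defined in the patch:

```python
import re
from collections import Counter

# Status values as defined by the "S:" field description in the patch.
VALID_STATUSES = {"Supported", "Maintained", "Odd Fixes", "Orphan", "Obsolete"}

def tally_statuses(text):
    """Count S: entries in MAINTAINERS-style text and collect unknown values."""
    counts = Counter()
    unknown = []
    for line in text.splitlines():
        m = re.match(r"S:\s*(.+?)\s*$", line.strip())
        if m:
            status = m.group(1)
            counts[status] += 1
            if status not in VALID_STATUSES:
                unknown.append(status)
    return counts, unknown

sample = """\
Main Branch
M: Thomas Monjalon <thomas@monjalon.net>
S: Supported
T: git://dpdk.org/dpdk

Keep alive
S: Orphan
F: lib/eal/include/rte_keepalive.h
"""

counts, unknown = tally_statuses(sample)
print(counts["Supported"])  # 1
print(counts["Orphan"])     # 1
print(unknown)              # []
```

Run against the real MAINTAINERS file, a script like this would also have caught the "Oddd Fixes" typo in the FreeBSD contigmem entry, since it is not in the valid set.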
* RE: [PATCH] doc: postpone deprecation of pipeline legacy API
2023-07-19 16:08 3% ` Bruce Richardson
@ 2023-07-20 10:37 0% ` Dumitrescu, Cristian
2023-07-28 16:02 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Dumitrescu, Cristian @ 2023-07-20 10:37 UTC (permalink / raw)
To: Richardson, Bruce
Cc: dev, Nicolau, Radu, R, Kamalakannan, Suresh Narayane, Harshad
> -----Original Message-----
> From: Richardson, Bruce <bruce.richardson@intel.com>
> Sent: Wednesday, July 19, 2023 5:09 PM
> To: Dumitrescu, Cristian <cristian.dumitrescu@intel.com>
> Cc: dev@dpdk.org; Nicolau, Radu <radu.nicolau@intel.com>; R,
> Kamalakannan <kamalakannan.r@intel.com>; Suresh Narayane, Harshad
> <harshad.suresh.narayane@intel.com>
> Subject: Re: [PATCH] doc: postpone deprecation of pipeline legacy API
>
> On Wed, Jul 19, 2023 at 03:12:25PM +0000, Cristian Dumitrescu wrote:
> > Postpone the deprecation of the legacy pipeline, table and port
> > library API and gradual stabilization of the new API.
> >
> > Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
> > ---
> > doc/guides/rel_notes/deprecation.rst | 21 +++++++++------------
> > 1 file changed, 9 insertions(+), 12 deletions(-)
> >
>
> No objection to this, though it would be really good to get the new
> functions stabilized in 23.11 when we lock down the 24 ABI.
>
Yes, fully agree, let's see if we can make this happen for 23.11
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
>
> > diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> > index fb771a0305..56ef906cdb 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -145,19 +145,16 @@ Deprecation Notices
> > In the absence of such interest, this library will be removed in DPDK 23.11.
> >
> > * pipeline: The pipeline library legacy API (functions rte_pipeline_*)
> > - will be deprecated in DPDK 23.07 release and removed in DPDK 23.11
> release.
> > - The new pipeline library API (functions rte_swx_pipeline_*)
> > - will gradually transition from experimental to stable status
> > - starting with DPDK 23.07 release.
> > + will be deprecated and subsequently removed in DPDK 24.11 release.
> > + Before this, the new pipeline library API (functions rte_swx_pipeline_*)
> > + will gradually transition from experimental to stable status.
> >
> > * table: The table library legacy API (functions rte_table_*)
> > - will be deprecated in DPDK 23.07 release and removed in DPDK 23.11
> release.
> > - The new table library API (functions rte_swx_table_*)
> > - will gradually transition from experimental to stable status
> > - starting with DPDK 23.07 release.
> > + will be deprecated and subsequently removed in DPDK 24.11 release.
> > + Before this, the new table library API (functions rte_swx_table_*)
> > + will gradually transition from experimental to stable status.
> >
> > * port: The port library legacy API (functions rte_port_*)
> > - will be deprecated in DPDK 23.07 release and removed in DPDK 23.11
> release.
> > - The new port library API (functions rte_swx_port_*)
> > - will gradually transition from experimental to stable status
> > - starting with DPDK 23.07 release.
> > + will be deprecated and subsequently removed in DPDK 24.11 release.
> > + Before this, the new port library API (functions rte_swx_port_*)
> > + will gradually transition from experimental to stable status.
> > --
> > 2.34.1
> >
^ permalink raw reply [relevance 0%]
* Re: [PATCH] doc: postpone deprecation of pipeline legacy API
@ 2023-07-19 16:08 3% ` Bruce Richardson
2023-07-20 10:37 0% ` Dumitrescu, Cristian
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2023-07-19 16:08 UTC (permalink / raw)
To: Cristian Dumitrescu
Cc: dev, radu.nicolau, kamalakannan.r, harshad.suresh.narayane
On Wed, Jul 19, 2023 at 03:12:25PM +0000, Cristian Dumitrescu wrote:
> Postpone the deprecation of the legacy pipeline, table and port
> library API and gradual stabilization of the new API.
>
> Signed-off-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 21 +++++++++------------
> 1 file changed, 9 insertions(+), 12 deletions(-)
>
No objection to this, though it would be really good to get the new
functions stabilized in 23.11 when we lock down the 24 ABI.
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index fb771a0305..56ef906cdb 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -145,19 +145,16 @@ Deprecation Notices
> In the absence of such interest, this library will be removed in DPDK 23.11.
>
> * pipeline: The pipeline library legacy API (functions rte_pipeline_*)
> - will be deprecated in DPDK 23.07 release and removed in DPDK 23.11 release.
> - The new pipeline library API (functions rte_swx_pipeline_*)
> - will gradually transition from experimental to stable status
> - starting with DPDK 23.07 release.
> + will be deprecated and subsequently removed in DPDK 24.11 release.
> + Before this, the new pipeline library API (functions rte_swx_pipeline_*)
> + will gradually transition from experimental to stable status.
>
> * table: The table library legacy API (functions rte_table_*)
> - will be deprecated in DPDK 23.07 release and removed in DPDK 23.11 release.
> - The new table library API (functions rte_swx_table_*)
> - will gradually transition from experimental to stable status
> - starting with DPDK 23.07 release.
> + will be deprecated and subsequently removed in DPDK 24.11 release.
> + Before this, the new table library API (functions rte_swx_table_*)
> + will gradually transition from experimental to stable status.
>
> * port: The port library legacy API (functions rte_port_*)
> - will be deprecated in DPDK 23.07 release and removed in DPDK 23.11 release.
> - The new port library API (functions rte_swx_port_*)
> - will gradually transition from experimental to stable status
> - starting with DPDK 23.07 release.
> + will be deprecated and subsequently removed in DPDK 24.11 release.
> + Before this, the new port library API (functions rte_swx_port_*)
> + will gradually transition from experimental to stable status.
> --
> 2.34.1
>
^ permalink raw reply [relevance 3%]
* [PATCH v2] MAINTAINERS: add status information
2023-07-16 21:25 1% [RFC] MAINTAINERS: add status information Stephen Hemminger
@ 2023-07-19 16:07 1% ` Stephen Hemminger
2023-07-20 17:21 1% ` [PATCH v3] " Stephen Hemminger
` (3 subsequent siblings)
4 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-07-19 16:07 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Add a new field, S:, which indicates the support status of
individual sub-trees. Almost everything is marked as Supported,
but components without any maintainer are listed as Orphan.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
v2 - mark FreeBSD as Odd Fixes
- pipeline, table, port are marked as deprecated so should be Obsolete
MAINTAINERS | 267 +++++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 266 insertions(+), 1 deletion(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index 5bb8090ebe7e..7882a3c020bc 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -17,6 +17,16 @@ Descriptions of section entries:
X: Files and directories exclusion, same rules as F:
K: Keyword regex pattern to match content.
One regex pattern per line. Multiple K: lines acceptable.
+ S: *Status*, one of the following:
+ Supported: Someone is actually paid to look after this.
+ Maintained: Someone actually looks after it.
+ Odd Fixes: It has a maintainer but they don't have time to do
+ much other than throw the odd patch in. See below..
+ Orphan: No current maintainer [but maybe you could take the
+ role as you write your new code].
+ Obsolete: Old code. Something tagged obsolete generally means
+ it has been replaced by a better system and you
+ should be using that.
General Project Administration
@@ -25,44 +35,54 @@ General Project Administration
Main Branch
M: Thomas Monjalon <thomas@monjalon.net>
M: David Marchand <david.marchand@redhat.com>
+S: Supported
T: git://dpdk.org/dpdk
Next-net Tree
M: Ferruh Yigit <ferruh.yigit@amd.com>
M: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net
Next-net-brcm Tree
M: Ajit Khaparde <ajit.khaparde@broadcom.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-brcm
Next-net-intel Tree
M: Qi Zhang <qi.z.zhang@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
Next-net-mrvl Tree
M: Jerin Jacob <jerinj@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
Next-net-mlx Tree
M: Raslan Darawsheh <rasland@nvidia.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mlx
Next-virtio Tree
M: Maxime Coquelin <maxime.coquelin@redhat.com>
M: Chenbo Xia <chenbo.xia@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-virtio
Next-crypto Tree
M: Akhil Goyal <gakhil@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-crypto
Next-eventdev Tree
M: Jerin Jacob <jerinj@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
Next-baseband Tree
M: Maxime Coquelin <maxime.coquelin@redhat.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-baseband
Stable Branches
@@ -70,17 +90,21 @@ M: Luca Boccassi <bluca@debian.org>
M: Kevin Traynor <ktraynor@redhat.com>
M: Christian Ehrhardt <christian.ehrhardt@canonical.com>
M: Xueming Li <xuemingl@nvidia.com>
+S: Supported
T: git://dpdk.org/dpdk-stable
Security Issues
M: maintainers@dpdk.org
+S: Supported
Documentation (with overlaps)
F: README
F: doc/
+S: Supported
Developers and Maintainers Tools
M: Thomas Monjalon <thomas@monjalon.net>
+S: Supported
F: MAINTAINERS
F: devtools/build-dict.sh
F: devtools/check-abi.sh
@@ -110,7 +134,7 @@ F: .mailmap
Build System
M: Bruce Richardson <bruce.richardson@intel.com>
-F: Makefile
+S: Maintained
F: meson.build
F: meson_options.txt
F: config/
@@ -130,11 +154,13 @@ F: devtools/check-meson.py
Public CI
M: Aaron Conole <aconole@redhat.com>
M: Michael Santana <maicolgabriel@hotmail.com>
+S: Supported
F: .github/workflows/build.yml
F: .ci/
Driver information
M: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
+S: Maintained
F: buildtools/coff.py
F: buildtools/gen-pmdinfo-cfile.py
F: buildtools/pmdinfogen.py
@@ -147,6 +173,7 @@ Environment Abstraction Layer
T: git://dpdk.org/dpdk
EAL API and common code
+S: Supported
F: lib/eal/common/
F: lib/eal/unix/
F: lib/eal/include/
@@ -180,6 +207,7 @@ F: app/test/test_version.c
Trace - EXPERIMENTAL
M: Jerin Jacob <jerinj@marvell.com>
M: Sunil Kumar Kori <skori@marvell.com>
+S: Supported
F: lib/eal/include/rte_trace*.h
F: lib/eal/common/eal_common_trace*.c
F: lib/eal/common/eal_trace.h
@@ -188,6 +216,7 @@ F: app/test/test_trace*
Memory Allocation
M: Anatoly Burakov <anatoly.burakov@intel.com>
+S: Supported
F: lib/eal/include/rte_fbarray.h
F: lib/eal/include/rte_mem*
F: lib/eal/include/rte_malloc.h
@@ -209,11 +238,13 @@ F: app/test/test_memzone.c
Interrupt Subsystem
M: Harman Kalra <hkalra@marvell.com>
+S: Supported
F: lib/eal/include/rte_epoll.h
F: lib/eal/*/*interrupts.*
F: app/test/test_interrupts.c
Keep alive
+S: Orphan
F: lib/eal/include/rte_keepalive.h
F: lib/eal/common/rte_keepalive.c
F: examples/l2fwd-keepalive/
@@ -221,6 +252,7 @@ F: doc/guides/sample_app_ug/keep_alive.rst
Secondary process
M: Anatoly Burakov <anatoly.burakov@intel.com>
+S: Maintained
K: RTE_PROC_
F: lib/eal/common/eal_common_proc.c
F: doc/guides/prog_guide/multi_proc_support.rst
@@ -230,6 +262,7 @@ F: doc/guides/sample_app_ug/multi_process.rst
Service Cores
M: Harry van Haaren <harry.van.haaren@intel.com>
+S: Supported
F: lib/eal/include/rte_service.h
F: lib/eal/include/rte_service_component.h
F: lib/eal/common/rte_service.c
@@ -240,44 +273,52 @@ F: doc/guides/sample_app_ug/service_cores.rst
Bitops
M: Joyce Kong <joyce.kong@arm.com>
+S: Supported
F: lib/eal/include/rte_bitops.h
F: app/test/test_bitops.c
Bitmap
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+S: Supported
F: lib/eal/include/rte_bitmap.h
F: app/test/test_bitmap.c
MCSlock
M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
+S: Supported
F: lib/eal/include/rte_mcslock.h
F: app/test/test_mcslock.c
Sequence Lock
M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
+S: Supported
F: lib/eal/include/rte_seqcount.h
F: lib/eal/include/rte_seqlock.h
F: app/test/test_seqlock.c
Ticketlock
M: Joyce Kong <joyce.kong@arm.com>
+S: Supported
F: lib/eal/include/rte_ticketlock.h
F: app/test/test_ticketlock.c
Pseudo-random Number Generation
M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
+S: Supported
F: lib/eal/include/rte_random.h
F: lib/eal/common/rte_random.c
F: app/test/test_rand_perf.c
ARM v7
M: Ruifeng Wang <ruifeng.wang@arm.com>
+S: Supported
F: config/arm/
F: lib/eal/arm/
X: lib/eal/arm/include/*_64.h
ARM v8
M: Ruifeng Wang <ruifeng.wang@arm.com>
+S: Supported
F: config/arm/
F: doc/guides/linux_gsg/cross_build_dpdk_for_arm64.rst
F: lib/eal/arm/
@@ -291,12 +332,14 @@ F: examples/common/neon/
LoongArch
M: Min Zhou <zhoumin@loongson.cn>
+S: Supported
F: config/loongarch/
F: doc/guides/linux_gsg/cross_build_dpdk_for_loongarch.rst
F: lib/eal/loongarch/
IBM POWER (alpha)
M: David Christensen <drc@linux.vnet.ibm.com>
+S: Supported
F: config/ppc/
F: lib/eal/ppc/
F: lib/*/*_altivec*
@@ -307,6 +350,7 @@ F: examples/common/altivec/
RISC-V
M: Stanislaw Kardach <kda@semihalf.com>
+S: Supported
F: config/riscv/
F: doc/guides/linux_gsg/cross_build_dpdk_for_riscv.rst
F: lib/eal/riscv/
@@ -314,6 +358,7 @@ F: lib/eal/riscv/
Intel x86
M: Bruce Richardson <bruce.richardson@intel.com>
M: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
+S: Supported
F: config/x86/
F: doc/guides/linux_gsg/nic_perf_intel_platform.rst
F: buildtools/binutils-avx512-check.py
@@ -330,28 +375,34 @@ F: examples/*/*_avx*
F: examples/common/sse/
Linux EAL (with overlaps)
+S: Maintained
F: lib/eal/linux/
F: doc/guides/linux_gsg/
Linux UIO
+S: Maintained
F: drivers/bus/pci/linux/*uio*
Linux VFIO
M: Anatoly Burakov <anatoly.burakov@intel.com>
+S: Supported
F: lib/eal/linux/*vfio*
F: drivers/bus/pci/linux/*vfio*
FreeBSD EAL (with overlaps)
M: Bruce Richardson <bruce.richardson@intel.com>
+S: Odd Fixes
F: lib/eal/freebsd/
F: doc/guides/freebsd_gsg/
FreeBSD contigmem
M: Bruce Richardson <bruce.richardson@intel.com>
+S: Odd Fixes
F: kernel/freebsd/contigmem/
FreeBSD UIO
M: Bruce Richardson <bruce.richardson@intel.com>
+S: Odd Fixes
F: kernel/freebsd/nic_uio/
Windows support
@@ -359,12 +410,14 @@ M: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
M: Narcisa Ana Maria Vasile <navasile@linux.microsoft.com>
M: Dmitry Malloy <dmitrym@microsoft.com>
M: Pallavi Kadam <pallavi.kadam@intel.com>
+S: Supported
F: lib/eal/windows/
F: buildtools/map_to_win.py
F: doc/guides/windows_gsg/
Windows memory allocation
M: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
+S: Supported
F: lib/eal/windows/eal_hugepages.c
F: lib/eal/windows/eal_mem*
@@ -372,10 +425,12 @@ F: lib/eal/windows/eal_mem*
Core Libraries
--------------
T: git://dpdk.org/dpdk
+S: Maintained
Memory pool
M: Olivier Matz <olivier.matz@6wind.com>
M: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
+S: Supported
F: lib/mempool/
F: drivers/mempool/ring/
F: doc/guides/prog_guide/mempool_lib.rst
@@ -385,6 +440,7 @@ F: app/test/test_func_reentrancy.c
Ring queue
M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
M: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
+S: Supported
F: lib/ring/
F: doc/guides/prog_guide/ring_lib.rst
F: app/test/test_ring*
@@ -392,6 +448,7 @@ F: app/test/test_func_reentrancy.c
Stack
M: Olivier Matz <olivier.matz@6wind.com>
+S: Supported
F: lib/stack/
F: drivers/mempool/stack/
F: app/test/test_stack*
@@ -399,6 +456,7 @@ F: doc/guides/prog_guide/stack_lib.rst
Packet buffer
M: Olivier Matz <olivier.matz@6wind.com>
+S: Supported
F: lib/mbuf/
F: doc/guides/prog_guide/mbuf_lib.rst
F: app/test/test_mbuf.c
@@ -407,6 +465,7 @@ Ethernet API
M: Thomas Monjalon <thomas@monjalon.net>
M: Ferruh Yigit <ferruh.yigit@amd.com>
M: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net
F: lib/ethdev/
F: app/test/test_ethdev*
@@ -415,6 +474,7 @@ F: doc/guides/prog_guide/switch_representation.rst
Flow API
M: Ori Kam <orika@nvidia.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net
F: app/test-pmd/cmdline_flow.c
F: doc/guides/prog_guide/rte_flow.rst
@@ -422,18 +482,21 @@ F: lib/ethdev/rte_flow*
Traffic Management API - EXPERIMENTAL
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net
F: lib/ethdev/rte_tm*
F: app/test-pmd/cmdline_tm.*
Traffic Metering and Policing API - EXPERIMENTAL
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net
F: lib/ethdev/rte_mtr*
F: app/test-pmd/cmdline_mtr.*
Baseband API
M: Nicolas Chautru <nicolas.chautru@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-baseband
F: lib/bbdev/
F: doc/guides/prog_guide/bbdev.rst
@@ -446,6 +509,7 @@ F: doc/guides/sample_app_ug/bbdev_app.rst
Crypto API
M: Akhil Goyal <gakhil@marvell.com>
M: Fan Zhang <fanzhang.oss@gmail.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/cryptodev/
F: app/test/test_cryptodev*
@@ -453,6 +517,7 @@ F: examples/l2fwd-crypto/
Security API
M: Akhil Goyal <gakhil@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/security/
F: doc/guides/prog_guide/rte_security.rst
@@ -461,6 +526,7 @@ F: app/test/test_security*
Compression API - EXPERIMENTAL
M: Fan Zhang <fanzhang.oss@gmail.com>
M: Ashish Gupta <ashish.gupta@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/compressdev/
F: drivers/compress/
@@ -470,6 +536,7 @@ F: doc/guides/compressdevs/features/default.ini
RegEx API - EXPERIMENTAL
M: Ori Kam <orika@nvidia.com>
+S: Supported
F: lib/regexdev/
F: app/test-regex/
F: doc/guides/prog_guide/regexdev.rst
@@ -477,6 +544,7 @@ F: doc/guides/regexdevs/features/default.ini
Machine Learning device API - EXPERIMENTAL
M: Srikanth Yalavarthi <syalavarthi@marvell.com>
+S: Supported
F: lib/mldev/
F: doc/guides/prog_guide/mldev.rst
F: app/test-mldev/
@@ -484,6 +552,7 @@ F: doc/guides/tools/testmldev.rst
DMA device API - EXPERIMENTAL
M: Chengwen Feng <fengchengwen@huawei.com>
+S: Supported
F: lib/dmadev/
F: drivers/dma/skeleton/
F: app/test/test_dmadev*
@@ -495,6 +564,7 @@ F: doc/guides/sample_app_ug/dma.rst
General-Purpose Graphics Processing Unit (GPU) API - EXPERIMENTAL
M: Elena Agostini <eagostini@nvidia.com>
+S: Supported
F: lib/gpudev/
F: doc/guides/prog_guide/gpudev.rst
F: doc/guides/gpus/features/default.ini
@@ -502,6 +572,7 @@ F: app/test-gpudev/
Eventdev API
M: Jerin Jacob <jerinj@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
F: lib/eventdev/
F: drivers/event/skeleton/
@@ -510,6 +581,7 @@ F: examples/l3fwd/l3fwd_event*
Eventdev Ethdev Rx Adapter API
M: Naga Harish K S V <s.v.naga.harish.k@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
F: lib/eventdev/*eth_rx_adapter*
F: app/test/test_event_eth_rx_adapter.c
@@ -517,6 +589,7 @@ F: doc/guides/prog_guide/event_ethernet_rx_adapter.rst
Eventdev Ethdev Tx Adapter API
M: Naga Harish K S V <s.v.naga.harish.k@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
F: lib/eventdev/*eth_tx_adapter*
F: app/test/test_event_eth_tx_adapter.c
@@ -524,6 +597,7 @@ F: doc/guides/prog_guide/event_ethernet_tx_adapter.rst
Eventdev Timer Adapter API
M: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
F: lib/eventdev/*timer_adapter*
F: app/test/test_event_timer_adapter.c
@@ -531,6 +605,7 @@ F: doc/guides/prog_guide/event_timer_adapter.rst
Eventdev Crypto Adapter API
M: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
F: lib/eventdev/*crypto_adapter*
F: app/test/test_event_crypto_adapter.c
@@ -539,6 +614,7 @@ F: doc/guides/prog_guide/event_crypto_adapter.rst
Raw device API
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
+S: Supported
F: lib/rawdev/
F: drivers/raw/skeleton/
F: app/test/test_rawdev.c
@@ -551,11 +627,13 @@ Memory Pool Drivers
Bucket memory pool
M: Artem V. Andreev <artem.andreev@oktetlabs.ru>
M: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
+S: Supported
F: drivers/mempool/bucket/
Marvell cnxk
M: Ashwin Sekhar T K <asekhar@marvell.com>
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/mempool/cnxk/
F: doc/guides/mempool/cnxk.rst
@@ -567,20 +645,24 @@ Bus Drivers
AMD CDX bus
M: Nipun Gupta <nipun.gupta@amd.com>
M: Nikhil Agarwal <nikhil.agarwal@amd.com>
+S: Supported
F: drivers/bus/cdx/
Auxiliary bus driver - EXPERIMENTAL
M: Parav Pandit <parav@nvidia.com>
M: Xueming Li <xuemingl@nvidia.com>
+S: Supported
F: drivers/bus/auxiliary/
Intel FPGA bus
M: Rosen Xu <rosen.xu@intel.com>
+S: Supported
F: drivers/bus/ifpga/
NXP buses
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
+S: Supported
F: drivers/common/dpaax/
F: drivers/bus/dpaa/
F: drivers/bus/fslmc/
@@ -588,36 +670,43 @@ F: drivers/bus/fslmc/
PCI bus driver
M: Chenbo Xia <chenbo.xia@intel.com>
M: Nipun Gupta <nipun.gupta@amd.com>
+S: Supported
F: drivers/bus/pci/
Platform bus driver
M: Tomasz Duszynski <tduszynski@marvell.com>
+S: Supported
F: drivers/bus/platform/
VDEV bus driver
+S: Maintained
F: drivers/bus/vdev/
F: app/test/test_vdev.c
VMBUS bus driver
M: Long Li <longli@microsoft.com>
+S: Supported
F: drivers/bus/vmbus/
Networking Drivers
------------------
M: Ferruh Yigit <ferruh.yigit@amd.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net
F: doc/guides/nics/features/default.ini
Link bonding
M: Chas Williams <chas3@att.com>
M: Min Hu (Connor) <humin29@huawei.com>
+S: Supported
F: drivers/net/bonding/
F: doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
F: app/test/test_link_bonding*
F: examples/bond/
Linux KNI
+S: Obsolete
F: kernel/linux/kni/
F: lib/kni/
F: doc/guides/prog_guide/kernel_nic_interface.rst
@@ -625,12 +714,14 @@ F: app/test/test_kni.c
Linux AF_PACKET
M: John W. Linville <linville@tuxdriver.com>
+S: Odd Fixes
F: drivers/net/af_packet/
F: doc/guides/nics/features/afpacket.ini
Linux AF_XDP
M: Ciara Loftus <ciara.loftus@intel.com>
M: Qi Zhang <qi.z.zhang@intel.com>
+S: Supported
F: drivers/net/af_xdp/
F: doc/guides/nics/af_xdp.rst
F: doc/guides/nics/features/af_xdp.ini
@@ -641,24 +732,28 @@ M: Shai Brandes <shaibran@amazon.com>
M: Evgeny Schemeilin <evgenys@amazon.com>
M: Igor Chauskin <igorch@amazon.com>
M: Ron Beider <rbeider@amazon.com>
+S: Supported
F: drivers/net/ena/
F: doc/guides/nics/ena.rst
F: doc/guides/nics/features/ena.ini
AMD axgbe
M: Chandubabu Namburu <chandu@amd.com>
+S: Supported
F: drivers/net/axgbe/
F: doc/guides/nics/axgbe.rst
F: doc/guides/nics/features/axgbe.ini
AMD Pensando ionic
M: Andrew Boyer <andrew.boyer@amd.com>
+S: Supported
F: drivers/net/ionic/
F: doc/guides/nics/ionic.rst
F: doc/guides/nics/features/ionic.ini
Marvell/Aquantia atlantic
M: Igor Russkikh <irusskikh@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/atlantic/
F: doc/guides/nics/atlantic.rst
@@ -668,6 +763,7 @@ Atomic Rules ARK
M: Shepard Siegel <shepard.siegel@atomicrules.com>
M: Ed Czeck <ed.czeck@atomicrules.com>
M: John Miller <john.miller@atomicrules.com>
+S: Supported
F: drivers/net/ark/
F: doc/guides/nics/ark.rst
F: doc/guides/nics/features/ark.ini
@@ -675,6 +771,7 @@ F: doc/guides/nics/features/ark.ini
Broadcom bnxt
M: Ajit Khaparde <ajit.khaparde@broadcom.com>
M: Somnath Kotur <somnath.kotur@broadcom.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-brcm
F: drivers/net/bnxt/
F: doc/guides/nics/bnxt.rst
@@ -683,6 +780,7 @@ F: doc/guides/nics/features/bnxt.ini
Cavium ThunderX nicvf
M: Jerin Jacob <jerinj@marvell.com>
M: Maciej Czekaj <mczekaj@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/thunderx/
F: doc/guides/nics/thunderx.rst
@@ -690,6 +788,7 @@ F: doc/guides/nics/features/thunderx.ini
Cavium OCTEON TX
M: Harman Kalra <hkalra@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/common/octeontx/
F: drivers/mempool/octeontx/
@@ -699,6 +798,7 @@ F: doc/guides/nics/features/octeontx.ini
Chelsio cxgbe
M: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
+S: Supported
F: drivers/net/cxgbe/
F: doc/guides/nics/cxgbe.rst
F: doc/guides/nics/features/cxgbe.ini
@@ -706,6 +806,7 @@ F: doc/guides/nics/features/cxgbe.ini
Cisco enic
M: John Daley <johndale@cisco.com>
M: Hyong Youb Kim <hyonkim@cisco.com>
+S: Supported
F: drivers/net/enic/
F: doc/guides/nics/enic.rst
F: doc/guides/nics/features/enic.ini
@@ -715,6 +816,7 @@ M: Junfeng Guo <junfeng.guo@intel.com>
M: Jeroen de Borst <jeroendb@google.com>
M: Rushil Gupta <rushilg@google.com>
M: Joshua Washington <joshwash@google.com>
+S: Supported
F: drivers/net/gve/
F: doc/guides/nics/gve.rst
F: doc/guides/nics/features/gve.ini
@@ -722,6 +824,7 @@ F: doc/guides/nics/features/gve.ini
Hisilicon hns3
M: Dongdong Liu <liudongdong3@huawei.com>
M: Yisen Zhuang <yisen.zhuang@huawei.com>
+S: Supported
F: drivers/net/hns3/
F: doc/guides/nics/hns3.rst
F: doc/guides/nics/features/hns3.ini
@@ -730,6 +833,7 @@ Huawei hinic
M: Ziyang Xuan <xuanziyang2@huawei.com>
M: Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>
M: Guoyang Zhou <zhouguoyang@huawei.com>
+S: Supported
F: drivers/net/hinic/
F: doc/guides/nics/hinic.rst
F: doc/guides/nics/features/hinic.ini
@@ -737,6 +841,7 @@ F: doc/guides/nics/features/hinic.ini
Intel e1000
M: Simei Su <simei.su@intel.com>
M: Wenjun Wu <wenjun1.wu@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/e1000/
F: doc/guides/nics/e1000em.rst
@@ -747,6 +852,7 @@ F: doc/guides/nics/features/igb*.ini
Intel ixgbe
M: Qiming Yang <qiming.yang@intel.com>
M: Wenjun Wu <wenjun1.wu@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/ixgbe/
F: doc/guides/nics/ixgbe.rst
@@ -756,6 +862,7 @@ F: doc/guides/nics/features/ixgbe*.ini
Intel i40e
M: Yuying Zhang <Yuying.Zhang@intel.com>
M: Beilei Xing <beilei.xing@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/i40e/
F: doc/guides/nics/i40e.rst
@@ -765,6 +872,7 @@ F: doc/guides/nics/features/i40e*.ini
Intel fm10k
M: Qi Zhang <qi.z.zhang@intel.com>
M: Xiao Wang <xiao.w.wang@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/fm10k/
F: doc/guides/nics/fm10k.rst
@@ -773,6 +881,7 @@ F: doc/guides/nics/features/fm10k*.ini
Intel iavf
M: Jingjing Wu <jingjing.wu@intel.com>
M: Beilei Xing <beilei.xing@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/iavf/
F: drivers/common/iavf/
@@ -781,6 +890,7 @@ F: doc/guides/nics/features/iavf*.ini
Intel ice
M: Qiming Yang <qiming.yang@intel.com>
M: Qi Zhang <qi.z.zhang@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/ice/
F: doc/guides/nics/ice.rst
@@ -789,6 +899,7 @@ F: doc/guides/nics/features/ice.ini
Intel idpf
M: Jingjing Wu <jingjing.wu@intel.com>
M: Beilei Xing <beilei.xing@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/idpf/
F: drivers/common/idpf/
@@ -798,6 +909,7 @@ F: doc/guides/nics/features/idpf.ini
Intel cpfl - EXPERIMENTAL
M: Yuying Zhang <yuying.zhang@intel.com>
M: Beilei Xing <beilei.xing@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/cpfl/
F: doc/guides/nics/cpfl.rst
@@ -806,6 +918,7 @@ F: doc/guides/nics/features/cpfl.ini
Intel igc
M: Junfeng Guo <junfeng.guo@intel.com>
M: Simei Su <simei.su@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/igc/
F: doc/guides/nics/igc.rst
@@ -814,6 +927,7 @@ F: doc/guides/nics/features/igc.ini
Intel ipn3ke
M: Rosen Xu <rosen.xu@intel.com>
T: git://dpdk.org/next/dpdk-next-net-intel
+S: Supported
F: drivers/net/ipn3ke/
F: doc/guides/nics/ipn3ke.rst
F: doc/guides/nics/features/ipn3ke.ini
@@ -823,6 +937,7 @@ M: Nithin Dabilpuram <ndabilpuram@marvell.com>
M: Kiran Kumar K <kirankumark@marvell.com>
M: Sunil Kumar Kori <skori@marvell.com>
M: Satha Rao <skoteshwar@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/common/cnxk/
F: drivers/net/cnxk/
@@ -832,6 +947,7 @@ F: doc/guides/platform/cnxk.rst
Marvell mvpp2
M: Liron Himi <lironh@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/common/mvep/
F: drivers/net/mvpp2/
@@ -841,6 +957,7 @@ F: doc/guides/nics/features/mvpp2.ini
Marvell mvneta
M: Zyta Szpak <zr@semihalf.com>
M: Liron Himi <lironh@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/mvneta/
F: doc/guides/nics/mvneta.rst
@@ -848,6 +965,7 @@ F: doc/guides/nics/features/mvneta.ini
Marvell OCTEON TX EP - endpoint
M: Vamsi Attunuru <vattunuru@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/octeon_ep/
F: doc/guides/nics/features/octeon_ep.ini
@@ -856,6 +974,7 @@ F: doc/guides/nics/octeon_ep.rst
NVIDIA mlx4
M: Matan Azrad <matan@nvidia.com>
M: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mlx
F: drivers/net/mlx4/
F: doc/guides/nics/mlx4.rst
@@ -866,6 +985,7 @@ M: Matan Azrad <matan@nvidia.com>
M: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
M: Ori Kam <orika@nvidia.com>
M: Suanming Mou <suanmingm@nvidia.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mlx
F: drivers/common/mlx5/
F: drivers/net/mlx5/
@@ -875,23 +995,27 @@ F: doc/guides/nics/features/mlx5.ini
Microsoft mana
M: Long Li <longli@microsoft.com>
+S: Supported
F: drivers/net/mana/
F: doc/guides/nics/mana.rst
F: doc/guides/nics/features/mana.ini
Microsoft vdev_netvsc - EXPERIMENTAL
M: Matan Azrad <matan@nvidia.com>
+S: Supported
F: drivers/net/vdev_netvsc/
F: doc/guides/nics/vdev_netvsc.rst
Microsoft Hyper-V netvsc
M: Long Li <longli@microsoft.com>
+S: Supported
F: drivers/net/netvsc/
F: doc/guides/nics/netvsc.rst
F: doc/guides/nics/features/netvsc.ini
Netcope nfb
M: Martin Spinler <spinler@cesnet.cz>
+S: Supported
F: drivers/net/nfb/
F: doc/guides/nics/nfb.rst
F: doc/guides/nics/features/nfb.ini
@@ -899,6 +1023,7 @@ F: doc/guides/nics/features/nfb.ini
Netronome nfp
M: Chaoyong He <chaoyong.he@corigine.com>
M: Niklas Soderlund <niklas.soderlund@corigine.com>
+S: Supported
F: drivers/net/nfp/
F: doc/guides/nics/nfp.rst
F: doc/guides/nics/features/nfp*.ini
@@ -906,6 +1031,7 @@ F: doc/guides/nics/features/nfp*.ini
NXP dpaa
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
+S: Supported
F: drivers/mempool/dpaa/
F: drivers/net/dpaa/
F: doc/guides/nics/dpaa.rst
@@ -914,6 +1040,7 @@ F: doc/guides/nics/features/dpaa.ini
NXP dpaa2
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
+S: Supported
F: drivers/mempool/dpaa2/
F: drivers/net/dpaa2/
F: doc/guides/nics/dpaa2.rst
@@ -922,6 +1049,7 @@ F: doc/guides/nics/features/dpaa2.ini
NXP enetc
M: Gagandeep Singh <g.singh@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
+S: Supported
F: drivers/net/enetc/
F: doc/guides/nics/enetc.rst
F: doc/guides/nics/features/enetc.ini
@@ -929,18 +1057,21 @@ F: doc/guides/nics/features/enetc.ini
NXP enetfec - EXPERIMENTAL
M: Apeksha Gupta <apeksha.gupta@nxp.com>
M: Sachin Saxena <sachin.saxena@nxp.com>
+S: Supported
F: drivers/net/enetfec/
F: doc/guides/nics/enetfec.rst
F: doc/guides/nics/features/enetfec.ini
NXP pfe
M: Gagandeep Singh <g.singh@nxp.com>
+S: Supported
F: doc/guides/nics/pfe.rst
F: drivers/net/pfe/
F: doc/guides/nics/features/pfe.ini
Marvell QLogic bnx2x
M: Julien Aube <julien_dpdk@jaube.fr>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/bnx2x/
F: doc/guides/nics/bnx2x.rst
@@ -949,6 +1080,7 @@ F: doc/guides/nics/features/bnx2x*.ini
Marvell QLogic qede PMD
M: Devendra Singh Rawat <dsinghrawat@marvell.com>
M: Alok Prasad <palok@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/qede/
F: doc/guides/nics/qede.rst
@@ -956,6 +1088,7 @@ F: doc/guides/nics/features/qede*.ini
Solarflare sfc_efx
M: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
+S: Supported
F: drivers/common/sfc_efx/
F: drivers/net/sfc/
F: doc/guides/nics/sfc_efx.rst
@@ -963,6 +1096,7 @@ F: doc/guides/nics/features/sfc.ini
Wangxun ngbe
M: Jiawen Wu <jiawenwu@trustnetic.com>
+S: Supported
F: drivers/net/ngbe/
F: doc/guides/nics/ngbe.rst
F: doc/guides/nics/features/ngbe.ini
@@ -970,12 +1104,14 @@ F: doc/guides/nics/features/ngbe.ini
Wangxun txgbe
M: Jiawen Wu <jiawenwu@trustnetic.com>
M: Jian Wang <jianwang@trustnetic.com>
+S: Supported
F: drivers/net/txgbe/
F: doc/guides/nics/txgbe.rst
F: doc/guides/nics/features/txgbe.ini
VMware vmxnet3
M: Jochen Behrens <jbehrens@vmware.com>
+S: Supported
F: drivers/net/vmxnet3/
F: doc/guides/nics/vmxnet3.rst
F: doc/guides/nics/features/vmxnet3.ini
@@ -983,6 +1119,7 @@ F: doc/guides/nics/features/vmxnet3.ini
Vhost-user
M: Maxime Coquelin <maxime.coquelin@redhat.com>
M: Chenbo Xia <chenbo.xia@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-virtio
F: lib/vhost/
F: doc/guides/prog_guide/vhost_lib.rst
@@ -997,6 +1134,7 @@ F: doc/guides/sample_app_ug/vdpa.rst
Vhost PMD
M: Maxime Coquelin <maxime.coquelin@redhat.com>
M: Chenbo Xia <chenbo.xia@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-virtio
F: drivers/net/vhost/
F: doc/guides/nics/vhost.rst
@@ -1005,6 +1143,7 @@ F: doc/guides/nics/features/vhost.ini
Virtio PMD
M: Maxime Coquelin <maxime.coquelin@redhat.com>
M: Chenbo Xia <chenbo.xia@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-virtio
F: drivers/net/virtio/
F: doc/guides/nics/virtio.rst
@@ -1013,26 +1152,31 @@ F: doc/guides/nics/features/virtio*.ini
Wind River AVP
M: Steven Webster <steven.webster@windriver.com>
M: Matt Peters <matt.peters@windriver.com>
+S: Supported
F: drivers/net/avp/
F: doc/guides/nics/avp.rst
F: doc/guides/nics/features/avp.ini
PCAP PMD
+S: Orphan
F: drivers/net/pcap/
F: doc/guides/nics/pcap_ring.rst
F: doc/guides/nics/features/pcap.ini
Tap PMD
+S: Orphan
F: drivers/net/tap/
F: doc/guides/nics/tap.rst
F: doc/guides/nics/features/tap.ini
KNI PMD
+S: Obsolete
F: drivers/net/kni/
F: doc/guides/nics/kni.rst
Ring PMD
M: Bruce Richardson <bruce.richardson@intel.com>
+S: Supported
F: drivers/net/ring/
F: doc/guides/nics/pcap_ring.rst
F: app/test/test_pmd_ring.c
@@ -1040,21 +1184,25 @@ F: app/test/test_pmd_ring_perf.c
Null Networking PMD
M: Tetsuya Mukawa <mtetsuyah@gmail.com>
+S: Supported
F: drivers/net/null/
Fail-safe PMD
M: Gaetan Rivet <grive@u256.net>
+S: Supported
F: drivers/net/failsafe/
F: doc/guides/nics/fail_safe.rst
F: doc/guides/nics/features/failsafe.ini
Softnic PMD
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+S: Supported
F: drivers/net/softnic/
F: doc/guides/nics/softnic.rst
Memif PMD
M: Jakub Grajciar <jgrajcia@cisco.com>
+S: Supported
F: drivers/net/memif/
F: doc/guides/nics/memif.rst
F: doc/guides/nics/features/memif.ini
@@ -1062,17 +1210,20 @@ F: doc/guides/nics/features/memif.ini
Crypto Drivers
--------------
+S: Maintained
T: git://dpdk.org/next/dpdk-next-crypto
F: doc/guides/cryptodevs/features/default.ini
AMD CCP Crypto
M: Sunil Uttarwar <sunilprakashrao.uttarwar@amd.com>
+S: Supported
F: drivers/crypto/ccp/
F: doc/guides/cryptodevs/ccp.rst
F: doc/guides/cryptodevs/features/ccp.ini
ARMv8 Crypto
M: Ruifeng Wang <ruifeng.wang@arm.com>
+S: Supported
F: drivers/crypto/armv8/
F: doc/guides/cryptodevs/armv8.rst
F: doc/guides/cryptodevs/features/armv8.ini
@@ -1081,12 +1232,14 @@ Broadcom FlexSparc
M: Ajit Khaparde <ajit.khaparde@broadcom.com>
M: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
M: Vikas Gupta <vikas.gupta@broadcom.com>
+S: Supported
F: drivers/crypto/bcmfs/
F: doc/guides/cryptodevs/bcmfs.rst
F: doc/guides/cryptodevs/features/bcmfs.ini
Cavium OCTEON TX crypto
M: Anoob Joseph <anoobj@marvell.com>
+S: Supported
F: drivers/common/cpt/
F: drivers/crypto/octeontx/
F: doc/guides/cryptodevs/octeontx.rst
@@ -1094,17 +1247,20 @@ F: doc/guides/cryptodevs/features/octeontx.ini
Crypto Scheduler
M: Kai Ji <kai.ji@intel.com>
+S: Supported
F: drivers/crypto/scheduler/
F: doc/guides/cryptodevs/scheduler.rst
HiSilicon UADK crypto
M: Zhangfei Gao <zhangfei.gao@linaro.org>
+S: Supported
F: drivers/crypto/uadk/
F: doc/guides/cryptodevs/uadk.rst
F: doc/guides/cryptodevs/features/uadk.ini
Intel QuickAssist
M: Kai Ji <kai.ji@intel.com>
+S: Supported
F: drivers/crypto/qat/
F: drivers/common/qat/
F: doc/guides/cryptodevs/qat.rst
@@ -1113,6 +1269,7 @@ F: doc/guides/cryptodevs/features/qat.ini
IPsec MB
M: Kai Ji <kai.ji@intel.com>
M: Pablo de Lara <pablo.de.lara.guarch@intel.com>
+S: Supported
F: drivers/crypto/ipsec_mb/
F: doc/guides/cryptodevs/aesni_gcm.rst
F: doc/guides/cryptodevs/aesni_mb.rst
@@ -1131,6 +1288,7 @@ Marvell cnxk crypto
M: Ankur Dwivedi <adwivedi@marvell.com>
M: Anoob Joseph <anoobj@marvell.com>
M: Tejasree Kondoj <ktejasree@marvell.com>
+S: Supported
F: drivers/crypto/cnxk/
F: doc/guides/cryptodevs/cnxk.rst
F: doc/guides/cryptodevs/features/cn9k.ini
@@ -1139,6 +1297,7 @@ F: doc/guides/cryptodevs/features/cn10k.ini
Marvell mvsam
M: Michael Shamis <michaelsh@marvell.com>
M: Liron Himi <lironh@marvell.com>
+S: Supported
F: drivers/crypto/mvsam/
F: doc/guides/cryptodevs/mvsam.rst
F: doc/guides/cryptodevs/features/mvsam.ini
@@ -1146,18 +1305,21 @@ F: doc/guides/cryptodevs/features/mvsam.ini
Marvell Nitrox
M: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
M: Srikanth Jampala <jsrikanth@marvell.com>
+S: Supported
F: drivers/crypto/nitrox/
F: doc/guides/cryptodevs/nitrox.rst
F: doc/guides/cryptodevs/features/nitrox.ini
NVIDIA mlx5
M: Matan Azrad <matan@nvidia.com>
+S: Supported
F: drivers/crypto/mlx5/
F: doc/guides/cryptodevs/mlx5.rst
F: doc/guides/cryptodevs/features/mlx5.ini
Null Crypto
M: Kai Ji <kai.ji@intel.com>
+S: Supported
F: drivers/crypto/null/
F: doc/guides/cryptodevs/null.rst
F: doc/guides/cryptodevs/features/null.ini
@@ -1165,6 +1327,7 @@ F: doc/guides/cryptodevs/features/null.ini
NXP CAAM JR
M: Gagandeep Singh <g.singh@nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
+S: Supported
F: drivers/crypto/caam_jr/
F: doc/guides/cryptodevs/caam_jr.rst
F: doc/guides/cryptodevs/features/caam_jr.ini
@@ -1172,6 +1335,7 @@ F: doc/guides/cryptodevs/features/caam_jr.ini
NXP DPAA_SEC
M: Gagandeep Singh <g.singh@nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
+S: Supported
F: drivers/crypto/dpaa_sec/
F: doc/guides/cryptodevs/dpaa_sec.rst
F: doc/guides/cryptodevs/features/dpaa_sec.ini
@@ -1179,18 +1343,21 @@ F: doc/guides/cryptodevs/features/dpaa_sec.ini
NXP DPAA2_SEC
M: Gagandeep Singh <g.singh@nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
+S: Supported
F: drivers/crypto/dpaa2_sec/
F: doc/guides/cryptodevs/dpaa2_sec.rst
F: doc/guides/cryptodevs/features/dpaa2_sec.ini
OpenSSL
M: Kai Ji <kai.ji@intel.com>
+S: Supported
F: drivers/crypto/openssl/
F: doc/guides/cryptodevs/openssl.rst
F: doc/guides/cryptodevs/features/openssl.ini
Virtio
M: Jay Zhou <jianjay.zhou@huawei.com>
+S: Supported
F: drivers/crypto/virtio/
F: doc/guides/cryptodevs/virtio.rst
F: doc/guides/cryptodevs/features/virtio.ini
@@ -1198,31 +1365,37 @@ F: doc/guides/cryptodevs/features/virtio.ini
Compression Drivers
-------------------
+S: Maintained
T: git://dpdk.org/next/dpdk-next-crypto
Cavium OCTEON TX zipvf
M: Ashish Gupta <ashish.gupta@marvell.com>
+S: Supported
F: drivers/compress/octeontx/
F: doc/guides/compressdevs/octeontx.rst
F: doc/guides/compressdevs/features/octeontx.ini
Intel QuickAssist
M: Kai Ji <kai.ji@intel.com>
+S: Supported
F: drivers/compress/qat/
F: drivers/common/qat/
ISA-L
M: Lee Daly <lee.daly@intel.com>
+S: Supported
F: drivers/compress/isal/
F: doc/guides/compressdevs/isal.rst
F: doc/guides/compressdevs/features/isal.ini
NVIDIA mlx5
M: Matan Azrad <matan@nvidia.com>
+S: Supported
F: drivers/compress/mlx5/
ZLIB
M: Sunila Sahu <ssahu@marvell.com>
+S: Supported
F: drivers/compress/zlib/
F: doc/guides/compressdevs/zlib.rst
F: doc/guides/compressdevs/features/zlib.ini
@@ -1234,34 +1407,40 @@ DMAdev Drivers
Intel IDXD - EXPERIMENTAL
M: Bruce Richardson <bruce.richardson@intel.com>
M: Kevin Laatz <kevin.laatz@intel.com>
+S: Supported
F: drivers/dma/idxd/
F: doc/guides/dmadevs/idxd.rst
Intel IOAT
M: Bruce Richardson <bruce.richardson@intel.com>
M: Conor Walsh <conor.walsh@intel.com>
+S: Supported
F: drivers/dma/ioat/
F: doc/guides/dmadevs/ioat.rst
HiSilicon DMA
M: Chengwen Feng <fengchengwen@huawei.com>
+S: Supported
F: drivers/dma/hisilicon/
F: doc/guides/dmadevs/hisilicon.rst
Marvell CNXK DPI DMA
M: Vamsi Attunuru <vattunuru@marvell.com>
+S: Supported
F: drivers/dma/cnxk/
F: doc/guides/dmadevs/cnxk.rst
NXP DPAA DMA
M: Gagandeep Singh <g.singh@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
+S: Supported
F: drivers/dma/dpaa/
F: doc/guides/dmadevs/dpaa.rst
NXP DPAA2 QDMA
M: Gagandeep Singh <g.singh@nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
+S: Supported
F: drivers/dma/dpaa2/
F: doc/guides/dmadevs/dpaa2.rst
@@ -1271,12 +1450,14 @@ RegEx Drivers
Marvell OCTEON CN9K regex
M: Liron Himi <lironh@marvell.com>
+S: Supported
F: drivers/regex/cn9k/
F: doc/guides/regexdevs/cn9k.rst
F: doc/guides/regexdevs/features/cn9k.ini
NVIDIA mlx5
M: Ori Kam <orika@nvidia.com>
+S: Supported
F: drivers/regex/mlx5/
F: doc/guides/regexdevs/mlx5.rst
F: doc/guides/regexdevs/features/mlx5.ini
@@ -1287,6 +1468,7 @@ MLdev Drivers
Marvell ML CNXK
M: Srikanth Yalavarthi <syalavarthi@marvell.com>
+S: Supported
F: drivers/common/cnxk/hw/ml.h
F: drivers/common/cnxk/roc_ml*
F: drivers/ml/cnxk/
@@ -1299,6 +1481,7 @@ T: git://dpdk.org/next/dpdk-next-virtio
Intel ifc
M: Xiao Wang <xiao.w.wang@intel.com>
+S: Supported
F: drivers/vdpa/ifc/
F: doc/guides/vdpadevs/ifc.rst
F: doc/guides/vdpadevs/features/ifcvf.ini
@@ -1306,12 +1489,14 @@ F: doc/guides/vdpadevs/features/ifcvf.ini
NVIDIA mlx5 vDPA
M: Matan Azrad <matan@nvidia.com>
M: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
+S: Supported
F: drivers/vdpa/mlx5/
F: doc/guides/vdpadevs/mlx5.rst
F: doc/guides/vdpadevs/features/mlx5.ini
Xilinx sfc vDPA
M: Vijay Kumar Srivastava <vsrivast@xilinx.com>
+S: Supported
F: drivers/vdpa/sfc/
F: doc/guides/vdpadevs/sfc.rst
F: doc/guides/vdpadevs/features/sfc.ini
@@ -1320,42 +1505,50 @@ F: doc/guides/vdpadevs/features/sfc.ini
Eventdev Drivers
----------------
M: Jerin Jacob <jerinj@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
Cavium OCTEON TX ssovf
M: Jerin Jacob <jerinj@marvell.com>
+S: Supported
F: drivers/event/octeontx/
F: doc/guides/eventdevs/octeontx.rst
Cavium OCTEON TX timvf
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
+S: Supported
F: drivers/event/octeontx/timvf_*
Intel DLB2
M: Timothy McDaniel <timothy.mcdaniel@intel.com>
+S: Supported
F: drivers/event/dlb2/
F: doc/guides/eventdevs/dlb2.rst
Marvell cnxk
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
M: Shijith Thotton <sthotton@marvell.com>
+S: Supported
F: drivers/event/cnxk/
F: doc/guides/eventdevs/cnxk.rst
NXP DPAA eventdev
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
+S: Supported
F: drivers/event/dpaa/
F: doc/guides/eventdevs/dpaa.rst
NXP DPAA2 eventdev
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
+S: Supported
F: drivers/event/dpaa2/
F: doc/guides/eventdevs/dpaa2.rst
Software Eventdev PMD
M: Harry van Haaren <harry.van.haaren@intel.com>
+S: Supported
F: drivers/event/sw/
F: doc/guides/eventdevs/sw.rst
F: examples/eventdev_pipeline/
@@ -1363,11 +1556,13 @@ F: doc/guides/sample_app_ug/eventdev_pipeline.rst
Distributed Software Eventdev PMD
M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
+S: Supported
F: drivers/event/dsw/
F: doc/guides/eventdevs/dsw.rst
Software OPDL Eventdev PMD
M: Liang Ma <liangma@liangbit.com>
+S: Supported
M: Peter Mccarthy <peter.mccarthy@intel.com>
F: drivers/event/opdl/
F: doc/guides/eventdevs/opdl.rst
@@ -1378,6 +1573,7 @@ Baseband Drivers
Intel baseband
M: Nicolas Chautru <nicolas.chautru@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-baseband
F: drivers/baseband/turbo_sw/
F: doc/guides/bbdevs/turbo_sw.rst
@@ -1397,6 +1593,7 @@ F: doc/guides/bbdevs/features/vrb1.ini
Null baseband
M: Nicolas Chautru <nicolas.chautru@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-baseband
F: drivers/baseband/null/
F: doc/guides/bbdevs/null.rst
@@ -1405,6 +1602,7 @@ F: doc/guides/bbdevs/features/null.ini
NXP LA12xx
M: Gagandeep Singh <g.singh@nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-baseband
F: drivers/baseband/la12xx/
F: doc/guides/bbdevs/la12xx.rst
@@ -1416,6 +1614,7 @@ GPU Drivers
NVIDIA CUDA
M: Elena Agostini <eagostini@nvidia.com>
+S: Supported
F: drivers/gpu/cuda/
F: doc/guides/gpus/cuda.rst
@@ -1426,6 +1625,7 @@ Rawdev Drivers
Intel FPGA
M: Rosen Xu <rosen.xu@intel.com>
M: Tianfei zhang <tianfei.zhang@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/raw/ifpga/
F: doc/guides/rawdevs/ifpga.rst
@@ -1433,18 +1633,21 @@ F: doc/guides/rawdevs/ifpga.rst
Marvell CNXK BPHY
M: Jakub Palider <jpalider@marvell.com>
M: Tomasz Duszynski <tduszynski@marvell.com>
+S: Supported
F: doc/guides/rawdevs/cnxk_bphy.rst
F: drivers/raw/cnxk_bphy/
Marvell CNXK GPIO
M: Jakub Palider <jpalider@marvell.com>
M: Tomasz Duszynski <tduszynski@marvell.com>
+S: Supported
F: doc/guides/rawdevs/cnxk_gpio.rst
F: drivers/raw/cnxk_gpio/
NTB
M: Jingjing Wu <jingjing.wu@intel.com>
M: Junfeng Guo <junfeng.guo@intel.com>
+S: Supported
F: drivers/raw/ntb/
F: doc/guides/rawdevs/ntb.rst
F: examples/ntb/
@@ -1452,6 +1655,7 @@ F: doc/guides/sample_app_ug/ntb.rst
NXP DPAA2 CMDIF
M: Gagandeep Singh <g.singh@nxp.com>
+S: Supported
F: drivers/raw/dpaa2_cmdif/
F: doc/guides/rawdevs/dpaa2_cmdif.rst
@@ -1461,12 +1665,14 @@ Packet processing
Network headers
M: Olivier Matz <olivier.matz@6wind.com>
+S: Supported
F: lib/net/
F: app/test/test_cksum.c
F: app/test/test_cksum_perf.c
Packet CRC
M: Jasvinder Singh <jasvinder.singh@intel.com>
+S: Supported
F: lib/net/net_crc.h
F: lib/net/rte_net_crc*
F: lib/net/net_crc_avx512.c
@@ -1475,6 +1681,7 @@ F: app/test/test_crc.c
IP fragmentation & reassembly
M: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
+S: Supported
F: lib/ip_frag/
F: doc/guides/prog_guide/ip_fragment_reassembly_lib.rst
F: app/test/test_ipfrag.c
@@ -1486,16 +1693,19 @@ F: doc/guides/sample_app_ug/ip_reassembly.rst
Generic Receive Offload - EXPERIMENTAL
M: Jiayu Hu <jiayu.hu@intel.com>
+S: Supported
F: lib/gro/
F: doc/guides/prog_guide/generic_receive_offload_lib.rst
Generic Segmentation Offload
M: Jiayu Hu <jiayu.hu@intel.com>
+S: Supported
F: lib/gso/
F: doc/guides/prog_guide/generic_segmentation_offload_lib.rst
IPsec
M: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
+S: Supported
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/ipsec/
F: app/test/test_ipsec*
@@ -1506,12 +1716,14 @@ F: app/test-sad/
PDCP - EXPERIMENTAL
M: Anoob Joseph <anoobj@marvell.com>
M: Volodymyr Fialko <vfialko@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/pdcp/
F: doc/guides/prog_guide/pdcp_lib.rst
F: app/test/test_pdcp*
Flow Classify - EXPERIMENTAL - UNMAINTAINED
+S: Orphan
F: lib/flow_classify/
F: app/test/test_flow_classify*
F: doc/guides/prog_guide/flow_classify_lib.rst
@@ -1520,6 +1732,7 @@ F: doc/guides/sample_app_ug/flow_classify.rst
Distributor
M: David Hunt <david.hunt@intel.com>
+S: Supported
F: lib/distributor/
F: doc/guides/prog_guide/packet_distrib_lib.rst
F: app/test/test_distributor*
@@ -1528,6 +1741,7 @@ F: doc/guides/sample_app_ug/dist_app.rst
Reorder
M: Volodymyr Fialko <vfialko@marvell.com>
+S: Supported
F: lib/reorder/
F: doc/guides/prog_guide/reorder_lib.rst
F: app/test/test_reorder*
@@ -1536,6 +1750,7 @@ F: doc/guides/sample_app_ug/packet_ordering.rst
Hierarchical scheduler
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+S: Supported
F: lib/sched/
F: doc/guides/prog_guide/qos_framework.rst
F: app/test/test_pie.c
@@ -1547,6 +1762,7 @@ F: doc/guides/sample_app_ug/qos_scheduler.rst
Packet capture
M: Reshma Pattan <reshma.pattan@intel.com>
M: Stephen Hemminger <stephen@networkplumber.org>
+S: Maintained
F: lib/pdump/
F: doc/guides/prog_guide/pdump_lib.rst
F: app/test/test_pdump.*
@@ -1562,6 +1778,7 @@ F: doc/guides/tools/dumpcap.rst
Packet Framework
----------------
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+S: Obsolete
F: lib/pipeline/
F: lib/port/
F: lib/table/
@@ -1579,6 +1796,7 @@ Algorithms
ACL
M: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
+S: Supported
F: lib/acl/
F: doc/guides/prog_guide/packet_classif_access_ctrl.rst
F: app/test-acl/
@@ -1587,6 +1805,7 @@ F: app/test/test_acl.*
EFD
M: Byron Marohn <byron.marohn@intel.com>
M: Yipeng Wang <yipeng1.wang@intel.com>
+S: Supported
F: lib/efd/
F: doc/guides/prog_guide/efd_lib.rst
F: app/test/test_efd*
@@ -1598,6 +1817,7 @@ M: Yipeng Wang <yipeng1.wang@intel.com>
M: Sameh Gobriel <sameh.gobriel@intel.com>
M: Bruce Richardson <bruce.richardson@intel.com>
M: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
+S: Supported
F: lib/hash/
F: doc/guides/prog_guide/hash_lib.rst
F: doc/guides/prog_guide/toeplitz_hash_lib.rst
@@ -1607,6 +1827,7 @@ F: app/test/test_func_reentrancy.c
LPM
M: Bruce Richardson <bruce.richardson@intel.com>
M: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
+S: Supported
F: lib/lpm/
F: doc/guides/prog_guide/lpm*
F: app/test/test_lpm*
@@ -1616,12 +1837,14 @@ F: app/test/test_xmmt_ops.h
Membership - EXPERIMENTAL
M: Yipeng Wang <yipeng1.wang@intel.com>
M: Sameh Gobriel <sameh.gobriel@intel.com>
+S: Supported
F: lib/member/
F: doc/guides/prog_guide/member_lib.rst
F: app/test/test_member*
RIB/FIB
M: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
+S: Supported
F: lib/rib/
F: app/test/test_rib*
F: lib/fib/
@@ -1630,6 +1853,7 @@ F: app/test-fib/
Traffic metering
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+S: Supported
F: lib/meter/
F: doc/guides/sample_app_ug/qos_scheduler.rst
F: app/test/test_meter.c
@@ -1642,12 +1866,14 @@ Other libraries
Configuration file
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+S: Supported
F: lib/cfgfile/
F: app/test/test_cfgfile.c
F: app/test/test_cfgfiles/
Interactive command line
M: Olivier Matz <olivier.matz@6wind.com>
+S: Supported
F: lib/cmdline/
F: app/test-cmdline/
F: app/test/test_cmdline*
@@ -1656,11 +1882,13 @@ F: doc/guides/sample_app_ug/cmd_line.rst
Key/Value parsing
M: Olivier Matz <olivier.matz@6wind.com>
+S: Supported
F: lib/kvargs/
F: app/test/test_kvargs.c
RCU
M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
+S: Supported
F: lib/rcu/
F: app/test/test_rcu*
F: doc/guides/prog_guide/rcu_lib.rst
@@ -1668,11 +1896,13 @@ F: doc/guides/prog_guide/rcu_lib.rst
PCI
M: Chenbo Xia <chenbo.xia@intel.com>
M: Gaetan Rivet <grive@u256.net>
+S: Supported
F: lib/pci/
Power management
M: Anatoly Burakov <anatoly.burakov@intel.com>
M: David Hunt <david.hunt@intel.com>
+S: Supported
F: lib/power/
F: doc/guides/prog_guide/power_man.rst
F: app/test/test_power*
@@ -1683,6 +1913,7 @@ F: doc/guides/sample_app_ug/vm_power_management.rst
Timers
M: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
+S: Supported
F: lib/timer/
F: doc/guides/prog_guide/timer_lib.rst
F: app/test/test_timer*
@@ -1690,25 +1921,30 @@ F: examples/timer/
F: doc/guides/sample_app_ug/timer.rst
Job statistics
+S: Orphan
F: lib/jobstats/
F: examples/l2fwd-jobstats/
F: doc/guides/sample_app_ug/l2_forward_job_stats.rst
Metrics
+S: Orphan
F: lib/metrics/
F: app/test/test_metrics.c
Bit-rate statistics
+S: Orphan
F: lib/bitratestats/
F: app/test/test_bitratestats.c
Latency statistics
M: Reshma Pattan <reshma.pattan@intel.com>
+S: Supported
F: lib/latencystats/
F: app/test/test_latencystats.c
Telemetry
M: Ciara Power <ciara.power@intel.com>
+S: Supported
F: lib/telemetry/
F: app/test/test_telemetry*
F: usertools/dpdk-telemetry*
@@ -1716,6 +1952,7 @@ F: doc/guides/howto/telemetry.rst
BPF
M: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
+S: Supported
F: lib/bpf/
F: examples/bpf/
F: app/test/test_bpf.c
@@ -1727,6 +1964,7 @@ M: Jerin Jacob <jerinj@marvell.com>
M: Kiran Kumar K <kirankumark@marvell.com>
M: Nithin Dabilpuram <ndabilpuram@marvell.com>
M: Zhirun Yan <zhirun.yan@intel.com>
+S: Supported
F: lib/graph/
F: doc/guides/prog_guide/graph_lib.rst
F: app/test/test_graph*
@@ -1736,6 +1974,7 @@ F: doc/guides/sample_app_ug/l3_forward_graph.rst
Nodes - EXPERIMENTAL
M: Nithin Dabilpuram <ndabilpuram@marvell.com>
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
+S: Supported
F: lib/node/
@@ -1743,6 +1982,7 @@ Test Applications
-----------------
Unit tests framework
+S: Maintained
F: app/test/commands.c
F: app/test/has_hugepage.py
F: app/test/packet_burst_generator.c
@@ -1758,45 +1998,53 @@ F: app/test/virtual_pmd.h
Sample packet helper functions for unit test
M: Reshma Pattan <reshma.pattan@intel.com>
+S: Supported
F: app/test/sample_packet_forward.c
F: app/test/sample_packet_forward.h
Networking drivers testing tool
M: Aman Singh <aman.deep.singh@intel.com>
M: Yuying Zhang <yuying.zhang@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net
F: app/test-pmd/
F: doc/guides/testpmd_app_ug/
DMA device performance tool
M: Cheng Jiang <cheng1.jiang@intel.com>
+S: Supported
F: app/test-dma-perf/
F: doc/guides/tools/dmaperf.rst
Flow performance tool
M: Wisam Jaddo <wisamm@nvidia.com>
+S: Supported
F: app/test-flow-perf/
F: doc/guides/tools/flow-perf.rst
Security performance tool
M: Anoob Joseph <anoobj@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-crypto
F: app/test-security-perf/
F: doc/guides/tools/securityperf.rst
Compression performance test application
T: git://dpdk.org/next/dpdk-next-crypto
+S: Orphan
F: app/test-compress-perf/
F: doc/guides/tools/comp_perf.rst
Crypto performance test application
M: Ciara Power <ciara.power@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-crypto
F: app/test-crypto-perf/
F: doc/guides/tools/cryptoperf.rst
Eventdev test application
M: Jerin Jacob <jerinj@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
F: app/test-eventdev/
F: doc/guides/tools/testeventdev.rst
@@ -1806,12 +2054,14 @@ F: app/test/test_event_ring.c
Procinfo tool
M: Maryam Tahhan <maryam.tahhan@intel.com>
M: Reshma Pattan <reshma.pattan@intel.com>
+S: Supported
F: app/proc-info/
F: doc/guides/tools/proc_info.rst
DTS
M: Lijuan Tu <lijuan.tu@intel.com>
M: Juraj Linkeš <juraj.linkes@pantheon.tech>
+S: Supported
F: dts/
F: devtools/dts-check-format.sh
F: doc/guides/tools/dts.rst
@@ -1821,77 +2071,92 @@ Other Example Applications
--------------------------
Ethtool example
+S: Orphan
F: examples/ethtool/
F: doc/guides/sample_app_ug/ethtool.rst
FIPS validation example
M: Brian Dooley <brian.dooley@intel.com>
M: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
+S: Supported
F: examples/fips_validation/
F: doc/guides/sample_app_ug/fips_validation.rst
Flow filtering example
M: Ori Kam <orika@nvidia.com>
+S: Supported
F: examples/flow_filtering/
F: doc/guides/sample_app_ug/flow_filtering.rst
Helloworld example
M: Bruce Richardson <bruce.richardson@intel.com>
+S: Supported
F: examples/helloworld/
F: doc/guides/sample_app_ug/hello_world.rst
IPsec security gateway example
M: Radu Nicolau <radu.nicolau@intel.com>
M: Akhil Goyal <gakhil@marvell.com>
+S: Supported
F: examples/ipsec-secgw/
F: doc/guides/sample_app_ug/ipsec_secgw.rst
IPv4 multicast example
+S: Orphan
F: examples/ipv4_multicast/
F: doc/guides/sample_app_ug/ipv4_multicast.rst
L2 forwarding example
M: Bruce Richardson <bruce.richardson@intel.com>
+S: Supported
F: examples/l2fwd/
F: doc/guides/sample_app_ug/l2_forward_real_virtual.rst
L2 forwarding with cache allocation example
M: Tomasz Kantecki <tomasz.kantecki@intel.com>
+S: Supported
F: doc/guides/sample_app_ug/l2_forward_cat.rst
F: examples/l2fwd-cat/
L2 forwarding with eventdev example
M: Sunil Kumar Kori <skori@marvell.com>
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
F: examples/l2fwd-event/
F: doc/guides/sample_app_ug/l2_forward_event.rst
L3 forwarding example
+S: Maintained
F: examples/l3fwd/
F: doc/guides/sample_app_ug/l3_forward.rst
Link status interrupt example
+S: Maintained
F: examples/link_status_interrupt/
F: doc/guides/sample_app_ug/link_status_intr.rst
PTP client example
M: Kirill Rybalchenko <kirill.rybalchenko@intel.com>
+S: Supported
F: examples/ptpclient/
Rx/Tx callbacks example
M: Bruce Richardson <bruce.richardson@intel.com>
M: John McNamara <john.mcnamara@intel.com>
+S: Supported
F: examples/rxtx_callbacks/
F: doc/guides/sample_app_ug/rxtx_callbacks.rst
Skeleton example
M: Bruce Richardson <bruce.richardson@intel.com>
M: John McNamara <john.mcnamara@intel.com>
+S: Supported
F: examples/skeleton/
F: doc/guides/sample_app_ug/skeleton.rst
VMDq examples
+S: Orphan
F: examples/vmdq/
F: doc/guides/sample_app_ug/vmdq_forwarding.rst
F: examples/vmdq_dcb/
--
2.39.2
* [PATCH 1/1] node: remove MAX macro from all nodes
@ 2023-07-19 12:30 3% Rakesh Kudurumalla
0 siblings, 0 replies; 200+ results
From: Rakesh Kudurumalla @ 2023-07-19 12:30 UTC (permalink / raw)
To: Nithin Dabilpuram, Pavan Nikhilesh; +Cc: dev, jerinj, Rakesh Kudurumalla
Remove the MAX macro from all graph node enums so that new
edges can be added to nodes without ABI breakage.
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
---
Depends-on: series-28807 ("add UDP v4 support")
lib/node/ip4_lookup.c | 2 +-
lib/node/ip6_lookup.c | 2 +-
lib/node/rte_node_ip4_api.h | 2 --
lib/node/rte_node_ip6_api.h | 2 --
4 files changed, 2 insertions(+), 6 deletions(-)
diff --git a/lib/node/ip4_lookup.c b/lib/node/ip4_lookup.c
index d3fc48baf7..21a135a674 100644
--- a/lib/node/ip4_lookup.c
+++ b/lib/node/ip4_lookup.c
@@ -225,7 +225,7 @@ static struct rte_node_register ip4_lookup_node = {
.init = ip4_lookup_node_init,
- .nb_edges = RTE_NODE_IP4_LOOKUP_NEXT_MAX,
+ .nb_edges = RTE_NODE_IP4_LOOKUP_NEXT_IP4_LOCAL + 1,
.next_nodes = {
[RTE_NODE_IP4_LOOKUP_NEXT_IP4_LOCAL] = "ip4_local",
[RTE_NODE_IP4_LOOKUP_NEXT_REWRITE] = "ip4_rewrite",
diff --git a/lib/node/ip6_lookup.c b/lib/node/ip6_lookup.c
index 646e466551..6f56eb5ec5 100644
--- a/lib/node/ip6_lookup.c
+++ b/lib/node/ip6_lookup.c
@@ -362,7 +362,7 @@ static struct rte_node_register ip6_lookup_node = {
.init = ip6_lookup_node_init,
- .nb_edges = RTE_NODE_IP6_LOOKUP_NEXT_MAX,
+ .nb_edges = RTE_NODE_IP6_LOOKUP_NEXT_PKT_DROP + 1,
.next_nodes = {
[RTE_NODE_IP6_LOOKUP_NEXT_REWRITE] = "ip6_rewrite",
[RTE_NODE_IP6_LOOKUP_NEXT_PKT_DROP] = "pkt_drop",
diff --git a/lib/node/rte_node_ip4_api.h b/lib/node/rte_node_ip4_api.h
index 405bdd3283..f9e38a4b14 100644
--- a/lib/node/rte_node_ip4_api.h
+++ b/lib/node/rte_node_ip4_api.h
@@ -32,8 +32,6 @@ enum rte_node_ip4_lookup_next {
/**< Packet drop node. */
RTE_NODE_IP4_LOOKUP_NEXT_IP4_LOCAL,
/** IP Local node. */
- RTE_NODE_IP4_LOOKUP_NEXT_MAX,
- /**< Number of next nodes of lookup node. */
};
/**
diff --git a/lib/node/rte_node_ip6_api.h b/lib/node/rte_node_ip6_api.h
index f3b5a1002a..a538dc2ea7 100644
--- a/lib/node/rte_node_ip6_api.h
+++ b/lib/node/rte_node_ip6_api.h
@@ -30,8 +30,6 @@ enum rte_node_ip6_lookup_next {
/**< Rewrite node. */
RTE_NODE_IP6_LOOKUP_NEXT_PKT_DROP,
/**< Packet drop node. */
- RTE_NODE_IP6_LOOKUP_NEXT_MAX,
- /**< Number of next nodes of lookup node. */
};
/**
--
2.25.1
^ permalink raw reply [relevance 3%]
* Re: [PATCH v2 0/5] bbdev: API extension for 23.11
2023-07-17 22:28 0% ` Chautru, Nicolas
@ 2023-07-18 9:18 0% ` Hemant Agrawal
1 sibling, 0 replies; 200+ results
From: Hemant Agrawal @ 2023-07-18 9:18 UTC (permalink / raw)
To: Nicolas Chautru, dev, maxime.coquelin
Cc: trix, hemant.agrawal, david.marchand, hernan.vargas
Series-
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
On 15-Jun-23 10:18 PM, Nicolas Chautru wrote:
>
>
> v2: moving the new mld functions to the end of struct rte_bbdev to avoid
> ABI offset changes, based on feedback from Maxime.
> Adding a commit to waive the FFT ABI warning since that experimental function
> could break ABI (let me know if preferred to be merged with the FFT
> commit causing the FFT change).
>
>
> Including v1 for extending the bbdev api for 23.11.
> The new MLD-TS is expected to be non-ABI-compatible; the other ones
> should not break the ABI.
> I will send a deprecation notice in parallel.
>
> This introduces a new operation (on top of FEC and FFT) to support
> equalization for MLD-TS. There are also more modular API extensions for
> the existing FFT and FEC operations.
>
> Thanks
> Nic
>
>
> Nicolas Chautru (5):
> bbdev: add operation type for MLDTS processing
> bbdev: add new capabilities for FFT processing
> bbdev: add new capability for FEC 5G UL processing
> bbdev: improving error handling for queue configuration
> devtools: ignore changes into bbdev experimental API
>
> devtools/libabigail.abignore | 4 +-
> doc/guides/prog_guide/bbdev.rst | 83 ++++++++++++++++++
> lib/bbdev/rte_bbdev.c | 26 +++---
> lib/bbdev/rte_bbdev.h | 76 +++++++++++++++++
> lib/bbdev/rte_bbdev_op.h | 143 +++++++++++++++++++++++++++++++-
> lib/bbdev/version.map | 5 ++
> 6 files changed, 323 insertions(+), 14 deletions(-)
>
> --
> 2.34.1
>
^ permalink raw reply [relevance 0%]
* RE: [PATCH 3/3] doc: announce bonding function change
2023-07-17 15:13 3% ` Ferruh Yigit
@ 2023-07-18 1:15 0% ` Chaoyong He
0 siblings, 0 replies; 200+ results
From: Chaoyong He @ 2023-07-18 1:15 UTC (permalink / raw)
To: Ferruh Yigit, dev; +Cc: oss-drivers, Niklas Soderlund, Long Wu
> On 7/14/2023 9:15 AM, Chaoyong He wrote:
> > In order to support inclusive naming, some of the functions in DPDK
> > will need to be renamed. Do this through the deprecation process now
> > for 23.07.
> >
> > Signed-off-by: Long Wu <long.wu@corigine.com>
> > Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
>
> <...>
>
> > --- a/drivers/net/bonding/rte_eth_bond.h
> > +++ b/drivers/net/bonding/rte_eth_bond.h
> > @@ -121,8 +121,16 @@ rte_eth_bond_free(const char *name);
> > * @return
> > * 0 on success, negative value otherwise
> > */
> > +__rte_experimental
> > int
> > -rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t
> > slave_port_id);
> > +rte_eth_bond_member_add(uint16_t bonded_port_id, uint16_t
> > +member_port_id);
> > +
> > +__rte_deprecated
> > +static inline int
> > +rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t
> > +slave_port_id) {
> > + return rte_eth_bond_member_add(bonded_port_id, slave_port_id); }
> >
>
> This will make the old symbols disappear from the shared library, since they
> are static inline functions and not objects in the shared library.
> And this will break the ABI; you can see this from the CI test:
> https://mails.dpdk.org/archives/test-report/2023-July/427987.html
>
> One option is to add the old functions to the .c file and keep the old function
> declarations in the header file with the '__rte_deprecated' attribute.
>
> But I think it is simpler/safer to rename in one go in the v23.11 release, so
> this patch can update only the deprecation notice to list the functions that
> will be renamed in v23.11.
Okay. I will revise per your advice in the v2 patch, thanks.
^ permalink raw reply [relevance 0%]
* RE: [PATCH 2/3] doc: announce bonding data change
2023-07-17 15:03 3% ` Ferruh Yigit
@ 2023-07-18 1:13 0% ` Chaoyong He
0 siblings, 0 replies; 200+ results
From: Chaoyong He @ 2023-07-18 1:13 UTC (permalink / raw)
To: Ferruh Yigit, dev; +Cc: oss-drivers, Niklas Soderlund
> On 7/14/2023 9:15 AM, Chaoyong He wrote:
> > In order to support inclusive naming, the data structure of bonding
> > 8023 info needs to be renamed. Do this through the deprecation process
> > now for 23.07.
> >
> > Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
> > ---
> > doc/guides/rel_notes/deprecation.rst | 3 +++
> > drivers/net/bonding/rte_eth_bond_8023ad.c | 2 +-
> > drivers/net/bonding/rte_eth_bond_8023ad.h | 4 ++--
> > drivers/net/bonding/rte_eth_bond_pmd.c | 4 ++--
> > 4 files changed, 8 insertions(+), 5 deletions(-)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst
> > b/doc/guides/rel_notes/deprecation.rst
> > index c9477dd0da..5b16b66267 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -165,3 +165,6 @@ Deprecation Notices
> > * bonding: The macro ``RTE_ETH_DEV_BONDED_SLAVE`` will be deprecated
> in
> > DPDK 23.07, and removed in DPDK 23.11. The relevant code can be
> updated using
> > ``RTE_ETH_DEV_BONDING_MEMBER``.
> > + The data structure ``struct rte_eth_bond_8023ad_slave_info`` will
> > + be deprecated in DPDK 23.07, and removed in DPDK 23.11. The
> > + relevant code can be updated using ``struct
> rte_eth_bond_8023ad_member_info``.
>
> <...>
>
> > --- a/drivers/net/bonding/rte_eth_bond_8023ad.h
> > +++ b/drivers/net/bonding/rte_eth_bond_8023ad.h
> > @@ -141,7 +141,7 @@ struct rte_eth_bond_8023ad_conf {
> > enum rte_bond_8023ad_agg_selection agg_selection; };
> >
> > -struct rte_eth_bond_8023ad_slave_info {
> > +struct rte_eth_bond_8023ad_member_info {
> > enum rte_bond_8023ad_selection selected;
> > uint8_t actor_state;
> > struct port_params actor;
>
> There is no good way to deprecate struct names.
>
> For macros it is possible to keep both old and new ones; the old ones will
> give a warning but still continue to work, so that is just a heads-up for
> the user.
> But the above is a rename and will break applications, forcing users to
> update their code.
> And if we force users to update their code, this should be done in an
> ABI-break release, v23.11.
>
> That is why I suggest keeping just the deprecation notice update, saying
> that the struct will be renamed in v23.11, without deprecating the struct
> in this release, etc.
Got it. Thanks for pointing it out; I will revise in the v2 patch.
^ permalink raw reply [relevance 0%]
* RE: [PATCH v2 0/5] bbdev: API extension for 23.11
@ 2023-07-17 22:28 0% ` Chautru, Nicolas
2023-08-04 16:14 0% ` Vargas, Hernan
2023-07-18 9:18 0% ` Hemant Agrawal
1 sibling, 1 reply; 200+ results
From: Chautru, Nicolas @ 2023-07-17 22:28 UTC (permalink / raw)
To: dev, maxime.coquelin
Cc: Rix, Tom, hemant.agrawal, david.marchand, Vargas, Hernan
Hi Maxime, Hemant,
Can I get some review/ack for this series, please?
Thanks
Nic
> -----Original Message-----
> From: Chautru, Nicolas <nicolas.chautru@intel.com>
> Sent: Thursday, June 15, 2023 9:49 AM
> To: dev@dpdk.org; maxime.coquelin@redhat.com
> Cc: Rix, Tom <trix@redhat.com>; hemant.agrawal@nxp.com;
> david.marchand@redhat.com; Vargas, Hernan <hernan.vargas@intel.com>;
> Chautru, Nicolas <nicolas.chautru@intel.com>
> Subject: [PATCH v2 0/5] bbdev: API extension for 23.11
>
> v2: moving the new mld functions to the end of struct rte_bbdev to avoid
> ABI offset changes, based on feedback from Maxime.
> Adding a commit to waive the FFT ABI warning since that experimental
> function could break ABI (let me know if preferred to be merged with the
> FFT commit causing the FFT change).
>
>
> Including v1 for extending the bbdev api for 23.11.
> The new MLD-TS is expected to be non-ABI-compatible; the other ones
> should not break the ABI.
> I will send a deprecation notice in parallel.
>
> This introduces a new operation (on top of FEC and FFT) to support
> equalization for MLD-TS. There are also more modular API extensions for
> the existing FFT and FEC operations.
>
> Thanks
> Nic
>
>
> Nicolas Chautru (5):
> bbdev: add operation type for MLDTS processing
> bbdev: add new capabilities for FFT processing
> bbdev: add new capability for FEC 5G UL processing
> bbdev: improving error handling for queue configuration
> devtools: ignore changes into bbdev experimental API
>
> devtools/libabigail.abignore | 4 +-
> doc/guides/prog_guide/bbdev.rst | 83 ++++++++++++++++++
> lib/bbdev/rte_bbdev.c | 26 +++---
> lib/bbdev/rte_bbdev.h | 76 +++++++++++++++++
> lib/bbdev/rte_bbdev_op.h | 143
> +++++++++++++++++++++++++++++++-
> lib/bbdev/version.map | 5 ++
> 6 files changed, 323 insertions(+), 14 deletions(-)
>
> --
> 2.34.1
^ permalink raw reply [relevance 0%]
* Re: [PATCH 3/3] doc: announce bonding function change
@ 2023-07-17 15:13 3% ` Ferruh Yigit
2023-07-18 1:15 0% ` Chaoyong He
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2023-07-17 15:13 UTC (permalink / raw)
To: Chaoyong He, dev; +Cc: oss-drivers, niklas.soderlund, Long Wu
On 7/14/2023 9:15 AM, Chaoyong He wrote:
> In order to support inclusive naming, some of the functions in DPDK will
> need to be renamed. Do this through the deprecation process now for 23.07.
>
> Signed-off-by: Long Wu <long.wu@corigine.com>
> Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
<...>
> --- a/drivers/net/bonding/rte_eth_bond.h
> +++ b/drivers/net/bonding/rte_eth_bond.h
> @@ -121,8 +121,16 @@ rte_eth_bond_free(const char *name);
> * @return
> * 0 on success, negative value otherwise
> */
> +__rte_experimental
> int
> -rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id);
> +rte_eth_bond_member_add(uint16_t bonded_port_id, uint16_t member_port_id);
> +
> +__rte_deprecated
> +static inline int
> +rte_eth_bond_slave_add(uint16_t bonded_port_id, uint16_t slave_port_id)
> +{
> + return rte_eth_bond_member_add(bonded_port_id, slave_port_id);
> +}
>
This will make the old symbols disappear from the shared library, since
they are static inline functions and not objects in the shared library.
And this will break the ABI; you can see this from the CI test:
https://mails.dpdk.org/archives/test-report/2023-July/427987.html
One option is to add the old functions to the .c file and keep the old
function declarations in the header file with the '__rte_deprecated'
attribute.
But I think it is simpler/safer to rename in one go in the v23.11 release,
so this patch can update only the deprecation notice to list the functions
that will be renamed in v23.11.
^ permalink raw reply [relevance 3%]
* Re: [PATCH 2/3] doc: announce bonding data change
@ 2023-07-17 15:03 3% ` Ferruh Yigit
2023-07-18 1:13 0% ` Chaoyong He
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2023-07-17 15:03 UTC (permalink / raw)
To: Chaoyong He, dev; +Cc: oss-drivers, niklas.soderlund
On 7/14/2023 9:15 AM, Chaoyong He wrote:
> In order to support inclusive naming, the data structure of bonding 8023
> info needs to be renamed. Do this through the deprecation process now for
> 23.07.
>
> Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 3 +++
> drivers/net/bonding/rte_eth_bond_8023ad.c | 2 +-
> drivers/net/bonding/rte_eth_bond_8023ad.h | 4 ++--
> drivers/net/bonding/rte_eth_bond_pmd.c | 4 ++--
> 4 files changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index c9477dd0da..5b16b66267 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -165,3 +165,6 @@ Deprecation Notices
> * bonding: The macro ``RTE_ETH_DEV_BONDED_SLAVE`` will be deprecated in
> DPDK 23.07, and removed in DPDK 23.11. The relevant code can be updated using
> ``RTE_ETH_DEV_BONDING_MEMBER``.
> + The data structure ``struct rte_eth_bond_8023ad_slave_info`` will be
> + deprecated in DPDK 23.07, and removed in DPDK 23.11. The relevant code can be
> + updated using ``struct rte_eth_bond_8023ad_member_info``.
<...>
> --- a/drivers/net/bonding/rte_eth_bond_8023ad.h
> +++ b/drivers/net/bonding/rte_eth_bond_8023ad.h
> @@ -141,7 +141,7 @@ struct rte_eth_bond_8023ad_conf {
> enum rte_bond_8023ad_agg_selection agg_selection;
> };
>
> -struct rte_eth_bond_8023ad_slave_info {
> +struct rte_eth_bond_8023ad_member_info {
> enum rte_bond_8023ad_selection selected;
> uint8_t actor_state;
> struct port_params actor;
There is no good way to deprecate struct names.
For macros it is possible to keep both old and new ones; the old ones will
give a warning but still continue to work, so that is just a heads-up for
the user.
But the above is a rename and will break applications, forcing users to
update their code.
And if we force users to update their code, this should be done in an
ABI-break release, v23.11.
That is why I suggest keeping just the deprecation notice update, saying
that the struct will be renamed in v23.11, without deprecating the struct
in this release, etc.
^ permalink raw reply [relevance 3%]
* Re: [PATCH v1] doc: deprecation notice to add callback data to rte_event_fp_ops
2023-07-17 11:43 0% ` Jerin Jacob
@ 2023-07-17 12:42 0% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2023-07-17 12:42 UTC (permalink / raw)
To: Jerin Jacob, Sivaprasad Tummala
Cc: dev, bruce.richardson, david.marchand, thomas
On 7/17/2023 12:43 PM, Jerin Jacob wrote:
> On Mon, Jul 17, 2023 at 4:54 PM Sivaprasad Tummala
> <sivaprasad.tummala@amd.com> wrote:
>>
>> Deprecation notice to add "rte_eventdev_port_data" field to
>> ``rte_event_fp_ops`` for callback support.
>>
>> Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
>
> Acked-by: Jerin Jacob <jerinj@marvell.com>
>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
>
>> ---
>> doc/guides/rel_notes/deprecation.rst | 7 +++++++
>> 1 file changed, 7 insertions(+)
>>
>> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
>> index fb771a0305..057f97ce5a 100644
>> --- a/doc/guides/rel_notes/deprecation.rst
>> +++ b/doc/guides/rel_notes/deprecation.rst
>> @@ -130,6 +130,13 @@ Deprecation Notices
>> ``rte_cryptodev_get_auth_algo_string``, ``rte_cryptodev_get_aead_algo_string`` and
>> ``rte_cryptodev_asym_get_xform_string`` respectively.
>>
>> +* eventdev: The struct rte_event_fp_ops will be updated with a new element,
>> + rte_eventdev_port_data, to support optional callbacks in DPDK 23.11.
>> + rte_eventdev_port_data holds the callbacks optionally registered per event
>> + device port and their associated callback data. Adding rte_eventdev_port_data
>> + to rte_event_fp_ops allows fast-path eventdev inline functions to fetch this
>> + data in advance. This changes the size of rte_event_fp_ops and results in an ABI change.
>> +
>> * security: Hide structures ``rte_security_ops`` and ``rte_security_ctx``
>> as these are internal to DPDK library and drivers.
>>
>> --
>> 2.34.1
>>
^ permalink raw reply [relevance 0%]
* Re: [PATCH v1] doc: deprecation notice to add callback data to rte_event_fp_ops
2023-07-17 11:24 5% ` [PATCH v1] " Sivaprasad Tummala
@ 2023-07-17 11:43 0% ` Jerin Jacob
2023-07-17 12:42 0% ` Ferruh Yigit
2023-07-25 8:40 0% ` Ferruh Yigit
1 sibling, 1 reply; 200+ results
From: Jerin Jacob @ 2023-07-17 11:43 UTC (permalink / raw)
To: Sivaprasad Tummala
Cc: dev, ferruh.yigit, bruce.richardson, david.marchand, thomas
On Mon, Jul 17, 2023 at 4:54 PM Sivaprasad Tummala
<sivaprasad.tummala@amd.com> wrote:
>
> Deprecation notice to add "rte_eventdev_port_data" field to
> ``rte_event_fp_ops`` for callback support.
>
> Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index fb771a0305..057f97ce5a 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -130,6 +130,13 @@ Deprecation Notices
> ``rte_cryptodev_get_auth_algo_string``, ``rte_cryptodev_get_aead_algo_string`` and
> ``rte_cryptodev_asym_get_xform_string`` respectively.
>
> +* eventdev: The struct rte_event_fp_ops will be updated with a new element,
> + rte_eventdev_port_data, to support optional callbacks in DPDK 23.11.
> + rte_eventdev_port_data holds the callbacks optionally registered per event
> + device port and their associated callback data. Adding rte_eventdev_port_data
> + to rte_event_fp_ops allows fast-path eventdev inline functions to fetch this
> + data in advance. This changes the size of rte_event_fp_ops and results in an ABI change.
> +
> * security: Hide structures ``rte_security_ops`` and ``rte_security_ctx``
> as these are internal to DPDK library and drivers.
>
> --
> 2.34.1
>
^ permalink raw reply [relevance 0%]
* [PATCH v1] doc: deprecation notice to add callback data to rte_event_fp_ops
2023-07-12 17:30 5% [PATCH] doc: deprecation notice to add callback data to rte_event_fp_ops Sivaprasad Tummala
2023-07-13 8:51 0% ` Jerin Jacob
@ 2023-07-17 11:24 5% ` Sivaprasad Tummala
2023-07-17 11:43 0% ` Jerin Jacob
2023-07-25 8:40 0% ` Ferruh Yigit
1 sibling, 2 replies; 200+ results
From: Sivaprasad Tummala @ 2023-07-17 11:24 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, bruce.richardson, david.marchand, thomas, jerinjacobk
Deprecation notice to add "rte_eventdev_port_data" field to
``rte_event_fp_ops`` for callback support.
Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
---
doc/guides/rel_notes/deprecation.rst | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index fb771a0305..057f97ce5a 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -130,6 +130,13 @@ Deprecation Notices
``rte_cryptodev_get_auth_algo_string``, ``rte_cryptodev_get_aead_algo_string`` and
``rte_cryptodev_asym_get_xform_string`` respectively.
+* eventdev: The struct rte_event_fp_ops will be updated with a new element,
+ rte_eventdev_port_data, to support optional callbacks in DPDK 23.11.
+ rte_eventdev_port_data holds the callbacks optionally registered per event
+ device port and their associated callback data. Adding rte_eventdev_port_data
+ to rte_event_fp_ops allows fast-path eventdev inline functions to fetch this
+ data in advance. This changes the size of rte_event_fp_ops and results in an ABI change.
+
* security: Hide structures ``rte_security_ops`` and ``rte_security_ctx``
as these are internal to DPDK library and drivers.
--
2.34.1
^ permalink raw reply [relevance 5%]
* Re: [PATCH] doc: announce ethdev operation struct changes
2023-07-13 12:50 0% ` Morten Brørup
@ 2023-07-17 8:28 0% ` Andrew Rybchenko
0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2023-07-17 8:28 UTC (permalink / raw)
To: Morten Brørup, Feifei Wang, dev
Cc: nd, Honnappa Nagarahalli, Ruifeng Wang, Konstantin Ananyev,
Ferruh Yigit, thomas
On 7/13/23 15:50, Morten Brørup wrote:
>> From: Feifei Wang [mailto:Feifei.Wang2@arm.com]
>> Sent: Thursday, 13 July 2023 04.37
>>
>>> From: Feifei Wang
>>> Sent: Tuesday, July 4, 2023 4:17 PM
>>>
>>>> From: Feifei Wang <feifei.wang2@arm.com>
>>>> Sent: Tuesday, July 4, 2023 4:10 PM
>>>>
>>>> To support mbufs recycle mode, announce the coming ABI changes from
>>>> DPDK 23.11.
>>>>
>>>> Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
>>>> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
>>>> ---
>>>> doc/guides/rel_notes/deprecation.rst | 4 ++++
>>>> 1 file changed, 4 insertions(+)
>>>>
>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>>> b/doc/guides/rel_notes/deprecation.rst
>>>> index 66431789b0..c7e1ffafb2 100644
>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>> @@ -118,6 +118,10 @@ Deprecation Notices
>>>> The legacy actions should be removed
>>>> once ``MODIFY_FIELD`` alternative is implemented in drivers.
>>>>
>>>> +* ethdev: The Ethernet device data structure ``struct rte_eth_dev`` and
>>>> +  the fast-path ethdev flat array ``struct rte_eth_fp_ops`` will be updated
>>>> +  with new fields to support mbufs recycle mode from DPDK 23.11.
>>>> +
>>>> * cryptodev: The function ``rte_cryptodev_cb_fn`` will be updated
>>>> to have another parameter ``qp_id`` to return the queue pair ID
>>>> which got error interrupt to the application,
>>>> --
>>>> 2.25.1
>>
>> Ping~
>
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
^ permalink raw reply [relevance 0%]
* [RFC] MAINTAINERS: add status information
@ 2023-07-16 21:25 1% Stephen Hemminger
2023-07-19 16:07 1% ` [PATCH v2] " Stephen Hemminger
` (4 more replies)
0 siblings, 5 replies; 200+ results
From: Stephen Hemminger @ 2023-07-16 21:25 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Thomas Monjalon
Add a new field S: which indicates the status of support for
individual sub-trees. Almost everything is marked as supported,
but components without any maintainer are listed as Orphan.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
MAINTAINERS | 267 +++++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 266 insertions(+), 1 deletion(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index 5bb8090ebe7e..d523ea34d4f3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -17,6 +17,16 @@ Descriptions of section entries:
X: Files and directories exclusion, same rules as F:
K: Keyword regex pattern to match content.
One regex pattern per line. Multiple K: lines acceptable.
+ S: *Status*, one of the following:
+ Supported: Someone is actually paid to look after this.
+ Maintained: Someone actually looks after it.
+ Odd Fixes: It has a maintainer but they don't have time to do
+ much other than throw the odd patch in. See below..
+ Orphan: No current maintainer [but maybe you could take the
+ role as you write your new code].
+ Obsolete: Old code. Something tagged obsolete generally means
+ it has been replaced by a better system and you
+ should be using that.
General Project Administration
@@ -25,44 +35,54 @@ General Project Administration
Main Branch
M: Thomas Monjalon <thomas@monjalon.net>
M: David Marchand <david.marchand@redhat.com>
+S: Supported
T: git://dpdk.org/dpdk
Next-net Tree
M: Ferruh Yigit <ferruh.yigit@amd.com>
M: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net
Next-net-brcm Tree
M: Ajit Khaparde <ajit.khaparde@broadcom.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-brcm
Next-net-intel Tree
M: Qi Zhang <qi.z.zhang@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
Next-net-mrvl Tree
M: Jerin Jacob <jerinj@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
Next-net-mlx Tree
M: Raslan Darawsheh <rasland@nvidia.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mlx
Next-virtio Tree
M: Maxime Coquelin <maxime.coquelin@redhat.com>
M: Chenbo Xia <chenbo.xia@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-virtio
Next-crypto Tree
M: Akhil Goyal <gakhil@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-crypto
Next-eventdev Tree
M: Jerin Jacob <jerinj@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
Next-baseband Tree
M: Maxime Coquelin <maxime.coquelin@redhat.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-baseband
Stable Branches
@@ -70,17 +90,21 @@ M: Luca Boccassi <bluca@debian.org>
M: Kevin Traynor <ktraynor@redhat.com>
M: Christian Ehrhardt <christian.ehrhardt@canonical.com>
M: Xueming Li <xuemingl@nvidia.com>
+S: Supported
T: git://dpdk.org/dpdk-stable
Security Issues
M: maintainers@dpdk.org
+S: Supported
Documentation (with overlaps)
F: README
F: doc/
+S: Supported
Developers and Maintainers Tools
M: Thomas Monjalon <thomas@monjalon.net>
+S: Supported
F: MAINTAINERS
F: devtools/build-dict.sh
F: devtools/check-abi.sh
@@ -110,7 +134,7 @@ F: .mailmap
Build System
M: Bruce Richardson <bruce.richardson@intel.com>
-F: Makefile
+S: Maintained
F: meson.build
F: meson_options.txt
F: config/
@@ -130,11 +154,13 @@ F: devtools/check-meson.py
Public CI
M: Aaron Conole <aconole@redhat.com>
M: Michael Santana <maicolgabriel@hotmail.com>
+S: Supported
F: .github/workflows/build.yml
F: .ci/
Driver information
M: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
+S: Maintained
F: buildtools/coff.py
F: buildtools/gen-pmdinfo-cfile.py
F: buildtools/pmdinfogen.py
@@ -147,6 +173,7 @@ Environment Abstraction Layer
T: git://dpdk.org/dpdk
EAL API and common code
+S: Supported
F: lib/eal/common/
F: lib/eal/unix/
F: lib/eal/include/
@@ -180,6 +207,7 @@ F: app/test/test_version.c
Trace - EXPERIMENTAL
M: Jerin Jacob <jerinj@marvell.com>
M: Sunil Kumar Kori <skori@marvell.com>
+S: Supported
F: lib/eal/include/rte_trace*.h
F: lib/eal/common/eal_common_trace*.c
F: lib/eal/common/eal_trace.h
@@ -188,6 +216,7 @@ F: app/test/test_trace*
Memory Allocation
M: Anatoly Burakov <anatoly.burakov@intel.com>
+S: Supported
F: lib/eal/include/rte_fbarray.h
F: lib/eal/include/rte_mem*
F: lib/eal/include/rte_malloc.h
@@ -209,11 +238,13 @@ F: app/test/test_memzone.c
Interrupt Subsystem
M: Harman Kalra <hkalra@marvell.com>
+S: Supported
F: lib/eal/include/rte_epoll.h
F: lib/eal/*/*interrupts.*
F: app/test/test_interrupts.c
Keep alive
+S: Orphan
F: lib/eal/include/rte_keepalive.h
F: lib/eal/common/rte_keepalive.c
F: examples/l2fwd-keepalive/
@@ -221,6 +252,7 @@ F: doc/guides/sample_app_ug/keep_alive.rst
Secondary process
M: Anatoly Burakov <anatoly.burakov@intel.com>
+S: Maintained
K: RTE_PROC_
F: lib/eal/common/eal_common_proc.c
F: doc/guides/prog_guide/multi_proc_support.rst
@@ -230,6 +262,7 @@ F: doc/guides/sample_app_ug/multi_process.rst
Service Cores
M: Harry van Haaren <harry.van.haaren@intel.com>
+S: Supported
F: lib/eal/include/rte_service.h
F: lib/eal/include/rte_service_component.h
F: lib/eal/common/rte_service.c
@@ -240,44 +273,52 @@ F: doc/guides/sample_app_ug/service_cores.rst
Bitops
M: Joyce Kong <joyce.kong@arm.com>
+S: Supported
F: lib/eal/include/rte_bitops.h
F: app/test/test_bitops.c
Bitmap
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+S: Supported
F: lib/eal/include/rte_bitmap.h
F: app/test/test_bitmap.c
MCSlock
M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
+S: Supported
F: lib/eal/include/rte_mcslock.h
F: app/test/test_mcslock.c
Sequence Lock
M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
+S: Supported
F: lib/eal/include/rte_seqcount.h
F: lib/eal/include/rte_seqlock.h
F: app/test/test_seqlock.c
Ticketlock
M: Joyce Kong <joyce.kong@arm.com>
+S: Supported
F: lib/eal/include/rte_ticketlock.h
F: app/test/test_ticketlock.c
Pseudo-random Number Generation
M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
+S: Supported
F: lib/eal/include/rte_random.h
F: lib/eal/common/rte_random.c
F: app/test/test_rand_perf.c
ARM v7
M: Ruifeng Wang <ruifeng.wang@arm.com>
+S: Supported
F: config/arm/
F: lib/eal/arm/
X: lib/eal/arm/include/*_64.h
ARM v8
M: Ruifeng Wang <ruifeng.wang@arm.com>
+S: Supported
F: config/arm/
F: doc/guides/linux_gsg/cross_build_dpdk_for_arm64.rst
F: lib/eal/arm/
@@ -291,12 +332,14 @@ F: examples/common/neon/
LoongArch
M: Min Zhou <zhoumin@loongson.cn>
+S: Supported
F: config/loongarch/
F: doc/guides/linux_gsg/cross_build_dpdk_for_loongarch.rst
F: lib/eal/loongarch/
IBM POWER (alpha)
M: David Christensen <drc@linux.vnet.ibm.com>
+S: Supported
F: config/ppc/
F: lib/eal/ppc/
F: lib/*/*_altivec*
@@ -307,6 +350,7 @@ F: examples/common/altivec/
RISC-V
M: Stanislaw Kardach <kda@semihalf.com>
+S: Supported
F: config/riscv/
F: doc/guides/linux_gsg/cross_build_dpdk_for_riscv.rst
F: lib/eal/riscv/
@@ -314,6 +358,7 @@ F: lib/eal/riscv/
Intel x86
M: Bruce Richardson <bruce.richardson@intel.com>
M: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
+S: Supported
F: config/x86/
F: doc/guides/linux_gsg/nic_perf_intel_platform.rst
F: buildtools/binutils-avx512-check.py
@@ -330,28 +375,34 @@ F: examples/*/*_avx*
F: examples/common/sse/
Linux EAL (with overlaps)
+S: Maintained
F: lib/eal/linux/
F: doc/guides/linux_gsg/
Linux UIO
+S: Maintained
F: drivers/bus/pci/linux/*uio*
Linux VFIO
M: Anatoly Burakov <anatoly.burakov@intel.com>
+S: Supported
F: lib/eal/linux/*vfio*
F: drivers/bus/pci/linux/*vfio*
FreeBSD EAL (with overlaps)
M: Bruce Richardson <bruce.richardson@intel.com>
+S: Supported
F: lib/eal/freebsd/
F: doc/guides/freebsd_gsg/
FreeBSD contigmem
M: Bruce Richardson <bruce.richardson@intel.com>
+S: Supported
F: kernel/freebsd/contigmem/
FreeBSD UIO
M: Bruce Richardson <bruce.richardson@intel.com>
+S: Supported
F: kernel/freebsd/nic_uio/
Windows support
@@ -359,12 +410,14 @@ M: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
M: Narcisa Ana Maria Vasile <navasile@linux.microsoft.com>
M: Dmitry Malloy <dmitrym@microsoft.com>
M: Pallavi Kadam <pallavi.kadam@intel.com>
+S: Supported
F: lib/eal/windows/
F: buildtools/map_to_win.py
F: doc/guides/windows_gsg/
Windows memory allocation
M: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
+S: Supported
F: lib/eal/windows/eal_hugepages.c
F: lib/eal/windows/eal_mem*
@@ -372,10 +425,12 @@ F: lib/eal/windows/eal_mem*
Core Libraries
--------------
T: git://dpdk.org/dpdk
+S: Maintained
Memory pool
M: Olivier Matz <olivier.matz@6wind.com>
M: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
+S: Supported
F: lib/mempool/
F: drivers/mempool/ring/
F: doc/guides/prog_guide/mempool_lib.rst
@@ -385,6 +440,7 @@ F: app/test/test_func_reentrancy.c
Ring queue
M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
M: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
+S: Supported
F: lib/ring/
F: doc/guides/prog_guide/ring_lib.rst
F: app/test/test_ring*
@@ -392,6 +448,7 @@ F: app/test/test_func_reentrancy.c
Stack
M: Olivier Matz <olivier.matz@6wind.com>
+S: Supported
F: lib/stack/
F: drivers/mempool/stack/
F: app/test/test_stack*
@@ -399,6 +456,7 @@ F: doc/guides/prog_guide/stack_lib.rst
Packet buffer
M: Olivier Matz <olivier.matz@6wind.com>
+S: Supported
F: lib/mbuf/
F: doc/guides/prog_guide/mbuf_lib.rst
F: app/test/test_mbuf.c
@@ -407,6 +465,7 @@ Ethernet API
M: Thomas Monjalon <thomas@monjalon.net>
M: Ferruh Yigit <ferruh.yigit@amd.com>
M: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net
F: lib/ethdev/
F: app/test/test_ethdev*
@@ -415,6 +474,7 @@ F: doc/guides/prog_guide/switch_representation.rst
Flow API
M: Ori Kam <orika@nvidia.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net
F: app/test-pmd/cmdline_flow.c
F: doc/guides/prog_guide/rte_flow.rst
@@ -422,18 +482,21 @@ F: lib/ethdev/rte_flow*
Traffic Management API - EXPERIMENTAL
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net
F: lib/ethdev/rte_tm*
F: app/test-pmd/cmdline_tm.*
Traffic Metering and Policing API - EXPERIMENTAL
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net
F: lib/ethdev/rte_mtr*
F: app/test-pmd/cmdline_mtr.*
Baseband API
M: Nicolas Chautru <nicolas.chautru@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-baseband
F: lib/bbdev/
F: doc/guides/prog_guide/bbdev.rst
@@ -446,6 +509,7 @@ F: doc/guides/sample_app_ug/bbdev_app.rst
Crypto API
M: Akhil Goyal <gakhil@marvell.com>
M: Fan Zhang <fanzhang.oss@gmail.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/cryptodev/
F: app/test/test_cryptodev*
@@ -453,6 +517,7 @@ F: examples/l2fwd-crypto/
Security API
M: Akhil Goyal <gakhil@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/security/
F: doc/guides/prog_guide/rte_security.rst
@@ -461,6 +526,7 @@ F: app/test/test_security*
Compression API - EXPERIMENTAL
M: Fan Zhang <fanzhang.oss@gmail.com>
M: Ashish Gupta <ashish.gupta@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/compressdev/
F: drivers/compress/
@@ -470,6 +536,7 @@ F: doc/guides/compressdevs/features/default.ini
RegEx API - EXPERIMENTAL
M: Ori Kam <orika@nvidia.com>
+S: Supported
F: lib/regexdev/
F: app/test-regex/
F: doc/guides/prog_guide/regexdev.rst
@@ -477,6 +544,7 @@ F: doc/guides/regexdevs/features/default.ini
Machine Learning device API - EXPERIMENTAL
M: Srikanth Yalavarthi <syalavarthi@marvell.com>
+S: Supported
F: lib/mldev/
F: doc/guides/prog_guide/mldev.rst
F: app/test-mldev/
@@ -484,6 +552,7 @@ F: doc/guides/tools/testmldev.rst
DMA device API - EXPERIMENTAL
M: Chengwen Feng <fengchengwen@huawei.com>
+S: Supported
F: lib/dmadev/
F: drivers/dma/skeleton/
F: app/test/test_dmadev*
@@ -495,6 +564,7 @@ F: doc/guides/sample_app_ug/dma.rst
General-Purpose Graphics Processing Unit (GPU) API - EXPERIMENTAL
M: Elena Agostini <eagostini@nvidia.com>
+S: Supported
F: lib/gpudev/
F: doc/guides/prog_guide/gpudev.rst
F: doc/guides/gpus/features/default.ini
@@ -502,6 +572,7 @@ F: app/test-gpudev/
Eventdev API
M: Jerin Jacob <jerinj@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
F: lib/eventdev/
F: drivers/event/skeleton/
@@ -510,6 +581,7 @@ F: examples/l3fwd/l3fwd_event*
Eventdev Ethdev Rx Adapter API
M: Naga Harish K S V <s.v.naga.harish.k@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
F: lib/eventdev/*eth_rx_adapter*
F: app/test/test_event_eth_rx_adapter.c
@@ -517,6 +589,7 @@ F: doc/guides/prog_guide/event_ethernet_rx_adapter.rst
Eventdev Ethdev Tx Adapter API
M: Naga Harish K S V <s.v.naga.harish.k@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
F: lib/eventdev/*eth_tx_adapter*
F: app/test/test_event_eth_tx_adapter.c
@@ -524,6 +597,7 @@ F: doc/guides/prog_guide/event_ethernet_tx_adapter.rst
Eventdev Timer Adapter API
M: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
F: lib/eventdev/*timer_adapter*
F: app/test/test_event_timer_adapter.c
@@ -531,6 +605,7 @@ F: doc/guides/prog_guide/event_timer_adapter.rst
Eventdev Crypto Adapter API
M: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
F: lib/eventdev/*crypto_adapter*
F: app/test/test_event_crypto_adapter.c
@@ -539,6 +614,7 @@ F: doc/guides/prog_guide/event_crypto_adapter.rst
Raw device API
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
+S: Supported
F: lib/rawdev/
F: drivers/raw/skeleton/
F: app/test/test_rawdev.c
@@ -551,11 +627,13 @@ Memory Pool Drivers
Bucket memory pool
M: Artem V. Andreev <artem.andreev@oktetlabs.ru>
M: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
+S: Supported
F: drivers/mempool/bucket/
Marvell cnxk
M: Ashwin Sekhar T K <asekhar@marvell.com>
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/mempool/cnxk/
F: doc/guides/mempool/cnxk.rst
@@ -567,20 +645,24 @@ Bus Drivers
AMD CDX bus
M: Nipun Gupta <nipun.gupta@amd.com>
M: Nikhil Agarwal <nikhil.agarwal@amd.com>
+S: Supported
F: drivers/bus/cdx/
Auxiliary bus driver - EXPERIMENTAL
M: Parav Pandit <parav@nvidia.com>
M: Xueming Li <xuemingl@nvidia.com>
+S: Supported
F: drivers/bus/auxiliary/
Intel FPGA bus
M: Rosen Xu <rosen.xu@intel.com>
+S: Supported
F: drivers/bus/ifpga/
NXP buses
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
+S: Supported
F: drivers/common/dpaax/
F: drivers/bus/dpaa/
F: drivers/bus/fslmc/
@@ -588,36 +670,43 @@ F: drivers/bus/fslmc/
PCI bus driver
M: Chenbo Xia <chenbo.xia@intel.com>
M: Nipun Gupta <nipun.gupta@amd.com>
+S: Supported
F: drivers/bus/pci/
Platform bus driver
M: Tomasz Duszynski <tduszynski@marvell.com>
+S: Supported
F: drivers/bus/platform/
VDEV bus driver
+S: Maintained
F: drivers/bus/vdev/
F: app/test/test_vdev.c
VMBUS bus driver
M: Long Li <longli@microsoft.com>
+S: Supported
F: drivers/bus/vmbus/
Networking Drivers
------------------
M: Ferruh Yigit <ferruh.yigit@amd.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net
F: doc/guides/nics/features/default.ini
Link bonding
M: Chas Williams <chas3@att.com>
M: Min Hu (Connor) <humin29@huawei.com>
+S: Supported
F: drivers/net/bonding/
F: doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
F: app/test/test_link_bonding*
F: examples/bond/
Linux KNI
+S: Obsolete
F: kernel/linux/kni/
F: lib/kni/
F: doc/guides/prog_guide/kernel_nic_interface.rst
@@ -625,12 +714,14 @@ F: app/test/test_kni.c
Linux AF_PACKET
M: John W. Linville <linville@tuxdriver.com>
+S: Odd Fixes
F: drivers/net/af_packet/
F: doc/guides/nics/features/afpacket.ini
Linux AF_XDP
M: Ciara Loftus <ciara.loftus@intel.com>
M: Qi Zhang <qi.z.zhang@intel.com>
+S: Supported
F: drivers/net/af_xdp/
F: doc/guides/nics/af_xdp.rst
F: doc/guides/nics/features/af_xdp.ini
@@ -641,24 +732,28 @@ M: Shai Brandes <shaibran@amazon.com>
M: Evgeny Schemeilin <evgenys@amazon.com>
M: Igor Chauskin <igorch@amazon.com>
M: Ron Beider <rbeider@amazon.com>
+S: Supported
F: drivers/net/ena/
F: doc/guides/nics/ena.rst
F: doc/guides/nics/features/ena.ini
AMD axgbe
M: Chandubabu Namburu <chandu@amd.com>
+S: Supported
F: drivers/net/axgbe/
F: doc/guides/nics/axgbe.rst
F: doc/guides/nics/features/axgbe.ini
AMD Pensando ionic
M: Andrew Boyer <andrew.boyer@amd.com>
+S: Supported
F: drivers/net/ionic/
F: doc/guides/nics/ionic.rst
F: doc/guides/nics/features/ionic.ini
Marvell/Aquantia atlantic
M: Igor Russkikh <irusskikh@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/atlantic/
F: doc/guides/nics/atlantic.rst
@@ -668,6 +763,7 @@ Atomic Rules ARK
M: Shepard Siegel <shepard.siegel@atomicrules.com>
M: Ed Czeck <ed.czeck@atomicrules.com>
M: John Miller <john.miller@atomicrules.com>
+S: Supported
F: drivers/net/ark/
F: doc/guides/nics/ark.rst
F: doc/guides/nics/features/ark.ini
@@ -675,6 +771,7 @@ F: doc/guides/nics/features/ark.ini
Broadcom bnxt
M: Ajit Khaparde <ajit.khaparde@broadcom.com>
M: Somnath Kotur <somnath.kotur@broadcom.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-brcm
F: drivers/net/bnxt/
F: doc/guides/nics/bnxt.rst
@@ -683,6 +780,7 @@ F: doc/guides/nics/features/bnxt.ini
Cavium ThunderX nicvf
M: Jerin Jacob <jerinj@marvell.com>
M: Maciej Czekaj <mczekaj@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/thunderx/
F: doc/guides/nics/thunderx.rst
@@ -690,6 +788,7 @@ F: doc/guides/nics/features/thunderx.ini
Cavium OCTEON TX
M: Harman Kalra <hkalra@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/common/octeontx/
F: drivers/mempool/octeontx/
@@ -699,6 +798,7 @@ F: doc/guides/nics/features/octeontx.ini
Chelsio cxgbe
M: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
+S: Supported
F: drivers/net/cxgbe/
F: doc/guides/nics/cxgbe.rst
F: doc/guides/nics/features/cxgbe.ini
@@ -706,6 +806,7 @@ F: doc/guides/nics/features/cxgbe.ini
Cisco enic
M: John Daley <johndale@cisco.com>
M: Hyong Youb Kim <hyonkim@cisco.com>
+S: Supported
F: drivers/net/enic/
F: doc/guides/nics/enic.rst
F: doc/guides/nics/features/enic.ini
@@ -715,6 +816,7 @@ M: Junfeng Guo <junfeng.guo@intel.com>
M: Jeroen de Borst <jeroendb@google.com>
M: Rushil Gupta <rushilg@google.com>
M: Joshua Washington <joshwash@google.com>
+S: Supported
F: drivers/net/gve/
F: doc/guides/nics/gve.rst
F: doc/guides/nics/features/gve.ini
@@ -722,6 +824,7 @@ F: doc/guides/nics/features/gve.ini
Hisilicon hns3
M: Dongdong Liu <liudongdong3@huawei.com>
M: Yisen Zhuang <yisen.zhuang@huawei.com>
+S: Supported
F: drivers/net/hns3/
F: doc/guides/nics/hns3.rst
F: doc/guides/nics/features/hns3.ini
@@ -730,6 +833,7 @@ Huawei hinic
M: Ziyang Xuan <xuanziyang2@huawei.com>
M: Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>
M: Guoyang Zhou <zhouguoyang@huawei.com>
+S: Supported
F: drivers/net/hinic/
F: doc/guides/nics/hinic.rst
F: doc/guides/nics/features/hinic.ini
@@ -737,6 +841,7 @@ F: doc/guides/nics/features/hinic.ini
Intel e1000
M: Simei Su <simei.su@intel.com>
M: Wenjun Wu <wenjun1.wu@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/e1000/
F: doc/guides/nics/e1000em.rst
@@ -747,6 +852,7 @@ F: doc/guides/nics/features/igb*.ini
Intel ixgbe
M: Qiming Yang <qiming.yang@intel.com>
M: Wenjun Wu <wenjun1.wu@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/ixgbe/
F: doc/guides/nics/ixgbe.rst
@@ -756,6 +862,7 @@ F: doc/guides/nics/features/ixgbe*.ini
Intel i40e
M: Yuying Zhang <Yuying.Zhang@intel.com>
M: Beilei Xing <beilei.xing@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/i40e/
F: doc/guides/nics/i40e.rst
@@ -765,6 +872,7 @@ F: doc/guides/nics/features/i40e*.ini
Intel fm10k
M: Qi Zhang <qi.z.zhang@intel.com>
M: Xiao Wang <xiao.w.wang@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/fm10k/
F: doc/guides/nics/fm10k.rst
@@ -773,6 +881,7 @@ F: doc/guides/nics/features/fm10k*.ini
Intel iavf
M: Jingjing Wu <jingjing.wu@intel.com>
M: Beilei Xing <beilei.xing@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/iavf/
F: drivers/common/iavf/
@@ -781,6 +890,7 @@ F: doc/guides/nics/features/iavf*.ini
Intel ice
M: Qiming Yang <qiming.yang@intel.com>
M: Qi Zhang <qi.z.zhang@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/ice/
F: doc/guides/nics/ice.rst
@@ -789,6 +899,7 @@ F: doc/guides/nics/features/ice.ini
Intel idpf
M: Jingjing Wu <jingjing.wu@intel.com>
M: Beilei Xing <beilei.xing@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/idpf/
F: drivers/common/idpf/
@@ -798,6 +909,7 @@ F: doc/guides/nics/features/idpf.ini
Intel cpfl - EXPERIMENTAL
M: Yuying Zhang <yuying.zhang@intel.com>
M: Beilei Xing <beilei.xing@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/cpfl/
F: doc/guides/nics/cpfl.rst
@@ -806,6 +918,7 @@ F: doc/guides/nics/features/cpfl.ini
Intel igc
M: Junfeng Guo <junfeng.guo@intel.com>
M: Simei Su <simei.su@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/igc/
F: doc/guides/nics/igc.rst
@@ -814,6 +927,7 @@ F: doc/guides/nics/features/igc.ini
Intel ipn3ke
M: Rosen Xu <rosen.xu@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/net/ipn3ke/
F: doc/guides/nics/ipn3ke.rst
F: doc/guides/nics/features/ipn3ke.ini
@@ -823,6 +937,7 @@ M: Nithin Dabilpuram <ndabilpuram@marvell.com>
M: Kiran Kumar K <kirankumark@marvell.com>
M: Sunil Kumar Kori <skori@marvell.com>
M: Satha Rao <skoteshwar@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/common/cnxk/
F: drivers/net/cnxk/
@@ -832,6 +947,7 @@ F: doc/guides/platform/cnxk.rst
Marvell mvpp2
M: Liron Himi <lironh@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/common/mvep/
F: drivers/net/mvpp2/
@@ -841,6 +957,7 @@ F: doc/guides/nics/features/mvpp2.ini
Marvell mvneta
M: Zyta Szpak <zr@semihalf.com>
M: Liron Himi <lironh@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/mvneta/
F: doc/guides/nics/mvneta.rst
@@ -848,6 +965,7 @@ F: doc/guides/nics/features/mvneta.ini
Marvell OCTEON TX EP - endpoint
M: Vamsi Attunuru <vattunuru@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/octeon_ep/
F: doc/guides/nics/features/octeon_ep.ini
@@ -856,6 +974,7 @@ F: doc/guides/nics/octeon_ep.rst
NVIDIA mlx4
M: Matan Azrad <matan@nvidia.com>
M: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mlx
F: drivers/net/mlx4/
F: doc/guides/nics/mlx4.rst
@@ -866,6 +985,7 @@ M: Matan Azrad <matan@nvidia.com>
M: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
M: Ori Kam <orika@nvidia.com>
M: Suanming Mou <suanmingm@nvidia.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mlx
F: drivers/common/mlx5/
F: drivers/net/mlx5/
@@ -875,23 +995,27 @@ F: doc/guides/nics/features/mlx5.ini
Microsoft mana
M: Long Li <longli@microsoft.com>
+S: Supported
F: drivers/net/mana/
F: doc/guides/nics/mana.rst
F: doc/guides/nics/features/mana.ini
Microsoft vdev_netvsc - EXPERIMENTAL
M: Matan Azrad <matan@nvidia.com>
+S: Supported
F: drivers/net/vdev_netvsc/
F: doc/guides/nics/vdev_netvsc.rst
Microsoft Hyper-V netvsc
M: Long Li <longli@microsoft.com>
+S: Supported
F: drivers/net/netvsc/
F: doc/guides/nics/netvsc.rst
F: doc/guides/nics/features/netvsc.ini
Netcope nfb
M: Martin Spinler <spinler@cesnet.cz>
+S: Supported
F: drivers/net/nfb/
F: doc/guides/nics/nfb.rst
F: doc/guides/nics/features/nfb.ini
@@ -899,6 +1023,7 @@ F: doc/guides/nics/features/nfb.ini
Netronome nfp
M: Chaoyong He <chaoyong.he@corigine.com>
M: Niklas Soderlund <niklas.soderlund@corigine.com>
+S: Supported
F: drivers/net/nfp/
F: doc/guides/nics/nfp.rst
F: doc/guides/nics/features/nfp*.ini
@@ -906,6 +1031,7 @@ F: doc/guides/nics/features/nfp*.ini
NXP dpaa
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
+S: Supported
F: drivers/mempool/dpaa/
F: drivers/net/dpaa/
F: doc/guides/nics/dpaa.rst
@@ -914,6 +1040,7 @@ F: doc/guides/nics/features/dpaa.ini
NXP dpaa2
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
+S: Supported
F: drivers/mempool/dpaa2/
F: drivers/net/dpaa2/
F: doc/guides/nics/dpaa2.rst
@@ -922,6 +1049,7 @@ F: doc/guides/nics/features/dpaa2.ini
NXP enetc
M: Gagandeep Singh <g.singh@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
+S: Supported
F: drivers/net/enetc/
F: doc/guides/nics/enetc.rst
F: doc/guides/nics/features/enetc.ini
@@ -929,18 +1057,21 @@ F: doc/guides/nics/features/enetc.ini
NXP enetfec - EXPERIMENTAL
M: Apeksha Gupta <apeksha.gupta@nxp.com>
M: Sachin Saxena <sachin.saxena@nxp.com>
+S: Supported
F: drivers/net/enetfec/
F: doc/guides/nics/enetfec.rst
F: doc/guides/nics/features/enetfec.ini
NXP pfe
M: Gagandeep Singh <g.singh@nxp.com>
+S: Supported
F: doc/guides/nics/pfe.rst
F: drivers/net/pfe/
F: doc/guides/nics/features/pfe.ini
Marvell QLogic bnx2x
M: Julien Aube <julien_dpdk@jaube.fr>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/bnx2x/
F: doc/guides/nics/bnx2x.rst
@@ -949,6 +1080,7 @@ F: doc/guides/nics/features/bnx2x*.ini
Marvell QLogic qede PMD
M: Devendra Singh Rawat <dsinghrawat@marvell.com>
M: Alok Prasad <palok@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/net/qede/
F: doc/guides/nics/qede.rst
@@ -956,6 +1088,7 @@ F: doc/guides/nics/features/qede*.ini
Solarflare sfc_efx
M: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
+S: Supported
F: drivers/common/sfc_efx/
F: drivers/net/sfc/
F: doc/guides/nics/sfc_efx.rst
@@ -963,6 +1096,7 @@ F: doc/guides/nics/features/sfc.ini
Wangxun ngbe
M: Jiawen Wu <jiawenwu@trustnetic.com>
+S: Supported
F: drivers/net/ngbe/
F: doc/guides/nics/ngbe.rst
F: doc/guides/nics/features/ngbe.ini
@@ -970,12 +1104,14 @@ F: doc/guides/nics/features/ngbe.ini
Wangxun txgbe
M: Jiawen Wu <jiawenwu@trustnetic.com>
M: Jian Wang <jianwang@trustnetic.com>
+S: Supported
F: drivers/net/txgbe/
F: doc/guides/nics/txgbe.rst
F: doc/guides/nics/features/txgbe.ini
VMware vmxnet3
M: Jochen Behrens <jbehrens@vmware.com>
+S: Supported
F: drivers/net/vmxnet3/
F: doc/guides/nics/vmxnet3.rst
F: doc/guides/nics/features/vmxnet3.ini
@@ -983,6 +1119,7 @@ F: doc/guides/nics/features/vmxnet3.ini
Vhost-user
M: Maxime Coquelin <maxime.coquelin@redhat.com>
M: Chenbo Xia <chenbo.xia@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-virtio
F: lib/vhost/
F: doc/guides/prog_guide/vhost_lib.rst
@@ -997,6 +1134,7 @@ F: doc/guides/sample_app_ug/vdpa.rst
Vhost PMD
M: Maxime Coquelin <maxime.coquelin@redhat.com>
M: Chenbo Xia <chenbo.xia@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-virtio
F: drivers/net/vhost/
F: doc/guides/nics/vhost.rst
@@ -1005,6 +1143,7 @@ F: doc/guides/nics/features/vhost.ini
Virtio PMD
M: Maxime Coquelin <maxime.coquelin@redhat.com>
M: Chenbo Xia <chenbo.xia@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-virtio
F: drivers/net/virtio/
F: doc/guides/nics/virtio.rst
@@ -1013,26 +1152,31 @@ F: doc/guides/nics/features/virtio*.ini
Wind River AVP
M: Steven Webster <steven.webster@windriver.com>
M: Matt Peters <matt.peters@windriver.com>
+S: Supported
F: drivers/net/avp/
F: doc/guides/nics/avp.rst
F: doc/guides/nics/features/avp.ini
PCAP PMD
+S: Orphan
F: drivers/net/pcap/
F: doc/guides/nics/pcap_ring.rst
F: doc/guides/nics/features/pcap.ini
Tap PMD
+S: Orphan
F: drivers/net/tap/
F: doc/guides/nics/tap.rst
F: doc/guides/nics/features/tap.ini
KNI PMD
+S: Obsolete
F: drivers/net/kni/
F: doc/guides/nics/kni.rst
Ring PMD
M: Bruce Richardson <bruce.richardson@intel.com>
+S: Supported
F: drivers/net/ring/
F: doc/guides/nics/pcap_ring.rst
F: app/test/test_pmd_ring.c
@@ -1040,21 +1184,25 @@ F: app/test/test_pmd_ring_perf.c
Null Networking PMD
M: Tetsuya Mukawa <mtetsuyah@gmail.com>
+S: Supported
F: drivers/net/null/
Fail-safe PMD
M: Gaetan Rivet <grive@u256.net>
+S: Supported
F: drivers/net/failsafe/
F: doc/guides/nics/fail_safe.rst
F: doc/guides/nics/features/failsafe.ini
Softnic PMD
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+S: Supported
F: drivers/net/softnic/
F: doc/guides/nics/softnic.rst
Memif PMD
M: Jakub Grajciar <jgrajcia@cisco.com>
+S: Supported
F: drivers/net/memif/
F: doc/guides/nics/memif.rst
F: doc/guides/nics/features/memif.ini
@@ -1062,17 +1210,20 @@ F: doc/guides/nics/features/memif.ini
Crypto Drivers
--------------
+S: Maintained
T: git://dpdk.org/next/dpdk-next-crypto
F: doc/guides/cryptodevs/features/default.ini
AMD CCP Crypto
M: Sunil Uttarwar <sunilprakashrao.uttarwar@amd.com>
+S: Supported
F: drivers/crypto/ccp/
F: doc/guides/cryptodevs/ccp.rst
F: doc/guides/cryptodevs/features/ccp.ini
ARMv8 Crypto
M: Ruifeng Wang <ruifeng.wang@arm.com>
+S: Supported
F: drivers/crypto/armv8/
F: doc/guides/cryptodevs/armv8.rst
F: doc/guides/cryptodevs/features/armv8.ini
@@ -1081,12 +1232,14 @@ Broadcom FlexSparc
M: Ajit Khaparde <ajit.khaparde@broadcom.com>
M: Raveendra Padasalagi <raveendra.padasalagi@broadcom.com>
M: Vikas Gupta <vikas.gupta@broadcom.com>
+S: Supported
F: drivers/crypto/bcmfs/
F: doc/guides/cryptodevs/bcmfs.rst
F: doc/guides/cryptodevs/features/bcmfs.ini
Cavium OCTEON TX crypto
M: Anoob Joseph <anoobj@marvell.com>
+S: Supported
F: drivers/common/cpt/
F: drivers/crypto/octeontx/
F: doc/guides/cryptodevs/octeontx.rst
@@ -1094,17 +1247,20 @@ F: doc/guides/cryptodevs/features/octeontx.ini
Crypto Scheduler
M: Kai Ji <kai.ji@intel.com>
+S: Supported
F: drivers/crypto/scheduler/
F: doc/guides/cryptodevs/scheduler.rst
HiSilicon UADK crypto
M: Zhangfei Gao <zhangfei.gao@linaro.org>
+S: Supported
F: drivers/crypto/uadk/
F: doc/guides/cryptodevs/uadk.rst
F: doc/guides/cryptodevs/features/uadk.ini
Intel QuickAssist
M: Kai Ji <kai.ji@intel.com>
+S: Supported
F: drivers/crypto/qat/
F: drivers/common/qat/
F: doc/guides/cryptodevs/qat.rst
@@ -1113,6 +1269,7 @@ F: doc/guides/cryptodevs/features/qat.ini
IPsec MB
M: Kai Ji <kai.ji@intel.com>
M: Pablo de Lara <pablo.de.lara.guarch@intel.com>
+S: Supported
F: drivers/crypto/ipsec_mb/
F: doc/guides/cryptodevs/aesni_gcm.rst
F: doc/guides/cryptodevs/aesni_mb.rst
@@ -1131,6 +1288,7 @@ Marvell cnxk crypto
M: Ankur Dwivedi <adwivedi@marvell.com>
M: Anoob Joseph <anoobj@marvell.com>
M: Tejasree Kondoj <ktejasree@marvell.com>
+S: Supported
F: drivers/crypto/cnxk/
F: doc/guides/cryptodevs/cnxk.rst
F: doc/guides/cryptodevs/features/cn9k.ini
@@ -1139,6 +1297,7 @@ F: doc/guides/cryptodevs/features/cn10k.ini
Marvell mvsam
M: Michael Shamis <michaelsh@marvell.com>
M: Liron Himi <lironh@marvell.com>
+S: Supported
F: drivers/crypto/mvsam/
F: doc/guides/cryptodevs/mvsam.rst
F: doc/guides/cryptodevs/features/mvsam.ini
@@ -1146,18 +1305,21 @@ F: doc/guides/cryptodevs/features/mvsam.ini
Marvell Nitrox
M: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
M: Srikanth Jampala <jsrikanth@marvell.com>
+S: Supported
F: drivers/crypto/nitrox/
F: doc/guides/cryptodevs/nitrox.rst
F: doc/guides/cryptodevs/features/nitrox.ini
NVIDIA mlx5
M: Matan Azrad <matan@nvidia.com>
+S: Supported
F: drivers/crypto/mlx5/
F: doc/guides/cryptodevs/mlx5.rst
F: doc/guides/cryptodevs/features/mlx5.ini
Null Crypto
M: Kai Ji <kai.ji@intel.com>
+S: Supported
F: drivers/crypto/null/
F: doc/guides/cryptodevs/null.rst
F: doc/guides/cryptodevs/features/null.ini
@@ -1165,6 +1327,7 @@ F: doc/guides/cryptodevs/features/null.ini
NXP CAAM JR
M: Gagandeep Singh <g.singh@nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
+S: Supported
F: drivers/crypto/caam_jr/
F: doc/guides/cryptodevs/caam_jr.rst
F: doc/guides/cryptodevs/features/caam_jr.ini
@@ -1172,6 +1335,7 @@ F: doc/guides/cryptodevs/features/caam_jr.ini
NXP DPAA_SEC
M: Gagandeep Singh <g.singh@nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
+S: Supported
F: drivers/crypto/dpaa_sec/
F: doc/guides/cryptodevs/dpaa_sec.rst
F: doc/guides/cryptodevs/features/dpaa_sec.ini
@@ -1179,18 +1343,21 @@ F: doc/guides/cryptodevs/features/dpaa_sec.ini
NXP DPAA2_SEC
M: Gagandeep Singh <g.singh@nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
+S: Supported
F: drivers/crypto/dpaa2_sec/
F: doc/guides/cryptodevs/dpaa2_sec.rst
F: doc/guides/cryptodevs/features/dpaa2_sec.ini
OpenSSL
M: Kai Ji <kai.ji@intel.com>
+S: Supported
F: drivers/crypto/openssl/
F: doc/guides/cryptodevs/openssl.rst
F: doc/guides/cryptodevs/features/openssl.ini
Virtio
M: Jay Zhou <jianjay.zhou@huawei.com>
+S: Supported
F: drivers/crypto/virtio/
F: doc/guides/cryptodevs/virtio.rst
F: doc/guides/cryptodevs/features/virtio.ini
@@ -1198,31 +1365,37 @@ F: doc/guides/cryptodevs/features/virtio.ini
Compression Drivers
-------------------
+S: Maintained
T: git://dpdk.org/next/dpdk-next-crypto
Cavium OCTEON TX zipvf
M: Ashish Gupta <ashish.gupta@marvell.com>
+S: Supported
F: drivers/compress/octeontx/
F: doc/guides/compressdevs/octeontx.rst
F: doc/guides/compressdevs/features/octeontx.ini
Intel QuickAssist
M: Kai Ji <kai.ji@intel.com>
+S: Supported
F: drivers/compress/qat/
F: drivers/common/qat/
ISA-L
M: Lee Daly <lee.daly@intel.com>
+S: Supported
F: drivers/compress/isal/
F: doc/guides/compressdevs/isal.rst
F: doc/guides/compressdevs/features/isal.ini
NVIDIA mlx5
M: Matan Azrad <matan@nvidia.com>
+S: Supported
F: drivers/compress/mlx5/
ZLIB
M: Sunila Sahu <ssahu@marvell.com>
+S: Supported
F: drivers/compress/zlib/
F: doc/guides/compressdevs/zlib.rst
F: doc/guides/compressdevs/features/zlib.ini
@@ -1234,34 +1407,40 @@ DMAdev Drivers
Intel IDXD - EXPERIMENTAL
M: Bruce Richardson <bruce.richardson@intel.com>
M: Kevin Laatz <kevin.laatz@intel.com>
+S: Supported
F: drivers/dma/idxd/
F: doc/guides/dmadevs/idxd.rst
Intel IOAT
M: Bruce Richardson <bruce.richardson@intel.com>
M: Conor Walsh <conor.walsh@intel.com>
+S: Supported
F: drivers/dma/ioat/
F: doc/guides/dmadevs/ioat.rst
HiSilicon DMA
M: Chengwen Feng <fengchengwen@huawei.com>
+S: Supported
F: drivers/dma/hisilicon/
F: doc/guides/dmadevs/hisilicon.rst
Marvell CNXK DPI DMA
M: Vamsi Attunuru <vattunuru@marvell.com>
+S: Supported
F: drivers/dma/cnxk/
F: doc/guides/dmadevs/cnxk.rst
NXP DPAA DMA
M: Gagandeep Singh <g.singh@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
+S: Supported
F: drivers/dma/dpaa/
F: doc/guides/dmadevs/dpaa.rst
NXP DPAA2 QDMA
M: Gagandeep Singh <g.singh@nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
+S: Supported
F: drivers/dma/dpaa2/
F: doc/guides/dmadevs/dpaa2.rst
@@ -1271,12 +1450,14 @@ RegEx Drivers
Marvell OCTEON CN9K regex
M: Liron Himi <lironh@marvell.com>
+S: Supported
F: drivers/regex/cn9k/
F: doc/guides/regexdevs/cn9k.rst
F: doc/guides/regexdevs/features/cn9k.ini
NVIDIA mlx5
M: Ori Kam <orika@nvidia.com>
+S: Supported
F: drivers/regex/mlx5/
F: doc/guides/regexdevs/mlx5.rst
F: doc/guides/regexdevs/features/mlx5.ini
@@ -1287,6 +1468,7 @@ MLdev Drivers
Marvell ML CNXK
M: Srikanth Yalavarthi <syalavarthi@marvell.com>
+S: Supported
F: drivers/common/cnxk/hw/ml.h
F: drivers/common/cnxk/roc_ml*
F: drivers/ml/cnxk/
@@ -1299,6 +1481,7 @@ T: git://dpdk.org/next/dpdk-next-virtio
Intel ifc
M: Xiao Wang <xiao.w.wang@intel.com>
+S: Supported
F: drivers/vdpa/ifc/
F: doc/guides/vdpadevs/ifc.rst
F: doc/guides/vdpadevs/features/ifcvf.ini
@@ -1306,12 +1489,14 @@ F: doc/guides/vdpadevs/features/ifcvf.ini
NVIDIA mlx5 vDPA
M: Matan Azrad <matan@nvidia.com>
M: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
+S: Supported
F: drivers/vdpa/mlx5/
F: doc/guides/vdpadevs/mlx5.rst
F: doc/guides/vdpadevs/features/mlx5.ini
Xilinx sfc vDPA
M: Vijay Kumar Srivastava <vsrivast@xilinx.com>
+S: Supported
F: drivers/vdpa/sfc/
F: doc/guides/vdpadevs/sfc.rst
F: doc/guides/vdpadevs/features/sfc.ini
@@ -1320,42 +1505,50 @@ F: doc/guides/vdpadevs/features/sfc.ini
Eventdev Drivers
----------------
M: Jerin Jacob <jerinj@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
Cavium OCTEON TX ssovf
M: Jerin Jacob <jerinj@marvell.com>
+S: Supported
F: drivers/event/octeontx/
F: doc/guides/eventdevs/octeontx.rst
Cavium OCTEON TX timvf
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
+S: Supported
F: drivers/event/octeontx/timvf_*
Intel DLB2
M: Timothy McDaniel <timothy.mcdaniel@intel.com>
+S: Supported
F: drivers/event/dlb2/
F: doc/guides/eventdevs/dlb2.rst
Marvell cnxk
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
M: Shijith Thotton <sthotton@marvell.com>
+S: Supported
F: drivers/event/cnxk/
F: doc/guides/eventdevs/cnxk.rst
NXP DPAA eventdev
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
+S: Supported
F: drivers/event/dpaa/
F: doc/guides/eventdevs/dpaa.rst
NXP DPAA2 eventdev
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Sachin Saxena <sachin.saxena@oss.nxp.com>
+S: Supported
F: drivers/event/dpaa2/
F: doc/guides/eventdevs/dpaa2.rst
Software Eventdev PMD
M: Harry van Haaren <harry.van.haaren@intel.com>
+S: Supported
F: drivers/event/sw/
F: doc/guides/eventdevs/sw.rst
F: examples/eventdev_pipeline/
@@ -1363,11 +1556,13 @@ F: doc/guides/sample_app_ug/eventdev_pipeline.rst
Distributed Software Eventdev PMD
M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
+S: Supported
F: drivers/event/dsw/
F: doc/guides/eventdevs/dsw.rst
Software OPDL Eventdev PMD
M: Liang Ma <liangma@liangbit.com>
M: Peter Mccarthy <peter.mccarthy@intel.com>
+S: Supported
F: drivers/event/opdl/
F: doc/guides/eventdevs/opdl.rst
@@ -1378,6 +1573,7 @@ Baseband Drivers
Intel baseband
M: Nicolas Chautru <nicolas.chautru@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-baseband
F: drivers/baseband/turbo_sw/
F: doc/guides/bbdevs/turbo_sw.rst
@@ -1397,6 +1593,7 @@ F: doc/guides/bbdevs/features/vrb1.ini
Null baseband
M: Nicolas Chautru <nicolas.chautru@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-baseband
F: drivers/baseband/null/
F: doc/guides/bbdevs/null.rst
@@ -1405,6 +1602,7 @@ F: doc/guides/bbdevs/features/null.ini
NXP LA12xx
M: Gagandeep Singh <g.singh@nxp.com>
M: Hemant Agrawal <hemant.agrawal@nxp.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-baseband
F: drivers/baseband/la12xx/
F: doc/guides/bbdevs/la12xx.rst
@@ -1416,6 +1614,7 @@ GPU Drivers
NVIDIA CUDA
M: Elena Agostini <eagostini@nvidia.com>
+S: Supported
F: drivers/gpu/cuda/
F: doc/guides/gpus/cuda.rst
@@ -1426,6 +1625,7 @@ Rawdev Drivers
Intel FPGA
M: Rosen Xu <rosen.xu@intel.com>
M: Tianfei zhang <tianfei.zhang@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net-intel
F: drivers/raw/ifpga/
F: doc/guides/rawdevs/ifpga.rst
@@ -1433,18 +1633,21 @@ F: doc/guides/rawdevs/ifpga.rst
Marvell CNXK BPHY
M: Jakub Palider <jpalider@marvell.com>
M: Tomasz Duszynski <tduszynski@marvell.com>
+S: Supported
F: doc/guides/rawdevs/cnxk_bphy.rst
F: drivers/raw/cnxk_bphy/
Marvell CNXK GPIO
M: Jakub Palider <jpalider@marvell.com>
M: Tomasz Duszynski <tduszynski@marvell.com>
+S: Supported
F: doc/guides/rawdevs/cnxk_gpio.rst
F: drivers/raw/cnxk_gpio/
NTB
M: Jingjing Wu <jingjing.wu@intel.com>
M: Junfeng Guo <junfeng.guo@intel.com>
+S: Supported
F: drivers/raw/ntb/
F: doc/guides/rawdevs/ntb.rst
F: examples/ntb/
@@ -1452,6 +1655,7 @@ F: doc/guides/sample_app_ug/ntb.rst
NXP DPAA2 CMDIF
M: Gagandeep Singh <g.singh@nxp.com>
+S: Supported
F: drivers/raw/dpaa2_cmdif/
F: doc/guides/rawdevs/dpaa2_cmdif.rst
@@ -1461,12 +1665,14 @@ Packet processing
Network headers
M: Olivier Matz <olivier.matz@6wind.com>
+S: Supported
F: lib/net/
F: app/test/test_cksum.c
F: app/test/test_cksum_perf.c
Packet CRC
M: Jasvinder Singh <jasvinder.singh@intel.com>
+S: Supported
F: lib/net/net_crc.h
F: lib/net/rte_net_crc*
F: lib/net/net_crc_avx512.c
@@ -1475,6 +1681,7 @@ F: app/test/test_crc.c
IP fragmentation & reassembly
M: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
+S: Supported
F: lib/ip_frag/
F: doc/guides/prog_guide/ip_fragment_reassembly_lib.rst
F: app/test/test_ipfrag.c
@@ -1486,16 +1693,19 @@ F: doc/guides/sample_app_ug/ip_reassembly.rst
Generic Receive Offload - EXPERIMENTAL
M: Jiayu Hu <jiayu.hu@intel.com>
+S: Supported
F: lib/gro/
F: doc/guides/prog_guide/generic_receive_offload_lib.rst
Generic Segmentation Offload
M: Jiayu Hu <jiayu.hu@intel.com>
+S: Supported
F: lib/gso/
F: doc/guides/prog_guide/generic_segmentation_offload_lib.rst
IPsec
M: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
+S: Supported
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/ipsec/
F: app/test/test_ipsec*
@@ -1506,12 +1716,14 @@ F: app/test-sad/
PDCP - EXPERIMENTAL
M: Anoob Joseph <anoobj@marvell.com>
M: Volodymyr Fialko <vfialko@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/pdcp/
F: doc/guides/prog_guide/pdcp_lib.rst
F: app/test/test_pdcp*
Flow Classify - EXPERIMENTAL - UNMAINTAINED
+S: Orphan
F: lib/flow_classify/
F: app/test/test_flow_classify*
F: doc/guides/prog_guide/flow_classify_lib.rst
@@ -1520,6 +1732,7 @@ F: doc/guides/sample_app_ug/flow_classify.rst
Distributor
M: David Hunt <david.hunt@intel.com>
+S: Supported
F: lib/distributor/
F: doc/guides/prog_guide/packet_distrib_lib.rst
F: app/test/test_distributor*
@@ -1528,6 +1741,7 @@ F: doc/guides/sample_app_ug/dist_app.rst
Reorder
M: Volodymyr Fialko <vfialko@marvell.com>
+S: Supported
F: lib/reorder/
F: doc/guides/prog_guide/reorder_lib.rst
F: app/test/test_reorder*
@@ -1536,6 +1750,7 @@ F: doc/guides/sample_app_ug/packet_ordering.rst
Hierarchical scheduler
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+S: Supported
F: lib/sched/
F: doc/guides/prog_guide/qos_framework.rst
F: app/test/test_pie.c
@@ -1547,6 +1762,7 @@ F: doc/guides/sample_app_ug/qos_scheduler.rst
Packet capture
M: Reshma Pattan <reshma.pattan@intel.com>
M: Stephen Hemminger <stephen@networkplumber.org>
+S: Maintained
F: lib/pdump/
F: doc/guides/prog_guide/pdump_lib.rst
F: app/test/test_pdump.*
@@ -1562,6 +1778,7 @@ F: doc/guides/tools/dumpcap.rst
Packet Framework
----------------
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+S: Supported
F: lib/pipeline/
F: lib/port/
F: lib/table/
@@ -1579,6 +1796,7 @@ Algorithms
ACL
M: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
+S: Supported
F: lib/acl/
F: doc/guides/prog_guide/packet_classif_access_ctrl.rst
F: app/test-acl/
@@ -1587,6 +1805,7 @@ F: app/test/test_acl.*
EFD
M: Byron Marohn <byron.marohn@intel.com>
M: Yipeng Wang <yipeng1.wang@intel.com>
+S: Supported
F: lib/efd/
F: doc/guides/prog_guide/efd_lib.rst
F: app/test/test_efd*
@@ -1598,6 +1817,7 @@ M: Yipeng Wang <yipeng1.wang@intel.com>
M: Sameh Gobriel <sameh.gobriel@intel.com>
M: Bruce Richardson <bruce.richardson@intel.com>
M: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
+S: Supported
F: lib/hash/
F: doc/guides/prog_guide/hash_lib.rst
F: doc/guides/prog_guide/toeplitz_hash_lib.rst
@@ -1607,6 +1827,7 @@ F: app/test/test_func_reentrancy.c
LPM
M: Bruce Richardson <bruce.richardson@intel.com>
M: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
+S: Supported
F: lib/lpm/
F: doc/guides/prog_guide/lpm*
F: app/test/test_lpm*
@@ -1616,12 +1837,14 @@ F: app/test/test_xmmt_ops.h
Membership - EXPERIMENTAL
M: Yipeng Wang <yipeng1.wang@intel.com>
M: Sameh Gobriel <sameh.gobriel@intel.com>
+S: Supported
F: lib/member/
F: doc/guides/prog_guide/member_lib.rst
F: app/test/test_member*
RIB/FIB
M: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
+S: Supported
F: lib/rib/
F: app/test/test_rib*
F: lib/fib/
@@ -1630,6 +1853,7 @@ F: app/test-fib/
Traffic metering
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+S: Supported
F: lib/meter/
F: doc/guides/sample_app_ug/qos_scheduler.rst
F: app/test/test_meter.c
@@ -1642,12 +1866,14 @@ Other libraries
Configuration file
M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
+S: Supported
F: lib/cfgfile/
F: app/test/test_cfgfile.c
F: app/test/test_cfgfiles/
Interactive command line
M: Olivier Matz <olivier.matz@6wind.com>
+S: Supported
F: lib/cmdline/
F: app/test-cmdline/
F: app/test/test_cmdline*
@@ -1656,11 +1882,13 @@ F: doc/guides/sample_app_ug/cmd_line.rst
Key/Value parsing
M: Olivier Matz <olivier.matz@6wind.com>
+S: Supported
F: lib/kvargs/
F: app/test/test_kvargs.c
RCU
M: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
+S: Supported
F: lib/rcu/
F: app/test/test_rcu*
F: doc/guides/prog_guide/rcu_lib.rst
@@ -1668,11 +1896,13 @@ F: doc/guides/prog_guide/rcu_lib.rst
PCI
M: Chenbo Xia <chenbo.xia@intel.com>
M: Gaetan Rivet <grive@u256.net>
+S: Supported
F: lib/pci/
Power management
M: Anatoly Burakov <anatoly.burakov@intel.com>
M: David Hunt <david.hunt@intel.com>
+S: Supported
F: lib/power/
F: doc/guides/prog_guide/power_man.rst
F: app/test/test_power*
@@ -1683,6 +1913,7 @@ F: doc/guides/sample_app_ug/vm_power_management.rst
Timers
M: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
+S: Supported
F: lib/timer/
F: doc/guides/prog_guide/timer_lib.rst
F: app/test/test_timer*
@@ -1690,25 +1921,30 @@ F: examples/timer/
F: doc/guides/sample_app_ug/timer.rst
Job statistics
+S: Orphan
F: lib/jobstats/
F: examples/l2fwd-jobstats/
F: doc/guides/sample_app_ug/l2_forward_job_stats.rst
Metrics
+S: Orphan
F: lib/metrics/
F: app/test/test_metrics.c
Bit-rate statistics
+S: Orphan
F: lib/bitratestats/
F: app/test/test_bitratestats.c
Latency statistics
M: Reshma Pattan <reshma.pattan@intel.com>
+S: Supported
F: lib/latencystats/
F: app/test/test_latencystats.c
Telemetry
M: Ciara Power <ciara.power@intel.com>
+S: Supported
F: lib/telemetry/
F: app/test/test_telemetry*
F: usertools/dpdk-telemetry*
@@ -1716,6 +1952,7 @@ F: doc/guides/howto/telemetry.rst
BPF
M: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
+S: Supported
F: lib/bpf/
F: examples/bpf/
F: app/test/test_bpf.c
@@ -1727,6 +1964,7 @@ M: Jerin Jacob <jerinj@marvell.com>
M: Kiran Kumar K <kirankumark@marvell.com>
M: Nithin Dabilpuram <ndabilpuram@marvell.com>
M: Zhirun Yan <zhirun.yan@intel.com>
+S: Supported
F: lib/graph/
F: doc/guides/prog_guide/graph_lib.rst
F: app/test/test_graph*
@@ -1736,6 +1974,7 @@ F: doc/guides/sample_app_ug/l3_forward_graph.rst
Nodes - EXPERIMENTAL
M: Nithin Dabilpuram <ndabilpuram@marvell.com>
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
+S: Supported
F: lib/node/
@@ -1743,6 +1982,7 @@ Test Applications
-----------------
Unit tests framework
+S: Maintained
F: app/test/commands.c
F: app/test/has_hugepage.py
F: app/test/packet_burst_generator.c
@@ -1758,45 +1998,53 @@ F: app/test/virtual_pmd.h
Sample packet helper functions for unit test
M: Reshma Pattan <reshma.pattan@intel.com>
+S: Supported
F: app/test/sample_packet_forward.c
F: app/test/sample_packet_forward.h
Networking drivers testing tool
M: Aman Singh <aman.deep.singh@intel.com>
M: Yuying Zhang <yuying.zhang@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-net
F: app/test-pmd/
F: doc/guides/testpmd_app_ug/
DMA device performance tool
M: Cheng Jiang <cheng1.jiang@intel.com>
+S: Supported
F: app/test-dma-perf/
F: doc/guides/tools/dmaperf.rst
Flow performance tool
M: Wisam Jaddo <wisamm@nvidia.com>
+S: Supported
F: app/test-flow-perf/
F: doc/guides/tools/flow-perf.rst
Security performance tool
M: Anoob Joseph <anoobj@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-crypto
F: app/test-security-perf/
F: doc/guides/tools/securityperf.rst
Compression performance test application
T: git://dpdk.org/next/dpdk-next-crypto
+S: Orphan
F: app/test-compress-perf/
F: doc/guides/tools/comp_perf.rst
Crypto performance test application
M: Ciara Power <ciara.power@intel.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-crypto
F: app/test-crypto-perf/
F: doc/guides/tools/cryptoperf.rst
Eventdev test application
M: Jerin Jacob <jerinj@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
F: app/test-eventdev/
F: doc/guides/tools/testeventdev.rst
@@ -1806,12 +2054,14 @@ F: app/test/test_event_ring.c
Procinfo tool
M: Maryam Tahhan <maryam.tahhan@intel.com>
M: Reshma Pattan <reshma.pattan@intel.com>
+S: Supported
F: app/proc-info/
F: doc/guides/tools/proc_info.rst
DTS
M: Lijuan Tu <lijuan.tu@intel.com>
M: Juraj Linkeš <juraj.linkes@pantheon.tech>
+S: Supported
F: dts/
F: devtools/dts-check-format.sh
F: doc/guides/tools/dts.rst
@@ -1821,77 +2071,92 @@ Other Example Applications
--------------------------
Ethtool example
+S: Orphan
F: examples/ethtool/
F: doc/guides/sample_app_ug/ethtool.rst
FIPS validation example
M: Brian Dooley <brian.dooley@intel.com>
M: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
+S: Supported
F: examples/fips_validation/
F: doc/guides/sample_app_ug/fips_validation.rst
Flow filtering example
M: Ori Kam <orika@nvidia.com>
+S: Supported
F: examples/flow_filtering/
F: doc/guides/sample_app_ug/flow_filtering.rst
Helloworld example
M: Bruce Richardson <bruce.richardson@intel.com>
+S: Supported
F: examples/helloworld/
F: doc/guides/sample_app_ug/hello_world.rst
IPsec security gateway example
M: Radu Nicolau <radu.nicolau@intel.com>
M: Akhil Goyal <gakhil@marvell.com>
+S: Supported
F: examples/ipsec-secgw/
F: doc/guides/sample_app_ug/ipsec_secgw.rst
IPv4 multicast example
+S: Orphan
F: examples/ipv4_multicast/
F: doc/guides/sample_app_ug/ipv4_multicast.rst
L2 forwarding example
M: Bruce Richardson <bruce.richardson@intel.com>
+S: Supported
F: examples/l2fwd/
F: doc/guides/sample_app_ug/l2_forward_real_virtual.rst
L2 forwarding with cache allocation example
M: Tomasz Kantecki <tomasz.kantecki@intel.com>
+S: Supported
F: doc/guides/sample_app_ug/l2_forward_cat.rst
F: examples/l2fwd-cat/
L2 forwarding with eventdev example
M: Sunil Kumar Kori <skori@marvell.com>
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
+S: Supported
T: git://dpdk.org/next/dpdk-next-eventdev
F: examples/l2fwd-event/
F: doc/guides/sample_app_ug/l2_forward_event.rst
L3 forwarding example
+S: Maintained
F: examples/l3fwd/
F: doc/guides/sample_app_ug/l3_forward.rst
Link status interrupt example
+S: Maintained
F: examples/link_status_interrupt/
F: doc/guides/sample_app_ug/link_status_intr.rst
PTP client example
M: Kirill Rybalchenko <kirill.rybalchenko@intel.com>
+S: Supported
F: examples/ptpclient/
Rx/Tx callbacks example
M: Bruce Richardson <bruce.richardson@intel.com>
M: John McNamara <john.mcnamara@intel.com>
+S: Supported
F: examples/rxtx_callbacks/
F: doc/guides/sample_app_ug/rxtx_callbacks.rst
Skeleton example
M: Bruce Richardson <bruce.richardson@intel.com>
M: John McNamara <john.mcnamara@intel.com>
+S: Supported
F: examples/skeleton/
F: doc/guides/sample_app_ug/skeleton.rst
VMDq examples
+S: Orphan
F: examples/vmdq/
F: doc/guides/sample_app_ug/vmdq_forwarding.rst
F: examples/vmdq_dcb/
--
2.39.2
^ permalink raw reply [relevance 1%]
* RE: [PATCH] doc: deprecation notice to add callback data to rte_event_fp_ops
2023-07-13 10:40 0% ` Jerin Jacob
@ 2023-07-14 11:32 0% ` Tummala, Sivaprasad
0 siblings, 0 replies; 200+ results
From: Tummala, Sivaprasad @ 2023-07-14 11:32 UTC (permalink / raw)
To: Jerin Jacob; +Cc: dev, Yigit, Ferruh, bruce.richardson, david.marchand, thomas
[AMD Official Use Only - General]
> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Thursday, July 13, 2023 4:11 PM
> To: Tummala, Sivaprasad <Sivaprasad.Tummala@amd.com>
> Cc: dev@dpdk.org; Yigit, Ferruh <Ferruh.Yigit@amd.com>;
> bruce.richardson@intel.com; david.marchand@redhat.com; thomas@monjalon.net
> Subject: Re: [PATCH] doc: deprecation notice to add callback data to
> rte_event_fp_ops
>
> Caution: This message originated from an External Source. Use proper caution
> when opening attachments, clicking links, or responding.
>
>
> On Thu, Jul 13, 2023 at 4:08 PM Tummala, Sivaprasad
> <Sivaprasad.Tummala@amd.com> wrote:
> >
> >
> > Hi Jerin,
> >
> > > -----Original Message-----
> > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > Sent: Thursday, July 13, 2023 2:22 PM
> > > To: Tummala, Sivaprasad <Sivaprasad.Tummala@amd.com>
> > > Cc: dev@dpdk.org; Yigit, Ferruh <Ferruh.Yigit@amd.com>;
> > > bruce.richardson@intel.com; david.marchand@redhat.com;
> > > thomas@monjalon.net
> > > Subject: Re: [PATCH] doc: deprecation notice to add callback data to
> > > rte_event_fp_ops
> > >
> > > Caution: This message originated from an External Source. Use proper
> > > caution when opening attachments, clicking links, or responding.
> > >
> > >
> > > On Wed, Jul 12, 2023 at 11:01 PM Sivaprasad Tummala
> > > <sivaprasad.tummala@amd.com> wrote:
> > > >
> > > > Deprecation notice to add "rte_eventdev_port_data" field to
> > >
> > > Could you share the rationale for why rte_eventdev_port_data needs to be
> added?
> >
> > "rte_eventdev_port_data" is used to hold callbacks registered optionally per
> event device port and associated callback data.
> > By adding "rte_eventdev_port_data" to "rte_event_fp_ops", allows to fetch this
> data for fastpath eventdev inline functions in advance.
>
> Please add above info in the release notes for next version.
Sure, will do the same.
>
> >
> > >
> > >
> > > > ``rte_event_fp_ops`` for callback support.
> > > >
> > > > Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> > > > ---
> > > > doc/guides/rel_notes/deprecation.rst | 4 ++++
> > > > 1 file changed, 4 insertions(+)
> > > >
> > > > diff --git a/doc/guides/rel_notes/deprecation.rst
> > > > b/doc/guides/rel_notes/deprecation.rst
> > > > index 8e1cdd677a..2c69338818 100644
> > > > --- a/doc/guides/rel_notes/deprecation.rst
> > > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > > @@ -133,6 +133,10 @@ Deprecation Notices
> > > > ``rte_cryptodev_get_auth_algo_string``,
> > > ``rte_cryptodev_get_aead_algo_string`` and
> > > > ``rte_cryptodev_asym_get_xform_string`` respectively.
> > > >
> > > > +* eventdev: The struct rte_event_fp_ops will be updated with a
> > > > +new element
> > > > + rte_eventdev_port_data to support optional callbacks in DPDK 23.11.
> > > > +This changes
> > > > + the size of rte_event_fp_ops and result in ABI change.
> > > > +
> > > > * flow_classify: The flow_classify library and example have no maintainer.
> > > > The library is experimental and, as such, it could be removed from DPDK.
> > > > Its removal has been postponed to let potential users report
> > > > interest
> > > > --
> > > > 2.34.1
> > > >
^ permalink raw reply [relevance 0%]
* Re: [PATCH V6 0/5] app/testpmd: support multiple process attach and detach port
@ 2023-07-14 7:21 0% ` lihuisong (C)
0 siblings, 0 replies; 200+ results
From: lihuisong (C) @ 2023-07-14 7:21 UTC (permalink / raw)
To: dev, ferruh.yigit
Cc: thomas, andrew.rybchenko, liudongdong3, liuyonglong, fengchengwen
Hi Ferruh,
Can you take a look at this series?
I added the call stack info for the segmentation fault.
/Huisong
On 2023/5/27 10:11, Huisong Li wrote:
> This patchset fix some bugs and support attaching and detaching port
> in primary and secondary.
>
> ---
> -v6: adjust rte_eth_dev_is_used position based on alphabetical order
> in version.map
> -v5: move 'ALLOCATED' state to the back of 'REMOVED' to avoid abi break.
> -v4: fix a misspelling.
> -v3:
> #1 merge patch 1/6 and patch 2/6 into patch 1/5, and add modification
> for other bus type.
> #2 add a RTE_ETH_DEV_ALLOCATED state in rte_eth_dev_state to resolve
> the probelm in patch 2/5.
> -v2: resend due to CI unexplained failure.
>
> Huisong Li (5):
> drivers/bus: restore driver assignment at front of probing
> ethdev: fix skip valid port in probing callback
> app/testpmd: check the validity of the port
> app/testpmd: add attach and detach port for multiple process
> app/testpmd: stop forwarding in new or destroy event
>
> app/test-pmd/testpmd.c | 47 +++++++++++++++---------
> app/test-pmd/testpmd.h | 1 -
> drivers/bus/auxiliary/auxiliary_common.c | 9 ++++-
> drivers/bus/dpaa/dpaa_bus.c | 9 ++++-
> drivers/bus/fslmc/fslmc_bus.c | 8 +++-
> drivers/bus/ifpga/ifpga_bus.c | 12 ++++--
> drivers/bus/pci/pci_common.c | 9 ++++-
> drivers/bus/vdev/vdev.c | 10 ++++-
> drivers/bus/vmbus/vmbus_common.c | 9 ++++-
> drivers/net/bnxt/bnxt_ethdev.c | 3 +-
> drivers/net/bonding/bonding_testpmd.c | 1 -
> drivers/net/mlx5/mlx5.c | 2 +-
> lib/ethdev/ethdev_driver.c | 13 +++++--
> lib/ethdev/ethdev_driver.h | 12 ++++++
> lib/ethdev/ethdev_pci.h | 2 +-
> lib/ethdev/rte_class_eth.c | 2 +-
> lib/ethdev/rte_ethdev.c | 4 +-
> lib/ethdev/rte_ethdev.h | 4 +-
> lib/ethdev/version.map | 1 +
> 19 files changed, 114 insertions(+), 44 deletions(-)
>
^ permalink raw reply [relevance 0%]
* RE: [PATCH] doc: announce ethdev operation struct changes
2023-07-13 2:37 0% ` Feifei Wang
@ 2023-07-13 12:50 0% ` Morten Brørup
2023-07-17 8:28 0% ` Andrew Rybchenko
0 siblings, 1 reply; 200+ results
From: Morten Brørup @ 2023-07-13 12:50 UTC (permalink / raw)
To: Feifei Wang, dev
Cc: nd, Honnappa Nagarahalli, Ruifeng Wang, Konstantin Ananyev,
Ferruh Yigit, thomas, Andrew Rybchenko, nd, nd
> From: Feifei Wang [mailto:Feifei.Wang2@arm.com]
> Sent: Thursday, 13 July 2023 04.37
>
> > From: Feifei Wang
> > Sent: Tuesday, July 4, 2023 4:17 PM
> >
> > > From: Feifei Wang <feifei.wang2@arm.com>
> > > Sent: Tuesday, July 4, 2023 4:10 PM
> > >
> > > To support mbufs recycle mode, announce the coming ABI changes from
> > > DPDK 23.11.
> > >
> > > Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> > > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > > ---
> > > doc/guides/rel_notes/deprecation.rst | 4 ++++
> > > 1 file changed, 4 insertions(+)
> > >
> > > diff --git a/doc/guides/rel_notes/deprecation.rst
> > > b/doc/guides/rel_notes/deprecation.rst
> > > index 66431789b0..c7e1ffafb2 100644
> > > --- a/doc/guides/rel_notes/deprecation.rst
> > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > @@ -118,6 +118,10 @@ Deprecation Notices
> > > The legacy actions should be removed
> > > once ``MODIFY_FIELD`` alternative is implemented in drivers.
> > >
> > > +* ethdev: The Ethernet device data structure ``struct rte_eth_dev``
> > > +and
> > > + the fast-path ethdev flat array ``struct rte_eth_fp_ops`` will be
> > > +updated
> > > + with new fields to support mbufs recycle mode from DPDK 23.11.
> > > +
> > > * cryptodev: The function ``rte_cryptodev_cb_fn`` will be updated
> > > to have another parameter ``qp_id`` to return the queue pair ID
> > > which got error interrupt to the application,
> > > --
> > > 2.25.1
>
> Ping~
Acked-by: Morten Brørup <mb@smartsharesystems.com>
^ permalink raw reply [relevance 0%]
* Re: [PATCH] doc: deprecation notice to add callback data to rte_event_fp_ops
2023-07-13 10:38 0% ` Tummala, Sivaprasad
@ 2023-07-13 10:40 0% ` Jerin Jacob
2023-07-14 11:32 0% ` Tummala, Sivaprasad
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2023-07-13 10:40 UTC (permalink / raw)
To: Tummala, Sivaprasad
Cc: dev, Yigit, Ferruh, bruce.richardson, david.marchand, thomas
On Thu, Jul 13, 2023 at 4:08 PM Tummala, Sivaprasad
<Sivaprasad.Tummala@amd.com> wrote:
>
>
> Hi Jerin,
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Thursday, July 13, 2023 2:22 PM
> > To: Tummala, Sivaprasad <Sivaprasad.Tummala@amd.com>
> > Cc: dev@dpdk.org; Yigit, Ferruh <Ferruh.Yigit@amd.com>;
> > bruce.richardson@intel.com; david.marchand@redhat.com; thomas@monjalon.net
> > Subject: Re: [PATCH] doc: deprecation notice to add callback data to
> > rte_event_fp_ops
> >
> >
> >
> > On Wed, Jul 12, 2023 at 11:01 PM Sivaprasad Tummala
> > <sivaprasad.tummala@amd.com> wrote:
> > >
> > > Deprecation notice to add "rte_eventdev_port_data" field to
> >
> > Could you share the rationale for why rte_eventdev_port_data needs to be added?
>
> "rte_eventdev_port_data" is used to hold callbacks registered optionally per event device port and associated callback data.
> By adding "rte_eventdev_port_data" to "rte_event_fp_ops", allows to fetch this data for fastpath eventdev inline functions in advance.
Please add above info in the release notes for next version.
>
> >
> >
> > > ``rte_event_fp_ops`` for callback support.
> > >
> > > Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> > > ---
> > > doc/guides/rel_notes/deprecation.rst | 4 ++++
> > > 1 file changed, 4 insertions(+)
> > >
> > > diff --git a/doc/guides/rel_notes/deprecation.rst
> > > b/doc/guides/rel_notes/deprecation.rst
> > > index 8e1cdd677a..2c69338818 100644
> > > --- a/doc/guides/rel_notes/deprecation.rst
> > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > @@ -133,6 +133,10 @@ Deprecation Notices
> > > ``rte_cryptodev_get_auth_algo_string``,
> > ``rte_cryptodev_get_aead_algo_string`` and
> > > ``rte_cryptodev_asym_get_xform_string`` respectively.
> > >
> > > +* eventdev: The struct rte_event_fp_ops will be updated with a new
> > > +element
> > > + rte_eventdev_port_data to support optional callbacks in DPDK 23.11.
> > > +This changes
> > > + the size of rte_event_fp_ops and result in ABI change.
> > > +
> > > * flow_classify: The flow_classify library and example have no maintainer.
> > > The library is experimental and, as such, it could be removed from DPDK.
> > > Its removal has been postponed to let potential users report
> > > interest
> > > --
> > > 2.34.1
> > >
^ permalink raw reply [relevance 0%]
* RE: [PATCH] doc: deprecation notice to add callback data to rte_event_fp_ops
2023-07-13 8:51 0% ` Jerin Jacob
@ 2023-07-13 10:38 0% ` Tummala, Sivaprasad
2023-07-13 10:40 0% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Tummala, Sivaprasad @ 2023-07-13 10:38 UTC (permalink / raw)
To: Jerin Jacob; +Cc: dev, Yigit, Ferruh, bruce.richardson, david.marchand, thomas
Hi Jerin,
> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Thursday, July 13, 2023 2:22 PM
> To: Tummala, Sivaprasad <Sivaprasad.Tummala@amd.com>
> Cc: dev@dpdk.org; Yigit, Ferruh <Ferruh.Yigit@amd.com>;
> bruce.richardson@intel.com; david.marchand@redhat.com; thomas@monjalon.net
> Subject: Re: [PATCH] doc: deprecation notice to add callback data to
> rte_event_fp_ops
>
>
>
> On Wed, Jul 12, 2023 at 11:01 PM Sivaprasad Tummala
> <sivaprasad.tummala@amd.com> wrote:
> >
> > Deprecation notice to add "rte_eventdev_port_data" field to
>
> Could you share the rationale for why rte_eventdev_port_data needs to be added?
"rte_eventdev_port_data" is used to hold callbacks registered optionally per event device port and associated callback data.
Adding "rte_eventdev_port_data" to "rte_event_fp_ops" allows this data to be fetched in advance for fastpath eventdev inline functions.
>
>
> > ``rte_event_fp_ops`` for callback support.
> >
> > Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> > ---
> > doc/guides/rel_notes/deprecation.rst | 4 ++++
> > 1 file changed, 4 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst
> > b/doc/guides/rel_notes/deprecation.rst
> > index 8e1cdd677a..2c69338818 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -133,6 +133,10 @@ Deprecation Notices
> > ``rte_cryptodev_get_auth_algo_string``,
> ``rte_cryptodev_get_aead_algo_string`` and
> > ``rte_cryptodev_asym_get_xform_string`` respectively.
> >
> > +* eventdev: The struct rte_event_fp_ops will be updated with a new
> > +element
> > + rte_eventdev_port_data to support optional callbacks in DPDK 23.11.
> > +This changes
> > + the size of rte_event_fp_ops and result in ABI change.
> > +
> > * flow_classify: The flow_classify library and example have no maintainer.
> > The library is experimental and, as such, it could be removed from DPDK.
> > Its removal has been postponed to let potential users report
> > interest
> > --
> > 2.34.1
> >
^ permalink raw reply [relevance 0%]
* Re: [PATCH] doc: deprecation notice to add callback data to rte_event_fp_ops
2023-07-12 17:30 5% [PATCH] doc: deprecation notice to add callback data to rte_event_fp_ops Sivaprasad Tummala
@ 2023-07-13 8:51 0% ` Jerin Jacob
2023-07-13 10:38 0% ` Tummala, Sivaprasad
2023-07-17 11:24 5% ` [PATCH v1] " Sivaprasad Tummala
1 sibling, 1 reply; 200+ results
From: Jerin Jacob @ 2023-07-13 8:51 UTC (permalink / raw)
To: Sivaprasad Tummala
Cc: dev, ferruh.yigit, bruce.richardson, david.marchand, thomas
On Wed, Jul 12, 2023 at 11:01 PM Sivaprasad Tummala
<sivaprasad.tummala@amd.com> wrote:
>
> Deprecation notice to add "rte_eventdev_port_data" field to
Could you share the rationale for why rte_eventdev_port_data needs to be added?
> ``rte_event_fp_ops`` for callback support.
>
> Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 8e1cdd677a..2c69338818 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -133,6 +133,10 @@ Deprecation Notices
> ``rte_cryptodev_get_auth_algo_string``, ``rte_cryptodev_get_aead_algo_string`` and
> ``rte_cryptodev_asym_get_xform_string`` respectively.
>
> +* eventdev: The struct rte_event_fp_ops will be updated with a new element
> + rte_eventdev_port_data to support optional callbacks in DPDK 23.11. This changes
> + the size of rte_event_fp_ops and result in ABI change.
> +
> * flow_classify: The flow_classify library and example have no maintainer.
> The library is experimental and, as such, it could be removed from DPDK.
> Its removal has been postponed to let potential users report interest
> --
> 2.34.1
>
^ permalink raw reply [relevance 0%]
* Re: [PATCH] doc: announce ethdev operation struct changes
2023-07-05 11:32 0% ` Konstantin Ananyev
@ 2023-07-13 7:52 0% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2023-07-13 7:52 UTC (permalink / raw)
To: Konstantin Ananyev, Feifei Wang
Cc: dev, nd, Honnappa.Nagarahalli, Ruifeng Wang
On 7/5/2023 12:32 PM, Konstantin Ananyev wrote:
> 04/07/2023 09:10, Feifei Wang пишет:
>> To support mbufs recycle mode, announce the coming ABI changes
>> from DPDK 23.11.
>>
>> Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
>> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
>> ---
>> doc/guides/rel_notes/deprecation.rst | 4 ++++
>> 1 file changed, 4 insertions(+)
>>
>> diff --git a/doc/guides/rel_notes/deprecation.rst
>> b/doc/guides/rel_notes/deprecation.rst
>> index 66431789b0..c7e1ffafb2 100644
>> --- a/doc/guides/rel_notes/deprecation.rst
>> +++ b/doc/guides/rel_notes/deprecation.rst
>> @@ -118,6 +118,10 @@ Deprecation Notices
>> The legacy actions should be removed
>> once ``MODIFY_FIELD`` alternative is implemented in drivers.
>> +* ethdev: The Ethernet device data structure ``struct rte_eth_dev``
>> and
>> + the fast-path ethdev flat array ``struct rte_eth_fp_ops`` will be
>> updated
>> + with new fields to support mbufs recycle mode from DPDK 23.11.
>> +
>> * cryptodev: The function ``rte_cryptodev_cb_fn`` will be updated
>> to have another parameter ``qp_id`` to return the queue pair ID
>> which got error interrupt to the application,
>
> Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
^ permalink raw reply [relevance 0%]
* RE: [PATCH] doc: announce ethdev operation struct changes
2023-07-04 8:17 0% ` Feifei Wang
@ 2023-07-13 2:37 0% ` Feifei Wang
2023-07-13 12:50 0% ` Morten Brørup
0 siblings, 1 reply; 200+ results
From: Feifei Wang @ 2023-07-13 2:37 UTC (permalink / raw)
To: dev
Cc: nd, Honnappa Nagarahalli, Ruifeng Wang, Konstantin Ananyev, mb,
Ferruh Yigit, thomas, Andrew Rybchenko, nd, nd
> -----Original Message-----
> From: Feifei Wang
> Sent: Tuesday, July 4, 2023 4:17 PM
> To: Feifei Wang <feifei.wang2@arm.com>
> Cc: dev@dpdk.org; nd <nd@arm.com>; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>; Ruifeng Wang
> <Ruifeng.Wang@arm.com>; Konstantin Ananyev
> <konstantin.v.ananyev@yandex.ru>; mb@smartsharesystems.com; Ferruh
> Yigit <ferruh.yigit@amd.com>; thomas@monjalon.net; Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru>; nd <nd@arm.com>
> Subject: RE: [PATCH] doc: announce ethdev operation struct changes
>
>
>
> > -----Original Message-----
> > From: Feifei Wang <feifei.wang2@arm.com>
> > Sent: Tuesday, July 4, 2023 4:10 PM
> > Cc: dev@dpdk.org; nd <nd@arm.com>; Honnappa Nagarahalli
> > <Honnappa.Nagarahalli@arm.com>; Feifei Wang <Feifei.Wang2@arm.com>;
> > Ruifeng Wang <Ruifeng.Wang@arm.com>
> > Subject: [PATCH] doc: announce ethdev operation struct changes
> >
> > To support mbufs recycle mode, announce the coming ABI changes from
> > DPDK 23.11.
> >
> > Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > ---
> > doc/guides/rel_notes/deprecation.rst | 4 ++++
> > 1 file changed, 4 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst
> > b/doc/guides/rel_notes/deprecation.rst
> > index 66431789b0..c7e1ffafb2 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -118,6 +118,10 @@ Deprecation Notices
> > The legacy actions should be removed
> > once ``MODIFY_FIELD`` alternative is implemented in drivers.
> >
> > +* ethdev: The Ethernet device data structure ``struct rte_eth_dev``
> > +and
> > + the fast-path ethdev flat array ``struct rte_eth_fp_ops`` will be
> > +updated
> > + with new fields to support mbufs recycle mode from DPDK 23.11.
> > +
> > * cryptodev: The function ``rte_cryptodev_cb_fn`` will be updated
> > to have another parameter ``qp_id`` to return the queue pair ID
> > which got error interrupt to the application,
> > --
> > 2.25.1
Ping~
^ permalink raw reply [relevance 0%]
* [PATCH] doc: deprecation notice to add callback data to rte_event_fp_ops
@ 2023-07-12 17:30 5% Sivaprasad Tummala
2023-07-13 8:51 0% ` Jerin Jacob
2023-07-17 11:24 5% ` [PATCH v1] " Sivaprasad Tummala
0 siblings, 2 replies; 200+ results
From: Sivaprasad Tummala @ 2023-07-12 17:30 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, bruce.richardson, david.marchand, thomas
Deprecation notice to add "rte_eventdev_port_data" field to
``rte_event_fp_ops`` for callback support.
Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
---
doc/guides/rel_notes/deprecation.rst | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 8e1cdd677a..2c69338818 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -133,6 +133,10 @@ Deprecation Notices
``rte_cryptodev_get_auth_algo_string``, ``rte_cryptodev_get_aead_algo_string`` and
``rte_cryptodev_asym_get_xform_string`` respectively.
+* eventdev: The struct rte_event_fp_ops will be updated with a new element
+ rte_eventdev_port_data to support optional callbacks in DPDK 23.11. This changes
+ the size of rte_event_fp_ops and result in ABI change.
+
* flow_classify: The flow_classify library and example have no maintainer.
The library is experimental and, as such, it could be removed from DPDK.
Its removal has been postponed to let potential users report interest
--
2.34.1
^ permalink raw reply [relevance 5%]
* Re: [PATCH] doc: announce deprecation of RTE_CPUFLAG_NUMFLAGS
2023-07-12 10:21 0% ` Ferruh Yigit
@ 2023-07-12 14:51 0% ` Hemant Agrawal
0 siblings, 0 replies; 200+ results
From: Hemant Agrawal @ 2023-07-12 14:51 UTC (permalink / raw)
To: Ferruh Yigit, Sivaprasad Tummala, dev
Cc: bruce.richardson, david.marchand, thomas
On 12-Jul-23 3:51 PM, Ferruh Yigit wrote:
>
> On 7/12/2023 11:18 AM, Sivaprasad Tummala wrote:
>> To allow new cpu features to be added without ABI breakage,
>> RTE_CPUFLAG_NUMFLAGS will be removed in DPDK 23.11 release.
>>
>> Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
>> ---
>> doc/guides/rel_notes/deprecation.rst | 3 +++
>> 1 file changed, 3 insertions(+)
>>
>> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
>> index 8e1cdd677a..92db59d9c2 100644
>> --- a/doc/guides/rel_notes/deprecation.rst
>> +++ b/doc/guides/rel_notes/deprecation.rst
>> @@ -28,6 +28,9 @@ Deprecation Notices
>> the replacement API rte_thread_set_name and rte_thread_create_control being
>> marked as stable, and planned to be removed by the 23.11 release.
>>
>> +* eal: RTE_CPUFLAG_NUMFLAGS will be removed in DPDK 23.11 release. This is
>> + to allow new cpu features to be added without ABI breakage.
>> +
>> * rte_atomicNN_xxx: These APIs do not take memory order parameter. This does
>> not allow for writing optimized code for all the CPU architectures supported
>> in DPDK. DPDK has adopted the atomic operations from
>>
> Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
>
> Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
>
^ permalink raw reply [relevance 0%]
* Re: [PATCH] doc: announce deprecation of RTE_CPUFLAG_NUMFLAGS
2023-07-12 10:18 8% [PATCH] doc: announce deprecation of RTE_CPUFLAG_NUMFLAGS Sivaprasad Tummala
@ 2023-07-12 10:21 0% ` Ferruh Yigit
2023-07-12 14:51 0% ` Hemant Agrawal
2023-07-25 8:39 3% ` Ferruh Yigit
1 sibling, 1 reply; 200+ results
From: Ferruh Yigit @ 2023-07-12 10:21 UTC (permalink / raw)
To: Sivaprasad Tummala, dev; +Cc: bruce.richardson, david.marchand, thomas
On 7/12/2023 11:18 AM, Sivaprasad Tummala wrote:
> To allow new cpu features to be added without ABI breakage,
> RTE_CPUFLAG_NUMFLAGS will be removed in DPDK 23.11 release.
>
> Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 8e1cdd677a..92db59d9c2 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -28,6 +28,9 @@ Deprecation Notices
> the replacement API rte_thread_set_name and rte_thread_create_control being
> marked as stable, and planned to be removed by the 23.11 release.
>
> +* eal: RTE_CPUFLAG_NUMFLAGS will be removed in DPDK 23.11 release. This is
> + to allow new cpu features to be added without ABI breakage.
> +
> * rte_atomicNN_xxx: These APIs do not take memory order parameter. This does
> not allow for writing optimized code for all the CPU architectures supported
> in DPDK. DPDK has adopted the atomic operations from
>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
^ permalink raw reply [relevance 0%]
* [PATCH] doc: announce deprecation of RTE_CPUFLAG_NUMFLAGS
@ 2023-07-12 10:18 8% Sivaprasad Tummala
2023-07-12 10:21 0% ` Ferruh Yigit
2023-07-25 8:39 3% ` Ferruh Yigit
0 siblings, 2 replies; 200+ results
From: Sivaprasad Tummala @ 2023-07-12 10:18 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, bruce.richardson, david.marchand, thomas
To allow new cpu features to be added without ABI breakage,
RTE_CPUFLAG_NUMFLAGS will be removed in DPDK 23.11 release.
Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
---
doc/guides/rel_notes/deprecation.rst | 3 +++
1 file changed, 3 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 8e1cdd677a..92db59d9c2 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -28,6 +28,9 @@ Deprecation Notices
the replacement API rte_thread_set_name and rte_thread_create_control being
marked as stable, and planned to be removed by the 23.11 release.
+* eal: RTE_CPUFLAG_NUMFLAGS will be removed in DPDK 23.11 release. This is
+ to allow new cpu features to be added without ABI breakage.
+
* rte_atomicNN_xxx: These APIs do not take memory order parameter. This does
not allow for writing optimized code for all the CPU architectures supported
in DPDK. DPDK has adopted the atomic operations from
--
2.34.1
^ permalink raw reply [relevance 8%]
* Re: [RFC v2 2/2] eal: add high-performance timer facility
2023-07-06 22:41 3% ` Stephen Hemminger
@ 2023-07-12 8:58 4% ` Mattias Rönnblom
0 siblings, 0 replies; 200+ results
From: Mattias Rönnblom @ 2023-07-12 8:58 UTC (permalink / raw)
To: Stephen Hemminger
Cc: dev, Erik Gabriel Carrillo, David Marchand, Maria Lingemark,
Stefan Sundkvist, Morten Brørup, Tyler Retzlaff,
Mattias Rönnblom
On 2023-07-07 00:41, Stephen Hemminger wrote:
> On Wed, 15 Mar 2023 18:03:42 +0100
> Mattias Rönnblom <mattias.ronnblom@ericsson.com> wrote:
>
>> The htimer library attempts to provide a timer facility with roughly
>> the same functionality, but less overhead and better scalability than
>> the DPDK timer library.
>
> I don't understand. Why not just fix and extend the existing timers?
> Sure, you will need to add some APIs and maybe drop some of the existing
> experimental ones (i.e. alt_timer), or even change the ABI.
>
> It would be better to have one high-performance, scalable timer than
> spend the next 3 years telling users which one to use and why!
>
> So please make rte_timer work better in the 23.11 release rather
> than reinventing a new variant.
I wanted to explore what a data plane timer API should look like.
Something like a "first principles" type approach. As it happens, it
seems like I will converge on something that's pretty similar to how
the rte_timer (and most kernel timers) API works, for example in regard
to timer memory allocation.
Clearly, there should not be two DPDK timer APIs that provide the same
functionality. That was never the intention. Since so much DPDK code and,
more importantly, application code depends on <rte_timer.h>, it wasn't
obvious that the best option was to make extensive changes to the
rte_timer API and implementation. One option that seemed plausible (how
much so depending on the extent of the rte_timer vs rte_htimer API
differences) was to have a new API, and deprecate <rte_timer.h> in the
release htimer was introduced.
That said, at this point, it's not clear to me which is the best option:
"making extensive changes to rte_timer" or "having rte_htimer on the
side for a couple of releases".
An imaginary alternative where the <rte_timer.h> API/ABI can be
maintained, and you get all the performance, scalability and improved
API semantics of htimer, would obviously be the best option. But I don't
think that is possible. Especially not if you want to end up with a
nice, orthogonal API and a clean implementation.
I think changes in both ABI and API are inevitable, and a good thing,
considering some of the quirks of the current API.
A side note: it seems to me that at this point there should be two public
timer APIs, providing different functionality at slightly different
levels of abstraction. One is the <rte_timer.h> lookalike, and the other
is what in the current patchset is represented by <rte_htw.h>, but minus
the callbacks, as per Morten Brørup's suggestion. The latter would be a
low-level HTW only, with no MT safety, no lcore knowledge, no opinions
on time source, etc.
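For illustration, the low-level HTW idea above boils down to a hashed/hierarchical timing wheel data structure. Below is a minimal single-level wheel sketch (all names, sizes, and the demo function are illustrative, not the rte_htw API): timers hash into slots by expiry tick, and advancing the wheel fires due timers in the slot for each elapsed tick.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define WHEEL_SLOTS 8 /* power of two for cheap masking */

struct timer {
	struct timer *next;
	uint64_t expiry;            /* absolute tick at which to fire */
	void (*cb)(struct timer *);
};

struct wheel {
	struct timer *slot[WHEEL_SLOTS];
	uint64_t now;               /* last processed tick */
};

static void wheel_add(struct wheel *w, struct timer *t)
{
	unsigned int s = (unsigned int)(t->expiry & (WHEEL_SLOTS - 1));

	t->next = w->slot[s];
	w->slot[s] = t;
}

static void wheel_advance(struct wheel *w, uint64_t now)
{
	while (w->now < now) {
		unsigned int s = (unsigned int)(++w->now & (WHEEL_SLOTS - 1));
		struct timer **pt = &w->slot[s];

		while (*pt != NULL) {
			struct timer *t = *pt;

			if (t->expiry <= w->now) {
				*pt = t->next;  /* unlink before firing */
				t->cb(t);
			} else {
				pt = &t->next;  /* due on a later rotation */
			}
		}
	}
}

static int fired;

static void on_fire(struct timer *t)
{
	(void)t;
	fired++;
}

/* Demo: a timer armed for tick 3 does not fire at tick 2, fires by tick 5. */
static int wheel_demo(void)
{
	struct wheel w = {0};
	struct timer t = { .expiry = 3, .cb = on_fire };
	int early;

	fired = 0;
	wheel_add(&w, &t);
	wheel_advance(&w, 2);
	early = fired;
	wheel_advance(&w, 5);
	return early * 10 + fired;
}
```

A multi-level (hierarchical) wheel extends this by cascading timers from coarser wheels into finer ones as their expiry approaches, keeping per-tick work O(1) on average.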
^ permalink raw reply [relevance 4%]
* [PATCH v9 10/14] eal: expand most macros to empty when using MSVC
@ 2023-07-11 16:49 5% ` Tyler Retzlaff
2023-07-11 16:49 3% ` [PATCH v9 12/14] telemetry: avoid expanding versioned symbol macros on MSVC Tyler Retzlaff
1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-07-11 16:49 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Konstantin Ananyev, Ciara Power, thomas,
david.marchand, mb, Tyler Retzlaff
For now, expand a lot of common rte macros to empty. The catch here is
that we need to test that most of the macros do what they should, but at
the same time they are blocking work needed to bootstrap the unit tests.
Later we will return and provide (where possible) expansions that work
correctly for MSVC, and where not possible, provide some alternate macros
to achieve the same outcome.
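The general pattern the patch applies can be sketched as follows (a simplified stand-alone excerpt, not the full rte_branch_prediction.h): under MSVC the GNU builtin is unavailable, so likely()/unlikely() degrade to plain boolean normalization with identical truth semantics, losing only the branch hint.

```c
#include <assert.h>

/* Degrade gracefully when the GNU builtin is unavailable. */
#ifdef RTE_TOOLCHAIN_MSVC
#define likely(x) (!!(x))
#else
#define likely(x) __builtin_expect(!!(x), 1)
#endif

/* Example user: behavior is identical with either expansion. */
static int clamp_positive(int v)
{
	if (likely(v > 0))
		return v;
	return 0;
}
```

Either way, likely(x) evaluates to 1 for truthy x and 0 otherwise, so callers compile and behave the same on both toolchains.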
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/eal/include/rte_branch_prediction.h | 8 +++++
lib/eal/include/rte_common.h | 54 +++++++++++++++++++++++++++++++++
lib/eal/include/rte_compat.h | 20 ++++++++++++
3 files changed, 82 insertions(+)
diff --git a/lib/eal/include/rte_branch_prediction.h b/lib/eal/include/rte_branch_prediction.h
index 414cd92..c0356ca 100644
--- a/lib/eal/include/rte_branch_prediction.h
+++ b/lib/eal/include/rte_branch_prediction.h
@@ -24,7 +24,11 @@
* do_stuff();
*/
#ifndef likely
+#ifdef RTE_TOOLCHAIN_MSVC
+#define likely(x) (!!(x))
+#else
#define likely(x) __builtin_expect(!!(x), 1)
+#endif
#endif /* likely */
/**
@@ -37,7 +41,11 @@
* do_stuff();
*/
#ifndef unlikely
+#ifdef RTE_TOOLCHAIN_MSVC
+#define unlikely(x) (!!(x))
+#else
#define unlikely(x) __builtin_expect(!!(x), 0)
+#endif
#endif /* unlikely */
#ifdef __cplusplus
diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index 2f464e3..b087532 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -41,6 +41,10 @@
#define RTE_STD_C11
#endif
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __extension__
+#endif
+
/*
* RTE_TOOLCHAIN_GCC is defined if the target is built with GCC,
* while a host application (like pmdinfogen) may have another compiler.
@@ -65,7 +69,11 @@
/**
* Force alignment
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_aligned(a)
+#else
#define __rte_aligned(a) __attribute__((__aligned__(a)))
+#endif
#ifdef RTE_ARCH_STRICT_ALIGN
typedef uint64_t unaligned_uint64_t __rte_aligned(1);
@@ -80,16 +88,29 @@
/**
* Force a structure to be packed
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_packed
+#else
#define __rte_packed __attribute__((__packed__))
+#endif
/**
* Macro to mark a type that is not subject to type-based aliasing rules
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_may_alias
+#else
#define __rte_may_alias __attribute__((__may_alias__))
+#endif
/******* Macro to mark functions and fields scheduled for removal *****/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_deprecated
+#define __rte_deprecated_msg(msg)
+#else
#define __rte_deprecated __attribute__((__deprecated__))
#define __rte_deprecated_msg(msg) __attribute__((__deprecated__(msg)))
+#endif
/**
* Macro to mark macros and defines scheduled for removal
@@ -110,14 +131,22 @@
/**
* Force symbol to be generated even if it appears to be unused.
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_used
+#else
#define __rte_used __attribute__((used))
+#endif
/*********** Macros to eliminate unused variable warnings ********/
/**
* short definition to mark a function parameter unused
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_unused
+#else
#define __rte_unused __attribute__((__unused__))
+#endif
/**
* Mark pointer as restricted with regard to pointer aliasing.
@@ -141,6 +170,9 @@
* even if the underlying stdio implementation is ANSI-compliant,
* so this must be overridden.
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_format_printf(format_index, first_arg)
+#else
#if RTE_CC_IS_GNU
#define __rte_format_printf(format_index, first_arg) \
__attribute__((format(gnu_printf, format_index, first_arg)))
@@ -148,6 +180,7 @@
#define __rte_format_printf(format_index, first_arg) \
__attribute__((format(printf, format_index, first_arg)))
#endif
+#endif
/**
* Tells compiler that the function returns a value that points to
@@ -222,7 +255,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
/**
* Hint never returning function
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_noreturn
+#else
#define __rte_noreturn __attribute__((noreturn))
+#endif
/**
* Issue a warning in case the function's return value is ignored.
@@ -247,12 +284,20 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
* }
* @endcode
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_warn_unused_result
+#else
#define __rte_warn_unused_result __attribute__((warn_unused_result))
+#endif
/**
* Force a function to be inlined
*/
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_always_inline
+#else
#define __rte_always_inline inline __attribute__((always_inline))
+#endif
/**
* Force a function to be noinlined
@@ -437,7 +482,11 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
#define RTE_CACHE_LINE_MIN_SIZE 64
/** Force alignment to cache line. */
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_cache_aligned
+#else
#define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+#endif
/** Force minimum cache line alignment. */
#define __rte_cache_min_aligned __rte_aligned(RTE_CACHE_LINE_MIN_SIZE)
@@ -812,6 +861,10 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
* struct wrapper *w = container_of(x, struct wrapper, c);
*/
#ifndef container_of
+#ifdef RTE_TOOLCHAIN_MSVC
+#define container_of(ptr, type, member) \
+ ((type *)((uintptr_t)(ptr) - offsetof(type, member)))
+#else
#define container_of(ptr, type, member) __extension__ ({ \
const typeof(((type *)0)->member) *_ptr = (ptr); \
__rte_unused type *_target_ptr = \
@@ -819,6 +872,7 @@ static void __attribute__((destructor(RTE_PRIO(prio)), used)) func(void)
(type *)(((uintptr_t)_ptr) - offsetof(type, member)); \
})
#endif
+#endif
/** Swap two variables. */
#define RTE_SWAP(a, b) \
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index fc9fbaa..716bc03 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -12,14 +12,22 @@
#ifndef ALLOW_EXPERIMENTAL_API
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_experimental
+#else
#define __rte_experimental \
__attribute__((deprecated("Symbol is not yet part of stable ABI"), \
section(".text.experimental")))
+#endif
#else
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_experimental
+#else
#define __rte_experimental \
__attribute__((section(".text.experimental")))
+#endif
#endif
@@ -30,23 +38,35 @@
#if !defined ALLOW_INTERNAL_API && __has_attribute(error) /* For GCC */
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_internal
+#else
#define __rte_internal \
__attribute__((error("Symbol is not public ABI"), \
section(".text.internal")))
+#endif
#elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_internal
+#else
#define __rte_internal \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
section(".text.internal"))) \
_Pragma("GCC diagnostic pop")
+#endif
#else
+#ifdef RTE_TOOLCHAIN_MSVC
+#define __rte_internal
+#else
#define __rte_internal \
__attribute__((section(".text.internal")))
+#endif
#endif
--
1.8.3.1
^ permalink raw reply [relevance 5%]
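The container_of() hunk above is the one case where the MSVC fallback is more than an empty expansion. As a sketch (stand-alone, with a hypothetical wrapper struct): the MSVC variant computes the same enclosing-struct address as the GNU version, but drops the typeof()-based compile-time type check, since typeof and statement expressions are GNU extensions.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* MSVC-compatible fallback: same pointer arithmetic, no type check. */
#define container_of(ptr, type, member) \
	((type *)((uintptr_t)(ptr) - offsetof(type, member)))

struct wrapper {
	long pad;
	int c;
};

/* Demo: recover the wrapper from a pointer to its member. */
static int container_of_demo(void)
{
	struct wrapper w;

	return container_of(&w.c, struct wrapper, c) == &w;
}
```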
* [PATCH v9 12/14] telemetry: avoid expanding versioned symbol macros on MSVC
2023-07-11 16:49 5% ` [PATCH v9 10/14] eal: expand most macros to empty when using MSVC Tyler Retzlaff
@ 2023-07-11 16:49 3% ` Tyler Retzlaff
1 sibling, 0 replies; 200+ results
From: Tyler Retzlaff @ 2023-07-11 16:49 UTC (permalink / raw)
To: dev
Cc: Bruce Richardson, Konstantin Ananyev, Ciara Power, thomas,
david.marchand, mb, Tyler Retzlaff
Windows does not support versioned symbols. Fortunately, Windows also
doesn't have an exported stable ABI.
Map the exported rte_tel_data_add_array_int -> rte_tel_data_add_array_int_v24
and rte_tel_data_add_dict_int -> rte_tel_data_add_dict_int_v24
functions.
Windows does have a way to achieve similar versioning for symbols, but it
is not a simple #define, so it will be done as a work package later.
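The shape of the workaround can be sketched like this (illustrative stand-in names, not the real telemetry functions): where ELF builds use a linker-level alias (MAP_STATIC_SYMBOL) to bind the public name to the default version, the MSVC path defines a plain C wrapper that forwards to the default-version implementation.

```c
#include <assert.h>

/* Stand-in for the versioned default implementation (..._v24). */
static int tel_add_int_v24(int d, int x)
{
	return d + x;
}

/* On MSVC: the exported name is an ordinary forwarding function
 * instead of a versioned-symbol alias. */
int tel_add_int(int d, int x)
{
	return tel_add_int_v24(d, x);
}
```

The extra call is trivially inlinable, so the cost of forwarding is negligible; the real limitation is that Windows binaries get only the default version, with no older-version compatibility symbol.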
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/telemetry/telemetry_data.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/lib/telemetry/telemetry_data.c b/lib/telemetry/telemetry_data.c
index 0c7187b..523287b 100644
--- a/lib/telemetry/telemetry_data.c
+++ b/lib/telemetry/telemetry_data.c
@@ -83,8 +83,16 @@
/* mark the v23 function as the older version, and v24 as the default version */
VERSION_SYMBOL(rte_tel_data_add_array_int, _v23, 23);
BIND_DEFAULT_SYMBOL(rte_tel_data_add_array_int, _v24, 24);
+#ifdef RTE_TOOLCHAIN_MSVC
+int
+rte_tel_data_add_array_int(struct rte_tel_data *d, int64_t x)
+{
+ return rte_tel_data_add_array_int_v24(d, x);
+}
+#else
MAP_STATIC_SYMBOL(int rte_tel_data_add_array_int(struct rte_tel_data *d,
int64_t x), rte_tel_data_add_array_int_v24);
+#endif
int
rte_tel_data_add_array_uint(struct rte_tel_data *d, uint64_t x)
@@ -218,8 +226,16 @@
/* mark the v23 function as the older version, and v24 as the default version */
VERSION_SYMBOL(rte_tel_data_add_dict_int, _v23, 23);
BIND_DEFAULT_SYMBOL(rte_tel_data_add_dict_int, _v24, 24);
+#ifdef RTE_TOOLCHAIN_MSVC
+int
+rte_tel_data_add_dict_int(struct rte_tel_data *d, const char *name, int64_t val)
+{
+ return rte_tel_data_add_dict_int_v24(d, name, val);
+}
+#else
MAP_STATIC_SYMBOL(int rte_tel_data_add_dict_int(struct rte_tel_data *d,
const char *name, int64_t val), rte_tel_data_add_dict_int_v24);
+#endif
int
rte_tel_data_add_dict_uint(struct rte_tel_data *d,
--
1.8.3.1
^ permalink raw reply [relevance 3%]
* Re: [PATCH v1 0/2] add IPv6 extension push remove
2023-07-10 14:41 3% ` Stephen Hemminger
@ 2023-07-11 6:16 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2023-07-11 6:16 UTC (permalink / raw)
To: Ferruh Yigit, Stephen Hemminger
Cc: Rongwei Liu, Ori Kam, Andrew Rybchenko, dev, Matan Azrad,
Slava Ovsiienko, Suanming Mou
10/07/2023 16:41, Stephen Hemminger:
> On Mon, 10 Jul 2023 09:55:59 +0100
> Ferruh Yigit <ferruh.yigit@amd.com> wrote:
>
> > On 7/10/2023 3:32 AM, Rongwei Liu wrote:
> > > Hi Ferruh & Andrew & Ori & Thomas:
> > Sorry, we can't commit the PMD implementation for the "IPv6 extension push remove" feature in time for this release.
> > There are some disagreements which need to be addressed internally.
> > We will continue to work on this and plan to push it in the next release.
> > >
> > > RFC link: https://patchwork.dpdk.org/project/dpdk/cover/20230417022630.2377505-1-rongweil@nvidia.com/
> > > V1 patch with full PMD implementation: https://patchwork.dpdk.org/project/dpdk/cover/20230417092540.2617450-1-rongweil@nvidia.com/
> > >
> >
> > Hi Rongwei,
> >
> > Thanks for the heads up.
> > As long as there is a plan to upstream driver implementation, I think it
> > is OK to keep ethdev change and wait for driver implementation for
> > better design instead of rushing it for this release with lower quality
> > (although target should be to have driver changes with same release with
> > API changes for future features).
>
> Please hold the change until the driver is ready.
> We don't want to deal with API/ABI changes when the driver is upstreamed.
> Also, no unused code, please.
There was a driver patch sent in April.
It was impossible to imagine it was not good enough to be merged.
^ permalink raw reply [relevance 0%]
* Re: [PATCH v1 0/2] add IPv6 extension push remove
@ 2023-07-10 14:41 3% ` Stephen Hemminger
2023-07-11 6:16 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-07-10 14:41 UTC (permalink / raw)
To: Ferruh Yigit
Cc: Rongwei Liu, Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL),
Andrew Rybchenko, dev, Matan Azrad, Slava Ovsiienko,
Suanming Mou
On Mon, 10 Jul 2023 09:55:59 +0100
Ferruh Yigit <ferruh.yigit@amd.com> wrote:
> On 7/10/2023 3:32 AM, Rongwei Liu wrote:
> > Hi Ferruh & Andrew & Ori & Thomas:
> > Sorry, we can't commit the PMD implementation for the "IPv6 extension push remove" feature in time for this release.
> > There are some disagreements which need to be addressed internally.
> > We will continue to work on this and plan to push it in the next release.
> >
> > RFC link: https://patchwork.dpdk.org/project/dpdk/cover/20230417022630.2377505-1-rongweil@nvidia.com/
> > V1 patch with full PMD implementation: https://patchwork.dpdk.org/project/dpdk/cover/20230417092540.2617450-1-rongweil@nvidia.com/
> >
>
> Hi Rongwei,
>
> Thanks for the heads up.
> As long as there is a plan to upstream driver implementation, I think it
> is OK to keep ethdev change and wait for driver implementation for
> better design instead of rushing it for this release with lower quality
> (although target should be to have driver changes with same release with
> API changes for future features).
Please hold the change until the driver is ready.
We don't want to deal with API/ABI changes when the driver is upstreamed.
Also, no unused code, please.
^ permalink raw reply [relevance 3%]
* Re: [PATCH v2 1/2] net/virtio: fix legacy device IO port map in secondary process
@ 2023-07-07 17:03 3% ` Gupta, Nipun
0 siblings, 0 replies; 200+ results
From: Gupta, Nipun @ 2023-07-07 17:03 UTC (permalink / raw)
To: Xia, Chenbo, David Marchand, Li, Miao, Maxime Coquelin; +Cc: dev, stable
On 7/3/2023 3:01 PM, Xia, Chenbo wrote:
> +Nipun
>
>> -----Original Message-----
>> From: David Marchand <david.marchand@redhat.com>
>> Sent: Monday, July 3, 2023 4:58 PM
>> To: Li, Miao <miao.li@intel.com>
>> Cc: dev@dpdk.org; stable@dpdk.org; Maxime Coquelin
>> <maxime.coquelin@redhat.com>; Xia, Chenbo <chenbo.xia@intel.com>
>> Subject: Re: [PATCH v2 1/2] net/virtio: fix legacy device IO port map in
>> secondary process
>>
>> On Mon, Jul 3, 2023 at 10:54 AM Li, Miao <miao.li@intel.com> wrote:
>>>>> When doing IO port map for legacy device in secondary process,
>>>>> vfio_cfg setup for legacy device like vfio_group_fd and vfio_dev_fd
>> is
>>>>> missing. So, in secondary process, rte_pci_map_device is added for
>>>>> legacy device to setup vfio_cfg and fill in region info like in
>>>>> primary process.
>>>>
>>>> I think, in legacy mode, there is no PCI mappable memory.
>>>> So there should be no need for this call to rte_pci_map_device.
>>>>
>>>> What is missing is a vfio setup, is this correct?
>>>> I'd rather see this issue be fixed in the pci_vfio_ioport_map()
>> function.
>>>>
>>> If adding vfio setup in the pci_vfio_ioport_map() function, vfio will be
>> setup twice in primary process because rte_pci_map_device will be called
>> for legacy device in primary process.
>>> I add IO port region check to skip region map in the next patch.
>>
>> Well, something must be done so that it is not mapped twice, I did not
>> look into the details.
>> This current patch looks wrong to me and I understand this is not a
>> virtio only issue.
>
> I think we could have some way to improve this:
>
> 1. Make rte_pci_map_device map either PIO or MMIO (Based on my knowledge, it's doable
> for vfio. For UIO, I am no expert and not sure). For ioport, it's only about setting
> up the ioport offset and size.
> 2. rte_pci_ioport_map may not be needed anymore.
> 3. struct rte_pci_ioport may not be needed anymore as the info could be saved in
> struct rte_pci_device_internal.
> 4. ioport device uses bar #, len, offset to RW specific BAR.
>
> Then for virtio device, either primary or secondary process only calls rte_pci_map_device
> once.
>
> Any comments?
Wouldn't a call to the rte_vfio_setup_device() API to set up vfio_cfg,
vfio_group_fd and vfio_dev_fd under a secondary-process check suffice to
handle the IO port map for a legacy device in a secondary process?
I do not have much info on legacy virtio devices, and I am not clear on
why and how rte_pci_map_device() would be called for these devices in
the case of the primary process, but not in the case of the secondary
process, as mentioned by Miao Li.
The steps you have mentioned look fine, but note that this would cause an
ABI breakage and, as you mentioned, may need changes in UIO (though I am
not an expert in UIO either).
Thanks,
Nipun
>
> Thanks,
> Chenbo
>
>>
>>
>> --
>> David Marchand
>
^ permalink raw reply [relevance 3%]
* Re: [PATCH v4 3/3] ring: add telemetry cmd for ring info
2023-07-06 8:52 3% ` David Marchand
@ 2023-07-07 2:18 0% ` Jie Hai
0 siblings, 0 replies; 200+ results
From: Jie Hai @ 2023-07-07 2:18 UTC (permalink / raw)
To: David Marchand, Thomas Monjalon
Cc: honnappa.nagarahalli, konstantin.v.ananyev, dev, liudongdong3,
bruce.richardson
On 2023/7/6 16:52, David Marchand wrote:
> On Tue, Jul 4, 2023 at 4:11 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>>
>> 04/07/2023 10:04, Jie Hai:
>>> On 2023/6/20 22:34, Thomas Monjalon wrote:
>>>> 20/06/2023 10:14, Jie Hai:
>>>>> On 2023/2/20 20:55, David Marchand wrote:
>>>>>> On Fri, Feb 10, 2023 at 3:50 AM Jie Hai <haijie1@huawei.com> wrote:
>>>>>>>
>>>>>>> This patch supports dump of ring information by its name.
>>>>>>> An example using this command is shown below:
>>>>>>>
>>>>>>> --> /ring/info,MP_mb_pool_0
>>>>>>> {
>>>>>>> "/ring/info": {
>>>>>>> "name": "MP_mb_pool_0",
>>>>>>> "socket": 0,
>>>>>>> "flags": "0x0",
>>>>>>> "producer_type": "MP",
>>>>>>> "consumer_type": "MC",
>>>>>>> "size": 262144,
>>>>>>> "mask": "0x3ffff",
>>>>>>> "capacity": 262143,
>>>>>>> "used_count": 153197,
>>>>>>> "consumer_tail": 2259,
>>>>>>> "consumer_head": 2259,
>>>>>>> "producer_tail": 155456,
>>>>>>> "producer_head": 155456,
>>>>>>
>>>>>> What would an external user make of such an information?
>>>>>>
>>>>>> I'd like to have a better idea what your usecase is.
>>>>>> If it is for debugging, well, gdb is probably a better candidate.
>>>>>>
>>>>>>
>>>>> Hi David,
>>>>> Thanks for your question and I'm sorry for getting back to you so late.
>>>>> There was a problem with my mailbox and I lost all my mails.
>>>>>
>>>>> The ring information exported by telemetry can be used to check the ring
>>>>> status periodically during normal use. When an error occurs, the fault
>>>>> cause can be deduced based on the information.
>>>>> GDB is more suitable for locating errors only when they are sure that
>>>>> errors will occur.
>>>>
>>>> Yes, when an error occurs, you can use GDB,
>>>> and you don't need all these internal values in telemetry.
>>>>
>>>>
>>> Hi, David, Thomas,
>>>
>>> Would it be better to delete the last four items?
>>> "consumer_tail": 2259,
>>> "consumer_head": 2259,
>>> "producer_tail": 155456,
>>> "producer_head": 155456,
>>
>> Yes it would be better.
>> David, other maintainers, would that make the telemetry command a good idea?
>>
>>
>
> Without the ring head/tail exposed, it seems ok.
> It still exposes the ring flags which are kind of internal things, but
> those are parts of the API/ABI, iiuc, so it should not be an issue.
>
>
Similar to the "name" and "size" of a ring, the "flags" of a ring are also
determined by user input. I think it's ok to expose them; users can use
them to check whether the configuration is as they want.
And proc-info also exposes this flag.
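The outcome of this thread, then, is an info payload that keeps the user-visible configuration and occupancy but omits the producer/consumer head and tail. A minimal stand-alone sketch of that reduced shape (a hypothetical struct and formatter, not the actual telemetry code, which uses the rte_tel_data API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Fields agreed to be safe to expose; head/tail are left out as
 * implementation internals better inspected with a debugger. */
struct ring_info {
	const char *name;
	unsigned int flags;
	unsigned int size;
	unsigned int used_count;
};

static int ring_info_to_json(const struct ring_info *r, char *buf, size_t len)
{
	return snprintf(buf, len,
		"{\"name\": \"%s\", \"flags\": \"0x%x\", "
		"\"size\": %u, \"used_count\": %u}",
		r->name, r->flags, r->size, r->used_count);
}

/* Demo: the output carries occupancy but no head/tail fields. */
static int ring_info_demo(void)
{
	char buf[128];
	struct ring_info r = { "MP_mb_pool_0", 0x0, 262144, 153197 };

	ring_info_to_json(&r, buf, sizeof(buf));
	return strstr(buf, "\"used_count\": 153197") != NULL &&
	       strstr(buf, "head") == NULL;
}
```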
^ permalink raw reply [relevance 0%]
* Re: [RFC v2 2/2] eal: add high-performance timer facility
@ 2023-07-06 22:41 3% ` Stephen Hemminger
2023-07-12 8:58 4% ` Mattias Rönnblom
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2023-07-06 22:41 UTC (permalink / raw)
To: Mattias Rönnblom
Cc: dev, Erik Gabriel Carrillo, David Marchand, maria.lingemark,
Stefan Sundkvist, Morten Brørup, Tyler Retzlaff
On Wed, 15 Mar 2023 18:03:42 +0100
Mattias Rönnblom <mattias.ronnblom@ericsson.com> wrote:
> The htimer library attempts to provide a timer facility with roughly
> the same functionality, but less overhead and better scalability than
> the DPDK timer library.
I don't understand. Why not just fix and extend the existing timers?
Sure, you will need to add some APIs and maybe drop some of the existing
experimental ones (i.e. alt_timer), or even change the ABI.
It would be better to have one high-performance, scalable timer than
spend the next 3 years telling users which one to use and why!
So please make rte_timer work better in the 23.11 release rather
than reinventing a new variant.
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v13 0/4] devtools: scripts to count and track symbols
@ 2023-07-06 19:13 0% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-07-06 19:13 UTC (permalink / raw)
To: Ray Kinsella
Cc: dev, bruce.richardson, ferruh.yigit, thomas, ktraynor, aconole,
roy.fan.zhang, arkadiuszx.kusztal, gakhil
On Thu, 9 Sep 2021 14:48:04 +0100
Ray Kinsella <mdr@ashroe.eu> wrote:
> The symbol-tool script reports on the growth of symbols over releases
> and list expired symbols. The notify-symbol-maintainers script
> consumes the input from symbol-tool and generates email notifications
> of expired symbols.
>
> v2: reworked to fix pylint errors
> v3: sent with the correct in-reply-to
> v4: fix typos picked up by the CI
> v5: fix terminal_size & directory args
> v6: added list-expired, to list expired experimental symbols
> v7: fix typo in comments
> v8: added tool to notify maintainers of expired symbols
> v9: removed hardcoded emails addressed and script names
> v10: added ability to identify and notify the original contributors
> v11: addressed feedback from Aaron Conole, including PEP8 errors.
> v12: added symbol-tool ignore functionality, to ignore specific symbols
> v13: renamed symboltool.abignore, typos, added ack from Akhil Goyal
>
> Ray Kinsella (4):
> devtools: script to track symbols over releases
> devtools: script to send notifications of expired symbols
> maintainers: add new abi scripts
> devtools: add asym crypto to symbol-tool ignore
Not sure why this never made it in.
Series-Acked-by: Stephen Hemminger <stephen@networkplumber.org>
^ permalink raw reply [relevance 0%]
* Re: [PATCH v4 3/3] ring: add telemetry cmd for ring info
@ 2023-07-06 8:52 3% ` David Marchand
2023-07-07 2:18 0% ` Jie Hai
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2023-07-06 8:52 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Jie Hai, honnappa.nagarahalli, konstantin.v.ananyev, dev,
liudongdong3, bruce.richardson
On Tue, Jul 4, 2023 at 4:11 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 04/07/2023 10:04, Jie Hai:
> > On 2023/6/20 22:34, Thomas Monjalon wrote:
> > > 20/06/2023 10:14, Jie Hai:
> > >> On 2023/2/20 20:55, David Marchand wrote:
> > >>> On Fri, Feb 10, 2023 at 3:50 AM Jie Hai <haijie1@huawei.com> wrote:
> > >>>>
> > >>>> This patch supports dump of ring information by its name.
> > >>>> An example using this command is shown below:
> > >>>>
> > >>>> --> /ring/info,MP_mb_pool_0
> > >>>> {
> > >>>> "/ring/info": {
> > >>>> "name": "MP_mb_pool_0",
> > >>>> "socket": 0,
> > >>>> "flags": "0x0",
> > >>>> "producer_type": "MP",
> > >>>> "consumer_type": "MC",
> > >>>> "size": 262144,
> > >>>> "mask": "0x3ffff",
> > >>>> "capacity": 262143,
> > >>>> "used_count": 153197,
> > >>>> "consumer_tail": 2259,
> > >>>> "consumer_head": 2259,
> > >>>> "producer_tail": 155456,
> > >>>> "producer_head": 155456,
> > >>>
> > >>> What would an external user make of such an information?
> > >>>
> > >>> I'd like to have a better idea what your usecase is.
> > >>> If it is for debugging, well, gdb is probably a better candidate.
> > >>>
> > >>>
> > >> Hi David,
> > >> Thanks for your question and I'm sorry for getting back to you so late.
> > >> There was a problem with my mailbox and I lost all my mails.
> > >>
> > >> The ring information exported by telemetry can be used to check the ring
> > >> status periodically during normal use. When an error occurs, the fault
> > >> cause can be deduced based on the information.
> > >> GDB is more suitable for locating errors only when they are sure that
> > >> errors will occur.
> > >
> > > Yes, when an error occurs, you can use GDB,
> > > and you don't need all these internal values in telemetry.
> > >
> > >
> > Hi, David, Thomas,
> >
> > Would it be better to delete the last four items?
> > "consumer_tail": 2259,
> > "consumer_head": 2259,
> > "producer_tail": 155456,
> > "producer_head": 155456,
>
> Yes it would be better.
> David, other maintainers, would that make the telemetry command a good idea?
>
>
Without the ring head/tail exposed, it seems ok.
It still exposes the ring flags which are kind of internal things, but
those are parts of the API/ABI, iiuc, so it should not be an issue.
--
David Marchand
^ permalink raw reply [relevance 3%]
* Re: [PATCH] ptp: replace terms master/slave
@ 2023-07-05 17:27 3% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-07-05 17:27 UTC (permalink / raw)
To: Ajit Khaparde; +Cc: dev, Somnath Kotur, Kirill Rybalchenko
On Fri, 19 May 2023 11:15:49 -0700
Stephen Hemminger <stephen@networkplumber.org> wrote:
> The IEEE has revised the naming in PTP protocol.
> Use these new terms to replace master and slave.
>
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
This patch has several acks but is still not merged.
Since it only touches examples, no API/ABI deprecation is needed.
^ permalink raw reply [relevance 3%]
* RE: [EXT] Re: [PATCH v2] doc: announce single-event enqueue/dequeue ABI change
2023-07-05 13:00 4% ` Jerin Jacob
@ 2023-07-05 13:02 4% ` Pavan Nikhilesh Bhagavatula
2023-07-28 15:51 4% ` Thomas Monjalon
2023-07-26 12:04 4% ` Jerin Jacob
1 sibling, 1 reply; 200+ results
From: Pavan Nikhilesh Bhagavatula @ 2023-07-05 13:02 UTC (permalink / raw)
To: Jerin Jacob, Mattias Rönnblom
Cc: Jerin Jacob Kollanukkaran, Thomas Monjalon, hofors, dev,
Timothy McDaniel, Hemant Agrawal, Sachin Saxena,
Harry van Haaren, Liang Ma, Peter Mccarthy
> On Wed, Jul 5, 2023 at 4:48 PM Mattias Rönnblom
> <mattias.ronnblom@ericsson.com> wrote:
> >
> > Announce the removal of the single-event enqueue and dequeue
> > operations from the eventdev ABI.
> >
> > Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>
> Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
>
> >
> > ---
> > PATCH v2: Fix commit subject prefix.
> > ---
> > doc/guides/rel_notes/deprecation.rst | 8 ++++++++
> > 1 file changed, 8 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> > index 66431789b0..ca192d838d 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -153,3 +153,11 @@ Deprecation Notices
> > The new port library API (functions rte_swx_port_*)
> > will gradually transition from experimental to stable status
> > starting with DPDK 23.07 release.
> > +
> > +* eventdev: The single-event (non-burst) enqueue and dequeue
> > + operations, used by static inline burst enqueue and dequeue
> > + functions in <rte_eventdev.h>, will be removed in DPDK 23.11. This
> > + simplification includes changing the layout and potentially also the
> > + size of the public rte_event_fp_ops struct, breaking the ABI. Since
> > + these functions are not called directly by the application, the API
> > + remains unaffected.
> > --
> > 2.34.1
> >
^ permalink raw reply [relevance 4%]
* Re: [PATCH v2] doc: announce single-event enqueue/dequeue ABI change
2023-07-05 11:12 13% ` [PATCH v2] doc: " Mattias Rönnblom
@ 2023-07-05 13:00 4% ` Jerin Jacob
2023-07-05 13:02 4% ` [EXT] " Pavan Nikhilesh Bhagavatula
2023-07-26 12:04 4% ` Jerin Jacob
0 siblings, 2 replies; 200+ results
From: Jerin Jacob @ 2023-07-05 13:00 UTC (permalink / raw)
To: Mattias Rönnblom
Cc: jerinj, Thomas Monjalon, hofors, dev, Pavan Nikhilesh,
Timothy McDaniel, Hemant Agrawal, Sachin Saxena,
Harry van Haaren, Liang Ma, Peter Mccarthy
On Wed, Jul 5, 2023 at 4:48 PM Mattias Rönnblom
<mattias.ronnblom@ericsson.com> wrote:
>
> Announce the removal of the single-event enqueue and dequeue
> operations from the eventdev ABI.
>
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
>
> ---
> PATCH v2: Fix commit subject prefix.
> ---
> doc/guides/rel_notes/deprecation.rst | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 66431789b0..ca192d838d 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -153,3 +153,11 @@ Deprecation Notices
> The new port library API (functions rte_swx_port_*)
> will gradually transition from experimental to stable status
> starting with DPDK 23.07 release.
> +
> +* eventdev: The single-event (non-burst) enqueue and dequeue
> + operations, used by static inline burst enqueue and dequeue
> + functions in <rte_eventdev.h>, will be removed in DPDK 23.11. This
> + simplification includes changing the layout and potentially also the
> + size of the public rte_event_fp_ops struct, breaking the ABI. Since
> + these functions are not called directly by the application, the API
> + remains unaffected.
> --
> 2.34.1
>
^ permalink raw reply [relevance 4%]
* Re: [PATCH] doc: announce ethdev operation struct changes
2023-07-04 8:10 3% [PATCH] doc: announce ethdev operation struct changes Feifei Wang
2023-07-04 8:17 0% ` Feifei Wang
@ 2023-07-05 11:32 0% ` Konstantin Ananyev
2023-07-13 7:52 0% ` Ferruh Yigit
2023-07-28 14:56 3% ` Thomas Monjalon
2 siblings, 1 reply; 200+ results
From: Konstantin Ananyev @ 2023-07-05 11:32 UTC (permalink / raw)
To: Feifei Wang; +Cc: dev, nd, Honnappa.Nagarahalli, Ruifeng Wang
04/07/2023 09:10, Feifei Wang wrote:
> To support mbufs recycle mode, announce the coming ABI changes
> from DPDK 23.11.
>
> Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 66431789b0..c7e1ffafb2 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -118,6 +118,10 @@ Deprecation Notices
> The legacy actions should be removed
> once ``MODIFY_FIELD`` alternative is implemented in drivers.
>
> +* ethdev: The Ethernet device data structure ``struct rte_eth_dev`` and
> + the fast-path ethdev flat array ``struct rte_eth_fp_ops`` will be updated
> + with new fields to support mbufs recycle mode from DPDK 23.11.
> +
> * cryptodev: The function ``rte_cryptodev_cb_fn`` will be updated
> to have another parameter ``qp_id`` to return the queue pair ID
> which got error interrupt to the application,
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
^ permalink raw reply [relevance 0%]
* Re: [PATCH v4 1/4] doc: announce new cpu flag added to rte_cpu_flag_t
@ 2023-07-05 11:32 0% ` Konstantin Ananyev
0 siblings, 0 replies; 200+ results
From: Konstantin Ananyev @ 2023-07-05 11:32 UTC (permalink / raw)
To: Sivaprasad Tummala, david.hunt; +Cc: dev, david.marchand, ferruh.yigit
18/04/2023 09:25, Sivaprasad Tummala wrote:
> A new flag RTE_CPUFLAG_MONITORX is added to rte_cpu_flag_t in the
> DPDK 23.07 release to support the monitorx instruction on EPYC processors.
> This results in ABI breakage for legacy apps.
>
> Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index dcc1ca1696..831713983f 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -163,3 +163,6 @@ Deprecation Notices
> The new port library API (functions rte_swx_port_*)
> will gradually transition from experimental to stable status
> starting with DPDK 23.07 release.
> +
> +* eal/x86: The enum ``rte_cpu_flag_t`` will be extended with a new cpu flag
> + ``RTE_CPUFLAG_MONITORX`` to support monitorx instruction on EPYC processors.
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
^ permalink raw reply [relevance 0%]
* [PATCH v2] doc: announce single-event enqueue/dequeue ABI change
2023-07-05 8:48 13% [PATCH] eventdev: announce single-event enqueue/dequeue ABI change Mattias Rönnblom
@ 2023-07-05 11:12 13% ` Mattias Rönnblom
2023-07-05 13:00 4% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Mattias Rönnblom @ 2023-07-05 11:12 UTC (permalink / raw)
To: jerinj, Thomas Monjalon
Cc: Jerin Jacob, hofors, dev, Pavan Nikhilesh, Timothy McDaniel,
Hemant Agrawal, Sachin Saxena, Harry van Haaren, Liang Ma,
Peter Mccarthy, Mattias Rönnblom
Announce the removal of the single-event enqueue and dequeue
operations from the eventdev ABI.
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
PATCH v2: Fix commit subject prefix.
---
doc/guides/rel_notes/deprecation.rst | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 66431789b0..ca192d838d 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -153,3 +153,11 @@ Deprecation Notices
The new port library API (functions rte_swx_port_*)
will gradually transition from experimental to stable status
starting with DPDK 23.07 release.
+
+* eventdev: The single-event (non-burst) enqueue and dequeue
+ operations, used by static inline burst enqueue and dequeue
+ functions in <rte_eventdev.h>, will be removed in DPDK 23.11. This
+ simplification includes changing the layout and potentially also the
+ size of the public rte_event_fp_ops struct, breaking the ABI. Since
+ these functions are not called directly by the application, the API
+ remains unaffected.
--
2.34.1
^ permalink raw reply [relevance 13%]
* [PATCH] eventdev: announce single-event enqueue/dequeue ABI change
@ 2023-07-05 8:48 13% Mattias Rönnblom
2023-07-05 11:12 13% ` [PATCH v2] doc: " Mattias Rönnblom
0 siblings, 1 reply; 200+ results
From: Mattias Rönnblom @ 2023-07-05 8:48 UTC (permalink / raw)
To: jerinj
Cc: Jerin Jacob, hofors, dev, Pavan Nikhilesh, Timothy McDaniel,
Hemant Agrawal, Sachin Saxena, Harry van Haaren, Liang Ma,
Peter Mccarthy, Mattias Rönnblom
Announce the removal of the single-event enqueue and dequeue
operations from the eventdev ABI.
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
doc/guides/rel_notes/deprecation.rst | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 66431789b0..ca192d838d 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -153,3 +153,11 @@ Deprecation Notices
The new port library API (functions rte_swx_port_*)
will gradually transition from experimental to stable status
starting with DPDK 23.07 release.
+
+* eventdev: The single-event (non-burst) enqueue and dequeue
+ operations, used by static inline burst enqueue and dequeue
+ functions in <rte_eventdev.h>, will be removed in DPDK 23.11. This
+ simplification includes changing the layout and potentially also the
+ size of the public rte_event_fp_ops struct, breaking the ABI. Since
+ these functions are not called directly by the application, the API
+ remains unaffected.
--
2.34.1
^ permalink raw reply [relevance 13%]
* Re: [PATCH] eventdev: remove single-event enqueue operation
2023-07-05 7:47 0% ` Jerin Jacob
@ 2023-07-05 8:41 0% ` Mattias Rönnblom
0 siblings, 0 replies; 200+ results
From: Mattias Rönnblom @ 2023-07-05 8:41 UTC (permalink / raw)
To: Jerin Jacob, Mattias Rönnblom
Cc: jerinj, dev, Pavan Nikhilesh, Timothy McDaniel, Hemant Agrawal,
Sachin Saxena, Harry van Haaren, Liang Ma, Peter Mccarthy
On 2023-07-05 09:47, Jerin Jacob wrote:
> On Tue, Jul 4, 2023 at 5:29 PM Mattias Rönnblom
> <mattias.ronnblom@ericsson.com> wrote:
>>
>> Eliminate non-burst enqueue operation from Eventdev.
>>
>> The effect of this change is to reduce Eventdev code complexity
>> somewhat and slightly improve performance.
>>
>> The single-event enqueue shortcut provided a very minor performance
>> advantage in some situations (e.g., with a compile time-constant burst
>> size of '1'), but would in other situations cause a noticeable
>> performance penalty (e.g., rte_event_enqueue_forward_burst() with run
>> time-variable burst sizes varying between '1' and larger burst sizes).
>>
>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>>
>> --
>>
>> PATCH: Add ABI deprecation notice.
>
>
> Need to split this patch, as only the deprecation notice will be merged
> in this release.
> Example: https://patches.dpdk.org/project/dpdk/patch/20230704194445.3332847-1-gakhil@marvell.com/
>
> I think we need to remove the single dequeue as well. So I think we
> can write a generic deprecation notice which says the size of struct
> rte_event_fp_ops will be changed by removing the single enqueue and
> dequeue operations and resizing the reserved fields. Later we can
> analyse the performance impact when the implementation patch is ready.
> For now, let's make the deprecation notice for this release.
OK, sounds good.
The size may be the same, but the layout will be different.
^ permalink raw reply [relevance 0%]
* Re: [PATCH] eventdev: remove single-event enqueue operation
2023-07-04 11:53 4% ` [PATCH] " Mattias Rönnblom
@ 2023-07-05 7:47 0% ` Jerin Jacob
2023-07-05 8:41 0% ` Mattias Rönnblom
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2023-07-05 7:47 UTC (permalink / raw)
To: Mattias Rönnblom
Cc: jerinj, hofors, dev, Pavan Nikhilesh, Timothy McDaniel,
Hemant Agrawal, Sachin Saxena, Harry van Haaren, Liang Ma,
Peter Mccarthy
On Tue, Jul 4, 2023 at 5:29 PM Mattias Rönnblom
<mattias.ronnblom@ericsson.com> wrote:
>
> Eliminate non-burst enqueue operation from Eventdev.
>
> The effect of this change is to reduce Eventdev code complexity
> somewhat and slightly improve performance.
>
> The single-event enqueue shortcut provided a very minor performance
> advantage in some situations (e.g., with a compile time-constant burst
> size of '1'), but would in other situations cause a noticeable
> performance penalty (e.g., rte_event_enqueue_forward_burst() with run
> time-variable burst sizes varying between '1' and larger burst sizes).
>
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>
> --
>
> PATCH: Add ABI deprecation notice.
Need to split this patch, as only the deprecation notice will be merged
in this release.
Example: https://patches.dpdk.org/project/dpdk/patch/20230704194445.3332847-1-gakhil@marvell.com/
I think we need to remove the single dequeue as well. So I think we
can write a generic deprecation notice which says the size of struct
rte_event_fp_ops will be changed by removing the single enqueue and
dequeue operations and resizing the reserved fields. Later we can
analyse the performance impact when the implementation patch is ready.
For now, let's make the deprecation notice for this release.
^ permalink raw reply [relevance 0%]
* Re: [RFC] eventdev: remove single-event enqueue operation
2023-06-30 4:37 3% ` Jerin Jacob
@ 2023-07-04 12:01 0% ` Mattias Rönnblom
0 siblings, 0 replies; 200+ results
From: Mattias Rönnblom @ 2023-07-04 12:01 UTC (permalink / raw)
To: Jerin Jacob, Mattias Rönnblom
Cc: jerinj, dev, Pavan Nikhilesh, Timothy McDaniel, Hemant Agrawal,
Sachin Saxena, Harry van Haaren, Liang Ma, Peter Mccarthy
On 2023-06-30 06:37, Jerin Jacob wrote:
> On Fri, Jun 9, 2023 at 11:18 PM Mattias Rönnblom
> <mattias.ronnblom@ericsson.com> wrote:
>>
>> Eliminate non-burst enqueue operation from Eventdev.
>>
>> The effect of this change is to reduce Eventdev code complexity
>> somewhat and slightly improve performance.
>>
>> The single-event enqueue shortcut provided a very minor performance
>> advantage in some situations (e.g., with a compile time-constant burst
>> size of '1'), but would in other situations cause a noticeable
>> performance penalty (e.g., rte_event_enqueue_forward_burst() with run
>> time-variable burst sizes varying between '1' and larger burst sizes).
>>
>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>
>>
>> -typedef uint16_t (*event_enqueue_t)(void *port, const struct rte_event *ev);
>> -/**< @internal Enqueue event on port of a device */
>> -
>> typedef uint16_t (*event_enqueue_burst_t)(void *port,
>> const struct rte_event ev[],
>> uint16_t nb_events);
>> @@ -45,8 +42,6 @@ typedef uint16_t (*event_crypto_adapter_enqueue_t)(void *port,
>> struct rte_event_fp_ops {
>> void **data;
>> /**< points to array of internal port data pointers */
>> - event_enqueue_t enqueue;
>> - /**< PMD enqueue function. */
>
> Can we remove "dequeue" as well?
Seems likely, but I have no data on that option.
> In any event, please send a deprecation notice, as it is an ABI change,
> and we need to get the deprecation notice patch merged for v23.07.
> I can review the deprecation notice patch quickly as soon as you send
> it to make forward progress.
>
OK.
>
>> event_enqueue_burst_t enqueue_burst;
>> /**< PMD enqueue burst function. */
>> event_enqueue_burst_t enqueue_new_burst;
>> @@ -65,7 +60,7 @@ struct rte_event_fp_ops {
>> /**< PMD Tx adapter enqueue same destination function. */
>> event_crypto_adapter_enqueue_t ca_enqueue;
>> /**< PMD Crypto adapter enqueue function. */
>> - uintptr_t reserved[6];
>> + uintptr_t reserved[7];
>> } __rte_cache_aligned;
>>
^ permalink raw reply [relevance 0%]
* [PATCH] eventdev: remove single-event enqueue operation
2023-06-30 4:37 3% ` Jerin Jacob
@ 2023-07-04 11:53 4% ` Mattias Rönnblom
2023-07-05 7:47 0% ` Jerin Jacob
1 sibling, 1 reply; 200+ results
From: Mattias Rönnblom @ 2023-07-04 11:53 UTC (permalink / raw)
To: jerinj
Cc: Jerin Jacob, hofors, dev, Pavan Nikhilesh, Timothy McDaniel,
Hemant Agrawal, Sachin Saxena, Harry van Haaren, Liang Ma,
Peter Mccarthy, Mattias Rönnblom
Eliminate non-burst enqueue operation from Eventdev.
The effect of this change is to reduce Eventdev code complexity
somewhat and slightly improve performance.
The single-event enqueue shortcut provided a very minor performance
advantage in some situations (e.g., with a compile time-constant burst
size of '1'), but would in other situations cause a noticeable
performance penalty (e.g., rte_event_enqueue_forward_burst() with run
time-variable burst sizes varying between '1' and larger burst sizes).
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
--
PATCH: Add ABI deprecation notice.
---
doc/guides/rel_notes/deprecation.rst | 4 ++
drivers/event/cnxk/cn10k_eventdev.c | 1 -
drivers/event/cnxk/cn10k_worker.c | 49 ++++++++++------------
drivers/event/cnxk/cn10k_worker.h | 1 -
drivers/event/cnxk/cn9k_eventdev.c | 2 -
drivers/event/cnxk/cn9k_worker.c | 27 ++++--------
drivers/event/cnxk/cn9k_worker.h | 1 -
drivers/event/dlb2/dlb2.c | 13 ------
drivers/event/dpaa/dpaa_eventdev.c | 7 ----
drivers/event/dpaa2/dpaa2_eventdev.c | 7 ----
drivers/event/dsw/dsw_evdev.c | 1 -
drivers/event/dsw/dsw_evdev.h | 1 -
drivers/event/dsw/dsw_event.c | 6 ---
drivers/event/octeontx/ssovf_worker.c | 14 ++-----
drivers/event/opdl/opdl_evdev.c | 13 ------
drivers/event/opdl/opdl_evdev.h | 1 -
drivers/event/skeleton/skeleton_eventdev.c | 14 -------
drivers/event/sw/sw_evdev.c | 1 -
drivers/event/sw/sw_evdev.h | 1 -
drivers/event/sw/sw_evdev_worker.c | 6 ---
lib/eventdev/eventdev_pmd.h | 2 -
lib/eventdev/eventdev_private.c | 11 -----
lib/eventdev/rte_eventdev.h | 10 +----
lib/eventdev/rte_eventdev_core.h | 7 +---
24 files changed, 42 insertions(+), 158 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 66431789b0..badf011ab2 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -153,3 +153,7 @@ Deprecation Notices
The new port library API (functions rte_swx_port_*)
will gradually transition from experimental to stable status
starting with DPDK 23.07 release.
+
+* eventdev: The single-event enqueue operation, used by static inline
+ burst-enqueue functions in <rte_eventdev.h>, has been removed,
+ breaking the ABI. The API remains unaffected.
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 499a3aace7..51b2345269 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -412,7 +412,6 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
#undef T
};
- event_dev->enqueue = cn10k_sso_hws_enq;
event_dev->enqueue_burst = cn10k_sso_hws_enq_burst;
event_dev->enqueue_new_burst = cn10k_sso_hws_enq_new_burst;
event_dev->enqueue_forward_burst = cn10k_sso_hws_enq_fwd_burst;
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
index 9b5bf90159..62dd8e5c5d 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -107,32 +107,6 @@ sso_lmt_aw_wait_fc(struct cn10k_sso_hws *ws, int64_t req)
}
}
-uint16_t __rte_hot
-cn10k_sso_hws_enq(void *port, const struct rte_event *ev)
-{
- struct cn10k_sso_hws *ws = port;
-
- switch (ev->op) {
- case RTE_EVENT_OP_NEW:
- return cn10k_sso_hws_new_event(ws, ev);
- case RTE_EVENT_OP_FORWARD:
- cn10k_sso_hws_forward_event(ws, ev);
- break;
- case RTE_EVENT_OP_RELEASE:
- if (ws->swtag_req) {
- cnxk_sso_hws_desched(ev->u64, ws->base);
- ws->swtag_req = 0;
- break;
- }
- cnxk_sso_hws_swtag_flush(ws->base);
- break;
- default:
- return 0;
- }
-
- return 1;
-}
-
#define VECTOR_SIZE_BITS 0xFFFFFFFFFFF80000ULL
#define VECTOR_GET_LINE_OFFSET(line) (19 + (3 * line))
@@ -384,8 +358,29 @@ uint16_t __rte_hot
cn10k_sso_hws_enq_burst(void *port, const struct rte_event ev[],
uint16_t nb_events)
{
+ struct cn10k_sso_hws *ws = port;
+
RTE_SET_USED(nb_events);
- return cn10k_sso_hws_enq(port, ev);
+
+ switch (ev->op) {
+ case RTE_EVENT_OP_NEW:
+ return cn10k_sso_hws_new_event(ws, ev);
+ case RTE_EVENT_OP_FORWARD:
+ cn10k_sso_hws_forward_event(ws, ev);
+ break;
+ case RTE_EVENT_OP_RELEASE:
+ if (ws->swtag_req) {
+ cnxk_sso_hws_desched(ev->u64, ws->base);
+ ws->swtag_req = 0;
+ break;
+ }
+ cnxk_sso_hws_swtag_flush(ws->base);
+ break;
+ default:
+ return 0;
+ }
+
+ return 1;
}
uint16_t __rte_hot
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index b4ee023723..c0db92f3a8 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -306,7 +306,6 @@ cn10k_sso_hws_get_work_empty(struct cn10k_sso_hws *ws, struct rte_event *ev,
}
/* CN10K Fastpath functions. */
-uint16_t __rte_hot cn10k_sso_hws_enq(void *port, const struct rte_event *ev);
uint16_t __rte_hot cn10k_sso_hws_enq_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 6cce5477f0..fb967635af 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -434,7 +434,6 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
#undef T
};
- event_dev->enqueue = cn9k_sso_hws_enq;
event_dev->enqueue_burst = cn9k_sso_hws_enq_burst;
event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst;
event_dev->enqueue_forward_burst = cn9k_sso_hws_enq_fwd_burst;
@@ -469,7 +468,6 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
sso_hws_tx_adptr_enq);
if (dev->dual_ws) {
- event_dev->enqueue = cn9k_sso_hws_dual_enq;
event_dev->enqueue_burst = cn9k_sso_hws_dual_enq_burst;
event_dev->enqueue_new_burst = cn9k_sso_hws_dual_enq_new_burst;
event_dev->enqueue_forward_burst =
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
index abbbfffd85..fa5924e113 100644
--- a/drivers/event/cnxk/cn9k_worker.c
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -8,10 +8,13 @@
#include "cn9k_cryptodev_ops.h"
uint16_t __rte_hot
-cn9k_sso_hws_enq(void *port, const struct rte_event *ev)
+cn9k_sso_hws_enq_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
{
struct cn9k_sso_hws *ws = port;
+ RTE_SET_USED(nb_events);
+
switch (ev->op) {
case RTE_EVENT_OP_NEW:
return cn9k_sso_hws_new_event(ws, ev);
@@ -33,14 +36,6 @@ cn9k_sso_hws_enq(void *port, const struct rte_event *ev)
return 1;
}
-uint16_t __rte_hot
-cn9k_sso_hws_enq_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- RTE_SET_USED(nb_events);
- return cn9k_sso_hws_enq(port, ev);
-}
-
uint16_t __rte_hot
cn9k_sso_hws_enq_new_burst(void *port, const struct rte_event ev[],
uint16_t nb_events)
@@ -66,14 +61,18 @@ cn9k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
return 1;
}
+
/* Dual ws ops. */
uint16_t __rte_hot
-cn9k_sso_hws_dual_enq(void *port, const struct rte_event *ev)
+cn9k_sso_hws_dual_enq_burst(void *port, const struct rte_event ev[],
+ uint16_t nb_events)
{
struct cn9k_sso_hws_dual *dws = port;
uint64_t base;
+ RTE_SET_USED(nb_events);
+
base = dws->base[!dws->vws];
switch (ev->op) {
case RTE_EVENT_OP_NEW:
@@ -96,14 +95,6 @@ cn9k_sso_hws_dual_enq(void *port, const struct rte_event *ev)
return 1;
}
-uint16_t __rte_hot
-cn9k_sso_hws_dual_enq_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- RTE_SET_USED(nb_events);
- return cn9k_sso_hws_dual_enq(port, ev);
-}
-
uint16_t __rte_hot
cn9k_sso_hws_dual_enq_new_burst(void *port, const struct rte_event ev[],
uint16_t nb_events)
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index 9ddab095ac..12426d58bc 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -365,7 +365,6 @@ cn9k_sso_hws_get_work_empty(uint64_t base, struct rte_event *ev,
}
/* CN9K Fastpath functions. */
-uint16_t __rte_hot cn9k_sso_hws_enq(void *port, const struct rte_event *ev);
uint16_t __rte_hot cn9k_sso_hws_enq_burst(void *port,
const struct rte_event ev[],
uint16_t nb_events);
diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 60c5cd4804..e7fe0ba576 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -1502,10 +1502,6 @@ dlb2_init_qe_mem(struct dlb2_port *qm_port, char *mz_name)
return ret;
}
-static inline uint16_t
-dlb2_event_enqueue_delayed(void *event_port,
- const struct rte_event events[]);
-
static inline uint16_t
dlb2_event_enqueue_burst_delayed(void *event_port,
const struct rte_event events[],
@@ -1697,7 +1693,6 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
* performance reasons.
*/
if (qm_port->token_pop_mode == DELAYED_POP) {
- dlb2->event_dev->enqueue = dlb2_event_enqueue_delayed;
dlb2->event_dev->enqueue_burst =
dlb2_event_enqueue_burst_delayed;
dlb2->event_dev->enqueue_new_burst =
@@ -3141,13 +3136,6 @@ dlb2_event_enqueue_burst_delayed(void *event_port,
return __dlb2_event_enqueue_burst(event_port, events, num, true);
}
-static inline uint16_t
-dlb2_event_enqueue(void *event_port,
- const struct rte_event events[])
-{
- return __dlb2_event_enqueue_burst(event_port, events, 1, false);
-}
-
static inline uint16_t
dlb2_event_enqueue_delayed(void *event_port,
const struct rte_event events[])
@@ -4585,7 +4573,6 @@ dlb2_entry_points_init(struct rte_eventdev *dev)
/* Expose PMD's eventdev interface */
dev->dev_ops = &dlb2_eventdev_entry_ops;
- dev->enqueue = dlb2_event_enqueue;
dev->enqueue_burst = dlb2_event_enqueue_burst;
dev->enqueue_new_burst = dlb2_event_enqueue_new_burst;
dev->enqueue_forward_burst = dlb2_event_enqueue_forward_burst;
diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c
index 4b3d16735b..8809f2ecd9 100644
--- a/drivers/event/dpaa/dpaa_eventdev.c
+++ b/drivers/event/dpaa/dpaa_eventdev.c
@@ -112,12 +112,6 @@ dpaa_event_enqueue_burst(void *port, const struct rte_event ev[],
return nb_events;
}
-static uint16_t
-dpaa_event_enqueue(void *port, const struct rte_event *ev)
-{
- return dpaa_event_enqueue_burst(port, ev, 1);
-}
-
static void drain_4_bytes(int fd, fd_set *fdset)
{
if (FD_ISSET(fd, fdset)) {
@@ -1008,7 +1002,6 @@ dpaa_event_dev_create(const char *name, const char *params)
priv = eventdev->data->dev_private;
eventdev->dev_ops = &dpaa_eventdev_ops;
- eventdev->enqueue = dpaa_event_enqueue;
eventdev->enqueue_burst = dpaa_event_enqueue_burst;
if (dpaa_event_check_flags(params)) {
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index fa1a1ade80..de08fa1b78 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -202,12 +202,6 @@ dpaa2_eventdev_enqueue_burst(void *port, const struct rte_event ev[],
}
-static uint16_t
-dpaa2_eventdev_enqueue(void *port, const struct rte_event *ev)
-{
- return dpaa2_eventdev_enqueue_burst(port, ev, 1);
-}
-
static void dpaa2_eventdev_dequeue_wait(uint64_t timeout_ticks)
{
struct epoll_event epoll_ev;
@@ -1103,7 +1097,6 @@ dpaa2_eventdev_create(const char *name)
}
eventdev->dev_ops = &dpaa2_eventdev_ops;
- eventdev->enqueue = dpaa2_eventdev_enqueue;
eventdev->enqueue_burst = dpaa2_eventdev_enqueue_burst;
eventdev->enqueue_new_burst = dpaa2_eventdev_enqueue_burst;
eventdev->enqueue_forward_burst = dpaa2_eventdev_enqueue_burst;
diff --git a/drivers/event/dsw/dsw_evdev.c b/drivers/event/dsw/dsw_evdev.c
index 6c5cde2468..f3bcacfaf8 100644
--- a/drivers/event/dsw/dsw_evdev.c
+++ b/drivers/event/dsw/dsw_evdev.c
@@ -439,7 +439,6 @@ dsw_probe(struct rte_vdev_device *vdev)
return -EFAULT;
dev->dev_ops = &dsw_evdev_ops;
- dev->enqueue = dsw_event_enqueue;
dev->enqueue_burst = dsw_event_enqueue_burst;
dev->enqueue_new_burst = dsw_event_enqueue_new_burst;
dev->enqueue_forward_burst = dsw_event_enqueue_forward_burst;
diff --git a/drivers/event/dsw/dsw_evdev.h b/drivers/event/dsw/dsw_evdev.h
index 6416a8a898..ca5d4714b0 100644
--- a/drivers/event/dsw/dsw_evdev.h
+++ b/drivers/event/dsw/dsw_evdev.h
@@ -263,7 +263,6 @@ struct dsw_ctl_msg {
struct dsw_queue_flow qfs[DSW_MAX_FLOWS_PER_MIGRATION];
} __rte_aligned(4);
-uint16_t dsw_event_enqueue(void *port, const struct rte_event *event);
uint16_t dsw_event_enqueue_burst(void *port,
const struct rte_event events[],
uint16_t events_len);
diff --git a/drivers/event/dsw/dsw_event.c b/drivers/event/dsw/dsw_event.c
index 93bbeead2e..1a4ea6629c 100644
--- a/drivers/event/dsw/dsw_event.c
+++ b/drivers/event/dsw/dsw_event.c
@@ -1242,12 +1242,6 @@ dsw_port_flush_out_buffers(struct dsw_evdev *dsw, struct dsw_port *source_port)
dsw_port_transmit_buffered(dsw, source_port, dest_port_id);
}
-uint16_t
-dsw_event_enqueue(void *port, const struct rte_event *ev)
-{
- return dsw_event_enqueue_burst(port, ev, unlikely(ev == NULL) ? 0 : 1);
-}
-
static __rte_always_inline uint16_t
dsw_event_enqueue_burst_generic(struct dsw_port *source_port,
const struct rte_event events[],
diff --git a/drivers/event/octeontx/ssovf_worker.c b/drivers/event/octeontx/ssovf_worker.c
index 36454939ea..2b0e255499 100644
--- a/drivers/event/octeontx/ssovf_worker.c
+++ b/drivers/event/octeontx/ssovf_worker.c
@@ -148,12 +148,14 @@ ssows_deq_timeout_burst_ ##name(void *port, struct rte_event ev[], \
SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
#undef R
-__rte_always_inline uint16_t __rte_hot
-ssows_enq(void *port, const struct rte_event *ev)
+uint16_t __rte_hot
+ssows_enq_burst(void *port, const struct rte_event ev[], uint16_t nb_events)
{
struct ssows *ws = port;
uint16_t ret = 1;
+ RTE_SET_USED(nb_events);
+
switch (ev->op) {
case RTE_EVENT_OP_NEW:
rte_smp_wmb();
@@ -171,13 +173,6 @@ ssows_enq(void *port, const struct rte_event *ev)
return ret;
}
-uint16_t __rte_hot
-ssows_enq_burst(void *port, const struct rte_event ev[], uint16_t nb_events)
-{
- RTE_SET_USED(nb_events);
- return ssows_enq(port, ev);
-}
-
uint16_t __rte_hot
ssows_enq_new_burst(void *port, const struct rte_event ev[], uint16_t nb_events)
{
@@ -336,7 +331,6 @@ ssovf_fastpath_fns_set(struct rte_eventdev *dev)
{
struct ssovf_evdev *edev = ssovf_pmd_priv(dev);
- dev->enqueue = ssows_enq;
dev->enqueue_burst = ssows_enq_burst;
dev->enqueue_new_burst = ssows_enq_new_burst;
dev->enqueue_forward_burst = ssows_enq_fwd_burst;
diff --git a/drivers/event/opdl/opdl_evdev.c b/drivers/event/opdl/opdl_evdev.c
index 9ce8b39b60..6bde153514 100644
--- a/drivers/event/opdl/opdl_evdev.c
+++ b/drivers/event/opdl/opdl_evdev.c
@@ -41,18 +41,6 @@ opdl_event_enqueue_burst(void *port,
return p->enq(p, ev, num);
}
-uint16_t
-opdl_event_enqueue(void *port, const struct rte_event *ev)
-{
- struct opdl_port *p = port;
-
- if (unlikely(!p->opdl->data->dev_started))
- return 0;
-
-
- return p->enq(p, ev, 1);
-}
-
uint16_t
opdl_event_dequeue_burst(void *port,
struct rte_event *ev,
@@ -714,7 +702,6 @@ opdl_probe(struct rte_vdev_device *vdev)
dev->dev_ops = &evdev_opdl_ops;
- dev->enqueue = opdl_event_enqueue;
dev->enqueue_burst = opdl_event_enqueue_burst;
dev->enqueue_new_burst = opdl_event_enqueue_burst;
dev->enqueue_forward_burst = opdl_event_enqueue_burst;
diff --git a/drivers/event/opdl/opdl_evdev.h b/drivers/event/opdl/opdl_evdev.h
index 1ca166b37c..1bf862cfff 100644
--- a/drivers/event/opdl/opdl_evdev.h
+++ b/drivers/event/opdl/opdl_evdev.h
@@ -275,7 +275,6 @@ opdl_pmd_priv_const(const struct rte_eventdev *eventdev)
return eventdev->data->dev_private;
}
-uint16_t opdl_event_enqueue(void *port, const struct rte_event *ev);
uint16_t opdl_event_enqueue_burst(void *port, const struct rte_event ev[],
uint16_t num);
diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
index 8513b9a013..b31c902d42 100644
--- a/drivers/event/skeleton/skeleton_eventdev.c
+++ b/drivers/event/skeleton/skeleton_eventdev.c
@@ -25,18 +25,6 @@
#define EVENTDEV_NAME_SKELETON_PMD event_skeleton
/**< Skeleton event device PMD name */
-static uint16_t
-skeleton_eventdev_enqueue(void *port, const struct rte_event *ev)
-{
- struct skeleton_port *sp = port;
-
- RTE_SET_USED(sp);
- RTE_SET_USED(ev);
- RTE_SET_USED(port);
-
- return 0;
-}
-
static uint16_t
skeleton_eventdev_enqueue_burst(void *port, const struct rte_event ev[],
uint16_t nb_events)
@@ -349,7 +337,6 @@ skeleton_eventdev_init(struct rte_eventdev *eventdev)
PMD_DRV_FUNC_TRACE();
eventdev->dev_ops = &skeleton_eventdev_ops;
- eventdev->enqueue = skeleton_eventdev_enqueue;
eventdev->enqueue_burst = skeleton_eventdev_enqueue_burst;
eventdev->dequeue = skeleton_eventdev_dequeue;
eventdev->dequeue_burst = skeleton_eventdev_dequeue_burst;
@@ -439,7 +426,6 @@ skeleton_eventdev_create(const char *name, int socket_id)
}
eventdev->dev_ops = &skeleton_eventdev_ops;
- eventdev->enqueue = skeleton_eventdev_enqueue;
eventdev->enqueue_burst = skeleton_eventdev_enqueue_burst;
eventdev->dequeue = skeleton_eventdev_dequeue;
eventdev->dequeue_burst = skeleton_eventdev_dequeue_burst;
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index cfd659d774..7655505b7c 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -1080,7 +1080,6 @@ sw_probe(struct rte_vdev_device *vdev)
return -EFAULT;
}
dev->dev_ops = &evdev_sw_ops;
- dev->enqueue = sw_event_enqueue;
dev->enqueue_burst = sw_event_enqueue_burst;
dev->enqueue_new_burst = sw_event_enqueue_burst;
dev->enqueue_forward_burst = sw_event_enqueue_burst;
diff --git a/drivers/event/sw/sw_evdev.h b/drivers/event/sw/sw_evdev.h
index c7b943a72b..110724d52d 100644
--- a/drivers/event/sw/sw_evdev.h
+++ b/drivers/event/sw/sw_evdev.h
@@ -288,7 +288,6 @@ sw_pmd_priv_const(const struct rte_eventdev *eventdev)
return eventdev->data->dev_private;
}
-uint16_t sw_event_enqueue(void *port, const struct rte_event *ev);
uint16_t sw_event_enqueue_burst(void *port, const struct rte_event ev[],
uint16_t num);
diff --git a/drivers/event/sw/sw_evdev_worker.c b/drivers/event/sw/sw_evdev_worker.c
index 063b919c7e..f041bae2a0 100644
--- a/drivers/event/sw/sw_evdev_worker.c
+++ b/drivers/event/sw/sw_evdev_worker.c
@@ -131,12 +131,6 @@ sw_event_enqueue_burst(void *port, const struct rte_event ev[], uint16_t num)
return enq;
}
-uint16_t
-sw_event_enqueue(void *port, const struct rte_event *ev)
-{
- return sw_event_enqueue_burst(port, ev, 1);
-}
-
uint16_t
sw_event_dequeue_burst(void *port, struct rte_event *ev, uint16_t num,
uint64_t wait)
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index c68c3a2262..fab0bf501b 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -159,8 +159,6 @@ struct rte_eventdev {
uint8_t attached : 1;
/**< Flag indicating the device is attached */
- event_enqueue_t enqueue;
- /**< Pointer to PMD enqueue function. */
event_enqueue_burst_t enqueue_burst;
/**< Pointer to PMD enqueue burst function. */
event_enqueue_burst_t enqueue_new_burst;
diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c
index 1d3d9d357e..4c998669c8 100644
--- a/lib/eventdev/eventdev_private.c
+++ b/lib/eventdev/eventdev_private.c
@@ -5,15 +5,6 @@
#include "eventdev_pmd.h"
#include "rte_eventdev.h"
-static uint16_t
-dummy_event_enqueue(__rte_unused void *port,
- __rte_unused const struct rte_event *ev)
-{
- RTE_EDEV_LOG_ERR(
- "event enqueue requested for unconfigured event device");
- return 0;
-}
-
static uint16_t
dummy_event_enqueue_burst(__rte_unused void *port,
__rte_unused const struct rte_event ev[],
@@ -86,7 +77,6 @@ event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op)
{
static void *dummy_data[RTE_MAX_QUEUES_PER_PORT];
static const struct rte_event_fp_ops dummy = {
- .enqueue = dummy_event_enqueue,
.enqueue_burst = dummy_event_enqueue_burst,
.enqueue_new_burst = dummy_event_enqueue_burst,
.enqueue_forward_burst = dummy_event_enqueue_burst,
@@ -107,7 +97,6 @@ void
event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op,
const struct rte_eventdev *dev)
{
- fp_op->enqueue = dev->enqueue;
fp_op->enqueue_burst = dev->enqueue_burst;
fp_op->enqueue_new_burst = dev->enqueue_new_burst;
fp_op->enqueue_forward_burst = dev->enqueue_forward_burst;
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index b6a4fa1495..2461d002da 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1929,14 +1929,8 @@ __rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
}
#endif
rte_eventdev_trace_enq_burst(dev_id, port_id, ev, nb_events, (void *)fn);
- /*
- * Allow zero cost non burst mode routine invocation if application
- * requests nb_events as const one
- */
- if (nb_events == 1)
- return (fp_ops->enqueue)(port, ev);
- else
- return fn(port, ev, nb_events);
+
+ return fn(port, ev, nb_events);
}
/**
diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
index c328bdbc82..5bc3e645b9 100644
--- a/lib/eventdev/rte_eventdev_core.h
+++ b/lib/eventdev/rte_eventdev_core.h
@@ -12,9 +12,6 @@
extern "C" {
#endif
-typedef uint16_t (*event_enqueue_t)(void *port, const struct rte_event *ev);
-/**< @internal Enqueue event on port of a device */
-
typedef uint16_t (*event_enqueue_burst_t)(void *port,
const struct rte_event ev[],
uint16_t nb_events);
@@ -45,8 +42,6 @@ typedef uint16_t (*event_crypto_adapter_enqueue_t)(void *port,
struct rte_event_fp_ops {
void **data;
/**< points to array of internal port data pointers */
- event_enqueue_t enqueue;
- /**< PMD enqueue function. */
event_enqueue_burst_t enqueue_burst;
/**< PMD enqueue burst function. */
event_enqueue_burst_t enqueue_new_burst;
@@ -65,7 +60,7 @@ struct rte_event_fp_ops {
/**< PMD Tx adapter enqueue same destination function. */
event_crypto_adapter_enqueue_t ca_enqueue;
/**< PMD Crypto adapter enqueue function. */
- uintptr_t reserved[6];
+ uintptr_t reserved[7];
} __rte_cache_aligned;
extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
--
2.34.1
* [PATCH v7 0/3] add telemetry cmds for ring
@ 2023-07-04 9:04 3% ` Jie Hai
2023-07-04 9:04 3% ` [PATCH v7 1/3] ring: fix unmatched type definition and usage Jie Hai
2023-08-18 6:53 0% ` [PATCH v7 0/3] add telemetry cmds for ring Jie Hai
0 siblings, 2 replies; 200+ results
From: Jie Hai @ 2023-07-04 9:04 UTC (permalink / raw)
Cc: haijie1, dev, liudongdong3
This patch set adds telemetry commands to list rings and dump the
information of a ring by its name.
v1->v2:
1. Add space after "switch".
2. Fix wrong strlen parameter.
v2->v3:
1. Remove prefix "rte_" for static function.
2. Add Acked-by Konstantin Ananyev for PATCH 1.
3. Introduce functions to return strings instead of copying strings.
4. Check pointer to memzone of ring.
5. Remove redundant variable.
6. Hold lock when access ring data.
v3->v4:
1. Update changelog according to reviews of Honnappa Nagarahalli.
2. Add Reviewed-by Honnappa Nagarahalli.
3. Correct grammar in help information.
4. Correct spell warning on "te" reported by checkpatch.pl.
5. Use ring_walk() to query ring info instead of rte_ring_lookup().
6. Fix the type definition of the flags field of rte_ring to match its usage.
7. Use rte_tel_data_add_dict_uint_hex instead of rte_tel_data_add_dict_u64
for mask and flags.
v4->v5:
1. Add Acked-by Konstantin Ananyev and Chengwen Feng.
2. Add ABI change explanation for commit message of patch 1/3.
v5->v6:
1. Add Acked-by Morten Brørup.
2. Fix incorrect reference of commit.
v6->v7:
1. Remove prod/consumer head/tail info.
Jie Hai (3):
ring: fix unmatched type definition and usage
ring: add telemetry cmd to list rings
ring: add telemetry cmd for ring info
lib/ring/meson.build | 1 +
lib/ring/rte_ring.c | 135 +++++++++++++++++++++++++++++++++++++++
lib/ring/rte_ring_core.h | 2 +-
3 files changed, 137 insertions(+), 1 deletion(-)
--
2.33.0
* [PATCH v7 1/3] ring: fix unmatched type definition and usage
2023-07-04 9:04 3% ` [PATCH v7 " Jie Hai
@ 2023-07-04 9:04 3% ` Jie Hai
2023-08-18 6:53 0% ` [PATCH v7 0/3] add telemetry cmds for ring Jie Hai
1 sibling, 0 replies; 200+ results
From: Jie Hai @ 2023-07-04 9:04 UTC (permalink / raw)
To: Honnappa Nagarahalli, Konstantin Ananyev; +Cc: haijie1, dev, liudongdong3
Field 'flags' of struct rte_ring is defined as int type. However,
it is used as unsigned int. To ensure consistency, change the
type of 'flags' to unsigned int. Since these two types have the
same byte size, this change is not an ABI change.
Fixes: af75078fece3 ("first public release")
Signed-off-by: Jie Hai <haijie1@huawei.com>
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/ring/rte_ring_core.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/ring/rte_ring_core.h b/lib/ring/rte_ring_core.h
index 82b237091b71..1c809abeb531 100644
--- a/lib/ring/rte_ring_core.h
+++ b/lib/ring/rte_ring_core.h
@@ -120,7 +120,7 @@ struct rte_ring_hts_headtail {
struct rte_ring {
char name[RTE_RING_NAMESIZE] __rte_cache_aligned;
/**< Name of the ring. */
- int flags; /**< Flags supplied at creation. */
+ uint32_t flags; /**< Flags supplied at creation. */
const struct rte_memzone *memzone;
/**< Memzone, if any, containing the rte_ring */
uint32_t size; /**< Size of ring. */
--
2.33.0
* RE: [PATCH] doc: announce ethdev operation struct changes
2023-07-04 8:10 3% [PATCH] doc: announce ethdev operation struct changes Feifei Wang
@ 2023-07-04 8:17 0% ` Feifei Wang
2023-07-13 2:37 0% ` Feifei Wang
2023-07-05 11:32 0% ` Konstantin Ananyev
2023-07-28 14:56 3% ` Thomas Monjalon
2 siblings, 1 reply; 200+ results
From: Feifei Wang @ 2023-07-04 8:17 UTC (permalink / raw)
To: Feifei Wang
Cc: dev, nd, Honnappa Nagarahalli, Ruifeng Wang, Konstantin Ananyev,
mb, Ferruh Yigit, thomas, Andrew Rybchenko, nd
> -----Original Message-----
> From: Feifei Wang <feifei.wang2@arm.com>
> Sent: Tuesday, July 4, 2023 4:10 PM
> Cc: dev@dpdk.org; nd <nd@arm.com>; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>; Feifei Wang <Feifei.Wang2@arm.com>;
> Ruifeng Wang <Ruifeng.Wang@arm.com>
> Subject: [PATCH] doc: announce ethdev operation struct changes
>
> To support mbufs recycle mode, announce the coming ABI changes from
> DPDK 23.11.
>
> Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> index 66431789b0..c7e1ffafb2 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -118,6 +118,10 @@ Deprecation Notices
> The legacy actions should be removed
> once ``MODIFY_FIELD`` alternative is implemented in drivers.
>
> +* ethdev: The Ethernet device data structure ``struct rte_eth_dev`` and
> + the fast-path ethdev flat array ``struct rte_eth_fp_ops`` will be
> +updated
> + with new fields to support mbufs recycle mode from DPDK 23.11.
> +
> * cryptodev: The function ``rte_cryptodev_cb_fn`` will be updated
> to have another parameter ``qp_id`` to return the queue pair ID
> which got error interrupt to the application,
> --
> 2.25.1
* [PATCH] doc: announce ethdev operation struct changes
@ 2023-07-04 8:10 3% Feifei Wang
2023-07-04 8:17 0% ` Feifei Wang
` (2 more replies)
0 siblings, 3 replies; 200+ results
From: Feifei Wang @ 2023-07-04 8:10 UTC (permalink / raw)
Cc: dev, nd, Honnappa.Nagarahalli, Feifei Wang, Ruifeng Wang
To support mbufs recycle mode, announce the coming ABI changes
from DPDK 23.11.
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
---
doc/guides/rel_notes/deprecation.rst | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 66431789b0..c7e1ffafb2 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -118,6 +118,10 @@ Deprecation Notices
The legacy actions should be removed
once ``MODIFY_FIELD`` alternative is implemented in drivers.
+* ethdev: The Ethernet device data structure ``struct rte_eth_dev`` and
+ the fast-path ethdev flat array ``struct rte_eth_fp_ops`` will be updated
+ with new fields to support mbufs recycle mode from DPDK 23.11.
+
* cryptodev: The function ``rte_cryptodev_cb_fn`` will be updated
to have another parameter ``qp_id`` to return the queue pair ID
which got error interrupt to the application,
--
2.25.1
* Re: kni: check abi version between kmod and lib
@ 2023-07-04 2:56 7% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2023-07-04 2:56 UTC (permalink / raw)
To: Stephen Coleman; +Cc: dev, Ray Kinsella, Ferruh Yigit
On Thu, 21 Apr 2022 12:38:26 +0800
Stephen Coleman <omegacoleman@gmail.com> wrote:
> KNI ioctl functions copy data from the userspace lib, and this kmod
> interface is indeed not compatible across versions. If the user uses an
> incompatible rte_kni.ko, bad things happen: sometimes various fields
> contain garbage values, sometimes it causes a kmod soft lockup.
>
> Some common distros ship their own rte_kni.ko, so this is likely to
> happen.
>
> This patch adds ABI version checking between the userland lib and kmod so
> that:
>
> * if the kmod ioctl gets a wrong ABI magic, it refuses to go on
> * if the userland lib probes a wrong ABI version via the newly added
> ioctl, it also refuses to go on
>
> Bugzilla ID: 998
>
> Signed-off-by: youcai <omegacoleman@gmail.com>
KNI is deprecated and scheduled for removal.
Even though this fixes a bug, because it changes API/ABI it can't go in.
Dropping the patch.
* Re: [EXT] Re: [PATCH v3] bitmap: add scan from offset function
2023-07-03 12:02 4% ` [EXT] " Volodymyr Fialko
@ 2023-07-03 12:17 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2023-07-03 12:17 UTC (permalink / raw)
To: Dumitrescu, Cristian, Volodymyr Fialko
Cc: dev, Jerin Jacob Kollanukkaran, Anoob Joseph
03/07/2023 14:02, Volodymyr Fialko:
>
> > -----Original Message-----
> > From: Thomas Monjalon <thomas@monjalon.net>
> > Sent: Monday, July 3, 2023 1:51 PM
> > To: Dumitrescu, Cristian <cristian.dumitrescu@intel.com>; Volodymyr Fialko <vfialko@marvell.com>
> > Cc: dev@dpdk.org; Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Anoob Joseph
> > <anoobj@marvell.com>
> > Subject: [EXT] Re: [PATCH v3] bitmap: add scan from offset function
> >
> > External Email
> >
> > ----------------------------------------------------------------------
> > 03/07/2023 12:56, Volodymyr Fialko:
> > > Since it's a header-only library, there is an issue with using __rte_internal (appeared in v4).
> >
> > What is the issue?
>
> From the v4 CI build failure (http://mails.dpdk.org/archives/test-report/2023-July/421235.html):
> In file included from ../examples/ipsec-secgw/event_helper.c:6:
> ../lib/eal/include/rte_bitmap.h:645:2: error: Symbol is not public ABI
> __rte_bitmap_scan_init_at(bmp, offset);
> ^
> ../lib/eal/include/rte_bitmap.h:150:1: note: from 'diagnose_if' attribute on '__rte_bitmap_scan_init_at':
> __rte_internal
> ^~~~~~~~~~~~~~
> ../lib/eal/include/rte_compat.h:42:16: note: expanded from macro '__rte_internal'
> __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
> ^ ~
> 1 error generated.
OK I see.
So we should give up on __rte_internal for inline functions.
As it is not supposed to be exposed to applications,
I think we can skip the __rte_experimental flag.
> > > Even if the function itself is not used directly, it gets included in the other public files.
> > > It explains why other functions in this library do not have the rte_internal prefix, but the double
> > underscores.
> > > So, should I simply remove __rte_internal from v4, or is there another approach to resolve this
> > issue (besides creating a .c file)?
> > >
> > > /Volodymyr
> > >
> > > > -----Original Message-----
> > > > From: Dumitrescu, Cristian <cristian.dumitrescu@intel.com>
> > > > Sent: Friday, June 23, 2023 2:41 PM
> > > > To: Thomas Monjalon <thomas@monjalon.net>; Volodymyr Fialko
> > > > <vfialko@marvell.com>
> > > > Cc: dev@dpdk.org; Jerin Jacob Kollanukkaran <jerinj@marvell.com>;
> > > > Anoob Joseph <anoobj@marvell.com>
> > > > Subject: [EXT] RE: [PATCH v3] bitmap: add scan from offset function
> > > >
> > > > External Email
> > > >
> > > > --------------------------------------------------------------------
> > > > --
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Thomas Monjalon <thomas@monjalon.net>
> > > > > Sent: Thursday, June 22, 2023 6:45 PM
> > > > > To: Volodymyr Fialko <vfialko@marvell.com>
> > > > > Cc: dev@dpdk.org; Dumitrescu, Cristian
> > > > > <cristian.dumitrescu@intel.com>; jerinj@marvell.com;
> > > > > anoobj@marvell.com
> > > > > Subject: Re: [PATCH v3] bitmap: add scan from offset function
> > > > >
> > > > > 21/06/2023 12:01, Volodymyr Fialko:
> > > > > > Currently, in the case when we search for a bit set after a
> > > > > > particular value, the bitmap has to be scanned from the
> > > > > > beginning and
> > > > > > rte_bitmap_scan() has to be called multiple times until we hit the value.
> > > > > >
> > > > > > Add a new rte_bitmap_scan_from_offset() function to initialize
> > > > > > scan state at the given offset and perform scan, this will allow
> > > > > > getting the next set bit after certain offset within one scan call.
> > > > > >
> > > > > > Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
> > > > > > ---
> > > > > > v2:
> > > > > > - added rte_bitmap_scan_from_offset
> > > > > > v3
> > > > > > - added note for internal use only for init_at function
> > > > > [...]
> > > > > > +/**
> > > > > > + * @warning
> > > > > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > > > > + *
> > > > > > + * Bitmap initialize internal scan pointers at the given
> > > > > > +position for the scan
> > > > > function.
> > > > > > + *
> > > > > > + * Note: for private/internal use, for public:
> > > > > > + * @see rte_bitmap_scan_from_offset()
> > > > > > + *
> > > > > > + * @param bmp
> > > > > > + * Handle to bitmap instance
> > > > > > + * @param pos
> > > > > > + * Bit position to start scan
> > > > > > + */
> > > > > > +__rte_experimental
> > > > > > +static inline void
> > > > > > +__rte_bitmap_scan_init_at(struct rte_bitmap *bmp, uint32_t pos)
> > > > >
> > > > > I think it should marked with __rte_internal instead of experimental.
> > > > >
> > > > >
> > > >
> > > >
> > > > +1
> > >
> >
> >
> >
> >
>
>
* RE: [EXT] Re: [PATCH v3] bitmap: add scan from offset function
@ 2023-07-03 12:02 4% ` Volodymyr Fialko
2023-07-03 12:17 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Volodymyr Fialko @ 2023-07-03 12:02 UTC (permalink / raw)
To: Thomas Monjalon, Dumitrescu, Cristian
Cc: dev, Jerin Jacob Kollanukkaran, Anoob Joseph
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Monday, July 3, 2023 1:51 PM
> To: Dumitrescu, Cristian <cristian.dumitrescu@intel.com>; Volodymyr Fialko <vfialko@marvell.com>
> Cc: dev@dpdk.org; Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Anoob Joseph
> <anoobj@marvell.com>
> Subject: [EXT] Re: [PATCH v3] bitmap: add scan from offset function
>
> External Email
>
> ----------------------------------------------------------------------
> 03/07/2023 12:56, Volodymyr Fialko:
> > Since it's a header-only library, there is an issue with using __rte_internal (appeared in v4).
>
> What is the issue?
From the v4 CI build failure (http://mails.dpdk.org/archives/test-report/2023-July/421235.html):
In file included from ../examples/ipsec-secgw/event_helper.c:6:
../lib/eal/include/rte_bitmap.h:645:2: error: Symbol is not public ABI
__rte_bitmap_scan_init_at(bmp, offset);
^
../lib/eal/include/rte_bitmap.h:150:1: note: from 'diagnose_if' attribute on '__rte_bitmap_scan_init_at':
__rte_internal
^~~~~~~~~~~~~~
../lib/eal/include/rte_compat.h:42:16: note: expanded from macro '__rte_internal'
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
^ ~
1 error generated.
/Volodymyr
>
> > Even if the function itself is not used directly, it gets included in the other public files.
> > It explains why other functions in this library do not have the rte_internal prefix, but the double
> underscores.
> > So, should I simply remove __rte_internal from v4, or is there another approach to resolve this
> issue (besides creating a .c file)?
> >
> > /Volodymyr
> >
> > > -----Original Message-----
> > > From: Dumitrescu, Cristian <cristian.dumitrescu@intel.com>
> > > Sent: Friday, June 23, 2023 2:41 PM
> > > To: Thomas Monjalon <thomas@monjalon.net>; Volodymyr Fialko
> > > <vfialko@marvell.com>
> > > Cc: dev@dpdk.org; Jerin Jacob Kollanukkaran <jerinj@marvell.com>;
> > > Anoob Joseph <anoobj@marvell.com>
> > > Subject: [EXT] RE: [PATCH v3] bitmap: add scan from offset function
> > >
> > > External Email
> > >
> > > --------------------------------------------------------------------
> > > --
> > >
> > >
> > > > -----Original Message-----
> > > > From: Thomas Monjalon <thomas@monjalon.net>
> > > > Sent: Thursday, June 22, 2023 6:45 PM
> > > > To: Volodymyr Fialko <vfialko@marvell.com>
> > > > Cc: dev@dpdk.org; Dumitrescu, Cristian
> > > > <cristian.dumitrescu@intel.com>; jerinj@marvell.com;
> > > > anoobj@marvell.com
> > > > Subject: Re: [PATCH v3] bitmap: add scan from offset function
> > > >
> > > > 21/06/2023 12:01, Volodymyr Fialko:
> > > > > Currently, in the case when we search for a bit set after a
> > > > > particular value, the bitmap has to be scanned from the
> > > > > beginning and
> > > > > rte_bitmap_scan() has to be called multiple times until we hit the value.
> > > > >
> > > > > Add a new rte_bitmap_scan_from_offset() function to initialize
> > > > > scan state at the given offset and perform scan, this will allow
> > > > > getting the next set bit after certain offset within one scan call.
> > > > >
> > > > > Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
> > > > > ---
> > > > > v2:
> > > > > - added rte_bitmap_scan_from_offset
> > > > > v3
> > > > > - added note for internal use only for init_at function
> > > > [...]
> > > > > +/**
> > > > > + * @warning
> > > > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > > > + *
> > > > > + * Bitmap initialize internal scan pointers at the given
> > > > > +position for the scan
> > > > function.
> > > > > + *
> > > > > + * Note: for private/internal use, for public:
> > > > > + * @see rte_bitmap_scan_from_offset()
> > > > > + *
> > > > > + * @param bmp
> > > > > + * Handle to bitmap instance
> > > > > + * @param pos
> > > > > + * Bit position to start scan
> > > > > + */
> > > > > +__rte_experimental
> > > > > +static inline void
> > > > > +__rte_bitmap_scan_init_at(struct rte_bitmap *bmp, uint32_t pos)
> > > >
> > > > I think it should marked with __rte_internal instead of experimental.
> > > >
> > > >
> > >
> > >
> > > +1
> >
>
>
>
>
* Re: [RFC] eventdev: remove single-event enqueue operation
@ 2023-06-30 4:37 3% ` Jerin Jacob
2023-07-04 12:01 0% ` Mattias Rönnblom
2023-07-04 11:53 4% ` [PATCH] " Mattias Rönnblom
1 sibling, 1 reply; 200+ results
From: Jerin Jacob @ 2023-06-30 4:37 UTC (permalink / raw)
To: Mattias Rönnblom
Cc: jerinj, hofors, dev, Pavan Nikhilesh, Timothy McDaniel,
Hemant Agrawal, Sachin Saxena, Harry van Haaren, Liang Ma,
Peter Mccarthy
On Fri, Jun 9, 2023 at 11:18 PM Mattias Rönnblom
<mattias.ronnblom@ericsson.com> wrote:
>
> Eliminate non-burst enqueue operation from Eventdev.
>
> The effect of this change is to reduce Eventdev code complexity
> somewhat and slightly improve performance.
>
> The single-event enqueue shortcut provided a very minor performance
> advantage in some situations (e.g., with a compile time-constant burst
> size of '1'), but would in other situations cause a noticeable
> performance penalty (e.g., rte_event_enqueue_forward_burst() with run
> time-variable burst sizes varying between '1' and larger burst sizes).
>
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>
> -typedef uint16_t (*event_enqueue_t)(void *port, const struct rte_event *ev);
> -/**< @internal Enqueue event on port of a device */
> -
> typedef uint16_t (*event_enqueue_burst_t)(void *port,
> const struct rte_event ev[],
> uint16_t nb_events);
> @@ -45,8 +42,6 @@ typedef uint16_t (*event_crypto_adapter_enqueue_t)(void *port,
> struct rte_event_fp_ops {
> void **data;
> /**< points to array of internal port data pointers */
> - event_enqueue_t enqueue;
> - /**< PMD enqueue function. */
Can we remove "dequeue" as well?
In any event, please send a deprecation notice as it is an ABI change,
and we need to get the deprecation notice patch merged for v23.07.
I can review the deprecation notice patch quickly as soon as you send
it to make forward progress.
> event_enqueue_burst_t enqueue_burst;
> /**< PMD enqueue burst function. */
> event_enqueue_burst_t enqueue_new_burst;
> @@ -65,7 +60,7 @@ struct rte_event_fp_ops {
> /**< PMD Tx adapter enqueue same destination function. */
> event_crypto_adapter_enqueue_t ca_enqueue;
> /**< PMD Crypto adapter enqueue function. */
> - uintptr_t reserved[6];
> + uintptr_t reserved[7];
> } __rte_cache_aligned;
>
* [PATCH v4] doc: prefer installing using meson rather than ninja
@ 2023-06-23 11:43 4% ` Bruce Richardson
0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2023-06-23 11:43 UTC (permalink / raw)
To: dev; +Cc: thomas, david.marchand, Bruce Richardson
After doing a build, to install DPDK system-wide our documentation
recommended using the "ninja install" command. However, for anyone
building as a non-root user and only installing as root, the "meson
install" command is a better alternative, as it provides for
automatically dropping or elevating privileges as necessary in more
recent meson releases [1].
[1] https://mesonbuild.com/Installing.html#installing-as-the-superuser
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
V4:
* replaced missed calls to ninja install in ci script and
test-meson-builds script.
V3:
* correct order of arguments to meson in CI scripts. The "-C" option
must follow the meson "install" command. [This is consistent with
other uses e.g. meson compile -C ..., meson test -C ...]
V2:
* Fix one missed reference to "ninja install" in Linux GSG
* Changed CI scripts to use "meson install" to ensure step is
properly tested.
---
.ci/linux-build.sh | 6 +++---
devtools/test-meson-builds.sh | 4 ++--
doc/guides/contributing/coding_style.rst | 2 +-
doc/guides/cryptodevs/uadk.rst | 2 +-
doc/guides/freebsd_gsg/build_dpdk.rst | 2 +-
doc/guides/freebsd_gsg/build_sample_apps.rst | 2 +-
doc/guides/linux_gsg/build_dpdk.rst | 4 ++--
doc/guides/prog_guide/build-sdk-meson.rst | 4 ++--
8 files changed, 13 insertions(+), 13 deletions(-)
diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index d0d9f89bae..45f2729996 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -151,14 +151,14 @@ if [ "$ABI_CHECKS" = "true" ]; then
git clone --single-branch -b "$REF_GIT_TAG" $REF_GIT_REPO $refsrcdir
meson setup $OPTS -Dexamples= $refsrcdir $refsrcdir/build
ninja -C $refsrcdir/build
- DESTDIR=$(pwd)/reference ninja -C $refsrcdir/build install
+ DESTDIR=$(pwd)/reference meson install -C $refsrcdir/build
find reference/usr/local -name '*.a' -delete
rm -rf reference/usr/local/bin
rm -rf reference/usr/local/share
echo $REF_GIT_TAG > reference/VERSION
fi
- DESTDIR=$(pwd)/install ninja -C build install
+ DESTDIR=$(pwd)/install meson install -C build
devtools/check-abi.sh reference install ${ABI_CHECKS_WARN_ONLY:-}
fi
@@ -172,7 +172,7 @@ fi
# Test examples compilation with an installed dpdk
if [ "$BUILD_EXAMPLES" = "true" ]; then
- [ -d install ] || DESTDIR=$(pwd)/install ninja -C build install
+ [ -d install ] || DESTDIR=$(pwd)/install meson install -C build
export LD_LIBRARY_PATH=$(dirname $(find $(pwd)/install -name librte_eal.so)):$LD_LIBRARY_PATH
export PKG_CONFIG_PATH=$(dirname $(find $(pwd)/install -name libdpdk.pc)):$PKG_CONFIG_PATH
export PKGCONF="pkg-config --define-prefix"
diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index 1eb28a2490..84b907d2ea 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -158,8 +158,8 @@ compile () # <builddir>
install_target () # <builddir> <installdir>
{
rm -rf $2
- echo "DESTDIR=$2 $ninja_cmd -C $1 install" >&$verbose
- DESTDIR=$2 $ninja_cmd -C $1 install >&$veryverbose
+ echo "DESTDIR=$2 $MESON install -C $1" >&$verbose
+ DESTDIR=$2 $MESON install -C $1 >&$veryverbose
}
build () # <directory> <target cc | cross file> <ABI check> [meson options]
diff --git a/doc/guides/contributing/coding_style.rst b/doc/guides/contributing/coding_style.rst
index 0861305dc6..13b2390d9e 100644
--- a/doc/guides/contributing/coding_style.rst
+++ b/doc/guides/contributing/coding_style.rst
@@ -975,7 +975,7 @@ ext_deps
headers
**Default Value = []**.
Used to return the list of header files for the library that should be
- installed to $PREFIX/include when ``ninja install`` is run. As with
+ installed to $PREFIX/include when ``meson install`` is run. As with
source files, these should be specified using the meson ``files()``
function.
When ``check_includes`` build option is set to ``true``, each header file
diff --git a/doc/guides/cryptodevs/uadk.rst b/doc/guides/cryptodevs/uadk.rst
index 9af6b88a5a..136ab4be6a 100644
--- a/doc/guides/cryptodevs/uadk.rst
+++ b/doc/guides/cryptodevs/uadk.rst
@@ -90,7 +90,7 @@ Test steps
meson setup build (--reconfigure)
cd build
ninja
- sudo ninja install
+ sudo meson install
#. Prepare hugepages for DPDK (see also :doc:`../tools/hugepages`)
diff --git a/doc/guides/freebsd_gsg/build_dpdk.rst b/doc/guides/freebsd_gsg/build_dpdk.rst
index 514d18c870..86e8e5a805 100644
--- a/doc/guides/freebsd_gsg/build_dpdk.rst
+++ b/doc/guides/freebsd_gsg/build_dpdk.rst
@@ -47,7 +47,7 @@ The final, install, step generally needs to be run as root::
meson setup build
cd build
ninja
- ninja install
+ meson install
This will install the DPDK libraries and drivers to `/usr/local/lib` with a
pkg-config file `libdpdk.pc` installed to `/usr/local/lib/pkgconfig`. The
diff --git a/doc/guides/freebsd_gsg/build_sample_apps.rst b/doc/guides/freebsd_gsg/build_sample_apps.rst
index c87e982759..b1ab7545b1 100644
--- a/doc/guides/freebsd_gsg/build_sample_apps.rst
+++ b/doc/guides/freebsd_gsg/build_sample_apps.rst
@@ -22,7 +22,7 @@ the system when DPDK is installed, and so can be built using GNU make.
on the FreeBSD system.
The following shows how to compile the helloworld example app, following
-the installation of DPDK using `ninja install` as described previously::
+the installation of DPDK using `meson install` as described previously::
$ export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
diff --git a/doc/guides/linux_gsg/build_dpdk.rst b/doc/guides/linux_gsg/build_dpdk.rst
index bbd2efc9d8..9c0dd9daf6 100644
--- a/doc/guides/linux_gsg/build_dpdk.rst
+++ b/doc/guides/linux_gsg/build_dpdk.rst
@@ -68,11 +68,11 @@ Once configured, to build and then install DPDK system-wide use:
cd build
ninja
- ninja install
+ meson install
ldconfig
The last two commands above generally need to be run as root,
-with the `ninja install` step copying the built objects to their final system-wide locations,
+with the `meson install` step copying the built objects to their final system-wide locations,
and the last step causing the dynamic loader `ld.so` to update its cache to take account of the new objects.
.. note::
diff --git a/doc/guides/prog_guide/build-sdk-meson.rst b/doc/guides/prog_guide/build-sdk-meson.rst
index 5deabbe54c..93aa1f80e3 100644
--- a/doc/guides/prog_guide/build-sdk-meson.rst
+++ b/doc/guides/prog_guide/build-sdk-meson.rst
@@ -12,7 +12,7 @@ following set of commands::
meson setup build
cd build
ninja
- ninja install
+ meson install
This will compile DPDK in the ``build`` subdirectory, and then install the
resulting libraries, drivers and header files onto the system - generally
@@ -165,7 +165,7 @@ printing each command on a new line as it runs.
Installing the Compiled Files
------------------------------
-Use ``ninja install`` to install the required DPDK files onto the system.
+Use ``meson install`` to install the required DPDK files onto the system.
The install prefix defaults to ``/usr/local`` but can be used as with other
options above. The environment variable ``DESTDIR`` can be used to adjust
the root directory for the install, for example when packaging.
--
2.39.2
* Re: [PATCH] ci: fix libabigail cache in GHA
2023-06-20 14:21 0% ` Aaron Conole
@ 2023-06-22 17:41 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2023-06-22 17:41 UTC (permalink / raw)
To: David Marchand; +Cc: dev, stable, Michael Santana, Aaron Conole
20/06/2023 16:21, Aaron Conole:
> David Marchand <david.marchand@redhat.com> writes:
>
> > In repositories where multiple branches run the ABI checks using
> > different versions of libabigail (for example, a 22.11 branch using
> > libabigail-1.8 and a main branch using libabigail-2.1), a collision
> > happens on the libabigail binary cache entry.
> > As a single cache entry is used, the content of the cache (let's say the
> > cache was built for libabigail 2.1) won't match what the branch wants to
> > use (in this example running the check for 22.11 branch requires
> > libabigail 1.8).
> > .ci/linux-build.sh then tries to recompile libabigail but it fails as
> > the packages used for building libabigail are missing.
> >
> > Add the version to the cache entry name to avoid this collision.
> >
> > Fixes: 443267090edc ("ci: enable v21 ABI checks")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: David Marchand <david.marchand@redhat.com>
>
> Acked-by: Aaron Conole <aconole@redhat.com>
Applied, thanks.
* Re: [PATCH] ci: fix libabigail cache in GHA
2023-06-20 13:29 10% [PATCH] ci: fix libabigail cache in GHA David Marchand
@ 2023-06-20 14:21 0% ` Aaron Conole
2023-06-22 17:41 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Aaron Conole @ 2023-06-20 14:21 UTC (permalink / raw)
To: David Marchand; +Cc: dev, stable, Michael Santana, Thomas Monjalon
David Marchand <david.marchand@redhat.com> writes:
> In repositories where multiple branches run the ABI checks using
> different versions of libabigail (for example, a 22.11 branch using
> libabigail-1.8 and a main branch using libabigail-2.1), a collision
> happens on the libabigail binary cache entry.
> As a single cache entry is used, the content of the cache (let's say the
> cache was built for libabigail 2.1) won't match what the branch wants to
> use (in this example running the check for 22.11 branch requires
> libabigail 1.8).
> .ci/linux-build.sh then tries to recompile libabigail but it fails as
> the packages used for building libabigail are missing.
>
> Add the version to the cache entry name to avoid this collision.
>
> Fixes: 443267090edc ("ci: enable v21 ABI checks")
> Cc: stable@dpdk.org
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
Acked-by: Aaron Conole <aconole@redhat.com>
* [PATCH v3 4/4] ci: build examples externally
@ 2023-06-20 14:07 10% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2023-06-20 14:07 UTC (permalink / raw)
To: dev; +Cc: thomas, bruce.richardson, Aaron Conole, Michael Santana
Enhance our CI coverage by building examples against an installed DPDK.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Aaron Conole <aconole@redhat.com>
---
Changes since v2:
- dropped unneeded -e in sed cmdline,
Changes since v1:
- reworked built examples discovery,
- added comment for people who are not sed fluent,
---
.ci/linux-build.sh | 27 ++++++++++++++++++++++++++-
.github/workflows/build.yml | 6 +++---
2 files changed, 29 insertions(+), 4 deletions(-)
diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index 9631e342b5..d0d9f89bae 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -1,7 +1,7 @@
#!/bin/sh -xe
if [ -z "${DEF_LIB:-}" ]; then
- DEF_LIB=static ABI_CHECKS= BUILD_DOCS= RUN_TESTS= $0
+ DEF_LIB=static ABI_CHECKS= BUILD_DOCS= BUILD_EXAMPLES= RUN_TESTS= $0
DEF_LIB=shared $0
exit
fi
@@ -99,6 +99,7 @@ if [ "$MINI" = "true" ]; then
else
OPTS="$OPTS -Ddisable_libs="
fi
+OPTS="$OPTS -Dlibdir=lib"
if [ "$ASAN" = "true" ]; then
OPTS="$OPTS -Db_sanitize=address"
@@ -168,3 +169,27 @@ if [ "$RUN_TESTS" = "true" ]; then
catch_coredump
[ "$failed" != "true" ]
fi
+
+# Test examples compilation with an installed dpdk
+if [ "$BUILD_EXAMPLES" = "true" ]; then
+ [ -d install ] || DESTDIR=$(pwd)/install ninja -C build install
+ export LD_LIBRARY_PATH=$(dirname $(find $(pwd)/install -name librte_eal.so)):$LD_LIBRARY_PATH
+ export PKG_CONFIG_PATH=$(dirname $(find $(pwd)/install -name libdpdk.pc)):$PKG_CONFIG_PATH
+ export PKGCONF="pkg-config --define-prefix"
+ find build/examples -maxdepth 1 -type f -name "dpdk-*" |
+ while read target; do
+ target=${target%%:*}
+ target=${target#build/examples/dpdk-}
+ if [ -e examples/$target/Makefile ]; then
+ echo $target
+ continue
+ fi
+ # Some examples binaries are built from an example sub
+ # directory, discover the "top level" example name.
+ find examples -name Makefile |
+ sed -n "s,examples/\([^/]*\)\(/.*\|\)/$target/Makefile,\1,p"
+ done | sort -u |
+ while read example; do
+ make -C install/usr/local/share/dpdk/examples/$example clean shared
+ done
+fi
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 3b629fcdbd..414dd089e0 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -20,6 +20,7 @@ jobs:
BUILD_32BIT: ${{ matrix.config.cross == 'i386' }}
BUILD_DEBUG: ${{ contains(matrix.config.checks, 'debug') }}
BUILD_DOCS: ${{ contains(matrix.config.checks, 'doc') }}
+ BUILD_EXAMPLES: ${{ contains(matrix.config.checks, 'examples') }}
CC: ccache ${{ matrix.config.compiler }}
DEF_LIB: ${{ matrix.config.library }}
LIBABIGAIL_VERSION: libabigail-2.1
@@ -39,7 +40,7 @@ jobs:
mini: mini
- os: ubuntu-20.04
compiler: gcc
- checks: abi+debug+doc+tests
+ checks: abi+debug+doc+examples+tests
- os: ubuntu-20.04
compiler: clang
checks: asan+doc+tests
@@ -96,12 +97,11 @@ jobs:
- name: Install packages
run: sudo apt install -y ccache libarchive-dev libbsd-dev libfdt-dev
libibverbs-dev libjansson-dev libnuma-dev libpcap-dev libssl-dev
- ninja-build python3-pip python3-pyelftools python3-setuptools
+ ninja-build pkg-config python3-pip python3-pyelftools python3-setuptools
python3-wheel zlib1g-dev
- name: Install libabigail build dependencies if no cache is available
if: env.ABI_CHECKS == 'true' && steps.libabigail-cache.outputs.cache-hit != 'true'
run: sudo apt install -y autoconf automake libdw-dev libtool libxml2-dev
- pkg-config
- name: Install i386 cross compiling packages
if: env.BUILD_32BIT == 'true'
run: sudo apt install -y gcc-multilib g++-multilib
--
2.40.1
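The sed invocation in the hunk above recovers a top-level example name from a nested Makefile path. A standalone sketch of just that mapping, using an illustrative path rather than one discovered from the tree:

```shell
# For a binary named "guest_cli" built from a nested example directory,
# recover the top-level example name ("vm_power_manager").
target=guest_cli
printf '%s\n' "examples/vm_power_manager/guest_cli/Makefile" |
    sed -n "s,examples/\([^/]*\)\(/.*\|\)/$target/Makefile,\1,p"
# prints: vm_power_manager
# Note: directly-built examples (examples/$target/Makefile) are caught
# earlier by the existence test in the script and never reach this sed.
```

The `\(/.*\|\)` group absorbs any intermediate directories (or nothing), so arbitrarily deep nesting still maps back to the first path component after `examples/`.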
* [PATCH] ci: fix libabigail cache in GHA
@ 2023-06-20 13:29 10% David Marchand
2023-06-20 14:21 0% ` Aaron Conole
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2023-06-20 13:29 UTC (permalink / raw)
To: dev; +Cc: stable, Aaron Conole, Michael Santana, Thomas Monjalon
In repositories where multiple branches run the ABI checks using
different versions of libabigail (for example, a 22.11 branch using
libabigail-1.8 and a main branch using libabigail-2.1), a collision
happens on the libabigail binary cache entry.
As a single cache entry is used, the content of the cache (let's say the
cache was built for libabigail 2.1) won't match what the branch wants to
use (in this example running the check for 22.11 branch requires
libabigail 1.8).
.ci/linux-build.sh then tries to recompile libabigail but it fails as
the packages used for building libabigail are missing.
Add the version to the cache entry name to avoid this collision.
Fixes: 443267090edc ("ci: enable v21 ABI checks")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
.github/workflows/build.yml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 3b629fcdbd..7b69771a58 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -69,7 +69,7 @@ jobs:
id: get_ref_keys
run: |
echo 'ccache=ccache-${{ matrix.config.os }}-${{ matrix.config.compiler }}-${{ matrix.config.cross }}-'$(date -u +%Y-w%W) >> $GITHUB_OUTPUT
- echo 'libabigail=libabigail-${{ matrix.config.os }}' >> $GITHUB_OUTPUT
+ echo 'libabigail=libabigail-${{ env.LIBABIGAIL_VERSION }}-${{ matrix.config.os }}' >> $GITHUB_OUTPUT
echo 'abi=abi-${{ matrix.config.os }}-${{ matrix.config.compiler }}-${{ matrix.config.cross }}-${{ env.REF_GIT_TAG }}' >> $GITHUB_OUTPUT
- name: Retrieve ccache cache
uses: actions/cache@v3
--
2.40.1
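The one-line change above composes the libabigail version into the cache key. A hypothetical shell illustration of why the old key collides (the values mimic the workflow's matrix; they are not taken from it):

```shell
LIBABIGAIL_VERSION=libabigail-2.1
os=ubuntu-20.04
# Before: one key per OS, shared by every branch, so a cache built for
# libabigail 2.1 shadows the 1.8 build that a 22.11 branch needs.
old_key="libabigail-$os"
# After: the version is part of the key, giving each branch its own entry.
new_key="libabigail-$LIBABIGAIL_VERSION-$os"
echo "$old_key"   # libabigail-ubuntu-20.04
echo "$new_key"   # libabigail-libabigail-2.1-ubuntu-20.04
```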
* RE: [PATCH v1 1/1] doc: announce change in bbdev api related to operation extension
2023-06-16 7:36 4% ` Maxime Coquelin
@ 2023-06-16 15:48 0% ` Chautru, Nicolas
0 siblings, 0 replies; 200+ results
From: Chautru, Nicolas @ 2023-06-16 15:48 UTC (permalink / raw)
To: Maxime Coquelin, David Marchand, hemant.agrawal
Cc: Stephen Hemminger, dev, Rix, Tom, Vargas, Hernan
Hi Maxime, Hemant,
Hemant, can I have your view on this topic below.
> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> On 6/15/23 21:30, Chautru, Nicolas wrote:
> > Hi Maxime,
> >
> >> -----Original Message-----
> >> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> >>
> >> On 6/14/23 20:18, Chautru, Nicolas wrote:
> >>> Hi Maxime,
> >>>
> >>>> -----Original Message-----
> >>>> From: Maxime Coquelin <maxime.coquelin@redhat.com> Hi,
> >>>>
> >>>> On 6/13/23 19:16, Chautru, Nicolas wrote:
> >>>>> Hi Maxime,
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> >>>>>
> >>>>>>
> >>>>>> On 6/12/23 22:53, Chautru, Nicolas wrote:
> >>>>>>> Hi Maxime, David,
> >>>>>>>
> >>>>>>>> -----Original Message-----
> >>>>>>>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> >>>>>>>>
> >>>>>>>> On 6/6/23 23:01, Chautru, Nicolas wrote:
> >>>>>>>>> Hi David,
> >>>>>>>>>
> >>>>>>>>>> -----Original Message-----
> >>>>>>>>>> From: David Marchand <david.marchand@redhat.com>> >> On
> >>>> Mon, Jun
> >>>>>> 5,
> >>>>>>>>>> 2023 at 10:08 PM Chautru, Nicolas <nicolas.chautru@intel.com>
> >>>>>>>>>> wrote:
> >>>>>>>>>>> Wrt the MLD functions: these are new in the related series
> >>>>>>>>>>> but they still
> >>>>>>>>>> break the ABI since the struct rte_bbdev includes these
> >>>>>>>>>> functions, hence causing offset changes.
> >>>>>>>>>>>
> >>>>>>>>>>> Should I then just rephrase as:
> >>>>>>>>>>>
> >>>>>>>>>>> +* bbdev: Will extend the API to support the new operation
> >>>>>>>>>>> +type
> >>>>>>>>>>> +``RTE_BBDEV_OP_MLDTS`` as per
> >>>>>>>>>>> + this `v1
> >>>>>>>>>>>
> >> +<https://patches.dpdk.org/project/dpdk/list/?series=28192>`.
> >>>>>>>>>>> This
> >>>>>>>>>>> + will notably introduce + new symbols for
> >>>>>>>>>>> ``rte_bbdev_dequeue_mldts_ops``,
> >>>>>>>>>>> +``rte_bbdev_enqueue_mldts_ops`` into the struct rte_bbdev.
> >>>>>>>>>>
> >>>>>>>>>> I don't think we need this deprecation notice.
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> Do you need to expose those new mldts ops in rte_bbdev?
> >>>>>>>>>> Can't they go to dev_ops?
> >>>>>>>>>> If you can't, at least moving those new ops at the end of the
> >>>>>>>>>> structure would avoid the breakage on rte_bbdev.
> >>>>>>>>>
> >>>>>>>>> It would probably be best to move all these ops at the end of
> >>>>>>>>> the structure
> >>>>>>>> (ie. keep them together).
> >>>>>>>>> In that case the deprecation notice would call out that the
> >>>>>>>>> rte_bbdev
> >>>>>>>> structure content is more generally modified. Probably best for
> >>>>>>>> the longer run.
> >>>>>>>>> David, Maxime, ok with that option?
> >>>>>>>>>
> >>>>>>>>> struct __rte_cache_aligned rte_bbdev {
> >>>>>>>>> rte_bbdev_enqueue_enc_ops_t enqueue_enc_ops;
> >>>>>>>>> rte_bbdev_enqueue_dec_ops_t enqueue_dec_ops;
> >>>>>>>>> rte_bbdev_dequeue_enc_ops_t dequeue_enc_ops;
> >>>>>>>>> rte_bbdev_dequeue_dec_ops_t dequeue_dec_ops;
> >>>>>>>>> rte_bbdev_enqueue_enc_ops_t enqueue_ldpc_enc_ops;
> >>>>>>>>> rte_bbdev_enqueue_dec_ops_t enqueue_ldpc_dec_ops;
> >>>>>>>>> rte_bbdev_dequeue_enc_ops_t dequeue_ldpc_enc_ops;
> >>>>>>>>> rte_bbdev_dequeue_dec_ops_t dequeue_ldpc_dec_ops;
> >>>>>>>>> rte_bbdev_enqueue_fft_ops_t enqueue_fft_ops;
> >>>>>>>>> rte_bbdev_dequeue_fft_ops_t dequeue_fft_ops;
> >>>>>>>>> const struct rte_bbdev_ops *dev_ops;
> >>>>>>>>> struct rte_bbdev_data *data;
> >>>>>>>>> enum rte_bbdev_state state;
> >>>>>>>>> struct rte_device *device;
> >>>>>>>>> struct rte_bbdev_cb_list list_cbs;
> >>>>>>>>> struct rte_intr_handle *intr_handle;
> >>>>>>>>> };
> >>>>>>>>
> >>>>>>>> The best thing, as suggested by David, would be to move all the
> >>>>>>>> ops out of struct rte_bbdev, as these should not be visible to
> >>>>>>>> the
> >>>> application.
> >>>>>>>
> >>>>>>> That would be quite disruptive across all PMDs, with possible perf
> >>>>>>> impact to
> >>>>>> validate. I don't think it is at all realistic to consider
> >>>>>> such a change in 23.11.
> >>>>>>> I believe moving these functions to the end of the structure is a
> >>>>>>> good
> >>>>>> compromise to avoid future breakage of the rte_bbdev structure with
> >>>>>> almost seamless impact (purely an ABI break when moving into 23.11,
> >>>>>> which is not avoidable). Retrospectively we should have done that
> >>>>>> in
> >>>>>> 22.11
> >>>> really.
> >>>>>>
> >>>>>> If we are going to break the ABI, better to do the right rework
> >>>>>> directly. Otherwise we'll end-up breaking it again next year.
> >>>>>
> >>>>> With the suggested change, this will not break ABI next year. Any
> >>>>> future
> >>>> functions are added at the end of the structure anyway.
> >>>>
> >>>> I'm not so sure, it depends if adding a new field at the end cross
> >>>> a cacheline boundary or not:
> >>>>
> >>>> /*
> >>>> * Global array of all devices. This is not static because it's used by the
> >>>> * inline enqueue and dequeue functions
> >>>> */
> >>>> struct rte_bbdev rte_bbdev_devices[RTE_BBDEV_MAX_DEVS];
> >>>>
> >>>> If the older inlined functions used by the application retrieve the
> >>>> dev pointer from the array directly (they do) and added new fields
> >>>> in new version cross a cacheline, then there will be a
> >>>> misalignement between the new lib version and the application using
> >>>> the older inlined
> >> functions.
> >>>>
> >>>> ABI-wise, this is not really future proof.
> >>>>
> >>>>>
> >>>>>>
> >>>>>> IMHO, moving these ops should be quite trivial and not much work.
> >>>>>>
> >>>>>> Otherwise, if we just placed the rte_bbdev_dequeue_mldts_ops and
> >>>>>> rte_bbdev_enqueue_mldts_ops at the bottom of struct rte_bbdev, it
> >>>>>> may not break the ABI, but that's a bit fragile:
> >>>>>> - rte_bbdev_devices[] is not static, but is placed in the BSS section so
> >>>>>> should be OK
> >>>>>> - struct rte_bbdev is cache-aligned, so it may work if adding these two
> >>>>>> ops do not overlap a cacheline which depends on the CPU
> >> architecture.
> >>>>>
> >>>>> If you prefer to add the only 2 new functions at the end of the
> >>>>> structure
> >>>> that is okay. I believe it would be cleaner to move all these
> >>>> enqueue/dequeue functions down together, without any drawback I can think of.
> >>>> Let me know.
> >>>>
> >>>> Adding the new ones at the end is not future proof, but at least it
> >>>> does not break ABI just for cosmetic reasons (that's a big drawback
> >> IMHO).
> >>>>
> >>>> I just checked using pahole:
> >>>>
> >>>> struct rte_bbdev {
> >>>> rte_bbdev_enqueue_enc_ops_t enqueue_enc_ops; /* 0 8 */
> >>>> rte_bbdev_enqueue_dec_ops_t enqueue_dec_ops; /* 8 8 */
> >>>> rte_bbdev_dequeue_enc_ops_t dequeue_enc_ops; /* 16 8 */
> >>>> rte_bbdev_dequeue_dec_ops_t dequeue_dec_ops; /* 24 8 */
> >>>> rte_bbdev_enqueue_enc_ops_t enqueue_ldpc_enc_ops; /* 32 8
> >>>> */
> >>>> rte_bbdev_enqueue_dec_ops_t enqueue_ldpc_dec_ops; /* 40 8
> >>>> */
> >>>> rte_bbdev_dequeue_enc_ops_t dequeue_ldpc_enc_ops; /* 48 8
> >>>> */
> >>>> rte_bbdev_dequeue_dec_ops_t dequeue_ldpc_dec_ops; /* 56 8
> >>>> */
> >>>> /* --- cacheline 1 boundary (64 bytes) --- */
> >>>> rte_bbdev_enqueue_fft_ops_t enqueue_fft_ops; /* 64 8 */
> >>>> rte_bbdev_dequeue_fft_ops_t dequeue_fft_ops; /* 72 8 */
> >>>> const struct rte_bbdev_ops * dev_ops; /* 80 8 */
> >>>> struct rte_bbdev_data * data; /* 88 8 */
> >>>> enum rte_bbdev_state state; /* 96 4 */
> >>>>
> >>>> /* XXX 4 bytes hole, try to pack */
> >>>>
> >>>> struct rte_device * device; /* 104 8 */
> >>>> struct rte_bbdev_cb_list list_cbs; /* 112 16 */
> >>>> /* --- cacheline 2 boundary (128 bytes) --- */
> >>>> struct rte_intr_handle * intr_handle; /* 128 8 */
> >>>>
> >>>> /* size: 192, cachelines: 3, members: 16 */
> >>>> /* sum members: 132, holes: 1, sum holes: 4 */
> >>>> /* padding: 56 */
> >>>> } __attribute__((__aligned__(64)));
> >>>>
> >>>> We're lucky on x86, we still have 56 bytes, so we can add 7 new ops
> >>>> at the end before breaking the ABI if I'm not mistaken.
> >>>>
> >>>> I checked the other architecture, and it seems we don't support any
> >>>> with 32B cacheline size so we're good for a while.
> >>>
> >>> OK then just adding the new functions at the end, no other cosmetic
> >> changes. Will update the patch to match this.
> >>> In term of deprecation notice, you are okay with latest draft?
> >>>
> >>> +* bbdev: Will extend the API to support the new operation type
> >>> +``RTE_BBDEV_OP_MLDTS`` as per this `v1
> >>> +<https://patches.dpdk.org/project/dpdk/list/?series=28192>`.
> >>> + This will notably introduce new symbols for
> >>> +``rte_bbdev_dequeue_mldts_ops``, ``rte_bbdev_enqueue_mldts_ops``
> >> into the struct rte_bbdev.
> >>
> >> This is not needed in the deprecation notice.
> >> If you are willing to announce it, it could be part of the Intel roadmap.
> >>
> >
> > I still see this ABI failure as we extend the struct (see below); what is the harm
> in calling it out in the deprecation notice?
> >
> > 1 function with some indirect sub-type change:
> >
> > [C] 'function rte_bbdev* rte_bbdev_allocate(const char*)' at
> rte_bbdev.c:174:1 has some indirect sub-type changes:
> > return type changed:
> > in pointed to type 'struct rte_bbdev' at rte_bbdev.h:498:1:
> > type size hasn't changed
> > 2 data member insertions:
> > 'rte_bbdev_enqueue_mldts_ops_t enqueue_mldts_ops', at offset 1088
> (in bits) at rte_bbdev.h:527:1
> > 'rte_bbdev_dequeue_mldts_ops_t dequeue_mldts_ops', at offset 1152
> (in bits) at rte_bbdev.h:529:1
> > no data member changes (12 filtered);
> >
> > Error: ABI issue reported for abidiff --suppr
> > /home-local/jenkins-local/jenkins-agent/workspace/Generic-DPDK-Compile
> > -ABI@2/dpdk/devtools/libabigail.abignore --no-added-syms
> > --headers-dir1 reference/usr/local/include --headers-dir2
> > build_install/usr/local/include
> > reference/usr/local/lib64/librte_bbdev.so.23.0
> > build_install/usr/local/lib64/librte_bbdev.so.23.2
> > ABIDIFF_ABI_CHANGE, this change requires a review (abidiff flagged this as a
> potential issue).
> >
> >>
> >> To be on the safe side, could you try to dynamically link an
> >> application with a DPDK version before this change, then rebuild DPDK
> >> with adding these two fields. Then test with at least 2 devices with
> >> test-bbdev and see if it does not crash or fail?
> >
> > This is not something we would validate really. But I agree the chance of that
> ABI change having an actual impact is slim based on the address alignment.
>
> Bbdev is not only about Intel AFAICS.
Agreed 100%, but I have no way to validate on non-Intel architectures or using non-Intel HW. All I can guarantee is that this would be fine in an Intel environment based on the warning below.
Hemant, can you share your view on this? Any concern about the bbdev extension for the new operation (this is a similar addition to the FFT one done in 22.11) on your architecture?
Thanks
Nic
>
> > Still, by process, we can use the ABI warning above as a litmus test that the ABI has
> some minor change.
> > Also, we introduce this change in the new LTS version, so I'm unsure this is
> controversial or impactful.
> > Let me know if you have a different opinion.
>
> Either we are sure we can waive this warning, e.g. by testing it.
> Or we cannot, and in this case we have an ABI break.
> If we are going to have an ABI break, let's do the right thing now and move ops in
> a dedicated struct as suggested by Stephen, David and myself.
>
> Maxime
>
> > Thanks
> > Nic
> >
> >
> >>
> >> Thanks,
> >> Maxime
> >>
> >>>
> >>>>
> >>>> Maxime
> >>>>
> >>>>>
> >>>>>>
> >>>>>> Maxime
> >>>>>>
> >>>>>>> What do you think Maxime, David? Based on this I can adjust the
> >>>>>>> change for
> >>>>>> 23.11 and update slightly the deprecation notice accordingly.
> >>>>>>>
> >>>>>>> Thanks
> >>>>>>> Nic
> >>>>>>>
> >>>>>
> >>>
> >
* Minutes of Technical Board Meeting 2023-06-14
@ 2023-06-16 8:37 5% Richardson, Bruce
0 siblings, 0 replies; 200+ results
From: Richardson, Bruce @ 2023-06-16 8:37 UTC (permalink / raw)
To: dev
Attendees
----------
* Aaron
* Bruce
* Hemant
* Honnappa
* Jerin
* Kevin
* Maxime
* Thomas
* Morten
* Tyler
* Nathan
* Akhil
* David M.
* Dave Y.
NOTES:
* Next meeting on 2023-06-28 will be chaired by Hemant
General Updates
================
Documentation Rework
~~~~~~~~~~~~~~~~~~~~~
* Dave Young has started on DPDK project as technical writer
* Bruce and Nathan are currently acting as main points of contact but many
queries are being handled via the DPDK #doc-rework slack channel
* There is an open invitation to all who wish to help out with
documentation rework to join this channel - it's not just for TB members
Reviewers for DPDK Summit
~~~~~~~~~~~~~~~~~~~~~~~~~
* The call for papers for the DPDK summit has gone out.
* It is planned to review submissions at the start of July.
* Review panel to be made up of tech-board members, others heavily involved
in the project, and regular techboard meeting attendees.
Agenda Discussion
=================
Managing Planned ABI changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Reviewed previous discussion on how to manage code changes which break
ABI ahead of the next ABI-change release
* Decision to generally allow use of NEXT_ABI macro to include changes
earlier while keeping stability when NEXT_ABI is off (the default)
* Each ABI change needs individual discussion before merging with NEXT_ABI
to ensure easier or better solutions are not missed.
DPDK API Stability
~~~~~~~~~~~~~~~~~~
Issue raised that, while DPDK has an ABI stability policy, it does not have
an *API* stability policy.
It was suggested we introduce such a stability policy to match that of ABI
one, leading to much discussion.
* A number of people on the call reported feedback from users of the
difficulty of moving between DPDK releases because of API changes, i.e. they had
to change their own code, not just recompile.
* Presenters at previous DPDK conferences reported issues with, e.g.,
open-source apps trying to support multiple releases of DPDK underneath.
The support requires much use of DPDK version-related ifdefs.
This contrasts with other projects like VPP or OVS which only support a
single DPDK release at a time, for the same reason.
* On the other hand, concern was expressed at how the imposition of API
stability might impact feature delivery. We don't want new features held
up for long periods.
* It was pointed out that much of our recent API change issues stem from
cleanup of published macros/enums that don't have proper "RTE" prefixes.
The hope is that the API will naturally be more stable now that this work
is nearing completion.
* Beyond API changes specifically, concern was expressed with our current
releases about:
- changes of behaviour within functions without an API change, or ABI
versioning to catch this
- changes to behaviour or API not being properly documented in release
notes
- lack of rigour in our doxygen function documentation, e.g. lack of
clarity on edge-case behaviour, and specifics of what error codes are
returned.
- use of error numbers as return codes, vs use of -1 & the errno global, for
flagging errors. [The latter leads to more resiliency, especially when it
comes to using switch statements for handling the documented error
values, and a new error return code is added]
* Proposal was made to look at having 1 year API stability policy to match
that of ABI policy.
* At end of discussion quorum was no longer present and no vote was taken
on the issue at this point. It will be discussed further at later
meetings.
---
All other agenda items postponed to a future meeting.
* Re: [PATCH v1 1/1] doc: announce change in bbdev api related to operation extension
@ 2023-06-16 7:36 4% ` Maxime Coquelin
2023-06-16 15:48 0% ` Chautru, Nicolas
0 siblings, 1 reply; 200+ results
From: Maxime Coquelin @ 2023-06-16 7:36 UTC (permalink / raw)
To: Chautru, Nicolas, David Marchand
Cc: Stephen Hemminger, dev, Rix, Tom, hemant.agrawal, Vargas, Hernan
On 6/15/23 21:30, Chautru, Nicolas wrote:
> Hi Maxime,
>
>> -----Original Message-----
>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>>
>> On 6/14/23 20:18, Chautru, Nicolas wrote:
>>> Hi Maxime,
>>>
>>>> -----Original Message-----
>>>> From: Maxime Coquelin <maxime.coquelin@redhat.com> Hi,
>>>>
>>>> On 6/13/23 19:16, Chautru, Nicolas wrote:
>>>>> Hi Maxime,
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>>>>>
>>>>>>
>>>>>> On 6/12/23 22:53, Chautru, Nicolas wrote:
>>>>>>> Hi Maxime, David,
>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>>>>>>>>
>>>>>>>> On 6/6/23 23:01, Chautru, Nicolas wrote:
>>>>>>>>> Hi David,
>>>>>>>>>
>>>>>>>>>> -----Original Message-----
>>>>>>>>>> From: David Marchand <david.marchand@redhat.com>> >> On
>>>> Mon, Jun
>>>>>> 5,
>>>>>>>>>> 2023 at 10:08 PM Chautru, Nicolas <nicolas.chautru@intel.com>
>>>>>>>>>> wrote:
>>>>>>>>>>> Wrt the MLD functions: these are new in the related series
>>>>>>>>>>> but they still
>>>>>>>>>> break the ABI since the struct rte_bbdev includes these
>>>>>>>>>> functions, hence causing offset changes.
>>>>>>>>>>>
>>>>>>>>>>> Should I then just rephrase as:
>>>>>>>>>>>
>>>>>>>>>>> +* bbdev: Will extend the API to support the new operation
>>>>>>>>>>> +type
>>>>>>>>>>> +``RTE_BBDEV_OP_MLDTS`` as per
>>>>>>>>>>> + this `v1
>>>>>>>>>>>
>> +<https://patches.dpdk.org/project/dpdk/list/?series=28192>`.
>>>>>>>>>>> This
>>>>>>>>>>> + will notably introduce + new symbols for
>>>>>>>>>>> ``rte_bbdev_dequeue_mldts_ops``,
>>>>>>>>>>> +``rte_bbdev_enqueue_mldts_ops`` into the struct rte_bbdev.
>>>>>>>>>>
>>>>>>>>>> I don't think we need this deprecation notice.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Do you need to expose those new mldts ops in rte_bbdev?
>>>>>>>>>> Can't they go to dev_ops?
>>>>>>>>>> If you can't, at least moving those new ops at the end of the
>>>>>>>>>> structure would avoid the breakage on rte_bbdev.
>>>>>>>>>
>>>>>>>>> It would probably be best to move all these ops at the end of
>>>>>>>>> the structure
>>>>>>>> (ie. keep them together).
>>>>>>>>> In that case the deprecation notice would call out that the
>>>>>>>>> rte_bbdev
>>>>>>>> structure content is more generally modified. Probably best for
>>>>>>>> the longer run.
>>>>>>>>> David, Maxime, ok with that option?
>>>>>>>>>
>>>>>>>>> struct __rte_cache_aligned rte_bbdev {
>>>>>>>>> rte_bbdev_enqueue_enc_ops_t enqueue_enc_ops;
>>>>>>>>> rte_bbdev_enqueue_dec_ops_t enqueue_dec_ops;
>>>>>>>>> rte_bbdev_dequeue_enc_ops_t dequeue_enc_ops;
>>>>>>>>> rte_bbdev_dequeue_dec_ops_t dequeue_dec_ops;
>>>>>>>>> rte_bbdev_enqueue_enc_ops_t enqueue_ldpc_enc_ops;
>>>>>>>>> rte_bbdev_enqueue_dec_ops_t enqueue_ldpc_dec_ops;
>>>>>>>>> rte_bbdev_dequeue_enc_ops_t dequeue_ldpc_enc_ops;
>>>>>>>>> rte_bbdev_dequeue_dec_ops_t dequeue_ldpc_dec_ops;
>>>>>>>>> rte_bbdev_enqueue_fft_ops_t enqueue_fft_ops;
>>>>>>>>> rte_bbdev_dequeue_fft_ops_t dequeue_fft_ops;
>>>>>>>>> const struct rte_bbdev_ops *dev_ops;
>>>>>>>>> struct rte_bbdev_data *data;
>>>>>>>>> enum rte_bbdev_state state;
>>>>>>>>> struct rte_device *device;
>>>>>>>>> struct rte_bbdev_cb_list list_cbs;
>>>>>>>>> struct rte_intr_handle *intr_handle;
>>>>>>>>> };
>>>>>>>>
>>>>>>>> The best thing, as suggested by David, would be to move all the
>>>>>>>> ops out of struct rte_bbdev, as these should not be visible to
>>>>>>>> the
>>>> application.
>>>>>>>
>>>>>>> That would be quite disruptive across all PMDs, with possible perf
>>>>>>> impact to
>>>>>> validate. I don't think it is at all realistic to consider such
>>>>>> a change in 23.11.
>>>>>>> I believe moving these functions to the end of the structure is a
>>>>>>> good
>>>>>> compromise to avoid future breakage of the rte_bbdev structure with
>>>>>> almost seamless impact (purely an ABI break when moving into 23.11,
>>>>>> which is not avoidable). Retrospectively we should have done that in
>>>>>> 22.11
>>>> really.
>>>>>>
>>>>>> If we are going to break the ABI, better to do the right rework
>>>>>> directly. Otherwise we'll end-up breaking it again next year.
>>>>>
>>>>> With the suggested change, this will not break ABI next year. Any
>>>>> future
>>>> functions are added at the end of the structure anyway.
>>>>
>>>> I'm not so sure, it depends if adding a new field at the end cross a
>>>> cacheline boundary or not:
>>>>
>>>> /*
>>>> * Global array of all devices. This is not static because it's used by the
>>>> * inline enqueue and dequeue functions
>>>> */
>>>> struct rte_bbdev rte_bbdev_devices[RTE_BBDEV_MAX_DEVS];
>>>>
>>>> If the older inlined functions used by the application retrieve the
>>>> dev pointer from the array directly (they do) and new fields added in
>>>> the new version cross a cacheline, then there will be a misalignment
>>>> between the new lib version and the application using the older inlined
>> functions.
>>>>
>>>> ABI-wise, this is not really future proof.
>>>>
>>>>>
>>>>>>
>>>>>> IMHO, moving these ops should be quite trivial and not much work.
>>>>>>
>>>>>> Otherwise, if we just placed the rte_bbdev_dequeue_mldts_ops and
>>>>>> rte_bbdev_enqueue_mldts_ops at the bottom of struct rte_bbdev, it
>>>>>> may not break the ABI, but that's a bit fragile:
>>>>>> - rte_bbdev_devices[] is not static, but is placed in the BSS section so
>>>>>> should be OK
>>>>>> - struct rte_bbdev is cache-aligned, so it may work if adding these two
>>>>>> ops do not overlap a cacheline which depends on the CPU
>> architecture.
>>>>>
>>>>> If you prefer to add the only 2 new functions at the end of the
>>>>> structure
>>>> that is okay. I believe it would be cleaner to move all these
>>>> enqueue/dequeue functions down together, without any drawback I can think of.
>>>> Let me know.
>>>>
>>>> Adding the new ones at the end is not future proof, but at least it
>>>> does not break ABI just for cosmetic reasons (that's a big drawback
>> IMHO).
>>>>
>>>> I just checked using pahole:
>>>>
>>>> struct rte_bbdev {
>>>> rte_bbdev_enqueue_enc_ops_t enqueue_enc_ops; /* 0 8 */
>>>> rte_bbdev_enqueue_dec_ops_t enqueue_dec_ops; /* 8 8 */
>>>> rte_bbdev_dequeue_enc_ops_t dequeue_enc_ops; /* 16 8 */
>>>> rte_bbdev_dequeue_dec_ops_t dequeue_dec_ops; /* 24 8 */
>>>> rte_bbdev_enqueue_enc_ops_t enqueue_ldpc_enc_ops; /* 32 8
>>>> */
>>>> rte_bbdev_enqueue_dec_ops_t enqueue_ldpc_dec_ops; /* 40 8
>>>> */
>>>> rte_bbdev_dequeue_enc_ops_t dequeue_ldpc_enc_ops; /* 48 8
>>>> */
>>>> rte_bbdev_dequeue_dec_ops_t dequeue_ldpc_dec_ops; /* 56 8
>>>> */
>>>> /* --- cacheline 1 boundary (64 bytes) --- */
>>>> rte_bbdev_enqueue_fft_ops_t enqueue_fft_ops; /* 64 8 */
>>>> rte_bbdev_dequeue_fft_ops_t dequeue_fft_ops; /* 72 8 */
>>>> const struct rte_bbdev_ops * dev_ops; /* 80 8 */
>>>> struct rte_bbdev_data * data; /* 88 8 */
>>>> enum rte_bbdev_state state; /* 96 4 */
>>>>
>>>> /* XXX 4 bytes hole, try to pack */
>>>>
>>>> struct rte_device * device; /* 104 8 */
>>>> struct rte_bbdev_cb_list list_cbs; /* 112 16 */
>>>> /* --- cacheline 2 boundary (128 bytes) --- */
>>>> struct rte_intr_handle * intr_handle; /* 128 8 */
>>>>
>>>> /* size: 192, cachelines: 3, members: 16 */
>>>> /* sum members: 132, holes: 1, sum holes: 4 */
>>>> /* padding: 56 */
>>>> } __attribute__((__aligned__(64)));
>>>>
>>>> We're lucky on x86, we still have 56 bytes, so we can add 7 new ops
>>>> at the end before breaking the ABI if I'm not mistaken.
>>>>
>>>> I checked the other architecture, and it seems we don't support any
>>>> with 32B cacheline size so we're good for a while.
>>>
>>> OK then just adding the new functions at the end, no other cosmetic
>> changes. Will update the patch to match this.
>>> In term of deprecation notice, you are okay with latest draft?
>>>
>>> +* bbdev: Will extend the API to support the new operation type
>>> +``RTE_BBDEV_OP_MLDTS`` as per this `v1
>>> +<https://patches.dpdk.org/project/dpdk/list/?series=28192>`.
>>> + This will notably introduce new symbols for
>>> +``rte_bbdev_dequeue_mldts_ops``, ``rte_bbdev_enqueue_mldts_ops``
>> into the struct rte_bbdev.
>>
>> This is not needed in the deprecation notice.
>> If you are willing to announce it, it could be part of the Intel roadmap.
>>
>
> I still see this ABI failure as we extend the struct (see below); what is the harm in calling it out in the deprecation notice?
>
> 1 function with some indirect sub-type change:
>
> [C] 'function rte_bbdev* rte_bbdev_allocate(const char*)' at rte_bbdev.c:174:1 has some indirect sub-type changes:
> return type changed:
> in pointed to type 'struct rte_bbdev' at rte_bbdev.h:498:1:
> type size hasn't changed
> 2 data member insertions:
> 'rte_bbdev_enqueue_mldts_ops_t enqueue_mldts_ops', at offset 1088 (in bits) at rte_bbdev.h:527:1
> 'rte_bbdev_dequeue_mldts_ops_t dequeue_mldts_ops', at offset 1152 (in bits) at rte_bbdev.h:529:1
> no data member changes (12 filtered);
>
> Error: ABI issue reported for abidiff --suppr /home-local/jenkins-local/jenkins-agent/workspace/Generic-DPDK-Compile-ABI@2/dpdk/devtools/libabigail.abignore --no-added-syms --headers-dir1 reference/usr/local/include --headers-dir2 build_install/usr/local/include reference/usr/local/lib64/librte_bbdev.so.23.0 build_install/usr/local/lib64/librte_bbdev.so.23.2
> ABIDIFF_ABI_CHANGE, this change requires a review (abidiff flagged this as a potential issue).
>
>>
>> To be on the safe side, could you try to dynamically link an application with
>> a DPDK version before this change, then rebuild DPDK after adding these two
>> fields? Then test with at least 2 devices with test-bbdev and check that it
>> does not crash or fail.
>
> This is not something we would really validate. But I agree the chance of that ABI change having an actual impact is slim, based on the address alignment.
Bbdev is not only about Intel AFAICS.
> Still, by process, we can use the ABI warning above as a litmus test that the ABI has some minor change.
> Also, we introduce this change in the new LTS version, so I am unsure whether this is controversial or impactful.
> Let me know if you have a different opinion.
Either we are sure we can waive this warning, e.g. by testing it.
Or we cannot, and in this case we have an ABI break.
If we are going to have an ABI break, let's do the right thing now and
move the ops into a dedicated struct, as suggested by Stephen, David and
myself.
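[Editor's illustration: a minimal sketch of the "dedicated ops struct" pattern suggested here. These are hypothetical names, not the actual DPDK definitions: because the device struct only stores a pointer to the ops struct, new operations can be appended to the ops struct in later releases without changing the size or layout of the device struct itself.]

```c
#include <stddef.h>

/* Driver-provided operations, referenced by pointer from the device.
 * New ops (e.g. MLD-TS enqueue/dequeue) would be appended here without
 * affecting the ABI of struct dev below. */
struct dev_ops {
	int (*enqueue)(void *queue, void *burst, int num);
	int (*dequeue)(void *queue, void *burst, int num);
};

struct dev {
	void *data;
	const struct dev_ops *ops;	/* indirection isolates the ABI */
};

/* Callers always go through the pointer; a NULL check keeps old
 * applications working against drivers that predate a given op. */
static int dev_enqueue(struct dev *d, void *q, void *burst, int num)
{
	if (d->ops == NULL || d->ops->enqueue == NULL)
		return -1;	/* op not supported by this driver */
	return d->ops->enqueue(q, burst, num);
}
```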
Maxime
> Thanks
> Nic
>
>
>>
>> Thanks,
>> Maxime
>>
>>>
>>>>
>>>> Maxime
>>>>
>>>>>
>>>>>>
>>>>>> Maxime
>>>>>>
>>>>>>> What do you think Maxime, David? Based on this I can adjust the
>>>>>>> change for
>>>>>> 23.11 and update slightly the deprecation notice accordingly.
>>>>>>>
>>>>>>> Thanks
>>>>>>> Nic
>>>>>>>
>>>>>
>>>
>
-- links below jump to the message on this page --
2021-06-18 16:36 [dpdk-dev] [PATCH] devtools: script to track map symbols Ray Kinsella
2021-09-09 13:48 ` [dpdk-dev] [PATCH v13 0/4] devtools: scripts to count and track symbols Ray Kinsella
2023-07-06 19:13 0% ` Stephen Hemminger
2022-04-20 8:16 [PATCH v1 0/5] Direct re-arming of buffers on receive side Feifei Wang
2023-08-02 7:38 ` [PATCH v8 0/4] Recycle mbufs from Tx queue into Rx queue Feifei Wang
2023-08-02 7:38 3% ` [PATCH v8 1/4] ethdev: add API for mbufs recycle mode Feifei Wang
2023-08-02 8:08 ` [PATCH v9 0/4] Recycle mbufs from Tx queue into Rx queue Feifei Wang
2023-08-02 8:08 3% ` [PATCH v9 1/4] ethdev: add API for mbufs recycle mode Feifei Wang
2023-08-04 9:24 ` [PATCH v10 0/4] Recycle mbufs from Tx queue into Rx queue Feifei Wang
2023-08-04 9:24 3% ` [PATCH v10 1/4] ethdev: add API for mbufs recycle mode Feifei Wang
2023-08-22 7:27 ` [PATCH v11 0/4] Recycle mbufs from Tx queue into Rx queue Feifei Wang
2023-08-22 7:27 3% ` [PATCH v11 1/4] ethdev: add API for mbufs recycle mode Feifei Wang
2023-08-22 23:33 0% ` Konstantin Ananyev
2023-08-24 3:38 0% ` Feifei Wang
2023-08-24 7:36 ` [PATCH v12 0/4] Recycle mbufs from Tx queue into Rx queue Feifei Wang
2023-08-24 7:36 3% ` [PATCH v12 1/4] ethdev: add API for mbufs recycle mode Feifei Wang
2022-04-21 4:38 kni: check abi version between kmod and lib Stephen Coleman
2023-07-04 2:56 7% ` Stephen Hemminger
2022-08-29 15:18 [RFC PATCH 0/3] Split logging out of EAL Bruce Richardson
2023-07-31 10:17 3% ` [PATCH v6 0/3] Split logging functionality " Bruce Richardson
2023-07-31 15:38 4% ` [PATCH v7 " Bruce Richardson
2023-08-09 13:35 3% ` [PATCH v8 " Bruce Richardson
2023-08-11 12:46 4% ` David Marchand
2023-01-31 2:28 [PATCH v3 0/2] add ring telemetry cmds Jie Hai
2023-06-20 14:34 ` [PATCH v4 3/3] ring: add telemetry cmd for ring info Thomas Monjalon
2023-07-04 8:04 ` Jie Hai
2023-07-04 14:11 ` Thomas Monjalon
2023-07-06 8:52 3% ` David Marchand
2023-07-07 2:18 0% ` Jie Hai
2023-02-28 9:39 [RFC 0/2] Add high-performance timer facility Mattias Rönnblom
2023-03-15 17:03 ` [RFC v2 " Mattias Rönnblom
2023-03-15 17:03 ` [RFC v2 2/2] eal: add " Mattias Rönnblom
2023-07-06 22:41 3% ` Stephen Hemminger
2023-07-12 8:58 4% ` Mattias Rönnblom
2023-03-15 11:00 [PATCH 0/5] support setting and querying RSS algorithms Dongdong Liu
2023-08-26 7:46 ` [PATCH v2 " Jie Hai
2023-08-26 7:46 5% ` [PATCH v2 1/5] ethdev: support setting and querying RSS algorithm Jie Hai
2023-03-29 23:40 [PATCH v12 00/22] Covert static log types in libraries to dynamic Stephen Hemminger
2023-08-21 16:09 2% ` [PATCH v13 00/21] Convert static log types in libraries to dynamic types Stephen Hemminger
2023-08-21 16:09 2% ` [PATCH v13 17/21] hash: move rte_hash_set_alg out header Stephen Hemminger
2023-04-03 21:52 [PATCH 0/9] msvc integration changes Tyler Retzlaff
2023-07-11 16:49 ` [PATCH v9 00/14] " Tyler Retzlaff
2023-07-11 16:49 5% ` [PATCH v9 10/14] eal: expand most macros to empty when using MSVC Tyler Retzlaff
2023-07-11 16:49 3% ` [PATCH v9 12/14] telemetry: avoid expanding versioned symbol macros on MSVC Tyler Retzlaff
2023-08-02 21:35 ` [PATCH v10 00/13] msvc integration changes Tyler Retzlaff
2023-08-02 21:35 5% ` [PATCH v10 10/13] eal: expand most macros to empty when using MSVC Tyler Retzlaff
2023-08-11 19:20 ` [PATCH v11 00/16] msvc integration changes Tyler Retzlaff
2023-08-11 19:20 5% ` [PATCH v11 10/16] eal: expand most macros to empty when using MSVC Tyler Retzlaff
2023-04-17 4:31 [PATCH v3 1/4] doc: announce new cpu flag added to rte_cpu_flag_t Sivaprasad Tummala
2023-04-18 8:25 ` [PATCH v4 0/4] power: monitor support for AMD EPYC processors Sivaprasad Tummala
2023-04-18 8:25 ` [PATCH v4 1/4] doc: announce new cpu flag added to rte_cpu_flag_t Sivaprasad Tummala
2023-07-05 11:32 0% ` Konstantin Ananyev
2023-04-18 10:45 [PATCH] eventdev: fix alignment padding Sivaprasad Tummala
2023-04-18 12:30 ` Mattias Rönnblom
2023-04-18 14:07 ` Morten Brørup
2023-04-18 15:16 ` Mattias Rönnblom
2023-05-17 13:20 ` Jerin Jacob
2023-05-17 13:35 ` Morten Brørup
2023-05-23 15:15 ` Jerin Jacob
2023-08-02 16:19 0% ` Jerin Jacob
2023-08-08 10:24 0% ` Jerin Jacob
2023-08-08 10:25 0% ` Jerin Jacob
2023-05-09 9:24 [PATCH v6 0/3] add telemetry cmds for ring Jie Hai
2023-07-04 9:04 3% ` [PATCH v7 " Jie Hai
2023-07-04 9:04 3% ` [PATCH v7 1/3] ring: fix unmatched type definition and usage Jie Hai
2023-08-18 6:53 0% ` [PATCH v7 0/3] add telemetry cmds for ring Jie Hai
2023-05-17 16:15 [PATCH 00/20] Replace use of term sanity-check Stephen Hemminger
2023-08-02 23:25 ` [PATCH v4 00/19] replace use of term "sanity check" Stephen Hemminger
2023-08-02 23:25 5% ` [PATCH v4 01/19] mbuf: replace term sanity check Stephen Hemminger
2023-05-18 8:44 [PATCH v4] net/bonding: replace master/slave to main/member Chaoyong He
2023-08-16 6:27 ` [PATCH v5 0/2] " Chaoyong He
2023-08-16 6:27 1% ` [PATCH v5 2/2] net/bonding: " Chaoyong He
2023-08-17 2:36 0% ` lihuisong (C)
2023-05-19 18:15 [PATCH] ptp: replace terms master/slave Stephen Hemminger
2023-07-05 17:27 3% ` Stephen Hemminger
2023-05-24 6:55 [PATCH v1 1/8] ethdev: add IPv6 extension push remove action Ori Kam
2023-05-24 7:39 ` [PATCH v1 0/2] add IPv6 extension push remove Rongwei Liu
2023-06-02 14:39 ` Ferruh Yigit
2023-07-10 2:32 ` Rongwei Liu
2023-07-10 8:55 ` Ferruh Yigit
2023-07-10 14:41 3% ` Stephen Hemminger
2023-07-11 6:16 0% ` Thomas Monjalon
2023-05-24 11:38 [PATCH] doc: deprecation notice to add new hash function Xueming Li
2023-08-07 11:54 ` [PATCH] ethdev: add new symmetric " Xueming Li
2023-08-07 22:32 ` Ivan Malov
2023-08-08 1:43 3% ` fengchengwen
2023-08-09 12:00 0% ` Xueming(Steven) Li
2023-05-26 2:11 [PATCH v1 0/1] doc: accounce change in bbdev extension Nicolas Chautru
2023-05-26 2:11 ` [PATCH v1 1/1] doc: announce change in bbdev api related to operation extension Nicolas Chautru
2023-05-26 3:47 ` Stephen Hemminger
2023-06-05 19:07 ` Maxime Coquelin
2023-06-05 20:08 ` Chautru, Nicolas
2023-06-06 9:20 ` David Marchand
2023-06-06 21:01 ` Chautru, Nicolas
2023-06-08 8:47 ` Maxime Coquelin
2023-06-12 20:53 ` Chautru, Nicolas
2023-06-13 8:14 ` Maxime Coquelin
2023-06-13 17:16 ` Chautru, Nicolas
2023-06-13 20:00 ` Maxime Coquelin
2023-06-14 18:18 ` Chautru, Nicolas
2023-06-15 7:52 ` Maxime Coquelin
2023-06-15 19:30 ` Chautru, Nicolas
2023-06-16 7:36 4% ` Maxime Coquelin
2023-06-16 15:48 0% ` Chautru, Nicolas
2023-06-06 12:11 [PATCH] doc: deprecation notice to add RSS hash algorithm field Dongdong Liu
2023-06-06 15:50 ` Ferruh Yigit
2023-06-06 16:35 ` Stephen Hemminger
2023-07-28 15:06 0% ` Thomas Monjalon
2023-06-09 10:51 [PATCH] doc: prefer installing using meson rather than ninja Bruce Richardson
2023-06-23 11:43 4% ` [PATCH v4] " Bruce Richardson
2023-06-09 17:42 [RFC] eventdev: remove single-event enqueue operation Mattias Rönnblom
2023-06-30 4:37 3% ` Jerin Jacob
2023-07-04 12:01 0% ` Mattias Rönnblom
2023-07-04 11:53 4% ` [PATCH] " Mattias Rönnblom
2023-07-05 7:47 0% ` Jerin Jacob
2023-07-05 8:41 0% ` Mattias Rönnblom
2023-06-13 8:17 [PATCH 0/4] Test examples compilation externally David Marchand
2023-06-20 14:07 ` [PATCH v3 " David Marchand
2023-06-20 14:07 10% ` [PATCH v3 4/4] ci: build examples externally David Marchand
2023-06-13 15:40 [PATCH v2] bitmap: add scan from offset function Volodymyr Fialko
2023-06-23 12:40 ` [PATCH v3] " Dumitrescu, Cristian
2023-07-03 10:56 ` Volodymyr Fialko
2023-07-03 11:51 ` Thomas Monjalon
2023-07-03 12:02 4% ` [EXT] " Volodymyr Fialko
2023-07-03 12:17 0% ` Thomas Monjalon
2023-06-15 16:48 [PATCH v2 0/5] bbdev: API extension for 23.11 Nicolas Chautru
2023-07-17 22:28 0% ` Chautru, Nicolas
2023-08-04 16:14 0% ` Vargas, Hernan
2023-07-18 9:18 0% ` Hemant Agrawal
2023-06-16 8:37 5% Minutes of Technical Board Meeting 2023-06-14 Richardson, Bruce
2023-06-20 13:29 10% [PATCH] ci: fix libabigail cache in GHA David Marchand
2023-06-20 14:21 0% ` Aaron Conole
2023-06-22 17:41 0% ` Thomas Monjalon
2023-06-28 6:36 [PATCH 1/2] net/virtio: fix legacy device IO port map in secondary process Miao Li
2023-06-29 2:26 ` [PATCH v2 " Miao Li
2023-07-03 7:47 ` David Marchand
2023-07-03 8:54 ` Li, Miao
2023-07-03 8:57 ` David Marchand
2023-07-03 9:31 ` Xia, Chenbo
2023-07-07 17:03 3% ` Gupta, Nipun
2023-07-04 8:10 3% [PATCH] doc: announce ethdev operation struct changes Feifei Wang
2023-07-04 8:17 0% ` Feifei Wang
2023-07-13 2:37 0% ` Feifei Wang
2023-07-13 12:50 0% ` Morten Brørup
2023-07-17 8:28 0% ` Andrew Rybchenko
2023-07-05 11:32 0% ` Konstantin Ananyev
2023-07-13 7:52 0% ` Ferruh Yigit
2023-07-28 14:56 3% ` Thomas Monjalon
2023-07-28 15:04 0% ` Thomas Monjalon
2023-07-28 15:08 0% ` Morten Brørup
2023-07-28 15:20 0% ` Thomas Monjalon
2023-07-28 15:33 0% ` Morten Brørup
2023-07-28 15:37 0% ` Thomas Monjalon
2023-07-28 15:55 0% ` Morten Brørup
2023-08-01 3:19 0% ` Feifei Wang
2023-07-05 8:48 13% [PATCH] eventdev: announce single-event enqueue/dequeue ABI change Mattias Rönnblom
2023-07-05 11:12 13% ` [PATCH v2] doc: " Mattias Rönnblom
2023-07-05 13:00 4% ` Jerin Jacob
2023-07-05 13:02 4% ` [EXT] " Pavan Nikhilesh Bhagavatula
2023-07-28 15:51 4% ` Thomas Monjalon
2023-07-26 12:04 4% ` Jerin Jacob
2023-07-12 10:18 8% [PATCH] doc: announce deprecation of RTE_CPUFLAG_NUMFLAGS Sivaprasad Tummala
2023-07-12 10:21 0% ` Ferruh Yigit
2023-07-12 14:51 0% ` Hemant Agrawal
2023-07-25 8:39 3% ` Ferruh Yigit
2023-07-25 8:40 0% ` Bruce Richardson
2023-07-25 9:24 0% ` Morten Brørup
2023-07-25 9:36 0% ` Kevin Traynor
2023-07-25 14:18 0% ` Tyler Retzlaff
2023-07-25 14:24 0% ` Jerin Jacob
2023-07-25 16:45 0% ` Hemant Agrawal
2023-07-28 10:11 0% ` Thomas Monjalon
2023-07-12 17:30 5% [PATCH] doc: deprecation notice to add callback data to rte_event_fp_ops Sivaprasad Tummala
2023-07-13 8:51 0% ` Jerin Jacob
2023-07-13 10:38 0% ` Tummala, Sivaprasad
2023-07-13 10:40 0% ` Jerin Jacob
2023-07-14 11:32 0% ` Tummala, Sivaprasad
2023-07-17 11:24 5% ` [PATCH v1] " Sivaprasad Tummala
2023-07-17 11:43 0% ` Jerin Jacob
2023-07-17 12:42 0% ` Ferruh Yigit
2023-07-25 8:40 0% ` Ferruh Yigit
2023-07-25 16:46 0% ` Hemant Agrawal
2023-07-25 18:44 0% ` Pavan Nikhilesh Bhagavatula
2023-07-28 15:42 3% ` Thomas Monjalon
2023-07-14 8:15 [PATCH 0/3] announce bonding macro and function change Chaoyong He
2023-07-14 8:15 ` [PATCH 2/3] doc: announce bonding data change Chaoyong He
2023-07-17 15:03 3% ` Ferruh Yigit
2023-07-18 1:13 0% ` Chaoyong He
2023-07-14 8:15 ` [PATCH 3/3] doc: announce bonding function change Chaoyong He
2023-07-17 15:13 3% ` Ferruh Yigit
2023-07-18 1:15 0% ` Chaoyong He
2023-07-16 21:25 1% [RFC] MAINTAINERS: add status information Stephen Hemminger
2023-07-19 16:07 1% ` [PATCH v2] " Stephen Hemminger
2023-07-20 17:21 1% ` [PATCH v3] " Stephen Hemminger
2023-07-20 17:45 5% ` [PATCH v2 ] tap: fix build of TAP BPF program Stephen Hemminger
2023-07-20 23:25 4% ` [PATCH v3] " Stephen Hemminger
2023-07-22 16:32 4% ` [PATCH v4] " Stephen Hemminger
2023-07-19 12:30 3% [PATCH 1/1] node: remove MAX macro from all nodes Rakesh Kudurumalla
2023-07-19 15:12 [PATCH] doc: postpone deprecation of pipeline legacy API Cristian Dumitrescu
2023-07-19 16:08 3% ` Bruce Richardson
2023-07-20 10:37 0% ` Dumitrescu, Cristian
2023-07-28 16:02 0% ` Thomas Monjalon
2023-07-25 23:04 [PATCH] doc: announce changes to event device structures pbhagavatula
2023-07-26 15:55 ` [PATCH v2] " pbhagavatula
2023-07-27 9:01 ` Jerin Jacob
2023-07-28 15:14 3% ` Thomas Monjalon
2023-07-26 1:35 [PATCH v3] Add support for IBM Z s390x David Miller
2023-08-02 15:25 ` David Marchand
2023-08-02 15:34 3% ` David Miller
2023-08-02 15:48 3% ` David Miller
2023-07-28 14:29 27% [PATCH] doc: announce new major ABI version Thomas Monjalon
2023-07-28 15:18 27% ` [PATCH v2] " Thomas Monjalon
2023-07-28 15:23 4% ` Bruce Richardson
2023-07-28 16:03 4% ` Thomas Monjalon
2023-07-28 17:02 7% ` Patrick Robb
2023-07-28 17:33 4% ` Thomas Monjalon
2023-07-31 4:42 8% ` [EXT] " Akhil Goyal
2023-07-28 15:25 4% ` Morten Brørup
2023-07-28 20:37 3% DPDK 23.07 released Thomas Monjalon
2023-07-29 22:54 1% [PATCH] kni: remove deprecated kernel network interface Stephen Hemminger
2023-07-30 2:12 1% ` [PATCH v2] " Stephen Hemminger
2023-07-30 17:12 ` Stephen Hemminger
2023-07-31 8:40 ` Thomas Monjalon
2023-07-31 15:13 3% ` Stephen Hemminger
2023-07-31 15:21 4% ` David Marchand
2023-07-31 9:43 4% [PATCH 0/3] version: 23.11-rc0 David Marchand
2023-07-31 9:43 12% ` [PATCH 1/3] " David Marchand
2023-07-31 10:00 0% ` Bruce Richardson
2023-07-31 19:03 0% ` Aaron Conole
2023-07-31 9:43 8% ` [PATCH 2/3] telemetry: remove v23 ABI compatibility David Marchand
2023-07-31 10:01 4% ` Bruce Richardson
2023-07-31 9:43 8% ` [PATCH 3/3] vhost: " David Marchand
2023-07-31 10:38 4% [PATCH] build: update DPDK to use C11 standard Bruce Richardson
2023-07-31 15:58 4% ` [PATCH v2] " Bruce Richardson
2023-07-31 16:42 0% ` Tyler Retzlaff
2023-07-31 16:58 4% ` [PATCH v3] " Bruce Richardson
2023-08-01 13:15 4% ` [PATCH v4] " Bruce Richardson
2023-08-02 12:31 4% ` [PATCH v5] " Bruce Richardson
2023-07-31 15:41 3% cmdline programmer documentation Stephen Hemminger
2023-08-01 9:40 [RFC] eventdev/eth_rx: update adapter create APIs Naga Harish K S V
2023-08-01 13:51 ` [PATCH v2] " Naga Harish K S V
2023-08-01 15:23 ` Jerin Jacob
2023-08-02 14:19 ` Naga Harish K, S V
2023-08-02 16:12 ` Jerin Jacob
2023-08-10 7:38 4% ` Naga Harish K, S V
2023-08-10 8:07 0% ` Jerin Jacob
2023-08-10 11:58 0% ` Naga Harish K, S V
2023-08-01 16:04 [PATCH v2 0/2] Remove disabled functionality Stephen Hemminger
2023-08-01 16:05 1% ` [PATCH v2 2/2] kni: remove deprecated kernel network interface Stephen Hemminger
[not found] <20220825024425.10534-1-lihuisong@huawei.com>
2023-05-27 2:11 ` [PATCH V6 0/5] app/testpmd: support multiple process attach and detach port Huisong Li
2023-07-14 7:21 0% ` lihuisong (C)
2023-08-02 3:15 3% ` [PATCH RESEND v6 " Huisong Li
2023-08-02 3:15 2% ` [PATCH RESEND v6 2/5] ethdev: fix skip valid port in probing callback Huisong Li
2023-08-02 20:48 [PATCH] eal/windows: resolve conversion and truncation warnings Tyler Retzlaff
2023-08-02 22:29 ` Dmitry Kozlyuk
2023-08-02 22:41 3% ` Tyler Retzlaff
2023-08-02 23:44 0% ` Dmitry Kozlyuk
2023-08-03 0:30 0% ` Tyler Retzlaff
2023-08-02 21:11 2% [PATCH 1/2] eal: remove RTE_CPUFLAG_NUMFLAGS Sivaprasad Tummala
2023-08-02 21:11 3% ` [PATCH 2/2] test/cpuflags: " Sivaprasad Tummala
2023-08-02 23:50 0% ` [PATCH 1/2] eal: " Stanisław Kardach
2023-08-11 4:02 2% ` Tummala, Sivaprasad
2023-08-11 6:07 3% ` [PATCH v2 1/2] test/cpuflags: removed test for NUMFLAGS Sivaprasad Tummala
2023-08-11 6:07 2% ` [PATCH v2 2/2] eal: remove NUMFLAGS enumeration Sivaprasad Tummala
2023-08-15 6:10 3% ` Stanisław Kardach
2023-08-08 4:02 [RFC] ethdev: introduce maximum Rx buffer size Huisong Li
2023-08-11 12:07 3% ` Andrew Rybchenko
2023-08-15 8:16 0% ` lihuisong (C)
2023-08-08 17:35 3% [PATCH 00/20] remove experimental flag from some API's Stephen Hemminger
2023-08-08 18:19 0% ` Tyler Retzlaff
2023-08-08 21:33 0% ` Stephen Hemminger
2023-08-08 23:23 0% ` Tyler Retzlaff
2023-08-09 0:09 3% ` [PATCH v2 00/29] promote many API's to stable Stephen Hemminger
2023-08-09 0:10 2% ` [PATCH v2 24/29] compressdev: remove experimental flag Stephen Hemminger
2023-08-08 17:53 3% C11 atomics adoption blocked Tyler Retzlaff
2023-08-08 18:23 0% ` Bruce Richardson
2023-08-08 19:19 0% ` Tyler Retzlaff
2023-08-08 20:22 0% ` Morten Brørup
2023-08-08 20:49 0% ` Tyler Retzlaff
2023-08-09 8:48 0% ` Morten Brørup
2023-08-14 13:46 ` Thomas Monjalon
2023-08-14 15:13 3% ` Morten Brørup
2023-08-16 17:25 0% ` Tyler Retzlaff
2023-08-16 20:30 0% ` Morten Brørup
2023-08-11 1:31 4% [PATCH 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
2023-08-11 1:31 2% ` [PATCH 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
2023-08-11 17:32 3% ` [PATCH v2 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
2023-08-11 17:32 2% ` [PATCH v2 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
2023-08-14 8:05 0% ` Morten Brørup
2023-08-16 19:19 3% ` [PATCH v3 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
2023-08-16 19:19 2% ` [PATCH v3 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
2023-08-16 21:38 3% ` [PATCH v4 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
2023-08-16 21:38 2% ` [PATCH v4 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
2023-08-17 21:42 3% ` [PATCH v5 0/6] optional rte optional stdatomics API Tyler Retzlaff
2023-08-17 21:42 2% ` [PATCH v5 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
2023-08-21 22:27 0% ` [PATCH v5 0/6] optional rte optional stdatomics API Konstantin Ananyev
2023-08-22 21:00 3% ` [PATCH v6 0/6] rte atomics API for optional stdatomic Tyler Retzlaff
2023-08-22 21:00 2% ` [PATCH v6 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
2023-08-22 10:23 3% Minutes of Technical Board Meeting, 2023-August -9 Jerin Jacob Kollanukkaran
2023-08-24 11:09 [PATCH 00/27] refact the nfpcore module Chaoyong He
2023-08-24 11:09 1% ` [PATCH 02/27] net/nfp: unify the indent coding style Chaoyong He
2023-08-24 11:09 3% ` [PATCH 05/27] net/nfp: standard the local variable " Chaoyong He
2023-08-24 11:09 1% ` [PATCH 07/27] net/nfp: standard the comment style Chaoyong He
2023-08-24 11:09 5% ` [PATCH 19/27] net/nfp: refact the nsp module Chaoyong He